Visibility into the FPGA.


Record FPGA data during 1 hour – really

Record data from an FPGA over long total times – deep capture


As ASIC, SoC and FPGA engineers, we are used to watching the operation of our designs through single, limited snapshots. RTL simulations, for instance, provide bit-level detail over execution times that span a few milliseconds at best. Consequently, events that unfold over long periods cannot be seen as a single coherent capture.

In this blog post, I want to show what can be done in a real case with EXOSTIV. The design running on our FPGA board is a full system-on-chip that features a Gbit Ethernet connection. The board is connected to our company network – and I have set up EXOSTIV to trigger on and record the Ethernet traffic for one full hour. Yes, we used EXOSTIV as an Ethernet sniffer that works from inside the FPGA, providing a 'decoded view' of the traffic after it has entered the FPGA's Gbit Ethernet IP. Check the video below. It is accelerated, but the stopwatch shows the real elapsed time.

As shown in the video, we have set up EXOSTIV to record a chunk of data (256 samples) every time 'something' is seen on the Ethernet interface: the trigger fires on any of a wide variety of events marking the detection of traffic (note the 'OR' trigger equation). Each sample is 1,248 bits wide. This gives quite a good insight into what happens 'after' the Gbit Ethernet IP in the FPGA. Each burst records 256 × 1,248 bits = 319,488 bits of data – and 30,000 such bursts were collected over a total time of about 1 hour.
In total, EXOSTIV collected a little less than 1.2 gigabytes during this run – just about 15% of the total memory provided by an EXOSTIV probe.

Practically, this means we could collect roughly 6.6 times as much data (over 6 hours' worth) with this single capture unit.
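The capture-sizing arithmetic above can be checked in a few lines of Python. The probe memory size is an assumption on my part – the 15% figure quoted above implies roughly 8 GB – so treat that constant as illustrative:

```python
# Capture sizing for the 1-hour Ethernet sniffing run described above.
SAMPLE_WIDTH_BITS = 1248            # width of one captured sample
BURST_SAMPLES = 256                 # samples recorded per trigger event
NUM_BURSTS = 30_000                 # bursts collected over ~1 hour
PROBE_MEMORY_BYTES = 8_000_000_000  # assumed probe memory (~8 GB, inferred)

bits_per_burst = BURST_SAMPLES * SAMPLE_WIDTH_BITS   # 319,488 bits
total_bytes = NUM_BURSTS * bits_per_burst // 8       # total capture size
fraction_used = total_bytes / PROBE_MEMORY_BYTES     # share of probe memory
headroom = 1 / fraction_used                         # how many such runs fit

print(f"bits per burst : {bits_per_burst:,}")
print(f"total capture  : {total_bytes / 1e9:.2f} GB")
print(f"memory used    : {fraction_used:.0%}")
print(f"headroom       : {headroom:.1f}x (~{headroom:.0f} hours)")
```

Running this reproduces the figures from the post: about 1.2 GB captured, roughly 15% of the probe memory, leaving headroom for well over 6 hours of capture.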

You can see that the bursts are recorded rather randomly, as the occurrence of a trigger depends on the actual load of our company network. (Click on the pictures below to enlarge and zoom in on the data.)

Exostiv Dashboard after capturing 1 hour of the Ethernet traffic

Exostiv Dashboard after capturing 1 hour of the Ethernet traffic - full capture
Exostiv Dashboard after capturing 1 hour of the Ethernet traffic - detail on the deep capture

As always, thank you for reading (and watching).
– Frederic

Defining targets (for FPGA debug)


I recently attended a technical seminar organized in The Netherlands by one of the major FPGA vendors (hint: it is one of the 2 top vendors among the '4 + now single outsider' players in the very stable FPGA market). During lunch, I had the opportunity to discuss FPGA verification with some engineers. Beyond high-level definition techniques, simulation and other 'pure-software' techniques, they all used 'real hardware' for some of their verification iterations.

There seems to be a strongly entrenched habit among FPGA engineers: we cannot resist the appeal of being able to reconfigure the FPGA indefinitely and try a system 'for real'.

Yet, for those who would still believe otherwise, the days of simply throwing a design onto a board without prior verification are over for good (check this article for instance). Separate IPs, and separate groups of 'functionally coherent' IPs, really do go through all kinds of pre-hardware validation.

As one of my lunch companions said: 'It is when we put things together for real that virtually *anything* can go wrong.'

So FPGA engineers like to go to real hardware for debug. When asked what they use at this stage of the design flow, the answers are (unsurprisingly?) similar. Two techniques still prevail: either they connect a logic analyzer to some of the FPGA's I/Os, or they use an 'embedded logic analyzer'.

Traditional vs. Embedded logic analyzer

These two approaches are summarized below (click on picture to zoom):

Traditional vs. Embedded Logic Analyzer for FPGA debugging

Schematically, using a traditional logic analyzer to debug an FPGA consists of building a special FPGA configuration in which internal nodes are routed to some of the chip's functional I/Os. These I/Os are physically routed on the board to a connector to which the logic analyzer can be hooked. This approach exploits the FPGA's reconfigurability to route different groups of internal nodes to the same set of physical I/Os. Multiplexing groups of nodes helps reduce the number of FPGA synthesis and implementation iterations. The evolution of the observed nodes is stored in the logic analyzer's memory for further analysis.
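The node-multiplexing idea can be sketched in a handful of lines. This is a toy Python model rather than RTL, and every signal name below is hypothetical – the point is only that several groups of internal nodes share one set of physical debug pins, and a select value swaps the visible group without re-implementing the design:

```python
# Toy model of debug-node multiplexing: several groups of internal
# signals share the same physical debug pins. Changing the selector
# swaps the observed group without a new synthesis/implementation run.
# All signal names are illustrative, not from any real design or tool.

DEBUG_PIN_COUNT = 8

# Three groups of internal nodes, each mapped onto the same 8 pins.
node_groups = {
    0: ["rx_valid", "rx_sof", "rx_eof", "rx_err",
        "fifo_full", "fifo_empty", "crc_ok", "link_up"],
    1: [f"rx_data[{i}]" for i in range(8)],
    2: [f"state[{i}]" for i in range(8)],
}

def debug_pins(select: int) -> list:
    """Return the internal nodes currently visible on the debug pins."""
    group = node_groups[select]
    assert len(group) == DEBUG_PIN_COUNT
    return group

print(debug_pins(0))  # control/status view on the connector
print(debug_pins(1))  # data-bus view on the same connector
```

In real hardware the selector would be a register driving a wide multiplexer in front of the debug I/Os; the model only illustrates why one pinout can serve several observation scenarios.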

Conversely, using an embedded logic analyzer consists of reserving FPGA memory resources to store the evolution of internal nodes. This memory is then read out through the device's JTAG port, and the collected traces can be visualized on a PC – usually in a waveform viewer.
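A minimal model of the embedded-logic-analyzer principle – a trigger condition plus a capture buffer standing in for the reserved block RAM – might look like this (plain Python, with an assumed buffer depth; real on-chip buffers are kilosamples deep at best):

```python
# Toy embedded logic analyzer: watch a stream of samples and, once the
# trigger condition fires, fill a small capture buffer. The buffer
# stands in for the FPGA block RAM that a real embedded analyzer
# reserves; its contents would later be read out (e.g. over JTAG).

BUFFER_DEPTH = 8  # assumed depth, far smaller than a bench instrument's

def capture(samples, trigger):
    """Return up to BUFFER_DEPTH samples starting at the first trigger hit."""
    buffer = []
    triggered = False
    for s in samples:
        if not triggered and trigger(s):
            triggered = True
        if triggered:
            buffer.append(s)
            if len(buffer) == BUFFER_DEPTH:
                break  # on-chip memory exhausted: capture stops here
    return buffer

# Example: trigger when bit 0 (say, a 'valid' flag) is high.
stream = [0b00, 0b00, 0b01, 0b11, 0b01, 0b00, 0b01, 0b11, 0b00, 0b01]
trace = capture(stream, trigger=lambda s: s & 0b01)
print(trace)
```

The hard limit in the loop is exactly the observability problem discussed below: once the reserved memory is full, capture stops, no matter how interesting the following samples are.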

Which technique is preferred?

This is the usual trade-off between the resources you can mobilize for debug and the debug goals you can achieve.

The embedded logic analyzer is generally preferred because it does not reserve FPGA I/Os for debug. Needing a sufficient number of I/Os for debug can mean choosing a more costly device 'only for debug' – which is usually a difficult thing to ask of your manager.

Conversely, the traditional logic analyzer is preferred over the embedded logic analyzer for its much larger trace storage (megasamples vs. kilosamples).

Over time, additional considerations have appeared:

– With FPGAs commonly running at more than 350 MHz, signal integrity and PCB cost problems arise when routing a large number of I/Os to a debug connector (trace-length matching, power and space issues). This works in favor of the embedded logic analyzer (not to mention the much higher price of a traditional logic analyzer).

– In my previous post, I evoked the gap that can exist between the memory available inside an FPGA and the quantity of data needed to debug efficiently.

Exploring the solutions space

Supposing that none of the above techniques is satisfactory, how can they be improved?

The picture below positions the traditional logic analyzer and the embedded logic analyzer on a two-axis chart, showing the FPGA I/O and memory resources that each technique must mobilize.

The area occupied by each solution shows the relative order of magnitude of the trace memory it provides – which is a good measure of the 'observability' it offers.

Exploring the FPGA debug solutions space
A better solution should:
1) hit the TARGET position on the chart – and –
2) provide even more observability

In my next post, I'll come back to the impact of reaching the target defined above. Stay tuned.

Thank you for reading.
– Frederic

Long live Exostiv Labs!



So here we are. Exostiv Labs (formerly Yugo Systems) has started. Those of you who are familiar with Byte Paradigm (www.byteparadigm.com) already know that we have 10 years of business behind us – quite some time dedicated to FPGA and system design AND to test and measurement.

So, why Exostiv Labs – what is it, and why now?

Exostiv Labs focuses on providing innovative debug and verification tools to the FPGA design community. We believe that FPGAs (or Programmable Logic, 'HIPPS', or whatever they are called) have their own constraints and economics. The FPGA has become one of the most complex types of chip available for digital systems. Yet, the tools provided to the FPGA engineering community have failed to scale with FPGA technology. The first and main problem in FPGA debug is observability – that is, the ability to understand a system's behaviour from what you can observe of it.

Observability is the first target

Observability is at the root of Exostiv Labs.

We chose our name to reflect what we think is the biggest issue of FPGA debug: visibility, and the need to see more – ultimately everything.
The other questions are developed more extensively in our latest white paper, 'FPGA verification tools need an upgrade'. You can download it from the link below.

Thank you for reading.

– Frederic
