
Deep Trace & Bandwidth

Exostiv provides deep trace AND bandwidth for maximal FPGA visibility

Exostiv provides the following maximum capabilities for capturing data from inside an FPGA running at speed:

Capabilities.

  • 50 Gigabits per second of bandwidth for collecting FPGA traces.
  • 8 Gigabytes of memory for trace storage.
  • 32,768 nodes probed simultaneously.
  • A reach of 524,288 nodes.
Actually, we have built EXOSTIV to provide VISIBILITY to FPGA designers who debug with real hardware. If you do not know why this is important, watch the following 7-minute video. It sketches out the fundamentals of EXOSTIV.

We built EXOSTIV to provide visibility to the FPGA designer.

 

 

Interrupted captures are very useful too!

I was recently demonstrating Exostiv at a customer’s site and I received the following comment:
“Even with 50 Gbps bandwidth, this tool is hardly usable because you won’t see many nodes at a usual FPGA internal sampling frequency…”
This person was implying that – for example – probing more than 250 FPGA nodes at 200 MHz already exceeds this total bandwidth (250 nodes × 200 MHz = 50 Gbps). So, Exostiv cannot be used to its fullest, right?

Wrong.

This reasoning only holds if you assume that continuous captures are the only way to get insight from an FPGA.
The following short video explains why this matters. It features a case where the capture – from start to end – spans over 11 seconds! Depending on the trigger and data qualification (or data filtering) options – and by using the full 8 GB trace data buffer – such an approach lets you observe specific moments of the FPGA in operation over hours.
 

 

With the proper capture settings, EXOSTIV lets you observe an FPGA over hours.
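
To give an idea of the orders of magnitude, here is a back-of-the-envelope sketch in Python. The probe count, sampling clock and capture duty cycle are illustrative assumptions taken from the discussion above, not measured EXOSTIV figures.

    # How long does an 8 GB trace buffer last with continuous capture
    # vs. triggered/interrupted capture? All figures are illustrative.
    GIBI = 2**30

    nodes = 250                    # probed FPGA nodes (bits per sample)
    f_sample_hz = 200e6            # internal sampling clock
    buffer_bits = 8 * GIBI * 8     # 8 GB external trace buffer, in bits

    continuous_bps = nodes * f_sample_hz          # 50 Gbps of raw trace data
    t_continuous = buffer_bits / continuous_bps   # seconds until the buffer is full

    # Interrupted capture: record only short bursts around a trigger, e.g. the
    # capture units are active 0.01% of the time thanks to trigger conditions
    # and data qualification.
    duty_cycle = 1e-4
    t_interrupted = t_continuous / duty_cycle

    print(f"Continuous capture fills 8 GB in {t_continuous:.1f} s")
    print(f"At a {duty_cycle:.2%} capture duty cycle, the same buffer spans "
          f"{t_interrupted / 3600:.1f} h of operation")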

So, the features listed below are equally important for efficient capture work (see the configuration sketch after the list).

Features.

  • 16 capture units that can be enabled/disabled dynamically
  • 16 multiplexed data groups per capture unit
  • 8k-sample local buffer in each capture unit
  • 1 trigger unit per capture unit, defining the start of capture
  • Bit or bus trigger conditions: =, /=, <, >, in range, out of range
  • Repeating/interrupted captures based on the trigger condition
  • Data qualification condition on the input data: capture only when the condition is true
  • Interactive trigger and data qualification definition: no recompile needed
  • Sequential / state machine trigger on the 2017 roadmap
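
As a purely hypothetical illustration – this is not the actual EXOSTIV Dashboard interface – the features above could be pictured as a plain configuration structure like the one below. Every name in it (signals, fields, values) is invented for the example.

    # Hypothetical configuration sketch - NOT the real EXOSTIV tool API.
    # It only illustrates the kind of settings listed above: capture units,
    # data groups, trigger conditions, data qualification and repeating captures.
    capture_config = {
        "capture_unit_0": {
            "enabled": True,                  # units can be enabled/disabled dynamically
            "data_group": 3,                  # one of 16 multiplexed data groups
            "local_buffer_samples": 8192,     # 8k-sample local buffer
            "trigger": {                      # defines the start of each capture
                "signal": "frame_header_id",  # invented bus name
                "condition": "out_of_range",  # =, /=, <, >, in range, out of range
                "low": 0x20,
                "high": 0x2F,
            },
            "data_qualification": {           # capture only when this is true
                "signal": "header_valid",     # invented bit name
                "condition": "=",
                "value": 1,
            },
            "mode": "repeating",              # repeating/interrupted captures
        },
    }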

As always, thank you for reading (and for watching)
– Frederic

Does FPGA use define verification and debug?


You may be aware that we ran a first survey on FPGA design, debug and verification over the last month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGA as a target technology? In other words, are FPGAs part of your final product’s BOM (bill of materials)? … – or:
– Do you use FPGA as a prototyping technology for an ASIC or an SoC… ?

49 respondents were found in the first group (which we’ll also call the ‘pure FPGA’ users) and 34 in the second group. The survey was posted online for about one month and advertised on social networks. The questions were in English.

Behind this segmentation question is of course the idea that the design flow used to design an FPGA is usually significantly different from (and shorter than) the one used to design an ASIC.
(This is claimed by FPGA vendors as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here.)

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They were invited to select all choices that applied to them.

Survey results

A podium shared by simulation, in-lab testing and RTL code reviews

With no significant difference between the two groups, simulation is the clear winner, used by a little more than 90% of respondents. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with close to 70% usage among the respondents. In-lab testing actually covers two categories of tools in the survey:

– traditional instruments – such as scopes and logic analyzers – and
– embedded logic analyzers – such as Xilinx ChipScope, Altera SignalTap and the like.

Manual RTL code review closes the podium. We expect that every designer actually reviews his/her code. Why is it not 100%? Probably because the people verifying and debugging FPGAs are not always the RTL coders. Another possibility is that ‘reviewing the code’ might not be seen by many as a ‘verification technique’, but as the result of the verification and debug process.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage enjoys the same popularity in our two groups (we do not think that the difference between the groups is significant here).

It has been widely said that ‘pure FPGA designers’ are careless and do not use the same advanced and rigorous techniques as ASIC designers. Well, apparently, they do not just care about simulating a few cases; they have the same level of concern for properly covering the code as ASIC designers.

Assertions: YES
UVM: not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to be on a quite positive upward trend for ASIC and SoC design. Tools and environments that can also be applied to FPGA design are appearing on the market. As we can see on the chart, UVM has not really gained traction among our ‘pure FPGA’ respondents, although nearly one in five engineers among our ‘ASIC engineers’ has adopted this technique.

Takeaways

1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGA at least – this technique must and can be improved.

2) The FPGA engineer seems to be using a mix of methodologies when it comes to debug and verification, but not necessarily a lot of them.
Among our respondents, we have found that the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the ‘pure FPGA players’ against 2.68 for the ASIC designers).
And – please note – simulation is there of course, often complemented with code coverage.
‘The FPGA engineer who does not simulate at all’ appears to be just a myth today…

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– Equivalence checking – and:
– UVM
… which seem to have very limited adoption among the ‘pure FPGA players’.

In a later post, we’ll further analyze the results of our survey. We have shown which techniques are currently used for debug and verification.

Of course, the million-dollar question is whether FPGA engineers are satisfied with this situation or whether they need improvements… Stay tuned.

Thank you for reading.
– Frederic

Defining targets (for FPGA debug)

I recently attended a technical seminar organized in the Netherlands by one of the major FPGA vendors (hint: it is one of the two top vendors among the ‘4 + now single outsider’ players in the very stable FPGA market). During lunch, I had the opportunity to discuss FPGA verification with some engineers. Beyond high-level definition techniques, simulation and other ‘pure-software’ techniques, they all used ‘real hardware’ for some of their verification iterations.

There seems to be a strongly entrenched habit among FPGA engineers: we cannot resist the appeal of being able to reconfigure the FPGA indefinitely and try a system ‘for real’.

Yet, for those who would still believe it, the days of simply throwing a design onto a board without prior verification are over for good (check this article for instance). Separate IPs and separate groups of ‘functionally coherent’ IPs really do go through all kinds of pre-hardware validation.

As one of my lunch companions said: ‘It is when we put things together for real that virtually *anything* can go wrong.’

So FPGA engineers like to go to real hardware for debug. When asked what they use at this stage of the design flow, the answers are (unsurprisingly?) similar. Two techniques still prevail: either they connect a logic analyzer to some of the FPGA’s I/Os – or they use an ’embedded logic analyzer’.

Traditional vs. Embedded logic analyzer

These two approaches are summarized below:

Traditional vs. Embedded Logic Analyzer for FPGA debugging

Schematically, using a traditional logic analyzer to debug an FPGA consists of creating a special FPGA configuration in which internal nodes are routed onto some of the chip’s functional I/Os. These I/Os are physically routed on the board to a connector to which the logic analyzer can be hooked. This approach exploits the ability to reconfigure the FPGA to route different groups of internal nodes to the same set of physical I/Os. Multiplexing groups of nodes helps reduce the number of FPGA synthesis & implementation iterations. The evolution of the observed nodes is stored in the logic analyzer’s memory for further analysis.

Conversely, using an embedded logic analyzer consists of reserving FPGA memory resources to store the evolution of the FPGA nodes. This memory is then read out through the device’s JTAG port. The collected traces can be visualized on a PC – usually in waveform viewer software.
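
The trade-off between the two approaches can be sketched in a few lines of Python. All figures (node count, spare I/Os, reserved block RAM) are illustrative assumptions, not taken from a specific device or tool.

    # Rough sketch of the resource trade-off described above.
    import math

    nodes_to_observe = 512         # internal FPGA nodes we want to see
    debug_io_pins = 64             # functional I/Os spared for a debug connector
    onchip_trace_bits = 32 * 1024  # block RAM reserved for an embedded logic analyzer

    # Traditional logic analyzer: multiplex groups of nodes onto the same I/Os;
    # trace depth is then limited only by the instrument's own memory.
    mux_groups = math.ceil(nodes_to_observe / debug_io_pins)
    print(f"Traditional LA: {mux_groups} multiplexed groups of {debug_io_pins} nodes")

    # Embedded logic analyzer: no I/O used, but the trace depth is limited
    # by the on-chip memory reserved for debug.
    depth = onchip_trace_bits // nodes_to_observe
    print(f"Embedded LA: all {nodes_to_observe} nodes at once, "
          f"but only {depth} samples of history on chip")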

Which technique is preferred?

This is the usual trade-off between the resources you are able to mobilize for debug and the achievable debug goals.

The embedded logic analyzer is generally preferred because it does not reserve FPGA I/Os for debug. Needing a sufficient number of such I/Os for debug can lead to choosing a more costly device ‘only for debug’ – which usually proves to be a difficult ask for your manager.

Conversely, the traditional logic analyzer is preferred over the embedded logic analyzer because of its larger trace storage (megasamples vs. kilosamples).

Over time, additional considerations have appeared:

– With FPGAs commonly running at more than 350 MHz, signal integrity and PCB cost problems arise when routing a large number of I/Os to a debug connector (trace length matching, power and space issues). This argues in favor of the embedded logic analyzer (not to mention the much higher price of a traditional logic analyzer).

– In my previous post, I mentioned the gap that may exist between the order of magnitude of the memory available in an FPGA and the quantity of data necessary for efficient debugging.

Exploring the solutions space

Supposing that none of the above techniques is satisfactory, how can they be improved?

The picture below shows the relative positions of the traditional logic analyzer and the embedded logic analyzer on a two-axis chart. It shows the FPGA I/O and memory resources that have to be mobilized by each technique.

The area occupied by each solution shows the relative order of magnitude of the trace memory it provides – which is a good measure of the ‘observability’ offered.

Exploring the FPGA debug solutions space
A better solution should:
1) hit the TARGET position on the chart – and –
2) provide even more observability

In my next post, I’ll come back to the impact of reaching the target defined above. Stay tuned.

Thank you for reading.
– Frederic

Why observability matters

Observability for a typical case of FPGA-based video processing systems

At Exostiv Labs, we think that ‘observability’ – or ‘visibility’, that is, ‘the ability to observe (and understand) a system from its I/Os’ – is relevant, and even key, to FPGA debug.

I’d like to show this with a real example taken from the field.

A typical Video processing debugging problem

We had an issue with an FPGA-based video processing system. At some point, the ‘data header’ of a frame of a video stream running at 24 frames per second (24 fps) went wrong. The problem was rather unpredictable and occurred randomly. And – by the way – the system was not our design: by the time we got our hands on it, improving the design methodology to avoid bugs was no longer an option.

Approach nr. 1: a standard embedded logic analyzer storing the captured debug information in FPGA memory

Our engineer set up a traditional embedded logic analyzer tool to capture 1,024 bits per frame, taken from the frame header (as the content of the pictures was not very relevant to him). 1 kbit per frame was deemed rather good observability – the engineer hoped to be able to capture and understand the history of events that created the bug. All the captured information was stored in FPGA memory; he was lucky enough to be able to reserve up to 32 kbit of FPGA memory for debugging purposes.

Traditional embedded logic analyzer approach

Using this approach, our engineer was able to observe the information related to:

32 kbit / 1,024 bits = 32 frames

At 24 fps, this is the equivalent of 1.33 seconds of the movie.
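
As a quick sanity check, here is the same arithmetic in a few lines of Python (figures taken from the example above):

    # Approach nr. 1: on-chip capture buffer only.
    bits_per_frame = 1024           # header bits captured per video frame
    onchip_buffer_bits = 32 * 1024  # 32 kbit of FPGA memory reserved for debug
    fps = 24                        # video frame rate

    frames_observed = onchip_buffer_bits // bits_per_frame  # 32 frames
    seconds_of_movie = frames_observed / fps                # ~1.33 s

    print(f"{frames_observed} frames, i.e. {seconds_of_movie:.2f} s of the movie")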

Approach nr. 2: sending the captured debug information to an external memory

Now, let’s say that we have an external memory as big as 8 GB and sufficient bandwidth to send the debug information ‘on-the-fly’. We’ll check the consequences on the FPGA resources later.

EXOSTIV approach
8 Gigabytes equals 64 Gigabits of memory.

64 Gbit / 1,024 bits ≈ 64,000,000 frames

64 million frames can be observed with the same ‘accuracy’ (remember that we extract only 1,024 bits from each header).
At 24 fps, this is the equivalent of 64 M / 24 ≈ 2.66 million seconds, or 2.66 M / 3,600 ≈ 740 hours of the movie!

Usually, a movie is about 2 hours long. So, we’ll be able to ‘see the full movie’ with the same ‘debugging accuracy’.

Actually, with 8 GB of external memory and a 2-hour movie, we have a 740 / 2 = 370 ‘scale-up factor’. This basically means that we’ll be able to extract not 1,024 bits from each frame, but 1,024 × 370 = 378,880 bits per frame.
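
And the corresponding arithmetic, following the same rounding as above (8 GB taken as 64 Gbit, and 64 Gbit / 1,024 bits approximated as 64 million frames):

    # Approach nr. 2: streaming the capture to an 8 GB external buffer.
    bits_per_frame = 1024
    fps = 24
    movie_hours = 2

    frames_observed = 64_000_000                       # ≈ 64 Gbit / 1,024 bits
    hours_observed = frames_observed / fps / 3600      # ≈ 740 h
    scale_up = round(hours_observed / movie_hours)     # ≈ 370
    scaled_bits_per_frame = bits_per_frame * scale_up  # 378,880 bits per frame

    print(f"≈ {int(hours_observed)} h observable; scale-up ×{scale_up}: "
          f"{scaled_bits_per_frame:,} bits per frame for a 2-hour movie")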

Gigabyte-range storage eliminates the ‘guesswork’

We have just seen that scaling up the total capture size of the debug information has a positive impact on how much of each picture can be seen AND how much of the movie can be observed.

When a bug may occur at any moment during a full 2 hours video stream, wouldn’t you feel more secure if you were certain to have captured the bug AND the history of events that have created it?

What about the required bandwidth?

Up to this point, we have considered a very theoretical example, assuming that we would be able to stream the debug information out to the external 8 GB memory. Actually, we do have this capability – with plenty of margin.

At 24 fps, extracting 378,880 bits per frame requires a total bandwidth of 24 × 378,880 ≈ 9.1 Mbps on average.

EXOSTIV™ provides up to 8 GB of memory and up to 4 × 6.6 Gbps of bandwidth to extract debug information from the FPGA.
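
A quick back-of-the-envelope comparison of the required and available bandwidth, using the figures above (the 4 × 6.6 Gbps link bandwidth is the one quoted in this post):

    # Required vs. available bandwidth for the video debug example.
    fps = 24
    bits_per_frame = 378_880

    required_bps = fps * bits_per_frame  # ≈ 9.1 Mbps on average
    available_bps = 4 * 6.6e9            # 26.4 Gbps of link bandwidth

    print(f"Required: {required_bps / 1e6:.1f} Mbps, "
          f"available: {available_bps / 1e9:.1f} Gbps "
          f"(≈ {available_bps / required_bps:,.0f}× margin)")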

So…

Stop the guesswork during FPGA debug,

Scale up your tools and –

Increase your observation capability.

Thank you for reading
– Frederic
