FPGA Debug Reloaded.

Get your money back in 4 weeks

Debug productivity is notoriously hard to sell.

Engineers who ask for a budget for debugging tools are still too often blamed by management for creating the bugs in the first place.
(‘Why would I pay extra to fix the bugs that YOU put in the design?’)

Putting a value on debugging is a particularly hard task. It is all about reducing the time spent on debugging, but how much does debugging really cost, and how can we be sure that a specific tool really brings an improvement?

How much engineering time would need to be saved to justify buying EXOSTIV?
A rule of thumb is 4 weeks per year (or even as little as 2 weeks for the lucky ones located in high-salary areas).

A good example

I recently visited a company where the engineering team wanted to evaluate EXOSTIV on an existing board. This board was mounted with an FPGA supported by EXOSTIV and featured a single SFP connector. As such, it was usable ‘out of the box’. We offered to set up the project files for EXOSTIV with the engineering team, and within 30 minutes we could insert an EXOSTIV debug IP into the target design. As we did it ourselves, there was no initial setup cost nor learning curve in this example. After FPGA implementation, EXOSTIV connected right away and we could capture data. It seemed like a good demonstration. After 2 hours, I left the engineering team with a trial unit of EXOSTIV and allowed them to use it for free until the next day.

The next day, the engineering team told me that the tool was easy to use for those used to JTAG-based logic analyzers such as Chipscope / Xilinx logic analyzer. Basically, the flow was identical. Configuring transceivers required some additional experience to understand the metrics, clock sources and so on, but this was general knowledge of FPGA structure that any engineer should learn someday.

Then, they told me that the visibility provided by Exostiv had allowed them to find and correct a bug in an Ethernet IP, that they had not been able to see before, because their tools could not reach such debug scenarios. They were about to go to production and said that the result was ‘invaluable’. This result had totally exceeded my expectations.

I was absolutely delighted.
I expected to receive a purchase order the same day.

I was wrong.

When ‘invaluable’ kills business

Actually, they were puzzled. They somehow came to the conclusion that EXOSTIV was priced too high because our model involves subscribing to the EXOSTIV software for a minimum of 12 months – and here the bug resolution had been so fast… (I am still perplexed by this reasoning…). Anyway, they decided to wait until they had a new bug or alert that could justify buying the tool.
EXOSTIV had revealed an issue that they were not aware of – and before it became painful to anyone.

And what about the management? Practically, nothing harmful had happened at all – so the management was not even considering a purchase…

Missed market opportunity cost

Going to production with unknown bugs has a cost that generally reduces to how much market (share) you’ll lose by arriving late on the market with a working product. In this case, the product seemed already reasonably stable: the engineering team was perfectly qualified and had not seen anything wrong.

This cost is called the ‘(missed) market opportunity cost’ and can be estimated as the value of the market that is left to the competition because you are delaying your product launch. Even though this cost can be large (losing a few % of market share should represent a lot of money – or you do not address a market that is large enough), it often has no impact on a decision to invest in a new EDA tool for FPGA debug. The value can hardly be estimated accurately, and its consequences are usually unpredictable and too distant. Much too complicated…

Bottom line: ask the right questions

– Will there be bugs in your design? Absolutely. FPGAs are such complex beasts that this cannot be avoided. No wonder 40% of the total design time is spent on debug and verification.

– When do those bugs cost the most? When they ‘escape’ to production: the cost of having to stop production and go back to design is gigantic. And it is your responsibility as an engineer to find them.

– Can EXOSTIV help you find them? You bet. EXOSTIV provides unprecedented visibility.

And finally:

– Why would you reserve a budget for EXOSTIV? Because it pays back if you save 4 weeks of engineering per year. And this can be 4 weeks total for a team that shares one license.
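
The break-even arithmetic behind this rule of thumb can be sketched in a few lines. The salary and subscription figures below are purely hypothetical placeholders, not actual EXOSTIV pricing:

```python
# Back-of-the-envelope payback check for a debug tool subscription.
# All monetary figures are illustrative assumptions, not real pricing.

WORKING_WEEKS_PER_YEAR = 47        # assumption
ENGINEER_COST_PER_YEAR = 94_000.0  # fully loaded yearly cost, assumption

def weeks_to_pay_back(tool_cost_per_year: float) -> float:
    """Engineering weeks that must be saved per year to break even."""
    cost_per_week = ENGINEER_COST_PER_YEAR / WORKING_WEEKS_PER_YEAR
    return tool_cost_per_year / cost_per_week

# With these assumptions, an 8,000/year tool breaks even at 4 weeks saved:
print(weeks_to_pay_back(8_000))  # → 4.0
```

With a higher fully loaded engineering cost, the break-even point drops toward the 2 weeks mentioned above.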

Thank you for reading.
– Frederic

RTL or Netlist flow?


EXOSTIV Dashboard ‘Core Inserter’ now offers 2 alternative flows* for inserting EXOSTIV IP into the target design: the ‘RTL flow’ and the ‘Netlist flow’.

With the RTL flow, EXOSTIV IP is generated as an RTL (HDL) ‘black box’, with a synthesized netlist underneath. EXOSTIV IP is inserted ‘manually’ by the designer into the target design by editing the RTL source code (VHDL or Verilog). Once it is inserted, the user manually runs the synthesis and implementation (place & route) of the instrumented code.

The Netlist flow lets the user pick the nodes to be observed from a synthesized design. EXOSTIV IP is then synthesized and automatically inserted into the target design netlist. The instrumented netlist can be placed and routed automatically too.

Which one is right for you?

You can choose the flow when creating a new project with EXOSTIV Dashboard software.

This choice has consequences for the output files, the results of the core insertion process and the operations that you need to execute to obtain an instrumented design.
See the table below:

RTL flow

  • Manual core insertion in RTL code
  • EXOSTIV IP is not linked to specific nodes
  • Node names are mapped during analysis
  • Requires synthesis, place & route and bitstream generation after insertion
  • Produces: a netlist of EXOSTIV IP, an RTL instantiation template and pinout / timing constraints

Netlist flow

  • Automatic core insertion in Netlist
  • EXOSTIV IP is linked to nodes selected from the design
  • Node names are mapped during EXOSTIV IP generation
  • Requires place & route and bitstream generation after insertion
  • Produces: an instrumented design netlist and optionally, a fully placed and routed design

The RTL flow…

…produces a generic debugging IP with the chosen resources and generic input ports. Configuring EXOSTIV IP in the EXOSTIV Dashboard is merely a matter of choosing the number of capture units in the IP and their width (see an example of such an IP component declaration in VHDL below). The generated modules have ports numbered like ‘cu0_Trig’ or ‘cu2_Data’.
Once generated, you can connect EXOSTIV IP to any RTL signal / wire from your RTL code.

Once the IP is connected in the target design RTL code and synthesis and P&R have run, the EXOSTIV IP inputs really have a functional meaning. It is then desirable to specify a logical name rather than an obscure ‘cu#_Trigger(0)’ – it just eases analysis. This is done in the analyzer, with the ‘Data Probes Remapping’ function (see screenshot below).

Typically, the RTL flow will be used when:

  • The same sets of nodes are observed for debugging. This would be the case for an AXI bus-based system, where debugging the system requires watching the AXI bus traffic.
    Typically, this supposes that there is little or no iteration in selecting the nodes observed during debug.
  • Interesting nodes are optimized away, removed or renamed after synthesis, preventing you from probing them. In such a case, the RTL flow gives better ‘insurance’ that you will be able to watch the interesting nodes. Using properties on signals such as ‘MARK_DEBUG’ for Xilinx Vivado or ‘SYN_KEEP’ for Synplify would, however, limit the risk of ‘losing’ the interesting nets after synthesis.

The Netlist flow…

… uses a communication between the FPGA vendor tool and EXOSTIV Dashboard software to dig into the design hierarchy, extract nodes and clock information from the synthesized design (see the node selection window below).

Typically, the Netlist flow will be used when:

  • You want to save the time of design synthesis when a new EXOSTIV IP core is generated. This would be the case when successive core insertions must be run to track bugs.
  • You do not want to manually modify the RTL code – which can be time-consuming. Leaving the work of inserting and connecting EXOSTIV IP to EXOSTIV Dashboard software and the FPGA vendor tool suite proves extremely efficient.


EXOSTIV Dashboard flows are 2 alternative ways of generating and inserting EXOSTIV IP into your target design. Which one is right for you very much depends on your habits and on the flow you are using for creating and debugging FPGAs.

It would be too quick to dismiss the RTL flow because it is not fully automated – and requires editing the RTL code to bring the right ‘wire connections’ to the level where EXOSTIV IP is instantiated. The RTL flow fits well when the same specific nodes are systematically probed. Using scripts with conditional instantiation of IPs is a very efficient way to generate and debug designs based on an RTL approach. As we’ll explore in a future post, coding techniques exist for easing the work of wiring nodes across the hierarchy.

Similarly, you should think again before dismissing the Netlist flow because node names cannot be found after synthesis: some properties can be used to limit this effect – and we’ll come back to this later. The integration of EXOSTIV with the FPGA vendor tool allows browsing the design hierarchy, selecting the nodes to observe and then automatically generating an instrumented design. When available*, the Netlist flow can be a wonderfully productive approach to debug.

It will also be interesting to see how each flow makes the most of a new feature (coming soon…):
a full scripting interface for EXOSTIV Dashboard. We’ll come back to this too.

Thank you for reading.
– Frederic

*The Netlist flow is currently available for Xilinx FPGA only.

Exostiv for Intel (Altera) FPGA – announcement


Using Intel FPGA?

We have exciting news for you: EXOSTIV will soon support Intel FPGA!
Please check the pictures above and below – this is EXOSTIV working with the ‘Attila’ dev kit of our partner, Reflex-CES, equipped with one Arria 10 GX 10AX115N4F40I3SG device.

We are now able to use the EXOSTIV Dashboard Analyzer connected to an IP loaded into the design through the board’s QSFP port (with a QSFP to 4xSFP cable with splitter).
The board’s FMC connector mounted with our FMC to HDMI adapter works as an access port too! (Click here to check the connectivity options for EXOSTIV.)

This beta version was shown during the training we co-organized with Telexsus Ltd. in Maidenhead (UK) on October 13th and at the Europe edition of the Intel SoC FPGA Developer’s Forum (ISDF) held in Frankfurt on October 19th, where Exostiv Labs participated as a Regional Sponsor.


We are happy to announce that EXOSTIV for Intel FPGA (formerly Altera) will be available by the end of 2016. That’s an exciting new step for us and for EXOSTIV!


We would like to thank all our customers using Intel FPGA for their patience. We’ll be in touch!
– Frederic

Debug with reduced footprint



Footprint, ‘real estate’, resources… No matter the design complexity, allocating resources to debugging is something you’ll worry about.
If you are reading these lines, it is likely that you have some interest in running some of your system debugging on real hardware (check this post if you do not know why it is important).

EXOSTIV enables you to get extended visibility out of a running FPGA.
It impacts the target system resources in 2 ways:
– it requires logic, routing & storage resources inside the target FPGA for an IP used to reach internal FPGA nodes.
I’ll cover this aspect in a future post.

– it requires a physical connector to access the FPGA.
(And NO, JTAG is not good enough, because it does not provide sufficient bandwidth – even with compression.)

Read on…

Choosing the right connector

All EXOSTIV Probes provide 2 connection options:
Option #1: uses a single HDMI-type connector (note: this is not a full HDMI connection!)
Option #2: uses up to 4 SFP/SFP+ connectors

From there, a wider range of options is within reach if you consider using additional cables and board adapters available from Exostiv Labs or from third-party suppliers.

Which option will work for you? Follow the guidelines below:

1. Is there an existing SFP/SFP+/QSFP/QSFP+ connector directly connected to the FPGA transceivers?

  • Check if you can reserve this FPGA resource (and the board connector) for debug – at least temporarily. You’ll need 1 SFP/SFP+ connection per gigabit transceiver used.
  • QSFP/QSFP+ connectors can be used with a 4xSFP to QSFP cable with splitter.

Note: most of the Dini Group’s boards feature SFP/SFP+, quad SFP/SFP+ or QSFP/QSFP+ connectors by default. And they are directly connected to FPGA transceivers.

2. Is there another type of connector directly connected to the FPGA transceivers?

Please contact us for details on our adapters, external references and custom adapters support.

3. For all other cases: you’ll need to modify your board and add a connector.

  • Is space on the board critical? Go for HDMI or even micro-HDMI!

See the picture below – this is an Artix-7 board equipped with a tiny micro-HDMI connector, providing up to 4 x 6.6 Gbps of bandwidth for debugging FPGAs.

  • You do not have space constraints (lucky you)? Pick the one you like: SFP/QSFP/HDMI/micro-HDMI/other (+ adapter).

*** Check our special 12 Gbps probe test report – Click here ! ***

EXOSTIV provides standard and custom connection options that enable fast deployment with standard FPGA development kits and/or limit the footprint requirements on the target FPGA board.

Thank you for reading.
– Frederic

Debugging FPGAs at full speed


In my previous post, I explained why increasing the available ‘window of visibility’ is a gigantic advantage when tracking system-level issues on modern complex FPGAs. EXOSTIV’s structure does not require the FPGA internal memory to grow with the depth of the capture.

Yet, mobilizing hardware resources (internal / external hardware and software, memory, logic, communication ports, …) is always the result of a compromise. Debugging FPGA at full speed requires adopting the right strategies to remove bottlenecks.


Memory severely limits JTAG instrumentation

In traditional JTAG-based embedded instrumentation, the main bottleneck is FPGA memory. There is some ability to scale up the memory used for debug, but this depends on what’s left in the FPGA after the design is implemented. Collecting a larger number of signals and deeper traces quickly exhausts the memory: beyond a few (hundred?) kilobytes at best, you’re done. This severely limits the ability to debug further.

A common strategy for scaling up debug without increasing the memory consists in multiplexing data groups (see figure below), based on the principle that debugging an FPGA sometimes requires ‘over-instrumenting’ the design to save on implementation iterations. ‘Multiplexing’ means segmenting the observed signals into groups. This technique helps reserve the scarce memory resources for a reduced set of signals, thereby enabling longer (‘deeper’) captures.
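
The depth gain from multiplexing is simple arithmetic. The figures in this sketch (debug memory size, signal counts) are hypothetical:

```python
# Illustrative: capture depth of a JTAG-based embedded logic analyzer
# storing samples in on-chip memory. All figures are hypothetical.

DEBUG_MEMORY_BITS = 512 * 1024 * 8  # 512 KB of block RAM reserved for debug

def capture_depth(observed_width_bits: int) -> int:
    """Number of samples that fit in the reserved debug memory."""
    return DEBUG_MEMORY_BITS // observed_width_bits

# Watching all 1,024 instrumented signals at once:
print(capture_depth(1024))  # → 4096 samples

# Multiplexed into 4 groups of 256 signals: 4x deeper capture per group
print(capture_depth(256))   # → 16384 samples
```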

A displacement towards bandwidth

Unlike JTAG-based systems – where the captured data is really ‘trapped’ inside the FPGA – EXOSTIV sends the captured data to a large external memory (8 GB currently). This memory is hardly the bottleneck and it can be scaled up rather easily.

For EXOSTIV, the ‘new’ bottleneck is the available bandwidth (note that even with this bottleneck, EXOSTIV goes far beyond the capabilities of JTAG-based embedded instrumentation).
Advanced versions of EXOSTIV provide up to 50 Gbps of total bandwidth with 8 GB of memory. However, these resources are in no way ‘unlimited’ with regard to the ‘exploding complexity’ of modern FPGAs: 50 Gbps allows the continuous streaming of 1,000 bits sampled at… just 50 MHz.
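
The 50 MHz figure follows directly from dividing the total link bandwidth by the number of observed bits per sample:

```python
# Maximum sampling rate for continuous streaming: the produced data rate
# (width x sampling frequency) may not exceed the total link bandwidth.

def max_continuous_rate_mhz(link_gbps: float, width_bits: int) -> float:
    """Highest sampling clock (in MHz) sustainable without interruption."""
    return link_gbps * 1e9 / width_bits / 1e6

print(max_continuous_rate_mhz(50, 1000))  # → 50.0 (the example above)
# Narrower captures can stream proportionally faster:
print(max_continuous_rate_mhz(50, 128))   # → 390.625
```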

Pushing the boundaries

EXOSTIV includes the following features to preserve bandwidth:

  • Data group multiplexing;
  • Data qualification;
  • Burst mode of operation.

All of the above strategies act differently with one purpose: reduce the bandwidth requirements.

Data Group Multiplexing

We have already evoked this strategy as a way to ‘deepen’ captures in JTAG-based systems, where the storage memory is very small.
For EXOSTIV, multiplexing the observed nodes as data groups reduces the maximal bandwidth required to flow the data out of the FPGA. While it constrains the designer to watch the data groups separately, it helps keep the bandwidth requirements within what is available on the gigabit links. If this bandwidth is kept below or equal to the maximum available, ‘continuous’ capture is possible, and data can flow out of the FPGA without requiring any local storage. The process only stops when the EXOSTIV Probe’s memory is full.

Data Qualification

Data qualification consists in defining ‘filters’ or ‘qualification equations’ on the observed data. A logic condition is defined that enables or disables the capture of data; this equation defines when data is (un)interesting. For instance, you may want to observe a data bus only when there is traffic on it and filter out the IDLE times.
Coupled with some buffering in the EXOSTIV IP Capture Units (click here for more details on EXOSTIV IP’s structure), this strategy effectively lowers the average bandwidth requirements.
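
As a hypothetical illustration, the average bandwidth after qualification scales with the fraction of clock cycles for which the qualification condition is true (bus width, clock rate and duty cycle below are arbitrary example figures):

```python
# Illustrative: average bandwidth after data qualification, assuming the
# qualification condition holds for a given fraction of clock cycles.

def avg_bandwidth_gbps(width_bits: int, sample_mhz: float,
                       active_fraction: float) -> float:
    """Average data rate (Gbps) actually sent over the gigabit links."""
    return width_bits * sample_mhz * 1e6 * active_fraction / 1e9

# A 512-bit bus sampled at 200 MHz produces 102.4 Gbps of raw data...
print(avg_bandwidth_gbps(512, 200, 1.0))   # → 102.4
# ...but only 25.6 Gbps on average if the bus is active 25% of the time:
print(avg_bandwidth_gbps(512, 200, 0.25))  # → 25.6
```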

Burst mode of operation

Capturing repetitive chunks of data defined by a trigger condition is another solution. Like data qualification, bursting data lowers the average bandwidth requirements on the gigabit transceiver links. Using bursts must not be seen as a ‘fallback strategy’ for cases where streaming data continuously is not possible. As opposed to JTAG-based solutions, where only one single burst can be observed, EXOSTIV allows bursting data out until the Probe’s memory is full. Events occurring between bursts cannot be watched, but a large set of events happening long after power-up is now accessible. (Remember: with JTAG-based systems, all you can see is… just one burst!)
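
A quick, hypothetical calculation shows how many bursts the external memory can hold – and hence how far beyond a single on-chip capture the reach extends (burst size and capture width are assumed figures):

```python
# Illustrative: number of bursts fitting in the EXOSTIV Probe's external
# memory. Burst size and capture width are hypothetical example figures.

PROBE_MEMORY_BYTES = 8 * 1024**3  # 8 GB of external capture memory

def bursts_in_memory(burst_samples: int, width_bits: int) -> int:
    """How many fixed-size bursts the probe memory can store."""
    burst_bytes = burst_samples * width_bits // 8
    return PROBE_MEMORY_BYTES // burst_bytes

# 4,096-sample bursts of 256 observed bits each:
print(bursts_in_memory(4096, 256))  # → 65536 bursts (vs. 1 with JTAG)
```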

Takeaway: make the most of your resources

EXOSTIV provides unprecedented memory and bandwidth resources that enable debugging modern FPGAs at full speed.
These already huge resources can be scaled with modern FPGA complexities. In addition, using the right EXOSTIV features and strategies helps remove a ‘new debugging bottleneck’: bandwidth. Combining burst mode operation, data qualification and data grouping is a winning strategy to make the most of EXOSTIV’s debugging resources.

Thank you for reading.
– Frederic

You can capture tons of data. Now what?


Offering huge new capabilities is not always seen positively. Sometimes, engineers come to us and ask:
‘Now that I am able to collect Gigabytes of trace data from FPGA running at speed… how do I analyze that much data?’.

Does EXOSTIV just displace the problem?

No, it does not.

In this post, I’ll show:
1) why it is important to enlarge the observation window, and
2) how to make the most of this unprecedented capability, based on EXOSTIV Dashboard’s capture modes, triggers and data qualification capabilities.

Random bugs happen

The figure below shows a typical debug scenario. A bug occurs randomly ‘some time’ after system power-up. During an initial time interval, the effects of the bug are not noticeable – after which the system’s behaviour is sufficiently degraded to obviously see the bug’s effect. Typically, the engineer will need to detect a ‘start of investigation’ condition from which he’ll have to ‘roll back in time’ – ultimately to find the root cause of the bug. In the case of an FPGA, the engineer observes the FPGA’s internal logic.

Random bugs typically occur as the result of a simulation model issue. The system’s environment could not be simulated with complete accuracy, and/or some part of the system behaves differently in the ‘real world’ than initially thought. In addition, long operating times are not really simulation’s best friend.

Size matters.

Suppose that we capture some ‘observation window’ of the system. If we are lucky enough to observe a bug right when it occurs, we can get an understanding of what happens. Similarly, we are able to recognize the effects of the bug, as shown by the orange ‘X’ on the picture (called the ‘start of investigation’). Conversely, we have no idea how to trigger on such conditions (if we had the ability to trigger on unknown and random bugs, they would have been corrected already).

With a reduced observation window, the process of debugging requires some luck. We can miss everything, or recognize an effect without having sufficient data to find the start of investigation. We can be lucky enough to find the start of investigation but have no history to roll back to the root cause… – or we can hope to ‘shoot right on the bug’.

You do not want to leave the debugging process to chance?

Enlarge the observation window!


With a properly sized observation window, you can be certain of finding a random bug.

Not ‘brute force’ only

EXOSTIV provides a total observation window of 8 Gigabytes.
However, EXOSTIV is not a ‘brute force’ solution only. Many of its features are equally important to implement ‘smarter’ captures in the lab.
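
To give a feel for what an 8 GB window represents in time, here is a hypothetical sizing sketch (the capture width and sampling rate are arbitrary example figures; the resulting data rate must of course stay within the available link bandwidth):

```python
# Illustrative: duration of one continuous capture that fills the probe
# memory. Width and sampling rate are arbitrary example figures.

PROBE_MEMORY_BYTES = 8 * 1024**3  # 8 GB observation window

def capture_duration_s(width_bits: int, sample_mhz: float) -> float:
    """Seconds of contiguous history stored when streaming continuously."""
    samples = PROBE_MEMORY_BYTES * 8 // width_bits
    return samples / (sample_mhz * 1e6)

# 256 signals sampled at 100 MHz (25.6 Gbps, within a 50 Gbps budget):
print(capture_duration_s(256, 100))  # → ~2.68 s of contiguous history
```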

Trigger, Data Qualification, Streaming and Burst help you define better capture windows.

1. Trigger

The total capture window may be gigantic, but a triggering capability remains essential. During EXOSTIV IP setup, the nodes to be observed can optionally be marked as a ‘source for trigger’. Currently, EXOSTIV is able to implement a single or repeating trigger based on AND and OR equations defined on these signals. The AND and OR equations can also be combined. (In a future release, EXOSTIV will provide sequential triggers as well.)

With the growing complexity of systems, defining a very accurate trigger condition has sometimes become a way to work around a too-limited capture window.

That time is over. You are now able to use triggers to collect a better capture window. Maybe you can relax a little about defining complex triggers, because you can count on a very large capture window to collect the information that you need.

2. Data Qualification

‘Data qualification’ enables you to define a logic condition on the observed signals. Data is captured and recorded by EXOSTIV only when this condition is true.

For instance, this can be used to filter out the times when nothing happens on a bus. This is a very powerful feature that helps you preserve the bandwidth that you use to extract data.

3. Streaming or Burst?

EXOSTIV provides 2 modes of capture:

  • Streaming mode: data is captured continuously. If the bandwidth required to capture data exceeds the EXOSTIV Probe’s capabilities, the capture stops and resumes once the Probe is able to do so. Triggers and capture length can be used to define a repeating start condition.
  • Burst mode: data is captured by ‘chunks’ equal to the ‘burst size’. The maximum burst size is defined by the size of the Capture Unit’s FIFO (click here for more details about EXOSTIV IP). So, ‘burst mode’ is always an ‘interrupted’ type of capture.


Using EXOSTIV therefore reduces to 2 types of capture: continuous or interrupted.
‘Continuous capture’ is easy to understand: EXOSTIV starts capturing data and has sufficient bandwidth on its transceivers to sustain the capture until its large probe memory is full.

This is an unprecedented capability for an FPGA debug tool.

‘Interrupted captures’ – whether based on triggers, data qualification, burst mode or a combination of them – help build ‘better capture windows’ that focus on the data of interest. Interrupted captures also help go beyond the maximum bandwidth of the transceivers and capture data even farther in time than a continuous capture would allow.

This considerably extends your reach in time, after starting up the FPGA.

Thank you for reading.
– Frederic

The FPGA problem we are trying to solve


Designing FPGA can be complex. Each step of the design flow brings its own challenges, problems and solutions. As engineers, this is what we do: we find solutions. We use our knowledge, we mobilize our skills and find the right tools to constantly build better solutions to the problems that we encounter.

Exostiv Labs aims at providing better tools for FPGA debug. However, ‘FPGA debug’ may be understood quite diversely. In this post, I’d like to go back to the basics and pinpoint the specific problems we are trying to solve.

‘First time right’ at board bring-up is a problem.

I am an engineer. Like any engineer, I like when things are done right technically, with the right steps taken at the right time.
However, as an engineer, I have also learned to manage budget and market constraints. We all know that the distance between the ‘perfect design flow’ and the ‘actual design flow’ is made of budget and/or market constraints. We all know that sometimes, we choose to skip the verification of one ‘detail’ in order to reach the announced release date. Sometimes we are expected to accept a small percentage of uncertainty (that is called ‘experience’ or ‘know-how’) because it is statistically cost-effective.

Ideal vs. Actual design flow

“And what if a bug appears later? Well, we’ll fix it and send an upgrade.
Isn’t it the beauty of programmable devices?”

Sometimes, being ‘first on the market’ or ‘first in budget’ beats ‘first time right’.

A typical in-lab situation

We all have been there…

The system starts up and suddenly, ‘something goes wrong’. It happens randomly, sometimes 2 minutes, sometimes 2 hours after power up.
– We have no clue about what caused the bug.
– We have no idea about why it happens.
– We do not know the time from cause to effect.

This is called an ‘emergent’ system with a ‘random bug’.

It is emergent because it is the result of complex interactions between individual pieces. Such system-level bugs are the most complex – and the most time-consuming – to solve, because they involve the interactions of a whole system.

EXOSTIV solves the problem of finding the roots of bugs appearing in complex systems placed in their real environment.


Solving such a problem requires *a LOT of* visibility into the FPGA under test:
– It is hard to put a trigger condition on an unknown, random bug. To spot it, you’d better have the largest capture window possible.
– You sometimes need to extend the capture far in time (stressing the impracticability of simulation as the sole methodology).
– What happened before seeing the bug is very important to understand the ‘fatal’ sequence of events that created the faulty condition.

Without sufficient observability of the system under test, you’ll simply lose precious engineering hours hoping to just capture the bug. Then you’ll spend time again trying to capture the key events that happened before it – those that ultimately would help you understand why the bug occurs.

Instrumenting a real design is the key to overcoming modelling mistakes and seeing bugs occur in the real environment… but only if the instrumented design provides sufficient visibility!

What can you do with 8 GB captured at FPGA speed of operation?

Please think about it. I’ll explore how to make the most of the *huge* resources offered by EXOSTIV in my next post.

Thank you for reading.
– Frederic

EXOSTIV is there – and it is not a monster


As you might have noticed, EXOSTIV for Xilinx is now released. With the launch, I have been on the roads to demonstrate the product.

The good thing about meeting FPGA engineers is the flurry of questions, ideas and suggestions received as I show the product. Your feedback helps us find new ideas, locate where the most acute pains are and understand what you actually do. I would like to thank those of you who have already dedicated some time from your supercharged week to see the product in action. (If you are interested in seeing the product, please contact me to check our scheduled events.)

What is EXOSTIV?

Here is one of the slides I use to present EXOSTIV (click here for the complete presentation in PDF):


EXOSTIV is not an emulator.

Why is it important?

Well, because EXOSTIV is sometimes expected to be everything at once. Some examples:
– Can it partition a design onto multiple FPGAs?
(Nope, that’s the role of a partitioning tool. We have to define how our IP can be used with such tools, though.)
– Can it implement this (specific) trigger condition?
(Well, some of them, some not. But with its capture capacity, you might not need such a complex trigger.)
– Will it be able to replace a protocol analyzer?
(It depends on the protocol and where it is observed…).
– …

Of course, some of your suggested additional features are already in the development pipe at Exostiv Labs… But not all of them.

EXOSTIV’s main value is in the level of visibility it provides for systems running at speed.

New features will be built around this value.

Ask yourself: what can you do with 8GB of captured data flowing out of your FPGA at multi-gigabit speed? Would it add something to the flow that your current tools cannot achieve?

At Exostiv Labs, we believe that a tool that tries to be everything at once would probably be very good at nothing, ill-fitted to your flow and much too expensive for the value.

EXOSTIV is not such a monster.

Thank you for reading – and Happy Halloween to all!
– Frederic

‘My FPGA debug and verification flow should be improved…’


In my last post, I revealed some of the results of our recent survey on FPGA. These results depicted a ‘flow-conscious’ FPGA engineer, using a reduced mix of methodologies in the flow and very prone to going to the lab for debugging.

In the same survey, we tried to evaluate the FPGA engineers’ level of satisfaction with their debug and verification flow. We asked the respondents to select, among several propositions, the one that most closely matched their thinking. See the picture below (click on the picture to zoom).

Recognition of the need to improve the FPGA debug flow

More than 70% of the respondents recognize the need to improve the FPGA debug and verification flow.

The chart above represents the answers of the respondents active with FPGA design and actually using FPGA as a target technology. 72% of them recognize the need to improve the FPGA debug flow, and nearly 40% are actively looking for a solution.

Are these survey results representative of the whole industry? Well, you tell me. Contact me to share your personal experience – and I’ll update this post.

– Oh, and by the way, as I write this, we are about to release our EXOSTIV solution. It improves visibility on FPGAs running at speed by a factor of up to 200,000! See below a preview of what we’re about to release. More information will be available soon.

EXOSTIV Dashboard software screenshot preview

Thank you for reading.
– Frederic

Does FPGA use define verification and debug?


You may be aware that we have run a first survey on FPGA design, debug and verification during the last month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGA as a target technology? In other words, are FPGAs part of your final product’s BOM (bill of materials)? … – or:
– Do you use FPGA as a prototyping technology for an ASIC or a SoC?

49 respondents fell into the first group (which we’ll also call the ‘pure FPGA’ users) and 34 into the second. The survey was posted online for about one month and advertised on social networks. The questions were in English.

Behind this segmentation question is, of course, the idea that the design flow used for FPGAs is usually significantly different from (and shorter than) the one used to design an ASIC.
(This is claimed by FPGA vendors as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here.)

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They had been invited to select all choices that applied to them (click on the picture to enlarge).

Survey results

A podium shared by simulation, in-lab testing and RTL code reviews

Without significant difference between the 2 groups, simulation is the clear winner, used by a little more than 90% of respondents. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with near to 70% usage among the respondents. In-lab testing actually includes 2 categories of tools in the survey:

– traditional instruments -such as scopes and logic analyzers- and
– embedded logic analyzers -such as Xilinx Chipscope, Altera Signal Tap and the likes.

Manual RTL code review closes the podium. We would expect every designer to review his/her code – so why is it not 100%? Probably because the ones verifying and debugging FPGAs are not the RTL coders. Another possibility is that ‘reviewing the code’ might not be seen by many as a ‘verification technique’, but rather as the result of the process of verification and debug.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage enjoys the same popularity in our 2 groups (we do not think that the difference between the groups is significant here).

It has been widely said that ‘pure FPGA designers’ are careless and do not use the same advanced and rigorous techniques as ASIC designers. Well, apparently they do not just simulate a few cases; they show the same level of concern for properly covering the code as ASIC designers.

Assertions: YES
UVM: not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to have a quite positive upward trend for ASIC and SoC design. Tools and environments that can also be applied to FPGA design are appearing on the market. As we see on the chart, UVM has not really gained traction among our ‘pure FPGA’ respondents, although nearly one engineer in five among our ‘ASIC engineers’ has adopted this technique.


1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGAs at least – this technique must and can be improved.

2) The FPGA engineer seems to use a mix of methodologies for debug and verification, but not necessarily many of them.
Among our respondents, the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the ‘pure FPGA players’ against 2.68 for the ASIC designers).
And – please note – simulation is there of course, often complemented with code coverage.
‘The FPGA engineer who does not simulate at all’ appears to be just a myth today…

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– Equivalence checking – and:
… which seem to have very limited adoption among the ‘pure FPGA players’.
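To make the ‘average number of techniques’ figure in point 2 concrete, here is a minimal sketch of how such a number is computed from a multi-select survey question: count the techniques each respondent ticked, then average per group. All respondent rows below are made up for illustration – this is not our actual survey data.

```python
# Hypothetical mini-dataset: each respondent ticks every technique
# he/she uses (multi-select), as in the survey question.
respondents = [
    {"group": "pure FPGA", "techniques": {"simulation", "in-lab testing"}},
    {"group": "pure FPGA", "techniques": {"simulation", "code coverage",
                                          "manual RTL review"}},
    {"group": "ASIC",      "techniques": {"simulation", "UVM", "assertions"}},
    {"group": "ASIC",      "techniques": {"simulation", "in-lab testing"}},
]

def average_techniques(rows, group=None):
    """Average number of techniques per respondent, optionally per group."""
    counts = [len(r["techniques"])
              for r in rows
              if group is None or r["group"] == group]
    return sum(counts) / len(counts)

print(average_techniques(respondents))               # overall average
print(average_techniques(respondents, "pure FPGA"))  # per-group average
```

With the real answer sheets plugged in instead of this toy list, the same per-group averaging yields the 2.39 vs. 2.68 split quoted above.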

In a later post, we’ll further analyze the results of our survey. So far, we have shown which techniques are currently used for debug and verification.

Of course, the million dollar question is whether FPGA engineers are satisfied with this situation or if they are in need of improvements… Stay tuned.

Thank you for reading.
– Frederic


