Visibility into the FPGA.

Logic Observations

You can capture tons of data. Now what?

EXOSTIV can capture very large data sets


Offering huge new capabilities is not always seen positively. Sometimes, engineers come to us and ask:
‘Now that I am able to collect Gigabytes of trace data from FPGA running at speed… how do I analyze that much data?’.

Does EXOSTIV just displace the problem?

No, it does not.

In this post, I’ll show:
1) why it is important to enlarge the observation window, and
2) how to make the most of this unprecedented capability, based on the EXOSTIV Dashboard's capture modes, triggers and data qualification capabilities.

Random bugs happen

The figure below shows a typical debug scenario. A bug occurs randomly 'some time' after system power-up. During an initial time interval, the effects of the bug are not noticeable – after which the system's behaviour is sufficiently degraded that the bug's effect is obvious. Typically, the engineer needs to detect a 'start of investigation' condition from which to 'roll back in time' – ultimately to find the root cause of the bug. In the case of an FPGA, the engineer observes the FPGA's internal logic.

Random bugs typically occur as a result of a simulation model issue. The system's environment could not be simulated with complete accuracy, and/or some part of the system behaves differently in the 'real world' than initially thought. In addition, long operating times are not really the simulation's best friend.

Size matters.

Suppose that we capture some 'observation window' of the system. If we are lucky enough to observe a bug right when it occurs, we can get an understanding of what happens. Similarly, we are able to recognize the effects of the bug, as shown by the orange 'X' on the picture (called the 'start of investigation'). Conversely, we have no idea how to trigger on such conditions (if we had the ability to trigger on unknown and random bugs, they would have been corrected already).

With a reduced observation window, the process of debugging requires some luck. We can miss everything; we can recognize an effect without having sufficient data to find the start of investigation; we can find the start of investigation but have no history to roll back to the root cause – or we can hope to 'shoot right on the bug'.

Don't want to leave the debugging process to chance?

Enlarge the observation window!



With a properly sized observation window, you can be certain of finding a random bug.

Not ‘brute force’ only

EXOSTIV provides a total observation window of 8 Gigabytes.
However, EXOSTIV is not just a 'brute force' solution. Many of its features are equally important for implementing 'smarter' captures in the lab.

Trigger, Data Qualification, Streaming and Burst help you define better capture windows.

1. Trigger

Although the total capture window may be gigantic, a triggering capability remains essential. During EXOSTIV IP setup, the nodes to be observed can optionally be marked as 'source for trigger'. Currently, EXOSTIV can implement a single or repeating trigger based on AND and OR equations defined on these signals; the AND and OR equations can also be combined. (A future release will provide sequential triggers as well.)
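As a thought model, such a combined AND/OR trigger condition could be sketched in software as follows. This is a minimal illustration with hypothetical signal names, not EXOSTIV's actual trigger engine:

```python
# Hypothetical software model of a trigger condition: an OR of AND terms
# defined over signals marked as 'source for trigger'. Signal names and
# the equation structure are illustrative assumptions only.

def make_trigger(and_terms):
    """and_terms: list of dicts mapping signal name -> required value.
    The trigger fires when ANY term has ALL of its signals at the
    required value (an OR combination of AND equations)."""
    def fires(sample):
        return any(all(sample.get(sig) == val for sig, val in term.items())
                   for term in and_terms)
    return fires

# Fire on (valid AND error) OR (overflow):
trigger = make_trigger([{"valid": 1, "error": 1}, {"overflow": 1}])

samples = [
    {"valid": 1, "error": 0, "overflow": 0},  # no term true
    {"valid": 1, "error": 1, "overflow": 0},  # first AND term true
    {"valid": 0, "error": 0, "overflow": 1},  # second term true
]
hits = [i for i, s in enumerate(samples) if trigger(s)]
print(hits)  # -> [1, 2]
```

A repeating trigger would simply re-arm the same condition after each capture.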

With the growing complexity of systems, defining a very accurate trigger condition has sometimes become a way to work around a too-limited capture window.

Those days are over. You can now use triggers to collect a better capture window. Maybe you can relax a little about defining complex trigger conditions, because you can count on a very large capture window to collect the information that you need.

2. Data Qualification

‘Data qualification’ enables you to define a logic condition on the observed signals. Data is captured and recorded by EXOSTIV only when this condition is true.

For instance, this can be used to filter out the times when nothing happens on a bus. This is a very powerful feature that helps you preserve the bandwidth that you use to extract data.
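Conceptually, data qualification behaves like a filter on the capture stream. The sketch below is only an illustration – the 'valid' flag standing for bus activity is an assumption, not part of EXOSTIV's actual interface:

```python
# Toy model of data qualification: a sample is recorded only when the
# qualification condition is true, so idle bus cycles never consume
# capture bandwidth or trace memory.

samples = [
    {"valid": 0, "data": 0x00},   # idle cycle -> filtered out
    {"valid": 1, "data": 0xA5},   # bus active -> recorded
    {"valid": 0, "data": 0x00},   # idle cycle -> filtered out
    {"valid": 1, "data": 0x3C},   # bus active -> recorded
]

def qualify(sample):
    return sample["valid"] == 1   # the qualification condition

recorded = [s["data"] for s in samples if qualify(s)]
print(recorded)  # -> [165, 60]
```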

3. Streaming or Burst?

EXOSTIV provides 2 modes of capture:

  • Streaming mode: data is captured continuously. If the bandwidth required to capture data exceeds the EXOSTIV Probe's capabilities, capture stops and resumes once the probe is able to keep up. Triggers and capture length can be used to define a repeating start condition.
  • Burst mode: data is captured in 'chunks' equal to the 'burst size'. The maximum burst size is defined by the size of the Capture Unit's FIFO (click here for more details about EXOSTIV IP). 'Burst mode' is therefore always an 'interrupted' type of capture.
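The two modes can be contrasted with a toy simulation. All numbers below (FIFO depth, rates, burst size) are made-up illustrations, not EXOSTIV parameters:

```python
# Toy model contrasting the two capture modes: a burst capture always
# records fixed-size chunks, while a streaming capture pauses when the
# produced data rate exceeds the drain bandwidth and the FIFO fills,
# then resumes once the FIFO has drained.

def burst_capture(n_samples, burst_size, period):
    """Record 'burst_size' samples at the start of every 'period' samples."""
    return [t for t in range(n_samples) if t % period < burst_size]

def streaming_capture(n_samples, produce_rate, drain_rate, fifo_depth):
    """Record while the FIFO (filled at produce_rate, drained at
    drain_rate) has room; skip samples while it is full."""
    fifo, recorded = 0.0, []
    for t in range(n_samples):
        if fifo + produce_rate <= fifo_depth:
            recorded.append(t)
            fifo += produce_rate
        fifo = max(0.0, fifo - drain_rate)
    return recorded

print(burst_capture(10, burst_size=2, period=5))         # -> [0, 1, 5, 6]
print(len(streaming_capture(100, 2.0, 2.0, 8.0)))        # -> 100 (sustained)
print(len(streaming_capture(100, 3.0, 2.0, 8.0)) < 100)  # -> True (interrupted)
```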


Takeaways

EXOSTIV captures reduce to 2 types: continuous or interrupted.
‘Continuous capture’ is easy to understand: EXOSTIV starts capturing data and has sufficient bandwidth on its transceivers to sustain data capture until its large probe memory is full.

This is an unprecedented capability for an FPGA debug tool.

'Interrupted captures' – whether based on triggers, data qualification, burst mode or a combination of them – help define better capture windows that focus on the data of interest. Interrupted captures also help go beyond the maximum bandwidth of the transceivers and capture data even farther in time than a continuous capture would allow.

This considerably extends your reach in time, after starting up the FPGA.

Thank you for reading.
– Frederic

The FPGA problem we are trying to solve

Problem and solution


Designing FPGA can be complex. Each step of the design flow brings its own challenges, problems and solutions. As engineers, this is what we do: we find solutions. We use our knowledge, we mobilize our skills and find the right tools to constantly build better solutions to the problems that we encounter.

Exostiv Labs aims at providing better tools for FPGA debug. However, ‘FPGA debug’ may be understood quite diversely. In this post, I’d like to go back to the basics and pinpoint the specific problems we are trying to solve.

‘First time right’ at board bring-up is a problem.

I am an engineer. Like any engineer, I like when things are done right technically, with the right steps taken at the right time.
However, as an engineer, I have also learned to manage budget and market constraints. We all know that the distance between the 'perfect design flow' and the 'actual design flow' is made of budget and/or market constraints. We all know that sometimes the verification of one 'detail' is skipped in order to reach the announced release date. Sometimes we are expected to accept a small percentage of uncertainty (that is called 'experience' or 'know-how') because it is statistically cost-effective.

Ideal vs. Actual design flow

“And what if a bug appears later? Well, we’ll fix it and send an upgrade.
Isn’t it the beauty of programmable devices?”

Sometimes, being ‘first on the market’ or ‘first in budget’ beats ‘first time right’.

A typical in-lab situation

We all have been there…

The system starts up and suddenly, ‘something goes wrong’. It happens randomly, sometimes 2 minutes, sometimes 2 hours after power up.
– We have no clue about what caused the bug.
– We have no idea about why it happens.
– We do not know the time from cause to effect.

This is called an 'emergent system' with a 'random bug'.

It is emergent because it is the result of complex interactions between individual pieces. Such system-level bugs are the most complex – and time-consuming – to solve because they involve the whole system.

EXOSTIV solves the problem of finding the roots of bugs appearing in complex systems placed in their real environment.

EXOSTIV solves FPGA board bring-up problems

Solving such a problem requires *a LOT of* visibility on the FPGA under test:
– It is hard to put a trigger condition on an unknown random bug. To spot it, you’d better have the largest capture window possible.
– You sometimes need to extend the capture far in time (underlining the impracticality of simulation as the sole methodology).
– What happened before seeing the bug is very important to understand the ‘fatal’ sequence of events that has created the faulty condition.

Without sufficient observability of the system under test, you'll simply lose precious engineering hours hoping to – just – capture the bug. Then you'll spend time again trying to capture the key events that happened before it – the events that would ultimately help you understand why the bug occurs.

Instrumenting a real design is the key to overcoming modelling mistakes and seeing bugs occur in the real environment… but only if the instrumented design provides sufficient visibility!

What can you do with 8 GB captured at FPGA speed of operation?

Please think about it. I'll explore how to make the most of the *huge* resources offered by EXOSTIV in my next post.

Thank you for reading
– Frederic

EXOSTIV is there – and it is not a monster

Happy Halloween


As you might have noticed, EXOSTIV for Xilinx is now released. With the launch, I have been on the roads to demonstrate the product.

The good thing about meeting FPGA engineers is the flurry of questions, ideas and suggestions I receive as I show the product. Your feedback helps us find new ideas, locate the most acute pains and understand what you actually do. I would like to thank those of you who have already dedicated some time from your supercharged week to see the product in action. (If you are interested in seeing the product, please contact me about our scheduled events.)

What is EXOSTIV?

Here is one of the slides I use to present EXOSTIV (click here for the complete presentation in PDF):

What is EXOSTIV

EXOSTIV is not an emulator.

Why is it important?

Well, because EXOSTIV is sometimes expected to be everything at once. Some examples:
– Can it partition a design onto multiple FPGAs?
(Nope, that's the role of a partitioning tool. We have to define how our IP can be used with such tools, though.)
– Can it implement this (specific) trigger condition?
(Well, some of them, some not. But with its capture capacity, you might not need such a complex trigger.)
– Will it be able to replace a protocol analyzer?
(It depends on the protocol and where it is observed…).
– …

Of course, some of your suggested additional features are already in the development pipeline at Exostiv Labs… but not all of them.

EXOSTIV’s main value is in the level of visibility it provides for systems running at speed.

New features will be built around this value

Ask yourself: what can you do with 8GB of captured data flowing out of your FPGA at multi-gigabit speed? Would it add something to the flow that your current tools cannot achieve?

At Exostiv Labs, we believe that a tool that tries to be everything at once would probably be very good at nothing, not well fitted to your flow and much too expensive for the value.

EXOSTIV is not such a monster.

Thank you for reading – and Happy Halloween to all!
– Frederic

‘My FPGA debug and verification flow should be improved…’

Improve the FPGA debug flow


In my last post, I revealed some of the results of our recent survey on FPGA. These results depicted a ‘flow-conscious’ FPGA engineer, using a reduced mix of methodologies in the flow and very prone to going to the lab for debugging.

In the same survey, we tried to evaluate FPGA engineers' level of satisfaction with their debug and verification flow. We asked the respondents to select, among several propositions, the one that most closely matched their thinking. See the picture below (click on picture to zoom).

Recognition of the need to improve the FPGA debug flow

More than 70% of the respondents recognize the need to improve the FPGA debug and verification flow.

The chart above represents the answers of the respondents active with an FPGA design, and actually using FPGA as a target technology. 72% of them recognize the need to improve the FPGA debug flow and nearly 40% of them are actively looking for a solution for it.

Are these survey results representative of the whole industry? Well, you tell me. Contact me to share your personal experience – and I’ll update this post.

– Oh, and by the way, as I write this, we are about to release our EXOSTIV solution. It improves visibility on FPGA running at speed by a factor of up to 200,000! See below a preview of what we're about to release. More information will be available soon.

EXOSTIV Dashboard software screenshot preview

Thank you for reading.
– Frederic

Does FPGA use define verification and debug?


You may be aware that we have run a first survey on FPGA design, debug and verification during the last month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGA as a target technology? In other words, are FPGAs part of your final product's BOM (bill of materials) list? – or:
– Do you use FPGA as a prototyping technology for an ASIC or a SoC?

49 respondents were found in the first group (which we’ll also call the ‘pure FPGA’ users) and 34 in the second group. The survey was posted during about one month online and advertised on social networks. The questions were in English.

Behind this segmentation question is, of course, the idea that the design flow used to design FPGA is usually significantly different from (and shorter than) that used to design an ASIC.
(This is claimed by FPGA vendors as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here.)

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They were invited to select all choices that applied to them (click on the picture to enlarge).

Survey results

A podium shared by simulation, in-lab testing and RTL code reviews

Without significant difference between the 2 groups, simulation is the clear winner methodology, with a little more than 90% use. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with near to 70% usage among the respondents. In-lab testing actually includes 2 categories of tools in the survey:

– traditional instruments -such as scopes and logic analyzers- and
– embedded logic analyzers – such as Xilinx Chipscope, Altera Signal Tap and the like.

Manual RTL code review closes the podium. We expect that every designer actually reviews his/her code. Why is it not 100%? Probably because those verifying and debugging FPGA are not always the RTL coders. Another possibility is that 'reviewing the code' might not be seen by many as a 'verification technique', but as the result of the process of verification and debug.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage has got the same popularity in our 2 groups (we do not think that the difference between the groups is significant here).

It has been widely said that 'pure FPGA designers' are careless and do not use the same advanced and rigorous techniques as ASIC designers. Well, apparently, they do not just care about simulating some cases; they have the same level of concern for properly covering the code as ASIC designers.

Assertions: YES
UVM : not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to have a quite positive upward trend for ASIC and SoC design. Tools and environments that can also be applied to FPGA design are appearing on the market. As we see on the chart, UVM has not really gained traction among our 'pure FPGA' respondents, although nearly one engineer in five among our 'ASIC engineers' has adopted this technique.

Takeaways

1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGA at least – this technique must and can be improved.

2) The FPGA engineer seems to use a mix of methodologies for debug and verification, but not necessarily a lot of them.
Among our respondents, we found that the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the 'pure FPGA players' against 2.68 for the ASIC designers).
And – please note – simulation is there of course, often complemented with code coverage.
'The FPGA engineer who does not simulate at all' appears to be just a myth today…

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– Equivalence checking -and:
– UVM
… which seem to have a very limited adoption among the ‘pure FPGA players’.

In a later post, we’ll further analyze the results of our survey. We have shown which techniques are currently used for debug and verification.

Of course, the million dollar question is whether FPGA engineers are satisfied with this situation or if they are in need of improvements… Stay tuned.

Thank you for reading.
– Frederic

FPGA debug and verification is different


Back in May, we had the pleasure of speaking and exhibiting at a gathering of the UK's NMI (National Microelectronics Institute) titled 'FPGA: What next after flicking the switch?'.

The topic of the day was all about what happens after we download a design into a live FPGA – when we release Reset and hold our breath…

Those interested can download our slides from here.

One of the (provocative?) subjects was:

Why is FPGA verification different from ASIC/SoC verification?

Interestingly, this month EE Journal's Kevin Morris provided an answer in his review of the 52nd DAC, 'Why verify?'. Kevin Morris says:

There is a common theme in the verification world that FPGA designers play too fast and loose, and that they don’t have proper respect for the verification process. IC designers have enormous verification budgets, capable teams, and sophisticated tools. The simple truth is that fear, not design complexity, drives the advancement of advanced verification methodologies.

I could not agree more.

Without the fear of losing a gigantic amount of money if an ASIC is not properly verified and debugged, the EDA industry wouldn't be able to produce (and sell!) all the advanced technologies used by ASIC and SoC designers today.
As I said in a previous post: 'It is a question of economics'.

Does this mean that there is no innovation in the FPGA design flow?

Of course not.

At Yugo Systems, we are convinced that FPGA design is different from ASIC and SoC design. For that reason, the FPGA design flow must be improved with tools, software and methodologies dedicated to FPGA. Attempting to use ASIC and SoC tools as 'out of the box' solutions for FPGA is a clear path to misfits and frustrations.

To get to know your specific aspirations in FPGA design, verification and debug better, we invite you this month to take a short survey.

We’d be very happy if you could spend 5 minutes on it. If you participate and enter your contact information, you’ll have a chance to win a $100 Amazon gift card.

Click here to take the survey.

Thank you.

– Frederic

What are you ready to mobilize for FPGA debug?

What are you ready to waste – or mobilize?

I believe that there are 3 common misconceptions about debugging FPGA on real hardware:

Misconception #1:
Debugging happens because the engineers are incompetent.

Misconception #2:
FPGA debugging on hardware ‘wastes’ resources.

Misconception #3:
A single methodology should solve ALL the problems.

Debugging is part of the design process.

Forget about misconception #1. If the word 'debugging' hurts your eyes or your ears, call it 'functional verification', 'functional coverage', 'corner case testing' or perhaps 'specification check'. Engineers can improve their techniques, methodologies and competences. Engineers can seek ways to automate the verification process. It remains that verifying a design is at the heart of any engineering activity that presents some level of complexity – and electronic system design has become an incredibly complex task.

Even the best engineer does some verification.

Debugging does not happen out of thin air.

You need to reserve resources for debug.

It can be ‘hardware resources’– e.g.:

– FPGA resources, like I/Os, logic and memory;
– PCB resources, like a connector, some area on the PCB used to collect the data and maintain its integrity.

It can be ‘engineering resources’ – typically the time spent by the engineering team to find a bug with the chosen debugging strategy.

In all cases, the project budget is impacted with additional costs:
– the cost for an extra area on the PCB;
– the extra cost of an FPGA with a larger package or a higher speed grade;
– the cost of a new logic analyzer or a scope;
– the engineering hours cost for implementing a specific debugging strategy.

Engineers are assigned the task of optimizing the cost of a design. Once the system goes to production, any extra money put into system 'real estate' that does not contribute to the system's functionality is considered wasted margin. Management constantly reminds us that any such dollar is worth saving because it quickly multiplies once the system is produced in (large) series.

Hence ‘hardware engineers’ are naturally inclined to:
– save on hardware, and:
– do more ‘engineering’ (after all, we are engineers !)

Actually, every stage of the design flow mobilizes resources.
What can be mobilized to do the job is essentially a trade-off – or, if you prefer, a question of economics.

You must mobilize some of your ‘real estate’

In my previous post, I came to the conclusion that improving the FPGA debugging process would require providing much more visibility on the FPGA than the existing solutions do. Mobilizing more FPGA I/Os or FPGA memory resources would likely not provide this increased visibility. Hence, a potentially better solution would need to hit the 'target position' shown on the diagram below, now occupied by our EXOSTIV™ solution.
Exploring the FPGA debug solutions space

As one can expect, the above target requires mobilizing other resources than I/O or internal memory.
Yugo Systems' EXOSTIV™ mobilizes the following resources:
– an external memory of up to 8 GB, providing a total trace capacity 100,000 times larger than existing embedded instrumentation solutions;
– FPGA transceivers, used to send the data to the external memory;
– FPGA logic and FIFO resources, used to reach the FPGA internal nodes and temporarily store the data trace. The footprint of this IP on the FPGA memory resources does not grow with the total captured trace.

Part of the talent of the engineer consists in selecting the right techniques to reach a goal more efficiently.

Engineers commonly complain about verification on hardware because it does not offer the same level of visibility as simulation. I believe that the secret to efficient debugging consists in finding the right mix of techniques.

What is the right mix for you? Well, the answer depends on the economics of your project.
But please bear in mind that going early to hardware is economically viable when you work on an FPGA-based system.

Performing some of the verification steps with the target hardware has arguably at least 2 benefits:

1) Speed of execution is optimal – as the target system runs… at system speed – and:

2) The target hardware can be placed in its real environment and won’t suffer from incomplete modeling.

Perhaps the point is all about improving debugging on hardware, not simply discarding it because it does not do 'everything'…

Debugging involves a mix of techniques

At Yugo Systems, we believe that the engineering community is absolutely right when it chooses to use embedded instrumentation with live designs for debugging purposes. This is just a ‘part of the mix’. We also believe that debugging on FPGA hardware can be improved.

This is what we do with EXOSTIV™.

Defining targets (for FPGA debug)


I recently attended a technical seminar organized in The Netherlands by one of the major FPGA vendors (hint: it is one of the 2 top vendors among the '4 + now single outsider' players in the very stable FPGA market). During lunch, I had the opportunity to discuss FPGA verification with some engineers. Beyond high-level definition techniques, simulation and other 'pure software' techniques, they all used 'real hardware' for some of their verification iterations.

There seems to be a strongly entrenched habit among FPGA engineers: we cannot resist the appeal of being able to reconfigure the FPGA indefinitely and try a system 'for real'.

Yet, for those who would still believe it, the days of simply throwing a design onto a board without prior verification are over for good (check this article for instance). Separate IPs and separate groups of 'functionally coherent' IPs really do go through all kinds of pre-hardware validation.

As one of my lunch companions said: 'It is when we put things together for real that virtually *anything* can go wrong.'

So FPGA engineers like to go to real hardware for debug. When asked what they use at this stage of the design flow, their answers are (unsurprisingly?) similar. Two techniques still prevail: either they connect a logic analyzer to some of the FPGA's I/Os, or they use an 'embedded logic analyzer'.

Traditional vs. Embedded logic analyzer

These two approaches are summarized below (click on picture to zoom):

Traditional vs. Embedded Logic Analyzer for FPGA debugging

Schematically, using a traditional logic analyzer to debug an FPGA consists in creating a special FPGA configuration in which internal nodes are routed to some of the chip's functional I/Os. These I/Os are physically routed on the board to a connector to which the logic analyzer can be hooked. This approach uses the ability to reconfigure the FPGA to route diverse groups of internal nodes to the same set of physical I/Os. Multiplexing groups of nodes helps reduce the number of FPGA synthesis and implementation iterations. The evolution of the observed nodes is stored in the logic analyzer's memory for further analysis.

Conversely, using an embedded logic analyzer consists in reserving FPGA memory resources for storing the evolution of the FPGA's nodes. This memory is subsequently read through the device's JTAG port. The collected traces can be visualized on a PC – usually in waveform viewer software.

Which technique is preferred?

This is the usual trade-off between the resources you are able to mobilize for debug and the achievable debug goals.

The embedded logic analyzer is generally preferred because it does not reserve FPGA I/Os for debug. Needing a sufficient number of such I/Os for debug can lead to choosing a more costly device 'only for debug' – which usually proves difficult to justify to your manager.

Conversely, the traditional logic analyzer is preferred over the embedded logic analyzer for its larger trace storage (megasamples vs. kilosamples).

Over time, additional considerations have appeared:

– With FPGA commonly running at more than 350 MHz, signal integrity and PCB cost problems arise when routing a large number of I/Os to a debug connector (trace length matching, power and space issues). This goes in favor of the embedded logic analyzer (not to mention the much higher price of a traditional logic analyzer).

– In my previous post, I mentioned the gap that may exist between the order of magnitude of the memory available in an FPGA and the quantity of data necessary for efficient debugging.

Exploring the solutions space

Supposing that none of the above techniques is satisfactory, how can they be improved?

The picture below shows the relative positions of the traditional and embedded logic analyzers on a 2-axis chart. It shows the FPGA I/O and memory resources that each technique has to mobilize.

The area occupied by each solution shows the relative order of magnitude of the trace memory they provide – which is a good measure of the provided ‘observability’.

Exploring the FPGA debug solutions space
A better solution should:
1) hit the TARGET position on the chart – and –
2) provide even more observability

In my next post, I'll come back to the impact of reaching the target defined above. Stay tuned.

Thank you for reading.
– Frederic

Why observability matters

Observability for typical case of FPGA-based video processing systems


At Exostiv Labs, we think that 'observability' – or 'visibility', that is, the ability to observe (and understand) a system from its I/Os – is relevant, and even key, to FPGA debug.

I’d like to show it with a real example taken from the field.

A typical Video processing debugging problem

We had an issue with an FPGA-based video processing system. At some point, the 'data header' of a frame of a video stream running at 24 frames per second (24 fps) went wrong. The problem was rather unpredictable and occurred randomly. And – by the way – the system was not our design: by the time we put our hands on it, improving the design methodology to avoid bugs was no longer an option.

Approach nr 1 : a standard embedded logic analyzer storing the captured debug information in some FPGA memory

Our engineer set up a traditional embedded logic analyzer tool to capture 1,024 bits per frame, taken from the frame header (the content of the pictures was not very relevant to him). 1 kbit per frame was deemed rather good observability – the engineer hoped to be able to capture and understand the history of events that were creating the bug. All the captured information was stored in FPGA memory; he was lucky enough to be able to reserve up to 32 kbit of FPGA memory for debugging purposes.

Traditional embedded logic analyzer approach

Using this approach, our engineer was able to observe the information related to:

32 kbit / 1,024 bit = 32 frames

At 24 fps, this is the equivalent of 1.33 seconds of the movie.
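The arithmetic is straightforward and can be checked in a couple of lines:

```python
# Approach nr 1: trace stored in FPGA memory, using the figures above.
fpga_trace_bits = 32 * 1024   # 32 kbit of FPGA memory reserved for debug
bits_per_frame  = 1024        # 1,024 bits extracted from each frame header
fps             = 24          # 24 frames per second

frames_observed  = fpga_trace_bits // bits_per_frame
seconds_of_movie = frames_observed / fps
print(frames_observed)               # -> 32 (frames)
print(round(seconds_of_movie, 2))    # -> 1.33 (seconds)
```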

Approach nr 2 : Sending the captured debug information to an external memory

Now, let’s say that we have an external memory as big as 8 GB and sufficient bandwidth to send the debug information ‘on-the-fly’. We’ll check the consequences on the FPGA resources later.

EXOSTIV approach
8 Gbyte equals 64 Gigabit of memory.

64 Gbit / 1,024 bit ≈ 64,000,000 frames

64 million frames can be observed with the same 'accuracy' (remember that we extract only 1,024 bits from each header).
At 24 fps, this is the equivalent of 64 M / 24 ≈ 2.66 million seconds, or 2.66 M / 3,600 ≈ 740 hours of the movie!

Usually, a movie is about 2 hours long. So we'll be able to 'see the full movie' with the same 'debugging accuracy'.

Actually, with 8 GB of external memory and a 2-hour movie, we have a 740 / 2 = 370 'scale-up factor'. This basically means that we'll be able to extract not 1,024 bits from each frame, but 1,024 x 370 = 378,880 bits per frame.

Gigabyte-range storage eliminates the 'guess work'

We have just seen that scaling up the total capture size of the debug information has a positive impact on both how much of each picture can be seen AND how much of the movie can be observed.

When a bug may occur at any moment during a full 2 hours video stream, wouldn’t you feel more secure if you were certain to have captured the bug AND the history of events that have created it?

What about the required bandwidth?

Up to this point, we have considered a very theoretical example, assuming that we would be able to stream the debug information out to the external 8 GB memory. Actually, we do have this capability – by far.

At 24 fps, extracting 378,880 bits per frame requires a total bandwidth of 24 x 378,880 ≈ 9.1 Mbps on average.
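The full chain of numbers for this second approach, reproducing the rounding used in the text (64 Gbit / 1,024 bit is taken as roughly 64 million frames):

```python
# Approach nr 2: trace streamed to an 8 GB external memory.
fps             = 24
bits_per_frame  = 1024
frames_observed = 64_000_000           # ~64 Gbit / 1,024 bit, as rounded above

hours_of_movie = int(frames_observed / fps / 3600)   # -> 740 hours
scale_up       = hours_of_movie // 2                 # vs. a 2-hour movie -> 370
scaled_bits    = bits_per_frame * scale_up           # -> 378,880 bits per frame

avg_bw_mbps = fps * scaled_bits / 1e6                # required average bandwidth
print(hours_of_movie, scale_up, scaled_bits)         # -> 740 370 378880
print(round(avg_bw_mbps, 1))                         # -> 9.1 (Mbps)
```

At roughly 9.1 Mbps on average, the required bandwidth remains far below the transceiver bandwidth quoted in the next line.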

EXOSTIV™ provides up to 8 GB of memory and up to 4 x 6.6 Gbps of bandwidth to extract debug information from the FPGA.

So…

Stop the guess work during FPGA debug,

Scale up your tools and-

Increase your observation capability.

Thank you for reading
– Frederic

Long live Exostiv Labs!

Streaming bits


So here we are. Exostiv Labs (formerly Yugo Systems) has started. Those of you who are familiar with Byte Paradigm (www.byteparadigm.com) already know that we have 10 years of business behind us – quite some time dedicated to FPGA & system design AND test and measurement.

So, why Exostiv Labs – what is it and why now?

Exostiv Labs focuses on providing innovative debug & verification tools to the FPGA design community. We believe that FPGA (or programmable logic, 'HIPPS', or whatever they are called) has its own constraints and economics. FPGAs have become one of the most complex types of chip available for digital systems. Yet the tools provided to the FPGA engineering community have failed to scale with FPGA technology. The first and main problem in FPGA debug is observability – that is, the ability to understand the system's behaviour from what you can observe of it.

Observability is the first target

Observability is at the roots of Exostiv Labs.

We chose our name to reflect what we think is the biggest issue of FPGA debug: visibility, and the need to see more – ultimately everything.
The other questions are more extensively developed in our latest white paper, 'FPGA verification tools need an upgrade'. You can download it from the link below.

Thank you for reading.

– Frederic
