
Posts Tagged Debug

Record FPGA data for 1 hour – really

Record data from an FPGA over long total times – deep capture


As ASIC, SoC and FPGA engineers, we are used to watching the operation of our designs through single, limited snapshots. RTL simulations, for instance, provide bit-level details over execution times that span a few milliseconds at best. Consequently, events that unfold over long times may be impossible to see as a single coherent capture.

In this blog post, I want to show what can be done in a real case with EXOSTIV. The design running on the FPGA board is a full system-on-chip that features a Gbit Ethernet connection. The board is connected to our company network – and I have set up EXOSTIV to trigger on and record the Ethernet traffic for one full hour. Yes, we used EXOSTIV as an Ethernet sniffer that works from inside the FPGA – providing a ‘decoded view’ of the traffic after it has entered the FPGA Gbit Ethernet IP. Check the video below. We have accelerated it, but the stopwatch shows the real elapsed time.

As shown in the video, we have set up EXOSTIV to record a chunk of data (256 samples) every time ‘something’ is seen on the Ethernet interface: a trigger fires when any of a wide variety of events marking the detection of traffic (note the ‘OR’ trigger equation) happens on the Ethernet connection. Each sample is 1,248 bits wide. This gives quite a good insight into what happens ‘after’ the Gbit Ethernet IP in the FPGA. Each burst records 256 x 1,248 bits = 319,488 bits of data – and there are 30,000 such bursts collected over a total time of about 1 hour.
In total, EXOSTIV has collected a little less than 1.2 Gigabyte during this run – which is just about 15% of the total memory provided by an EXOSTIV probe.

Practically, it means that we could collect roughly 6.6 times as much data (> 6 hours) with this single capture unit.
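If you want to double-check these figures, the arithmetic is simple enough to fit in a few lines of Python (all numbers are taken from the text above):

```python
# Back-of-the-envelope check of the capture session described above.
SAMPLE_WIDTH_BITS = 1_248      # width of each captured sample
BURST_SAMPLES = 256            # samples recorded per trigger event
NUM_BURSTS = 30_000            # bursts collected over about 1 hour
PROBE_MEMORY_BYTES = 8e9       # 8 GB of EXOSTIV probe memory

bits_per_burst = SAMPLE_WIDTH_BITS * BURST_SAMPLES   # 319,488 bits
total_bytes = bits_per_burst * NUM_BURSTS / 8        # ~1.2 GB
fraction_used = total_bytes / PROBE_MEMORY_BYTES     # ~15%
headroom = 1 / fraction_used                         # ~6.7x

print(f"bits per burst: {bits_per_burst:,}")
print(f"total capture : {total_bytes / 1e9:.2f} GB "
      f"({fraction_used:.0%} of probe memory)")
print(f"headroom      : {headroom:.1f}x longer at the same trigger rate")
```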

You can see that the bursts are recorded rather randomly, as the occurrence of a trigger depends on the actual load of our company network (see the pictures below).

Exostiv Dashboard after capturing 1 hour of the Ethernet traffic

Exostiv Dashboard after capturing 1 hour of the Ethernet traffic - full capture
Exostiv Dashboard after capturing 1 hour of the Ethernet traffic - detail on the deep capture

As always, thank you for reading (and watching).
– Frederic

Deep Trace & Bandwidth

Exostiv provides deep trace AND bandwidth for maximal FPGA visibility


Exostiv provides the following maximum capabilities for capturing data from inside an FPGA running at speed (a quick calculation after the list puts these figures in perspective):

Capabilities.

  • 50 Gigabit per second bandwidth for collecting FPGA traces.
  • 8 Gigabyte of memory for trace storage.
  • 32,768 nodes probed simultaneously.
  • Reach of 524,288 nodes.
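Assuming only the figures above, a quick back-of-the-envelope sketch shows why the interrupted captures discussed below matter: streaming continuously at the full 50 Gbps fills even 8 GB in about a second, and the width x sample-rate product is bounded.

```python
BANDWIDTH_BPS = 50e9     # 50 Gigabit per second capture bandwidth
MEMORY_BYTES = 8e9       # 8 Gigabyte of trace storage

# Streaming continuously at full bandwidth fills the trace memory in:
seconds = MEMORY_BYTES * 8 / BANDWIDTH_BPS
print(f"continuous capture until memory is full: {seconds:.2f} s")   # 1.28 s

# The width x sample-rate product must fit within the bandwidth, e.g.:
for nodes, mhz in ((250, 200), (1_000, 50)):
    print(f"{nodes} nodes at {mhz} MHz -> {nodes * mhz * 1e6 / 1e9:.0f} Gbps")
```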
Actually, we have built EXOSTIV to provide VISIBILITY to FPGA designers debugging on real hardware. If you do not know why it is important, watch the following 7-minute video. It sketches out the fundamentals of EXOSTIV.

We built EXOSTIV to provide visibility to the FPGA designer.

 

 

Interrupted captures are very useful too!

I was recently demonstrating Exostiv at a customer’s site and I received the following comment:
“Even with 50 Gbps bandwidth, this tool is hardly usable because you won’t see many nodes at a usual FPGA internal sampling frequency…”
This person was implying that – for example – probing more than 250 FPGA nodes at 200 MHz already exceeds this total bandwidth. So, Exostiv cannot be used to its fullest, right?

Wrong.

This reasoning is right only if you think that continuous captures alone are valuable for getting insight from an FPGA.
The following short video explains why it is important. It features a case where the capture – from start to end – spans 11 seconds! Depending on the trigger and data qualification (data filtering) options – and by using the full 8 GB trace data buffer – such an approach lets you observe specific moments of the FPGA in operation over hours! (A rough calculation after the video gives an idea.)
 

 

With the proper capture settings, EXOSTIV lets you observe FPGA over hours.
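To see how far such interrupted captures can stretch the 8 GB buffer, here is a hedged example. The sample width, burst length and trigger rate are illustrative assumptions (the burst length matches the 8k-sample local buffer listed below), not measurements:

```python
# How long can interrupted captures span before the probe memory fills?
# All numbers below are illustrative assumptions.
SAMPLE_WIDTH_BITS = 250      # e.g. 250 probed nodes
BURST_SAMPLES = 8_192        # one local-buffer-sized burst per trigger
TRIGGERS_PER_SECOND = 1      # assumed average trigger rate
MEMORY_BYTES = 8e9           # EXOSTIV probe trace memory

burst_bytes = SAMPLE_WIDTH_BITS * BURST_SAMPLES / 8
num_bursts = MEMORY_BYTES / burst_bytes
span_hours = num_bursts / TRIGGERS_PER_SECOND / 3600

print(f"{num_bursts:,.0f} bursts of {burst_bytes / 1e3:.0f} kB each "
      f"-> {span_hours:.1f} hours of observation")    # ~8.7 hours
```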

So, the features listed below are equally important for efficient capture work.

Features.

  • 16 capture units that can be enabled/disabled dynamically.
  • 16 multiplexed data groups per capture unit.
  • 8k-sample local buffer in each capture unit.
  • 1 trigger unit per capture unit, defining the start of capture.
  • Bit or bus conditions: =, /=, <, >, in range, out of range (see the sketch after this list).
  • Repeating/interrupted captures based on a trigger condition.
  • Data qualification condition on input data: capture only when the condition is true.
  • Interactive trigger or data qualification definition: no recompile needed.
  • Sequential / state machine trigger on the 2017 roadmap.
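To make the bit/bus trigger conditions above concrete, here is a toy model in Python. To be clear, this is not the EXOSTIV API or its implementation – only an illustration of how such conditions select samples:

```python
# Toy model of the bus trigger conditions listed above (=, /=, <, >,
# in range, out of range). NOT the EXOSTIV API - just an illustration.
def bus_trigger(value, condition, a, b=None):
    """Evaluate one bus condition on one sampled value."""
    return {
        "==":           value == a,
        "/=":           value != a,
        "<":            value < a,
        ">":            value > a,
        "in_range":     b is not None and a <= value <= b,
        "out_of_range": b is not None and not (a <= value <= b),
    }[condition]

# Example: trigger when a 16-bit address bus leaves the window 0x1000-0x1FFF.
samples = [0x1004, 0x1FF0, 0x2300, 0x0800]
hits = [s for s in samples if bus_trigger(s, "out_of_range", 0x1000, 0x1FFF)]
print([hex(h) for h in hits])   # ['0x2300', '0x800']
```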

As always, thank you for reading (and for watching)
– Frederic

Exostiv for Intel (Altera) FPGA – announcement


Announcing… EXOSTIV for Intel FPGA

Using Intel FPGA?

We have exciting news for you: EXOSTIV will soon support Intel FPGA!
Please check the pictures above and below – this is EXOSTIV working with the ‘Attila’ dev kit of our partner, Reflex-CES, equipped with an Arria 10 GX 10AX115N4F40I3SG device.

We are now able to use the EXOSTIV Dashboard Analyzer connected to an IP loaded into the design through the board’s QSFP port (with a QSFP to 4xSFP cable with splitter).
The board’s FMC connector, fitted with our FMC to HDMI adapter, works as an access port too! (Click here to check the connectivity options for EXOSTIV.)

This beta version was shown during the training we co-organized with Telexsus Ltd. in Maidenhead (UK) on October 13th and at the Europe edition of the Intel SoC FPGA Developer’s Forum (ISDF) held in Frankfurt on October 19th, where Exostiv Labs participated as a Regional Sponsor.

The Intel SoC FPGA Developer’s Forum was held in Frankfurt on Oct. 19th.

We are happy to announce the availability of EXOSTIV for Intel FPGA (formerly Altera) by the end of 2016. That’s an exciting new step for us and for EXOSTIV!

Exostiv at ISDF

We would like to thank all our customers using Intel FPGA for their patience. We’ll be in touch!
– Frederic

Debug with reduced footprint



Footprint, ‘real estate’, resources… No matter the design complexity, allocating resources to debugging is something you’ll worry about.
If you are reading these lines, it is likely that you have some interest in running some of your system debugging on real hardware (check this post if you do not know why it is important).

EXOSTIV enables you to get extended visibility out of a running FPGA.
It impacts the target system resources in 2 ways:
– it requires logic, routing & storage resources inside the target FPGA to place an IP used to reach internal FPGA nodes.
I’ll cover this aspect in a future post.

– it requires a physical connector to access the FPGA.
(And NO, JTAG is not good enough, because it does not provide sufficient bandwidth – even with compression. The quick comparison below gives an idea of the gap.)
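As a rough, hedged comparison – the JTAG clock rate and overhead factor below are assumptions for illustration, not measured figures:

```python
# Rough bandwidth comparison: JTAG vs. transceiver-based capture.
# TCK frequency and efficiency are illustrative assumptions.
JTAG_TCK_HZ = 100e6      # optimistic clock for a JTAG TAP
JTAG_EFFICIENCY = 0.8    # assumed fraction left after protocol overhead
jtag_bps = JTAG_TCK_HZ * JTAG_EFFICIENCY           # ~0.08 Gbps

LANE_BPS = 6.6e9         # one gigabit transceiver lane (see below)
LANES = 4                # e.g. a micro-HDMI or 4xSFP connection

print(f"JTAG        : {jtag_bps / 1e9:.2f} Gbps")
print(f"transceivers: {LANE_BPS * LANES / 1e9:.1f} Gbps "
      f"(~{LANE_BPS * LANES / jtag_bps:,.0f}x more)")
```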

Read on…

Choosing the right connector

All EXOSTIV Probes provide 2 connection options:
Option #1: uses a single HDMI connector type (note: this is not a full HDMI connection!)
Option #2: uses up to 4 SFP/SFP+ connectors

From there, a wider range of options is within reach if you consider using additional cables and board adapters available from Exostiv Labs or from third-party suppliers.

Which option will work for you? Follow the guidelines below:

1. Is there an existing SFP/SFP+/QSFP/QSFP+ connector directly connected to the FPGA transceivers?

  • Check if you can reserve this FPGA resource (and the board connector) for debug – at least temporarily. You’ll need 1 SFP/SFP+ connection per used gigabit transceiver
  • QSFP/QSFP+ connectors can be used with a 4xSFP to QSFP cable with splitter.

Note: most of the Dini Group’s boards feature SFP/SFP+, quad SFP/SFP+ or QSFP/QSFP+ connectors by default. And they are directly connected to FPGA transceivers.

2. Is there another type of connector directly connected to the FPGA transceivers?

Please contact us for details on our adapters, external references and custom adapters support.

3. For all other cases: you’ll need to modify your board and add a connector.

    • Is space on the board critical? Go for HDMI or even micro-HDMI!

See the picture below – this is an Artix-7 board equipped with a tiny micro-HDMI connector, providing up to 4 x 6.6 Gbps of bandwidth for debugging the FPGA.

  • You do not have space constraints (lucky you)? Pick the one you like: SFP/QSFP/HDMI/micro-HDMI/other (+ adapter).

*** Check our special 12 Gbps probe test report – click here! ***

EXOSTIV provides standard and custom connection options that enable fast deployment with standard FPGA development kits and/or limit the footprint requirements on the target FPGA board.

Thank you for reading.
– Frederic

Debugging FPGAs at full speed



In my previous post, I explained why increasing the available ‘window of visibility’ is a gigantic advantage when tracking system-level issues on modern complex FPGAs. EXOSTIV’s structure does not require the FPGA internal memory to grow with the depth of the capture.

Yet, mobilizing hardware resources (internal/external hardware and software, memory, logic, communication ports…) is always the result of a compromise. Debugging an FPGA at full speed requires adopting the right strategies to remove bottlenecks.


Memory severely limits JTAG instrumentation

In traditional JTAG-based embedded instrumentation, the main bottleneck is FPGA memory. There is some ability to scale up the debug memory, but this depends on what’s left available in the FPGA after the design is implemented. Collecting a larger number of signals and deeper traces quickly exhausts the memory. Beyond a few hundred kilobytes at best, you’re done. This severely limits the ability to debug further.

A common strategy for scaling up debug without increasing the memory consists in multiplexing data groups (see figure below), based on the principle that debugging an FPGA sometimes requires ‘overinstrumenting’ the design to save on implementation iterations. ‘Multiplexing’ means segmenting the design into groups of signals. This technique helps reserve the scarce memory resources for a reduced set of signals, thereby enabling longer (‘deeper’) captures.
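A small worked example shows how fast the trace memory runs out, and what multiplexing buys you (the memory size, signal count and clock are illustrative, not tied to a specific device):

```python
# Why a few hundred kilobytes of block RAM runs out so fast.
TRACE_MEMORY_BYTES = 256 * 1024   # block RAM reserved for debug
NUM_SIGNALS = 512                 # probed nodes
SAMPLE_CLOCK_HZ = 200e6           # capture clock

depth_samples = TRACE_MEMORY_BYTES * 8 // NUM_SIGNALS    # 4,096 samples
window_us = depth_samples / SAMPLE_CLOCK_HZ * 1e6        # ~20 us

# Multiplexing: watching 1 of 8 signal groups at a time makes the
# window 8x deeper - at the price of partial visibility.
print(f"{depth_samples:,} samples -> {window_us:.1f} us visibility window")
print(f"with 8x multiplexing   -> {window_us * 8:.0f} us")
```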

A shift towards bandwidth

Unlike JTAG-based systems – where the captured data is really ‘trapped’ inside the FPGA – EXOSTIV sends the captured data to a large external memory (8 GB currently). This memory is hardly the bottleneck and it can be scaled up rather easily.

For EXOSTIV, the ‘new’ bottleneck is the available bandwidth (please note that even with this bottleneck, EXOSTIV goes far beyond the capabilities of JTAG-based embedded instrumentation).
Advanced versions of EXOSTIV provide up to 50 Gbps of total bandwidth with 8 GB of memory. However, these resources are in no way ‘unlimited’ relative to the ‘exploding complexity’ of modern FPGAs: 50 Gbps allows the continuous streaming of 1,000 bits sampled at… just 50 MHz.
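The trade-off is a simple product: the probed width times the sample rate must fit within the total bandwidth. A few points on that curve:

```python
# Continuous streaming budget: width_bits x sample_rate <= 50 Gbps.
TOTAL_BANDWIDTH_BPS = 50e9

for width_bits in (1_000, 500, 250):
    max_mhz = TOTAL_BANDWIDTH_BPS / width_bits / 1e6
    print(f"{width_bits:>5} bits wide -> continuous capture up to {max_mhz:.0f} MHz")
```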

Pushing the boundaries

EXOSTIV includes the following features to preserve bandwidth:

  • Data group multiplexing;
  • Data qualification;
  • Burst mode of operation.

All of the above strategies act differently with one purpose: reduce the bandwidth requirements.

Data Group Multiplexing

We mentioned this strategy previously as a way to ‘deepen’ captures for JTAG-based systems, where the storage memory is very small.
For EXOSTIV, multiplexing the observed nodes as data groups reduces the maximal bandwidth required to flow the data out of the FPGA. While it constrains the designer to watching the data groups separately, it helps keep the bandwidth requirement within what is available on the gigabit links. If this bandwidth is kept below or equal to the maximum available, ‘continuous’ capture is possible, and data can be streamed out of the FPGA without requiring any local storage. The process stops only when the EXOSTIV Probe’s memory is full.
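In code, the feasibility check behind this reasoning is a one-liner. A sketch (the node counts and group split are illustrative):

```python
# Feasibility check for continuous capture with data group multiplexing.
def continuous_capture_ok(total_width_bits, num_groups, sample_hz,
                          link_bps=50e9):
    """True if one multiplexed group fits within the available bandwidth."""
    group_width = total_width_bits / num_groups
    return group_width * sample_hz <= link_bps

# 2,000 probed nodes at 100 MHz: impossible as one block (200 Gbps needed)...
print(continuous_capture_ok(2_000, 1, 100e6))    # False
# ...but fine when split into 4 groups watched one at a time (50 Gbps each).
print(continuous_capture_ok(2_000, 4, 100e6))    # True
```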

Data Qualification

Data qualification consists in defining ‘filters’ or ‘qualification equations’ built on the observed data. A logic condition is defined, which enables or disables the capture of data: this equation defines when data is (un)interesting. For instance, you may want to observe a data bus only when there is traffic on it and filter out the IDLE times.
Coupled with some buffering in the EXOSTIV IP Capture Units (click here for more details on the EXOSTIV IP’s structure), this strategy effectively lowers the average bandwidth requirements.
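A minimal model of data qualification (the IDLE pattern and traffic mix below are invented for illustration):

```python
import random

# Data qualification as a filter: only samples satisfying the
# qualification equation consume bandwidth.
random.seed(0)

# Example: a bus that is IDLE (0x0000) roughly 90% of the time.
traffic = [random.choice([0x0000] * 9 + [0xBEEF]) for _ in range(10_000)]

qualified = [s for s in traffic if s != 0x0000]   # capture only when 'interesting'
print(f"average bandwidth reduced to {len(qualified) / len(traffic):.0%} of raw")
```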

Burst mode of operation.

Capturing repeated chunks of data defined by a trigger condition is another solution. Like data qualification, bursting data lowers the average bandwidth requirements on the gigabit transceiver links. Using bursts must not be seen as a mere ‘fallback strategy’ for cases when streaming data continuously is not possible: as opposed to JTAG-based solutions, where only one single burst can be observed, EXOSTIV allows bursting data out until the Probe’s memory is full. Events occurring between bursts cannot be watched, but a large set of events that happen long after power-up are now accessible (remember: with JTAG-based systems, all you can see is… just one burst!).
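In burst mode, the average bandwidth is set by the trigger rate rather than by the raw width-times-clock product. A quick sketch (burst size and trigger rate are illustrative assumptions):

```python
# Burst mode: average bandwidth = burst size x trigger rate.
SAMPLE_WIDTH_BITS = 1_000    # probed width
BURST_SAMPLES = 256          # samples captured per trigger
TRIGGERS_PER_SECOND = 10     # assumed average trigger rate
MEMORY_BYTES = 8e9           # probe trace memory

avg_bps = SAMPLE_WIDTH_BITS * BURST_SAMPLES * TRIGGERS_PER_SECOND
hours = MEMORY_BYTES * 8 / avg_bps / 3600

print(f"average bandwidth: {avg_bps / 1e6:.2f} Mbps")     # 2.56 Mbps
print(f"probe memory lasts {hours:.1f} hours of bursts")  # ~6.9 hours
```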

Takeaway: make the most of your resources

EXOSTIV provides unprecedented memory and bandwidth resources that enable debugging modern FPGAs at full speed.
These already huge resources can be scaled with modern FPGA complexities. In addition, using the right EXOSTIV features and strategies helps remove a ‘new’ debugging bottleneck: bandwidth. Combining burst mode operation, data qualification and data grouping is a winning strategy to make the most of EXOSTIV’s debugging resources.

Thank you for reading.
– Frederic

The FPGA problem we are trying to solve



Designing FPGAs can be complex. Each step of the design flow brings its own challenges, problems and solutions. As engineers, this is what we do: we find solutions. We use our knowledge, we mobilize our skills and we find the right tools to constantly build better solutions to the problems that we encounter.

Exostiv Labs aims at providing better tools for FPGA debug. However, ‘FPGA debug’ may be understood in quite different ways. In this post, I’d like to go back to basics and pinpoint the specific problems we are trying to solve.

‘First time right’ at board bring-up is a problem.

I am an engineer. Like any engineer, I like when things are done right technically, with the right steps taken at the right time.
However, as an engineer, I have also learned to manage budget and market constraints. We all know that the distance between the ‘perfect design flow’ and the ‘actual design flow’ is made of budget and/or market constraints. We all know that sometimes, we choose to skip the verification of one ‘detail’ in order to meet the announced release date. Sometimes we are expected to accept a small percentage of uncertainty (that is called ‘experience’ or ‘know-how’) because it is statistically cost-effective.

Ideal vs. Actual design flow

“And what if a bug appears later? Well, we’ll fix it and send an upgrade.
Isn’t it the beauty of programmable devices?”

Sometimes, being ‘first on the market’ or ‘first in budget’ beats ‘first time right’.

A typical in-lab situation

We all have been there…

The system starts up and suddenly, ‘something goes wrong’. It happens randomly, sometimes 2 minutes, sometimes 2 hours after power up.
– We have no clue about what caused the bug.
– We have no idea about why it happens.
– We do not know the time from cause to effect.

This is called an ‘emergent’ system with a ‘random bug’.

It is emergent because it is the result of complex interactions between individual pieces. Such system-level bugs are the most complex – and time-consuming – to solve because they involve the interactions of a whole system.

EXOSTIV solves the problem of finding the roots of bugs appearing in complex systems placed in their real environment.

EXOSTIV solves FPGA board bring up problems

Solving such a problem requires *a LOT of* visibility into the FPGA under test:
– It is hard to put a trigger condition on an unknown, random bug. To spot it, you’d better have the largest capture window possible.
– You sometimes need to extend the capture far in time (stressing the impracticability of simulation as the sole methodology).
– What happened before the bug appeared is very important for understanding the ‘fatal’ sequence of events that created the faulty condition.

Without sufficient observability of the system under test, you’ll simply lose precious engineering hours hoping to -just- capture the bug. Then you’ll spend time again trying to capture the key events that happened before it – those that would ultimately help you understand why the bug occurs.

Instrumenting a real design is the key to overcoming modelling mistakes and seeing bugs occur in the real environment… but only if the instrumented design provides sufficient visibility!

What can you do with 8 GB captured at FPGA speed of operation?

Please think about it. I’ll explore how to make the most of the *huge* resources offered by EXOSTIV in my next post.

Thank you for reading
– Frederic

‘My FPGA debug and verification flow should be improved…’



In my last post, I revealed some of the results of our recent survey on FPGA. These results depicted a ‘flow-conscious’ FPGA engineer, using a reduced mix of methodologies in the flow and very prone to going to the lab for debugging.

In the same survey, we tried to evaluate the FPGA engineer’s level of satisfaction with his/her debug and verification flow. We asked the respondents to select, among several propositions, the one that most closely matched their thinking. See the picture below.

Recognition of the need to improve the FPGA debug flow

More than 70% of the respondents recognize the need to improve the FPGA debug and verification flow.

The chart above represents the answers of the respondents active with an FPGA design, and actually using FPGA as a target technology. 72% of them recognize the need to improve the FPGA debug flow and nearly 40% of them are actively looking for a solution for it.

Are these survey results representative of the whole industry? Well, you tell me. Contact me to share your personal experience – and I’ll update this post.

– Oh, and by the way, as I write this, we are about to release our EXOSTIV solution. It improves the visibility on an FPGA running at speed by a factor of up to 200,000! See below a preview of what we’re about to release. More information will be available soon.

EXOSTIV Dashboard software screenshot preview

Thank you for reading.
– Frederic

Does FPGA use define verification and debug?


You may be aware that we ran a first survey on FPGA design, debug and verification over the last month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGA as a target technology? In other words, are FPGAs part of your final product’s BOM (bill of materials)? – or:
– Do you use FPGA as a prototyping technology for an ASIC or a SoC?

49 respondents were found in the first group (which we’ll also call the ‘pure FPGA’ users) and 34 in the second group. The survey was posted online for about one month and advertised on social networks. The questions were in English.

Behind this segmentation question is, of course, the idea that the design flow used to design an FPGA is usually significantly different from (and shorter than) the one used to design an ASIC.
(This is claimed by FPGA vendors as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here.)

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They were invited to select all choices that applied to them.

Survey results

A podium shared by simulation, in-lab testing and RTL code reviews

With no significant difference between the 2 groups, simulation is the clear winner, with a little more than 90% use. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with close to 70% usage among the respondents. In-lab testing actually includes 2 categories of tools in the survey:

– traditional instruments – such as scopes and logic analyzers – and
– embedded logic analyzers – such as Xilinx Chipscope, Altera Signal Tap and the like.

Manual RTL code review completes the podium. We would expect every designer to review his/her code. Why is it not 100%? Probably because those verifying and debugging the FPGA are not the RTL coders. Another possibility is that ‘reviewing the code’ might not be seen by many as a ‘verification technique’, but as the result of the process of verification and debug.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage has the same popularity in our 2 groups (we do not think that the difference between the groups is significant here).

It has been widely said that ‘pure FPGA designers’ are careless and do not use the same advanced and rigorous techniques of ASIC designers. Well, apparently, they do not just care about simulating some cases; they have the same level of concern for properly covering the code as ASIC designers.

Assertions: YES
UVM: not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to enjoy quite a positive upward trend for ASIC and SoC design. Tools and environments that can also be applied to FPGA design are appearing on the market. As we see on the chart, UVM has not really gained traction among our ‘pure FPGA’ respondents, although nearly one engineer in five among our ‘ASIC engineers’ has adopted this technique.

Takeaways

1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGAs at least – this technique must and can be improved.

2) The FPGA engineer seems to use a mix of methodologies when it comes to debug and verification, but not necessarily a lot of them.
Among our respondents, we found that the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the ‘pure FPGA players’ against 2.68 for the ASIC designers).
And – please note – simulation is there of course, often complemented with code coverage.
‘The FPGA engineer who does not simulate at all’ appears to be just a myth today…

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– equivalence checking – and:
– UVM
… which seem to have very limited adoption among the ‘pure FPGA players’.

In a later post, we’ll further analyze the results of our survey. We have shown which techniques are currently used for debug and verification.

Of course, the million dollar question is whether FPGA engineers are satisfied with this situation or if they are in need of improvements… Stay tuned.

Thank you for reading.
– Frederic

FPGA debug and verification is different


Back in May, we had the pleasure to speak and exhibit at a gathering of the UK’s NMI (National Microelectronics Institute) titled: ‘FPGA: What next after flicking the switch?’.

The topic of the day was all about what happens after we download a design into a live FPGA. When we release Reset and hold our breath…?

Those interested can download our slides from here.

One of the (provocative?) subjects was:

Why is FPGA verification different from ASIC/SoC verification?

Interestingly, this month EE Journal’s Kevin Morris provided an answer in his review of the 52nd DAC: ‘Why verify?’. Kevin Morris says:

There is a common theme in the verification world that FPGA designers play too fast and loose, and that they don’t have proper respect for the verification process. IC designers have enormous verification budgets, capable teams, and sophisticated tools. The simple truth is that fear, not design complexity, drives the advancement of advanced verification methodologies.

I could not agree more.

Without the fear of losing a gigantic amount of money if an ASIC is not properly verified and debugged, the EDA industry wouldn’t be able to produce (and sell!) all the advanced technologies used by ASIC and SoC designers today.
As I said in a previous post: ‘It is a question of economics’.

Does this mean that there is no innovation in the FPGA design flow?

Of course not.

At Yugo Systems, we are convinced that FPGA design is different from ASIC and SoC design. For that reason, the FPGA design flow must be improved with tools, software and methodologies dedicated to FPGA. Attempting to use ASIC and SoC tools as ‘out of the box’ solutions for FPGA is a clear path to misfit and frustration.

To get to know your specific aspirations in FPGA design, verification and debug, we invite you this month to take a short survey.

We’d be very happy if you could spend 5 minutes on it. If you participate and enter your contact information, you’ll have a chance to win a $100 Amazon gift card.

Click here to take the survey.

Thank you.

– Frederic

What are you ready to mobilize for FPGA debug?

What are you ready to ‘waste’… or rather, to mobilize?

I believe there are 3 common misconceptions about debugging FPGAs on real hardware:

Misconception #1:
Debugging happens because the engineers are incompetent.

Misconception #2:
FPGA debugging on hardware ‘wastes’ resources.

Misconception #3:
A single methodology should solve ALL the problems.

Debugging is part of the design process.

Forget about misconception #1. If the word ‘debugging’ hurts your eyes or your ears, call it ‘functional verification’, ‘functional coverage’, ‘corner case testing’ or perhaps ‘specification check’. Engineers can improve their techniques, methodologies and competences. Engineers can seek ways to automate the verification process. It remains that verifying a design is at the heart of any engineering activity that presents some level of complexity. Electronic system design has become an incredibly complex task.

Even the best engineer does some verification.

Debugging does not happen out of thin air.

You need to reserve resources for debug.

It can be ‘hardware resources’ – e.g.:

– FPGA resources, like I/Os, logic and memory;
– PCB resources, like a connector, some area on the PCB used to collect the data and maintain its integrity.

It can be ‘engineering resources’ – typically the time spent by the engineering team to find a bug with the chosen debugging strategy.

In all cases, the project budget is impacted with additional costs:
– the cost for an extra area on the PCB;
– the extra cost of an FPGA with a larger package or a higher speed grade;
– the cost of a new logic analyzer or a scope;
– the engineering hours cost for implementing a specific debugging strategy.

Engineers are assigned the task of optimizing the cost of a design. Once the system goes to production, any extra money put into system ‘real estate’ that does not contribute to the system’s functionality is considered wasted margin. Management constantly reminds us that any such dollar is worth saving, because it quickly multiplies once the system is produced in (large) series.

Hence ‘hardware engineers’ are naturally inclined to:
– save on hardware, and:
– do more ‘engineering’ (after all, we are engineers!)

Actually, every stage of the design flow mobilizes resources.
What can be mobilized to do the job is essentially a trade-off – or, if you prefer – a question of economics.

You must mobilize some of your ‘real estate’

In my previous post, I came to the conclusion that improving the FPGA debugging process would require providing much more visibility into the FPGA than existing solutions do. Mobilizing more FPGA I/Os or FPGA memory resources would likely not provide this increased visibility. Hence, a potentially better solution would need to hit the ‘target position’ shown on the diagram below – now occupied by our EXOSTIV™ solution.
Exploring the FPGA debug solutions space

As one can expect, the above target requires mobilizing resources other than I/O or internal memory.
Yugo Systems’ EXOSTIV™ mobilizes the following resources:
– an external memory of up to 8 GB, thereby providing a total trace capacity 100,000 times bigger than existing embedded instrumentation solutions;
– FPGA transceivers, used to send the data to the external memory;
– FPGA logic and FIFO resources, used to reach the FPGA internal nodes and temporarily store the data trace. The footprint of this IP on the FPGA memory resources does not grow with the total captured trace.

Part of the talent of the engineer consists in selecting the right techniques to reach a goal more efficiently.

Engineers commonly complain about verification on hardware because it does not offer the same level of visibility as simulation. I believe that the secret to efficient debugging consists in finding the right mix of techniques.

What is the right mix for you? Well, the answer depends on the economics of your project.
But please bear in mind that going to hardware early is absolutely affordable when you work on an FPGA-based system.

Performing some of the verification steps with the target hardware has arguably at least 2 benefits:

1) Speed of execution is optimal – as the target system runs… at system speed – and:

2) The target hardware can be placed in its real environment and won’t suffer from incomplete modeling.

Perhaps the point is all about improving debugging on hardware, not simply discarding it because it does not do ‘everything’…

Debugging involves a mix of techniques

At Yugo Systems, we believe that the engineering community is absolutely right when it chooses to use embedded instrumentation with live designs for debugging purposes. This is just a ‘part of the mix’. We also believe that debugging on FPGA hardware can be improved.

This is what we do with EXOSTIV™.
