
Exostiv boosts RTL simulation

It is essential to reduce the machine cycles wasted on simulation workloads.

Simulation dominates the ASIC/SoC/FPGA verification process

‘The 2018 Wilson Research Group ASIC and FPGA Functional Verification Study’ (you may have to register for free to access it) reports that an ASIC, SoC or FPGA designer can spend up to 40% of the total verification time on creating testbenches, writing tests or running simulation. In addition, up to 44% of the time is spent on debug – and this involves running simulation too.
It is safe to say that simulation is the most dominant technique in the overall verification effort.

Simulation speed is losing the race against FPGA complexity.

Over the last decade, simulation software has improved at many levels. These improvements were needed to cope with the growing complexity of FPGAs and chips counting hundreds of millions of gates.

Simulation software now supports more languages, assertions, and verification methodologies such as Accellera’s UVM. Code coverage now includes testbench coverage, to keep track of whether a team has exercised specific test stimuli.

Interestingly, these efforts have improved test coverage, provided metrics for sign-off and standardized simulation-based verification approaches, but they have not accelerated simulation much. Even the most recent claims from simulation software vendors show RTL simulation speedups of 4x or perhaps 5x, thanks to executing the simulation on multi-core servers… whereas FPGA complexity has been multiplied by 100 over the same period!

In 2018, 84% of FPGA designs had at least 1 non-trivial bug escape to production (source: The Wilson Research Group)

How to make the most of simulation time

In the study mentioned above, the Wilson Research Group reports that a staggering 84% of FPGA designs have had at least one non-trivial bug escape to production – and that 65% of typical simulation runs exceed 5 hours.

That is why it has become critical to make the most of the time spent in simulation and to limit wasted machine cycles as much as possible. Otherwise, our verification techniques are at risk of becoming irrelevant.

In our experience, the following approaches have proven to be successful:
– Performing ‘per-IP’ simulations spreads the verification effort across multiple engineers working in parallel and keeps individual runs short (see the sketch just after this list);
– Making use of ‘snapshot’ simulation features, which consist in saving the state of a simulation and restarting from that point to run specific tests.
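
To make the first approach concrete, here is a minimal sketch of a ‘per-IP’ regression runner, assuming each IP block has its own self-contained testbench that can be launched from the command line. The IP names and the run_sim.sh launch script are hypothetical placeholders, not a reference to any particular tool:

```python
# Minimal sketch of a 'per-IP' regression runner: every IP block gets its
# own short simulation, launched in parallel instead of one monolithic run.
# The IP names and the './run_sim.sh <ip>' launch script are hypothetical
# placeholders -- adapt them to your own testbench structure.
import subprocess
from concurrent.futures import ThreadPoolExecutor

IP_LIST = ["uart_ip", "dma_ip", "ddr_ctrl_ip", "pcie_ip"]  # hypothetical IP names

def run_ip_sim(ip_name):
    """Launch the simulation for a single IP and return (name, exit code)."""
    result = subprocess.run(["./run_sim.sh", ip_name], capture_output=True)
    return ip_name, result.returncode

# Threads suffice here: each worker only blocks on an external simulator process.
with ThreadPoolExecutor(max_workers=4) as pool:
    for ip, rc in pool.map(run_ip_sim, IP_LIST):
        print(f"{ip}: {'PASS' if rc == 0 else 'FAIL'}")
```

Several short runs in parallel complete much sooner than one long full-system run, and each failure points directly at the offending IP.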

In the illustration below, two snapshots are saved at frequently reused points – after the boot of the processor and after the DUT is initialized, respectively. This makes it convenient to run combinations of initialization sequences and test stimuli without having to simulate the boot sequence over and over again.

Simulation snapshot technique
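
Here is a minimal sketch of how such a flow can be scripted, assuming a simulator that exposes checkpoint save/restore from the command line. The ‘sim’ executable and its options are hypothetical placeholders – check your own simulator’s documentation for the real mechanism (Questa, for instance, offers a comparable checkpoint / vsim -restore pair):

```python
# Sketch of a snapshot-based flow: pay for the CPU boot / DUT init sequence
# once, then fan out many short test runs restored from the saved checkpoint.
# The 'sim' executable and its --save-checkpoint / --restore / --run-test
# options are hypothetical placeholders for your simulator's own
# checkpoint/restore mechanism.
import subprocess

BOOT_CHECKPOINT = "after_dut_init.ckpt"
TESTS = ["test_dma_burst", "test_uart_loopback", "test_irq_storm"]  # hypothetical

# 1) Run the expensive part (processor boot + DUT initialization) exactly once.
subprocess.run(
    ["sim", "top_tb", "--run-until", "dut_init_done",
     "--save-checkpoint", BOOT_CHECKPOINT],
    check=True,
)

# 2) Restart each test from the checkpoint instead of re-simulating the boot.
for test in TESTS:
    subprocess.run(["sim", "--restore", BOOT_CHECKPOINT, "--run-test", test],
                   check=True)
```

Only the short, test-specific portion is re-simulated on every run – which is exactly where the machine cycles are saved.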

Full-system bugs can require multiple days of simulation.

As demonstrated in this presentation from Cadence on simulation tools, full-chip bugs, which appear only when the full system is put together, can require days of simulation. A ballpark estimate is up to 54 hours of simulation for 43 milliseconds of SoC HW runtime – a slowdown of roughly 4.5 million to one. For these kinds of bugs, it has become equally crucial to:
– Speed up simulation (that’s the job of simulation software vendors);
– Make sure that the simulated models are accurate (EXOSTIV can definitely help!);
– Provide additional visibility from the real system, to converge faster on finding the bug (EXOSTIV can definitely help!).

What EXOSTIV can do to BOOST simulation.

Exostiv lets you record very long data sets from inside FPGAs running at full speed of operation.
Typical probing scenarios and their most common usages are summarized in the two tables below.
The first table lists the usages that boost simulation (illustrated below the table):

What is probed: specific IP inputs (or a group of IPs)
Target usage:
– Overcome modelling issues;
– Make the test case match reality.

What is probed: specific IP outputs (or a group of IPs)
Target usage:
– Build a reference response database;
– See the effects of elements that are hard or impossible to simulate (such as CDC or metastability).

Exostiv lets you record input vectors from reality for IPs and full systems
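
As an illustration of the ‘make the test case match reality’ usage, here is a minimal sketch of replaying captured input vectors as simulation stimuli, written with the open-source cocotb framework. The CSV file name, its column layout and the DUT port names (clk, data_in, valid_in) are assumptions for illustration; the export format of an actual capture will differ:

```python
# Sketch of replaying captured hardware input vectors as simulation stimuli,
# using the cocotb co-simulation framework. The CSV layout (one captured
# sample per line: data_in, valid_in) and the DUT port names are hypothetical.
import csv
import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def replay_captured_vectors(dut):
    """Drive the DUT with one recorded sample per clock cycle."""
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())  # 100 MHz clock
    with open("captured_vectors.csv") as f:   # hypothetical capture export
        for row in csv.DictReader(f):
            dut.data_in.value = int(row["data_in"], 16)   # hex-encoded sample
            dut.valid_in.value = int(row["valid_in"])
            await RisingEdge(dut.clk)
```

The same recording, taken at the IP outputs, can serve as the reference response database mentioned in the table above.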

Other Exostiv usages that complement simulation:

What is probed: a full system bus
Target usage:
– Verify / develop software;
– Evaluate overall system efficiency / performance over large (and real) data;
– Check system behaviour.

What is probed: specific nodes
Target usage:
– Functional verification and debug.

Conclusion

Exostiv complements simulation by providing a source of real-world stimulus for IPs, groups of IPs and full systems. It therefore helps overcome modelling problems, which are the most common source of FPGA bugs escaping to production. Coupling simulation with Exostiv’s huge recording capability avoids wasting your quota of simulation machine cycles and improves the quality of FPGA system sign-off.

When 84% of FPGA designs report at least one non-trivial bug escaping to production, and 65% of typical simulation runs exceed 5 hours (source: The 2018 Wilson Research Group ASIC and FPGA Functional Verification Study), boosting simulation is an absolute necessity.

As always, thank you for reading.
– Frederic

Does FPGA use define verification and debug?

You may be aware that we ran a first survey on FPGA design, debug and verification over the past month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGAs as a target technology? In other words, are FPGAs part of your final product’s BOM (bill of materials)? … – or:
– Do you use FPGAs as a prototyping technology for an ASIC or a SoC…?

49 respondents fell in the first group (which we’ll also call the ‘pure FPGA’ users) and 34 in the second group. The survey was posted online for about one month and advertised on social networks. The questions were in English.

Behind this segmentation question is, of course, the idea that the design flow used for FPGAs is usually significantly different from (and shorter than) the one used for ASICs.
(FPGA vendors claim this as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here.)

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They were invited to select all choices that applied to them.

Survey results

A podium shared by simulation, in-lab testing and RTL code reviews

With no significant difference between the two groups, simulation is the clear winning methodology, used by a little more than 90% of respondents. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with nearly 70% usage among the respondents. In-lab testing actually covers two categories of tools in the survey:

– traditional instruments – such as scopes and logic analyzers – and
– embedded logic analyzers – such as Xilinx ChipScope, Altera SignalTap and the like.

Manual RTL code review completes the podium. We would expect every designer to review his or her code, so why is it not 100%? Probably because those who verify and debug FPGAs are not always the RTL coders. Another possibility is that ‘reviewing the code’ is seen by many not as a verification technique in itself, but as a by-product of the verification and debug process.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage enjoys the same popularity in both of our groups (we do not think that the difference between the groups is significant here).

It has been widely said that ‘pure FPGA designers’ are careless and do not use the same advanced and rigorous techniques as ASIC designers. Well, apparently, they do not just care about simulating a few cases; they show the same level of concern for properly covering their code as ASIC designers do.

Assertions: YES
UVM: not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to enjoy a quite positive upward trend for ASIC and SoC design, and tools and environments that can also be applied to FPGA design are appearing on the market. As the chart shows, UVM has not really gained traction among our ‘pure FPGA’ respondents, although nearly one engineer in five among our ‘ASIC engineers’ has adopted this technique.

Takeaways

1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGAs at least – this technique must and can be improved.

2) The FPGA engineer seems to use a mix of methodologies for debug and verification, but not necessarily many of them.
Among our respondents, we found that the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the ‘pure FPGA players’ against 2.68 for the ASIC designers).
And – please note – simulation is of course among them, often complemented with code coverage.
‘The FPGA engineer who does not simulate at all’ appears to be just a myth today…

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– Equivalence checking, and
– UVM,
… which both seem to have very limited adoption among the ‘pure FPGA players’.

In a later post, we’ll analyze the results of our survey further. So far, we have shown which techniques are currently used for debug and verification.

Of course, the million-dollar question is whether FPGA engineers are satisfied with this situation or whether they need improvements… Stay tuned.

Thank you for reading.
– Frederic
