Visibility into the FPGA.

What are you ready to mobilize for FPGA debug?

Or, to put it another way: what are you ready to ‘waste’?

I believe there are three common misconceptions about debugging FPGAs on real hardware:

Misconception #1:
Debugging happens because the engineers are incompetent.

Misconception #2:
FPGA debugging on hardware ‘wastes’ resources.

Misconception #3:
A single methodology should solve ALL the problems.

Debugging is part of the design process.

Forget about misconception #1. If the word ‘debugging’ hurts your eyes or your ears, call it ‘functional verification’, ‘functional coverage’, ‘corner case testing’ or perhaps ‘specification check’. Engineers can improve their techniques, methodologies and competences. Engineers can seek ways to automate the verification process. Still, verifying a design lies at the heart of any engineering activity that involves some level of complexity. And electronic system design has become an incredibly complex task.

Even the best engineer does some verification.

Debugging does not happen out of thin air.

You need to reserve resources for debug.

These can be ‘hardware resources’ – for example:

– FPGA resources, like I/Os, logic and memory;
– PCB resources, like a connector or some area on the PCB used to collect the data and maintain its integrity.

Or they can be ‘engineering resources’ – typically the time spent by the engineering team to find a bug with the chosen debugging strategy.

In all cases, the project budget is impacted by additional costs:
– the cost of extra area on the PCB;
– the extra cost of an FPGA with a larger package or a higher speed grade;
– the cost of a new logic analyzer or oscilloscope;
– the cost of the engineering hours spent implementing a specific debugging strategy.

Engineers are tasked with optimizing the cost of a design. Once the system goes to production, the extra money spent on system ‘real estate’ that does not contribute to the system’s functionality is considered wasted margin. Management constantly reminds us that every such dollar is worth saving, because it quickly multiplies once the system is produced in (large) volumes.
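As a rough illustration of that multiplication effect – with purely assumed figures, since no actual costs are quoted here – a few dollars of debug-only cost per unit turns into a significant sum over a production run:

```python
# Illustrative unit-economics check. Both figures below are assumptions chosen for
# the example, not numbers from this post.
EXTRA_BOM_COST_PER_UNIT = 5.0   # assumed extra cost per board (connector, PCB area, larger package...)
PRODUCTION_VOLUME = 20_000      # assumed number of units produced

total_extra_cost = EXTRA_BOM_COST_PER_UNIT * PRODUCTION_VOLUME
print(f"Extra cost over the production run: ${total_extra_cost:,.0f}")  # -> $100,000
```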

Hence, ‘hardware engineers’ are naturally inclined to:
– save on hardware, and
– do more ‘engineering’ (after all, we are engineers!)

Actually, every stage of the design flow mobilizes resources.
What can be mobilized to do the job is essentially a trade-off – or, if you prefer, a question of economics.

You must mobilize some of your ‘real estate’

In my previous post, I came to the conclusion that improving the FPGA debugging process would require providing much more visibility into the FPGA than the existing solutions do. Mobilizing more FPGA I/Os or internal memory resources would likely not provide this increased visibility. Hence, a potentially better solution would need to hit the ‘target position’ shown on the diagram below, now occupied by our EXOSTIV™ solution.
(Diagram: Exploring the FPGA debug solutions space)

As one can expect, the above target requires mobilizing resources other than I/O or internal memory.
Yugo Systems’ EXOSTIV™ mobilizes the following resources:
– an external memory of up to 8 GB, providing a total trace capacity 100,000 times larger than existing embedded instrumentation solutions (see the quick calculation after this list);
– FPGA transceivers, used to send the captured data to the external memory;
– FPGA logic and FIFO resources, used to reach the FPGA internal nodes and temporarily store the data trace. The footprint of this IP on the FPGA memory resources does not grow with the total captured trace.
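To put that trace-capacity claim in perspective, here is a minimal back-of-the-envelope comparison. The probe width and on-chip buffer size are illustrative assumptions, not EXOSTIV specifications; only the 8 GB external memory figure comes from the list above.

```python
# Back-of-the-envelope trace-depth comparison: an embedded-instrumentation buffer
# carved out of on-chip block RAM vs. an 8 GB external memory.
BITS_PER_SAMPLE = 256                 # assumed width of the captured probe bus
ON_CHIP_BUFFER_BYTES = 64 * 1024      # assumed block-RAM budget for an embedded ILA-style core
EXTERNAL_MEMORY_BYTES = 8 * 1024**3   # 8 GB external memory (figure from the list above)

bytes_per_sample = BITS_PER_SAMPLE // 8
on_chip_samples = ON_CHIP_BUFFER_BYTES // bytes_per_sample
external_samples = EXTERNAL_MEMORY_BYTES // bytes_per_sample

print(f"On-chip trace depth  : {on_chip_samples:,} samples")
print(f"External trace depth : {external_samples:,} samples")
print(f"Ratio                : {external_samples // on_chip_samples:,}x")
```

With these assumed figures the external memory holds about 130,000 times more samples than the on-chip buffer – the same order of magnitude as the ‘100,000 times’ mentioned above.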

Part of an engineer’s talent consists in selecting the right techniques to reach a goal more efficiently

Engineers commonly complain about verification on hardware because it does not offer the same level of visibility as simulation. I believe that the secret to efficient debugging consists in finding the right mix of techniques.

What is the right mix for you? Well, the answer depends on the economics of your project.
But please bear in mind that going to hardware early is perfectly affordable when you work on an FPGA-based system.

Performing some of the verification steps on the target hardware has arguably at least two benefits:

1) Speed of execution is optimal, as the target system runs… at system speed (see the rough comparison below); and:

2) The target hardware can be placed in its real environment and won’t suffer from incomplete modeling.
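As a rough illustration of benefit 1) – using assumed figures for the simulation rate and the system clock, which both vary enormously in practice – covering one second of real operation in RTL simulation can take the better part of a day, while the target hardware covers it in… one second:

```python
# Illustrative comparison of RTL simulation throughput vs. the target FPGA.
# Both figures are assumptions chosen for the example.
SIM_CYCLES_PER_SECOND = 1_000   # assumed RTL simulation throughput (clock cycles per second)
FPGA_CLOCK_HZ = 100_000_000     # assumed 100 MHz system clock on the target hardware

seconds_of_operation = 1.0      # amount of real system time we want to exercise
cycles_needed = seconds_of_operation * FPGA_CLOCK_HZ

sim_time_hours = cycles_needed / SIM_CYCLES_PER_SECOND / 3600
print(f"Cycles to cover         : {cycles_needed:,.0f}")
print(f"Time in simulation      : {sim_time_hours:.1f} hours")   # ~27.8 hours
print(f"Time on target hardware : {seconds_of_operation:.0f} second")
```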

Perhaps the point is all about improving debugging on hardware, not simply discarding it because it does not do ‘everything’…

Debugging involves a mix of techniques

At Yugo Systems, we believe that the engineering community is absolutely right when it chooses to use embedded instrumentation on live designs for debugging purposes. This is just one part of the mix. We also believe that debugging on FPGA hardware can be improved.

This is what we do with EXOSTIV™.
