What are the key features of ideal ASIC prototypes?

It seems there has never been a better time to prototype an ASIC or SoC with FPGAs. Recent announcements from leading FPGA vendors (such as this one, from Intel) show that the biggest FPGAs now pack more than 40 billion transistors!

Prototyping an ASIC or SoC primarily requires gates. If these gates can fit into a minimum number of FPGAs, all the better, as it reduces the need to partition the ASIC logic across multiple FPGAs. Keeping coherent and complex sets of functionality together limits the headaches of implementing part of the ASIC's internal logic over chip-to-chip interfaces. In addition, fewer chips means a higher running frequency, which we'll come back to later.

Even though FPGA technology is evolving towards more complexity per chip, choosing the right FPGA platform to prototype an ASIC is challenging. Many aspects come into consideration for this choice:

  • Total number of gates;
  • FPGA technology and size;
  • Is the system modular and is modularity required?
  • Will the system be used for one project only or multiple projects?
  • Is there a software environment with tools to help with partitioning and/or debugging and verification?
  • In which environment will the prototype have to operate?
  • Will a purchased system be maintained, and how will its technology be updated?

The question is: ‘What are the KEY features of ideal prototypes?’

We have isolated two of them:

The first key feature is called ‘Similarity to target’: an ideal prototype should run at operating speed, should provide the target device functionalities, and should run in the target environment.

The second key feature would be ‘Visibility’, that is, the ability to ‘look into the prototype in operation’. A totally opaque prototype is of no use: prototypes are used for debug, verification, or even software development. During these processes, being able to see how the prototype behaves – a bit like in a simulation – is essential.

Let’s classify the various types of ASIC prototyping platforms according to these two criteria: similarity to target and visibility (see the chart below).

– Standard FPGA boards might not have the right set of components and peripherals to accurately behave like the target device. When no more than two FPGAs are needed to map the target ASIC, they could run close to the target speed. However, they come with no – or very limited – visibility tools, often just traditional instrumentation or the JTAG-based embedded logic analyzer provided by the FPGA vendor. These tools prove very limited when it comes to capturing the inner workings of the chip. For these reasons, such boards score low on both similarity and visibility.

– (Big) EDA companies provide sets of modular boards that can be assembled to reach the desired number of gates, with peripherals added in the shape of plug-in boards. These systems target ASIC prototyping and often come with additional tools that provide much better visibility into the prototype.
However, such systems can be bulky to use in some operating environments (like a car) and operate way below the target speed of operation, compromising operation in target environments that cannot be slowed down.

– Building your own custom FPGA board provides full flexibility in terms of complexity, peripherals, features and, often, speed. As the board is designed for a specific ASIC target, the latest FPGA technology can be used (while a re-used prototyping system may have aged a little…). Unlike a modular prototyping system, it might not be re-usable for another project, but that is the price of a good score on ‘similarity’, both in terms of features, ability to run in the target environment, and speed!
Unfortunately, like their standard equivalents, custom FPGA boards do not come with a ready-to-use set of visibility tools. Designing custom tools is usually neither reasonable nor realistic (budget, engineering time, maintenance, …) for a team already busy with a full ASIC or SoC design.

What would be the ideal prototype?

Ideally, we should occupy the remaining quadrant of the chart above: a solution that scores high both on similarity to the target (which, we have found, means similarity in features AND speed) and on visibility.

What we need is:

  • Total freedom of choice for the board(s): the ability to choose the target FPGA technology to make the most of the latest and greatest FPGA chips, with just the right set of features and peripherals, and the ability to integrate the board into the target environment;
  • At-speed operation, as this is one of the keys to testing the prototype against a realistic set of inputs, coming from an environment that does not need to be artificially slowed down just because an imperfect prototype requires it;
  • Extreme visibility into the prototype, to be able to look inside and see the prototype operate in full detail.

Reach maximal visibility on any type of ASIC prototype. Now.

The current Exostiv series and the upcoming Exostiv Blade series provide unprecedented visibility into any type of prototyping platform, whether you use standard boards, advanced prototyping systems or even custom boards. Our tools are constantly upgraded to support the latest technologies from major FPGA vendors and connect to your prototype through standard interfaces and flows. Bottom line: you are no longer locked into using an outdated FPGA technology, a specific FPGA vendor or a previously purchased but unfit prototyping system.
Enjoy total freedom of choice for your prototype: you can even use the same visibility tools with a succession of prototyping platforms along your validation journey. Choosing or designing your own FPGA prototyping platform does not mean ‘poor visibility’ anymore!

As always, thank you for reading.

– Frederic.

Does FPGA use define verification and debug?

You may be aware that we ran a first survey on FPGA design, debug and verification over the last month.

(By the way, many thanks to our respondents – we’ll announce the Amazon Gift Card winner in September).

In this survey, we did some ‘population segmentation’, and first asked about the primary reason for using FPGA:

– Do you use FPGAs as a target technology? In other words, are FPGAs part of your final product’s BOM (bill of materials)? – or:
– Do you use FPGAs as a prototyping technology for an ASIC or a SoC?

49 respondents were found in the first group (which we’ll also call the ‘pure FPGA’ users) and 34 in the second group. The survey was posted online for about one month and advertised on social networks. The questions were in English.

Behind this segmentation question is of course the idea that the design flow used to design an FPGA is usually significantly different from (and shorter than) the one used to design an ASIC.
(This is claimed by FPGA vendors as one of the significant differences between ASIC and FPGA – you can see an example of this comparison here).

Since we care (a lot) about FPGA debug and verification, one of the survey questions was about the set of tools currently used for FPGA debug and verification.

The chart below illustrates the percentage of respondents in each group who selected each proposed tool or methodology. They were invited to select all choices that applied to them.

Survey results
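
As a side note, here is a minimal sketch of how such multi-select results can be tallied: for each group, count how many respondents selected each technique and divide by the group size. The response data below is purely illustrative (invented, not the actual survey answers); only the tallying method reflects what the chart shows.

```python
from collections import Counter

# Purely illustrative multi-select answers (NOT the actual survey data):
# each set lists the techniques one respondent reported using.
responses = {
    "pure FPGA": [
        {"simulation", "in-lab testing"},
        {"simulation", "code coverage", "in-lab testing"},
        {"simulation", "RTL code review"},
    ],
    "ASIC prototyping": [
        {"simulation", "UVM", "code coverage"},
        {"simulation", "in-lab testing", "RTL code review"},
    ],
}

for group, answers in responses.items():
    counts = Counter(tool for selected in answers for tool in selected)
    total = len(answers)
    print(f"--- {group} ({total} respondents) ---")
    for tool, n in counts.most_common():
        # Percentage of respondents in this group who selected the tool.
        # Rows do not sum to 100% because multiple selections are allowed.
        print(f"{tool:>18}: {100 * n / total:.0f}%")
```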

A podium shared by simulation, in-lab testing and RTL code reviews

With no significant difference between the two groups, simulation is the clear winning methodology, with a little more than 90% usage. Frankly, we wonder how it is still possible not to simulate an FPGA design today – we have not analyzed the cases where designers are able to skip this step.

In-lab testing comes second, with close to 70% usage among the respondents. In-lab testing actually covers two categories of tools in the survey:

– traditional instruments – such as scopes and logic analyzers – and
– embedded logic analyzers – such as Xilinx ChipScope, Altera SignalTap and the like.

Manual RTL code review completes the podium. We would expect every designer to actually review his/her code. Why is it not 100%? Probably because the people verifying and debugging the FPGA are not the RTL coders. Another possibility is that ‘reviewing the code’ might not be seen by many as a ‘verification technique’, but rather as a by-product of the verification and debug process.

The case of code coverage

While this technique has been around for quite some time now (and is actually complementary to simulation), it is interesting to see that code coverage has gained the same popularity in both of our groups (we do not think that the difference between the groups is significant here).

It has been widely said that ‘pure FPGA designers’ are careless and do not use the same advanced and rigorous techniques as ASIC designers. Well, apparently, they do not just care about simulating a few cases; they show the same level of concern for properly covering the code as ASIC designers do.

Assertions: YES
UVM: not for ‘pure’ FPGA designers

UVM is quite a hot topic today, with most of the major EDA companies actively promoting this methodology. For those unfamiliar with the subject, here is a brief overview. It currently seems to be on a quite positive upwards trend for ASIC and SoC design, and tools and environments that can also be applied to FPGA design are appearing on the market. As we see on the chart, UVM has not really gained traction among our ‘pure FPGA’ respondents, although nearly one engineer in five among our ‘ASIC engineers’ has adopted this technique.

Takeaways

1) In-lab testing is a well-entrenched habit of the FPGA designer, and there seems to be no notable difference between our groups. As we pointed out in a previous publication, in-lab FPGA test and debug is here to stay. We simply believe that – for FPGA at least – this technique must and can be improved.

2) The FPGA engineer seems to use a mix of methodologies when it comes to debug and verification, but not necessarily a lot of them.
Among our respondents, we have found that the average set of techniques used for debug and verification counts 2.58 different techniques (2.39 for the ‘pure FPGA players’ versus 2.68 for the ASIC designers) – see the small sketch below.
And – please note – simulation is of course among them, often complemented with code coverage.
‘The FPGA engineer who does not simulate at all’ appears to be just a myth today…
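
The ‘average number of techniques’ is simply the mean size of each respondent’s selection, computed per group. Here is a minimal sketch under the same assumption as before: the data is invented for illustration, not the real survey responses.

```python
# Purely illustrative selections (NOT the actual survey data): each set
# lists the techniques one respondent reported using.
responses = {
    "pure FPGA": [
        {"simulation", "in-lab testing"},
        {"simulation", "code coverage"},
        {"simulation"},
    ],
    "ASIC prototyping": [
        {"simulation", "UVM", "code coverage"},
        {"simulation", "in-lab testing", "RTL code review"},
    ],
}

for group, answers in responses.items():
    # Mean number of techniques selected per respondent in this group.
    avg = sum(len(selected) for selected in answers) / len(answers)
    print(f"{group}: {avg:.2f} techniques on average")
```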

3) There is no notable difference between designers using FPGA as a target technology and those using FPGA as a prototyping technology – except for:

– Equivalence checking, and
– UVM,
…which seem to have very limited adoption among the ‘pure FPGA players’.

We have shown which techniques are currently used for debug and verification. In a later post, we’ll analyze the results of our survey further.

Of course, the million-dollar question is whether FPGA engineers are satisfied with this situation or whether they are in need of improvements… Stay tuned.

Thank you for reading.
– Frederic