The design flow for a leading-edge System on Chip (SoC) may use hundreds of thousands of cell views to represent all of the formats and versions of the cells inside. A single wrong version can lead to device failure, sometimes at a cost of millions of dollars and months of delay. Today it is not possible to verify manually whether all of the views in use are the proper versions and have not been modified; there is simply too much data.

All software has bugs when first written. The code must be tested from top to bottom to find and fix them, and new bugs must then be kept from getting in. Out of necessity, chip designers build their systems with this in mind: a functional error in the register transfer level (RTL) design, in the tools that transform that RTL, or in the systems that assemble the final design can cost millions of dollars to fix.

Given human nature, it is not practical to expect 100% attention 100% of the time, so an error-free design requires not zero errors at the start but 100% detection. About half of the RTL design effort goes into verification: VHDL or Verilog code that is not part of the design itself but instead tests the design's functional units.

After many expensive mistakes, design teams have learned not only to specify their designs carefully but to test them at every level. RTL is written to be testable, tools generate test vectors from functional specifications, and analysis tools measure how well the tests, taken together, cover the totality of the design. Significant work has also gone into ensuring that design tools do not change the functionality of the system.

Two words that appear constantly in hardware-testing discussions are "controllability" and "observability." The first captures the idea that designers will have difficulty testing all of a design if some portions are hard to control, meaning they are hidden from view in some way. Circuits around a block may prevent direct access to it, or the block may implement a complex state machine with many steps that must be performed in sequence.

In like manner, design teams will have trouble verifying that a test has succeeded if a circuit is hard to observe, meaning its outputs are not connected to the outside world. The outputs must pass through one or more intermediaries that may modify the data in some way. If something goes wrong, the team cannot tell whether the circuit under test or one of the intermediaries is at fault.

Decades of design best practices have led to an environment where the functional aspects of a design are validated at all levels, but there are still gaps. The rise of IP aggregation and System on Chip (SoC) design has greatly improved designer productivity, but at the cost of controllability and observability. The loss is not within the design itself, as IP comes with test vectors and warranties, but within the design flow: design managers cannot always control which versions of IP blocks are used within a design, and there has been no foolproof way to observe them afterward.

This goes against the hard-learned lessons of the past (errors will be made, so there must be a way to detect them), and it creates an opportunity for tools that can look into a design and identify its IP blocks. Even if a design team cannot always control what IP goes into its design (search paths are not always updated, and the IP in blocks finished early may be out of date by the time those blocks are incorporated into a chip), full observability lets the team find the results of momentary attention lapses before they cost the project millions of dollars at the foundry.

Until now there has been no way to validate the IP going into a chip, particularly hard IP such as library cells, compiled blocks, and processor cores. Verification tools look for obvious problems like design rule violations, electrical shorts, and known yield issues, but they cannot answer the basic question: is the IP the right version? When the demands of quality race ahead of the capabilities of the verification tools, the team needs a safety check to ensure that simple but otherwise undetectable errors do not delay or destroy its design.

Design teams need tools that can tell them when an assembly script references an old version of a cell library. They need tools that can tell them when an IP block dropped blindly into a design has been superseded by a newer version. They need tools that can tell them when a warranted IP block has been improperly modified in an attempt to "improve" it.

Until now, those tools have not existed. That means design project teams must still watch for:
  • Distracted engineers who forget to check all of the tool control scripts when updating libraries, and undetectably load the wrong library when replacing "gray boxes" with actual layouts.
  • The surprising liabilities of subsystems completed early, as libraries are modified constantly to improve yields in non-obvious ways.
  • Compiled blocks such as RAMs which have tighter tolerances and non-obvious design constraints, tempting inexperienced designers to unknowingly sacrifice yield in search of a little more speed.
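The first failure mode above, a tool-control script that still points at a stale library release, is mechanically checkable. As a minimal sketch (the release register `LATEST`, the library names, and the `lib/vX.Y` reference format are all assumptions for illustration, not any particular vendor's convention), a script audit might look like:

```python
import re

# Hypothetical release register: latest qualified version per library.
LATEST = {"stdcell_lib": "v3.2", "ram_compiler": "v1.7"}

# Hypothetical convention: scripts reference libraries as "name/vX.Y".
VERSION_REF = re.compile(r"(?P<lib>\w+)/(?P<ver>v\d+\.\d+)")

def stale_references(script_text: str) -> list[tuple[str, str, str]]:
    """Return (library, referenced_version, latest_version) for every
    library reference in a tool-control script that does not match the
    latest qualified release."""
    stale = []
    for m in VERSION_REF.finditer(script_text):
        lib, ver = m.group("lib"), m.group("ver")
        latest = LATEST.get(lib)
        if latest is not None and ver != latest:
            stale.append((lib, ver, latest))
    return stale
```

For example, a script line `read_lib stdcell_lib/v3.1` would be flagged as stale against the `v3.2` entry in the register, while `ram_compiler/v1.7` would pass. A real flow would of course have to handle many reference syntaxes; the point is only that the check can be automated rather than left to a distracted engineer.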

Designs are growing more complex, using more versions of more libraries, being built by more teams of more designers, and incorporating more and more IP specialized in subtle and non-obvious ways. The likelihood of error in any one of these areas grows with every design. Design teams need tools to halt those errors. 

In Yotta's Data Trust Management product suite, the IP Library Release and Tapeout Auditors are based on a technology that can index and report the "DNA" of a design, which can then be compared against all known versions of the same views. The comparison reveals which version is actually in use, what has changed of significance, and where each change occurred, by cell, cell layer, and x,y coordinate. This technology can eliminate the risk of using the wrong version of data. Furthermore, it reports at the leaf-cell level and at all levels of hierarchy, so decisions can be made about what to do with the changes found.
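The core idea of comparing a design's fingerprint against known-good versions can be illustrated with a deliberately simplified sketch. This is not Yotta's actual technology (which reports changes down to cell, layer, and coordinate); it only shows the whole-file version of the concept, using content hashes of cell-view files checked against a golden manifest. The file names and directory layout are hypothetical:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Content hash of a single cell-view file (a whole-file 'DNA')."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(views_dir: Path, golden: dict[str, str]) -> dict[str, str]:
    """Compare every view under views_dir against a golden manifest
    (relative path -> expected hash).

    Status per view: 'ok' (matches the released version), 'modified'
    (hash differs), 'unknown' (not in the manifest), or 'missing'
    (in the manifest but absent on disk)."""
    report = {}
    seen = set()
    for view in sorted(views_dir.rglob("*")):
        if not view.is_file():
            continue
        name = str(view.relative_to(views_dir))
        seen.add(name)
        if name not in golden:
            report[name] = "unknown"
        elif fingerprint(view) == golden[name]:
            report[name] = "ok"
        else:
            report[name] = "modified"
    for name in golden:
        if name not in seen:
            report[name] = "missing"
    return report
```

Holding manifests for every released version of a library would let such a checker answer "which version is actually in use?" per view; identifying *where* inside a modified view the change occurred requires the finer-grained, hierarchy-aware indexing the product describes.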

"... The Company's technology plays a substantial role helping us isolate and fix problems before they become silicon issues ..."

Lisa Fisher
Engineering Manager
Texas Instruments
OASIS® is a Registered Trademark of Thomas Grebinski

A Class of Design Errors Rises to Significance for Next Generation SoC Integrated Circuits
Workflow Auditor Point Tools
© 2008-2018 Yotta Data Sciences, Inc. All rights reserved.
* US Patents 9,122,825, 8,555,219, 8,266,571 and 7,685,545.
International Patents: China 200980129771.8, Japan 4768896, Korea 10-1580258