Table of Contents
- Hot Topics
- First day – Tutorials and Workshops
- Second day – Memories and Stimulus
- Third day – Automation
- Final day – Low Power, UVM and ML
- Best paper award
- Virtual experience
- Acknowledgements
Hot Topics
DVCon has always had a core purpose: enhancing everyday engineering work through new tools, new methodologies, and new standards. Every edition brings new ideas, experiences, or further iterations of past methods that can improve engineering processes in the semiconductor business.
The main topics that captured the audience's attention over the last couple of years, this one included, are Portable Stimulus, Machine Learning, automation processes, as well as new, or rather reinvented, UVM practices.
First day – Tutorials and Workshops
PSS in the Real World (Prabhat Gupta – AMD, Tom Fitzpatrick – Accellera UVM-AMS Working Group, Adnan Hamid – Breker Verification Systems, Inc.)
We have now had a good couple of years of consistent, recurring presentations on the Portable Stimulus topic. I couldn’t say they have improved much since last year; even some of the slides seemed to be reused. The tutorials try to keep the topic current, but the subjects, even though they might be presented differently at times, have more or less the same content as in the past. I would even go as far as to say that one of the early Accellera tutorials from 2017 covers most if not all of the subjects addressed in this tutorial, and the slides are available here.
The presentation opened with a parallel between PSS and the early days of UVM: how constrained-random stimulus brought forth the need for coverage, and the strong points of UVM, with an emphasis on the horizontal reuse of VIPs and the hierarchical structure of a testbench. This makes a good background for introducing PSS as a modeling layer on top of UVM and a better solution for vertical reuse in verification. The presentation touched upon the abstraction layer and the realization layer, but the translation between the two is tool specific and can never be fully covered in a general presentation; I think that is the main deterrent to PSS adoption.
After the initial introduction there were multiple practical examples, ranging from display controllers to complex DDR and cache mechanisms, which were used as technical ground for explaining different PSS notions such as scheduling, coverage, and resource pool/lock mechanisms.
All in all, PSS offers a lot of ease-of-use solutions and some shortcuts for resolving problems that would otherwise require a somewhat lengthy implementation in target languages, one that would not be as reusable; furthermore, many of these are inherently supported in PSS. A good way to evaluate how useful this can be in your project is to compare the time needed to implement such solutions as a custom API in widely used verification languages against the time necessary to adopt PSS, which even years after its announcement can still prove troublesome. If this evaluation rates the efforts as comparable, then the reusability of the PSS implementation, together with the fact that it can be written as soon as a requirement spec is available, without any design or verification environment, should tip the balance in favor of adoption. Otherwise, I would wait for the 2.1 release and next year’s presentation; who knows what improvements it might bring.
UVM-AMS: An Update On The Accellera UVM-AMS Standard (Prabhat Gupta – AMD, Tom Fitzpatrick – Accellera UVM-AMS Working Group, Tim Pylant – Cadence Design Systems, Inc.)
The working group for this new standard was created in late 2019. The presentation starts with the claim that over 80% of current industry projects have a mixed-signal component. Since this is positioned as a continuation of the UVM standard, the presentation opens with an introduction to UVM and a general UVM testbench architecture, together with the strengths of the methodology.
The presentation continues, simple and concise, with an example of how analog resources can be connected to a UVM environment and benefit from the standard components of UVM VIPs, such as monitors and drivers. The code example provided features a proxy class for a bridge component that enables control of an analog signal generator from inside the UVM components. The logic resembles a DPI implementation and is conceptually similar.
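The proxy approach is reminiscent of the well-known abstract/concrete class pattern for reaching module scope from class-based code. Below is a minimal sketch of that pattern, assuming the standard takes a similar shape; the names (ams_bridge_proxy, set_frequency, analog_bridge) are my own, not the working group's.

```systemverilog
import uvm_pkg::*;

// Abstract proxy that UVM components can hold a handle to
// (in practice this would live in a package)
virtual class ams_bridge_proxy extends uvm_object;
  function new(string name = "ams_bridge_proxy");
    super.new(name);
  endfunction
  // Implemented by the HDL-side bridge; called from UVM code
  pure virtual function void set_frequency(real freq_hz);
endclass

// HDL bridge module owning the behavioral analog generator
module analog_bridge;
  real gen_freq;  // consumed by the analog signal generator model

  // Concrete proxy: a nested class can reach module-scope variables
  class bridge_impl extends ams_bridge_proxy;
    function new(string name = "bridge_impl");
      super.new(name);
    endfunction
    virtual function void set_frequency(real freq_hz);
      gen_freq = freq_hz;
    endfunction
  endclass

  bridge_impl impl = new();
  // Publish the proxy so any UVM component can retrieve it via
  // uvm_config_db#(ams_bridge_proxy)::get() and drive the analog side
  initial uvm_config_db#(ams_bridge_proxy)::set(null, "*", "ams_proxy", impl);
endmodule
```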
The purpose of this standard is to offer the capability of adding AMS components to existing UVM environments. The standardization effort aims at providing a platform for vendor tools to evolve and support the features the industry needs most.
An important question was raised at the end: is there really a need for a new standard? Will it address any issues that don’t already have a solution?
Second day – Memories and Stimulus
Modeling Memory Coherency During Concurrent/Simultaneous Accesses (Subramoni Parameswaran – AMD)
The presentation proposes a special memory model that can successfully manage concurrent memory accesses. The authors stated the general coherency problem in modern cache controllers and presented the theory behind data dependencies. The model supports all stimulus formats and does not require any probed RTL signals for modeling cycle-accurate behavior. We would have liked to know more about the implementation costs of these models and the performance penalty within a regression; moreover, no numerical results were mentioned at the end of the presentation.
BatchSolve: A Divide And Conquer Approach To Solving The Memory Ordering Problem (Debarshi Chatterjee – Nvidia Corporation)
The paper introduces Batch-Solve (BATS), an approach to solving the Memory Ordering Problem (MOP), which is crucial to memory subsystem verification. From the details provided, it appears to be a configurable ‘model’ that can be reused across testbenches and provides a user API to specify memory access ordering rules as SV constraints. One of the mentioned benefits is reduced maintenance and transaction tracking needed to meet the memory access requirements, as the ‘model’ handles this automatically. The solution is intended for enhancing subsystem verification of memory controllers. It is reportedly around 15% faster at solving the memory ordering problem than similar approaches reported by the research community; if the presented results are close to reality, this is quite a time improvement over a ‘standard’ approach.
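The BATS API itself is not reproduced in the summary above; as a rough illustration of what "ordering rules as SV constraints" can look like in plain SystemVerilog, here is a sketch where all names (mem_op, issue_order, write_before_read_c) are mine:

```systemverilog
typedef enum {READ, WRITE} op_e;

class mem_op;
  rand op_e       kind;
  rand bit [31:0] addr;
  rand int        issue_order;  // position in the generated issue sequence
endclass

class ordered_batch;
  rand mem_op ops[4];

  function new();
    foreach (ops[i]) ops[i] = new();
  endfunction

  // Ordering rule: a WRITE to an address is issued before any READ of it
  constraint write_before_read_c {
    foreach (ops[i]) foreach (ops[j])
      (ops[i].kind == WRITE && ops[j].kind == READ &&
       ops[i].addr == ops[j].addr)
        -> (ops[i].issue_order < ops[j].issue_order);
  }

  // Keep issue_order a permutation of 0..3
  constraint order_is_permutation_c {
    foreach (ops[i]) ops[i].issue_order inside {[0:3]};
    foreach (ops[i]) foreach (ops[j])
      (i != j) -> (ops[i].issue_order != ops[j].issue_order);
  }
endclass
```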
Advanced UVM Command Line Processor For Central Maintenance And Randomization Of Control Knobs (Siddharth Krishna Kumar – Samsung Austin Research Center)
The paper presented a way of managing hundreds or thousands of control knobs inside a verification environment, which can be very challenging for a complex design. It offers one way of controlling the values of class fields or randomization ranges by means of command-line arguments, built on top of the UVM class uvm_cmdline_processor.
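For reference, the underlying mechanism is straightforward; here is a minimal sketch of reading a knob from the command line via uvm_cmdline_processor, where the +KNOB_ prefix and get_knob helper are my own illustrative conventions, not the paper's API:

```systemverilog
import uvm_pkg::*;

// Return the value of +KNOB_<name>=<int> from the command line,
// falling back to a default when the plusarg is absent.
function int get_knob(string name, int default_val);
  uvm_cmdline_processor clp = uvm_cmdline_processor::get_inst();
  string values[$];
  if (clp.get_arg_values({"+KNOB_", name, "="}, values) > 0)
    return values[0].atoi();
  return default_val;
endfunction

// Usage: simulate with +KNOB_num_pkts=42
// int num_pkts = get_knob("num_pkts", 10);
```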
The intent is notable, and it brings the concept to our community’s attention. The author of the paper also chose to provide the source code for his implementation, which is compact at a bit under 400 lines.
I have personally seen a similar mechanism implemented at a company I worked for and, from my perspective, the current implementation has room for improvement. I would definitely try it out when I get the chance.
Fnob: Command Line-Dynamic Random Generator (Haoxiang Hu – Facebook, Inc.)
The authors of this paper developed a method of reducing compile time and the amount of code written when adding new constraints and new tests to a verification environment with thousands of tests. Fnob is basically a library that offers the power of dynamic randomization and boils down to a one-liner of code or command-line arguments. It certainly deserves a chance when using a directed-test approach to verify a large SoC.
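The library itself is not reproduced in the paper summary, but the "one-liner" idea can be approximated in plain SystemVerilog; a hypothetical sketch, where fnob_range is my own name and not the actual Fnob API:

```systemverilog
// Knob that randomizes within [lo:hi] unless overridden on the
// command line, e.g. +pkt_len=256.
function int unsigned fnob_range(string name, int unsigned lo, int unsigned hi);
  int unsigned v;
  if ($value$plusargs({name, "=%d"}, v))
    return v;                      // command-line override wins
  return $urandom_range(hi, lo);   // otherwise draw a fresh random value
endfunction

// The one-liner at the point of use:
// int unsigned pkt_len = fnob_range("pkt_len", 64, 1500);
```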
Third day – Automation
Systematic Constraint Relaxation (SCR): Hunting For Over-Constrained Stimulus (Debarshi Chatterjee – Nvidia Corporation)
The paper proposes a framework that can reduce over-constrained stimuli inside a verification environment. This increases the quality of stimulus packets and so reduces the time cost of reaching coverage closure. The objective is to automate the detection of over-constrained stimulus, reducing the engineering effort spent on coverage analysis. There are also cases where constraints are accidentally introduced by verification engineers, dramatically decreasing the quality of the test suite.
This is done through a script (called SCR) that automatically parses your environment for constraints and splits them into atomic constraint blocks. It then reruns the tests while randomly disabling constraint blocks and observes whether the tests still pass.
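Conceptually, what the script automates is the relaxation experiment one could run by hand with constraint_mode(); a minimal sketch with illustrative names:

```systemverilog
class packet;
  rand bit [7:0] len;
  constraint legal_c { len > 0;  }
  constraint tight_c { len < 16; }  // candidate over-constraint
endclass

module scr_demo;
  initial begin
    packet p = new();
    // SCR-style experiment: disable one atomic constraint block,
    // regenerate stimulus, and check whether the test still passes
    p.tight_c.constraint_mode(0);
    void'(p.randomize());
    $display("len = %0d", p.len);
  end
endmodule
```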
I’m not sure whether this approach is more efficient than coverage analysis, but it does provide an interesting alternative when coverage data is not yet available.
Two-Stage Framework For Corner Case Stimuli Generation Using Transformer And Reinforcement Learning (Chung-An Wang – MediaTek Inc.)
The paper proposes a framework that can increase the chances of covering corner cases in verification. The framework uses an AI engine to automatically adjust stimulus constraints. In the first stage, the framework selects the most relevant constraints in order to narrow the stimulus value ranges. In the second stage, it tries to generate novel stimuli based on the constraints adjusted and selected during the previous phase. One use case they discuss is covering the FIFO full condition of a Memory Management Unit (MMU). The smart engine uses a reinforcement learning unit that is rewarded each time a novel stimulus is generated. Their results indicate up to 380 times faster closure of some verification corner cases. We would like to know more about the scalability of this framework across other types of DUTs, not only MMUs.
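For context, the kind of corner case the RL agent is rewarded for hitting can be expressed as an ordinary cover directive; a sketch with illustrative signal and parameter names:

```systemverilog
// Cover the MMU request FIFO reaching its full level, the
// rarely-hit condition discussed above.
module mmu_fifo_cov #(parameter int FIFO_DEPTH = 16) (
  input logic                        clk,
  input logic [$clog2(FIFO_DEPTH):0] fifo_count
);
  cover property (@(posedge clk) fifo_count == FIFO_DEPTH);
endmodule
```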
Adaptive Test Generation For Fast Functional Coverage Closure (Azade Nazi – Google Brain)
A research project whose goal is to reduce the costs and effort of the entire ASIC development process. The team focuses heavily on adding AI-based automation to their flows and proposes a smart constraint solver that updates constraints by analyzing coverage feedback, improving the probability of covering verification corner cases. Data-driven approaches come with significant limitations, because the user is required to have substantial domain knowledge. Their solution is called CDG4CDTG: Coverage Dependent Graph for Coverage Driven Test Generation. In practice, they use a Bayesian network as the AI algorithm, with each coverpoint modeled as such a graph. Their framework can reduce coverage closure time by a factor of 8 to 27. The proposed solution targets functional coverage and does not yet consider code coverage.
Automatic Translation Of Natural Language To SystemVerilog Assertions (Abhishek Chauhan – Agnisys Technology Pvt. Ltd.)
The paper builds upon an interesting topic: making SVAs easier to define and understand. It attempts this through conversion from natural language (English) to SVA syntax and vice versa, using machine learning. The presented results and model accuracies are promising, even if the training data sets might be far from real project complexity.
A demo application is available online at iSpec.ai.
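An illustrative input/output pair of my own (not taken from the paper's data set) shows the kind of mapping involved:

```systemverilog
// English: "whenever req rises, ack must be asserted within 1 to 3 cycles"
module req_ack_check (input logic clk, req, ack);
  property req_ack_p;
    @(posedge clk) $rose(req) |-> ##[1:3] ack;
  endproperty
  a_req_ack: assert property (req_ack_p);
endmodule
```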
Final day – Low Power, UVM and ML
Problematic Bi-Directional Port Connections: How Well Is Your Simulator Filling The UPF LRM Void? (Brandon Skaggs – Cypress Semiconductor, An Infineon Technologies Company)
The paper investigates different solutions to the problems of bi-directional port modeling in analog and mixed-signal projects. The UPF standard contains no extensive description of this modeling problem, and simulator behavior varies from one vendor to another. The authors explained the problem through representative use cases in which supply nets can generate undefined values in the simulation environment. We liked their exhaustive analysis, with clear examples and modeling pitfalls. They concluded with a list of proposed improvements for the next UPF standard release, in which a more explicit description is expected. This tighter standardization should push the EDA vendors to converge so that inout HDL port models are interpreted the same way by all tools.
What Does The Sequence Say? Powering Productivity With Polymorphism (Rich Edelman – Siemens EDA)
This session discusses an interesting alternative to the standard UVM factory for achieving polymorphism. It explains the inner workings of the “picker”, a small class that lets the user generate different types of related transactions. The presentation concludes with an example of a sequence topology built on this concept.
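A minimal sketch of the picker idea as I understood it, with all names being mine rather than the author's code:

```systemverilog
import uvm_pkg::*;

class base_txn extends uvm_sequence_item;
  function new(string name = "base_txn"); super.new(name); endfunction
endclass

class read_txn extends base_txn;
  function new(string name = "read_txn"); super.new(name); endfunction
endclass

class write_txn extends base_txn;
  function new(string name = "write_txn"); super.new(name); endfunction
endclass

// The "picker": returns a randomly chosen related subtype through a
// base-class handle, without involving the UVM factory.
class picker;
  function base_txn pick();
    base_txn t;
    randcase
      1: begin read_txn  r = new(); t = r; end
      1: begin write_txn w = new(); t = w; end
    endcase
    return t;
  endfunction
endclass
```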
Although it is far from a complete replacement for the UVM factory, it might be useful in specific cases.
Proven Strategies For Better Verification Planning (Jeff Vance – Verilab, Jeff McNeal – Verilab, Paul Marriott – Verilab)
This workshop brings a high-level view of verification planning best practices, as well as common pitfalls encountered during the process. It might not provide new information to an experienced verification engineer, but it does shed light on the importance of clear directions and queries during the planning phase. The presentation is split between a planning part (mindset and strategies) and a scheduling part (pitfalls in work estimation and communication).
Overall, I consider it to be an interesting presentation for new verification engineers, with emphasis on the importance of good planning.
ML-Driven Verification: A Step Function In Productivity And Throughput (Ross Dickson – Cadence Design Systems, Matt Graham – Cadence)
This presentation covers the new ML engines incorporated in the latest Xcelium versions. The objective is to compress the regression test set and reduce runtime costs in terms of resource allocation. They did not mention what kinds of algorithms power these automation engines. The smart engines process a lot of log data, so performance improvements are expected only after several project iterations; when the infrastructure is first set up there is no data to work with, so no improvement should be expected in the initial phase. One smart engine, incorporated in vManager, tries to identify redundant tests and to order and prioritize the tests expected to generate novel scenarios. Another, incorporated in Xcelium, focuses on automating bug hunting, but requires a special infrastructure for collecting information from the revision control system. There are also smart engines in JasperGold, their formal verification tool, which try to optimize processes so that the execution time of proving properties is reduced.
Best paper award
Stuart Sutherland Best Paper Award
First Place Award
Second Place Award
Third Place Award
Stuart Sutherland Best Poster Award
First Place Award
Second Place Award
Third Place Award
Virtual experience
All presentations were hosted on Zoom. The chat box was usually active for questions, and a fair number of live questions and discussions popped up during the remainder of the session time after each presentation concluded.
For breaks and networking opportunities, gather.town was used. I don’t think it is for everybody, but it provides an interesting experience on the “virtualization” side of online meetings.
Acknowledgements
This article has been a collaborative effort achieved with the help of our colleagues.
Let us know your thoughts on the conference in the comments section below.