DVCon U.S. (March 1-4, 2021) has concluded its first-ever virtual edition.
Among the popular topics for submissions, according to the Technical Program Chair, were Portable Stimulus, verification productivity methods, and software-driven verification.
AMIQ was involved on multiple levels: sponsoring, exhibiting, contributing with papers, and, last but not least, attending technical presentations.
This post presents some of the highlights of the technical program that the AMIQ team attended and enjoyed.
- Tutorials & Workshops
- Technical Presentations
- Panels
- Best Papers
- Best Posters
- Virtual Experience
- Acknowledgments
Tutorials & Workshops
UVM Birds of a Feather (Mark Strickland – Marvell; Justin Refice – NVIDIA)
In this open session, the Accellera UVM Working Group called for input from the UVM users on ways to improve the adoption of the latest UVM library releases.
A live survey of the almost 170 participants showed that 50% use UVM 1.2, 40% use UVM 1.1d, and less than 5% have adopted the latest UVM 1800.2-2020.
Although the latest version fixes many bugs present in UVM 1.1d (approximately 240, of which about 50 were marked as major), adoption seems to be held back by backward incompatibilities, the lack of automatic migration solutions, and vendor dependencies.
Tutorial: Portable Stimulus 2.0 Is Here: What You Need to Know (Accellera’s Portable Stimulus Working Group members: Tom Fitzpatrick – Siemens EDA; Adnan Hamid – Breker Verification Systems; Matan Vax – Cadence Design Systems; Faris Khundakjie – Intel; Karthick Gururaj – Vayavya Labs; Hillel Miller – Synopsys)
The PSS presentations that have become a regular occurrence at each DVCon have improved over the years. This year’s tutorial was the most practical and technically complete one presented so far.
The session alternated between a general presentation of the terminology, code snippets, and in-depth problem solving.
Due to the vendor “race” for tools that translate between the PSS Abstract Layer and usable code for the target platforms, the Realization Layer still remains somewhat in the shadows.
Notably, most attendees’ questions pointed in this direction.
Workshop: UVM-SystemC Randomization – Updates from the SystemC Verification Working Group (Accellera’s SystemC Verification Working Group members: Dragos Dospinescu – AMIQ Consulting; Thilo Vörtler – Coseda Technologies; Martin Barnasconi – NXP Semiconductors; Stephan Gerth – Bosch Sensortec GmbH)
This tutorial presented the basic concepts of UVM-SystemC (the Accellera UVM standard implemented in SystemC) and showed how constrained randomization and functional coverage can be integrated to build a verification environment using the current UVM-SystemC library.
It also discussed the standardization of a common randomization layer based on the CRAVE constraint randomization library.
Our colleague Dragos Dospinescu presented the functional coverage library FC4SC, which AMIQ has donated to Accellera and which is now in the process of being standardized.
Workshop: Multi Language Verification Framework Standardization and Demo (Accellera’s Multi-Language Working Group members: Warren Stapleton & Bryan Sniderman – AMD; Alex Chudnovsky – Cadence; Faris Khundakjie – Intel; Martin Barnasconi – NXP Semiconductors)
The Multi-Language Working Group presented a proof-of-concept implementation using a multi-language example that combines the UVM library in SystemVerilog and SystemC.
The workshop discussed the multi-language verification frameworks’ concepts and the API targeted for standardization, along with requirements for a seamless integration and interoperability between UVM SystemVerilog and SystemC verification frameworks.
Workshop: Verification of Functional Safety for an Automotive AI Processor (Mihajlo Katona – Veriest Solutions)
The presentation was well structured, taking the audience through the multitude of steps necessary for this kind of project.
However, some parts of the presentation were very specific and lacked generality. An interesting aspect, from the perspective of a verification engineer used to UVM, is that the methodology behind it is a blast from the past, giving the impression at times that some automation is missing.
The conclusion drawn from the presentation is that although there is a safety standard and a “rule book”, human error is still the biggest problem, and project closure is based on review and “gut feeling” (the presenter’s words).
Half of the session was an open discussion, which shows that many engineers working in this area are looking for improvements to this specific verification flow.
Workshop: Early Design and Validation of an AI Accelerator’s System Level Performance Using an HLS Design Methodology (Michael Fingeroff – Siemens EDA)
This workshop presented an HLS design and verification flow built around Catapult, a high-level synthesis and verification platform.
It showed how using the open-source MatchLib SystemC library speeds up pre-HLS simulation and supports design decisions before committing to HLS. It was stated that this brings a 10x productivity increase over a manual RTL flow.
An interesting question was posed on the equivalence of the output RTL against the high-level algorithm.
The answer was that Catapult produces equivalent RTL in 99% of cases, and verification should be there to catch any problems in the remaining 1%.
Technical Presentations
This year there were a total of 97 abstract submissions out of which 42 papers were accepted. An additional 14 submissions were presented as posters.
Open-Source Framework for Co-Emulation Using PYNQ (Ioana Catalina Cristea & Dragos Dospinescu – AMIQ Consulting)
Our colleagues Ioana Catalina Cristea and Dragos Dospinescu put a great deal of effort into creating an open-source framework useful for co-emulation and beyond.
The paper was very well received by the audience and won 3rd place in the Best Paper Awards.
Ioana has posted a summary of the presented paper in this article.
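For context on the building blocks involved: PYNQ’s Python API exposes the FPGA fabric to host-side code through classes such as Overlay and MMIO. The minimal sketch below shows that kind of register-level access; it is not taken from the paper’s framework, and the bitstream name, base address, and register map are hypothetical.

```python
# Sketch only: register-level DUT access from Python via PYNQ.
# "dut_wrapper.bit", DUT_BASE, and the register offsets are hypothetical.
from pynq import MMIO, Overlay

overlay = Overlay("dut_wrapper.bit")   # program the FPGA with the DUT design

DUT_BASE = 0x4000_0000                 # hypothetical AXI base address of the DUT
dut = MMIO(DUT_BASE, 0x1000)           # map 4 KB of the DUT's register space

dut.write(0x00, 0xCAFE)                # drive a stimulus register
response = dut.read(0x04)              # sample a response register
```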
Configuration Conundrum: Managing Test Configuration With a Bite Sized Solution (Kevin Vasconcellos & Jeff McNeal – Verilab)
The paper presents an alternative way of configuring the verification environment by using policies, which are container classes for constraints.
The goal is to make debugging configuration constraint conflicts easier and to set up more flexible configuration constraints.
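As a rough illustration of the idea (heavily hedged: the paper works in SystemVerilog, where policies are classes carrying real constraints; the names and the simple range-intersection “solver” below are ours), a policy can be modeled as a named container that narrows the legal values of a configuration knob, which makes conflicts easy to attribute:

```python
# Loose Python analogue of constraint policies; all names are hypothetical.
import random

class RangePolicy:
    """Container for one 'constraint': keep a knob within [low, high]."""
    def __init__(self, name, knob, low, high):
        self.name, self.knob, self.low, self.high = name, knob, low, high

def randomize(knobs, policies):
    """Pick each knob inside the intersection of its policies' ranges."""
    result = {}
    for knob, (low, high) in knobs.items():
        for p in (p for p in policies if p.knob == knob):
            low, high = max(low, p.low), min(high, p.high)
            if low > high:  # named policies make conflicts easy to attribute
                raise ValueError(f"policy '{p.name}' conflicts on '{knob}'")
        result[knob] = random.randint(low, high)
    return result

cfg = randomize(
    knobs={"pkt_len": (1, 1500)},
    policies=[RangePolicy("short_pkts", "pkt_len", 1, 64),
              RangePolicy("no_tiny", "pkt_len", 16, 1500)],
)
```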
Lay It on Me: Creating Layered Constraints (Bryan Morris – Ciena Corp; Andrei Tarnauceanu – BTA Design Services)
The presented strategy relies on creating different layers for the constraints, in order to help both the constraint solver and the verification engineer.
Layer selection is done by manipulating rand_mode() to choose which variables are randomized. The process repeats for each layer: variables randomized in a previous layer become state variables for the current one.
Everything is encapsulated in macros, so adopting the approach seems fairly easy, and the authors are still working to improve debugging when using this solution.
It would have been interesting to see a performance comparison between the flat and layered approaches.
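A language-agnostic sketch of the layering idea (the paper does this in SystemVerilog via rand_mode(); the Python below only illustrates the concept, with hypothetical variable names): each layer randomizes its own variables, and everything decided by an earlier layer is frozen state that later decisions can depend on.

```python
# Illustration of layered randomization; variable names are hypothetical.
import random

def layer1(cfg):
    # Layer 1 randomizes only the high-level decision.
    cfg["mode"] = random.choice(["burst", "single"])

def layer2(cfg):
    # Layer 2 treats layer 1's result as frozen state it can depend on.
    cfg["length"] = random.randint(8, 64) if cfg["mode"] == "burst" else 1

cfg = {}
for layer in (layer1, layer2):
    layer(cfg)
```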
Verification Learns a New Language: An IEEE 1800.2 Implementation (Ray Salemi & Tom Fitzpatrick – Siemens EDA)
This paper presented a Python based implementation of the UVM library.
Although it has most of the features of the UVM SystemVerilog implementation, it differs in places: simplified syntax (e.g., for the factory and config DB), and different thread handling, sequence mechanism, and logging.
Coverage is not yet implemented, and a lot of development seemed to be in progress.
This is a topic to be followed in the future, as Python is becoming ubiquitous in both software and hardware development.
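The implementation described is the open-source pyuvm library. A minimal sketch of its simplified syntax, based on pyuvm’s public API as we understand it (details may differ between releases):

```python
# Sketch based on pyuvm's public API as we understand it; details may
# differ between releases.
from pyuvm import ConfigDB, uvm_test

class HelloTest(uvm_test):
    def build_phase(self):
        # Simplified config DB: no type parameterization, Python is dynamic.
        ConfigDB().set(None, "*", "GREETING", "Hello, DVCon!")

    async def run_phase(self):
        self.raise_objection()
        greeting = ConfigDB().get(self, "", "GREETING")
        self.logger.info(greeting)
        self.drop_objection()

# Typically launched from a cocotb test with:
#   await uvm_root().run_test("HelloTest")
```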
How to Overcome Editor Envy: Why Can’t My Editor Do That? (Dillan Mills & Chuck McClish – Microchip Technology)
This presentation made the case for implementing SystemVerilog language support on top of the Language Server Protocol.
It was mainly a theoretical discussion of the fact that the Language Server Protocol specification is gaining traction in software development and could benefit the hardware world as well.
Although it provided examples of features that are lacking in classic editors like Emacs and Vim, it avoided mentioning available HDL IDEs, such as AMIQ’s DVT, which already cover all of them.
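For readers unfamiliar with the protocol: LSP clients and servers exchange JSON-RPC messages over a simple Content-Length framing, so editor features become requests like textDocument/hover. A minimal sketch of that framing (the file URI and position are made up):

```python
# The Content-Length framing the Language Server Protocol uses, wrapping a
# hover request for a SystemVerilog file (URI and position are made up).
import json

def lsp_frame(payload: dict) -> bytes:
    body = json.dumps(payload).encode("utf-8")
    return f"Content-Length: {len(body)}\r\n\r\n".encode("ascii") + body

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/hover",
    "params": {
        "textDocument": {"uri": "file:///work/fifo.sv"},
        "position": {"line": 41, "character": 8},
    },
}
print(lsp_frame(request))
```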
To Infinity and Beyond – Streaming Data Sequences in UVM (Mark Litterick, Jeff Vance & Jeff Montesano – Verilab)
An interesting paper that presents a concept of autonomous stimulus generation using streaming data techniques. It primarily targets digital simulation of complex sensor SoCs that contain real-number models for the analog sub-components.
The paper demonstrates how to implement autonomous analog and digital data streaming patterns using an enhanced UVM sequence mechanism and driver operation with regard to low-level control knobs.
The level of detail included in the paper was particularly enjoyable.
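As a loose, language-agnostic illustration of the streaming concept (the paper itself enhances UVM sequences and drivers in SystemVerilog; everything below is our own simplification), stimulus can be modeled as an endless generator whose low-level knobs take effect while the stream runs:

```python
# Our own simplification of autonomous streaming stimulus; all names are
# hypothetical and nothing here is taken from the paper.
import itertools
import math

class StreamKnobs:
    """Low-level control knobs a driver could tweak mid-stream."""
    amplitude = 1.0
    frequency_hz = 1_000.0
    sample_rate_hz = 1_000_000.0

def sine_stream(knobs):
    """Yield real-number samples forever; knob changes apply immediately."""
    for n in itertools.count():
        t = n / knobs.sample_rate_hz
        yield knobs.amplitude * math.sin(2 * math.pi * knobs.frequency_hz * t)

knobs = StreamKnobs()
stream = sine_stream(knobs)
burst1 = [next(stream) for _ in range(4)]
knobs.amplitude = 0.5                  # adjust a knob while the stream runs
burst2 = [next(stream) for _ in range(4)]
```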
Adopting Accellera’s Portable Stimulus Standard: Early Development and Validation Using Virtual Prototyping (Simranjit Singh, Ashwani Aggarwal, Harshita Prabhar, Vishnu Ramadas, Seonil Brian Choi & Woojoo Space Kim – Samsung)
An interesting approach to building PSS abstract models by applying them in the early phases of the project, on virtual prototypes of the DUT.
The presentation focuses on the benefits of this approach for the overall progression of the project. It highlights that scenarios constructed using PSS, as opposed to the classic scenario implementation (scaling up from IP to SoC), are reusable in later stages of the project, such as emulation and post-silicon.
The examples focused on the initial development of the PSS model and its validation against a virtual prototype.
Although the paper does not go in-depth and no concrete implementation ideas can be directly extracted from it, the presented mindset can be useful for optimizing ASIC development.
Acceleration of Coreless SoC Design-Verification Using PSS on Configurable Testbench in Multi-Link PCIe Subsystems (Thanu Ganapathy, Pravin Kumar, Garima Srivastava & Seonil Brian Choi – Samsung; Harish Peta – Cadence Design Systems)
While the examples presented show some ideas on how an implementation can work, this paper primarily stands as an example of a project that adopted PSS.
The introductory part reiterated the purpose and benefits behind these approaches, a staple of every PSS-related paper so far.
As far as the implementation itself goes, the architecture and the correlation between the PSS model and the Realization Layer go a long way toward boosting the credibility of the paper.
The conclusions are well structured and present good metrics for the resources that went into the project, as well as a time comparison between the “normal” approach and using PSS.
Media Performance Validation in Emulation and Post Silicon Using Portable Stimulus Standard (Suresh Vasu, Nithin Venkatesh & Joydeep Maitra – Intel Corporation)
A bit of a niche project, but an interesting presentation nonetheless. It shows that although PSS is advertised for “everything”, it fits certain projects better than others.
In this case, the introduction of a higher layer of constraints and action control brings a sizable improvement to the verification effort.
The examples presented are general and, as in other cases, do not explain a particular implementation but rather provide the background for the idea.
The conclusion, the same as in the previous PSS sessions, is that development time is clearly reduced when this reusable higher layer of abstraction is introduced; the author mentioned an 80% reduction in project time.
Poster: “Bounded Proof” Sign-Off With Formal Coverage (Abhishek Anand & Chinyu Chen – Intel Corporation; Bathri Subramanian & Joe Hupcey – Siemens EDA)
This poster presents how sign-off can be achieved for properties that are not fully proven, with emphasis on two use cases.
Several abstractions were applied to reduce the overall number of states. In the end, an analysis of the unfilled coverage items was performed to achieve sign-off.
It would have been interesting to see the process of determining the required bound and how to decide when it is good enough.
Novelty-Driven Verification: Using Machine Learning to Identify Novel Stimuli and Close Coverage (Tim Blackmore & Rhys Hodson – Infineon Technologies; Sebastian Schaal – Luminovo GmbH)
It was an interesting presentation that showed how ML was used to close coverage on a RADAR unit.
The defined coverage was both black-box and white-box (internal probing), the latter being the hard one to fill.
They had a pool of pre-generated tests, which included the “feedback” responses in case of error scenarios.
The standard approach required 2 million test runs to fill the coverage, a process that took 6 months. After ranking, the number of runs required to fill the coverage was reduced to 3000.
The ML experiment used an autoencoder, a type of artificial neural network (ANN), to improve the test selection process. A reduced pool of 82,000 tests (out of the 2 million) was used, which included the aforementioned ranked tests.
The conclusion was that using ML they managed to fill 99.95% of the coverage with 40% fewer tests than a random pick would need. It was estimated that in a real-life scenario this would have saved approximately 3 months.
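A hedged sketch of the core mechanism as we understood it: an autoencoder is trained on the test features, and the tests it reconstructs worst are treated as the most novel and run first. Feature extraction, layer sizes, and training settings below are illustrative, not taken from the paper.

```python
# Illustrative novelty ranking with an autoencoder; feature extraction,
# layer sizes, and training settings are our assumptions, not the paper's.
import numpy as np
from tensorflow import keras

def build_autoencoder(n_features):
    inputs = keras.Input(shape=(n_features,))
    encoded = keras.layers.Dense(16, activation="relu")(inputs)
    decoded = keras.layers.Dense(n_features, activation="linear")(encoded)
    model = keras.Model(inputs, decoded)
    model.compile(optimizer="adam", loss="mse")
    return model

# One row of numeric features per pre-generated test (placeholder data).
test_features = np.random.rand(82_000, 32).astype("float32")

ae = build_autoencoder(n_features=32)
ae.fit(test_features, test_features, epochs=10, batch_size=256, verbose=0)

# Tests the model reconstructs poorly are the most novel: run those first.
errors = np.mean((ae.predict(test_features) - test_features) ** 2, axis=1)
novelty_ranking = np.argsort(errors)[::-1]
```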
Dynamically Optimized Test Generation Using Machine Learning (Rajarshi Roy, Mukhdeep Benipal & Saad Godil – NVIDIA)
The paper used Bayesian optimization (via the scikit-optimize Python library) to tweak constraints and steer stimulus toward coverage closure. It was mentioned that the approach benefits cases with large unexplored areas, as it avoids local maxima.
One of the presented examples filled coverpoints by letting the ML algorithm control delay injection (the probability and duration of the delays) through 27 numeric constraints.
The authors mentioned an improvement of between 2x and 6x.
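A minimal sketch of the underlying loop using scikit-optimize’s gp_minimize; the run_regression() hook, the knob names, and the ranges are hypothetical stand-ins for a real testbench interface.

```python
# Illustrative loop around scikit-optimize's gp_minimize; run_regression(),
# the knob names, and the ranges are hypothetical stand-ins.
import random

from skopt import gp_minimize
from skopt.space import Integer, Real

def run_regression(knobs):
    """Hypothetical hook: simulate with the given delay knobs and return
    the fraction of coverage bins hit (placeholder implementation)."""
    delay_probability, max_delay_cycles = knobs
    return random.random()  # stand-in for real coverage measurement

space = [
    Real(0.0, 1.0, name="delay_probability"),
    Integer(1, 100, name="max_delay_cycles"),
]

# Maximize coverage by minimizing its negative.
result = gp_minimize(lambda k: -run_regression(k), space, n_calls=30)
best_knobs = result.x
```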
Supporting Root Cause Analysis of Inaccurate Bug Prediction Based on Machine Learning – Lessons Learned When Interweaving Training Data and Source Code (Oscar Werneman & Daniel Hansson – Verifyter; Markus Borg – RISE Research Institutes of Sweden)
The presentation might be useful for someone who struggles with ML algorithm implementation issues and does not know where to look for the problems.
The algorithm in question is used in PinDown, an automatic debugger that, without simulating, analyzes the changes made in the code (commits) from the point where the test was passing up to the point where it starts to fail.
It uses ML to attach a probability to where a bug might be, and the most probable commits are run to identify the one in which the bug was introduced.
The initial ML model did not work as expected, and that is what the paper focuses on. It explains the authors’ approach to investigating the issue in the ML model, which consisted of:
– Performing Student’s t-test or the Mann-Whitney U-test (for non-normal distributions) between the predictions made during training and those made during live inference, to detect problems;
– Running multiple models in shadow mode for comparison.
The paper also proposes some methods for analyzing the features, along with multiple data-labeling techniques.
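The first check maps directly onto SciPy. A small sketch comparing the two prediction distributions (the helper name and significance threshold are our choices):

```python
# Comparing training-time and live prediction distributions with SciPy;
# the helper name and significance threshold are our choices.
from scipy import stats

def predictions_drifted(train_preds, live_preds, alpha=0.05, normal=False):
    if normal:
        _, p_value = stats.ttest_ind(train_preds, live_preds)
    else:  # Mann-Whitney U-test for non-normal distributions
        _, p_value = stats.mannwhitneyu(train_preds, live_preds)
    return p_value < alpha  # significant shift: investigate the model
```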
Panels
Verification in the Open-Source Era (Bipul Talukdar – SmartDV; Simon Davidmann – Imperas; Serge Leef – DARPA; Jean-Marie Brunet – Siemens EDA; Ashish Darbari – Axiomise; Tao Liu – Google; Moderated by Brian Bailey – Semiconductor Engineering)
This panel session can be summed up as a heated discussion on what open-source verification means.
Two different views crystallized during the discussion: one looking at the lower development cost that open source enables, and the other considering contribution and collaboration for the greater good of the industry.
However, both sides agreed that the standards now being developed around open-source frameworks are a step in the right direction.
Chip Design on Cloud – from Pipe Dream to Preeminence? (Megan Wachs – SiFive; Bob Lefferts – Synopsys; Eric Chesters – AMD; Richard Ho & Sashi Obilisetty – Google; Moderated by Ann Mutschler – Semiconductor Engineering)
Cloud datacenters are now the predominant choice for several industries. This panel revolved around the state of chip design (and verification) in the cloud.
Although the pandemic has forced companies to rely more on cloud infrastructure, there are security concerns that need to be addressed before moving sensitive jobs onto the cloud.
The panelists also discussed the cost aspect of cloud adoption and how some of their workflows and infrastructures had to be adapted to keep this under control.
Some companies have invested a lot in using cloud infrastructure for running everything, while others still retain some of their work in on-premises datacenters.
All in all, this topic is gaining traction in our industry, with several other sessions covering it during the conference:
Raising the [Verification] Bar: Cloud-Based Simulation Increases Verification Efficiency (Melvin Cardozo – Synopsys; Ahmed Elzeftawi – Amazon) & Shift Left: Cloud As the Technology Platform to Enable Faster Verification (Rajiv Malhotra – AMD; Peeyush Tungnawat & Sashi Obilisetty – Google)
Best Papers
This year’s Stuart Sutherland Best Paper Awards went to:
1st place – Formal Verification Experiences: Spiral Refinement Methodology for Silicon Bug Hunt (Ping Yeung, Mark Eslinger & Jin Hou – Siemens EDA)
2nd place – Advanced UVM, Multi-Interface, Reactive Stimulus Techniques (Clifford Cummings, Stephen Donofrio & Jeff Wilcox – Paradigm Works; Heath Chambers – HMC Design Verification)
3rd place – Open-Source Framework for Co-Emulation Using PYNQ (Ioana Catalina Cristea & Dragos Dospinescu – AMIQ Consulting)
Best Posters
The Best Poster Award winners are:
1st Place – Improving Software Testing Speed by 100X With SystemC Virtualization in IoT Devices (David Barahona, Motaz Thiab & Milica Orlandic – Norwegian University of Science and Technology; Isael Díaz & Joakim Urdahl – Nordic Semiconductor)
2nd Place – Automated Traceability of Requirements in the Design and Verification Process of Safety-Critical Mixed-Signal Systems (Gabriel Pachiana, Maximilian Grundwald, Thomas Markwirth & Christoph Sohrmann – Fraunhofer IIS/EAS)
3rd Place – Preventing Glitch Nightmares on CDC Paths: the Three Witches (Jian-Hua Yan – MediaTek; Ping Yeung, Stewart Li & Sulabh-Kumar Khare – Siemens EDA)
Virtual Experience
Although it lacked the ambience of the live conference, Virtual DVCon US 2021 provided a platform filled with opportunities to engage.
Registered attendees had the ability to post publicly viewable questions that the presenters could answer. Virtual engagements allowed for live discussions with exhibitors and other participants.
Another great feature of virtual conferences is that you can go back and watch the replays of all the sessions at your own pace.
The organizers mentioned the platform will be available through March 31, along with all the recordings and saved discussions.
Acknowledgments
The conference once again set the bar high in terms of technical program diversity and premium content.
A big thank you to my colleagues for providing their insight on this conference overview.
Let me know your thoughts on the conference in the comments section below.