Regressions can take up a lot of disk space. If neglected, a single regression can occupy more than 100 GB. When a project has multiple users who forget to clean up old regressions, you can reach terabytes of useless data pretty quickly. This article gives some hints on how to mitigate this issue. Most of the disk space is usually filled by log files, so that is what we will focus on.
Use UVM_NONE verbosity
As a verification engineer, one of the first rules you learn about regressions is that they should be run with UVM_NONE as the global verbosity. This works really well if the verification environment prints only the most important messages at UVM_NONE and leaves most of the messages at higher verbosity levels. Messages printed without the UVM macros should also be kept to a minimum. However, these rules are not always followed: sometimes you simply need a lot of messages to be printed, even when running regressions. This is where the second trick comes in.
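(For reference, the global verbosity mentioned above is typically passed to the simulation through the standard +UVM_VERBOSITY plusarg. The launch command below is only a sketch: the simulator invocation, file list and test name are placeholders for whatever your own regression flow uses.)

# Run a test with the global verbosity forced to UVM_NONE.
# Simulator, file list and test name are placeholders.
xrun -f compile.f +UVM_TESTNAME=my_test +UVM_VERBOSITY=UVM_NONE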
Delete logs for the tests that have no errors
Most of the time, logs are used for debugging: you need messages so that, if a test throws some kind of error, you have clues about the root cause. But if a test passes you will usually not look at its logs. For a passing test, the most important artifact is the coverage database. You want to see that the relevant metrics have been collected, but you don’t really care about the clues you left for yourself in case the test had failed.
Let’s see how we can automatically delete the logs of tests that have no errors. I have only tested these scripts with Cadence and Mentor regression tools.
Cadence
1. Save the following script in your editor as a .sh file:
#!/usr/bin/env bash

echo "The log files for the tests that have passed will be deleted."

if [ -f local_vsof.txt ]; then
    nof_errors=$(grep -ce "severity : error " -ce "severity :critical " local_vsof.txt)
    echo "Found ${nof_errors} errors in ${PWD}/local_vsof.txt."
    if [ ${nof_errors} -eq 0 ]; then
        echo "No errors found. Removing logs in ${PWD}."
        rm -rf *log*
    else
        echo "Found ${nof_errors} errors. Will not remove logs."
    fi
else
    echo "local_vsof.txt was not found in ${PWD}"
fi

2. In the tests group section of your vsif file add the following line:
post_run_script : path/to/my/script/clean_reg_logs.sh;
where clean_reg_logs.sh is the file you have created at step 1.
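For reference, here is a minimal sketch of where the attribute could go in a vsif file; the group and test names, as well as the other attributes, are made up and will differ in your own setup:

group smoke_tests {
    test my_test {
        count : 10;
    };
    post_run_script : path/to/my/script/clean_reg_logs.sh;
};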
Mentor
1. Save the following script in your editor as a .sh file:
#!/usr/bin/env bash

echo "The log files for the tests that have passed will be deleted."

if [ -f execScript.log ]; then
    nof_errors=$(grep -ce "UVM_ERROR /" -ce "UVM_FATAL /" execScript.log)
    echo "Found ${nof_errors} errors in ${PWD}/execScript.log."
    if [ ${nof_errors} -eq 0 ]; then
        echo "No errors found. Removing logs in ${PWD}."
        rm -rf *log*
    else
        echo "Found ${nof_errors} errors. Will not remove logs."
    fi
else
    echo "execScript.log was not found in ${PWD}"
fi
2. In the tests group section of your rmdb file add the following line:
<command> /path/to/my/script/clean_reg_logs.sh </command>
where clean_reg_logs.sh is the file you have created at step 1.
These scripts might not work for all setups; you may need to tweak them to fit your environment, but they should be a good starting point. If you have other tips for reducing disk space usage when running regressions, please leave them in the comments section.
2 Responses
Many times during regressions, core files get generated that occupy tons of space. Have you faced such issues? If yes, how can we overcome it? Currently, while the regression is in progress, I periodically search for core files and delete them. Not the best way to tackle the situation.
Hi Abdul,
I haven’t had this issue, but you could try an approach similar to the one presented in the article above: write a script that, after each test run, searches for core files and deletes them.
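For example, a minimal sketch that could run at the end of each test (the name pattern is an assumption, adjust it to the way core dumps are named on your machines):

# Delete core dump files left behind in the current run directory.
find . -maxdepth 1 -type f \( -name "core" -o -name "core.*" \) -delete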