BPU Verification Environment

This environment provides all the dependencies and toolkits required for BPU verification. It must run on a Linux system and includes the following components:

  1. Generation of the Python DUT modules to be verified
  2. An example project for DUT verification
  3. A component for generating verification reports

Project to be verified: the XiangShan BPU (Branch Prediction Unit)

Install Dependencies

In addition to a basic gcc/python3 development environment, this repository depends on the following two projects. Please install them first, along with their respective dependencies.

  1. picker
  2. mlvp

Then install the other dependencies with the following commands:

apt install lcov # genhtml
pip install pytest-sugar pytest-rerunfailures pytest-xdist pytest-assume pytest-html # pytest

Generate Module to be Verified

Download the repository

git clone https://github.com/XS-MLVP/env-xs-ov-00-bpu.git
cd env-xs-ov-00-bpu

Generate uFTB

make uftb TL=python

The above command generates an out directory in the current directory; the UT_FauFTB package under picker_out_uFTB is the Python module to be verified, and it can be imported directly in a Python environment. Because the generated Python DUT depends on the Python version, a universal prebuilt python-dut cannot be provided; you need to compile it yourself.

out
`-- picker_out_uFTB
    `-- UT_FauFTB
        |-- _UT_FauFTB.so
        |-- __init__.py
        |-- libDPIFauFTB.a
        |-- libUTFauFTB.so
        |-- libUT_FauFTB.py
        |-- uFTB.fst.hier
        `-- xspcomm
            |-- __init__.py
            |-- __pycache__
            |   |-- __init__.cpython-38.pyc
            |   `-- pyxspcomm.cpython-38.pyc
            |-- _pyxspcomm.so -> _pyxspcomm.so.0.0.1
            |-- _pyxspcomm.so.0.0.1
            |-- info.py
            `-- pyxspcomm.py

4 directories, 13 files

After importing the UT_FauFTB module, you can perform simple tests in the Python environment.

from UT_FauFTB import *

if __name__ == "__main__":
    # Create DUT
    uftb = DUTFauFTB()
    # Init DUT with clock pin name
    uftb.init_clock("clock")

    # Your testcases here
    # ...

    # Destroy DUT
    uftb.finalize()
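
As a hedged illustration of what might go in the "Your testcases here" section, the sketch below drives the DUT for a few cycles using picker's usual Step interface. The reset pin name is an assumption (Chisel-generated modules normally expose clock and reset); check the generated __init__.py for the real port names of your DUT.

from UT_FauFTB import *

if __name__ == "__main__":
    # Create and initialize the DUT as above
    uftb = DUTFauFTB()
    uftb.init_clock("clock")

    # Assumed pin name: hold reset high for one cycle, then release it
    uftb.reset.value = 1
    uftb.Step(1)
    uftb.reset.value = 0

    # Advance the simulation for ten more clock cycles
    uftb.Step(10)

    uftb.finalize()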

Other modules to be verified, such as TAGE-SC and FTB, can also be generated with similar commands.

Supported module names are: uftb, tage_sc, ftb, ras, ittage. You can also generate all DUT modules at once with the following command:

make all TL=python

BPU Peripheral Environment

The BPU is a module inside the CPU. This environment provides the peripheral infrastructure required to drive the BPU with trace data (when verifying, you can decide whether to use it based on your actual needs).

Branch Trace Tool: BRTParser

BRTParser is a tool we designed specifically for BPU verification; it automatically captures and parses the branch information in a program's instruction stream. It is based on OracleBP, a development tool for the Xiangshan frontend. BRTParser integrates the NEMU simulator internally, so it can run programs directly and capture the branch information in them. BRTParser parses the captured branch information into a universal format, which makes subsequent verification work more convenient.

Please refer to BRTParser in the utils directory for details.

FTQ Running Environment

Since a single sub-predictor module cannot run real programs on its own, let alone have its prediction accuracy and functional correctness verified on actual programs, we provide a simple FTQ environment. This environment uses the branch information generated by BRTParser to produce the program's instruction execution stream. The FTQ parses the predictor's prediction results and compares them with the actual branch information to verify the predictor's accuracy. In addition, the FTQ issues redirect and update information to the BPU, so that the predictor can run continuously in the FTQ environment.

To allow a sub-predictor to work normally, we also simulate the BPU top-level module, which provides timing control and other functions for the sub-predictor. For non-FTB sub-predictors, we also provide a simple FTB implementation that adds the basic FTB prediction information to the sub-predictor's results.

Currently, we use the FTQ environment to drive the uFTB sub-predictor, and we have written a timing-accurate uFTB reference model. The specific implementation and usage of the FTQ environment can be found in this test case; see test_src/uFTB-with-ftq for details.

Write Test Cases

Participants in the verification need to write test cases to verify the functional correctness of the BPU sub-module. In this repository, all test cases need to be placed in the tests directory.

We provide a test case running framework based on pytest, which makes it easy to write test cases, define functional coverage, generate test reports, and so on. Therefore, when writing test cases, you need to follow the conventions introduced in this section.

Running Tests

We have provided two basic test cases for uFTB. Each test case is placed in a separate subdirectory under the tests directory, and the subdirectory name is the name of the test case. Before running these two test cases, please make sure that the uFTB module has been compiled correctly and that the dependencies required by the test cases have been installed.

Afterwards, you can run the corresponding test cases. For example, to run the uFTB_raw test case, just run the following command in the tests directory:

make TEST=uFTB_raw run

This command will automatically run the uFTB_raw test case and generate waveform, coverage, and test report information. The test report will be saved in the tests/report directory. You can open tests/report/report.html in your browser to view the content of this test report. The test report style is shown in the following figure, and other files will also be generated in the tests directory.

If you need to run all test cases at once, you can run the following command:

make run

The generated test report will include the test results of all test cases.

Adding Test Cases

When writing your own test cases, you only need to create a new subdirectory under the tests directory for the new test case; the name of the subdirectory is the name of the test case. You can put any code files in this directory, as long as the entry file of the test case is named test_<test name>.py and, within it, the entry function is also named test_<test name>. You can write one or more entry files and entry functions, as in the layout sketch below.
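
For example, a hypothetical test case named my_case (the directory, file, and function names here are placeholders) would be laid out like this:

tests
`-- my_case
    |-- test_my_case.py    # entry file; contains an entry function named test_my_case
    `-- helpers.py         # any additional code used by the test case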

In each entry function, you need to follow the format below:


# Import your generated DUT package (UT_Mydut is a placeholder; use your own package name)
from UT_Mydut import *
import mlvp.funcov as fc
from mlvp.reporter import set_func_coverage, set_line_coverage

def test_mydut(request):
    # Create the DUT and specify the waveform and coverage file names for this test
    # Note: each test function should use different waveform and coverage file names, otherwise the files will be overwritten
    my_dut = DUTMydut(waveform_filename="my_test.fst", coverage_filename="my_test_coverage.dat")

    # Specify function coverage rules
    g1 = fc.CovGroup("group1")
    # ...
    g2 = fc.CovGroup("group2")
    # ...


    # Test running code
    # ...

    # Finish the test and write out the coverage information; the coverage file name must match the one specified above
    my_dut.finalize()
    set_func_coverage(request, [g1, g2])
    set_line_coverage(request, "my_test_coverage.dat")
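
As a rough illustration of the "Specify function coverage rules" step in the template above, a coverage group could be filled in as shown below. This is a hedged sketch: the add_watch_point/Eq usage follows the mlvp funcov examples, and the pin name and bin names are hypothetical placeholders; check the mlvp documentation and your DUT's actual ports.

    # Hypothetical watch point on coverage group g1: record the cases where the
    # DUT's valid output (placeholder pin name io_out_valid) is 0 or 1
    g1.add_watch_point(my_dut.io_out_valid,
                       {"valid_low": fc.Eq(0), "valid_high": fc.Eq(1)},
                       name="output_valid")

    # Sample the coverage groups while the test runs, e.g. after each clock step:
    # my_dut.Step(1); g1.sample(); g2.sample()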

After the test case is written, you can run the following command directly in the tests directory:

make TEST=<test case name> run

This will automatically complete the running of the test case, waveform generation, coverage statistics, and test report generation.

When the local test passes, you can submit the test case. When submitting, the test results in the test report need to meet the following requirements:

  1. All test cases pass
  2. Code line coverage is greater than 95%
  3. Function coverage reaches 100%

Log Output

In the mlvp library, a dedicated logger is provided. We recommend using this logger to record information during the test.

Specifically, you can record logs in the following way:

import mlvp

mlvp.debug("This is a debug message", extra={"log_id": "dut"})
mlvp.info("This is an info message")
mlvp.warning("This is a warning message", extra={"log_id": "bundle"})
mlvp.error("This is an error message")
mlvp.critical("This is a critical message")

If you need to change the log format, the log level, or whether logs are written to a file, you can do so by calling the setup_logging function provided by the mlvp library:

def setup_logging(
    log_level=logging.INFO,
    format=default_format,
    console_display=True,
    log_file=None)
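
For example, based on the signature above, a call might look like the sketch below (assuming setup_logging is exported at the top level of mlvp; the log file name is just an illustration):

import logging
import mlvp

# Log at DEBUG level, keep console output, and also write logs to a file
mlvp.setup_logging(log_level=logging.DEBUG, console_display=True, log_file="uftb_test.log")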

Suggested Verification Process (Must Read)

1. Read the documentation and sort out the test points. While reading the BPU documentation, you can extract and refine the function points to be tested.

2. Read the code, then encapsulate and drive the DUT. The code contains all the implementation details; based on it, you can wrap the DUT's basic operations into individual functions and then check that these features behave normally.

3. Write test cases based on the test points. Using the test points and the DUT's basic wrapper functions, complete the testing of most functions. (Don't write the reference model right away.)

4. Write the reference model. Once all basic function points have been tested, you can write the reference model based on your understanding. (If all function points have been tested and the functional and line coverage already meet the requirements, the reference model can be skipped.)

5. Randomized full-system testing. Drive the DUT and the reference model with the same random stimuli and compare the results. Perform coverage analysis and construct targeted inputs to improve coverage.

6. Write the test report. Complete the write-up according to the report format requirements in the base documentation.

*Note: If you find a bug at any point during the above verification process, you can submit it at any time through a PR.*