Diffstat (limited to 'docs/system/riscv')
-rw-r--r--  docs/system/riscv/microchip-icicle-kit.rst | 149
-rw-r--r--  docs/system/riscv/shakti-c.rst             |  82
-rw-r--r--  docs/system/riscv/sifive_u.rst             | 375
-rw-r--r--  docs/system/riscv/virt.rst                 | 189
4 files changed, 795 insertions, 0 deletions
diff --git a/docs/system/riscv/microchip-icicle-kit.rst b/docs/system/riscv/microchip-icicle-kit.rst
new file mode 100644
index 00000000..40798b1a
--- /dev/null
+++ b/docs/system/riscv/microchip-icicle-kit.rst
@@ -0,0 +1,149 @@
+Microchip PolarFire SoC Icicle Kit (``microchip-icicle-kit``)
+=============================================================
+
+The Microchip PolarFire SoC Icicle Kit integrates a PolarFire SoC, which
+combines one SiFive E51 core, four SiFive U54 cores, many on-chip peripherals,
+and an FPGA.
+
+For more details about Microchip PolarFire SoC, please see:
+https://www.microsemi.com/product-directory/soc-fpgas/5498-polarfire-soc-fpga
+
+The Icicle Kit board information can be found here:
+https://www.microsemi.com/existing-parts/parts/152514
+
+Supported devices
+-----------------
+
+The ``microchip-icicle-kit`` machine supports the following devices:
+
+* 1 E51 core
+* 4 U54 cores
+* Core Local Interruptor (CLINT)
+* Platform-Level Interrupt Controller (PLIC)
+* L2 Loosely Integrated Memory (L2-LIM)
+* DDR memory controller
+* 5 MMUARTs
+* 1 DMA controller
+* 2 GEM Ethernet controllers
+* 1 SDHC storage controller
+
+Boot options
+------------
+
+The ``microchip-icicle-kit`` machine can start using the standard -bios
+functionality for loading its BIOS image, namely the Hart Software Services
+(HSS_). HSS loads the second stage bootloader U-Boot from an SD card, and a
+kernel can then be loaded from U-Boot. The machine also supports direct kernel
+booting via the -kernel option, along with a device tree blob via -dtb. When
+direct kernel boot is used, the OpenSBI fw_dynamic BIOS image is used to boot
+a payload like U-Boot or an OS kernel directly.
+
+The user-provided DTB should meet the following requirements:
+
+* The /cpus node should contain at least one subnode for E51 and the number
+ of subnodes should match QEMU's ``-smp`` option
+* The /memory reg size should match QEMU’s selected ram_size via ``-m``
+* Should contain a node for the CLINT device with a compatible string
+ "riscv,clint0"
+
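+For the CLINT requirement in particular, the DTB is expected to carry a node
+along the lines of the sketch below. This mirrors the CLINT example shown in
+the ``sifive_u`` documentation; the 0x2000000 base address and the
+``cpuN_intc`` labels are assumptions that should be checked against the actual
+PolarFire SoC device tree sources:
+
+.. code-block:: none
+
+ clint: clint@2000000 {
+     compatible = "riscv,clint0";
+     interrupts-extended = <&cpu0_intc 3 &cpu0_intc 7
+                            &cpu1_intc 3 &cpu1_intc 7
+                            &cpu2_intc 3 &cpu2_intc 7
+                            &cpu3_intc 3 &cpu3_intc 7
+                            &cpu4_intc 3 &cpu4_intc 7>;
+     reg = <0x0 0x2000000 0x0 0x10000>;
+ };
+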
+QEMU uses the following truth table to select which payload to execute:
+
+===== ========== ========== =======
+-bios -kernel    -dtb       payload
+===== ========== ========== =======
+ N     N         don't care HSS
+ Y    don't care don't care HSS
+ N     Y          Y         kernel
+===== ========== ========== =======
+
+The memory is set to 1537 MiB by default, which is the minimum amount of high
+memory required by HSS. A sanity check on the RAM size is performed in the
+machine init routine, prompting the user to increase the RAM size to at least
+1537 MiB when less than that is detected.
+
+Running HSS
+-----------
+
+The HSS 2020.12 release was tested at the time of writing. To build an HSS
+image that can be booted by the ``microchip-icicle-kit`` machine, type the
+following in the HSS source tree:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ cp boards/mpfs-icicle-kit-es/def_config .config
+ $ make BOARD=mpfs-icicle-kit-es
+
+Download the official SD card image released by Microchip and prepare it for
+QEMU usage:
+
+.. code-block:: bash
+
+ $ wget ftp://ftpsoc.microsemi.com/outgoing/core-image-minimal-dev-icicle-kit-es-sd-20201009141623.rootfs.wic.gz
+ $ gunzip core-image-minimal-dev-icicle-kit-es-sd-20201009141623.rootfs.wic.gz
+ $ qemu-img resize core-image-minimal-dev-icicle-kit-es-sd-20201009141623.rootfs.wic 4G
+
+Then we can boot the machine by:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M microchip-icicle-kit -smp 5 \
+ -bios path/to/hss.bin -sd path/to/sdcard.img \
+ -nic user,model=cadence_gem \
+ -nic tap,ifname=tap,model=cadence_gem,script=no \
+ -display none -serial stdio \
+ -chardev socket,id=serial1,path=serial1.sock,server=on,wait=on \
+ -serial chardev:serial1
+
+With the above command line, the current terminal session will be used for the
+first serial port. Open another terminal window, and use ``minicom`` to connect
+to the second serial port:
+
+.. code-block:: bash
+
+ $ minicom -D unix\#serial1.sock
+
+HSS output is on the first serial port (stdio) and U-Boot output is on the
+second serial port. U-Boot will automatically load the Linux kernel from
+the SD card image.
+
+Direct Kernel Boot
+------------------
+
+Sometimes we just want to test booting a new kernel, and transforming the
+kernel image into the format required by the HSS boot flow is tedious. We can
+use -kernel for direct kernel booting, just like other RISC-V machines do.
+
+In this mode, the OpenSBI fw_dynamic BIOS image for 'generic' platform is
+used to boot an S-mode payload like U-Boot or OS kernel directly.
+
+For example, the following commands build a U-Boot image from U-Boot mainline
+v2021.07 for the Microchip Icicle Kit board:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make microchip_mpfs_icicle_defconfig
+ $ make
+
+Then we can boot the machine by:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M microchip-icicle-kit -smp 5 -m 2G \
+ -sd path/to/sdcard.img \
+ -nic user,model=cadence_gem \
+ -nic tap,ifname=tap,model=cadence_gem,script=no \
+ -display none -serial stdio \
+ -kernel path/to/u-boot/build/dir/u-boot.bin \
+ -dtb path/to/u-boot/build/dir/u-boot.dtb
+
+CAVEATS:
+
+* Check the "stdout-path" property in the /chosen node of the DTB to determine
+ which serial port is used for the serial console, e.g. if the console is set
+ to the second serial port, change to use "-serial null -serial stdio". One
+ way to check this property is shown after this list.
+* The default U-Boot configuration uses CONFIG_OF_SEPARATE, so the ELF image
+ ``u-boot`` cannot be passed to "-kernel" as it does not contain the DTB;
+ instead, ``u-boot.bin``, which does contain one, has to be used. To use the
+ ELF image, we need to change to CONFIG_OF_EMBED or CONFIG_OF_PRIOR_STAGE.
+
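+One way to check the "stdout-path" property mentioned in the first caveat,
+assuming the device tree compiler ``dtc`` is installed, is to decompile the
+DTB and look at the /chosen node:
+
+.. code-block:: bash
+
+ $ dtc -I dtb -O dts path/to/u-boot/build/dir/u-boot.dtb | grep stdout-path
+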
+.. _HSS: https://github.com/polarfire-soc/hart-software-services
diff --git a/docs/system/riscv/shakti-c.rst b/docs/system/riscv/shakti-c.rst
new file mode 100644
index 00000000..fea57f7b
--- /dev/null
+++ b/docs/system/riscv/shakti-c.rst
@@ -0,0 +1,82 @@
+Shakti C Reference Platform (``shakti_c``)
+==========================================
+
+The Shakti C Reference Platform is a reference platform for the Shakti SoC,
+based on the Arty A7 100T FPGA board.
+
+The Shakti SoC is based on the Shakti C-class processor core. Shakti C is a
+64-bit RV64GCSUN processor core.
+
+For more details on Shakti SoC, please see:
+https://gitlab.com/shaktiproject/cores/shakti-soc/-/blob/master/fpga/boards/artya7-100t/c-class/README.rst
+
+For more info on the Shakti C-class core, please see:
+https://c-class.readthedocs.io/en/latest/
+
+Supported devices
+-----------------
+
+The ``shakti_c`` machine supports the following devices:
+
+ * 1 C-class core
+ * Core Local Interruptor (CLINT)
+ * Platform-Level Interrupt Controller (PLIC)
+ * 1 UART
+
+Boot options
+------------
+
+The ``shakti_c`` machine can start using the standard -bios
+functionality for loading a bare-metal application or OpenSBI.
+
+Boot the machine
+----------------
+
+Shakti SDK
+~~~~~~~~~~
+The Shakti SDK can be used to generate bare-metal example UART applications.
+
+.. code-block:: bash
+
+ $ git clone https://gitlab.com/behindbytes/shakti-sdk.git
+ $ cd shakti-sdk
+ $ make software PROGRAM=loopback TARGET=artix7_100t
+
+The binary will be generated in:
+ software/examples/uart_applns/loopback/output/loopback.shakti
+
+You can also download the precompiled example applications using the commands
+below.
+
+.. code-block:: bash
+
+ $ wget -c https://gitlab.com/behindbytes/shakti-binaries/-/raw/master/sdk/shakti_sdk_qemu.zip
+ $ unzip shakti_sdk_qemu.zip
+
+Then we can run the UART example using:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M shakti_c -nographic \
+ -bios path/to/shakti_sdk_qemu/loopback.shakti
+
+OpenSBI
+~~~~~~~
+We can also run OpenSBI with Test Payload.
+
+.. code-block:: bash
+
+ $ git clone https://github.com/riscv/opensbi.git -b v0.9
+ $ cd opensbi
+ $ wget -c https://gitlab.com/behindbytes/shakti-binaries/-/raw/master/dts/shakti.dtb
+ $ export CROSS_COMPILE=riscv64-unknown-elf-
+ $ export FW_FDT_PATH=./shakti.dtb
+ $ make PLATFORM=generic
+
+fw_payload.elf will be generated in build/platform/generic/firmware/.
+Boot it using the QEMU command below:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M shakti_c -nographic \
+ -bios path/to/fw_payload.elf
diff --git a/docs/system/riscv/sifive_u.rst b/docs/system/riscv/sifive_u.rst
new file mode 100644
index 00000000..7b166567
--- /dev/null
+++ b/docs/system/riscv/sifive_u.rst
@@ -0,0 +1,375 @@
+SiFive HiFive Unleashed (``sifive_u``)
+======================================
+
+The SiFive HiFive Unleashed is a RISC-V development board featuring the
+SiFive Freedom U540 multi-core RISC-V processor.
+
+Supported devices
+-----------------
+
+The ``sifive_u`` machine supports the following devices:
+
+* 1 E51 / E31 core
+* Up to 4 U54 / U34 cores
+* Core Local Interruptor (CLINT)
+* Platform-Level Interrupt Controller (PLIC)
+* Power, Reset, Clock, Interrupt (PRCI)
+* L2 Loosely Integrated Memory (L2-LIM)
+* DDR memory controller
+* 2 UARTs
+* 1 GEM Ethernet controller
+* 1 GPIO controller
+* 1 One-Time Programmable (OTP) memory with stored serial number
+* 1 DMA controller
+* 2 QSPI controllers
+* 1 ISSI 25WP256 flash
+* 1 SD card in SPI mode
+* PWM0 and PWM1
+
+Please note the real-world HiFive Unleashed board has a fixed configuration of
+1 E51 core plus 4 U54 cores, and its RISC-V cores boot in 64-bit mode. With
+QEMU, one can create a machine with 1 E51 core and up to 4 U54 cores. It is
+also possible to create a 32-bit variant with the same peripherals, except
+that the RISC-V cores are replaced by 32-bit ones (E31 and U34), to help
+testing of 32-bit guest software.
+
+Hardware configuration information
+----------------------------------
+
+The ``sifive_u`` machine automatically generates a device tree blob ("dtb")
+which it passes to the guest, if there is no ``-dtb`` option. This provides
+information about the addresses, interrupt lines and other configuration of
+the various devices in the system. Guest software should discover the devices
+that are present in the generated DTB instead of using a DTB for the real
+hardware, as some of the devices are not modeled by QEMU and trying to access
+these devices may cause unexpected behavior.
+
+If users want to provide their own DTB, they can use the ``-dtb`` option.
+These DTBs should meet the following requirements:
+
+* The /cpus node should contain at least one subnode for E51 and the number
+ of subnodes should match QEMU's ``-smp`` option
+* The /memory reg size should match QEMU’s selected ram_size via ``-m``
+* Should contain a node for the CLINT device with a compatible string
+ "riscv,clint0" if using with OpenSBI BIOS images
+
+Boot options
+------------
+
+The ``sifive_u`` machine can start using the standard -kernel functionality
+for loading a Linux kernel, a VxWorks kernel, a modified U-Boot bootloader
+(S-mode) or an ELF executable, with the default OpenSBI firmware image as the
+-bios. It also supports booting the unmodified U-Boot bootloader using the
+standard -bios functionality.
+
+Machine-specific options
+------------------------
+
+The following machine-specific options are supported:
+
+- serial=nnn
+
+ The board serial number. When not given, the default serial number 1 is used.
+
+ SiFive reserves the first 1 KiB of the 16 KiB OTP memory for internal use.
+ Currently its only use is to store the serial number of the board at offset
+ 0xfc. U-Boot reads the serial number from the OTP memory and uses it to
+ generate a unique MAC address to be programmed into the on-chip GEM Ethernet
+ controller. When multiple QEMU ``sifive_u`` machines are created and
+ connected to the same subnet, they all have the same MAC address, which
+ makes the network unusable. In such a scenario, the user should give
+ different values to serial= when creating different ``sifive_u`` machines,
+ as shown in the example after this option list.
+
+- start-in-flash
+
+ When given, QEMU's ROM code jumps to the QSPI memory-mapped flash directly.
+ Otherwise QEMU jumps to DRAM or L2-LIM depending on the msel= value. When
+ not given, it defaults to direct DRAM booting.
+
+- msel=[6|11]
+
+ Mode Select (MSEL[3:0]) pins value, used to control where to boot from.
+
+ The FU540 SoC supports booting from several sources, which are controlled
+ using the Mode Select pins on the chip. Typically, the boot process runs
+ through several stages before it begins execution of user-provided programs.
+ These stages typically include the following:
+
+ 1. Zeroth Stage Boot Loader (ZSBL), which is contained in an on-chip mask
+ ROM and provided by QEMU. Note the ROM code implemented by QEMU is not the
+ same as what is programmed into the hardware; the QEMU version is simplified,
+ but it provides the same functionality as the hardware.
+ 2. First Stage Boot Loader (FSBL), which brings up PLLs and DDR memory.
+ This is U-Boot SPL.
+ 3. Second Stage Boot Loader (SSBL), which further initializes additional
+ peripherals as needed. This is U-Boot proper combined with an OpenSBI
+ fw_dynamic firmware image.
+
+ msel=6 means FSBL and SSBL are both on the QSPI flash. msel=11 means FSBL
+ and SSBL are both on the SD card.
+
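+For example, when creating a second ``sifive_u`` machine on the same subnet,
+the serial= option can be given a different value so that U-Boot derives a
+different MAC address. A hypothetical invocation (the remaining options simply
+follow the Linux kernel example in the next section) could be:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u,serial=2 -smp 5 -m 2G \
+     -display none -serial stdio \
+     -kernel /path/to/Image
+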
+Running Linux kernel
+--------------------
+
+Linux mainline v5.10 release is tested at the time of writing. To build a
+Linux mainline kernel that can be booted by the ``sifive_u`` machine in
+64-bit mode, simply configure the kernel using the defconfig configuration:
+
+.. code-block:: bash
+
+ $ export ARCH=riscv
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make defconfig
+ $ make
+
+To boot the newly built Linux kernel in QEMU with the ``sifive_u`` machine:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u -smp 5 -m 2G \
+ -display none -serial stdio \
+ -kernel arch/riscv/boot/Image \
+ -initrd /path/to/rootfs.ext4 \
+ -append "root=/dev/ram"
+
+Alternatively, we can use a custom DTB to boot the machine, by inserting a
+CLINT node into fu540-c000.dtsi in the Linux kernel source tree,
+
+.. code-block:: none
+
+ clint: clint@2000000 {
+ compatible = "riscv,clint0";
+ interrupts-extended = <&cpu0_intc 3 &cpu0_intc 7
+ &cpu1_intc 3 &cpu1_intc 7
+ &cpu2_intc 3 &cpu2_intc 7
+ &cpu3_intc 3 &cpu3_intc 7
+ &cpu4_intc 3 &cpu4_intc 7>;
+ reg = <0x00 0x2000000 0x00 0x10000>;
+ };
+
+rebuilding the kernel, and then booting with the following command line options:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u -smp 5 -m 8G \
+ -display none -serial stdio \
+ -kernel arch/riscv/boot/Image \
+ -dtb arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dtb \
+ -initrd /path/to/rootfs.ext4 \
+ -append "root=/dev/ram"
+
+To build a Linux mainline kernel that can be booted by the ``sifive_u`` machine
+in 32-bit mode, use the rv32_defconfig configuration. A patch is required to
+fix the 32-bit boot issue for Linux kernel v5.10.
+
+.. code-block:: bash
+
+ $ export ARCH=riscv
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ curl https://patchwork.kernel.org/project/linux-riscv/patch/20201219001356.2887782-1-atish.patra@wdc.com/mbox/ > riscv.patch
+ $ git am riscv.patch
+ $ make rv32_defconfig
+ $ make
+
+Replace ``qemu-system-riscv64`` with ``qemu-system-riscv32`` in the command
+line above to boot the 32-bit Linux kernel. A rootfs image containing 32-bit
+applications must be used in order for the kernel to boot to user space.
+
+Running VxWorks kernel
+----------------------
+
+VxWorks 7 SR0650 release is tested at the time of writing. To build a 64-bit
+VxWorks mainline kernel that can be booted by the ``sifive_u`` machine, simply
+create a VxWorks source build project based on the sifive_generic BSP, and a
+VxWorks image project to generate the bootable VxWorks image, by following the
+BSP documentation instructions.
+
+A pre-built 64-bit VxWorks 7 image for HiFive Unleashed board is available as
+part of the VxWorks SDK for testing as well. Instructions to download the SDK:
+
+.. code-block:: bash
+
+ $ wget https://labs.windriver.com/downloads/wrsdk-vxworks7-sifive-hifive-1.01.tar.bz2
+ $ tar xvf wrsdk-vxworks7-sifive-hifive-1.01.tar.bz2
+ $ ls bsps/sifive_generic_1_0_0_0/uboot/uVxWorks
+
+To boot the VxWorks kernel in QEMU with the ``sifive_u`` machine, use:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u -smp 5 -m 2G \
+ -display none -serial stdio \
+ -nic tap,ifname=tap0,script=no,downscript=no \
+ -kernel /path/to/vxWorks \
+ -append "gem(0,0)host:vxWorks h=192.168.200.1 e=192.168.200.2:ffffff00 u=target pw=vxTarget f=0x01"
+
+It is also possible to test 32-bit VxWorks on the ``sifive_u`` machine. Create
+a 32-bit project to build the 32-bit VxWorks image, and use exactly the same
+command line options with ``qemu-system-riscv32``.
+
+Running U-Boot
+--------------
+
+U-Boot mainline v2021.07 release is tested at the time of writing. To build a
+U-Boot mainline bootloader that can be booted by the ``sifive_u`` machine, use
+the sifive_unleashed_defconfig with similar commands as described above for
+Linux:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ export OPENSBI=/path/to/opensbi-riscv64-generic-fw_dynamic.bin
+ $ make sifive_unleashed_defconfig
+ $ make
+
+You will get the spl/u-boot-spl.bin and u-boot.itb files in the build tree.
+
+To start U-Boot using the ``sifive_u`` machine, prepare an SPI flash image or
+an SD card image that is properly partitioned and populated with the correct
+contents. genimage_ can be used to generate these images.
+
+A sample configuration file for a 128 MiB SD card image is:
+
+.. code-block:: bash
+
+ $ cat genimage_sdcard.cfg
+ image sdcard.img {
+ size = 128M
+
+ hdimage {
+ gpt = true
+ }
+
+ partition u-boot-spl {
+ image = "u-boot-spl.bin"
+ offset = 17K
+ partition-type-uuid = 5B193300-FC78-40CD-8002-E86C45580B47
+ }
+
+ partition u-boot {
+ image = "u-boot.itb"
+ offset = 1041K
+ partition-type-uuid = 2E54B353-1271-4842-806F-E436D6AF6985
+ }
+ }
+
+The SPI flash image has slightly different partition offsets, and its size has
+to be 32 MiB to match the ISSI 25WP256 flash on the real board:
+
+.. code-block:: bash
+
+ $ cat genimage_spi-nor.cfg
+ image spi-nor.img {
+ size = 32M
+
+ hdimage {
+ gpt = true
+ }
+
+ partition u-boot-spl {
+ image = "u-boot-spl.bin"
+ offset = 20K
+ partition-type-uuid = 5B193300-FC78-40CD-8002-E86C45580B47
+ }
+
+ partition u-boot {
+ image = "u-boot.itb"
+ offset = 1044K
+ partition-type-uuid = 2E54B353-1271-4842-806F-E436D6AF6985
+ }
+ }
+
+Assuming the U-Boot binaries are placed in the same directory as the config
+file, we can generate the image with:
+
+.. code-block:: bash
+
+ $ genimage --config genimage_<boot_src>.cfg --inputpath .
+
+To boot U-Boot from the SD card, specify msel=11 and pass the SD card image
+to the QEMU ``sifive_u`` machine:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u,msel=11 -smp 5 -m 8G \
+ -display none -serial stdio \
+ -bios /path/to/u-boot-spl.bin \
+ -drive file=/path/to/sdcard.img,if=sd
+
+Changing the msel= value to 6 allows booting U-Boot from the SPI flash:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u,msel=6 -smp 5 -m 8G \
+ -display none -serial stdio \
+ -bios /path/to/u-boot-spl.bin \
+ -drive file=/path/to/spi-nor.img,if=mtd
+
+Note when testing U-Boot, the device tree blob automatically generated by QEMU
+is not used, because U-Boot itself embeds device tree blobs for U-Boot SPL and
+U-Boot proper. Hence the number of cores and the size of memory have to match
+the real hardware, i.e. 5 cores (-smp 5) and 8 GiB of memory (-m 8G).
+
+The above use case runs upstream U-Boot for the SiFive HiFive Unleashed
+board on the QEMU ``sifive_u`` machine out of the box. This allows users to
+develop and test the recommended RISC-V boot flow with a real world use
+case: ZSBL (in QEMU) loads U-Boot SPL from SD card or SPI flash to L2-LIM,
+then U-Boot SPL loads the combined payload image of OpenSBI fw_dynamic
+firmware and U-Boot proper.
+
+However, sometimes we just want a quick test of booting U-Boot on QEMU without
+the need to prepare the SPI flash or SD card images. In that case, an
+alternative is to create a U-Boot S-mode image by modifying the U-Boot
+configuration:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make sifive_unleashed_defconfig
+ $ make menuconfig
+
+then manually select the following configuration:
+
+ * Device Tree Control ---> Provider of DTB for DT Control ---> Prior Stage bootloader DTB
+
+and unselect the following configuration:
+
+ * Library routines ---> Allow access to binman information in the device tree
+
+This changes U-Boot to use the device tree blob generated by QEMU, and
+bypasses running the U-Boot SPL stage. Rebuild U-Boot with ``make`` after
+updating the configuration.
+
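+If you prefer not to go through ``make menuconfig``, a rough non-interactive
+equivalent is sketched below. It assumes the two menu entries above correspond
+to the Kconfig symbols OF_PRIOR_STAGE (replacing the default OF_SEPARATE) and
+BINMAN_FDT; double check the symbol names against your U-Boot tree:
+
+.. code-block:: bash
+
+ $ scripts/config --disable OF_SEPARATE --enable OF_PRIOR_STAGE --disable BINMAN_FDT
+ $ make olddefconfig
+ $ make
+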
+Boot the 64-bit U-Boot S-mode image directly:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M sifive_u -smp 5 -m 2G \
+ -display none -serial stdio \
+ -kernel /path/to/u-boot.bin
+
+It's possible to create a 32-bit U-Boot S-mode image as well.
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make sifive_unleashed_defconfig
+ $ make menuconfig
+
+then manually update the following configuration in U-Boot:
+
+ * Device Tree Control ---> Provider of DTB for DT Control ---> Prior Stage bootloader DTB
+ * RISC-V architecture ---> Base ISA ---> RV32I
+ * Boot options ---> Boot images ---> Text Base ---> 0x80400000
+
+and unselect the following configuration:
+
+ * Library routines ---> Allow access to binman information in the device tree
+
+Use the same command line options to boot the 32-bit U-Boot S-mode image:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv32 -M sifive_u -smp 5 -m 2G \
+ -display none -serial stdio \
+ -kernel /path/to/u-boot.bin
+
+.. _genimage: https://github.com/pengutronix/genimage
diff --git a/docs/system/riscv/virt.rst b/docs/system/riscv/virt.rst
new file mode 100644
index 00000000..4b16e41d
--- /dev/null
+++ b/docs/system/riscv/virt.rst
@@ -0,0 +1,189 @@
+'virt' Generic Virtual Platform (``virt``)
+==========================================
+
+The ``virt`` board is a platform which does not correspond to any real hardware;
+it is designed for use in virtual machines. It is the recommended board type
+if you simply want to run a guest such as Linux and do not care about
+reproducing the idiosyncrasies and limitations of a particular bit of
+real-world hardware.
+
+Supported devices
+-----------------
+
+The ``virt`` machine supports the following devices:
+
+* Up to 8 generic RV32GC/RV64GC cores, with optional extensions
+* Core Local Interruptor (CLINT)
+* Platform-Level Interrupt Controller (PLIC)
+* CFI parallel NOR flash memory
+* 1 NS16550 compatible UART
+* 1 Google Goldfish RTC
+* 1 SiFive Test device
+* 8 virtio-mmio transport devices
+* 1 generic PCIe host bridge
+* The fw_cfg device that allows a guest to obtain data from QEMU
+
+The hypervisor extension is enabled for the default CPU, so virtual machines
+that use the hypervisor extension can simply be run without declaring it
+explicitly.
+
+Hardware configuration information
+----------------------------------
+
+The ``virt`` machine automatically generates a device tree blob ("dtb")
+which it passes to the guest, if there is no ``-dtb`` option. This provides
+information about the addresses, interrupt lines and other configuration of
+the various devices in the system. Guest software should discover the devices
+that are present in the generated DTB.
+
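+To see exactly what QEMU generates, recent QEMU versions can write the DTB to
+a file via the generic ``dumpdtb`` machine option, and ``dtc`` can then
+decompile it (both the availability of the option for this machine and an
+installed ``dtc`` are assumptions to verify):
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M virt,dumpdtb=qemu-virt.dtb
+ $ dtc -I dtb -O dts qemu-virt.dtb > qemu-virt.dts
+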
+If users want to provide their own DTB, they can use the ``-dtb`` option.
+These DTBs should meet the following requirements:
+
+* The number of subnodes of the /cpus node should match QEMU's ``-smp`` option
+* The /memory reg size should match QEMU’s selected ram_size via ``-m``
+* Should contain a node for the CLINT device with a compatible string
+ "riscv,clint0" if using with OpenSBI BIOS images
+
+Boot options
+------------
+
+The ``virt`` machine can start using the standard -kernel functionality
+for loading a Linux kernel, a VxWorks kernel, or an S-mode U-Boot bootloader,
+with the default OpenSBI firmware image as the -bios. It also supports
+the recommended RISC-V bootflow: U-Boot SPL (M-mode) loads OpenSBI fw_dynamic
+firmware and U-Boot proper (S-mode), using the standard -bios functionality.
+
+Machine-specific options
+------------------------
+
+The following machine-specific options are supported:
+
+- aclint=[on|off]
+
+ When this option is "on", ACLINT devices will be emulated instead of the
+ SiFive CLINT. When not specified, this option is assumed to be "off".
+
+- aia=[none|aplic|aplic-imsic]
+
+ This option selects the interrupt controller defined by the AIA (Advanced
+ Interrupt Architecture) specification. "aia=aplic" selects the APLIC
+ (Advanced Platform-Level Interrupt Controller) to handle wired interrupts,
+ whereas "aia=aplic-imsic" selects the APLIC plus the IMSIC (Incoming Message
+ Signaled Interrupt Controller) to handle both wired interrupts and MSIs.
+ When not specified, this option is assumed to be "none", which selects the
+ SiFive PLIC to handle wired interrupts. See the example invocation after
+ this list.
+
+- aia-guests=nnn
+
+ The number of per-HART VS-level AIA IMSIC pages to be emulated for a guest
+ having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
+ the default number of per-HART VS-level AIA IMSIC pages is 0.
+
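+For example, the following hypothetical invocation creates a ``virt`` machine
+that uses the AIA APLIC and IMSIC with two VS-level IMSIC guest pages per hart
+(the remaining options follow the Linux kernel example below):
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M virt,aia=aplic-imsic,aia-guests=2 -smp 4 -m 2G \
+     -display none -serial stdio \
+     -kernel /path/to/Image
+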
+Running Linux kernel
+--------------------
+
+Linux mainline v5.12 release is tested at the time of writing. To build a
+Linux mainline kernel that can be booted by the ``virt`` machine in
+64-bit mode, simply configure the kernel using the defconfig configuration:
+
+.. code-block:: bash
+
+ $ export ARCH=riscv
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make defconfig
+ $ make
+
+To boot the newly built Linux kernel in QEMU with the ``virt`` machine:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
+ -display none -serial stdio \
+ -kernel arch/riscv/boot/Image \
+ -initrd /path/to/rootfs.cpio \
+ -append "root=/dev/ram"
+
+To build a Linux mainline kernel that can be booted by the ``virt`` machine
+in 32-bit mode, use the rv32_defconfig configuration. A patch is required to
+fix the 32-bit boot issue for Linux kernel v5.12.
+
+.. code-block:: bash
+
+ $ export ARCH=riscv
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ curl https://patchwork.kernel.org/project/linux-riscv/patch/20210627135117.28641-1-bmeng.cn@gmail.com/mbox/ > riscv.patch
+ $ git am riscv.patch
+ $ make rv32_defconfig
+ $ make
+
+Replace ``qemu-system-riscv64`` with ``qemu-system-riscv32`` in the command
+line above to boot the 32-bit Linux kernel. A rootfs image containing 32-bit
+applications must be used in order for the kernel to boot to user space.
+
+Running U-Boot
+--------------
+
+U-Boot mainline v2021.04 release is tested at the time of writing. To build an
+S-mode U-Boot bootloader that can be booted by the ``virt`` machine, use
+the qemu-riscv64_smode_defconfig with similar commands as described above for Linux:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ make qemu-riscv64_smode_defconfig
+ $ make
+
+Boot the 64-bit U-Boot S-mode image directly:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
+ -display none -serial stdio \
+ -kernel /path/to/u-boot.bin
+
+To test booting U-Boot SPL, which runs in M-mode and in turn loads a FIT image
+that bundles OpenSBI fw_dynamic firmware and U-Boot proper (S-mode) together,
+build the U-Boot images using qemu-riscv64_spl_defconfig:
+
+.. code-block:: bash
+
+ $ export CROSS_COMPILE=riscv64-linux-
+ $ export OPENSBI=/path/to/opensbi-riscv64-generic-fw_dynamic.bin
+ $ make qemu-riscv64_spl_defconfig
+ $ make
+
+The minimal QEMU commands to run U-Boot SPL are:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
+ -display none -serial stdio \
+ -bios /path/to/u-boot-spl \
+ -device loader,file=/path/to/u-boot.itb,addr=0x80200000
+
+To test 32-bit U-Boot images, switch to the qemu-riscv32_smode_defconfig and
+qemu-riscv32_spl_defconfig builds, and replace ``qemu-system-riscv64`` with
+``qemu-system-riscv32`` in the command lines above to boot the 32-bit U-Boot.
+
+Enabling TPM
+------------
+
+A TPM device can be connected to the virt board by following the steps below.
+
+First launch the TPM emulator:
+
+.. code-block:: bash
+
+ $ mkdir -p /tmp/tpm
+ $ swtpm socket --tpm2 -t -d --tpmstate dir=/tmp/tpm \
+ --ctrl type=unixio,path=swtpm-sock
+
+Then launch QEMU with some additional arguments to link a TPM device to the backend:
+
+.. code-block:: bash
+
+ $ qemu-system-riscv64 \
+ ... other args .... \
+ -chardev socket,id=chrtpm,path=swtpm-sock \
+ -tpmdev emulator,id=tpm0,chardev=chrtpm \
+ -device tpm-tis-device,tpmdev=tpm0
+
+The TPM device can be seen in the memory tree and in the generated device
+tree, and should be accessible from the guest software.
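+
+As a quick sanity check from a Linux guest (assuming the guest kernel has the
+TPM TIS driver enabled), the device nodes should show up as something like:
+
+.. code-block:: bash
+
+ $ ls /dev/tpm*
+ /dev/tpm0  /dev/tpmrm0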