Don't run Vivado on your host OS. Run it in an LXC container instead. Here's why and how.
Each Vivado release is supported on a limited set of Linux distributions and releases. While Xilinx documents this well in Vivado's release notes, and attempts to maintain compatibility, newer Vivado releases require newer (but not quite the newest) OSes.
When you need to run multiple Vivado versions, you can end up gridlocked, unable or unwilling to upgrade or maintain your PC. For example:
- You cannot, with a single officially-supported distribution and release, run both new and old versions of Vivado. For example, Vivado 2020.1 and Vivado 2016.1 are not both supported on any single version of any Linux distribution.
- "Rolling" changes to many Linux distributions, even within a single version, can also break Vivado. This is a disincentive to regular OS maintenance and can have security implications.
- While you can often run Vivado outside the officially-supported distribution list, it does not always work. Sometimes the problems are easily fixed (or google-able). Sometimes you will be the first to stumble across an obscure segfault. Xilinx support will often (and understandably) refuse to offer support in this scenario.
- Many other software packages (TI's Code Composer Studio; Microsemi's Libero; MATLAB) also impose restrictions on OS releases. Finding a single OS that supports all your software becomes less likely with each package that imposes such limitations.
- If upgrading Vivado requires you to upgrade your OS, you will defer upgrades until you absolutely need them.
This is no way to live. Besides, Debian isn't on Xilinx's approved list, and who wants to run a host OS that isn't Debian?
By running Vivado within an LXC container, we can separate the OS we run (the "host" OS) from the OS Vivado perceives (the "guest"). By adding an extra OS layer, we are free to run multiple different guests for different versions of Vivado, and upgrade the host without worrying about breaking something obscure.
These instructions assume the following:
- Host: Debian Testing, with full (sudo) access.
- Guest: Ubuntu 18.04.4 (bionic)
- Vivado: 2020.1
I have used this recipe for several Vivado and Ubuntu releases, on the same Debian Testing host.
These LXC directions are cribbed from the Debian wiki (https://wiki.debian.org/LXC). Please go there for fuller, and possibly better maintained, guidance.
Install the necessary packages:
$ sudo apt-get install lxc libvirt-clients libvirt0 libpam-cgfs bridge-utils uidmap
We will follow the "simple bridge" (lxc-net) version of the instructions.
Create the file /etc/default/lxc-net with the following contents:
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
LXC_DHCP_CONFILE=""
LXC_DOMAIN=""
Now enable and start the lxc-net service:
host$ sudo systemctl enable lxc-net
host$ sudo systemctl start lxc-net
At this point I had to reboot my system to see the network device lxcbr0 appear:
host$ sudo reboot
[...]
host$ sudo ifconfig lxcbr0
lxcbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.0.3.1  netmask 255.255.255.0  broadcast 0.0.0.0
[...]
Create a container named vivado2020_1 as follows:
host$ sudo lxc-create -n vivado2020_1 -t ubuntu -- -r bionic
Do not start the container yet (we'll do that below).
Now, in the host, edit the container configuration (/var/lib/lxc/vivado2020_1/config) and add the following lines:
# Network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = lxcbr0
lxc.net.0.hwaddr = aa:bb:cc:dd:ee:ff

# Xilinx
lxc.mount.entry=/home/your_username /var/lib/lxc/vivado2020_1/rootfs/home/your_username none bind 0 0
lxc.mount.entry=/opt/xilinx /var/lib/lxc/vivado2020_1/rootfs/opt/xilinx none bind 0 0

# Automatically start this container
lxc.start.auto = 1
You will have to modify three things here:
- The "hwaddr" should be an arbitrary MAC address for which you hold a valid Xilinx license.
- your_username should be changed to match your setup.
- /opt/xilinx should be altered to match your host's Xilinx installation (/opt/Xilinx is the default, I think).
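If you need a fresh MAC address to license, one can be generated in bash. This is a sketch: the 02: prefix marks the address as locally administered and unicast, which keeps it out of any vendor's assigned range.

```shell
# Generate a random locally-administered unicast MAC address (bash;
# $RANDOM is a bashism) suitable for the lxc.net.0.hwaddr line above.
# You would then request a Xilinx license for this address.
mac=$(printf '02:%02x:%02x:%02x:%02x:%02x' \
    "$((RANDOM % 256))" "$((RANDOM % 256))" "$((RANDOM % 256))" \
    "$((RANDOM % 256))" "$((RANDOM % 256))")
echo "$mac"
```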
You will have to create the bind-mount directories on your host, matching your alterations above:
host$ sudo mkdir -p /var/lib/lxc/vivado2020_1/rootfs/home/your_username
host$ sudo mkdir -p /var/lib/lxc/vivado2020_1/rootfs/opt/xilinx
Now, start the container, attach to it, and configure it as follows:
host$ sudo lxc-start -n vivado2020_1
host$ sudo lxc-attach -n vivado2020_1
This last command will bring up a root prompt inside the container. Install a few basic packages:
guest# apt-get update
guest# apt-get install avahi-daemon openssh-server xutils x11-apps xauth
You will also want to modify the ubuntu user to match your UID and GID on the host, and set its password:
guest# usermod -u 1000 -g 1000 -d /home/your_username -l your_username ubuntu
guest# passwd your_username
...where the UID 1000 and GID 1000 should be altered to match your UID and GID on the host. You can find these by running id -u and id -g on the host.
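To avoid copying numbers by hand, the host can print the exact command to paste into the guest. A small sketch, assuming your host username is also the name you want inside the guest:

```shell
# Run on the host: print the usermod command to paste into the guest,
# substituting your real UID, GID, and username automatically.
cmd=$(printf 'usermod -u %d -g %d -d /home/%s -l %s ubuntu' \
    "$(id -u)" "$(id -g)" "$(id -un)" "$(id -un)")
echo "$cmd"
```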
You can now exit out of the root session and restart the container:
host$ sudo lxc-stop -n vivado2020_1
host$ sudo lxc-start -n vivado2020_1
You now have a user in the container that matches your user on the host. To avoid entering your password each time, use the ssh-copy-id script:
host$ ssh-copy-id vivado2020_1.local
If these steps worked, you can now "ssh" into the machine and run the all-important xeyes test:
host$ ssh -XC vivado2020_1.local xeyes
If the eyes show up, congratulations! You are able to connect to the container and forward X11 apps, which is how you'll be using Vivado.
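Since this is how every Vivado session will start, a small shell function saves typing. This is a sketch; the container hostname and install path are the examples used in this guide, so adjust both to your setup:

```shell
# Convenience function: run Vivado inside the container with X11
# forwarding and compression. Hostname and path match this guide's
# examples; change them to match your installation.
vivado_lxc() {
    ssh -XC vivado2020_1.local /opt/xilinx/Vivado/2020.1/bin/vivado "$@"
}
```

Put it in your shell's rc file, and `vivado_lxc` behaves much like a locally-installed Vivado.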
If your host already has Vivado installed, it is also installed in your guest. (That's what the lxc.mount.entry line above accomplished.)
If not, you can now install it inside your guest (note that you will have to grant the guest user write permission on the host's /opt/xilinx!). The environment inside the container should be similar to the one outside.
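Granting that write permission might look like the following, run on the host before launching the installer in the guest. This is a sketch: the path and username are this guide's placeholders, and because the directory is bind-mounted, ownership changes made on the host are visible in the guest.

```shell
# On the host: create the Xilinx install tree (if absent) and hand
# ownership to your user, so the installer can write to it from the
# guest via the bind mount. Path and username are placeholders.
sudo mkdir -p /opt/xilinx
sudo chown -R your_username: /opt/xilinx
```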
How you handle licensing depends on your situation.
Network licenses likely "just work" once you set up the server.
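For a floating license, pointing the guest's environment at the server should suffice. A sketch, where the port and hostname are placeholders for your own server's values:

```shell
# Point Vivado at a FLEXlm-style license server before launching it.
# 2100@licenses.example.com is a placeholder; substitute your server's
# port and hostname.
export XILINXD_LICENSE_FILE=2100@licenses.example.com
echo "$XILINXD_LICENSE_FILE"
```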
My Vivado installations are licensed by Ethernet MAC address. In this regime, the simplest (and most legitimate) way is to issue a separate license file for each guest. You will need to assign randomly generated MAC addresses to your guests via the config file you modified above. (See the hwaddr line.)
If you are interested in re-using the same MAC-based license as your host uses (I am not advocating this!), you should be able to create a dummy Ethernet interface in your guest with the same MAC address as your host's licensed Ethernet address. Provided no traffic flows over this interface, the fact that we have two Ethernet adapters on the same PC with the same MAC address is not problematic. While this is probably against Xilinx's terms of service, it's hard to imagine they would really have a problem with you running the software on the PC it's licensed with (albeit with an extra OS layer between the licensed MAC and Vivado.)
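Creating such a dummy interface might look like this, run as root inside the guest. A sketch only: lic0 is an arbitrary name, and aa:bb:cc:dd:ee:ff is a placeholder for your host's licensed MAC address.

```shell
# Inside the guest (as root): create a dummy interface carrying the
# host's licensed MAC address. No traffic flows over it; it exists
# only so the license checkout sees the expected address.
ip link add lic0 type dummy
ip link set dev lic0 address aa:bb:cc:dd:ee:ff
ip link set dev lic0 up
```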
Most FPGA designers will be familiar with Virtual Machines (VMs) like VirtualBox or VMWare. These programs, with the help of OS- and processor-level features, emulate an entire PC and provide a well-isolated sandbox for you to install a "guest" OS on your PC. This guest OS is completely separate from your host OS, which can run several guests at a time.
You can install Vivado in a VM running an approved Linux distribution. However, you shouldn't. VMs generally require a fixed allocation of resources (RAM, disk, CPUs). When you run Vivado, you typically want it to consume as much CPU and RAM as it needs, up to the limits imposed by your hardware. Whenever your VM requires a static allocation, you artificially hobble the performance of both the host and guest. Even if you don't hit those artificial resource limits, a VM performs worse than a container due to the extra virtualization overhead.
A container is like a VM, only less virtualized. Where a VM host virtualizes the CPU and allows a guest to run its own OS, a container runs only a single kernel and virtualizes only the userspace environment. The host and guest share a kernel, and hence hardware, drivers, CPUs, and memory.
LXC is one of several container management packages commonly provided in Linux distributions. Alternative container services include LXD (a user-friendlier layer on LXC, but which is not packaged for Debian) and Docker, which solves a slightly different problem.
Docker, like LXC, is a container management service. However, Docker is intended to build short-lived, reproducible containers for non-interactive use. Vivado is a pretty poor match for these environments because it is so big, and because reproducing an automated Xilinx installation is not easy. Although you can find Docker instructions (Dockerfiles) for Vivado on-line, they are more of a hassle than the above procedure.
Sorry, it's possible but it's not what I describe here. I use this on my laptop and workstation, so I'm the only one driving. (I am aware that the manual uid remapping is slightly cheesy!)
It is possible for the host kernel (which, remember, is shared with your guest OS) to be incompatible with something in the guest's userspace or in Vivado. For example, kernel-level security features might expose a latent bug or assumption in the guest OS that Vivado doesn't tickle when used in a non-containerized environment. I have not surveyed past versions of Vivado carefully to check for bugs. As with all free internet advice, you are welcome to apply for a refund.