Installation and Attestation on Hetzner Baremetal VMs

In this guide we explore how to deploy vHSM in a Kubernetes cluster and perform remote attestation for any VM spawned on the baremetal machine.

This tutorial assumes that you are able to spawn confidential virtual machines in a baremetal environment and have very basic knowledge of Kubernetes.

Initial setup

For our experiment we spawn two identical confidential virtual machines that can communicate over a private network. One machine hosts the vHSM within a Kubernetes cluster and is assigned the IP 10.10.10.11, while the machine we attest later is assigned the IP 10.10.10.12.

We assume that you are able to install Kubernetes yourself on your vHSM host machine. If not, you can use the quickstart option below.

Kubernetes installation

On your Kubernetes virtual machine we are going to install K3s in a simple configuration.

The installation is straightforward when following the quickstart guide of K3s.

The default configuration does not work nicely out of the box, so we copy the K3s-specific kubeconfig to the standard Kubernetes directory.

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
chmod 600 ~/.kube/config

Afterwards we export ENCLAIVE_LICENCE="<>" with the licence obtained from support, and install the Helm chart using

helm install vhsm oci://harbor.enclaive.cloud/vhsm/vhsm \
  --version 0.29.1 \
  --set injector.enabled=false \
  --set server.extraEnvironmentVars.ENCLAIVE_LICENCE="$ENCLAIVE_LICENCE"

By default the vHSM expects its communication to occur over HTTPS. However, as we have no automatic HTTPS deployed, we fall back to insecure HTTP communication and therefore need to set export VAULT_ADDR=http://127.0.0.1:8200.

For configuration we now need the vHSM binary, which can be downloaded using wget https://vhsm.enclaive.cloud/static/vhsm and made executable using chmod +x vhsm.

Now we are ready to expose the necessary ports using kubectl port-forward --address 0.0.0.0 svc/vhsm 8200:8200 8201:8201.

Configuring vHSM

With vhsm operator init we initialize the vHSM, which outputs five unseal key shares and a root token for login purposes. We then call vhsm operator unseal three times, entering a different key share each time, in order to unlock the vHSM. After unsealing we can log in with vhsm login using the root token produced by the initialization process.
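The full sequence looks like this; the key shares and root token are placeholders for the values printed by the init step:

```shell
vhsm operator init                  # prints 5 unseal key shares + initial root token

vhsm operator unseal <key-share-1>  # repeat with three different shares
vhsm operator unseal <key-share-2>
vhsm operator unseal <key-share-3>  # threshold of 3 reached, vHSM is unsealed

vhsm login <initial-root-token>
```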

For the remote attestation process we need to set up certain parameters in the vHSM. The easy way is to use vhsm nitride init; however, we are going to configure everything ourselves. In a first step we therefore enable the remote attestation plugin: vhsm auth enable -path=ratls ratls. Next we query the appropriate certificate chain for our platform, which in our case is Genoa, using wget https://kdsintf.amd.com/vcek/v1/Genoa/cert_chain. This certificate chain now needs to be converted to Base64 using base64 -w 0 cert_chain. Now we can create a new platform.json file which defines which firmware version needs to be met and what the root of trust is. The root-of-trust field is filled with the Base64-encoded string we just produced.
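A sketch of how platform.json could be assembled. The field names (firmware_version, root_of_trust) and the version value are our assumptions, not taken from the guide, so check them against the schema produced by vhsm nitride init:

```shell
# In practice, fetch the real AMD Genoa certificate chain:
# wget https://kdsintf.amd.com/vcek/v1/Genoa/cert_chain
printf 'dummy-cert-chain' > cert_chain   # stand-in so this sketch is self-contained

# Base64-encode the chain for the root-of-trust field
ROT=$(base64 -w 0 cert_chain)

cat > platform.json <<EOF
{
  "firmware_version": "1.55",
  "root_of_trust": "$ROT"
}
EOF
```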

Now we can define which firmware measurement we are going to accept. This can be calculated beforehand given access to the OVMF.fd file and knowledge of which CPU is going to be emulated or passed through. In our case we are running 8 vCPUs and are passing through our host CPU. We can determine the necessary CPU information by running sudo dmidecode --type processor on our host operating system. For the tooling sev-snp-measure this results in the following command, which has to be executed on the host OS: sev-snp-measure --mode snp --vcpus 8 --vcpu-family 25 --vcpu-model 17 --vcpu-stepping 1 --ovmf /var/lib/libvirt/images/OVMF.fd. The output of this command needs to be supplied in a new firmware.json file on the VM where the Kubernetes cluster is running.
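The measurement can then be dropped into firmware.json on the Kubernetes VM. The field name measurement is our assumption about the schema, and the value below is a placeholder for the real sev-snp-measure output:

```shell
# On the host OS (requires sev-snp-measure), the digest would come from:
# MEASUREMENT=$(sev-snp-measure --mode snp --vcpus 8 --vcpu-family 25 \
#   --vcpu-model 17 --vcpu-stepping 1 --ovmf /var/lib/libvirt/images/OVMF.fd)
MEASUREMENT="placeholder-measurement-digest"   # replace with the real hex digest

cat > firmware.json <<EOF
{
  "measurement": "$MEASUREMENT"
}
EOF
```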

Based on this information we define a policy.json which combines both previous configurations, and create the policy using vhsm nitride policy create @policy.json. As provider we can use sev-snp-raw, as we are in the default case for SEV-SNP attestation.
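A sketch of what the combination could look like. The nesting of the two files under "platform" and "firmware" keys, plus the "provider" key, are assumptions about the policy schema, not taken from the guide:

```shell
# Use the platform.json and firmware.json from the previous steps; stand-ins
# are created only if they are missing, so this sketch runs on its own:
[ -f platform.json ] || echo '{ "root_of_trust": "..." }' > platform.json
[ -f firmware.json ] || echo '{ "measurement": "..." }' > firmware.json

# Combine both documents into the policy (schema is an assumption):
cat > policy.json <<EOF
{
  "provider": "sev-snp-raw",
  "platform": $(cat platform.json),
  "firmware": $(cat firmware.json)
}
EOF

# Then register it with the vHSM:
# vhsm nitride policy create @policy.json
```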

Finally we need to create an attestation object using vhsm nitride attestation create @attestation.json.
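The guide does not show the contents of attestation.json; a minimal sketch, assuming it references the policy created above by a hypothetical "policy" field and name (check the actual schema in the vHSM documentation):

```shell
cat > attestation.json <<EOF
{
  "policy": "my-policy"
}
EOF

# vhsm nitride attestation create @attestation.json
# -> outputs the attestation object's UUID, needed later on the VM to attest
```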

Attesting

Now we switch to the VM we want to attest. In a first step we download the vHSM binary as described previously and make it executable. Afterwards we configure the VAULT_ADDR environment variable to point to the VM running vHSM within the Kubernetes cluster, e.g. export VAULT_ADDR=http://<kubernetes-ip>:8200.

The last step of creating the attestation object produced a UUID as output, which we now need in order to produce a nonce: vhsm nitride attestation nonce <uuid>. The nonce is included in our attestation report in order to guarantee its freshness. It is supplied as part of the attestation command that creates the report structure: sudo vhsm nitride attestation -provider=sev-snp-raw generate <nonce> > verify.json && base64 verify.json > verify.base64. The report is saved directly in JSON format and encoded in Base64 for further verification on the vHSM side. In a last step we can now verify our locally generated attestation report against all the information stored on the vHSM side, which we created earlier, using vhsm nitride attestation verify <uuid> @verify.base64. If everything went well, we receive a response containing a token which can be used for further authentication with the vHSM for retrieving secrets.
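Condensed, the flow on the VM to attest looks like this. We use the private IP 10.10.10.11 from the setup section; <uuid> stands for the attestation object's UUID, and capturing the nonce command's output directly into a variable is an assumption about its output format:

```shell
# Point the CLI at the vHSM host from the setup section:
export VAULT_ADDR=http://10.10.10.11:8200

# Request a fresh nonce for the attestation object:
NONCE=$(vhsm nitride attestation nonce <uuid>)

# Generate the SEV-SNP report bound to that nonce, then Base64-encode it:
sudo vhsm nitride attestation -provider=sev-snp-raw generate "$NONCE" > verify.json
base64 verify.json > verify.base64

# Verify against the policy stored on the vHSM; on success this returns a token:
vhsm nitride attestation verify <uuid> @verify.base64
```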
