MariaDB root admin password provisioning on Azure DCXas_v5 VM

In this self-contained tutorial, we show how to provision a MariaDB container running in a confidential "buckypaper" VM on Azure. The tutorial transfers easily to any other CSP that offers buckypaper VMs.

In this tutorial, you will learn how to:

  • Configure the vHSM to attest a cVM.

  • Set up the vHSM to generate a high-entropy admin password.

  • Configure the enclaivelet to attest the cVM and securely retrieve the password.

  • Use cloud-config to automate these processes during VM startup.

Introduction

Setting up a cloud environment and configuring the network can be fraught with challenges. Given the complexity of cloud systems and the pressure to deliver products quickly, issues are almost inevitable.

Challenge

When cloud workloads fail to authenticate properly, they create critical vulnerabilities that can lead to significant security breaches. Without proper authentication, unauthorized users, malicious actors, or compromised applications may gain unrestricted access to sensitive data, services, and resources. This can result in data theft, manipulation, or deletion, leading to operational disruptions and financial losses. Furthermore, the absence of authentication increases the risk of privilege escalation, enabling attackers to move laterally within a network, compromise additional systems, and launch further attacks. This also complicates incident detection and response, making it difficult to trace malicious activities. In summary, missing authentication in cloud workloads compromises trust, compliance, and overall cloud security.

Solution

Traditionally, only users had identities, which led to the development of identity management services in the cloud. These services ensure that users authenticate with an identity provider that centrally manages access permissions and grants access to specific resources. For example, a user might authenticate with an identity management service (such as Microsoft Entra ID on Azure, formerly known as Azure Active Directory), which verifies permissions and grants access to a service like Key Vault with the appropriate capabilities (e.g., administrator role). Best practices emphasize the principle of least privilege, ensuring that users only have the access necessary to perform their tasks. Centralized management of user access also minimizes credential sprawl and reduces the risk of credential leakage by avoiding unnecessary storage and sharing.

Confidential computing introduces a new approach where workloads running in enclaves are assigned their own identity. With Nitride, the principles of identity management are extended to workloads. Unlike user authentication, a Buckypaper VM attests (where "attestation" for workloads is akin to "authentication" for users) to the workload identity provider, Nitride, to verify its authorization to access the Vault key management service. In Nitride, a policy defines how workload identities are verified and what access privileges they receive.

Prerequisites

For this tutorial, it is assumed that you have permission to create a confidential "buckypaper" VM on Azure. We have selected a cVM from the DCXas_v5 family, which supports AMD SEV-SNP. Alternative configuration options are available in the accompanying table. Additionally, it is assumed that the vHSM is correctly running on this cVM and is accessible via the /auth/ratls path.
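If you want to double-check that the guest actually sees AMD SEV-SNP before proceeding, a quick sanity check on an Ubuntu image is to grep the kernel log (the exact message wording varies by kernel version):

sudo dmesg | grep -i sev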

Getting-Started Blueprint

Any use case involving the secret provisioning of attested workloads generally includes one or more of the following steps:

Create the enclave identity

The enclave identities only need to be created once per cloud configuration.

An enclave identity captures the properties of the compute environment. It is divided into the artifact types platform, firmware, workload, and metadata.

Create the Namespace (Currently not optional)

In certain scenarios, it can be helpful to store secrets within namespaces. To achieve this, you can create a namespace called some-ns by running the following command:

vhsm namespace create some-ns
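To confirm that the namespace exists, you can list namespaces afterwards (this assumes the vhsm CLI mirrors Vault's namespace commands, as it does for the other commands in this tutorial):

vhsm namespace list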

Configure a Platform Artifact

In the example below, the enclave identity is defined by a platform artifact, as described in the platform.json file. This artifact details an AMD SEV-SNP enabled platform with VCEK attestation support.

In more complex cases, the enclave identity may consist of a combination of platform, firmware, workload, and meta artifacts.

{
  "type": "platform",
  "name": "amd-sev-snp-milan-vcek",
  "values": {
    "root_of_trust": "amd-sev-snp-milan-vcek"
  }
}

Run this command to see available chains:

vhsm nitride identity list

Run the command to create the platform identity:

vhsm nitride identity create @platform.json

Example output:

Key        Value
---        -----
created    1727217302
name       amd-sev-snp-milan-vcek
type       platform
values     map[root_of_trust:amd-sev-snp-milan-vcek]

Define an attestation verification policy

Next, we need to tell Nitride how to verify an attestation. To this end, we define policy.json. The canonical example below enforces that only the platform identity we previously defined by name passes verification. The provider field ensures the client uses the correct attestation type.

There are more options: each field can be individually disabled, and identities can be filtered by their values. For this simple example, however, we only verify the platform.

{
  "name": "Azure_DC2as_v5-platform_only",
  "identities": {
    "provider": "azure-sev-snp-vtpm",
    "platform": {
      "selector": {
        "name": "amd-sev-snp-milan-vcek"
      }
    },
    "firmware": null,
    "workload": null,
    "metadata": null
  }
}

Run the command to create the policy:

vhsm nitride policy -format=json create @policy.json | jq .data

Example output:

{
  "created": 1727217785,
  "identities": {
    "firmware": null,
    "metadata": null,
    "platform": {
      "identity": {
        "created": 1727217302,
        "name": "amd-sev-snp-milan-vcek",
        "type": "platform",
        "values": {
          "root_of_trust": "amd-sev-snp-milan-vcek"
        }
      },
      "policy": null,
      "selector": {
        "name": "amd-sev-snp-milan-vcek"
      }
    },
    "provider": "azure-sev-snp-vtpm",
    "workload": null
  },
  "name": "Azure_DC2as_v5-platform_only"
}
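As noted above, a policy can pin more than just the platform. Purely for illustration, a stricter variant might additionally select a firmware identity; the firmware name below is hypothetical, and the selector shape is assumed to mirror the platform selector:

{
  "name": "Azure_DC2as_v5-platform_and_firmware",
  "identities": {
    "provider": "azure-sev-snp-vtpm",
    "platform": {
      "selector": {
        "name": "amd-sev-snp-milan-vcek"
      }
    },
    "firmware": {
      "selector": {
        "name": "my-firmware-identity"
      }
    },
    "workload": null,
    "metadata": null
  }
}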

Create a workload

Policies are reusable, so we use workloads to identify each instance. We can use our policy to create an attestation object that manages the verification procedure.

Aside from cosmetics, we set the target namespace some-ns for the auth token that is given to the client after verification. We can also set an events handler to receive the attestation results.

We also reference the policy we just created. Save the following as attestation.json:

{
  "name": "Azure MariaDB",
  "description": "A small Azure VM running MariaDB",
  "namespace": "some-ns",
  "events": "http://localhost:8000",
  "policy": "Azure_DC2as_v5-platform_only"
}

Run the command to create the workload:

vhsm nitride attestation create @attestation.json

Example output:

Key            Value
---            -----
created        1727218322
description    A small Azure VM running MariaDB
events         http://localhost:8000
name           Azure MariaDB
namespace      some-ns
nonce          n/a
policy         Azure_DC2as_v5-platform_only
updated        0
uuid           fd11edb1-718f-4d68-a0f6-30b2c4cd9c79

Note down the uuid. It is required by the attestation client.
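For convenience in the following steps, you can keep the uuid in a shell variable and substitute it wherever <uuid> appears below (using the example value from the output above):

export ENCLAIVE_WORKLOAD=fd11edb1-718f-4d68-a0f6-30b2c4cd9c79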

Activate the buckypaper engine

To register the buckypaper extension vault-plugin-secrets-dkv, use the following command with the correct SHA-256 digest:

vhsm plugin register -sha256=<digest> secret vault-plugin-secrets-dkv
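If you still need the digest, it is the SHA-256 of the plugin binary on the vHSM host. A minimal sketch, assuming the binary lives in the vHSM's configured plugin directory (the path below is an assumption):

sha256sum /etc/vhsm/plugins/vault-plugin-secrets-dkv | cut -d' ' -f1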

To verify successful registration, run the command below and look for vault-plugin-secrets-dkv in the list:

vhsm plugin list | grep vault-plugin-secrets-dkv

We'll enable an endpoint instance of the vault-plugin-secrets-dkv extension at the path buckypaper in the namespace some-ns:

vhsm secrets enable -namespace=some-ns -path=buckypaper vault-plugin-secrets-dkv

Run the command below to confirm that the endpoint has been enabled correctly:

vhsm secrets list -namespace=some-ns

Example output:

Path           Type                        Accessor                             Description
----           ----                        --------                             -----------
buckypaper/    vault-plugin-secrets-dkv    vault-plugin-secrets-dkv_86fad1be    n/a
cubbyhole/     ns_cubbyhole                ns_cubbyhole_8ea08522                per-token private secret storage
identity/      ns_identity                 ns_identity_45775dad                 identity store
sys/           ns_system                   ns_system_ae8d65fd                   system endpoints used for control, policy and debugging

Set up the cVM with the attestation client

Set the address

export VAULT_ADDR=http://localhost:8200
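Before downloading the client, you can optionally confirm that the vHSM is reachable at this address; the check below assumes the vHSM exposes Vault's standard sys/health endpoint:

curl -s "${VAULT_ADDR}/v1/sys/health"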

Download the client

The attestation client is contained in the vhsm binary. It only generates and verifies attestation reports and outputs the resulting token; provisioning the secrets into the workload has to be done separately.

Download vhsm and make it executable:

wget -c -q -O vhsm "${VAULT_ADDR}"/static/vhsm
chmod +x vhsm

Create the attestation

Run the client

sudo -E ./vhsm nitride attestation \
    -format=json \
    -provider=azure-sev-snp-vtpm \
    report <uuid>

After successful attestation, the plugin generates an access token for the namespace that grants access to the buckypaper secrets engine in charge of generating the admin password.

Example Output:

{
  "data": {
    "created": 1727218322,
    "description": "A small Azure VM running MariaDB",
    "events": "http://localhost:8000",
    "name": "Azure MariaDB",
    "namespace": "some-ns",
    "nonce": "",
    "policy": "Azure_DC2as_v5-platform_only",
    "updated": 1727220023,
    "uuid": "fd11edb1-718f-4d68-a0f6-30b2c4cd9c79"
  },
  "warnings": null,
  "auth": {
    "client_token": "hvs.CAESI...",
    "accessor": "2mjVgP21kSKIniu4WPOykg2J.some-ns",
    "policies": [
      "default",
      "enclaive-attested"
    ],
    "metadata": {
      "measurement": "122d0d6fcd1b714a7c34f32d0dc9262ab08976cc8e22132b40ef2569f1dcc47b71ba617debed11563389d7a3f8481d99",
      "namespace": "some-ns",
      "workload": "fd11edb1-718f-4d68-a0f6-30b2c4cd9c79"
    },
    "orphan": true,
    "lease_duration": 2764800,
    "renewable": false
  }
}

.auth.client_token is the access token for the namespace some-ns.
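If you are scripting this step, you can extract the token from the JSON output directly with jq instead of copying it by hand; a small sketch using the same command as above:

sudo -E ./vhsm nitride attestation \
    -format=json \
    -provider=azure-sev-snp-vtpm \
    report <uuid> | jq -r .auth.client_token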

The policy attached to the token is only a hook for additional external configuration. The actual access policy is inlined into the token and currently can't be changed or viewed.

Example Webhook:

POST / HTTP/1.1
Host: localhost:8000
User-Agent: Go-http-client/1.1
Content-Length: 73525
Content-Type: application/json
Accept-Encoding: gzip

{
  "Success": true,
  "Message": "success",
  "Instance": "fd11edb1-718f-4d68-a0f6-30b2c4cd9c79",
  "Quote": "[base64-of-the-report]"
}

Provision the password from vHSM

All that remains is to ask the buckypaper secrets engine to generate a high-entropy password, retrieve it over a secure channel, and source it into the enclave.

First, login with the attestation token:

vhsm login <client_token>

Second, request the secret that will be dynamically generated:

vhsm kv get -mount=buckypaper \
    <uuid>/env/MARIADB_PASSWORD

Example Output:

============================== Secret Path ==============================
buckypaper/data/fd11edb1-718f-4d68-a0f6-30b2c4cd9c79/env/MARIADB_PASSWORD

======= Metadata =======
Key                Value
---                -----
created_time       2024-09-24T23:28:33.05967231Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

==== Data ====
Key      Value
---      -----
value    AO2U4SXSCN2LCCXUYCRM4OEVETPCH3ZOHTNAKA5VO7Z7LLL3CRMA
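Instead of copying the value by hand, you can read it straight into an environment variable and reuse it in the Docker commands below; this assumes the vhsm CLI supports Vault's -field flag for kv get:

export MARIADB_PASSWORD=$(vhsm kv get -mount=buckypaper -field=value <uuid>/env/MARIADB_PASSWORD)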

After installing Docker, you can start a MariaDB container with this password:

docker run -d --name mariadb \
    -e MARIADB_ROOT_PASSWORD=AO2U4SXSCN2LCCXUYCRM4OEVETPCH3ZOHTNAKA5VO7Z7LLL3CRMA \
    mariadb:latest

Verify that the password is set:

docker exec -it mariadb mariadb -pAO2U4SXSCN2LCCXUYCRM4OEVETPCH3ZOHTNAKA5VO7Z7LLL3CRMA

Example Output:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 7
Server version: 11.5.2-MariaDB-ubu2404 mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

Congratulations! You have successfully attested your first cVM.

After walking through the process manually, you might now want to automate it.

Note that the documentation for the automated setup below has not been fully updated yet.

Cloud-Init: Putting it all together

Cloud-init is a popular method for customizing a Linux VM during its initial boot. It allows you to automate tasks like installing packages, configuring users, setting up security, and writing files. Since cloud-init runs as part of the first boot process, no additional agents or steps are needed to apply configurations.

One of cloud-init's strengths is its compatibility across different distributions. Instead of specifying commands like apt-get install or yum install for package installation, you simply define a list of packages, and cloud-init automatically uses the appropriate package manager for the selected distribution.
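For example, the following minimal cloud-config installs the same prerequisite packages declaratively that the full script below installs via apt-get:

#cloud-config
packages:
  - ca-certificates
  - curl
  - gnupg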

Create a cloud-init config file

At your bash prompt or in the Cloud Shell, create a file named cloud-init.txt and paste the following configuration. For example, you can type sensible-editor cloud-init.txt to create the file and choose from the available editors. Ensure that the entire cloud-init file is copied correctly, paying special attention to the first line:

#cloud-config
runcmd:
  - |
    (
    set -eu

    # Variables
    export MARIADB_PASSWORD=${maria_password}
    export VERSION=10.8.2

    # Update packages and install necessary dependencies
    sudo apt-get update
    sudo apt-get install -y ca-certificates curl gnupg

    # Add the official Docker GPG key
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    sudo chmod a+r /etc/apt/keyrings/docker.gpg

    # Add the official Docker repository
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    # Update packages and install Docker Engine, Docker CLI, and Containerd
    sudo apt-get update
    sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    # Start and enable Docker
    sudo systemctl enable docker
    sudo systemctl start docker

    # MARIA
    sudo docker run --name mariadb -d \
    -e MARIADB_ROOT_PASSWORD=$MARIADB_PASSWORD \
    -v /var/lib/mysql:/var/lib/mysql \
    -p 3306:3306 \
    mariadb:$VERSION
    
    # Variables
    export ENCLAIVE_WORKLOAD=5432087a-726a-4600-b81c-13988c96957a
    export NITRIDE_ADDR=https://myvhsmdomain.com
    export VAULT_ADDR=https://myvhsmdomain.com
   
    # Download vHSM cli
    COMMAND="curl -s -o"

    $COMMAND client "$ENCLAIVE_NITRIDE/static/vhsm"
    chmod +x client provision
    
    # Attest to the Nitride endpoint and obtain the token
    ./vhsm nitride attestation -format=json \
    -provider=azure-sev-snp-vtpm \
    report "$ENCLAIVE_WORKLOAD"
    
    ) >enclaive.log 2>&1
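Because the script redirects its output, you can inspect both the general cloud-init log and the script's own log after the first boot; the enclaive.log path assumes runcmd executes from the root directory, which may differ on your image:

sudo cat /var/log/cloud-init-output.log
sudo cat /enclaive.log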

Create a confidential "buckypaper" VM

Before creating a cVM, start by creating a resource group using az group create. The following example sets up a resource group named myResourceGroupAutomate in the eastus location:

az group create --name myResourceGroupAutomate --location eastus

Next, create a VM using az vm create, and pass in your cloud-init configuration file with the --custom-data parameter. If the cloud-init.txt file is saved outside of your current directory, be sure to provide the full path. The following example creates a VM named myAutomatedVM:

az vm create \
    --resource-group myResourceGroupAutomate \
    --name myAutomatedVM \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys \
    --custom-data cloud-init.txt
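Note that the command above creates a general-purpose VM. To get a DCas_v5-class confidential VM, you would additionally pin the size and security type; the following is a sketch assuming a recent Azure CLI, and the CVM-capable Ubuntu image URN is an assumption you should verify for your region:

az vm create \
    --resource-group myResourceGroupAutomate \
    --name myAutomatedVM \
    --size Standard_DC2as_v5 \
    --security-type ConfidentialVM \
    --os-disk-security-encryption-type VMGuestStateOnly \
    --enable-vtpm true \
    --enable-secure-boot true \
    --image Canonical:0001-com-ubuntu-confidential-vm-jammy:22_04-lts-cvm:latest \
    --admin-username azureuser \
    --generate-ssh-keys \
    --custom-data cloud-init.txt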

It may take a few minutes for the VM to be created, packages to install, and the container to start. Background tasks continue running after the Azure CLI returns to the prompt, so the MariaDB container might need a couple of additional minutes to become reachable. Once the VM is created, note the publicIpAddress displayed by the Azure CLI; this is the address you use to reach the VM and, once the port is opened, the MariaDB instance.

To allow external traffic to reach MariaDB on your VM (port 3306 in the example above), open the port using az vm open-port:

az vm open-port --port <port> --resource-group myResourceGroupAutomate --name myAutomatedVM
