Redis in cK8s

To deploy the Redis SGX application and access it with the Redis CLI in Kubernetes, follow these steps:

  1. Apply the YAML file for the Redis service application:

kubectl apply -f apps/redis/redis.yaml

This will deploy the actual SGX application that you want to use.

  2. Apply the YAML file for the Redis CLI demonstration client:

kubectl apply -f apps/redis/redis-cli.yaml

This will deploy a client container that provides easy access to the Redis CLI using the attested CA certificate.

  3. Copy the certs directory to the enclaive-redis-cli container:

kubectl cp certs/ enclaive-redis-cli:/data/

  4. Access the enclaive-redis-cli container's shell:

kubectl exec -it enclaive-redis-cli -- bash

  5. Connect to the Redis service using the Redis CLI command:

redis-cli -h enclaive-redis-sgx --tls --cacert certs/sgx-ca.pem --cert certs/sgx-cert.pem --key certs/sgx-key.pem

If everything goes as expected, the Redis CLI should connect to the Redis service application that has been attested and provisioned via Vault.
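As an optional sanity check, a few standard Redis commands inside the connected CLI session confirm that the TLS-protected connection works end to end (assuming no additional authentication is configured; the key name is purely illustrative and the replies shown are the expected responses):

enclaive-redis-sgx:6379> PING
PONG
enclaive-redis-sgx:6379> SET demo "hello"
OK
enclaive-redis-sgx:6379> GET demo
"hello"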

Configuration of enclaive Redis-SGX Container

Additionally, if you want to enclave your own applications using Gramine and make them compatible with the enclaive attestation infrastructure built on Vault, you need to configure them in the same way as the enclaive Redis-SGX container:

The Gramine manifest of the container should include at least the following values:

# Gramine entrypoint: the enclaive premain, which runs before the actual application
libos.entrypoint = "/app/premain"
# Application command line started after the premain step
loader.argv = [ "/usr/bin/redis-server", "/etc/redis.conf" ]
# Workload name used by the enclaive attestation infrastructure
loader.env.ENCLAIVE_NAME = "enclaive-redis-sgx"
# Address of the attestation backend (e.g. Vault), passed through from the host environment
loader.env.ENCLAIVE_SERVER = { passthrough = true }
# In-enclave tmpfs into which the TLS credentials are provisioned
fs.mounts = [ { path = "/secrets/tmp", type = "tmpfs" } ]
sgx.enclave_size = "1G"
# Use DCAP-based remote attestation
sgx.remote_attestation = "dcap"

Ideally, the memory size of the enclave should be set to 2G for better startup stability.
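With that recommendation applied, the corresponding manifest line would read:

sgx.enclave_size = "2G"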

The TLS credentials are stored at the following paths within the container:

  • Public Certificate: /secrets/tmp/cert.pem

  • Private Key: /secrets/tmp/key.pem

  • Cluster CA: /secrets/tmp/ca.pem

You can use these paths for your application configuration.
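For example, a minimal TLS section of a redis.conf referencing these paths could look as follows (a sketch only; the port choice and further hardening options depend on your deployment):

port 0
tls-port 6379
tls-cert-file /secrets/tmp/cert.pem
tls-key-file /secrets/tmp/key.pem
tls-ca-cert-file /secrets/tmp/ca.pem
tls-auth-clients yes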

Please note that forked processes do not share temporary filesystems and therefore cannot access the TLS credentials.
