Exposure Notification Reference Key Server

Deployment Guide

This page explains how to build and deploy servers within the Exposure Notification Reference implementation.

The Exposure Notification Reference implementation includes multiple services. Each service’s main package is located in the cmd directory.

Each service is deployed in the same way, but may accept different configuration options. Configuration options are specified via environment variables.
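
For example, you can change an environment variable on an already-deployed Cloud Run service with gcloud; the service name, region, and variable value below are purely illustrative:

gcloud run services update exposure \
  --region=us-central1 \
  --update-env-vars=DB_NAME=main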

| Service | Folder | Description |
| --- | --- | --- |
| cleanup-export | cmd/cleanup-export | Deletes old exported files published by the exposure key export service |
| cleanup-exposure | cmd/cleanup-exposure | Deletes old exposure keys |
| debugger | cmd/debugger | Server-side debugging component; do not deploy in production |
| export | cmd/export | Publishes exposure keys |
| exposure | cmd/exposure | Stores infection keys |
| federation-in | cmd/federation-in | Pulls federation results from federation partners |
| federation-out | cmd/federation-out | gRPC federation requests listener |
| generate | cmd/generate | Sample service that generates data |
| key-rotation | cmd/key-rotation | Generates new revision keys and retires old ones |
| jwks | cmd/jwks | Updates any HealthAuthority keys with public jwks endpoints |

Before you begin

To build and deploy the Exposure Notification Key Server services, you need to install and configure the following:

  1. Download and install the Google Cloud SDK.

    For more information on installation and setup, see the Cloud SDK Quickstarts.
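
    After installing, a typical first-time setup is to authenticate and set your default project; the project ID below is an example:

    gcloud auth login
    gcloud config set project my-project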

Provisioning infrastructure with Terraform

You can use Terraform to provision the initial infrastructure, database, service accounts, and first deployment of the services on Cloud Run. Terraform does not manage the Cloud Run services after their initial creation!

See Deploying with Terraform for more information.
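
A minimal sketch of that flow, assuming the Terraform configuration lives in the terraform/ directory and exposes a project variable (see the Terraform guide for the full list of required variables):

cd terraform/
terraform init
terraform apply -var="project=my-project"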

Running services

While Terraform does an initial deployment of the services, it does not manage the Cloud Run services beyond their initial creation. If you make changes to the code, you need to build, deploy, and promote new services. The general order of operations is:

  1. Build - this is the phase where the code is bundled into a container image and pushed to a registry.

  2. Deploy - this is the phase where the container image is deployed onto Cloud Run, but is not receiving any traffic.

  3. Promote - this is the phase where a deployed container image begins receiving all or a percentage of traffic.

Building

Build new services by using the script at ./scripts/build, specifying the following values:

PROJECT_ID="my-project" \
SERVICES="export" \
./scripts/build

Expect this process to take 3-5 minutes.

Deploying

Deploy an already-built container image using the script at ./scripts/deploy, specifying the following values:

PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
TAG="20200521084829" \
./scripts/deploy

Expect this process to take 1-2 minutes.

Promoting

Promote an already-deployed service to begin receiving production traffic using the script at ./scripts/promote, specifying the following values:

PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
./scripts/promote

Expect this process to take 1-2 minutes.
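
To confirm which revision of a service is receiving traffic after a promotion, you can inspect the Cloud Run service with gcloud; the service name and region below are examples:

gcloud run services describe export \
  --region=us-central1 \
  --platform=managed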

Running migrations

On Google Cloud

To migrate the production database, use the script at ./scripts/migrate. This script triggers a Cloud Build invocation, which uses the Cloud SQL Proxy to run the database migrations. It expects the PROJECT_ID, DB_CONN, and DB_PASSWORD environment variables.

If you created the infrastructure using Terraform, you can get these values by running terraform output from inside the terraform/ directory:

PROJECT_ID=$(terraform output project)
DB_CONN=$(terraform output db_conn)
DB_PASSWORD=$(terraform output db_password)
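
Assuming ./scripts/migrate reads these values from the environment, a single invocation might look like this:

PROJECT_ID="$(terraform output project)" \
DB_CONN="$(terraform output db_conn)" \
DB_PASSWORD="$(terraform output db_password)" \
./scripts/migrate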

On a custom setup

If you did not use the Terraform configurations to provision your server, or if you are running your own Postgres server, complete the following steps:

  1. Download and install the migrate tool.

  2. Construct the database URL for your database. This is usually of the format:

    postgres://DB_USER:DB_PASSWORD@DB_HOST:DB_PORT/DB_NAME?sslmode=require
    
  3. Run the migrate command with this database URL:

    migrate \
      -database "YOUR_DB_URL" \
      -path ./migrations \
      up
    
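    For example, with hypothetical local connection values:

    DB_URL="postgres://notification:mypassword@127.0.0.1:5432/main?sslmode=require"
    migrate -database "$DB_URL" -path ./migrations up

    # Optionally confirm the applied schema version:
    migrate -database "$DB_URL" -path ./migrations version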

Configuring the server

This repository includes a configuration tool which provides a browser-based interface for manipulating the database-backed configuration. This admin tool does not have authentication / authorization and should not be deployed on the public Internet!

  1. Export the database connection parameters:

    export DB_CONN=...
    export DB_USER=...
    export DB_PASSWORD="secret://..."
    export DB_PORT=...
    export DB_NAME=...
    

    If you used Terraform to provision the infrastructure:

    cd terraform/
    export DB_CONN=$(terraform output db_conn)
    export DB_USER=$(terraform output db_user)
    export DB_PASSWORD="secret://$(terraform output db_password)"
    export DB_PORT=5432
    export DB_NAME=$(terraform output db_name)
    cd ../
    
  2. Configure the Cloud SQL proxy:

    If you are using Cloud SQL, start the proxy locally:

    cloud_sql_proxy -instances=$DB_CONN=tcp:$DB_PORT &
    

    And disable SSL verification:

    # Cloud SQL uses a local proxy and handles TLS communication automatically
    export DB_SSLMODE=disable
    
  3. Start the admin console:

    go run ./tools/admin-console
    
  4. Open a browser to localhost:8080.

    Remember, you are editing the live configuration of the database!

Configuring components

Blob storage

The blob storage component defines where and how export files are written. The following configurations are available:

| Name | Build tag | BLOBSTORE value | Description |
| --- | --- | --- | --- |
| AWS S3 | aws | AWS_S3 | Store data in AWS S3. |
| Azure Blobstore | azure | AZURE_BLOB_STORAGE | Store data in Azure Storage. |
| Google Cloud Storage | google | GOOGLE_CLOUD_STORAGE | Store data in Google Cloud Storage. |
| Filesystem | (none) | FILESYSTEM | Store data on a filesystem. |
| Memory* | (none) | MEMORY | Store data in memory. |

* default

Note that you must compile the server with the appropriate build tag in order to use certain blob storage backends:

go build -tags=TAG
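
For example, to build the export service with Google Cloud Storage support (the output path is illustrative):

go build -tags=google -o bin/export ./cmd/export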

Key management

The key management component is responsible for signing and verifying data. The following configurations are available:

| Name | Build tag | KEY_MANAGER value | Description |
| --- | --- | --- | --- |
| AWS KMS | aws | AWS_KMS | Perform signing using AWS KMS. |
| Azure Keyvault | azure | AZURE_KEY_VAULT | Perform signing using Azure Keyvault. |
| Google Cloud KMS | google | GOOGLE_CLOUD_KMS | Perform signing using Google Cloud KMS. |
| HashiCorp Vault | vault | HASHICORP_VAULT | Perform signing using HashiCorp Vault. |
| Filesystem* | (none) | FILESYSTEM | Keys are generated and stored on the local filesystem. |

* default

Note that you must compile the server with the appropriate build tag in order to use certain key managers:

go build -tags=TAG

Secrets management

The secrets management component is responsible for acquiring secrets. The following configurations are available:

| Name | Build tag | SECRET_MANAGER value | Description |
| --- | --- | --- | --- |
| AWS Secrets Manager | aws | AWS_SECRETS_MANAGER | Resolve with AWS Secrets Manager. |
| Azure Key Vault | azure | AZURE_KEY_VAULT | Resolve with Azure Keyvault. |
| Google Secret Manager | google | GOOGLE_SECRET_MANAGER | Resolve with Google Secret Manager. |
| HashiCorp Vault | vault | HASHICORP_VAULT | Resolve with HashiCorp Vault. |
| Filesystem | (none) | FILESYSTEM | Resolve secrets using a local filesystem store. |
| Memory* | (none) | IN_MEMORY | Resolve secrets using an in-memory store. |

* default

Note that you must compile the server with the appropriate build tag in order to use certain secret managers:

go build -tags=TAG

In the interest of performance, secrets are TTL-cached to keep the secret manager out of most hot paths. Configure this TTL with SECRET_CACHE_TTL, which is expressed as a duration string such as "5m" or "15s". The default cache time is 5 minutes, and lower values are strongly discouraged.
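
For example, to cache secrets for ten minutes:

export SECRET_CACHE_TTL="10m"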

Observability

The observability component is responsible for metrics. The following configurations are available:

| Name | OBSERVABILITY_EXPORTER value | Description |
| --- | --- | --- |
| Stackdriver* | STACKDRIVER | Use Stackdriver. NOTE: when using STACKDRIVER, environment variable PROJECT_ID must also be set. |
| Prometheus | PROMETHEUS | Use Prometheus. NOTE: when using PROMETHEUS, environment variable METRICS_PORT must also be set. |
| OpenCensus Agent | OCAGENT | Use OpenCensus. |
| Noop | NOOP | No metrics are exported. |

* default
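
For example, to have a service export metrics to Prometheus (the port value is illustrative):

export OBSERVABILITY_EXPORTER="PROMETHEUS"
export METRICS_PORT="9090"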

Running the admin console

The admin console is deployed as a Cloud Run service, protected by Cloud IAM. To access it, you need to forward authenticated requests through the cloud-run-proxy tool.

Start the proxy with the URL of the Admin Console:

cloud-run-proxy -host https://admin-console-fda.a.run.app # example URL

Point your browser to http://localhost:8080.

Running the debugger

The debugger is deployed as a Cloud Run service, protected by Cloud IAM. It displays debugging information, including the environment variables set on all services, the list of authorized apps, the list of export configs and signature infos, and recent batch files. The debugger is most helpful when you're building a custom Exposure Notifications app and are trying to debug issues. You should not deploy the debugger in production. To access a deployed debugger, you need to forward authenticated requests through the cloud-run-proxy tool.

Start the proxy with the URL of the Debugger:

cloud-run-proxy -host https://debugger-fda.a.run.app # example URL

Point your browser to http://localhost:8080.
