Exposure Notification Reference Key Server

Deployment Guide

This page explains how to build and deploy servers within the Exposure Notification Reference implementation.

The Exposure Notification Reference implementation includes multiple services. Each service’s main package is located in the cmd directory.

Each service is deployed in the same way, but may accept different configuration options. Configuration options are specified via environment variables.

| Service | Folder | Description |
|---------|--------|-------------|
| cleanup-export | cmd/cleanup-export | Deletes old exported files published by the exposure key export service |
| cleanup-exposure | cmd/cleanup-exposure | Deletes old exposure keys |
| export | cmd/export | Publishes exposure keys |
| exposure | cmd/exposure | Stores infection keys |
| federation-in | cmd/federation-in | Pulls federation results from federation partners |
| federation-out | cmd/federation-out | Listens for gRPC federation requests |
| generate | cmd/generate | Sample service that generates data |
| key-rotation | cmd/key-rotation | Generates new revision keys and retires old ones |
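
Because all configuration comes from environment variables, a service can be smoke-tested locally by exporting its settings and running its main package. The following is only a sketch: the DB_* values are placeholders for a database you control, and DB_HOST is an assumed variable name not documented on this page.

# Placeholder database settings for a local test run.
export DB_HOST=localhost
export DB_PORT=5432
export DB_USER=en-server
export DB_PASSWORD=secret
export DB_NAME=main

# Run the exposure service from its main package.
go run ./cmd/exposure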

Before you begin

To build and deploy the Exposure Notification Key Server services, you need to install and configure the following:

  1. Download and install the Google Cloud SDK.

    For more information on installation and setup, see the Cloud SDK Quickstarts.
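
    After installing the SDK, authenticate and set the default project for your deployment. This is a sketch; "my-project" is a placeholder for your own project ID:

    # Authenticate gcloud and select the target project.
    gcloud auth login
    gcloud config set project my-project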

Provisioning infrastructure with Terraform

You can use Terraform to provision the initial infrastructure, database, service accounts, and first deployment of the services on Cloud Run. Terraform does not manage the Cloud Run services after their initial creation!

See Deploying with Terraform for more information.
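
At a high level, the Terraform flow is the standard init/plan/apply cycle run from the terraform/ directory. This is a sketch and assumes your gcloud credentials are available to Terraform:

# Provision the infrastructure from the terraform/ directory.
cd terraform/
terraform init
terraform plan
terraform apply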

Running services

While Terraform does an initial deployment of the services, it does not manage the Cloud Run services beyond their initial creation. If you make changes to the code, you need to build, deploy, and promote new services. The general order of operations is:

  1. Build - this is the phase where the code is bundled into a container image and pushed to a registry.

  2. Deploy - this is the phase where the container image is deployed onto Cloud Run, but is not receiving any traffic.

  3. Promote - this is the phase where a deployed container image begins receiving all or a percentage of traffic.

Building

Build new services by using the script at ./scripts/build, specifying the following values:

PROJECT_ID="my-project" \
SERVICES="export" \
./scripts/build

Expect this process to take 3-5 minutes.

Deploying

Deploy an already-built container image using the script at ./scripts/deploy, specifying the following values:

PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
TAG="20200521084829" \
./scripts/deploy

Expect this process to take 1-2 minutes.

Promoting

Promote an already-deployed service to begin receiving production traffic using the script at ./scripts/promote, specifying the following values:

PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
./scripts/promote

Expect this process to take 1-2 minutes.
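
Putting the three phases together, a complete rollout of a single service looks roughly like the following sketch. The TAG value is a placeholder; substitute the tag produced by your own build:

# Build, then deploy without traffic, then shift traffic to the new revision.
PROJECT_ID="my-project" SERVICES="export" ./scripts/build
PROJECT_ID="my-project" REGION="us-central1" SERVICES="export" TAG="20200521084829" ./scripts/deploy
PROJECT_ID="my-project" REGION="us-central1" SERVICES="export" ./scripts/promote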

Tracing services

To enable distributed tracing, ensure your environment sets the following variables:

| Variable | Values | Comment |
|----------|--------|---------|
| OBSERVABILITY_EXPORTER | One of "stackdriver", "prometheus", or "ocagent". If unset, no exporting is done. | When using "prometheus", a METRICS_PORT environment variable should also be set. |
| PROJECT_ID | The project ID of the Google Cloud Platform project on which this application is deployed. | Required if you use "stackdriver". |
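
For example, a minimal Prometheus setup might export the following (the port number is an arbitrary placeholder):

# Export metrics to Prometheus; 9090 is a placeholder port.
export OBSERVABILITY_EXPORTER=prometheus
export METRICS_PORT=9090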

Running migrations

On Google Cloud

To migrate the production database, use the script at ./scripts/migrate. This script triggers a Cloud Build invocation, which uses the Cloud SQL Proxy to run the database migrations. It reads the following environment variables: PROJECT_ID, DB_CONN, and DB_PASS_SECRET.

If you created the infrastructure using Terraform, you can get these values by running terraform output from inside the terraform/ directory:

PROJECT_ID=$(terraform output project)
DB_CONN=$(terraform output db_conn)
DB_PASS_SECRET=$(terraform output db_pass_secret)
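
With those values assigned, export them so the migration script can read them, then run it. This is a sketch of the invocation:

# Make the values visible to ./scripts/migrate, then run it.
export PROJECT_ID DB_CONN DB_PASS_SECRET
./scripts/migrate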

On a custom setup

If you did not use the Terraform configurations to provision your server, or if you are running your own Postgres server, follow these steps:

  1. Download and install the migrate tool.

  2. Construct the database URL for your database. This is usually of the format:

    postgres://DB_USER:DB_PASSWORD@DB_HOST:DB_PORT/DB_NAME?sslmode=require
    
  3. Run the migrate command with this database URL:

    migrate \
      -database "YOUR_DB_URL" \
      -path ./migrations \
      up
    

Configuring the server

This repository includes a configuration tool which provides a browser-based interface for manipulating the database-backed configuration. This admin tool does not have authentication / authorization and should not be deployed on the public Internet!

  1. Export the database connection parameters:

    export DB_CONN=...
    export DB_USER=...
    export DB_PASSWORD="secret://..."
    export DB_PORT=...
    export DB_NAME=...
    

    If you used Terraform to provision the infrastructure:

    cd terraform/
    export DB_CONN=$(terraform output db_conn)
    export DB_USER=$(terraform output db_user)
    export DB_PASSWORD="secret://$(terraform output db_pass_secret)"
    export DB_PORT=5432
    export DB_NAME=$(terraform output db_name)
    cd ../
    
  2. Configure the Cloud SQL proxy:

    If you are using Cloud SQL, start the proxy locally:

    cloud_sql_proxy -instances=$DB_CONN=tcp:$DB_PORT &
    

    And disable SSL verification:

    # Cloud SQL uses a local proxy and handles TLS communication automatically
    export DB_SSLMODE=disable
    
  3. Start the admin console:

    go run ./tools/admin-console
    
  4. Open a browser to localhost:8080.

    Remember, you are editing the live configuration of the database!

Configuring components

Blob storage

The blob storage component defines where and how export files are written. The following configurations are available:

| Name | BLOBSTORE value | Description |
|------|-----------------|-------------|
| AWS S3 | AWS_S3 | Store data in AWS S3. |
| Azure Blobstore | AZURE_BLOB_STORAGE | Store data in Azure Storage. |
| Google Cloud Storage* | GOOGLE_CLOUD_STORAGE | Store data in Google Cloud Storage. |
| Filesystem | FILESYSTEM | Store data on a filesystem. |
| Noop | NOOP | No files are written. |

* default
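
For example, to write export files to the local filesystem instead of the default Google Cloud Storage (a sketch; any provider-specific settings such as bucket names are configured separately and are not shown here):

# Write export files to the local filesystem.
export BLOBSTORE=FILESYSTEM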

Key management

The key management component is responsible for signing exports. The following configurations are available:

| Name | KEY_MANAGER value | Description |
|------|-------------------|-------------|
| AWS KMS | AWS_KMS | Perform signing using AWS KMS. |
| Google Cloud KMS* | GOOGLE_CLOUD_KMS | Perform signing using Google Cloud KMS. |
| HashiCorp Vault | HASHICORP_VAULT | Perform signing using HashiCorp Vault. |
| Noop | NOOP | No keys are managed. |

* default
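
For example, to sign exports with AWS KMS instead of the default (a sketch; the signing key resource itself is configured separately and is not shown here):

# Sign exports using AWS KMS rather than the default Google Cloud KMS.
export KEY_MANAGER=AWS_KMS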

Secrets management

The secrets management component is responsible for acquiring secrets. The following configurations are available:

| Name | SECRET_MANAGER value | Description |
|------|----------------------|-------------|
| AWS Secrets Manager | AWS_SECRETS_MANAGER | Resolve with Secrets Manager. |
| Azure Key Vault | AZURE_KEY_VAULT | Resolve with Key Vault. |
| Google Secret Manager* | GOOGLE_SECRET_MANAGER | Resolve with Secret Manager. |
| HashiCorp Vault | HASHICORP_VAULT | Resolve with Vault. |
| Noop | NOOP | No secrets are resolved. |

* default

In the interest of performance, secrets are TTL-cached to keep the secret manager out of most hot paths. Configure this TTL with SECRET_CACHE_TTL, expressed as a duration string like "5m" or "15s". The default cache time is 5 minutes, and lower values are strongly discouraged.
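
For example, to resolve secrets with HashiCorp Vault and hold them in the cache for ten minutes (a sketch illustrating the duration format):

# Resolve secrets with Vault; cache each secret for 10 minutes.
export SECRET_MANAGER=HASHICORP_VAULT
export SECRET_CACHE_TTL=10m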

Observability

The observability component is responsible for metrics. The following configurations are available:

| Name | OBSERVABILITY_EXPORTER value | Description |
|------|------------------------------|-------------|
| OpenCensus Agent | OCAGENT | Use OpenCensus. |
| Stackdriver* | STACKDRIVER | Use Stackdriver. |

* default
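
For example, to set the default Stackdriver exporter explicitly (a sketch; per the tracing section above, PROJECT_ID is required for Stackdriver, and "my-project" is a placeholder):

# Export metrics and traces to Stackdriver.
export OBSERVABILITY_EXPORTER=STACKDRIVER
export PROJECT_ID=my-project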
