This page explains how to build and deploy servers within the Exposure Notification Reference implementation.

The Exposure Notification Reference implementation includes multiple services. Each service’s main package is located in the `cmd` directory. Each service is deployed in the same way, but may accept different configuration options. Configuration options are specified via environment variables.
| Service | Folder | Description |
|---|---|---|
| cleanup-export | cmd/cleanup-export | Deletes old exported files published by the exposure key export service |
| cleanup-exposure | cmd/cleanup-exposure | Deletes old exposure keys |
| debugger | cmd/debugger | Server-side debugging component; do not deploy in production |
| export | cmd/export | Publishes exposure keys |
| exposure | cmd/exposure | Stores infection keys |
| federation-in | cmd/federation-in | Pulls federation results from federation partners |
| federation-out | cmd/federation-out | Listens for gRPC federation requests |
| generate | cmd/generate | Sample service that generates data |
| key-rotation | cmd/key-rotation | Generates new revision keys and retires old ones |
| jwks | cmd/jwks | Updates any HealthAuthority keys with public JWKS endpoints |
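
Because each service is a standalone `main` package under `cmd`, you can also compile one directly with the Go toolchain. A minimal sketch (the service choice and output path are illustrative):

```sh
# Illustrative: compile the exposure service's main package from cmd/exposure.
go build -o bin/exposure ./cmd/exposure
```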
To build and deploy the Exposure Notification Key Server services, you need to install and configure the following:

- Download and install the Google Cloud SDK. For more information on installing and setting it up, see the Cloud SDK Quickstarts.
You can use Terraform to provision the initial infrastructure, database, service accounts, and the first deployment of the services on Cloud Run. Terraform does not manage the Cloud Run services after their initial creation! See Deploying with Terraform for more information.

Because Terraform does not manage the Cloud Run services beyond their initial creation, you need to build, deploy, and promote new services yourself whenever you change the code. The general order of operations is:
1. Build - the code is bundled into a container image and pushed to a registry.
2. Deploy - the container image is deployed onto Cloud Run, but does not yet receive traffic.
3. Promote - the deployed container image begins receiving all or a percentage of traffic.
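
As a sketch, the full build-deploy-promote cycle for a single service might look like the following, assuming the scripts read the environment variables documented below (project, region, service, and tag values are illustrative):

```sh
# Illustrative end-to-end flow for the "export" service.
export PROJECT_ID="my-project"
export REGION="us-central1"
export SERVICES="export"
export TAG="20200521084829"

./scripts/build    # bundle the code into a container image and push it
./scripts/deploy   # deploy the image to Cloud Run without shifting traffic
./scripts/promote  # shift traffic to the newly deployed revision
```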
Build new services by using the script at `./scripts/build`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `SERVICES` (required) - comma-separated list of names of the services to build, or “all” to build all. See the list of services in the table above.
- `TAG` (optional) - tag to use for the images. If not specified, a git-sha-based short tag is used.
PROJECT_ID="my-project" \
SERVICES="export" \
./scripts/build
Expect this process to take 3-5 minutes.
Deploy an already-built container image using the script at `./scripts/deploy`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `REGION` (required) - region in which to deploy the services.
- `SERVICES` (required) - comma-separated list of names of the services to deploy, or “all” to deploy all. Note that if you specify multiple services, they must use the same tag.
- `TAG` (required) - tag of the deployed image (e.g. abcd1234).
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
TAG="20200521084829" \
./scripts/deploy
Expect this process to take 1-2 minutes.
Promote an already-deployed service to begin receiving production traffic using the script at `./scripts/promote`, specifying the following values:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `REGION` (required) - region in which to promote the services.
- `SERVICES` (required) - comma-separated list of names of the services to promote, or “all” to promote all. Note that if you specify multiple services, the revision must be “LATEST”.
- `REVISION` (optional) - revision of the service to promote, usually the output of a deployment step. Defaults to “LATEST”.
- `PERCENTAGE` (optional) - percentage of traffic to shift to the new revision. Defaults to “100”.
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
./scripts/promote
Expect this process to take 1-2 minutes.
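
To shift only part of the traffic to a specific revision (a canary-style rollout), set `REVISION` and `PERCENTAGE` explicitly. The revision name below is illustrative; use the value reported by your deploy step:

```sh
# Illustrative canary promotion: send 10% of traffic to one revision.
PROJECT_ID="my-project" \
REGION="us-central1" \
SERVICES="export" \
REVISION="export-00042-abc" \
PERCENTAGE="10" \
./scripts/promote
```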
To migrate the production database, use the script at `./scripts/migrate`. This script triggers a Cloud Build invocation which uses the Cloud SQL Proxy to run the database migrations, and it uses the following environment variables:

- `PROJECT_ID` (required) - your Google Cloud project ID.
- `DB_CONN` (required) - your Cloud SQL connection name.
- `DB_PASSWORD` (required) - the reference to the secret where the database password is stored in Secret Manager (e.g. secret://projects/...).
- `DB_NAME` (default: “main”) - the name of the database against which to run migrations.
- `DB_USER` (default: “notification”) - the username with which to authenticate.
- `COMMAND` (default: “up”) - the migration command to run.
If you created the infrastructure using Terraform, you can get these values by running `terraform output` from inside the `terraform/` directory:

```sh
PROJECT_ID=$(terraform output project)
DB_CONN=$(terraform output db_conn)
DB_PASSWORD=$(terraform output db_password)
```
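
With those variables set, a minimal invocation of the script could look like this sketch; all values are illustrative, and the secret reference simply follows the secret://projects/... pattern described above:

```sh
# Illustrative values only; substitute your own project, Cloud SQL
# connection name, and Secret Manager secret reference.
PROJECT_ID="my-project" \
DB_CONN="my-project:us-central1:my-instance" \
DB_PASSWORD="secret://projects/my-project/secrets/db-password" \
./scripts/migrate
```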
If you did not use the Terraform configurations to provision your server, or if you are running your own Postgres server, run the migrations manually:

1. Download and install the migrate tool.

2. Construct the database URL for your database. This is usually of the format:

   ```text
   postgres://DB_USER:DB_PASSWORD@DB_HOST:DB_PORT/DB_NAME?sslmode=require
   ```

3. Run the migrate command with this database URL:

   ```sh
   migrate \
     -database "YOUR_DB_URL" \
     -path ./migrations \
     up
   ```
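
For example, with the default “notification” user and “main” database from above, the command might look like the following; the host, port, and password are illustrative:

```sh
# Illustrative: migrate a Postgres database reachable on localhost.
migrate \
  -database "postgres://notification:my-password@127.0.0.1:5432/main?sslmode=require" \
  -path ./migrations \
  up
```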
This repository includes a configuration tool which provides a browser-based interface for manipulating the database-backed configuration. This admin tool does not have authentication / authorization and should not be deployed on the public Internet!
Export the database connection parameters:

```sh
export DB_CONN=...
export DB_USER=...
export DB_PASSWORD="secret://..."
export DB_PORT=...
export DB_NAME=...
```
If you used Terraform to provision the infrastructure:

```sh
cd terraform/
export DB_CONN=$(terraform output db_conn)
export DB_USER=$(terraform output db_user)
export DB_PASSWORD="secret://$(terraform output db_password)"
export DB_PORT=5432
export DB_NAME=$(terraform output db_name)
cd ../
```
Configure the Cloud SQL Proxy. If you are using Cloud SQL, start the proxy locally:

```sh
cloud_sql_proxy -instances=$DB_CONN=tcp:$DB_PORT &
```

Then disable SSL verification:

```sh
# Cloud SQL uses a local proxy and handles TLS communication automatically
export DB_SSLMODE=disable
```
Start the admin console:

```sh
go run ./tools/admin-console
```

Open a browser to http://localhost:8080.
Remember, you are editing the live configuration of the database!
The blob storage component defines where and how export files are written. The following configurations are available:
| Name | Build tag | BLOBSTORE value | Description |
|---|---|---|---|
| AWS S3 | aws | AWS_S3 | Store data in AWS S3. |
| Azure Blobstore | azure | AZURE_BLOB_STORAGE | Store data in Azure Storage. |
| Google Cloud Storage | google | GOOGLE_CLOUD_STORAGE | Store data in Google Cloud Storage. |
| Filesystem | (none) | FILESYSTEM | Store data on a filesystem. |
| Memory* | (none) | MEMORY | Store data in memory. |

\* default
Note that you must compile the server with the appropriate build tag in order to use certain blob stores:

```sh
go build -tags=TAG
```
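
As an illustrative sketch, compiling the export service with Google Cloud Storage support and selecting that blobstore at runtime might look like this (the output path is an example, and other required configuration is omitted):

```sh
# Illustrative: the "google" build tag compiles in the GOOGLE_CLOUD_STORAGE
# blobstore implementation listed above.
go build -tags=google -o bin/export ./cmd/export

# Select the implementation at runtime; other required configuration omitted.
BLOBSTORE="GOOGLE_CLOUD_STORAGE" ./bin/export
```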
The key management component is responsible for signing and verifying data. The following configurations are available:
| Name | Build tag | KEY_MANAGER value | Description |
|---|---|---|---|
| AWS KMS | aws | AWS_KMS | Perform signing using AWS KMS. |
| Azure Keyvault | azure | AZURE_KEY_VAULT | Perform signing using Azure Keyvault. |
| Google Cloud KMS | google | GOOGLE_CLOUD_KMS | Perform signing using Google Cloud KMS. |
| HashiCorp Vault | vault | HASHICORP_VAULT | Perform signing using HashiCorp Vault. |
| Filesystem* | (none) | FILESYSTEM | Keys are generated and stored on the local filesystem. |

\* default
Note that you must compile the server with the appropriate build tag in order to use certain key managers:

```sh
go build -tags=TAG
```
The secrets management component is responsible for acquiring secrets. The following configurations are available:
| Name | Build tag | SECRET_MANAGER value | Description |
|---|---|---|---|
| AWS Secrets Manager | aws | AWS_SECRETS_MANAGER | Resolve with AWS Secrets Manager. |
| Azure Key Vault | azure | AZURE_KEY_VAULT | Resolve with Azure Key Vault. |
| Google Secret Manager | google | GOOGLE_SECRET_MANAGER | Resolve with Google Secret Manager. |
| HashiCorp Vault | vault | HASHICORP_VAULT | Resolve with HashiCorp Vault. |
| Filesystem | (none) | FILESYSTEM | Resolve secrets using a local filesystem store. |
| Memory* | (none) | IN_MEMORY | Resolve secrets using an in-memory store. |

\* default
Note that you must compile the server with the appropriate build tag in order to use certain secret managers:

```sh
go build -tags=TAG
```
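
Since the blobstore, key manager, and secret manager tables above share the same provider build tags, one tag (or a comma-separated list of tags) selects all the implementations you need. An illustrative sketch:

```sh
# Illustrative: a single "google" tag enables the Google Cloud blobstore,
# key manager, and secret manager implementations listed above.
go build -tags=google ./cmd/export

# Tags can be combined if you mix providers.
go build -tags=google,vault ./cmd/export
```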
In the interest of performance, secrets are TTL-cached to eliminate the secret manager from most hot paths. Configure this TTL with `SECRET_CACHE_TTL`, which is expressed as a time duration like “5m” or “15s”. The default cache time is 5 minutes, and lower values are strongly discouraged.
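
For example, to lengthen the cache window (the value shown is illustrative):

```sh
# Illustrative: cache resolved secrets for 10 minutes.
export SECRET_CACHE_TTL="10m"
```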
The observability component is responsible for metrics. The following configurations are available:
| Name | OBSERVABILITY_EXPORTER value | Description |
|---|---|---|
| Stackdriver* | STACKDRIVER | Use Stackdriver. NOTE: when using STACKDRIVER, the environment variable PROJECT_ID must also be set. |
| Prometheus | PROMETHEUS | Use Prometheus. NOTE: when using PROMETHEUS, the environment variable METRICS_PORT must also be set. |
| OpenCensus Agent | OCAGENT | Use OpenCensus. |
| Noop | NOOP | No metrics are exported. |

\* default
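
For example, a minimal sketch of enabling the Prometheus exporter (the port value is illustrative):

```sh
# Illustrative: export metrics via Prometheus; METRICS_PORT is required.
export OBSERVABILITY_EXPORTER="PROMETHEUS"
export METRICS_PORT="9090"
```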
The admin console is deployed as a Cloud Run service, protected by Cloud IAM. To access it, you need:

- The URL to the service, which you can find in the Terraform output or by looking for admin-console in the Cloud Run web UI.
- The roles/run.invoker permission on the Cloud Run service.
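
If you need to grant that permission, a hedged sketch using gcloud could look like the following; the service name, region, project, and member are all illustrative and assume the service is deployed as admin-console:

```sh
# Illustrative: grant roles/run.invoker on the admin-console Cloud Run service.
gcloud run services add-iam-policy-binding admin-console \
  --project "my-project" \
  --region "us-central1" \
  --platform "managed" \
  --member "user:you@example.com" \
  --role "roles/run.invoker"
```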
Start the proxy with the URL to the Admin Console:

```sh
cloud-run-proxy -host https://admin-console-fda.a.run.app # example URL
```

Point your browser to http://localhost:8080.
The debugger is deployed as a Cloud Run service, protected by Cloud IAM. It displays debugging information including the environment variables set on all services, the list of authorized apps, the list of export configs and signature infos, and recent batch files. The debugger is most helpful when you’re building a custom Exposure Notifications app and are trying to debug issues. You should not deploy the debugger in production. To access a deployed debugger, you need:

- The URL to the service, which you can find in the Terraform output or by looking for debugger in the Cloud Run web UI.
- The roles/run.invoker permission on the Cloud Run service.
Start the proxy with the URL to the Debugger:

```sh
cloud-run-proxy -host https://debugger-fda.a.run.app # example URL
```

Point your browser to http://localhost:8080.