Turbinia is an open-source framework for deploying, managing, and running distributed forensic workloads.
helm repo add osdfir-charts https://google.github.io/osdfir-infrastructure/
helm install my-release osdfir-charts/turbinia
Note: By default, Turbinia is not externally accessible and can be reached via `kubectl port-forward` from within the cluster.
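For example, assuming the chart was installed with the release name my-release in the default namespace (the exact service name may differ in your setup; check with `kubectl get svc`), the Turbinia API and Web UI can be reached locally with:
kubectl --namespace default port-forward service/my-release-turbinia 8000:8000
Then browse to http://localhost:8000.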
For a quick start with a local Kubernetes cluster on your desktop, check out the getting started with Minikube guide.
This chart bootstraps a Turbinia deployment on a Kubernetes cluster using the Helm package manager.
Note: See GKE Installations for deploying to GKE.
The first step is to add the repo and then update to pick up any new changes.
helm repo add osdfir-charts https://google.github.io/osdfir-infrastructure/
helm repo update
To install the chart, specify any release name of your choice. For example, using my-release
as the release name, run:
helm install my-release osdfir-charts/turbinia
The command deploys Turbinia on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation, or see Installing for Production for a recommended production installation.
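If you want to review every configurable value before installing, you can dump the chart defaults to a local file first (a standard Helm workflow; the output file name here is just an example):
helm show values osdfir-charts/turbinia > turbinia-values.yaml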
Tip: You can override the default Turbinia configuration by placing a `turbinia.conf` config file at the root of the Helm chart. When choosing this option, pull and install the Helm chart locally.
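A minimal sketch of that workflow (assuming you already have a customized `turbinia.conf` in your working directory) looks like:
helm pull osdfir-charts/turbinia --untar
cp turbinia.conf turbinia/
helm install my-release ./turbinia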
Pull the chart locally, then `cd` into the `turbinia` directory and review the `values-production.yaml` file for a list of values that will be used for production.
helm pull osdfir-charts/turbinia --untar
Create a Turbinia GCP service account using the helper script in `tools/create-gcp-sa.sh` prior to installing the chart.
Install the chart with the base values in `values.yaml`, the production values in `values-production.yaml`, and set the appropriate values to enable GCP for Turbinia. Using a release name such as `my-release`, run:
helm install my-release ../turbinia \
-f values.yaml -f values-production.yaml \
--set gcp.enabled=true \
--set gcp.projectID=<GCP_PROJECT_ID> \
--set gcp.projectRegion=<GKE_CLUSTER_REGION> \
--set gcp.projectZone=<GKE_CLUSTER_ZONE>
Follow these steps to externally expose Turbinia and enable Google Cloud OIDC using the Oauth2 Proxy to control user access to Turbinia.
Create a global static IP address:
gcloud compute addresses create turbinia-webapps --global
Register a new domain or use an existing one, ensuring a DNS entry points to the IP created earlier.
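To look up the address you just reserved so you can create the DNS record, a standard gcloud query can be used:
gcloud compute addresses describe turbinia-webapps --global --format='get(address)'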
Create OAuth web client credentials following the Google Support guide. If using the CLI client, also create a Desktop/Native OAuth client. For the web client, set the authorized JavaScript origin to https://<DOMAIN_NAME>.com and the authorized redirect URI to https://<DOMAIN_NAME>.com/oauth2/callback/
Generate a cookie secret:
openssl rand -base64 32 | head -c 32 | base64
Store your new OAuth credentials in a K8s secret:
kubectl create secret generic oauth-secrets \
--from-literal=client-id=<WEB_CLIENT_ID> \
--from-literal=client-secret=<WEB_CLIENT_SECRET> \
--from-literal=cookie-secret=<COOKIE_SECRET> \
--from-literal=client-id-native=<NATIVE_CLIENT_ID>
Make a list of allowed emails in a text file, one per line:
touch authenticated-emails.txt
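For example, populate the file with the accounts you want to allow (the addresses below are placeholders):
cat > authenticated-emails.txt << EOF
alice@example.com
bob@example.com
EOF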
Apply the authenticated email list as a K8s secret:
kubectl create secret generic authenticated-emails --from-file=authenticated-emails-list=authenticated-emails.txt
Then to upgrade an existing release with production values, externally expose Turbinia through a load balancer with GCP managed certificates, and deploy the Oauth2 Proxy for authentication, run:
helm upgrade my-release ../turbinia \
-f values.yaml -f values-production.yaml \
--set ingress.enabled=true \
--set ingress.host=<DOMAIN> \
--set ingress.gcp.managedCertificates=true \
--set ingress.gcp.staticIPName=<GCP_STATIC_IP_NAME> \
--set oauth2proxy.enabled=true \
--set oauth2proxy.configuration.existingSecret=<OAUTH_SECRET_NAME> \
--set oauth2proxy.configuration.authenticatedEmailsFile.existingSecret=<AUTHENTICATED_EMAILS_SECRET_NAME>
Warning: Turbinia relies on the Oauth2 Proxy for authentication. If you plan to expose Turbinia with a public facing IP, it is highly recommended that the Oauth2 Proxy be deployed alongside it, using the command provided above.
Application and system monitoring is available through the kube-prometheus-stack. Kube Prometheus is a collection of Grafana dashboards and Prometheus rules, combined with documentation, to provide easy-to-operate end-to-end K8s cluster monitoring.
To setup monitoring, first add the repository containing the kube-prometheus-stack Helm chart:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
If using GKE, EKS, or similar managed K8s services, some options will need to be disabled because the control plane nodes are not visible to Prometheus. To address this, create a values file containing the following updates:
cat >> values-monitoring.yaml << EOF
kubeScheduler:
  enabled: false
kubeControllerManager:
  enabled: false
coreDns:
  enabled: false
kubeProxy:
  enabled: false
kubeDns:
  enabled: true
prometheus:
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
EOF
Then, to install the kube-prometheus chart in a namespace called `monitoring`, run:
helm install kube-prometheus prometheus-community/kube-prometheus-stack -f values-monitoring.yaml --namespace monitoring --create-namespace
That’s it! To verify Turbinia metrics are being collected, connect to either
Prometheus or Grafana and search for turbinia_*
in metrics explorer. If no
metrics appear, you may need to run a helm upgrade on your existing Turbinia
deployment so that the CustomResourceDefinitions (CRDs) can be applied.
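As a quick way to reach the dashboards, you can port-forward the Grafana service locally (the service name below follows the kube-prometheus release name used above; confirm it with `kubectl get svc -n monitoring`):
kubectl --namespace monitoring port-forward svc/kube-prometheus-grafana 3000:80
Then browse to http://localhost:3000.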
To uninstall/delete a Helm deployment with a release name of `my-release`:
helm uninstall my-release
Tip: Please update the command based on the release name chosen. You can list all releases using `helm list`.
The command removes all the Kubernetes components associated with the chart, except Persistent Volume Claims (PVCs), and deletes the release.
To delete the PVCs associated with a release name of `my-release`:
kubectl delete pvc -l release=my-release
Note: Deleting the PVCs will delete Turbinia data as well. Please be cautious before doing it.
| Name | Description | Value |
|---|---|---|
| `global.timesketch.enabled` | Enables the Timesketch deployment (only used in the main OSDFIR Infrastructure Helm chart) | `false` |
| `global.timesketch.servicePort` | Timesketch service port (overrides `timesketch.service.port`) | `nil` |
| `global.turbinia.enabled` | Enables the Turbinia deployment (only used within the main OSDFIR Infrastructure Helm chart) | `false` |
| `global.turbinia.servicePort` | Turbinia API service port (overrides `turbinia.service.port`) | `nil` |
| `global.dfdewey.enabled` | Enables the dfDewey deployment along with Turbinia | `false` |
| `global.yeti.enabled` | Enables the Yeti deployment (only used in the main OSDFIR Infrastructure Helm chart) | `false` |
| `global.yeti.servicePort` | Yeti API service port (overrides `yeti.api.service.port`) | `nil` |
| `global.ingress.enabled` | Enable the global loadbalancer for external access | `false` |
| `global.existingPVC` | Existing claim for Turbinia persistent volume (overrides `persistence.name`) | `""` |
| `global.storageClass` | StorageClass for the Turbinia persistent volume (overrides `persistence.storageClass`) | `""` |
| Name | Description | Value |
|---|---|---|
| `server.image.repository` | Turbinia image repository | `us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-server` |
| `server.image.pullPolicy` | Turbinia image pull policy | `IfNotPresent` |
| `server.image.tag` | Overrides the image tag whose default is the chart appVersion | `latest` |
| `server.image.imagePullSecrets` | Specify secrets if pulling from a private repository | `[]` |
| `server.podSecurityContext` | Holds pod-level security attributes and common server container settings | `{}` |
| `server.securityContext` | Holds security configuration that will be applied to the server container | `{}` |
| `server.resources.limits` | Resource limits for the server container | `{}` |
| `server.resources.requests` | Requested resources for the server container | `{}` |
| `server.nodeSelector` | Node labels for Turbinia server pods assignment | `{}` |
| `server.tolerations` | Tolerations for Turbinia server pods assignment | `[]` |
| `server.affinity` | Affinity for Turbinia server pods assignment | `{}` |
| Name | Description | Value |
|---|---|---|
| `worker.image.repository` | Turbinia image repository | `us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-worker` |
| `worker.image.pullPolicy` | Turbinia image pull policy | `IfNotPresent` |
| `worker.image.tag` | Overrides the image tag whose default is the chart appVersion | `latest` |
| `worker.image.imagePullSecrets` | Specify secrets if pulling from a private repository | `[]` |
| `worker.replicaCount` | Number of worker pods to run at once | `1` |
| `worker.autoscaling.enabled` | Enables Turbinia Worker autoscaling | `false` |
| `worker.autoscaling.minReplicas` | Minimum amount of worker pods to run at once | `5` |
| `worker.autoscaling.maxReplicas` | Maximum amount of worker pods to run at once | `500` |
| `worker.autoscaling.targetCPUUtilizationPercentage` | CPU scaling metric workers will scale based on | `80` |
| `worker.podSecurityContext` | Holds pod-level security attributes and common worker container settings | `{}` |
| `worker.securityContext.privileged` | Runs the container as privileged. Because Turbinia attaches and detaches disks, a privileged container is required for the worker container. | `true` |
| `worker.resources.limits` | Resource limits for the worker container | `{}` |
| `worker.resources.requests.cpu` | Requested cpu for the worker container | `250m` |
| `worker.resources.requests.memory` | Requested memory for the worker container | `256Mi` |
| `worker.nodeSelector` | Node labels for Turbinia worker pods assignment | `{}` |
| `worker.tolerations` | Tolerations for Turbinia worker pods assignment | `[]` |
| `worker.affinity` | Affinity for Turbinia worker pods assignment | `{}` |
| Name | Description | Value |
|---|---|---|
| `api.image.repository` | Turbinia image repository for API / Web server | `us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-api-server` |
| `api.image.pullPolicy` | Turbinia image pull policy | `IfNotPresent` |
| `api.image.tag` | Overrides the image tag whose default is the chart appVersion | `latest` |
| `api.image.imagePullSecrets` | Specify secrets if pulling from a private repository | `[]` |
| `api.podSecurityContext` | Holds pod-level security attributes that will be applied to the API / Web container | `{}` |
| `api.securityContext` | Holds security configuration that will be applied to the API / Web container | `{}` |
| `api.resources.limits` | Resource limits for the api container | `{}` |
| `api.resources.requests` | Requested resources for the api container | `{}` |
| `api.nodeSelector` | Node labels for Turbinia api pods assignment | `{}` |
| `api.tolerations` | Tolerations for Turbinia api pods assignment | `[]` |
| `api.affinity` | Affinity for Turbinia api pods assignment | `{}` |
| Name | Description | Value |
|---|---|---|
| `controller.enabled` | If enabled, deploys the Turbinia controller | `false` |
| `controller.image.repository` | Turbinia image repository for the Turbinia controller | `us-docker.pkg.dev/osdfir-registry/turbinia/release/turbinia-controller` |
| `controller.image.pullPolicy` | Turbinia image pull policy | `IfNotPresent` |
| `controller.image.tag` | Overrides the image tag whose default is the chart appVersion | `latest` |
| `controller.image.imagePullSecrets` | Specify secrets if pulling from a private repository | `[]` |
| `controller.podSecurityContext` | Holds pod-level security attributes and common controller container settings | `{}` |
| `controller.securityContext` | Holds security configuration that will be applied to the controller container | `{}` |
| `controller.resources.limits` | Resource limits for the controller container | `{}` |
| `controller.resources.requests` | Requested resources for the controller container | `{}` |
| `controller.nodeSelector` | Node labels for Turbinia controller pods assignment | `{}` |
| `controller.tolerations` | Tolerations for Turbinia controller pods assignment | `[]` |
| `controller.affinity` | Affinity for Turbinia controller pods assignment | `{}` |
| Name | Description | Value |
|---|---|---|
| `config.override` | Overrides the default Turbinia config to instead use a user specified config. Please ensure the file is named `turbinia.conf` and placed at the root of the Helm chart. | `turbinia.conf` |
| `config.existingConfigMap` | Use an existing ConfigMap as the default Turbinia config. | `""` |
| `config.disabledJobs` | List of Turbinia Jobs to disable. Overrides DISABLED_JOBS in the Turbinia config. | `['BinaryExtractorJob', 'BulkExtractorJob', 'HindsightJob', 'PhotorecJob', 'VolatilityJob']` |
| `config.existingVertexSecret` | Name of existing secret containing Vertex API Key in order to enable the Turbinia LLM Artifacts Analyzer. The secret must contain the key `turbinia-vertexapi` | `""` |
| `gcp.enabled` | Enables Turbinia to run within a GCP project. When enabling, please ensure you have run the supplemental script `create-gcp-sa.sh` to create a Turbinia GCP service account required for attaching persistent disks. | `false` |
| `gcp.projectID` | GCP Project ID where your cluster is deployed. Required when `.Values.gcp.enabled` is set to `true` | `""` |
| `gcp.projectRegion` | Region where your cluster is deployed. Required when `.Values.gcp.enabled` is set to `true` | `""` |
| `gcp.projectZone` | Zone where your cluster is deployed. Required when `.Values.gcp.enabled` is set to `true` | `""` |
| `gcp.gcpLogging` | Enables GCP Cloud Logging | `false` |
| `gcp.gcpErrorReporting` | Enables GCP Cloud Error Reporting | `false` |
| `serviceAccount.create` | Specifies whether a service account should be created | `true` |
| `serviceAccount.annotations` | Annotations to add to the service account | `{}` |
| `serviceAccount.name` | The name of the Kubernetes service account to use | `turbinia` |
| `serviceAccount.gcpName` | The name of the GCP service account to annotate. Applied only if `.Values.gcp.enabled` is set to `true` | `turbinia` |
| `service.type` | Turbinia service type | `ClusterIP` |
| `service.port` | Turbinia api service port | `8000` |
| `metrics.enabled` | Enables metrics scraping | `true` |
| `metrics.port` | Port to scrape metrics from | `9200` |
| `versioncheck.enabled` | Enable Turbinia runtime version checking | `true` |
| `persistence.name` | Turbinia persistent volume name | `turbiniavolume` |
| `persistence.size` | Turbinia persistent volume size | `2Gi` |
| `persistence.storageClass` | PVC Storage Class for Turbinia volume | `""` |
| `persistence.accessModes` | PVC Access Mode for Turbinia volume | `["ReadWriteOnce"]` |
| `ingress.enabled` | Enable the Turbinia loadbalancer for external access (only used in the main OSDFIR Infrastructure Helm chart) | `false` |
| `ingress.host` | The domain name Turbinia will be hosted under | `""` |
| `ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.certManager` | Add the corresponding annotations for cert-manager integration | `false` |
| `ingress.className` | IngressClass that will be used to implement the Ingress | `gce` |
| `ingress.gcp.managedCertificates` | Enables GCP managed certificates for your domain | `false` |
| `ingress.gcp.staticIPName` | Name of the static IP address you reserved in GCP | `""` |
| `ingress.gcp.staticIPV6Name` | Name of the static IPV6 address you reserved in GCP. This can be optionally provided to deploy a loadbalancer with an IPV6 address | `""` |
| Name | Description | Value |
|---|---|---|
| `dfdewey.postgresql.enabled` | Enables the Postgresql deployment | `true` |
| `dfdewey.postgresql.nameOverride` | String to partially override common.names.fullname template | `dfdewey-postgresql` |
| `dfdewey.postgresql.auth.username` | Name for a custom user to create | `dfdewey` |
| `dfdewey.postgresql.auth.password` | Password for the custom user to create. Ignored if `auth.existingSecret` is provided | `password` |
| `dfdewey.postgresql.auth.database` | Name for a custom database to create | `dfdewey` |
| `dfdewey.postgresql.primary.persistence.size` | PostgreSQL Persistent Volume size | `8Gi` |
| `dfdewey.postgresql.primary.resources.requests.cpu` | Requested cpu for the PostgreSQL Primary containers | `250m` |
| `dfdewey.postgresql.primary.resources.requests.memory` | Requested memory for the PostgreSQL Primary containers | `256Mi` |
| `dfdewey.postgresql.primary.resources.limits` | Resource limits for the PostgreSQL Primary containers | `{}` |
| Name | Description | Value |
|---|---|---|
| `dfdewey.opensearch.enabled` | Enables the Opensearch deployment | `true` |
| `dfdewey.opensearch.nameOverride` | Overrides the clusterName when used in the naming of resources | `dfdewey-opensearch` |
| `dfdewey.opensearch.masterService` | The service name used to connect to the masters | `dfdewey-opensearch` |
| `dfdewey.opensearch.singleNode` | Replicas will be forced to 1 | `true` |
| `dfdewey.opensearch.sysctlInit.enabled` | Sets optimal sysctl's through privileged initContainer | `true` |
| `dfdewey.opensearch.opensearchJavaOpts` | Sets the size of the Opensearch Java heap | `-Xms512m -Xmx512m` |
| `dfdewey.opensearch.config.opensearch.yml` | Opensearch configuration file. Can be appended for additional configuration options | `{"opensearch.yml":"discovery:\n type: single-node\nplugins:\n security:\n disabled: true\n"}` |
| `dfdewey.opensearch.extraEnvs[0].name` | Environment variable to set the initial admin password | `OPENSEARCH_INITIAL_ADMIN_PASSWORD` |
| `dfdewey.opensearch.extraEnvs[0].value` | The initial admin password | `KyfwJExU2!2MvU6j` |
| `dfdewey.opensearch.extraEnvs[1].name` | Environment variable to disable Opensearch Demo config | `DISABLE_INSTALL_DEMO_CONFIG` |
| `dfdewey.opensearch.extraEnvs[1].value` | Disables Opensearch Demo config | `true` |
| `dfdewey.opensearch.extraEnvs[2].name` | Environment variable to disable the Opensearch Security plugin | `DISABLE_SECURITY_PLUGIN` |
| `dfdewey.opensearch.extraEnvs[2].value` | Disables Opensearch Security plugin | `true` |
| `dfdewey.opensearch.persistence.size` | Opensearch Persistent Volume size. A persistent volume would be created for each Opensearch replica running | `2Gi` |
| `dfdewey.opensearch.resources.requests.cpu` | Requested cpu for the Opensearch containers | `250m` |
| `dfdewey.opensearch.resources.requests.memory` | Requested memory for the Opensearch containers | `512Mi` |
| Name | Description | Value |
|---|---|---|
| `kubePrometheus.deployKubePrometheus` | Deploy kube-prometheus-stack as a subchart. For production environments, it is best practice to deploy this chart separately. | `false` |
| `kubePrometheus.kubeScheduler.enabled` | Component scraping kube scheduler. Disabled by default due to lack of Prometheus endpoint access for managed K8s clusters (e.g. GKE, EKS). | `false` |
| `kubePrometheus.kubeControllerManager.enabled` | Component scraping kube controller. Disabled by default due to lack of Prometheus endpoint access for managed K8s clusters (e.g. GKE, EKS). | `false` |
| `kubePrometheus.coreDns.enabled` | Component scraping core dns. Disabled by default in favor of kube dns. | `false` |
| `kubePrometheus.kubeProxy.enabled` | Component scraping kube proxy. Disabled by default due to lack of Prometheus endpoint access for managed K8s clusters (e.g. GKE, EKS). | `false` |
| `kubePrometheus.kubeDns.enabled` | Component scraping kube dns. | `true` |
| `kubePrometheus.prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues` | Disable so that custom servicemonitors can be created and monitored | `false` |
| Name | Description | Value |
|---|---|---|
| `redis.enabled` | Enables the Redis deployment | `true` |
| `redis.auth.enabled` | Enables Redis Authentication. Disabled due to incompatibility with Turbinia | `false` |
| `redis.sentinel.enabled` | Enables Redis Sentinel on Redis pods | `false` |
| `redis.architecture` | Specifies the Redis architecture. Allowed values: `standalone` or `replication` | `standalone` |
| `redis.master.count` | Number of Redis master instances to deploy (experimental, requires additional configuration) | `1` |
| `redis.master.service.type` | Redis master service type | `ClusterIP` |
| `redis.master.service.ports.redis` | Redis master service port | `6379` |
| `redis.master.persistence.size` | Persistent Volume size | `2Gi` |
| `redis.master.resources.limits` | Resource limits for the Redis master containers | `{}` |
| `redis.master.resources.requests` | Requested resources for the Redis master containers | `{}` |
| `redis.replica.replicaCount` | Number of Redis replicas to deploy | `0` |
| `redis.replica.service.type` | Redis replicas service type | `ClusterIP` |
| `redis.replica.service.ports.redis` | Redis replicas service port | `6379` |
| `redis.replica.persistence.size` | Persistent Volume size | `2Gi` |
| `redis.replica.resources.limits` | Resources limits for the Redis replica containers | `{}` |
| `redis.replica.resources.requests` | Requested resources for the Redis replica containers | `{}` |
| Name | Description | Value |
|---|---|---|
| `oauth2proxy.enabled` | Enables the Oauth2 Proxy deployment | `false` |
| `oauth2proxy.containerPort` | Oauth2 Proxy container port | `4180` |
| `oauth2proxy.service.type` | OAuth2 Proxy service type | `ClusterIP` |
| `oauth2proxy.service.port` | OAuth2 Proxy service HTTP port | `8080` |
| `oauth2proxy.extraEnvVars[0].name` | Name of the environment variable to pass to Oauth2 Proxy | `OAUTH2_PROXY_OIDC_EXTRA_AUDIENCES` |
| `oauth2proxy.extraEnvVars[0].valueFrom.secretKeyRef.name` | Name of the secret containing native client id | `""` |
| `oauth2proxy.extraEnvVars[0].valueFrom.secretKeyRef.key` | Name of the secret key containing native client id | `client-id-native` |
| `oauth2proxy.extraEnvVars[0].valueFrom.secretKeyRef.optional` | Set to optional if native client id is not provided | `true` |
| `oauth2proxy.configuration.turbiniaSvcPort` | Turbinia service port referenced from `.Values.service.port` to be used in Oauth setup | `8000` |
| `oauth2proxy.configuration.existingSecret` | Secret with the client ID, client secret, client native id (optional) and cookie secret | `""` |
| `oauth2proxy.configuration.content` | Oauth2 proxy configuration. Please see the official docs for a list of configurable values | `""` |
| `oauth2proxy.configuration.authenticatedEmailsFile.enabled` | Enable authenticated emails file | `true` |
| `oauth2proxy.configuration.authenticatedEmailsFile.content` | Restricted access list (one email per line). At least one email address is required for the Oauth2 Proxy to properly work | `""` |
| `oauth2proxy.configuration.authenticatedEmailsFile.existingSecret` | Secret with the authenticated emails file | `""` |
| `oauth2proxy.configuration.oidcIssuerUrl` | OpenID Connect issuer URL | `https://accounts.google.com` |
| `oauth2proxy.redis.enabled` | Enable Redis for OAuth Session Storage | `false` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
helm install my-release osdfir-charts/turbinia --set controller.enabled=true
The above command installs Turbinia with the Turbinia Controller deployed.
Alternatively, the `values.yaml` and `values-production.yaml` files can be directly updated if the Helm chart was pulled locally. For example,
helm pull osdfir-charts/turbinia --untar
Then make changes to the downloaded values.yaml
and once done, install the
chart with the updated values.
helm install my-release ../turbinia
The Turbinia deployment stores data at the `/mnt/turbiniavolume` path of the container and stores configuration files at the `/etc/turbinia` path of the container.
Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCP and Minikube. See the Parameters section to configure the PVC or to disable persistence.
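As an illustrative sketch (the claim name and size below are placeholders), you could pre-create a PVC and point the chart at it with the `global.existingPVC` value instead of letting the chart provision its own volume:
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-turbinia-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF

helm install my-release osdfir-charts/turbinia --set global.existingPVC=my-turbinia-claim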
If you need to upgrade an existing release to update a value, such as the persistent volume size or the image tag, you can run `helm upgrade`. For example, to set a new image tag and increase storage capacity, run:
helm upgrade my-release ../turbinia \
--set image.tag=latest \
--set persistence.size=10T
The above command upgrades an existing release named `my-release`, updating the image tag to `latest` and increasing the persistent volume size of an existing volume to 10 Terabytes. Note that existing data will not be deleted; instead, the resize triggers an expansion of the volume that backs the underlying PersistentVolume. See here.
Copyright © 2023 OSDFIR Infrastructure
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.