If you are looking for a managed solution, consider our Bring Your Own Cloud offering, which gives you the same security and control while the E2B team manages the infrastructure for you.
Google Cloud Platform
Prerequisites
Tools
- Packer
- Golang
- Docker
- Terraform (v1.5.x)
   - The last version of Terraform still released under the Mozilla Public License is v1.5.7
   - You can install it with tfenv for easier version management (see the snippet after this list)
- Google Cloud CLI
   - Used for managing GCP resources deployed by Terraform
   - Authenticate with `gcloud auth login && gcloud auth application-default login`

Accounts
- Cloudflare account with a domain
- Google Cloud Platform account and project
- Supabase account with a PostgreSQL database
- (Optional) Grafana account for monitoring and logging
- (Optional) Posthog account for analytics
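For example, one way to pin Terraform to v1.5.7 with tfenv, as suggested in the Tools list above:

```bash
# Install and select the last MPL-licensed Terraform release
tfenv install 1.5.7
tfenv use 1.5.7
terraform version   # should report Terraform v1.5.7
```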
Steps
1. Go to console.cloud.google.com and create a new GCP project. Make sure your quota allows at least 2500 GB for `Persistent Disk SSD (GB)` and at least 24 `CPUs`.
2. Create `.env.prod`, `.env.staging`, or `.env.dev` from `.env.template`. You can pick any of them. Make sure to fill in the values; all are required unless specified otherwise. (See the example after this step.)
   - Get the Postgres database connection string from your database, e.g. from Supabase: create a new project in Supabase, then go to your project in Supabase -> Settings -> Database -> Connection Strings -> Postgres -> Direct.
   - Your Postgres database needs to have IPv4 access enabled. You can do that in the Connect screen.
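As a rough sketch, the database entry in your env file might look like the following; the values are placeholders, and the full variable list comes from `.env.template`:

```bash
# .env.prod (excerpt) - placeholder values, copy the real ones from Supabase.
# Supabase "Direct" connection strings typically have this shape:
POSTGRES_CONNECTION_STRING=postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres
```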
3. Run `make switch-env ENV={prod,staging,dev}` to start using your env.
4. Run `make login-gcloud` to log in to the `gcloud` CLI so Terraform and Packer can communicate with the GCP API.
5. Run `make init`. If this errors, run it a second time; the error is due to a race condition while Terraform enables API access for the various GCP services, which can take several seconds.
   A full list of services that will be enabled for API access: Secret Manager API, Certificate Manager API, Compute Engine API, Artifact Registry API, OS Config API, Stackdriver Monitoring API, Stackdriver Logging API.
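If you want to confirm the APIs were enabled, for example after hitting the race condition above, one way is to list the enabled services for your project:

```bash
# List services enabled for the current project and filter for the ones make init enables
gcloud services list --enabled | grep -E "secretmanager|certificatemanager|compute|artifactregistry|osconfig|monitoring|logging"
```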
6. Run `make build-and-upload`.
7. Run `make copy-public-builds`.
8. Run `make migrate`.

Secrets are created and stored in GCP Secrets Manager. Once created, that is the source of truth; you will need to update values there to make changes. Create a secret value for the following secrets:
9. Update `e2b-cloudflare-api-token` in GCP Secrets Manager with a value taken from Cloudflare.
   - Get the Cloudflare API token: go to the Cloudflare dashboard -> Manage Account -> Account API Tokens -> Create Token -> Edit Zone DNS -> in "Zone Resources" select your domain and generate the token.
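One way to update the secret without the Console is via the gcloud CLI; the secret name below is the one from this step, but double-check how it is named in your project:

```bash
# Add a new version of the Cloudflare token secret (reads the value from stdin)
echo -n "<your-cloudflare-api-token>" | \
  gcloud secrets versions add e2b-cloudflare-api-token --data-file=-
```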
10. Run `make plan-without-jobs` and then `make apply`.
11. Fill out the following secrets in GCP Secrets Manager:
   - `e2b-supabase-jwt-secrets` (optional / required to self-host the E2B dashboard)
     - Get the Supabase JWT secret: go to the Supabase dashboard -> select your project -> Project Settings -> Data API -> JWT Settings.
   - `e2b-postgres-connection-string`
     - This is the same value as for the `POSTGRES_CONNECTION_STRING` env variable.
12. Run `make plan` and then `make apply`.
   Note: this will only work after the TLS certificates have been issued. It can take some time; you can check the status in the Google Cloud Console.
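Assuming the certificates are managed via Certificate Manager (its API is enabled in the `make init` step), one way to check their status from the CLI instead of the Console might be:

```bash
# Inspect managed certificates; wait until they report ACTIVE before expecting HTTPS to work
gcloud certificate-manager certificates list
```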
13. Set up data in the cluster by following one of the two options below:
   - Run `make prep-cluster` in `packages/shared` to create an initial user, etc. (you need to be logged in via the `e2b` CLI). It will create a user with the same information (access token, API key, etc.) as you have in E2B.
   - Alternatively, create a user directly in the database; this will automatically also create a team, an API key, and an access token.

   You will need to build the template(s) for your cluster. Use the `e2b` CLI and run `E2B_DOMAIN=<your-domain> e2b template build`.
Interacting with the cluster
SDK
When using the SDK, pass the domain when creating a new Sandbox in the JS/TS SDK, as sketched below.
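A minimal TypeScript sketch, assuming an SDK version where `Sandbox.create` accepts a `domain` option (newer SDK versions can also read the `E2B_DOMAIN` environment variable):

```typescript
import { Sandbox } from 'e2b'

// Point the SDK at your self-hosted cluster instead of the E2B cloud.
// '<your-domain>' is the domain you configured for this deployment.
const sandbox = await Sandbox.create({ domain: '<your-domain>' })
```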
CLI
When using the CLI, you can pass the domain as well, for example via the `E2B_DOMAIN` environment variable:
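```bash
# Point the e2b CLI at your self-hosted cluster, reusing the same variable
# as in the template build step above
export E2B_DOMAIN=<your-domain>
e2b template build
```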
Monitoring and logging jobs
To access the Nomad web UI, go to https://nomad.<your-domain>. Sign in, and when prompted for an API token, you can find it in GCP Secrets Manager.
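If you prefer the CLI over the Console, you can read the token from Secrets Manager; the secret name below is a placeholder, so check how the Nomad token secret is actually named in your project:

```bash
# Print the latest version of the Nomad token secret (secret name is a placeholder)
gcloud secrets versions access latest --secret=<nomad-token-secret-name>
```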
From here, you can see Nomad jobs and tasks for both client and server, including logging.
To update jobs running in the cluster, look inside `packages/nomad` for config files. This can be useful for setting up your logging and monitoring agents.
Deployment Troubleshooting
If any problems arise, open a GitHub issue on the repo and we'll look into it.
Google Cloud Troubleshooting
Quotas not available
If you can't find the quota in All Quotas in GCP's Console, create and delete a dummy VM before proceeding to step 2 of the self-deploy guide. This will create additional quotas and policies in GCP; you should then see the quota in All Quotas and be able to request the correct size.
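A hedged sketch of the dummy-VM workaround described above; the zone and machine type are arbitrary choices, so pick whatever is available in your project:

```bash
# Create a throwaway VM so GCP materializes the compute quotas, then delete it
gcloud compute instances create quota-dummy --zone=us-central1-a --machine-type=e2-micro
gcloud compute instances delete quota-dummy --zone=us-central1-a --quiet
```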