Local Development
A guide on how to run Dub.co’s codebase locally.
Introduction
Dub’s codebase is set up in a monorepo (via Turborepo) and is fully open-source on GitHub.
Here’s the monorepo structure:
The apps directory contains the code for:
- web: The entirety of Dub’s application (app.dub.co) + our link redirect infrastructure.
The packages directory contains the code for:
- tailwind-config: The Tailwind CSS configuration for Dub’s web app.
- tinybird: Dub’s Tinybird configuration.
- tsconfig: The TypeScript configuration for Dub’s web app.
- ui: Dub’s UI component library.
- utils: A collection of utility functions and constants used across Dub’s codebase.
How app.dub.co works
Dub’s web app is built with Next.js and Tailwind CSS.
It also utilizes code from the packages directory, specifically the @dub/ui and @dub/utils packages.
All of the code for the web app is located in /apps/web/app/app.dub.co, which uses the Next.js route group pattern.
There’s also the API server, which is located in /apps/web/app/api.
When you run pnpm dev to start the development server, the app will be available at http://localhost:8888. We use localhost:8888 rather than app.localhost:8888 because Google OAuth doesn’t allow localhost subdomains.
How link redirects work on Dub
Link redirects on Dub are powered by Next.js Middleware.
To handle high traffic, we use Redis to cache every link’s metadata when it’s first created. This allows us to serve redirects without hitting our MySQL database.
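As a rough illustration of that pattern (the key format and payload below are hypothetical, not Dub’s actual schema), the cache behaves like a simple key-value lookup:

```bash
# Hypothetical sketch of the caching pattern — key names and fields are illustrative only.
# When a link is created, its metadata is written to Redis:
redis-cli SET "link:dub.sh:github" '{"url":"https://github.com/dubinc/dub"}'

# On a redirect request, the middleware can resolve the destination straight from Redis,
# without touching MySQL:
redis-cli GET "link:dub.sh:github"
```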
Here’s the code that powers link redirects:
- Link redirects: /apps/web/lib/middleware/link.ts
- Root domain redirects: /apps/web/lib/middleware/root.ts
Running Dub locally
To run Dub.co locally, you’ll need to set up the following:
- A Tinybird account
- An Upstash account
- A PlanetScale-compatible MySQL database
Step 1: Local setup
First, you’ll need to clone the Dub.co repo and install the dependencies.
Clone the repo
First, clone the Dub.co repo into a public GitHub repository.
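Assuming you’re cloning the upstream repository directly (substitute your fork’s URL if you’ve created one):

```bash
git clone https://github.com/dubinc/dub.git
cd dub
```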
Install dependencies
Run the following command to install the dependencies:
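Since the repo uses pnpm (the guide runs pnpm dev later), this is presumably:

```bash
pnpm install
```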
Build internal packages
Execute the command below to compile all internal packages:
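The exact script may differ from what the repo defines, but in a pnpm workspace a filtered build along these lines should do it:

```bash
# Build only the workspaces under packages/ (the filter is an assumption)
pnpm --filter "./packages/**" build
```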
Set up environment variables
Copy the .env.example file to .env:
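From the repository root, that’s:

```bash
cp .env.example .env
```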
You’ll be updating this .env file with your own values as you progress through the setup.
Step 2: Set up Tinybird Clickhouse database
Next, you’ll need to set up the Tinybird Clickhouse database. This will be used to store time-series click events data.
Create Tinybird Workspace
In your Tinybird account, create a new Workspace.
Copy your admin Auth Token. Paste this token as the TINYBIRD_API_KEY environment variable in your .env file.
Install Tinybird CLI and authenticate
In your newly-cloned Dub.co repo, navigate to the packages/tinybird directory.
Install the Tinybird CLI with pip install tinybird-cli (requires Python >= 3.8).
Run tb auth and paste your admin Auth Token.
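Putting those three steps together:

```bash
cd packages/tinybird

# Install the Tinybird CLI (requires Python >= 3.8)
pip install tinybird-cli

# Authenticate — paste your admin Auth Token when prompted
tb auth
```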
Publish Tinybird datasource and endpoints
Run tb push to publish the datasource and endpoints in the packages/tinybird directory. The CLI will list each datasource and endpoint as it is pushed.
Set up Tinybird API base URL
You will then need to update your Tinybird API base URL to match the region of your database. From the previous step, take note of the Test endpoint URL, copy its base URL, and paste it as the TINYBIRD_API_URL environment variable in your .env file.
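For example, if your Workspace lives in Tinybird’s us-east region, the values would look roughly like this (the hostname is illustrative and depends on your region):

```bash
# Test endpoint URL (illustrative): https://api.us-east.tinybird.co/v0/pipes/<pipe_name>.json
# Only the base URL goes into .env:
TINYBIRD_API_URL=https://api.us-east.tinybird.co
```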
Step 3: Set up Upstash Redis database
Next, you’ll need to set up the Upstash Redis database. This will be used to cache link metadata and serve link redirects.
Create Upstash database
In your Upstash account, create a new database.
For better performance & read times, we recommend setting up a global database with several read regions.
Set up Upstash Redis environment variables
Once your database is created, copy the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from the REST API section into your .env file.
Navigate to the QStash tab and copy the QSTASH_TOKEN, QSTASH_CURRENT_SIGNING_KEY, and QSTASH_NEXT_SIGNING_KEY from the Request Builder section into your .env file.
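Your .env should end up with entries along these lines, with the values copied from the Upstash dashboard:

```bash
UPSTASH_REDIS_REST_URL=      # from the REST API section
UPSTASH_REDIS_REST_TOKEN=    # from the REST API section
QSTASH_TOKEN=                # from the QStash Request Builder section
QSTASH_CURRENT_SIGNING_KEY=  # from the QStash Request Builder section
QSTASH_NEXT_SIGNING_KEY=     # from the QStash Request Builder section
```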
Optional: Set up Ngrok tunnel
If you’re planning to run QStash-powered background jobs locally, you’ll need to set up an Ngrok tunnel to expose your local server to the internet.
Follow these steps to set up ngrok, and then run the following command to start an Ngrok tunnel at port 8888:
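With the ngrok CLI installed and authenticated, this should be:

```bash
ngrok http 8888
```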
Copy the https URL and paste it as the NEXT_PUBLIC_NGROK_URL environment variable in your .env file.
Step 4: Set up PlanetScale MySQL database
Next, you’ll need to set up a PlanetScale-compatible MySQL database. This will be used to store user data and link metadata. There are two options:
Option 1: Local MySQL database with PlanetScale simulator (recommended)
You can use a local MySQL database with a PlanetScale simulator. This is the recommended option for local development since it’s 100% free.
Prerequisites: Docker and Docker Compose installed on your machine.
Spin up the docker-compose stack
In the terminal, navigate to the apps/web directory and run the following command to start the Docker Compose stack:
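Assuming a standard Docker Compose setup, that looks like:

```bash
cd apps/web
docker compose up    # add -d to run the containers in the background
```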
This will start two containers: one for the MySQL database and another for the PlanetScale simulator.
Set up database environment variables
Add the following credentials to your .env file:
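The exact variable names, ports, and credentials are defined in .env.example and apps/web/docker-compose.yml, so copy them from there; an illustrative sketch:

```bash
# Illustrative only — use the exact names and values from .env.example.
# DATABASE_URL points Prisma at the local MySQL container; a second connection
# string points the @planetscale/database SDK at the PlanetScale simulator.
DATABASE_URL="mysql://<user>:<password>@localhost:<mysql_port>/<database>"
```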
Here, we are using the open-source PlanetScale simulator so the application can continue to use the @planetscale/database SDK.
While we’re using two different values in local development, in production or staging environments you’ll only need the DATABASE_URL value.
Generate Prisma client and create database tables
In the terminal, navigate to the apps/web directory and run the following command to generate the Prisma client:
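This is presumably the standard Prisma command, run from apps/web:

```bash
npx prisma generate
```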
Then, create the database tables with the following command:
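And to create the tables, presumably the standard schema push:

```bash
npx prisma db push
```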
The docker-compose setup includes Mailhog, which acts as a mock SMTP server and shows received emails in a web UI. You can access the Mailhog web interface at http://localhost:8025. This is useful for testing email functionality without sending real emails during local development.
Option 2: PlanetScale hosted database
PlanetScale recently removed their free tier, so you’ll need to pay for this option. A cheaper alternative is to use a MySQL database on Railway ($5/month).
Create PlanetScale database
In your PlanetScale account, create a new database.
Once your database is created, you’ll be prompted to select your language or framework. Select Prisma.
Set up PlanetScale environment variables
Then, you’ll have to create a new password for your database. Once the password is created, scroll down to the Add credentials to .env section and copy the DATABASE_URL into your .env file.
Generate Prisma client and create database tables
In the terminal, navigate to the apps/web directory and run the following command to generate the Prisma client:
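As in Option 1, presumably:

```bash
npx prisma generate
```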
Then, create the database tables with the following command:
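Then, as in Option 1, presumably:

```bash
npx prisma db push
```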
Step 5: Set up Mailhog
To view emails sent from your application during local development, you’ll need to set up Mailhog.
If you’ve already run docker compose up as part of the database setup, you can skip this step. Mailhog is included in the Docker Compose configuration and should already be running.
Pull Mailhog Docker image
Run the following command to pull the Mailhog Docker image:
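Assuming the standard mailhog/mailhog image from Docker Hub:

```bash
docker pull mailhog/mailhog
```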
Start Mailhog container
Start the Mailhog container with the following command:
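A typical invocation, exposing SMTP on port 1025 and the web UI on port 8025:

```bash
docker run -d --name mailhog -p 1025:1025 -p 8025:8025 mailhog/mailhog
```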
This will run Mailhog in the background, and the web interface will be available at http://localhost:8025.
Step 6: Start the development server
Finally, you can start the development server. This will build the packages + start the app servers.
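From the repository root:

```bash
pnpm dev
```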
The web app (apps/web) will be available at localhost:8888.