Introduction

Dub’s codebase is set up in a monorepo (via Turborepo) and is fully open-source on GitHub.

Here’s the monorepo structure:

apps
└── web
packages
├── tailwind-config
├── tinybird
├── tsconfig
├── ui
└── utils

The apps directory contains the code for:

  • web: The entirety of Dub’s application (app.dub.co) + our link redirect infrastructure.

The packages directory contains the code for:

  • tailwind-config: The Tailwind CSS configuration for Dub’s web app.
  • tinybird: Dub’s Tinybird configuration.
  • tsconfig: The TypeScript configuration for Dub’s web app.
  • ui: Dub’s UI component library.
  • utils: A collection of utility functions and constants used across Dub’s codebase.

How app.dub.co works

Dub’s web app is built with Next.js and Tailwind CSS.

It also utilizes code from the packages directory, specifically the @dub/ui and @dub/utils packages.

All of the code for the web app is located in apps/web/app/app.dub.co. This uses the Next.js route group pattern.

There’s also the API server, which is located in apps/web/app/api.
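Putting those together, the relevant part of the directory tree looks like this:

apps/web/app
├── app.dub.co
└── api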

When you run pnpm dev to start the development server, the app will be available at http://localhost:8888. We use localhost:8888 rather than app.localhost:8888 because Google OAuth doesn’t allow localhost subdomains.

Link redirects on Dub are powered by Next.js Middleware.

To handle high traffic, we use Redis to cache every link’s metadata when it’s first created. This allows us to serve redirects without hitting our MySQL database.

The code that powers link redirects lives in the web app’s Next.js middleware.
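As a rough illustration of the flow described above, here’s a minimal sketch (not Dub’s actual implementation) of a middleware that serves redirects straight from a Redis cache; the cache shape and matcher are assumptions:

middleware.ts (illustrative sketch)
import { NextRequest, NextResponse } from "next/server";
import { Redis } from "@upstash/redis";

// Reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN from the environment.
const redis = Redis.fromEnv();

export const config = {
  // Skip API routes and Next.js internals; everything else may be a short link.
  matcher: ["/((?!api|_next|static).*)"],
};

export default async function middleware(req: NextRequest) {
  const key = req.nextUrl.pathname.slice(1); // e.g. /github -> "github"

  // Assumed cache shape: { url: string } stored under each link key.
  const link = await redis.get<{ url: string }>(key);

  if (link?.url) {
    // Cache hit: issue the redirect without touching MySQL.
    return NextResponse.redirect(link.url);
  }

  // Cache miss: fall through to the app, which can query MySQL and backfill the cache.
  return NextResponse.next();
}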

Running Dub locally

To run Dub.co locally, you’ll need to set up the following:

Step 1: Local setup

First, you’ll need to clone the Dub.co repo and install the dependencies.

1. Clone the repo

Clone the Dub.co repo into a public GitHub repository:

Terminal
git clone https://github.com/dubinc/dub.git

2. Install dependencies

Run the following command to install the dependencies:

Terminal
pnpm i

3. Build internal packages

Execute the command below to compile all internal packages:

Terminal
pnpm -r --filter "./packages/**" build

4. Set up environment variables

Copy the .env.example file to .env:

Terminal
cp .env.example .env

You’ll be updating this .env file with your own values as you progress through the setup.

Step 2: Set up Tinybird ClickHouse database

Next, you’ll need to set up the Tinybird ClickHouse database. This will be used to store time-series click event data.

1. Create a Tinybird Workspace

In your Tinybird account, create a new Workspace.

Copy your admin Auth Token. Paste this token as the TINYBIRD_API_KEY environment variable in your .env file.

2. Install the Tinybird CLI and authenticate

In your newly-cloned Dub.co repo, navigate to the packages/tinybird directory.

Install the Tinybird CLI with pip install tinybird-cli (requires Python >= 3.8).

Run tb auth and paste your admin Auth Token.

3. Publish the Tinybird datasource and endpoints

Run tb push to publish the datasource and endpoints in the packages/tinybird directory. You should see the following output (truncated for brevity):

Terminal
$ tb push

** Processing ./datasources/click_events.datasource
** Processing ./endpoints/clicks.pipe
...
** Building dependencies
** Running 'click_events'
** 'click_events' created
** Running 'device'
** => Test endpoint at https://api.us-east.tinybird.co/v0/pipes/device.json
** Token device_endpoint_read_8888 not found, creating one
** => Test endpoint with:
** $ curl https://api.us-east.tinybird.co/v0/pipes/device.json?token=p.ey...NWeaoTLM
** 'device' created
...

4. Set up the Tinybird API base URL

You will then need to update your Tinybird API base URL to match the region of your database.

From the previous step, take note of the Test endpoint URL. It should look something like this:

Terminal
Test endpoint at https://api.us-east.tinybird.co/v0/pipes/device.json

Copy the base URL and paste it as the TINYBIRD_API_URL environment variable in your .env file.

Terminal
TINYBIRD_API_URL=https://api.us-east.tinybird.co
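As a quick, optional sanity check (illustrative; the device pipe name comes from the tb push output above), you can query the published endpoint with the two values you just configured:

check-tinybird.ts (illustrative sketch)
// Run with e.g. `npx tsx check-tinybird.ts` after setting the env vars.
const res = await fetch(`${process.env.TINYBIRD_API_URL}/v0/pipes/device.json`, {
  headers: { Authorization: `Bearer ${process.env.TINYBIRD_API_KEY}` },
});

const { data } = await res.json();
console.log(data); // empty until click events start flowing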

Step 3: Set up Upstash Redis database

Next, you’ll need to set up the Upstash Redis database. This will be used to cache link metadata and serve link redirects.

1. Create an Upstash database

In your Upstash account, create a new database.

For better performance & read times, we recommend setting up a global database with several read regions.

2. Set up Upstash Redis environment variables

Once your database is created, copy the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from the REST API section into your .env file.

Navigate to the QStash tab and copy the QSTASH_TOKEN, QSTASH_CURRENT_SIGNING_KEY, and QSTASH_NEXT_SIGNING_KEY from the Request Builder section into your .env file.
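To verify the Redis credentials, here’s an optional, illustrative check using the @upstash/redis SDK:

check-redis.ts (illustrative sketch)
import { Redis } from "@upstash/redis";

// Reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN from the environment.
const redis = Redis.fromEnv();

await redis.set("hello", "world");
console.log(await redis.get("hello")); // "world" if the credentials are correct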

3. Optional: set up an Ngrok tunnel

If you’re planning to run QStash-powered background jobs locally, you’ll need to set up an Ngrok tunnel to expose your local server to the internet.

Follow Ngrok’s setup guide, and then run the following command to start an Ngrok tunnel on port 8888:

Terminal
ngrok http 8888

Copy the https URL and paste it as the NEXT_PUBLIC_NGROK_URL environment variable in your .env file.
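To illustrate how the tunnel gets used, here’s a hedged sketch of enqueuing a background job with the @upstash/qstash SDK; the /api/example route and payload are hypothetical:

enqueue-job.ts (illustrative sketch)
import { Client } from "@upstash/qstash";

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

// QStash delivers messages over the public internet, hence the Ngrok URL.
await qstash.publishJSON({
  url: `${process.env.NEXT_PUBLIC_NGROK_URL}/api/example`, // hypothetical endpoint
  body: { linkId: "clx123" }, // hypothetical payload
});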

Step 4: Set up PlanetScale MySQL database

Next, you’ll need to set up a PlanetScale-compatible MySQL database. This will be used to store user data and link metadata. There are two options:

Option 1: Local MySQL database with PlanetScale simulator (recommended)

You can use a local MySQL database with a PlanetScale simulator. This is the recommended option for local development since it’s 100% free.

Prerequisites: Docker and Docker Compose installed on your machine.

1. Spin up the Docker Compose stack

In the terminal, navigate to the apps/web directory and run the following command to start the Docker Compose stack:

Terminal
docker-compose up

This will start two containers: one for the MySQL database and another for the PlanetScale simulator.

2. Set up database environment variables

Add the following credentials to your .env file:

PLANETSCALE_DATABASE_URL="http://root:unused@localhost:3900"
DATABASE_URL="mysql://root:@localhost:3306/planetscale"

Here, we are using the open-source PlanetScale simulator so the application can continue to use the @planetscale/database SDK.

While we’re using two different values in local development, in production or staging environments, you’ll only need the DATABASE_URL value.
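As an optional, illustrative check that the simulator is reachable through the same driver the app uses:

check-db.ts (illustrative sketch)
import { connect } from "@planetscale/database";

// Points at the local PlanetScale simulator started by docker-compose.
const conn = connect({ url: process.env.PLANETSCALE_DATABASE_URL });

const result = await conn.execute("SELECT 1");
console.log(result.rows); // one row if the simulator is up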

3. Generate the Prisma client and create database tables

In the terminal, navigate to the apps/web directory and run the following command to generate the Prisma client:

Terminal
npx prisma generate

Then, create the database tables with the following command:

Terminal
npx prisma db push

The docker-compose setup includes Mailhog, which acts as a mock SMTP server and shows received emails in a web UI. You can access the Mailhog web interface at http://localhost:8025. This is useful for testing email functionality without sending real emails during local development.

Option 2: PlanetScale hosted database

PlanetScale recently removed their free tier, so you’ll need to pay for this option. A cheaper alternative is to use a MySQL database on Railway ($5/month).

1. Create a PlanetScale database

In your PlanetScale account, create a new database.

Once your database is created, you’ll be prompted to select your language or framework. Select Prisma.

2. Set up PlanetScale environment variables

Then, you’ll have to create a new password for your database. Once the password is created, scroll down to the Add credentials to .env section and copy the DATABASE_URL into your .env file.

3. Generate the Prisma client and create database tables

In the terminal, navigate to the apps/web directory and run the following command to generate the Prisma client:

Terminal
npx prisma generate

Then, create the database tables with the following command:

Terminal
npx prisma db push

Step 5: Set up Mailhog

To view emails sent from your application during local development, you’ll need to set up Mailhog.

If you’ve already run docker-compose up as part of the database setup, you can skip this step. Mailhog is included in the Docker Compose configuration and should already be running.

1. Pull the Mailhog Docker image

Run the following command to pull the Mailhog Docker image:

Terminal
docker pull mailhog/mailhog

2. Start the Mailhog container

Start the Mailhog container with the following command:

Terminal
docker run -d -p 8025:8025 -p 1025:1025 mailhog/mailhog

This will run Mailhog in the background, and the web interface will be available at http://localhost:8025.
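To see it in action (illustrative; any SMTP client works), you can push a test message through Mailhog’s SMTP port, e.g. with nodemailer:

send-test-email.ts (illustrative sketch)
import nodemailer from "nodemailer";

// Mailhog accepts SMTP on port 1025 (no auth) and shows messages at http://localhost:8025.
const transport = nodemailer.createTransport({ host: "localhost", port: 1025 });

await transport.sendMail({
  from: "dev@example.com",
  to: "test@example.com",
  subject: "Mailhog smoke test",
  text: "If this shows up in the Mailhog UI, local email delivery works.",
});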

Step 6: Start the development server

Finally, you can start the development server. This will build the packages and start the app server.

Terminal
pnpm dev

The web app (apps/web) will be available at http://localhost:8888.