
Last Updated: 3/9/2026


Database Integration

Best practices for using Nano ID as database primary keys.

Why Nano ID for Database IDs?

Advantages:

  • No coordination between nodes (distributed systems)
  • Generate client-side (reduce server load)
  • URL-safe (use in REST APIs)
  • Shorter than UUID (better index performance)
  • Collision-resistant (safe for production)

Disadvantages:

  • Not sortable by creation time (use ULID/KSUID if needed)
  • Larger than auto-increment integers
  • Random (no sequential patterns)

SQL Databases

PostgreSQL

```sql
CREATE TABLE users (
  id VARCHAR(21) PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT NOW()
);
```
```javascript
import { nanoid } from 'nanoid'
import pg from 'pg'

const client = new pg.Client()
await client.connect()

const user = { id: nanoid(), email: 'user@example.com' }

await client.query(
  'INSERT INTO users (id, email) VALUES ($1, $2)',
  [user.id, user.email]
)
```

MySQL

```sql
CREATE TABLE users (
  id VARCHAR(21) PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```
```javascript
import { nanoid } from 'nanoid'
import mysql from 'mysql2/promise'

const connection = await mysql.createConnection({
  host: 'localhost',
  user: 'root',
  database: 'myapp'
})

const user = { id: nanoid(), email: 'user@example.com' }

await connection.execute(
  'INSERT INTO users (id, email) VALUES (?, ?)',
  [user.id, user.email]
)
```

SQLite

```sql
CREATE TABLE users (
  id TEXT PRIMARY KEY,
  email TEXT NOT NULL,
  created_at INTEGER DEFAULT (strftime('%s', 'now'))
);
```
```javascript
import { nanoid } from 'nanoid'
import Database from 'better-sqlite3'

const db = new Database('myapp.db')

const user = { id: nanoid(), email: 'user@example.com' }

db.prepare('INSERT INTO users (id, email) VALUES (?, ?)').run(
  user.id,
  user.email
)
```

NoSQL Databases

MongoDB

```javascript
import { nanoid } from 'nanoid'
import { MongoClient } from 'mongodb'

const url = 'mongodb://localhost:27017'
const client = new MongoClient(url)
await client.connect()
const db = client.db('myapp')

const user = {
  _id: nanoid(), // Use as _id
  email: 'user@example.com',
  createdAt: new Date()
}

await db.collection('users').insertOne(user)
```

PouchDB / CouchDB

⚠️ Important: CouchDB document IDs can't start with an underscore (`_`)

```javascript
import { nanoid } from 'nanoid'
import PouchDB from 'pouchdb'

const db = new PouchDB('myapp')

const user = {
  _id: 'user:' + nanoid(), // Add a prefix to avoid a leading underscore
  email: 'user@example.com'
}

await db.put(user)
```

Redis

```javascript
import { nanoid } from 'nanoid'
import { createClient } from 'redis'

const client = createClient()
await client.connect()

const sessionId = nanoid()
const session = { userId: '123', createdAt: Date.now() }

await client.set(
  `session:${sessionId}`,
  JSON.stringify(session),
  { EX: 3600 } // Expire in 1 hour
)
```

DynamoDB

```javascript
import { nanoid } from 'nanoid'
import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb'

const client = new DynamoDBClient({})

const user = {
  id: { S: nanoid() },
  email: { S: 'user@example.com' },
  createdAt: { N: Date.now().toString() }
}

await client.send(new PutItemCommand({ TableName: 'Users', Item: user }))
```

ORMs and Query Builders

Prisma

```prisma
// schema.prisma
model User {
  id        String   @id @default(nanoid())
  email     String   @unique
  createdAt DateTime @default(now())
}
```
```javascript
import { nanoid } from 'nanoid'
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

const user = await prisma.user.create({
  data: {
    id: nanoid(), // Override the schema default if needed
    email: 'user@example.com'
  }
})
```

TypeORM

```typescript
import { Entity, PrimaryColumn, Column, BeforeInsert } from 'typeorm'
import { nanoid } from 'nanoid'

@Entity()
export class User {
  @PrimaryColumn('varchar', { length: 21 })
  id: string

  @Column()
  email: string

  @BeforeInsert()
  generateId() {
    this.id = nanoid()
  }
}
```

Sequelize

```javascript
import { Sequelize, DataTypes } from 'sequelize'
import { nanoid } from 'nanoid'

const sequelize = new Sequelize(process.env.DATABASE_URL)

const User = sequelize.define('User', {
  id: {
    type: DataTypes.STRING(21),
    primaryKey: true,
    defaultValue: () => nanoid()
  },
  email: {
    type: DataTypes.STRING,
    allowNull: false
  }
})
```

Knex

```javascript
import { nanoid } from 'nanoid'
import knex from 'knex'

const db = knex({
  client: 'pg',
  connection: process.env.DATABASE_URL
})

const user = { id: nanoid(), email: 'user@example.com' }

await db('users').insert(user)
```

Best Practices

✅ Generate on Application Layer

```javascript
import { nanoid } from 'nanoid'

// ✅ Good: generate in the application
const user = { id: nanoid(), email: 'user@example.com' }
await db.insert(user)

// ❌ Bad: generate in the database (requires a custom function/trigger)
// Less portable, harder to test
```

✅ Index Primary Keys

```sql
-- ✅ Primary key is automatically indexed
CREATE TABLE users (
  id VARCHAR(21) PRIMARY KEY,
  email VARCHAR(255)
);

-- ✅ Add an index on foreign keys
CREATE TABLE posts (
  id VARCHAR(21) PRIMARY KEY,
  user_id VARCHAR(21),
  FOREIGN KEY (user_id) REFERENCES users(id)
);

CREATE INDEX idx_posts_user_id ON posts(user_id);
```

✅ Handle Collisions (Rare)

```javascript
import { nanoid } from 'nanoid'

async function createUser(email, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const id = nanoid()
    try {
      await db.users.insert({ id, email })
      return id
    } catch (error) {
      // MySQL duplicate-key code; PostgreSQL uses error.code === '23505'
      if (error.code === 'ER_DUP_ENTRY') {
        continue // Collision detected, retry with a new ID
      }
      throw error
    }
  }
  throw new Error('Failed to generate unique ID')
}
```

✅ Use Consistent Length

```sql
-- ✅ Good: consistent length
CREATE TABLE users (
  id VARCHAR(21) PRIMARY KEY
);
CREATE TABLE posts (
  id VARCHAR(21) PRIMARY KEY,
  user_id VARCHAR(21)
);

-- ❌ Bad: inconsistent lengths
CREATE TABLE users (
  id VARCHAR(21) PRIMARY KEY
);
CREATE TABLE posts (
  id VARCHAR(30) PRIMARY KEY -- Why different?
);
```

Performance Considerations

Index Size

Nano ID (21 chars) vs Auto-increment (8 bytes):

```text
Auto-increment: 8 bytes per ID
Nano ID:        21 bytes per ID

For 1M rows:
  Auto-increment: 8 MB
  Nano ID:        21 MB
  Difference:     13 MB (negligible for modern systems)
```

Query Performance

```sql
-- Both are O(log n) with a B-tree index
SELECT * FROM users WHERE id = 123;            -- Auto-increment
SELECT * FROM users WHERE id = 'V1StGXR8...';  -- Nano ID

-- Nano ID is slightly slower due to string comparison,
-- but the difference is negligible (<1ms for millions of rows)
```

Insertion Performance

```text
Auto-increment: sequential writes (optimal for B-tree)
Nano ID:        random writes (causes page splits)

Impact:     ~5-10% slower insertions
Mitigation: use UUID v7 or ULID if time-ordered IDs are needed
```

Migration from Auto-Increment

Strategy 1: Dual Column

```sql
ALTER TABLE users ADD COLUMN nanoid VARCHAR(21);

UPDATE users SET nanoid = generate_nanoid(); -- Custom function

ALTER TABLE users ADD UNIQUE (nanoid);

-- Gradually migrate foreign keys
-- Then drop the old id column
```

Strategy 2: New Table

```sql
CREATE TABLE users_new (
  id VARCHAR(21) PRIMARY KEY,
  email VARCHAR(255),
  old_id INT
);

INSERT INTO users_new (id, email, old_id)
SELECT generate_nanoid(), email, id FROM users;

-- Update foreign keys
-- Then rename the table
```

See Also