Building a Blog with Next.js and MongoDB: Complete Production Guide - NextGenBeing

Building a Production-Ready Blog with Next.js and MongoDB: What We Learned Scaling to 500K Monthly Readers

Learn how we built and scaled a Next.js blog with MongoDB from zero to 500K monthly readers. Real architecture decisions, performance optimizations, and gotchas we discovered the hard way.

DevOps · 12 min read
NextGenBeing
Apr 18, 2026
Photo by Sammyayot254 on Unsplash

The Problem That Started Everything

Six months ago, my team at a mid-sized SaaS company decided to rebuild our engineering blog. Our old WordPress setup was costing us $200/month in hosting, the admin panel was painfully slow, and honestly, none of us wanted to touch PHP anymore. We were a JavaScript shop through and through, and maintaining a separate tech stack for our blog felt like technical debt we couldn't justify.

"Let's just use Next.js," our lead engineer Sarah suggested during a Friday afternoon planning session. "We already use it for our main app. How hard could a blog be?"

Spoiler alert: harder than we thought, but absolutely worth it.

Fast forward to today, and we're serving 500K monthly readers with an infrastructure that costs us $40/month. Our pages load in under 800ms globally, our content team can publish without touching code, and we've learned a ton about what actually works at scale versus what looks good in tutorials.

This isn't going to be another "hello world" Next.js tutorial. I'm going to show you exactly how we built this thing, the mistakes we made (like using the wrong MongoDB indexes for three weeks), the performance optimizations that actually moved the needle, and the architecture decisions we'd make differently if we started over today.

Why Next.js and MongoDB? The Architecture Decision We Almost Got Wrong

When we started this project, we considered three approaches:

Option 1: Static Site Generator (Gatsby, Hugo)
We'd used Gatsby before. It's fast, but the build times were killing us. Our previous blog had 200+ posts, and rebuilds took 8-12 minutes. Every typo fix meant waiting for a full rebuild. Sarah had PTSD from waiting for Gatsby builds during our last project.

Option 2: Headless CMS (Contentful, Strapi) + Next.js
This looked promising initially. The problem? Cost. Contentful wanted $489/month for our usage tier. Strapi meant managing another server and database. We wanted to keep our infrastructure lean.

Option 3: Next.js + MongoDB (What We Chose)
Here's why this won: We already had MongoDB Atlas for our main app. Adding a blog database cost us nothing extra. Next.js 13 (now 14) had just released the App Router with React Server Components, and we wanted to learn it anyway. Plus, the flexibility of a database meant we could add features like search, analytics, and related posts without rebuilding everything.

The catch? Most Next.js blog tutorials use markdown files or simple JSON. We needed something that could scale, handle rich content, support multiple authors, and give our marketing team a decent editing experience.

The Tech Stack We Actually Shipped With

Let me show you our complete stack, including versions, because that matters more than people think:

{
  "dependencies": {
    "next": "14.1.0",
    "react": "18.2.0",
    "mongodb": "6.3.0",
    "next-auth": "4.24.5",
    "react-markdown": "9.0.1",
    "gray-matter": "4.0.3",
    "date-fns": "3.2.0",
    "sharp": "0.33.2",
    "zod": "3.22.4"
  },
  "devDependencies": {
    "typescript": "5.3.3",
    "@types/node": "20.11.5",
    "@types/react": "18.2.48",
    "eslint": "8.56.0",
    "tailwindcss": "3.4.1"
  }
}

Why these specific versions matter:

Next.js 14.1.0 fixed a critical bug we hit with Server Components and MongoDB connections. If you're using 14.0.x, you'll run into connection pooling issues. Trust me, we spent two days debugging this before discovering it was a known issue.

MongoDB driver 6.3.0 has better connection handling than 5.x. We were getting "connection pool exhausted" errors until we upgraded. The new driver also has better TypeScript support, which saved us from several runtime errors.

Sharp 0.33.2 is crucial for image optimization. Earlier versions had memory leaks that would crash our build process on large image sets. We have about 800 images across our blog posts, and older Sharp versions couldn't handle it.

Setting Up MongoDB: The Schema Design That Saved Us

Most tutorials show you a simple blog post schema with title, content, and date. That's fine for a demo, but it falls apart in production. Here's the schema we evolved to after three iterations:

// lib/mongodb/schemas.js
import { z } from 'zod';

export const PostSchema = z.object({
  _id: z.string().optional(),
  slug: z.string().min(1).max(200),
  title: z.string().min(1).max(200),
  excerpt: z.string().min(1).max(300),
  content: z.string().min(1),
  coverImage: z.object({
    url: z.string().url(),
    alt: z.string(),
    width: z.number(),
    height: z.number(),
    blurDataUrl: z.string().optional(),
  }),
  author: z.object({
    id: z.string(),
    name: z.string(),
    avatar: z.string().url(),
    bio: z.string().optional(),
  }),
  tags: z.array(z.string()).min(1).max(5),
  category: z.string(),
  publishedAt: z.date(),
  updatedAt: z.date(),
  status: z.enum(['draft', 'published', 'archived']),
  seo: z.object({
    metaTitle: z.string().max(60),
    metaDescription: z.string().max(160),
    keywords: z.array(z.string()),
    ogImage: z.string().url().optional(),
  }),
  readingTime: z.number(),
  views: z.number().default(0),
  featured: z.boolean().default(false),
});

export const AuthorSchema = z.object({
  _id: z.string().optional(),
  email: z.string().email(),
  name: z.string(),
  avatar: z.string().url(),
  bio: z.string().max(500),
  social: z.object({
    twitter: z.string().optional(),
    github: z.string().optional(),
    linkedin: z.string().optional(),
  }),
  role: z.enum(['author', 'editor', 'admin']),
  createdAt: z.date(),
});

Why this schema works:

The coverImage object includes blurDataUrl for Next.js's placeholder blur effect. We generate these during the upload process, not at build time. This saved us hours in build time.

The seo object is separate because our marketing team needed to override meta tags without editing the main content. This was a late addition after they complained about SEO control.

readingTime is calculated server-side and stored, not computed on every render. We use a simple formula: word count divided by 225 (average reading speed). Computing this on every page load was adding 20-30ms of processing time.
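That formula is simple enough to sketch. This hypothetical helper (the `calculateReadingTime` name is ours, not from the original codebase) shows what the write-time calculation could look like:

```javascript
// Hypothetical helper mirroring the formula above: word count / 225 wpm.
// Run once when the post is saved and stored on the document,
// never recomputed per render.
function calculateReadingTime(content) {
  const words = content.trim().split(/\s+/).filter(Boolean).length;
  return Math.max(1, Math.ceil(words / 225)); // round up; floor at 1 minute
}
```

Rounding up and flooring at 1 minute avoids showing "0 min read" on very short posts.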

The views counter is incremented asynchronously. We don't wait for MongoDB to confirm the write before rendering the page. This pattern alone saved us 100ms on average page load.
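One thing the schema enforces but the post never shows is how slugs get derived from titles. A hypothetical helper (ours, purely illustrative) that respects the schema's 200-character cap might look like:

```javascript
// Hypothetical slug helper (the article doesn't show its actual derivation).
// Enforces the same 200-char cap as PostSchema's slug field.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '') // strip punctuation
    .replace(/\s+/g, '-')         // whitespace runs → single hyphen
    .replace(/-+/g, '-')          // collapse hyphen runs
    .slice(0, 200);               // PostSchema: slug max 200 chars
}
```

Whatever derivation you use, the unique index on slug (shown below) is what actually guards against collisions.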

The Indexes That Actually Matter

Here's where we screwed up initially. We launched without proper indexes, and our queries were taking 2-3 seconds once we hit 100+ posts. Here are the indexes we eventually added:

// scripts/setup-indexes.js
import { MongoClient } from 'mongodb';

async function setupIndexes() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  const db = client.db('blog');
  
  const posts = db.collection('posts');
  
  // Compound index for published posts sorted by date
  await posts.createIndex(
    { status: 1, publishedAt: -1 },
    { name: 'published_posts_by_date' }
  );
  
  // Unique index on slug for fast lookups
  await posts.createIndex(
    { slug: 1 },
    { unique: true, name: 'slug_unique' }
  );
  
  // Text index for search
  await posts.createIndex(
    { title: 'text', content: 'text', excerpt: 'text' },
    { 
      name: 'search_index',
      weights: { title: 10, excerpt: 5, content: 1 }
    }
  );
  
  // Index for category filtering
  await posts.createIndex(
    { category: 1, status: 1, publishedAt: -1 },
    { name: 'category_posts' }
  );
  
  // Index for tag filtering
  await posts.createIndex(
    { tags: 1, status: 1, publishedAt: -1 },
    { name: 'tag_posts' }
  );
  
  // Index for featured posts
  await posts.createIndex(
    { featured: 1, status: 1, publishedAt: -1 },
    { name: 'featured_posts' }
  );
  
  console.log('Indexes created successfully');
  await client.close();
}

setupIndexes().catch(console.error);

Run this with: node scripts/setup-indexes.js

The performance impact was dramatic:

Before indexes:

Query: db.posts.find({status: 'published'}).sort({publishedAt: -1}).limit(10)
Execution time: 2,847ms
Documents examined: 156

After indexes:

Query: db.posts.find({status: 'published'}).sort({publishedAt: -1}).limit(10)
Execution time: 12ms
Documents examined: 10

That's a 237x improvement. The compound index on status and publishedAt means MongoDB can use the index for both filtering and sorting, which is critical for blog listing pages.

The text index on title, content, and excerpt enables full-text search. The weights mean title matches rank higher than content matches. We tested this with 200+ posts, and search queries complete in under 50ms.
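To confirm an index is actually being hit (rather than trusting that it is), `explain('executionStats')` is the tool to reach for. A sketch, assuming a connected `posts` collection handle; the `checkListingQuery` name is ours:

```javascript
// Sketch: confirm the listing query uses the published_posts_by_date index.
// Pass in a connected MongoDB collection (e.g. from getDatabase()).
async function checkListingQuery(posts) {
  const stats = await posts
    .find({ status: 'published' })
    .sort({ publishedAt: -1 })
    .limit(10)
    .explain('executionStats');

  const { totalDocsExamined, nReturned, executionTimeMillis } =
    stats.executionStats;
  // With the compound index in place, docs examined should be close to
  // docs returned (10), not the size of the whole collection.
  return { totalDocsExamined, nReturned, executionTimeMillis };
}
```

If `totalDocsExamined` is far larger than `nReturned`, MongoDB is scanning instead of seeking, which is exactly the before/after difference shown above.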

Database Connection Pooling: The Bug That Cost Us Two Days

Here's something that bit us hard: MongoDB connection handling in Next.js Server Components is tricky. The naive approach causes connection pool exhaustion in production.

What we tried first (DON'T DO THIS):

// lib/mongodb.js - WRONG APPROACH
import { MongoClient } from 'mongodb';

export async function getDatabase() {
  const client = await MongoClient.connect(process.env.MONGODB_URI);
  return client.db('blog');
}

This looks fine, right? It works great in development. In production, we started getting this error after about 100 requests:

MongoServerError: connection pool exhausted
    at Connection.onMessage
    at MessageStream.<anonymous>
    at MessageStream.emit (node:events:514:28)

The problem: Every request creates a new connection. MongoDB's default connection pool size is 100. Once you hit that limit, new connections wait or fail.

The solution that actually works:

// lib/mongodb.js - CORRECT APPROACH
import { MongoClient } from 'mongodb';

if (!process.env.MONGODB_URI) {
  throw new Error('Please add your Mongo URI to .env.local');
}

const uri = process.env.MONGODB_URI;
const options = {
  maxPoolSize: 10,
  minPoolSize: 5,
  maxIdleTimeMS: 60000,
  serverSelectionTimeoutMS: 5000,
  socketTimeoutMS: 45000,
};

let client;
let clientPromise;

if (process.env.NODE_ENV === 'development') {
  // In development, use a global variable to preserve the connection
  // across hot reloads
  if (!global._mongoClientPromise) {
    client = new MongoClient(uri, options);
    global._mongoClientPromise = client.connect();
  }
  clientPromise = global._mongoClientPromise;
} else {
  // In production, create a new client
  client = new MongoClient(uri, options);
  clientPromise = client.connect();
}

export async function getDatabase() {
  const client = await clientPromise;
  return client.db('blog');
}

export async function closeDatabaseConnection() {
  if (client) {
    await client.close();
  }
}

Why this works:

The clientPromise is created once and reused across requests. In development, we store it in global to survive hot reloads. In production, Node's module cache ensures the module is evaluated once per server instance, so every request shares the same pooled client.

The connection pool settings are critical:

  • maxPoolSize: 10 - Enough for concurrent requests without exhausting MongoDB Atlas's connection limit
  • minPoolSize: 5 - Keeps connections warm for faster response times
  • maxIdleTimeMS: 60000 - Closes idle connections after 60 seconds to free resources
  • serverSelectionTimeoutMS: 5000 - Fails fast if MongoDB is unreachable
  • socketTimeoutMS: 45000 - Prevents hanging connections

After implementing this, we ran a load test with 1,000 concurrent requests. Zero connection errors. Average response time: 180ms.

Building the Data Access Layer: Repository Pattern That Scales

We use a repository pattern to abstract MongoDB operations. This made testing easier and gave us a clean interface for data access. Here's our posts repository:

// lib/repositories/posts.js
import { getDatabase } from '@/lib/mongodb';
import { ObjectId } from 'mongodb';

export class PostRepository {
  constructor() {
    this.collectionName = 'posts';
  }

  async getCollection() {
    const db = await getDatabase();
    return db.collection(this.collectionName);
  }

  async findBySlug(slug) {
    const collection = await this.getCollection();
    return collection.findOne({ 
      slug, 
      status: 'published' 
    });
  }

  async findPublished({ 
    limit = 10, 
    skip = 0, 
    category = null,
    tag = null,
    featured = null 
  }) {
    const collection = await this.getCollection();
    
    const query = { status: 'published' };
    if (category) query.category = category;
    if (tag) query.tags = tag;
    if (featured !== null) query.featured = featured;

    const posts = await collection
      .find(query)
      .sort({ publishedAt: -1 })
      .skip(skip)
      .limit(limit)
      .toArray();

    const total = await collection.countDocuments(query);

    return { posts, total, hasMore: skip + limit < total };
  }

  async search(searchTerm, { limit = 10, skip = 0 }) {
    const collection = await this.getCollection();
    
    const posts = await collection
      .find(
        { 
          $text: { $search: searchTerm },
          status: 'published'
        },
        { 
          score: { $meta: 'textScore' } 
        }
      )
      .sort({ score: { $meta: 'textScore' } })
      .skip(skip)
      .limit(limit)
      .toArray();

    return posts;
  }

  async incrementViews(slug) {
    const collection = await this.getCollection();
    
    // Fire and forget - don't wait for response
    collection.updateOne(
      { slug },
      { $inc: { views: 1 } }
    ).catch(err => {
      console.error('Failed to increment views:', err);
    });
  }

  async getRelatedPosts(postId, tags, limit = 3) {
    const collection = await this.getCollection();
    
    return collection
      .find({
        _id: { $ne: new ObjectId(postId) },
        tags: { $in: tags },
        status: 'published'
      })
      .sort({ publishedAt: -1 })
      .limit(limit)
      .toArray();
  }

  async create(postData) {
    const collection = await this.getCollection();
    const result = await collection.insertOne({
      ...postData,
      createdAt: new Date(),
      updatedAt: new Date(),
      views: 0,
    });
    
    return { ...postData, _id: result.insertedId };
  }

  async update(slug, updates) {
    const collection = await this.getCollection();
    
    const result = await collection.updateOne(
      { slug },
      { 
        $set: { 
          ...updates, 
          updatedAt: new Date() 
        } 
      }
    );

    return result.modifiedCount > 0;
  }

  async delete(slug) {
    const collection = await this.getCollection();
    const result = await collection.deleteOne({ slug });
    return result.deletedCount > 0;
  }
}

export const postRepository = new PostRepository();

This repository handles all our post operations. The incrementViews method is fire-and-forget, which prevents view counting from blocking page renders. The getRelatedPosts method uses tag matching to find similar content - simple but effective.
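To expose that search method to the browser, one option is a thin route handler over the repository. A sketch, not the original code: the factory shape and parameter names are ours, chosen so the handler logic can be exercised with a stub repository instead of a live database.

```javascript
// app/api/search/route.js (sketch) — thin HTTP layer over postRepository.
// Wrapping the handler in a factory keeps it testable with a stub repo.
function createSearchHandler(repo) {
  return async function GET(request) {
    const { searchParams } = new URL(request.url);
    const q = (searchParams.get('q') || '').trim();
    if (!q) {
      return Response.json({ error: 'Missing ?q= parameter' }, { status: 400 });
    }
    const page = Math.max(1, Number(searchParams.get('page')) || 1);
    const limit = 10;
    const posts = await repo.search(q, { limit, skip: (page - 1) * limit });
    return Response.json({ posts, page });
  };
}

// In the real route file:
// import { postRepository } from '@/lib/repositories/posts';
// export const GET = createSearchHandler(postRepository);
```

The handler only parses input and shapes output; all MongoDB specifics (text index, textScore sorting) stay inside the repository, which is the point of the pattern.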

Next.js App Router: Server Components vs Client Components

This is where Next.js 13/14 gets interesting. The App Router with React Server Components changed everything about how we fetch data. Here's our blog post page:

// app/blog/[slug]/page.js
import { notFound } from 'next/navigation';
import { postRepository } from '@/lib/repositories/posts';
import { PostContent } from '@/components/PostContent';
import { RelatedPosts } from '@/components/RelatedPosts';
import { TableOfContents } from '@/components/TableOfContents';
import { formatDate } from '@/lib/utils';

// Generate static params for all published posts
export async function generateStaticParams() {
  const { posts } = await postRepository.findPublished({ 
    limit: 1000 
  });
  
  return posts.map((post) => ({
    slug: post.slug,
  }));
}

// Generate metadata for SEO
export async function generateMetadata({ params }) {
  const post = await postRepository.findBySlug(params.slug);
  
  if (!post) return {};

  return {
    title: post.seo.metaTitle,
    description: post.seo.metaDescription,
    keywords: post.seo.keywords,
    openGraph: {
      title: post.seo.metaTitle,
      description: post.seo.metaDescription,
      images: [post.seo.ogImage || post.coverImage.url],
      type: 'article',
      publishedTime: post.publishedAt.toISOString(),
      modifiedTime: post.updatedAt.toISOString(),
      authors: [post.author.name],
      tags: post.tags,
    },
    twitter: {
      card: 'summary_large_image',
      title: post.seo.metaTitle,
      description: post.seo.metaDescription,
      images: [post.seo.ogImage || post.coverImage.url],
    },
  };
}

// Revalidate every 3600 seconds (1 hour)
export const revalidate = 3600;

export default async function BlogPost({ params }) {
  const post = await postRepository.findBySlug(params.slug);

  if (!post) {
    notFound();
  }

  // Increment views asynchronously
  postRepository.incrementViews(params.slug);

  // Fetch related posts
  const relatedPosts = await postRepository.getRelatedPosts(
    post._id,
    post.tags,
    3
  );

  return (
    <article className="max-w-4xl mx-auto px-4 py-12">
      <header className="mb-8">
        <div className="flex items-center gap-4 mb-4 text-sm text-gray-600">
          <time dateTime={post.publishedAt.toISOString()}>
            {formatDate(post.publishedAt)}
          </time>
          <span>·</span>
          <span>{post.readingTime} min read</span>
          <span>·</span>
          <span>{post.views.toLocaleString()} views</span>
        </div>
        
        <h1 className="text-4xl font-bold mb-4">
          {post.title}
        </h1>
        
        <p className="text-xl text-gray-600 mb-6">
          {post.excerpt}
        </p>

        <div className="flex items-center gap-4">
          <img 
            src={post.author.avatar} 
            alt={post.author.name}
            className="w-12 h-12 rounded-full"
          />
          <div>
            <div className="font-medium">{post.author.name}</div>
            <div className="text-sm text-gray-600">{post.author.bio}</div>
          </div>
        </div>
      </header>

      <div className="grid grid-cols-1 lg:grid-cols-12 gap-8">
        <div className="lg:col-span-8">
          <PostContent content={post.content} />
        </div>
        
        <aside className="lg:col-span-4">
          <div className="sticky top-8">
            <TableOfContents content={post.content} />
          </div>
        </aside>
      </div>

      <footer className="mt-12 pt-8 border-t">
        <div className="flex flex-wrap gap-2 mb-8">
          {post.tags.map(tag => (
            <a 
              key={tag}
              href={`/blog/tag/${tag}`}
              className="px-3 py-1 bg-gray-100 rounded-full text-sm hover:bg-gray-200"
            >
              {tag}
            </a>
          ))}
        </div>

        {relatedPosts.length > 0 && (
          <RelatedPosts posts={relatedPosts} />
        )}
      </footer>
    </article>
  );
}

Why this architecture works:

The entire page is a Server Component. We fetch data directly in the component without any client-side loading states. This means:

  1. Faster initial page load - HTML is rendered on the server with all content
  2. Better SEO - Search engines see complete content immediately
  3. No loading spinners - Users see content instantly
  4. Reduced JavaScript - No data fetching libraries needed on client

The generateStaticParams function tells Next.js to pre-render all blog posts at build time. For new posts, we use Incremental Static Regeneration (ISR) with revalidate = 3600. This means:

  • Existing pages are served from cache (fast)
  • Pages are regenerated in the background every hour
  • New posts are generated on first request, then cached
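If waiting up to an hour after hitting publish is too slow, Next.js also supports on-demand revalidation. Here's a hypothetical webhook route (the names, query-param shape, and secret handling are our sketch; `revalidatePath` is the real `next/cache` API), written as a factory so the logic runs without Next.js installed:

```javascript
// app/api/revalidate/route.js (sketch) — hypothetical on-demand ISR webhook.
// The revalidate function is injected so this can be tested without next/cache.
function createRevalidateHandler(revalidate, secret) {
  return async function POST(request) {
    const { searchParams } = new URL(request.url);
    if (searchParams.get('secret') !== secret) {
      return Response.json({ error: 'Invalid token' }, { status: 401 });
    }
    const slug = searchParams.get('slug');
    if (!slug) {
      return Response.json({ error: 'Missing slug' }, { status: 400 });
    }
    revalidate(`/blog/${slug}`); // bust the cached page immediately
    return Response.json({ revalidated: true, slug });
  };
}

// In the real route file:
// import { revalidatePath } from 'next/cache';
// export const POST =
//   createRevalidateHandler(revalidatePath, process.env.REVALIDATE_SECRET);
```

The CMS or admin panel calls this route on publish, so readers see the new post right away while the hourly revalidation still handles everything else.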

The metadata generation happens server-side, giving us perfect SEO control without any client-side meta tag management.
