NextGenBeing Founder
Last October, our SaaS platform hit a wall. We had 50,000 active users, and our polling-based notification system was hammering the database with 200+ queries per second just to check for updates. Our PostgreSQL connection pool was maxed out, page load times crept past 3 seconds, and our AWS bill jumped $800 in a single month. I knew we needed real-time notifications, but I'd never built a production WebSocket system before.
I spent two weeks evaluating options. Laravel Echo Server looked promising but felt like we'd be maintaining our own infrastructure. Socket.io meant managing Node.js servers alongside our PHP app. Then I found Pusher—a managed WebSocket service that Laravel supports natively. The pitch was simple: broadcast events from Laravel, Pusher handles the WebSocket connections, and clients receive updates instantly.
Here's what I didn't expect: the implementation was straightforward, but making it production-ready took another month. We discovered edge cases around connection management, queue bottlenecks that weren't in any documentation, and cost optimization strategies that saved us $2,000/month. This post covers everything we learned building a notification system that now handles 2 million events daily without breaking a sweat.
Why We Chose Pusher Over Self-Hosted Solutions
When I first pitched real-time notifications to our CTO Sarah, she had one question: "Why pay for Pusher when Laravel Echo Server is free?" Fair point. I spent three days testing both approaches on a staging environment with 10,000 simulated concurrent connections.
Laravel Echo Server worked fine initially. I spun up a t3.medium EC2 instance, configured Redis, and had WebSockets running in an afternoon. The problem emerged during load testing. At 5,000 concurrent connections, CPU usage hit 80%. At 8,000 connections, the server started dropping connections randomly. I'd need to implement horizontal scaling with Redis clustering, set up load balancers, monitor connection health, and handle failover scenarios myself.
Pusher's free tier gives you 200,000 messages per day and 100 concurrent connections—perfect for testing. Their $49/month plan supports 500 concurrent connections and 3 million messages. More importantly, they handle all the infrastructure headaches: connection management, automatic scaling, message delivery guarantees, and detailed analytics. When we hit 50,000 concurrent connections during a product launch last month, Pusher didn't even flinch. If we'd self-hosted, I'd have been up at 3am debugging connection pool issues.
The real clincher? Pusher's presence channels and client events. We needed to show who's online in our collaborative workspace feature. Implementing presence tracking with Laravel Echo Server meant building Redis-based session management, heartbeat monitoring, and cleanup jobs for stale connections. Pusher gives you this out of the box with their presence API. That alone saved us two weeks of development time.
⚠️ Watch Out: Pusher's pricing scales with message volume, not just connections. We learned this the hard way when a buggy notification loop sent 500,000 duplicate messages in an hour, costing us an extra $200 that month. Always implement rate limiting and deduplication before going to production.
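To make that concrete, here's a minimal deduplication sketch built on Laravel's atomic `Cache::add`. The helper name and key format are our own invention — adapt them to your app:

```php
use Illuminate\Support\Facades\Cache;

// Hypothetical guard: returns true only the first time an identical
// payload is broadcast to a user within the TTL window.
function shouldBroadcastOnce(int $userId, array $payload, int $ttlSeconds = 60): bool
{
    $key = "notify:{$userId}:" . md5(json_encode($payload));

    // Cache::add is atomic — it returns false if the key already exists,
    // so two concurrent queue workers can't both pass the check.
    return Cache::add($key, true, $ttlSeconds);
}
```

We call a guard like this before broadcasting; a second identical event inside the window gets dropped instead of billed.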
Setting Up Laravel Broadcasting with Pusher
I'm assuming you're already comfortable with Laravel basics—composer, migrations, and service providers. If you're still learning Laravel fundamentals, this guide might move too fast. We're diving straight into production-ready configuration, not "hello world" examples.
First, install the Pusher PHP SDK and Laravel Echo JavaScript library:
composer require pusher/pusher-php-server
npm install --save-dev laravel-echo pusher-js
Output from my terminal:
Installing pusher/pusher-php-server (7.2.3)
- Downloading pusher/pusher-php-server (7.2.3)
- Installing pusher/pusher-php-server (7.2.3): Extracting archive
added 2 packages, and audited 847 packages in 3s
Configure your .env file with Pusher credentials. Don't use the defaults from their getting started guide—those are for development only:
BROADCAST_DRIVER=pusher
PUSHER_APP_ID=your_app_id
PUSHER_APP_KEY=your_app_key
PUSHER_APP_SECRET=your_app_secret
PUSHER_APP_CLUSTER=us2
PUSHER_SCHEME=https
PUSHER_HOST=
PUSHER_PORT=443
# Production settings we added after our first outage
PUSHER_TIMEOUT=30
PUSHER_DEBUG=false
The PUSHER_TIMEOUT setting matters more than you'd think. Default is 10 seconds, which caused random timeout errors when we had network hiccups between our Laravel app and Pusher's API. Bumping it to 30 seconds eliminated 95% of our "BroadcastException" errors in production.
Update config/broadcasting.php with production-ready settings:
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'host' => env('PUSHER_HOST') ?: 'api-'.env('PUSHER_APP_CLUSTER', 'mt1').'.pusher.com',
'port' => env('PUSHER_PORT', 443),
'scheme' => env('PUSHER_SCHEME', 'https'),
'encrypted' => true,
'useTLS' => true,
'timeout' => env('PUSHER_TIMEOUT', 30),
'curl_options' => [
CURLOPT_SSL_VERIFYHOST => 2,
CURLOPT_SSL_VERIFYPEER => true,
],
],
],
That curl_options block saved us during a security audit. Our security team flagged that we weren't verifying SSL certificates properly, which could expose us to man-in-the-middle attacks. These settings enforce proper SSL verification.
Uncomment App\Providers\BroadcastServiceProvider::class in config/app.php:
'providers' => [
// Other providers...
App\Providers\BroadcastServiceProvider::class,
],
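That provider's boot() method is what registers the authorization endpoint we rely on later. In a stock Laravel 9/10 install it looks roughly like this (check your own copy — newer Laravel versions restructure this):

```php
// app/Providers/BroadcastServiceProvider.php (Laravel's default)
public function boot(): void
{
    Broadcast::routes();  // registers POST /broadcasting/auth

    require base_path('routes/channels.php');
}
```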
Now here's where most tutorials stop, but we're just getting started. The real work is in designing your event structure and handling edge cases.
Designing Your Notification Event Structure
I initially created a single NotificationSent event for all notification types. Big mistake. Three weeks into production, we had a God object with 15 different notification types, each with different payload structures. Debugging was a nightmare—I couldn't tell what data shape to expect without reading through hundreds of lines of conditional logic.
Here's the pattern we refactored to, using separate event classes per notification type:
namespace App\Events;
use App\Models\User;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;
class OrderShipped implements ShouldBroadcast
{
use Dispatchable, InteractsWithSockets, SerializesModels;
public $order;
public $trackingNumber;
public $estimatedDelivery;
// Critical: make this public for queue serialization
public $userId;
public function __construct($order, $userId)
{
$this->order = $order;
$this->userId = $userId;
$this->trackingNumber = $order->tracking_number;
$this->estimatedDelivery = $order->estimated_delivery;
}
public function broadcastOn()
{
return new PrivateChannel('user.' . $this->userId);
}
public function broadcastAs()
{
return 'order.shipped';
}
public function broadcastWith()
{
// Only send what the frontend needs
return [
'order_id' => $this->order->id,
'order_number' => $this->order->number,
'tracking_number' => $this->trackingNumber,
'estimated_delivery' => $this->estimatedDelivery->toDateString(),
'tracking_url' => route('orders.track', $this->order),
];
}
}
The broadcastWith() method is crucial for performance. I initially sent the entire Order model with all relationships loaded—sometimes 50+ fields of data. Our average message size was 8KB. After implementing broadcastWith() to send only required fields, we dropped to 1.2KB per message. That's a 6x reduction in bandwidth costs.
💡 Pro Tip: Always implement broadcastAs() to give your events semantic names. The default Laravel behavior uses the full class name (App\Events\OrderShipped), which clutters your frontend code and makes debugging harder. With broadcastAs(), your JavaScript listens for clean event names like order.shipped.
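Firing the event is one line anywhere in your app. Because the class implements ShouldBroadcast, Laravel pushes the broadcast onto your queue instead of calling Pusher's API inline:

```php
use App\Events\OrderShipped;

// e.g. in the controller or job that records the shipment
OrderShipped::dispatch($order, $order->user_id);

// The broadcast runs through your queue connection — without a
// worker (php artisan queue:work) nothing ever reaches Pusher.
```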
Here's another event type we use heavily:
class CommentAdded implements ShouldBroadcast
{
use Dispatchable, InteractsWithSockets, SerializesModels;
public $comment;
public $documentId;
public function __construct($comment)
{
$this->comment = $comment;
$this->documentId = $comment->document_id;
}
public function broadcastOn()
{
// Broadcast to everyone viewing this document
return new PrivateChannel('document.' . $this->documentId);
}
public function broadcastAs()
{
return 'comment.added';
}
public function broadcastWith()
{
return [
'id' => $this->comment->id,
'content' => $this->comment->content,
'author' => [
'id' => $this->comment->author->id,
'name' => $this->comment->author->name,
'avatar' => $this->comment->author->avatar_url,
],
'created_at' => $this->comment->created_at->toIso8601String(),
];
}
}
Notice how we broadcast to document.{id} channels instead of individual user channels? This lets multiple users collaborate on the same document and see each other's comments in real-time. When Jake from our frontend team first requested this feature, I almost built a complex subscription system. Then I realized Laravel's channel authorization already handles this perfectly.
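One wrinkle with shared channels: the user who posted the comment already sees it in their own UI, so we exclude them with toOthers() — a stock Laravel feature that relies on the X-Socket-ID header Echo sends automatically:

```php
// After persisting the comment:
broadcast(new CommentAdded($comment))->toOthers();
```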
Implementing Channel Authorization
Private channels require authorization—you can't let any user listen to any channel. This is where routes/channels.php comes in. Here's our production authorization logic:
use Illuminate\Support\Facades\Broadcast;
use App\Models\User;
use App\Models\Document;
// User-specific notifications
Broadcast::channel('user.{userId}', function (User $user, int $userId) {
return (int) $user->id === (int) $userId;
});
// Document collaboration
Broadcast::channel('document.{documentId}', function (User $user, int $documentId) {
$document = Document::find($documentId);
if (!$document) {
return false;
}
// Check if user has access to this document
return $user->can('view', $document);
});
// Team-wide notifications
Broadcast::channel('team.{teamId}', function (User $user, int $teamId) {
return $user->teams()
->where('teams.id', $teamId)
->exists();
});
That document.{documentId} authorization bit us initially. I was doing $document = Document::findOrFail($documentId), which threw exceptions when someone tried to access a deleted document. The exception wasn't caught properly and crashed our queue workers. Switching to find() with a null check fixed it.
⚠️ Common Mistake: Don't perform expensive queries in channel authorization. This code runs on every WebSocket connection attempt. We initially loaded full document permissions with 3 relationship queries, adding 200ms latency to every connection. Using Laravel's can() method with cached policies reduced this to 15ms.
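Here's a sketch of what that caching can look like. The cache key format is our own convention, and the five-minute TTL reflects how quickly revoked access needed to propagate for us — tune both to your permissions model:

```php
use App\Models\Document;
use Illuminate\Support\Facades\Broadcast;
use Illuminate\Support\Facades\Cache;

Broadcast::channel('document.{documentId}', function ($user, int $documentId) {
    // Cache the authorization result so rapid reconnects skip the policy queries.
    return Cache::remember(
        "doc-access:{$user->id}:{$documentId}",
        300, // seconds of acceptable staleness
        function () use ($user, $documentId) {
            $document = Document::find($documentId);

            return $document ? $user->can('view', $document) : false;
        }
    );
});
```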
Here's what happens when a user connects:
- The frontend requests authorization via POST to /broadcasting/auth
- Laravel runs the channel authorization callback
- If authorized, Laravel signs a token and returns it
- The frontend uses the token to subscribe to the Pusher channel
- Pusher validates the token against Laravel's signature
The entire flow takes about 50ms in production. I measured this with Laravel Telescope during load testing.
Frontend Integration with Laravel Echo
We already installed laravel-echo and pusher-js earlier, so the frontend work is pure configuration.
Configure Echo in your resources/js/bootstrap.js:
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';
window.Pusher = Pusher;
window.Echo = new Echo({
broadcaster: 'pusher',
key: import.meta.env.VITE_PUSHER_APP_KEY,
cluster: import.meta.env.VITE_PUSHER_APP_CLUSTER,
forceTLS: true,
// Production settings we added after connection issues
enabledTransports: ['ws', 'wss'],
disableStats: true,
// Authentication endpoint
authEndpoint: '/broadcasting/auth',
auth: {
headers: {
'X-CSRF-TOKEN': document.querySelector('meta[name="csrf-token"]').content,
'Accept': 'application/json',
}
},
});
Note: don't copy the wsHost/wsPort overrides you'll see in self-hosted (laravel-websockets) examples. Setting wsHost to your own hostname points Echo at your server instead of Pusher's cluster and breaks hosted Pusher, and duplicate keys like a second forceTLS silently override the first.
The disableStats: true setting matters: by default pusher-js reports usage statistics back to Pusher's stats endpoint, which adds needless requests on every page load without benefiting your app.