The Day Our Infrastructure Bill Made Everyone Panic
Last November, our CFO Sarah walked into the engineering room with a printout that made my stomach drop. Our AWS bill had jumped from $8,000 to $23,000 in three months. We'd scaled from 2 million to about 15 million daily requests, and we were adding EC2 instances like they were going out of style.
"Can we fix this without rewriting everything?" she asked.
I'd been hearing about Laravel Octane for months but kept putting it off. The idea of running Laravel on an application server instead of the traditional PHP-FPM model seemed risky. What if something broke? What if our legacy code didn't work? What if we lost data during the transition?
But here's the thing: we didn't have a choice. We needed to scale, and we needed to do it fast. So I spent the next two weeks testing Octane in staging, and what I discovered changed everything about how we think about PHP performance.
This isn't a story about magical 10x improvements (though we did see 8-12x throughput gains in specific scenarios). This is about the real, messy process of migrating a production Laravel application to Octane, the gotchas nobody warns you about, and the architectural decisions that actually matter when you're serving tens of millions of requests per day.
Why Traditional PHP-FPM Hits a Wall at Scale
Before we dive into Octane, you need to understand why PHP-FPM becomes a bottleneck. Most developers don't hit this until they're at serious scale, which is why the problem isn't obvious when you're building your app.
PHP-FPM works like this: every request spawns a new PHP process (or reuses one from a pool), bootstraps your entire Laravel application, loads all your service providers, connects to the database, processes the request, and then tears everything down. For a typical Laravel app, this bootstrap process takes 50-150ms before your actual application code even runs.
When we profiled our application using Blackfire, here's what we found:
// Typical request breakdown on PHP-FPM
Bootstrap time: 87ms
- Autoloader registration: 12ms
- Service provider loading: 31ms
- Config loading: 18ms
- Database connection: 14ms
- Route compilation: 12ms
Actual application logic: 43ms
Response generation: 8ms
Total: 138ms per request
That's 63% overhead before we even start processing the request. When you're handling 2,000 requests per second, you're spending 174 CPU-seconds every second just bootstrapping (yes, that math means you need 174 workers busy in parallel just to absorb the bootstrap overhead).
We were running 48 m5.xlarge instances (4 vCPUs, 16GB RAM each) to handle our peak traffic. Each instance ran about 120 PHP-FPM workers. Do the math: that's 5,760 processes constantly bootstrapping Laravel.
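If you want to sanity-check those numbers yourself, the arithmetic fits in a few lines (figures taken straight from the profile above; doing the math in milliseconds avoids floating-point surprises):

```php
<?php
// Sanity-check the capacity math from the Blackfire profile.
$rps         = 2000; // peak requests per second
$bootstrapMs = 87;   // bootstrap time per request
$totalMs     = 138;  // total time per request on PHP-FPM

// Worker-seconds spent on bootstrap alone, every second:
$bootstrapLoad = $rps * $bootstrapMs / 1000; // 174

// Minimum fully-busy workers needed just to keep up with traffic:
$minWorkers = (int) ceil($rps * $totalMs / 1000); // 276

// The fleet we actually ran: 48 instances x ~120 FPM workers each:
$fleet = 48 * 120; // 5760
```

The gap between the 276-worker theoretical minimum and the 5,760 workers we ran is headroom for traffic spikes, slow endpoints, and blocking I/O.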
My colleague Jake pointed out something interesting during one of our debugging sessions: "We're basically starting a new Laravel application 2,000 times per second. That's insane when you think about it."
He was right. It is insane.
How Application Servers Change Everything
Application servers like Swoole and RoadRunner flip this model on its head. Instead of bootstrapping for every request, they bootstrap once and keep your application in memory. Then they handle requests in a persistent process.
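The difference can be sketched in a few lines of plain PHP. This is an illustrative toy, not Octane's actual event loop, and names like `bootstrapApp` are invented for the example:

```php
<?php
// Toy model: pay the bootstrap cost once, then reuse the booted app.

function bootstrapApp(): array
{
    // Stand-in for autoloading, service providers, config, routes...
    return [
        'booted_at' => microtime(true),
        'router'    => fn (string $uri): string => "handled $uri",
    ];
}

// PHP-FPM model: bootstrap on EVERY request.
function handleFpm(string $uri): string
{
    $app = bootstrapApp(); // paid per request
    return ($app['router'])($uri);
}

// Application-server model: bootstrap once, keep $app resident.
$app = bootstrapApp();     // paid once per worker

function handleOctane(array $app, string $uri): string
{
    return ($app['router'])($uri); // only application logic per request
}
```

In the real thing, Swoole or RoadRunner owns the request loop and hands each incoming request to the already-booted application.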
Here's what that same request looks like on Octane:
// First request - one-time bootstrap
Bootstrap time: 87ms
Request processing: 43ms
Total: 130ms
// Every subsequent request
Bootstrap time: 0ms
Request processing: 43ms
Total: 43ms
// Throughput improvement
PHP-FPM: 138ms per request = ~7.2 requests/second per worker
Octane: 43ms per request = ~23 requests/second per worker
That's a 3.2x improvement per worker just from eliminating bootstrap overhead. But the real gains come from being able to handle concurrent requests within a single process.
With Swoole (which we ultimately chose), each Octane worker can handle multiple concurrent requests using coroutines. A single worker might handle 10-20 concurrent requests depending on your workload. Suddenly that 3.2x improvement becomes 8-12x in real-world scenarios.
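For reference, starting Octane on Swoole looks roughly like this. The worker counts below are illustrative, not our production values; tune them per host:

```shell
# Worker counts here are illustrative; a common starting point is
# roughly one worker per CPU core. --max-requests recycles each worker
# after N requests, which bounds the damage from any leak you haven't
# found yet (more on that below).
php artisan octane:start --server=swoole --workers=8 --task-workers=4 --max-requests=500
```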
But here's what nobody tells you: this architectural change breaks a lot of assumptions in your code.
The Migration: What Actually Broke
We started our Octane migration in December 2023. I thought it would take a week. It took six weeks and uncovered issues we didn't even know existed.
Memory Leaks We Didn't Know We Had
The first thing that broke was subtle. Our application would run fine for a few hours, then memory usage would creep up until workers started getting killed by the OOM killer.
# Monitoring memory usage
watch -n 1 'ps aux | grep octane | awk "{sum+=\$6} END {print sum/1024 \" MB\"}"'
# After 2 hours
1,847 MB
# After 4 hours
2,934 MB
# After 6 hours
4,128 MB (workers start dying)
The problem? We had static class properties that accumulated data across requests. In PHP-FPM, this doesn't matter because the process dies after each request. In Octane, it's a memory leak.
Here's the actual bug:
// Our original code - worked fine on PHP-FPM
class ProductRepository
{
    protected static $cache = [];

    public function find($id)
    {
        if (!isset(self::$cache[$id])) {
            self::$cache[$id] = Product::find($id);
        }

        return self::$cache[$id];
    }
}
This looks innocent, right? We're just caching products to avoid duplicate queries. But in Octane, self::$cache never gets cleared. After a million requests, we'd have a million products in memory.
The fix required rethinking our caching strategy:
// Octane-compatible version
class ProductRepository
{
    protected $cache;

    public function __construct()
    {
        // Cache now lives on the instance, not in static state.
        // Note: this is only per-request if the repository is resolved
        // fresh for each request; binding it as a singleton would make
        // the cache per-worker again.
        $this->cache = [];
    }

    public function find($id)
    {
        if (!isset($this->cache[$id])) {
            $this->cache[$id] = Product::find($id);
        }

        return $this->cache[$id];
    }
}
But wait, now we're not caching across requests at all.
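If you do want to cache across requests in a long-lived worker, the key is to bound the cache so it cannot grow forever. Octane ships an `octane` cache store backed by Swoole tables (`Cache::store('octane')`) for exactly this; the class below is an invented sketch that shows the same bounded-cache idea in plain PHP, not what we necessarily shipped:

```php
<?php
// A bounded, per-worker cache: safe to keep across requests because
// it evicts the least recently used entry once it reaches $maxEntries.
// Illustrative sketch; Octane's Swoole-table cache store is the
// framework-native alternative.
class BoundedCache
{
    private array $entries = [];

    public function __construct(private int $maxEntries = 1000)
    {
    }

    public function remember(string $key, callable $resolve): mixed
    {
        if (array_key_exists($key, $this->entries)) {
            // Re-append to mark as recently used (simple LRU).
            $value = $this->entries[$key];
            unset($this->entries[$key]);

            return $this->entries[$key] = $value;
        }

        if (count($this->entries) >= $this->maxEntries) {
            // Evict the least recently used entry.
            array_shift($this->entries);
        }

        return $this->entries[$key] = $resolve();
    }

    public function count(): int
    {
        return count($this->entries);
    }
}
```

Even after a million requests for a million distinct product IDs, memory stays capped at `$maxEntries` values; staleness is bounded by eviction rather than by a process restart.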