Rise of Functions as a Service: How PHP Set the “Serverless” Stage 20 Years Ago

Keith Horwood
Sep 7, 2017 · 9 min read

“Serverless” is the flavour du jour in software development. AWS Lambda, Microsoft Azure Functions, Google Cloud Functions, and IBM OpenWhisk all represent tremendous big-name support. There are a lot of ways to get started with “serverless”: from the FaaS library we’re building at StdLib, to open source projects like the FaaSlang specification for serverless execution, the Serverless Framework, and Docker-based solutions like OpenFaaS. With all of this innovation, it’s worthwhile to take a trip down memory lane and look at how an unlikely candidate, PHP, brought us all the way to modern “serverless” architecture.

From 2000 to now — a brief timeline of the tools, languages, and adopting companies that brought us to “serverless”

The key innovations of the past twenty years have been focused on a single target: developer velocity. To understand how these technologies and companies are aligned around this concept, we’ll take a brief walk through the past twenty years of web-based software development. We’ll start in the early 2000s with PHP and trace a path through history: Rails and Django, then AWS, Heroku, GitHub, and Docker. We’ll take note of what we learned along the way and how we arrived at Functions as a Service and AWS Lambda.

Beginning with PHP: The “Simpler Times” Really Were Simpler

Timo Mihaljov, @noidi, summarizing the elegance of PHP development

In the days of peak PHP, building was easy. Apache mapped URLs like https://my-website.com/dir/my_script.php directly onto scripts in a dedicated folder on your server, like /home/www/dir/my_script.php, and executed them on every request.

For example, say you want to send emails to your customers. This is simple to implement: you’d create a file, send_emails.php, write the desired functionality, and upload that file somewhere in your /home/www/ directory, say /customers/admin/. The upload happened over FTP, either through a drag-and-drop client or the command line. Your Apache webserver would handle the rest: just visit https://my-website.com/customers/admin/send_emails.php in your browser, and voila! The code would execute.
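Here is a minimal sketch of what such a script might have looked like. The database credentials, the customers table, and the use of PHP’s built-in mail() are illustrative assumptions, not taken from any particular codebase:

```php
<?php
// send_emails.php — deployed by uploading this one file over FTP.
// Apache + mod_php runs it on every request to its URL: no daemon,
// no build step, no deploy pipeline.

// Nothing is pooled or kept in memory between requests, so each
// request opens its own database connection.
$db = new mysqli('localhost', 'admin', 'secret', 'shop');

$result = $db->query('SELECT email FROM customers WHERE subscribed = 1');

while ($row = $result->fetch_assoc()) {
    // mail() hands the message off to the server's local mail transfer agent.
    mail($row['email'], 'Our weekly update', 'Hello! Here is what is new this week.');
}

$db->close();
echo 'Done.';
```

The entire “deployment” is the file copy; the entire “runtime” is whatever Apache already has running.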

Of course, you would pay monthly for this Apache webserver, but if you paid a hosting provider, they took care of nearly everything. In this way, with PHP (and specifically the LAMP stack), the base shippable unit of code was the script. This really was simple: development was rapid and anybody could be a web developer. The barrier to entry was practically non-existent. There was even a term for those who capitalized on the rapid, iterative ease of development of this era: script kiddies. These were untrained developers who could quickly download and deploy code to do their bidding, sometimes maliciously, but above all quickly. There was something special about PHP that a huge chunk of the industry collectively forgot: it had established the ultimate developer experience paradigm. Developer velocity was demonstrably fast.

Why did the industry move on from PHP, and what was the problem with it? One word: scalability. It was expensive to vertically scale the machines running Apache servers, and time-consuming to horizontally scale them. We almost gave up on the greatest thing to happen to software engineering job accessibility and product creation in our lifetimes, all in pursuit of a single, infamous marketing term: webscale.

“Webscale”: The New World of Software

The dot-com boom and bust came and went, and engineering demand began to grow again. Though initially humbled by the bust of 2001, the internet was being adopted on a scale larger than ever thought possible. Applications got larger, teams got larger, and the average customer base for an application grew. Reuse of resources (like database connections) became an issue, and simple webservers like Apache were joined by application gateways and load balancers like nginx.

Django and Ruby on Rails were born in this period. Both Python and Ruby (the languages underlying these frameworks, respectively) are interpreted languages like PHP and can be executed as scripts, but in the name of resource pooling (like database connections), Django and Rails run as long-lived application servers to minimize the delivery time of web applications. The MVC paradigm they share reduces duplicated effort, and everything your application needs in order to execute stays in RAM, from templates to data models.

This was, of course, all in the name of performance. The base shippable unit of code changed from the script to the application. You can’t just drag and drop a new script into an application: applications hold state in memory, they run as daemons (never-ending execution), and their run-time optimizations (including resource pools) need to be torn down and re-initialized on every deployment.

The Problems With “Webscale” Engineering

“Webscale” engineering spurred the birth and growth of exciting new technologies. There was a concerted effort by software engineers, globally, to minimize compute resources while maximizing customer retention with fast load times. However, this came at a cost. The simplicity of drag-and-drop deployment turned into a nightmare of managing VMs running nginx proxies while load-balancing Django or Rails instances. Your three-person polyglot engineering team now required a one-, then two-, then ten-person developer operations team (we just called them IT and network engineers at the time). Every company had its own rack.

The market responded quickly, arguably presciently. Rails appeared in 2004; Django and Git followed in 2005. AWS launched in 2006 as a way to manage virtual fleets of servers. Heroku launched in 2007, atop AWS, to simplify developer workflows. GitHub was founded in 2008 to promote social sharing of code. There was an unmistakable market opportunity in maximizing developer experience and minimizing investment in operations, but we only ever seemed to get it half-right.

“Webscale” 2.0: Containers

The issues with working with software at scale didn’t stop. Though the base unit of shippable code was firmly set at the application layer, the migration to “the cloud” was not obvious to most players. There was a huge issue with system consistency: whenever you horizontally scaled an application, if you wanted your web software to run predictably and deterministically, you needed to install the exact same software on the exact same operating system. Virtual machines helped you scale horizontally more efficiently, but updating them was a nightmare. An entire operating system image with the pre-configured software, sometimes gigabytes in size, had to be distributed across your whole network simultaneously.

The answer was containers, which keep the application as the base shippable unit and provide a minimal yet consistent operating-system-like interface to it. The platform-as-a-service company dotCloud created Docker (though not the concept of containers), and following its release in 2013, Docker spread like wildfire, especially in on-premise development.

Suddenly, developers could use whatever software they wanted and could put it wherever they wanted it! We could build software using whatever application servers we pleased, behind whatever proxies or gateways we liked the most, attached to the most experimental database we could get our hands on: it would work anywhere. The future had so many options!

And what did we do with Docker? Mostly, we forgot about Django and Rails and built amazingly complex pipelines to deploy JavaScript applications, all in the name of anticipated scale. More importantly, AWS was taking note, and Node.js had started to make its way into the enterprise.

“Serverless” 1.0: Back to PHP Roots

It didn’t take long for AWS to enter the ring with a new product. AWS Lambda was formally offered as a product in 2014, with a functions-first development paradigm, as a way to respond to AWS events within its ecosystem without having to worry about scale.

AWS Lambda is based on the same principles as containers, extended one abstraction layer up the chain: what if we treated the function, instead of the application, as the base shippable unit? This allows some pretty neat optimizations: instead of a standardized OS-like environment or interface, we just need a standardized runtime. Node.js is a particularly interesting target: thanks to non-blocking IO, multiple functions can scale vertically on a single runtime instance, further optimizing compute resources at scale.

Yes, there are restrictions: we can’t use some of our application frameworks. However, intentionally or not, AWS Lambda brought us back to the days of PHP, with the function taking over the role the script once played as the base shippable unit. The principles are identical: stateless, immutable execution. The limitations are the same, too, chiefly that resource pooling is difficult. Every request, for example, may need a new database connection.
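To make the parallel concrete, here is a minimal sketch of a handler function as the shippable unit. It is written in PHP purely to mirror the send_emails.php example above (Lambda’s runtimes at the time were actually Node.js, Python, and Java), and the event shape, table name, and credentials are illustrative assumptions:

```php
<?php
// A hypothetical "function as the base shippable unit", in PHP only to echo the
// earlier script example; Lambda itself shipped Node.js, Python, and Java runtimes.
function handle(array $event): array
{
    // No long-lived application server, no connection pool: every invocation
    // pays for its own connection, exactly like a PHP script serving one request.
    $db = new mysqli('db.example.com', 'admin', 'secret', 'shop');

    $stmt = $db->prepare('INSERT INTO signups (email) VALUES (?)');
    $stmt->bind_param('s', $event['email']);
    $stmt->execute();
    $db->close();

    // Return a value to the platform instead of writing an HTTP response yourself.
    return ['statusCode' => 200, 'body' => 'ok'];
}
```

You ship the function; the platform owns routing, scaling, and the runtime, much as the hosting provider once owned Apache.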

The difference is that it has now been over a decade since Django and Rails were introduced. AWS Lambda (and similar offerings from Microsoft, Google and IBM) represents the culmination of over a decade of working and re-working architecture and developer experience to allow for massive scale. The cost of compute has fallen drastically, heading towards commoditization. The speed of our hardware and the optimization of our software mean these execution paths are tenable replacements for old-school servers. In effect, as silly as it sounds, “serverless” simply means we’ve finally brought the simplicity of PHP development pipelines to massive scale, with a new face: mostly Node.js and Python.

Conclusions: What’s Next?

The path to “serverless” (or perhaps, more aptly, as many are beginning to note, functions as a service) has been arduous. We’ve seen a massive graph of technologies spring forth to solve the challenges of building software at scale, everything from deployment pipelines to team management. Each new technology came with its own costs and benefits. The application as the base shippable unit inadvertently (and against the suggestions of its purveyors) created huge teams and slow iteration cycles in many organizations.

The complexity created in this timeframe was unimaginable. Some organizations will skip containers completely and move straight to functions; iRobot is a perfect example. We see the result of this complexity in wages: more technological surface area for tackling issues of scale means more demand for engineers, which leads to increased wages as supply fails to keep up.

The functions as a service trend has put tremendous pressure on the industry towards a global reset. There are branches of the technology graph created in the last decade that will wither and become defunct. The activation energy, or barrier to entry, of creating backend applications, and of the toolchain required to iterate and execute as a team, has nearly dropped back to zero. We’re going to see a re-emergence of truly full-stack developers, born out of both newbie and frontend developers who have been intimidated by the tedium and complexity of backends. If you can write a function, you can build a product and watch it grow, no further maintenance required.

Of course, this won’t stop wage progression or demand for engineers. Engineers love to build; we do so spontaneously and oftentimes for free. The plethora of offerings popping up around AWS Lambda and its ilk are evidence of that. The industry will tend to organize around optimal time-to-delivery and iteration speed, or developer velocity. Functions-first development is the natural conclusion of a race up and back down the hill of scaling. What’s exciting is that we can introduce children, and reintroduce whole classes of developers, to professional backend software development at scale. The revolution towards functions as a service will create new economic opportunities for tens, possibly hundreds, of millions of people worldwide through simpler product delivery cycles.

How Do I Get Started with “Serverless” Architecture?

There are plenty of tools to get started with. Our goal at StdLib is to deliver the greatest developer velocity of any platform: we want you to go from zero to one, building your first shippable code using functions-first development in a production-ready environment, in the shortest amount of time. We are actively working with Slack and our friends at AWS to help deliver this reality. You can get started building functional APIs with our FaaSlang specification, or jump into Slack Apps, Twilio Hubs and Stripe Stores in minutes.

For those who want to manage their AWS stack alongside their products, the Serverless Framework is a great option as well. Plug in your AWS credentials and build event-driven architectures seamlessly.

Finally, the on-prem player in the space is OpenFaaS, which lets you manage your functions in-house on top of Docker. While the barrier to entry is a bit higher, if you’re a high-level operations manager or want to organize fleets of Raspberry Pis using functions-first development, the one-time setup cost can be worth it to bring FaaS on-prem.

Happy Building!

We’re in a very exciting transitional period in software development. Most importantly: have fun with it. If you’re interested in more of my musings on this, you can follow me on Twitter, @keithwhor. I also recommend following our company, StdLib, on Twitter, @StdLibHQ, and feel free to reach out to me directly any time with thoughts, questions or anything that comes to mind: keith@stdlib.com.

Keith Horwood is the Founder and CEO of StdLib, a FaaS library focused on providing a comprehensive build-deploy-discover-integrate toolchain for rapid, production-ready, functions-first development. He is also the author of the popular Node.js framework Nodal, and a bit of a bioinformatics geek.

Special thanks to Ben Kehoe, Brian LeRoux, and our fantastic team for their help editing!
