
Yes, each request is handled by its own process. This might sound weird if you're used to other languages, but it's how every language used to work in the good ol' CGI days. PHP doesn't break backward compatibility easily, and I don't think it will ever break this one. Besides, once you get the hang of it, PHP's execution model is highly intuitive and beginner-friendly. You simply don't have to worry about a whole class of concurrency-related problems. Those problems are solved in C, not PHP.

Nowadays everyone uses PHP-FPM (again, written in C), which manages a pool of processes. Once a process is done serving a request, it is cleaned up and becomes available to serve another one. You can tweak the number of processes to control how much concurrency you want, or leave it to PHP-FPM to decide on its own. The process pool is much more efficient than the CGI method of setting up and tearing down a process for every request, while preserving much of the conceptual simplicity.
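For reference, that tuning lives in the pool config file (the path and pool name below are typical Debian/Ubuntu defaults, not universal); a minimal sketch using the standard `pm.*` directives:

```ini
; /etc/php/8.2/fpm/pool.d/www.conf  (path varies by distro and PHP version)
[www]
pm = dynamic             ; let FPM grow and shrink the pool on its own
pm.max_children = 20     ; hard cap on concurrent worker processes
pm.start_servers = 4     ; workers created at startup
pm.min_spare_servers = 2 ; keep at least this many idle workers around
pm.max_spare_servers = 6 ; kill idle workers beyond this count
pm.max_requests = 500    ; recycle a worker after this many requests
```

Setting `pm = static` instead pins the pool at exactly `pm.max_children` workers, which trades memory for predictable latency under steady load.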

PHP has had a built-in HTTP server since 5.4, but few people use it in production because PHP-FPM is so stable and performant.



The PHP docs advise that the built-in server should be used only for development: https://www.php.net/manual/en/features.commandline.webserver.... But PHP-FPM is the most widely used nowadays, for sure.


I find the built-in web server very useful for running tests. Instead of setting up Apache or nginx on every CI build, you just fire up the built-in web server and point your tests at it.
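A minimal sketch of that CI setup, assuming a `public/` docroot, a PHPUnit suite, and a hypothetical `/health` endpoint to poll for readiness (all placeholders for whatever your project actually has):

```shell
# Start PHP's built-in server in the background, serving ./public
php -S 127.0.0.1:8080 -t public/ &
SERVER_PID=$!

# Wait until it accepts connections before running the tests
until curl -sf http://127.0.0.1:8080/health >/dev/null; do
  sleep 0.2
done

# Point the suite at the local server, then tear it down
BASE_URL=http://127.0.0.1:8080 vendor/bin/phpunit
kill "$SERVER_PID"
```

This needs nothing on the CI image beyond the PHP CLI itself, which is the whole appeal over provisioning Apache or nginx per build.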


I use it for small scripted jobs on my own machine that have to go through a proxy: I start the built-in server on the proxy root and point the script at the localhost address.



