A PHP development environment your team will love

In this introductory guide to Docker fundamentals, you will learn how to build a custom PHP 7.4 development environment, fully featured with powerful tools to boost your team's productivity and overall development experience.


These days, it's quite common to see startups performing manual setup of development environments. The result is often a fragile mix of operating systems, libraries, and language versions that makes it difficult to trace the source of many bugs.
On other occasions, experienced IT teams are responsible for building these environments and protecting them from inexperienced developers. As a side effect, development teams hit blockers when they try to understand the underlying environment or experiment with new ideas.
In both scenarios, evolving software development practices to increase business outcomes is impractical and discouraged.

In this article, I'll show you how to use Docker to build a portable development environment for PHP 7.4 applications, complete with a PHP debugger, profiler, and tracing tools, where developers can experiment aggressively in fast, sandboxed environments.
If something breaks, a developer can replace their broken environment with a new one in seconds.

Long gone are the days when containerized environments were a practice of a few tech giants, or an unstable technology promoted by early adopters.
With modern containerization and orchestration tools like Docker, docker-compose, Docker Swarm, and Kubernetes, plus modern cloud infrastructure services, it is becoming a no-brainer for an increasing number of startups to adopt some of these technologies, providing better experiences for their customers and cost-effective processes for their teams.


To follow this tutorial you need to install the Docker engine, and GNU Make.

The easiest way to set up Docker on Windows and macOS is by getting Docker Desktop.
GNU Make is usually present on most popular Linux distros as well as macOS.
Linux users can get both via their default package manager. Two popular package managers for other environments are brew (macOS) and chocolatey (Windows).

Next, create a DockerHub account, and sign in from the command-line.

docker login

Basic Docker concepts

With Docker, you can pack an application together with its environment, build it once, and deploy it automatically to any number of computers running a Docker engine.
Containers share the kernel of their host and usually run a single process, bundled with the minimum operating system files and environment variables required. As a result, containers are lighter than virtual machines and allow fine-grained resource administration.
Instead of long-lasting dedicated servers, containerized infrastructure can be ephemeral and disposable, opening the door to a new kind of infrastructure: automated, resilient, able to adapt to varying demands and recover from errors.


A Docker image is the blueprint from which containers are created; a container is a running instance of an image.
For those coming from object-oriented programming, an image can be seen as a class and a container as an instance. Custom images extend existing images and can be shared with others by pushing them to Docker repositories.

Windows and Linux images

Docker can run two types of images: Linux or Windows images.
The difference is the kernel they need to interact with to access hardware resources. The machine that provides the kernel is called the host.
It's possible to run Windows images on a computer running Linux, and vice versa, by running a Docker engine on top of a virtual machine.
Both the Windows and macOS versions of Docker run Linux images by default, on top of a lightweight Linux VM. However, Docker for Windows can also run Windows images natively.

Image layers

You can copy files into an image from your host, add files from a URL, or create them by running shell commands. Each of those operations adds a new layer to a stack that composes the resulting image. Each layer is like a Git commit, encapsulating the changes made to the image.
Every layer is identified by a unique hash ID, and the layer at the top of the stack identifies the image, just like a Git branch is identified by its last commit.
You can create new containers from any layer in the stack.
When a new container is created from a given image tag or layer ID, it is built by stacking every layer together into a cohesive environment.
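The layer stack can be sketched with a minimal, hypothetical Dockerfile; each instruction below produces one layer on top of the base image's layers:

```dockerfile
# hypothetical example: each instruction adds one layer
FROM php:7.4.2-apache
# layer: a file created by a shell command
RUN echo "hello" > /tmp/hello.txt
# layer: files copied from the build context
COPY ./src /var/www/html
```

You can inspect the resulting stack, one hash ID per layer with its size, by running docker history against the built image.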

Immutable vs mutable images

Usually, managing state in your applications is harder than avoiding it, and something similar happens with infrastructure-as-code: deployments of immutable images are easier to implement.

Immutability happens when an image is built with every file it needs, and you don't modify its state once deployed.
Instead of modifying the image while it's running, once a new version of the code is available for deployment, you build a new image, deploy it, and remove the old one.
Overall, immutable images allow you to implement simpler deployment processes, at the expense of longer build times.

Docker networking

Docker relies on the Container Network Model (CNM), a pluggable open-source architecture, to provide networking capabilities.
CNM is extended to provide different network topologies. Docker supports the following implementations out of the box:

  • Linux network drivers
    • bridge
    • overlay
    • macvlan
  • Windows network drivers
    • nat
    • overlay
    • transparent
    • l2bridge

Many other implementations, maintained by third parties, exist as well.

Bridge networks are the default network type when attaching a Linux container port to a host port, as you will see later. Their Windows counterpart is NAT.

Hello World

This website was built inside a Node.js Docker image.
That means you can run the same version available at https://guille.cloud on your localhost, without caring or knowing what's under the hood.

Go ahead and try it
docker run --rm \
-p 8080:8000 \
--name guillecloud \
guillermomaschwitz/blog:2-production
An exact copy of this website should now be available at http://localhost:8080.

It should work out of the box, because Docker did all the heavy lifting for you:

  1. Docker attempts to find a local image tagged as guillermomaschwitz/blog:2-production
  2. If the image is not found locally, Docker attempts to pull it from its DockerHub repository.
  3. A new container named guillecloud is started from that image.
  4. Docker binds localhost's port 8080 to the container's port 8000.
  5. Logs are streamed to the screen through STDOUT and/or STDERR
List every image pulled by docker
docker image ls
guillermomaschwitz/blog 2 14699ea162d0 2 hours ago 885MB
guillermomaschwitz/blog 2-production 14699ea162d0 2 hours ago 885MB
node 13.1.0-alpine f20a6d8b6721 3 months ago 105MB

The image with ID 14699ea162d0 belongs to the repository guillermomaschwitz/blog and has two different tags: 2 and 2-production. Its base image is f20a6d8b6721, from the node repository, with the tag 13.1.0-alpine.

Check if your container is running
docker container ls
7ecd413bdf91 guillermomaschwitz/blog:2-production "docker-entrypoint.s…" 23 minutes ago Up 23 minutes>8000/tcp guillecloud

If everything went well, you should see a similar list in your terminal: a running container with a unique ID and NAME.

To list stopped containers as well, use the -a flag.

Run a command inside the container

You can run bash and play inside your container

docker exec -it guillecloud bash
bash-5.0$ whoami
bash-5.0$ pwd
bash-5.0$ echo "I am inside a container!"
I am inside a container!
bash-5.0$ exit
guille@localhost %
Stop the container
docker stop guillecloud

Because the container was started with the --rm flag, docker destroys the container once stopped.

Remove both images
docker image rm 14699ea162d0 f20a6d8b6721

The big picture

At Imaginary Startup, a group of engineers works together in a tight loop with other teams, deploying several changes a day to their product. Today, a new service for their customers is ready to go live, requiring the implementation of new endpoints in their API with very specific hardware requirements. The team decides to split a single cluster into two to run the API. The traffic will be routed by URL, with a load balancer operating at layer 7 of the OSI model.

Because they were few, they chose not long ago to start declaring their infrastructure as code, and they decided to use a monorepo to keep their deployments more manageable.

The day has come to deliver the service their customers expect, and they have been working out a semi-automated deployment process.

Someone capable of running deployments drops a few commands at a deployment environment.

git checkout master
git pull origin master
make build # create 1 image per runtime environment
make test # run automated tests on the final runtime environment
git tag -a v3.1 # once all tests pass, a new tag is added to the app and infrastructure code
git push origin --tags # update upstream tags
make share-images-with-team # upload every new image to a privately shared docker image repository
make deploy # roll out infrastructure changes

In production, next to the old API, two new clusters are deployed. Once the new infrastructure is ready and the new version of the API is running on it, the load balancer redirects the traffic to the new infrastructure, and the old one is stopped an hour later.

This is called a blue/green deployment, and it should provide an uninterrupted experience to their customers while deployments happen in the background.

Minutes after the last deployment, another developer pulls the code from the master branch, and updates her development environment: A development docker container, based on the production one, with different settings, plus extra tools to ease debugging and development.

make stop # stop the development environment
git pull origin master # download the new version of the app and its environments
make start # start the development environment

Project codebase

The code samples are based on a containerized development environment I'm maintaining at its Github repository.

The project contains a few files and folders. The most relevant are briefly explained below:

Dockerfile
This file is the "Rosetta stone" of this tutorial; the recipe used to build a docker environment.

config/php-dev.ini
Settings to make PHP work like it should in a development environment: exposing errors to the user and enabling powerful debugging tools.

config/apache-vhost.conf and config/apache-ports.conf
Apache configuration files.

Makefile
A high-level command-line interface other developers can use to interact with the development environment while they get familiar with Docker.
This is also the interface external tools should rely on to interact with the Docker environment.

README.md
Every software project maintained by more than a single person should have a clear and concise README file in the root folder.
I won't cover its content here, but you can read an example at its GitHub repository.
This file should contain the minimal amount of information needed to introduce others to every basic development workflow.

Other files in the project
  • ./src
    • PHP code that shouldn't be exposed to the public
  • ./src/public
    • Files you want to expose to the public, like a front controller, static assets, etc
  • ./data
    • Folder used by PHP's tracer and profiler to output reports.
  • .gitignore
    • Flags files to exclude from the Git repo, like credentials, secrets, vendor files, build artifacts, etc.
  • .dockerignore
    • Flags files to exclude from Docker's build context.
  • .env
    • A place to initialize environment variables consumed by the containerized environment

Deep dive

config/php-dev.ini
A development environment should help developers understand their codebase and catch errors as early as possible. The following settings should do the job.

# General
error_reporting = E_ALL
display_startup_errors = On
display_errors = On

Developers should have complete visibility over any error or exception that might arise at run time.

  • error_reporting = E_ALL enables output of every message sent to PHP error logs
  • display_startup_errors = On enables reporting of PHP's startup errors
  • display_errors = On prints errors to the screen as part of the output

Heads up! "host.docker.internal" is a domain pointing to the Docker host, but it's provided by Docker Desktop and is not a native Docker feature. However, you can register the domain with the IP of the Docker host in /etc/hosts at run time with docker exec.

PHP Debugger

Using a PHP debugger will make a huge difference in your productivity as a PHP developer.
If you are debugging issues with var_dump() or print_r() you should really try this approach instead.

This debugger isn't suitable for shared development servers or servers exposed to the wild, but it's very easy to make it work from a locally running Docker container.

With these settings, an HTTP request, containing the parameter XDEBUG_SESSION_START will start a debugging session.
You can use Chrome's XDebug Helper to help you attach that param to any request.

Steps of a debugging session
  1. You must tell your editor to listen on port 9001 for incoming connections. If the editor doesn't support that out of the box, look for a plugin.
  2. An HTTP request containing the param XDEBUG_SESSION_START is sent to the webserver
  3. XDebug pauses the flow of your program, attempts to connect back to your computer at localhost:9001, and waits for your orders.
  4. Your editor accepts the connection
  5. If there is a breakpoint in your code, the debugger informs your IDE of the current context of your running app, including the value of every variable, constant, and object in scope, and every frame of the call stack of your PHP program.
    You can use the debugger like you would use a JavaScript debugger in your browser.
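The handshake above depends on a few XDebug settings in config/php-dev.ini. A minimal sketch for XDebug 2.x, assuming the port 9001 and the host.docker.internal domain mentioned earlier:

```ini
; XDebug 2.x remote debugging (a sketch; adjust to your setup)
xdebug.remote_enable = 1
; connect back to the IDE running on the docker host
xdebug.remote_host = host.docker.internal
xdebug.remote_port = 9001
```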


Using the PHP debugger increases your productivity!

A few useful resources to set up an IDE or editor to debug requests:

This is my VSCode configuration for Felix Becker's PHP Debug plugin, in case you are using VSCode:

"version": "0.2.0",
"configurations": [
"name": "My project",
"type": "php",
"request": "launch",
"port": 9001,
"pathMappings": {
"/var/www": "${workspaceFolder}/src"
PHP Profiler

XDebug's PHP profiler, enabled by the HTTP parameter XDEBUG_PROFILE, will produce a complete report of the messages passed between functions and/or objects in your PHP programs during a specific request. You can then open those reports with KCacheGrind, QCacheGrind, WinCacheGrind, or WebGrind.
This is useful when you want to understand what happens in a new project, or how a framework works under the hood.
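For reference, the trigger behavior can be configured with settings along these lines (a sketch for XDebug 2.x; the output folder is an assumption based on the data/ mapping used later):

```ini
; profile only when XDEBUG_PROFILE is present in the request
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = /tmp/my-project
```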

PHP Tracer

XDebug tracer is a powerful tool that gives you the ability to analyze your PHP code, detect bottlenecks, and understand memory consumption at every step in a given flow.

Reports are human-readable and are stored in the container's folder /tmp/mydata.
Later I'll show you how to map that folder to a local folder on your host, to read those reports more easily.

To enable the tracer, add the HTTP parameter XDEBUG_TRACE to any HTTP request. Again, you can use Chrome's XDebug Helper to help you attach that param to any request.
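The trigger works like the profiler's: a sketch of the relevant XDebug 2.x settings, using the container folder mentioned above:

```ini
; trace only when XDEBUG_TRACE is present in the request
xdebug.trace_enable_trigger = 1
xdebug.trace_output_dir = /tmp/mydata
```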

PHP OPcache

OPcache is a PHP extension that stores precompiled PHP scripts as bytecode in shared memory, improving PHP's performance because the interpreter doesn't have to parse the same files on each request.
The following settings aim to make the development experience faster.
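A plausible sketch of such settings (these are real OPcache directives, but the values here are my assumptions for development use):

```ini
; keep OPcache on, but recheck files for changes on every request,
; so edits to the mounted source code are picked up immediately
opcache.enable = 1
opcache.validate_timestamps = 1
opcache.revalidate_freq = 0
```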

Dockerfile
Docker uses Dockerfiles as recipes to create images. Let's explore their syntax and a few useful instructions.

FROM php:7.4.2-apache AS dev-image
ENV COMPOSER_HOME=/usr/local/composer/global
RUN apt-get install -yq --no-install-recommends \
dialog apt-utils openssl ssl-cert \
curl git unzip libzip-dev
RUN chown -R www-data:www-data /usr/local/composer/global
Use an official base image

FROM tells Docker which base image to use for subsequent instructions. Because of this, it's the first instruction in the Dockerfile.
Every image is tagged with the following syntax:

<name>:[version]

Docker has a vibrant ecosystem of images you can use. The most popular place to look is DockerHub, with thousands of publicly available repositories built by DevOps practitioners around the world.

The image you choose must exist, and you need to have access to it.
Use the official PHP image bundled with Apache, whose name is php and whose version is 7.4.2-apache. If you want to check its source code, read its Dockerfile on GitHub.
To protect your image from upstream's breaking changes, always use explicit version tags. This principle applies to every external dependency you use in your projects, from Docker images to composer and npm packages.
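For example, a hypothetical contrast between a floating and a pinned base image:

```dockerfile
# risky: resolves to whatever "latest" happens to be today
FROM php

# safer: an explicit, reproducible version
FROM php:7.4.2-apache
```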

For improved security and stability, your custom images should rely on official images: a curated list of images maintained under stricter quality standards.

Choose an alias for your image

An alias helps you reference a specific image in your build process. In this case, the alias is dev-image.

Many image definitions can be written in the same Dockerfile, and the --target flag lets you build just one of them:

docker build \
-f ./Dockerfile \
-t guillermomaschwitz/blog:2-production \
--target blog-production ./

docker build \
-f ./Dockerfile \
-t guillermomaschwitz/blog:2-development \
--target blog-development ./
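The two --target values above suggest a Dockerfile laid out with two aliased stages, along these lines (a sketch; the stage contents are my assumptions):

```dockerfile
# development stage: debugging tools included
FROM php:7.4.2-apache AS blog-development
RUN pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug

# production stage: lean, no development extensions
FROM php:7.4.2-apache AS blog-production
COPY ./src /var/www/html
```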

Configurable images

Containers are atomic units that must interact with varying infrastructure: a cloud service, a localhost, a dedicated server, etc. To provide the required flexibility, configuration settings must be initialized at the highest possible level, which is the environment itself.

Environment variables are a natural fit for this.

The Dockerfile syntax provides the ENV instruction to define environment variables that are hardcoded at build time and available at run time.
Any environment variable defined this way can be overridden at run time, using the -e VAR_NAME=VALUE parameter when running the container.

docker run --rm \
-e HTTPS_PORT=4430 \
-e HTTP_PORT=8000 \
username/my-project:1.0

But how can we initialize variables at build time, to simplify the way we run the container, and at the same time make our build processes more maintainable?

Build Arguments

Docker provides a way of configuring variables that are available only at build time: the ARG instruction.
Variables declared with ARG can be initialized from outside the Dockerfile, at the moment of building an image.

docker build \
--build-arg COMPOSER_HOME=/usr/local/composer/global \
--build-arg HOME=/usr/local/app \
--build-arg HTTP_PORT=8080 \
--build-arg HTTPS_PORT=8081 \
--build-arg PROJECT_NAME=my-project \
--build-arg APP_ROOT_DIR=/usr/local/app \
--build-arg APP_PUBLIC_DIR=/usr/local/app/public \
-t guillermomaschwitz/awesome-php:7.4-development .
Configure environment variables at build time

You can wire environment variables to build arguments to get default environment variables that are configurable at build time.

The way to do it is to initialize build arguments from your build scripts and pass their values to environment variables.

Remember, those environment variables can be overridden later by passing new values to docker run.
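The wiring described above boils down to two lines per variable, e.g.:

```dockerfile
# build-time argument, with a default in case the build script omits it
ARG HTTP_PORT=8080
# run-time environment variable initialized from the build argument;
# still overridable later with `docker run -e HTTP_PORT=...`
ENV HTTP_PORT=${HTTP_PORT}
```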

Choose the least-privileged user

To protect your infrastructure from errors, malicious code in vendor libraries, and other attack vectors, always set a less privileged default user to run your containers.

Set that USER once you finish performing any other task for which you need more privileges.
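In Dockerfile terms, that ordering looks like this (a sketch):

```dockerfile
# privileged steps first: installing packages requires root
RUN apt-get update && apt-get install -yq --no-install-recommends curl
# then drop privileges; every later RUN, plus the container's
# main process, runs as www-data
USER www-data
```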

Linux packages

You need to install a few extra Linux packages, not included in the base image, to implement several valuable features for this image.
The RUN instruction lets you pass commands to the default shell provided by the base image.

Install PHP extensions

Install XDebug, a PHP extension that packs handy features to make your life easier as a developer, and zip, an extension required by composer.
As documented here, use pecl and docker-php-ext-enable instead of apt-get or other methods.

SSL Setup

To provide HTTPS support for your project, set up a self-signed certificate.

The easiest way to do it is via the ssl-cert Linux package, which provides the make-ssl-cert command to generate self-signed SSL certificates.

Set the ownership of the generated SSL certs to the user that will be in charge of running the image's main process: the Apache server. Finally, enable Apache's SSL module.

Install composer

Composer is the de-facto standard for PHP vendor management.

Install composer globally, with the provided installer from their website.

The global composer folder is the folder where globally available vendors are stored.

www-data is the default user responsible for running any command at run time. Assign it as the owner of that folder, so composer doesn't fail to install vendors with the --global flag.

Create a folder to store XDebug reports

PHP's profiler and tracer tools were configured in config/php-dev.ini to dump their reports as files.

Create a folder for those files, and assign its ownership to the container's default user.

Optimize the size of your image

Each time you use RUN, a new image layer is created to isolate the changes made to the file system.

You can use RUN as many times as you want, but keep in mind that each new layer increases the resulting image's size.

You can chain shell commands together to reduce the number of layers that compound an image.

Unless you want to cache specific steps of your build process, it's a good practice to replace consecutive RUN lines with a single RUN of chained commands.

Optimize size by removing temporary files

This trick would not work if the removal were executed in a different layer: the previous layers would already contain those temporary files, and removing them from a later layer would not decrease the size of the resulting image.

But since all the chained commands are executed in a single layer, you can remove the temporary files there, decreasing the size of the specific image layer created by RUN.
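Putting the last two points together, compare the layered and the chained versions (a sketch):

```dockerfile
# three layers; the files deleted in the last RUN
# still live inside the first two layers
RUN apt-get update
RUN apt-get install -yq --no-install-recommends curl
RUN rm -rf /var/lib/apt/lists/*

# one layer; the apt cache never makes it into the image
RUN apt-get update \
 && apt-get install -yq --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```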

Pack custom Apache and PHP settings

Use COPY to copy your own PHP and Apache configuration files into your image, overriding default settings and enabling features such as HTTPS, error reporting, OPcache settings, XDebug tools, etc.

Pack your source code

COPY your local folder src/ into your image, to run your project inside the containerized environment, and assign the ownership of those files to the user www-data.

--chown=user:group sets the ownership of those files.

Keep in mind the instruction COPY adds a new layer to the image.

To speed up your builds, group layers that change most often at the bottom of your Dockerfile.

Usually, the layer that changes most often in the life of a Docker image is the application's source code; infrastructure changes less often.

To speed up your image's build process, copy your source code as late as possible.

Change the current folder

As if you were typing cd in a console, WORKDIR changes the current folder of your image.

This provides the current context at build time, and it also provides a convenient working directory for commands executed inside a running container.
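For instance (a sketch; the folder and the composer step are illustrative assumptions):

```dockerfile
# subsequent RUN/COPY instructions, and `docker exec` sessions,
# start from this folder
WORKDIR /usr/local/app
# this now runs inside /usr/local/app
RUN composer install --no-interaction
```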

Expose ports

EXPOSE opens ports for HTTP and HTTPS.

Since you set www-data as the non-root user running your image, you have to choose port numbers higher than 1024 to expose in your container.
You can map those ports to whatever ports you like on your Docker host.
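For this image, that means (using the ports configured throughout this tutorial):

```dockerfile
# unprivileged ports (>1024), since the container runs as www-data
EXPOSE 8080 8081
```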

And that's it!

You can now use this Dockerfile to build your custom image, every time your source code or development infrastructure changes:

FROM php:7.4.2-apache AS dev-image
# arguments available at build time
ARG COMPOSER_HOME=/usr/local/composer/global
ARG PROJECT_NAME=my-project
ARG APP_ROOT_DIR=/usr/local/app
ARG HTTP_PORT=8080
ARG HTTPS_PORT=8081
# environment variables available at build time and run time
ENV COMPOSER_HOME=${COMPOSER_HOME}
# Install debian packages
RUN apt-get update \
&& apt-get install -yq --no-install-recommends \
dialog apt-utils openssl ssl-cert \
curl git unzip libzip-dev \
# install php extensions
&& pecl install xdebug-2.9.2 zip \
&& docker-php-ext-enable xdebug opcache zip \
# set up self signed cert (the easy way)
&& make-ssl-cert generate-default-snakeoil --force-overwrite \
&& chown -R www-data:www-data /etc/ssl/certs /etc/ssl/private \
# enable SSL apache module
&& a2enmod ssl \
# Install PHP Composer package manager
&& curl -sS https://getcomposer.org/installer | \
php -- --install-dir=/usr/bin/ --filename=composer \
&& mkdir -p ${COMPOSER_HOME} \
&& chown -R www-data:www-data ${COMPOSER_HOME} \
# Create data folders for XDebug
# tracing and profiling tools
# and set ownership to www-data
&& mkdir -p /tmp/${PROJECT_NAME} \
&& chown -R www-data:www-data /tmp/${PROJECT_NAME} \
# cleaning...
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# copy development settings for PHP
COPY ./config/php-dev.ini /usr/local/etc/php/conf.d/
# override default apache configuration
COPY ./config/apache-vhost.conf /etc/apache2/sites-enabled/000-default.conf
COPY ./config/apache-ports.conf /etc/apache2/ports.conf
# copy the PHP source code inside your image
COPY --chown=www-data:www-data ./src ${APP_ROOT_DIR}
# change the context
WORKDIR ${APP_ROOT_DIR}
# expose HTTP and HTTPS ports
EXPOSE ${HTTP_PORT} ${HTTPS_PORT}
# set a less privileged default user to run this image
USER www-data

Check the Dockerfile reference for a deep dive into every option available to write your own images, or customize this example.

Use your image

Previous steps

Before building the environment, allow Docker to read your project's folder.
At least in Docker Desktop for macOS, it goes like this:

  1. Go to Docker Desktop's preferences
  2. Go to File Sharing, under Resources section
  3. Add your folder and apply changes

Avoid symlinked folders, because Docker doesn't handle them well.


Build your image
docker build \
-t username/my-project:1.0 \
--target dev-image .

This builds and tags your image as username/my-project:1.0.
The last parameter, ".", is the file-system context at your Docker host. Any file you want to COPY into the image must be inside the given context. You should allow Docker to access your project's folder first; see the "File Sharing" settings section in Docker Desktop.

If everything went well, you should see the "my-project" custom image on that list.

docker image ls
username/my-project 1.0 14699ea162d0 2 minutes ago 454MB
Test it as a standalone container

Test your image before sharing it.
Do not mount any volume, because you want to test the image itself as an immutable artifact.

docker run --rm \
-p 8080:8080/tcp \
-p 8081:8081/tcp \
--name my-immutable-container \
username/my-project:1.0
Run it as a development environment

How can you use this image as a development environment?
Simply by mounting your source code inside the environment.

Let's run a new container with your source code mounted into it. Also mount a local "data/" folder as the folder where the XDebug profiler and tracer dump their files inside the container, so you can read them from your host.
Use the -v host-folder:container-folder parameter for each volume.
You should also use --rm to tell Docker to discard the container after using it.

docker run --rm \
-p 8080:8080/tcp \
-p 8081:8081/tcp \
-v "$PWD"/src:/usr/local/project \
-v "$PWD"/data:/tmp/my-project \
--name my-container \
username/my-project:1.0
Share your image

To share it with other developers, push the image to your Dockerhub repo.

docker login
docker push username/my-project

Wrapping every workflow into a cohesive interface

Just like you would define an interface, or a group of public methods in a class, to present it as simply as possible to others, you should implement a simple and concise command-line interface, so devs and external tools can interact with the environment.
Let's see how to define a high-level command-line interface by writing a Makefile: a pretty standard approach for IT teams that helps integrate operations and development processes.
GNU Make is a good fit for this task, because it allows you to write every high-level command in a single file, and it's a basic development tool available on most operating systems.

These are a few commands I like to define for projects in general:

  • make build
  • make clean
  • make all
  • make start
  • make stop
  • make test
# Bake a new image
build:
	docker build -t username/my-project:1.0 \
	--build-arg HTTP_PORT=8080 \
	--build-arg HTTPS_PORT=8081 \
	--build-arg PROJECT_NAME=my-project \
	--build-arg APP_ROOT_DIR=/usr/local/project \
	--target dev-image .

start:
	docker run --rm \
	-p 8080:8080/tcp \
	-p 8081:8081/tcp \
	-v ${PWD}/src:/usr/local/project \
	-v ${PWD}/data:/tmp/my-project \
	--name my-container \
	username/my-project:1.0

exec:
	docker exec -it my-container sh -c "$$cmd"

stop:
	-docker container stop my-container
	-docker container rm my-container

clean:
	-docker container rm my-container
	-docker image rm username/my-project:1.0
make build

Build the development environment.
In the case of multiple environments, it should build every image required by any environment.

This command is used by the maintainers of the environment, and a CI/CD pipeline.

make start

Use this command to start a local development environment. It's usually used by developers involved in the project.

  • Docker starts the my-project:1.0 image from acme's docker repository
  • Maps ports to the developer's computer
  • Assigns a name to the container (useful to write other commands like "make stop")
  • Mounts the source code inside the container so the environment can reflect changes made to the code.
make exec

make exec is a helper command that executes any command you pass to it inside the development environment, avoiding bugs caused by environments that differ from production.

Commands you should run inside a container
composer init
composer install
composer update
composer require --dev phpunit/phpunit
vendor/bin/phpunit your-tests/*.php
npm install
npm update
npm audit
make stop

This command stops the container previously started with make start

make clean

This command removes any artifact created during the build by make build.

It's usually used by the maintainers of the environment and a CI/CD pipeline.

It's important to implement a common set of commands across projects, to ease understanding and integration with automation tools. I like to use commands like build, clean, start, stop, deploy, test, and all, because they are concise, self-descriptive, and pretty common across other projects I've seen in the wild.


Always remember to wrap your environment with high-level documentation, so those who have to work with it can move faster without depending on you.
Documentation is critical for communication and reduces the dependency a team has on the maintainer of the environment. It also helps to spread knowledge about the product, which is something very good if you plan to go on vacation someday.


In this article, you learned how to pack and share an application with its development environment; the importance of pragmatic documentation and high-level interfaces, to ease the learning curve required to be proficient with the tools you build; and how to configure Apache, PHP, and XDebug to set up a great development environment. But the main goal has been to show you how I use some tools and write some docs to unlock new or better ways of collaboration, aimed at increasing business outcomes while reducing operational costs.

If you have any inputs, please write to me at guilledevel@gmail.com

What's next

In my next post, I'll show you how to write different versions of the environment with multi-stage builds; how to use Docker to define the network topology for this project; and docker-compose, a great tool to declare single-host environments with multiple containers.

Later, I plan to show you how to automate the creation of a production-ready, auto-scalable AWS ECS infrastructure to run your containers, and how to automate and document your infrastructure-as-code with GNU Make and Terraform: a great tool, similar to CloudFormation, that also introduces the possibility of automating hybrid cloud infrastructure.

Stay tuned!

© Copyright 2019 Guillermo Maschwitz