Let’s get more practice working with Dockerfiles by prepping an image we can use as our Laravel development environment. Again, we could use something like Laradock (which is fantastic, by the way), or any of the dozens of pre-built images available on Docker Hub or GitHub. For our purposes, that’d be cheating. It’s good to know the details for when your project gets larger and you really have to deep-dive. You’ll want to know your infrastructure like the back of your hand; otherwise, a critical failure will take too long to fix while customers can’t access your application and your boss is DM-ing you on Slack every 15 minutes. What I will say for Laradock and other pre-built images is that you can find actively maintained ones. To be clear, if you choose to use your own images in production, it’s up to you to patch the software inside them so that vulnerabilities don’t creep into your containers.

I do care about my readers, so I won’t painstakingly take you through the line-by-line process of creating a functional Laravel environment from scratch on Arch Linux. That would be cruel & unusual punishment. Here, have a gist instead; paste it into myapp/server/Dockerfile

Part 1 configures the environment. Docker’s ENV instruction sets environment variables that are available both during the build and inside running containers. Part 2 is the list of commands we’d run inside the container to install Nginx, since we’ll need a server to handle requests to our application. Note the use of

command \

&& command

to run commands in series within a single RUN instruction. Part 3 installs PHP 7 and the core extensions you’ll need to get going with Laravel. Part 4 configures the PHP environment, and Part 5 installs Composer. Lastly, there is some additional configuration. You can EXPOSE ports on the container to allow traffic in. Here we expose both 8000 and 443 so that we could serve an https version at 443 if we wanted to (and we’re smart). The COPY command copies a file from a path in the build context into the container at the path you specify as the second argument. CMD provides a default executable for the container, where the first string in the array is the path to the executable and the remaining strings are arguments passed to it. There is another instruction we could have used in its place; if you’re interested, check out the top-rated Stack Overflow answer on the differences between CMD and ENTRYPOINT.
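If you’d like to see the shape of all five parts in one place, here is a minimal sketch — not the actual gist. The base image, package names, and versions are illustrative assumptions; the real file will differ:

```dockerfile
# Sketch only -- package lists and versions are assumptions, not the gist.
FROM debian:stretch

# Part 1: environment configuration via ENV
ENV APP_HOME=/var/www DEBIAN_FRONTEND=noninteractive

# Part 2: install Nginx, chaining commands so they share one image layer
RUN apt-get update \
    && apt-get install -y nginx \
    && rm -rf /var/lib/apt/lists/*

# Part 3: PHP 7 and core extensions Laravel needs (illustrative selection)
RUN apt-get update \
    && apt-get install -y php7.0-fpm php7.0-mbstring php7.0-xml php7.0-zip \
    && rm -rf /var/lib/apt/lists/*

# Parts 4 and 5: PHP configuration and the Composer installer go here

# Additional configuration: ports, the Nginx config, and a default executable
EXPOSE 8000 443
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
```

The `\` plus `&&` chaining matters beyond style: each RUN produces a layer, and chaining keeps related steps (update, install, cleanup) in one layer so the package cache doesn’t bloat the image.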

Paste the contents of this gist inside of a Dockerfile at myapp/server , and from inside that directory build the image:

$ docker build -t myserver:latest .

Oops, that was my bad. The build will fail if files passed to COPY don’t actually live in the directory passed as the source argument. This is something to watch out for. Create a file named nginx.conf in the myapp/server directory and paste the following contents into it
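As with the Dockerfile, the gist itself isn’t reproduced here, but a minimal Laravel-flavored config looks something like the sketch below. The root path and the php-fpm socket path are assumptions — match them to whatever your image actually installs:

```nginx
# Minimal sketch; the real gist's paths and fastcgi settings may differ.
worker_processes auto;
events { worker_connections 1024; }

http {
    server {
        listen 8000;
        root /var/www/public;   # Laravel serves from the public/ directory
        index index.php;

        # Send everything that isn't a real file through Laravel's front controller
        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # Hand PHP files to php-fpm; socket path is an assumption
        location ~ \.php$ {
            fastcgi_pass unix:/run/php/php7.0-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
}
```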

We now have easy access to our Nginx configuration. Try the build again; it should succeed this time. Depending on your CPU and network speed, you may need to go grab a cup of coffee while it builds.

Installing React

There’s a way to get up and running with React without needing to have Node installed on your machine. From inside of the myapp folder use

$ docker run -it --mount type=bind,source="$(pwd)"/client,target=/usr/src/app myclient:latest /bin/sh

Note that "$(pwd)" is a command substitution for the current working directory. Depending on your platform, you may have to replace it with the absolute path to your myapp directory; when we specify the mount type as bind , Docker expects an absolute path for the source. We’re specifying a mount point in the docker run command so that changes we make to the folder inside the container persist in our myapp directory, since the container treats the folder as a volume mounted inside it. Then, from inside the container

/# cd /usr/src

/# npx create-react-app client ; mv client/* app/ ; rm -rf client

/# cd app ; npm install

We choose to install the software inside the container so we don’t have to worry about having Node.js and npm installed on our local machine. It reduces the number of prerequisites we need to complete our setup. We’ll follow the same approach for the Laravel installation, so you won’t have to have PHP and Composer installed globally on your machine in order to install the required Composer modules and generate an app key.

Installing Laravel

From your terminal, run the myserver image with the server folder mounted inside of the container:

$ docker run -it --mount type=bind,source="$(pwd)"/server,target=/var/www myserver:latest /bin/sh

And then from inside the container

/# cd /var

/# composer global update

/# composer create-project --prefer-dist laravel/laravel server ; mv server/* www/ ; rm -rf server

/# cd www ; chmod -R 775 storage

/# wget https://raw.githubusercontent.com/laravel/laravel/master/.env.example

/# cp .env.example .env

/# php artisan key:generate

/# exit

Running the Project

We could bring up the project by running the commands

$ docker run -p 80:3000 --mount src=client,target=/var/www myclient:latest npm start

$ docker run -p 8000:8000 --mount src=server,target=/var/www myserver:latest

every time we need the environment, but that wouldn’t be any fun. We can add a docker-compose.yml file to the root of our myapp directory with the following contents to better manage our stack:
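The gist isn’t inlined here, but a Compose file mirroring the two docker run commands above would look roughly like this sketch. Service names and mount targets are assumptions; adjust them to match your images:

```yaml
# Sketch with assumed service names and paths -- not the original gist.
version: "3"
services:
  client:
    image: myclient:latest
    command: npm start
    ports:
      - "80:3000"        # host port 80 -> the dev server inside the container
    volumes:
      - ./client:/usr/src/app
  server:
    image: myserver:latest
    ports:
      - "8000:8000"
    volumes:
      - ./server:/var/www
```

With this in place, Compose replaces the pair of docker run invocations with a single command, and tears both containers down together.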

From myapp, run

$ docker-compose up

and there you have it, your client should be accessible at localhost:80, and your Laravel server at localhost:8000. Run this command whenever you want to spin up your dev environment, and run

$ docker-compose down

from the directory to bring it down.

Enjoy!