How To Install and Set Up Laravel with Docker Compose (With Example)

Coding (Php 7.x)


From the examples in this tutorial, you will learn how to set up a Laravel application using Docker containers
 

Introduction

 

In the previous episode, we saw how to use the basic commands of Docker.

 

You now know that there is a service called Docker Hub, from which you can download (pull is a more appropriate term) images, then you can run those images and, like magic, have an environment with PHP on your computer.

 

You also saw an example of how to dockerize your own PHP application by learning how to write a Dockerfile.

 

These features allow you to recreate environments with the software you need without actually installing it on your machine.

 

If you haven't read it yet, here it is: Containerize your PHP application.

 

Even though the basic functionality of Docker already brings several improvements, it is not enough on its own to get a fully working application.

 

You can create as many PHP containers as you want, but eventually you will want to connect them to a database, maybe to an in-memory data store like Redis, or to a front-end framework such as React.

 

Here is where you need several containers of different types talking to each other.

 

The way to do that is with Docker Compose.

 

The Series

 

What you are reading is a 3-part series about the containerization of PHP applications:

If you haven't read the previous part, or you want to jump ahead to the next one, you can find the quick links below.

 

 

Let's begin!

 

Docker Compose

 

Compose is an amazing tool.

 

It simply consists of a configuration file, written in YAML, that describes all the different services, networks, and other resources that our application needs.

 

Then, with a few commands, we can run all the containers on a single Docker host.

 

The reason we use Compose is that it is way easier to manage a single well-formatted file than 2, 3, or even 100 standalone containers.

 

To better explain how to use this tool, the Docker team created an example application built with a microservices approach.

 

This article shows all the functionality that this sample application includes.

 

The Docker voting app

 

The Docker voting app is the de facto example used to show how docker-compose works.

 

The goal of this application is to show how different containers running different software can be connected to each other.

 

The system consists of a voting application that lets users choose between two options and then shows the result back on the screen once the votes have been processed.

 

You can browse the files or pull the voting-app repository here.
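If you want to poke around the code yourself, you can clone the sample repository locally (a quick sketch, assuming the dockersamples/example-voting-app repository on GitHub is still the upstream source):

# Clone Docker's sample voting application and have a look at its compose file
git clone https://github.com/dockersamples/example-voting-app.git
cd example-voting-app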

 

There are 5 containers in this application:

 

  • A front-end web app in Python which lets you vote between two options
  • A Redis queue that collects new votes
  • A .NET Core worker which consumes votes and stores them in the database
  • A Postgres database backed by a Docker volume
  • A Node.js web app that shows the results of the voting in real-time

 

As you can see this application is composed of different parts that are all connected to each other.

 

If you were to do this using only plain Docker commands, you would type something like this:

 

docker run -d --name=redis redis
docker run -d --name=db postgres
docker run -d --name=vote -p 5000:80 voting-app
docker run -d --name=result -p 5001:80 result-app
docker run -d --name=worker worker

 

 

With the commands above we are instantiating the containers that our application needs.

 

Note that all of them run in detached mode and all of them have their own name.

 

Also, the two web apps, the one that lets users vote and the one that shows the results on the screen, have their ports published with the -p flag, as we saw in the previous episode.

 

A problem that you might have already seen is that these are all standalone containers.

 

They do not communicate with each other, which in an application of this type is a problem.

 

We need to find a way to link them together.

 

Let’s think about it:

 

The Python voting app needs to communicate with the Redis container to store the vote after the user chooses one of the available options.

 

Also, the worker needs to read the data previously stored in Redis and save it into the Postgres database.

 

Finally, the result app needs to be linked to the Postgres database to retrieve the aggregate data and show it on the screen.

 

To connect containers together we use the --link flag.

 

The syntax is:

 

--link [containerName]:[hostName]

 

 

The flag is quite straightforward; notice that it requires the name of the container you want to link to.

docker run -d --name=redis redis
docker run -d --name=db postgres
docker run -d --name=vote --link redis:redis -p 5000:80 voting-app
docker run -d --name=result --link db:db -p 5001:80 result-app
docker run -d --name=worker --link db:db --link redis:redis worker

 

Once you have a mental image of how all the containers in your application relate to each other, you can start writing the docker-compose.yml file.

 

It will be the main focus of the next section

 

 

docker-compose.yml

 

It is common to use a YAML file to describe full environments for web applications.

 

AWS CloudFormation and Google Cloud Deployment Manager are famous examples of that.

 

Docker does the same, by using a YAML file called docker-compose.yml.

 

It lets you define a system containing several services and then manage them with a few commands such as docker-compose up or docker-compose down.
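For reference, here is what the day-to-day workflow looks like once the file exists (a minimal sketch, to be run from the folder that contains docker-compose.yml):

docker-compose up -d      # create and start every service in the background
docker-compose ps         # list the containers belonging to this project
docker-compose logs -f    # follow the logs of all the services
docker-compose down       # stop and remove the containers and the default network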

 

The first step when creating a multi-container application is to discover what features are going to be used by each container.

 

We have done this exercise in the previous part of this article.

 

We have seen which ports the web apps need to publish and which database they need to be linked to.

 

But keeping all of this in several standalone commands can be messy and lead to several problems.

 

To solve that and manage everything in an easier way, we are going to translate these commands into a docker-compose.yml file.

 

 

redis: 
    image: redis
db:
    image: postgres
vote:
    image: vote
    ports:
     - 5000:80
    links:
     - redis
result:
    image: result
    ports:
     - 5001:80
    links:
     - db
worker:
    image: worker
    links:
    - redis
    - db

 

Writing db is just a shorter way to write db:db.

 

Now that the file is ready, we can bring the application up with the command:

 

docker-compose up

 

Some images are already available on Docker Hub, but others are custom ones that need to be built before they can run.

 

In this case, we use the build directive instead of image:

 

redis:
    image: redis
db:
    image: postgres
vote:
    build: vote
    ports:
     - 5000:80
    links:
     - redis
result:
    build: result
    ports:
     - 5001:80
    links:
     - db
worker:
    build: worker
    links:
     - redis
     - db

 

 

The docker-compose.yml format has evolved a lot over the years; the one above follows version 1 of the file format.

 

Version 2 introduced some new features.

 

The easiest one to spot is that the containers are now declared under a services section.

 

Another one is that you no longer need to specify the links between them, because all the services join a default network and can reach each other by service name.

 

Also, from this version on, the file needs to specify which version of the format we are using.

 

Like so:

version: "2"
services:
 redis: 
    image: redis
 db:
    image: postgres
 vote:
    build: vote
    ports:
     - 5000:80
 result:
    build: result
    ports:
     - 5001:80
 worker:
    build: worker

 

 

Other updates arrived in version 3, like support for Docker Swarm mode and several other features.
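For example, a version 3 file can add a deploy section to a service; this section is read by docker stack deploy when the stack runs on a Swarm cluster (a minimal sketch, the image name and replica count are only illustrative):

version: "3"
services:
  vote:
    image: voting-app
    ports:
      - 5000:80
    deploy:
      replicas: 2
      restart_policy:
        condition: on-failure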

 

 

 

 

The network

 

By default, Compose makes it easy to link together all the containers in an application by creating a single network.

 

If your application folder is called vote, Docker creates a network called vote_default and all the containers defined under services join it.
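You can verify this yourself after starting the project; assuming the project folder is indeed called vote, Docker's network commands will show the automatically created network:

docker-compose up -d
docker network ls                    # vote_default appears in the list
docker network inspect vote_default  # shows which containers have joined it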

 

On occasion it is useful to separate the containers and divide them into groups.

 

For example, our app can be split into two different sections:

 

The frontend section is the part that receives input from the user and shows the results back, while the backend section is where the worker does its calculations on the votes it receives.

 

To create new networks in a docker-compose file you need to use the networks section.

 

There you can declare, by name, as many networks as you want to use.

 

In the example below we are defining a frontend and a backend network, and we are also adding a networks key to each service.

 

You can see that some services, like Redis, are only connected to the backend of the application, whereas the vote service has to join both networks.

 

version: "2"
services:
    redis:
      image: redis
      networks:
        - backend
    db:
      image: postgres
      networks:
        - backend
    vote:
      build: vote
      ports:
        - 5000:80
      networks:
        - frontend
        - backend
    result:
      build: result
      ports:
        - 5001:80
      networks:
        - frontend
        - backend
    worker:
      build: worker
      networks:
        - backend

networks:
    frontend:
    backend:

 

You can also specify some details about each network, such as giving it a custom name and indicating the driver to use (the custom name attribute is why the snippet below uses file format version 3.5):

version: "3.5"
networks:
 frontend:
  name: my_personal_frontend
  driver: my_personal_driver

 

 

Using Laravel in Docker

 

If you don't know what Laravel is, or you want to start learning the basics of PHP, here is the page for you.

 

We are using version 3 of the Compose file format.

 

We are going to create a network that connects all the services we need, even though this is not mandatory.

 

Regarding the core of our docker-compose.yml we are going to need 3 different services:

 

The first one is the web server on which our Laravel application will run.

 

Setting up Nginx

 

In this case, we are going to use Nginx.

 

The second service we need is the database.

 

I’d like to use MySQL in this case.

 

Lastly, it is time for PHP.

 

Nothing that you haven’t seen so far.

version: '3'
services:
   nginx:
      networks:
        - laravel
   mysql:
      networks:
        - laravel
   php:
      networks:
        - laravel
networks:
   laravel:

 

You can notice that, as we saw in the network section of this article, we are connecting all of our services to the laravel network.

 

Even though official images are available on the Docker Hub registry for all three services, since we need to run Laravel we are going to use a Dockerfile to get our PHP container up and running and install all the required dependencies.

 

Here is what our Dockerfile will look like:

FROM php:7.4-fpm-alpine
RUN docker-php-ext-install pdo pdo_mysql

 

We are going to use PHP version 7.4 for this project.

 

What this does is pull PHP and install the extensions required to connect Laravel to the database (PDO and pdo_mysql in this case).
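Depending on the Laravel features you use, you may need a couple of extra extensions that are not bundled with the base image; for example, Laravel's server requirements list BCMath, so a slightly extended Dockerfile could look like this (just a hedged variation, the two-line version above is enough for this tutorial):

FROM php:7.4-fpm-alpine

# pdo and pdo_mysql for the database connection, bcmath for Laravel's requirements
RUN docker-php-ext-install pdo pdo_mysql bcmath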

 

That’s it for PHP for now.

 

For the other two services we are, instead, simply going to pull ready-made images.

 

We have:

 

https://hub.docker.com/_/nginx for Nginx; since I want to use the lightest image available, we’ll opt for 1.17.8-alpine.

 

https://hub.docker.com/_/mysql for the database; we are going to use version 5.7.

 

This is how the docker-compose.yml looks so far.

 

version: '3'
services:
    nginx:
      image: nginx:1.17.8-alpine
      networks:
        - laravel
    mysql:
      image: mysql:5.7
      networks:
        - laravel
    php:
      build:
        context: .
        dockerfile: Dockerfile
      networks:
        - laravel
networks:
    laravel:

 

 

Let’s focus on the web server for a bit.

 

The name of the image can look a bit complicated, and writing it over and over won’t make us as effective as we can be.

 

Docker Compose lets us set the name of the container with the container_name option.

 

Also, we want to be able to reach the web server from our host machine.

 

To do so we need to publish the ports; in this case we can pick a host port such as 8080 and bind it to port 80 of the Nginx container.

 

Finally, we want persistent storage on our machine, so we need to indicate where we want to save our data.

 

The option, in this case, is volumes, and each entry takes two values: the path on our machine and the directory where the data lives inside the container.

 

Unfortunately, the default configuration does not know how to serve a Laravel application from /var/www/html (Laravel's entry point is the public folder).

 

To override this we need to add a custom Nginx config file.

 

Thus, we are going to create a local Nginx config file on our machine and then mount it where the web server normally looks for its configuration.

 

nginx:
  container_name: nginx
  ports:
    - "8080:80"
  volumes:
    - ./src:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf

 

You can use a basic Nginx default.conf like the one shown in the wiki.
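If you don't have one at hand, the following is a minimal sketch that works with this setup; it assumes the Laravel code is mounted at /var/www/html and that the PHP-FPM container is reachable as php on port 9000, exactly as defined in our docker-compose.yml:

server {
    listen 80;
    index index.php index.html;
    root /var/www/html/public;

    location / {
        # Send every request to Laravel's front controller if no file matches
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # Forward PHP requests to the php-fpm container defined in docker-compose.yml
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}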

 

(If you want me to make an article about Apache and Nginx send me a message on my Facebook page)

 

Now, to work properly, Nginx needs PHP and the database to be available before it starts.

 

This means Docker has to start those two services first and only then start the web server.

 

We do this with the depends_on option.

nginx:
   depends_on:
   - php
   - mysql

 

The part relative to Nginx is now complete and it looks like this:

nginx:
  image: nginx:1.17.8-alpine
  container_name: nginx
  ports:
    - "8080:80"
  volumes:
    - ./src:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
  depends_on:
    - php
    - mysql
  networks:
    - laravel

 

Now it is time to focus on our second service:

 

The database 

 

In this case, MySQL.

 

We already saw that the image we are using will be version 5.7.

 

It will be helpful to give the container an easy name; let’s just opt for mysql.

 

We also need to map a port of the Docker container to the host.

 

This time we are going to use 3306:3306.

 

Then we add the volume, as we did for Nginx:

 

mysql:
  image: mysql:5.7
  container_name: mysql
  ports:
    - "3306:3306"
  volumes:
    - ./mysql:/var/lib/mysql
  networks:
    - laravel

 

I’d like to add two more options that we haven’t seen so far.

 

The first is restart, which tells Docker what to do when the container stops or fails.

 

In our case, since the database is a critical part of the application, we want it to restart automatically as soon as possible, unless we are the ones who explicitly stopped it for one reason or another.

 

The second option we need is tty.

 

It keeps a pseudo-terminal attached to the container, which lets us work with the MySQL shell and run commands via the terminal.

 

restart: unless-stopped
tty: true
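Once the container is running, you can open that shell straight from your terminal, for example (it will ask for the root password defined in the environment section below):

docker-compose exec mysql mysql -u root -p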

 

Finally, as you can see from the official documentation (https://hub.docker.com/_/mysql), this image accepts a few environment variables.

 

Here is a list of them:

 

MYSQL_DATABASE, MYSQL_USER, MYSQL_PASSWORD, MYSQL_ROOT_PASSWORD, SERVICE_TAGS, SERVICE_NAME.

 

The complete part regarding the MySQL service looks like this (note that the official image does not allow MYSQL_USER to be root, so here we create a dedicated laravel user; pick whatever credentials you prefer):

 

mysql:
  image: mysql:5.7
  container_name: mysql
  restart: unless-stopped
  tty: true
  ports:
    - "3306:3306"
  volumes:
    - ./mysql:/var/lib/mysql
  environment:
    MYSQL_DATABASE: laravel
    MYSQL_USER: laravel
    MYSQL_PASSWORD: secret
    MYSQL_ROOT_PASSWORD: root
    SERVICE_TAGS: dev
    SERVICE_NAME: mysql
  networks:
    - laravel

 

Now it is time to complete the PHP section of our docker-compose.yml.

 

So far we have the reference to the build context and the network that includes PHP.

 

What we do now is add a simpler name to the container, the volume mapping, and the port to expose to the Docker host.

php:
  build:
    context: .
    dockerfile: Dockerfile
  container_name: php
  volumes:
    - ./src:/var/www/html
  ports:
    - "9000:9000"
  networks:
    - laravel

 

That is basically it.

 

All the files required for the environment are set.
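At this point you can already check that everything builds and starts correctly; from the project root (where docker-compose.yml lives) run:

docker-compose up -d --build   # build the PHP image and start nginx, mysql and php
docker-compose ps              # the three containers should be listed as Up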

 

We can now add Laravel to the project.

 

It will be really easy.

 

It is best practice to put Laravel in its own folder; in this case, we can add an src folder in the root of the project (at the same level as the Dockerfile and docker-compose.yml).

 

Once inside this new src folder, type the following command to install Laravel directly into it:

 

composer create-project --prefer-dist laravel/laravel .
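If you don’t have Composer installed on your machine, you can run the same step through the official composer image instead (a sketch, to be run from inside the src folder):

docker run --rm -v $(pwd):/app composer:latest create-project --prefer-dist laravel/laravel .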

 

Here you go, Laravel is installed.

 

The application is not ready though.

 

What you want is to add a connection to the database.

 

This is not a Laravel tutorial, but to keep it simple this is a two-step process:

  1. Edit the .env file provided (see the example below)
  2. Run migrations and seeders with docker-compose exec
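For the first step, the database block of the .env file has to point at the mysql service using the credentials defined in docker-compose.yml; a minimal sketch with the values used earlier in this article:

DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=secret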

 

Notice that to run the migrations you need to type the command:

 

docker-compose exec php php /var/www/html/artisan migrate
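The seeders mentioned in step 2 can be run the same way:

docker-compose exec php php /var/www/html/artisan db:seed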

 

 

Conclusion

 

It is undeniable that, especially for beginners, Docker is an overwhelming tool.

 

But, at the same time, it is also a simple tool that you can add to your workflow to improve your projects.

 

It also gives you access to a lot of the tools built around it, like Compose or Swarm (or Kubernetes).

 

Learning Docker is not strictly necessary, but you need to know that this type of technology exists and has become popular among web developers because of the problems it solves.

 

Also, if you want to learn more about PHP you can now move on to the PHP design pattern article.

 

This article was inspired by Andrew Schmelyun’s repository 

 

PHP has an amazing community all over the world; if you want to read more articles like this, subscribe to PHP Weekly, a weekly roundup with all the updates from the PHP world.

 

 

 
 
If you like this content and you are hungry for more, join the Facebook community, in which we share info and news just like this one!

Other posts that might interest you:

  • Containerize your PHP application (Docker 101), Coding (Php 7.x), Jan 25, 2020
  • Container Orchestration (Docker 103), Coding (Php 7.x), Feb 19, 2020
  • Repository in PHP [Design pattern with examples], Coding (Php 7.x), Feb 29, 2020