The most common question I encounter when training or consulting with developers, engineers and software development labs about a new Event Sourcing project is how and where to start. This question makes so much sense. I remember trying to get my head around Object-Oriented Programming in practice (not the crap I learnt at school), let alone understanding Domain-Driven Design!

Almost a decade ago, I was building a relatively complex online and offline sales and logistics system for what was, back in the day, one of my largest clients: a manufacturer, wholesaler and distributor of industrial refrigeration and air-conditioning systems. Three months into the project, I had my team set up: three Software Engineers, two backend web developers and one frontend developer. We were making excellent progress. We were building this system using a DDD model and Laravel as the application framework. It was good, and we were productive. Then, the client hired an internal CTO. He asked to meet me, and I obliged. He asked me how it was working in Event Sourcing with PHP. My answer was more of a series of questions 😳.

I already knew about Event Sourcing; I had read the books, but I had never needed to implement such a thing. I did not know where to start. I mean, I had no idea. I knew the concept and the theory, but that was it.

In my previous article, I explained how I get started with a new Event Sourcing project. With some theory and Event Storming covered, it is time to start thinking about another, equally important factor: the Technology Stack for an Event Sourcing project.

I chose PHP as the primary backend programming language. However, I am confident that the planning factors apply to any programming language you'd like to use for your Event Sourcing project. In this article, I will outline the considerations for planning the Infrastructure and the Application glue in between. I will also describe and provide a working example of my local environment deployment, my code structure, and what I deem the starting point for such a system: the testing framework.

How to plan the infrastructure services for an event sourcing project

In the majority of cases, when working with a team, preparing for these requirements will depend on several factors and on other project stakeholders' needs. Although I am making these decisions by myself for this project, the process is similar for teams and for projects commissioned by your clients. The main difference is that I can choose which backend services I am most comfortable using and that I don't have external dependencies beyond my control, such as migrating or integrating data from an MSSQL database into a MySQL database 😡.

During my career, I have had an equal number of opportunities to develop new systems and to migrate legacy systems to an event sourcing solution. In my opinion, the latter is a lot more complicated. You can get in touch with me if you need help with refactoring a legacy system to use an event sourcing pattern and with migrating your existing data.

I intentionally kept this example Domain simple so as not to distract you from the primary topic, which is to build an Event Sourcing project with practical examples. The example code in this project is a sample from a commercial project I manage. Please feel free to ask me questions in the comments section further below about this or other Business Domains. With this out of the way, let us start planning the Technology Stack for our example project because we have a lot to cover.

Event store database

The most critical persistent storage for Event Sourcing is the Event Store Database. As I'm sure you are aware, there are several different database engines we can use for running our event store. I will not be comparing any of these engines in this post because all we need to know is that we need an append-optimised database.

Given the immutable nature of Event Sourcing, we certainly don't need to update any of our persisted events. We also don't need to run any complex read queries. As I will mention in a future article, the Event Store implementation will query the persisted events as identified by the Aggregate's UUID (Universally Unique Identifier). That is quite a simple query.
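To make that concrete, here is a minimal sketch of an append-only events table and the single query described above. The table and column names are my own assumptions, not the schema used later in this series, and SQLite stands in for MySQL so the snippet is self-contained.

```php
<?php
// Assumed schema for illustration only; an in-memory SQLite database
// stands in for the MySQL Event Store.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE event_store (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    aggregate_uuid CHAR(36) NOT NULL,
    event_name VARCHAR(255) NOT NULL,
    payload TEXT NOT NULL,
    recorded_at DATETIME NOT NULL
)');

// Appending an event is a plain INSERT; we never UPDATE past events.
$append = $pdo->prepare(
    'INSERT INTO event_store (aggregate_uuid, event_name, payload, recorded_at)
     VALUES (?, ?, ?, ?)'
);
$append->execute([
    'c1a7a1e2-0000-0000-0000-000000000001',
    'listing_was_published',
    '{"title":"Cool Room Unit"}',
    '2019-08-01 10:00:00',
]);

// Reconstituting an Aggregate needs only this one simple query.
$query = $pdo->prepare(
    'SELECT event_name, payload FROM event_store
     WHERE aggregate_uuid = ? ORDER BY id ASC'
);
$query->execute(['c1a7a1e2-0000-0000-0000-000000000001']);
$events = $query->fetchAll(PDO::FETCH_ASSOC);
```

Nothing here ever rewrites history: the only write path is an INSERT, and the only read path is a lookup by the Aggregate's UUID.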


I would also discourage implementing your Event Store on a NoSQL database. I don't have this rule written in ink; you're okay to use any storage mechanism you wish, but since I believe that the Event Store's schema should be explicitly declared, I don't see NoSQL as a good fit for the Event Store. Although, I'll emphasise, you can still declare your schema and primary keys on most database engines.

Friendly advice: don't use message brokers such as Kafka as your source of persisted truth. Message brokers are precisely what their name implies, brokers; even if the message stream is cached or logged, the messages should be persisted elsewhere.

In any case, I've chosen to implement my Event Store on a MySQL database. I find that PostgreSQL is also great for the Event Store implementation, and I've used it several times.

With this, I have made my first decision for the infrastructure of this Listing application. I will use MySQL for persisting events. However, I will segregate all the Model's code from this decision by using the Repository Pattern. Therefore, if I change my mind in the future, I only need to change the implementation and somehow migrate the data.
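As a rough sketch of that segregation, assuming hypothetical interface and class names: the Model only ever sees the repository interface, so replacing MySQL later means writing a new implementation, nothing more. An in-memory implementation is enough to show the idea (and is handy for tests).

```php
<?php
// Hypothetical names; the real interfaces appear later in this series.
interface EventStore
{
    /** @param object[] $events */
    public function append(string $aggregateUuid, array $events): void;

    /** @return object[] */
    public function eventsFor(string $aggregateUuid): array;
}

// A MySqlEventStore would live in the Infrastructure layer; this
// in-memory stand-in satisfies the same contract.
final class InMemoryEventStore implements EventStore
{
    /** @var array<string, object[]> */
    private array $streams = [];

    public function append(string $aggregateUuid, array $events): void
    {
        foreach ($events as $event) {
            $this->streams[$aggregateUuid][] = $event;
        }
    }

    public function eventsFor(string $aggregateUuid): array
    {
        return $this->streams[$aggregateUuid] ?? [];
    }
}
```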

Read model database

Although I have used MySQL for persisting the Read Models before, the more often I implement a NoSQL database for the Read Models, the more I favour the flexibility NoSQL gives to my applications.

I should note that some applications require the Read Models to have an explicitly declared schema. Our Listing application will not. In saying so, I want to point out that you can have several versions of a Read Model. These types of decisions will be very dependent on your specific application, anti-corruption layers and security. The Listing application's Read Models will be schema-less. The lack of schema gives me the flexibility of adding fields to the Read Models without having to make any changes to existing data. However, when taking this approach, you will need to make sure your read queries can handle missing fields and thus, arbitrary data.
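A tiny, hypothetical illustration of that last point: the older document below was projected before a "discounted_price" field existed, so the query side must tolerate its absence rather than assume a rigid schema.

```php
<?php
// Read-model documents of two "versions"; field names are illustrative.
function displayPrice(array $listing): int
{
    // Fall back gracefully when the field was added after this
    // document was projected.
    return $listing['discounted_price'] ?? $listing['price'];
}

$oldDocument = ['title' => 'Cool Room Unit', 'price' => 2500];
$newDocument = ['title' => 'Split System', 'price' => 1800, 'discounted_price' => 1600];

$prices = [displayPrice($oldDocument), displayPrice($newDocument)];
```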

Also, note that cost is a significant factor for these decisions. Most cloud service providers will charge you for both incoming and outgoing traffic, not just for storage and computing. Thus, hosting multiple databases on separate instances will affect your traffic fees.

Once again, there are so many different NoSQL database engines available to make our heads spin if we try to list even half of them 😵. I will be using MongoDB as a database for the Read Models of this system, for no specific reason other than I use it a lot in my day-to-day work, so I'm very comfortable with it.

Application framework

Do I need an Application Framework? Arguably, I don't. I need an application layer which can be accessed by the clients such as a User Interface. However, I also need a few other essential components in my application layer. In no particular order, these are:

The ability to resolve concrete implementations of my dependencies.

The ability to easily swap all of my concrete implementations with new ones.

An Anti-corruption layer between incoming requests and the Model.

Some form of Identity and Authentication for the clients and Users.

Ability to segregate configurable options using environment variables.

And probably some other requirements too, such as bootstrapping the whole thing 😆.
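To illustrate the first two items on the list, here is a toy container sketch. Laravel's service container does this (and far more); every name below is my own invention, not part of any framework.

```php
<?php
// Hypothetical example: resolving concrete implementations of a
// dependency, and swapping them out with a one-line re-binding.
interface Mailer
{
    public function send(string $to, string $body): string;
}

final class SmtpMailer implements Mailer
{
    public function send(string $to, string $body): string
    {
        return "smtp:{$to}";
    }
}

final class FakeMailer implements Mailer
{
    public function send(string $to, string $body): string
    {
        return "fake:{$to}";
    }
}

final class Container
{
    /** @var array<string, callable> */
    private array $bindings = [];

    public function bind(string $abstract, callable $factory): void
    {
        $this->bindings[$abstract] = $factory;
    }

    public function make(string $abstract): object
    {
        return ($this->bindings[$abstract])();
    }
}

$container = new Container();
$container->bind(Mailer::class, fn () => new SmtpMailer());

// Swapping the concrete implementation is a single re-binding.
$container->bind(Mailer::class, fn () => new FakeMailer());
```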

We can, of course, use a bunch of decoupled components that together satisfy all of the requirements. I've done this before, even successfully. But why? Why not use a framework such as Laravel? There are disadvantages, of course, but none of these cons are bad enough for this application. Laravel, like all frameworks, has a bootstrap time overhead, but I don't see this as significant. If I were to use decoupled components and bootstrap them together myself, I'm almost sure that my bootstrapping would be less efficient, unless I developed some custom caching and optimisation similar to what Laravel already provides.

You might be asking why I'm not opting for the Lumen micro-framework. If you are, my answer is a little dumb: I haven't used Lumen in production, and I don't have experience with it. What I do know is that I will still need to use Eloquent with Laravel's Authentication system, and I think that this is the only significant component that might reduce the bootstrap time. Please correct me if I'm wrong, as I don't know much about Lumen.

API access

I also intend to use GraphQL as a query language for the API. I find GraphQL and schema-less Read Models to be a perfect match. However, this will most probably result in using even less of Laravel's HTTP layer because (as an example) the GraphQL Type System can serve as the Anti-corruption layer for the application layer, as opposed to Laravel's Request Objects. Using this API also means that I will likely need to develop the GraphQL implementation myself because (currently and AFAIK) all the current libraries for GraphQL and Laravel use Eloquent as the Type System. I won't be using Eloquent for the Read Models as it defeats the schema-less advantages that I so much love 💕.
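Leaving GraphQL itself aside, the anti-corruption idea can be sketched in plain PHP: the API layer exposes only an explicit whitelist of fields, however the schema-less documents underneath are shaped. In the real project this role falls to the GraphQL type system; all names below are hypothetical.

```php
<?php
// Illustrative stand-in for a GraphQL type acting as an
// anti-corruption layer over schema-less read-model documents.
final class ListingType
{
    /** Only these fields ever cross the API boundary. */
    private const FIELDS = ['uuid', 'title', 'price'];

    public function fromReadModel(array $document): array
    {
        $exposed = [];
        foreach (self::FIELDS as $field) {
            // Missing fields are simply omitted; internal fields
            // never leak out.
            if (array_key_exists($field, $document)) {
                $exposed[$field] = $document[$field];
            }
        }
        return $exposed;
    }
}
```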

Local development environment

We are done planning the technology stack for our Listing application, so it's about time we prepare our local development environment. Last year I wrote an article detailing my workflow for developing PHP applications, and although I continuously refactor my process, the most noticeable change is that I'm now almost always using Docker in my development environment as opposed to Vagrant.

I'm not focussing this series of articles on deployment strategies or even distributed systems. I am not implementing a micro-services approach, but although the system may feel like a monolith, in reality we can deploy every back-end service on a separate machine. You can run this system on a Kubernetes Cluster if so needed by merely following this article.

I will not be going through all the details of how I organised this project's tasks in this article. The details in the article mentioned above are still valid. As a summary, I followed these steps:

I created a new remote Git repository on GitHub and set up my local directories for the project.

mkdir -p ~/code/keithmifsud/php-event-sourcing-demo
cd ~/code/keithmifsud/php-event-sourcing-demo

I installed Laravel in a directory named code.

composer create-project --prefer-dist laravel/laravel code

It is probably best if you install Laravel from within Docker, but since I'm using an Ubuntu host, it doesn't matter.

I initialised Git and added the remote repository as its origin.

cd code
git init
git remote add origin git@github.com:keithmifsud/php-event-sourcing-demo.git

I updated the project description in the composer.json file, deleted the readme.md file provided by Laravel and created a new, initial README.md file to describe the purpose of this repository. You may also wish to add a licence file. Once done, I committed my changes and pushed them to the remote origin.

git add .
git commit -m "#GH-1 Initial commit containing the Laravel framework."
git push -u origin master

I also created a Project, some Issues and a KanBan board on GitHub. These simple management techniques help me to continue working on a project even if I've been away from it for a while as it is the case on this same project 🐢.

Docker containers for Event Sourcing

Okay, I've shared a summary of my getting-going tasks, so let's get going with the Docker containers we need for this Event Sourcing application. We've already covered that we need a MySQL database for the Event Store and a MongoDB database for the Read Models. However, I want a separate database for the application framework, in this case Laravel. You don't need a separate database for the application layer; I choose to have one for several reasons: I want to be able to easily back up the Event Store independently, and I don't want Laravel to get anywhere close to my Event Store; artisan migrate commands are fluffin' scary 😱.

We also need a Docker container for the PHP runtime and a web server for, well, serving the web. I'm using docker-compose to compose my containers as follows:

version: '3'
services:
  es_demo_php_runtime:
    build:
      context: .
      dockerfile: ./docker/PHP_Dockerfile
    image: digitalocean.com/php
    container_name: es_demo_php_runtime
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: es_demo_php_runtime
      SERVICE_TAGS: dev
    working_dir: /var/www
    volumes:
      - ./:/var/www
      - ./docker/configuration/php/local.ini:/usr/local/etc/php/conf.d/local.ini
    networks:
      - es-demo-network
  es_demo_nginx_webserver:
    image: nginx:alpine
    container_name: es_demo_nginx_webserver
    restart: unless-stopped
    tty: true
    ports:
      - "8080:80"
      - "4430:443"
    volumes:
      - ./:/var/www
      - ./docker/configuration/nginx/conf.d/:/etc/nginx/conf.d/
    networks:
      - es-demo-network
  es_demo_mysql_app_db:
    image: mysql:5.7.22
    container_name: es_demo_mysql_app_db
    restart: unless-stopped
    tty: true
    ports:
      - "3315:3306"
    environment:
      MYSQL_DATABASE: es_demo_application_db
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: es_demo_mysql_app_db
    volumes:
      - ./docker/storage/app-data:/var/lib/mysql/
      - ./docker/configuration/mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - es-demo-network
  es_demo_mysql_event_store_db:
    image: mysql:5.7.22
    container_name: es_demo_mysql_event_store_db
    restart: unless-stopped
    tty: true
    ports:
      - "3316:3306"
    environment:
      MYSQL_DATABASE: es_demo_event_store_db
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: es_demo_mysql_event_store_db
    volumes:
      - ./docker/storage/event-store-data:/var/lib/mysql/
      - ./docker/configuration/mysql/my.cnf:/etc/mysql/my.cnf
    networks:
      - es-demo-network
  es_demo_mongo_read_model_db:
    image: mongo:3.4.22-xenial
    container_name: es_demo_mongo_read_model_db
    restart: unless-stopped
    tty: true
    ports:
      - "8081:8081"
    environment:
      MONGO_INITDB_DATABASE: es_demo_read_model_db
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: secret
      SERVICE_NAME: es_demo_mongo_read_model_db
      SERVICE_TAGS: dev
    volumes:
      - ./docker/storage/read-model-data:/data/db
    networks:
      - es-demo-network

networks:
  es-demo-network:
    driver: bridge

volumes:
  app-data:
    driver: local
  event-store-data:
    driver: local
  read-model-data:
    driver: local

As you can see, I have extracted the PHP container's build to a Dockerfile, and I also keep the configuration files outside of the docker-compose file. You can find these files here.

If you are following along with me, you will need to create directories for the Docker volumes too.

mkdir -p ./docker/storage/app-data
mkdir ./docker/storage/event-store-data
mkdir ./docker/storage/read-model-data

Running the application

We don't have much to see, but I want us to make sure we're up and running for the next part, setting up the testing framework, and so we're ready for the following article. So, let's wake up these containers:

docker-compose up -d

You can then check that the application is running either by visiting http://localhost:8080 or, even better, through PHPUnit 😃.

docker-compose exec es_demo_php_runtime php ./vendor/bin/phpunit

At this stage, you may wish to configure your .env file with the database connection values. I won't be doing this because I don't need it yet 🤷‍♂️.

Code structure

One of my pet peeves, when asked to help a team working on a project that uses Laravel, is when Laravel is treated as the be-all and end-all. Laravel is an application framework, the same as Symfony is, the same as all application frameworks are. I say: "Please, be verbose about your business. The \App namespace says nothing about the problem you are solving, nor about the solution you are providing to your customers."

We aren't solving any complex business problems. Our Domain is a demo of a Listing website similar to GumTree. But still, let's all change the ambiguous \App namespace to reflect our goals.

docker-compose exec es_demo_php_runtime php artisan app:name "ESDemo\Application"

Okies, so Laravel is simply an application framework. No, it is more than just that. However, it is not our entire system. Laravel will provide access to the application, and we can use other helpers such as authentication and application bootstrapping. We will also use Eloquent for persisting the authentication requirements, but we will not use Eloquent for either the event store or the read models.

Whether or not you use Eloquent in a different architecture, such as Domain-Driven Design (DDD), believe you me, DBAL models are not your business models. When you use a DBAL (a Database Abstraction Layer) to enforce business rules, you end up querying the database for every check, even when the answer should already exist in memory.

Rant over 😌. We will now create the first step for our system structure, not the application but the system (I thought you said the rant is over?!). Same as we structure a DDD model, all our system's code will live in a ./src directory. This source directory will be namespaced as ESDemo or your system's name. The sub-directories will be auto-loaded in a PSR-4 standard.

My Directory structure for enterprise systems in PHP

It is too early in our Listing system to have anything living in the src directory but never too early for me to explain how I structure such directories.

I structure Command models such as our Listing model in the following way.

code
|__ app (... application stuff, like Laravel's things)
|__ src
    |___ Infrastructure (Concrete implementations of things)
    |___ Models
    |    |___ Listing
    |    |    |___ Commands
    |    |    |___ Domain
    |    |    |___ Events
    |    |    |___ Exceptions
    |    |    |___ Handlers
    |    |    |___ Listeners
    |    |    |___ Repositories
    |    |    |___ Specifications
    |    |___ Listings (Read model, as detailed below)
    |___ ...

The above file structure is extremely opinionated, but since it is a common question, I wanted to share it with you 😬.

You'll notice that I don't have a directory to separate the Command and Query Models. I do this intentionally because I want my Models to reflect the business and only the business. Querying Listings in bulk is a Domain on its own. It has very different business requirements than the Listing (singular) model, and I think that the pluralisation is descriptive enough.

Diving into this directory structure for the Listing domain, I have the following directories for the following purposes:

Commands

Commands are the only way exterior layers can access the Model.

Domain

This directory includes business model objects. It holds the Aggregate Root, sometimes other Entities and also Value Objects.

Events

I think of the Events directory as the outgoing information of this Model.

Exceptions

All the possible Exceptions spewed out from this Model: everything that is abnormal, yet possible.

Handlers

The Command Handlers.

Listeners

If the Model listens to Events occurring in other Models, I put the Listeners here.

Repositories

The data repositories.

Specifications

I like to use the Specification Pattern because Specification objects are very descriptive of their intent. Some fellow Engineers will question as to why this directory does not reside inside the Domain directory. I choose to keep it outside because other layers can re-use specifications. If the Domain used Factories, they'd also be on the same directory level because we can access them from external layers.
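The relationship between the Commands, Handlers and Specifications directories can be sketched as follows. None of these class names come from the demo repository; they are illustrations only.

```php
<?php
// Hypothetical illustration of the command-model directories.
final class PublishListing               // would live in Commands/
{
    public function __construct(public string $listingUuid) {}
}

final class ListingIsComplete            // would live in Specifications/
{
    public function isSatisfiedBy(array $listing): bool
    {
        // The business rule reads like a sentence: a listing is
        // complete when it has a title and a positive price.
        return ($listing['title'] ?? '') !== ''
            && ($listing['price'] ?? 0) > 0;
    }
}

final class PublishListingHandler        // would live in Handlers/
{
    public function __construct(private ListingIsComplete $specification) {}

    public function handle(PublishListing $command, array $listing): bool
    {
        // Commands are the only way in; the handler guards the Model.
        return $this->specification->isSatisfiedBy($listing);
    }
}
```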

I structure the Read Models, such as our Listings as follows:

code
|__ ... (application stuff, like Laravel's things)
|__ src
    |___ Infrastructure (Concrete implementations of things)
    |___ Models
         |___ Listing (Write model, as detailed above)
         |___ ...
         |___ Listings
              |___ Exceptions
              |___ Projectors
              |___ ReadModels
              |___ Repositories

My Read Models directory structure is somewhat simpler than the Command Models. Although this is not a generic template, it more or less always includes the following structure:

Exceptions

Things that can go wrong, such as querying things that don't exist.

Projectors

Contains objects that listen to Events and project them to the Read Model(s).

Read Models

All the Read Models. For example, ActiveListings, ExpiredListings etc.

Repositories

Interface(s) segregating the concrete data implementation(s).
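A minimal, hypothetical projector might look like this, with an in-memory array standing in for the MongoDB collection; the event and projector names are my own, not the demo repository's.

```php
<?php
// Illustrative projector: it listens to a Domain Event and projects
// it onto a Read Model. No business rules live here.
final class ListingWasPublished
{
    public function __construct(
        public string $listingUuid,
        public string $title
    ) {}
}

final class ActiveListingsProjector
{
    /** @var array<string, array> documents keyed by listing UUID */
    private array $readModel = [];

    public function whenListingWasPublished(ListingWasPublished $event): void
    {
        // Projection is a plain, idempotent write per aggregate.
        $this->readModel[$event->listingUuid] = [
            'uuid'  => $event->listingUuid,
            'title' => $event->title,
        ];
    }

    public function all(): array
    {
        return array_values($this->readModel);
    }
}
```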

PSR-4 Namespacing

Although we don't have anything in our src directory yet, we can still set up the namespace for it. In composer.json, add an entry for the ./src directory.

"autoload" : { "psr-4" : { "ESDemo\\Application\\" : "app/" , "ESDemo\\" : "src/" } .... } ,

Please note that I'm not using a Container for Composer; therefore, I cannot run composer dump-autoload inside Docker and instead run it from my host. However, you may wish to either add a Container for Composer or keep Composer installed in your PHP runtime container. I discourage pushing Composer to a production server; instead, I build the Docker Image during my CD process.

With the above namespace configuration, we can easily follow the system's code structure and its intent. Throughout the development of this Listing application, we will place all the Model code inside the src/Models directory and all the Infrastructure code in the src/Infrastructure directory.

The remaining code is within the Application Layer. This layer and namespace contain all the Application code, which, in our case, is more or less all coupled with Laravel.

In future articles, we will also see how the Infrastructure layer is structured. We will implement all the Repositories concretely in this layer. Therefore, the directory will be structured accordingly and will not follow the Business Model Structure because its Domain is the Infrastructure and not the Business.


Testing

As always, I will approach the development of this event-sourced application using TDD. I start by setting up an initial directory structure for my tests to separate the three different test suites.

These test suites are Unit tests, Integration tests and Feature tests. Laravel ships with an example Feature test and another example for a Unit test. Therefore, I only need to create a directory for the Integration test suite as follows:

mkdir ./tests/Integration

Then, I add an entry in ./phpunit.xml for this test suite.

<testsuite name="Integration">
    <directory suffix="Test.php">./tests/Integration</directory>
</testsuite>

I don't mind keeping the example tests provided by Laravel because the Unit test is useful to ensure that the test framework is working, while the Feature test ensures that the homepage is accessible. However, I want to change the tests' namespace from Tests\Feature to ESDemo\Tests\Feature, and the same for the Unit test and the TestCases provided by Laravel.

Therefore, I update every instance of the Tests namespace to ESDemo\Tests. Then, I update the composer.json file to auto-load the test classes.

"autoload-dev" : { "psr-4" : { "ESDemo\\Tests\\" : "tests/" } } ,

And refresh the auto-loaded classes.

composer dump-autoload

And obviously, I make sure the tests pass.

docker-compose exec es_demo_php_runtime ./vendor/bin/phpunit

As we go along in these tutorials, I will show you how I make use of testing scenario classes and how I develop our TestCases to fit each suite because, as an example, Unit tests do not require the application instance, while Laravel's TestCase needlessly creates the entire app for every test.

Until next time

I want to end this article with a small note about the amount of work involved when starting a new Event Sourcing solution. As we can see from this article alone, even just planning the Technology Stack for our example project seems extensive. Please don't fret over this; the planning does take time, but does it take more time than the planning involved when implementing other Patterns of Enterprise Application Architecture? I don't think so. It didn't take me more time to plan and execute the steps in this article than I spend on any other containerised system.

The entire process I covered in this article took me less than half an hour. This includes planning which databases and frameworks to use, getting GitHub set up and setting up my local environment. It will take you the same amount of time once you've repeated the process on a few projects. It is also good to refactor your development process, the tools you use and your overall strategy for getting things done, as often as needed.