Stateful Microservice with .NET Core, Docker, and PostgreSQL


The last post I did on this topic was titled, “Setup Visual Studio, .NET, and Docker Under an Hour”.  In that post, I covered how to get up and running with Docker on a Windows machine and how to leverage .NET Core within a Linux container image.  If you haven’t read that post yet, I recommend that you start there as I will be building off of that post with some more advanced topics.

Being a Connected Systems / EAI developer, my main goal for researching Docker has been to see how it meshes with some of the patterns that we see in the Microservices Architecture space; specifically, the Database per Service pattern and the Single Service Instance per Host pattern.

I’ve noted through my exploring that Docker, in and of itself, is not a Microservices framework, but that it definitely can be leveraged to easily create Microservices.  The platform provides a number of mechanisms for applying patterns such as the two mentioned previously.

In this post I will cover how to leverage .NET Core alongside Docker, in order to achieve several Microservices Architecture patterns.


What are Microservices?

In this post, I will not be covering Microservices soup-to-nuts.  If you would like a full overview of Microservices, here are a few really good places to start:

  • Martin Fowler: Microservices – This article is robust, but, at the same time, not too wordy. It provides a great overview of Microservices and I recommend anyone that is new to the topic start here.
  • Microservices.io – Similar to how I view Service Oriented Architecture, one way I like to break down the term “Microservices” is to see it as an architecture style that consists of many patterns that all aim to achieve similar goals: writing software via extremely decoupled and encapsulated fine-grain services that expose vertical slices of business capabilities. With that in mind, this site is aimed at documenting all of the various patterns that we use to architect Microservices.
  • Building Microservices by Sam Newman – A really good book on the topic. I highly recommend the read, but will admit that I have not made it cover to cover just yet.  I hope to soon.


We’ve all reached a point in our careers where we’ve maintained an application that is nearly impossible to extend.  To add a feature, we must make numerous changes to layer upon layer of abstracted logic within project solutions that continue to grow in size over time.  To simply add a new data element to an app, we may find ourselves having to

  1. Add a column to a DB
  2. Update a business entity in code somewhere
  3. Update the repository that uses that entity
  4. Modify the domain object (service contract) for a service that uses that repository
  5. Deploy the new service

And it doesn’t end there.  Now we potentially have to update several layers within several different UI projects to fully realize that one simple change.  Depending on the level of tight coupling within our enterprise, there could be numerous other updates for various applications within different business domains in order to ensure that our one change doesn’t break some downstream dependency.

When this type of situation occurs, it is safe to assume that we’re dealing with what is commonly coined a “monolithic application”.




The monolith is typically characterized by a system where users interact with a single application that houses lots and lots of tightly coupled code / DB logic, and sometimes even lots and lots of tightly coupled services.  This code is oftentimes built within layers that consist of UI, business domain, data, and services.  Eventually all of these layers communicate with a single database or SQL Server instance that oftentimes is…  yes, you guessed it, also tightly coupled.

Microservices step in to attempt to solve some of the challenges we face with monolithic systems.




With this type of architecture, every business capability is loosely coupled and exposed only through its microservice interface.  Typically, with this type of architecture, the database is seen strictly as a means of storing state over time, and it is encouraged that any business logic live within the microservice itself and not within a database layer.

Here are two patterns that this type of style dictates, which I will focus on for the remainder of this post:

  • Database per service
    • Services must be loosely coupled so that they can be developed, deployed and scaled independently
    • Different services have different data storage requirements
  • Single Service Instance per Host
    • Services are written using a variety of languages, frameworks, and framework versions
    • You want deployments to be reliable
    • Service instances need to be isolated from one another
    • Need to reduce DB coupling
    • Want to empower individual teams to make technology stack decisions rather than trickle those decisions from above

Now, I by no means am trying to state that Microservices are the ultimate and only intelligent means of architecting enterprise systems.  Nor do I want to imply that they should be used everywhere.  There are a number of great things that come with a Microservices Architecture style, such as extreme decoupling and service isolation.  However, we also gain some new challenges that the organization must be prepared to manage.  Things like: how do we track all these services?  Or…  how do we govern the SLAs around these services?  These are just two tradeoffs among many, which should be fully analyzed before marching into your CTO’s office and announcing you’ve found the next big thing.

Before proceeding, I also want to stress that this section really only hits the tip of the iceberg when it comes to an overview of Microservices and Microservices architectural patterns.  If you desire to learn more about Microservices Architecture, please refer to the books and articles that I reference above.


A Values Microservice

In this post, I will show you how to take the .NET Core WebAPI project template in Visual Studio and turn it into a Docker-enabled Microservice that leverages a separate Docker container for its state.  The service will provide a very simplified RESTful API that allows clients to use HTTP methods in order to add values, delete values by index, and fetch stored values.

The following diagram represents the approach that we’ll use to test this Microservice as well as what we can expect the service to return back to us:



Here is an outline of the six steps from above:

  1. The client POSTs the value “test1” to the API and the Microservice will store the value within its state
  2. The client POSTs the value “test2” to the API and the Microservice will store the value within its state
  3. The client POSTs the value “test3” to the API and the Microservice will store the value within its state
  4. The client uses the HTTP DELETE method against the API and the Microservice will remove the stored value that corresponds to the index in the URL (index 1 → “test2”)
  5. The client uses the HTTP GET method on the API
  6. The Microservice returns all the values that are currently stored in its state (Only “test1” and “test3” since “test2” was deleted)
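Before we write any C#, the expected behavior can be sketched with plain list semantics.  The Python sketch below is illustrative only (it is not the service code); it simply replays the six steps above:

```python
# Simulate the Values Microservice's state as an ordered list
# and replay the six-step test plan above.
state = []

# Steps 1-3: POST "test1", "test2", "test3"
for value in ["test1", "test2", "test3"]:
    state.append(value)

# Step 4: DELETE index 1, which removes the second stored value
deleted = state.pop(1)

# Steps 5-6: GET returns whatever remains in the state
print(deleted)  # test2
print(state)    # ['test1', 'test3']
```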


How is it Stateful?

In the context of Microservices, building out a service that stores values in memory would be very easy.  However, if we aim to maintain these values over time and survive any type of server reboot, we will need to consider how we can store these values on disk through some means.  For this, we have a few options:



We could leverage the built-in file system where our service code lives, but that would be too easy and, not to mention, probably not very extensible.  Let’s decouple things even further and consider the use of some sort of database.  MongoDB sounds promising, but, unfortunately, at the time I write this post, there is not a MongoDB .NET Core driver, so I’m going to wait on that.  For this particular implementation of our Values Microservice, I opted to use a PostgreSQL DB, which will also be hosted in Docker.  Pretty cool right?  That means we can implement an entire vertical slice (values service) that implements both the Database per Service pattern and the Single Service Instance per Host pattern (via Docker).

Here is an additional view of what that will look like when we’re done:



The Docker Engine will be used to host two containers: one for our Values Microservice code (.NET Core WebAPI) and a separate container for our PostgreSQL DB.



The goal of this post will be to build the aforementioned Values Microservice using .NET Core and Docker.  Now that we know the objective, let’s explore for a bit how we get there.

To achieve a stateful Microservice that uses a Postgres container for its state, we’ll want to consider the use of Entity Framework in order to simplify things a bit for us.  If you followed along with my previous post titled, “Setup Visual Studio, .NET, and Docker Under an Hour”, then your environment may still be setup with .NET Core 1.0.0 RC2.  The only problem with that setup is that .NET Core 1.0.0 is now RTM, and a few features have been introduced in EntityFrameworkCore and EntityFrameworkCore Code First that are going to help us.  We will need to upgrade in order to leverage some of the cool new things that have come out since my last post.

In the next section, I’ll cover what you need to do to upgrade your system in order to follow along with this post.  Feel free to skip the next section if you are comfortable installing these pre-requisites:

  • Visual Studio 2015 Update 3
  • Docker Toolbox version 1.11.2 (not Docker for Windows; this is the Oracle VirtualBox version)
  • .NET Core 1.0.0 VS 2015 Tooling + SDK Preview 2


Setting Up Your Environment

If you followed along with my previous post titled, “Setup Visual Studio, .NET, and Docker Under an Hour”, then your environment may still be setup with .NET Core 1.0.0 RC2.

To determine your current version, navigate to the “Add or remove programs” tool and search for “core”.



If you are on the older version, then we need to get this upgraded.  Here are the steps you should follow to do so.  If you did not follow my previous post, then you should be able to skip Step 2, but, in addition, you will need to install the Docker Tools for Visual Studio 2015 – Preview extension after step 3.  You will also need to have the Docker Engine installed on your machine.

  1. Install Visual Studio 2015 Update 3. The MSI can be downloaded here.
  2. Uninstall all remnants of .NET Core from your machine. The easiest way to do so is from the “Add or remove programs” console on your machine (can access this from the start menu).  From that menu, run a search for “core” and remove:
    1. Microsoft .NET Core 1.0.0 RC2 – VS 2015 Tooling
    2. Microsoft .NET Core 1.0.0 RC2 – SDK Preview 1
  3. Install .NET Core 1.0.0. The MSI can be downloaded here.

After performing these steps, your edition of Visual Studio 2015 should appear as follows:



In addition, from the “Add or remove programs” console, you should see the following apps installed:



With everything upgraded, you should be all set to continue along.


Code Bits

If you follow along with this post, I will show you how to create the project template and will instruct you what code to modify.  I recommend following along and being hands on as you may learn better that way.  However, if you want to just skim through and or browse the code, it can be found on my GitHub repository titled, “dockernetcore”.

Within that GitHub repository, I have folder that contains the out-of-the-box WebAPI .NET Core template, along with a few modifications located here: samples/StatelessValuesMicroservice.

And the final solution that support PostgreSQL can be found here: samples/StatefulValuesMicroservice.


Create a WebAPI that Runs in Docker

Hopefully you get the gist of what we’re trying to achieve here.  With the overview and pre-requisites out of the way, let’s move on to the fun part.  We’re now ready to check out some of the features provided by the Docker Tools for Visual Studio extension.

Let’s fire up Visual Studio 2015 and create a new project named “StatefulValuesMicroservice” that targets the ASP.NET Core platform.  Provide these details

IMPORTANT: For now, please use a folder in your C:\Users\<username> path for the location and use a project name that does not contain any spaces or periods.



NOTE: I have found that using fully qualified namespaces in your Docker project name can cause some issues.  See this post for more information.

NOTE: The compiler will complain if you use a path other than your user path.  Furthermore, I believe this most likely will not work due to how the debugger ties in with the DNX engine in order to enable live code updates and debugging.  There is a way to add a shared path to the VM in Oracle VirtualBox, but I haven’t had luck with doing so yet and will not be covering that in this post.

On the following page, choose “Web API” for the template and disable the “Host in the cloud” option.  This isn’t necessary, but will result in extra fluff that we don’t need for the sake of what we’re doing.



Add Docker Support to the Project

Thus far, we’ve seen some of the features of the .NET Core tooling, but nothing yet from the Docker toolkit.  Let’s change that.

Right click on the project we just created, hover over Add, and then select the “Docker Support” option.




Now let’s take a look at the solution explorer to see what we just did.




A couple of interesting files show up here now.  These files are utilized when you run the build on the project in order to provide the necessary glue to wire things up in your Docker host.  DockerTask.ps1 is the main entry point for all of this.

So how does the script get wired up?  Let’s take a look at the properties for the project.



From here, we can see that the Launch target for the newly created Docker profile is PowerShell.   PowerShell is then fed several parameters which ultimately lead to the execution of the DockerTask.ps1 file.

Set-ExecutionPolicy RemoteSigned .\Docker\DockerTask.ps1 -Run -Environment $(Configuration) -Machine '$(DockerMachineName)'


You may have even noticed that we now have an option in the top menu for starting the debugger with the Docker option.  Just take note of this for now.  We’ll come back and debug shortly, after we’ve taken a moment to look at some other interesting things.



Last, but not least, open the Docker.props file under the properties.



Inside that file, populate the “DockerMachineName” value with “default” as such:

<DockerMachineName Condition=" '$(DockerMachineName)'=='' ">default</DockerMachineName>

After populating the machine name value, restart Visual Studio.


Debugging the Stateless API

Once Visual Studio is restarted, open the “StatefulValuesMicroservice” project we just created from the start page.  Now, click the green debug button at the top in order to debug using Docker.  If all goes well, the Visual Studio output should indicate that the container is running.  This output will be displayed in the “Docker” output section as indicated by the drop down.  If the dropdown is still set to “Build” then either the project is still compiling and deploying, or something may have gone wrong.  Be sure to check for errors before proceeding to the next section.

If everything kicked off correctly, you should be able to navigate your web browser to the service’s api/values endpoint and see the following.


The content above is coming from our WebAPI, which is running and debugging via Visual Studio.


Making our Service Stateful

The service we created in the last section provided generic hard coded values (“value1” and “value2”).  This probably is not going to help much in teaching us how to write Microservices that deal with dynamic data.  In this section, we’ll take a look at the necessary steps to turn our service into a stateful Microservice that stores its service state in a linked PostgreSQL Docker container.


Stateful Service Dependencies

To get things working with PostgreSQL, we’ll need a couple of dependencies: Entity Framework Core and the EF Core PostgreSQL provider.

Navigate within Visual Studio to the NuGet Package Manager.



Within the Package Manager, click on the “Browse” tab and then search for “entityframeworkcore”.  From the options, install “Microsoft.EntityFrameworkCore”.



After accepting the disclaimer and installing the EntityFrameworkCore package, the Package Manager should reflect that the package is installed:


Repeat these steps to install the following NuGet packages:

  • Microsoft.EntityFrameworkCore.Tools: Latest prerelease (1.0.0-preview2-final)
  • Npgsql.EntityFrameworkCore.PostgreSQL: Latest stable (1.0.0)


After installing these, if you expand your references section, you should see that all the packages were installed successfully.


The last thing we need to do is open the project.json file and make a few manual updates.  In that file, you should see a line that reads something similar to this:

"Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final"


Copy and paste that line into the tools section so that you end up seeing something similar to the following JSON:

  "tools": {
    "Microsoft.EntityFrameworkCore.Tools": "1.0.0-preview2-final",
    "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.0.0-preview2-final"
  }


NOTE: The version numbers here might be different depending on when you are executing these steps.  As of right now, the latest version of Microsoft.EntityFrameworkCore.Tools is 1.0.0-preview2-final.

The reason we executed these steps was to register a tool that enables support for the PowerShell extensions necessary to scaffold our database migrations.  We will need these tools in order to automatically create the PostgreSQL database.  You will see more on this later when we begin to add the DB migrations to our project.


Setup Docker Compose File

You’re probably wondering at this point if we’ll need to install PostgreSQL for this to work.  The answer is, “that’s crazy talk…  Absolutely not!”  This is where the power of Docker steps in.  By the same mechanism we were able to spin up a container capable of hosting our .NET Core code, we will setup a container capable of hosting our PostgreSQL database.

Navigate in Visual Studio to the docker-compose.debug.yml file.  In here, you will see all of the instructions given to Docker in order to setup our service.  Let’s edit the contents to match the following (assuming your project name is “StatefulValuesMicroservice”):

version: '2'
services:
  statefulvaluesmicroservicedb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=test123!

  statefulvaluesmicroservice:
    depends_on:
      - statefulvaluesmicroservicedb
    image: username/statefulvaluesmicroservice:Debug
    environment:
      - POSTGRES_DB=statefulvaluesmicroservicedb
      - POSTGRES_PASSWORD=test123!
    ports:
      - "80:80"
    links:
      - statefulvaluesmicroservicedb
    volumes:
      - .:/app


IMPORTANT: Be very sure that you only indent the lines in this file with spaces and not tabs.  If you use tabs by accident, you will eventually end up with a hard-to-debug build error that goes to the tune of:

'C:\Users\cmyers\dev\ValuesMicroservice\src\ValuesMicroservice\bin\Docker\Debug\app\docker-compose.Debug.yml' -p
valuesmicroservice up -d
yaml.scanner.ScannerError: while scanning for the next token
found character '\t' that cannot start any token
in "C:\Users\cmyers\dev\ValuesMicroservice\src\ValuesMicroservice\bin\Docker\Debug\app\docker-compose.Debug.yml", line 6, column 1
Run : Failed to start the container(s)


So what did we just do here?  Let’s tackle it by line number.  On lines 3 – 6 we added the necessary instructions for Docker Compose to create our PostgreSQL container.  We gave it a name (line 3), told Docker what the base image is (“postgres”: line 4), and then configured the password through an environment variable on line 6.

The “statefulvaluesmicroservice” service keeps many of the same values as before, but we made a few changes.  Let’s focus only on the lines we changed:

  1. On lines 8 – 9, we instructed Docker Compose that this particular image requires that Docker also hosts and starts the “statefulvaluesmicroservicedb” container, which we defined on lines 3 – 6.
  2. On lines 13 – 14, we wired up a few additional environment variables. These are not required, but they will be helpful later in our application code so that we’re not having to hard-code values in the code.  It is also a better practice to store these in the YML files so we can have different configured values per environment (e.g., docker-compose.debug.yml vs docker-compose.yml).
  3. On lines 17 – 18, we added a Docker “link”. This part is crucial.  It enables our StatefulValuesMicroservice container to talk to the PostgreSQL container.  By default, the Docker Engine will lock down every container and is very aggressive about the firewall rules in the virtual bridge network.

We introduced a couple of concepts here that I don’t cover in great detail in this post.  If you want to explore these concepts further, here are a couple of links:

NOTE: We have not touched the docker-compose.yml file.  This would be required if you want to create an actual release of this that would go to a production environment.  I won’t be covering that file in this post, but once you get everything working, it will be worthwhile to you to explore this file and analyze why it’s there.


Setting Up Our Code-First Model

Now that we have our project all ready to go and our Docker Compose file is wired up with the appropriate hooks for PostgreSQL, we’re ready to create the data model we’ll use to maintain our service’s state.

First, create a folder within the “StatefulValuesMicroservice” project named “Data” by right clicking on the project and choosing the correct option from the menu.  Next, add a class to the “Data” folder named “StoredValuesContext.cs”.

Open the file after creating it and add some code to have the class inherit from DbContext like so:

using Microsoft.EntityFrameworkCore;

namespace StatefulValuesMicroservice.Data
{
    public class StoredValuesContext : DbContext
    {
    }
}


Keep the file open as we’ll be editing it later.  Right click the project again and add a second folder to it named “Entity”.  Add a class to the “Entity” folder named “StoredValue.cs”.  This will be the actual entity that will be stored in the DB and will need all the properties required, which will be a primary key named ‘Id’ and a place to store our strings named ‘Value’.  Open the file and add these two properties so that the contents are as follows:

namespace StatefulValuesMicroservice.Entity
{
    public class StoredValue
    {
        public int Id { get; set; }
        public string Value { get; set; }
    }
}


You can safely close the StoredValue file as we’ve done all that we need to it.  Now, let’s navigate back to the StoredValuesContext class.  Add a DbSet to the class that represents the table where the values will be stored.  You will need to add the required namespaces to the source file to make this work as shown below:

using Microsoft.EntityFrameworkCore;
using StatefulValuesMicroservice.Entity;

namespace StatefulValuesMicroservice.Data
{
    public class StoredValuesContext : DbContext
    {
        public DbSet<StoredValue> StoredValues { get; set; }
    }
}

Lastly, we’ll need to add a couple of constructors to the file.  Make the necessary updates to StoredValuesContext so that it reflects the following:

using Microsoft.EntityFrameworkCore;
using StatefulValuesMicroservice.Entity;

namespace StatefulValuesMicroservice.Data
{
    public class StoredValuesContext : DbContext
    {
        public StoredValuesContext() : base()
        { }

        public StoredValuesContext(DbContextOptions<StoredValuesContext> options) : base(options)
        { }

        public DbSet<StoredValue> StoredValues { get; set; }
    }
}

Setting Up Code-First Migrations

Let’s add support for migrations to our project now.  Remember in the previous section where we needed to edit the project.json file and added the “EntityFrameworkCore.Tools” JSON to the tools section?  This is the part where that step comes in handy.

Within Visual Studio, use the Search feature at the far top right and search for “package manager”.  You should see the following:



Choose “View → Other Windows → Package Manager Console”.  When the console window opens, run the following command:

Add-Migration -Name InitialCreate

The output should reflect “To undo this action, use Remove-Migration.”  That means we’re good.  This option is available because we wired up the appropriate tool in the project.json file.  If everything went well, you should have seen a new folder named “Migrations” created in our project along with two files: StoredValuesContextModelSnapshot.cs and <datetime>_InitialCreate.cs.

These files will be used when our application starts up in order to create a DB (on the first run) that is capable of housing our service’s state.  However, we still need to wire up both these migrations and the actual StoredValuesContext within .NET Core’s Kestrel engine.


Wiring Up the Migrations and Context

Navigate to the “Startup.cs” file within our “StatefulValuesMicroservice” project.  In this file we will need to do a few things.

First, add a using statement for Microsoft.EntityFrameworkCore:

using Microsoft.EntityFrameworkCore;


Next, edit the ConfigureServices function and add the DB setup code shown below.

        public void ConfigureServices(IServiceCollection services)
        {
            // Add framework services.
            services.AddMvc();

            // Setup DB
            string dbName = Environment.GetEnvironmentVariable("POSTGRES_DB") ?? "";
            string dbPassword = Environment.GetEnvironmentVariable("POSTGRES_PASSWORD") ?? "postgres";

            // The linked Postgres container's hostname matches the database name
            // configured in docker-compose.debug.yml, so dbName is used for both.
            string pgConStr = string.Format("Host={0};Port=5432;Database={0};User ID=postgres;Password={1}", dbName, dbPassword);
            services.AddDbContext<StoredValuesContext>(options => options.UseNpgsql(pgConStr));
        }


The code we just added grabs the values that we configured in our compose file, or supplied defaults when they are absent.  It then wires up dependency injection for our StoredValuesContext, instructing all instances of that class to use Npgsql (PostgreSQL) for their DB context along with the connection string.
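To see how the environment variables and defaults combine, here is a small Python sketch that mirrors the connection-string logic above (the function name is mine, and it assumes the database name doubles as the linked container’s hostname, as in our compose file):

```python
# Mirror of the ConfigureServices connection-string logic: read the values
# injected by docker-compose, fall back to defaults when they are absent.
def build_pg_connection_string(env):
    db_name = env.get("POSTGRES_DB", "")            # set in docker-compose.debug.yml
    db_password = env.get("POSTGRES_PASSWORD", "postgres")
    # In this setup the linked container's hostname matches the database name.
    return "Host={0};Port=5432;Database={0};User ID=postgres;Password={1}".format(
        db_name, db_password)

print(build_pg_connection_string(
    {"POSTGRES_DB": "statefulvaluesmicroservicedb", "POSTGRES_PASSWORD": "test123!"}))
# Host=statefulvaluesmicroservicedb;Port=5432;Database=statefulvaluesmicroservicedb;User ID=postgres;Password=test123!
```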

The last updates we need to make are to the Configure method.  Add the DB-creation code shown below to that method.

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            // Create DB on startup
            using (var serviceScope = app.ApplicationServices.GetRequiredService<IServiceScopeFactory>().CreateScope())
            {
                serviceScope.ServiceProvider.GetService<StoredValuesContext>().Database.Migrate();
            }

            // Snipped: the rest of the template-generated pipeline configuration
        }

This piece is necessary since we’re planning to use code-first EF migrations.  The “Database.Migrate()” piece is actually responsible for two things:

  1. Creating the database in PostgreSQL if it doesn’t already exist
  2. Migrating the DB schemas to the latest versions

EF does not do this automatically, so this piece is necessary.  With all of this complete, we should be ready to start modifying our ValuesController to use the PostgreSQL DB for its service state.
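Conceptually, you can think of Database.Migrate() as bookkeeping over a list of applied migrations.  The Python sketch below is a rough illustration of that idea, not EF’s actual implementation:

```python
# Rough sketch of the Migrate() behavior: create the database if it is
# missing, then apply only the migrations that have not yet been applied.
def migrate(database, all_migrations):
    if database is None:
        database = {"applied": [], "tables": {}}  # first run: create the DB
    for name, apply_fn in all_migrations:
        if name not in database["applied"]:
            apply_fn(database["tables"])          # bring the schema up to date
            database["applied"].append(name)
    return database

migrations = [("InitialCreate", lambda tables: tables.setdefault("StoredValues", []))]

db = migrate(None, migrations)   # first startup: DB created, migration applied
db = migrate(db, migrations)     # subsequent startups: nothing left to apply
print(db["applied"])             # ['InitialCreate']
```

Because each migration is recorded after it runs, calling migrate repeatedly is harmless, which is why running it on every startup is safe.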


Making ValuesController Stateful

At this point, navigate to the “ValuesController.cs” file within our project under the “Controllers” folder.  Take a moment to examine what it is doing currently before we make any changes.  I want it to really sink in what we’re changing here in order to turn this from a “stateless” into a “stateful” service.

The first thing we need to do is setup the necessary code for the dependency injection to work correctly.  For this, add a protected member to the code for our StoredValuesContext.  Then, create a constructor in the class that receives a StoredValuesContext and assigns it to our member.

    public class ValuesController : Controller
    {
        protected StoredValuesContext _context;

        public ValuesController(StoredValuesContext context)
        {
            _context = context;
        }

        // Snipped
    }


Next, remove the Put(…) and Get(int id) methods from the class.

After deleting those methods, we’ll need to edit each of the remaining controller actions to use the StoredValuesContext rather than the hard-coded values.  Modify your class so that it reflects the following:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using StatefulValuesMicroservice.Data;
using StatefulValuesMicroservice.Entity;

namespace StatefulValuesMicroservice.Controllers
{
    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        protected StoredValuesContext _context;

        public ValuesController(StoredValuesContext context)
        {
            _context = context;
        }

        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return _context.StoredValues.OrderBy(x => x.Id).Select(x => x.Value).ToArray();
        }

        // POST api/values
        [HttpPost]
        public void Post([FromBody]string value)
        {
            _context.StoredValues.Add(new StoredValue { Value = value });
            _context.SaveChanges();
        }

        // DELETE api/values/5
        [HttpDelete("{index}")]
        public void Delete(int index)
        {
            // Order by Id so the index reliably reflects insertion order.
            var valToDelete = _context.StoredValues.OrderBy(x => x.Id).Skip(index).Take(1).First();
            _context.StoredValues.Remove(valToDelete);
            _context.SaveChanges();
        }
    }
}

We should be all set now to debug the application.


Testing the Stateful Values Microservice

Let’s debug the service now and see what happens.  Start by clicking the debug button located at the top of Visual Studio.  After everything has finished building, compiling, deploying, and starting, please navigate to the api/values endpoint in a browser.  If everything goes well, you should see this:


This is a good sign.  Since we’re not seeing any errors, it means that it successfully created the context in our controller, connected to the PostgreSQL container, and created the database.

It’s a bit too early to crack open a cold one just yet though.  We need to validate that we can actually store and retrieve values.

Here is a copy of the diagram from above.  This is the test we will be running:



NOTE: For the next parts, I will use a tool called “Postman” to run some HTTP tests.  Fiddler or any other HTTP debugger will suffice, but if you would like to try Postman, you’ll need to install it from the Chrome Web Store.


For my first action, I’m just going to call the root api/values endpoint and receive an empty JSON array:



Now let’s post a couple of values to the API.  This will be done through an HTTP POST, and we will need to set the Content-Type HTTP header to “application/json”.



In addition, we will need to add a value in the HTTP body that is wrapped in quotation marks.



We’re safe to send that off and we should receive Status: 200 OK.

Repeat these steps two more times with “test2” and “test3” in order to end up with three values stored in the service’s state: test1, test2, and test3.

Now let’s delete test2 from the state.  Per our implementation of the Delete method within the ValuesController.cs file, we should be able to supply an index in the URL of the item we wish to delete.  The index starts at 0, so our mapping currently would be: 0 = test1, 1 = test2, and 2 = test3.  Thus, let’s delete index 1.  To do so, we need to use the HTTP DELETE method and provide the appropriate index at the end of the URL as follows:



If everything goes well, you should receive a “Status: 200 OK”.

The stored state within PostgreSQL should only contain test1 and test3 now.  Let’s put that theory to the test.  Navigate back to the root URL with an HTTP GET:



And this is great news!  This simple result means quite a lot.  It means that:

  1. We have two containers running in Docker: our service and the backend PostgreSQL DB for state (Docker Compose)
  2. Our service container was successfully linked to the PostgreSQL container and the two are able to communicate (Docker Compose)
  3. Our EF code successfully created the database and migrated the DB to the latest version (EF Code-First)
  4. Our dependency injection setup code in Startup.cs is working (Kestrel)
  5. Our EF model is working as intended (EF Code-First)


Not too bad, right?  But we still have one more thing to do before we call it a wrap.  Let’s put this to yet another test and restart our containers to make sure they can survive server reboots and the like.


Last Test: Will the Containers Survive a Restart?

To do this test, we’ll need to cozy up a bit to the console and run a few fairly simple commands.  First things first though… stop the Visual Studio debugger.  Then open the “Docker Quickstart Terminal” from the start menu.

Once the console opens (it may take a bit for you to receive a prompt), run a “docker ps”.

$ docker ps
CONTAINER ID        IMAGE                                COMMAND                    CREATED            STATUS           PORTS        NAMES
661ad3714e46        username/valuesmicroservice:Debug    "/bin/bash -c 'if [ \""    8 minutes ago      Up 9 minutes     >80/tcp      valuesmicroservice_valuesmicroservice_1
bc1193b3782c        postgres                             "/docker-entrypoint.s"     16 minutes ago     Up 17 minutes    5432/tcp     valuesmicroservice_valuesmicroservicedb_1


Take note of the first three or more characters of each container ID (enough to be unique) and then run the following with those values: docker stop <unique_id1_substring> <unique_id2_substring>.  In my case, the command was docker stop 661 bc1:

$ docker stop 661 bc1


Run a “docker ps” again.  Nothing should be listed as running now, which means our two containers are stopped — but let’s confirm it.  Jump over to Postman or whichever HTTP debugging tool you’re using and attempt to fetch values from our API with an HTTP GET.  You should get nothing back at all; something along the lines of “Could not get any response.  There was an error connecting to …”
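You can also confirm from the terminal that the containers are stopped rather than removed.  A quick sketch (the name filter matches the container names from my run; yours may differ):

```shell
# -a lists stopped containers too; both should now show an "Exited" status.
docker ps -a --filter "name=valuesmicroservice"
```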

Alright, now jump over to Visual Studio and start the debugger.  If you pay careful attention to the output of the Build and Docker commands, you should notice that it does not re-create the containers; this time around, it simply starts them.  This is because we have not changed anything in the underlying code.  The VS Docker Tools extension tracks which versions of our containers exist on the Docker host and whether any code has changed since they were built.  This is a really great feature!
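If you prefer, the same restart can be performed straight from the terminal without Visual Studio; since the stopped containers still exist, docker start brings them back with their state intact.  The ID prefixes below are from my run and will differ on your machine:

```shell
# Restart the existing stopped containers; no rebuild is needed.
# Start the database container first so the service can reach it.
docker start bc1 661
```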

Once everything is running, attempt another HTTP GET against our api/values controller.



Voilà!  Hopefully, your run-through is reflecting the same result.  If not, you may need to skim back through the various sections and make sure nothing was missed.  It is also possible that something has changed in the way one or more of these platforms and frameworks work since the time I posted this article.

A couple things to check:

  • Make sure that your Docker engine is running via the Docker Quickstart Terminal
  • Ensure that your Docker Compose file is configured correctly
  • Ensure that you ran all of these steps at the same privilege level — mixing elevated (Administrator) and non-elevated runs may cause issues


Beyond that, feel free to leave me a comment if you get stuck and I can try to work with you on it.


I hope you enjoyed this post.  Please use the social icons below to share this on social media.  In addition, I moderate the comments so that I can be sure to review and respond to any comments you send me.  Let me know if there are any other topics you would like me to cover in this .NET Core and Docker series.

There is definitely more to come on the Microsoft and Docker front.  I expect that over the next few years we will really start to see the .NET Core footprint grow and Docker adoption within the Microsoft space increase.  Even in just this year alone (2016), we saw .NET Core, Docker for Windows, and Windows Containers.  I’m sure we will continue to see really cool advancements in this space in Q4, throughout 2017, and in years to come!


Reader's Comments »


  2. By Sulthon Zainul Habib on August 18, 2016 at 7:10 am

    Hi, I’m Sulthon.  I tried that procedure but
    I got this error:
    Severity Code Description Project File Line Suppression State
    Error Error Running: powershell -NonInteractive -ExecutionPolicy RemoteSigned .\DockerTask.ps1 -Build -Environment Debug -Machine ” -ClrDebugVersion VS2015U2. See the output window for details. StatefulValuesMicroservice …\StatefulValuesMicroservice\src\StatefulValuesMicroservice\DockerTask.ps1 1

    Severity Code Description Project File Line Suppression State
    Error Failed to run the command: “…\StatefulValuesMicroservice\src\StatefulValuesMicroservice\DockerTask.ps1 -Run -Environment Debug -Machine ” -RemoteDebugging $True -OpenSite $False”. Click for details. 0

    Severity Code Description Project File Line Suppression State
    Error MSB3073 The command “powershell -NonInteractive -ExecutionPolicy RemoteSigned .\DockerTask.ps1 -Build -Environment Debug -Machine ” -ClrDebugVersion VS2015U2″ exited with code 1. StatefulValuesMicroservice …\StatefulValuesMicroservice\src\StatefulValuesMicroservice\Properties\Docker.targets 43

  3. By Amol Gholap on August 28, 2016 at 5:35 am

    Very nice work Chris!!

  4. By Ed on September 5, 2016 at 11:33 am

    Great article. Very helpful.

    The indentation for environment for the statefulvaluesmicroservicedb should be corrected in the docker-compose.debug.yml file- it’s a little confusing for us newbies!

  5. By JMelis on September 9, 2016 at 1:15 pm

    Thanks for the tutorial!

    One thing, I was running into problems trying to run the project after making things “stateful”. It appears my problem was being caused by:


    in the compose file. Removing this lets things work as expected. The code you have at GitHub seems to confirm this finding. It would be helpful if you updated the compose file step to reflect this.

    Thanks again

  6. By JMelis on September 9, 2016 at 2:17 pm

    Please disregard my previous comment. There was something else going on.
