Extract and Deploy a Microservice with Consul on Kubernetes
Consul service mesh provides a robust set of features that enable you to migrate from monolith to microservices in a safe, predictable and controlled manner.
This tutorial focuses on the process of extracting code from the monolith into a separate microservice and deploying that first microservice to the Consul service mesh on Kubernetes. It is designed for practitioners who will be responsible for developing microservices that will be deployed to a Consul service mesh running on Kubernetes. However, the concepts discussed should be useful for practitioners developing microservices that will be deployed to any environment.
In this tutorial you will:
- Review the scope of the initial implementation
- Extract the microservice code to a new project and git repository
- Perform the identified refactorings
- Deploy the microservice to the service mesh
Prerequisites
The following prerequisites are required to complete the tutorial.
- Access to a Kubernetes cluster
- Helm
- kubectl
- A git command line client
Source code
This tutorial uses an example application called HashiCups. You can review the source code for the different tiers of the application on GitHub. Specifically, the code for the monolithic API can be found at product-api-go, and the code for the microservice can be found at coffee-service.
You can either clone those repositories locally, or review them online. This tutorial will review the code, but does not require you to make any modifications. Also worth noting, the code in those repositories is under active development, but the links above should take you to the correct tagged version that supports this tutorial.
Deployment
The HashiCups example application is intentionally designed so that it can be deployed to any environment. For this tutorial, the database tier of the application will be deployed to the Consul service mesh on Kubernetes alongside the extracted microservice. This was intentionally done to simplify the tutorial experience, but is likely not representative of your current architecture, or all of the challenges you will face when migrating to microservices. No single tutorial can cover all the possible operating environment scenarios, so in an effort to focus on the process of extracting the microservice, this trade-off was intentionally made.
Review the scope
In the Scope a Microservice tutorial, the HashiCups team came up with the following scope of work for their microservices pilot project.
- Structural code
  - Baseline a bare bones service project in a new git repo
  - Refactor logging to conform with guidelines
  - Identify required configuration and refactor to environment variables if possible
  - Stub out route handlers to allow route configuration
  - Configure route handling
- Business logic
  - Migrate and refactor the coffees query route handler and supporting code
  - Write unit tests to gate deployment
- Deployment configuration
  - Create the Kubernetes deployment configuration
    - Include a livenessProbe
    - Add Consul specific annotations
  - Document a manual process for deployment that covers the following
    - Deploying container images
    - Injecting secrets
    - Deploying to the Kubernetes cluster
The remainder of this tutorial is a fictional case study of how that team used this scope of work to extract their first microservice from their existing monolith.
Structural code
The team had budgeted two team members to work on the pilot project. The teammates decided to pair program their solution in the hope that their combined knowledge would allow them to overcome any individual knowledge gaps more quickly. Once they settled on how they would work together, they agreed to tackle the structural code first, so they huddled around a laptop and started coding.
New project setup
Baselining the new project was the easiest part. They created a new git repository and then cloned it. They then initialized the root directory and ported over the existing main.go code from the monolith. Then they deleted the sections that either weren't relevant to the coffee-service, or couldn't be handled yet because other required structural code did not yet exist in the new repository. Wherever they deleted a line because of missing required structural code, they added a panic call so that tests would fail until the missing code was available. Then they started looking at the rest of their structural code checklist.
While discussing the porting process, the duo had a thought. They had both worked on projects influenced by domain-driven design (DDD) before. They felt that the microservices architecture lent itself well to that design discipline, specifically around the use of bounded contexts. Another element they liked from DDD was the pattern language.
The pair were convinced that it wouldn't be that hard to refactor the code during the porting process, at least in terms of naming conventions. They pitched their idea to the team, and got immediate buy-in. They agreed to rename structs and packages to conform to the service, repository, entity nomenclature during the port. They also agreed not to spend any time trying to create generic code around this change. They felt that would be a bit of scope creep they themselves were adding. Limiting the improvements to naming would be really low risk, and was something everyone had been discussing for a while.
Refactor logging
The first refactoring job was to improve their logging situation. Before starting on that process, they made sure they had their logging guidelines available so that they could reference them as they had to make decisions or judgement calls while refactoring the structural code.
Level | Guideline |
---|---|
Info | Lifecycle events; component initialization; inter-component method calls; external service calls |
Debug | Intra-component method calls; flow of control branching; coarse grained contextual state |
Trace | Flow of control iteration; fine grained contextual state |
Warn | Expected handled errors; unexpected recoverable errors |
Error | Unexpected or unrecoverable errors |
The team decided it was a good idea to add comments to any lines that included logging statements that indicated which guideline informed the logging statement. They felt this provided them two benefits. First, if they needed to look back on why they made the decision to log something, they could understand the original developer's intent when adding the log statement. Second, they knew that eventually they wanted to move towards some sort of instrumented logging and telemetry approach but just couldn't invest in it right now. They felt this was a way of lowering the interest rate on the technical debt they were incurring by not solving that problem right now.
With these decisions and guidelines in mind, this is what they came up with for their main function.
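A sketch of what that main function might have looked like at this stage is shown below. It assumes hashicorp/go-hclog as the logging library; the route handlers are still stubbed out at this point, and the guideline comments map each log statement back to the table above.

```go
package main

import (
	"net/http"
	"os"

	"github.com/hashicorp/go-hclog"
)

func main() {
	// Guideline: Info -- component initialization.
	// LOG_LEVEL and LOG_FORMAT are read from the environment (see the
	// configuration table below).
	logger := hclog.New(&hclog.LoggerOptions{
		Name:       "coffee-service",
		Level:      hclog.LevelFromString(os.Getenv("LOG_LEVEL")),
		JSONFormat: os.Getenv("LOG_FORMAT") == "json",
	})

	// Guideline: Info -- lifecycle events.
	logger.Info("Starting coffee-service", "bind_address", os.Getenv("BIND_ADDRESS"))

	// Route handlers are still stubbed out at this stage; the panic calls
	// described above live inside them until the real code is ported.
	err := http.ListenAndServe(os.Getenv("BIND_ADDRESS"), nil)

	// Guideline: Error -- unexpected or unrecoverable errors.
	logger.Error("Server terminated", "error", err)
	os.Exit(1)
}
```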
Build configuration from environment variables
Next, they needed to start thinking about configuration. During the analysis phase, the team had done quite a bit of this work up front. The following table shows the required configuration they documented previously. During the implementation of the code, they determined that it should be possible, and in fact preferable, to provide all of this configuration to the service as environment variables, 12-factor-app style.
Variable | Notes |
---|---|
USERNAME | Postgres username. Used to be in a file on the file system. Refactor to environment variable. |
PASSWORD | Postgres password. Used to be in a file on the file system. Refactor to environment variable. |
LOG_FORMAT | json |
LOG_LEVEL | New. Will allow toggling log level. |
BIND_ADDRESS | New. Used to bind to port using dynamic config. Refactor to binding to localhost:port. Localhost required by Consul. |
VERSION | New. Want to experiment with running multiple versions from same binary. |
They also decided it was a good idea to create a sub module for encapsulating the loading of this configuration at startup. They didn't want to pollute the main function with logic not related to lifecycle management. They decided that the loaded configuration would be mapped to a custom data structure so that it was easier to pass the configuration around the application as needed. So, they created a config package and a NewFromEnv factory method that would create and return a Config struct from environment variables.
In the process of doing that, they realized that USERNAME and PASSWORD were just values used in their database connection string, and that it was the connection string itself they actually cared about. So, they decided to format a connection string when creating a new Config instance and set that to a field on the struct.
Similarly, they realized they weren't really interested in the LOG_FORMAT or the LOG_LEVEL so much as ensuring that they had a Logger singleton that could ensure consistent logging configuration throughout the application. So, they decided to have the factory method also handle configuring the Logger and then add it to the Config struct so that it could be shared across the application. This was the final Config abstraction they came away with.
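The sketch below illustrates that abstraction. The field names, the connection string format, and the use of hashicorp/go-hclog are assumptions for illustration rather than the exact code in the repository.

```go
// Package config encapsulates loading service configuration from
// environment variables at startup.
package config

import (
	"fmt"
	"os"

	"github.com/hashicorp/go-hclog"
)

// Config holds everything the rest of the application needs at runtime.
type Config struct {
	ConnectionString string       // built from USERNAME and PASSWORD
	BindAddress      string       // e.g. "localhost:9090"; localhost is required by Consul
	Version          string       // used to select which service version to instantiate
	Logger           hclog.Logger // shared logging singleton
}

// NewFromEnv builds a Config from environment variables, 12-factor style.
func NewFromEnv() (*Config, error) {
	username := os.Getenv("USERNAME")
	password := os.Getenv("PASSWORD")
	if username == "" || password == "" {
		return nil, fmt.Errorf("USERNAME and PASSWORD must be set")
	}

	// The service only ever needs the connection string, so format it once
	// here rather than passing the raw credentials around. The host, port,
	// and database name below are illustrative; the sidecar exposes the
	// postgres upstream on localhost.
	connection := fmt.Sprintf(
		"host=localhost port=5432 user=%s password=%s dbname=products sslmode=disable",
		username, password)

	// LOG_FORMAT and LOG_LEVEL are consumed here to configure the shared
	// Logger singleton instead of being exposed as individual fields.
	logger := hclog.New(&hclog.LoggerOptions{
		Name:       "coffee-service",
		Level:      hclog.LevelFromString(os.Getenv("LOG_LEVEL")),
		JSONFormat: os.Getenv("LOG_FORMAT") == "json",
	})

	return &Config{
		ConnectionString: connection,
		BindAddress:      os.Getenv("BIND_ADDRESS"),
		Version:          os.Getenv("VERSION"),
		Logger:           logger,
	}, nil
}
```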
Now that they had their configuration plan, they updated their structural code to work against the interfaces they had just defined. This is what they came away with.
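A sketch of how main might consume the new config package is shown below; the module import path and the error handling details are illustrative.

```go
package main

import (
	"fmt"
	"os"

	// Import path is illustrative; adjust it to the real module path.
	"github.com/hashicorp-demoapp/coffee-service/config"
)

func main() {
	// Guideline: Info -- component initialization.
	conf, err := config.NewFromEnv()
	if err != nil {
		// Guideline: Error -- unrecoverable error; fail fast at startup.
		fmt.Fprintf(os.Stderr, "unable to load configuration: %s\n", err)
		os.Exit(1)
	}

	// The shared Logger singleton now comes from the Config struct.
	conf.Logger.Info("Configuration loaded", "bind_address", conf.BindAddress)

	// Route configuration is still stubbed out at this point and is added
	// in the "Configure route handling" step below.
}
```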
Stub out route handlers
This team had been using gorilla/mux for multiplexing in the monolith, and they liked it. This library works similarly to how other web frameworks handle route configuration (e.g. ASP.NET Core, Spring MVC, expressjs, rocket). In all these frameworks, some entry point method, func main() in this case, is responsible for mapping supported routes to the appropriate route handler. The handler contains the logic related to incoming and outgoing HTTP concerns, and delegates business logic to whatever supporting code it needs to fulfill the request.
The goal for this phase was to get the structural code to compile and refactor to the DDD naming convention. This was not the time to implement all the business logic. They knew they needed to support two routes: health and coffees. The health route was required to support the Liveness Probe for health checking, and the coffees route supported the business logic needs of the application. They also knew the interface the handlers needed to implement (http.Handler), and they knew they wanted a factory method for each handler that could be used to construct instances of the handler. So they implemented just that, and stopped.
Here is an example of what they implemented for the health service.
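The sketch below shows the stub-and-factory shape of that handler. The package layout, struct fields, and response body are illustrative rather than the exact code in coffee-service.

```go
// Package service contains the HTTP route handlers.
package service

import (
	"fmt"
	"net/http"

	"github.com/hashicorp/go-hclog"
)

// Health backs the Kubernetes liveness probe and implements http.Handler.
type Health struct {
	logger hclog.Logger
}

// NewHealth is the factory method main uses to construct the handler.
func NewHealth(l hclog.Logger) *Health {
	return &Health{logger: l}
}

// ServeHTTP satisfies the http.Handler interface.
func (h *Health) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
	// Guideline: Debug -- intra-component method calls.
	h.logger.Debug("Handling health request", "path", r.URL.Path)

	rw.WriteHeader(http.StatusOK)
	fmt.Fprintln(rw, `{"status":"ok"}`)
}
```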
Configure route handling
When it came time to configure route handling, the duo realized they had forgotten something. They actually needed to configure three route handlers: health, coffees, and a 404 not found handler.
They had already stubbed out handler code for the health and coffees routes, but hadn't handled the 404 issue. Luckily, the gorilla/mux package provides a feature for handling 404s. The router struct has a NotFoundHandler field, and the convention is to map it to an inline http.HandlerFunc.
All that was left for them to do was to configure all the handler mappings. The final contents of their main function are listed below with inline comments to help reinforce the concepts outlined above. Once the team got this far, the project compiled, so they stopped and celebrated their first success. You should too!
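A sketch of what that main function might look like is shown below, assuming the config package and handler factories from the earlier sketches; service.NewCoffee and the module import paths are illustrative names.

```go
package main

import (
	"fmt"
	"net/http"
	"os"

	"github.com/gorilla/mux"

	// Import paths are illustrative; adjust them to the real module path.
	"github.com/hashicorp-demoapp/coffee-service/config"
	"github.com/hashicorp-demoapp/coffee-service/service"
)

func main() {
	// Lifecycle: load configuration and the shared Logger singleton.
	conf, err := config.NewFromEnv()
	if err != nil {
		fmt.Fprintf(os.Stderr, "unable to load configuration: %s\n", err)
		os.Exit(1)
	}

	// gorilla/mux maps each supported route to its handler.
	router := mux.NewRouter()

	// /health backs the Kubernetes liveness probe.
	router.Handle("/health", service.NewHealth(conf.Logger)).Methods(http.MethodGet)

	// /coffees serves the business logic (still a stub at this stage).
	router.Handle("/coffees", service.NewCoffee(conf)).Methods(http.MethodGet)

	// 404s: gorilla/mux exposes a NotFoundHandler field on the router,
	// conventionally mapped to an inline http.HandlerFunc.
	router.NotFoundHandler = http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		// Guideline: Warn -- expected, handled errors.
		conf.Logger.Warn("Route not found", "path", r.URL.Path)
		http.Error(rw, "not found", http.StatusNotFound)
	})

	// Lifecycle: start listening on the configured bind address.
	conf.Logger.Info("Starting service", "bind_address", conf.BindAddress)
	conf.Logger.Error("Server terminated", "error",
		http.ListenAndServe(conf.BindAddress, router))
}
```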
Business logic
Now that the structural code had been refactored, it was time to move on to porting and refactoring the business logic components.
Migrate and refactor code
The HashiCups team had already determined during the service profiling phase that the components from the monolith that served coffee data had no code-level dependencies on other components of the monolith code base. With that in mind, they continued the analysis of the coffee serving process, and eventually came away with the following diagram.
As they had suspected, the coffee functionality was relatively straightforward. The handler called a DB facade. The DB facade performed the queries against the Postgres database and used a 3rd party library that could handle the data mapping of the SQL results to Go structs. The DB facade then returned the structs back to the handler. The handler took care of converting the structs to JSON for transmission over the network, and then wrote the serialized JSON to the response.
After final analysis, the team came up with the following list of target files that would need to be ported to the new project.
During the refactoring they copied over the code files from the monolith, and renamed them to the DDD naming conventions in the process. They also identified the need to add a layer of indirection between the structural code and the CoffeeService. Recall that the team decided during analysis that they wanted to be able to run multiple versions of the service from the same binary. They decided to place a top level factory method for creating new CoffeeService instances in the service package and have it accept a Config parameter. Using the passed configuration, the factory method could now handle three things.
First, it could use the version flag from the Config to decide which version of the service to instantiate. Refactoring to this indirection allowed the factory method to provide that desired feature. Second, the factory method could be responsible for instantiating a Repository instance using the connection string set on the passed Config. Third, the factory could pass both the Repository and the Logger singleton to the version-specific factory method. This, in effect, provided them with a dependency injection mechanism, which improved their overall testability position.
Once the refactoring for names was complete, the team ended up with the following files in the new project.
- service/service.go
- service/v1/coffee.go
- data/repository.go
- data/entities/coffee.go
- data/entities/ingredient.go
The core business logic within those files didn't need to change. The refactoring efforts were focused mainly on handling the structural code changes, and ensuring the code adhered to their new logging standards. They also had to update their main method to pass the Config struct to the service factory and to handle the lifecycle logic if service initialization fails.
Code from each of these packages has been extracted and included below to help you better understand the sequence at the code level. Once the team got this far, the project compiled, so they stopped and celebrated their second success. You should too!
CoffeeService instantiation sequence
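The sketch below illustrates that sequence. The function names (NewFromConfig, data.NewRepository, v1.NewCoffeeService) and import paths are illustrative; the real repository may organize this differently.

```go
// Package service exposes a top-level factory that hides which version of
// the CoffeeService gets instantiated.
package service

import (
	"fmt"
	"net/http"

	// Import paths are illustrative; adjust them to the real module path.
	"github.com/hashicorp-demoapp/coffee-service/config"
	"github.com/hashicorp-demoapp/coffee-service/data"
	"github.com/hashicorp-demoapp/coffee-service/service/v1"
)

// NewFromConfig returns a version-specific CoffeeService as an http.Handler.
// It acts as a simple dependency injection point: the Repository and the
// Logger singleton are built or taken from the Config here and passed down
// to the version-specific factory, which keeps each version testable.
func NewFromConfig(conf *config.Config) (http.Handler, error) {
	// Second responsibility: build the Repository from the connection
	// string carried on the Config.
	repository, err := data.NewRepository(conf.ConnectionString)
	if err != nil {
		return nil, fmt.Errorf("unable to connect to database: %w", err)
	}

	// First responsibility: pick the service version from the Config.
	switch conf.Version {
	case "v1", "":
		// Third responsibility: inject the Repository and Logger.
		return v1.NewCoffeeService(repository, conf.Logger), nil
	default:
		return nil, fmt.Errorf("unsupported service version: %s", conf.Version)
	}
}
```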
Migrate and refactor tests
Finally, they ported over their existing unit and behavioral tests, which they were lucky enough to already have, and refactored those as well to match the new project structure. After a few iterations, they felt confident that the port and refactoring were ready for a test deployment. Then they realized there was a significant shortcoming in their current testing strategy.
Previously, the layers of code were interacting via direct imports. For example, if the ordering code wanted to call methods in the auth package it would import that package, and make the function call. Now, they realized, services were interacting over HTTP, and function inputs and outputs needed to be serialized to JSON. What would happen, they asked themselves, if the expected input or output from an upstream service were to change without notification? In the world of library dependencies, this situation is often referred to as breaking interface. In the world of microservices, this is called breaking contract.
At first the team was uncertain about how best to approach this problem. They wanted to be able to iterate rapidly, and independently, but they needed a way to ensure they were still in sync with upstream service contracts before shipping new builds.
That was when they discovered the concept of contract testing. Using this technique, they realized they could still use hand crafted mocks for local testing, but call actual deployed instances of upstream services periodically to provide early warning detection of contract breakage. They also decided to defer implementing this for now, as it would involve research and experimentation, and wasn't part of their agreed-upon scope. As always, they added it to the backlog with a high value score, to be scheduled if the pilot project earned them a green light on the full migration.
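To make the idea concrete, a contract test might look something like the following sketch. Since the team deferred this work, nothing like it exists in the repository; the UPSTREAM_URL variable, the Coffee struct fields, and the /coffees endpoint are hypothetical.

```go
package contract

import (
	"encoding/json"
	"net/http"
	"os"
	"testing"
)

// Coffee mirrors the fields this service expects from the upstream contract.
type Coffee struct {
	ID    int     `json:"id"`
	Name  string  `json:"name"`
	Price float64 `json:"price"`
}

// TestCoffeesContract calls a real deployed instance of an upstream service
// and fails if the response no longer unmarshals into the expected shape.
func TestCoffeesContract(t *testing.T) {
	// UPSTREAM_URL is a hypothetical environment variable pointing at a
	// deployed test instance; skip when it isn't set so local runs still pass.
	url := os.Getenv("UPSTREAM_URL")
	if url == "" {
		t.Skip("UPSTREAM_URL not set; skipping contract test")
	}

	resp, err := http.Get(url + "/coffees")
	if err != nil {
		t.Fatalf("unable to reach upstream: %s", err)
	}
	defer resp.Body.Close()

	var coffees []Coffee
	if err := json.NewDecoder(resp.Body).Decode(&coffees); err != nil {
		t.Fatalf("upstream response no longer matches contract: %s", err)
	}
	if len(coffees) == 0 {
		t.Error("expected at least one coffee in the upstream response")
	}
}
```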
Deployment configuration
With the code refactoring complete, and all the automated tests passing, the team moved on to the task of defining the Kubernetes configuration.
Kubernetes configuration
The team made the following decisions relative to their Kubernetes deployment.
- Use the same Service and ServiceAccount for all versions of their service
- Name the Service, ServiceAccount, and Deployment coffee-service
- Pass environment variables to the container using the pod template
- Since the pilot project would be manual deploys only, anyone running the deployment would be responsible for injecting secrets into the pod template manually
- The livenessProbe would be configured as an httpGet probe with the path set to the /health route
Consul configuration
Next, the team added required and optional annotations to the pod spec. Specifically, they set connect-inject to true to ensure sidecar injection, defined connect-service-upstreams as postgres:5432 so that the coffee service could reach the upstream Postgres service, and then added some desired metadata around versioning and tier. The following snippet shows the specific annotations they added.
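The excerpt below sketches what that pod template metadata might look like. The two consul.hashicorp.com keys are the standard Consul annotations for sidecar injection and upstreams; the version and tier metadata shown here are illustrative labels, not required by Consul.

```yaml
# Pod template metadata excerpt from the Deployment.
template:
  metadata:
    labels:
      app: coffee-service
      version: v1    # assumed versioning metadata
      tier: api      # assumed tier metadata
    annotations:
      consul.hashicorp.com/connect-inject: "true"
      consul.hashicorp.com/connect-service-upstreams: "postgres:5432"
```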
The team also decided to add a ServiceDefaults config entry to ensure that the service mesh would enforce their decision that the service protocol should be http. The team already had plans to handle the rollout of the service using Consul's L7 Traffic Management features, which they knew from earlier research required that the service protocol be set to http.
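On Kubernetes, that config entry could be expressed as a ServiceDefaults custom resource like the sketch below, assuming the Consul CRD controller is enabled; it could also be written as an HCL config entry and applied with the Consul CLI.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: coffee-service
spec:
  # L7 traffic management requires an L7 protocol, so enforce http here.
  protocol: http
```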
Deployment manifest
The final contents of their deployment.yaml file are listed below with inline comments to help reinforce the concepts outlined above. Once the team got this far, they stopped and celebrated their third success. You should too!
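The manifest below is a condensed sketch rather than the team's exact file: the image tag, port, probe timings, and placeholder secret values are illustrative, and the Consul annotations repeat the snippet shown earlier.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coffee-service
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-service
spec:
  selector:
    app: coffee-service
  ports:
    - port: 9090        # illustrative port
      targetPort: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffee-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffee-service
  template:
    metadata:
      labels:
        app: coffee-service
      annotations:
        # Consul annotations from the previous snippet.
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/connect-service-upstreams: "postgres:5432"
    spec:
      serviceAccountName: coffee-service
      containers:
        - name: coffee-service
          image: hashicorpdemoapp/coffee-service:v0.0.1   # illustrative tag
          ports:
            - containerPort: 9090
          env:
            # Secrets are injected manually during the pilot.
            - name: USERNAME
              value: "postgres"      # placeholder
            - name: PASSWORD
              value: "CHANGE_ME"     # placeholder; never commit real values
            - name: LOG_FORMAT
              value: "json"
            - name: LOG_LEVEL
              value: "DEBUG"
            - name: BIND_ADDRESS
              value: "localhost:9090"   # localhost binding required by Consul
            - name: VERSION
              value: "v1"
          livenessProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 5
            periodSeconds: 10
```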
Manual deployment checklist
The final section in their scope document required the team to document a manual deployment process for the pilot project. This is what they came up with.
Stage 1 - Docker
Now that they had their service code passing all its tests, it was time to package it up as a container and publish it to a registry that their Kubernetes cluster could pull images from. They decided to go with Docker and Docker Hub for the experiment, and to consider other container runtime implementations or cloud specific container registries after they got approval to go past the pilot phase of the project.
They'd heard good things about Alpine as a base for docker images, and decided to go with that for their docker image. They created a very minimal Dockerfile that copied the binary to the image and used ENTRYPOINT to run the binary on startup. Here is the Dockerfile for reference.
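A minimal Dockerfile along those lines might look like this; the binary name, base image tag, and paths are illustrative.

```dockerfile
# Copy the pre-built Linux binary onto an Alpine base and run it on startup.
FROM alpine:3.12

# TLS roots in case the service ever calls external HTTPS endpoints.
RUN apk add --no-cache ca-certificates

COPY ./coffee-service /app/coffee-service

ENTRYPOINT ["/app/coffee-service"]
```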
Next they wrote a Makefile to build and push the container to Docker Hub. In the long run, they thought that automating this with some sort of GitOps-style approach using GitHub Actions or a deployment tool like HashiCorp Waypoint was the right way to go, but since they were limited on scope and time, they decided the Makefile was a reasonable compromise. Here is what they came up with.
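A sketch of such a Makefile is shown below; the image name and version are illustrative, and recipe lines must be indented with tabs.

```makefile
# Bumped by hand before each push during the pilot.
CONTAINER_VERSION ?= v0.0.1

# Image name is illustrative; replace with your Docker Hub organization.
IMAGE = hashicorpdemoapp/coffee-service:$(CONTAINER_VERSION)

build_binary:
	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o coffee-service .

build_docker: build_binary
	docker build -t $(IMAGE) .

push_docker: build_docker
	docker push $(IMAGE)
```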
So whenever it was time to push a new build, they needed to manually increment the CONTAINER_VERSION variable and run the Makefile push_docker rule. The rule was part of a chain that would build the binary, build and tag the docker image, and then finally push it to whatever registry the build machine's docker client was currently logged in to.
After the docker image was pushed to the registry, the next step was to manually set the Postgres username in the ./deployments/v1/coffee-service.yaml file. Once the secrets were added to the file, the team agreed that the operator should verify that the build machine's KUBECONFIG was currently targeting the correct Kubernetes cluster. After that, the operator could deploy the service with kubectl apply, and then finally the operator should manually back out the changes made to the Kubernetes manifest so that they didn't accidentally check the secrets into source control!
Everyone on the team cringed a bit at this last task. They immediately all agreed that work item #1, if the project got approved past the pilot phase, was to create build automation that included sensible secrets management.
The team wrote all this down in a pseudo-code checklist so that they could automate the process after the pilot. They added a git checkout step at the beginning to help them remember not to deploy from their local development branches.
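Their checklist might have looked something like the following sketch; the branch name and editor step are assumptions.

```shell
# Manual deployment checklist (pilot only -- automate after approval)

# 0. Never deploy from a local development branch.
git checkout main && git pull

# 1. Bump CONTAINER_VERSION in the Makefile, then build and push the image.
make push_docker

# 2. Manually set the Postgres secrets in the deployment manifest.
vi ./deployments/v1/coffee-service.yaml

# 3. Verify the build machine is targeting the correct cluster.
kubectl config current-context

# 4. Deploy the service.
kubectl apply -f ./deployments/v1/coffee-service.yaml

# 5. IMPORTANT: back out the secret edits so they never reach source control.
git checkout -- ./deployments/v1/coffee-service.yaml
```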
The team decided to test out their plan, so they all huddled around a workstation to see how it went.
Validate the deployment
With the microservice deployed to the kubernetes cluster, the team needed to validate the
deployment. Since the pilot project was focused solely on deploying one single
microservice as a proof of concept, they realized they didn't to worry about adding
an ingress gateway
or External LoadBalancer.
A simple kubectl port-forward
could be used to validate that the microservice
was working. So that's what they did.
Once the service port for the deployment was forwarded to their local test host, they used curl to send a request to the coffee-service.
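The commands might have looked something like this, assuming the service listens on port 9090.

```shell
# Forward the service port to the local test host.
kubectl port-forward deployment/coffee-service 9090:9090

# Then, from another terminal, query the coffees route.
curl -s localhost:9090/coffees
```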
Once the team got this far, they stopped and celebrated their fourth and final (for now) success. You should too!
After a quick celebration, they ran some load tests and made some adjustments to their Pod templates to give the service more resources. Once they were satisfied that the container resource specs were within reasonable limits, they called the business team and scheduled the demo!
Next steps
In this tutorial you:
- Reviewed the scope of the initial implementation
- Extracted the microservice code to a new project and git repository
- Performed the identified refactorings
- Deployed the microservice to the service mesh
Feedback
Did you like this case study style approach to learning? Did you find it distracting? Did you think we missed some obvious steps in the extraction process? Use our feedback form below to let us know what your thoughts and preferences are, or to let us know if we missed something.
Exercise for the reader
Give us a code review! This collection laid out a set of guidelines for implementation, but no one is perfect. Review our code and file an issue, or, if you are feeling really motivated or looking to get into open source, submit a PR.
Also, keep in mind, this project in its current form is a pilot. There are a number of concerns we didn't address. Some are listed below. Use the feedback form below to let us know what you'd like us to address next.
- Secrets management
- Build automation
  - Secrets injection
  - Alerts
  - Test gating
    - Unit
    - Integration/Acceptance
    - Performance
- Consul
  - ACL tokens
  - Ingress/Egress
- Code telemetry instrumentation
  - Metrics
  - Tracing
  - Structured Logging
  - Resource utilization
- Observability pipeline
- Microservice framework
- Test coverage