Service mesh with ECS and Consul on EC2
Consul on Elastic Container Service (ECS) integrates with Consul on Elastic Compute Cloud (EC2) to provide you with a complete service mesh ecosystem. Empowering your AWS ECS tasks with Consul service mesh connectivity lets you take advantage of features such as zero-trust security, intentions, observability, traffic policy, and more.
In this tutorial, you will use Terraform to create an ECS cluster, an example ECS application, and various AWS services. You can choose a learning path to either connect the ECS services to your existing Consul server on EC2 or deploy a complete environment that includes a standalone Consul server on EC2. This environment highlights the ease of deployment, simplified scalability, and reduced operational overhead gained from this pattern.
Specifically, you will:
- Customize the Terraform environment deployment script
- Deploy AWS resources using the Terraform script
- Inspect your environment using the Consul UI
- Explore the sample application UI
- Enable service mesh networking with Consul Intentions
- Decommission your environment with Terraform
While this tutorial uses elements that are not suitable for production environments, including a development-grade Consul datacenter and a lack of redundancy within the architecture, it will teach you the core concepts for deploying and interacting with a service mesh using AWS Elastic Container Service (ECS) and Consul on Elastic Compute Cloud (EC2). Refer to the Consul Reference Architecture for Consul best practices and the AWS Well-Architected documentation for AWS best practices.
Prerequisites
To complete this tutorial, you will need the following:
- Basic command line access
- Terraform v1.0.0+ installed
- Git installed
- AWS account and associated credentials that allow you to create resources
If you don't have AWS access credentials, create your AWS access key ID and secret access key by navigating to your security credentials in the IAM service on AWS. Click "Create access key" on that page to view your `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. You will need these values later.
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
Change into the directory with the newly cloned repository. This directory contains the complete configuration files.
Check out the `v0.5` tag of the repository.
Change directory to the ECS module.
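The steps above might look like the following sequence of commands. The repository URL and module directory name shown here are placeholders; substitute the ones provided by this tutorial's listing.

```shell
# Clone the repository (URL is an assumption; use the one from the tutorial)
git clone https://github.com/hashicorp/learn-consul-ecs.git

# Change into the newly cloned repository
cd learn-consul-ecs

# Check out the v0.5 tag of the repository
git checkout v0.5

# Change into the ECS module directory (directory name is a placeholder)
cd <ecs-module-directory>
```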
Create and configure credential resources
Create a Terraform configuration file for your secrets
Terraform will utilize your unique credentials to build a complete Elastic Cloud Compute (EC2) Consul cluster and example application in Elastic Container Service (ECS).
Create a file named `terraform.tfvars` in your working directory and copy the following configuration into the file. Replace the placeholders with your values and save the file.
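As a sketch, a minimal `terraform.tfvars` might look like the following. The exact variable names are defined in `variables.tf`; the names and values shown here are illustrative assumptions, not the definitive configuration.

```hcl
# Illustrative example only - confirm the variable names against variables.tf
name   = "consul"      # Prefix for named resources, including Secrets Manager secrets
region = "us-east-1"   # AWS region to deploy resources into
```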
To learn more about each of the Terraform attributes, see the respective resource documentation in the Terraform registry.
Note
By default, secrets created by AWS Secrets Manager require 30 days before they can be deleted. If this tutorial's environment is destroyed and recreated, a name conflict error will occur for these secrets. You can resolve this by changing the value of `name` in your `terraform.tfvars` file.
Configure AWS authentication
Configure AWS credentials for your environment so that Terraform can authenticate with AWS and create resources.
To do this with IAM user authentication, set your AWS access key ID as an environment variable.
Now set your secret key.
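For example, using the placeholder credentials from the AWS documentation (replace them with your own values):

```shell
# Replace these example values with your own AWS credentials
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
```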
If you have temporary AWS credentials, you must also set your `AWS_SESSION_TOKEN` as an environment variable. See the AWS provider documentation for more details.
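The session token is exported the same way (the value shown is a placeholder):

```shell
# Placeholder value - replace with your actual session token
export AWS_SESSION_TOKEN="<your-session-token>"
```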
Tip
If you don't have access to IAM user credentials, use another authentication method described in the AWS provider documentation.
Explore the Terraform manifest files
The Terraform manifest files used in this tutorial deploy various resources that enable your fully managed service mesh ecosystem. Below is the purpose of each Terraform manifest file.
- `consul-server.tf` - AWS EC2 Consul cluster deployment resources.
- `data.tf` - Data sources that allow Terraform to use information defined outside of Terraform.
- `ecs-clusters.tf` - AWS ECS cluster deployment resources.
- `ecs-services.tf` - AWS ECS service deployment resources.
- `iam.tf` - AWS IAM policy and role resources.
- `load-balancer.tf` - AWS Application Load Balancer (ALB) deployment resources.
- `logging.tf` - AWS CloudWatch logging configuration.
- `modules.tf` - AWS ECS task application definitions.
- `outputs.tf` - Unique values output after Terraform successfully completes a deployment.
- `providers.tf` - AWS and HCP provider definitions for Terraform.
- `secrets-manager.tf` - AWS Secrets Manager configuration.
- `security-groups.tf` - AWS security group port management definitions.
- `variables.tf` - Parameter definitions used to customize unique user environment attributes.
- `vpc.tf` - AWS Virtual Private Cloud (VPC) deployment resources.
- `terraform.tfvars` - Your unique credentials and environment attributes (created in the previous step).
- `scripts/consul-server-init.sh` - A bootstrap script for initializing Consul on an EC2 instance.
Note
By default, the `consul-server.tf` file creates a single-node Consul server. For production, we recommend using at least a three-node Consul datacenter. Check out the Consul Reference Architecture guide to learn more.
Deploy the Consul + ECS environment
With the Terraform manifest files and your custom credentials file, you are now ready to deploy your infrastructure.
Issue the `terraform init` command from your working directory to download the necessary providers and initialize the backend.
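```shell
terraform init
```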
Once Terraform has been initialized, you can verify the resources that will be created using the `plan` command.
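```shell
terraform plan
```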
Finally, you can deploy the resources using the `apply` command. Remember to confirm the run by entering `yes`.
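```shell
terraform apply
```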
Once you confirm, it will take a few minutes to complete the deployment. Terraform will print the following output if the deployment is successful.
Note
The deployment could take up to 10 minutes to complete. Feel free to grab a cup of coffee while waiting for the cluster to complete initialization, or learn more about the Raft protocol in a fun, interactive way.
Perform testing procedures
Access the Consul UI
Once your resources have been deployed, three unique values will be output to your console. Access the Consul UI by opening the `consul_ui_address` value in your browser. Log in to the secure Consul instance with the generated `acl_bootstrap_token` value. This authorizes you to interact with the Consul UI.
In your Consul UI, open the menu item labeled Services on the left side of the screen. Notice the informational text in service mesh with proxy on each ECS service.
Note
If the expected services are not displayed when you log into the Consul UI, refresh the page.
Explore the sample application
One of the ECS service tasks defined in this environment deploys the `fake-service` application, a Consul client agent, and an Envoy sidecar proxy in your ECS cluster.
Visit the unique `client_lb_address` URL that was output by Terraform after your run to see the deployed `fake-service` application.
Notice the lack of communication between the two services. This is due to the deny-by-default service mesh communication behavior.
Enable service mesh networking
Consul Intentions are used to control which services may establish connections or make requests.
In your Consul UI, open the menu item labeled Intentions on the left side of the screen. Click the Create button in the top right to create an intention.
Set the source service as hcp-ecs-example-client-app, the destination service as hcp-ecs-example-server-app, both namespace fields as default, and communication behavior to Allow. Click the Save button in the bottom left once complete.
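If you prefer the command line, the same intention can be created with the Consul CLI instead of the UI. This sketch assumes your terminal is configured to reach this datacenter (for example, with `CONSUL_HTTP_ADDR` and `CONSUL_HTTP_TOKEN` set to the values output by Terraform):

```shell
# Allow traffic from the client app to the server app (allow is the default action)
consul intention create hcp-ecs-example-client-app hcp-ecs-example-server-app
```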
Revisit your `fake-service` application at the unique `client_lb_address` URL that was output by Terraform after your run. Notice that the two services are now able to communicate with each other.
You have successfully deployed a Consul environment across ECS and EC2 using Terraform. Within this environment, you enabled service mesh communication with Consul intentions.
Destroy resources
Use the `terraform destroy` command to clean up the resources you created. Remember to confirm by entering `yes`.
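```shell
terraform destroy
```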
Once you confirm, it will take a few minutes to complete the removal. Terraform will print the following output if the command is successful.
Next steps
In this tutorial, you used Terraform to deploy a service mesh with AWS Elastic Container Service (ECS) and Consul on Elastic Compute Cloud (EC2). You accomplished this task using the Terraform AWS provider. You also learned how to enable service mesh communication between services using Consul intentions.
You can find the full documentation for the HashiCorp Cloud Platform and AWS providers in the Terraform registry documentation.
To get additional hands-on experience with Consul's service discovery and service mesh features, you can follow these guides to connect a Consul client deployed in a virtual machine or on Elastic Kubernetes Service (EKS).
If you encounter any issues, please contact the HCP team at support.hashicorp.com.