Set up and implement read
Note: We recommend using the Terraform Plugin Framework for new provider development because it offers significant advantages compared to the SDKv2. Refer to the Plugin Framework tutorials to learn how to create providers using the framework.
In these tutorials, you will write a custom provider against the API of a fictional coffee-shop application called HashiCups using the Terraform Plugin SDKv2. Through the process, you will learn how to create data sources, authenticate the provider to the HashiCups client, and create resources with CRUD functionality.
There are a few possible reasons for authoring a custom Terraform provider, including:
- An internal private cloud whose functionality is either proprietary or would not benefit the community.
- Extending the capabilities of an existing provider (bug fixes, new features, or customizations)
In this tutorial, you will set up your Terraform provider development environment and create a coffees data source that will return all coffees HashiCups serves. To do this, you will:
- Set up your development environment. You will clone the HashiCups repository and check out the `boilerplate` branch. This contains a scaffold for a generic Terraform provider.
- Define the coffees data source. You will add a scaffold that defines an empty schema and functions to retrieve a list of coffees.
- Define the coffees schema. The schema defines properties that allow Terraform to recognize, reference, and store the coffees data resource.
- Implement read. This read function invokes a `GET` request to the `/coffees` endpoint, then maps its value to the schema defined above.
- Add the coffees data source to the provider schema. This allows you to use the data source in your configuration.
Prerequisites
To follow this tutorial, you need:
- Go 1.15+ installed and configured.
- The Terraform 0.14+ CLI installed locally.
- Docker and Docker Compose to run an instance of HashiCups locally.
Set up your development environment
Clone the `boilerplate` branch of the Terraform HashiCups Provider repository. This serves as the boilerplate for your provider workspace.
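The clone step might look like the following; the repository URL is an assumption based on the usual HashiCorp naming, so confirm it against the link in the tutorial:

```shell
# Clone only the boilerplate branch
# (URL is an assumption; verify against the tutorial's repository link)
git clone --branch boilerplate https://github.com/hashicorp/terraform-provider-hashicups.git
```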
Change into the cloned repository.
The HashiCups provider requires an instance of HashiCups. Navigate to the `docker_compose` directory, then run `docker-compose up` to spin up a local instance of HashiCups on port `:19090`.
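Those steps look like the following, run from the repository root:

```shell
cd docker_compose
# Starts the local HashiCups instance on port 19090
docker-compose up
```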
Leave this terminal running.
In another terminal, verify that HashiCups is running by sending a request to its health check endpoint.
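A check with curl might look like this; the exact health path is an assumption, so adjust it to the endpoint your HashiCups instance exposes:

```shell
# Health path is an assumption; consult the HashiCups docs if this 404s
curl localhost:19090/health
```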
The directory should have the following structure.
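Based on the boilerplate contents described in the next section, the layout looks roughly like this (a sketch, not an exact listing):

```
terraform-provider-hashicups/
├── Makefile
├── docker_compose/
├── examples/
├── hashicups/
│   └── provider.go
└── main.go
```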
If you're stuck at any point during this tutorial, refer to the `implement-read` branch to see the changes implemented in this tutorial.
Explore your development environment
The boilerplate includes the following:
- `Makefile` contains helper functions used to build, package, and install the HashiCups provider. It's currently written for macOS Terraform provider development, but you can change the variables at the top of the file to match your `OS_ARCH`. If you're using Windows, update your `Makefile` accordingly. You can find a full list of supported `GO_ARCH` values here. The `install` function is configured to install the provider into the appropriate subdirectory within the default macOS and Linux user plugins directory, as defined by the Terraform 0.13 specifications.
- `docker_compose` contains the files required to initialize a local instance of HashiCups.
- `examples` contains a sample Terraform configuration that you can use to test the HashiCups provider.
- `hashicups` contains the main provider code. This is where you will define the provider's resources and data sources.
- `main.go` is the main entry point. This file creates a valid, executable Go binary that Terraform Core can consume.
Explore the `main.go` file
Open `main.go` in the root of the repository. The contents of the main function consume the Plugin SDK's `plugin` library, which facilitates the RPC communication between Terraform Core and the plugin.
Notice that the `ProviderFunc` returns a `*schema.Provider` from the `hashicups` package.
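The file looks roughly like the following sketch; the internal import path is an assumption and should match your `go.mod` module path:

```go
package main

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/plugin"

	// Module path is an assumption; match it to your go.mod
	"github.com/hashicorp/terraform-provider-hashicups/hashicups"
)

func main() {
	// plugin.Serve starts the RPC server that Terraform Core connects to
	plugin.Serve(&plugin.ServeOpts{
		ProviderFunc: func() *schema.Provider {
			return hashicups.Provider()
		},
	})
}
```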
Explore provider schema
The `hashicups/provider.go` file currently defines an empty provider.
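An empty provider is little more than a constructor returning a `*schema.Provider` with no resources. A minimal sketch, assuming SDKv2 types:

```go
package hashicups

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider returns the (currently empty) HashiCups provider
func Provider() *schema.Provider {
	return &schema.Provider{
		ResourcesMap:   map[string]*schema.Resource{},
		DataSourcesMap: map[string]*schema.Resource{},
	}
}
```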
The `helper/schema` library is part of the Terraform Plugin SDK. It abstracts many of the complexities and ensures consistency between providers. The `*schema.Provider` type can accept:
- the resources it supports (`ResourcesMap` and `DataSourcesMap`)
- configuration keys (properties in `*schema.Schema{}`)
- any callbacks to configure (`ConfigureContextFunc`)
You can use configuration keys and callbacks to authenticate and configure the provider. You will add them in the Add Authentication to a Provider tutorial.
Build provider
Run the `go mod init` command to define this directory as the root of a module.
Then, install all the provider's dependencies.
Next, build the provider using the Makefile.
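Together, the three steps above might look like this; the module path passed to `go mod init` and the `build` target name are assumptions, so use whatever matches your checkout and Makefile:

```shell
go mod init github.com/hashicorp/terraform-provider-hashicups  # module path is an assumption
go mod tidy   # download the provider's dependencies
make build    # wraps go build -o terraform-provider-hashicups
```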
This runs the `go build -o terraform-provider-hashicups` command. Terraform searches for plugins in the format `terraform-<TYPE>-<NAME>`. In the case above, the plugin is of type "provider" and named "hashicups".
To verify things are working correctly, execute the recently created binary.
Define coffees data resource
Now that you have created the provider, add the coffees data resource. The coffees data source will pull information on all coffees served by HashiCups.
Create a new file named `data_source_coffee.go` in the `hashicups` directory and add the following code snippet. As a general convention, Terraform providers put each data source in its own file, named after the resource it manages, prefixed with `data_source_`.
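A minimal sketch of such a scaffold, assuming SDKv2 types (the read function stays empty for now):

```go
package hashicups

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

func dataSourceCoffees() *schema.Resource {
	return &schema.Resource{
		// Data sources only need a read operation
		ReadContext: dataSourceCoffeesRead,
		Schema:      map[string]*schema.Schema{},
	}
}

func dataSourceCoffeesRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// Implemented later in this tutorial
	return nil
}
```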
The libraries imported here will be used in `dataSourceCoffeesRead`.
The coffees data source function returns a `schema.Resource`, which defines the schema and CRUD operations for the resource. Since Terraform data resources should only read information (not create, update, or delete), only read (`ReadContext`) is defined.
Define coffees schema
All Terraform resources must have a schema. This allows the provider to map the JSON response to the schema.
The `/coffees` endpoint returns an array of coffees. The sample below shows a truncated output.
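A truncated response might look like the following; the `id`, `name`, and `ingredients` fields come from the schema discussion below, while the remaining fields are assumptions:

```json
[
  {
    "id": 1,
    "name": "Packer Spiced Latte",
    "teaser": "Packed with tasty herbs and spices",
    "price": 350,
    "ingredients": [{ "ingredient_id": 1 }]
  }
]
```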
Since the response returns a list of coffees, the coffees schema should reflect that. Update your coffees data source's schema with the following code snippet.
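A sketch of that schema entry, assuming SDKv2 types and keeping only the fields discussed below (the full HashiCups schema has more); it slots into the `Schema` map of `dataSourceCoffees`:

```go
"coffees": &schema.Schema{
	Type:     schema.TypeList, // list of coffee objects
	Computed: true,
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			"id": &schema.Schema{
				Type:     schema.TypeInt,
				Computed: true,
			},
			"name": &schema.Schema{
				Type:     schema.TypeString,
				Computed: true,
			},
			"ingredients": &schema.Schema{
				Type:     schema.TypeList, // nested list of ingredient objects
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"ingredient_id": &schema.Schema{
							Type:     schema.TypeInt,
							Computed: true,
						},
					},
				},
			},
		},
	},
},
```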
Format your code.
Notice that the coffees schema is a `schema.TypeList` of coffee objects (`schema.Resource`).
The coffee resource's properties should map to their respective values in the JSON response. In the example response above:
- The coffee's `id` is `1`, a `schema.TypeInt`.
- The coffee's `name` is `"Packer Spiced Latte"`, a `schema.TypeString`.
- The coffee's `ingredients` is an array of ingredient objects, a `schema.TypeList` with elements `map[string]*schema.Schema{}`.
You can use various schema types to define complex data models. You will implement a complex read in the Implement Complex Read tutorial and Implement Create tutorial.
Implement read
Now that you have defined the coffees schema, you can implement the `dataSourceCoffeesRead` function.
Add the following read function to your `hashicups/data_source_coffee.go` file.
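A sketch of the read function, assuming the SDKv2 signature shown in the scaffold; it additionally needs `encoding/json`, `net/http`, `strconv`, and `time` in the file's imports:

```go
func dataSourceCoffeesRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	client := &http.Client{Timeout: 10 * time.Second}

	var diags diag.Diagnostics

	req, err := http.NewRequest("GET", "http://localhost:19090/coffees", nil)
	if err != nil {
		return diag.FromErr(err)
	}

	r, err := client.Do(req)
	if err != nil {
		return diag.FromErr(err)
	}
	defer r.Body.Close()

	// Decode the JSON array into the shape the schema expects
	coffees := make([]map[string]interface{}, 0)
	if err := json.NewDecoder(r.Body).Decode(&coffees); err != nil {
		return diag.FromErr(err)
	}

	if err := d.Set("coffees", coffees); err != nil {
		return diag.FromErr(err)
	}

	// No natural unique ID, so use the current UNIX time to force a
	// refresh on every apply
	d.SetId(strconv.FormatInt(time.Now().Unix(), 10))

	return diags
}
```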
Format your code.
This function creates a new GET request to `localhost:19090/coffees`. Then, it decodes the response into a `[]map[string]interface{}`. The `d.Set("coffees", coffees)` call assigns the response body (the list of coffee objects) to the Terraform coffees data source, mapping each value to its respective schema position. Finally, it uses `SetId` to set the resource ID.
Notice that this function returns a `diag.Diagnostics` type, which can return multiple errors and warnings to Terraform, giving users more robust error and warning messages. You can use the `diag.FromErr()` helper function to convert a Go error to a `diag.Diagnostics` type. You will implement this in the Debug a Terraform provider tutorial.
Tip
This function doesn't use an API client library to explicitly show the steps involved. The HashiCups client library is used to abstract CRUD functionality in other tutorials.
The existence of a non-blank ID tells Terraform that a resource was created. This ID can be any string value, but should be a value that Terraform can use to read the resource again. Since this data resource doesn't have a unique ID, you set the ID to the current UNIX time, which will force this resource to refresh during every Terraform apply.
When you create something in Terraform but delete it manually, Terraform should gracefully handle it. If the API returns an error when the resource doesn't exist, the read function should check to see if the resource is available first. If the resource isn't available, the function should set the ID to an empty string so Terraform "destroys" the resource in state. The following code snippet is an example of how this can be implemented; you do not need to add this to your configuration for this tutorial.
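One hedged sketch of that check, inserted after the request in a read function using the same SDKv2 types as above (not needed for this tutorial):

```go
r, err := client.Do(req)
if err != nil {
	return diag.FromErr(err)
}
defer r.Body.Close()

// If the API reports the resource gone, clear the ID so Terraform
// removes it from state instead of returning an error
if r.StatusCode == http.StatusNotFound {
	d.SetId("")
	return diags
}
```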
Add data source to provider
Now that you've defined your data source, you can add it to your provider.
In your `hashicups/provider.go` file, add the coffees data source to the `DataSourcesMap`. The `DataSourcesMap` attribute takes a map of the data source name, `hashicups_coffees`, to the `*schema.Resource` defined in `hashicups/data_source_coffee.go`. Resource and data source names must follow the `<provider>_<resource_name>` convention.
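The updated provider might look like this sketch, assuming the empty provider shown earlier:

```go
func Provider() *schema.Provider {
	return &schema.Provider{
		ResourcesMap: map[string]*schema.Resource{},
		DataSourcesMap: map[string]*schema.Resource{
			// Name follows the <provider>_<resource_name> convention
			"hashicups_coffees": dataSourceCoffees(),
		},
	}
}
```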
Format your code.
Test the provider
Now that you've implemented read and created the coffees data source, verify that it works.
First, confirm that you are in the `terraform-provider-hashicups` root directory.
Next, build the binary and move it into your user Terraform plugins directory. This allows you to sideload and test the custom provider. Select the tab for your operating system for specific instructions.
Tip
The Perform CRUD operations with Providers tutorial explains why and how to sideload custom providers. Refer to it to learn more about where to install custom providers and how to reference them in your configuration.
Navigate to the `terraform-provider-hashicups/examples` directory. This contains a sample Terraform configuration for the Terraform HashiCups provider.
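A minimal configuration exercising the data source might look like the following; the provider source address is an assumption, and the actual contents of the examples directory may differ:

```hcl
terraform {
  required_providers {
    hashicups = {
      # Source address is an assumption; match it to where you sideloaded the provider
      source = "hashicorp.com/edu/hashicups"
    }
  }
}

data "hashicups_coffees" "all" {}

output "coffees" {
  value = data.hashicups_coffees.all.coffees
}
```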
Finally, initialize your workspace to refresh your HashiCups provider, then apply. This should return the properties of "Packer Spiced Latte" in your output.
Next steps
In this tutorial, you created your first Terraform provider and data resource to reference information from an API in your Terraform configuration.
If you were stuck during this tutorial, check out the `implement-read` branch to see the changes implemented in this tutorial.
- The Terraform Provider Scaffold is a quick-start repository for creating a Terraform provider. Use this GitHub template when you're ready to create a custom provider.
- To learn more about the SDK v2, refer to the Terraform Plugin SDK v2 Upgrade tutorial.
- To learn more about the Terraform Plugin SDK, refer to the Terraform Plugin SDK Documentation.
- To learn more about how the plugins system in Terraform works, refer to the Terraform Plugins Documentation.
- To learn more about provider source, refer to the Terraform provider source documentation.