Set up a Nomad cluster on GCP
This tutorial will guide you through deploying a Nomad cluster with access control lists (ACLs) enabled on GCP. Consider checking out the cluster setup overview first as it covers the contents of the code repository used in this tutorial.
Prerequisites
For this tutorial, you need:
- Packer 1.9.4 or later
- Terraform 1.2.0 or later
- Nomad 1.7.7 or later
- gcloud CLI 474.0.0 or later
- A Google Cloud account configured for use with Terraform
Note
This tutorial creates GCP resources that may not qualify as part of the GCP free tier. Be sure to follow the Cleanup process at the end of this tutorial so you don't incur any additional unnecessary charges.
Clone the code repository
The cluster setup code repository contains configuration files for creating a Nomad cluster on GCP. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
Navigate to the cloned repository folder, and then into the `gcp` folder.
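As a sketch, the clone-and-navigate steps look like the following. The repository URL is an assumption based on HashiCorp's usual cluster setup repository; use the repository referenced in the cluster setup overview if it differs.

```shell
# Clone the code repository (URL assumed for illustration)
git clone https://github.com/hashicorp/learn-nomad-cluster-setup.git

# Navigate into the cloned repository, then into the gcp folder
cd learn-nomad-cluster-setup/gcp
```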
Create the Nomad cluster
There are two main steps to creating the cluster: building a Google Compute Engine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require that you configure variables before you run commands. The `variables.hcl.example` file contains the configuration you need for this tutorial.
Update the variables file for Packer
Rename `variables.hcl.example` to `variables.hcl`, and open it in your text editor.
Configure the `gcloud` tool for use with Terraform, and use the values from authenticating to GCP to update the `project`, `region`, and `zone` variables. In this example, those values are `hc-3ff63253e6a54756b207e4d4727`, `us-east1`, and `us-east1-b`. The remaining variables are for Terraform, and you update them after building the VM image.
Build the GCE image
Initialize Packer to download the required plugins.
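A minimal sketch of the initialization step, assuming you run it from the `gcp` folder containing the Packer template:

```shell
# Download the plugins declared by the Packer templates in this directory
packer init .
```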
Tip
`packer init` returns no output when it finishes successfully.
Then, build the image and provide the variables file with the `-var-file` flag.
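The build command looks like the following sketch, assuming the Packer template sits in the current directory:

```shell
# Build the GCE image, passing in the configured variables file
packer build -var-file=variables.hcl .
```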
Tip
Packer prints a `Warning: Undefined variable` message notifying you that some variables set in `variables.hcl` are not used. This is only a warning, and the build still completes successfully.
Packer outputs the specific disk image ID once it finishes building the image. In this example, the value is `hashistack-20221121163551`.
Update the variables file for Terraform
Open `variables.hcl` in your text editor and update the `machine_image` variable with the value output from the Packer build. In this example, the value is `hashistack-20221121163551`.
The remaining variables in `variables.hcl` are optional.
- `allowlist_ip` is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports `8500` and `4646` as well as SSH on port `22`. The default value of `0.0.0.0/0` allows traffic from everywhere.
- `name` is a prefix for naming the GCP resources.
- `server_instance_type` and `client_instance_type` are the virtual machine instance types for the cluster server and client nodes, respectively.
- `server_count` and `client_count` are the number of nodes to create for the servers and clients, respectively.
Deploy the Nomad cluster
Initialize Terraform to download required plugins and set up the workspace.
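Run the standard initialization command from the `gcp` folder:

```shell
# Download the required providers and set up the Terraform workspace
terraform init
```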
Run the Terraform deployment and provide the variables file with the `-var-file` flag. Respond `yes` to the prompt to confirm the operation. The provisioning takes several minutes, and the Consul and Nomad web interfaces are available upon completion.
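The deployment command is a standard Terraform apply with the variables file:

```shell
# Provision the cluster infrastructure; respond "yes" when prompted
terraform apply -var-file=variables.hcl
```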
Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the URL in the Terraform output.
Click on the Log in button and use the bootstrap token secret `consul_bootstrap_token_secret` from the Terraform output to log in.
Click on the Nodes page from the sidebar navigation. There are six healthy nodes: the three Consul servers and three Consul clients created with Terraform.
Set up access to Nomad
Run the `post-setup.sh` script.
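The script is part of the cloned repository and runs from the `gcp` folder:

```shell
# Retrieve the Nomad bootstrap token and print the export commands
./post-setup.sh
```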
Note
It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store. If the `post-setup.sh` script doesn't work the first time, wait a couple of minutes and try again.
Apply the `export` commands from the output.
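The output includes commands similar to the following sketch. The address and token values shown here are placeholders, not values from this tutorial; copy the actual commands from your own script output.

```shell
# Point the Nomad CLI at the cluster and authenticate with the bootstrap token
export NOMAD_ADDR="http://nomad.example.com:4646"   # placeholder address
export NOMAD_TOKEN="00000000-0000-0000-0000-000000000000"  # placeholder token
```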
Finally, verify connectivity to the cluster with `nomad node status`.
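With the environment variables set, the Nomad CLI can reach the cluster:

```shell
# List client nodes and their status to confirm connectivity
nomad node status
```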
Navigate to the Nomad UI in your web browser with the URL in the `post-setup.sh` script output. Click on Sign In in the top right corner and log in with the bootstrap token saved in the `NOMAD_TOKEN` environment variable. Set the Secret ID to the token's value and click Sign in with secret.
Click on the Clients page from the sidebar navigation to explore the UI.
Cleanup
Destroy infrastructure
Use `terraform destroy` to remove the provisioned infrastructure. Respond `yes` to the prompt to confirm removal.
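The destroy command mirrors the apply, passing the same variables file:

```shell
# Destroy all resources created by the apply; respond "yes" to confirm
terraform destroy -var-file=variables.hcl
```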
Delete the GCE image
Your GCP account still has the virtual machine image, which you may be charged for. Delete the image by running the `gcloud compute images delete` command. In this example, the image name is `hashistack-20221121163551`.
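Using this tutorial's example image name (substitute the image ID from your own Packer build):

```shell
# Delete the Packer-built GCE image so it no longer incurs storage charges
gcloud compute images delete hashistack-20221121163551
```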
Next steps
In this tutorial you created a Nomad cluster on GCP with Consul and ACLs enabled. From here, you may want to:
- Run a job with a Nomad spec file or with Nomad Pack
- Test out native service discovery in Nomad
For more information, check out the following resources.
- Learn more about managing your Nomad cluster
- Read more about the ACL stanza and using ACLs