Deploying an AWS Multi-Region Hub-Spoke Architecture with Terraform


Today, I set out to test a multi-region hub-and-spoke architecture in AWS using Transit Gateways. While this architecture may not always be the best choice for production, it’s a great way to understand AWS Transit Gateways and how to build scalable, multi-region networks.

As usual, I wanted to make it easy for everyone to test this setup in their own AWS account. To simplify the process, I’ve created Terraform files that allow you to deploy and explore the various components effortlessly. You can find the Terraform code at the end of this article, available after a free subscription to my website.

High-Level Architecture

The architecture includes:

  • 1 hub VPC in eu-west-1
  • 1 spoke VPC in eu-central-1
  • 1 spoke VPC in us-east-1

Each region has a dedicated Transit Gateway (TGW) to facilitate inter-region routing. The spoke TGWs are connected to the hub TGW, enabling spoke-to-spoke traffic to flow through the hub. To provide internet access, I set up an internet outbound configuration in the hub VPC, allowing private subnets across all regions to use the hub's NAT gateways.
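Because the resources span three regions, the Terraform configuration needs one AWS provider per region. As a minimal sketch (the alias names here are my own, not necessarily those used in the downloadable code):

```hcl
# One AWS provider block per region; resources pick their region
# via the "provider" meta-argument and these aliases.
provider "aws" {
  alias  = "hub"
  region = "eu-west-1"
}

provider "aws" {
  alias  = "spoke_eu"
  region = "eu-central-1"
}

provider "aws" {
  alias  = "spoke_us"
  region = "us-east-1"
}
```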

To test this setup, the Terraform code deploys one Ubuntu server in the EU spoke's public subnet and another server in the US spoke's private subnet. This allows you to connect to the EU public server via its public IP address, then SSH into the US private server to verify spoke-to-spoke traffic. Additionally, from the US server, you can test outbound internet traffic through the HUB's NAT gateways.

You can see the setup on this diagram:

Setup Walkthrough

If you use the Terraform code, it will deploy everything for you. However, here’s a breakdown of the setup to help you understand how the configuration works.

1. Deploying VPCs and Subnets

First, we create three VPCs across different AWS regions—one for the HUB and one for each spoke (EU and US). Each VPC includes public and private subnets with appropriate route tables. These route tables will need modifications later to ensure proper routing between regions.
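For the hub side, this boils down to a VPC resource plus one subnet per tier. A hedged sketch (the hub CIDR, AZ, and resource names are assumptions for illustration; the spoke VPCs follow the same pattern with their own providers and CIDRs):

```hcl
# Hub VPC in eu-west-1 with one public and one private subnet.
# CIDR ranges and names are illustrative assumptions.
resource "aws_vpc" "hub" {
  provider   = aws.hub
  cidr_block = "172.16.0.0/16"

  tags = { Name = "hub-vpc" }
}

resource "aws_subnet" "hub_public" {
  provider                = aws.hub
  vpc_id                  = aws_vpc.hub.id
  cidr_block              = "172.16.1.0/24"
  availability_zone       = "eu-west-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "hub_private" {
  provider          = aws.hub
  vpc_id            = aws_vpc.hub.id
  cidr_block        = "172.16.2.0/24"
  availability_zone = "eu-west-1a"
}
```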

2. Setting Up Internet Access

We deploy an Internet Gateway in the HUB VPC and another in the EU spoke VPC. The hub needs one because its NAT Gateways must reach the internet, and the EU spoke needs one because we'll deploy a test Linux server there that requires SSH access over its public IP for connectivity testing.
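The gateway resources themselves are short. A sketch, assuming VPC resource names like those above (`aws_vpc.spoke_eu` stands in for whatever the EU spoke VPC resource is called):

```hcl
# Internet Gateway for the hub VPC (needed by the NAT Gateways later).
resource "aws_internet_gateway" "hub" {
  provider = aws.hub
  vpc_id   = aws_vpc.hub.id
}

# Internet Gateway for the EU spoke, so its public test server
# is reachable over SSH.
resource "aws_internet_gateway" "spoke_eu" {
  provider = aws.spoke_eu
  vpc_id   = aws_vpc.spoke_eu.id
}
```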

3. Deploying Transit Gateways and Attachments

We deploy three Transit Gateways (TGWs)—one in each region. Once the TGWs are created, we need to establish Transit Gateway Attachments, which are the building blocks of communication between VPCs and TGWs.

There are two types of TGW attachments used in this setup:

  • VPC Attachments – These connect VPCs to their respective regional TGW, allowing traffic to flow between them.
  • Peering Attachments – These establish a connection between two Transit Gateways in different regions, enabling cross-region communication.

The peering attachment works similarly to VPC peering, but at the TGW level. It allows resources in one region to communicate with resources in another, following this general flow:

VPC 1 – (VPC Attachment) – Transit GW 1 – (Peering Attachment) – Transit GW 2 – (VPC Attachment) – VPC 2
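In Terraform, a cross-region peering attachment is requested from one side and accepted on the other. A sketch of the hub TGW, its VPC attachment, and the EU-spoke-to-hub peering (resource names are assumptions; the US spoke is wired up the same way):

```hcl
# Hub Transit Gateway and its VPC attachment.
resource "aws_ec2_transit_gateway" "hub" {
  provider    = aws.hub
  description = "hub-tgw"
}

resource "aws_ec2_transit_gateway_vpc_attachment" "hub" {
  provider           = aws.hub
  transit_gateway_id = aws_ec2_transit_gateway.hub.id
  vpc_id             = aws_vpc.hub.id
  subnet_ids         = [aws_subnet.hub_private.id]
}

# Peering is requested from the EU spoke's region...
resource "aws_ec2_transit_gateway_peering_attachment" "eu_to_hub" {
  provider                = aws.spoke_eu
  transit_gateway_id      = aws_ec2_transit_gateway.spoke_eu.id
  peer_transit_gateway_id = aws_ec2_transit_gateway.hub.id
  peer_region             = "eu-west-1"
}

# ...and accepted in the hub's region.
resource "aws_ec2_transit_gateway_peering_attachment_accepter" "eu_to_hub" {
  provider                      = aws.hub
  transit_gateway_attachment_id = aws_ec2_transit_gateway_peering_attachment.eu_to_hub.id
}
```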

4. Configuring Transit Gateway Route Tables

Once the attachments are in place, we configure the Transit Gateway Route Tables. Each TGW has its own route table that dictates how traffic should be forwarded.

  • In the spoke TGW route tables, a default route (0.0.0.0/0) directs traffic to the peering attachment connecting the spoke to the hub. This ensures that all outbound traffic from the spoke VPCs is forwarded to the HUB.
  • In the HUB TGW route table, we configure the following routes:
    • EU Spoke VPC CIDR → Routes to the EU Spoke Peering Attachment
    • US Spoke VPC CIDR → Routes to the US Spoke Peering Attachment
    • Default Route (0.0.0.0/0) → Routes to the HUB VPC Attachment, where the NAT Gateways are located

This setup ensures that all inter-region traffic flows through the HUB TGW, and internet-bound traffic from the spoke VPCs is directed to the NAT Gateways in the HUB.
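The routes above map to `aws_ec2_transit_gateway_route` resources targeting the relevant attachment. A sketch for the EU side, assuming each TGW uses its default association route table (the US spoke routes mirror the EU ones):

```hcl
# Spoke TGW: send everything towards the hub via the peering attachment.
resource "aws_ec2_transit_gateway_route" "spoke_eu_default" {
  provider                       = aws.spoke_eu
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_peering_attachment.eu_to_hub.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway.spoke_eu.association_default_route_table_id
}

# Hub TGW: EU spoke CIDR back out through its peering attachment.
resource "aws_ec2_transit_gateway_route" "hub_to_eu_spoke" {
  provider                       = aws.hub
  destination_cidr_block         = "10.0.0.0/16"
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_peering_attachment_accepter.eu_to_hub.transit_gateway_attachment_id
  transit_gateway_route_table_id = aws_ec2_transit_gateway.hub.association_default_route_table_id
}

# Hub TGW: everything else goes to the hub VPC attachment,
# where the NAT Gateways live.
resource "aws_ec2_transit_gateway_route" "hub_default" {
  provider                       = aws.hub
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.hub.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway.hub.association_default_route_table_id
}
```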

5. Modifying VPC Route Tables

In the spoke VPC route tables, add a default route (0.0.0.0/0) pointing to the local Transit Gateway. This ensures that all outbound traffic from the spoke VPCs is forwarded to their respective TGW.

In the hub VPC route tables, add routes for the spoke VPC CIDR ranges (10.0.0.0/16 and 192.168.0.0/16), directing traffic to the local Hub Transit Gateway.

Once traffic leaves a VPC and reaches its local Transit Gateway, the TGW route tables take over, determining how to forward the traffic based on the peering and VPC attachments.
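At the VPC level these are plain `aws_route` resources whose target is the local TGW. A sketch, again with assumed route table names:

```hcl
# Spoke VPC: default route into the local Transit Gateway.
resource "aws_route" "spoke_eu_default" {
  provider               = aws.spoke_eu
  route_table_id         = aws_route_table.spoke_eu_private.id
  destination_cidr_block = "0.0.0.0/0"
  transit_gateway_id     = aws_ec2_transit_gateway.spoke_eu.id
}

# Hub VPC: send the EU spoke's CIDR to the hub Transit Gateway
# (a matching route covers the US spoke's 192.168.0.0/16).
resource "aws_route" "hub_to_eu_spoke" {
  provider               = aws.hub
  route_table_id         = aws_route_table.hub_private.id
  destination_cidr_block = "10.0.0.0/16"
  transit_gateway_id     = aws_ec2_transit_gateway.hub.id
}
```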

6. Deploying NAT Gateways for Internet Access

To allow private resources in the spoke VPCs to access the internet, we deploy NAT Gateways in the HUB VPC's public subnets. The spoke VPCs route their internet traffic through the HUB NAT Gateways.

We also modify the HUB private subnet route tables, adding a default route (0.0.0.0/0) that points to the NAT Gateways.
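A NAT Gateway needs an Elastic IP and a public subnet to sit in. A sketch for one hub NAT Gateway plus the private-subnet default route (resource names assumed; a production setup would typically deploy one NAT Gateway per AZ):

```hcl
# Elastic IP and NAT Gateway in a hub public subnet.
resource "aws_eip" "nat" {
  provider = aws.hub
  domain   = "vpc"
}

resource "aws_nat_gateway" "hub" {
  provider      = aws.hub
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.hub_public.id
}

# Hub private subnets reach the internet through the NAT Gateway.
resource "aws_route" "hub_private_default" {
  provider               = aws.hub
  route_table_id         = aws_route_table.hub_private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.hub.id
}
```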

7. Deploying and Testing Servers

To verify connectivity, we deploy:

  • An Ubuntu server in the EU Spoke's public subnet (to allow SSH access via its public IP).
  • An Ubuntu server in the US Spoke's private subnet (to test private connectivity and outbound internet access).

Testing steps:

  1. Connect to the EU Spoke public server using SSH.
  2. From the EU server, SSH into the US private server to verify spoke-to-spoke communication over the HUB Transit Gateway.
  3. From the US private server, test internet connectivity by running:

```shell
ping google.com
curl ifconfig.me
```

If everything is set up correctly, the public IP returned by ifconfig.me should match one of the NAT Gateway IPs from the HUB region, confirming that outbound traffic is routed through the HUB NAT Gateways.

Summary

Using Terraform for this multi-region hub-and-spoke architecture makes the entire process faster, easier, and repeatable. Instead of manually configuring each component across multiple regions, Terraform automates the deployment, ensuring consistency and saving you hours of work. Whether you're testing, scaling, or adapting this setup for your own needs, Terraform provides a reliable and efficient way to manage your cloud infrastructure.

Want to test this yourself? Get instant access to the Terraform code by subscribing to my website for free. Start deploying your own multi-region AWS network today!
