AWS Multi-Account and Multi-VPC Architecture

1. Connectivity Patterns Overview

VPC Peering Architecture

graph TB
    subgraph "Account A"
        VPC1[VPC-A<br/>10.0.0.0/16]
        EC2A[EC2 Instance<br/>10.0.1.10]
    end
    subgraph "Account B"
        VPC2[VPC-B<br/>10.1.0.0/16]
        EC2B[EC2 Instance<br/>10.1.1.10]
    end
    subgraph "Account C"
        VPC3[VPC-C<br/>10.2.0.0/16]
        EC2C[EC2 Instance<br/>10.2.1.10]
    end
    VPC1 -.->|Peering Connection| VPC2
    VPC2 -.->|Peering Connection| VPC3
    VPC1 -.->|Peering Connection| VPC3
    VPC1 --> EC2A
    VPC2 --> EC2B
    VPC3 --> EC2C
VPC Peering Diagram Explanation:
This diagram shows a mesh topology with three VPCs across different AWS accounts. Each VPC has non-overlapping CIDR blocks (10.0.0.0/16, 10.1.0.0/16, 10.2.0.0/16). The dotted lines represent VPC peering connections that enable direct communication between VPCs. Each EC2 instance can communicate with instances in other VPCs through the peering connections, but traffic does not transit through intermediate VPCs (no transitive routing).
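Because peering is non-transitive, a full mesh of n VPCs needs a dedicated connection for every pair, i.e. n(n-1)/2. A small illustrative Python sketch of that growth:

```python
# Full-mesh VPC peering needs one connection per VPC pair: n*(n-1)/2.
# Illustrative sketch of why peering does not scale to many VPCs.
def peering_connections(n_vpcs: int) -> int:
    """Number of peering connections required for a full mesh of n_vpcs."""
    return n_vpcs * (n_vpcs - 1) // 2

print(peering_connections(3))   # the 3-VPC mesh in the diagram -> 3
print(peering_connections(10))  # 10 VPCs already need 45 connections
```

Three VPCs need only 3 connections, but 10 need 45, which is why hub-and-spoke designs like Transit Gateway are preferred at scale.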

Transit Gateway Hub-and-Spoke Architecture

graph TB
    subgraph "Production Account"
        PROD[Production VPC<br/>10.0.0.0/16]
        PRODEC2[Web Servers<br/>Database Servers]
    end
    subgraph "Development Account"
        DEV[Development VPC<br/>10.1.0.0/16]
        DEVEC2[Dev Instances<br/>Test Servers]
    end
    subgraph "Shared Services Account"
        SHARED[Shared Services VPC<br/>10.2.0.0/16]
        DNS[DNS Resolvers<br/>AD Controllers]
    end
    subgraph "Security Account"
        SEC[Security VPC<br/>10.3.0.0/16]
        FW[Firewall<br/>IDS/IPS]
    end
    TGW[Transit Gateway<br/>Central Hub]
    PROD --> TGW
    DEV --> TGW
    SHARED --> TGW
    SEC --> TGW
    PROD --> PRODEC2
    DEV --> DEVEC2
    SHARED --> DNS
    SEC --> FW
    style TGW fill:#ff9999
    style PROD fill:#e1f5fe
    style DEV fill:#f3e5f5
    style SHARED fill:#e8f5e8
    style SEC fill:#fff3e0
Transit Gateway Hub-and-Spoke Explanation:
This architecture shows Transit Gateway as the central hub connecting multiple VPCs across different accounts. Unlike VPC peering, Transit Gateway provides transitive routing, allowing any connected VPC to communicate with any other connected VPC through the hub. Route tables control which VPCs can communicate with each other. The color coding represents different account types: Production (blue), Development (purple), Shared Services (green), and Security (orange).

PrivateLink Service Architecture

graph LR
    subgraph "Service Provider Account"
        NLB[Network Load Balancer]
        APP[Application Servers]
        VPCE_SVC[VPC Endpoint Service]
        NLB --> APP
        NLB --> VPCE_SVC
    end
    subgraph "Consumer Account 1"
        VPC1[Consumer VPC 1]
        EC2_1[EC2 Instances]
        VPCE1[VPC Endpoint]
        EC2_1 --> VPCE1
        VPC1 --> VPCE1
    end
    subgraph "Consumer Account 2"
        VPC2[Consumer VPC 2]
        EC2_2[EC2 Instances]
        VPCE2[VPC Endpoint]
        EC2_2 --> VPCE2
        VPC2 --> VPCE2
    end
    VPCE_SVC -.->|Private Connection| VPCE1
    VPCE_SVC -.->|Private Connection| VPCE2
    style NLB fill:#ff6b6b
    style VPCE_SVC fill:#4ecdc4
    style VPCE1 fill:#45b7d1
    style VPCE2 fill:#45b7d1
PrivateLink Architecture Explanation:
This diagram illustrates AWS PrivateLink for service-to-service communication. The service provider exposes their application through a VPC Endpoint Service backed by a Network Load Balancer. Consumer accounts create VPC Endpoints in their VPCs to privately access the service without internet gateway, NAT, or VPC peering. Traffic remains within the AWS network backbone, providing better security and performance.

2. Implementation Commands and Configuration

Command Execution Flow

graph TD
    A[1. Create VPCs] --> B[2. Create Subnets]
    B --> C[3. Setup Route Tables]
    C --> D[4. Create Internet/NAT Gateways]
    D --> E[5. Configure Security Groups]
    E --> F[6. Establish Connectivity]
    F --> G{Connectivity Type?}
    G -->|Peering| H[7a. Create Peering Connection]
    G -->|Transit Gateway| I[7b. Create Transit Gateway]
    G -->|PrivateLink| J[7c. Create VPC Endpoint Service]
    H --> K[8a. Accept Peering & Update Routes]
    I --> L[8b. Attach VPCs & Configure Route Tables]
    J --> M[8c. Create VPC Endpoints]
    K --> N[9. Test Connectivity]
    L --> N
    M --> N
    style A fill:#e3f2fd
    style G fill:#fff3e0
    style N fill:#e8f5e8

Step 1: VPC Creation

Create Production VPC
aws ec2 create-vpc \
    --cidr-block 10.0.0.0/16 \
    --amazon-provided-ipv6-cidr-block \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=Production-VPC},{Key=Environment,Value=Production}]' \
    --region us-east-1
Parameter Explanation:
  • --cidr-block: Primary IPv4 CIDR block for the VPC (10.0.0.0/16 provides 65,536 IP addresses)
  • --amazon-provided-ipv6-cidr-block: Requests an IPv6 CIDR block from Amazon's pool
  • --tag-specifications: Applies tags for resource management and billing
  • --region: AWS region where the VPC will be created
Alternative Options:
  • --ipv6-cidr-block: Specify your own IPv6 CIDR block
  • --instance-tenancy: Default, dedicated, or host tenancy
Create Development VPC
aws ec2 create-vpc \
    --cidr-block 10.1.0.0/16 \
    --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=Development-VPC},{Key=Environment,Value=Development}]' \
    --region us-east-1
This creates the development VPC with a different CIDR block to avoid IP address conflicts. The Development VPC uses 10.1.0.0/16 to ensure no overlap with the Production VPC (10.0.0.0/16). This is critical for establishing connectivity between VPCs.
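The non-overlap requirement can be checked up front with Python's standard ipaddress module (an illustrative sketch alongside the CLI workflow, not part of it):

```python
import ipaddress

# Verify the two VPC CIDR blocks do not overlap before connecting them.
prod = ipaddress.ip_network("10.0.0.0/16")
dev = ipaddress.ip_network("10.1.0.0/16")

print(prod.overlaps(dev))   # False: safe to peer these VPCs
print(prod.num_addresses)   # 65536 addresses in a /16
```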

Step 2: Subnet Creation

Create Public Subnet in Production VPC
aws ec2 create-subnet \
    --vpc-id vpc-12345678 \
    --cidr-block 10.0.1.0/24 \
    --availability-zone us-east-1a \
    --map-public-ip-on-launch \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Production-Public-Subnet-1a},{Key=Type,Value=Public}]'
Parameter Explanation:
  • --vpc-id: The VPC ID returned from the previous VPC creation command
  • --cidr-block: Subnet CIDR must be within the VPC CIDR range
  • --availability-zone: Specific AZ for high availability design
  • --map-public-ip-on-launch: Automatically assigns public IP to instances
This subnet will host resources that need direct internet access, such as NAT gateways, load balancers, or bastion hosts.
Create Private Subnet in Production VPC
aws ec2 create-subnet \
    --vpc-id vpc-12345678 \
    --cidr-block 10.0.2.0/24 \
    --availability-zone us-east-1a \
    --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=Production-Private-Subnet-1a},{Key=Type,Value=Private}]'
Private subnets don't have the --map-public-ip-on-launch flag, ensuring instances launched here don't get public IPs by default. These subnets typically host application servers, databases, and other backend resources that shouldn't be directly accessible from the internet.
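The rule that a subnet CIDR must sit inside the VPC CIDR, and that sibling subnets must not collide, can be verified the same way (illustrative Python using the stdlib ipaddress module):

```python
import ipaddress

# The VPC and the two subnets created above.
vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet = ipaddress.ip_network("10.0.1.0/24")
private_subnet = ipaddress.ip_network("10.0.2.0/24")

# Each subnet must fall entirely inside the VPC CIDR...
print(public_subnet.subnet_of(vpc))             # True
print(private_subnet.subnet_of(vpc))            # True
# ...and subnets within a VPC must not overlap each other.
print(public_subnet.overlaps(private_subnet))   # False
```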

Step 3: Internet Gateway and NAT Gateway Setup

Create and Attach Internet Gateway
aws ec2 create-internet-gateway \
    --tag-specifications 'ResourceType=internet-gateway,Tags=[{Key=Name,Value=Production-IGW}]'
aws ec2 attach-internet-gateway \
    --internet-gateway-id igw-12345678 \
    --vpc-id vpc-12345678
The Internet Gateway provides internet access to public subnets. It's a horizontally scaled, redundant, and highly available VPC component. Only one IGW can be attached to a VPC at a time, and it's required for any public subnet connectivity.
Create NAT Gateway
aws ec2 create-nat-gateway \
    --subnet-id subnet-12345678 \
    --allocation-id eipalloc-12345678 \
    --tag-specifications 'ResourceType=nat-gateway,Tags=[{Key=Name,Value=Production-NAT-1a}]'
Prerequisites: You need an Elastic IP allocation first:
aws ec2 allocate-address --domain vpc
The NAT Gateway must be placed in a public subnet and allows outbound internet access for resources in private subnets while preventing inbound connections from the internet.

Step 4: VPC Peering Connection

Create VPC Peering Connection
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-12345678 \
    --peer-vpc-id vpc-87654321 \
    --peer-region us-east-1 \
    --tag-specifications 'ResourceType=vpc-peering-connection,Tags=[{Key=Name,Value=Prod-Dev-Peering}]'
Parameter Explanation:
  • --vpc-id: Source VPC (requester)
  • --peer-vpc-id: Target VPC (accepter)
  • --peer-region: Region of the peer VPC (can be different)
Cross-Account Options:
  • --peer-owner-id: AWS account ID of the peer VPC owner
Accept VPC Peering Connection
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-12345678
This command must be run in the peer (accepter) account/region. For cross-account peering, you'll need to switch to the other account's credentials. The peering connection remains in "pending-acceptance" state until accepted.

Step 5: Route Table Configuration for Peering

Add Routes for VPC Peering
aws ec2 create-route \
    --route-table-id rtb-12345678 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-12345678
aws ec2 create-route \
    --route-table-id rtb-87654321 \
    --destination-cidr-block 10.0.0.0/16 \
    --vpc-peering-connection-id pcx-12345678
These commands add routes in both VPCs' route tables. The first route allows Production VPC (10.0.0.0/16) to reach Development VPC (10.1.0.0/16), and the second allows the reverse. Both routes must be added for bidirectional communication.
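Route selection follows longest-prefix match: the most specific route containing the destination wins. A toy Python model of the Production route table after adding the peering route (targets reuse the placeholder IDs from the commands above):

```python
import ipaddress

# Toy model of a VPC route table: (destination CIDR, target).
prod_routes = [
    ("10.0.0.0/16", "local"),          # intra-VPC traffic
    ("10.1.0.0/16", "pcx-12345678"),   # peering route to the Development VPC
    ("0.0.0.0/0", "igw-12345678"),     # default route to the internet
]

def lookup(routes, dst_ip):
    """Return the target of the most specific route matching dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [(ipaddress.ip_network(c), t) for c, t in routes
               if dst in ipaddress.ip_network(c)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]  # longest prefix wins

print(lookup(prod_routes, "10.1.5.9"))   # pcx-12345678
print(lookup(prod_routes, "10.0.3.4"))   # local
print(lookup(prod_routes, "8.8.8.8"))    # igw-12345678
```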

Step 6: Transit Gateway Implementation

Create Transit Gateway
aws ec2 create-transit-gateway \
    --description "Multi-Account Transit Gateway" \
    --options=AmazonSideAsn=64512,AutoAcceptSharedAttachments=enable,DefaultRouteTableAssociation=enable,DefaultRouteTablePropagation=enable \
    --tag-specifications 'ResourceType=transit-gateway,Tags=[{Key=Name,Value=Multi-Account-TGW}]'
Options Explanation:
  • AmazonSideAsn: BGP ASN for the gateway (64512-65534 for private use)
  • AutoAcceptSharedAttachments: Automatically accept attachment requests
  • DefaultRouteTableAssociation: Associate attachments with default route table
  • DefaultRouteTablePropagation: Propagate routes to default table
For production environments, consider disabling auto-accept and using custom route tables for better control.
Attach VPC to Transit Gateway
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-12345678 \
    --vpc-id vpc-12345678 \
    --subnet-ids subnet-12345678 subnet-87654321 \
    --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=Production-VPC-Attachment}]'
Subnet Selection: Choose one subnet per AZ for redundancy. Transit Gateway creates an ENI in each specified subnet. For multi-AZ deployments, specify subnets in different AZs. The subnets should be dedicated to Transit Gateway or have sufficient IP addresses available.
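The one-subnet-per-AZ rule can be sketched as a small selection routine (illustrative Python; the subnet IDs are hypothetical placeholders matching the command above):

```python
# Pick one subnet per Availability Zone for the Transit Gateway
# attachment ENIs; extra subnets in an already-covered AZ are skipped.
subnets = [
    {"SubnetId": "subnet-12345678", "AvailabilityZone": "us-east-1a"},
    {"SubnetId": "subnet-87654321", "AvailabilityZone": "us-east-1b"},
    {"SubnetId": "subnet-aaaabbbb", "AvailabilityZone": "us-east-1a"},  # duplicate AZ
]

chosen = {}
for s in subnets:
    chosen.setdefault(s["AvailabilityZone"], s["SubnetId"])  # keep first per AZ

print(sorted(chosen.values()))  # one attachment subnet in each of the two AZs
```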
Share Transit Gateway Cross-Account
aws ram create-resource-share \
    --name "Transit-Gateway-Share" \
    --resource-arns arn:aws:ec2:us-east-1:123456789012:transit-gateway/tgw-12345678 \
    --principals 123456789013,123456789014 \
    --tags Key=Purpose,Value=Multi-Account-Networking
AWS Resource Access Manager (RAM) shares the Transit Gateway with other accounts. The --principals parameter accepts AWS account IDs, organizational units, or organization IDs. Recipients must accept the share invitation before they can create attachments.

Step 7: PrivateLink Configuration

Create VPC Endpoint Service
aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/1234567890123456 \
    --acceptance-required \
    --tag-specifications 'ResourceType=vpc-endpoint-service,Tags=[{Key=Name,Value=My-Service-Endpoint}]'
Prerequisites: Network Load Balancer must exist and be configured. Parameters:
  • --network-load-balancer-arns: ARN of the NLB backing the service
  • --acceptance-required: Requires manual approval for endpoint connections
  • --gateway-load-balancer-arns: Alternative to NLB for Gateway Load Balancer
Create VPC Endpoint (Consumer)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-87654321 \
    --vpc-endpoint-type Interface \
    --subnet-ids subnet-12345678 \
    --service-name com.amazonaws.vpce.us-east-1.vpce-svc-12345678 \
    --policy-document file://endpoint-policy.json \
    --tag-specifications 'ResourceType=vpc-endpoint,Tags=[{Key=Name,Value=Service-Consumer-Endpoint}]'
Endpoints to a PrivateLink service must be interface endpoints, so the command specifies --vpc-endpoint-type Interface and --subnet-ids (one subnet per AZ where an endpoint ENI should be placed); --route-table-ids applies only to gateway endpoints such as S3 and DynamoDB. The service name is region-specific and includes the VPC endpoint service ID. The policy document controls which actions are allowed through the endpoint, and --security-group-ids can be added to restrict which sources may reach the endpoint ENIs.

3. Advanced Configuration Patterns

Hub-and-Spoke with Segmentation

graph TB
    subgraph "Transit Gateway Route Tables"
        TGW_RT1[Production Route Table]
        TGW_RT2[Development Route Table]
        TGW_RT3[Shared Services Route Table]
    end
    subgraph "Production Account"
        PROD_VPC[Production VPC<br/>10.0.0.0/16]
        PROD_ATT[TGW Attachment]
    end
    subgraph "Development Account"
        DEV_VPC[Development VPC<br/>10.1.0.0/16]
        DEV_ATT[TGW Attachment]
    end
    subgraph "Shared Services"
        SHARED_VPC[Shared Services VPC<br/>10.2.0.0/16]
        SHARED_ATT[TGW Attachment]
        DNS_SVC[DNS/AD Services]
    end
    PROD_ATT --> TGW_RT1
    DEV_ATT --> TGW_RT2
    SHARED_ATT --> TGW_RT3
    TGW_RT1 -.->|Route to Shared| TGW_RT3
    TGW_RT2 -.->|Route to Shared| TGW_RT3
    TGW_RT3 -.->|Routes to All| TGW_RT1
    TGW_RT3 -.->|Routes to All| TGW_RT2
    PROD_VPC --> PROD_ATT
    DEV_VPC --> DEV_ATT
    SHARED_VPC --> SHARED_ATT
    SHARED_VPC --> DNS_SVC
    style TGW_RT1 fill:#ffcccb
    style TGW_RT2 fill:#add8e6
    style TGW_RT3 fill:#90ee90
Segmented Hub-and-Spoke Explanation:
This advanced Transit Gateway configuration uses separate route tables for network segmentation. Production and Development environments can only communicate with Shared Services, not with each other. This is achieved through custom route table associations and propagations. The Shared Services route table can reach all environments to provide centralized services like DNS and Active Directory.
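The segmentation policy can be modeled as a simple lookup: each attachment is associated with one route table, and a destination is reachable only if its CIDR was propagated into that table (illustrative Python; route table names are hypothetical):

```python
# Each attachment's route table lists the VPC CIDRs propagated into it.
# Prod and Dev only see Shared Services; Shared Services sees everyone.
route_tables = {
    "prod-rt":   {"10.2.0.0/16"},                 # shared only
    "dev-rt":    {"10.2.0.0/16"},                 # shared only
    "shared-rt": {"10.0.0.0/16", "10.1.0.0/16"},  # reaches all environments
}
association = {"prod": "prod-rt", "dev": "dev-rt", "shared": "shared-rt"}
cidr_of = {"prod": "10.0.0.0/16", "dev": "10.1.0.0/16", "shared": "10.2.0.0/16"}

def can_reach(src, dst):
    """True if src's associated route table has a route to dst's CIDR."""
    return cidr_of[dst] in route_tables[association[src]]

print(can_reach("prod", "shared"))  # True: centralized services reachable
print(can_reach("prod", "dev"))     # False: segmentation enforced
```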
Create Custom Transit Gateway Route Table
aws ec2 create-transit-gateway-route-table \
    --transit-gateway-id tgw-12345678 \
    --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=Production-Route-Table},{Key=Environment,Value=Production}]'
Custom route tables provide granular control over traffic flow. Each attachment can be associated with a specific route table, and routes can be selectively propagated to implement network segmentation policies.
Associate Attachment with Route Table
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-attachment-id tgw-attach-12345678 \
    --transit-gateway-route-table-id tgw-rtb-12345678
This associates a VPC attachment with a specific route table. An attachment can only be associated with one route table at a time. This determines which routes the attachment can use for outbound traffic.

Cross-Region Peering Architecture

graph LR
    subgraph "US-East-1"
        TGW1[Transit Gateway<br/>US-East-1]
        VPC1[Production VPC<br/>10.0.0.0/16]
        VPC2[Development VPC<br/>10.1.0.0/16]
        VPC1 --> TGW1
        VPC2 --> TGW1
    end
    subgraph "US-West-2"
        TGW2[Transit Gateway<br/>US-West-2]
        VPC3[DR VPC<br/>10.10.0.0/16]
        VPC4[Backup VPC<br/>10.11.0.0/16]
        VPC3 --> TGW2
        VPC4 --> TGW2
    end
    TGW1 -.->|TGW Peering| TGW2
    style TGW1 fill:#ff9999
    style TGW2 fill:#99ccff
Cross-Region Transit Gateway Peering:
This architecture shows Transit Gateway peering between regions for disaster recovery and global connectivity. Each region has its own Transit Gateway managing local VPC attachments. The peering connection enables cross-region communication while maintaining regional isolation and optimizing data transfer costs.
Create Cross-Region TGW Peering
aws ec2 create-transit-gateway-peering-attachment \
    --transit-gateway-id tgw-12345678 \
    --peer-transit-gateway-id tgw-87654321 \
    --peer-account-id 123456789012 \
    --peer-region us-west-2 \
    --tag-specifications 'ResourceType=transit-gateway-attachment,Tags=[{Key=Name,Value=Cross-Region-Peering}]'
Cross-region peering enables connectivity between Transit Gateways in different regions. The peer Transit Gateway owner must accept the peering attachment. Routes must be configured in both regions' route tables to enable traffic flow.

4. IP Address Management and Overlap Resolution

CIDR Block Planning: Proper IP address management is crucial for multi-VPC architectures. Non-overlapping CIDR blocks are essential for direct connectivity patterns like VPC peering and Transit Gateway.

CIDR Allocation Strategy

Environment       CIDR Block     Available IPs   Usage
Production        10.0.0.0/16    65,536          Production workloads
Development       10.1.0.0/16    65,536          Development and testing
Staging           10.2.0.0/16    65,536          Pre-production testing
Shared Services   10.3.0.0/16    65,536          DNS, AD, monitoring
DR Region         10.8.0.0/14    262,144         Disaster recovery

A /14 must begin on a second octet that is a multiple of 4, so the DR allocation uses 10.8.0.0/14 (covering 10.8.0.0–10.11.255.255, which contains the DR and Backup VPCs) rather than the invalid boundary 10.10.0.0/14.
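A plan like this can be validated mechanically with the stdlib ipaddress module (illustrative Python). Note that 10.10.0.0/14 is not a valid /14 network boundary, so the sketch uses the aligned block 10.8.0.0/14, which still contains the DR VPCs:

```python
import ipaddress
from itertools import combinations

# CIDR allocation plan; 10.8.0.0/14 is the valid /14 boundary
# (10.10.0.0/14 does not start on a multiple-of-4 second octet).
plan = {
    "Production": "10.0.0.0/16",
    "Development": "10.1.0.0/16",
    "Staging": "10.2.0.0/16",
    "Shared Services": "10.3.0.0/16",
    "DR Region": "10.8.0.0/14",
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in plan.items()}

# No pair of allocations may overlap.
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"

for name, net in nets.items():
    print(f"{name}: {net.num_addresses} addresses")
```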

VPC Sharing Implementation

Share VPC Subnets with AWS RAM
aws ram create-resource-share \
    --name "Shared-VPC-Subnets" \
    --resource-arns arn:aws:ec2:us-east-1:123456789012:subnet/subnet-12345678,arn:aws:ec2:us-east-1:123456789012:subnet/subnet-87654321 \
    --principals 123456789013,123456789014 \
    --no-allow-external-principals \
    --tags Key=Purpose,Value=VPC-Sharing
VPC sharing allows multiple AWS accounts to create resources in shared subnets. The VPC owner maintains control over the VPC and its route tables, while participant accounts can launch resources in shared subnets. This reduces the number of VPCs needed and simplifies network management.

5. Security Groups and NACLs Configuration

Create Security Group for Multi-VPC Access
aws ec2 create-security-group \
    --group-name cross-vpc-access \
    --description "Allow access from other VPCs" \
    --vpc-id vpc-12345678 \
    --tag-specifications 'ResourceType=security-group,Tags=[{Key=Name,Value=Cross-VPC-Access}]'
This security group will be configured to allow traffic from other VPCs in the multi-VPC architecture. Security groups are stateful and provide instance-level firewall rules.
Add Security Group Rules for Cross-VPC Communication
aws ec2 authorize-security-group-ingress \
    --group-id sg-1234567890abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 10.1.0.0/16
aws ec2 authorize-security-group-ingress \
    --group-id sg-1234567890abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 10.2.0.0/16
These rules allow HTTPS traffic from the Development VPC (10.1.0.0/16) and HTTP traffic from the Shared Services VPC (10.2.0.0/16). Always use the principle of least privilege and only open necessary ports to specific CIDR blocks.
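Security group evaluation is "allow if any rule matches". A toy Python check of the two ingress rules added above:

```python
import ipaddress

# The two ingress rules authorized above.
rules = [
    {"protocol": "tcp", "port": 443, "cidr": "10.1.0.0/16"},  # HTTPS from Dev
    {"protocol": "tcp", "port": 80,  "cidr": "10.2.0.0/16"},  # HTTP from Shared
]

def allowed(src_ip, port, protocol="tcp"):
    """True if any rule permits this (source, port, protocol) combination."""
    src = ipaddress.ip_address(src_ip)
    return any(
        r["protocol"] == protocol
        and r["port"] == port
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

print(allowed("10.1.4.20", 443))    # True: HTTPS from the Dev VPC
print(allowed("10.1.4.20", 80))     # False: HTTP not allowed from Dev
print(allowed("192.168.1.1", 443))  # False: source outside both CIDRs
```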

6. Monitoring and Troubleshooting

Enable VPC Flow Logs
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-12345678 \
    --traffic-type ALL \
    --log-destination-type cloud-watch-logs \
    --log-group-name VPCFlowLogs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flowlogsRole
VPC Flow Logs capture information about IP traffic in your VPC. This is essential for troubleshooting connectivity issues in multi-VPC architectures. Logs can be sent to CloudWatch Logs, S3, or Kinesis for analysis.
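Each flow log record in the default (version 2) format is a space-separated line. A minimal Python parser for a sample record (the sample values are made up for illustration):

```python
# Field order of the default VPC Flow Log format (version 2).
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

# A fabricated sample record: an accepted HTTPS flow between two peered VPCs.
sample = ("2 123456789012 eni-0a1b2c3d 10.0.1.10 10.1.1.10 "
          "49152 443 6 10 8400 1620000000 1620000060 ACCEPT OK")

record = dict(zip(FIELDS, sample.split()))
print(record["srcaddr"], "->", record["dstaddr"], record["action"])
```

Filtering parsed records for REJECT actions between your VPC CIDRs is a quick way to spot missing routes or security group rules.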

Connectivity Troubleshooting Commands

Test Connectivity and Troubleshoot
# Check VPC Peering Connection Status
aws ec2 describe-vpc-peering-connections \
    --vpc-peering-connection-ids pcx-12345678
# Verify Transit Gateway Attachments
aws ec2 describe-transit-gateway-attachments \
    --transit-gateway-attachment-ids tgw-attach-12345678
# Check Route Table Entries
aws ec2 describe-route-tables \
    --route-table-ids rtb-12345678
These diagnostic commands help troubleshoot connectivity issues. Check that peering connections are active, Transit Gateway attachments are available, and route tables contain the expected routes pointing to the correct targets.
Common Pitfalls to Avoid:
  • Overlapping CIDR blocks, which prevent VPC peering and break Transit Gateway routing
  • Adding a peering route in only one VPC's route table; bidirectional communication needs routes on both sides
  • Expecting transitive routing over VPC peering; traffic never transits an intermediate VPC
  • Leaving a peering connection or RAM share stuck in pending state because the peer account never accepted it
  • Security group rules that allow a CIDR the route tables cannot actually reach, so rules look correct while traffic still fails

7. Cost Optimization and Best Practices

Cost Considerations:
  • Transit Gateway is billed per attachment-hour plus per GB of data processed through the hub
  • NAT Gateways incur hourly charges plus per-GB processing; deploying one per AZ multiplies the cost
  • Interface (PrivateLink) endpoints are billed per endpoint-hour plus per GB of data transferred
  • VPC peering has no hourly charge; only standard data transfer rates apply
  • Cross-region and cross-AZ data transfer is billed, so keep chatty traffic local where possible

Final Architecture Validation

graph TB
    subgraph "Implementation Checklist"
        A[✓ VPCs Created] --> B[✓ Subnets Configured]
        B --> C[✓ Route Tables Updated]
        C --> D[✓ Security Groups Set]
        D --> E[✓ Connectivity Established]
        E --> F[✓ Testing Complete]
        F --> G[✓ Monitoring Enabled]
        G --> H[🎯 Production Ready]
    end
    style A fill:#90EE90
    style B fill:#90EE90
    style C fill:#90EE90
    style D fill:#90EE90
    style E fill:#90EE90
    style F fill:#90EE90
    style G fill:#90EE90
    style H fill:#FFD700
Implementation Validation Checklist:
This flowchart represents the validation steps for a complete multi-account, multi-VPC architecture. Each green checkmark indicates a completed implementation phase, leading to a production-ready environment. Regular validation of each component ensures reliable network connectivity and security.
Final Comprehensive Testing Command
aws ec2 describe-vpcs --query 'Vpcs[*].[VpcId,CidrBlock,State]' --output table
This command provides a final overview of all VPCs in your account, showing their IDs, CIDR blocks, and current state, helping you verify your multi-VPC architecture is properly configured.