AWS Direct Connect and Direct Connect Gateway Scale Limits

Direct Connect (DX)

  • DX is a region-specific offering
    • It allows on-prem physical locations to connect to a specific AWS Region/location
  • DX supports a maximum of 50 VIFs (Private and Public combined) per physical connection
  • DX by itself does not support a Transit VIF for AWS-TGW connectivity; a Transit VIF must be attached to a DXGW (see below)

Direct Connect Gateway (DXGW)

  • Only supports Private and Transit VIFs
    • DXGW is mainly used to access private resources in VPCs
  • Does not support a Public VIF
    • DXGW does not provide public internet connectivity
  • A VGW associated with a DXGW must be “attached” to a VPC
  • Does not support transitive routing or transit connectivity
    • A VPC in Region-1 cannot directly communicate with a VPC in Region-2
    • DX Location-1 cannot directly communicate with DX Location-2
  • Up to 30 DX physical connections can connect to one single DXGW for physical link redundancy purposes
    • In other words, 30 DX locations/regions
  • DX supports a maximum of 50 VIFs (for DXGW, only Private and Transit VIFs are applicable)
    • This means one can have a maximum of 50 DXGWs per physical DX link
    • But one DXGW can connect to a maximum of 10 VPCs
    • This means a maximum of 500 VPCs (50 x 10 VPCs) per physical DX link, across accounts and regions (the sketch below shows how to check current VIF usage against these limits)
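To see how close an existing setup is to these limits, here is a minimal boto3 sketch (assuming AWS credentials and region are already configured) that counts the VIFs provisioned on each DX connection by type. This is only an illustration, not official tooling.

# Sketch: count how many VIFs of each type are provisioned on each DX
# connection, to compare against the 50-VIF-per-connection limit.
import boto3
from collections import Counter

dx = boto3.client("directconnect")

for conn in dx.describe_connections()["connections"]:
    vifs = dx.describe_virtual_interfaces(
        connectionId=conn["connectionId"]
    )["virtualInterfaces"]
    by_type = Counter(v["virtualInterfaceType"] for v in vifs)
    print(
        f'{conn["connectionName"]} ({conn["bandwidth"]}): '
        f'{len(vifs)}/50 VIFs used -> {dict(by_type)}'
    )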

DXGW with AWS-TGW Limitations

  • A Transit VIF can only be attached to a DXGW
  • Only one Transit VIF is allowed on any AWS Direct Connect 1/2/5/10 Gbps connection
    • Connections below 1 Gbps do not support a Transit VIF
    • A maximum of 3 AWS-TGWs can connect to one DXGW behind one Transit VIF
  • A single DXGW cannot have both Private and Transit VIFs attached
    • This could be a serious limitation for some customers
    • I think the underlying assumption is that if a customer is already using AWS-TGW, why would they want to use a Private VIF attached to the same DXGW?

DXGW without and with AWS-TGW Comparison

| DXGW without AWS-TGW | DXGW with AWS-TGW |
| --- | --- |
| 10 VPCs per DXGW | 3 TGWs per DXGW |
| 50 DXGWs max (because of 50 Private VIFs) | With a Transit VIF, only one DXGW is possible |
| 500 VPCs total per DX physical link | 5,000 VPCs per TGW; 15,000 VPCs per DX physical link |
| Private VIF supported on all Direct Connect connection types | Transit VIF supported only on dedicated or hosted connections of speed 1 Gbps and above |
| No additional charges | Additional charge for TGW data processing |

References

https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-limits.html
https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html

Credits

Abdul Rahim
Kamran Habib
Saad Mirza
Hammad Alam

Importance of Right Network Architecture vs Cost

An Architect, CTO or any technical decision maker has a huge responsibility to approve and adopt the “right network architecture” that is aligned with the business requirement.

We have seen enterprises pick a wrong or compromised network architecture and then pay the price in the long run, far more than the initial cost to build and run the network.

Here we are sharing some nuggets for technical decision makers:

  • A bad architecture can cost you a lot in the long run. A lot more than what you have spent on building and running it
  • Do not build operations around architecture, build architecture around operations
  • Don’t make long term architecture decisions based on short-term problems
  • Right architecture is more important than feature set
  • Simple architecture is the best architecture

Building an architecture and putting a design in place is a one-time deal; you end up running that design for years to come. As an architect, if you have not made smart choices to build the correct architecture, your enterprise will be paying a lot more.

Also think about support. You need a trusted support partner who can troubleshoot with you.

Real World Customer Example

Let me give you an example. Here is an architecture a customer wanted to go with.

They wanted to use Aviatrix Transit to build the encrypted transit peering within the AWS region and across multiple AWS regions and clouds (GCP). They also wanted to deploy AWS-TGW using Aviatrix Controller but just to attach the AWS-TGW with the AVX-Transit-GW (or ASN).

Essentially, all the red lines in the topology above were to be controlled and managed by Aviatrix. For VPC1, VPC2 and so on, they wanted to do it manually, thinking it was just a one-time job.

In order to save a few $$$, they wanted to make just one compromise in the architecture, and I will explain how costly that one compromise could be in the long run:

The customer did not want to use Aviatrix’s AWS-TGW Orchestration/Control to attach the Spoke VPCs to AWS-TGW.

Ripple Effect of a Single Compromise

  • Aviatrix Controller won’t be able to monitor and propagate existing and new routes
    • Application VPC routes must be updated manually
    • AWS-TGW route tables must be updated manually
    • Transit VPC route table must be updated manually
  • The customer will lose the Aviatrix Controller’s TGW Audit functionality
    • This could be a huge operational burden on the team
  • The Aviatrix Controller will not be able to provide proper alerts about route updates and warnings about incorrect or duplicate routes
    • No network correctness
  • In the future, if Aviatrix builds functionality where any new route update requires admin approval, the customer might not be able to use it
  • Besides that, there are other functionalities that Aviatrix is planning to build for AWS-TGW and Aviatrix-TGW that probably won’t work in such a network design
  • No way to do network segmentation for workloads in different VPCs
    • No Security Domain functionality available
  • Potential of AWS-TGW sprawl
    • Multiple AWS-TGW might be needed for traffic separation
    • Huge management overhead
  • Some of the Aviatrix Flight-Path functionality might break in the future
  • In the future, if Aviatrix releases capacity planning and monitoring tools, they might not work in this type of architecture
  • Adding a Firewall to the architecture will not be possible. This could be a huge compliance and security risk for a customer handling security-sensitive data
  • For the User-VPN use-case, the customer must accommodate VPN subnets manually on the TGW and the Aviatrix Transit
  • Aviatrix support won’t be able to troubleshoot end-to-end because the VPCs were not attached by the Aviatrix Controller
  • The customer is taking the risk of not having end-to-end encryption
    • AWS-TGW does not provide encryption for Spoke VPCs
    • This could be a moot point in this architecture, because the customer decided to use AWS-TGW for the attachments, but it is important to call out for compliance, audit, GDPR and security reasons

Credits

Wanted to say thanks to the following people for providing input to this post

Tomasz Klimczyk
Don Leone
Mark Cunningham
Hammad Alam
Nauman Mustafa
Saad Mirza

Aviatrix ACE Professional Bootcamp Prerequisite

Technical Prerequisite

Familiarity and basic know-how of at least one Cloud Provider is a must to attend the bootcamp. For instance, attendees should know the concepts of:

  • VPC/ VNet
  • Account/ Subscription
  • AMI / VM / EC2
  • VPN GW / VGW / Internet GW (IGW)
  • CIDR / Subnet / Routes
  • etc

List of items participants need to bring

  • Laptop with SSH/RDP tools installed
  • For Windows, make sure to have software such as PuTTYgen installed to generate certificate/key-based credentials
  • The underlying Cloud for the labs is AWS
    • The same labs are applicable to other Clouds such as Azure, GCP and OCI
    • The beauty of Aviatrix is that it hides all the complexities of the Clouds and provides a unified/normalized management, control, data and operations plane
  • All users must have an account with admin privileges in AWS. It could be a personal account that can be deleted after the bootcamp. For Azure and GCP, the instructor will use their own account to showcase multi-cloud use-cases

List of items needed in the training room

  • Projector with HDMI or USB-C cable
  • Whiteboard (it should not be in front of the projector; ideally it should be on the side)
    • Dry-erase markers
  • Easel board with markers
    • Will be used by attendees to draw their design diagrams
  • The Wi-Fi provided should allow outbound SSH/RDP

Misc.

  • Attendees are responsible for their Flight/Transportation/Lodging

Azure Transit Network Deployment with Native VNet Peering

Unless you have been living under a rock :-), you know that Microsoft Azure is picking up really fast in the Enterprise market. Understanding Multi-Cloud Network (MCN) architecture is a must for Network/Cloud architects, and Transit Networking is one of the Cloud Core elements of the MCN architecture.

This blog will discuss the deployment of an Azure Transit Network design pattern called “Aviatrix Transit with VNet Peering”.

Refer to my previous blog for different Azure Transit Network design options and pros/cons

Topology

We will be using the following topology to deploy Azure Transit Networking with native VNet peering, with the Aviatrix Transit GW in the Transit/Hub VNet.

Simple and Quick Deployment

The process of deploying the Azure Transit Network is extremely simple using the Aviatrix Controller UI. You need to perform 3 simple steps, and the entire setup can be up and running in about 30 minutes.

IP Addressing

I will be using the following IP addressing scheme for this deployment (also shown in the topology diagram above):

  • Aviatrix-Transit-VNet-Central  10.160.0.0/16
    • Aviatrix-Transit-GW-Central  10.160.0.4/22
  • Aviatrix-Spoke-VNet-Central-1  10.161.0.0/16
  • Aviatrix-Spoke-VNet-Central-2  10.162.0.0/16

Region Selection

I am using the US-Central region for this deployment, but it is not mandatory to deploy the hub and all the spokes in the same region; they can be spread across multiple regions as well.


Step#1: Aviatrix Controller Creates Azure VNets and Resource Group (RG)

Use the Aviatrix Controller UI to create the Azure VNets. The process allows you to pick the VNet region and CIDR range. A corresponding and unique Azure Resource Group (RG) is also created at this step.

Behind the Scene

Here is what happens in the background in Azure (you can verify it from the Azure portal itself)

  • Aviatrix first creates a new Azure Resource Group
  • Then the Aviatrix Controller creates the VNet in that RG
  • The Aviatrix Controller also creates four /20 subnets from the /16 CIDR range
    • The Controller makes it easy and selects the subnet ranges automatically. For example, for a /24 CIDR, the controller will create /28 subnets (see the sketch after this list)
  • The Controller then creates a User Route-Table and associates the subnets with the newly created User Route-Table
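To make the subnet carving concrete, here is a tiny Python sketch using the standard ipaddress module. It only mirrors the sizes mentioned above (four /20s out of a /16, /28s out of a /24); it is not the controller's actual allocation logic, and the CIDRs are just the ones used in this walkthrough.

# Sketch: illustrate the subnet carving described above with Python's
# standard ipaddress module (not the controller's real algorithm).
import ipaddress

vnet = ipaddress.ip_network("10.160.0.0/16")
first_four = list(vnet.subnets(new_prefix=20))[:4]
print([str(s) for s in first_four])
# ['10.160.0.0/20', '10.160.16.0/20', '10.160.32.0/20', '10.160.48.0/20']

small_vnet = ipaddress.ip_network("10.170.0.0/24")  # hypothetical /24 VNet
print([str(s) for s in list(small_vnet.subnets(new_prefix=28))[:4]])
# ['10.170.0.0/28', '10.170.0.16/28', '10.170.0.32/28', '10.170.0.48/28']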

Let us take a look at the screenshots from the Azure portal for the above-mentioned bullet points.

Aviatrix first creates a new Azure Resource Group

Aviatrix Controller creates VNet in the newly created RG

Azure Virtual Network (VNet) Properties

Aviatrix creates four /20 subnets: 2 public and 2 private
The User Route Table is created without any routes. Only the “user” subnets are associated with the user route table; the public subnets are not associated with any route table at this stage


Step#2: Aviatrix Controller Deploys Transit GW VM in Transit VNet:RG

Now deploy the Aviatrix Transit GW VM in Azure using the Aviatrix Controller UI. Make sure to deploy this VM in the Azure public subnet that was created in Step#1.

Aviatrix Controller deploys the AVX-Transit GW in the Hub/Transit VNet

The controller UI shows the progress of this deployment as shown below

[03:47:10] Starting to create ARM GW Aviatrix-Transit-GW-Central.
[03:47:11] Connected to Azure ARM.
[03:47:22] Deploying virtual machine...
[03:50:32] Deploy virtual machine done.
[03:50:33] License check is complete.
[03:50:33] Added GW info to Database.
[03:50:34] Aviatrix-Transit-GW-Central AVX SQS Queue created. 
[03:50:34] Create message queue done.
[03:50:34] Initializing GW.....
[03:50:34] Copy configuration to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy new software to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy /etc/cloudx/cloudx_code_file.json.enc to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy /etc/cloudx/cloudx_code_key_file.txt to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy scripts to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy sdk to GW Aviatrix-Transit-GW-Central done.
[03:50:39] Copy libraries to GW Aviatrix-Transit-GW-Central done.
[03:50:39] Installing software ....
[03:50:41] Issuing certificates....
[03:50:41] Issue certificates done
[03:51:14] GW software started.
[03:51:38] Software Installation done.
[03:51:40] Run self diagnostics done. 

Behind the Scene

At this stage the Aviatrix Transit VM is deployed. Let me show you what happens behind the scenes by logging into the Azure Portal.

Aviatrix Transit Resource Group now has the AVX-Transit VM/GW

Pay attention to the above screenshot. The following are the resources that the Aviatrix Controller orchestrates behind the scenes (a short SDK sketch to list them follows the bullets):

  • Creates a new VNet
  • Creates the VM in the newly created VNet (see screenshot below)
  • Creates a network interface for the VM
  • Allocates a Public IP address to the VM
  • Creates an Availability Set and assigns it to the VM
  • Creates an NSG (Azure Network Security Group) and assigns it to the VM
  • Creates a storage account
  • Assigns the user route table to the VM subnet
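If you want to cross-check this list without the portal, here is a hedged sketch using the Azure Python SDK (azure-identity and azure-mgmt-resource). The subscription ID is a placeholder and the resource group name is assumed to match the Transit VNet name used in this walkthrough.

# Sketch: list everything that ended up in the Transit resource group.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"          # placeholder
resource_group = "Aviatrix-Transit-VNet-Central"    # assumed RG name

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
for res in client.resources.list_by_resource_group(resource_group):
    print(f"{res.type:55} {res.name}")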

The following screen shows the Aviatrix Transit GW VM details

Aviatrix Transit GW VM details

Inbound Rules Added by Aviatrix Controller for Transit-GW at the NIC Level
Outbound Rule Added by Aviatrix Controller for Transit-GW
NSG Created by Aviatrix (all rules on one screen)


Step#3: Aviatrix Controller Orchestrates Azure Native Transitive Peering

Now attach the Azure ARM Spoke VNet through Native Peering using the Aviatrix Controller.

Native Peering between Spoke and Transit VNets

Repeat the above step for the second Spoke VNet as well.

Behind the Scene

  • Aviatrix Controller creates the native peering
  • Creates the route tables
  • Installs RFC 1918 routes in the spoke VNets and points them to the Transit VNet
Native Peering Created by Aviatrix Controller

The following two screenshots show that the Aviatrix Controller automatically creates a bi-directional peering relationship between the Transit and Spoke VNets

Peering Details from Aviatrix Transit to Spoke-1 VNet

Peering Details from Spoke-1 VNet to Aviatrix Transit GW VM
Aviatrix Manages Route Table Creation and Life-Cycle
“Aviatrix-Spoke-VNet-Central-1 public” Route Table points to the Aviatrix Transit GW IP as the Next Hop
Similarly, the “Aviatrix-Spoke-VNet-Central-2 public” Route Table points to the Aviatrix Transit GW IP as the next hop

No routes are needed in the Transit VNet routing table because routing in the Transit VNet is handled by the Aviatrix GW itself
Aviatrix Controller UI shows Azure peering information
Aviatrix Transit GW Routing Table
You can also verify Azure Spoke VNet Routing Table from the Aviatrix Controller UI
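If you prefer the SDK to the portal or controller UI, here is a hedged verification sketch using the Azure Python SDK (azure-identity and azure-mgmt-network). The subscription ID is a placeholder, and the resource group and route table names are assumptions based on the naming used in this walkthrough.

# Sketch: verify the VNet peering state and the spoke route table's next hop.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<your-subscription-id>"          # placeholder
net = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Peerings created on the spoke side by the Aviatrix Controller
for peering in net.virtual_network_peerings.list(
    "Aviatrix-Spoke-VNet-Central-1",   # resource group (assumed same as VNet name)
    "Aviatrix-Spoke-VNet-Central-1",   # VNet name
):
    print(peering.name, peering.peering_state, peering.allow_forwarded_traffic)

# RFC 1918 routes pointing at the Transit GW private IP
rt = net.route_tables.get("Aviatrix-Spoke-VNet-Central-1",
                          "Aviatrix-Spoke-VNet-Central-1-public")  # assumed RT name
for route in rt.routes:
    print(route.address_prefix, route.next_hop_type, route.next_hop_ip_address)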

Transit Network Validation/Testing

Now we will deploy two test VMs to validate the deployment. The VMs run CentOS and are given a public IP address so that we can SSH into them for testing purposes.

  • Azure-Test-VM-Spoke1 (Public: 13.67.225.200, Private: 10.161.0.4)
  • Azure-TestVM-Spoke2 (Public: 40.78.147.153, Private: 10.162.0.4)
Azure-Test-VM-Spoke1

Similarly a second Azure Test VM was created as shown in the screen-shot below

[shahzad@Azure-Test-VM-Spoke1 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.161.0.4  netmask 255.255.240.0  broadcast 10.161.15.255
        inet6 fe80::20d:3aff:fe9f:8c29  prefixlen 64  scopeid 0x20<link>
        ether 00:0d:3a:9f:8c:29  txqueuelen 1000  (Ethernet)
        RX packets 69315  bytes 39635034 (37.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 70959  bytes 14573682 (13.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 186  bytes 15872 (15.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 186  bytes 15872 (15.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[shahzad@Azure-Test-VM-Spoke1 ~]$ ping 10.162.0.4
PING 10.162.0.4 (10.162.0.4) 56(84) bytes of data.
64 bytes from 10.162.0.4: icmp_seq=1 ttl=63 time=1.95 ms
64 bytes from 10.162.0.4: icmp_seq=2 ttl=63 time=1.95 ms
64 bytes from 10.162.0.4: icmp_seq=3 ttl=63 time=2.24 ms
64 bytes from 10.162.0.4: icmp_seq=4 ttl=63 time=1.67 ms
64 bytes from 10.162.0.4: icmp_seq=5 ttl=63 time=2.19 ms
64 bytes from 10.162.0.4: icmp_seq=6 ttl=63 time=2.30 ms

Conclusion

Aviatrix makes it extremely simple to deploy an Azure Transit Network with the native VNet peering option. The strength of the solution is that enterprises can build a common and unified transit solution in other clouds such as AWS and GCP, and create a true multi-cloud network architecture with consistent operations and management options.

For more details, refer to Aviatrix documentation here.

Aviatrix User-VPN Deployment with AWS UDP Based NLB

The steps mentioned here are not officially supported yet and should be treated as a workaround only.

Introduction

AWS NLB Supports UDP

AWS recently started supporting the UDP protocol on its NLB (Network Load Balancer). Customers are now asking to front the Aviatrix User-VPN gateway with a UDP-based NLB. While this support will shortly be available in the product, there is a workaround to deploy such a topology.

Note: The Aviatrix User-VPN GW uses TCP:443 for incoming health-check probes

Deployment Overview

  • Create an Aviatrix GW (AGW) with the VPN Access option but without enabling cloud-native ELB integration
  • This will create the AGW; by default it listens on UDP port 1194
  • Manually create an AWS NLB in the AWS console with the UDP option and port 1194
  • Manually create the target group with the User-VPN AGW(s) in it
  • Make sure to override the health-check port and use TCP 443 for it (a hedged boto3 sketch of these manual steps follows this list)
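For reference, here is a boto3 sketch of the manual NLB steps above; this is not the Aviatrix-supported workflow, and the VPC, subnet and instance IDs are placeholders you would replace with your own.

# Sketch: UDP NLB + target group for the User-VPN GW, health check on TCP:443.
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-southeast-1")

nlb = elbv2.create_load_balancer(
    Name="user-vpn-udp-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],          # public subnet of the VPN GW
)["LoadBalancers"][0]

# UDP target group; health check overridden to TCP:443 because the
# Aviatrix User-VPN GW answers health probes there.
tg = elbv2.create_target_group(
    Name="user-vpn-gw-udp-1194",
    Protocol="UDP",
    Port=1194,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="443",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],       # the User-VPN GW instance
)

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="UDP",
    Port=1194,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)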

Deployment Details

The following screenshots show a working deployment.

Topology

Deploy Aviatrix User-VPN GW

Deploy an Aviatrix User-VPN GW with “VPN Access” checked and without enabling ELB using Aviatrix Controller.

The gateway config shows the following in the Aviatrix diagnostics section. Notice port 1194 here.

"VPN Service": {
"port": {
"1194": [
"up",
"reachable"
]
},

Create a new user and assign this user to the Aviatrix User-VPN GW

Create NLB in AWS Console

Create a UDP-based NLB using the AWS console. Once the NLB is created, you will see the following config in the AWS console. Notice the DNS name for this NLB; this is the name we will use later in the config.

Name: shahzad-udp-nlb
arn:aws:elasticloadbalancing:ap-southeast-1:481151252831:loadbalancer/net/shahzad-udp-nlb/a2e01e8690702d00
DNS name: shahzad-udp-nlb-a2e01e8690702d00.elb.ap-southeast-1.amazonaws.com
(A Record)

AWS Network Load Balancer

The following screen also shows the name of the NLB and the DNS name associated with it.

NLB Listener

The UDP-based AWS NLB is configured to listen on UDP port 1194, which is the same port the Aviatrix GW listens on. You can observe it in the following screen.

NLB Listener Details

Now we need to create a target group that will point to the Aviatrix User-VPN GW.

Health Check Configuration for Aviatrix GW

Make sure to modify the health-check port to 443 (by default it will be configured as 1194).

Modify User-VPN Certificate File

Download the User-VPN certificate (.ovpn) file and replace the IP address with the DNS name of the AWS NLB (a small sketch to automate this substitution follows the profile below).

client
comp-lzo
nobind
persist-key
persist-tun
auth-nocache
tun-mtu 1500
remote shahzad-udp-nlb-a2e01e8690702d00.elb.ap-southeast-1.amazonaws.com 1194
proto udp
mssfix
route-method exe
verb 3
route-delay 2
mute 20
reneg-sec 0
cipher AES-256-CBC
auth SHA512
key-direction 1
explicit-exit-notify
dev-type tun
dev tun
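If you have many profiles to edit, here is a small optional sketch that swaps the host in the "remote" line for the NLB DNS name. The profile file name is a placeholder; the DNS name is the one shown earlier in this post.

# Sketch: point a downloaded .ovpn profile's "remote" line at the NLB DNS name.
NLB_DNS = "shahzad-udp-nlb-a2e01e8690702d00.elb.ap-southeast-1.amazonaws.com"
profile_path = "user1.ovpn"  # placeholder: the profile downloaded from the controller

with open(profile_path) as f:
    lines = f.readlines()

with open(profile_path, "w") as f:
    for line in lines:
        if line.startswith("remote "):
            port = line.split()[2]          # keep the original port, swap only the host
            line = f"remote {NLB_DNS} {port}\n"
        f.write(line)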

Connect VPN User

Now we connect using this profile. I am using OpenVPN connect client version 2.7.1.100.

The user will be connected and will show up in the Aviatrix Controller UI as well.

Credits

Thank you Liming Xiang and Felipe Vasconcellos for reviewing and making adjustments to this post.

Azure Transit Network Design Patterns

Microsoft Azure is getting a lot of Cloud business in the Enterprise market. Understanding Multi-Cloud Network (MCN) architecture is a must for Network/Cloud architects, and Transit Networking is one of the Cloud Core elements of the MCN architecture.

Aviatrix offers two distinct design patterns to build a global transit in Azure and for cross-cloud connectivity. Both of them have pros and cons that will be discussed later.

Azure Transit Network Design Patterns

The general recommendation and best practice from Aviatrix is to deploy the “Aviatrix Transit with VNet GWs” design pattern.

Refer to my blog post to deploy the Aviatrix Controller from the Azure Marketplace.

Aviatrix Transit with VNet GWs

In this pattern, a transit network in Azure is built with a Transit gateway (aka hub gateway) in the centralized VNet (aka Transit VNet) and spoke gateways in the spoke VNets.

In this model, the Aviatrix Controller:

  • Deploys the Aviatrix Azure Transit GW in the transit VNet
  • Deploys the Aviatrix Azure Spoke GWs in the spoke VNets
  • Orchestrates VNet route creation and propagation
  • Connects the spoke VNets to the Transit/Hub GW
  • Controls and steers the traffic according to the desired state
  • Provides life-cycle management of the deployment
Aviatrix Transit with VNet GWs

Aviatrix recommendation is to use “Aviatrix Transit with VNet GW” design pattern

Aviatrix Transit with VNet GWs – Details

  • This model provides encrypted connections between Spoke and Transit VNets
    • This is extremely important and, in the majority of cases, the first requirement for enterprises
  • It can leverage Aviatrix ActiveMesh between Transit and Spoke VNets
    • ActiveMesh provides huge advantages by building multiple Active/Active encrypted links between Hub and Spoke VNets
    • It provides higher availability of service in case one or even two links go down
    • ActiveMesh links actively participate in packet forwarding, which results in increased throughput as compared to a single encrypted link
  • Enterprises can leverage the advanced troubleshooting and visibility options provided by Aviatrix
    • For example, the Aviatrix GW allows enterprises to take a tcpdump or packet capture of the traffic passing through
  • It allows enterprises to deploy a consistent enterprise network architecture across multiple regions and multiple clouds

Aviatrix Transit with VNet Peering

Aviatrix also offers building transit networks by natively peering the spoke VNets with the Aviatrix Transit GW. This model does not require any GWs in the spoke VNets.

In this model, the Aviatrix Controller:

  • Deploys the Aviatrix Azure Transit GW in the transit VNet
  • Orchestrates VNet route creation and propagation
  • Connects the spoke VNets to the Transit/Hub GW
  • Controls and steers the traffic according to the desired state
  • Provides life-cycle management of the deployment

Aviatrix recommendation is to use “Aviatrix Transit with VNet GW” design pattern

Aviatrix Transit with VNet Peering – Details

  • This model does not provide encryption between the spoke and transit VNets
  • Less visibility into the traffic and overall operations
  • There is no option to take a tcpdump or packet capture at the spoke VNet, unlike the other model
  • The Ops team depends on Azure tools and options to troubleshoot rather than using Aviatrix’s rich set of troubleshooting and visibility tools
  • No gateways in the spoke VNets means no IPSec tunnel charges between spoke and transit VNets
  • The Aviatrix ActiveMesh behavior is different in this model as compared to the previous one. Since there is no Aviatrix Spoke GW in the spoke VNet, the behavior is more like Primary/Backup links
    • If the spoke VNet has multiple route tables, the Aviatrix Controller will configure both the primary and backup Transit GWs as the default gateway for different route tables. By doing so, we can achieve load balancing for Spoke VNet outbound traffic
    • If the Spoke VNet has only one route table, this route table will point to the private IP of the primary Transit GW until that GW fails. In case of a primary Transit GW failure, the controller will automatically update the Spoke VNet’s route table to point to the private IP of the backup Transit GW
  • The throughput between the Transit and Spoke VNets is whatever Azure native VNet peering provides (at the time of writing this article, Azure had not published those numbers)
  • Aviatrix has done performance testing using different Azure VM sizes. Refer to the following results as a reference
Azure Throughput

Conclusion

Transit Networking is an integral part of the Multi-Cloud Network (MCN) architecture. It fits right into the Cloud Core pillar of the MCN architecture. Aviatrix offers two different Azure Transit Network design patterns to cater to various enterprise needs in this space.

In my next blog, I will discuss the Azure Transit Network deployment with native VNet peering in more detail.

Azure Aviatrix Controller Deployment

The Aviatrix Controller (AVX-CTRL) can be deployed in AWS, Azure, GCP or OCI. Only a single AVX-Controller is needed for an enterprise multi-cloud deployment; it can control, manage and operate resources in all the public clouds.

Recently I have noticed that more and more enterprises are asking to deploy the Aviatrix Controller in Azure, hence I decided to write this short blog with screenshots.

Azure Cloud Portal

This blog assumes that you are somewhat familiar with the Azure Cloud Portal.

1- Login to Azure Portal @ https://portal.azure.com
2- Click on the Marketplace link (this could be in a different place depending on your customization) as shown in the screen-shot here

Azure Marketplace

3- Search Aviatrix in Azure Marketplace (as shown in the screen-shot below)

Search Aviatrix and select Aviatrix Platform Bundle – PAYG

Here you need to select the Aviatrix Platform Bundle – PAYG.

After that, you will see multiple Aviatrix plans listed on the Azure Marketplace page. These plans are listed based on your enterprise needs and use-cases. In this deployment I have picked the “Aviatrix Networking Platform Bundle”.

| Aviatrix Software plan | Description |
| --- | --- |
| Multi-service units and SSL VPN users per BYOL | Each FQDN deployment, site-to-cloud tunnel, or multi-cloud tunnel is a service unit. You can configure as many SSL VPN users as needed to access your private cloud with MFA and SAML on the Aviatrix Secure Networking Platform. |
The description of the plan selected for this customer deployment

Deploy Aviatrix Controller VM in Azure

At this stage, Azure will create the Aviatrix Controller VM. All the steps from here onward are related to Azure’s Aviatrix VM creation.

Enter basic VM information. Select the default size for now.
Select the default disk option

Select the Resource Group (RG) for the Aviatrix Controller VM deployment. Aviatrix will create the NSG with the proper security rules automatically

You can leave the default setting here

Leave this section with default config.

Tags are important – apply at least a Name tag to this VM

At this stage the Aviatrix Controller VM deployment is underway. It will take about 3 to 5 minutes for this process to complete.

Conclusion

Now that your Aviatrix Controller VM is ready, you can log in to the UI by browsing to the public IP address of your controller. The default user name is admin and the default password is the private IP address of the Aviatrix Controller VM.

Aviatrix CloudWAN Deployment with AWS Global Accelerator

In this blog post, I explained what the Aviatrix CloudWAN solution is. Here, let us actually deploy it and appreciate the simplicity of the implementation.

Recently I worked with an enterprise (let’s call it netJoints Inc., as I cannot share the actual name of my customer) and connected their branches (Cisco routers) in various regions to the Aviatrix Global Transit Network.

I will show how to connect a branch in Singapore.

Step1 – Register Cisco Router to Aviatrix Controller

Step2 – Attach Cisco Router to Public Cloud Transit

In this step, the Aviatrix Controller automatically builds an IPSec tunnel to connect the branch router in Singapore to the Public Cloud Transit Network. This Transit Network could be:

1- Aviatrix Transit GW (AVX-TGW)
2- AWS Transit GW (AWS-TGW)

The AVX-TGW is the preferred option as it allows building a true Global Transit across multiple regions and multiple clouds. AWS-TGW is limited to a single region and, obviously, is only available in AWS, hence it is not recommended for enterprise multi-cloud customers.

Prepare to attach:

Attach to cloud now:

The following diagram shows Singapore-Br1 attached to the AVX-TGW

You can also get the IPSec VPN tunnel details under the Site2Cloud menu

Click on the tunnel to see the routes it has learned via BGP

Cisco Router Configuration

The following is what the Aviatrix Controller has configured in the background

IPSec Config

BGP Config

AWS Global Accelerator Configuration

The following is what the Aviatrix Controller configured in AWS
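For orientation only, here is a hypothetical boto3 sketch of roughly what an equivalent AWS Global Accelerator setup looks like when created manually. The controller's actual naming, listener ports and endpoint wiring may differ; the IPSec/NAT-T ports (UDP 500/4500), region and EIP allocation ID are assumptions.

# Hypothetical sketch: accelerator + UDP listener + endpoint group via boto3.
import uuid
import boto3

# The Global Accelerator API is only served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(
    Name="cloudwan-branch-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 500, "ToPort": 500},
                {"FromPort": 4500, "ToPort": 4500}],   # assumed IPSec/NAT-T ports
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# Endpoint group in the region hosting the transit gateway's public IP (EIP).
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="ap-southeast-1",
    EndpointConfigurations=[{"EndpointId": "eipalloc-0123456789abcdef0",  # placeholder
                             "Weight": 128}],
    IdempotencyToken=str(uuid.uuid4()),
)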

What is Aviatrix CloudWAN?

Problem Statement

Enterprises are moving their data centers, workloads, applications and even branches into the public cloud. They do not want to own and manage the physical infrastructure anymore.

The ground reality is that enterprises have also invested millions of $$$ in branch and access routers and the entire WAN ecosystem.

  • These branch routers are deployed in banks, ATM machines, retail store floors, etc.
  • These branches could be deployed within a country, a continent or across the globe.
  • In some cases they are owned by the enterprise, and in others by partners or managed service providers.

The adoption of the Cloud might not happen overnight for these enterprises. In the meantime, these branches do need secure and efficient connectivity to the Cloud.

So how do we solve this challenge?

If you talk to device vendors, most likely they will push you to use one of the following:

1- An SD-WAN solution (if they have one), or
2- Just create IPSec tunnels to the public cloud

Both of these have their own issues and problems. Let us examine them

SD-WAN

  • Requires re-architecting your entire WAN
  • In almost all cases, it requires you to purchase new hardware for all the branches
  • Usually the new SD-WAN hardware does not integrate or work with the existing branch hardware
  • So effectively you will be running two different WAN architectures in your enterprise for a long period of time
  • You are now talking about new compliance and audit approvals for your entire WAN architecture
  • Come up with a new governance model
  • Train pretty much everyone who touches the WAN or any WAN device
    • Although SD-WAN vendors claim that it is zero-touch, the reality is that it is not when it comes to troubleshooting and debugging issues in the branch
  • Requires a new operations model with new tools, an associated learning curve and integration challenges
  • You might not want to keep a physical presence in the long run. You might want to move the majority of the branches into the Cloud, but SD-WAN effectively forces you to use another hardware device on-prem
  • And you know how painful and lengthy a process that is 🙂

At least for this challenge, SD-WAN is not the answer

IPSec Tunnel to Public Cloud

Another solution they will offer is to create an IPSec tunnel to the public cloud. This sounds simple, but it is not. Let us examine it.

  • Creating a simple IPSec tunnel is painful and requires deep CLI and IPSec knowledge
  • If it is managed by a partner or MSP, then it becomes another people/process/cost discussion
  • For 5 to 10 branches, maybe it is OK to create IPSec tunnels manually, but what about hundreds or thousands of branches?
  • These routers do not have any REST API or Terraform support, so it is extremely hard to even automate the entire process
  • Let’s assume someone wrote a script for that, but then what about life-cycle management of the routers and supportability of the scripted solution?
  • Connecting to the Cloud is about more than just creating a simple IPSec tunnel
    • You need BGP to exchange routes
    • Provide transit connections to other VPCs/VNets
    • Provide secure connectivity to workloads
    • Also maintain QoS and performance, because what if a branch in Singapore is trying to connect to a workload in a San Francisco or Virginia VPC? The IPSec latency will just kill the application and result in an extremely bad user experience. That could lead to customer satisfaction issues and potentially revenue loss

Creating just an IPSec Tunnel is not the answer to this challenge either

Aviatrix CloudWAN Solution

In order to solve the challenges described above, Aviatrix has launched a new solution called CloudWAN for Cisco routers in branch, campus and other access locations. The solution leverages the Aviatrix centralized management plane and Aviatrix Global Transit or AWS-TGW, with native integration with AWS Global Accelerator.

The following are the highlights and advantages of the Aviatrix CloudWAN solution:

  • Manage and control Cisco routers in the branches from a centralized management point (the Aviatrix Controller), deployed in the public cloud of your choice (AWS, Azure, GCP, etc.)
  • On-ramp branch routers without any rip and replace (investment $$$ protection)
  • Securely connect Cisco routers to the public Cloud without any manual intervention or CLI interaction
    • Aviatrix provides a simple, point-and-click UI workflow as well as REST API options
  • Manage the life-cycle (config changes, config diff, audit, etc.) of Cisco routers directly from the public Cloud
  • Integrated with AWS Global Accelerator to provide optimal latency and QoS by routing traffic onto the AWS global backbone instead of hopping across multiple public ISPs
    • Branches get connected to the Cloud from the closest AWS point of presence using an Anycast IP and are attached to the Aviatrix global transit to seamlessly reach any cloud provider in any region
    • Branches can also reach enterprise data centers via Direct Connect or equivalent
  • Supports BGP to dynamically propagate routes from the Cloud to branch locations and vice versa
  • Provides a service insertion framework
    • For example, when branch traffic is entering or leaving the Cloud, it can be inspected by a Next-Generation Firewall (such as Palo Alto Networks) for deeper packet inspection
  • Provides connectivity to multiple clouds by using the Aviatrix Global Transit solution

Deployment Model Examples

The Aviatrix solution is extremely flexible, and there are many deployment models and design patterns available to enterprises based on their needs. The following shows just a few examples of how an enterprise might deploy the CloudWAN solution with and without AWS Global Accelerator.

Aviatrix CloudWAN without AWS Global Accelerator

This is the deployment model an organization could use if its branches are in one region and there is no need for an optimal-latency path to the Cloud.

In the example shown in the following diagram, access from the branches will add extra latency because those physical branches have to traverse multiple ISPs (internet hops with no QoS or SLA).

Aviatrix CloudWAN with AWS Global Accelerator

This is the best deployment model, as it allows an optimal-latency path from branch to Cloud. This is the model recommended for enterprises with a presence in different AWS Regions.

Here, AWS Global Accelerator provides anycast IPs (two by default, for redundancy reasons). The branch connects to the closest AWS edge location (CloudFront POP) and then the traffic rides the AWS backbone to reach the destination.

Credit

Hammad Alam for contributing to this post.