Networking has transformed over time. Enterprises have realized that the public cloud is the strategic direction for their IT infrastructure and applications. Service providers like Amazon, Google, and Microsoft are extremely efficient at providing networking, security, compute, and storage capabilities in their respective public clouds: AWS, GCP, and Azure.
“All clouds are not created equal.” To get the best of each and every cloud, there is a need to create seamless joints between those clouds, which should, like human body joints, work in conjunction with each other. They should work in harmony, in an orchestrated fashion. This joining has given birth to a new networking architecture: what we call “Multi-Cloud” today.
An Architect, CTO, or any technical decision maker has a huge responsibility to approve and adopt the “right network architecture,” one that is aligned with the business requirements.
We have seen enterprises that picked a wrong or compromised network architecture and then, in the long run, paid a price far greater than the initial cost to build and run the network.
Here are some nuggets for technical decision makers:
A bad architecture can cost you a lot in the long run; a lot more than what you spent building and running it
Do not build operations around architecture, build architecture around operations
Don’t make long term architecture decisions based on short-term problems
The right architecture is more important than the feature set
Simple architecture is the best architecture
Building an architecture and putting a design in place is a one-time deal; you end up running that design for years to come. If, as an architect, you have not made smart choices and built the correct architecture, your enterprise will be paying a lot more.
Also think about support: you need a trusted support partner who can troubleshoot with you.
Real World Customer Example
Let me give you an example. Here is an architecture a customer wanted to go with.
They wanted to use Aviatrix Transit to build encrypted transit peering within the AWS region and across multiple AWS regions and clouds (GCP). They also wanted to deploy AWS-TGW using the Aviatrix Controller, but only to attach the AWS-TGW to the AVX-Transit-GW (or ASN).
Essentially, all the red lines in the topology above were to be controlled and managed by Aviatrix. For VPC1, VPC2, and so on, they wanted to do it manually, thinking that it was just a one-time job.
To save a few $$$, they wanted to make just one compromise in the architecture, and I will explain how costly that one compromise could be in the long run:
The customer did not want to use Aviatrix’s AWS-TGW orchestration/control to attach the Spoke VPCs to the AWS-TGW.
Ripple Effect of a Single Compromise
Aviatrix Controller won’t be able to monitor and propagate existing and new routes
Application VPC routes must be updated manually (a sketch of this manual work follows this list)
AWS-TGW route tables must be updated manually
Transit VPC route table must be updated manually
Customer will lose the Aviatrix Controller’s TGW audit functionality
This could be huge operational burden on the team
Customer will not receive proper alerts about route updates, or warnings about incorrect or duplicate routes
No network correctness
If, in the future, Aviatrix builds functionality where any new route update requires admin approval, the customer might not be able to use that functionality
Besides that, there are other functionalities that Aviatrix is planning to build for AWS-TGW and Aviatrix-TGW that probably won’t work in such a network design
No way to do network segmentation for workloads in different VPCs
No Security Domain functionality available
Potential of AWS-TGW sprawl
Multiple AWS-TGWs might be needed for traffic separation
Huge management overhead
Some of the Aviatrix Flight-Path functionality might break in the future
If Aviatrix releases capacity planning and monitoring tools in the future, they might not work in this type of architecture
Adding a firewall to the architecture will not be possible; this could be a huge compliance and security risk for a customer with security-sensitive data
For the User-VPN use-case, the customer must accommodate VPN subnets manually on the AWS-TGW and Aviatrix Transit
Aviatrix support won’t be able to troubleshoot end-to-end because the VPCs were not attached by the Aviatrix Controller
Customer is taking the risk of not having end-to-end Encryption
AWS-TGW does not provide encryption for Spoke VPCs
This could be a moot point in this architecture, because the customer decided to use AWS-TGW for attachment, but it is important to call out for compliance, audit, GDPR, and security reasons
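To make the operational burden concrete, here is a minimal sketch of the kind of manual work every new Spoke VPC would require. This is illustrative only: all IDs are hypothetical placeholders, and this is not how the Aviatrix Controller itself performs the attachment.

```python
import boto3

# Every new Spoke VPC means repeating steps like these by hand (or via
# scripts that someone must then own and maintain). IDs are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Attach the Spoke VPC to the AWS-TGW.
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId="tgw-0123456789abcdef0",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
attachment_id = attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]

# 2. Point the Spoke VPC's route table at the TGW for remote ranges.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",
    TransitGatewayId="tgw-0123456789abcdef0",
)

# 3. Propagate the new attachment into the TGW route table.
ec2.enable_transit_gateway_route_table_propagation(
    TransitGatewayRouteTableId="tgw-rtb-0123456789abcdef0",
    TransitGatewayAttachmentId=attachment_id,
)

# ...and the Transit VPC route table, alerting, and audit trail still have
# to be handled separately, for every VPC, on every change.
```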
Credits
Thanks to the following people for providing input to this post:
Tomasz Klimczyk Don Leone Mark Cunningham Hammad Alam Nauman Mustafa Saad Mirza
Familiarity and basic know-how of at least one Cloud Provider is a must to attend the bootcamp. For instance, attendees should know the concepts of:
VPC/ VNet
Account/ Subscription
AMI / VM / EC2
VPN GW / VGW / Internet GW (IGW)
CIDR / Subnet / Routes
etc
List of items participants need to bring
A laptop with SSH/RDP tools installed
For Windows, make sure to have software like PuTTYgen to handle key/certificate-based login
The underlying Cloud for labs is AWS
The same labs are applicable to other Clouds such as Azure, GCP, and OCI.
The beauty of Aviatrix is that it hides all the complexities of the Clouds and provides a unified/normalized management, control, data, and operations plane.
All users must have an account with admin privileges in AWS. It could be a personal account that can be deleted after the bootcamp. For Azure and GCP, the instructor will use their own account to showcase multi-cloud use-cases.
List of items needed in the training room
Projector with HDMI or USB-C cable
White-board (it should not be in front of the projector; ideally it should be on the side)
Dry-erase markers
Easel Board with markers
Will be used by attendees to draw their design diagrams
The WiFi provided should allow outbound SSH/RDP
Misc.
Attendees are responsible for their Flight/Transportation/Lodging
Unless you have been living under a rock :-), you know that Microsoft Azure is growing really fast in the Enterprise market. Understanding Multi-Cloud Network (MCN) architecture is a must for Network/Cloud architects, and Transit Networking is one of the Cloud Core elements of MCN architecture.
This blog will discuss the deployment of an Azure Transit Network design pattern called “Aviatrix Transit with VNet Peering”.
We will be using the following topology to deploy Azure Transit Networking with native VNet peering, with the Aviatrix Transit GW in the Transit/Hub VNet.
Simple and Quick Deployment
The process of deploying the Azure Transit Network is extremely simple using the Aviatrix Controller UI. You need to perform three simple steps, and the entire setup can be up and running in about 30 minutes.
IP Addressing
I will be using the following IP addressing scheme for this deployment (also shown in the topology diagram above):
Aviatrix-Transit-VNet-Central 10.160.0.0/16
Aviatrix-Transit-GW-Central 10.160.0.4/22
Aviatrix-Spoke-VNet-Central-1 10.161.0.0/16
Aviatrix-Spoke-VNet-Central-2 10.162.0.0/16
Region Selection
I am using the US-Central region for this deployment, but it is not mandatory to deploy all the hubs and spokes in the same region; they can all be spread across multiple regions as well.
Step#1: Aviatrix Controller Creates Azure VNets and Resource Group (RG)
Use the Aviatrix Controller UI to create the Azure VNets. The process allows you to pick the VNet region and CIDR range. A corresponding, unique Azure Resource Group (RG) is also created at this step.
Behind the Scene
Here is what happens in the background in Azure (you can verify it from the Azure portal itself)
Aviatrix first creates a new Azure Resource Group
Then the Aviatrix Controller creates a VNet in that RG
The Aviatrix Controller also creates four /20 subnets from the /16 CIDR range
The Controller makes it easy by selecting the subnet ranges automatically; for example, for a /24 CIDR the Controller will create /28 subnets (see the sketch after this list)
The Controller then creates a User Route-Table and associates subnets with the newly created User Route-Table
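The carving logic is plain CIDR math. As a quick illustration (my own sketch, not Aviatrix code), Python's standard ipaddress module reproduces the same split: four subnets, each with a prefix 4 bits longer than the VNet's:

```python
import ipaddress

def carve_subnets(vnet_cidr: str, count: int = 4):
    """Split a VNet CIDR into `count` equal subnets whose prefix is
    4 bits longer than the parent's (e.g. /16 -> /20, /24 -> /28)."""
    vnet = ipaddress.ip_network(vnet_cidr)
    return list(vnet.subnets(prefixlen_diff=4))[:count]

for net in carve_subnets("10.161.0.0/16"):
    print(net)
# 10.161.0.0/20
# 10.161.16.0/20
# 10.161.32.0/20
# 10.161.48.0/20
```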
Let us take a look at the screenshots from the Azure portal for the above-mentioned bullet points.
Aviatrix first creates a new Azure Resource Group
Aviatrix Controller creates the VNet in the newly created RG
Azure Virtual Network (VNet) properties
Aviatrix creates four /20 subnets: 2 public and 2 private
User Route Table created without any routes; only “user” subnets are associated with the user route table. Public subnets are not associated with any route table at this stage.
Step#2: Aviatrix Controller Deploys Transit GW VM in Transit VNet:RG
Now deploy Aviatrix Transit GW VM in Azure using the Aviatrix Controller UI. Make sure to deploy this VM in the Azure Public subnet that was created in Step#1.
Aviatrix Controller deploys the AVX-Transit GW in the Hub/Transit VNet
The controller UI shows the progress of this deployment as shown below
[03:47:10] Starting to create ARM GW Aviatrix-Transit-GW-Central.
[03:47:11] Connected to Azure ARM.
[03:47:22] Deploying virtual machine...
[03:50:32] Deploy virtual machine done.
[03:50:33] License check is complete.
[03:50:33] Added GW info to Database.
[03:50:34] Aviatrix-Transit-GW-Central AVX SQS Queue created.
[03:50:34] Create message queue done.
[03:50:34] Initializing GW.....
[03:50:34] Copy configuration to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy new software to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy /etc/cloudx/cloudx_code_file.json.enc to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy /etc/cloudx/cloudx_code_key_file.txt to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy scripts to GW Aviatrix-Transit-GW-Central done.
[03:50:35] Copy sdk to GW Aviatrix-Transit-GW-Central done.
[03:50:39] Copy libraries to GW Aviatrix-Transit-GW-Central done.
[03:50:39] Installing software ....
[03:50:41] Issuing certificates....
[03:50:41] Issue certificates done
[03:51:14] GW software started.
[03:51:38] Software Installation done.
[03:51:40] Run self diagnostics done.
Behind the Scene
At this stage the Aviatrix Transit VM is deployed. Let me show you what happens behind the scenes by logging into the Azure Portal.
Aviatrix Transit Resource Group now has the AVX-Transit VM/GW
Pay attention to the above screenshot. The following are the resources that the Aviatrix Controller orchestrates behind the scenes:
Creates a new VNet
Creates the VM in the newly created VNet (see screenshot below)
Creates a network interface for the VM
Allocates a Public IP address to the VM
Creates an Availability Set and assigns it to the VM
Creates an NSG (Azure Network Security Group) and assigns it to the VM
Creates a storage account
Assigns the user route-table to the VM subnet (a quick way to verify all of this from code is sketched after this list)
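If you prefer to verify from code rather than clicking through the portal, here is a minimal sketch of my own (assuming the azure-identity and azure-mgmt-resource packages; the subscription ID and RG name are placeholders) that lists everything the Controller created in the resource group:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders: use your own subscription ID and the RG created in Step#1.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "Aviatrix-Transit-VNet-Central"  # hypothetical RG name

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# You should see the VM, NIC, public IP, availability set, NSG, storage
# account, VNet, and route table listed here.
for res in client.resources.list_by_resource_group(resource_group):
    print(f"{res.type:60} {res.name}")
```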
Following screen shows the Aviatrix Transit GW VM details
Aviatrix Transit GW VM details
Inbound rules added by Aviatrix Controller for the Transit-GW at the NIC level
Outbound rule added by Aviatrix Controller for the Transit-GW
NSG created by Aviatrix (all rules on one screen)
Step#3: Aviatrix Controller Attaches Spoke VNets via Native VNet Peering
Attach the first Spoke VNet to the Transit GW using the Aviatrix Controller UI, then repeat the same step for the second Spoke VNet as well.
Behind the Scene
The Aviatrix Controller creates the native peering
Creates the route tables
Installs RFC1918 routes in the Spoke VNets and points them to the Transit VNet
Native Peering Created by Aviatrix Controller
The following two screenshots show that the Aviatrix Controller automatically creates a bi-directional peering relationship between the Transit GW and the Spoke VNet.
Peering details from Aviatrix Transit to the Spoke-1 VNet
Peering details from the Spoke-1 VNet to the Aviatrix Transit GW VM
Aviatrix manages route table creation and life-cycle
The “Aviatrix-Spoke-VNet-Central-1 public” route table points to the Aviatrix Transit GW IP as the next hop
Similarly, the “Aviatrix-Spoke-VNet-Central-2 public” route table points to the Aviatrix Transit GW IP as the next hop
No routes are needed in the Transit VNet routing table, because routing in the Transit VNet is handled by the Aviatrix GW itself
The Aviatrix Controller UI shows the Azure peering information
Aviatrix Transit GW routing table
You can also verify the Azure Spoke VNet routing table from the Aviatrix Controller UI
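As with the resource group earlier, you can confirm the bi-directional peering from code instead of the portal. A minimal sketch, assuming azure-mgmt-network and reusing the VNet names from this post as resource group names (an assumption; adjust to your layout):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Check the peerings on both sides; a healthy pair shows state "Connected".
for rg, vnet in [
    ("Aviatrix-Transit-VNet-Central", "Aviatrix-Transit-VNet-Central"),
    ("Aviatrix-Spoke-VNet-Central-1", "Aviatrix-Spoke-VNet-Central-1"),
]:
    for peering in client.virtual_network_peerings.list(rg, vnet):
        print(vnet, "->", peering.name, peering.peering_state)
```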
Transit Network Validation/Testing
Now we will deploy two test VMs to validate the deployment. The VMs will run CentOS and will get public IP addresses so that we can SSH into them for testing purposes.
Aviatrix makes it extremely simple to deploy an Azure Transit Network with the native VNet peering option. The strength of the solution is that enterprises can build a common, unified transit solution in other clouds such as AWS and GCP, and create a true multi-cloud network architecture with consistent operations and management options.
Create a new user and assign this user to the Aviatrix User-VPN GW
Create NLB in AWS Console
Create a UDP-based NLB using the AWS console. Once the NLB is created, you will notice the following config in the AWS console. Note the DNS name for this NLB; this is the name we will use later in the config.
Name: shahzad-udp-nlb
arn:aws:elasticloadbalancing:ap-southeast-1:481151252831:loadbalancer/net/shahzad-udp-nlb/a2e01e8690702d00
DNS name: shahzad-udp-nlb-a2e01e8690702d00.elb.ap-southeast-1.amazonaws.com
(A Record)
AWS Network Load Balancer
The following screen also shows the name of the NLB and the DNS name associated with it.
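If you would rather script this step than click through the console, a rough boto3 equivalent looks like the following. The subnet ID is a placeholder; the console flow above is what this post actually used.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-southeast-1")

# Create an internet-facing Network Load Balancer; AWS assigns the
# "<name>-<hash>.elb.<region>.amazonaws.com" DNS name shown above.
nlb = elbv2.create_load_balancer(
    Name="shahzad-udp-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],  # placeholder subnet ID
)
print(nlb["LoadBalancers"][0]["DNSName"])
```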
NLB Listener
By default, the AWS UDP-based NLB listens on UDP port 1194, which is also the port the Aviatrix GW listens on. You can observe it in the following screen.
NLB Listener Details
Now we need to create a target group that will point to the Aviatrix User-VPN GW.
Health Check Configuration for Aviatrix GW
Make sure to modify the health-check port to 443 (by default it will be configured as 1194). A scripted version of this target group and listener setup is sketched below.
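Continuing the boto3 sketch from above (again with placeholder IDs), the target group carries the health-check override. Note that UDP target groups must health-check over TCP, which is why port 443 works here.

```python
# Target group for the Aviatrix User-VPN GW: traffic on UDP/1194,
# health checks on TCP/443 (the Aviatrix GW's HTTPS port).
tg = elbv2.create_target_group(
    Name="aviatrix-uservpn-tg",          # hypothetical name
    Protocol="UDP",
    Port=1194,
    VpcId="vpc-0123456789abcdef0",       # placeholder VPC ID
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="443",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the Aviatrix User-VPN GW instance and wire up the listener.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}],  # placeholder instance ID
)
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="UDP",
    Port=1194,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```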
Modify User-VPN Certificate File
Download the User-VPN certificate (.ovpn profile) file and replace the IP address with the DNS name of the AWS NLB. After the edit, the profile looks like this:
client
comp-lzo
nobind
persist-key
persist-tun
auth-nocache
tun-mtu 1500
remote shahzad-udp-nlb-a2e01e8690702d00.elb.ap-southeast-1.amazonaws.com 1194
proto udp
mssfix
route-method exe
verb 3
route-delay 2
mute 20
reneg-sec 0
cipher AES-256-CBC
auth SHA512
key-direction 1
explicit-exit-notify
dev-type tun
dev tun
Connect VPN User
Now we connect using this profile. I am using the OpenVPN Connect client, version 2.7.1.100.
The user will be connected and will show up in the Aviatrix Controller UI as well.
Credits
Thank you Liming Xiang and Felipe Vasconcellos for reviewing and making adjustments to this post.
Microsoft Azure is winning a lot of Cloud business in the Enterprise market. Understanding Multi-Cloud Network (MCN) architecture is a must for Network/Cloud architects, and Transit Networking is one of the Cloud Core elements of MCN architecture.
Aviatrix offers two distinct design patterns for building global transit in Azure and for cross-cloud connectivity. Both have pros and cons, which are discussed later.
Azure Transit Network Design Patterns
The general recommendation and best practice from Aviatrix is to deploy the “Aviatrix Transit with VNet GWs” design pattern.
In this pattern, a transit network in Azure is built with a Transit gateway (aka hub gateway) in a centralized VNet (aka Transit VNet) and Spoke gateways in the Spoke VNets.
In this model the Aviatrix Controller:
Deploys the Aviatrix Azure Transit GW in the Transit VNet
Deploys the Aviatrix Azure Spoke GWs in the Spoke VNets
Orchestrates VNet route creation and propagation
Connects the Spoke VNets to the Transit/Hub GW
Controls and steers the traffic according to the desired state
Provides life-cycle management of the deployment
Aviatrix recommendation is to use “Aviatrix Transit with VNet GW” design pattern
Aviatrix Transit with VNet GWs – Details
This model provides encrypted connections between Spoke and Transit VNets
This is extremely important and, in the majority of cases, the first requirement for enterprises
It can leverage Aviatrix ActiveMesh between Transit and Spoke VNets
ActiveMesh provides huge advantages by building multiple Active/Active encrypted links between Hub and Spoke VNets
It provides higher availability of service in case one or even two links go down
ActiveMesh links actively participate in packet forwarding, which results in increased throughput compared to a single encrypted link
Enterprises can leverage the advanced troubleshooting and visibility options provided by Aviatrix
For example, the Aviatrix GW allows enterprises to take a tcpdump or packet capture of the traffic passing through
It allows enterprises to deploy a consistent enterprise network architecture across multiple regions and multiple clouds
Aviatrix Transit with VNet Peering
Aviatrix also offers building transit networks by natively peering the Spoke VNets with the Aviatrix Transit GW. This model does not require any GWs in the Spoke VNets.
In this model the Aviatrix Controller:
Deploys the Aviatrix Azure Transit GW in the Transit VNet
Orchestrates VNet route creation and propagation
Connects the Spoke VNets to the Transit/Hub GW
Controls and steers the traffic according to the desired state
Provides life-cycle management of the deployment
Aviatrix Transit with VNet Peering – Details
This model does not provide encryption between spoke and transit VNets
There is less visibility into the traffic and overall operations
There is no option to take a tcpdump or packet capture at the Spoke VNet, unlike in the other model
The Ops team depends on Azure tools and options to troubleshoot, rather than using Aviatrix’s rich set of troubleshooting and visibility tools
No gateways in the Spoke VNets means no IPSec tunnel charges between Spoke and Transit VNets
The Aviatrix ActiveMesh behavior is different in this model compared to the previous one: since there is no Aviatrix Spoke GW in the Spoke VNet, the behavior is more like primary/backup links
If the Spoke VNet has multiple route tables, the Aviatrix Controller will configure both the primary and the backup Transit GW as the default gateway for different route tables. By doing so, we can achieve load balancing for Spoke VNet outbound traffic
If the Spoke VNet has only one route table, that route table will point to the private IP of the primary Transit GW until that GW fails. On primary Transit GW failure, the Controller will automatically update the Spoke VNet’s route table to point to the private IP of the backup Transit GW (a sketch of this route update follows this list)
The throughput between Transit and Spoke VNet is what Azure native VNet peering provides (At the time of writing this article, Azure did not publish those numbers)
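To make the failover behavior concrete, here is a minimal sketch of the route update the Controller performs conceptually on primary GW failure, expressed with azure-mgmt-network. The names and IPs are placeholders, and this is my own illustration, not the Controller's actual implementation.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# On primary Transit GW failure, repoint the Spoke route table's default
# route from the primary GW's private IP to the backup GW's private IP.
client.routes.begin_create_or_update(
    resource_group_name="Aviatrix-Spoke-VNet-Central-1",        # placeholder RG
    route_table_name="Aviatrix-Spoke-VNet-Central-1-public",    # placeholder
    route_name="default-to-transit",                            # placeholder
    route_parameters={
        "address_prefix": "0.0.0.0/0",
        "next_hop_type": "VirtualAppliance",
        "next_hop_ip_address": "10.160.0.5",  # backup Transit GW private IP
    },
).result()
```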
Aviatrix has done performance testing using different Azure VM sizes. Refer to the following results as a reference:
Azure Throughput
Conclusion
A Transit Network is an integral part of Multi-Cloud Network (MCN) architecture; it fits right into the Cloud Core pillar of MCN architecture. Aviatrix offers two different Azure Transit Network design patterns to cater to various enterprise needs in this space.
The Aviatrix Controller (AVX-CTRL) can be deployed in AWS, Azure, GCP, or OCI. Only one AVX-Controller is needed for an enterprise multi-cloud deployment; a single AVX-Controller can control, manage, and operate resources in all the public clouds.
Recently I have noticed more and more enterprises asking to deploy the Aviatrix Controller in Azure, hence this short blog with screenshots.
Azure Cloud Portal
This blog assumes that you are somewhat familiar with the Azure Cloud Portal.
1- Login to the Azure Portal @ https://portal.azure.com
2- Click on the Marketplace link (this could be in a different place depending on your customization) as shown in the screen-shot here
Azure Marketplace
3- Search Aviatrix in Azure Marketplace (as shown in the screen-shot below)
Search Aviatrix and select Aviatrix Platform Bundle – PAYG
Here you need to select the Aviatrix Bundle – PAYG.
After that, you will see multiple Aviatrix plans listed on the Azure Marketplace page. The plans are listed based on your enterprise needs and use-cases. In this deployment I have picked “Aviatrix Networking Platform Bundle”.
Aviatrix Software plan: Multi-service units and SSL VPN users per BYOL
Description: Each FQDN deployment, site-to-cloud tunnel, or multi-cloud tunnel is a service unit. You can configure as many SSL VPN users as you need to access your private cloud with MFA and SAML on the Aviatrix Secure Networking Platform.
The above is the description of the plan selected for this customer deployment.
Deploy Aviatrix Controller VM in Azure
At this stage, Azure will create the Aviatrix Controller VM. All the steps from here onward are related to Azure’s Aviatrix VM creation.
Enter the basic VM information. Select the default size for now.
Select the default disk selection option
Select the Resource Group (RG) for the Aviatrix Controller VM deployment. Aviatrix will create the NSG with the proper security automatically
You can leave the default settings here
Leave this section with the default config
Tags are important: apply at least a Name tag to this VM
At this stage the Aviatrix Controller VM deployment is underway. It will take about 3 to 5 minutes for this process to complete.
Conclusion
Now that your Aviatrix Controller VM is ready, you can log in to the UI by browsing to the Public IP address of your Controller. The default user name is admin, and the default password is the Private IP address of the Aviatrix Controller VM.
In an earlier blog post I explained what the Aviatrix CloudWAN solution is. Here, let us actually deploy it and appreciate the simplicity of the implementation.
Recently I worked with an enterprise (let’s call it netJoints Inc., as I cannot share my customer’s actual name) and connected their branches (Cisco routers) in various regions to the Aviatrix Global Transit Network.
I will show how to connect a branch in Singapore.
Step1 – Register Cisco Router to Aviatrix Controller
Step2 – Attach Cisco Router to Public Cloud Transit
In this step the Aviatrix Controller automatically builds an IPSec tunnel to connect the branch router in Singapore to the Public Cloud Transit Network. This Transit network could be either an Aviatrix Transit GW (AVX-TGW) or an AWS-TGW.
AVX-TGW is the preferred option, as it allows you to build a true Global Transit across multiple regions and multiple clouds. AWS-TGW is limited to a single region and, obviously, is only available in AWS; hence it is not recommended for enterprise multi-cloud customers.
Prepare to attach:
Attach to cloud now:
Following diagram shows Singapore-Br1 attached to AVX-TGW
You can also get IPSec VPN tunnel details under Site2Cloud menu
Click on the tunnel to see the routes it learned via BGP
Cisco Router Configuration
The following is what the Aviatrix Controller has configured in the background:
IPSec Config
BGP Config
AWS Global Accelerator Configuration
The following is what the Aviatrix Controller configured in AWS:
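For reference, here is a rough boto3 sketch of what an equivalent Global Accelerator setup involves. This is my own illustration with placeholder names and endpoint IDs, not the Controller's actual implementation; note that the Global Accelerator API is served only from us-west-2.

```python
import boto3

# The Global Accelerator control-plane API lives in us-west-2 regardless
# of where your endpoints are.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

# 1. Accelerator: gives the branch two static anycast IPs.
acc = ga.create_accelerator(Name="cloudwan-accelerator", Enabled=True)
acc_arn = acc["Accelerator"]["AcceleratorArn"]

# 2. Listener for the IPSec/UDP traffic from the branch routers.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 500, "ToPort": 500},       # IKE
                {"FromPort": 4500, "ToPort": 4500}],    # NAT-T
)

# 3. Endpoint group pointing at the transit gateway's public endpoint
#    (here an EIP allocation, as a placeholder) in the target region.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="ap-southeast-1",
    EndpointConfigurations=[{"EndpointId": "eipalloc-0123456789abcdef0"}],
)
```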
Enterprises are moving their data centers, workloads, applications, and even branches into the public cloud. They do not want to own and manage physical infrastructure anymore.
The ground reality is that enterprises have also invested millions of $$$ in branch and access routers and the entire WAN ecosystem.
These branch routers are deployed in banks, ATMs, retail store floors, etc.
These branches could be deployed within a country, a continent, or across the globe.
In some cases they are owned by the enterprise, and in others by partners or managed service providers.
The adoption of the Cloud might not happen overnight for these enterprises. In the meantime, these branches do need secure and efficient connectivity to the Cloud.
So how do we solve this challenge?
If you talk to device vendors, most likely they will push you to use one of the following:
1- An SD-WAN solution (if they have one), or
2- Just create IPSec tunnels to the public cloud
Both of these have their own issues and problems. Let us examine them
SD-WAN
Requires re-architecting your entire WAN
In almost all cases, it requires you to purchase new hardware for all the branches
Usually the new SD-WAN hardware does not integrate or work with the existing branch hardware
So effectively you will be running two different WAN architectures in your enterprise for a long period of time
You are now talking about new compliance and audit approval for your entire WAN architecture
You must come up with a new governance model
You must train pretty much everyone who touches the WAN or any WAN device
Although SD-WAN vendors claim that it is zero-touch, the reality is that it is not when it comes to troubleshooting and debugging issues in the branch
It requires a new operations model, with new tools, an associated learning curve, and integration challenges
You might not want to keep a physical presence in the long run. You might want to move the majority of the branches into the Cloud, but SD-WAN effectively forces you to use another hardware device on-prem
And you know how painful and lengthy a process that is 🙂
SD-WAN is not the answer to this challenge at least
IPSec Tunnel to Public Cloud
The other solution they will offer is to create an IPSec tunnel to the public cloud. This sounds simple, but it is not. Let us examine it:
Creating even a simple IPSec tunnel is painful and requires deep CLI and IPSec knowledge
If it is managed by a partner or MSP, then it is another people-and-process cost discussion
For 5 to 10 branches, maybe it is OK to create IPSec tunnels by hand, but what about hundreds or thousands of branches?
These routers do not have any REST API or Terraform support, so it is extremely hard to automate the entire process
Let’s assume that someone wrote a script for that; but then what about the life-cycle management of the routers and the supportability of a scripted solution?
Connecting to the Cloud goes beyond just creating a simple IPSec tunnel
You need BGP to exchange routes
Provide transit connections to other VPCs/VNets
Provide secure connectivity to workloads
Also maintain QoS and performance: what if a branch in Singapore is trying to connect to a workload in a San Francisco or Virginia VPC? The IPSec latency will just kill the application, resulting in an extremely bad user experience. That could lead to customer satisfaction issues and potentially revenue loss
Creating just an IPSec Tunnel is not the answer to this challenge either
Aviatrix CloudWAN Solution
To solve the challenges described above, Aviatrix has launched a new solution called CloudWAN for Cisco routers in branch, campus, and other access locations. The solution leverages the Aviatrix centralized management plane and Aviatrix Global Transit or AWS-TGW, with native integration with AWS Global Accelerator.
The following are the highlights and advantages of the Aviatrix CloudWAN solution:
Manage and control Cisco routers in the branches from a centralized management point (the Aviatrix Controller), deployed in the public cloud of your choice (AWS, Azure, GCP, etc.)
On-ramp branch routers without any rip-and-replace (investment $$$ protection)
Securely connect Cisco routers to the public Cloud without any manual intervention or CLI interaction
Aviatrix provides a simple, point-and-click UI workflow as well as REST API options
Manage the life-cycle (config changes, config diff, audit, etc.) of Cisco routers directly from the public Cloud
Integrated with AWS Global Accelerator to provide optimal latency and QoS by routing traffic onto the AWS global backbone instead of hopping across multiple public ISPs
Branches get connected to the Cloud from the closest AWS point of presence using an Anycast IP and attached to the Aviatrix global transit to seamlessly reach any cloud provider in any region
Branches can also reach enterprise data centers via Direct Connect or equivalent
Supports BGP to dynamically propagate routes from the Cloud to branch locations and vice versa
Provides a service insertion framework
For example, when branch traffic is entering or leaving the Cloud, it can be inspected by a Next-Generation Firewall (such as Palo Alto Networks) for deeper packet inspection
Provides connectivity to multiple clouds by using the Aviatrix Global Transit solution
Deployment Model Examples
The Aviatrix solution is extremely flexible, and there are many deployment models and design patterns available to enterprises based on their needs. The following shows just a few examples of how an enterprise might deploy the CloudWAN solution with and without AWS Global Accelerator.
Aviatrix CloudWAN without AWS Global Accelerator
This is the deployment model an organization could use if its branches are in one region and there is no need for an optimal-latency path to the Cloud.
In the example shown in the following diagram, access from the branches adds extra latency because traffic from those physical branches has to traverse multiple ISPs (internet hops with no QoS or SLA).
Aviatrix CloudWAN with AWS Global Accelerator
This is the best deployment model: it provides an optimal-latency path from branch to Cloud, and it is the model recommended for enterprises with a presence in different AWS regions.
Here AWS Global Accelerator provides anycast IPs (two by default, for redundancy reasons). The branch connects to the closest AWS edge location (CloudFront POP), and the traffic then rides the AWS global backbone to reach its destination.