Aviatrix Oracle Cloud (OCI) Transit Network Setup

In the previous blog post, we performed the initial OCI onboarding. Here we will show how to build a transit network in OCI, an architecture some architects refer to as hub-and-spoke. This is the common cloud architecture that Aviatrix provides across all major clouds such as AWS, Azure, and GCP, and it delivers consistent operational tools and visibility across different cloud networks.

Business Requirement to connect GCP and OCI

Our objective is to build the following topology, where the same common transit architecture is deployed in GCP as well as OCI. The business requirement is to connect to GCP to utilize ML/analytics tools that are not available in OCI. The GCP transit is already built using Aviatrix technology, so we will focus on building the OCI transit network and then connecting it to GCP with encrypted transit peering via the Aviatrix Controller.

Multi-Cloud Common Architecture in GCP and OCI

Aviatrix Transit Gateway Deployment in OCI

As a first step, we logged into the controller and launched the workflow to deploy the Aviatrix Transit VCN Gateway (OCI calls a VPC a VCN: Virtual Cloud Network). The VCNs were built in the previous blog post here.

Notice the ease of deploying it in the region of your choice with the size that your business requires. Also notice that the public subnet was automatically created by Aviatrix, so you do not need to create it from scratch.

Launch Aviatrix Gateway (AVX-Transit-GW) in OCI VCN

Once you hit the Create button, the Aviatrix Controller communicates with OCI and deploys the Aviatrix Gateway. The following output shows the process of creating this transit gateway.

Aviatrix Controller Output to deploy Transit Gateway

[21:42:57] Starting to create OCI GW OCI-Transit-GW-Ashburn.
[21:42:58] Connected to Oracle OCI.
[21:42:58] Deploying virtual machine…
[21:44:32] Deploy virtual machine done.
[21:44:32] Configure virtual machine.
[21:44:33] License check is complete.
[21:44:33] Added GW info to Database.
[21:44:35] OCI-Transit-GW-Ashburn AVX SQS Queue created.
[21:44:35] Create message queue done.
[21:44:35] Initializing GW…..
[21:45:06] Copy configuration to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy new software to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy /etc/cloudx/cloudx_code_file.json.enc to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy /etc/cloudx/cloudx_code_key_file.txt to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy scripts to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy sdk to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Copy libraries to GW OCI-Transit-GW-Ashburn done.
[21:45:06] Installing software ….
[21:45:06] Issuing certificates….
[21:45:06] Issue certificates done
[21:45:15] GW software started.
[21:45:29] Software Installation done.

You can now log in to the OCI console and see the instance deployed in the Finance compartment (OCI compartments are roughly comparable to departments).

Aviatrix Transit Gateway Deployed in OCI

The following output shows the instance type and other information gathered directly from the OCI console.

Instance Information
Availability Domain: RGRl:US-ASHBURN-AD-2
Image: Published Image: aviatrix_gateway_0415_1017_20190820
Fault Domain: FAULT-DOMAIN-2
OCID: ...dsmska
Region: iad
Launched: Thu, 17 Oct 2019 04:43:00 UTC
Shape: VM.Standard2.2
Compartment: shahzadali (root)/Finance-Compartment
Virtual Cloud Network: OCI-Transit-VCN-Ashburn
Launch Mode: NATIVE
Maintenance Reboot: -
Primary VNIC Information
Private IP Address:
Internal FQDN: av-gw-oci-transit-gw-ashburn...
Public IP Address:
Subnet: OCI-Transit-VCN-Ashburn-public-subnet
Network Security Groups: aviatrix-security-group
This instance's traffic is controlled by its firewall rules in addition to the associated Subnet's security lists and the VNIC's network security groups.
Launch Options
NIC Attachment Type: VFIO
Firmware: UEFI_64

An important point we would like to highlight: to get all of that information, you do not actually need to log in to the OCI console. It is all available from the Aviatrix Controller UI itself. This is a great operational benefit, because operators no longer need to learn each cloud and its constructs.

Aviatrix Spoke VCN Deployment in OCI

Aviatrix Spoke Gateway Deployment in OCI

[21:58:12] Starting to create OCI GW OCI-Spoke-GW1-Ashburn.
[21:58:12] Connected to Oracle OCI.
[21:58:12] Deploying virtual machine…
[21:59:46] Deploy virtual machine done.
[21:59:46] Configure virtual machine.
[21:59:47] License check is complete.
[21:59:47] Added GW info to Database.
[21:59:49] OCI-Spoke-GW1-Ashburn AVX SQS Queue created.
[21:59:49] Create message queue done.
[21:59:49] Initializing GW…..
[22:00:20] Copy configuration to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy new software to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy /etc/cloudx/cloudx_code_file.json.enc to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy /etc/cloudx/cloudx_code_key_file.txt to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy scripts to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy sdk to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Copy libraries to GW OCI-Spoke-GW1-Ashburn done.
[22:00:20] Installing software ….
[22:00:21] Issuing certificates….
[22:00:21] Issue certificates done
[22:00:28] GW software started.
[22:00:42] Software Installation done.

Enable ActiveMesh For Aviatrix OCI Transit and Spoke Gateways

AVX-CTRL –> Gateway –> Enable ActiveMesh Mode Info

Connect AVX-Spoke GW to AVX-Transit GW

Aviatrix Transit VCN and Transit GW Routing Tables

The OCI Transit VCN route table is empty because all routing is done by the Transit GW
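The hub-and-spoke behavior can be sketched with a toy route-table model: spokes never peer directly, and every inter-spoke route points at the transit gateway. The names and CIDRs below are illustrative, not taken from this deployment.

```python
# Toy model of hub-and-spoke routing: each spoke reaches other spokes
# only through the transit gateway (the hub). Illustrative names/CIDRs.
transit = "OCI-Transit-GW"
spokes = {"Spoke1": "10.1.0.0/16", "Spoke2": "10.2.0.0/16"}

def spoke_route_table(name):
    # Each spoke installs the other spokes' CIDRs with the hub as next hop.
    return {cidr: transit for other, cidr in spokes.items() if other != name}

print(spoke_route_table("Spoke1"))  # {'10.2.0.0/16': 'OCI-Transit-GW'}
```

Adding a new spoke only requires one new attachment to the hub, which is why the design "stamps out" cleanly across clouds.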

Aviatrix Spoke VCN and Spoke GW Routing Tables

Transit GW Peering between GCP-Transit-GW and OCI-Transit-GW

After about a minute, the transit peering comes UP

Test Topology

gcp-vm -> gcp-spoke-gw -> gcp-transit-gw -> oci-transit-gw -> oci-spoke-gw -> oci-vm

Traceroute from GCP Test VM to OCI-Test VM
Ping from GCP Test VM to OCI-Test VM


Aviatrix allows a common topology across multiple clouds. This makes enterprise network and security deployments seamless: there are no surprises, and IT admins do not need to know the underlying artifacts of each cloud.


Multi-Cloud Transit Design: Interworking with On-Prem and/or Cloud Devices/Services

The Aviatrix solution can take care of networking, security, and network segmentation for workloads deployed in public clouds by deploying a transit networking solution using Aviatrix transit and spoke gateways. It is a standard, stamp-out (copy/paste and repeat) design that is applicable to any public cloud (e.g. AWS, GCP, Azure, and OCI).

There are situations where there is a need to connect to 3rd-party devices or services to exchange routes or to provide additional connectivity. These services and devices could be in the public cloud or on-premises. In those situations, the Aviatrix transit can also connect to them in a secure, encrypted fashion (e.g. using L3 IPSec).

Customer Scenario

The following topology demonstrates a scenario where a business is using a Cisco CSR (it could be any service or instance from any vendor that supports IPSec) in the cloud to terminate LISP. By virtue of using LISP, the business is forced into a sub-optimal design where an additional hop is necessary.

Aviatrix Setup

For this setup, we assume that you have already deployed a Cisco CSR from the AWS Marketplace and have:

  • Created the Transit VPC and Spoke VPC directly from the Aviatrix Controller UI (no need to log in to the AWS console)
  • Deployed the AVX Transit GW and AVX Spoke GW in their respective VPCs using the Aviatrix Controller UI
  • Followed the Aviatrix Transit Network workflow to connect to an external 3rd-party device (e.g. a Cisco CSR)
    • For external connectivity, eBGP is the preferred option and is what we use here
    • Connecting to the external device via static routes is also possible, but you must first enable “ActiveMesh” on the Aviatrix Transit and Spoke Gateways
  • Attached the Spoke VPC to the Transit VPC

Build the IPSec Tunnel From Aviatrix Transit Gateway to Cisco CSR

Configure the Aviatrix Controller as shown below.

Notice we are using the default IPSec algorithms. Our recommendation is to start with the defaults and change them later if needed.

After you have completed the setup above, an entry will automatically appear in the Site2Cloud (S2C) section of the AVX Controller. (The screenshot shows the tunnel UP, which is not correct; the tunnel will be in the down state at this point.)

Click on the Name above and download the IPSec config.

Use Generic as the vendor.
Aviatrix Site2Cloud configuration. 
 This connection has a single IPsec tunnel between customer  gateway and Aviatrix gateway in the cloud.
 Tunnel #1
1: Internet Key Exchange Configuration
 Configure the IKE SA as follows
 Version                  : 1
 Authentication Method    : Pre-Shared Key 
 Pre-Shared Key           : Aviatrix1!
 Encryption Algorithm     : AES-256-CBC
 Authentication Algorithm : SHA-1
 Lifetime                 : 28800 seconds
 Phase 1 Negotiation Mode : main
 Perfect Forward Secrecy  : Diffie-Hellman Group 2
 DPD threshold            : 10 seconds
 DPD retry interval       : 3 seconds
 DPD retry count          : 3 
2: IPSec Configuration
 Configure the IPSec SA as follows:
 Protocol                 : esp
 Authentication Algorithm : hmac-sha1
 Encryption Algorithm     : AES-256-CBC
 Authentication Algorithm : HMAC-SHA-1
 Lifetime                 : 28800 seconds
 Mode                     : tunnel
 Perfect Forward Secrecy  : Diffie-Hellman Group 2 
IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway:

 TCP MSS Adjustment       : 1387 bytes
 Clear Don't Fragment Bit : enabled
 Fragmentation            : Before encryption 
3: Tunnel Interface Configuration
 Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. Traffic that should go through the tunnel should be specified by following your gateway's configuration guide, using the information below.
Gateway IP addresses:
 Customer Gateway                :
 Aviatrix Gateway Public IP      :
 Aviatrix Gateway Private IP     : 
 Customer Network(s)             : N/A for transit network
 Cloud Networks(s)               : N/A for transit network 
Tunnel Inside IP addresses:
 Customer Gateway                :
 Aviatrix Gateway                : 
Configure your tunnel to fragment at the optimal size:
 Tunnel interface MTU     : 1436 bytes 
4. Border Gateway Protocol (BGP) Configuration:
 The Border Gateway Protocol (BGPv4) is used to exchange routes from the VPC to on-prem network. Each BGP router has an Autonomous System Number (ASN).
BGP Configuration:
 BGP Mode                        : true
 Customer Gateway ASN            : 65002
 Aviatrix Gateway ASN            : 65003 
Configure BGP to receive routes from on-prem network. Aviatrix Transit gateway will announce prefixes to your on-prem  gateway based upon the spokes you have attached. For vendor specific instructions, please go to the following URL:
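The MSS and MTU values in the template above come from IPSec encapsulation overhead. A back-of-the-envelope sketch for ESP in tunnel mode with AES-CBC and HMAC-SHA-1 (the proposal in this template) is shown below; the exact numbers a vendor template recommends also account for alignment and possible NAT-T encapsulation, so treat this as an approximation rather than the formula Aviatrix uses.

```python
# Approximate per-packet ESP overhead for AES-256-CBC + HMAC-SHA-1, tunnel mode.
OUTER_IP = 20       # new outer IPv4 header added by tunnel mode
ESP_HEADER = 8      # SPI (4 bytes) + sequence number (4 bytes)
AES_IV = 16         # CBC initialization vector (one AES block)
ESP_TRAILER = 2     # pad-length + next-header bytes
SHA1_ICV = 12       # truncated 96-bit integrity check value
AES_BLOCK = 16      # ciphertext must be a multiple of the AES block size

def esp_overhead(inner_len):
    # Padding aligns (inner packet + trailer) to the cipher block size.
    pad = (-(inner_len + ESP_TRAILER)) % AES_BLOCK
    return OUTER_IP + ESP_HEADER + AES_IV + pad + ESP_TRAILER + SHA1_ICV

# Worst-case overhead across one block's worth of packet sizes.
worst = max(esp_overhead(n) for n in range(1400, 1400 + AES_BLOCK))
print(worst)         # 73 bytes in the worst case
print(1500 - worst)  # a conservative inner MTU on a 1500-byte link
```

This 60-to-73-byte overhead is why the template lowers the tunnel interface MTU and clamps TCP MSS instead of relying on path-MTU discovery through the tunnel.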

Cisco CSR Configuration

This is how the above template translates into a Cisco CSR Config.

ip-10-60-0-89#sh run 
Building configuration...

Current configuration : 7936 bytes
! Last configuration change at 16:30:21 UTC Fri Oct 4 2019 by ec2-user
version 16.12
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
platform qfp utilization monitor load 80
no platform punt-keepalive disable-kernel-core
platform console virtual
hostname ip-10-60-0-89
vrf definition GS
 rd 100:100
 address-family ipv4
logging persistent size 1000000 filesize 8192 immediate
no aaa new-model
login on-success log
subscriber templating
multilink bundle-name authenticated
license udi pid CSR1000V sn 91V3AHTVAJ1
diagnostic bootup level minimal
memory free low-watermark processor 72406
spanning-tree extend system-id
username ec2-user privilege 15
crypto keyring mykey
! local-address is the private IP address of this CSR
  pre-shared-key address key Aviatrix1!
! is the public IP address of the Aviatrix gateway
crypto isakmp policy 10
 encryption aes 256
 authentication pre-share
 group 2
 lifetime 28800
crypto isakmp keepalive 10 3 periodic
crypto isakmp profile myprofile
   keyring mykey
   self-identity address
   match identity address 
crypto ipsec transform-set myset esp-aes 256 esp-sha-hmac 
 mode tunnel
crypto ipsec df-bit clear
crypto ipsec profile ipsec_profile
 set security-association lifetime seconds 28800
 set transform-set myset 
 set pfs group2
interface Loopback0
 ip address
interface Tunnel0
 ip address
 ip tcp adjust-mss 1387
 tunnel source
 tunnel mode ipsec ipv4
 tunnel destination
 tunnel protection ipsec profile ipsec_profile
interface VirtualPortGroup0
 vrf forwarding GS
 ip address
 ip nat inside
 no mop enabled
 no mop sysid
interface GigabitEthernet1
 ip address dhcp
 ip nat outside
 negotiation auto
 no mop enabled
 no mop sysid
router bgp 65002
 bgp log-neighbor-changes
 network mask
 neighbor remote-as 65003
 neighbor timers 10 30 30
 address-family vpnv4
  neighbor activate
  neighbor send-community extended
ip forward-protocol nd
ip tcp mss 1387
ip tcp window-size 8192
ip http server
ip http authentication local
ip http secure-server
ip nat inside source list GS_NAT_ACL interface GigabitEthernet1 vrf GS overload
ip route vrf GS GigabitEthernet1 global
ip ssh rsa keypair-name ssh-key
ip ssh version 2
ip ssh pubkey-chain
  username ec2-user
   key-hash ssh-rsa BF29B2896E9286C9B44DD472EF3397DA ec2-user
ip scp server enable
ip access-list standard GS_NAT_ACL
 10 permit
 20 permit
line con 0
 stopbits 1
line vty 0 4
 login local
 transport input ssh
line vty 5 20
 login local
 transport input ssh
app-hosting appid guestshell
 app-vnic gateway1 virtualportgroup 0 guest-interface 0
  guest-ipaddress netmask
 app-default-gateway guest-interface 0


BGP Working Config. with address-family ipv4

The configuration above uses vpnv4 as the address family. You can also make it work with the ipv4 address family:

router bgp 65002
 bgp log-neighbor-changes
 neighbor remote-as 65003
 neighbor timers 10 30 30
 neighbor remote-as 65001
 neighbor timers 10 30 30
 address-family ipv4
  ! is being advertised by Cisco CSR
  redistribute connected
  neighbor activate
  neighbor activate
Aviatrix Transit GW receives the prefix advertised by the Cisco CSR


The Aviatrix Transit Gateway workflow allows direct connectivity from the Transit Gateway to 3rd-party devices. Standard IPSec protocols allow the Aviatrix Transit Gateway to connect to any device that supports IPSec. These devices could be in the same public cloud, in a different public cloud, or on-premises.

The workflow-based implementation allows ease of use and reduces time to market.

SAML Based User-VPN / Open-VPN in Public Cloud

All major cloud providers, such as AWS, Azure, and GCP, provide User-VPN (aka SSL/TLS VPN) services to allow remote users to connect to cloud resources, instances, and VMs.

However, this native functionality lacks SAML/SSO support. SAML/SSO is extremely popular today, but it is not yet supported by the native User-VPN offerings of any major cloud (AWS, Azure, GCP).

This is where the Aviatrix User-VPN solution has an edge: it provides a policy-based framework that works nicely with SAML and is supported with IdP providers like OneLogin, Okta, and Duo.
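For readers unfamiliar with SAML, a service-provider-initiated login behind such a user VPN runs roughly as outlined below. The step wording is our own generic sketch of the SAML 2.0 flow, not taken from Aviatrix documentation.

```python
# Illustrative outline of an SP-initiated SAML 2.0 login flow, as used by
# SAML-based user VPNs. Generic SAML steps; not a product-specific API.
steps = [
    "User starts the VPN client, which opens the service provider (SP) login URL",
    "SP redirects the browser to the IdP (e.g. Okta, OneLogin, Duo) with an AuthnRequest",
    "User authenticates at the IdP (password, MFA, etc.)",
    "IdP POSTs a signed SAML assertion to the SP's assertion consumer service (ACS) URL",
    "SP validates the assertion and establishes the VPN session for the user",
]
for number, step in enumerate(steps, start=1):
    print(f"{number}. {step}")
```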


Network Joints

Networking has transformed over time. Enterprises have realized that the public cloud is the strategic direction for their IT infrastructure and applications. Service providers like Amazon, Google, and Microsoft are extremely efficient at providing networking, security, compute, and storage capabilities in their respective public clouds: AWS, GCP, and Azure.

“All Clouds are not created equally”. In order to get the best of each and every cloud, there is a need to create seamless joints between those clouds which should, like human body joints, work in conjunction with each others. They should work in harmony, in an orchestrated fashion. This joining and marriage has given birth to a new Networking Architecture, what we call “Multi-Cloud” today.