Wednesday 18 May 2022

Secure Access Service Edge (SASE) - Part 1

There have been major changes in technology trends over the last decade. Digital transformation has accelerated the migration of enterprise applications and workloads from traditional data centers to the public cloud.

Applications are now available everywhere and can be accessed from anywhere. 4G/5G technology and cost-effective Internet circuits have enabled users to work from anywhere. The rigid networks of the past do not work in the new digital economy, and traditional WAN architectures weren't built to support cloud applications.

These changes have also brought new challenges. Internet connections are not inherently secure, and there has been an increase in the usage of BYOD (Bring Your Own Device), which has expanded the attack surface of corporate networks.

The standard hardware-based product life cycle is being replaced by usage-based subscriptions. Businesses are moving from permanently fixed infrastructure to on-demand cloud services.

Enterprises are looking to extend their security perimeter all the way to the user, provide an enhanced user experience, and gain visibility into application performance and usage.

This is where the SASE architecture comes into play. It consists of five major components, described below.

1) Software-Defined Wide Area Network (SDWAN) : Simplifies IT infrastructure control and management by building a virtual WAN over public and private networks that securely connects users to their applications.

2) Secure Web Gateway (SWG) : Provides granular control and visibility into web traffic and enforces the appropriate corporate security policies.

3) Cloud Access Security Broker (CASB) : Helps to manage and protect corporate data that is stored in the cloud. 

4) Zero Trust Network Access (ZTNA) : Connects distributed users with distributed applications without compromising on security or user experience.

5) Firewall-as-a-Service (FWaaS) : A cloud service that provides advanced security for the infrastructure, applications, and platforms that an organization manages and hosts in its cloud environment.

We will now look into each of these components in detail.

SDWAN:

In traditional networks, the service provider would give the customer a link at each of their locations, enabling any-to-any full-mesh connectivity. The service provider would use VPN technology to create tunnels between the various endpoints so that the remote customer devices appear to be directly connected.


The Internet breakout in such a network would be from a central location or from a specific branch office.

Key challenges with this architecture are:

- Expensive Bandwidth : MPLS circuits normally cost more than Internet access. If a branch office has two circuits implemented as active/passive, the backup circuit sits idle until the primary link fails, and it is difficult to achieve load sharing between redundant links.

- Failover : In an active/passive setup, failover is completely dependent upon the state of the link (up/down). 

- Control : Configuration is done locally on each individual router. Any policy change would require manual change on each device.

- Visibility : There is very little application-level visibility with such an architecture. One has to rely on external tools to get the necessary application data.

The SDWAN solution has the following key components:

Management : Centralized management, configuration and monitoring

- Single Pane of glass

- Configuration creation and management

- Centralized deployment system

- Simplified on-boarding process

Control Plane: Distribution of reachability information

- Tunnel creation

- Route advertisement

- Auto-discovery

- Topology management

Data Plane: Data Transport

- Tunneling and encapsulation

- Encryption

- Data forwarding and path selection

- Implementation of the security process

SDWAN can utilize any underlay transport, such as MPLS, DIA (Dedicated Internet Access) or LTE, to build overlay tunnels for enterprise traffic. Customers can choose which physical path to use based on path properties, security policies, application type, user groups, path stability, etc.
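To make the path-selection idea more concrete, below is a rough Python sketch of application-aware path selection across multiple underlays. The application names, SLA thresholds and measured path metrics are all made up for illustration; real SDWAN products implement this with their own policy engines and live probing.

# A minimal, illustrative sketch of SDWAN application-aware path selection.
# All application policies, thresholds and path metrics below are hypothetical.

PATHS = {
    "MPLS": {"latency_ms": 35, "loss_pct": 0.1, "jitter_ms": 2},
    "DIA":  {"latency_ms": 35, "loss_pct": 0.5, "jitter_ms": 8},
    "LTE":  {"latency_ms": 80, "loss_pct": 1.5, "jitter_ms": 20},
}

# Per-application SLA policy: maximum tolerated latency/loss/jitter and preferred order.
POLICIES = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30, "prefer": ["MPLS", "DIA", "LTE"]},
    "web":   {"latency_ms": 300, "loss_pct": 3.0, "jitter_ms": 50, "prefer": ["DIA", "MPLS", "LTE"]},
}

def select_path(app, paths=PATHS, policies=POLICIES):
    """Return the first preferred path that meets the application's SLA."""
    policy = policies[app]
    for name in policy["prefer"]:
        m = paths[name]
        if (m["latency_ms"] <= policy["latency_ms"]
                and m["loss_pct"] <= policy["loss_pct"]
                and m["jitter_ms"] <= policy["jitter_ms"]):
            return name
    return None  # no compliant path; a real solution would fall back or alert

print(select_path("voice"))  # MPLS
print(select_path("web"))    # DIA

The point of the sketch is simply that the edge device keeps re-evaluating which underlays are compliant for each application, so traffic can move from one transport to another as conditions change.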


We will look at the remaining components of the SASE architecture in the second part of this blog.


Tuesday 31 May 2016

Containers (Docker) - OS Virtualization

Lately I have been looking at some application virtualization stuff. The majority of us are aware of how server virtualization works; however, the concept of application virtualization is relatively new.

To understand what it is, let's first see what server virtualization is. In the old days, organizations used to keep physical servers to run their applications. Normally, one physical server would be used to run one or two specific applications. Depending on the number of applications, companies had to maintain server farms, which meant managing multiple pieces of hardware and software and paying for colocation space and electricity for all the physical devices.

Wednesday 30 March 2016

Migration Methods

In this post, we will discuss various options for migrating a customer's network from one service provider to another.

Let's assume that you work for an enterprise customer with three sites. All the sites are currently connected through a Layer 3 IPVPN solution provided by service provider A.


Sunday 14 February 2016

MPLS Traffic Engineering

In this post, we will discuss MPLS Traffic Engineering. To understand where it can be used and what problems it can solve, let's look at the topology below.

CE1 and CE2 are customer edge devices with LAN subnets of 1.1.1.1/32 and 7.7.7.7/32 respectively. They are connected to the corresponding PE1 and PE2. We are running OSPF and LDP in the service provider core. The two PE devices exchange VPN labels via MP-BGP and transport labels via LDP.
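Before getting into traffic engineering, it helps to recall how a packet is forwarded across this L3VPN core. The following is a simplified Python walkthrough of the two-label stack; the label values and the VRF name are made up for illustration, and real platforms obviously do this in hardware.

# Simplified walkthrough of MPLS L3VPN forwarding (hypothetical label values).

packet = {"dst": "7.7.7.7", "labels": []}

# Ingress PE1: push the VPN label (learned from PE2 via MP-BGP),
# then the transport label towards PE2's loopback (learned via LDP).
packet["labels"] = [24005, 16]          # [transport, VPN] - top of stack first

# Core P router: swaps the transport label, never looks at the VPN label.
packet["labels"][0] = 17

# Penultimate hop: pops the transport label (default PHP behaviour).
packet["labels"].pop(0)                  # only the VPN label [16] remains

# Egress PE2: the VPN label identifies the customer VRF,
# and the packet is routed towards CE2 (7.7.7.7) inside that VRF.
vpn_label_to_vrf = {16: "CUSTOMER_A"}    # hypothetical mapping
vrf = vpn_label_to_vrf[packet["labels"].pop(0)]
print(vrf, packet["dst"])                # CUSTOMER_A 7.7.7.7

With plain LDP, the transport label simply follows the IGP shortest path; MPLS Traffic Engineering is about steering that transport path onto an explicitly engineered tunnel instead.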

Sunday 17 January 2016

BGP PIC EDGE

Continuing from our previous post, we will now see how BGP PIC Edge works. We will use the same topology. The only difference is that I have removed R4 as the route-reflector; all the PEs now have a full-mesh IBGP neighbourship.

At the moment, R2 learns 8.8.8.8/32 from both R6 and R7. It prefers the path with the next-hop of 6.6.6.6 over 7.7.7.7.
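Conceptually, PIC Edge works by installing both the best and the backup BGP paths in the FIB ahead of time, so that an edge failure only requires flipping to the pre-computed backup rather than waiting for BGP to walk through every prefix again. Here is a rough Python sketch of that idea, using the prefix and next-hops from this topology; everything else is illustrative.

# Conceptual sketch of BGP PIC Edge: best and backup paths are pre-installed.
# The prefix and next-hops come from the topology above; the rest is illustrative.

fib = {
    "8.8.8.8/32": {"primary": "6.6.6.6", "backup": "7.7.7.7", "active": "primary"},
}

def edge_failure(fib, failed_nexthop):
    """On losing the primary next-hop, switch every affected prefix to its
    pre-installed backup immediately - no per-prefix BGP re-computation needed."""
    for prefix, entry in fib.items():
        if entry["active"] == "primary" and entry["primary"] == failed_nexthop:
            entry["active"] = "backup"

edge_failure(fib, "6.6.6.6")
print(fib["8.8.8.8/32"][fib["8.8.8.8/32"]["active"]])   # 7.7.7.7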

Friday 1 January 2016

BGP PIC CORE

Happy New Year Folks!

In one of the previous posts, we looked at the EIGRP FRR and OSPF LFA features, which help achieve fast convergence.

There is a similar feature in BGP called PIC (Prefix Independent Convergence). It speeds up the convergence of the FIB under failover conditions. BGP works differently than an IGP: it is designed to carry hundreds of thousands of routes in the routing table, hence fast failover has to work differently as well. There are a couple of ways to implement PIC in BGP: "PIC Core" and "PIC Edge". We will look into both of these options.
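The "prefix independent" part comes from a hierarchical FIB: all BGP prefixes that share a BGP next-hop point to a single shared next-hop object, which in turn resolves via the IGP. When something changes in the core, only that one shared object has to be updated, no matter how many prefixes depend on it. A rough Python sketch of the idea follows; all prefixes, next-hops and interface names are illustrative.

# Conceptual sketch of a hierarchical FIB as used by BGP PIC Core.
# All prefixes, next-hops and interfaces below are illustrative.

# One shared object per BGP next-hop, resolved via the IGP.
nexthop_objects = {
    "6.6.6.6": {"igp_path": "via Gi0/1"},
}

# Thousands of BGP prefixes can point at the same shared object.
bgp_fib = {f"10.{i}.0.0/16": "6.6.6.6" for i in range(1000)}

# A core link fails and the IGP reconverges to an alternate path:
# only the single shared object is rewritten, not 1000 prefix entries.
nexthop_objects["6.6.6.6"]["igp_path"] = "via Gi0/2"

# Every BGP prefix immediately forwards over the new core path.
print(nexthop_objects[bgp_fib["10.42.0.0/16"]]["igp_path"])   # via Gi0/2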

Let's look at the below topology.

Wednesday 16 December 2015

VRF Aware IPSEC VPN

In this post, we will see how we can support multiple VRFs in a site-to-site IPSEC VPN implementation.

We will use the topology below. The routers CE1 and CE2 are connected to the Internet. For simplicity, I have used private IP addressing for the WAN connectivity.



OK, so the first step is to configure the required VRFs on both the CPEs.

Saturday 31 October 2015

EIGRP IP FRR & OSPF LFA

In today's modern networks, fast convergence has become a mandatory requirement.
If we want to achieve fast convergence, each of the steps below needs to be optimized:

1. Failure Detection
2. Failure Propagation
3. Processing of new information
4. Updating RIB/FIB

1. Failure Detection:- "How long does it take me to detect a failure?"
This normally depends on the Hello/Hold-down/Dead timers of the routing protocol. We can either tune these timers or use a mechanism such as "BFD", which we have seen in an earlier post.

2. Failure Propagation:- "How long does it take me to tell everyone else?"
In EIGRP, this is done through Query/Reply packets. We can reduce the Query domain by configuring the routers as "stub".

In OSPF, this depends on the area size and the LSA flooding procedure. We can tune the LSA timers to change this.
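As a preview of the LFA idea that gives this post its title: a neighbour N can be used as a pre-computed loop-free alternate for destination D only if sending traffic to N cannot loop back through us, i.e. dist(N,D) < dist(N,S) + dist(S,D) (the basic condition from RFC 5286). A small Python sketch of that check follows; the costs are made up for illustration.

# Basic loop-free alternate (LFA) condition from RFC 5286:
#   dist(N, D) < dist(N, S) + dist(S, D)
# S = this router, N = candidate neighbour, D = destination.
# The distances below are made-up example costs.

def is_loop_free(dist_n_d, dist_n_s, dist_s_d):
    """True if neighbour N will not send the traffic back through S."""
    return dist_n_d < dist_n_s + dist_s_d

# Example: S reaches D at cost 20, neighbour N reaches D at cost 15
# and reaches S at cost 10 -> 15 < 10 + 20, so N is a valid LFA.
print(is_loop_free(dist_n_d=15, dist_n_s=10, dist_s_d=20))   # True

# Example: N's best path to D costs 30 (i.e. it goes back through S) -> not loop-free.
print(is_loop_free(dist_n_d=30, dist_n_s=10, dist_s_d=20))   # False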

Friday 18 September 2015

Carrier Supporting Carrier (CSC)

In this post we will look at the Carrier Supporting Carrier (CSC) design, where smaller service providers use larger service providers as a backbone to connect parts of their network, which eliminates the need to build and maintain their own backbone between those locations.

From the customer's point of view, there is no difference in terms of connectivity; it will still appear as though they have a normal Layer 3 MPLS connection from the provider.

Let's look at the below topology to understand how it works.



We have a Tier 2 SP that is providing services to customer sites in two different geographical locations. The service provider has its own network within each region, but these networks are not directly connected to each other; hence, it uses the Tier 1 SP's backbone to connect the two networks and provide end-to-end connectivity to the customer.

Saturday 29 August 2015

Inter-AS MPLS VPN - Option C (BGP+Label)

In this post, we will look into the Inter-AS MPLS VPN - Option C which is also known as "BGP + Label". 

Option C uses an eBGP IPv4 session between the ASBRs to exchange reachability to the PE loopbacks. There will be a VPNv4 neighbourship between the service providers' VPNv4 route-reflectors (RRs).

Option C takes away the heavy reliance on the ASBRs. In this case, the ASBRs are only used to exchange the PE loopback prefixes (with labels) using eBGP IPv4 sessions.

To understand how it works, let's look at our topology below.



We have two service providers connected through their ASBR routers. In this specific instance, we have a local VPNv4 route-reflector for each SP. The customer site CE1 is connected to SP1 and CE2 is connected to SP2.
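One useful way to picture Option C is the label stack that an ingress PE in SP1 imposes towards a destination behind CE2. It is assumed here, as is typical for Option C, that the packet carries three labels inside SP1; the concrete label values in the Python sketch below are made up for illustration.

# Hypothetical label stack imposed by the ingress PE in SP1 for Option C.
# Top of stack first; label values are made up for illustration.

label_stack = [
    ("IGP/LDP transport label", 16),   # reaches the BGP next-hop inside SP1, learned via LDP
    ("BGP label", 24),                 # for the remote PE loopback, learned via eBGP IPv4 + label
    ("VPN label", 30),                 # for the customer prefix, learned via the VPNv4 RR session
]

for name, value in label_stack:
    print(f"{name}: {value}")

Only the PE and ASBR routers need to understand the lower labels; the provider core simply switches on the top transport label, which is part of why this option scales well.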