VPP TCP Host Stack Exploration

Kartik Kankurte

📌 1. Introduction

This blog details my hands-on project exploring high-performance networking with the Vector Packet Processing (VPP) framework on a Data Processing Unit (DPU). The aim was to understand VPP, try to set up its native TCP stack, compare its user-space method with the traditional Linux kernel, and run various confirmation tests on VPP.

Mini Project Overview: I set up VPP on a DPU, configured its network interfaces, tried to enable its native TCP capabilities, diagnosed why it wasn't working (due to missing plugins), implemented a workaround by "punting" TCP traffic to the Linux kernel, and used Scapy to test the end-to-end TCP connectivity through this hybrid path.

Technologies/Tools Covered: VPP (Vector Packet Processing), DPDK (Data Plane Development Kit) concepts, TCP/IP, Linux networking, Scapy (packet crafting/analysis), user-space networking principles, tcpdump, VPP packet trace, Packetdrill.

Relevance: As network speeds soar (10G, 40G, 100G+), the standard Linux kernel can become a bottleneck. User-space frameworks like VPP are crucial for Network Function Virtualization (NFV), cloud gateways, DPUs, and any application demanding line-rate packet processing. Hence, understanding how VPP handles (or can handle) stateful protocols like TCP is vital.

TL;DR: Readers will gain a solid understanding of VPP's high-performance, modular architecture and how it fundamentally differs from traditional Linux kernel networking. The guide will cover the practical steps — and common challenges — involved in configuring VPP interfaces, setting up key plugins, and managing session layers. It will also explain how to redirect and balance traffic between the VPP data plane and the Linux kernel stack. Finally, readers will learn how to use low-level testing tools like Scapy to craft and validate TCP packets, helping them effectively troubleshoot and analyze network behavior in a VPP environment.


❗ 2. The Problem I Wanted to Solve

The core problem lies in the performance limitations of traditional kernel-based networking stacks when faced with high packet rates or massive connection counts.

Use Case: Imagine a high-throughput firewall, a load balancer terminating millions of connections, or a 5G User Plane Function (UPF) processing mobile data traffic on a DPU. These scenarios push beyond the capabilities of standard kernel processing.

Pain Point: The Linux kernel, while versatile, incurs overhead from interrupts, context switching between kernel and user space, memory copies, and potential cache inefficiencies when processing packets individually (scalar processing). This limits throughput and increases latency, especially above 10 Gbps.

Real-World Relevance: Companies building high-performance network appliances, telcos deploying NFV, and cloud providers optimizing their gateways constantly battle these bottlenecks. DPUs often employ user-space networking (like VPP, potentially with hardware acceleration) to offload the main CPU and achieve better performance.

Why Solve It? Bypassing the kernel promises significant performance gains (higher throughput, lower latency), CPU efficiency (more packets processed per cycle), and scalability. My goal was to experimentally explore VPP's approach to TCP, a notoriously complex stateful protocol, to see how it integrates into this high-performance model.

Similar Studies: Many studies show that VPP delivers excellent Layer 2/3 forwarding performance, often hitting line-rate on standard hardware, and some have explored its TCP/UDP processing through the Host Stack. This project, however, takes a different angle. Instead of focusing on raw performance, it looks at the real-world challenges of setting up VPP on specialized DPU hardware with custom builds. A major issue faced was the missing native TCP stack plugin, a common situation when working outside standard VPP packages. This blog covers the troubleshooting process and shows how Linux kernel networking was used as a practical fallback when native support wasn’t available. It offers a hands-on perspective that complements existing benchmarks, especially for customized hardware deployments.


🧩 3. Design and Approach

My approach was experimental, focusing on a methodical, step-by-step process. First, I established the basic VPP configuration and interface setup. Then, I attempted to enable and configure VPP's native TCP stack. Finally, I planned rigorous testing using Scapy to validate TCP functionality end-to-end, allowing issues to be isolated at each stage.

Architecture:

  1. Baseline: Understand the Linux kernel path (scalar, interrupt-driven).

  2. VPP Alternative: Learn VPP's architecture (user-space, DPDK PMDs, vector processing, graph nodes). The key idea was to leverage VPP for the fast path.

  3. TCP Goal: Utilize VPP's native "Host Stack" for TCP termination, anticipating performance benefits.

  4. Testing: Use Scapy from a separate Traffic Generator (TG) machine to precisely control TCP packets (SYN, ACK, Data) and analyze responses.

  5. Contingency: If native TCP fails, explore VPP's mechanisms (like punting) to redirect traffic to the Linux kernel as a fallback.

    Tools Involved:

    • Scapy (on TG): Sends crafted TCP SYN packet.

    • Linux Network Interface: Sends/receives packets on TG.

    • VPP: Processes incoming SYN, responds with SYN-ACK if appropriate.

Tools & Platforms:

  • VPP (v24.02.0-86~g7a2e88c83): VPP release 24.02.0, 86 commits past the release tag, built from commit 7a2e88c83.

  • DPDK (Implicit): VPP relies on DPDK for high-speed NIC access via Poll-Mode Drivers (PMDs), bypassing kernel drivers.

  • Linux (Ubuntu/Debian based): Running on both the DPU (hosting VPP) and the TG. Used for standard tools (nc, ping, ip, arp) and hosting Scapy.

  • Scapy: Python library for packet manipulation. Chosen for its flexibility in crafting, sending, and dissecting packets at any layer, essential for detailed TCP testing without needing a full client application.

  • Hardware:

    • DPU: Xeon system (xeon2-oct-111) running VPP.

    • TG: Intel i9 system (IITH-i9-3) running Scapy.

TCP SYN Packet Flow (TG to VPP)

      [Scapy Script on TG]
          |
          | (1) Sends TCP SYN packet
          v
      [TG Network Interface (e.g., enp1s0f0np0)]
          |
          | (2) Packet travels over Ethernet link
          v
      [Network Cable / Link]
          |
          | (3) Packet received by DPU or Host running VPP
          v
      [VPP Network Interface]
          |
          | (4) VPP L3 Input Node processes IP layer
          |
          | (5) VPP TCP Input Node handles TCP SYN
          v
      [VPP TCP Stack]
          |
          | (6) SYN-ACK generated if a listening socket exists
          v
      [Reply sent back over same path to TG]
    

Testbed Setup Diagram (TG ↔ DPU with VPP)

+---------------------------+        +-------------------------------------+
|   Traffic Generator (TG)  |        |      Data Processing Unit (DPU)     |
|   (IITH-i9-3 / 10.68.0.48)|        |    (xeon2-oct-111 / 10.68.0.111)    |
|                           |        |                                     |
| Interface: enp1s0f0np0    |        |    +-----------------------------+  |
| IP: 10.10.10.2/24         +--------+----|             VPP             |  |
+-------------^-------------+        |    | Interface: eth0             |  |
              |                      |    | IP: 10.10.10.1/24           |  |
              |                      |    |                             |  |
              |                      |    | Interface: eth1             |  |
              +----------------------+    | IP: 20.20.20.1/24           |  |
                     Test Network         |                             |  |
                                          +-----------------------------+  |
                                          |                                |
                                          +--------------------------------+

🔧 4. Implementation

The implementation involved configuring VPP, enabling plugins, diagnosing issues, and setting up the Linux listener for further analysis.

Key Steps & Components:

  1. VPP Interface Setup (on DPU):

    • Identified physical NICs managed by VPP (via DPDK binding).

                # --- On the DPU Linux Shell ---
                dpdk-devbind.py -b vfio-pci 0002:02:00.0 0002:03:00.0
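                # Optional check (not from the original log): the two devices should now be
                # listed under "Network devices using DPDK-compatible driver"
                dpdk-devbind.py --status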
      
    • Used vppctl (VPP's command-line):

    # Enter VPP CLI
    sudo vppctl

    # Bring interfaces up and assign IPs (within VPP's context)
    set interface state eth0 up # for first interface
    set interface ip address eth0 10.10.10.1/24
    set interface state eth1 up # for a second interface
    set interface ip address eth1 20.20.20.1/24

    # Verify
    show interface addr
    show hardware-interfaces
  • Key Learning: VPP IPs are separate from Linux IPs (ip addr won't show them).
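    For example, on the DPU (a quick check using the addresses configured above):

    # --- On the DPU Linux Shell ---
    ip addr | grep 10.10.10.1    # no output: the Linux kernel does not own this address

    # --- Inside vppctl ---
    show interface addr          # lists eth0 with 10.10.10.1/24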
  2. Plugin Management (Ping Example):

    • Edited /etc/vpp/startup.conf:
    ## VPP Plugin Configuration

    # Enables basic ping utility inside VPP CLI
    plugin ping_plugin.so { enable }

    # Enables support for Cavium/Octeon DPU hardware
    plugin dev_octeon_plugin.so { enable }

    # Enables performance monitoring and statistics collection
    plugin perfmon_plugin.so { enable }

    # Required to support session (TCP/UDP) layer features
    plugin session_plugin.so { enable }

    # Enables high-scale apps like VPP Echo server/client
    plugin hs_apps_plugin.so { enable }

    # Enables DPDK (Data Plane Development Kit) for high-speed packet I/O
    plugin dpdk_plugin.so { enable }

    # Allows redirecting IP traffic into the VPP session layer
    plugin ip_session_redirect_plugin.so { enable }
  • Restarted VPP service:
    sudo systemctl stop vpp
    sudo pkill -9 vpp 
    sudo systemctl start vpp
    # Or:
    sudo vpp -c /etc/vpp/startup.conf
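    # Confirm the daemon came back up and which build is running (extra check, not from the original log):
    sudo vppctl show version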
  • Verified basic connectivity using ping from the TG to VPP's 10.10.10.1, capturing traffic with tcpdump on the TG and the trace facility in VPP for verification and analysis.
  3. TCP Exploration & Challenge:

    • To utilize the VPP Host Stack for TCP, I attempted to start a test echo server bound to the VPP interface IP and a port for it to listen on.
    vpp# test echo server uri tcp://10.10.10.1/12345
  • Investigated loaded plugins:
    vpp# show plugins
     Plugin path is: /usr/lib/aarch64-linux-gnu/vpp_plugins

         Plugin                                   Version                          Description
      1. ping_plugin.so                           24.02.0-86~g7a2e88c83            Ping (ping)
      2. dpdk_plugin.so                           24.02.0-86~g7a2e88c83            Data Plane Development Kit (DPDK)
      3. hs_apps_plugin.so                        24.02.0-86~g7a2e88c83            Host Stack Applications
      4. ip_session_redirect_plugin.so            24.02.0-86~g7a2e88c83            IP session redirect
      5. perfmon_plugin.so                        24.02.0-86~g7a2e88c83            Performance Monitor
      6. dev_octeon_plugin.so                     24.02.0-86~g7a2e88c83            dev_octeon
  • Challenges:

  • A fundamental prerequisite before VPP can even see network interfaces is correctly binding them to a DPDK-compatible driver (such as vfio-pci or uio_pci_generic) and unbinding them from the kernel's default network driver.

  • Initial attempts to ping the VPP interface failed. After investigation, it was found that the ping_plugin.so was not enabled by default. Adding it to the startup.conf file resolved the issue and allowed successful ICMP (ping) testing within VPP.

  • Attempts to run the test echo server initially failed as well. This was traced back to the hs_apps_plugin.so not being loaded. Enabling this plugin in the configuration restored expected functionality.
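
Given these pitfalls, a quick pre-flight check on a fresh build can save a lot of debugging time. The commands below are a sketch using only tools already shown in this post:

    # --- On the DPU Linux Shell ---
    dpdk-devbind.py --status        # NICs must be bound to vfio-pci / uio_pci_generic, not a kernel driver
    sudo vppctl show plugins        # confirm ping, dpdk, hs_apps and session-related plugins actually loaded
    sudo vppctl show interface      # the ports should be visible to VPP before assigning addresses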


🧪 5. Testing the System

Once the VPP interfaces were configured and the necessary plugins were enabled, testing proceeded in two main phases: verifying basic connectivity and then testing the native VPP TCP echo server functionality.

Phase 1: Basic Connectivity & Debugging Tools

  • Connectivity Test (ICMP Ping): Standard Linux ping was used from the Traffic Generator (TG) system (10.10.10.2) to target the VPP interface IP (10.10.10.1). Successful reception of ICMP Echo Replies confirmed basic L2/L3 reachability between the TG and VPP.

      # --- On TG ---
      ping 10.10.10.1
    
  • Debugging Tools: During troubleshooting, VPP's internal trace facility and standard Linux tcpdump on the TG were used for observing packet flow and diagnosing issues.

      # --- Inside vppctl ---
      trace add eth0-x 10 # Trace 10 packets
      show trace
      clear trace
    
      # --- On TG ---
      sudo tcpdump -i enp1s0f1np1
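      # Optionally narrow the capture to the test traffic (filter added here for illustration):
      sudo tcpdump -i enp1s0f1np1 -nn 'icmp or tcp port 12345'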
    

Phase 2: Functional TCP Test (VPP Echo Server & Scapy)

  • Start VPP Listener: The internal VPP TCP echo server was started to listen on the VPP interface IP and a specific port:

      # --- Inside vppctl ---
      test echo server uri tcp://10.10.10.1/12345
    
  • Functional TCP Tests: Scapy scripts were executed on the TG to interact directly with the VPP echo server:

    • Handshake: A TCP SYN packet was crafted and sent to 10.10.10.1:12345. Successful operation was verified by receiving a SYN-ACK response directly from VPP's stack.

Tools Used:

  • ping (Linux command-line)

  • tcpdump (Linux command-line)

  • VPP vppctl commands (trace add, show trace, test echo server)

  • Scapy (Python library on TG) for crafting and analyzing TCP packets specifically for the echo server test.

Test Environment Notes:

  • TG interface (enp1s0f1np1) configured with 10.10.10.2/24.

  • VPP interface eth0 configured with 10.10.10.1/24.

  • VPP Session State

      vpp# show session verbose
      Connection                                                  State          Rx-f      Tx-f      
      [0:0][T] 10.10.10.1:12345->0.0.0.0:0                        LISTEN         0         0         
      [0:1][T] 10.10.10.1:12346->0.0.0.0:0                        LISTEN         0         0         
    
      Thread 0: active sessions 2
      Thread 1: no sessions
    

Benchmarking: Performance benchmarking (throughput, latency) was not performed as part of this phase, primarily because the native VPP TCP stack comparison point was unavailable. Testing focused purely on functional correctness of the implemented path.

📊 6. Results

  1. ICMP Connectivity Test (ping from TG to VPP)

     labadmin@IITH-i9-3:~$ ping 10.10.10.1
     PING 10.10.10.1 (10.10.10.1) 56(84) bytes of data.
     64 bytes from 10.10.10.1: icmp_seq=1 ttl=64 time=0.014 ms
     64 bytes from 10.10.10.1: icmp_seq=2 ttl=64 time=0.011 ms
     64 bytes from 10.10.10.1: icmp_seq=3 ttl=64 time=0.011 ms
     ^C
     --- 10.10.10.1 ping statistics ---
     3 packets transmitted, 3 received, 0% packet loss, time 2059ms
     rtt min/avg/max/mdev = 0.011/0.012/0.014/0.001 ms
    

    This confirms successful ICMP connectivity between the Traffic Generator (TG) and the VPP interface (10.10.10.1) with no packet loss and very low latency.

  2. Scapy UDP Packet Send

    The sendp() call below crafts ten UDP packets with random 32-byte payloads and pushes them out of the TG interface toward VPP (destination 20.20.20.1, the address on VPP's eth1):

from scapy.all import Ether, IP, UDP, Raw, RandString, sendp

sendp(
    Ether(dst="b6:d2:8b:47:54:e6", src="40:a6:b7:c2:c7:79") /
    IP(src="10.10.10.1", dst="20.20.20.1", len=60) /
    UDP(dport=12345, sport=5000, len=40) /
    Raw(RandString(size=32)),
    iface="enp1s0f1np1",
    return_packets=True,
    count=10
)

VPP Interface Status (show interface), before and after the UDP send

vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     
eth0                              1      up          9000/0/0/0     
eth1                              2      up          9000/0/0/0     
local0                            0     down          0/0/0/0
vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count     
eth0                              1      up          9000/0/0/0     
                                                                    rx packets                    10
                                                                    rx bytes                     740
                                                                    drops                         10
                                                                    ip4                           10
eth1                              2      up          9000/0/0/0     
local0                            0     down          0/0/0/0

Interface eth0 received the 10 IPv4 packets (740 bytes in total) and counted 10 drops. In VPP, a received packet is counted as a drop when no matching session, route, or other input feature consumes it.

  3. Scapy TCP (SYN-ACK check)

    This Python script uses Scapy to craft a TCP SYN packet, send it to a target IP and port, and listen for a SYN-ACK response. It was used to verify TCP handshake behavior against the VPP echo server.

from scapy.all import *

target_ip = "10.10.10.1"
target_port = 12345
tg_iface = "enp1s0f0np0"

syn = IP(dst=target_ip)/TCP(dport=target_port, flags="S", seq=RandInt())

print("Sending SYN...")

# sr1 sends and receives one packet, specifying the interface
syn_ack = sr1(syn, timeout=2, iface=tg_iface, verbose=1)

if syn_ack and syn_ack.haslayer(TCP) and syn_ack[TCP].flags & 0x12 == 0x12:
    print("SUCCESS: Received SYN-ACK!")
    syn_ack.show()
else:
    print("FAILURE or unexpected response.")
    if syn_ack:
        syn_ack.show()

The corresponding capture on the TG (tcpdump resolves Scapy's default TCP source port 20 as ftp-data):

labadmin@IITH-i9-3:~$ sudo tcpdump -i enp1s0f1np1
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on enp1s0f1np1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
15:50:54.560907 IP IITH-i9-3.ftp-data > 10.10.10.1.12345: Flags [S], seq 0, win 8192, length 0
15:50:54.561055 IP 10.10.10.1.12345 > IITH-i9-3.ftp-data: Flags [S.], seq 1000, ack 1, win 8192, length 0

Notes:

  • Replace target_ip, target_port, and tg_iface with your test setup details.

  • RandInt() generates a random initial sequence number for the TCP SYN packet.

  • sr1() sends the packet and waits for exactly one reply (or times out).
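
To go beyond a single SYN/SYN-ACK exchange, the handshake can be completed and a payload pushed to the echo listener by hand. The snippet below is a minimal sketch that builds on the script above; it was not part of the original test run, and the fixed source port and payload are illustrative:

from scapy.all import IP, TCP, Raw, send, sr1

target_ip = "10.10.10.1"      # VPP echo server address (from the setup above)
target_port = 12345
tg_iface = "enp1s0f0np0"      # adjust to the TG interface in use
sport = 40000                 # fixed source port so replies can be matched

# 1. SYN
syn = IP(dst=target_ip) / TCP(sport=sport, dport=target_port, flags="S", seq=1000)
syn_ack = sr1(syn, timeout=2, iface=tg_iface)

if syn_ack and syn_ack.haslayer(TCP) and (syn_ack[TCP].flags & 0x12) == 0x12:
    # 2. ACK that completes the three-way handshake
    my_seq = syn[TCP].seq + 1
    their_seq = syn_ack[TCP].seq + 1
    send(IP(dst=target_ip) /
         TCP(sport=sport, dport=target_port, flags="A", seq=my_seq, ack=their_seq),
         iface=tg_iface)

    # 3. Push a payload; the echo server should acknowledge (and echo) it
    data = (IP(dst=target_ip) /
            TCP(sport=sport, dport=target_port, flags="PA", seq=my_seq, ack=their_seq) /
            Raw(b"hello-vpp"))
    reply = sr1(data, timeout=2, iface=tg_iface)
    if reply is not None:
        reply.show()
else:
    print("No SYN-ACK received")

One practical caveat: because the TG kernel has no socket for this connection, it may answer the unexpected SYN-ACK with a RST and tear the session down; a common workaround is a temporary iptables rule that drops outbound RSTs to the target while the test runs.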

Key Findings:

  • SYN/ACK Handshake: Scapy successfully initiated a TCP handshake with the nc listener running on the DPU Linux host, with packets correctly traversing VPP.
  4. Ran Packetdrill to test various TCP features:

  • Baseline Kernel Testing with Packetdrill: To understand the behavior of the standard networking stack, we executed detailed TCP conformance tests with the packetdrill tool directly against the Linux kernel on both the x86 TG system and the DPU (an example invocation is sketched after the summary below).

  • Focused TCP Scenario Validation: The packetdrill tests specifically targeted TCP protocol behavior.

  • Packetdrill Analysis Summary (TG System)

    1. Overall Results:

      • Ran 484 TCP tests; 420 passed (~87%), 64 failed (~13%).

      • Indicates generally robust TCP conformance but with specific deviations.

    2. Failure Categories & Key Issues:

      • TCP Fast Open (TFO): Frequent failures due to mismatches in TFO cookie values, packet header fields (esp. IPv6), and unavailable sysctl configurations assumed by scripts.

      • Epoll Edge-Triggered: Consistent epoll_wait failures with unusual ENOENT errors, pointing to potential edge cases in kernel readiness reporting or test environment interaction.

      • Congestion Control/Timing: Minor discrepancies noted in cwnd values after ECN, packet/system call timing slightly differing from script models, and ACK sequence handling after GRO.

      • IPv6 Specifics: Some tests failed only for IPv6 due to TFO issues, local routing problems (MTU probe), or mismatched ICMPv6 code handling.

  • Packetdrill Analysis Summary (on DPU System)

    1. Overall Results:

      • Ran 481 TCP tests; 387 passed (~80%), 91 failed (~19%), 3 timed out (zerocopy/maxfrags.pkt).

      • Indicates reasonable TCP conformance but with more deviations than the TG system.

    2. Failure Categories & Key Issues:

      • TCP Fast Open (TFO): Significant failures similar to TG (cookie values, packet headers esp. IPv6, unavailable sysctls) plus some additional timing errors on TFO ACKs.

      • Epoll Edge-Triggered: Identical ENOENT failures as the TG system, suggesting a consistent issue with these specific tests or Packetdrill itself.

      • Timing Errors (More Pronounced): A larger number of tests failed due to packet/system call timing mismatches (e.g., in close, eor, sack, shutdown scenarios) compared to the TG system.

      • MSS Option Handling (New): Failures in getsockopt for TCP_MAXSEG, indicating the kernel reported a different MSS (536) than expected (1100).

      • Congestion Control/Window: Similar assertion failures on cwnd post-ECN and tcpi_busy_time as seen on TG.

      • IPv6 Specifics: Similar issues as TG (TFO, MTU probe routing, ICMP codes).

      • Zero Copy Timeout (New): The zerocopy/maxfrags.pkt test timed out, suggesting a potential performance issue or hang in this specific scenario on the DPU.

  • Conclusion:

    • DPU's Linux kernel shows generally functional TCP but deviates more from Packetdrill's strict model than the TG kernel.

    • Major differences lie in increased timing sensitivity, specific MSS reporting/handling, and zero-copy performance/stability, likely due to the DPU's specialized kernel/hardware/configuration.
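
For context, each packetdrill script was run directly against the local kernel's TCP stack. A typical single-script invocation looks roughly like the sketch below; the packetdrill binary and its --ip_version flag are standard, while the exact paths depend on where the test suite is checked out (zerocopy/maxfrags.pkt is the script cited above):

    # --- On the TG or DPU Linux Shell ---
    cd packetdrill/gtests/net/tcp
    sudo packetdrill --ip_version=ipv4 zerocopy/maxfrags.pkt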

Metrics: Since performance benchmarking wasn't done, quantitative results (Gbps throughput, microsecond latency) are not available. However, the qualitative result is that the punt mechanism works as designed for redirecting specific flows.

Did it meet the goals? Partially. The goal of exploring VPP TCP was met, leading to the discovery of the missing plugins. The goal of testing the native VPP TCP stack end-to-end was unmet due to the plugin and build issues described above. The goal of establishing any TCP connectivity via VPP was met using the punt workaround.

Acceleration Relevance: This problem (high-speed TCP processing) significantly benefits from acceleration.

  • VPP's Software Acceleration: Vector processing, kernel bypass (DPDK), cache optimization provide substantial software-level acceleration compared to the standard kernel.

  • Hardware Offloads: Modern NICs and DPUs (like the OCTEON platform) often include hardware offloads for:

    • Checksum calculation (TCP/IP checksums)

    • TCP Segmentation Offload (TSO) / Generic Segmentation Offload (GSO)

    • Receive Side Scaling (RSS)

    • Potentially full TCP state machine offload (though less common/more complex).

  VPP can integrate with these hardware offloads (via DPDK) for further performance gains. In my case, the dev_octeon_plugin.so suggests potential integration points with specific OCTEON hardware features, although I didn't delve into configuring them.
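
One quick way to see what the underlying driver exposes (I did not go further than listing it in this project) is VPP's hardware interface dump; depending on the driver, the verbose output includes the available and active rx/tx offloads:

    # --- Inside vppctl ---
    show hardware-interfaces verbose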

💬 7. What I Learned

This project served as a deep and hands-on introduction to VPP’s TCP stack and its modular architecture. While setting up basic connectivity seems simple, I quickly learned that:

  • VPP does not enable many essential plugins by default, which led to initial failures with basic tasks like ping or running an echo server.

  • Understanding how VPP sessions, plugin dependencies, and interface bindings work was crucial—especially for features like TCP or app-layer interactions.

  • Debugging required packet-level inspection (with tcpdump and Scapy) and reading VPP's internal state (show session, show int, etc.), which gave me practical experience with real-world troubleshooting.

  • I appreciated how low-level control in VPP offers power, but also increases the need for careful configuration and plugin awareness.

Overall, it was a technically challenging but rewarding experience that improved my understanding of both userspace networking and VPP internals.

Limitations & Pain Points:

  • Inability to test or benchmark native VPP TCP performance due to the build.

  • Complexity of VPP configuration beyond the basics (plugins, DPDK binding nuances).

  • Manual TCP state management required in Scapy for robust testing.

Improvements Next Time:

  1. Verify Build First: Before any configuration, run vppctl show plugins and check for all required components (session, tcp, etc.).

  2. Structured Testing: Use or develop a more stateful Scapy test suite or integrate with tools like Packetdrill to test various confirmation tests on VPP TCP and then do some performance testing as well to compare with Linux Kernel TCP.

  3. Explore Hardware Offloads: If using capable hardware (like OCTEON), investigate how to enable and verify VPP's integration with hardware acceleration features.

✅ 8. Wrapping Up – Key Takeaways

This project aimed to tackle the challenge of high-performance TCP processing by exploring VPP as an alternative to the standard Linux kernel.

Problem Recap: Linux kernel networking struggles at high packet rates due to inherent overheads.
Solution Explored: VPP offers a user-space, vector-processing approach. My specific implementation involved setting up VPP and testing basic TCP features with Scapy.

Key Contributions:

  • Personal Skill: Gained hands-on experience with VPP configuration, plugin management, and the VPP/Linux boundary. Deepened understanding of user-space networking concepts. Improved Scapy skills for network diagnostics.

  • Community/Course: Provides a practical account of setting up VPP, highlighting a common potential pitfall (build variations), which can be valuable for others starting with VPP, especially on specialized hardware.

VPP is undeniably powerful, but "some assembly required" is often the case. Understanding its modularity and verifying its components are crucial first steps.


🔭 9. What’s Next?

Several ideas to extend this project:

  1. Perform detailed TCP conformance tests on VPP using tools like Packetdrill to verify protocol correctness and feature support.

  2. Performance Benchmarking: Once native TCP is working and verified, perform rigorous benchmarking.

    • Measure throughput (Gbps), connections/second, latency using tools like iperf3 (potentially needing VPP host stack integration) or specialized load generators.
  3. Hardware Offload Integration: Investigate and enable specific OCTEON hardware acceleration features via VPP/DPDK configurations and measure their impact.

  4. Application Integration: Explore using VPP's Host Stack APIs (like the VCL - VPP Communications Library) to integrate a simple application directly with VPP's stack.


🔗 10. Resources & References

🔍 11. Contribution

  • Kartik Kankurte (cs24mtech11003):

    • Performed the complete setup on the SSH-based remote systems (DPU and TG), followed by testing and debugging the configuration to ensure proper connectivity and functionality.

    • Researched various network testing tools, including Packetdrill, and explored how they can be used to simulate and validate TCP behavior.

  • Binoy Krishna Pal (cs24mtech11009):

    • Set up VPP on a system and conducted initial testing to validate the configuration.

    • Investigated the chiTCP framework to understand its architecture and evaluated its potential for detailed TCP-level testing and analysis.
