Peering strategy and traffic flow

In Microsoft Azure, networking strategy revolves around how you connect virtual networks (VNets) and how data actually travels between them. Understanding this is key to building a scalable, secure environment.


1. VNet Peering: The Fundamentals

VNet Peering connects two Azure Virtual Networks over the Microsoft backbone network. Once peered, the two networks behave as one for connectivity purposes.

  • Regional Peering: Connects VNets within the same Azure region.
  • Global Peering: Connects VNets across different Azure regions.

Key Characteristics:

  • Low Latency: Traffic stays on the private Microsoft fiber; it never touches the public internet.
  • Performance: Offers the same high-bandwidth connection as if the resources were in the same network.
  • No Gateway Required: Unlike VPN connections, peering doesn’t require a virtual network gateway; a gateway is needed only when you enable gateway transit (for example, letting spokes reach on-premises through a hub).
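As a sketch of what one side of a peering looks like in Bicep (the IaC tool this document recommends later) — the names `vnet-hub` and the `spokeVnetId` parameter are illustrative placeholders, not values from the text:

```bicep
// Resource ID of the spoke VNet to peer with (illustrative parameter).
param spokeVnetId string

resource hubVnet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  name: 'vnet-hub' // placeholder name
}

// One direction of the peering. A matching peering must also be
// created on the spoke side pointing back at the hub.
resource hubToSpoke 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-09-01' = {
  parent: hubVnet
  name: 'peer-hub-to-spoke'
  properties: {
    remoteVirtualNetwork: {
      id: spokeVnetId
    }
    allowVirtualNetworkAccess: true // let the two VNets communicate
    allowForwardedTraffic: true     // accept traffic forwarded by an NVA
    allowGatewayTransit: false      // no gateway sharing in this basic case
    useRemoteGateways: false
  }
}
```

Note that peering is always declared in both directions; the link only comes up once both sides exist.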

2. Common Peering Strategies

Hub-and-Spoke Topology

This is the “gold standard” for enterprise architecture.

  • The Hub: A central VNet that hosts shared services (Firewalls, ExpressRoute gateways, DNS).
  • The Spokes: Individual VNets (e.g., for different departments or apps) that peer with the Hub.
  • The Benefit: It centralizes security and reduces costs by sharing expensive resources like an Azure Firewall.

Mesh Topology

Every VNet is peered directly to every other VNet.

  • Use Case: Small environments with very few VNets (three or fewer).
  • The Downside: It becomes a management nightmare as you scale, because peering is not transitive. If VNet A is peered with B, and B is peered with C, A and C cannot talk to each other unless you peer them directly or use a “Transit” setup.

3. Understanding Traffic Flow

How data moves depends heavily on your User Defined Routes (UDRs) and Gateway Transit settings.

Default Flow (Direct)

In a simple peering, traffic flows directly from VM-A to VM-B. Azure handles the routing automatically via “System Routes.”

Hub-and-Spoke with Transit

If you want Spokes to communicate with each other or the internet through the Hub, you must configure:

  1. Allow Gateway Transit: Set on the Hub peering.
  2. Use Remote Gateways: Set on the Spoke peering.
  3. Network Virtual Appliance (NVA): Usually a Firewall in the Hub. You must create a UDR on the Spoke subnets to “force” traffic to the Firewall’s IP address instead of letting it go direct.
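The first two settings can be sketched in Bicep as a pair of peerings. This is a hedged example under assumed names (`vnet-hub`, `vnet-spoke-a` are placeholders), not a definitive implementation:

```bicep
resource hubVnet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  name: 'vnet-hub' // placeholder
}

resource spokeVnet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  name: 'vnet-spoke-a' // placeholder
}

// Step 1 — Hub side: share the Hub's VPN/ExpressRoute gateway with the Spoke.
resource hubToSpoke 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-09-01' = {
  parent: hubVnet
  name: 'peer-hub-to-spoke-a'
  properties: {
    remoteVirtualNetwork: {
      id: spokeVnet.id
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true
    allowGatewayTransit: true // "Allow Gateway Transit"
  }
}

// Step 2 — Spoke side: consume the Hub's gateway instead of deploying its own.
resource spokeToHub 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2023-09-01' = {
  parent: spokeVnet
  name: 'peer-spoke-a-to-hub'
  properties: {
    remoteVirtualNetwork: {
      id: hubVnet.id
    }
    allowVirtualNetworkAccess: true
    allowForwardedTraffic: true // so firewall-forwarded traffic is accepted
    useRemoteGateways: true     // "Use Remote Gateways"
  }
}
```

Step 3 (the NVA) is handled separately with a route table on the spoke subnets.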

Traffic Flow Table

| Scenario | Path | Logic |
|---|---|---|
| VNet A to Peered VNet B | Direct | System Route (Peering) |
| Spoke A to Spoke B | Spoke A → Hub (Firewall) → Spoke B | User Defined Route (UDR) |
| On-Prem to Spoke | On-Prem → Hub Gateway → Spoke | Gateway Transit enabled |

4. Crucial Constraints

  • Overlapping Address Space: You cannot peer two VNets if their IP address ranges overlap.
  • Transitivity: Peering is strictly 1:1. To move traffic “through” a VNet to another destination, you must use a Network Virtual Appliance (like Azure Firewall) or Azure Route Server.

In a multi-region Azure architecture, the strategy shifts from connecting individual workloads to connecting entire regional ecosystems. As of 2026, the industry standard has moved toward high-automation models to manage the complexity of cross-region routing.

There are two primary ways to architect this: Global Hub-and-Spoke and Azure Virtual WAN.


1. Global Hub-and-Spoke (Self-Managed)

In this model, you replicate the Hub-and-Spoke design in every region where you have a footprint and then link the hubs together.

The Strategy

  • Regional Hubs: Each region (e.g., East US, West Europe) has its own Hub VNet containing a local Azure Firewall and Gateway.
  • Hub Peering: You connect the Hubs using Global VNet Peering.
  • Full Mesh vs. Chain: Ideally, all Hubs are peered to each other (Full Mesh of Hubs) to ensure the lowest latency and avoid “bottlenecking” through a third region.

Traffic Flow

  • Inter-Region Spoke-to-Spoke: To go from Spoke A (Region 1) to Spoke B (Region 2), the traffic flow is: Spoke A → Hub 1 (Firewall) → Global Peering → Hub 2 (Firewall) → Spoke B.
  • Routing Logic: This requires meticulous User-Defined Routes (UDRs) on every spoke subnet to ensure traffic knows the “Next Hop” is the local Hub Firewall for remote region ranges.

2. Azure Virtual WAN (Microsoft-Managed)

Virtual WAN (vWAN) is a managed networking service that replaces the manual Hub-and-Spoke with a “plug-and-play” global transit architecture.

The Strategy

  • The Virtual Hub: Microsoft manages the Hub VNet for you. You don’t manage the underlying VMs or scaling; you just toggle features (VPN, ExpressRoute, Firewall).
  • Automated Peering: When you connect a Spoke VNet to a Virtual Hub, the routing is automatically propagated.
  • Any-to-Any Connectivity: By default, all spokes connected to any hub in the vWAN can talk to each other globally via the Microsoft backbone.

Traffic Flow

  • Simplified Path: Traffic moves from a Spoke to its local Virtual Hub, which then uses the Global Mesh (automatically maintained by Azure) to reach a Spoke in another region.
  • Managed Routing: You don’t need to write dozens of UDRs. The vWAN “Hub Router” handles BGP propagation across regions automatically.

Key Differences for 2026

| Feature | Global Hub-and-Spoke | Azure Virtual WAN |
|---|---|---|
| Effort | Manual (UDRs, Peering, NVA configs) | Managed (Point-and-click/Policy) |
| Transitivity | Not native (must use Firewall/NVA) | Native (Any-to-Any transit) |
| Scale | Limit of ~500 peerings per Hub VNet | Scales to thousands of VNets |
| Cost | Cheaper for small, simple setups | Better ROI for large, complex global builds |
| Routing | Static (UDR-based) | Dynamic (BGP-based) |

Important Change: Default Outbound Access Retirement

Azure has retired “default outbound access” for new VNets (Microsoft announced the retirement effective September 30, 2025). In a multi-region setup, you can no longer rely on VMs simply “finding” the internet.

The Strategy Update: You must now explicitly define an egress path. In a multi-region architecture, this is typically done by routing all regional internet traffic through the local Hub’s Azure Firewall or by attaching an Azure NAT Gateway to each spoke subnet for high-performance, non-inspected traffic.
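The NAT Gateway option can be sketched in Bicep as follows. All names and address ranges here are illustrative placeholders, assuming a single spoke subnet that opts in to explicit egress:

```bicep
// Standard public IP for the NAT Gateway (required SKU).
resource natPip 'Microsoft.Network/publicIPAddresses@2023-09-01' = {
  name: 'pip-natgw-spoke-a'
  location: 'eastus'
  sku: {
    name: 'Standard'
  }
  properties: {
    publicIPAllocationMethod: 'Static'
  }
}

resource natGw 'Microsoft.Network/natGateways@2023-09-01' = {
  name: 'natgw-spoke-a'
  location: 'eastus'
  sku: {
    name: 'Standard'
  }
  properties: {
    publicIpAddresses: [
      {
        id: natPip.id
      }
    ]
  }
}

// The spoke subnet opts in by referencing the NAT Gateway.
resource spokeVnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: 'vnet-spoke-a'
  location: 'eastus'
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.1.10.0/24'
      ]
    }
    subnets: [
      {
        name: 'snet-workload'
        properties: {
          addressPrefix: '10.1.10.0/24'
          natGateway: {
            id: natGw.id // explicit, non-inspected egress path
          }
        }
      }
    ]
  }
}
```

Use this pattern when egress performance matters more than inspection; otherwise route 0.0.0.0/0 to the Hub Firewall instead.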


Next, let’s walk through the User-Defined Route (UDR) configuration for a cross-region Hub-and-Spoke setup. This is the manual way to “force” traffic through your security stack.

Scenario

  • Region 1 (East US): Hub VNet (10.1.0.0/16) with a Firewall at 10.1.0.4.
  • Region 2 (West US): Hub VNet (10.2.0.0/16) with a Firewall at 10.2.0.4.
  • Spoke A (in East US): Subnet range 10.1.10.0/24.

1. The Route Table Logic

To get Spoke A (East US) to talk to a Spoke in West US, the packet needs instructions. The two spokes are not peered directly, so without a UDR there is no route to the remote spoke. And even when a UDR sends traffic through the Firewall, replies will be dropped unless the return path is mirrored on the other side.

Example UDR for Spoke A (East US)

You would create a Route Table and associate it with the subnets in Spoke A:

| Route Name | Address Prefix | Next Hop Type | Next Hop Address | Purpose |
|---|---|---|---|---|
| To-West-US | 10.2.0.0/16 | Virtual Appliance | 10.1.0.4 | Sends West US traffic to the local Hub Firewall first. |
| To-Internet | 0.0.0.0/0 | Virtual Appliance | 10.1.0.4 | Forces all web traffic through the Firewall (egress). |
| Local-Spoke-Traffic | 10.1.10.0/24 | Virtual network | None | Keeps traffic within the same VNet local. |
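As a hedged Bicep sketch, the route table above might look like this (the resource name is a placeholder; the prefixes and firewall IP come from the scenario, and the intra-VNet row needs no explicit route because system routes already cover it):

```bicep
// Route table for Spoke A's subnets (East US scenario).
resource spokeARoutes 'Microsoft.Network/routeTables@2023-09-01' = {
  name: 'rt-spoke-a-eastus' // placeholder name
  location: 'eastus'
  properties: {
    // Common practice: stop BGP routes from bypassing the firewall.
    disableBgpRoutePropagation: true
    routes: [
      {
        name: 'To-West-US'
        properties: {
          addressPrefix: '10.2.0.0/16'
          nextHopType: 'VirtualAppliance'
          nextHopIpAddress: '10.1.0.4' // East US Hub Firewall
        }
      }
      {
        name: 'To-Internet'
        properties: {
          addressPrefix: '0.0.0.0/0'
          nextHopType: 'VirtualAppliance'
          nextHopIpAddress: '10.1.0.4'
        }
      }
    ]
  }
}
```

The table must then be associated with each subnet in Spoke A, and mirrored in the West US spoke for the return path.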

2. The Traffic Flow (Step-by-Step)

When a VM in Spoke A (East US) sends a packet to Spoke B (West US):

  1. Source Check: The VM looks at its Route Table. It sees the 10.2.0.0/16 prefix points to 10.1.0.4 (Local Hub Firewall).
  2. First Hop: Packet travels to the East US Firewall. The Firewall inspects it against your Network Rules.
  3. Global Transit: The Firewall sees the destination is in West US. It sends the packet across the Global VNet Peering to the West US Hub.
  4. Second Hop: The West US Firewall receives the packet, inspects it again (if desired), and forwards it to the destination VM in the West US Spoke.
  5. Return Path: Symmetric routing is critical. The West US Spoke must have a mirrored UDR pointing East US traffic (10.1.0.0/16) to the West US Hub Firewall (10.2.0.4).

3. Comparison: Manual UDR vs. Virtual WAN (vWAN)

If writing these tables for 50 spokes sounds like a headache, that’s exactly why Azure Virtual WAN exists.

| Feature | Manual Hub-and-Spoke (UDR) | Azure Virtual WAN |
|---|---|---|
| Routing Table | You manually create/update every UDR. | Automated. Routes are propagated via BGP. |
| Transit | You must configure NVAs/Firewalls to route. | Native. The Virtual Hub is a transit router. |
| Complexity | High (risk of “route leaks” or loops). | Low (Microsoft manages the routing mesh). |
| Scaling | Hard to manage beyond 10–15 VNets. | Designed for 100s of VNets globally. |

Pro-Tip: The “Gateway Subnet” Exception

Never associate a UDR with the GatewaySubnet in your Hub unless you are very experienced with BGP. Doing so can “break” the connection between your on-premises VPN and your Azure VNets by creating routing loops.


Hub-and-Spoke architecture

The Hub-and-Spoke architecture is the gold standard for enterprise networking in Azure. It’s designed to centralize shared resources while providing isolation for individual workloads, effectively balancing security with scalability.

Think of it like an airport: the Hub is the main terminal (security, baggage, customs), and the Spokes are the individual gates where the planes (your apps) live.


1. The Hub: The “Central Command”

The Hub is a Virtual Network (VNet) that acts as the central point of connectivity. It typically contains resources shared by the entire organization.

  • Azure Firewall / NVA: All traffic entering or leaving the network passes through here for inspection.
  • VPN / ExpressRoute Gateway: Connects your on-premises data center to the Azure environment.
  • Shared Services: Domain Controllers (ADDS), DNS servers, or specialized management tools.
  • Azure Bastion: Provides secure RDP/SSH access to VMs without exposing public IPs.

2. The Spokes: Isolated Workloads

Spokes are separate VNets used to host specific workloads, such as a production environment, a dev/test environment, or a specific business application.

  • Isolation: Spokes do not communicate with each other by default. This “Zero Trust” approach ensures that if one spoke is compromised, the threat is contained.
  • Connectivity: Each spoke is connected to the Hub via VNet Peering.
  • Resource Management: Spokes are often managed by different teams but still rely on the Hub for security and connectivity.

3. How Traffic Flows (Routing)

The “magic” of the Hub-and-Spoke model lies in how data moves through the network.

  • Spoke-to-Internet: Traffic is usually forced through the Hub’s Firewall using User Defined Routes (UDRs). This ensures no application can “talk” to the internet without being inspected.
  • Spoke-to-Spoke: If Spoke A needs to talk to Spoke B, the traffic must travel to the Hub, be inspected by the Firewall, and then be routed back out to the destination spoke.
  • On-Premises-to-Spoke: The Hub acts as the gateway. External traffic hits the VPN/ExpressRoute in the Hub and is then routed to the correct Spoke.

4. Why Use This Architecture?

| Feature | Benefit |
|---|---|
| Cost Efficiency | You only pay for one Firewall and one VPN Gateway in the Hub, rather than putting them in every Spoke. |
| Security | Centralized “choke point” for security policies and traffic monitoring. |
| Scalability | You can add new Spokes easily without redesigning the core network. |
| Governance | Central IT can manage the Hub (security), while App Teams manage their own Spokes (agility). |

5. The “Virtual WAN” Evolution

For very large global deployments, Microsoft offers Azure Virtual WAN. This is essentially “Hub-and-Spoke as a Service.” It automates the peering and routing, making it easier to manage hundreds of spokes across different geographic regions.

Implementing this for a new migration

If you are starting a new migration, aligning your Hub-and-Spoke architecture with the Azure Landing Zone (ALZ) framework is the best move. It ensures you don’t just build a “network,” but a governed ecosystem that can grow from 10 VMs to 1,000 without a redesign.

Here is the blueprint for aligning a new migration to the Hub-and-Spoke model in 2026.


1. The Hierarchy (Management Groups)

Before touching the network, you must organize your subscriptions. Alignment starts with Management Groups (MGs).

  • Interim/Sandbox MG: For initial migration testing.
  • Platform MG: Holds your Hub subscription (Connectivity, Identity, Management).
  • Landing Zones MG: Holds your Spoke subscriptions, organized by archetype (e.g., “Corp” for internal apps, “Online” for public apps).

2. Connectivity: Hub vs. Spoke Strategy

The Hub (Centralized Services)

In a new migration, the Hub is your “Landing Strip.” It should be deployed first and contain:

  • Azure Firewall: Acts as the central security guard for all “North-South” (Internet) and “East-West” (Spoke-to-Spoke) traffic.
  • Private DNS Resolver: Critical for migrations. It ensures your migrated Azure VMs can still resolve names of servers remaining on-premises.
  • Gateway Subnet: Where your VPN or ExpressRoute lands to connect your old data center to the new cloud environment.

The Spokes (Workload Isolation)

Each application or business unit gets its own Spoke VNet.

  • VNet Peering: Connect each Spoke to the Hub.
  • Gateways: Spokes should not have their own VPN gateways. They “use the remote gateway” in the Hub to reach on-premises.

3. The Migration “Paved Path”

To make the migration repeatable, use a Subscription Vending Machine approach.

| Step | Action | Why? |
|---|---|---|
| 1. Standardize | Use Infrastructure as Code (IaC) (Bicep or Terraform). | Avoids “snowflake” configurations that are hard to support. |
| 2. Secure | Apply Azure Policy at the Landing Zone MG level. | Automatically prevents Spokes from creating Public IPs or bypassing the Firewall. |
| 3. Route | Deploy User Defined Routes (UDRs) in every Spoke subnet. | Forces all traffic (0.0.0.0/0) to the Hub Firewall for inspection. |
| 4. Migrate | Use Azure Migrate or Azure Site Recovery (ASR). | Replicates your on-prem VMs directly into the designated Spoke VNets. |

4. Key 2026 Best Practices

  • Identity-First Security: Don’t just rely on the network. Use Microsoft Entra ID (formerly Azure AD) and Managed Identities for all migrated apps.
  • Private Link Everything: Instead of opening ports, use Private Endpoints in your Spokes to connect to PaaS services (like SQL or Storage) privately.
  • Observability: Connect all Spokes to a central Log Analytics Workspace in the Hub’s “Management” subscription for a single pane of glass during the migration cutover.
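The observability point can be sketched as a diagnostic setting on the Hub firewall. This is an illustrative fragment: `afw-hub` is a placeholder name and the workspace ID is passed in as a parameter:

```bicep
// Resource ID of the central Log Analytics Workspace (illustrative parameter).
param workspaceId string

resource firewall 'Microsoft.Network/azureFirewalls@2023-09-01' existing = {
  name: 'afw-hub' // placeholder
}

// Send all firewall logs to the central workspace ("single pane of glass").
resource fwDiag 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'diag-to-central-law'
  scope: firewall
  properties: {
    workspaceId: workspaceId
    logs: [
      {
        categoryGroup: 'allLogs'
        enabled: true
      }
    ]
  }
}
```

The same pattern applies to gateways, Bastion, and spoke resources, so every component reports to the one workspace.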

5. Pro-Tip: Use the “ALZ Accelerator”

Microsoft provides a Landing Zone Accelerator (available in the Azure Portal or via GitHub). It allows you to deploy a fully compliant Hub-and-Spoke foundation in about 20 minutes. It handles the management groups, policies, and core networking for you, so you can focus on moving your data rather than building the pipes.

Warning: During a migration, watch out for IP Address Overlap. Ensure the CIDR ranges you choose for your Azure Spokes do not conflict with your existing on-premises network or you won’t be able to peer them.


VNet Peering


Q: What is the difference between regional and global VNet peering? Are there any restrictions with global peering?

Regional VNet peering connects two VNets within the same Azure region. Global VNet peering connects VNets across different Azure regions.

Restrictions with global peering:

  • Basic Load Balancer — Resources behind a Basic Load Balancer in one VNet cannot be reached from a globally peered VNet. Standard Load Balancer works fine.
  • Latency — Global peering crosses region boundaries, so latency is higher than regional peering. You need to account for this in latency-sensitive workloads.
  • Cost — Global peering incurs data transfer charges in both directions, whereas regional peering charges are lower.
  • No transitive routing — Same as regional peering, traffic does not flow transitively through a peered VNet without additional configuration.

Q: Can peered VNets communicate transitively by default? How would you work around this?

No — transitive routing is not supported natively in VNet peering. If Spoke A is peered to the Hub, and Spoke B is peered to the Hub, Spoke A cannot reach Spoke B directly through the Hub by default.

To work around this, you have two main options:

  1. Azure Firewall or NVA in the Hub — Route traffic from Spoke A through the Hub firewall, which then forwards it to Spoke B. This requires User Defined Routes (UDRs) on both Spokes pointing their traffic to the firewall’s private IP as the next hop. This is the most common enterprise approach and has the added benefit of traffic inspection.
  2. Azure Virtual WAN — Virtual WAN supports transitive routing natively, making it a cleaner option when you have many Spokes and don’t want to manage UDRs manually.

Q: Spoke A and Spoke B are peered to the Hub. Can Spoke A reach Spoke B? What needs to be in place?

Not by default. To enable this:

  • Deploy Azure Firewall (or an NVA) in the Hub VNet
  • Create a UDR on Spoke A’s subnet with a route: destination = Spoke B’s address space, next hop = Azure Firewall private IP
  • Create a mirror UDR on Spoke B’s subnet: destination = Spoke A’s address space, next hop = Azure Firewall private IP
  • Configure Azure Firewall network rules to allow the traffic between Spoke A and Spoke B
  • Enable “Allow Gateway Transit” (Hub side) and “Use Remote Gateways” (Spoke side) on the peerings if the Spokes also need on-premises connectivity through the Hub gateway; these settings are not required for firewall-based Spoke-to-Spoke transit

This gives you transitive connectivity with centralized inspection — a core benefit of Hub-and-Spoke.


Q: When would you choose VNet peering over VPN Gateway or ExpressRoute for VNet-to-VNet connectivity?

  • VNet Peering — Best for Azure-to-Azure connectivity. It uses the Microsoft backbone, offers the lowest latency, highest throughput, and is the simplest to configure. Use it whenever both VNets are in Azure.
  • VPN Gateway (VNet-to-VNet) — Used when you need encrypted tunnels between VNets, or when connecting across different Azure tenants/subscriptions where peering may be complex. Higher latency and limited throughput compared to peering.
  • ExpressRoute — Used for on-premises to Azure connectivity over a private, dedicated circuit. Not typically used for VNet-to-VNet unless traffic must flow through on-premises for compliance or inspection reasons.

In short: always prefer peering for Azure-to-Azure, and reserve VPN/ExpressRoute for hybrid or cross-tenant scenarios.

Azure DNS

DNS Architecture

Q: Can you explain the difference between Azure Public DNS and Azure Private DNS Zones, and when you would use each?

Azure Public DNS is used to host publicly resolvable domain names — for example, resolving www.yourcompany.com from the internet. Anyone on the internet can query it.

Azure Private DNS Zones, on the other hand, are only resolvable within a VNet or linked VNets. They are used for internal name resolution — for example, resolving a private endpoint for a storage account like mystorageaccount.privatelink.blob.core.windows.net from inside your network, without exposing it publicly.

You use Public DNS when you need external-facing resolution, and Private DNS Zones when you need secure, internal name resolution for resources that should never be reachable from the internet.


Q: How does DNS resolution work for a VM inside a VNet — what is the default behavior, and when would you override it?

By default, Azure provides a built-in DNS resolver at the special IP 168.63.129.16. Every VM in a VNet uses this address automatically. It can resolve Azure-internal hostnames and any Private DNS Zones linked to that VNet.

You would override this default when:

  • You need to resolve on-premises hostnames from Azure (hybrid scenarios)
  • You need conditional forwarding to route specific domain queries to specific DNS servers
  • You are using a centralized custom DNS server (e.g., a DNS forwarder VM or Azure DNS Private Resolver) to control and log all DNS traffic across the environment

In those cases, you configure a custom DNS server address at the VNet level, pointing VMs to your centralized resolver instead.
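That VNet-level override is a small change in the VNet definition. As an illustrative Bicep fragment (names and IPs are placeholders; 10.1.0.68 stands in for a central resolver in the Hub):

```bicep
// Point a spoke VNet at a centralized resolver instead of the
// default Azure-provided DNS (168.63.129.16).
resource spokeVnet 'Microsoft.Network/virtualNetworks@2023-09-01' = {
  name: 'vnet-spoke-a' // placeholder
  location: 'eastus'
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.1.10.0/24'
      ]
    }
    dhcpOptions: {
      dnsServers: [
        '10.1.0.68' // illustrative IP of the central DNS resolver
      ]
    }
    subnets: [
      {
        name: 'snet-workload'
        properties: {
          addressPrefix: '10.1.10.0/24'
        }
      }
    ]
  }
}
```

VMs pick up the new DNS servers on their next DHCP lease renewal (or reboot).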


Q: What is conditional forwarding, and how would you set it up to resolve on-premises domain names from Azure?

Conditional forwarding is a DNS rule that says: “For queries matching this specific domain, forward them to this specific DNS server instead of resolving them normally.”

For example, if your on-premises domain is corp.contoso.local, you would configure your Azure DNS resolver to forward any query for corp.contoso.local to your on-premises DNS server IP.

The setup typically looks like this:

  • Deploy Azure DNS Private Resolver with an outbound endpoint in your Hub VNet
  • Create a DNS forwarding ruleset with a rule: corp.contoso.local → forward to on-premises DNS IP
  • Associate the ruleset with the relevant VNets
  • Ensure the on-premises DNS server can be reached over ExpressRoute or VPN
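The ruleset and rule from the steps above can be sketched in Bicep like this. The on-premises DNS IP (192.168.1.10) and resource names are illustrative placeholders:

```bicep
// Resource ID of the Private Resolver's outbound endpoint (illustrative parameter).
param outboundEndpointId string

// Ruleset that holds the conditional-forwarding rules.
resource ruleset 'Microsoft.Network/dnsForwardingRulesets@2022-07-01' = {
  name: 'ruleset-onprem'
  location: 'eastus'
  properties: {
    dnsResolverOutboundEndpoints: [
      {
        id: outboundEndpointId
      }
    ]
  }
}

// "For corp.contoso.local, ask the on-premises DNS server."
resource fwdRule 'Microsoft.Network/dnsForwardingRulesets/forwardingRules@2022-07-01' = {
  parent: ruleset
  name: 'corp-contoso-local'
  properties: {
    domainName: 'corp.contoso.local.' // trailing dot is required
    forwardingRuleState: 'Enabled'
    targetDnsServers: [
      {
        ipAddress: '192.168.1.10' // illustrative on-premises DNS IP
        port: 53
      }
    ]
  }
}
```

The ruleset is then linked to the VNets that should honor the rule (a separate virtualNetworkLinks child resource).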

Q: A client reports that their Azure VM cannot resolve a private endpoint hostname. What are the first things you check?

I would systematically check the following:

  1. Private DNS Zone linkage — Is the Private DNS Zone (e.g., privatelink.blob.core.windows.net) linked to the VNet the VM is in? Without the link, the zone is invisible to that VNet.
  2. A record presence — Does the Private DNS Zone actually contain an A record pointing to the private endpoint’s IP?
  3. Custom DNS configuration — If the VNet uses a custom DNS server, is that server forwarding queries for privatelink.* domains to Azure’s resolver (168.63.129.16)? This is a very common misconfiguration.
  4. nslookup / dig from the VM — Run nslookup <hostname> on the VM to see what IP is being returned. If it returns the public IP instead of the private IP, the DNS zone is not being picked up correctly.
  5. Network connectivity — Even if DNS resolves correctly, confirm NSG rules and routing aren’t blocking traffic to the private endpoint IP.

Q: How would you use Azure DNS Private Resolver, and how does it differ from a traditional DNS forwarder running on a VM?

Azure DNS Private Resolver is a fully managed, highly available DNS service that handles inbound and outbound DNS resolution without requiring you to manage VMs.

  • The inbound endpoint allows on-premises clients to send DNS queries into Azure and resolve Private DNS Zones — something that wasn’t possible before without a forwarder VM.
  • The outbound endpoint with forwarding rulesets allows Azure VMs to forward specific domain queries (e.g., on-premises domains) to external DNS servers.

Compared to a forwarder VM, DNS Private Resolver is:

  • Fully managed — no patching, no VM maintenance, no availability concerns
  • Scalable — handles high query volumes automatically
  • Integrated — natively understands Azure Private DNS Zones without extra configuration
  • More secure — no need to open management ports on a VM

The main reason teams still use forwarder VMs is legacy architecture or specific advanced configurations not yet supported by Private Resolver.



🔵 Hub-and-Spoke Network Design

Q: Explain the Hub-and-Spoke topology. What lives in the Hub, and what lives in the Spokes?

Hub-and-Spoke is a network design pattern where a central VNet (the Hub) acts as the connectivity and security backbone, and multiple Spoke VNets connect to it via peering.

The Hub hosts shared, centralized services:

  • Azure Firewall or NVA for traffic inspection and internet egress control
  • VPN Gateway or ExpressRoute Gateway for on-premises connectivity
  • Azure DNS Private Resolver
  • Bastion for secure VM access
  • Shared monitoring and logging infrastructure

The Spokes host workload-specific resources:

  • Application VMs, AKS clusters, App Services
  • Databases and storage
  • Each Spoke is isolated — it can only communicate outside its boundary through the Hub, which enforces security policies

This model gives you centralized governance and security without duplicating shared services in every workload environment.


Q: How do you enforce traffic inspection through the Hub for Spoke-to-internet traffic?

  • Deploy Azure Firewall in the Hub VNet
  • On each Spoke subnet, create a UDR with a default route: 0.0.0.0/0 → next hop = Azure Firewall private IP
  • This forces all outbound internet traffic from Spoke VMs through the firewall before it exits
  • On the Hub, configure Azure Firewall application and network rules to define what traffic is allowed out
  • Enable Azure Firewall DNS proxy if you want centralized DNS logging as well

For Spoke-to-Spoke, additional UDRs point inter-spoke traffic to the firewall as described earlier.
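On the firewall side, the allow rules might be sketched as a rule collection group on a firewall policy. This assumes Azure Firewall with Firewall Policy; the policy name, priorities, and spoke prefixes are illustrative placeholders:

```bicep
resource fwPolicy 'Microsoft.Network/firewallPolicies@2023-09-01' existing = {
  name: 'afwp-hub' // placeholder
}

// Allow Spoke A <-> Spoke B through the hub firewall (network rule).
resource spokeRules 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2023-09-01' = {
  parent: fwPolicy
  name: 'rcg-spoke-traffic'
  properties: {
    priority: 200
    ruleCollections: [
      {
        ruleCollectionType: 'FirewallPolicyFilterRuleCollection'
        name: 'allow-spoke-to-spoke'
        priority: 100
        action: {
          type: 'Allow'
        }
        rules: [
          {
            ruleType: 'NetworkRule'
            name: 'spoke-a-to-spoke-b'
            ipProtocols: [ 'Any' ]
            sourceAddresses: [ '10.1.10.0/24' ]      // Spoke A (illustrative)
            destinationAddresses: [ '10.2.10.0/24' ] // Spoke B (illustrative)
            destinationPorts: [ '*' ]
          }
        ]
      }
    ]
  }
}
```

In practice you would scope the protocols and ports to the flows each workload actually needs rather than allowing everything.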


Q: A new business unit needs to be onboarded into your existing Hub-and-Spoke architecture. Walk me through the steps.

  1. IP planning — Allocate a non-overlapping address space for the new Spoke VNet from the enterprise IP plan
  2. Create the Spoke VNet — Deploy it in the appropriate subscription under the correct Management Group
  3. Establish peering — Create bidirectional peering between the new Spoke and the Hub (allow gateway transit on Hub side, use remote gateway on Spoke side if needed)
  4. Configure UDRs — Apply route tables on the Spoke subnets to direct internet and cross-spoke traffic through the Hub firewall
  5. DNS configuration — Point the Spoke VNet’s DNS settings to the centralized DNS Private Resolver in the Hub
  6. Firewall rules — Add rules in Azure Firewall to permit the business unit’s required traffic flows
  7. Azure Policy — Ensure the new subscription inherits enterprise policies (e.g., no public IPs, required tags, allowed regions)
  8. Private DNS Zone links — Link relevant Private DNS Zones to the new Spoke VNet for private endpoint resolution
  9. Connectivity testing — Validate DNS resolution, internet egress, and any required on-premises connectivity
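Step 8 (linking Private DNS Zones) is a one-resource change per zone. As an illustrative Bicep fragment, assuming the blob privatelink zone already exists and the new Spoke's VNet ID is passed in:

```bicep
// Resource ID of the new Spoke VNet (illustrative parameter).
param spokeVnetId string

resource blobZone 'Microsoft.Network/privateDnsZones@2020-06-01' existing = {
  name: 'privatelink.blob.core.windows.net'
}

// Link the zone to the new Spoke so its VMs resolve private endpoints.
resource spokeLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  parent: blobZone
  name: 'link-spoke-newbu' // placeholder
  location: 'global' // zone links are always 'global'
  properties: {
    registrationEnabled: false // resolution only; no auto VM registration
    virtualNetwork: {
      id: spokeVnetId
    }
  }
}
```

Repeat for each privatelink zone the business unit's workloads depend on (SQL, Key Vault, etc.).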

🔵 Landing Zones & Enterprise Network Governance

Q: What is an Azure Landing Zone, and how does networking fit into it?

An Azure Landing Zone is a pre-configured, governed Azure environment that provides the foundation for hosting workloads securely and at scale. It is designed following Microsoft’s Cloud Adoption Framework (CAF) and covers identity, governance, security, networking, and management.

Networking is one of the most critical components. In the CAF Landing Zone model:

  • A Connectivity subscription hosts the Hub VNet, gateways, firewall, and DNS infrastructure
  • Landing Zone subscriptions host Spoke VNets for individual workloads or business units
  • All networking is governed centrally — workload teams cannot create arbitrary public IPs or peer VNets outside the approved architecture
  • Azure Policy enforces these constraints automatically

Q: What role do Azure Policy and Management Groups play in enforcing network governance?

Management Groups create a hierarchy of subscriptions (e.g., Root → Platform → Landing Zones → Business Units). Policies applied at a Management Group level automatically inherit down to all subscriptions beneath it.

Azure Policy enforces guardrails such as:

  • Deny creation of public IP addresses in Spoke subscriptions
  • Require all VNets to use a specific custom DNS server
  • Deny VNet peering unless it connects to the approved Hub
  • Enforce NSG association on every subnet
  • Require private endpoints for PaaS services like Storage and SQL

Together, they ensure that even if a workload team has Contributor access to their subscription, they cannot violate the network architecture — the policies block non-compliant actions automatically.
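One of these guardrails — denying public IPs — can be sketched as a custom policy deployed at a Management Group scope. This is a hedged example; names are placeholders:

```bicep
targetScope = 'managementGroup'

// Guardrail: deny creation of public IP addresses under this MG.
resource denyPublicIp 'Microsoft.Authorization/policyDefinitions@2021-06-01' = {
  name: 'deny-public-ip'
  properties: {
    displayName: 'Deny public IP addresses'
    policyType: 'Custom'
    mode: 'All'
    policyRule: {
      if: {
        field: 'type'
        equals: 'Microsoft.Network/publicIPAddresses'
      }
      then: {
        effect: 'deny'
      }
    }
  }
}

resource assignDenyPublicIp 'Microsoft.Authorization/policyAssignments@2022-06-01' = {
  name: 'deny-public-ip'
  properties: {
    policyDefinitionId: denyPublicIp.id
  }
}
```

Because the assignment sits at the Management Group level, every current and future subscription beneath it inherits the rule automatically.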


Q: How would you manage IP address space allocation across multiple subscriptions to avoid conflicts?

This is an area where discipline and tooling are both essential:

  • Centralized IP plan — Maintain a master IP address management (IPAM) document or tool (e.g., Azure’s native IPAM feature in preview, or third-party tools like Infoblox or NetBox) that tracks all allocated ranges across subscriptions
  • Non-overlapping ranges per Spoke — Assign each Landing Zone a dedicated, non-overlapping CIDR block from a master supernet (e.g., 10.0.0.0/8 split into /16 per region, then /24 per Spoke)
  • Azure Policy — Use policy to deny VNet creation if the address space conflicts with known ranges or falls outside the approved supernet
  • Automation — When onboarding new Landing Zones via Pulumi or other IaC, automatically pull the next available range from the IPAM system rather than relying on manual assignment
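The "pull the next available range" step above can be sketched in plain Java: a toy allocator that hands out the next free /24 inside a regional /16. The class name and ranges are illustrative; a real setup would query your IPAM tool instead of an in-memory list.

```java
import java.util.List;

public class Ipam {
    // Dotted-quad IPv4 to a long, e.g. "10.1.0.0" -> 167837696
    static long toLong(String ip) {
        String[] p = ip.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
             | (Long.parseLong(p[2]) << 8) | Long.parseLong(p[3]);
    }

    static String toIp(long v) {
        return ((v >> 24) & 255) + "." + ((v >> 16) & 255) + "."
             + ((v >> 8) & 255) + "." + (v & 255);
    }

    // Next free /24 inside a regional /16, given the base addresses
    // of /24s already handed out to existing Spokes.
    static String nextFree24(String region16, List<String> allocated) {
        long base = toLong(region16);
        for (int i = 0; i < 256; i++) {
            String candidate = toIp(base + ((long) i << 8));
            if (!allocated.contains(candidate)) {
                return candidate + "/24";
            }
        }
        throw new IllegalStateException("regional /16 is exhausted");
    }

    public static void main(String[] args) {
        // 10.1.0.0/24 and 10.1.1.0/24 are taken, so the next Spoke gets:
        System.out.println(nextFree24("10.1.0.0", List.of("10.1.0.0", "10.1.1.0"))); // 10.1.2.0/24
    }
}
```

The same idea scales to /16-per-region out of a 10.0.0.0/8 supernet; only the shift widths change.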

🔵 Hybrid DNS Resolution

Q: On-premises clients need to resolve privatelink.blob.core.windows.net. What DNS architecture changes are needed?

This is one of the most common hybrid DNS challenges. By default, privatelink.blob.core.windows.net resolves to a public IP from on-premises. To make it resolve to the private endpoint IP, you need:

On the Azure side:

  • Create a Private DNS Zone for privatelink.blob.core.windows.net and link it to the Hub VNet
  • Ensure the private endpoint A record is registered in the zone
  • Deploy Azure DNS Private Resolver with an inbound endpoint in the Hub VNet — this gives on-premises clients a routable IP to send DNS queries into Azure

On the on-premises side:

  • Configure your on-premises DNS server with a conditional forwarder: privatelink.blob.core.windows.net → forward to the DNS Private Resolver inbound endpoint IP
  • Ensure the inbound endpoint IP is reachable over ExpressRoute or VPN from on-premises

Result: On-premises clients query their local DNS → conditional forwarder redirects to Azure DNS Private Resolver → Private Resolver checks the linked Private DNS Zone → returns the private endpoint IP → traffic flows privately over ExpressRoute/VPN.
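The conditional-forwarder decision described above boils down to a suffix match on the query name. A minimal sketch (the zone names, the 10.10.0.4 inbound-endpoint IP, and the on-prem DNS IP are all placeholder assumptions):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConditionalForwarder {
    // Zone suffix -> forwarder IP. Values here are illustrative only.
    static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("privatelink.blob.core.windows.net", "10.10.0.4"); // DNS Private Resolver inbound endpoint
        RULES.put("corp.example.com", "192.168.1.53");               // on-prem AD DNS
    }

    // Returns the forwarder that should receive this query,
    // or "recursive" to mean normal public resolution.
    static String forwarderFor(String name) {
        for (var e : RULES.entrySet()) {
            if (name.endsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return "recursive";
    }

    public static void main(String[] args) {
        System.out.println(forwarderFor("mystorage.privatelink.blob.core.windows.net")); // 10.10.0.4
        System.out.println(forwarderFor("www.example.org"));                             // recursive
    }
}
```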


Q: You’re migrating from a custom DNS forwarder VM to Azure DNS Private Resolver. How do you ensure zero DNS disruption?

  1. Deploy Private Resolver in parallel — Set up the inbound and outbound endpoints and configure forwarding rulesets to mirror the existing forwarder VM’s rules exactly
  2. Test thoroughly — Validate resolution of all key domains (on-premises, private endpoints, public) from test VMs pointing to the new resolver
  3. Staged migration — Update the custom DNS server setting on VNets one at a time, starting with non-production VNets, monitoring for any resolution failures
  4. Update on-premises conditional forwarders — Once Azure-side is validated, update on-premises DNS to point to the Private Resolver inbound endpoint instead of the old forwarder VM IP
  5. Monitor — Use Azure Monitor and DNS metrics on the Private Resolver to confirm query volumes are healthy
  6. Decommission the VM — Only after all VNets and on-premises forwarders are updated and validated, remove the forwarder VM

The key principle is run both in parallel, migrate gradually, and never cut over until validation is complete.


How to integrate a token in a Java call

Gotcha—here are quick, copy-paste ways to add an OIDC/JWT Bearer token to Java HTTP calls. Pick the stack you use.


Java 11+ HttpClient

var client = java.net.http.HttpClient.newHttpClient();
String token = "eyJ..."; // put your access token here

var req = java.net.http.HttpRequest.newBuilder()
    .uri(java.net.URI.create("https://api.example.com/resource"))
    .header("Authorization", "Bearer " + token)
    .header("Content-Type", "application/json")
    .POST(java.net.http.HttpRequest.BodyPublishers.ofString("{\"ping\":\"pong\"}"))
    .build();

var res = client.send(req, java.net.http.HttpResponse.BodyHandlers.ofString());
System.out.println(res.statusCode() + " " + res.body());


OkHttp (popular + simple)

OkHttpClient client = new OkHttpClient.Builder()
    .addInterceptor(chain -> {
        String token = "eyJ..."; // inject per request or from a provider
        Request req = chain.request().newBuilder()
            .addHeader("Authorization", "Bearer " + token)
            .build();
        return chain.proceed(req);
    })
    .build();

Request request = new Request.Builder()
    .url("https://api.example.com/resource")
    .post(RequestBody.create("{\"ping\":\"pong\"}", MediaType.get("application/json")))
    .build();

try (Response resp = client.newCall(request).execute()) {
    System.out.println(resp.code());
}


Apache HttpClient 5

var httpClient = HttpClients.custom()
    .addRequestInterceptorFirst((HttpRequest request, EntityDetails entity, HttpContext ctx) -> {
        request.addHeader("Authorization", "Bearer " + "eyJ...");
    })
    .build();

var post = new HttpPost("https://api.example.com/resource");
post.setHeader(HttpHeaders.CONTENT_TYPE, "application/json");
post.setEntity(new StringEntity("{\"ping\":\"pong\"}", StandardCharsets.UTF_8));

try (var resp = httpClient.execute(post)) {
    System.out.println(resp.getCode());
}


Spring (WebClient) — preferred in Spring Boot

@Bean
WebClient webClient() {
  return WebClient.builder()
      .filter((req, next) -> {
        String token = "eyJ..."; // inject from a bean that caches/refreshes
        ClientRequest r = ClientRequest.from(req)
            .header(HttpHeaders.AUTHORIZATION, "Bearer " + token).build();
        return next.exchange(r);
      })
      .build();
}

// use it
webClient().post().uri("https://api.example.com/resource")
  .contentType(MediaType.APPLICATION_JSON)
  .bodyValue(Map.of("ping","pong"))
  .retrieve().toEntity(String.class).block();


Spring (RestTemplate)

RestTemplate rt = new RestTemplate();
rt.getInterceptors().add((req, body, ex) -> {
  req.getHeaders().setBearerAuth("eyJ...");
  return ex.execute(req, body);
});
ResponseEntity<String> resp = rt.getForEntity("https://api.example.com/resource", String.class);


Feign (OpenFeign)

@Bean
public RequestInterceptor bearerAuth() {
  return template -> template.header("Authorization", "Bearer " + "eyJ...");
}


JAX-WS / SOAP (header example)

SOAP 1.1 often also needs SOAPAction, but the Bearer goes in HTTP headers:

BindingProvider bp = (BindingProvider) port;
Map<String, List<String>> headers = new HashMap<>();
headers.put("Authorization", List.of("Bearer eyJ..."));
bp.getRequestContext().put(MessageContext.HTTP_REQUEST_HEADERS, headers);


Getting the token (Ping/OIDC) in Java (client-credentials)

var client = HttpClient.newHttpClient();
var form = URLEncoder.encode("grant_type","UTF-8") + "=client_credentials" +
           "&" + URLEncoder.encode("client_id","UTF-8") + "=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_ID"),"UTF-8") +
           "&" + URLEncoder.encode("client_secret","UTF-8") + "=" + URLEncoder.encode(System.getenv("OIDC_CLIENT_SECRET"),"UTF-8");

var req = HttpRequest.newBuilder(URI.create("https://idp.example.com/oauth2/token"))
    .header("Content-Type", "application/x-www-form-urlencoded")
    .POST(HttpRequest.BodyPublishers.ofString(form))
    .build();

var res = client.send(req, HttpResponse.BodyHandlers.ofString());
String token = new org.json.JSONObject(res.body()).getString("access_token");


Pro tips (Kong/Ping friendly)

  • Always send Authorization: Bearer <token> (no quotes, single space).
  • Handle 401 by refreshing the token (cache access_token + expires_in).
  • For Cloudflare/ALB in front, ensure they don’t strip Authorization.
  • If you need mTLS as well, add your keystore/truststore to the HTTP client config; the Bearer header stays the same.
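The "cache access_token + expires_in" tip can be sketched as a small provider. The class name and the 60-second refresh margin are my own choices, and the Supplier stands in for whatever call actually hits the IdP token endpoint:

```java
import java.time.Instant;
import java.util.function.Supplier;

public class TokenProvider {
    // What the token endpoint returns (trimmed to the two fields we need).
    public record TokenResponse(String accessToken, long expiresInSeconds) {}

    private final Supplier<TokenResponse> fetcher; // performs the client-credentials call
    private String cached;
    private Instant expiry = Instant.EPOCH;

    public TokenProvider(Supplier<TokenResponse> fetcher) {
        this.fetcher = fetcher;
    }

    public synchronized String get() {
        // Refresh 60s before actual expiry so a token never dies mid-flight.
        if (cached == null || Instant.now().isAfter(expiry.minusSeconds(60))) {
            TokenResponse t = fetcher.get();
            cached = t.accessToken();
            expiry = Instant.now().plusSeconds(t.expiresInSeconds());
        }
        return cached;
    }
}
```

Wire `provider.get()` into any of the interceptors above (OkHttp, WebClient filter, RestTemplate) instead of the hard-coded "eyJ..." literal.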

If you tell me which client you’re using (Spring WebClient, RestTemplate, OkHttp, Apache, or pure Java 11) and how you obtain tokens (client-credentials vs user flow), I’ll tailor a tiny reusable “TokenProvider” + interceptor for you.

Kong – proxy issue

Got it—your client runs in AWS and must use a corporate proxy. With mTLS, a client-side proxy can absolutely be the culprit. Here’s the playbook.

What’s happening

  • If the proxy does TLS inspection (MITM), it terminates TLS and re-signs with its own CA. Your server asks the proxy (not the real client) for a cert → it has none → server logs “trying to obtain a certificate from the client.”
  • The client may also see “unable to get local issuer certificate” because it’s validating the proxy’s substituted cert but doesn’t trust the proxy’s Root CA.

Decide the path (pick one)

A) Allow end-to-end mTLS (best):
Ask the proxy admins to bypass SSL inspection for your domain (add it to the proxy’s TLS bypass list) or ensure it does pure CONNECT tunneling. Then the real client cert reaches your server.

B) Two-hop mTLS (enterprise pattern):

  • Client ↔ Proxy: mTLS using Client-Cert-#1 (issued by proxy’s CA).
  • Proxy ↔ Your Server: mTLS using Client-Cert-#2 (issued by a CA your server trusts).
    Your server will authenticate the proxy’s identity, not the original client. If you need end-user identity, have the proxy forward vetted identity (headers/JWT) and verify it.

C) Temporary test bypass:
On that AWS instance, set NO_PROXY for your hostname if policy allows, and open egress 443 to your server to confirm the issue is the proxy.

Quick diagnostics (run both)

1) See if proxy is intercepting

# Behind the proxy
openssl s_client -proxy PROXY_HOST:PROXY_PORT \
  -connect YOUR_HOST:443 -servername YOUR_HOST -showcerts </dev/null | \
  openssl x509 -noout -issuer -subject -enddate

Compare issuer/subject to the same command from a machine without the proxy.
If issuers differ (e.g., “Zscaler/BlueCoat/Corp Root CA”), it’s inspecting.

2) Prove mTLS works through a tunnel

# If proxy supports pure CONNECT (no inspection)
curl -v https://YOUR_HOST/ \
  --proxy http://PROXY_HOST:PROXY_PORT \
  --cert client.crt --key client.key \
  --cacert server_ca_chain.pem

  • If your proxy itself is HTTPS, add: --proxy-cacert PROXY_ROOT_CA.pem.

Java client notes (common in AWS)

# Client cert for mTLS to your server
-Djavax.net.ssl.keyStore=/path/client.p12 -Djavax.net.ssl.keyStorePassword=***** -Djavax.net.ssl.keyStoreType=PKCS12
# Trust your server (or proxy if inspecting)
-Djavax.net.ssl.trustStore=/path/truststore.jks -Djavax.net.ssl.trustStorePassword=*****
# Proxy settings (CONNECT)
-Dhttps.proxyHost=PROXY_HOST -Dhttps.proxyPort=PROXY_PORT
# Bypass (if allowed)
-Dhttp.nonProxyHosts="YOUR_HOST|*.yourdomain.com"

If inspection can’t be bypassed, you must import the proxy’s Root CA into the truststore, and if mTLS is required by your server, implement the two-hop mTLS model above.

What to send to the proxy team

  • “Please bypass TLS inspection (or allow pure CONNECT) for YOUR_HOST:443.”
  • If not possible: “We’ll run two-hop mTLS. We’ll trust your proxy’s client CA for the proxy→server leg; you’ll trust ours for client→proxy.”

If you share the outputs of the two openssl/curl tests (redact hostnames as needed), I’ll tell you exactly whether it’s inspection vs tunneling and the minimal change to make it work.

Short answer: tcpdump can capture payload, but only if the traffic is plaintext.
If it’s HTTPS/TLS (which most APIs are), tcpdump will not show the body unless you decrypt it. It also won’t write into your “app logs”—it just captures packets.

Here are your practical options:

ClientMTLSAuth status absent

It means no client certificate was presented/validated on the TLS handshake—i.e., client mTLS auth did not happen for that request.

Think of it like Nginx’s $ssl_client_verify = NONE: the server (Cloudflare/F5/Kong) didn’t see a client cert, so mTLS status is absent.

Why it happens

  • Client didn’t send a cert (missing --cert/--key in curl, or browser/keystore not configured).
  • mTLS not required on that hop (e.g., F5/Cloudflare set to “request/ignore” instead of “require”; Kong’s mtls-auth plugin not enabled).
  • TLS terminates before Kong (Cloudflare/F5 terminates TLS; the origin can’t see the client cert unless that proxy enforces mTLS itself).
  • Wrong host/port/SNI that doesn’t have the mTLS policy.

Quick checks

# PEM pair
curl -vk https://api.example.com/path \
  --cert client.crt --key client.key

# or PKCS#12
curl -vk https://api.example.com/path \
  --cert-type P12 --cert client.p12:password

Look for SSL client certificate lines in the verbose output.

  • On Cloudflare/F5:
    • Require client certs on the relevant hostname/path.
    • Ensure the CA used to validate client certs is uploaded/selected.
    • If using Cloudflare, remember: the client’s cert terminates at Cloudflare; the origin (Kong) won’t receive it. Enforce mTLS at Cloudflare edge and use origin controls (e.g., Authenticated Origin Pulls) separately.
  • On Kong:
    • If using Enterprise mtls-auth plugin, enable it on the route/service and upload your CA certificate to /ca-certificates; reference it in the plugin.
    • If relying on Nginx-level mTLS, ensure the proxy listener is configured with ssl_client_certificate and ssl_verify_client on; (or the equivalent in your deployment).

Interpreting statuses

  • absent = no cert presented.
  • failed/invalid = cert presented but couldn’t be validated (untrusted/expired/CN mismatch).
  • success/verified = cert was validated.

If you share how traffic flows (Client → Cloudflare/F5 → Kong) and where you intend to enforce mTLS, I’ll give you the exact config and a working curl for that hop.

Kong latency 2

Short answer:
In Kong logs, proxy latency is the time spent waiting on your upstream service (the API/backend) — i.e., how long it took the upstream to respond to Kong.

Here’s the breakdown of the three latency fields you’ll see in Kong logs:

  • latencies.proxy → Upstream latency (a.k.a. “proxy latency”): time from when Kong sends the request to the upstream until it starts getting the response.
  • latencies.kong → Kong internal time: routing + plugin execution + overhead inside Kong.
  • latencies.request → Total request time as seen by the client.

Quick mental model:

Client ──> [ Kong (latencies.kong) ] ──> Upstream API (latencies.proxy) ──> [ Kong ] ──> Client
                           \________________ latencies.request ________________/

A common point of confusion: in response headers,

  • X-Kong-Upstream-Latency → latencies.proxy (upstream time)
  • X-Kong-Proxy-Latency → latencies.kong (Kong time)

So, if you see high proxy latency, the slowness is almost always in your backend (or the network to it), not Kong itself. Focus on the upstream’s performance (DB calls, external services), network/DNS, and connection reuse; use Kong’s service/route timeouts (connect_timeout, read_timeout, write_timeout) to guard against outliers.
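The arithmetic behind that diagnosis can be made explicit. A toy classifier over the three fields (the thresholding rules are mine, not Kong's):

```java
public class KongLatency {
    // All values in milliseconds, as reported in a Kong log entry.
    // request ≈ kong + proxy + client<->Kong network transfer time.
    static String diagnose(long request, long proxy, long kong) {
        long network = request - proxy - kong; // whatever the other two don't explain
        if (proxy >= kong && proxy >= network) return "upstream-bound";
        if (kong >= network) return "kong-bound";
        return "network-bound";
    }

    public static void main(String[] args) {
        System.out.println(diagnose(250, 200, 10)); // upstream-bound: fix the backend
        System.out.println(diagnose(100, 10, 70));  // kong-bound: profile plugins
    }
}
```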

Kong – no suitable authorization credentials were provided

The error “no suitable authorization credentials were provided” usually means Kong (or an upstream service) expected authentication information, but it either:

  • Didn’t receive it at all
  • Received it in the wrong format
  • Received it in the wrong location (header vs. query parameter)
  • Couldn’t match it to any configured credentials

Here’s how to troubleshoot it step-by-step:


1. Identify which plugin or service is throwing the error

  • Kong Plugin Layer – Often caused by key-auth, jwt, or openid-connect plugin when no valid credential is passed.
  • Upstream Service Layer – Could be from the API/backend if Kong passed the request without credentials.
  • Load Balancer / F5 – Sometimes strips or alters headers, so credentials never reach Kong.

2. Check the authentication type you’re using

  • API Key (key-auth plugin)
    • Expected: apikey as a query parameter (?apikey=<key>) or as a header (apikey: <key>).
    • Verify: curl -H "apikey: YOUR_API_KEY" https://your-kong-endpoint/your-api
  • JWT Plugin
    • Expected: Authorization: Bearer <jwt_token> header.
    • Token must be valid and match a configured consumer.
  • OpenID Connect (openid-connect plugin)
    • Expected: Valid ID/Access token, usually Authorization: Bearer <access_token>.
    • Token must be signed by a trusted issuer in plugin config.

3. Verify headers are not stripped

If you have F5 / Nginx / Apache in front of Kong:

  • Check that Authorization headers are forwarded.
  • In F5, check the virtual server and any iRules/policies so the Authorization header is passed through unmodified.
  • In Nginx, ensure: proxy_set_header Authorization $http_authorization;

4. Check Kong logs for details

Run Kong in debug mode temporarily:

KONG_LOG_LEVEL=debug kong restart
tail -f /usr/local/kong/logs/error.log

Look for:

  • plugin=... lines showing missing credentials
  • no credentials found for given consumer/issuer

5. Confirm consumer mapping

For jwt or oidc:

  • The public key / JWKS URL must be correct.
  • The iss (issuer) in token must match plugin config.
  • The consumer must be linked to the credential.
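A quick way to check the iss claim is to decode the token payload locally without verifying it (signature verification stays Kong's job). The class name and the inline sample token are illustrative:

```java
import java.util.Base64;

public class JwtPeek {
    // Decodes the (unverified) payload segment of a JWT so you can eyeball
    // claims like "iss" before digging into Kong plugin config.
    static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
        return new String(Base64.getUrlDecoder().decode(parts[1]));
    }

    public static void main(String[] args) {
        // Build a throwaway token inline just to demonstrate the decode.
        var enc = Base64.getUrlEncoder().withoutPadding();
        String header = enc.encodeToString("{\"alg\":\"none\"}".getBytes());
        String payload = enc.encodeToString("{\"iss\":\"https://idp.example.com\"}".getBytes());
        System.out.println(payloadJson(header + "." + payload + ".sig"));
    }
}
```

If the printed iss does not match the issuer configured in the jwt/openid-connect plugin, that mismatch is your "no suitable credentials" culprit.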

💡 Common causes I’ve seen in your past scenarios with F5 + Kong + Ping Identity:

  • F5 stripping Authorization header before reaching Kong.
  • Wrong token type (ID token instead of Access token) in OIDC flow.
  • Consumer not mapped in Kong for the provided JWT/OIDC token.
  • client_id and client_secret missing in a token request.

If you want, I can draw you a flow diagram for Kong + Ping Identity showing exactly where the credential might be getting lost in your setup with F5 in the middle. That would make pinpointing the issue much faster.

Kong / our app is built using Java Spring Boot

Yes — when integrating certificates into Spring Boot, especially for mTLS, HTTPS, or secure inter-service communication, there are several important certificate handling considerations you should account for:


✅ 1. Keystore / Truststore Formats

Store      | Purpose                                             | Common Format
Keystore   | Holds your application’s private key + cert         | JKS, PKCS12 (.p12 / .pfx)
Truststore | Holds trusted CA certs to verify clients or servers | JKS, PKCS12

🟡 JKS is Java-native, but PKCS12 is preferred for modern apps (more portable + supports OpenSSL).

🔧 Example Spring Boot Config:

server.ssl.key-store=classpath:server.p12
server.ssl.key-store-password=changeit
server.ssl.key-store-type=PKCS12
server.ssl.trust-store=classpath:truststore.p12
server.ssl.trust-store-password=changeit
server.ssl.trust-store-type=PKCS12


✅ 2. Certificate Chain Requirements

Spring Boot expects:

  • Full certificate chain in the keystore if issued by an intermediate CA
  • Trusted root CA in the truststore (for mTLS)

🔸 Failure to include the full chain often results in handshake errors or “unable to find valid certification path” errors.


✅ 3. Protocol Configuration (TLS)

Spring Boot (via embedded Tomcat) uses TLS 1.2+ by default.

To restrict or specify:

server.ssl.enabled-protocols=TLSv1.2,TLSv1.3

🔐 Disable TLS 1.0/1.1 — they are deprecated and insecure.


✅ 4. Cipher Suites

Spring Boot (via Tomcat) negotiates secure ciphers by default.
You can explicitly define them:

server.ssl.ciphers=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,...

Use modern ciphers like ECDHE + AES_GCM.
Avoid RC4, 3DES, NULL, or EXPORT ciphers.


✅ 5. Mutual TLS (mTLS)

To require client certs:

server.ssl.client-auth=need

Modes:

  • none: default (no client cert)
  • want: optional client cert
  • need: mandatory client cert (for mTLS)

✅ 6. Generating Keystore & Truststore

Convert PEM to PKCS12:

openssl pkcs12 -export \
  -in client.crt \
  -inkey client.key \
  -certfile ca.crt \
  -out client-keystore.p12 \
  -name client

Then import trusted CA into truststore (if using JKS):

keytool -import -alias myca -file ca.crt -keystore truststore.jks


✅ 7. Spring Boot with Reverse Proxies (e.g., Kong, F5)

If TLS termination is done by Kong/F5 and Spring Boot sits behind it:

  • Use X-Forwarded headers: server.forward-headers-strategy=framework
  • Do not terminate TLS inside Spring unless you need end-to-end TLS.

✅ Summary: What to Account For

Area        | Key Considerations
🔐 Format   | Use PKCS12 for best compatibility
🧾 Trust    | Truststore must contain root CA or intermediate
🔄 Chain    | Include full cert chain in keystore
🌐 Protocol | Enable TLSv1.2+ only
🔐 Ciphers  | Use strong, modern suites
🔁 mTLS     | Set client-auth=need + configure truststore
🔀 Proxy    | Handle X-Forwarded-* if behind Kong/F5

Would you like:

  • A working Spring Boot + mTLS project example?
  • An OpenSSL script to prepare the .p12 files?
  • A Kong + Spring Boot mTLS walkthrough?

Let me know your setup.