Managing DNS in a hybrid environment (Azure + On-premises) can feel like a high-stakes game of “telephone.” As of 2026, the industry standard has moved away from the old “DNS Forwarder VMs” and settled on the Azure DNS Private Resolver.
Here are the primary resolution patterns you should know to keep your traffic flowing smoothly over VPN or ExpressRoute.
1. The Modern Hub-Spoke Pattern (Azure DNS Private Resolver)
This is the recommended architecture. It uses a managed service instead of VMs, reducing overhead and providing built-in high availability.
How it Works:
- Azure to On-Prem: You create an Outbound Endpoint in your Hub VNet and a Forwarding Ruleset. You link this ruleset to your Spoke VNets. When an Azure VM tries to resolve `internal.corp.com`, Azure DNS matches the rule and sends the query to your on-premises DNS servers.
- On-Prem to Azure: You create an Inbound Endpoint (a static IP in your VNet). On your local Windows/Linux DNS servers, you set up a Conditional Forwarder for Azure zones (like `privatelink.blob.core.windows.net`) pointing to that Inbound Endpoint IP.
2. The “Private Link” Pattern (Split-Brain Avoidance)
One of the biggest “gotchas” in hybrid setups is resolving Azure Private Endpoints. If you aren’t careful, your on-premises machine might resolve the public IP of a storage account instead of the private one.
- The Pattern: Always forward the public service suffix (e.g., `blob.core.windows.net`) to the Azure Inbound Endpoint, not just the `privatelink` version.
- Why: Azure DNS is “smart.” If you query the public name from a VNet linked to the matching Private DNS Zone, it automatically returns the private IP. If you only forward the `privatelink` zone, local developers have to change their connection strings, which is a massive headache.
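For this pattern to work, the `privatelink` zone has to exist in Azure, be linked to the VNet the Inbound Endpoint lives in, and have a record for the Private Endpoint. Here is a minimal Terraform sketch for a storage account; the resource references (`azurerm_storage_account.main`, `azurerm_subnet.workload`, the hub VNet and resource group names) are illustrative assumptions, not part of the patterns above.

```hcl
# Sketch: wire a storage Private Endpoint into the privatelink zone so that
# queries for <account>.blob.core.windows.net resolve to the private IP.
# All referenced resources (storage account, workload subnet, hub VNet/RG)
# are assumed to exist elsewhere in the configuration.
resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.hub.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "blob_hub" {
  name                  = "link-hub"
  resource_group_name   = azurerm_resource_group.hub.name
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = azurerm_virtual_network.hub.id
}

resource "azurerm_private_endpoint" "storage" {
  name                = "pe-storage"
  location            = azurerm_resource_group.hub.location
  resource_group_name = azurerm_resource_group.hub.name
  subnet_id           = azurerm_subnet.workload.id # assumed workload subnet

  private_service_connection {
    name                           = "psc-storage"
    private_connection_resource_id = azurerm_storage_account.main.id # assumed
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  # Registers the endpoint's A record in the privatelink zone automatically.
  private_dns_zone_group {
    name                 = "default"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}
```

With this in place, an on-premises client that forwards `blob.core.windows.net` to the Inbound Endpoint gets the private IP, while the same query from the public internet still resolves publicly.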
3. Legacy DNS Forwarder Pattern (IaaS VMs)
While largely replaced by the Private Resolver, some organizations still use Domain Controllers or BIND servers sitting in a Hub VNet.
| Feature | VM-based Forwarders | Azure DNS Private Resolver |
| --- | --- | --- |
| Management | You patch, scale, and back up. | Fully managed by Microsoft. |
| Availability | Requires Load Balancers/Availability Sets. | Built-in 99.9%–99.99% SLA. |
| Cost | High (compute + licenses + management). | Consumption-based (often cheaper). |
| Complexity | High (custom scripts for sync). | Low (native ARM/Terraform support). |
4. Key Configuration Rules for 2026
- The 168.63.129.16 Rule: This is the “Magic IP” for Azure DNS. You cannot query this IP directly from on-premises. You must use an Inbound Endpoint as a bridge.
- VNet Delegation: Remember that the subnets used for Private Resolver endpoints must be delegated specifically to `Microsoft.Network/dnsResolvers`. You can’t put VMs or other resources in those subnets.
- Avoid Recursive Loops: Never point an Azure Outbound Forwarder to an on-premises server that is also configured to forward those same queries back to Azure. This creates a “DNS death loop” that makes those queries fail outright.
Pro-Tip: If you are using Azure Virtual WAN, the DNS Private Resolver can be integrated into the Hub, allowing all connected spokes and branches to share the same resolution logic without redundant configurations.
Starting a fresh greenfield deployment?
For a greenfield deployment in 2026, you have the advantage of skipping the “technical debt” of legacy VM forwarders. The goal is a Hub-and-Spoke architecture using the Azure DNS Private Resolver.
This setup ensures that your on-premises office and your Azure cloud act as a single, cohesive network for naming.
1. The Essential Architecture
In a greenfield setup, you should centralize the resolver in your Hub VNet.
- Inbound Endpoint: Provides a static IP address in your Hub VNet. Your on-premises DNS servers (Windows/BIND) will use this as a Conditional Forwarder.
- Outbound Endpoint: A dedicated egress point that Azure DNS uses to send queries out to your on-premises DNS.
- Forwarding Ruleset: A logic engine where you define rules such as: “If a query is for `corp.local`, send it to the on-prem DNS server at `10.50.0.4`.”
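That rule maps one-to-one onto a Terraform resource. A minimal sketch (the ruleset reference and the `10.50.0.4` target are illustrative placeholders):

```hcl
# Sketch: "if the query is for corp.local, forward it to the on-prem server."
# The ruleset reference and target IP are illustrative assumptions.
resource "azurerm_private_dns_resolver_forwarding_rule" "corp" {
  name                      = "rule-corp-local"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.onprem.id
  domain_name               = "corp.local." # trailing dot required
  enabled                   = true

  target_dns_servers {
    ip_address = "10.50.0.4"
    port       = 53
  }
}
```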
2. Step-by-Step Implementation Strategy
Step 1: Subnet Design (Non-Negotiable)
The Private Resolver requires two dedicated subnets in your Hub VNet. They cannot contain any other resources (no VMs, no Gateways).
- Subnet 1 (Inbound): Minimum `/28`. Delegate to `Microsoft.Network/dnsResolvers`.
- Subnet 2 (Outbound): Minimum `/28`. Delegate to `Microsoft.Network/dnsResolvers`.
Step 2: The “Private Link” Rule of Thumb
To avoid the common “2026 DNS Trap” where resolution fails for Private Endpoints, do not just forward privatelink zones.
- On-Prem Config: Create conditional forwarders for the public suffixes (e.g., `blob.core.windows.net`, `database.windows.net`) pointing to your Azure Inbound Endpoint IP.
- Why? This allows the Azure Resolver to handle the logic. If a Private Endpoint exists, it returns the private IP; if not, it returns the public IP. This keeps your application connection strings identical regardless of where the app sits.
Step 3: Global Reach (Spoke VNets)
You don’t need a resolver in every spoke.
- Link your Forwarding Ruleset to every Spoke VNet.
- This instantly gives all spokes the ability to resolve on-premises names without changing their DNS server settings from “Azure Provided.”
3. Financial Blueprint (2026 Pricing)
Budgeting for this is straightforward but requires awareness of the “Base Fee.”
| Component | Estimated Cost (Monthly) |
| --- | --- |
| Inbound Endpoint | ~$180 |
| Outbound Endpoint | ~$180 |
| Forwarding Ruleset | ~$2.50 |
| Query Processing | ~$0.60 per million queries |
| Total Base | ~$362.50 / month |
4. Greenfield “Gotchas” to Avoid
- DNS Loops: Never link a Forwarding Ruleset to the same VNet where the Inbound Endpoint lives if that ruleset contains a rule pointing back to that Inbound Endpoint. You’ll create a recursive loop that crashes resolution.
- The 168.63.129.16 Myth: In 2026, some older documentation still suggests querying the Azure WireServer IP directly from on-prem. This will not work. Traffic to this IP is only routable within the Azure backbone.
- Subnet Sizing: While `/28` is the minimum, if you plan on a massive global architecture with dozens of endpoints, consider a `/27` to avoid future re-addressing headaches.
Want to deploy this via Terraform?
Deploying a Private DNS Resolver via Terraform requires a specific sequence of resources. In a greenfield scenario, you’ll likely want to use the Azure Verified Modules (AVM) or the standard azurerm provider resources.
Here is the “cleanest” implementation using standard azurerm resources for a Hub-and-Spoke architecture.
1. Subnet Delegation (The Foundation)
You cannot use standard subnets. You must delegate them to Microsoft.Network/dnsResolvers.
```hcl
# Inbound Subnet
resource "azurerm_subnet" "dns_inbound" {
  name                 = "snet-dns-inbound"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.10.0/28"]

  delegation {
    name = "delegation"
    service_delegation {
      name    = "Microsoft.Network/dnsResolvers"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}

# Outbound Subnet
resource "azurerm_subnet" "dns_outbound" {
  name                 = "snet-dns-outbound"
  resource_group_name  = azurerm_resource_group.hub.name
  virtual_network_name = azurerm_virtual_network.hub.name
  address_prefixes     = ["10.0.10.16/28"]

  delegation {
    name = "delegation"
    service_delegation {
      name    = "Microsoft.Network/dnsResolvers"
      actions = ["Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}
```
2. The Resolver & Endpoints
The Resolver itself is just a container; the endpoints do the heavy lifting.
```hcl
resource "azurerm_private_dns_resolver" "main" {
  name                = "res-hub-dns"
  resource_group_name = azurerm_resource_group.hub.name
  location            = azurerm_resource_group.hub.location
  virtual_network_id  = azurerm_virtual_network.hub.id
}

resource "azurerm_private_dns_resolver_inbound_endpoint" "inbound" {
  name                    = "inbound-endpoint"
  private_dns_resolver_id = azurerm_private_dns_resolver.main.id
  location                = azurerm_private_dns_resolver.main.location

  ip_configurations {
    subnet_id                    = azurerm_subnet.dns_inbound.id
    private_ip_allocation_method = "Dynamic" # Azure will assign an IP from the subnet
  }
}

resource "azurerm_private_dns_resolver_outbound_endpoint" "outbound" {
  name                    = "outbound-endpoint"
  private_dns_resolver_id = azurerm_private_dns_resolver.main.id
  location                = azurerm_private_dns_resolver.main.location
  subnet_id               = azurerm_subnet.dns_outbound.id
}
```
3. Forwarding Rulesets (The “Logic”)
This is where you tell Azure to send traffic to your on-premises DNS servers.
```hcl
resource "azurerm_private_dns_resolver_dns_forwarding_ruleset" "onprem" {
  name                                       = "ruleset-onprem"
  resource_group_name                        = azurerm_resource_group.hub.name
  location                                   = azurerm_resource_group.hub.location
  private_dns_resolver_outbound_endpoint_ids = [azurerm_private_dns_resolver_outbound_endpoint.outbound.id]
}

resource "azurerm_private_dns_resolver_forwarding_rule" "corp_local" {
  name                      = "rule-corp-local"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.onprem.id
  domain_name               = "corp.local." # Note the trailing dot
  enabled                   = true

  target_dns_servers {
    ip_address = "10.50.0.4" # Your On-Prem DNS Server IP
    port       = 53
  }
}
```
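A single rule can carry more than one `target_dns_servers` block, which is how you avoid a single point of failure on the on-prem side. A sketch of the same rule written as an HA variant (the `10.50.0.5` secondary is a hypothetical second DNS server, and this would replace the rule above rather than sit alongside it, since a ruleset allows only one rule per domain):

```hcl
# Sketch: one rule, two on-prem targets for redundancy.
# 10.50.0.5 is a hypothetical secondary DNS server; this variant
# would replace the single-target rule, not coexist with it.
resource "azurerm_private_dns_resolver_forwarding_rule" "corp_local" {
  name                      = "rule-corp-local"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.onprem.id
  domain_name               = "corp.local."
  enabled                   = true

  target_dns_servers {
    ip_address = "10.50.0.4"
    port       = 53
  }

  target_dns_servers {
    ip_address = "10.50.0.5"
    port       = 53
  }
}
```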
4. Linking to Spokes
To make this work for your entire environment, you must link the ruleset to your spoke VNets.
```hcl
resource "azurerm_private_dns_resolver_virtual_network_link" "spoke_links" {
  name                      = "link-spoke-app"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.onprem.id
  virtual_network_id        = azurerm_virtual_network.spoke_app.id
}
```
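Once you have more than a couple of spokes, repeating that link resource gets tedious; `for_each` over a map scales it cleanly. A sketch (the spoke map and the `spoke_data` VNet are assumptions for illustration):

```hcl
# Sketch: link the ruleset to every spoke from one map.
# The spoke VNet references are illustrative assumptions.
locals {
  spoke_vnets = {
    app  = azurerm_virtual_network.spoke_app.id
    data = azurerm_virtual_network.spoke_data.id # hypothetical second spoke
  }
}

resource "azurerm_private_dns_resolver_virtual_network_link" "spokes" {
  for_each                  = local.spoke_vnets
  name                      = "link-spoke-${each.key}"
  dns_forwarding_ruleset_id = azurerm_private_dns_resolver_dns_forwarding_ruleset.onprem.id
  virtual_network_id        = each.value
}
```

Adding a new spoke then becomes a one-line change to the map.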
Important Terraform Considerations for 2026:
- Trailing Dots: Always include the trailing dot in `domain_name` (e.g., `internal.com.`). If omitted, some versions of the provider may show a perpetual “diff” in your plan.
- IP Outputs: Since the Inbound Endpoint uses dynamic allocation, use an `output` block to capture the IP address. You’ll need this IP to configure the conditional forwarders on your On-Premises DNS servers.

  ```hcl
  output "dns_inbound_ip" {
    value = azurerm_private_dns_resolver_inbound_endpoint.inbound.ip_configurations[0].private_ip_address
  }
  ```

- Lifecycle Management: Endpoints can take 10-15 minutes to deploy. If you are running this in a CI/CD pipeline, ensure your timeout settings are sufficient.
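One way to handle those long deployment times explicitly is a `timeouts` block on the slow-provisioning resources. A sketch applied to the inbound endpoint (the 30-minute values are an assumption; tune them to your pipeline's limits):

```hcl
# Sketch: extend create/delete timeouts on a slow-provisioning endpoint.
# The 30m values are assumptions, not a recommendation from Azure.
resource "azurerm_private_dns_resolver_inbound_endpoint" "inbound" {
  name                    = "inbound-endpoint"
  private_dns_resolver_id = azurerm_private_dns_resolver.main.id
  location                = azurerm_private_dns_resolver.main.location

  ip_configurations {
    subnet_id                    = azurerm_subnet.dns_inbound.id
    private_ip_allocation_method = "Dynamic"
  }

  timeouts {
    create = "30m"
    delete = "30m"
  }
}
```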
Does your current Terraform setup include a central “Hub” module where this configuration would live?