In Kubernetes, Ingress is an API object that acts as a “smart router” for your cluster. While a standard Service (like a LoadBalancer) simply opens a hole in the firewall for one specific app, Ingress allows you to consolidate many services behind a single entry point and route traffic based on the URL or path.
Think of it as the receptionist of an office building: instead of every employee having their own front door, everyone uses one main entrance, and the receptionist directs visitors to the correct room based on who they are looking for.
1. How Ingress Works
There are two distinct parts required to make this work:
- Ingress Resource: A YAML file where you define your “rules” (e.g., “Send all traffic for `myapp.com/api` to the `api-service`”).
- Ingress Controller: The actual software (like NGINX, HAProxy, or Traefik) that sits at the edge of your cluster, reads those rules, and physically moves the traffic. Kubernetes does not come with a controller by default; you must install one.
2. Key Capabilities
Ingress is much more powerful than a simple Port or LoadBalancer because it operates at Layer 7 (HTTP/HTTPS).
- Host-based Routing: Route `blue.example.com` to the Blue Service and `green.example.com` to the Green Service using a single IP.
- Path-based Routing: Route `example.com/login` to the Auth service and `example.com/search` to the Search service.
- SSL/TLS Termination: You can handle your SSL certificates at the Ingress level so your individual application pods don’t have to deal with encryption/decryption.
- Name-based Virtual Hosting: Supporting multiple domain names on the same IP address.
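For instance, the host-based routing described above could be expressed as a single Ingress with two rules (a sketch; `blue-service` and `green-service` are placeholder names):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing
spec:
  rules:
  # Requests with Host: blue.example.com go to the Blue Service
  - host: blue.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blue-service
            port:
              number: 80
  # Requests with Host: green.example.com go to the Green Service
  - host: green.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: green-service
            port:
              number: 80
```

Both hostnames resolve to the same external IP; the controller inspects the HTTP `Host` header to pick the backend.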
3. Ingress vs. LoadBalancer vs. NodePort
Choosing how to expose your app is a common point of confusion. Here is the breakdown:
| Method | Best For | Pros/Cons |
| --- | --- | --- |
| NodePort | Testing/Dev | Opens a high-range port (30000–32767 by default) on every node. Awkward and rarely appropriate for production. |
| LoadBalancer | Single Apps | Standard way to get an external IP. Expensive if you have 50 apps, as you pay for 50 cloud load balancers. |
| Ingress | Production | Multiple apps share one load balancer. Extremely cost-effective and feature-rich. |
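For comparison, exposing one app with a LoadBalancer Service looks like this (names are illustrative); every additional app would need its own copy, each provisioning its own cloud load balancer:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # cloud provider provisions a dedicated external LB
  selector:
    app: my-app        # placeholder pod label
  ports:
  - port: 80
    targetPort: 8080
```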
4. What about OpenShift “Routes”?
Since you were asking about OCP earlier, it’s important to note that OpenShift has its own version called a Route.
- Routes were created by Red Hat before the Kubernetes Ingress standard existed.
- In modern OpenShift, an Ingress is automatically converted into a Route behind the scenes.
- The main difference: Routes natively support extra capabilities (such as passthrough and re-encrypt TLS termination, plus router integrations like the legacy F5 plugin and richer HAProxy tuning), while Ingress is the industry-standard “portable” way to do it.
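For reference, a minimal Route looks like this (a sketch; `my-web-service` is a placeholder Service name):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: my-web-service
  tls:
    termination: edge  # Routes also support 'passthrough' and 'reencrypt'
```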
5. Basic Ingress Example
This YAML routes traffic to two different services based on the URL path:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /billing
        pathType: Prefix
        backend:
          service:
            name: billing-service
            port:
              number: 80
      - path: /inventory
        pathType: Prefix
        backend:
          service:
            name: inventory-service
            port:
              number: 80
```
To set up HTTPS for an Ingress, you need a TLS Secret. This secret acts as a container for your SSL certificate and private key, which the Ingress Controller (like NGINX or HAProxy) uses to encrypt the traffic before it reaches your pods.
1. Create the TLS Secret
First, you need your certificate files (tls.crt and tls.key). You can create a Kubernetes Secret from these files using the CLI:
```bash
kubectl create secret tls my-app-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
```
Note: If you are using OpenShift, you can often skip this manual step by using Service Serving Certificates, which allow OpenShift to generate and manage the certificates for you automatically.
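With service serving certificates, you annotate the Service and OpenShift fills a secret with a generated cert/key pair (a sketch; the Service and secret names are your choice). Note these certs are signed by the cluster’s internal service CA, so they are intended for in-cluster traffic rather than public browsers:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-web-service
  annotations:
    # OpenShift generates a cert/key pair and stores it in this secret
    service.beta.openshift.io/serving-cert-secret-name: my-app-tls
spec:
  selector:
    app: my-web-app    # placeholder pod label
  ports:
  - port: 443
    targetPort: 8443
```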
2. Update the Ingress YAML
Now, you tell the Ingress resource to use that secret for a specific hostname.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    # This annotation tells the controller to redirect HTTP to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: my-app-tls  # This must match the secret name created in Step 1
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-web-service
            port:
              number: 80
```
3. Automation with Cert-Manager
Manually updating certificates before they expire is a headache. Most production clusters use Cert-Manager.
Cert-Manager is an operator that talks to certificate authorities like Let’s Encrypt. You simply add an annotation to your Ingress, and Cert-Manager handles the rest:
The “Magic” Annotation:
```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
```
Once you add this, Cert-Manager will:
- See the Ingress request.
- Reach out to Let’s Encrypt to verify you own the domain.
- Generate the `tls.crt` and `tls.key`.
- Create the Secret for you and renew it automatically as each 90-day certificate nears expiry.
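Under the hood, that annotation makes Cert-Manager create a Certificate resource. You can also create one directly instead of annotating an Ingress (a sketch; names are assumptions):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-cert
spec:
  secretName: app-tls-cert   # Cert-Manager writes tls.crt/tls.key here
  dnsNames:
  - myapp.example.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
```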
Summary Checklist for HTTPS
| Step | Action |
| --- | --- |
| 1. Certificate | Obtain a CA-signed cert or use Let’s Encrypt. |
| 2. Secret | Store the cert/key in a `kind: Secret` (type `kubernetes.io/tls`). |
| 3. Ingress Spec | Add the `tls:` section to your Ingress YAML. |
| 4. DNS | Ensure your domain points to the Ingress Controller’s IP. |
To automate SSL certificates with Cert-Manager, you need a ClusterIssuer. This is a cluster-wide resource that tells Cert-Manager how to talk to a Certificate Authority (CA) like Let’s Encrypt.
Before you start, ensure the Cert-Manager Operator is installed in your cluster (in OpenShift, you can find this in the OperatorHub).
1. Create a ClusterIssuer (The “Account”)
This YAML defines your identity with Let’s Encrypt. It uses the ACME (Automated Certificate Management Environment) protocol.
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server address for Let's Encrypt production
    server: https://acme-v02.api.letsencrypt.org/directory
    # Your email address for expiration notices
    email: admin@yourdomain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx  # Or 'openshift-default' depending on your ingress controller
```
2. Update your Ingress to “Request” the Cert
Once the ClusterIssuer is created, you don’t need to manually create secrets anymore. You just “tag” your Ingress with an annotation. Cert-Manager will see this, perform the challenge, and create the secret for you.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-secure-app
  annotations:
    # THIS IS THE TRIGGER: It links the Ingress to your ClusterIssuer
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - app.yourdomain.com
    secretName: app-tls-cert  # Cert-Manager will create this secret automatically
  rules:
  - host: app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
```
3. How to verify it’s working
After you apply the Ingress, Cert-Manager creates a Certificate object and a Challenge object. You can track the progress:
- Check the certificate status: `kubectl get certificate` (look for `READY: True`)
- Check the challenge status (if it’s stuck): `kubectl get challenges`
- Check the secret: `kubectl get secret app-tls-cert` (if this exists, your site is now HTTPS!)
Why use Let’s Encrypt?
- Cost: It is 100% free.
- Trust: It is recognized by all major browsers (unlike self-signed certs).
- No Maintenance: Cert-Manager automatically renews the cert 30 days before it expires.
A Small Warning:
Let’s Encrypt has rate limits. If you are just testing, use the “Staging” URL (https://acme-staging-v02.api.letsencrypt.org/directory) first. Browsers will show a warning for staging certs, but you won’t get blocked for hitting limit thresholds while debugging.
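A staging issuer is identical to the production one except for the server URL and the secret/issuer names (a sketch):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: generous rate limits, but browsers won't trust the certs
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@yourdomain.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
```

Once everything issues cleanly against staging, point the annotation back at the production issuer.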
When Cert-Manager fails to issue a certificate, it usually gets stuck in the Challenge phase. Let’s look at how to diagnose and fix the most common “Let’s Encrypt” roadblocks.
1. The Troubleshooting Command Chain
If your certificate isn’t appearing, follow this hierarchy to find where the “handshake” broke:
- Check the Certificate status: `oc get certificate`. If `READY` is `False`, move to the next step.
- Check the Order: `oc get order`. The Order is the request sent to Let’s Encrypt. Look at the `STATE` column.
- Check the Challenge (the most important step): `oc get challenges`. If a challenge exists, it means Let’s Encrypt is trying to verify your domain but can’t.
- Describe the Challenge for the error message: `oc describe challenge <challenge-name>`
2. Common Failure Reasons
A. The “I Can’t See You” (Firewall/Network)
Let’s Encrypt uses the HTTP-01 challenge. It tries to reach http://yourdomain.com/.well-known/acme-challenge/<TOKEN>.
- The Problem: Your firewall, Security Group (AWS/Azure), or OpenShift Ingress Controller is blocking Port 80.
- The Fix: Ensure Port 80 is open to the public internet. Let’s Encrypt cannot verify your domain over Port 443 (HTTPS) because the certificate doesn’t exist yet!
B. DNS Record Mismatch
- The Problem: Your DNS `A` record or `CNAME` for `app.yourdomain.com` hasn’t propagated yet or is pointing to the wrong Load Balancer IP.
- The Fix: Use `dig app.yourdomain.com` or `nslookup` to ensure the domain points exactly to your Ingress Controller’s external IP.
C. Rate Limiting
- The Problem: You’ve tried to issue the same certificate too many times in one week (Let’s Encrypt has a limit of 5 duplicate certs per week).
- The Fix: Switch your `ClusterIssuer` to use the Staging URL (mentioned in the previous step) until your configuration is 100% correct, then switch back to Production.
3. Dealing with Internal/Private Clusters
If your OpenShift cluster is behind a VPN and not accessible from the public internet, the HTTP-01 challenge will always fail because Let’s Encrypt can’t “see” your pods.
The Solution: DNS-01 Challenge
Instead of a web check, Cert-Manager proves ownership by adding a temporary TXT record to your DNS provider (Route53, Cloudflare, Azure DNS).
Example DNS-01 Issuer (Route53):
```yaml
spec:
  acme:
    solvers:
    - dns01:
        route53:            # the cert-manager solver key is 'route53'
          region: us-east-1
          hostedZoneID: Z123456789
          # AWS credentials are typically supplied via an IAM role, or via
          # accessKeyID plus a secretAccessKeySecretRef (omitted here)
```
Summary Checklist
- Is Port 80 open?
- Does DNS point to the cluster?
- Are you hitting Rate Limits?
- Is your Ingress Class correct in the Issuer?