In the world of Kubernetes and OpenShift, both Ingress and API Gateways serve as the entry point for external traffic. While they overlap in functionality, they operate at different levels of the networking stack and offer different “intelligence” regarding how they handle requests.
Think of Ingress as a simple receptionist directing people to the right room, while an API Gateway is a concierge who also checks IDs, translates languages, and limits how many people enter at once.
1. What is Ingress?
Ingress is a native Kubernetes resource (Layer 7) that manages external access to services, typically HTTP and HTTPS.
- Primary Job: Simple routing based on the URL path (e.g., `/api`) or the hostname (e.g., `app.example.com`).
- Implementation: In OCP, this is usually handled by the OpenShift Ingress Controller (based on HAProxy) using Routes.
- Pros: Lightweight, standard across Kubernetes, and built-in.
- Cons: Limited “logic.” It’s hard to do complex things like rate limiting, authentication, or request transformation without custom annotations.
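To make that limitation concrete: rate limiting on an OpenShift Route is only possible through HAProxy-specific annotations. A hedged sketch, using the route annotations from the OpenShift documentation (the Route name, host, and limit values are illustrative):

```yaml
# Sketch: per-Route rate limiting via HAProxy-specific annotations.
# The annotation keys are OpenShift-specific; the limits are illustrative.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    haproxy.router.openshift.io/rate-limit-connections: "true"
    haproxy.router.openshift.io/rate-limit-connections.rate-http: "100"  # HTTP requests allowed per source IP
spec:
  host: app.example.com
  to:
    kind: Service
    name: my-app-service
```

This works, but it is tied to the HAProxy implementation: the same YAML on a different Ingress controller would silently do nothing.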
2. What is an API Gateway?
An API Gateway is a more sophisticated proxy that sits in front of your microservices to provide “cross-cutting concerns.”
- Primary Job: API Management. It handles security, monitoring, and orchestration.
- Key Features:
- Authentication/Authorization: Validating JWT tokens or API keys before the request hits the service.
- Rate Limiting: Ensuring one user doesn’t spam your backend.
- Payload Transformation: Changing an XML request to JSON for a modern backend.
- Circuit Breaking: Stopping traffic to a failing service to prevent a total system crash.
- Examples: Kong, Tyk, Apigee, or the Red Hat 3scale API Management platform.
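As a concrete illustration of one of these cross-cutting concerns, Kong (via its Kubernetes Ingress Controller) expresses rate limiting as a declarative plugin resource rather than application code. A sketch, assuming the Kong Ingress Controller and its CRDs are installed (the plugin name and limits are illustrative):

```yaml
# Sketch: a declarative Kong rate-limiting plugin.
# Assumes the Kong Ingress Controller CRDs; values are illustrative.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: limit-per-minute
plugin: rate-limiting
config:
  minute: 60      # at most 60 requests per client per minute
  policy: local   # count requests in-memory on each Kong node
```

The point is the division of labor: the gateway enforces the quota, and no microservice behind it needs rate-limiting logic of its own.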
Key Comparison Table
| Feature | Ingress / Route | API Gateway |
| --- | --- | --- |
| OSI Layer | Layer 7 (HTTP/S) | Layer 7 + Application Logic |
| Main Goal | Expose services to the internet | Protect and manage APIs |
| Complexity | Low | High |
| Security | Basic SSL/TLS termination | JWT, OAuth, mTLS, IP Whitelisting |
| Traffic Control | Simple Load Balancing | Rate Limiting, Quotas, Retries |
| Cost | Usually free (built into OCP) | Often requires licensing or extra infra |
When to use which?
- Use Ingress/Routes when: You have a web application and just need to point a domain name to a service. It’s the “plumbing” of the cluster.
- Use an API Gateway when: You are exposing APIs to third parties, need strict usage tracking (monetization), or want to centralize security logic so your developers don’t have to write auth code for every single microservice.
The “Modern” Middle Ground: Gateway API
There is a newer Kubernetes standard called the Gateway API. It is designed to replace Ingress by providing the power of an API Gateway (like header-based routing and traffic splitting) while remaining a standard part of the Kubernetes ecosystem. In OpenShift, you can enable the Gateway API through the Operator.
To help you see the evolution, here is how the “old” standard (Ingress) compares to the “new” standard (Gateway API).
1. The Traditional Ingress
Ingress is a single, “flat” resource. It’s simple but limited because the person who owns the app (the developer) and the person who owns the network (the admin) have to share the same file.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
```
- The Problem: If you want to do something fancy like a “Canary deployment” (sending 10% of traffic to a new version), you usually have to use messy, vendor-specific annotations.
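For example, with the community NGINX Ingress Controller, a 10% canary requires a second Ingress object carrying vendor-specific annotations. A sketch (the annotation keys are NGINX-specific and ignored by other controllers; names are illustrative):

```yaml
# Sketch: canary traffic splitting with the community NGINX Ingress Controller.
# These annotations are vendor-specific; other controllers will ignore them.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # send 10% of traffic to this backend
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service-v2
            port:
              number: 80
```

Nothing in the standard Ingress spec describes this behavior, which is exactly the portability gap the Gateway API closes.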
2. The Modern Gateway API
The Gateway API breaks the configuration into pieces. This allows the Cluster Admin to define the entry point (the Gateway) and the Developer to define how their specific app is reached (the HTTPRoute).
The Admin’s Part (The Infrastructure):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: external-gateway
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```
The Developer’s Part (The Logic & Traffic Splitting):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
spec:
  parentRefs:
  - name: external-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-v1
      port: 80
      weight: 90  # 90% of traffic here
    - name: api-v2
      port: 80
      weight: 10  # 10% of traffic to the new version!
```
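Header-based routing, mentioned earlier as a Gateway API strength, is equally first-class in `HTTPRoute`. A sketch that sends only requests carrying a specific header to a beta backend (the header name and backend service are illustrative):

```yaml
# Sketch: header-based routing with HTTPRoute (names are illustrative).
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: beta-route
spec:
  parentRefs:
  - name: external-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - headers:
      - name: x-beta-tester   # requests carrying this header...
        value: "true"
    backendRefs:
    - name: api-beta          # ...are routed to the beta backend
      port: 80
```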
Summary of Differences
| Feature | Ingress | Gateway API |
| --- | --- | --- |
| Structure | Monolithic (One file for everything) | Role-based (Separated for Admin vs Dev) |
| Traffic Splitting | Requires non-standard annotations | Built-in (Weights/Canary) |
| Extensibility | Limited | High (Supports TCP, UDP, TLS, GRPC) |
| Portability | High (but annotations are not) | Very High (Standardized across vendors) |
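The extensibility row refers to the Gateway API's sibling route types. For example, raw TCP traffic (say, a database port) can be attached to a Gateway with a `TCPRoute`; note this type still lives in the `v1alpha2` API group, so treat the following as a sketch whose names and listener are illustrative:

```yaml
# Sketch: routing raw TCP through a Gateway listener.
# TCPRoute is still in v1alpha2 and may change between releases.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: my-tcp-route
spec:
  parentRefs:
  - name: external-gateway
    sectionName: tcp        # assumes a TCP listener named "tcp" exists on the Gateway
  rules:
  - backendRefs:
    - name: my-tcp-service
      port: 5432
```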
Why OpenShift is moving this way
OpenShift has been adopting the Gateway API since the 4.12 era because it solves the “annotation hell” that occurred when users tried to make basic Ingress act like a full API Gateway. It gives you the power of a professional gateway (like Kong or Istio) while staying within the native Kubernetes language.
In OpenShift 4.15 and later (reaching General Availability in 4.19), the Gateway API is managed by the Cluster Ingress Operator. Unlike standard Kubernetes where you might have to install many CRDs manually, OpenShift streamlines this by bundling the controller logic into its existing operators.
Here is the step-by-step process to enable and use it.
1. Enable the Gateway API CRDs
In newer versions of OCP, the CRDs are often present but “dormant” until a `GatewayClass` is created. The Ingress Operator watches for a specific `controllerName` to trigger the installation of the underlying proxy (which is Istio/Envoy in the Red Hat implementation).
Create the GatewayClass:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: openshift-default
spec:
  controllerName: openshift.io/gateway-controller/v1
```
What happens next? The Ingress Operator will automatically detect this and start a deployment called `istiod-openshift-gateway` in the `openshift-ingress` namespace.
2. Set up a Wildcard Certificate (Required)
Unlike standard Routes, the Gateway API in OCP does not automatically generate a default certificate. You need to provide a TLS secret in the `openshift-ingress` namespace.
```bash
# Example: Creating a self-signed wildcard for testing
oc -n openshift-ingress create secret tls gwapi-wildcard \
  --cert=wildcard.crt --key=wildcard.key
```
3. Deploy the Gateway
The Gateway represents the actual “entry point” or load balancer.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
  namespace: openshift-ingress
spec:
  gatewayClassName: openshift-default
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: "*.apps.mycluster.com"
    tls:
      mode: Terminate
      certificateRefs:
      - name: gwapi-wildcard
```
4. Create an HTTPRoute (Developer Task)
Now that the “door” (Gateway) is open, a developer in a different namespace can “attach” their application to it.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
  namespace: my-app-project
spec:
  parentRefs:
  - name: my-gateway
    namespace: openshift-ingress
  hostnames:
  - "myapp.apps.mycluster.com"
  rules:
  - backendRefs:
    - name: my-app-service
      port: 8080
```
Summary Checklist for the Interview
If you are asked how to set this up in an interview, remember these four pillars:
- Operator-Led: It’s managed by the Ingress Operator; no separate “Gateway Operator” is needed for the default Red Hat implementation.
- Implementation: OpenShift uses Envoy (via a lightweight Istio control plane) as the engine behind the Gateway API.
- Namespace: The `Gateway` object itself almost always lives in `openshift-ingress`.
- Service Type: Creating a Gateway usually triggers the automatic creation of a Service of `type: LoadBalancer`.