Hadoop High Availability (HA): Active/Active vs. Active/Passive

When designing a Hadoop High Availability (HA) solution, two common approaches are Active/Active and Active/Passive. These strategies help ensure data and service availability across failures and disasters. Let’s compare them in detail to help you understand their differences, benefits, challenges, and use cases.


1. Active/Active Hadoop Architecture

Overview:

  • Both sites are fully operational and handling workloads simultaneously.
  • Both clusters actively serve requests, and the load can be distributed between them.
  • Data is replicated between the sites, ensuring both sites are synchronized.

Key Components:

  • HDFS Federation: Each site has its own NameNode that manages a portion of the HDFS namespace.
  • YARN ResourceManager: Each site runs its own ResourceManager, coordinating job execution locally, but the jobs can be balanced between sites.
  • Zookeeper & JournalNodes Quorum: Spread across both sites to provide consistency and manage service coordination.
  • Cross-Site Replication: Hadoop’s DistCp or HDFS replication is used to replicate data across sites.
  • Hive/Impala Metastore: Shared between sites, ensuring consistent metadata.

Advantages:

  1. Load Balancing: Traffic and workloads can be distributed between the two active sites, reducing pressure on a single site.
  2. Low Recovery Time: In case of a site failure, the other site can immediately handle all workloads without downtime.
  3. Improved Resource Utilization: Both sites are fully operational, utilizing available resources efficiently.
  4. Fast Failover: If one site fails, the remaining site continues operating without needing to bring up services.

Challenges:

  1. Increased Complexity: Managing two active sites involves more complex setup, including federation, data replication, and synchronization.
  2. Data Consistency: Ensuring both sites have up-to-date data requires robust replication mechanisms and careful coordination.
  3. Conflict Resolution: Handling conflicting updates across both sites requires careful planning and automated conflict resolution strategies.

Operational Considerations:

  • Synchronization of Data: Ensure real-time or near real-time data replication across both sites.
  • Federated HDFS: Requires splitting data across multiple namespaces with NameNodes in each site.
  • Network Requirements: Reliable, high-bandwidth network links are essential for cross-site replication and synchronization.
  • Monitoring and Automation: Continuous monitoring of job failures, resource usage, and automatic load balancing/failover processes.

Best Use Cases:

  • Mission-Critical Workloads: Where zero downtime and continuous availability are essential.
  • Geographically Distributed Sites: When there is a need for global load balancing or when sites are geographically distant but still need to function as one.
  • High Load Systems: Systems that need to distribute workloads across multiple data centers to balance processing power.
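The cross-site replication mentioned above is typically driven by a scheduled DistCp job. Below is a minimal sketch; siteA-nn, siteB-nn, and /data are placeholder names, and the function only builds the command so it can be reviewed (dry run) before a cron job executes it:

```shell
#!/bin/bash
# Sketch: build the DistCp command used to keep two active sites in sync.
# siteA-nn, siteB-nn, and /data are illustrative placeholders.

build_distcp_cmd() {
  local src_nn="$1" dst_nn="$2" path="$3"
  # -update copies only changed files; -delete removes files that no
  # longer exist at the source, keeping both namespaces aligned.
  echo "hadoop distcp -update -delete hdfs://${src_nn}:8020${path} hdfs://${dst_nn}:8020${path}"
}

# Dry run: print the command that a scheduled job would execute.
build_distcp_cmd siteA-nn siteB-nn /data
```

In practice each site would run such a job against the paths it owns, at an interval matching the tolerated replication lag.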

2. Active/Passive Hadoop Architecture

Overview:

  • The Primary (Active) site handles all the workloads, while the Secondary (Passive) site is on standby.
  • In case of failure or disaster, the passive site takes over and becomes the active one.
  • The secondary site is synchronized with the active site, but it does not actively serve any workloads until failover occurs.

Key Components:

  • Active and Standby NameNodes: The active site runs the main NameNode, while the passive site hosts a standby NameNode.
  • YARN ResourceManager: Active ResourceManager at the primary site, standby ResourceManager at the secondary site.
  • Zookeeper & JournalNode Quorum: Distributed across both sites for fault tolerance and coordination.
  • HDFS Replication: Ensures data is replicated across both sites using HDFS data blocks.
  • Hive/Impala Metastore: Either synchronized or replicated between the two sites for metadata consistency.

Advantages:

  1. Simpler Setup: Easier to configure and manage compared to Active/Active architecture.
  2. Cost-Efficient: Since the passive site is not active until failover, fewer resources are consumed.
  3. Data Integrity: With a single active site at a time, data conflicts and consistency issues are less likely.
  4. Disaster Recovery: Ensures quick recovery of services in the event of failure or disaster in the primary site.

Challenges:

  1. Failover Time: There can be a delay in switching over from the active site to the passive site.
  2. Underutilized Resources: The passive site is mostly idle, which can lead to inefficient resource use.
  3. Single Point of Failure: Until failover occurs, there is a reliance on the primary site, creating a risk of downtime.
  4. Data Replication: You need to ensure that the passive site has the latest data in case of a failover.

Operational Considerations:

  • Automated Failover: Implement automated failover mechanisms using Zookeeper and JournalNodes to reduce downtime.
  • Data Synchronization: Ensure regular and real-time synchronization between the two sites to avoid data loss.
  • Disaster Recovery Testing: Regularly test the failover process to ensure that the passive site can take over with minimal downtime.
  • Backup and Monitoring: Maintain backups and monitor the status of both sites to detect any potential failures early.

Best Use Cases:

  • Cost-Conscious Environments: When you need a disaster recovery solution but don’t want the expense of running both sites at full capacity.
  • Disaster Recovery Scenarios: When one site is meant purely for recovery in case of major failure or disaster at the primary site.
  • Low-Volume Operations: When your workloads don’t justify the complexity and overhead of an active/active setup.
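In an Active/Passive HDFS HA pair, the failover described above can be checked and exercised with hdfs haadmin. A minimal sketch, assuming NameNode IDs nn1 (active) and nn2 (standby) as they would appear in hdfs-site.xml; the function only prints the commands so the sequence can be reviewed before running:

```shell
#!/bin/bash
# Sketch: commands an operator (or a runbook script) would use to check
# HA state and fail over from nn1 to nn2. nn1/nn2 are assumed NameNode IDs.

print_failover_plan() {
  local active="$1" standby="$2"
  echo "hdfs haadmin -getServiceState ${active}"
  echo "hdfs haadmin -getServiceState ${standby}"
  # Graceful failover; with automatic HA, the ZKFC handles fencing.
  echo "hdfs haadmin -failover ${active} ${standby}"
}

print_failover_plan nn1 nn2
```

Running this sequence during disaster recovery testing (see Operational Considerations) confirms the standby can actually take over.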

Set up services and routes in Kong API Gateway

  1. Shell script

<code>

#!/bin/bash

# Set the Kong Admin API URL
KONG_ADMIN_URL="http://localhost:8001"

# Define an array of services and their upstream URLs
declare -A services
services=(
  ["service11"]="http://example11.com:8080"
  ["service12"]="http://example12.com:8080"
  ["service13"]="http://example13.com:8080"
)

# Define routes corresponding to the services
declare -A routes
routes=(
  ["service11"]="/example11"
  ["service12"]="/example12"
  ["service13"]="/example13"
)

# Loop through the services and create them in Kong
for service in "${!services[@]}"; do
  # Create each service
  echo "Creating service: $service with URL: ${services[$service]}"
  curl -i -X POST "$KONG_ADMIN_URL/services" \
    --data "name=$service" \
    --data "url=${services[$service]}"

  # Create a named route for each service (naming the route lets us
  # reference it by name when attaching a plugin below)
  echo "Creating route for service: $service with path: ${routes[$service]}"
  curl -i -X POST "$KONG_ADMIN_URL/routes" \
    --data "name=$service" \
    --data "paths[]=${routes[$service]}" \
    --data "service.name=$service"

  # Optionally, add a plugin (e.g., key-auth) to each route
  echo "Adding key-auth plugin to route for service: $service"
  curl -i -X POST "$KONG_ADMIN_URL/routes/${service}/plugins" \
    --data "name=key-auth"
done

echo "All services and routes have been configured."

</code>

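Once the script has run, the registrations can be sanity-checked by listing `/services` on the Admin API and extracting the names from the JSON response. A minimal sketch, using only `grep`/`cut` so it does not assume `jq` is installed; the `sample` variable below stands in for a real `curl -s "$KONG_ADMIN_URL/services"` response:

```shell
#!/bin/bash
# Sketch: extract service names from a Kong Admin API /services response.
# In practice the JSON would come from: curl -s "$KONG_ADMIN_URL/services"

list_service_names() {
  # Pull every "name":"..." pair out of the compact JSON payload.
  grep -o '"name":"[^"]*"' | cut -d'"' -f4
}

sample='{"data":[{"name":"service11","host":"example11.com"},{"name":"service12","host":"example12.com"}],"next":null}'
echo "$sample" | list_service_names
```

This is only a quick check; for anything beyond a smoke test, a proper JSON parser such as `jq` is the better tool.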

  2. Ansible playbook

<code>

---
- name: Automate Kong API Mapping for Multiple Services with Different Ports
  hosts: localhost
  tasks:
    - name: Define a list of services and routes with different ports
      set_fact:
        services:
          - { name: service6, url: "http://service6.com:8086", path: /service6 }
          - { name: service7, url: "http://service7.com:8087", path: /service7 }
          - { name: service8, url: "http://service8.com:8088", path: /service8 }
          - { name: service9, url: "http://service9.com:8089", path: /service9 }
          - { name: service10, url: "http://service10.com:8090", path: /service10 }

    - name: Create a Service in Kong for each service with different ports
      uri:
        url: http://localhost:8001/services
        method: POST
        body_format: json
        body:
          name: "{{ item.name }}"
          url: "{{ item.url }}"
        status_code: 201
      with_items: "{{ services }}"
      register: service_creation

    - name: Create a Route for each Service
      uri:
        url: http://localhost:8001/routes
        method: POST
        body_format: json
        body:
          service:
            name: "{{ item.name }}"
          paths:
            - "{{ item.path }}"
        status_code: 201
      with_items: "{{ services }}"

</code>

How to integrate Kong mTLS certificates with a secrets management infrastructure

To integrate mTLS (Mutual TLS) certificates used in Kong API Gateway with a secrets management infrastructure (such as HashiCorp Vault, AWS Secrets Manager, or other secret management tools), you can follow a systematic approach to store, retrieve, and rotate the certificates securely.

Key Steps:

  1. Generate mTLS certificates.
  2. Store certificates securely in the secrets management infrastructure.
  3. Configure Kong to retrieve and use certificates for mTLS.
  4. Automate certificate rotation for secure management.

Step 1: Generate mTLS Certificates

You need to generate the client and server certificates that Kong will use for mTLS.

  1. Generate a Certificate Authority (CA): First, generate a CA to sign the certificates.

openssl genrsa -out ca.key 2048

openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt \
  -subj "/CN=Kong-CA"

  2. Generate the Server Certificate: Generate a private key and a certificate signing request (CSR) for Kong, and sign it with your CA.

openssl genrsa -out kong-server.key 2048

openssl req -new -key kong-server.key -out kong-server.csr -subj "/CN=kong-server"

openssl x509 -req -in kong-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-server.crt -days 365 -sha256

  3. Generate the Client Certificate: You also need a client certificate to authenticate incoming requests.

openssl genrsa -out kong-client.key 2048

openssl req -new -key kong-client.key -out kong-client.csr -subj "/CN=kong-client"

openssl x509 -req -in kong-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-client.crt -days 365 -sha256
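Before storing anything, it is worth confirming that the generated certificates actually chain back to the CA. The sketch below regenerates a throwaway CA and server certificate in a temporary directory (same commands as above, condensed) and runs the verification step that matters:

```shell
#!/bin/bash
# Sanity-check the signing chain in a scratch directory: the server cert
# must verify against ca.crt before it is stored in the secrets manager.
set -e
dir="$(mktemp -d)"; cd "$dir"

openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt \
  -subj "/CN=Kong-CA"
openssl genrsa -out kong-server.key 2048 2>/dev/null
openssl req -new -key kong-server.key -out kong-server.csr -subj "/CN=kong-server"
openssl x509 -req -in kong-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-server.crt -days 365 -sha256 2>/dev/null

# The actual check: prints "kong-server.crt: OK" when the chain is valid.
openssl verify -CAfile ca.crt kong-server.crt
```

The same `openssl verify -CAfile ca.crt ...` check applies to the client certificate.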

Step 2: Store Certificates Securely in a Secrets Management Infrastructure

Use a secrets management service like HashiCorp Vault, AWS Secrets Manager, or another system to store the mTLS certificates securely.

Example: Store Certificates in HashiCorp Vault

  1. Start by enabling the secrets engine in Vault:

vault secrets enable -path=pki pki

vault secrets tune -max-lease-ttl=87600h pki

  2. Store the CA certificate in Vault:

vault write pki/config/ca pem_bundle=@ca.crt

  3. Store the server certificate and key in Vault:

vault kv put secret/kong/server cert=@kong-server.crt key=@kong-server.key

  4. Store the client certificate and key:

vault kv put secret/kong/client cert=@kong-client.crt key=@kong-client.key

Example: Store Certificates in AWS Secrets Manager

  1. Use the AWS CLI to store the server certificate:

aws secretsmanager create-secret --name kong/server \
  --secret-string file://kong-server.json

The kong-server.json file contains:

{
  "certificate": "-----BEGIN CERTIFICATE----- ...",
  "private_key": "-----BEGIN PRIVATE KEY----- ..."
}

  2. Store the client certificate similarly in AWS Secrets Manager:

aws secretsmanager create-secret --name kong/client \
  --secret-string file://kong-client.json
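The JSON payload has to carry the PEM contents as a single string with escaped newlines. A small helper that assembles kong-server.json from the PEM files generated in Step 1 (file names assumed from the steps above):

```shell
#!/bin/bash
# Build kong-server.json for `aws secretsmanager create-secret` by folding
# each PEM file into one JSON string with literal \n escapes.

pem_to_json_string() {
  # Replace each real newline with the two characters '\' 'n'.
  awk '{printf "%s\\n", $0}' "$1"
}

build_secret_json() {
  # $1 = output file, $2 = certificate PEM, $3 = private key PEM
  printf '{\n  "certificate": "%s",\n  "private_key": "%s"\n}\n' \
    "$(pem_to_json_string "$2")" "$(pem_to_json_string "$3")" > "$1"
}

# Usage (assumes kong-server.crt / kong-server.key exist):
# build_secret_json kong-server.json kong-server.crt kong-server.key
```

Note this simple escaping assumes the PEM files contain no characters that need further JSON escaping, which is true for standard PEM output.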

Step 3: Configure Kong to Retrieve and Use Certificates for mTLS

Once the certificates are securely stored, you need to configure Kong to retrieve and use these certificates from your secrets management infrastructure.

Option 1: Use HashiCorp Vault with Kong

  1. Install the Vault Plugin for Kong (or use an external script to retrieve certificates from Vault).
  2. Write a Lua script or custom plugin to dynamically retrieve the certificates from Vault.
    • Use the vault kv get API to retrieve certificates from Vault.
    • Load the certificates into Kong’s SSL configuration dynamically.
  3. Configure Kong to use the certificates:
    • Add the mTLS plugin to your service or route to enable mutual authentication using the retrieved certificates.

Example to configure the mTLS plugin:

curl -i -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=mtls-auth" \
  --data "config.trusted_ca_ids=ca_id" \
  --data "config.client_verify=true"

Option 2: Use AWS Secrets Manager with Kong

  1. Install the AWS SDK (or a script) on your Kong instances to fetch the certificates from Secrets Manager.
  2. Create a script or custom plugin to:
    • Retrieve the server and client certificates from AWS Secrets Manager using the AWS CLI or SDK:

aws secretsmanager get-secret-value --secret-id kong/server

aws secretsmanager get-secret-value --secret-id kong/client

  3. Dynamically load the certificates into Kong’s SSL configuration using Lua or custom logic.
  4. Configure mTLS in Kong:
    • Set up the mTLS plugin in Kong to verify client certificates:

curl -i -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=mtls-auth" \
  --data "config.trusted_ca_ids=<ca-id>"

Step 4: Automate Certificate Rotation

To ensure secure and automated certificate management, integrate certificate rotation.

  1. Automated Certificate Renewal in Vault:
    • Configure Vault’s PKI secrets engine to automate certificate issuance and rotation.

Example:

vault write pki/roles/kong \
  allowed_domains=example.com \
  allow_subdomains=true \
  max_ttl=72h

Use Vault’s pki/issue endpoint to automatically rotate certificates and replace them in Kong.

  2. Automate AWS Secrets Manager Rotation:
    • Set up AWS Secrets Manager’s built-in rotation feature for SSL certificates.
  3. Trigger Updates in Kong:
    • Use a periodic task (e.g., a cron job or Ansible playbook) to update the certificates in Kong without restarting the gateway:

kong reload

This ensures Kong always uses the latest mTLS certificates from the secrets manager.
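The reload step can be gated so Kong is only reloaded when the fetched certificate actually differs from the one on disk. A minimal sketch; the paths and the fetch command in the comments are illustrative assumptions, not fixed Kong locations:

```shell
#!/bin/bash
# Sketch: reload Kong only when the newly fetched certificate differs
# from the one currently installed.

cert_changed() {
  # True (exit 0) when the installed cert is missing or differs byte-wise.
  [ ! -f "$1" ] || ! cmp -s "$1" "$2"
}

rotate_cert() {
  local installed="$1" fetched="$2"
  if cert_changed "$installed" "$fetched"; then
    cp "$fetched" "$installed"
    kong reload   # picks up the new cert without dropping traffic
  fi
}

# A cron job would first fetch the current cert, e.g.:
#   vault kv get -field=cert secret/kong/server > /tmp/kong-server.crt.new
# and then call:
#   rotate_cert /etc/kong/ssl/kong-server.crt /tmp/kong-server.crt.new
```

Skipping the reload when nothing changed keeps the rotation job safe to run frequently.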

Conclusion

To integrate Kong API Gateway’s mTLS certificates with a secrets management infrastructure, follow these steps:

  1. Generate the mTLS certificates and store them securely in a secrets management tool (e.g., HashiCorp Vault, AWS Secrets Manager).
  2. Configure Kong to retrieve the certificates dynamically from the secret manager.
  3. Implement automation for certificate renewal and rotation to ensure that Kong always uses up-to-date certificates without manual intervention.

This approach enhances security by managing sensitive SSL certificates in a centralized and automated manner.

How to send Kong logs to Splunk

To send Kong API Gateway logs to Splunk, you can leverage several approaches based on the logging mechanism Kong uses, such as:

  1. HTTP Logging Plugin (sending logs via HTTP to Splunk’s HTTP Event Collector)
  2. Syslog Logging Plugin (sending logs to a syslog server integrated with Splunk)
  3. File-based Logging (sending logs using Splunk Universal Forwarder)

Here’s how you can achieve this:


1. Using the HTTP Logging Plugin (Recommended for Splunk HEC)

You can use Kong’s HTTP Log Plugin to send logs directly to Splunk HTTP Event Collector (HEC) over HTTP(S).

Steps:

1.1. Set Up HTTP Event Collector (HEC) in Splunk

  1. Go to your Splunk Web Interface.
  2. Navigate to Settings > Data Inputs > HTTP Event Collector.
  3. Create a new HEC token.
    • Set a source type (e.g., kong_logs).
    • Note down the token and HEC URL (e.g., http://<SPLUNK_URL>:8088/services/collector).
  4. Ensure that HEC is enabled and configured to accept data.

1.2. Install the HTTP Log Plugin in Kong

The HTTP Log Plugin sends Kong logs to a specified HTTP endpoint (in this case, Splunk HEC).

  • You can configure the plugin at the service, route, or global level.

Example Configuration (using curl):

curl -X POST http://<KONG_ADMIN_URL>:8001/services/<service-id>/plugins \
    --data "name=http-log" \
    --data "config.http_endpoint=http://<SPLUNK_HEC_URL>:8088/services/collector" \
    --data "config.method=POST" \
    --data "config.timeout=10000" \
    --data "config.keepalive=10000" \
    --data "config.headers.Authorization=Splunk <SPLUNK_HEC_TOKEN>"

  • Replace <KONG_ADMIN_URL> with your Kong Admin URL.
  • Replace <SPLUNK_HEC_URL> with your Splunk HEC endpoint.
  • Replace <SPLUNK_HEC_TOKEN> with the HEC token from Splunk.

You can customize the headers, format, and log levels as per your needs.

1.3. Log Format Configuration (Optional)

You can customize the log format that Kong sends to Splunk by configuring the log_format property of the HTTP Log Plugin.

--data "config.log_format={'message': 'Kong Log: $request_uri $status', 'client_ip': '$remote_addr'}"

Splunk will now start receiving the logs sent by Kong via the HTTP Event Collector.
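Before (or after) enabling the plugin, the HEC endpoint itself can be smoke-tested by posting a single event by hand. The helper below only composes and prints the curl command (a dry run); the URL and token are placeholders:

```shell
#!/bin/bash
# Sketch: compose the curl command that posts one test event to Splunk HEC.
# SPLUNK_HEC_URL and the token below are placeholders, not real values.

build_hec_curl() {
  local url="$1" token="$2"
  echo "curl -k ${url}/services/collector" \
       "-H \"Authorization: Splunk ${token}\"" \
       "-d '{\"event\": \"kong hec smoke test\", \"sourcetype\": \"kong_logs\"}'"
}

# Dry run: print the command to review, then paste it into a shell to send.
build_hec_curl "https://splunk.example.com:8088" "00000000-0000-0000-0000-000000000000"
```

If the event appears in Splunk under the configured source type, the HEC side is working and any remaining issues are in the Kong plugin configuration.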


2. Using the Syslog Logging Plugin

Kong can send logs to a syslog server that can be monitored by Splunk.

Steps:

2.1. Set Up Syslog Logging Plugin in Kong

  1. Install the Syslog Logging Plugin on your Kong instance:

curl -X POST http://<KONG_ADMIN_URL>:8001/services/<service-id>/plugins \
    --data "name=syslog" \
    --data "config.host=<SYSLOG_SERVER_IP>" \
    --data "config.port=514" \
    --data "config.facility=user" \
    --data "config.log_level=info"

  2. Replace <SYSLOG_SERVER_IP> with your syslog server IP or domain.

2.2. Configure Splunk to Receive Syslog Data

  1. On your Splunk instance, configure a new data input for receiving syslog data:
    • Go to Settings > Data Inputs > UDP.
    • Create a new UDP input on port 514 (or another port if you’re using a different one).
    • Set a source type like syslog or a custom type like kong_logs.
  2. You can also use a dedicated syslog server (like rsyslog or syslog-ng) to forward syslog messages from Kong to Splunk.

3. Using the File Log Plugin and Splunk Universal Forwarder

If you’re using file-based logging in Kong, you can set up the File Log Plugin and use the Splunk Universal Forwarder to monitor and send log files to Splunk.

Steps:

3.1. Set Up the File Log Plugin in Kong

  1. Install the File Log Plugin in Kong and configure it to log to a specific file.

Example configuration:

curl -X POST http://<KONG_ADMIN_URL>:8001/services/<service-id>/plugins \
    --data "name=file-log" \
    --data "config.path=/var/log/kong/kong.log"

  • Replace /var/log/kong/kong.log with the path where you want the log files stored.

3.2. Install and Configure Splunk Universal Forwarder

  1. Install the Splunk Universal Forwarder on the server where Kong logs are stored.
  2. Configure the forwarder to monitor the log file:

In the inputs.conf file, specify the log file you want to forward:

[monitor:///var/log/kong/kong.log]
index = kong
sourcetype = kong:logs

  3. In the outputs.conf file, configure the forwarder to send logs to your main Splunk indexer:

[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = <SPLUNK_INDEXER_IP>:9997

  4. Start the Splunk Universal Forwarder to begin sending logs.

4. Using AWS Lambda (If Kong is in AWS)

If you’re using Kong on AWS (e.g., on EC2), and your logs are stored in AWS CloudWatch, you can set up an AWS Lambda function to forward logs to Splunk HTTP Event Collector (HEC).

Steps:

  1. Set up CloudWatch Logs to capture Kong logs.
  2. Create an AWS Lambda function to forward logs from CloudWatch to Splunk.
  3. Use the Splunk-provided AWS Lambda blueprint to send logs to Splunk HEC.

Summary of Methods:

  • HTTP Log Plugin: Send logs directly to Splunk’s HTTP Event Collector (HEC). This is the easiest and most efficient method.
  • Syslog Logging Plugin: Send logs to a syslog server that can be ingested by Splunk.
  • File Log Plugin + Splunk Universal Forwarder: Write logs to a file and forward them to Splunk using the Splunk Universal Forwarder.
  • AWS Lambda (CloudWatch Logs): Use Lambda to stream logs from CloudWatch to Splunk HEC (for AWS-hosted Kong).

Choose the method based on your infrastructure and logging requirements.

Generate a CA certificate for Kong API Gateway and configure mTLS

To generate a Certificate Authority (CA) certificate for Kong Gateway and configure it for mTLS (Mutual TLS), follow these steps. This process involves creating a root CA, generating client certificates, and setting up Kong to use them for mTLS authentication.

Steps Overview:

  1. Generate your own Certificate Authority (CA).
  2. Use the CA to sign client certificates.
  3. Upload the CA certificate to Kong.
  4. Configure Kong to enforce mTLS using the CA.
  5. Test the mTLS setup.

1. Generate a Certificate Authority (CA)

1.1. Generate the CA’s Private Key

openssl genrsa -out ca.key 2048

This command generates a 2048-bit RSA private key for your CA.

1.2. Create a Self-Signed Certificate for the CA

openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt \
  -subj "/C=US/ST=State/L=City/O=Organization/OU=OrgUnit/CN=Your-CA-Name"

  • This command creates a self-signed certificate valid for 10 years (3650 days).
  • Customize the -subj fields with your information.

You now have two files:

  • ca.key: The CA’s private key (keep this secure).
  • ca.crt: The CA’s self-signed certificate, which you will use to sign client certificates.

2. Generate and Sign Client Certificates

2.1. Generate the Client’s Private Key

openssl genrsa -out client.key 2048

2.2. Create a Certificate Signing Request (CSR) for the Client

openssl req -new -key client.key -out client.csr -subj "/C=US/ST=State/L=City/O=Organization/OU=OrgUnit/CN=Client-Name"

2.3. Sign the Client’s Certificate with the CA

openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365 -sha256

This command signs the client certificate (client.crt) with your CA. The client.crt is valid for 1 year (365 days).

You now have:

  • client.key: The client’s private key.
  • client.crt: The client’s signed certificate.

3. Upload the CA Certificate to Kong

Kong needs the CA certificate to validate the client certificates during mTLS authentication. You can upload the CA certificate to Kong as follows:

curl -i -X POST http://localhost:8001/ca_certificates \
  --data "cert=@/path/to/ca.crt"

This will make Kong aware of the trusted CA certificate, enabling it to validate client certificates that are signed by this CA.


4. Enable the mTLS Plugin in Kong

Now, configure Kong to enforce mTLS for a service or route using the mTLS Authentication plugin. This plugin requires clients to present a certificate signed by the CA.

4.1. Enable mTLS for a Service

To enable mTLS authentication on a specific service:

curl -i -X POST http://localhost:8001/services/<service_id>/plugins \
  --data "name=mtls-auth"

Replace <service_id> with the actual service ID.

4.2. Enable mTLS for a Route

Alternatively, you can enable mTLS for a specific route:

curl -i -X POST http://localhost:8001/routes/<route_id>/plugins \
  --data "name=mtls-auth"

By default, the plugin will validate the client certificate against the CA certificate you uploaded in Step 3.


5. Configure Trusted Certificate IDs (Optional)

If you have multiple CA certificates, you can specify which ones to trust. You can update the mTLS plugin configuration to use the correct CA certificate ID:

curl -i -X PATCH http://localhost:8001/plugins/<plugin_id> \
  --data "config.trusted_certificate_ids=<ca_certificate_id>"


6. Test the mTLS Setup

6.1. Test Using Curl

To test the mTLS setup, make a request to your Kong service or route while providing the client certificate and private key:

curl -v --cert client.crt --key client.key https://<kong-gateway-url>/your-service-or-route

This request should succeed if the client certificate is valid. If the client certificate is invalid or not provided, the request will fail with an error.


Summary

  1. Generate a Certificate Authority (CA): Use OpenSSL to generate a root CA (ca.key and ca.crt).
  2. Create and sign client certificates: Sign client certificates using the CA (client.crt and client.key).
  3. Upload the CA certificate to Kong (ca.crt).
  4. Enable the mTLS Authentication plugin for services or routes in Kong.
  5. Test mTLS by making requests using the client certificates.

By following these steps, Kong Gateway will be configured to enforce mTLS, ensuring that only clients with valid certificates signed by your CA can access your services.

Configure the mTLS plugin for Kong API Gateway

Kong API Gateway offers a few plugins to handle mutual TLS (mTLS) authentication and related features. These plugins ensure that clients are authenticated using certificates, providing an additional layer of security beyond standard TLS encryption.

Key mTLS Plugins for Kong API Gateway:

  1. mtls-auth Plugin (Kong Enterprise)
  2. Mutual TLS Authentication Plugin (Kong Gateway OSS)
  3. Basic Authentication with mTLS (Combined Usage)
  4. Custom mTLS Logic with Lua (Advanced Use Case)

1. mtls-auth Plugin (Kong Enterprise)

  • Description: Available in Kong Enterprise, this plugin is specifically designed for mTLS authentication. It validates the client certificate presented during the TLS handshake against a set of CA certificates stored in Kong.
  • Features:
    • Validates client certificates using specified CA certificates.
    • Supports multiple CA certificates.
    • Can pass the client certificate information to upstream services.
    • Configurable to allow or restrict access based on client certificate IDs.
  • Configuration Options:
    • config.ca_certificates: List of CA certificate IDs used to verify client certificates.
    • config.allowed_client_certificates: List of client certificate IDs allowed to access the service or route.
    • config.pass_client_cert: Boolean that decides whether to pass client certificate info to upstream services.

2. Mutual TLS Authentication Plugin (Kong Gateway OSS)

  • Description: Provides basic mTLS functionality in the open-source version of Kong. It requires client certificates for authentication and validates them against the provided CA certificates.
  • Features:
    • Validates client certificates using CA certificates.
    • Simpler than the mtls-auth plugin and may not support advanced enterprise features.
  • Configuration Options:
    • ca_certificates: Array of CA certificate IDs for validation.
    • allowed_client_certificates: Array of specific client certificate IDs.

3. Basic Authentication with mTLS (Combined Usage)

  • Description: Although not an mTLS plugin by itself, Kong allows combining basic authentication plugins (like basic-auth) with mTLS for a two-layered authentication approach.
  • Usage:
    • Apply both the basic-auth plugin and the mtls-auth plugin to a service or route.
    • Requires both a valid client certificate and a valid basic authentication credential.

4. Custom mTLS Logic with Lua (Advanced Use Case)

  • Description: For advanced use cases that need custom mTLS handling beyond what the plugins provide, you can use Kong’s serverless capabilities to write custom logic in Lua with a plugin like serverless-functions.
  • Use Cases:
    • Custom certificate validation logic.
    • Dynamic CA certificate selection.
    • Additional logging and monitoring for mTLS events.

Choosing the Right Plugin

  • For Enterprise Needs: With a Kong Enterprise license, the mtls-auth plugin is the most feature-rich option, offering advanced mTLS configurations and management capabilities.
  • For Open Source Users: The Mutual TLS Authentication Plugin is available in Kong Gateway OSS but with fewer features; it is suitable for basic mTLS needs.
  • For Custom Logic: If your use case requires custom logic, consider Lua scripting with serverless-functions to implement advanced mTLS workflows.

Conclusion

These plugins allow Kong to enforce mTLS for enhanced security. The choice between them depends on your version of Kong (Enterprise vs. OSS) and your specific security requirements.

Configure the mTLS plugin for Kong API Gateway

To configure mTLS (mutual TLS) in Kong API Gateway, you need to use the mtls-auth plugin, which validates client certificates against a set of trusted Certificate Authorities (CA). This process involves uploading the CA certificate to Kong, enabling the mtls-auth plugin for a service or route, and testing the configuration.
Steps to Configure mTLS in Kong:

  1. Upload the CA certificate to Kong.
  2. Enable the mtls-auth plugin for a service or route.
  3. Test the mTLS configuration.

Step 1: Upload the CA Certificate to Kong

You must upload the CA certificate to Kong so it can validate the client certificates.

  1. Upload the CA certificate using the Kong Admin API:

curl -i -X POST http://<KONG_ADMIN_URL>:8001/ca_certificates \
  --data "cert=@/path/to/ca.crt"

    • Replace <KONG_ADMIN_URL> with your Kong Admin URL.
    • This uploads the CA certificate to Kong, which will then be used to verify client certificates.

  2. Check the uploaded CA certificate: Verify that the CA certificate has been uploaded correctly by listing all CA certificates:

curl -i -X GET http://<KONG_ADMIN_URL>:8001/ca_certificates
Step 2: Enable the mtls-auth Plugin

  1. Enable the plugin on a service: You can apply the mtls-auth plugin to a specific service in Kong.

curl -i -X POST http://<KONG_ADMIN_URL>:8001/services/<service-name>/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<ca_certificate_id>" \
  --data "config.allow_any_client_cert=true"

    • Replace <service-name> with the name or ID of the service you want to protect.
    • Replace <ca_certificate_id> with the ID of the CA certificate you uploaded in Step 1.
    • allow_any_client_cert=true allows any client certificate issued by the uploaded CA to access the service.

  2. Enable the plugin on a route: Alternatively, you can apply the plugin to a specific route.

curl -i -X POST http://<KONG_ADMIN_URL>:8001/routes/<route-id>/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<ca_certificate_id>" \
  --data "config.allow_any_client_cert=true"

    • Replace <route-id> with the ID of the route you want to protect.

  3. Optional configuration options:
    • config.pass_client_cert=false: By default, the plugin does not pass the client certificate to the upstream service. Set this to true if you want to pass it.
    • config.allowed_client_certificates: You can specify individual client certificate IDs if you want to allow only specific certificates.
Step 3: Test the mTLS Configuration

  1. Test with a valid client certificate: Make a request to the service or route using a client certificate signed by the trusted CA.

curl -v https://<KONG_PROXY_HOST>:<KONG_PROXY_PORT>/<your-route> \
  --cert /path/to/client.crt \
  --key /path/to/client.key

    • If everything is configured correctly, you should receive a successful response.

  2. Test with an invalid or no client certificate: Try making a request without a client certificate or with an invalid one.

curl -v https://<KONG_PROXY_HOST>:<KONG_PROXY_PORT>/<your-route>

    • You should receive a 401 Unauthorized or 403 Forbidden response, indicating that the client certificate validation failed.

Additional Considerations

  • Certificate Renewal: If you update your CA or client certificates, remember to update them in Kong as well.
  • Multiple CA Certificates: You can upload multiple CA certificates to Kong and specify them in the config.ca_certificates array when configuring the plugin.
  • Error Handling: If you encounter errors, check the Kong logs for detailed messages that can help diagnose issues.

Summary

  1. Upload the CA certificate: Use the Admin API to upload the CA certificate.
  2. Enable the mTLS plugin: Configure the mtls-auth plugin on your desired service or route, specifying the CA certificate.
  3. Test and verify: Ensure that the setup is correct by testing with valid and invalid client certificates.

By following these steps, you can configure mTLS in Kong to secure your API services, ensuring that only clients with trusted certificates can access them.