Kong 401 & 400 errors

If you’re getting 401 and 400 responses intermittently from the upstream service, that strongly suggests an authentication issue, most often tied to token forwarding, token expiration, or a token format mismatch.

🔁 Quick Summary of Key Differences

Status | Meaning      | Common Cause
-------|--------------|----------------------------------------------------------
401    | Unauthorized | Missing, invalid, or expired credentials
400    | Bad Request  | Malformed or incomplete request (e.g. OIDC token request)

🧠 Intermittent 401 + 400: Common Root Causes

🔸 1. Expired or Reused Tokens

  • Kong gets a token once, caches it, and keeps using it—but upstream expects a fresh one.
  • Especially common with client credentials or authorization code flows.

Solution:

  • Set token caching to a short duration, or disable it, in the OIDC plugin config:

    config:
      cache_ttl: 0   # or a very short TTL like 5
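If you suspect Kong is replaying a cached token, decoding the token’s exp claim confirms it quickly. A minimal sketch, assuming a standard JWT; the sample token is built inline and is hypothetical, so substitute the real value from the Authorization header:

```shell
# Decode a JWT's exp claim (no signature verification) to check whether a
# token Kong cached and replayed has already expired.
# The sample token below is hypothetical; in practice set TOKEN to the
# value from the upstream request's Authorization header.
header=$(printf '%s' '{"alg":"none","typ":"JWT"}' | base64 | tr -d '=\n')
claims=$(printf '%s' '{"sub":"demo","exp":1000000000}' | base64 | tr -d '=\n')
TOKEN="${header}.${claims}.signature"

# Extract and re-pad the payload segment (JWTs use unpadded base64url)
payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done

exp=$(printf '%s' "$payload" | base64 -d | grep -o '"exp":[0-9]*' | cut -d: -f2)
if [ "$exp" -lt "$(date +%s)" ]; then
  echo "token expired at epoch $exp"
else
  echo "token valid until epoch $exp"
fi
```

If the exp timestamp is in the past while Kong still forwards the token, the cache TTL is the culprit.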

🔸 2. Multiple Consumers with Invalid Secrets

  • One client (consumer) is configured correctly, others are not.
  • You see 401/400 when the bad client makes a request.

Solution:

  • Enable verbose logging in Kong:

    export KONG_LOG_LEVEL=debug
    kong reload

    Then correlate consumer_id with the error in the logs.
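To see whether failures cluster on one consumer, grep the access log for 4xx statuses and count them per consumer_id. A sketch over a fabricated sample log; the real log path and JSON shape depend on your logging plugin configuration:

```shell
# Count 401/400 responses per consumer from a Kong access log.
# The sample log below is fabricated; point the grep at your real access
# log instead (its JSON shape depends on your logging plugin config).
cat > /tmp/kong-access.sample <<'EOF'
{"status":200,"consumer_id":"good-app"}
{"status":401,"consumer_id":"bad-app"}
{"status":400,"consumer_id":"bad-app"}
EOF

failures=$(grep -E '"status":(400|401)' /tmp/kong-access.sample \
  | grep -o '"consumer_id":"[^"]*"' | sort | uniq -c | sort -rn)
echo "$failures"
```

A single consumer dominating the counts points to that client’s credentials rather than a Kong-wide problem.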

🔸 3. Kong Not Forwarding Tokens Correctly

  • Kong authenticates but doesn’t forward Authorization header to the upstream.
  • Some plugins strip headers by default.

Solution:

  • Add the request-transformer plugin to pass the token:

    curl -X POST http://localhost:8001/services/YOUR_SERVICE/plugins \
      --data "name=request-transformer" \
      --data "config.add.headers=Authorization:Bearer $(jwt_token)"

🔸 4. OIDC Plugin Misconfiguration

If you’re using the OpenID Connect plugin:

  • grant_type, client_id, or redirect_uri may be wrong or missing intermittently.
  • Kong might request a new token but fail to pass a correct one.

Check:

  • Kong OIDC plugin config
  • Errors like:

    error=invalid_request
    error_description="Unsupported client authentication method"

🧪 How to Debug Effectively

  1. Set KONG_LOG_LEVEL=debug, reload Kong, and tail the logs:

     tail -f /usr/local/kong/logs/error.log
  2. Inspect Upstream Request:
    • Look for what headers/body Kong is sending.
    • Especially Authorization, Content-Type, and request body if OIDC is involved.
  3. Track Errors to a Specific Consumer:
    • Use consumer_id in the access log to trace.
    • Maybe only some consumers are misconfigured.
  4. Try curling the upstream directly with the exact payload Kong sends (use Postman or curl):

     curl -X POST https://upstream/token \
       -H "Authorization: Bearer <your_token>" \
       -H "Content-Type: application/x-www-form-urlencoded" \
       -d "grant_type=client_credentials&client_id=...&client_secret=..."
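Because the failures are intermittent, a quick loop that tallies status codes over many requests shows how often they actually occur. In this sketch the curl line is commented out and replaced with sample codes so the tally logic runs standalone; the URL is a placeholder for your Kong route:

```shell
# Tally HTTP status codes from repeated requests through Kong to measure
# how often the intermittent 401/400 responses occur.
tally() { sort | uniq -c | sort -rn; }

# Real usage (URL is a placeholder for your route through Kong):
#   for i in $(seq 1 50); do
#     curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/your-route
#   done | tally

# Sample codes standing in for live responses so the logic runs anywhere:
counts=$(printf '200\n401\n200\n400\n401\n' | tally)
echo "$counts"
```

A failure rate that tracks a token lifetime (e.g. errors spike every hour) is strong evidence for the cached/expired-token cause above.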

Things to check (or gather when escalating):

  • Kong plugin configs (especially OIDC/JWT)
  • A few lines from Kong’s debug logs showing the upstream request/response
  • Whether you’re using Ping Identity or a custom upstream

Upgrade RHEL 7 to RHEL 8 – Ansible playbook

---
- name: Upgrade RHEL 7 to RHEL 8
  hosts: all
  become: true
  vars:
    leapp_repo: "rhel-7-server-extras-rpms"

  tasks:
    - name: Ensure system is up to date
      yum:
        name: '*'
        state: latest

    - name: Install the Leapp utility
      yum:
        name: leapp
        state: present
        enablerepo: "{{ leapp_repo }}"

    - name: Run the pre-upgrade check with Leapp
      shell: leapp preupgrade
      register: leapp_preupgrade
      ignore_errors: true

    - name: Check for pre-upgrade issues
      debug:
        msg: "Leapp pre-upgrade check output: {{ leapp_preupgrade.stdout }}"

    - name: Fix any issues identified by Leapp (manual step)
      pause:
        prompt: "Please review the pre-upgrade report at /var/log/leapp/leapp-report.txt and fix any blocking issues. Press Enter to continue."

    # Do not ignore errors here: rebooting after a failed upgrade can leave
    # the host in a broken state
    - name: Proceed with the upgrade
      shell: leapp upgrade

    - name: Reboot the server to complete the upgrade
      reboot:
        reboot_timeout: 1200

    - name: Verify the OS version after reboot
      shell: cat /etc/redhat-release
      register: os_version

    - name: Display the new OS version
      debug:
        msg: "The server has been successfully upgraded to: {{ os_version.stdout }}"
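The final verification task can also be run by hand; the check reduces to matching the release string. A sketch of that predicate, where the hard-coded string stands in for `cat /etc/redhat-release` so the logic runs anywhere:

```shell
# The post-upgrade check from the playbook's last tasks, as plain shell.
# The hard-coded release string is a stand-in; on a real host use:
#   release=$(cat /etc/redhat-release)
release="Red Hat Enterprise Linux release 8.9 (Ootpa)"

case "$release" in
  *"release 8"*) result="upgrade complete: $release" ;;
  *"release 7"*) result="still on RHEL 7, review /var/log/leapp/leapp-report.txt" ;;
  *)             result="unexpected release string: $release" ;;
esac
echo "$result"
```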

Hive and HiveServer2

Hive and HiveServer2 are closely related but serve different purposes within the Apache Hive ecosystem:

Hive

  • Definition: Hive is a data warehouse infrastructure built on top of Hadoop for querying and managing large datasets using a SQL-like language called HiveQL.
  • Function: It provides a mechanism to project structure onto the data in Hadoop and to query that data using a SQL-like language.
  • Use Case: Hive is used to create, read, update, and delete data stored in HDFS (Hadoop Distributed File System).

HiveServer2

  • Definition: HiveServer2 is a service that enables clients to execute queries against Hive.
  • Function: It acts as a server that processes HiveQL queries and returns results to clients. It supports multi-client concurrency and authentication, making it suitable for handling multiple simultaneous connections.
  • Use Case: HiveServer2 is used to provide a more robust and scalable interface for executing Hive queries, supporting JDBC and ODBC clients.
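For JDBC clients, the usual way to reach HiveServer2 is beeline. A sketch, where the host, database, and credentials are placeholders and 10000 is HiveServer2's default Thrift binary port:

```shell
# Connect to HiveServer2 over JDBC with beeline and run a query.
# hs2-host, user, and pass are placeholders; adjust to your cluster.
beeline -u "jdbc:hive2://hs2-host:10000/default" -n user -p pass \
  -e "SHOW TABLES;"
```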

Key Differences

  • Concurrency: HiveServer2 supports multi-client concurrency, whereas the older HiveServer1 does not.
  • Authentication: HiveServer2 provides better support for authentication mechanisms like Kerberos, LDAP, and other pluggable implementations.
  • API Support: HiveServer2 supports common ODBC and JDBC drivers, making it easier to integrate with various applications.
  • Deprecation: HiveServer1 has been deprecated and replaced by HiveServer2.

In summary, Hive is the data warehouse and query language, while HiveServer2 is the server that allows clients to interact with Hive.