Kong latency 2

Short answer:
In Kong logs, proxy latency is the time spent waiting on your upstream service (the API/backend) — i.e., how long it took the upstream to respond to Kong.

Here’s the breakdown of the three latency fields you’ll see in Kong logs:

  • latencies.proxy: upstream latency (a.k.a. “proxy latency”), the time from when Kong sends the request to the upstream until it starts getting the response.
  • latencies.kong: Kong internal time, i.e. routing + plugin execution + overhead inside Kong.
  • latencies.request: total request time as seen by the client.
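
For concreteness, here's a minimal sketch of pulling those three fields out of a log entry, assuming the JSON shape emitted by Kong's file-log/http-log plugins (the values are made up for illustration):

  import json

  # A log entry shaped like Kong's file-log / http-log output
  # (abridged; the numbers are made up).
  line = '{"latencies": {"proxy": 245, "kong": 7, "request": 260}}'

  lat = json.loads(line)["latencies"]
  print(f"upstream (latencies.proxy)  : {lat['proxy']} ms")
  print(f"kong (latencies.kong)       : {lat['kong']} ms")
  print(f"total (latencies.request)   : {lat['request']} ms")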

Quick mental model:

Client ──> [ Kong (latencies.kong) ] ──> Upstream API (latencies.proxy) ──> [ Kong ] ──> Client
      \___________________________ latencies.request ___________________________/
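
A consequence of this model you can check in your own logs: latencies.request should be at least latencies.kong + latencies.proxy, and the remainder is mostly time spent on the client connection. A quick sketch using the made-up numbers from above:

  # Whatever request time isn't attributed to Kong or the upstream
  # is largely client-side network transfer.
  lat = {"proxy": 245, "kong": 7, "request": 260}  # made-up values
  unattributed = lat["request"] - (lat["kong"] + lat["proxy"])
  print(f"client network (approx.): {unattributed} ms")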

A common point of confusion: in response headers,

  • X-Kong-Upstream-Latency → latencies.proxy (upstream time)
  • X-Kong-Proxy-Latency → latencies.kong (Kong time)
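
You can confirm this mapping against a live gateway by inspecting any proxied response. A sketch (the proxy address and route are placeholders; Kong only injects these headers when the headers setting in kong.conf includes the latency tokens, which it does by default):

  import requests

  # Placeholder address and route: substitute your own Kong proxy.
  resp = requests.get("http://localhost:8000/my-route")

  print("X-Kong-Upstream-Latency:", resp.headers.get("X-Kong-Upstream-Latency"))  # latencies.proxy
  print("X-Kong-Proxy-Latency:", resp.headers.get("X-Kong-Proxy-Latency"))        # latencies.kong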

So, if you see high proxy latency, the slowness is almost always in your backend (or the network to it), not Kong itself. Focus on the upstream’s performance (DB calls, external services), network/DNS, and connection reuse; use Kong’s service-level timeouts (connect_timeout, read_timeout, write_timeout) to guard against outliers.
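
As a sketch of those timeout guardrails, here's one way to tighten them on a single service through Kong's Admin API (the Admin address, service name, and values are placeholders; Kong's default for all three timeouts is 60000 ms):

  import requests

  ADMIN = "http://localhost:8001"  # placeholder Admin API address
  SERVICE = "my-service"           # placeholder service name

  # All three timeouts are in milliseconds (Kong default: 60000).
  resp = requests.patch(
      f"{ADMIN}/services/{SERVICE}",
      json={
          "connect_timeout": 5000,  # establishing the upstream connection
          "write_timeout": 10000,   # between successive writes to the upstream
          "read_timeout": 10000,    # between successive reads from the upstream
      },
  )
  resp.raise_for_status()
  print(resp.json())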
