A while ago I was troubleshooting an issue with a certain piece of client software (I don’t even know its name) on an internal network that was uploading images via HTTP to a remote service on the internet. The simple HTTP POST upload requests failed with an HTTP/1.0 501 Not Implemented error from our Squid proxy server, prompting me to investigate further.
Fortunately I had a network trace from the local admins, so I could easily reproduce and test the requests with curl from my Linux client.
The issue turned out to be caused by the chunked Transfer-Encoding header of the POSTs and the fact that our Squid 3.1 proxy server is unable to process requests with this Transfer-Encoding in certain cases.
I’ve seen a few mentions of issues with chunked Transfer-Encoding and Squid on the internets, but many were older, contradictory, or referred to server-side responses rather than client-side requests like in this case.
In a nutshell, Squid 3.1 is just not fully HTTP/1.1 compliant yet and only partially supports features like chunked encoding, as stated on the project’s website:
Squid-3.2 claims HTTP/1.1 support. Squid v3.1 claims HTTP/1.1 support but only in sent requests (from Squid to servers). Earlier Squid versions do not claim HTTP/1.1 support by default because they cannot fully handle Expect:100-continue, 1xx responses, and/or chunked messages.
Both Squid-3 and Squid-2 contain at least response chunked decoding. The chunked encoding portion is available from Squid-3.2 on all traffic except CONNECT requests.
The following HTTP POST request with a chunked encoding header, uploading a dummy file of 65523 bytes, works fine through Squid 3.1:
(Note: I’m just passing the Transfer-Encoding header with curl here. The payload is actually not properly chunk-encoded, as explained below, but that doesn’t matter for the sake of these tests.)
$ head -c65523 /dev/urandom > /tmp/data
$ curl -v 'http://example.com' -H 'Transfer-Encoding: chunked' -x 192.168.1.221:3128 -X POST --data-binary @/tmp/data
* About to connect() to proxy 192.168.1.221 port 3128 (#0)
*   Trying 192.168.1.221...
* connected
* Connected to 192.168.1.221 (192.168.1.221) port 3128 (#0)
> POST http://example.com HTTP/1.1
> User-Agent: curl/7.24.0 (i686-pc-linux-gnu) libcurl/7.24.0 GnuTLS/2.12.18 zlib/1.2.6 libidn/1.19
> Host: example.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
* Done waiting for 100-continue
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
[...]

Squid access.log:
1398345135.604    627 10.1.1.204 TCP_MISS/200 1706 POST http://example.com/ - FIRST_UP_PARENT/192.168.1.10 text/html
All good here.
Now the exact same request, carrying just one additional byte of payload, fails with HTTP/1.0 501 Not Implemented:
$ head -c65524 /dev/urandom > /tmp/data
$ curl -v 'http://example.com' -H 'Transfer-Encoding: chunked' -x 192.168.1.221:3128 -X POST --data-binary @/tmp/data
* About to connect() to proxy 192.168.1.221 port 3128 (#0)
*   Trying 192.168.1.221...
* connected
* Connected to 192.168.1.221 (192.168.1.221) port 3128 (#0)
> POST http://example.com HTTP/1.1
> User-Agent: curl/7.24.0 (i686-pc-linux-gnu) libcurl/7.24.0 GnuTLS/2.12.18 zlib/1.2.6 libidn/1.19
> Host: example.com
> Accept: */*
> Proxy-Connection: Keep-Alive
> Transfer-Encoding: chunked
> Content-Type: application/x-www-form-urlencoded
> Expect: 100-continue
>
* Done waiting for 100-continue
* HTTP 1.0, assume close after body
< HTTP/1.0 501 Not Implemented
[...]
Unsupported Request Method and Protocol
Squid does not support all request methods for all access protocols. For example, you can not POST a Gopher request.
ERR_UNSUP_REQ
[...]

Squid access.log:
1398345142.273      0 10.1.1.204 NONE/501 3713 POST http://example.com/ - NONE/- text/html
This doesn’t work, and I’m clearly not using Gopher here, hah.
Turns out this behavior is controlled by the chunked_request_body_max_size Squid directive, which has a default value of 64KB:
Squid configuration directive chunked_request_body_max_size:
A broken or confused HTTP/1.1 client may send a chunked HTTP request to Squid. Squid does not have full support for that feature yet. To cope with such requests, Squid buffers the entire request and then dechunks request body to create a plain HTTP/1.0 request with a known content length. The plain request is then used by the rest of Squid code as usual.
The option value specifies the maximum size of the buffer used to hold the request before the conversion. If the chunked request size exceeds the specified limit, the conversion fails, and the client receives an “unsupported request” error, as if dechunking was disabled.
Dechunking is enabled by default. To disable conversion of chunked requests, set the maximum to zero.
Request dechunking feature and this option in particular are a temporary hack. When chunking requests and responses are fully supported, there will be no need to buffer a chunked request.
Ok, according to the default value the exact failure boundary should be 65536 bytes instead of 65524, but I’m sure there is some sound explanation for this, so let’s disregard it.
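One plausible explanation for the 13-byte gap, though this is my own guess and not anything stated in the Squid documentation: if the body were sent as a single chunk, the chunked framing itself (the hexadecimal size line, its CRLF, the chunk’s trailing CRLF, and the terminating zero-size chunk) would add exactly 13 bytes to a 65523-byte payload, filling the 64 KB buffer to the last byte. A quick sketch of that arithmetic:

```shell
#!/bin/bash
# Size of an n-byte body after single-chunk chunked encoding:
#   "<size in hex>" CRLF <data> CRLF  followed by the final  "0" CRLF CRLF
chunked_size() {
  local n=$1 hex
  hex=$(printf '%x' "$n")              # chunk size in hexadecimal
  echo $(( ${#hex} + 2 + n + 2 + 5 ))  # size line + CRLF + data + CRLF + "0\r\n\r\n"
}

chunked_size 65523   # -> 65536: exactly the 64 KB default buffer
chunked_size 65524   # -> 65537: one byte over the limit
```

Whether Squid actually accounts for the framing bytes this way is speculation on my part; the numbers merely line up nicely.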
After increasing the default by adding chunked_request_body_max_size 1024 KB to the squid.conf file and reloading Squid, the uploads went through without problems. In our case 1 MB is sufficient, but you may need more depending on the size of your requests.
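For reference, the change boils down to a single line (the config path below is the RHEL/CentOS default; adjust it for your distribution):

```
# /etc/squid/squid.conf
# Raise the request dechunking buffer from the 64 KB default to 1 MB
chunked_request_body_max_size 1024 KB
```

You can check the syntax with squid -k parse and then apply it to the running daemon with squid -k reconfigure, without a full restart.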
The parameter apparently exists in Squid 3.2 and the development releases 3.3/3.4 as well, but I’m not sure whether it’s still relevant there, given the statement quoted above. Unfortunately I don’t have a Squid 3.2+ system to test with. On RHEL/CentOS 6, which our proxy servers are running, Squid is only available in version 3.1.
On the topic of chunked Transfer-Encoding
Chunked Transfer-Encoding means that a single HTTP body (like a file) can be split into multiple parts for transmission. In that case the traditional Content-Length header indicating the size of the entire request or response body is omitted. Instead, multiple chunks of data are transmitted sequentially, each prefixed with its length (in bytes, written in hexadecimal) inside the body, followed by a final zero-size chunk. A chunked request from my case basically looks like this on the wire:
Client request:

POST http://example.com HTTP/1.1
Host: example.com
[...Some other headers...]
Transfer-Encoding: chunked
[CRLF]
8000
[First chunk of the body payload, 32KB]
[CRLF]
8000
[Second chunk of the body payload, 32KB]
[CRLF]
[...]
FF
[Last chunk of body payload, 255 bytes]
[CRLF]
0
[CRLF]
[CRLF]

Server response:

HTTP/1.1 200 OK
[...]
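The framing above can be reproduced with a few lines of shell. This is just an illustrative sketch; the function name, the 32 KB default chunk size and the temporary file are my own choices, not taken from the original requests:

```shell
#!/bin/bash
# Chunk-encode a file as a chunked Transfer-Encoding request body:
# each chunk is "<size in hex>" CRLF <data> CRLF, ended by a zero-size chunk.
chunk_encode() {
  local file=$1 chunk=${2:-32768} len off=0 n
  len=$(wc -c < "$file")
  while [ "$off" -lt "$len" ]; do
    n=$(( len - off < chunk ? len - off : chunk ))
    printf '%x\r\n' "$n"                                   # chunk-size line
    dd if="$file" bs=1 skip="$off" count="$n" 2>/dev/null  # chunk data
    printf '\r\n'
    off=$(( off + n ))
  done
  printf '0\r\n\r\n'                                       # terminating chunk
}

# Example: a 100-byte dummy file becomes 100 payload bytes + 11 framing bytes
yes a | head -c100 > /tmp/chunktest
chunk_encode /tmp/chunktest | wc -c   # -> 111
```

Note that dd with bs=1 is slow for large files; it just keeps the sketch short.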
Why is this useful, you may ask? It wasn’t obvious to me at first, but there is a very practical advantage of chunked transfer encoding in our everyday web browsing (although it isn’t really that widespread):
A web server that has to dynamically generate a web page can start sending HTML code to the browser almost immediately. It does not have to wait until the page is completely generated and it knows the total size to transfer it in one normal big “chunk” to the client. With this the client browser can already display parts of the page, fetch linked resources or execute scripts (most are blocked until the entire page is received though). This can improve the site loading speed for end users considerably, especially if there is a lengthy continuous flow of information like a huge table where data is returned from a slow database query. This technique is also known as streamed HTTP responses.
In my particular problem case of a client software uploading images I don’t really see the point of using chunked encoding though.