Maximum on HTTP header values?

Is there a maximum allowed size for HTTP headers? If so, what is it? If not, is this something that is server-specific, or is there an accepted standard that allows headers of any size?


No, HTTP does not define any limit. However, most web servers do limit the size of headers they accept. For example, the default limit in Apache is 8 KB, and in IIS it is 16 KB. The server will return a 413 Entity Too Large error if the header size exceeds that limit.

Related question: How big can a user agent string get?


As vartec says above, the HTTP spec does not define a limit, but many servers do by default. In practice, that means the lower limit is 8K. For most servers, this limit applies to the sum of the request line and ALL header fields (so keep your cookies short).

  • Apache 2.0, 2.2: 8K
  • nginx: 4K - 8K
  • IIS: varies by version, 8K - 16K
  • Tomcat: varies by version, 8K - 48K (?!)

It's worth noting that nginx defaults to the system page size, which is 4K on most systems. You can check with this tiny program:

pagesize.c:

#include <unistd.h>
#include <stdio.h>

int main() {
    int pageSize = getpagesize();
    printf("Page size on your system = %i bytes\n", pageSize);
    return 0;
}

Compile with gcc -o pagesize pagesize.c, then run ./pagesize. My Ubuntu server from Linode dutifully reports that the answer is 4K.


HTTP does not place a predefined limit on the length of each header
field or on the length of the header section as a whole, as described
in Section 2.5. Various ad hoc limitations on individual header
field length are found in practice, often depending on the specific
field semantics.

HTTP header values are restricted by server implementations; the HTTP spec itself does not limit header size.

A server that receives a request header field, or set of fields,
larger than it wishes to process MUST respond with an appropriate 4xx
(Client Error) status code. Ignoring such header fields would
increase the server's vulnerability to request smuggling attacks
(Section 9.5).

When this happens, most servers will return a 413 Entity Too Large or an appropriate 4xx error.

A client MAY discard or truncate received header fields that are
larger than the client wishes to process if the field semantics are
such that the dropped value(s) can be safely ignored without changing
the message framing or response semantics.

Unbounded HTTP header sizes leave the server exposed to attacks and can reduce its capacity to serve organic traffic.

Source


I also found that in some cases, the reason for 502/400 errors when many headers are present can be the sheer number of headers, regardless of their size.
From the docs:

tune.http.maxhdr
Sets the maximum number of headers in a request. When a request comes with a
number of headers greater than this value (including the first line), it is
rejected with a "400 Bad Request" status code. Similarly, too large responses
are blocked with "502 Bad Gateway". The default value is 101, which is enough
for all usages, considering that the widely deployed Apache server uses the
same limit. It can be useful to push this limit further to temporarily allow
a buggy application to work by the time it gets fixed. Keep in mind that each
new header consumes 32bits of memory for each session, so don't push this
limit too high.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#3.2-tune.http.maxhdr
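If you hit this limit, the setting above can be raised in HAProxy's global section; a sketch (the value 200 is an arbitrary example, not a recommendation):

```haproxy
global
    # Allow up to 200 header lines per request (default is 101).
    # Each extra header costs 32 bits of memory per session.
    tune.http.maxhdr 200
```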