Does anyone know offhand the reason why network connections fail
if socket buffers are set above 256k?
# sysctl net.inet.tcp.sendspace=262145
# telnet naiad 80
I was thinking of looking into it, but before going down that rabbit
hole I thought I'd ask in case there's a quick answer that somebody
already knows. (Yes, people do use buffers much bigger than this; I
looked at some of the academic ftp mirror sites, and it looks like
mirrorservice.org will negotiate 3MB buffers, and aarnet 35MB, if you
let them. Presumably they try to avoid buffers being a bottleneck for
clients reaching them over a national network of at least 1Gb/s
end-to-end.)
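For anyone who wants to poke at the same limit from userland rather
than via the sysctl, here is a rough sketch (untested, my own, not
anything from the tree): ask for a send buffer just over 256k with
SO_SNDBUF. On BSD-style stacks the refused reservation typically
surfaces as ENOBUFS from setsockopt(2).

#include <sys/types.h>
#include <sys/socket.h>
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        int s, size = 256 * 1024 + 1;   /* one byte over the apparent limit */

        if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
                err(1, "socket");
        if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &size, sizeof(size)) == -1)
                warn("setsockopt(SO_SNDBUF, %d)", size); /* expect ENOBUFS */
        else
                printf("kernel accepted a %d byte send buffer\n", size);
        close(s);
        return 0;
}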
There is this magical define in uipc_socket2.c called SB_MAX that limits
the socket buffers to 256k; going over that limit makes the buffer
reservation fail, which is why the connections break. As for those
initial buffer sizes, 35MB is insane. Either they have machines with
infinite memory or you can kill the boxes easily.
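Roughly, the check in sbreserve() looks like this (a simplified sketch
from memory, not the verbatim source; the real code also discounts mbuf
bookkeeping overhead from the cap):

#define SB_MAX (256 * 1024)             /* the hard cap in uipc_socket2.c */

struct sockbuf { unsigned long sb_hiwat; };

static unsigned long sb_max = SB_MAX;

int
sbreserve_sketch(struct sockbuf *sb, unsigned long cc)
{
        if (cc == 0 || cc > sb_max)
                return 0;               /* refused; caller reports ENOBUFS */
        sb->sb_hiwat = cc;              /* new high-water mark for the buffer */
        return 1;
}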
You don't need 35MB per client connection if interfaces like sendfile(2)
are used. All the kernel has to guarantee in that case is copy-on-write
for the file content as far as it has been sent already. Media
distribution servers normally don't change files in place, so the only
backpressure this creates is on the VFS cache. Let's assume the server
is busy due to a new OpenBSD/Linux/Firefox/whatever release. A lot of
clients will try to fetch a small number of distinct files. The memory
the kernel has to commit is limited by the size of that active set, not
by the number of clients.
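For illustration, a rough send loop in that style, using the FreeBSD
sendfile(2) signature (other systems spell the call differently, and
serve_file() with its arguments is made up for the example):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <errno.h>
#include <err.h>

/* Push the whole file on fd down the connected socket s.  The pages
 * travel straight from the VFS cache, so no 35MB socket buffer has to
 * be committed per client. */
static void
serve_file(int s, int fd, off_t filesize)
{
        off_t off = 0, sent;

        while (off < filesize) {
                if (sendfile(fd, s, off, filesize - off, NULL, &sent, 0) == -1 &&
                    errno != EAGAIN && errno != EINTR)
                        err(1, "sendfile");
                off += sent;    /* sbytes is updated even on EAGAIN */
        }
}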