Hi,

the following/attached patch works around an [obscure] problem when a 2.6 (not sure/caring about 2.4) NFS client accesses an "offline" file on a Sun/Solaris-10 NFS server when the underlying filesystem is of type SAM-FS. It happens with RHEL4/5 and mainline kernels. Frankly, it is not a Linux problem, but the chances of a short-/mid-term solution from Sun are very slim. So, being lazy, I would love to get this patch into Linux. If not, I will just have to maintain it out of tree for eternity.

The problem: SAM-FS is Sun's proprietary HSM filesystem. It stores metadata and a relatively small amount of data "online" on disk and pushes old or infrequently used data to "offline" media, e.g. tape. This is completely transparent to the users. If the data for an "offline" file is needed, the so-called "stager daemon" copies it back from the offline medium. All of this works great most of the time.

Now, if a Linux NFS client tries to read such an offline file, performance drops to "extremely slow". After lengthy investigation of tcpdumps, mount options and procedures involving black cats at midnight, we found out that the readahead behaviour of the Linux NFS client causes the problem. Basically, it seems to issue read requests of up to 15*rsize to the server. In the case of the "offline" files, this behaviour causes heavy competition for the inode lock between the NFSD process and the stager daemon on the Solaris server.

- The real solution: fixing the SAM-FS/NFSD interaction. Sun engineering acks the problem, but a solution will need time. Lots of it.
- The working solution: disable the client-side readahead, or make it tunable. The patch does that by introducing an NFS module parameter "ra_factor", which can take values between 1 and 15 (default 15), and a tunable "/proc/sys/fs/nfs/nfs_ra_factor" with the same range and default.
Signed-off-by: Martin Knoblauch <firstname.lastname@example.org>

diff -urp linux-2.6.27-rc6-git4/fs/nfs/client.c linux-2.6.27-rc6-git4-nfs_ra/fs/nfs/client.c
--- ...
Hi.

I was curious whether a design to limit or eliminate read-ahead activity when the server returns EJUKEBOX was considered? Unless one can know ahead of time that the server and client can get into this situation, how would the tunable be used?

Thanx...

ps
--
I tend to agree. A tunable is probably not a good solution in this case. I would bet that this lock contention issue is a problem in other more common cases, and would merit some careful analysis.

--
Chuck Lever
--
So, you need to:

a) make your stager daemon do IO more sensibly, and

b) apply something like this patch, which adds O_NONBLOCK when knfsd does reads, writes and truncates, and translates -EAGAIN into NFS3ERR_JUKEBOX: http://kerneltrap.org/mailarchive/linux-fsdevel/2006/5/5/312567 and

c) make your filesystem IO interposing layer report -EAGAIN when a process tries to do IO to an offline region in a file and O_NONBLOCK is set.

I think having a tunable for client readahead is an excellent idea, although not to solve your particular problem. The SLES10 kernel has a patch which does precisely that; perhaps Neil could post it.

I don't think there's a lot of point having both a module parameter and a sysctl. A maximum of 15 is unwise. I've found that (at least with the older readahead mechanisms in SLES10) a multiple of 4 is required to preserve rsize-alignment of READ RPCs to the server, which helps a lot with wide RAID backends. So at SGI we tune client readahead to 16.

Your patch seems to have a bunch of other unrelated stuff mixed in.

--
Greg Banks, P.Engineer, SGI Australian Software Group.
Be like the squirrel.
I don't speak for SGI.
--
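The -EAGAIN to NFS3ERR_JUKEBOX translation suggested in (b) amounts to one extra case in the server's errno mapping. A rough user-space sketch follows; nfserrno_sketch() is an invented name, not the actual knfsd function, and the real mapping handles many more error codes. NFS3ERR_JUKEBOX is 10008 per RFC 1813.

```c
#include <errno.h>

/* NFSv3 "jukebox" error, RFC 1813: the object is temporarily
 * unavailable (e.g. being staged back from tape); the client
 * should back off and retry rather than hammer the server. */
#define NFS3ERR_JUKEBOX 10008

/* Hypothetical mapping mirroring suggestion (b): knfsd opens files
 * with O_NONBLOCK, the HSM layer returns -EAGAIN for offline data,
 * and the server reports JUKEBOX instead of blocking the nfsd thread. */
int nfserrno_sketch(int err)
{
	switch (err) {
	case EAGAIN:
		return NFS3ERR_JUKEBOX;
	default:
		return err; /* pass other errno values through unchanged */
	}
}
```

With this in place the client's retry logic, rather than a readahead tunable, paces reads of offline files.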
mount -o remount,readahead=42 --