On non-server boxes (such as an end user's desktop) where there are no heavily loaded services, is a swap partition really needed if the machine has a lot of RAM (2-4 GB, for example)?
Thanks in advance.
Swap is only needed if the RAM used by your applications and data is greater than the RAM you've got. It can help if your applications are large enough that there's no room left for disk cache, since all applications have code that's run once at startup and then never touched again, or if you run the occasional huge application (e.g. I occasionally play with very large uncompressed images). It's also needed for software suspend to disk.
If you've got enough RAM (I'd suggest at least twice your normal working set), I'd recommend either no swap, or swap plus a kernel with the swap prefetch patch from Con Kolivas. The swap prefetch patch ensures that pages sit in swap but not in RAM only when there's no other choice; in my experience, it eliminates the long recovery latency you can get when you leave a large compile job and OO.o running overnight (OO.o runs slowly afterwards, because it was swapped out to make room for the compile).
The kernel does a good job of working out which pages are not required. Swapping them out to disk leaves more memory available for caching something useful. I do all my compilations in tmpfs, so I have 8GB of swap (4GB was not enough to build a cross-compiler); there's less chance of corrupting a real filesystem if all the activity is elsewhere.

If your machine has plenty of RAM and the CPU is not that busy, you will not notice the absence of a swap partition. In real life, computers spend much of their time with resources to spare, and have bursts of activity. Having some swap around will let your machine slow down rather than stop when it gets a large burst of activity.
In your case, if you had 32GB of RAM, swap just wouldn't help much; everything you used would fit in RAM, and your machine wouldn't need to swap.
In the desktop case, everything you need typically fits in 128-256MB RAM, so 2GB RAM is such overkill that you don't need swap. It can still help if you occasionally go massively over your normal memory usage, but it's certainly not needed.
This whole argument about whether swap is needed in systems with lots of ram has been going around for years.
Swap is still useful in the vast majority of systems, even those with large amounts of physical memory.
Long lived applications can be swapped to disk freeing memory to provide additional disk cache.
Swapping an application back in slows a system for a short time. Increasing cache improves performance for the entire life of the system. Overall allowing for increased disk cache is a win for most systems most of the time.
In addition, providing swap gives you some headroom for larger-than-expected mallocs.
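The trade-off described above (evicting long-lived applications to grow the disk cache) is tunable on Linux via the vm.swappiness sysctl; a minimal sketch of inspecting it:

```shell
# Current swappiness (0-100 on older kernels); higher values make the
# kernel more willing to swap out idle application pages in favour of
# disk cache, lower values keep applications resident.
cat /proc/sys/vm/swappiness

# To favour cache more aggressively (needs root, lasts until reboot):
# echo 80 > /proc/sys/vm/swappiness
```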
You're thinking in big system terms still; on my desktop, if I start OO.o (a big application), spend 5 minutes working in it, then come back to OO.o a week later, any swap-in latency is unacceptable. The gains from swapping OO.o out are always throughput gains; it's a rare day when I reach 400MB in use, of 1GB RAM total.
Because I (as a desktop user) don't care about throughput, but I do care about latency, I do not ever want an application swapped out and left swapped out. Disabling swap is one way to guarantee that this happens; my preferred option (as I've mentioned before) is Con Kolivas's swap prefetch patch, which ensures that any application that gets swapped out when I'm under memory pressure gets swapped back in when I'm out of trouble.
Put bluntly; when I do something like compile a kernel, I don't care about the difference between a 5 minute compile and a ten minute compile. Either they're background tasks, and I'm doing something else, or I've got to go for a coffee anyway. I do care if a running copy of OpenOffice.org thrashes the disk for 5-10 seconds swapping itself back in, just because I left OO.o alone to do some image manipulation, then wandered off for the weekend.
Finally, in a desktop system, you normally run overcommitted anyway. If an application genuinely mallocs and uses more RAM than you've got, you're probably screwed anyway. If it doesn't use it, overcommit never allocates physical pages. FWIW, total allocated memory on my system is currently at around the 2GB mark; total used (swap+RAM) is at around the 300MB mark. Overcommit would cover the 1.7GB gap, even if I didn't have enough swap to back it up.
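That gap is easy to measure on any Linux box: /proc/meminfo reports the total committed address space next to what is actually backed by RAM and swap.

```shell
# Committed_AS counts every allocation the kernel has promised; on a
# typical desktop it dwarfs the memory actually in use -- that
# difference is the overcommit gap described above.
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree|Committed_AS)' /proc/meminfo
```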
Why don't you just decide to drink your coffee when you return to your desk on Monday? :)
As everyone knows, "swapping" only covers the data of an application; the code pages are simply dropped under memory pressure and reloaded from backing store (the original executable file) when needed again. This is -- apart from the name -- equivalent to swapping, and if the system can't swap, it just does that. And you can't stop the system from doing it unless you copy your executables into a RAM disk, but then 2GB sounds too small.
Only clean executable pages can be dropped and re-read like that. There's a fair number of "dirty" executable pages (due to run-time link relocation fixups) that are neither shared nor droppable -- they could only go to real swap.
Granted, the prelink work that they've done the past couple years helps the latter problem quite a bit.
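The clean/dirty split is visible in /proc: each shared library is mapped read-execute (clean, re-readable from the file on disk) and also writable-private (dirtied by relocations). For example, looking at the mappings of the grep process itself:

```shell
# r-xp segments are clean file-backed code the kernel can simply drop;
# rw-p segments are private data (GOT, relocated pointers) that could
# only be evicted to real swap.
grep -E 'r-xp|rw-p' /proc/self/maps | head -n 8
```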
The "do I need swap" debate has come up again and again. And then there's wacky people who put swap on a compressed RAM-disk:   I wonder how the compressed caching numbers might look for a dual-core machine, since one could be compressing while the other is running application code. Hmmmm.... :-) (Unfortunately, the specific version linked to above doesn't support SMP.)
Last time I read about it, the ELF way of relocation was described like this: if the references internal to the libraries are resolved in a position-independent way (-fPIC), you only need the GOT and the PLT to resolve the inter-library references -- data addresses and (lazily resolved) procedure addresses, respectively -- as per-process writable but compact objects. And of course the modifiable data segments of the executable can't simply be re-read either. But I won't fight over it.
I guess I need to upgrade my thinking of how stuff works.
Right now, I'm reading this paper, dated 2001. It seems you're right. The function pointer fix-ups take place only in the PLT, so every call into a library ends up being a branch-to-a-branch.
What's interesting is that back when I hacked around a bit in Solaris, it appeared (and maybe I am misremembering a decade later) that calling a function replaced all of the call sites with a direct call to the library, not just the stub. But, then, I would not be surprised in the least if I'm dead wrong here.
Sure enough, I popped over to one of our current Solaris machines and when I call "printf" for instance, it just updates the stub. I don't think we have any Solaris 5.5.1 machines I can check on anymore. ;-)
Learn something new each day. :-)
You don't *need* swap, but hard disk space is cheap and there might be cases where you'd want to have some. Note that you really don't need 4-8GB of swap in a machine with 2-4GB RAM if you expect the RAM to be sufficient. I only have 512MB of swap alongside the same amount of RAM, and it's completely unused most of the time.
If your system won't need to use swap, it will simply not use it (or at least not to a noticeable degree).
You can add swap later -- if you have a partition to spare. And, IIRC, there might be a way to use a regular file for swap in Linux, e.g. for emergencies, but my recollection is pretty vague on this...
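There is -- a regular file has worked as swap on Linux for a long time, and it's handy for exactly the emergency case. A sketch, assuming root access and 1GB of disk to spare (the path and size are just examples):

```shell
# Create a 1GB file without holes (swap cannot use sparse files),
# lock down its permissions, format it as swap, and enable it.
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Verify it is live; /proc/swaps lists every active swap area.
cat /proc/swaps

# To make it permanent, add a line like this to /etc/fstab:
#   /swapfile  none  swap  sw  0  0
```

To back it out again, `swapoff /swapfile` and delete the file.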
For example, when a few developers share the same machine, it sometimes happens that a process has a memory leak and grows and grows and grows. Over time it forces the kernel into heavy swapping, and then your system is basically knocked out and must be hard-reset.
The Linux kernel definitely needs a means to avoid such a situation. Does kernel preemption help here?
Memory is a shared resource: if one process has no memory left, the others don't either. There are some heuristics, like delaying fast-growing processes until their own pages are swapped out, to let the others make some progress (by the way, this has nothing to do with kernel preemption). But if you are developers and know how your process should behave, you can just set rlimits for your test runs; the values don't have to be overly strict, you just want to keep the processes from killing your machine. Use the shell command "ulimit -v" (in bash) or "limit vm" (in tcsh), or the function setrlimit(), to set the address-space size, and read the corresponding man pages. The limit just does to the limited process what 'no swap' does as well, e.g. makes malloc() return NULL, but without the risk of letting innocent other processes die.
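A concrete version of that suggestion (python3 is used here purely as a convenient memory hog; the limit applies to whatever you start from the shell):

```shell
# Cap this subshell's address space at 512MB, then try to allocate
# 1GB; the allocation fails inside the process instead of dragging
# the whole machine into a swap storm.
( ulimit -v 524288
  python3 -c 'x = bytearray(1024 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation failed" )
```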
I did not use swap on my old box; however, I found out why it hung sometimes: I had 512MB RAM, and all my music ate it all up when I started JuK, and ONLY then.
Now I'm on a PowerBook with 1.5GB RAM and no swap... You do FINE without swap, unless you are working on some movie project, that is.
Watching a movie, 15-20 open tabs in FF, an OO.org document, Quanta with say 10 tabs, and JuK idling (because of the movie) with a few hundred GB of audio; the rest, I reckon, is just the usual background tasks.
Well, I'd recommend a swap file rather than a partition if you need swap: it is easier to configure and to resize when needed, and if you are swapping, your PC is so slow anyway that you will not notice the speed difference between a swap file and a swap partition. That is, unless you are (or will be) using the swap partition to suspend your PC to disk instead of powering it down...