1) I've noticed something related to his comment about programs trying to be more clever than the kernel in other situations, too. Java, glib, and other libraries/languages wrap malloc() with their own memory allocator, which caches memory, so freeing objects doesn't actually give that memory back to the OS for other processes to use. Is this necessarily intelligent design?

2) How do fancier languages than C handle memory allocation with respect to multiple processors? Do they just malloc and take the performance hit, or are they more clever? Is the design pattern mentioned in the post generalizable to most cases?
(Salvatore Sanfilippo is one of the top C hackers and system designers in the world right now, imo. Period.)

1) Everyone and their mommy is writing a memory manager -- it's the thing to do ;-) That said, the JVM is not "wrapping" malloc. The memory manager is tasked with garbage collection; malloc() is just one part of the equation. (Also see jemalloc.) "Intelligent design"? You'll find this of interest: http://marakana.com/s/tuning_jvm_for_a_vm_lessons_learned_di...

2) The JVM does an excellent job, but once past 4 GiB heaps, the GC's periodic runs can get unacceptably long. Thus BigMemory, Oracle Coherence, etc. Azul Systems, however, has a kick-ass JVM with a GC written by the master, Cliff Click, that can scale to insanely huge heaps.

The issue with malloc, imho, is that the kernel is entirely clueless about the semantics of the data. Pointer chasing and temporal locality only get you so far. Servers these days generate an incredible amount of garbage; cache coherence on multi-core is expensive, and there is very little communication between the kernel and the user-land process about the semantics of the memory objects. Thus, most just mmap a chunk of memory and manage it themselves. (cf. Antirez above.)