Disabling the swapfile is generally a bad idea
I dunno why you think you're agreeing, gigabite. If your working set is too big for ram then it's too big: disabling the swapfile isn't going to fix it. If it's not, then you are making system performance worse by forcing pages that aren't in the working set to stay in ram.
By not having a swapfile, you are forcing all anonymous writable pages (programs' normal working data: their heaps, stacks and so on) to be kept in physical ram at all times, even if they have not been touched for ages and will perhaps never be touched again. This reduces the amount of physical ram available to store file-backed read-only pages (programs' actual code), file-backed writable pages (mmap'ed stuff) and the disk cache. Having a smaller disk cache has an obvious effect: you may have to access the disk more often, which is glacial by your CPU's standards. The effect of evicting file-backed pages is less obvious, but it amounts to the same thing: you are causing more swapping to take place.
Not having a swapfile doesn't prevent the OS from swapping, it just means the OS has nowhere to put anonymous writable pages (since they don't belong to a file). Data that belongs to an actual file on disk (the aforementioned program code, and mmap'ed data files) is still only brought into physical ram when it's used, and is evicted when something else needs the space.
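To make the anonymous-versus-file-backed distinction concrete, here's a little Python sketch using the mmap module (the -1 fileno is the standard convention for "no backing file"; the page contents here are just made-up examples):

```python
import mmap
import tempfile

# Anonymous writable mapping: no backing file, so without a swapfile
# these pages are pinned in physical ram forever.
anon = mmap.mmap(-1, 4096)          # -1 means "not backed by any file"
anon[:5] = b"heap!"
anon_contents = bytes(anon[:5])
anon.close()

# File-backed mapping: the OS can evict these pages whenever it likes
# and re-read them from (or write them back to) the file, swapfile or not.
with tempfile.TemporaryFile() as f:
    f.write(b"\0" * 4096)
    f.flush()
    backed = mmap.mmap(f.fileno(), 4096)
    backed[:5] = b"data!"
    backed_contents = bytes(backed[:5])
    backed.close()

print(anon_contents, backed_contents)  # b'heap!' b'data!'
```

Both mappings look identical to the program; the difference is purely where the kernel can stash the pages when it needs the ram back.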
Old versions of Windows had terrible virtual memory managers, and made extremely poor decisions about what pages to evict at any given time: back then, disabling the swapfile tended to have the effect that the disk cache was reduced instead of pages being evicted, which for some tasks made performance better. The modern Windows VMM is much, much better; in general you are hurting performance by not allowing it to evict anonymous pages, sometimes by a great deal. Its heuristics for trading off keeping anonymous pages against keeping other kinds of pages or expanding the disk cache are much better. There are still workloads for which it's better not to allow it to use a swapfile, but general computer usage is not one of them. You may well find that disabling the swapfile not only fails to make things better, but substantially increases paging activity and makes performance much worse.
So yes, there is always a good reason to page things: the amount of data your computer could potentially be working with is always going to be bigger than your physical ram. You have to count all the data on your disk that you are using (i.e. things that could potentially be in the disk cache) as well as the actual memory being used by programs, as well as the programs' actual code itself.
As for the other bits...
If you have a dual channel memory controller then you will get better memory performance if you install your memory in matched pairs in the right sockets. However, the benefit is not huge: the circumstances in which anything will actually be twice as fast are so rare you will never encounter them. If it's significantly less effort/cost to have an uneven memory configuration which prevents dual channel operation then it's not the end of the world.
The problem with working out how much memory you need is that it's hugely difficult to measure in the first place. The amount of memory that a program is using depends on whether you count all the pages it has mapped or only the ones that are paged in; whether you count any pages of memory-mapped files; whether you count the pages in the disk cache which it is responsible for; how you count pages which are being shared with other programs; and loads of other factors. Windows doesn't display it very well because the true answer is a huge complicated list of numbers which only mean anything for the exact state your computer is in right now; generalising is not always easy.
All versions of Windows use practically all of physical memory all the time; in fact, all operating systems do. Physical memory is so much faster than your disk that it would be stupid not to cache as much as possible: free memory is wasted memory! Vista displays many of these memory statistics differently to XP, though, which has confused a lot of people, and its virtual memory manager is somewhat changed in any case. I haven't personally poked around on a Vista machine for long enough to comment authoritatively on the differences.
If you're interested, here's a rough description of what some of the memory statistics in Windows mean. All labels and interpretations are from Windows XP's task manager as it's the only system I have to hand: other versions might display this stuff differently, so take it with a grain of salt.
Memory Usage on the Process tab refers to how many of the pages that the process owns are currently in physical ram. The process might actually own a lot more pages than that, but they are either in the swapfile or in a regular file. Now, here's where the first weirdness comes in: the memory usages of the various processes will probably add up to considerably more than the amount of ram you have. This is because any page that is mapped into more than one process is counted for each of them.
The most common reason for this is that the pages are part of the actual code for the process: the exe/dlls. Any dll that's loaded by more than one process will have its pages mapped into all of them, but it only actually takes up one copy's worth of space in memory. Of course, which parts of the exe/dll are actually *in* memory at any moment changes constantly as they get paged in and out based on usage. So, what does this number really mean? Very little. Hooray! Larger numbers generally mean the process is responsible for more of the system's memory usage, but even that isn't guaranteed. The other memory-related stats on the Process tab don't mean very much either: they are generally counted using extremely vague measures and include/exclude various kinds of pages without much explanation.
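Here's a toy illustration of that double-counting, with made-up page numbers (real page accounting is far messier, but the arithmetic is the same):

```python
# Each set holds the (hypothetical) physical page numbers currently
# mapped into that process; pages 10-11 play the role of a shared dll.
procs = {
    "notepad.exe":  {1, 2, 3, 10, 11},
    "calc.exe":     {4, 5, 10, 11},
    "explorer.exe": {6, 7, 8, 9, 10, 11},
}

# What adding up the per-process "Memory Usage" column effectively does:
# every process's resident pages, shared ones counted once per process.
naive_total = sum(len(pages) for pages in procs.values())

# What physical ram actually holds: each distinct page counted once.
real_total = len(set().union(*procs.values()))

print(naive_total, real_total)  # 15 vs 11: the naive sum overcounts
```

The gap between the two numbers is exactly the shared pages multiplied by how many extra processes map them, which is why the column can sum to more ram than the machine has.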
The Performance tab has a bunch more numbers referring to the system as a whole. PF Usage, the graph in the middle, displays the same number as Commit Charge in the status bar at the bottom and under the Commit Charge section. What this refers to is how many pages of writable memory have been committed, i.e. promised to various processes. This doesn't mean that much memory is actually in use: pages are not generally allocated at the time they are committed. What it basically means is that space in the swapfile has been reserved to accommodate them; the first time a page is actually touched, it will be allocated. If you write a program that allocates, say, 100MB of ram and then never uses it, the commit charge will increase by 100MB but the memory usage of the program will not, and the amount of free memory on the system will not go down.

The commit charge is limited by the available physical ram and swapfile space, because Windows avoids overcommitting (promising more pages than actually exist) - other OSes sometimes allow overcommitting by some factor to allow for pages which will never be used (or, under Linux's default heuristic, by quite a lot *grin*). So... the commit charge is actually kinda meaningful: it's the closest thing to "total memory needed" that is measured. What it doesn't count, though, is read-only pages like code: if you have a commit charge of 512MB and you have 512MB of ram then you still need to push some of it out into the swapfile, to make room for some actual code in order to do anything *with* all that data. Peak commit charge is just the highest it's been since you last rebooted.
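The commit-versus-allocate distinction is easy to model. This is a toy bookkeeping sketch, not real Windows internals: the class name, page counts and limit rule are all invented for illustration, but they follow the no-overcommit behaviour described above.

```python
# Toy model: committing a page is a promise backed by ram + swapfile
# space; the page only becomes resident when it is first touched.
class ToyVMM:
    def __init__(self, ram_pages, swap_pages):
        self.commit_limit = ram_pages + swap_pages  # no overcommitting
        self.commit_charge = 0   # pages promised to processes
        self.resident = 0        # pages actually allocated in ram

    def commit(self, pages):
        if self.commit_charge + pages > self.commit_limit:
            raise MemoryError("commit would exceed ram + swapfile")
        self.commit_charge += pages   # a promise, not an allocation

    def touch(self, pages):
        self.resident += pages        # first touch actually allocates

vmm = ToyVMM(ram_pages=128, swap_pages=128)
vmm.commit(100)   # "allocate 100 pages and never use them"
print(vmm.commit_charge, vmm.resident)  # 100 committed, 0 resident
vmm.touch(10)     # now actually touch 10 of them
print(vmm.commit_charge, vmm.resident)  # 100 committed, 10 resident
```

Note that committing the 100 pages moved the commit charge immediately, while "free memory" (resident pages) only changed on first touch, which is exactly the 100MB-allocated-but-unused behaviour described above.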
Kernel Memory is just how much memory belongs to the kernel itself, rather than to any userspace application: Paged is how much of it could be paged out (either by simply discarding it, if it's a read-only mapping, or by swapping it out if not), and Nonpaged is how much cannot be paged out under any circumstances, because it may be used by parts of the kernel which cannot tolerate taking a page fault (this is where the PAGE_FAULT_IN_NONPAGED_AREA bluescreen comes from).
Last, Physical Memory. Total is pretty self-explanatory, I hope. System Cache is how large the disk cache currently is. Available is... an interesting claim. It includes all the pages of the disk cache, on the assumption that they can be discarded if more memory is needed; sadly this isn't actually true, because pages which have been modified (dirty pages) can't simply be discarded, they have to be written back to disk first, which takes time. It also includes memory which is not being used for anything at all: that amount is the difference between Available and System Cache. There are two reasons memory might not be in use: one is that some memory is always held back from the cache to ensure that important allocations (such as the kernel wanting to bring in a page for its own use) will succeed quickly, and the other is that the system cache simply hasn't needed to grow since the page was freed up.
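With some made-up page counts, here's the arithmetic behind why Available is optimistic (the split into clean and dirty cache pages is the bit task manager doesn't show you):

```python
# Invented numbers, in pages, purely for illustration.
free_pages        = 50    # memory not in use for anything at all
clean_cache_pages = 300   # cache pages identical to what's on disk
dirty_cache_pages = 80    # modified cache pages, not yet written back

system_cache = clean_cache_pages + dirty_cache_pages
available    = free_pages + system_cache   # what task manager reports

# What could really be handed out without waiting on disk writes:
# dirty pages must be written back before their ram can be reused.
instantly_reclaimable = free_pages + clean_cache_pages

print(available, instantly_reclaimable)  # 430 vs 350
```

The 80-page gap is exactly the dirty cache: reclaimable eventually, but only after the disk has been paid its toll.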
What it *doesn't* include is all the memory that could be freed up just as easily as the disk cache: paged-in read-only mappings. These can also just be discarded, but they Just Don't Count for this purpose.
So, the actual Physical Memory Available number means very little: it is neither the minimum nor the maximum amount of memory that could be freed up for use.
Actually displaying this any better is, well, difficult. What are you going to display?
Linux has similar issues: see the definitions of virtual size, resident set size and shared size as displayed by ps/top (none of which is any more informative than the Windows equivalents). There is a very recent patch to the Linux kernel which allows a userspace program to calculate two new values: the proportional set size and the unique set size. These are more useful (though still not perfect): the unique set size is the number of pages which are mapped *only* into this program, and thus roughly how much memory you would expect to free up by exiting it, and the proportional set size is the number of pages mapped into the program, with each page divided by the number of processes that map it. This is about the best guideline you can get for "which program is using all the ram", but it still has issues. These numbers can't be calculated unless the kernel exports very detailed information about which pages are mapped where, though, which Windows certainly does not and even Linux has only started doing very recently.
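The definitions of USS and PSS fall straight out of per-page mapping counts. Here's a sketch with invented data (on a real Linux system the inputs would come from the kernel's per-process page-mapping information, e.g. /proc/&lt;pid&gt;/smaps; the process and page names here are hypothetical):

```python
# mappers[page] = the set of processes that have this page mapped.
mappers = {
    "pageA": {"firefox"},                  # private to firefox
    "pageB": {"firefox"},
    "pageC": {"firefox", "gimp"},          # a shared library page
    "pageD": {"firefox", "gimp", "xterm"},
}

def uss(proc):
    # Unique set size: pages mapped *only* into this process --
    # roughly what you'd expect to free up by exiting it.
    return sum(1 for owners in mappers.values() if owners == {proc})

def pss(proc):
    # Proportional set size: each mapped page divided by the number
    # of processes that share it.
    return sum(1 / len(owners)
               for owners in mappers.values() if proc in owners)

print(uss("firefox"), pss("firefox"))  # 2 and 2 + 1/2 + 1/3
```

Note that PSS values have the nice property that summing them over all processes gives the true number of distinct mapped pages, which is exactly why it's the least-bad answer to "which program is using all the ram".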
I should probably stop now. This may have gone on a bit longer than I intended, but I do love to lecture.
(if you're wondering, yes I do this stuff for a living, I'm a kernel developer *grin*)