XboxHacker BBS

Author Topic: 64-bit Windows  (Read 3124 times)
neonpolaris (Xbox Hacker, Posts: 1051)
« on: February 12, 2009, 01:29:23 PM »

I've been shying away from 64-bit Windows since the first time I saw 64-bit XP. I've only heard nightmare stories: poor driver support, programs not working, certain features of programs not working, and so on, especially with underground-type software. I've looked at 64-bit Vista, and now the 64-bit Windows 7 beta.

For those of you who have used them: what do you think? What kinds of problems have you had compared to a 32-bit OS? Support should be pretty solid by now, and I'll probably give it a try on my laptop sometime this week (it runs 32-bit Vista and Windows 7 fine; I'll try 64-bit 7).

This seems like as good a place as any to ask, since it's more likely to have a base of people with some actual technical ability who can overcome simple problems. Let me know what you think. I feel as though I'm wasting the RAM in this machine otherwise.
k0mpresd (Xbox Hacker, Posts: 608)
« Reply #1 on: February 12, 2009, 04:12:04 PM »

I installed the 64-bit 7 beta on my desktop without a hitch. A lot of the drivers installed automatically; I only had to install one driver, for a NIC on the mainboard (the board has two NICs, two different brands).
neonpolaris (Xbox Hacker, Posts: 1051)
« Reply #2 on: February 12, 2009, 07:18:32 PM »

Have you tried any of the Windows Xbox tools (JungleFlasher, Xtractor Reader, etc.) on it?
gigabite (Xbox Hacker, Posts: 3089)
.: Xplode Mods :.
« Reply #3 on: February 12, 2009, 07:34:39 PM »

XP 64-bit, yes; Vista 64-bit, no. Anything Vista, NO. 64-bit Windows is way better than XP 32-bit because it utilizes the 64-bit side of the CPU (so things are faster). Everything of mine works except my firewall (so for me, because of that, I can't use it). However, for Xbox work I'd stay away from it: XP 32-bit for maximum compatibility and fewer problems.

gigabite

.ISO  - he's a wannabe ... feel part of "t3h sc33n" yet ? QQ
coming 2009

k0mpresd (Xbox Hacker, Posts: 608)
« Reply #4 on: February 12, 2009, 09:22:04 PM »

Quote from: neonpolaris
Have you tried any of the Windows Xbox tools (JungleFlasher, Xtractor Reader, etc.) on it?

No. All of that stuff is on my XP partition/HDD for ease of use and reliability.
neonpolaris (Xbox Hacker, Posts: 1051)
« Reply #5 on: February 13, 2009, 02:48:25 AM »

You see, that's just what I'm talking about. Everyone says 64-bit seems to work well, "except for" one thing or another, always. I would never be doing any of the Xbox stuff with my laptop; I've got an old desktop with XP for that. But it seemed like the perfect example of the kind of thing I would expect not to work.

I already dual-boot Windows/Ubuntu. The differences between the OSes are stark, and certain things I just can't do on one or the other. But why should I bother with 64-bit Windows if 32-bit does everything that 64-bit does, and more? Rather than run 64-bit and reboot into 32-bit whenever I hit something only 32-bit can do, I'd just stay in 32-bit.

I am aware of the memory limit with 32-bit Windows, but I'm also aware of the additional overhead of addressing all memory in 64-bit. And I haven't heard anything saying that 64-bit "runs faster", unless you're running something that requires a ridiculous amount of memory.

My laptop has become a toy of sorts since I no longer use it at work, and I don't really use it at home either. I'll probably throw Windows 7 64-bit on it this weekend just to check compatibility with my favorite programs, if nothing else. I downloaded both of the ISOs from the official beta already, so it's just a matter of taking the time.
torne (Master Hacker, Posts: 105)
« Reply #6 on: February 13, 2009, 06:29:55 AM »

On x86, 64-bit execution isn't faster than 32-bit, in general. The added addressable registers and the like help, but the fact that all 64-bit instruction encodings require prefix bytes, and the fact that pointers and some data are bigger, makes code larger and hurts cache hit rates. There are some tasks for which it has a slight edge, and some tasks for which it's a little worse: there is certainly no clear advantage. All the hardware in the chip gets used whether you are in 64 or 32 bit mode: the real hardware will have more than the 16 addressable 64-bit registers anyway, so whichever mode you are in it will just continue to do register renaming to get more parallelism out of the code.
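
The size difference is easy to see for yourself; a minimal sketch in C (assuming gcc on Linux: compile the same file twice, once with -m32 and once with -m64, and compare):

Code:
#include <stdio.h>

/* Pointers double in size between 32-bit and 64-bit x86 builds, so any
   pointer-heavy data structure gets bigger and cache hit rates suffer.
   (long is 8 bytes under 64-bit Linux/gcc; 64-bit Windows keeps it at 4.) */
int main(void)
{
    printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 vs 8 */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 vs 8 */
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* 4 either way */
    return 0;
}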

Unless you actually need to address more than ~3GB of physical RAM (how much of your address space is stolen for PCI mappings and the like depends on your peripherals), or unless you are writing drivers that you need to test, there's not much point bothering with a 64-bit Windows install right now: at best, there are drivers for all your hardware and it works the same as the 32-bit version :)
jelle2503 (Xbox Hacker, Posts: 1686)
elitist prick
« Reply #7 on: February 13, 2009, 06:43:54 AM »

I have Vista 64 on my PC, but I ran into quite a few issues when it comes to 360 apps and the like. I'm not really fond of using an x64 OS to flash 360 drives; perhaps it may work. I already got myself a 32-bit Vista to install but haven't had the chance yet.

So I gave my parents a PC with XP 32-bit, which I always use to flash.
neonpolaris (Xbox Hacker, Posts: 1051)
« Reply #8 on: February 13, 2009, 08:35:27 AM »

OK, so let me ask another question then: have any of you found any performance benefit from having more than 2GB of RAM with Vista? With 32-bit Vista and my video card, I can't really use more memory than that.

Also, as far as program compatibility goes, is the only software that tends to have problems the kind that accesses (or emulates) hardware at a low level? Firewall/AV, virtual CD-ROMs, etc.?
torne (Master Hacker, Posts: 105)
« Reply #9 on: February 16, 2009, 06:23:59 AM »

Quote from: neonpolaris
OK, so let me ask another question then: have any of you found any performance benefit from having more than 2GB of RAM with Vista? With 32-bit Vista and my video card, I can't really use more memory than that.

There's almost always a benefit to having more RAM, until you get to truly huge amounts: it's just maybe not very much of one :)
If your physical RAM is already big enough to contain your working set with room to spare, then adding more will just make the disk cache bigger: this might speed things up a bit, but for most tasks there is only so much disk cache you actually need. If your working set is too big for RAM then you will be swapping a lot, and adding more RAM will almost certainly make a difference.

If you really want a proper answer you need lots of statistics on pagein/pageout rates, working set sizes and so on, most of which Windows displays badly or in a non-obvious way :)
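
If you just want the headline numbers, here's a minimal sketch of the sort of thing the Win32 API will give you (GlobalMemoryStatusEx reports totals and the current memory load, though not the pagein/pageout rates you'd really want):

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);          /* must be set before the call */
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    printf("memory load:  %lu%%\n", ms.dwMemoryLoad);
    printf("physical:     %llu of %llu MB free\n",
           ms.ullAvailPhys >> 20, ms.ullTotalPhys >> 20);
    printf("commit:       %llu of %llu MB free\n",
           ms.ullAvailPageFile >> 20, ms.ullTotalPageFile >> 20);
    return 0;
}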
gigabite (Xbox Hacker, Posts: 3089)
.: Xplode Mods :.
« Reply #10 on: February 16, 2009, 08:39:38 AM »

^agree - which is why I set my page file to 0 (effectively deleting it, it's easy to do)

gigabite

.ISO  - he's a wannabe ... feel part of "t3h sc33n" yet ? QQ
coming 2009

neonpolaris (Xbox Hacker, Posts: 1051)
« Reply #11 on: February 16, 2009, 09:15:10 AM »

So my Dell Inspiron 1520 takes the Windows 7 beta 64-bit just fine. Even my Bluetooth works. I haven't installed anything except Firefox yet, though. I'll probably be trying out XBC later today.

I'd like some more opinions on a few things. Removing the page file altogether: if you have "enough RAM", is this OK? I mean, if you can handle everything you need in RAM, is there ever a good reason to page anything?

If you are using a 32-bit OS, is it better to put two 2GB sticks in than a 1GB and a 2GB? Or three 1GB sticks? (Assuming you have a video card >512MB.) Doesn't dual channel require an even number of matching sticks? Does dual channel matter anymore?

The most multitasking my home desktop sees is probably Dreamweaver with a few pages open, Photoshop with a few web graphics open, Internet Explorer with a few pages open, Firefox with two pages open, BulletProof FTP, and maybe Winamp playing, across two screens. Would a heavy game have more or less memory usage than this?

If windows does such a bad job of showing me what it's using, are there any other apps that do better?

I remember reading about Vista's apparent excessive memory usage. People were complaining that Vista was using dramatically more memory than XP ever did, and it was explained that although Vista does use more for just itself, it also attempts to use all memory not needed by applications as cache. That is, the more you have available, the more it uses as cache. Not a bad thing, but at first glance it can make it look like you NEVER have enough memory no matter how much you throw in.

Thoughts?
torne (Master Hacker, Posts: 105)
« Reply #12 on: February 16, 2009, 12:25:13 PM »

Disabling the swapfile is generally fairly stupid :) I dunno why you think you're agreeing, gigabite. If your working set is too big for RAM then it's too big: disabling the swapfile isn't going to fix it. If it's not, then you are making system performance worse by forcing pages that aren't in the working set to stay in RAM.

By not having a swapfile, you are forcing all anonymous writable pages (programs' normal working data, their heaps and stacks and so on) to be kept in physical RAM at all times, even if they have not been touched for ages and will perhaps never be touched again. This reduces the amount of physical RAM available to store file-backed read-only pages (programs' actual code), file-backed writable pages (mmap'ed stuff) and disk cache. Having a smaller disk cache has an obvious effect: you may have to access the disk more often, which is glacial by your CPU's standards. The effect of evicting file-backed pages is less obvious, but what you are doing is causing more swapping to take place :)

Not having a swapfile doesn't prevent the OS from swapping; it just means it has nowhere to put anonymous writable pages (since they don't belong to a file). Data that belongs to an actual file on disk (the aforementioned program code, and mmap'ed data files) is still only brought into physical RAM when it's used, and is evicted when something else needs the space.
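
To make the two kinds of page concrete, here's a rough Win32 sketch in C (error checking omitted; "data.bin" is just a stand-in for any existing file):

Code:
#include <windows.h>

int main(void)
{
    /* Anonymous writable pages: there is no file behind them, so the only
       place the VMM can evict them to is the swapfile. */
    char *anon = VirtualAlloc(NULL, 1 << 20,
                              MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);

    /* File-backed pages: the file itself is the backing store, so clean
       pages can simply be discarded and re-read from disk later. */
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE map  = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    const char *data = MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0);

    /* ... use anon and data; with no swapfile, anon is pinned in RAM ... */
    return 0;
}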

Old versions of Windows had terrible virtual memory managers and made extremely poor decisions about which pages to evict at any given time: back then, disabling the swapfile tended to mean the disk cache was reduced instead of pages being evicted, which for some tasks made performance better. The modern Windows VMM is much, much better; its heuristics for trading off keeping anonymous pages against keeping other kinds of pages or expanding the disk cache are far superior, and in general you are hurting performance, sometimes by a great deal, by not allowing it to evict anonymous pages. There are still workloads for which it's better not to allow a swapfile, but general computer usage is not one of them. You may sometimes find that disabling the swapfile not only fails to make things better, but substantially increases swapping activity and makes performance much worse.

So yes, there is always a good reason to page things: the amount of data your computer could potentially be working with is always going to be bigger than your physical ram. You have to count all the data on your disk that you are using (i.e. things that could potentially be in the disk cache) as well as the actual memory being used by programs, as well as the programs' actual code itself.

As for the other bits..

If you have a dual channel memory controller then you will get better memory performance if you install your memory in matched pairs in the right sockets. However, the benefit is not huge: the circumstances in which anything will actually be twice as fast are so rare you will never encounter them. If it's significantly less effort/cost to have an uneven memory configuration which prevents dual channel operation then it's not the end of the world.

The problem with seeing how much memory you need is that it's a hugely difficult thing to understand in the first place. The amount of memory that a program is using depends on whether you count all the pages it has mapped or only the ones that are paged in; whether you count any pages of memory-mapped files; whether you count the pages in the disk cache which it is responsible for; how you count pages which are being shared with other programs; and loads of other factors. Windows doesn't display it very well because the true answer is a huge, complicated list of numbers which only mean anything for the exact state your computer is in right now; generalising is not always easy.

All versions of Windows use practically all of physical memory all the time; in fact, all operating systems do. Physical memory is so much faster than your disk that it would be stupid not to cache as much as possible: actual free memory is wasted memory! Vista displays many of these memory statistics differently to XP, though, which has confused a lot of people, and its virtual memory manager is somewhat changed in any case. I've not personally poked around on a Vista machine for any great length of time to be able to comment authoritatively on the differences..

If you're interested, here's a rough description of what some of the memory statistics in Windows mean. All labels and interpretations are from Windows XP's task manager as it's the only system I have to hand: other versions might display this stuff differently, so take it with a grain of salt.

Memory Usage on the Processes tab refers to how many of the pages that the process owns are currently in physical RAM. The process might actually own a lot more pages than that, but they are either in the swapfile or in a regular file. Now, here's where the first weirdness comes in: the memory usage of the various processes will probably add up to considerably more than the amount of RAM you have. This is because any page that is mapped into more than one process is counted under each of them :) The most common reason for this is that the pages are part of the actual code for the process: the exe/dlls. Any dll that's loaded by more than one process will have its pages mapped into all of them, but it only actually takes up one copy's worth of space in memory. Of course, which parts of the exe/dll are actually *in* memory at the moment changes constantly as they get paged in and out based on usage. So, what does this number really mean? Very little. Hooray! Larger numbers generally mean the process is responsible for more of the system's memory usage, but even that isn't guaranteed. The other memory-related stats on the Processes tab don't mean very much either; they are generally counted using extremely vague measures and include/exclude various kinds of pages without much explanation :)

The Performance tab has a bunch more numbers referring to the system as a whole. PF Usage, the graph in the middle, is displaying the same number as Commit Charge in the status bar at the bottom and under the Commit Charge section. What this refers to is how many pages of writable memory have been committed, i.e. promised to various processes. This doesn't mean that much memory is actually in use: pages are not generally allocated at the time they are committed. What it basically means is that space in the swapfile has been reserved to accommodate them; the first time a page is actually touched, it will be allocated. If you write a program that allocates, say, 100MB of RAM and then never uses it, the commit charge will increase by 100MB but the memory usage of the program will not, and the amount of free memory on the system will not go down. The commit charge is limited by the available physical RAM and swapfile space, because Windows generally avoids overcommitting (promising more pages than actually exist); other OSes sometimes allow overcommitting by some factor to allow for pages which will never be used (or, in the case of Linux's default, overcommitting by any amount you like *grin*). So... the commit charge is actually kinda meaningful: it's the closest thing to "total memory needed" that is measured. What it doesn't count, though, is read-only pages like code: if you have a commit charge of 512MB and you have 512MB of RAM then you still need to push some of it out into the swapfile, to make room for some actual code in order to do anything *with* all that data. Peak commit charge is just the highest it's been since you last rebooted.
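
You can watch this happen from inside a process. A rough sketch (Win32 again; link with psapi.lib, error checking omitted) that commits 100MB without touching it, then touches every page:

Code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static void show(const char *when)
{
    PROCESS_MEMORY_COUNTERS pmc;
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    /* PagefileUsage is this process's commit charge; WorkingSetSize is
       how many of its pages are actually in physical RAM right now. */
    printf("%-10s commit %6lu KB, working set %6lu KB\n", when,
           (unsigned long)(pmc.PagefileUsage >> 10),
           (unsigned long)(pmc.WorkingSetSize >> 10));
}

int main(void)
{
    SIZE_T size = 100 << 20;   /* 100MB */
    SIZE_T i;

    show("before:");
    char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                           PAGE_READWRITE);
    show("committed:");        /* commit jumps ~100MB, working set doesn't */

    for (i = 0; i < size; i += 4096)
        p[i] = 1;              /* each first touch allocates a real page */
    show("touched:");          /* now the working set jumps too */
    return 0;
}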

Kernel Memory is just how much memory belongs to the kernel itself, rather than to any userspace application: paged is how much of it could be paged out (either by just discarding it, if it's a read-only mapping, or by swapping it out if not), and nonpaged is how much cannot be paged out under any circumstances (because it may be used by parts of the kernel which cannot tolerate taking a page fault: this is where the PAGE_FAULT_IN_NONPAGED_AREA bluescreen comes from :) )

Last, Physical Memory. Total is pretty self-explanatory, I hope. System Cache is how large the disk cache currently is. Available is... an interesting claim. It includes all the pages of the disk cache, on the assumption that they can be discarded if more memory is needed; sadly this isn't actually true, because pages which have been modified (dirty pages) can't simply be discarded, they have to be written back to disk first, which takes time. It also includes memory which is actually not being used for anything at all: this amount is the difference between Available and System Cache. There are two reasons memory wouldn't be in use: one is that some memory is always held back from the cache to ensure that important allocations (such as the kernel wanting to bring in a page for its own use) will succeed quickly, and the other is that the system cache simply hasn't needed to grow since the page was freed up :)
What it *doesn't* include is all the memory that could be freed up just as easily as the disk cache: paged-in read-only mappings. These could also just be discarded, but they Just Don't Count for this purpose :) So the actual Physical Memory Available number means very little: it is neither the minimum nor the maximum amount of memory that could be freed up for use.
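
For the curious, most of the numbers described above can be read programmatically too; a minimal sketch using GetPerformanceInfo from psapi (counts come back in pages, so scale by PageSize; link with psapi.lib):

Code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PERFORMANCE_INFORMATION pi;
    if (!GetPerformanceInfo(&pi, sizeof(pi)))
        return 1;

    unsigned long long page = pi.PageSize;
    printf("commit charge: %llu of %llu MB\n",
           pi.CommitTotal * page >> 20, pi.CommitLimit * page >> 20);
    printf("physical:      %llu MB available of %llu MB\n",
           pi.PhysicalAvailable * page >> 20, pi.PhysicalTotal * page >> 20);
    printf("system cache:  %llu MB\n", pi.SystemCache * page >> 20);
    printf("kernel:        %llu MB paged, %llu MB nonpaged\n",
           pi.KernelPaged * page >> 20, pi.KernelNonpaged * page >> 20);
    return 0;
}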

Actually displaying this any better is, well, difficult. What are you going to display? :) Linux has similar issues; see the definitions of virtual size, resident set size and shared size as displayed by ps/top (none of which represent anything much more informative than the Windows equivalents). There is a very recent patch to the Linux kernel which allows a userspace program to calculate two new values: the proportional set size and the unique set size. These are more useful (though still not perfect): the unique set size is the number of pages which are mapped *only* into this program, and thus roughly how much memory you would expect to free up if you exited it, and the proportional set size is the number of pages mapped into the program, with each page divided by the number of processes that map it. This is about the best guideline you can get for "which program is using all the RAM", but it still has issues. These numbers can't be calculated unless the kernel exports very detailed information about which pages are mapped where, though, which Windows certainly does not, and even Linux has only started doing very recently.
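
If you're on a new enough Linux kernel, summing the Pss: lines in /proc/<pid>/smaps gives you the proportional set size yourself; a quick sketch:

Code:
#include <stdio.h>

/* Each mapping in smaps reports its Pss: resident pages divided by the
   number of processes sharing them. The total is the process's PSS. */
int main(int argc, char **argv)
{
    char path[64], line[256];
    long kb, total = 0;

    snprintf(path, sizeof(path), "/proc/%s/smaps",
             argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f) { perror(path); return 1; }

    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Pss: %ld kB", &kb) == 1)
            total += kb;
    fclose(f);

    printf("PSS: %ld kB\n", total);
    return 0;
}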

I should probably stop now. This may have gone on a bit longer than I intended, but I do love to lecture :)

(if you're wondering, yes I do this stuff for a living, I'm a kernel developer *grin*)
neonpolaris (Xbox Hacker, Posts: 1051)
« Reply #13 on: February 16, 2009, 01:53:35 PM »

Wow, thanks torne!  That's incredibly informative.  That's why I decided to ask here instead of somewhere like Yahoo! Answers. (Bunch of retards)  From searching I can't find data that means anything in the real world.  So I asked.

I think I'll have to read through that a couple more times to digest it all.

Here's what I got so far:
Dual channel is better than not, but doesn't mean much.  There can be some benefit to matched pairs in the right slots on compatible motherboards, but only enthusiasts/fanatics should be troubled with changing it if it's not already utilized.

Paging to disk isn't intrinsically bad.  Don't worry about trying to add enough memory so that the swapfile is never used.

Seeing absolute memory usage is terribly difficult.

--

Do I ever get to the point where I'm staring at my screen as my computer slows to a crawl and I hear my hard drive grinding away? No.
Do I ever get to the point where I'm waiting on my computer at all?  Not really.  I think my RAM is fine.
torne (Master Hacker, Posts: 105)
« Reply #14 on: February 17, 2009, 06:01:31 AM »

You pretty much have it. If you are swapping excessively you will notice, because your machine will grind to a halt and thrash the hell out of the disk. If not, then you have enough RAM to be getting on with. More RAM will pretty much always perform better, but if you don't notice problems now, you are unlikely to notice much improvement either :)
chickenpie (Master Hacker, Posts: 335)
« Reply #15 on: February 17, 2009, 11:58:20 AM »

That was a very interesting read, great stuff torne :)

"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
We'd all be running around in darkened rooms, munching magic pills and listening to repetitive electronic music."