The main issue would be that 64-bit may cause an app to use more memory. Every pointer doubles in size, the alignment of a structure containing a pointer may grow, etc. Basically, if you needed a lot of structures allocated and they all grew, your memory use becomes noticeably bigger.
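To make the growth concrete, here is a minimal C sketch (the struct and its field names are hypothetical) comparing the same pointer-bearing structure on a typical 32-bit versus 64-bit build:

    #include <stdio.h>

    /* On a typical 32-bit target this node is 12 bytes (three 4-byte
     * fields). On a typical 64-bit target it is 24 bytes: the two
     * pointers double to 8 bytes each, and 4 bytes of padding are
     * added after 'value' to keep the struct 8-byte aligned. */
    struct node {
        struct node *next;
        struct node *prev;
        int          value;
    };

    int main(void) {
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

A linked list of a million such nodes goes from roughly 12 MB to 24 MB just by recompiling, before the program does anything differently.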
Sometimes legacy code can make assumptions about pointer size. These hacks were more common in the days of porting older systems to 32-bit, but it could still happen moving to 64-bit. If there's code that tries to manually populate the bytes of data structures, sometimes bugs appear when the target field size changes (e.g. somebody ends up not initializing 4 of the 8 bytes in a now-wider field).
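A minimal C sketch of that failure mode (the record and field names are hypothetical, and the sample output assumes a little-endian target):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Hypothetical record whose 'handle' field was 4 bytes when the
     * code was written, but is pointer-sized (8 bytes) on 64-bit. */
    struct record {
        void *handle;
    };

    int main(void) {
        struct record r;
        memset(&r, 0xFF, sizeof r);      /* garbage, as from a raw buffer */

        uint32_t legacy_handle = 0x1234; /* the old 4-byte source value */

        /* Bug: copies only 4 of the now 8 bytes of r.handle, leaving
         * the upper half as uninitialized garbage on a 64-bit build. */
        memcpy(&r.handle, &legacy_handle, 4);

        printf("handle = %p\n", r.handle); /* e.g. 0xffffffff00001234 */
        return 0;
    }

On a 32-bit build the 4-byte copy happens to fill the whole field, so the bug stays invisible until the port.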
In the case of Apple, the huge pain will be their decision to not port all of Carbon to 64-bit (despite Forstall getting on stage years ago and stating that Carbon was going to be 64-bit “top to bottom”, it never happened). This can mean addressing a whole pile of nothing-to-do-with-64-bit-whatsoever problems before even starting to solve 64-bit problems.

First, it could be as simple as using legacy proprietary software that you don't have the sources for, so you can't recompile it as 64-bit. On macOS this is less of a problem, because 32-bit was used only for a short transition period, but it matters if, say, you want to run 32-bit Windows software with Wine.

Another reason is that the program was not written with portability in mind, so it works well on 32-bit but shows strange behaviors on 64-bit. These could be due to an infinite number of possibilities, so if you want to use it on 64-bit you must not only recompile the program but also debug and fix it.
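One classic instance of such a portability bug, sketched in C (a hypothetical example; the failing case assumes an LLP64 target such as 64-bit Windows, where long stays 4 bytes):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int x = 42;
        void *p = &x;

        /* Non-portable assumption: "a pointer fits in a long". That
         * holds on 32-bit targets and on LP64 Unix, but on LLP64
         * 64-bit Windows long is still 4 bytes, so the high half of
         * the address is silently lost. */
        unsigned long as_long = (unsigned long)(uintptr_t)p;

        /* uintptr_t is guaranteed wide enough to round-trip a pointer. */
        uintptr_t as_uintptr = (uintptr_t)p;

        printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
               sizeof(long), sizeof(void *));
        printf("as_long = 0x%lx, as_uintptr = 0x%llx\n",
               as_long, (unsigned long long)as_uintptr);
        return 0;
    }

Code like this can "work" for years on 32-bit and then corrupt pointers only on a 64-bit build, which is exactly the kind of bug you have to hunt down one by one.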
And then, it may simply have been done on purpose: yes, Microsoft compiles Visual Studio as 32-bit on purpose. The reason is that the main advantages of 64-bit are a bigger address space and a couple more registers; otherwise, on the Intel architecture, the performance is about the same, but a 64-bit build consumes significantly more memory, because every pointer inside your program is now twice as big. So if your program doesn't need an address space bigger than 32 bits and you want to save some RAM, it's not as stupid an idea as it would seem to still compile it as 32-bit, just as it's not a stupid idea to use a 32-bit OS on a PC with 2 GB of RAM or even less.
They've actually talked about why they haven't done this here:

"So why not just move Visual Studio to be a 64-bit application? While we’ve seriously considered this porting effort, at this time we don’t believe the returns merit the investment and resultant complexity. We’d still need to ship a 32-bit version of the product for various use cases, so adding a 64-bit version of the product would double the size of our test matrix. In addition, there is an ecosystem of thousands of extensions for Visual Studio ( ) which would need to also port to 64-bit. Lastly, moving to 64-bit isn’t a panacea – as others have noted ( ), unless the work doesn’t fit into a 32-bit address space, moving to 64-bit can actually degrade performance."
Also, a lot of people don't realize that there is a 64-bit version of the toolsets (for C++ at least).