Benchmarking Visual Studio Performance: One Developer’s Experience, Part II
As mentioned in Part I of this post, I’m trying to identify ways to improve the performance of Visual Studio, and the results are in!
TEST 1: Loading Visual Studio (no solution)
Just after launching Visual Studio, I timed how long it took for the IDE to load and be ready for use. I disabled the news-channel updates to get more consistent times:
As expected, load times were faster with the SSD. However, these times represent only the first load of Visual Studio; subsequent loads were much faster on the 7200 RPM drive and were more in line with the SSD load times. So there appears to be significant caching going on after the initial load.
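The caching effect described above can be illustrated with a small sketch: read the same file twice and compare wall-clock times. (This is an assumption-laden toy, not my actual benchmark: the file name is made up, and since the file is written just before the first read, that "cold" read may already be partially cached by the OS; a truly cold read would require flushing the file cache first.)

```python
import os
import time

def timed_read(path):
    """Read a file fully and return the elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    return time.perf_counter() - start

# Create a throwaway ~8 MB file (a stand-in for the IDE's binaries on disk).
path = "cache_demo.bin"
with open(path, "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))

cold = timed_read(path)   # first read: may have to hit the disk
warm = timed_read(path)   # second read: typically served from the OS file cache
print(f"cold: {cold:.4f}s  warm: {warm:.4f}s")
os.remove(path)
```

On most systems the second read will be noticeably faster, which is exactly why only the first Visual Studio launch shows the drive's true speed.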
TEST 2: Loading a solution in Visual Studio
I launched Visual Studio by double-clicking a solution file, so I am measuring the time it takes for the IDE plus the solution to load. I chose three solutions, all fairly substantial in size, so that the solution load time could be easily separated from the IDE load time. I again measured the time it took for the IDE to be "ready for use," which meant waiting for ReSharper to parse through the assemblies and source files:
Again there is a clear trend toward faster load times with the SSD. There also seems to be a significant CPU contribution to the load times. As in Test 1, these times represent the first load only; subsequent loads were faster on the slower drive due to caching.
TEST 3: Compiling a solution using MSBuild
Each of the three solutions above has an MSBuild script that we run to compile it. So I ran MSBuild for each solution and noted how long it took, using the "Time Elapsed" value that MSBuild reports. I did multiple trials and averaged the times:
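The run-several-trials-and-average procedure can be sketched as a small timing harness. This is a hedged illustration, not my actual tooling: the harness times an arbitrary command, and the command shown is a trivial stand-in; in practice you would substitute your real build invocation (something like `msbuild MySolution.sln`).

```python
import statistics
import subprocess
import sys
import time

def time_command(cmd, trials=3):
    """Run `cmd` several times and return the mean wall-clock time in seconds."""
    elapsed = []
    for _ in range(trials):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        elapsed.append(time.perf_counter() - start)
    return statistics.mean(elapsed)

# Trivial stand-in command so the sketch is runnable anywhere;
# replace with your real build command, e.g. ["msbuild", "MySolution.sln"].
mean_s = time_command([sys.executable, "-c", "pass"])
print(f"average time over 3 trials: {mean_s:.2f}s")
```

Averaging over multiple trials also smooths out the first-run caching penalty noted above, which is why I report averages rather than single runs.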
In this case a fast CPU seemed to matter most, which surprised me a bit, as I had expected the fast drive to be the dominant factor. As in the previous tests, the first trial run produced slightly slower times (by 1-2 seconds), but nothing significant.
TEST 4: Code searches
I did code searches over an entire solution (Ctrl+Shift+F) for text that would produce a lot of hits, and timed how long each search took:
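It makes sense that this workload is CPU-bound once the source files are in the OS cache: a whole-solution search is essentially "read every file, count the matches." A minimal Python sketch of that loop, run against a throwaway directory of generated files (not a real solution), looks like this:

```python
import os
import shutil
import tempfile
import time

def search_tree(root, needle):
    """Scan every file under `root` for `needle`; return (hit_count, seconds)."""
    hits = 0
    start = time.perf_counter()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                hits += f.read().count(needle)
    return hits, time.perf_counter() - start

# Build a small throwaway "solution": 50 files, 100 matching lines each.
root = tempfile.mkdtemp()
for i in range(50):
    with open(os.path.join(root, f"file{i}.cs"), "w") as f:
        f.write("public class Foo { }\n" * 100)

hits, seconds = search_tree(root, "Foo")
print(f"{hits} hits in {seconds:.4f}s")
shutil.rmtree(root)
```

After the first pass, the files are served from memory, so the bottleneck shifts to how fast the CPU can scan the text, which matches what I observed.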
Once again a fast CPU was most important here, followed by fast memory, with the hard drive mattering least.
So what is the conclusion? If you are a .NET developer, and you believe the results above (I am by no means a benchmarking expert, so take them with a grain of salt), then get a fast CPU (and, to a lesser extent, faster memory) to improve compilation and search times, and get a fast hard drive to improve IDE and solution load times. Personally, I would rather have fast compilation and searching, since those happen many times during development, while you only have to load the solution once. So was buying the SSD a waste of money? I don't think so: it was helpful in some areas, and of course there are other reasons to buy an SSD, like Windows boot times of 10-15 seconds, near-instantaneous loading of browsers and other programs, and so on.
Have any developers out there done similar benchmarking with Visual Studio? What were your conclusions?