PC upgrade advice

DB, Atilla,

I ran an experiment. On the Excel sheet I selected an area and ran a Find and Replace, changing one cell reference (20001 to 20000). With DB’s Disable All applied first and Enable All afterwards, the total time to change 67,280 cells was 45 seconds. CPU usage was around 50% until completion.

Automatic updates are already disabled. There is a printer connected, but it’s turned off.

Atilla,

Just read your latest. Will do as you suggest (after I’ve eaten). And presumably there will be no harm if I keep DB’s Disable All in place.

.NET Runtime Optimization - I’ve got two of them, you’ve got one. What’s it for anyway?
Grant.

I'm not too sure myself. It's Microsoft's new vision for an integrated application platform, I think - a .NET framework for developers to build apps that all talk to each other with "simplicity" or something... It provides support for XML and various other languages.

Any developers here?

Just curious as to why you have two different versions I guess. Use the most recent one.
 
I know SFA about Excel, but I will post a comment about CPU Level 2 cache sizes and system performance that may be useful.

The benchmarking of CPUs on popular PC hardware sites can be very misleading regarding the importance of cache size/architecture to application performance. In particular applications the effect can be really dramatic, but the inexperience of the reviewers means that a myth endures that cache is not all that important. For some workloads that simply isn't true.

Applications where cache/memory performance dominates overall system performance are those that make a lot of random accesses over a very large memory space. Such apps generate a lot of cache misses.

It occurs to me that very large spreadsheets MAY at least partially fall into this category.

To illustrate my point: several years ago a company I was contracting to benchmarked a Java app on a Pentium III and a Sun SPARC workstation. In small memory spaces the PIII easily won out. In large memory spaces the SPARC was much better - all because of cache size and memory architecture.

With some of my own Java code recently I've seen the same thing: a large memory space with many random accesses, on a small-cache AMD Sempron compared to an Intel C2D overclocked to 3.2 GHz. The Intel is 30 times faster on the same code. This almost unbelievable difference has a lot to do with the far superior cache on the Intel. Of course, better AMD processors with a bigger cache than the Sempron would fare a lot better in the comparison, but the message is crystal clear.
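
For anyone who wants to see the effect for themselves, here is a rough Java sketch (my own toy code, not the app I benchmarked - the class name and numbers are just for illustration). It walks a large int array once sequentially and once in a shuffled order; the work is identical, only the access pattern changes, and on a small-cache CPU the random pass is typically many times slower because nearly every access misses the cache.

import java.util.Random;

// Toy illustration of a cache-miss-dominated access pattern.
// May need a bigger heap than the default, e.g. run with -Xmx256m.
public class CacheMissSketch {

    public static void main(String[] args) {
        final int n = 16 * 1024 * 1024;      // 16M ints (~64 MB), far bigger than any L2 cache
        int[] data = new int[n];
        int[] order = new int[n];            // visit order: starts as 0,1,2,... then shuffled
        for (int i = 0; i < n; i++) {
            data[i] = i;
            order[i] = i;
        }

        long seqMs = timeWalk(data, order);  // sequential pass
        shuffle(order, new Random(42));      // same indices, random order
        long rndMs = timeWalk(data, order);  // random pass

        System.out.println("sequential: " + seqMs + " ms, random: " + rndMs + " ms");
    }

    // Sums data[] in the order given by order[]; only the memory access pattern differs.
    static long timeWalk(int[] data, int[] order) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < order.length; i++) {
            sum += data[order[i]];
        }
        long elapsedMs = (System.nanoTime() - start) / 1000000;
        if (sum == -1) System.out.println("never happens");  // stop the JIT discarding the loop
        return elapsedMs;
    }

    // Fisher-Yates shuffle
    static void shuffle(int[] a, Random r) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = r.nextInt(i + 1);
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    }
}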
 
I don't know much about .NET (I try to avoid MS) but I would guess that this optimization is very similar to Sun's Java Hotspot optimization. To put it briefly, Hotspot optimizes compiled Java code on the fly as it detects execution patterns. The argument goes that by so doing, optimizations are possible that cannot be done with statically optimized code (as for example is produced by a C++ compiler). Hotspot is very effective for a lot of apps. I would think that in general, you would want to have .NET runtime optimization turned on if you are running .NET applications. The setting should not affect non .NET apps.
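
I can't speak for the .NET service itself, but on the Java side the "on the fly" part is easy to see. A rough sketch (again my own toy code, class name hypothetical): time the same loop a few times in one JVM run, and the later runs are usually noticeably faster once Hotspot has profiled the method and compiled it to native code.

// Toy Hotspot warm-up illustration; nothing to do with the .NET service itself.
public class WarmupSketch {

    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            long result = 0;
            for (int i = 0; i < 5000000; i++) {
                result += work(i);
            }
            long ms = (System.nanoTime() - start) / 1000000;
            System.out.println("run " + run + ": " + ms + " ms (result " + result + ")");
        }
    }

    // Deliberately trivial; after enough calls Hotspot marks it "hot",
    // compiles it and typically inlines it into the calling loop.
    static long work(int i) {
        return (i * 31L) ^ (i >> 3);
    }
}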
 
I concur with your post dcraig. The same goes for SQL databases and, more specifically for me, disaster recovery: disk-to-tape writes are so much faster with cache. I would strongly recommend a good SCSI card with as much battery-backed write cache RAM as possible, coupled with RAID 10 for fast disk reads/writes.

Coming back to the spreadsheet, I did notice memory usage was at 1.7 GB of the 2 GB available, so obviously the spreadsheet and its formulas are having a big impact on RAM utilisation. Level 2 cache would help - can't remember how much the PC had?

This is a tough one.

Best to use elimination to see where the problem lies: software, spreadsheet or hardware... in that order.

Good luck Grantx
 
Atilla,

Multi-core cache configuration is 2 x 512 KB.

If cache is the key here, then compatible upgrades are the 3.2 GHz part with 2 x 1 MB cache or the 3.0 GHz Opteron, also with 2 x 1 MB cache.

The question then is, will I see much improvement?

Grant.
 
Cache RAM always helps imo.

Whether that is the problem here, or whether it will give a significant improvement, I can't say.

Fwiw, in an exercise I carried out in the past, adding cache to the SCSI controller on an SQL server gave a 15% improvement on reports that were taking 5-6 hours to run.

I appreciate we are talking about cache on the CPU here, but if formulas and calculations are the bottleneck I can only assume adding more cache will speed it up. By how much is worth benchmarking, as no doubt you will do.

This site Tom's Hardware is pretty good for PC Hardware stuff.

Sorry, not much help with placing your head on the guillotine... :cheesy:

Good luck.
 
This is an example of a cell:
=IF($AZ6<MIN($B$4:$B$20001),"",VLOOKUP($AZ6,$B$4:$AP$20000,22))
In this particular case this is from a table 800 rows x 40 columns (32,000 cells).

Try resizing the ranges from 20,000 rows down to the minimum required (800?) in all cells. You can do this trivially using the "Replace" functionality.
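
For example (assuming the data really does occupy only 800 rows starting at row 4, i.e. ending around row 803), the cell above would end up looking something like:
=IF($AZ6<MIN($B$4:$B$803),"",VLOOKUP($AZ6,$B$4:$AP$803,22))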
 
Forgot about this little "feature" of Excel.
Try minimising the Excel window and then restoring it. Any change in CPU usage?
 
DB,

No difference.

As a temporary measure I've reduced the amount of data.

In the final analysis, Excel is not an efficient tool for this kind of number crunching, so realistically one should look at the alternatives (another day). We've all spent enough time on this, and I doubt we'll get the quick fix I naively expected, so I reckon we should kill it. Thank you to everyone who contributed.

Grant.
 