Architecture & Design
Cache is a small amount of very fast memory built into the processor die itself. It stores data close to the processor, where it can be accessed at high speed. A rough rule of thumb has it that doubling the size of the processor cache gives roughly a 7% boost in processor performance. That figure may not be entirely accurate, but it is a good indication of just how important cache is.
Cache comes in different levels, with Level 1 (L1) cache closest to the processor core, Level 2 (L2) next in line, and so forth. The cache closest to the core is always the fastest, but it is also the hardest to increase in size. Thus, chip makers prefer to advertise the size of the L2 cache, which is much easier to grow.
Whether it's L1, L2 or even L3 cache, the larger the cache, the better the processor's performance. However, size is not the only factor. You should also look at the cache's level of associativity. Without going into the details, higher cache associativity generally means a higher cache hit rate (and lower miss rate), resulting in better cache performance.
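To see why associativity matters, here is a minimal sketch of an LRU set-associative cache simulator. The access pattern, cache sizes and function names are illustrative, not drawn from any real processor: four memory blocks happen to map to the same set, so a direct-mapped (1-way) cache thrashes while a 4-way cache of the same total capacity holds all four blocks.

```python
from collections import OrderedDict

def simulate(accesses, num_sets, ways):
    """Simulate an LRU set-associative cache; return the hit rate."""
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in accesses:
        s = sets[addr % num_sets]     # index: which set the block maps to
        tag = addr // num_sets        # tag: identifies the block within the set
        if tag in s:
            hits += 1
            s.move_to_end(tag)        # mark as most recently used
        else:
            if len(s) >= ways:
                s.popitem(last=False) # evict the least recently used block
            s[tag] = True
    return hits / len(accesses)

# Four blocks whose addresses differ by 8, looped over repeatedly.
# With 8 sets they all map to set 0 and conflict with one another.
pattern = [0, 8, 16, 24] * 100

direct = simulate(pattern, num_sets=8, ways=1)  # 8 sets x 1 way: every access misses
assoc  = simulate(pattern, num_sets=2, ways=4)  # 2 sets x 4 ways: same total size, almost all hits
print(direct, assoc)
```

Both caches hold eight blocks in total; only the associativity differs. The direct-mapped cache misses on every access because the four blocks keep evicting each other, while the 4-way cache misses only on the first pass.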
Another factor to consider, if you are looking at a multi-core processor, is whether it has a shared or independent cache. Some multi-core processors have their cores share a single, large L2 cache, while others give each core a dedicated cache of its own. For the same total cache size, a shared cache is better than independent caches.
Independent caches suffer from two problems - duplicated data and cache coherency. Separate caches often end up holding copies of the same data, which reduces their effective capacity. Keeping separate caches coherent also requires cache coherency protocols like Intel's MESI and AMD's MOESI, which incur a performance overhead.
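As a rough illustration of how a protocol like MESI works, here is a simplified state-transition table for one cache line as seen by one core. This is a sketch, not Intel's actual implementation: real MESI loads a line as Exclusive when no other core holds it, while this table conservatively loads it as Shared, and bus transactions are reduced to two events.

```python
# MESI states: M(odified), E(xclusive), S(hared), I(nvalid).
# Events: local_read/local_write from this core,
#         snoop_read/snoop_write observed from another core.
MESI = {
    ('I', 'local_read'):  'S',  # simplification: real MESI may go to E
    ('I', 'local_write'): 'M',
    ('S', 'local_read'):  'S',
    ('S', 'local_write'): 'M',  # other copies must be invalidated first
    ('E', 'local_read'):  'E',
    ('E', 'local_write'): 'M',  # silent upgrade, no bus traffic needed
    ('M', 'local_read'):  'M',
    ('M', 'local_write'): 'M',
    ('S', 'snoop_read'):  'S',
    ('E', 'snoop_read'):  'S',
    ('M', 'snoop_read'):  'S',  # dirty data must be written back
    ('S', 'snoop_write'): 'I',
    ('E', 'snoop_write'): 'I',
    ('M', 'snoop_write'): 'I',
    ('I', 'snoop_read'):  'I',
    ('I', 'snoop_write'): 'I',
}

def step(state, event):
    """Return the next state of a cache line after one event."""
    return MESI[(state, event)]

# Two cores ping-ponging writes to the same line: each write by one
# core invalidates the other's copy, forcing a coherence transaction -
# exactly the overhead the article describes.
a, b = 'I', 'I'
for _ in range(3):
    a, b = step(a, 'local_write'), step(b, 'snoop_write')  # core A writes
    a, b = step(a, 'snoop_write'), step(b, 'local_write')  # core B writes
print(a, b)
```

Every one of those invalidations costs bus traffic and latency, which is why heavy sharing between cores with independent caches is expensive.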
Although many AMD fans would like to counter by pointing out that the AMD64's request crossbar mitigates this, Xbitlabs reports differently and has shown that an average latency of around 120 cycles is incurred on such cache coherency requests. Currently, the only x86 processors on the market that use a shared cache are Intel's Core line of microprocessors, although this does not apply to their quad-core Core processors. Those quad-core processors use two separate L2 caches - each cache shared by two cores.
Although overclocking is still pretty much a niche art practised by relatively few PC users, it is still useful to determine a processor's overclockability. Even if you do not overclock, a processor's overclockability tells you something about its stability under extreme conditions. If a processor is running extremely close to its real-world ceiling, it could fail if external conditions change significantly.
By overclocking the processor, we can find out what its maximum real-world limit is. This information allows even non-overclockers to gauge a processor's "margin of safety". Although you won't get overclocking numbers from chip manufacturers, most computer hardware review sites perform overclocking tests when they review hardware. So, it pays to check out reviews like our review of the Kingston HyperX PC2-8500 DDR2 SDRAM modules.
For those who want to overclock, note that your overclocking mileage depends on many factors besides the processor itself: system memory, motherboard, BIOS settings, the batch and stepping of the processor, and even the power supply.
But whether you overclock or not, the rule of thumb is simple - the more overclockable a processor, the better. Yay for simplicity!