Components

Sunday, May 27, 2007

Asus M2A-VM HDMI (AMD 690G)

ASUS takes a break from its barrage of Intel P965 motherboard SKUs over the past few months to focus on AMD's new chipset, but unlike Sapphire, ASUS does what it does best, and that's to specialize. Instead of one board to fit every usage model, ASUS has two variants of the AMD 690G: the no-frills M2A-VM for the more business-minded and the entertainment-focused M2A-VM HDMI. Both boards are essentially designed on the same PCB; the only difference is that the latter adds FireWire support and is bundled with additional audio and video connectors. In this review, we take a look at what the M2A-VM HDMI has to offer.

PCMark05's results, on the other hand, were more favorable to the ASUS M2A-VM HDMI than SYSmark's. The board tied with the Sapphire board in the CPU subsystem scores, and while its memory performance fell below the Sapphire Pure Innovation HDMI's, the ASUS still managed to outdo the ECS. Lastly, all three boards share similar HDD performance numbers.

In our Sapphire Pure Innovation HDMI review, we mentioned the versatility of the AMD 690G chipset and how its features can be configured for the home theater generation or purely for business. ASUS is one of the manufacturers we expected to make use of this and develop specific products to cater to the different sectors. However, is it worth the trouble of pushing out two product lines for a mainstream chipset such as the AMD 690G? The M2A-VM HDMI is definitely the more interesting of the two, but its single defining feature is an add-on module that would probably work on either board. Taken together, the connection options on the ASUS M2A-VM HDMI match the Sapphire board, but that connectivity comes at a premium. The regular M2A-VM retails for around US$70 today, while the M2A-VM HDMI averages about US$20 more - just for the extra HDMI module and added FireWire connectivity.

The motherboard itself is a very well packaged product, and we've come to expect no less from ASUS. They've done a great job with its design and layout, avoiding potential cable, expansion and heat problems that might arise in the smaller micro ATX form factor. The Northbridge does tend to get very warm under load, but the board seems to take it in stride. We did not encounter any compatibility or stability issues with the M2A-VM HDMI, even under heavy benchmarking loops. While the high-end motherboard market has been evangelizing newer materials such as solid capacitors and digital PWM circuitry, the M2A-VM HDMI does it the old-school way, and we're quite happy that it works just as well. The only gripe we have against the M2A-VM HDMI is the lack of proper 8-channel analog audio jacks, which puts regular PC users at a disadvantage.

We were a little disappointed at first that ASUS did not include any memory tweaking options for the board, but since performance and overclocking aren't its main attractions, we let the matter slide. At the very least, ASUS does allow some rudimentary voltage adjustment and overclocking. Performance-wise, the M2A-VM HDMI managed to be consistent across our benchmarks, though the Sapphire Pure Innovation HDMI took the top spot in every test. Of course, when you benchmark a board designed to run at stock configurations (ASUS M2A-VM HDMI) against one with a comprehensive tweaking BIOS (Sapphire Pure Innovation HDMI), a gap between the two is to be expected.

Comparisons with the Sapphire Pure Innovation HDMI will likely be the topic of the day when it comes to AMD 690G motherboards, and while Sapphire may have stolen the thunder from the larger motherboard manufacturers, the ASUS M2A-VM HDMI is still a very well designed and well implemented motherboard for its target market as an OEM or home entertainment platform.

Gigabyte GA-P35-DS3R

This week Intel launched a new core logic set that supports the upcoming Penryn processors as well as the promising DDR3 SDRAM. Today we would like to introduce one of the first mainboards based on this chipset, and it appears to be a very promising platform. Read more in our new article!

The launch of the new processor family based on the Core micro-architecture, which keeps expanding into different market segments, has strengthened Intel’s position even further. These CPUs are currently extremely popular, which is not surprising at all, as they offer today’s best combination of consumer features. Trying to secure its leading position and increase its influence in the processor market, Intel continues growing the processor family, releasing new CPU models aimed at lower as well as upper market segments.


As we have seen, the manufacturer has paid special attention to inexpensive processor models lately. The price of the youngest Core 2 Duo solutions has dropped into the $100 range, pushing the competition back quite noticeably.

However, at the same time Intel certainly doesn’t forget about its high-performance solutions either. This summer we should welcome dual-core Core 2 Duo and quad-core Core 2 Extreme processors working at higher clock speeds and supporting a faster processor bus – the 1333MHz Quad Pumped Bus.
Of course, the increase in the bus frequency of Intel’s flagship processors requires Intel to make sure that the proper infrastructure is there. First of all, this implies the launch of new core logic sets, as the existing LGA775 chipsets, i975X and iP965, officially support only the 1067MHz Quad Pumped Bus. No wonder the new chipsets from this family are already appearing in the market: Intel P35 and the integrated Intel G33 have already been launched.

As for the supported memory types, Gigabyte engineers decided not to introduce the new DDR3 interface on their GA-P35-DS3R mainboard. At this time, this is a totally justified decision, because DDR3 memory is not available in retail yet. Even when it does go on sale, its price will evidently be higher than that of DDR2 SDRAM, even though it will offer no evident performance advantages at first; the only factor driving up the price will be the fact that it is a new product.

As a result, the Gigabyte GA-P35-DS3R features four traditional DDR2 SDRAM slots, like many other mainboards on the older iP965 and i975X chipsets. Our hero performs at its best with dual-channel DDR2 SDRAM, so the DIMM slots on the mainboard PCB are color-coded, indicating how the module pairs should be installed for maximum performance.

As for additional controllers, the mainboard carries a network chip and a chip implementing one Parallel ATA channel and two additional Serial ATA channels. Gigabyte engineers chose a PCI Express x1 Gigabit LAN controller from Realtek – the RTL8111B. The additional ATA controller is a PCI Express x1 Gigabyte SATA2 chip. It provides the board with a Parallel ATA-133 channel, because the chipset doesn’t support the Parallel ATA interface. Besides PATA, this chip also supports two Serial ATA-300 channels that can be put to good use as well.

So, the board ends up having 8 Serial ATA channels (with NCQ support and 3Gbit/s data transfer rates): 6 of these ports are connected to the ICH9R and the remaining 2 to the external controller chip. Both the integrated ICH9R ATA controller and the Gigabyte SATA2 chip allow creating RAID 0 and 1 arrays. The ICH9R also supports RAID 0+1 and RAID 5 arrays as well as Matrix Storage Technology.

I would like to give Gigabyte engineers kudos for the eSATA implementation. The Gigabyte GA-P35-DS3R doesn’t have the corresponding ports on the rear panel, as we would see in most cases; instead, it features two ports laid out on a separate bracket included with the board.

As for the expansion slots, the Gigabyte GA-P35-DS3R offers a pretty good list of them. Besides the PCI Express x16 graphics card slot, the mainboard also carries three PCI Express x1 slots (one of them may be blocked by the graphics card cooling system) and three PCI slots. Unfortunately, Gigabyte engineers decided not to equip their mainboard with a second PCI Express x16 slot physically connected to a PCI Express x4 bus. It means that this mainboard is not compatible with ATI CrossFire technology.

As for the Gigabyte GA-P35-DS3R mainboard that we have reviewed today, it is one of the first solutions based on the new Intel P35 chipset to appear in the market and certainly deserves your attention. It is a good alternative to iP965-based mainboards: it supports widespread DDR2 SDRAM, but at the same time offers better consumer features and specifications. Moreover, the Gigabyte GA-P35-DS3R did very well in our CPU overclocking tests and performed at a very high level at its nominal settings.

However, as we have already mentioned, this mainboard still has some frustrating drawbacks, such as limited memory overclocking and less than stellar performance with the FSB set above nominal. Hopefully Gigabyte engineers will take our comments into account when working on new mainboard revisions and modifications.

Summing up, let me once again weigh the pros and cons of the new Gigabyte GA-P35-DS3R mainboard.

MSI P35 Platinum (Intel P35)

MSI is one of the top-tier motherboard manufacturers in the consumer PC industry, but there is no mistaking 2006 as the year that ASUS and Gigabyte took most of the limelight. Gigabyte's marketing machine pushed the whole solid capacitor design into high gear, while ASUS flooded the market with specialized, target-focused motherboard models such as the TeleSky and the Republic of Gamers series. Both manufacturers also drew a lot of attention with an announced alliance (which didn't last), but most importantly, they made exciting motherboards that kept the market fresh. MSI, on the other hand, was focusing more on its graphics card business.

The P35 Platinum is based on the P35 and ICH9R chipset combination, which gives the board AHCI SATA and RAID capabilities in addition to its regular features. MSI makes use of the Southbridge's eSATA support, dedicating two of its ports to eSATA connectors on the rear panel and leaving only four ports for internal HDD connections. As the ICH9R also does not have any IDE support, MSI uses a Marvell 88SE6111 controller to provide one Ultra ATA port and one extra SATA 3.0Gbps port.

There is one Gigabit LAN port onboard powered by a Marvell 88E8111B PHY and two FireWire-400 ports via a VIA VT6308P controller. The most interesting component here is the use of the Realtek ALC888T HD Audio CODEC instead of the standard ALC888 we saw in the preview. You see, the ALC888T has special VoIP switching functionality that can automatically switch between VoIP and PSTN connectivity in the event of a power failure. This of course requires some kind of VoIP add-on card, a handset and a phone line to take advantage of, since the board doesn't have one built-in. MSI will actually be introducing such an add-on card very soon that works with Skype, called the SkyTel, but we have the impression that the SkyTel card will have its own audio chip, so how it interfaces with the ALC888T remains to be seen.

For enthusiasts, the P35 Platinum comes with a total of six onboard fan connectors, a quick CMOS clear button and a small row of debug LEDs. MSI has in the past made use of its D-Bracket debug system, which is still part of the package here, but this new row of LEDs is so much more useful. You can check out the row of mini debug LEDs just next to the blue SATA connector on the PCB.

Most of the components you find on the P35 Platinum aren't all that different from any other high-end P965 board you can get today, as the P35 chipset doesn't really add anything new to its repertoire. MSI did, however, make use of eSATA for the board, and while this reduces the internal SATA count, the board gains an extensive range of plug-and-play external high-speed connections. The improved audio CODEC used on the retail motherboard also opens up possibilities for additional functionality when the SkyTel add-on card comes out. We also like how MSI provides six USB 2.0 ports by default, but the chunky rear panel is really quite hideous (then again, that's basically our only rant with the board).

Performance-wise, the P35 Platinum's results from our benchmarking run turned out pretty well. While we weren't expecting any phenomenal scores from the new P35 chipset, it did surprise us quite a bit with a very strong SYSmark 2004 performance. Overall, the P35 Platinum proved to be quite consistently better than a reference P965 and able to keep up with the NVIDIA nForce 680i SLI. With lower chipset TDPs, we had expected better overclocking potential from the P35 Platinum, but 470MHz isn't exactly shabby, considering that 1333MHz (333MHz) FSB is being hyped as the next big thing. You can go way beyond that point with simple air cooling.

Although this is the first Intel P35 motherboard we've reviewed, the MSI P35 Platinum proves to be a very well built and well rounded motherboard. MSI plays to the strengths of the chipset and delivers a solid entry into the market. If you aren't planning a total overhaul of your PC, DDR2 is still here to stay for a while yet. With dropping prices, setting up a 4GB or higher capacity rig isn't so hard anymore, and boards like the P35 Platinum will probably let you extend the life of your memory for another year or so. Of course, if you really must have only the latest and greatest, the one factor that remains to be seen is how the DDR3 variant (MSI P35 Platinum D3) will fare in comparison, but that is another board for another day.

eVGA 122-CK-NF66-A1 nForce 650i

The 680i SLI motherboards were launched with a tremendous public relations effort by NVIDIA back in November. There was a lot of hype, speculation, and fanfare surrounding NVIDIA's latest chipset for the Intel market, and it promised an incredible array of features and impressive performance for the enthusiast. At the time of launch we were promised the mid to low range 650i SLI and Ultra chipsets would be shipping shortly to flesh out NVIDIA's Intel portfolio. NVIDIA had plans to truly compete against Intel, VIA, ATI, and SIS in the majority of Intel market sectors within a very short period of time after having some limited success earlier in 2006 with the C19A chipset.

However, all of this planning seemed to unravel as the weeks progressed after the 680i launch. It seemed as if NVIDIA's resources were concentrated on fixing issues with the 680i chipset instead of forging ahead with their new product plans. Over the course of the past few months we finally saw the 650i SLI launched in a very reserved manner, followed by the 680i LT launch that offered a cost-reduced alternative to the 680i chipset. While these releases offered additional choices in the mid to upper range performance sectors, we still did not know how well, or even if, NVIDIA would compete in the budget sector.

All of the chipsets offer support for the latest Intel Socket 775 processors along with official support for the upcoming 1333FSB-based CPUs. The 650i SLI and 650i Ultra chipsets are based on the same 650i SPP and utilize the nF430 MCP. The only differentiator between the two is how this SPP/MCP combination is implemented on a board, with the 650i SLI offering SLI operation at 8x8 compared to the single x16 slot on the 650i Ultra.

Other differences between the chipsets center on the features that the 680i offers that are not available on the 650i. These features include two additional USB 2.0 ports, two additional SATA ports, an additional Gigabit Ethernet port, Dual-Net technology, and EPP memory support. Otherwise, depending upon BIOS tuning, the performance of the chipsets is very similar across a wide range of applications, with overclocking capabilities being slightly more pronounced on the 680i chipset. In our testing we have found that the other chipsets also offer very good overclocking capabilities with SLI performance basically being equal at common resolutions on supported chipsets.

The overclocking aspects of the board are terrific considering the price point and with the asynchronous memory capability you can really push the FSB while retaining budget priced DDR2-800 memory in the system. This is one area where NVIDIA has an advantage over Intel in this price sector as the P965 boards are generally limited to 400FSB and less than stellar memory performance. We typically found that 4-4-4-12 1T or 4-4-3-10 2T timings at DDR2-800 offered a nice balance between memory price considerations and performance on this board.

We feel the EVGA 650i Ultra offers a high degree of quality and performance. This board is certainly not perfect, nor is it designed for everyone, but it offers almost the perfect package in an Intel market sector that has not had anything really interesting to talk about for a long time. We are left wondering why NVIDIA chose the silent path to introduce this chipset when it's obvious they really have something interesting to discuss this time around.

AMD Athlon 64 FX-74 4x4

It has been a little over seven months since officials at AMD first discussed their upcoming quad-processing solution with us. Back then details were still pretty hush-hush; all we were told was that their upcoming technology was intended to roll over the competition from Intel, hence the 4x4 codename. Like a real 4x4, this was intended to be a strong performer, only AMD would be relying on two processor sockets to achieve this performance rather than one.

Over the following months, AMD revealed more details on their 4x4 platform, including the fact that 4x4 CPUs would be sold under the high-end FX brand, and that 4x4 CPUs would be sold in pairs, with kits starting “well under $1,000”. AMD also committed to supporting cheaper unbuffered memory and tweakable motherboards that offered a range of HyperTransport speeds and other BIOS options for CPU and memory overclocking. AMD also concocted a new nickname for their 4x4 technology: the quadfather.

Today marks the official introduction of AMD’s quad-processing technology and it is indeed quite a performer under the right situations. Before we get into that though, let’s first discuss why AMD felt now was the time for four processing cores…

With so few games taking advantage of dual-core processors, much less four, many of you have questioned why AMD’s been targeting 4x4 at hardware enthusiasts and the hardcore gaming crowd.

The reasoning is simple: while it’s true that most of today’s games don’t take advantage of multi-core processing right now, console gaming has accelerated multi-core development. There are already millions of dual-core CPUs out there in the PC space, and with next-generation consoles such as the Xbox 360 and PlayStation 3, which boast multiple processing cores, now on the market, game developers no longer have an excuse not to program their games with multiple threads in mind.

By AMD’s estimates, more than 20 multi-threaded games are set to be released in 2007. This includes multiple genres of gaming as well, from first-person shooters, to RTS and RPG titles, from developers such as BioWare, Crytek, Epic, and Gas Powered Games.

Besides gaming, another usage scenario for four cores is what AMD describes as “megatasking”. Megatasking takes multitasking to the next level, as it involves running multiple CPU-intensive tasks at once. An example would be encoding an HD video (or two) while also watching an HD video, or encoding MP3s while also touching up a batch of photos in Photoshop. For those of you who are into MMOs, you could load two instances of the game at once and trade items you’ve collected back and forth between characters, or have one character fighting while the other heals him.

This is where having four processing cores really shines.

Debuting alongside the new 4x4 CPUs are a new chipset (NVIDIA’s nForce 680a) and a new motherboard based on that chipset (the ASUS L1N64-SLI WS); AMD dubs all this the Quad FX dual-socket direct connect platform.

AMD’s Quad FX platform doesn’t roll over the competition like AMD had hoped, mainly because Intel accelerated the launch of Kentsfield from Q1 ’07 to Q4 ’06, but it does lay the groundwork for AMD’s next-generation processor, codenamed Barcelona. For the hardcore crowd that could really use the extra SATA ports or would like to set up a mega eight-display system, AMD’s Quad FX platform may be tempting, but for the rest of you, we can’t help but feel that all this may be a little too much until more apps are available that take advantage of multi-core.

AMD Athlon 64 X2 6000+

A performance review of AMD’s new dual-core processors against Intel’s Core 2 Duo range from Hot Hardware shows how far Intel has come in beating AMD in the performance stakes in the post-Core 2 Duo era.

AMD is steadily moving to 65nm processors, which will narrow the gap in power consumption without affecting speed too greatly, and its new processors will challenge Intel from a price perspective if nothing else, as they’ll still give everyday consumers plenty of power to compute almost anything they want to.

The AMD 6000+ was due to arrive in November last year but was delayed until now. The lineup now comprises the AMD X2 6000+ at 3GHz with a 2MB L2 cache, the 5600+ at 2.8GHz with a 2MB cache, the 5400+ at 2.8GHz with a 1MB cache, the 5200+ at 2.6GHz with a 2MB cache and finally the 5000+, a 2.6GHz processor with a 1MB cache.

The 6000+ uses 125 watts of power, which is a no-no in today’s energy-conscious world, with an 89-watt model due around September or October this year.

The 6000+ costs computer manufacturers US $464 each in batches of 1000, compared with Intel’s Core 2 Duo E6700 at US $530 each in batches of 1000, which certainly makes the Intel attractive – less than $100 more will get you a faster processor that uses less energy.

AMD has a lot of work ahead of it to catch up to Intel in the performance stakes, and Intel in turn will need to trump any AMD advances, with both companies feverishly working on advancing their quad-core designs and pushing for even more cores to become standard on desktop processors. Intel’s recent 80-core prototype is a prime example.

But given that plenty of people are still stuck in a single core world, waiting to upgrade soon to a new dual core computer, thank goodness dual core processors have advanced in leaps and bounds over the last two years. AMD will no doubt compete on price and do all they can to keep Intel in check – with you, me and all other consumers the prime beneficiaries.

Saturday, May 26, 2007

ATI Radeon HD 2900 XT

It has been a long time coming, but today AMD is finally set to release its massively anticipated GPU codenamed R600 XT to the world under the official retail name of ATI Radeon HD 2900 XT. It is a hugely important part for AMD right now, which recently posted massive losses. AMD is counting on all these new models, along with this high-end 512MB GDDR-3 DX10 part and its 512-bit memory interface, to kick ass and help raise revenue against the current range from the green GeForce team, which is selling like super hot cakes.

Today AMD is launching an enthusiast part in the HD 2900 series with the HD 2900 XT, performance parts in the HD 2600 series including the HD 2600 XT and HD 2600 PRO, along with value parts including the HD 2400 XT and 2400 PRO. The HD 2600 and 2400 series have had issues of their own and you will need to wait a little longer (until July 1st) before being able to buy these various models off shop shelves. The HD 2900 XT will be available at most of your favorite online resellers as of today. Quantity is "not too bad" but a little on the short side, with most of AMD's partners only getting between 400 and 600 units, which is not that much considering the huge number of ATI fans out there. You may want to get in quick and place your order if you are interested - some AIB companies are not sure when they will get their next shipment, either.

Our focus today is solely on the HD 2900 XT 512MB GDDR-3 graphics card - it is the first GPU with a fast 512-bit memory interface, but what does this mean for performance? While it is AMD's top model right now, it is actually priced aggressively at around the US$350 - US$399 mark in the United States, which puts it up against Nvidia's GeForce 8800 GTS 640MB price-wise. After taking a look at the GPU and the card from PowerColor as well as some new Ruby DX10 screenshots, we will move onto the benchmarks and compare the red hot flaming Radeon monster against Nvidia's GeForce 8800 GTX along with the former ATI GPU king, the Radeon X1950 XTX.

Due to limited availability, as well as the fact that press in different regions are getting priority over others, we tested an actual retail graphics card from PowerColor. It has the same clock speeds as all other reference cards floating around - a 742MHz core clock and 512MB of GDDR-3 memory clocked at 828MHz, or 1656MHz DDR.
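As a rough sanity check on what that 512-bit bus actually buys you, the back-of-the-envelope formula is simply the bus width in bytes multiplied by the effective data rate. The figure below is our own calculation from the clocks quoted above, not an AMD-supplied number.

    # Theoretical peak memory bandwidth of the reference HD 2900 XT
    # (512-bit bus, GDDR-3 at 828MHz, i.e. 1656MHz effective data rate).
    bus_bits = 512
    effective_mhz = 1656
    bandwidth_gb_s = (bus_bits / 8) * effective_mhz / 1000
    print(round(bandwidth_gb_s, 1))  # ~106.0 GB/s, versus 86.4 GB/s on a GeForce 8800 GTX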

The PowerColor PCI Express x16 card looks just the same as the reference cards. In the following weeks you will see more expensive water-cooled HD 2900 XT models from the usual suspects, along with overclocked models. We did not get time to perform any overclocking tests, but reports are floating around that the core is good for at least 800 - 850MHz, and the GDDR-3 memory more than likely has room to increase as well. You may even see some companies produce HD 2900 XT OC models which use 1GB of faster GDDR-4 memory operating at over 2000MHz DDR, or they will use special cooling to get the most out of the default setup.

As far as size goes, the HD 2900 XT is a little longer than the Radeon X1950 XTX but a good deal shorter than the GeForce 8800 GTX, as you can see from the shot above with the PowerColor HD 2900 XT sitting in the middle of the group. Both of the other cards take up two slots and the HD 2900 XT is no different.

In 2D mode (non-gaming in Windows), the clock speeds are automatically throttled back to 506MHz on the core and 1026MHz DDR on the memory. This is done to reduce power consumption and also to reduce temperatures, which seems to be pretty important for the HD 2900 XT.

We expect factory overclocked HD 2900 XT cards to start selling in less than a month from now. AIB partners currently have the option of ordering 1GB GDDR-4 models with faster clock speeds, but it is unclear whether this product will be called HD 2900 XT 1GB GDDR-4 or HD 2900 XTX - you may end up seeing these types of cards appear in early June (around Computex Taipei show time). If we saw a product like this with a slightly faster core clock and obviously much faster memory clock (2000 - 2100MHz DDR vs. 1656MHz DDR), we think it would compete very nicely against the GeForce 8800 GTX as far as price vs. performance goes. Sadly we did not have a GeForce 8800 GTS 640MB handy for testing, but judging from our previous testing on similar test beds, the HD 2900 XT should beat it quite considerably - by around the 20% mark in 3DMark06, for example. This is rather interesting since the HD 2900 XT is in the same price range as the GeForce 8800 GTS 640MB - we will test against this card shortly!

Summing it up, we are happy to see a new high-end Radeon graphics card from AMD - it really is a red hot flaming monster, but it manages to offer a good amount of performance and an impressive feature set with full DX10 and Shader Model 4.0 support, CrossFire, Windows Vista support and a host of other features which we did not even have enough time to cover in full today, such as improved anti-aliasing and UVD. It was a long time coming, but it is able to offer very good bang for buck against the equivalent from Nvidia - the GeForce 8800 GTS.

It is also something for the green team to think about if AMD comes out with a faster version of the R600 XT later in the year, either with faster GDDR-4 memory (and more of it) or higher clock speeds using 65nm process technology. These are interesting times in the GPU business; for right now the Radeon HD 2900 XT offers very solid performance for the price, but we will be even more interested in what is coming in the following weeks as overclocked versions emerge and shake things up further.


Nvidia GeForce 8800 Ultra

WHAT HAPPENS WHEN YOU take the fastest video card on the planet and turn up its clock speeds a bit? You have a new fastest video card on the planet, of course, which is a little bit faster than the old fastest video card on the planet. That's what Nvidia has done with its former king-of-the-hill product, the GeForce 8800 GTX, in order to create the new hotness it's announcing today, the GeForce 8800 Ultra.

There's more to it than that, of course. These are highly sophisticated graphics products we're talking about here. There's a new cooler involved. Oh, and a new silicon revision, for you propellerheads who must know these things. And most formidable of all may be the new price tag. But I'm getting ahead of myself.

Perhaps the most salient point is that Nvidia has found a way to squeeze even more performance out of its G80 GPU, and in keeping with a time-honored tradition, the company has introduced a new top-end graphics card just as its rival, the former ATI now owned by AMD, prepares to launch its own DirectX 10-capable GPU lineup. Wonder what the new Radeon will have to contend with when it arrives? Let's have a look.

By and large, the GeForce 8800 Ultra is the same basic product as the GeForce 8800 GTX that's ruled the top end of the video card market since last November. It has the same 128 stream processors, the same 384-bit path to 768MB of GDDR3 memory, and rides on the same 10.5" board as the GTX. There are still two dual-link DVI ports, two SLI connectors up top, and two six-pin PCIe auxiliary power connectors onboard. The feature set is essentially identical, and no, none of the new HD video processing mojo introduced with the GeForce 8600 series has made its way into the Ultra.

Yet the Ultra is distinct for several reasons. First and foremost, Nvidia says the Ultra packs a new revision of G80 silicon that allows for higher clock speeds in a similar form factor and power envelope. In fact, Nvidia says the 8800 Ultra has slightly lower peak power consumption than the GTX, despite having a core clock of 612MHz, a stream processor clock of 1.5GHz, and a memory clock of 1080MHz (effectively 2160MHz since it uses GDDR3 memory). That's up from a 575MHz core, 1.35GHz SPs, and 900MHz memory in the 8800 GTX.

The Ultra's tweaked clock speeds do deliver considerably more computing power than the GTX, at least in theory. Memory bandwidth is up from 86.4GB/s to a stunning 103.7GB/s. Peak shader power, if you just count programmable shader ops, is up from 518.4 to 576 GFLOPS—or from 345.6 to 384 GFLOPS, if you don't count the MUL instruction that the G80's SPs can co-issue in certain circumstances. The trouble is that "overclocked in the box" versions of the 8800 GTX are available now with very similar specifications. Take the king of all X's, the XFX GeForce 8800 GTX XXX Edition. This card has a 630MHz core clock, 1.46GHz shader clock, and 1GHz memory.
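For readers who want to check those figures, here is a minimal sketch of the arithmetic under the usual assumptions: each of the 128 stream processors retires a MADD (two flops) plus an optionally co-issued MUL (one flop) per shader clock, and memory bandwidth is the 384-bit bus width in bytes times the effective data rate.

    # Back-of-the-envelope GFLOPS and memory bandwidth, 8800 Ultra vs. 8800 GTX.
    def shader_gflops(sps, shader_ghz, count_mul=True):
        # MADD counts as 2 flops; the co-issued MUL adds a third when counted.
        return sps * shader_ghz * (3 if count_mul else 2)

    def bandwidth_gb_s(bus_bits, effective_mt_s):
        return (bus_bits / 8) * effective_mt_s / 1000  # bytes per transfer * GT/s

    print(round(shader_gflops(128, 1.50), 1), round(shader_gflops(128, 1.50, False), 1))  # 576.0, 384.0 (Ultra)
    print(round(shader_gflops(128, 1.35), 1), round(shader_gflops(128, 1.35, False), 1))  # 518.4, 345.6 (GTX)
    print(bandwidth_gb_s(384, 2160), bandwidth_gb_s(384, 1800))                           # 103.68, 86.4 GB/s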

So the Ultra is—and this is very technical—what we in the business like to call a lousy value. Flagship products like these rarely offer stellar value propositions, but those revved-up GTX cards are just too close for comfort.

The saving grace for this product, if there is one, may come in the form of hot-clocked variants of the Ultra itself. Nvidia says the Ultra simply establishes a new product baseline, from which board vendors may improvise upward. In fact, XFX told us that they have plans for three versions of the 8800 Ultra, two of which will run at higher clock speeds. Unfortunately, we haven't yet been able to get likely clock speeds or prices from any of the board vendors we asked, so we don't yet know what sort of increases they'll be offering. We'll have to watch and see what they deliver.

We do have a little bit of time yet on that front, by the way, because 8800 Ultra cards aren't expected to hit online store shelves until May 15 or so. I expect some board vendors haven't yet determined what clock speeds they will offer.

In order to size up the Ultra, we've compared it against a trio of graphics solutions in roughly the same price neighborhood. There's the GeForce 8800 GTX, of course, and we've included one at stock clock speeds. For about the same price as an Ultra, you could also buy a pair of GeForce 8800 GTS 640MB graphics cards and run them in SLI, so we've included them. Finally, we have a Radeon X1950 XTX CrossFire pair, which is presently AMD's fastest graphics solution.

I also prefer the Ultra to the option of running two GeForce 8800 GTS cards in SLI, for a variety of reasons. The 8800 GTS SLI config we tested was faster than the Ultra in some cases, but it was slower in others. Two cards take up more space, draw more power, and generate more heat, but that's not the worst of it. SLI's ability to work with the game of the moment has always been contingent on driver updates and user profiles, which is in itself a disadvantage, but SLI support has taken a serious hit in the transition to Windows Vista. We found that SLI didn't scale well in either Half-Life 2: Episode One or Supreme Commander, and these aren't minor game titles. I was also surprised to have to reboot in order to switch into SLI mode, since Nvidia fixed that issue in its Windows XP drivers long ago. Obviously, Nvidia has higher priorities right now on the Vista driver front, but that's just the problem. SLI likely won't get proper attention until Nvidia addresses its other deficits compared to AMD's Catalyst drivers for Vista, including an incomplete control panel UI, weak overclocking tools, and some general functionality issues like the Oblivion AA problem we encountered.

That fact tarnishes the performance crown this card wears, in my view. I expect the Ultra to make more sense as a flagship product once we see—if we see—"overclocked in the box" versions offering some nice clock speed boosts above the stock specs. GeForce 8800 Ultra cards may never be killer values, but at least then they might justifiably command their price premiums.

We'll be keeping an eye on this issue and hope to test some faster-clocked Ultras soon.

MSI NX 8800 GTS - T2D320E-HD

Not wanting to leave any enthusiasts out, those looking for the power of the GeForce 8800 family of cards on a budget have had their prayers answered in the form of the GeForce 8800 GTS 320MB card. With prices below $300, the power of the best NVIDIA has to offer is now available for everyone. With the only difference between the two GTS models being the amount of GDDR3 RAM (640MB vs. 320MB), the sub-$300 price is attractive and reasonable.

MSI took the reference design and upped the ante by increasing the core and memory clock speeds in its NX8800GTS-T2D320E-HD OC offering. The increase in raw speed served as a good reminder that the marketing hype of "more memory = faster" is definitely not always the case, as was discovered when the card was put through its paces against its bigger, but slower, 640MB brother.

As the chart below shows, the NX8800GTS-T2D320E-HD OC is identical to the 640MB version except in the amount of RAM and the core and memory speeds. Both clocks get nice boosts that translate into impressively increased performance throughout a number of benchmarks and tests.
Apart from a few ultra-high resolution instances, MSI's NX8800GTS-T2D320E-HD OC proved that, with a little extra oomph from end-user overclocking, raw speed still can and does have a major impact on gaming performance; while more memory certainly can't hurt, it isn't a cure-all that ensures faster performance.

If you're considering a GeForce 8800 GTS, don't let the smaller amount of RAM fool you: these factory overclocked 320MB GTS cards are packing all the same heat and can perform almost as well as, and sometimes even better than, their bigger-but-slower brother.

Nvidia GeForce 8800 GTX

DirectX 10 is sitting just around the corner, hand in hand with Windows Vista. It requires a new unified architecture in the GPU department that, until today, neither hardware vendor had implemented, and it is not compatible with DX9 hardware. The NVIDIA G80 architecture, now known as the GeForce 8800 GTX and 8800 GTS, has been the known DX10 candidate for some time, but much of the rumor and speculation about the chip was just plain wrong, as we can now officially tell you today.

Well, we've talked about what a unified architecture is and how Microsoft is using it in DX10, with all the new features and options available to game designers. But just what does NVIDIA's unified G80 architecture look like??

All hail G80!! Well, um, okay. That's a lot of pretty colors and boxes and lines and what not, but what does it all mean, and what has changed from the past? First, compared to the architecture of the G71 (GeForce 7900), you'll notice that there is one less "layer" of units to see and understand. Since we are moving from a dual-pipe architecture to a unified one, this makes sense. Those eight blocks of processing units there with the green and blue squares represent the unified architecture and work on pixel, vertex and geometry shading.

The new flagship is the 8800 GTX card, coming in at an expected MSRP with a hard launch; you should be able to find these cards for sale today. The clock speed on the card is 575 MHz, but remember that the 128 stream processors run at 1.35 GHz, and they are labeled as the "shader" clock rate here. The GDDR3 memory is clocked at 900 MHz, and you'll be getting 768MB of it, thanks to the memory configuration issue we talked about before. There are dual dual-link DVI ports and an HDTV output as well.


NVIDIA should be commended once again for being able to pull off a successful hard launch of a product that has been eagerly awaited for months now. Only time will tell us if supply is able to keep up with demand, but I'll be checking in during the week to find out!

ATI Radeon X1950 Pro 256MB

The mainstream video card market actually comprises two different levels, separated by the old price-performance matrix. Cards like the GeForce 7600 and Radeon X1650-based products represent the entry-level section, while the top end offers video cards with greater performance and a higher price tag. NVIDIA has been a serious powerhouse at the upper range, first with the GeForce 7600 GT 256MB, and when that faded, the GeForce 7900 GS 256MB was quick to take its place. ATI has had a very tough time competing, especially as there was initially a huge gap between the Radeon X1600 XT and Radeon X1900 XT cards. This was bridged with the Radeon X1900 GT, but the subsequent Radeon X1950 Pro is the real deal, and better able to stem the NVIDIA tide.

The Radeon X1950 Pro is built on the 80nm RV570 graphics core, and sports a similar architecture to the lower-clocked, 90nm Radeon X1900 GT. The RV570 features 12 pixel pipelines, 12 texture units, 8 vertex shaders, and 12 ROPs. This may seem low for a high-end mainstream video card, but the Radeon X1950 Pro includes 3 pixel shaders per pipeline, for a total of 36. This can yield a serious performance edge, especially in SM3.0 games. The Radeon X1950 Pro features 256MB of onboard GDDR3 memory using a 256-bit link to the internal ring bus controller. This is the latest generation of 80nm ATI parts, and like the Radeon X1650 XT, the Radeon X1950 Pro supports HDCP and "native" CrossFire using internal connectors.
The base architecture may be similar, but the Radeon X1950 Pro is clocked higher than current Radeon X1900 GT boards, and the RV570 core runs at 575 MHz, while the 256MB of GDDR3 memory is set at 1.38 GHz. This provides theoretical fill rates of 6.9 GPixels/s, 6.9 GTexels/s (standard) and 20.7 GTexels/s (shaded). This last figure helps illustrate just how powerful this type of design can be, given a game or application that really stresses its pixel shading abilities. The memory bandwidth is definitely high-end, as the 1.38 GHz memory clock and its 256-bit link translate into 44.2 GB/s of memory bandwidth - about on par with a GeForce 7950 GT. The Radeon X1950 Pro also includes support for AVIVO, up to 6X AA & 16X AF modes, 3Dc+ texture compression, and native support for CrossFire multi-GPU technology.
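Those theoretical numbers fall straight out of the unit counts and clocks listed above. Here is a short sketch of the arithmetic, assuming one pixel, one texel and one shader op per unit per clock, and treating the quoted 1.38 GHz as the effective (DDR) data rate.

    # Theoretical fill rates and bandwidth for the Radeon X1950 Pro (RV570).
    core_mhz, rops, tmus, pixel_shaders = 575, 12, 12, 36
    bus_bits, mem_effective_mhz = 256, 1380  # 1.38GHz quoted above, taken as the DDR data rate

    print(core_mhz * rops / 1000)                     # 6.9 GPixels/s
    print(core_mhz * tmus / 1000)                     # 6.9 GTexels/s (standard)
    print(core_mhz * pixel_shaders / 1000)            # 20.7 GTexels/s (shaded)
    print((bus_bits / 8) * mem_effective_mhz / 1000)  # ~44.2 GB/s memory bandwidth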

The ATI version of the Radeon X1950 Pro is a standard design without any of the enhancements offered by their 3rd-party vendors. This is both a positive and a negative, as you know the card design is fully tested, compatible and rock solid, but you forgo any higher default clock speeds or nifty cooling apparatus. The card itself is a full-length PCI Express model, with a sleek red heatsink-fan covering virtually the entire PCB. We like this format, especially compared to Radeon X1950 Pro cooling designs with a taller heatsink-fan, as the ATI card offers a seamless install for adjacent peripherals.
The ATI Radeon X1950 Pro 256MB card is clocked at standard speeds, with its core set at 575 MHz and the onboard memory running at 1.38 GHz. The card offers the standard connectivity options, featuring two dual-link DVI connectors and an S-Video/HDTV-out port. The DVI output offers resolutions up to 2560x1600, VGA maxes out at 2048x1536, and HDTV-out runs up to 1080i. As with all Radeon X1950 Pro cards, the ATI version also requires external power through a single PCI Express connector. CrossFire is supported in native mode.

The ATI Radeon X1950 Pro retail box includes a CrossFire bridge interconnect for future upgrades. Also included in the bundle are a Driver CD, composite and S-video cables, HDTV-out cable, and DVI to VGA adapters. ATI also offers a 1-year limited warranty and supports operating systems from Windows XP to MCE to Vista.

MSI NX 8600 GTS T2D256E

You can tell plenty from the model code of this new MSI graphics card. The 'NX8600GTS' part tells you that it uses the new Nvidia GeForce 8600GTS graphics chip along with 256MB of fast GDDR-3 graphics memory, while the 'OC' suffix flags up the fact that this is a factory overclocked graphics card.

Nvidia launched the DirectX 10 GeForce 8800 GTS and GTX way back in November 2006, so this mid-range chip has been a long time coming. It's given Nvidia time to move from a 90nm fabrication process to 80nm, so the GeForce 8600 uses faster clock speeds on the core, unified shaders and memory than we ever saw on the 8800. This gives us a taste of what we can expect when Nvidia launches the GeForce 8800 Ultra in a few weeks' time to nicely mess up the launch of the ATi Radeon HD 2900.

Getting back to the MSI, the GeForce 8600GTS chip uses 289 million transistors, compared to the GeForce 8800 which has 691 million. The graphics core runs at 700MHz, the 32 unified shaders are clocked at 1.45GHz and the 256MB of memory has a speed of 2.1GHz, yet the power rating is fairly low at 71W so there's a single six-pin PCI Express power connector.

MSI calls this an overclocked card but the increase is fairly small as the reference core speeds for an 8600GTS are 675MHz for the core with 2GHz memory speed.

MSI has clearly given a fair amount of thought to the cooling on this model as the pictures we've seen of competing GeForce 8600GTS cards use a slim-line cooler that looks very similar to a GeForce 7800GTX. MSI has instead opted for a double-slot design that connects the heatsink to a finned radiator. The fan blows cooling air through a duct and across the cooler but there's a sizeable gap between the heat exchanger and the vented bracket.

We suspect this design is intended to quieten the cooler; however, it still seemed rather noisy to our ears and made a continuous drone that was rather off-putting.

The graphics card is bulky and noisy, and the MSI package is rather basic. There's a PCI-E power adapter, two DVI adapters, a splitter cable with Component and S-Video outputs, and an S-Video extension cable. Apart from some MSI utilities there is no software in the games department, where you would hope for, say, a voucher for Unreal Tournament 2007.

During our testing we compared the MSI with a GeForce 7950GT, as that is the DirectX 9.0c part that will inevitably be replaced by the GeForce 8600GT. When we ran 3DMark06 and a few games on both cards in Windows XP and Windows Vista, we were surprised to find that the difference in performance was minimal.

Sure, there were swings and roundabouts, but you couldn't say that either graphics card was better than the other, and for that matter both of the two operating systems delivered the goods. While the MSI is undoubtedly a very competent performer it came as a real surprise that it wasn't markedly better than the card that it is due to replace.

The reason is, of course, DirectX 10.

Every card in the GeForce 8 family supports DirectX 10 and uses a new design that replaces dedicated Pixel Shaders, Vertex Shaders and Geometry Shaders with more flexible Unified Shaders. That's a good move which will doubtless reap benefits once DirectX 10 games come to market, but that's still some months in the future.

AMD Athlon 64 FX-62

AMD's Athlon 64 FX-62 represents a major shift in design for AMD. The chip itself is straightforward; it's a dual-core performance CPU that offers a marginal performance increase over the older Athlon 64 FX-60. More importantly, the FX-62 is the flagship of AMD's new Socket AM2 platform, which introduces several new features to the AMD desktop lineup. This means that if you want to upgrade to this CPU, you'll also need a new motherboard. You'll get plenty of advanced features if you make the switch, but keep in mind that Intel's Core 2 Duo chips are right around the corner, and early tests have shown that AMD's hold on the performance belt might be slipping.

The Athlon 64 FX-62 is not a revolutionary upgrade to AMD's old lineup. Aside from the new interface, the biggest change is the bump to 2.8GHz per core, a minor uptick from the FX-60's 2.6GHz. Despite the predictable CPU tweaking, the more important development is the FX-62's transition to AMD's new Socket AM2 platform. For the past two years, AMD's desktop chips have used either Socket 939 or the lower-end Socket 754 motherboards. With AM2, AMD introduces not only an entirely new pin layout for its desktop chips, it also brings support for DDR2 memory. AM2 will support 667MHz DDR2 memory for all of AMD's chips, and at least up to 800MHz memory when paired with compatible Athlon 64 X2 and Athlon FX CPUs. It's been rumored that AM2 can support up to 1,066MHz DDR2 memory as well, although AMD won't officially support it. For Intel's part, its 900-series chipsets have supported DDR2 at various clock speeds since their debut in 2004, but that support hasn't translated into performance wins due to DDR2 memory's higher latency compared with plain old DDR. DDR2 generally isn't slower than DDR, but it hasn't really offered a benefit either. The time is now ripe for AMD to switch, though, because falling prices are making large quantities of DDR2 memory more cost effective than DDR, and with Windows Vista and its 1GB system memory requirement, you can expect PCs with 2GB and 4GB of memory to soon become the norm.

There's more to the Socket AM2 story (check back for a blog later), but as far as the Athlon 64 FX-62 is concerned, neither the new chip nor the new platform translates to remarkable performance gains. Its SysMark 2004 overall scores are only 3.5 percent faster than the FX-60's. The FX-62's strongest improvement came on our multitasking test, where it showed a 10 percent performance gain. Otherwise, on our dual-core and gaming tests, the FX-62 turned in scores between 1 and 8 percent faster than the FX-60, barely overcoming the statistical margin of error. Still, right now the Athlon 64 FX-62 is technically the desktop CPU performance leader. But with recent brewings at Intel, that lead could change hands soon.

At Intel's Spring Developer's Forum, Intel provided tech enthusiast site Anandtech the chance to test its upcoming Core 2 Duo desktop chip (then code-named Conroe) against an overclocked Athlon 64 FX-60. The results weren't in AMD's favor. We can't judge based on prerelease testing, especially when it was conducted in an Intel-controlled environment (Anandtech acknowledged the possibility for Intel chicanery as well), but with no major performance boost from the Athlon 64 FX-62, and the fact that Intel's Core 2 Duo represents a brand-new architecture for the desktop, AMD's performance lead looks vulnerable. Intel's road map puts the release date of its next-generation Extreme Edition CPU in the second half of the year, and the company announced the official name of the mainstream Core 2 Duo chip this month. If you absolutely need more power now, the Athlon 64 FX-62 will deliver. But we feel that it's worth waiting at least a month or two to see what Intel brings to the table.

Intel Core 2 Duo E6700

Intel announced its line of Core 2 Duo desktop CPUs today. If you're buying a new computer or building one of your own, you would be wise to see that it has one of Intel's new dual-core chips in it. The Core 2 Duo chips are not only the fastest desktop chips on the market, but also the most cost effective and among the most power efficient. About the only people these new chips aren't good for are the folks at AMD, who can claim the desktop CPU crown no longer.

We've given the full review treatment to two of the five Core 2 Duo chips. You can read about the flagship Core 2 Extreme X6800 here and the entire Core 2 Duo series here. In this review, we examine the next chip down, the 2.67GHz Core 2 Duo E6700. While the Extreme X6800 chip might be the fastest in the new lineup, we find the E6700 the most compelling for its price-performance ratio. For just about half the cost of AMD's flagship, the Athlon 64 FX-62, the Core 2 Duo E6700 gives you nearly identical, if not faster performance, depending on the application.

It's the first desktop chip family that doesn't use the NetBurst architecture, which has been the template for every design since the Pentium 4. Instead, the Core 2 Duo uses what's called the Core architecture (not to be confused with Intel's Core Duo and Core Solo laptop chips, released this past January). The advances in the Core architecture explain why even though the Core 2 Duo chips have lower clock speeds, they're faster than the older dual-core Pentium D 900 series chips. The Core 2 Extreme X6800, the Core 2 Duo E6700, and the Core 2 Duo E6600 represent the top tier of Intel's new line, and in addition to the broader Core architecture similarities, they all have 4MB of unified L2 cache. The lower end of the Core 2 Duo line, composed of the E6400 and the E6300, has a 2MB unified L2 cache.

We won't belabor each point here, since the blog post already spells them out, but the key is that it's not simply one feature that gives the Core 2 Duo chips their strength; rather, it's a host of design improvements across the chip and the way it transports data that improves performance. And our test results bear this out.

On our gaming, Microsoft Office, and Adobe Photoshop tests, the E6700 was second only to the Extreme X6800 chip. Compared to the 2.6GHz Athlon 64 FX-62, the E6700 was a full 60 frames per second faster on Half-Life 2, it finished our Microsoft Office test 20 seconds ahead, and it won on the Photoshop test by 39 seconds. On our iTunes and multitasking tests, the E6700 trailed the FX-62 by only 2 and 3 seconds, respectively. In other words, with the Core 2 Duo E6700 in your system, you'll play games more smoothly, get work done faster, and in general enjoy a better computing experience than with the best from AMD--and for less dough.

For its own dual-core Athlon 64 X2 chips, AMD tells its hardware partners to prepare for a TDP of between 89 and 110 watts (although its Energy Efficient and Small Form Factor Athlon 64 X2 products, which have yet to hit the market in any quantity, go to 65 and 35 watts, respectively). Intel has caught flak in the past for providing fan makers with inadequate TDP ratings, which resulted in overly noisy fans for the Pentium D chips that had to spin exceedingly fast to cool the chips properly. But the Falcon Northwest Mach V desktop we reviewed alongside this launch came with stock cooling parts. It will be hard to tell exactly how well Intel's provided specs live up to their real-world requirements until the hardware has been disseminated widely, but the fact that a performance stickler like Falcon sent the standard-issue cooling hardware suggests that Intel took note of the problems it had in the past.

As for the surrounding parts, if you already have an Intel-based PC and would like to upgrade, Intel has made it easy. The Core 2 Duo chips use the same Socket LGA775 interface as the Pentium D 900 series. If you have an Intel motherboard using a 965 chipset, you're ready to go with Core 2 Duo and a single graphics card. If you want to run Intel and a dual graphics card configuration, you have two options: Intel's 975 chipsets support ATI's CrossFire tech only, and if you want to run SLI, you'll need a motherboard from Nvidia's nForce 500 for Intel series.

For AMD, the outlook isn't great. Its so-called 4x4 design, which will let you run two Athlon 64 FX-62s in a single PC, might overtake a single Core 2 Extreme X6800 on raw performance. Details are scant about 4x4's particulars, but if a single Athlon 64 FX-62 costs about $1,031, two will have you crossing the $2,000 mark on chips alone, not to mention the motherboard, the size of the case, as well as the cooling hardware required to operate it. AMD says it will drop prices this month to compete on the price-performance ratio. That might make for some compelling desktop deals, but for now, Intel has the superior technology.

Intel Core 2 Extreme X6800

Intel announced its line of Core 2 Duo desktop CPUs today. If you're buying a new computer or you're building one of your own, you would be wise to see that it has one of Intel's new dual-core chips in it. The Core 2 Duo chips include not only the fastest desktop chips on the market, but also the most cost-effective and among the most power-efficient. About the only people these new chips aren't good news for are the folks at AMD, who can claim the desktop CPU crown no longer.

We've given the full review treatment to two of the five Core 2 Duo chips. You can read about the price-performance champ, the Core 2 Duo E6700, here and the entire Core 2 Duo series here. In this review we'll examine Intel's flagship, the 2.93GHz Core 2 Extreme X6800, which is now the fastest desktop CPU you can buy.

The Core 2 Duo represents a new era for Intel. It's the first desktop chip family that doesn't use the NetBurst architecture, which has been the template for every design since the Pentium 4. Instead, the Core 2 Duo uses what's called the Core architecture (not to be confused with Intel's Core Duo and Core Solo laptop chips, released this past January). The advances in the Core architecture explain why even though the Core 2 Duo chips have lower clock speeds, they're faster than the older dual-core Pentium D 900 series chips. The Core 2 Extreme X6800 chip, the Core 2 Duo E6700, and the Core 2 E6600 represent the top tier of Intel's new line, and in addition to the broader Core architecture similarities, they all have 4MB of unified L2 cache. The lower end of the Core 2 Duo line, comprised of the $224 E6400 and the $183 E6300, has a 2MB unified L2 cache.

We won't belabor each point here since the blog post already spells it out, but the key is that it's not simply one feature that gives the Core 2 Duo chips their strength, but rather a host of design improvements across the chip and the way it transports data that improves performance. And our test results bear this out.

The Core 2 Extreme X6800 made a clean sweep of all of our benchmarks. AMD's closest competition, the 2.6GHz Athlon 64 FX-62, came within 5 percent on our iTunes, multitasking, and Microsoft Office tests, but on our Half-Life 2 and Adobe Photoshop CS2 tests, AMD lost badly, by as much as 28 percent on Half-Life 2. At its asking price, Intel's new flagship processor might not be as compelling a deal as the only slightly slower Core 2 Duo E6700, but for enthusiasts and others with the passion and the wallet to ensure that they have the fastest chip out there, the Core 2 Extreme X6800 is now it.

But there's even more to the Core 2 Duo story than performance. One of the key elements of the new chips is their power efficiency. We base our findings on a number called the thermal design power (TDP), which is the number that AMD and Intel each provide to system vendors and various PC hardware makers for determining how much power each chip will require, and thus the amount of heat they'll need to dissipate. On Intel's last generation of dual-core desktop chips, the Pentium D 900s, the TDP rating fell between 95 and 130 watts. But because the Core 2 Duo design incorporates power management techniques from Intel's notebook chips, its power requirements are much more forgiving. All but the Core 2 Extreme X6800 have a TDP of 65 watts, while the Extreme chip itself is only 75 watts.

For AMD, the outlook isn't great at the moment. Its so-called 4x4 design, which will let you run two Athlon 64 FX-62 chips in a single PC, might overtake a single Core 2 Extreme X6800 on raw performance. AMD says it's going to drop prices this month to compete on price-performance. That might make for some compelling desktop deals, but for now Intel boasts the superior technology.

Intel Core 2 Duo E6600

You will need to excuse Intel if it's seen gloating over its new Core 2 Duo processors. Ever since dual-core desktop processors arrived on the scene last year, Intel has taken a backseat to rival AMD. Well, AMD's dominating run has come to a screeching halt with today's release of the Intel Core 2 Duo processors.

We've tested the top two chips, the 2.93GHz Core 2 Extreme X6800 and the 2.67GHz Core 2 Duo E6700, and were blown away by the performance they turned in. Rounding out the line are the 2.4GHz Core 2 Duo E6600, the 2.13GHz Core 2 Duo E6400, and the 1.86GHz Core 2 Duo E6300. Like the top two chips, the E6600 features 4MB of unified L2 cache; the bottom two chips serve up a single 2MB block. All five chips feature a 1,066MHz frontside bus and use Intel's LGA775 socket.
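For quick reference, here is the launch lineup as described in this roundup, collected into a small Python structure. The two prices are the figures quoted in the flagship review above; None marks numbers the text doesn't give.

```python
# Launch lineup as described in this roundup: (model, clock in GHz,
# L2 cache in MB, quoted price in USD or None where no figure is given).
core_2_lineup = [
    ("Core 2 Extreme X6800", 2.93, 4, None),
    ("Core 2 Duo E6700",     2.67, 4, None),
    ("Core 2 Duo E6600",     2.40, 4, None),
    ("Core 2 Duo E6400",     2.13, 2, 224),
    ("Core 2 Duo E6300",     1.86, 2, 183),
]
# All five parts share a 1,066MHz frontside bus and the LGA775 socket.
```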

The Core 2 Duo E6700 looks especially sweet; it won our Editors' Choice award for delivering nearly the performance of the Extreme X6800 at roughly half the cost. Compared to AMD's Athlon 64 FX-62, the Core 2 Duo E6700 turned in better or nearly equal results on all of CNET Labs' benchmarks. AMD price cuts are expected soon (we'll update this page accordingly once they are announced), but for now, the E6700 offers the best price-performance ratio of any desktop chip on the market.

Core 2 Duo should also put to rest another criticism leveled against Intel: that its chips are power-hungry, heat-generating behemoths. System builders had to go to great lengths--more cooling fans, larger heat sinks--to keep those chips running safely. Borrowing from Intel's laptop chips, the Core 2 Duo desktop processors require less power to run and, therefore, put out less heat. You can read more about the improved thermals in either of the full reviews, where you'll also find performance charts that show how the Core 2 Extreme X6800 and the Core 2 Duo E6700 stack up against AMD's two top chips, the Athlon 64 FX-62 and the Athlon 64 X2 5000+.

We expect to see similar performance from the three chips we've yet to test and review. We'd be surprised to see much of a gap in performance between the E6700 and the E6600, since the only difference between the two is clock speed. There might be a bigger jump down in performance when you go from the E6600 to the E6400, because not only is the lower-end chip clocked slower, but it also features half the L2 cache. We expect to test these other three chips soon to give you the full rundown of the entire Core 2 Duo family. But if the first two chips are any indication, any of the Core 2 Duo processors will be an attractive option next time you're in the market for a PC, regardless of your budget.

Intel Core 2 Extreme QX6700

Barely wrapped your brain around dual-core processors? It only gets worse from here, folks. Welcome to quad core, by way of Intel's Core 2 Extreme QX6700. Don't let the "Core 2" fool you (great job, Intel Product Naming department): this new chip has four physical processing cores that make it a multitasking beast. And if you're still stuck doing only one thing at a time on your desktop, the QX6700 promises strong single-application performance as well. We suspect that professionals and forward-looking gamers will be most interested in quad-core chips, and of the pros, digital-media editors might not want to get rid of their Mac Pros just yet; we found that with certain applications, Apple's high-end designer box is faster. At $999, the Core 2 Extreme QX6700 will likely end up in only the most expensive of desktops, but the fact is that the multicore revolution is fully upon us. You might not need a PC with such a pricey chip now, but our testing found that for applications and scenarios that put it to work, Intel's new quad-core chip delivers a real boost in performance.

We spared you the gory chip architecture details in our review of Intel's Core 2 Extreme X6800, and we're going to do the same here. The big news is the doubling of the number of cores to four; the rest of the chip architecture remains largely the same. The key specs of the Core 2 Extreme QX6700 are its 2.66GHz-per-core clock speed and its two separate 4MB L2 cache allotments--giving each pair of cores a 4MB pool to draw upon. That's, logically, twice as much cache as the dual-core Extreme X6800 chip. But if you've been paying attention to recent CPU developments, you might remember that the X6800 actually has a faster clock speed, coming in at 2.93GHz. Here's where multicore CPUs start to complicate our understanding of desktop processors.

If you'll recall, both Intel and AMD have been laying the groundwork to get people away from thinking of raw megahertz as the primary indicator of processor capability. The reason, in a word, is heat: the faster a chip runs, the hotter it becomes. When those Pentium Extreme Edition chips started hitting 3.6GHz and higher, the cumbersome liquid-cooling hardware required to keep them from overheating became a visible, noisy reminder that heat dissipation is a major challenge for system builders. Both AMD and Intel knew this before the Extreme Edition chips came to market, of course, but with the quad-core Core 2 Extreme QX6700, the answer to the problem is even easier to see than it was with dual-core CPUs: rather than make the chips faster, Intel has made them able to do more things at once.
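If you're wondering what "do more things at once" looks like to software, here's a minimal, hypothetical Python sketch (not one of our benchmarks) that splits a CPU-bound job across worker processes, the way a well-threaded application spreads its work over multiple cores; the worker count of four simply mirrors a quad-core part.

```python
# Illustration only: the same CPU-bound work run on one worker, then spread
# across four, roughly mimicking a single-threaded vs. well-threaded app.
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n: int) -> int:
    """A deliberately CPU-bound task: sum of squares up to n."""
    return sum(i * i for i in range(n))

def run(workers: int, chunks: int = 4, n: int = 2_000_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(burn, [n] * chunks))   # four equal chunks of work
    return time.perf_counter() - start

if __name__ == "__main__":
    serial = run(workers=1)    # all four chunks queue up on one core
    parallel = run(workers=4)  # chunks run side by side on four cores
    print(f"1 worker:  {serial:.2f}s")
    print(f"4 workers: {parallel:.2f}s (~{serial / parallel:.1f}x faster)")
```

On a single-core machine the two runs finish in about the same time; the gap only opens up when the operating system has extra cores to hand the work to.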

If you're wondering what kind of performance increase you can expect from the Core 2 Extreme QX6700, we saw dramatic speed increases with multitasking and multithreaded applications compared to Intel's Core 2 Extreme X6800 and AMD's Athlon 64 FX-62--the fastest dual-core chips Intel and AMD had to offer, respectively. Apple's Mac Pro, however, presents a different story. Our Apple test bed has two dual-core Xeon 5160 chips, each running at 3.0GHz. That makes its raw CPU speed faster than that of the Core 2 Extreme QX6700. On some of our apps--iTunes and Photoshop in particular--differences between running the programs on Windows XP and Apple OS X likely affect performance, but it's worth noting that even with a slower hard drive, the Mac Pro outpaced the Core 2 Extreme QX6700 chip on a number of tests, likely due to its clock speed advantage.

It seems to us that the performance takeaway is this: for Windows users who can afford it, the Core 2 Extreme QX6700 is the way to go for the fastest PC today. As our single-core CineBench scores show, you might run into some apps that benefit more from raw clock speed than from having multiple cores, but in general, we haven't seen a faster desktop chip. Professionals who have the luxury to choose among platforms, though, are probably better off sticking with a Mac Pro, all other things being equal. We imagine that due to its partnership with Intel, Apple will be updating the CPUs in its high-end desktop in the near future, so it's not hard to fathom a Mac Pro with one or even two quad-core chips. Just because the current two-chip, dual-core Xeon design isn't quite a true "quad-core CPU," Mac loyalists shouldn't feel like they're limiting themselves.
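For the curious, here's a rough Amdahl's-law sketch (a model, not a benchmark) of why that takeaway holds. It uses only the clock speeds and core counts quoted in this review and estimates which chip comes out ahead as the share of a workload that can run in parallel grows; those parallel fractions are assumptions chosen for illustration.

```python
# Toy model: relative speed is clock rate scaled by Amdahl's-law speedup.
# It ignores cache, memory, and everything else real benchmarks measure.
def relative_speed(clock_ghz: float, cores: int, parallel_fraction: float) -> float:
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

chips = {
    "Core 2 Extreme X6800 (2 cores @ 2.93GHz)": (2.93, 2),
    "Core 2 Extreme QX6700 (4 cores @ 2.66GHz)": (2.66, 4),
}

for p in (0.0, 0.5, 0.9):  # assumed parallel share of the workload
    scores = {name: relative_speed(ghz, n, p) for name, (ghz, n) in chips.items()}
    winner = max(scores, key=scores.get)
    print(f"parallel fraction {p:.0%}: {winner} comes out ahead")
```

With no parallel work at all, the higher-clocked dual-core wins, which squares with what the single-core CineBench scores suggest; once most of the job can be split up, the quad core pulls ahead.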

But say you wanted to build your own quad-core PC. You won't be able to purchase the Core 2 Extreme QX6700 until November 14, and on that date, you'll also have to decide between building a system on your own and buying one from Dell, Gateway, Velocity Micro, or any of the other typically high-end PC vendors. If you do go it alone, you'll need an Intel 975XBX2-based motherboard. As it did with the original Core 2 Duo chips, we expect Nvidia to have a compatible motherboard chipset for sale as well, but as of November 1, it hadn't announced anything. Neither Intel's nor Nvidia's previous Core 2 Duo-supporting chipsets are compatible with the Core 2 Extreme QX6700, so if you recently purchased such a motherboard, you'll need to upgrade. Memory support officially includes 533MHz and 667MHz DDR2 SDRAM, with unofficial support for faster 800MHz DDR2 RAM.

You also need to consider power management. Intel claims a 130-watt thermal design power (TDP) rating for the Core 2 Extreme QX6700. That's almost twice as much as the Core 2 Extreme X6800's 75-watt TDP. That number is an outer-limit rating, meaning that fan and heatsink makers should design their parts to dissipate the heat of a 130-watt part, but in most cases the chip isn't going to get that hot. We suspect that Intel might be leaving room for overclocking here as well. The new built-in digital thermometer also seems particularly overclocking friendly. The sample motherboard and fan we received didn't support the new thermometer, but Intel informed us that production boards will ship with that feature fully enabled. It's also worth noting that mainstream vendor Gateway is selling its new Core 2 Extreme QX6700-equipped FX530XL desktop factory-overclocked, and the overclocked parts are under warranty. If a volume producer such as Gateway is willing to back overclocking on this chip, we have to believe that it has plenty of headroom to grow.

If you're wondering what the future of quad-core processing looks like, AMD's 4x4 solution, which pairs two dual-core CPUs, sits on the horizon. We've talked to a number of system vendors, however, who share our reservations about the price-performance and thermal issues of a two-chip solution. We'll give AMD the benefit of the doubt until we have 4x4 in our hands and have had an opportunity to test it. We also expect that both Intel's and AMD's quad-core designs will trickle down to mainstream-priced chips before too long. Don't expect it to end there, though: Intel has already announced an eight-core server chip on its road map.

AMD Athlon 64 X2 5000+

Along with last week's Athlon 64 FX-62 CPU and Socket AM2 platform announcements, AMD introduced a more mainstream dual-core chip, the Athlon 64 X2 5000+. At $696 (according to AMD's pricing per 1,000 units), the X2 5000+ has a lot of performance to offer for the price, stacking up well alongside AMD's pricier Athlon 64 FX-60 and FX-62 CPUs, as well as Intel's most advanced desktop chip, the Pentium Extreme Edition 965. If Intel weren't close to announcing a major overhaul of its CPU lineup in the coming months, we'd be able to provide a clearer recommendation for the Athlon 64 X2 5000+. As it stands, powerful though it is, we suggest you hold off on purchasing such an expensive chip until we know what Intel's next-generation Core 2 Duo processors will bring to the computing table.

Despite the impending Intel announcement, the X2 5000+ has plenty of merit. Compared to everything else in the field right now, the X2 5000+ will serve everyone but demanding gamers well. At 2.6GHz per core, it's faster than all of AMD's original X2 series of dual-core CPUs. It was also announced on the same date as the aforementioned Socket AM2 platform for a reason.

The new AM2 socket brings all of AMD's CPUs onto an updated motherboard platform, although the company needs to reissue separate AM2 versions of the old Socket 939 chips. The X2 5000+, however, is Socket AM2 only. About all that really means is that you'll need to buy a new motherboard (Socket AM2 and Socket 939 aren't cross-compatible) and new DDR2 memory, since AM2 boards don't use older DDR memory. Aside from the memory switch, the only other major advantage of the new platform is reduced power consumption. Whereas the highest-end X2 chip on Socket 939, the 4800+, required 115 watts from your power supply, the X2 5000+ (and the AM2 version of the 4800+) needs only 89 watts. While we appreciate the improvement, it will really benefit you only if you're building a PC with multiple high-end graphics cards.

Platform updates aside, the real news about the X2 5000+ is its performance. It performed so well that about the only task we don't recommend it for is extreme gaming. Otherwise, it will give you fast performance at a significant cost savings. The best example is our multitasking test, where the X2 5000+ finished a few seconds faster than the Athlon 64 FX-60, which costs roughly $125 more. And even where it didn't win, the X2 5000+ turned in strong scores: both its SYSmark 2004 scores and its times on our multimedia tests trailed the FX-60 only slightly. For Intel's part, its $1,100 Pentium Extreme Edition 965 chip trailed the X2 5000+ on all but the stand-alone DivX 6.2 encoding test. In other words, the X2 5000+ is a great choice for digital content creation and fast day-to-day computing.

In our gaming test, the X2 5000+ didn't fare as well as it did on the other benchmarks. It was still faster than Intel's 965 chip, but it fell significantly behind AMD's FX chips. In fairness, its 97.4 frames-per-second score on our Half-Life 2: Lost Coast demo is still very strong, so you won't be disappointed if you use this chip for gaming. Just know that AMD has faster options in its FX line if your chief concern is 3D games.

If all of that sounds like a resounding endorsement of AMD's new chip, we have to throw in a caveat. You see, way back in March, enthusiast site Anandtech was able to get some Intel-guided hands-on time with Intel's new Core 2 Duo desktop chips (then code-named Conroe). The testing was admittedly dubious, conducted as it was on systems set up by Intel, but it provided enough of a glimpse at the future to suggest that you at least wait and see what Intel's next-generation chips have to offer before buying an expensive new CPU from either company. They're due out soon, so we suggest you keep an eye out for our full-fledged, unbiased benchmark testing.