Components

Tuesday, June 19, 2007

HP - xw6400

HP's latest personal workstation series, the xw6400, is aimed at mid-range business users and, housed in a new mini-tower case, it's ideal where desk space is at a premium yet a system with plenty of grunt is still required.

It looks quite smart in its grey and metallic silver finish and, measuring just 441 x 165 x 440mm, the case has been designed to fit in a rack as well as on or under a desk, and its tool-free design should delight most IT managers.

With such a small case, compromises have to be made and in this instance one compromise is the number of drives you can fit into the xw6400. As standard there are two internal 3.5-inch drive bays plus a third with external access, while there are just two 5.25-inch bays.

Our review sample came with both internal drive bays filled by 500GB Seagate hard drives, giving a terabyte of internal storage, so there is an argument that this should be sufficient; if not, HP offers 750GB drives as an alternative. An LG 16x LightScribe multi-format DVD burner filled one of the two larger bays.

As you might expect, the xw6400 uses an HP-branded motherboard in which sits a quad-core Xeon, an E5310 to be exact, which is clocked at a lowly 1.60GHz but has an FSB speed of 1,066MHz and 8MB of L2 cache. This gives the xw6400 a disappointing SYSMark04 SE overall score of just 260, but that's not the whole picture: the system is aimed at the professional user, so of more importance is what the four cores and the 8MB of L2 cache allow you to do, as demonstrated by the very good Cinebench 9.5 multi-CPU rendering test score of 860.

Backing up the CPU is Intel's latest server/workstation Northbridge chipset, the 5000X, which can support up to 32GB of memory; with four memory modules installed, quad-channel memory operation is available too. Unfortunately, HP's motherboard can only support a maximum of 16GB; the review system came with 4GB of PC2-5300 DDR2 memory installed in two of the four DIMM slots.

As you might expect from a workstation, the graphics card isn't the usual consumer 3D card but a professional part, an Nvidia Quadro FX 1500 with 256MB of GDDR memory. The motherboard has two x16 PCI-E slots, so you can add a second card at a later date to take advantage of SLI technology, though not at full speed: when running in SLI mode the PCI-E bus runs at x8 speed for each card.

HP supplied a 20-inch TFT display with the system, with a handy native resolution of 1,680 x 1,050 pixels, along with an HP-branded USB keyboard and mouse combo to round out the hardware package.
HP - xw6400 - Verdict

This is an ideal system if you are looking to upgrade from an entry-level workstation, and its small format makes it well suited to the smaller office, although on the flip side it does mean that your options are limited if you want to expand the hardware.

Friday, June 8, 2007

Toshiba HD-E1 HD-DVD Player

While there is no clear winner in the home high-definition optical storage feud yet, Toshiba has reaffirmed its direction with the release of the HD-E1, an affordable HD-DVD player for your next home entertainment revamp.

Overall, we have no doubt in our mind that the Toshiba HD-E1 HD-DVD player will do a great job at providing 1080i video goodness. What's more, with support for the next generation of home cinema standards, easy firmware updating and USB expansion slots, this latest HD-DVD player has longevity printed all over it.

Now, an estimated retail price of USD$645 (SGD$999) may be a little hard to stomach for most, but compared to rival Blu-ray players it isn't really too much to ask to equip yourself with high-definition entertainment. So, if you want an HD player that will make every movie a perfect audio and video sensory experience, then the HD-E1's arrival (sometime around the second or third quarter of 2007) will be well worth the wait.

HP Unveils External HD DVD ROM

HP, a leading supplier of personal computers, has announced the world's first HD DVD-ROM (read-only memory) device for personal computers. Even though the device will interest those looking to add high-definition DVD playback capability to their desktop, notebook or media center personal computers (PCs), the price of the device, at least in the UK, does not seem truly affordable.


HP's hd100 external optical drive can read and play back CDs (up to 14x), DVDs (up to 5x) and HD DVDs (at up to 2.4x speed), and supports various CD and DVD standards. The drive uses the USB 2.0 interconnection standard but still requires an additional power supply unit, which HP provides. HP will also bundle CyberLink player software with the hd100 external drive.


The supplier advises that users looking for HD DVD playback should have a powerful dual-core central processing unit, such as an AMD Athlon 64 X2 4200+, an Intel Pentium D 945 or something more advanced. The company also recommends that users ensure their graphics card is at least as powerful as an ATI Radeon X1600 or Nvidia GeForce 7600 GT. HP notes that the graphics card and monitor should be HDCP-compliant, even though there are unofficial reports that this is not currently a compulsory requirement for high-resolution HD DVD playback.

HP has long been a strong backer of the Blu-ray Disc standard; however, after its proposals were rejected by the Blu-ray Disc Association back in 2005, it decided to support HD DVD as well. Currently the company also sells the HP Pavilion dv9000t laptop with a built-in HD DVD drive.

Even though the hd100 external drive seems to be a good solution for enabling HD DVD playback on the PC, according to a news story on the PC Pro website the part is going to retail for £399 (€588, $771) in the UK, which makes it as expensive as Toshiba's HD-E1 HD DVD player, set to retail for €599.

Friday, June 1, 2007

MSI GeForce 8500 GT

The main difference between the GeForce 8 and GeForce 7 families is the GeForce 8 family's adoption of DirectX 10. This means these cards will support the next generation of games to be released starting this year. It also means that instead of using separate shader units for each kind of shader processing (pixel, vertex, physics and geometry), video cards from this family use a unified shader architecture, where the shader engines can process any one of these tasks.

So far AMD has announced its ATI Radeon HD 2000 family – which also supports DirectX 10 and uses a unified shader architecture – but its mid-range products will only be available in late June, i.e. one month from now. This leaves mid-range cards from the GeForce 8 family, like the GeForce 8500 GT, without real direct competitors.

This MSI model can be found for around USD 100, so at this price point the GeForce 8500 GT competes with the ATI Radeon X1300 XT.

The GeForce 8500 GT runs at 450 MHz and accesses its 256 MB of DDR2 memory at 800 MHz (400 MHz transferring two data words per clock cycle) through a 128-bit interface, so it can access its memory at a maximum theoretical transfer rate of 12.8 GB/s.
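As a sanity check, that bandwidth figure follows directly from the bus width and the effective memory clock. A minimal sketch of the arithmetic (theoretical peak only; sustained throughput will always be lower):

```python
# Rough sketch: theoretical memory bandwidth = bus width (bytes) x effective clock.
# Figures are the GeForce 8500 GT specs quoted above; GB here means 10^9 bytes,
# the convention GPU vendors use when quoting bandwidth.

bus_width_bits = 128
effective_clock_mhz = 800          # 400 MHz DDR2, two transfers per cycle

bytes_per_transfer = bus_width_bits / 8                         # 16 bytes
bandwidth_gb_s = bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

print(f"Theoretical bandwidth: {bandwidth_gb_s:.1f} GB/s")      # 12.8 GB/s
```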

It has only 16 shader processors running at 900 MHz (the GeForce 8600 GT and GTS have 32 shader processors).

  • Graphics chip: GeForce 8500 GT, running at 450 MHz.
  • Memory: 256 MB DDR2 memory (2.5 ns, 128-bit interface) from Hynix (HY5PS561621AFP-25), running at 800 MHz (400 MHz DDR).
  • Bus type: PCI Express x16.
  • Connectors: One DVI, one VGA and one S-Video output supporting component video.
  • Video Capture (VIVO): No.
  • Number of CDs/DVDs that come with this board: Two.
  • Games that come with this board: Toca Race Driver 3 (full).
  • Programs that come with this board: None.

Even though this competitor from AMD/ATI does not feature a Shader 4.0 unified engine – i.e. it does not support DirectX 10 – in our benchmarks it achieved better results than the reviewed video card. It is important to note that the Radeon X1300 XT model we compared the GeForce 8500 GT to featured GDDR3 memory running at 1 GHz, while there are models on the market with DDR2 memory running at 800 MHz.

    Radeon X1300 XT was between 4.27% and 9.98% faster on 3DMark03 with no image quality settings enabled, but when we enabled anti-aliasing and anisotropic filtering, these two video cards achieved a similar performance at 1600x1200, with GeForce 8500 GT being 5.82% faster than Radeon X1300 XT at 1024x768.

For the kind of user this video card targets – someone willing to spend only around USD 100 on a video card – we think the Radeon X1300 XT with GDDR3 memory is a better buy.

Also, if you have USD 20 more to spend, we highly recommend the Radeon X1650 Pro video card: by spending only 20% more you get up to 79% more in performance. That is definitely the kind of deal we are looking for!
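To make that value argument concrete, here is a rough price/performance sketch based on the figures quoted above; the normalised performance numbers and the exact prices are illustrative placeholders rather than measured data.

```python
# Illustrative only: relative value of the two cards based on the quoted
# "20% more money for up to 79% more performance" claim.

price_8500gt = 100.0                    # approximate street price quoted above (USD)
price_x1650pro = price_8500gt * 1.20    # "spending only 20% more"

perf_8500gt = 1.00                      # normalised performance (placeholder baseline)
perf_x1650pro = 1.79                    # "up to 79% more in performance"

for name, price, perf in [("GeForce 8500 GT", price_8500gt, perf_8500gt),
                          ("Radeon X1650 Pro", price_x1650pro, perf_x1650pro)]:
    print(f"{name}: {perf / price * 100:.2f} performance per dollar (relative)")
# Under these numbers the X1650 Pro works out to roughly 1.5x the value per dollar.
```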



    ViewSonic VG930m

ViewSonic's VG930m is a good, utilitarian display, though it has some design quirks.

    The VG930m performed well in the text section of our image quality tests, showing particularly sharp text in a Microsoft Word document and on a page of multisize fonts. In the graphics portion of our tests, however, the VG930m didn't make as strong a showing. Colors seemed slightly bland and muted, though some hues in a photo of a fruit tart looked realistic.

    You can access the VG930m's on-screen adjustments via buttons that are inset into the side of the bezel. Though options are not plentiful, we had no trouble maneuvering through the menus to make changes.

    The VG930m tilts, but it rocks a bit when it's moved. Swiveling the model entails moving the base as well, though it does swivel smoothly. The display does not pivot. Speakers built into the bottom of the display are adequate for system sounds, but you'll want to invest in a set of stand-alone speakers for movies and music. The VG930m does not have built-in USB ports.

    The accompanying user guide is on a CD. It explains what each on-screen adjustment does and how to use it, as well as providing general troubleshooting information.

    Overall, the VG930m's strong representation of text makes it a good candidate for general office work.

    Size (inches): 19
    Resolution (pixels): 1280 by 1024
    Contrast Ratio: 700:1
    Adjustments: Multiple adjustments
    Weight (pounds): 13.3
    Interfaces: Analog and digital

    Samsung SyncMaster 305T

    The SyncMaster 305T performed well in our image quality tests, showing sharp text and nuanced color. Jurors were impressed with the 305T's rendering of text in a Microsoft Word document, as well as in a screen of repeating E's and M's.

    The 305T showed a couple of photos in which colors appeared a bit muted in comparison with the other displays in the test group, but color representation seemed good overall.

    This SyncMaster has a smooth black bezel and a circular stand. It tilts and swivels smoothly, though it is not height-adjustable, and it does not pivot. It is wall-mountable.

The only control it has to adjust the screen is one for brightness--buttons on the front of the bezel increase and decrease the brightness of the display, and that's it. Currently no on-screen display (OSD) components exist that support the 305T's high resolution of 2560 by 1600, but Samsung states that future versions of this model should include an OSD for adjustments.

    Two downstream USB ports reside on the back of the display. Samsung also includes a power-saver mode--this switches the monitor to a low-power status when it has not been in use for some time.

    At $1800, this SyncMaster is certainly pricey, but it falls well within the range of what other 30-inch LCDs cost. If you seek a monitor on which to spread out and see lots of documents at once, the 305T would be a fine choice.

    Size (inches): 30
    Resolution (pixels): 2560 by 1600
    Contrast Ratio: 1000:1
    Adjustments: Multiple adjustments
    Weight (pounds): 26.5
    Interfaces: Digital only

    HP LP3065

    HP's 30-inch LCD monitor, the LP3065, turned in impressive image quality in our tests. It also has a nice design and unique features that set it apart. Like other 30-inch displays, it's pricey, but it might be worth it if your work demands lots of screen real estate.

    The LP3065 performed well in our text and graphics tests, showing extremely sharp text in both a Microsoft Word document and an Excel spreadsheet. Black text on a white background was quite easy to read, even at very small font sizes. However, because of the display's high resolution of 2560 by 1600, icons and text can appear rather small. You can remedy this by increasing the font display size in Windows.

    Colors on the LP3065 looked bright and lively, both in photographs and on a screen of a Web site. Again, its high resolution helped the display excel at showing fine detail.

With three DVI ports, the LP3065 is more versatile than other displays--it can be hooked up to several PCs at once using a digital connection, which is a nice extra. All of the ports are dual-link DVI, so they all require a graphics card with dual-link DVI output. If you don't already have such a graphics card, you'll have to shell out for one to accommodate this monitor.

    The LP3065 lacks an on-screen display (OSD) for adjusting the image--because no components yet exist that support its high resolution. HP states that future versions of the LP3065 will include an OSD. The only adjustment you can make is to brightness--several buttons on the front of the unit dial brightness up or down.

    The monitor has a nice streamlined design, with a thin black bezel and a silver stand. It can tilt and swivel, but not pivot. It's also height-adjustable and wall-mountable.

    But the LP3065 costs $1699, which is quite a bit for a display. Granted, you do get excellent image quality and lots of room to spread out in, and if you crave viewing detail in your photos and documents, the LP3065 might be worth the splurge.

    Size (inches): 30
    Resolution (pixels): 2560 by 1600
    Contrast Ratio: 1000:1
    Adjustments: Multiple adjustments
    Weight (pounds): 29.5
    Interfaces: Digital only

    Sunday, May 27, 2007

    Asus M2A-VM HDMI (AMD 690G)

    ASUS takes a break from its barrage of Intel P965 motherboard SKUs over the past few months to focus on AMD's new chipset, but unlike Sapphire, ASUS does what they do best, and that's to specialize. Instead of one board to fit every usage model, ASUS has two variants of the AMD 690G: The no-frills M2A-VM for the more business minded and the entertainment focused M2A-VM HDMI. Both boards are essentially designed on the same PCB; the only difference being that the latter features additional FireWire support as well as being bundled with additional audio and video connectors. In this review, we take a look at what the M2A-VM HDMI has to offer.

    PCMark05's results on the other hand, were more favorable to the ASUS M2A-VM HDMI than SYSmark was. The board tied with the Sapphire board in the CPU subsystem performance scores, and while its memory performance fell below the Sapphire Pure Innovation HDMI, the ASUS still managed to outdo the ECS. Lastly, we see that all three boards share similar HDD performance numbers.

In our Sapphire Pure Innovation HDMI review, we mentioned the versatility of the AMD 690G chipset and how its features can be configured for the home theater generation or purely for business. ASUS is one of the manufacturers we knew would make use of this and develop specific products to cater to the different sectors. However, is it worth the trouble of pushing out two product lines for a mainstream chipset such as the AMD 690G? The M2A-VM HDMI is definitely the more interesting of the two, but its single defining feature is an add-on module that would probably work on either board. Together, the connection options on the ASUS M2A-VM HDMI are able to match the Sapphire board, but connectivity comes at a premium. The regular M2A-VM retails for around US$70 today, while the M2A-VM HDMI averages about US$20 more - just for the extra HDMI module and added FireWire connectivity.

The motherboard itself is a very well packaged product, and we've come to expect no less from ASUS. They've done a great job with its design and layout, avoiding potential cable, expansion and heat problems that might arise in the smaller micro ATX form factor. The Northbridge does tend to get very warm under load, but the board seems to take things in stride. We did not encounter any compatibility or stability issues with the M2A-VM HDMI, even under heavy benchmarking loops. While the high-end motherboard market has been evangelizing the use of newer materials such as solid capacitors and digital PWM circuitry, the M2A-VM HDMI does it the old-school way, and we're quite happy that it works just as well. The only gripe we have against the M2A-VM HDMI is the lack of proper 8-channel analog audio jacks, which puts regular PC users at a disadvantage.

We were a little disappointed at first that ASUS did not include any memory tweaking options for the board, but since performance and overclocking aren't its main attraction, we let the matter slide. At the very least, ASUS does allow some rudimentary voltage adjustment and overclocking to take place. Performance-wise, the M2A-VM HDMI managed to be consistent across our benchmarks, though the Sapphire Pure Innovation HDMI takes the top spot in every test. Of course, when you benchmark a board that is designed to run at stock configurations (ASUS M2A-VM HDMI) against one that has a comprehensive tweaking BIOS (Sapphire Pure Innovation HDMI), there will be an expected gap between the two.

Comparisons with the Sapphire Pure Innovation HDMI will likely be the topic of the day when it comes to AMD 690G motherboards, and while Sapphire may have stolen the thunder from the larger motherboard manufacturers, the ASUS M2A-VM HDMI is still a very well designed and well implemented motherboard for its target market as an OEM or home entertainment platform.

    Gigabyte GA-P35-DS3R

This week Intel launched a new core logic set that will support the upcoming Penryn processors and the promising DDR3 SDRAM. Today we would like to introduce you to one of the first mainboards based on this chipset, which appears to be a very promising platform. Read more in our new article!

The launch of the new processor family based on the Core microarchitecture, which keeps expanding into different market segments, has strengthened Intel's position even more. These CPUs are currently extremely popular, which is not surprising at all, as they offer today's best combination of consumer features. Trying to secure its leading position and increase its influence in the processor market, Intel continues growing the processor family, releasing new CPU models aimed at lower as well as upper market segments.


As we have seen, the manufacturer has paid special attention to inexpensive processor models lately. The price of the youngest Core 2 Duo solutions has dropped to the $100 range, which has pushed the competition back quite noticeably.

However, at the same time Intel certainly doesn't forget about its high-performance solutions either. This summer we should welcome dual-core Core 2 Duo and quad-core Core 2 Extreme processors working at higher clock speeds and supporting a faster processor bus – the 1333MHz Quad Pumped Bus.
Of course, the increase in the bus frequency of Intel's flagship processors requires Intel to make sure that the proper infrastructure is in place. First of all, it implies the launch of new core logic sets, as the existing LGA775 chipsets, i975X and iP965, officially support only the 1067MHz Quad Pumped Bus. No wonder the new chipsets from this family are already appearing on the market: Intel P35 and the integrated Intel G33 have already been launched.

As for the supported memory types, Gigabyte engineers decided not to introduce the innovative DDR3 interface on their GA-P35-DS3R mainboard. At this time, this is a totally justified decision, because this memory is not available in retail yet. Even when it starts selling, its price will evidently be higher than that of DDR2 SDRAM, even though there will be no evident performance advantage at first; the only factor pushing up the price will be the fact that it is a new product.

    As a result, Gigabyte GA-P35-DS3R features four traditional DDR2 SDRAM slots, like many other mainboards on older iP965 and i975X chipsets. Like many other mainboards, our hero can perform at its best with dual-channel DDR2 SDRAM. Therefore, DIMM slots on the mainboard PCB are color-coded, indicating how the module pairs should be installed for maximum performance.

As for additional controllers, the mainboard has a network chip and a chip providing a Parallel ATA channel and two additional Serial ATA channels. Gigabyte engineers chose a PCI Express x1 Gigabit LAN controller from Realtek – the RTL8111B. The additional ATA controller is a PCI Express x1 Gigabyte SATA2 chip. It provides the board with a Parallel ATA-133 channel, because the chipset doesn't support the Parallel ATA interface. However, besides PATA, this chip also supports two Serial ATA-300 channels that can also be put to good use.

So, the board ends up having 8 Serial ATA channels (with NCQ support and 3Gbit/s data transfer rate): 6 of these ports are connected to the ICH9R and the remaining 2 to the external controller chip. Both the integrated ICH9R ATA controller and the Gigabyte SATA2 chip allow creating RAID 0 and 1 arrays. The ICH9R also supports RAID 0+1 and 5 arrays and Matrix Storage Technology.

    I would like to give Gigabyte engineers kudos for eSATA interface implementation. Gigabyte GA-P35-DS3R doesn’t have the corresponding ports on the rear panel, as we would see in most cases, but it features two ports laid out on a separate bracket included with the board.

As for the expansion slots, Gigabyte GA-P35-DS3R offers a pretty good list of them. Besides the PCI Express x16 graphics card slot, the mainboard also carries three PCI Express x1 slots (one of them may be blocked by the graphics card cooling system) and three PCI slots. Unfortunately, Gigabyte engineers decided not to equip their mainboard with a second PCI Express x16 slot physically connected to a PCI Express x4 bus. It means that this mainboard is not compatible with ATI CrossFire technology.

    As for the Gigabyte GA-P35-DS3R mainboard that we have reviewed today, it is one of the first solutions on the new Intel P35 chipset to appear in the market and certainly deserves your attention. It is a good alternative to iP965 based mainboards. It supports widespread DDR2 SDRAM, but at the same time offers better consumer features and specifications. Moreover, Gigabyte GA-P35-DS3R did very well in our CPU overclocking tests and performed at a very high level in nominal work mode.

However, as we have already mentioned, this mainboard still has some frustrating drawbacks, such as limited memory overclocking and less than stellar performance with the FSB set above nominal. Hopefully Gigabyte engineers will take our comments into account when working on new mainboard revisions and modifications.

Summing up, let me once again list all the pros and cons of the new Gigabyte GA-P35-DS3R mainboard.

    MSI P35 Platinum (Intel P35)

    MSI is one of the top tier motherboard manufacturers in the consumer PC industry, but there is no mistaking 2006 as the year that ASUS and Gigabyte took most of the limelight. Gigabyte's marketing machine pushed the whole solid capacitor design into high gear while ASUS flooded the market with specialized target focus motherboard models such as the TeleSky and Republic of Gamers. Both manufacturers also drew a lot of attention with an announced alliance (which didn't last), but most importantly, they made exciting motherboards that kept the market fresh. MSI on the other hand, was focusing more on their graphics card business.

    The P35 Platinum is based on the P35 and ICH9R chipset combination, which gives the board AHCI SATA and RAID capabilities in addition to its regular features. MSI makes use of the eSATA feature of the Southbridge though and uses two ports as dedicated eSATA connectors in the rear panel, leaving only four ports to be used as internal HDD connectors. As the ICH9R also does not have any IDE support, MSI uses a Marvell 88SE6111 controller to make up for one Ultra ATA port and one extra SATA 3.0Gbps port.

There is one Gigabit LAN port onboard powered by a Marvell 88E8111B PHY and two FireWire-400 ports via a VIA VT6308P controller. The most interesting component here is the use of the Realtek ALC888T HD Audio CODEC, instead of the standard ALC888 we saw in the preview. You see, the ALC888T has a special VoIP switching functionality that can automatically switch between VoIP and PSTN connectivity in the event of a power failure. This of course requires some kind of VoIP add-on card, a handset and a phone line to take advantage of, since the board doesn't have one built-in. MSI will actually be introducing such an add-on card very soon that works with Skype, called the SkyTel, but we have the impression that the SkyTel card will have its own audio chip, so how it interfaces with the ALC888T remains to be seen.

For enthusiasts, the P35 Platinum comes with a total of six onboard fan connectors, a quick CMOS clear button and a small row of debug LEDs. MSI has in the past made use of its D-Bracket debug system, which is still a component of the board, but this new row of LEDs is so much more useful. You can check out the row of mini debug LEDs just next to the blue SATA connector on the PCB.

Most of the components you find on the P35 Platinum aren't all that different from any other high-end P965 board you can get today, as the P35 chipset doesn't really add to the component count, but MSI did make use of the eSATA port multiplier for the board, and while it does reduce the internal SATA capabilities, the board will provide an extensive range of plug-and-play external high-speed connections. The improved audio chipset used in the retail motherboard also opens up possibilities of additional functionality when the SkyTel add-on card comes out. We also like how MSI provides six USB 2.0 ports by default, but the chunky rear panel is really quite hideous (then again, that's basically our only rant with the board).

    Performance-wise, the P35 Platinum's results from our benchmarking run turned out pretty well. While we weren't expecting any phenomenal scores from the new P35 chipset, it did surprise us quite a bit with a very strong SYSmark 2004 performance. Overall, the P35 Platinum proved to be quite consistently better than a reference P965 and able to keep up with the NVIDIA nForce 680i SLI. With lower chipset TDPs, we had expected better overclocking potential from the P35 Platinum, but 470MHz isn't exactly shabby, considering that 1333MHz (333MHz) FSB is being hyped as the next big thing. You can go way beyond that point with simple air cooling.

Although this is the first Intel P35 motherboard we've reviewed, the MSI P35 Platinum proves to be a very well built and well rounded motherboard. MSI plays to the strengths of the chipset and delivers a solid entry into the market. If you aren't planning a total overhaul of your PC, DDR2 is still here to stay for a while yet. With dropping prices, setting up a 4GB or higher capacity rig isn't so hard anymore, and boards like the P35 Platinum will probably let you extend the life of your memory for another year or so. Of course, if you really must have only the latest and greatest, the one factor that remains to be seen is how the DDR3 variant (MSI P35 Platinum D3) will fare in comparison, but that is another board for another day.

eVGA 122-CK-NF66-A1 nForce 650i

    The 680i SLI motherboards were launched with a tremendous public relations effort by NVIDIA back in November. There was a lot of hype, speculation, and fanfare surrounding NVIDIA's latest chipset for the Intel market, and it promised an incredible array of features and impressive performance for the enthusiast. At the time of launch we were promised the mid to low range 650i SLI and Ultra chipsets would be shipping shortly to flesh out NVIDIA's Intel portfolio. NVIDIA had plans to truly compete against Intel, VIA, ATI, and SIS in the majority of Intel market sectors within a very short period of time after having some limited success earlier in 2006 with the C19A chipset.

However, all of this planning seemed to unravel as the weeks progressed after the 680i launch. It seemed as if NVIDIA's resources were concentrated on fixing issues with the 680i chipset instead of forging ahead with their new product plans. Over the course of the past few months we finally saw the 650i SLI launched in a very reserved manner, followed by the 680i LT launch that offered a cost-reduced alternative to the 680i chipset. While these releases offered additional choices in the mid to upper range performance sectors, we still did not know how well or even if NVIDIA would compete in the budget sector.

All of the chipsets offer support for the latest Intel Socket 775 processors along with official 1333FSB operation for the upcoming 1333FSB-based CPUs. The 650i SLI and 650i Ultra chipsets are based on the same 650i SPP and utilize the nF430 MCP. The only differentiator between the two is how this SPP/MCP combination is implemented on a board, with the 650i SLI offering SLI operation at 8x8 compared to the single x16 slot on the 650i Ultra.

    Other differences between the chipsets center on the features that the 680i offers that are not available on the 650i. These features include two additional USB 2.0 ports, two additional SATA ports, an additional Gigabit Ethernet port, Dual-Net technology, and EPP memory support. Otherwise, depending upon BIOS tuning, the performance of the chipsets is very similar across a wide range of applications, with overclocking capabilities being slightly more pronounced on the 680i chipset. In our testing we have found that the other chipsets also offer very good overclocking capabilities with SLI performance basically being equal at common resolutions on supported chipsets.

    The overclocking aspects of the board are terrific considering the price point and with the asynchronous memory capability you can really push the FSB while retaining budget priced DDR2-800 memory in the system. This is one area where NVIDIA has an advantage over Intel in this price sector as the P965 boards are generally limited to 400FSB and less than stellar memory performance. We typically found that 4-4-4-12 1T or 4-4-3-10 2T timings at DDR2-800 offered a nice balance between memory price considerations and performance on this board.

We feel that the EVGA 650i Ultra offers a high degree of quality and performance. This board is certainly not perfect, nor is it designed for everyone, but it offers almost the perfect package in an Intel market sector that has not had anything really interesting to talk about for a long time. We are left wondering why NVIDIA chose the silent path to introduce this chipset when it's obvious they really have something interesting to discuss this time around.

    AMD Athlon 64 FX-74 4x4

It has been a little over seven months since officials at AMD first discussed their upcoming quad-processing solution with us. Back then details were still pretty hush-hush; all we were told was that their upcoming technology was intended to roll over the competition from Intel, hence the 4x4 codename. Like a real 4x4, this was intended to be a strong performer, only AMD would be relying on two processor sockets to achieve this performance rather than one.

    Over the following months, AMD revealed more details on their 4x4 platform, including the fact that 4x4 CPUs would be sold under the high-end FX brand, and that 4x4 CPUs would be sold in pairs, with kits starting “well under $1,000”. AMD also committed to supporting cheaper unbuffered memory and tweakable motherboards that offered a range of HyperTransport speeds and other BIOS options for CPU and memory overclocking. AMD also concocted a new nickname for their 4x4 technology: the quadfather.

    Today marks the official introduction of AMD’s quad-processing technology and it is indeed quite a performer under the right situations. Before we get into that though, let’s first discuss why AMD felt now was the time for four processing cores…

With so few games taking advantage of dual-core processors, much less four cores, many of you have questioned why AMD has been targeting 4x4 at hardware enthusiasts and the hardcore gaming crowd.

The reasoning is simple: while it's true that most of today's games don't take advantage of multi-core processing right now, console gaming has accelerated multi-core development. There are already millions of dual-core CPUs out there in the PC space, and with next-generation consoles such as the Xbox 360 and PlayStation 3 boasting multiple processing cores now on the market, game developers no longer have an excuse not to program their games with multiple threads in mind.

    By AMD’s estimates, more than 20 multi-threaded games are set to be released in 2007. This includes multiple genres of gaming as well, from first-person shooters, to RTS and RPG titles, from developers such as BioWare, Crytek, Epic, and Gas Powered Games.

    Besides gaming, another usage scenario for 4-cores is what AMD describes as “megatasking”. Megatasking takes multitasking to the next level, as it involves running multiple CPU-intensive tasks at once. An example would be encoding an HD video (or two) while also watching an HD video, or MP3 encoding while also touching up a batch of photos in Photoshop. For those of you who are into MMOs, you could load two instances of the game at once and trade items you’ve collected back and forth between characters, or have one character fighting while the other is healing him.

    This is where having four processing cores really shines.

Debuting alongside the new 4x4 CPUs are a new chipset (NVIDIA's nForce 680a) and a new motherboard based on that chipset (the ASUS L1N64-SLI WS); AMD dubs all this the Quad FX dual socket direct connect platform.

AMD's Quad FX platform doesn't roll over the competition like AMD had hoped, mainly because Intel accelerated the launch of Kentsfield from Q1'07 to Q4'06, but it does lay the groundwork for AMD's next-generation processor, codenamed Barcelona. For the hardcore crowd that could really use the extra SATA ports or would like to set up a mega 8-display system, AMD's Quad FX platform may be tempting, but for the rest of you, we can't help but feel that all this may be a little too much until more apps are available that take advantage of multi-core.

    AMD Athlon 64 X2 6000+

    A performance review of AMD’s new dual core processors against Intel’s Core 2 Duo range from Hot Hardware shows how far Intel has come in beating AMD in the performance stakes in the post Core 2 Duo era.

AMD is steadily moving to 65nm processors, which will narrow the gap in power consumption without affecting speed too greatly, and its new processors will challenge Intel from a price perspective if nothing else, as they'll still give everyday consumers plenty of power to compute almost anything they want to.

The AMD 6000+ was due to arrive in November last year but was delayed until now. The lineup comprises the AMD X2 6000+ at 3GHz with a 2MB L2 cache, the 5600+ at 2.8GHz with a 2MB cache, the 5400+ at 2.8GHz with a 1MB cache, the 5200+ at 2.6GHz with a 2MB cache and finally the 5000+, a 2.6GHz processor with a 1MB cache.

The 6000+ uses 125 watts of power, which is a no-no in today's energy-conscious world; an 89-watt model is due around September or October this year.

    The 6000+ costs computer manufacturers US $464 each in batches of 1000, compared with Intel’s Core 2 Duo E6700 at US $530 each in batches of 1000, which certainly makes the Intel attractive – less than $100 more will get you a faster processor that uses less energy.

AMD has a lot of work ahead of it to catch up to Intel in the performance stakes, and Intel in turn will need to keep trumping any AMD advances – with both companies feverishly working on advancing their quad-core designs and pushing for even more cores on desktop processors to become standard. Intel's recent 80-core prototype is a prime example.

    But given that plenty of people are still stuck in a single core world, waiting to upgrade soon to a new dual core computer, thank goodness dual core processors have advanced in leaps and bounds over the last two years. AMD will no doubt compete on price and do all they can to keep Intel in check – with you, me and all other consumers the prime beneficiaries.

    Saturday, May 26, 2007

    ATI Radeon HD 2900 XT

It has been a long time coming, but today AMD is finally set to release its massively anticipated GPU codenamed R600 XT to the world under the official retail name of ATI Radeon HD 2900 XT. It is a hugely important part for AMD right now, which recently posted a massive loss. The company is counting on all these new models, along with this high-end 512MB GDDR-3 DX10 part with its 512-bit memory interface, to kick ass and help raise revenue against the current range from the green GeForce team, which is selling like super hot cakes.

    Today AMD is launching an enthusiast part HD 2900 series with the HD 2900 XT, performance parts with the HD 2600 series including HD 2600 XT and HD 2600 PRO, along with value parts including HD 2400 XT and 2400 PRO. The HD 2600 and 2400 series have had issues of their own and you will need to wait a little longer before being able to buy these various models on shop shelves (July 1st). The HD 2900 XT will be available at most of your favorite online resellers as of today. Quantity is "not too bad" but a little on the short side with most of AMD's partners only getting between 400 – 600 units which is not that much considering the huge number of ATI fans out there. You may want to get in quick and place your order, if you are interested – some AIB companies are not sure when they will get in their next order, too.

    Our focus today is solely on the HD 2900 XT 512MB GDDR-3 graphics card – it is the first GPU with a fast 512-bit memory interface but what does this mean for performance? While it is AMD's top model right now, it is actually priced aggressively at around the US$350 - US$399 mark in United States, which puts it price wise up against Nvidia's GeForce 8800 GTS 640MB. After taking a look at the GPU and the card from PowerColor as well as some new Ruby DX10 screenshots, we will move onto the benchmarks and compare the red hot flaming Radeon monster against Nvidia's GeForce 8800 GTX along with the former ATI GPU king, the Radeon X1950 XTX.

Due to limited availability, as well as the fact that press in different regions are getting priority over others, we tested an actual retail graphics card from PowerColor. It has the same clock speeds as all the other reference cards floating around – a 742MHz core clock and 512MB of GDDR-3 memory clocked at 828MHz, or 1656MHz DDR.

    The PowerColor PCI Express x16 card looks just the same as reference cards. Later on you will see more expensive water cooled HD 2900 XT models from the usual suspects along with overclocked models in the following weeks. We did not get time to perform any overclocking tests but reports are floating around that the core is good to at least 800 - 850MHz and the GDDR-3 memory more than likely has room to increase. You may even see some companies produce HD 2900 XT OC models which use 1GB of faster GDDR-4 memory operating at over 2000MHz DDR or they will use special cooling to get the most out of the default setup.

    As far as size goes, the HD 2900 XT is a little longer than the Radeon X1950 XTX but a good deal shorter than the GeForce 8800 GTX, as you can see from the shot above with the PowerColor HD 2900 XT sitting in the middle of the group. Both of the other cards take up two slots and the HD 2900 XT is no different.

In 2D mode (non-gaming in Windows), the clock speeds are automatically throttled back to 506MHz on the core and 1026MHz DDR on the memory. This is done to reduce power consumption and also to reduce temperatures, which seems to be pretty important for the HD 2900 XT.

We expect factory overclocked HD 2900 XT cards to start selling in less than a month from now. AIB partners currently have the option of ordering 1GB GDDR-4 models with faster clock speeds, but it is unclear whether this product will be called HD 2900 XT 1GB GDDR-4 or HD 2900 XTX – you may end up seeing these types of cards appear in early June (around Computex Taipei show time). If we saw a product like this with a slightly faster core clock and obviously much faster memory clock (2000 - 2100MHz DDR vs. 1656MHz DDR), we think it would compete very nicely against the GeForce 8800 GTX as far as price vs. performance goes. Sadly, we did not have a GeForce 8800 GTS 640MB handy for testing, but matching up with our previous testing on similar test beds, the HD 2900 XT should beat it quite considerably, by around the 20% mark in 3DMark06, for example. This is rather interesting since the HD 2900 XT is in the same price range as the GeForce 8800 GTS 640MB – we will test against this card shortly!

Summing it up, we are happy to see a new high-end Radeon graphics card from AMD – it really is a red hot flaming monster, but it manages to offer a good amount of performance and an impressive feature set, with full DX10 and Shader Model 4.0, CrossFire and Windows Vista support and a host of other features we did not even have enough time to cover in full today, such as improved anti-aliasing and UVD. It was a long time coming, but it is able to offer very good bang for buck against the equivalent from Nvidia - the GeForce 8800 GTS.

It is also something for the green team to think about if AMD comes out with a faster version of the R600 XT later in the year, either with faster GDDR-4 memory (and more of it) or higher clock speeds using 65nm process technology. Interesting times lie ahead in the GPU business, but for right now the Radeon HD 2900 XT offers very solid performance for the price, and we will be even more interested in what is coming in the following weeks as overclocked versions emerge and shake things up further.


    Nvidia GeForce 8800 Ultra

    WHAT HAPPENS WHEN YOU take the fastest video card on the planet and turn up its clock speeds a bit? You have a new fastest video card on the planet, of course, which is a little bit faster than the old fastest video card on the planet. That's what Nvidia has done with its former king-of-the-hill product, the GeForce 8800 GTX, in order to create the new hotness it's announcing today, the GeForce 8800 Ultra.

    There's more to it than that, of course. These are highly sophisticated graphics products we're talking about here. There's a new cooler involved. Oh, and a new silicon revision, for you propellerheads who must know these things. And most formidable of all may be the new price tag. But I'm getting ahead of myself.

    Perhaps the most salient point is that Nvidia has found a way to squeeze even more performance out of its G80 GPU, and in keeping with a time-honored tradition, the company has introduced a new top-end graphics card just as its rival, the former ATI now owned by AMD, prepares to launch its own DirectX 10-capable GPU lineup. Wonder what the new Radeon will have to contend with when it arrives? Let's have a look.

    By and large, the GeForce 8800 Ultra is the same basic product as the GeForce 8800 GTX that's ruled the top end of the video card market since last November. It has the same 128 stream processors, the same 384-bit path to 768MB of GDDR3 memory, and rides on the same 10.5" board as the GTX. There are still two dual-link DVI ports, two SLI connectors up top, and two six-pin PCIe auxiliary power connectors onboard. The feature set is essentially identical, and no, none of the new HD video processing mojo introduced with the GeForce 8600 series has made its way into the Ultra.

    Yet the Ultra is distinct for several reasons. First and foremost, Nvidia says the Ultra packs a new revision of G80 silicon that allows for higher clock speeds in a similar form factor and power envelope. In fact, Nvidia says the 8800 Ultra has slightly lower peak power consumption than the GTX, despite having a core clock of 612MHz, a stream processor clock of 1.5GHz, and a memory clock of 1080MHz (effectively 2160MHz since it uses GDDR3 memory). That's up from a 575MHz core, 1.35GHz SPs, and 900MHz memory in the 8800 GTX.

The Ultra's tweaked clock speeds do deliver considerably more computing power than the GTX, at least in theory. Memory bandwidth is up from 86.4GB/s to a stunning 103.7GB/s. Peak shader power, if you just count programmable shader ops, is up from 518.4 to 576 GFLOPS—or from 345.6 to 384 GFLOPS, if you don't count the MUL instruction that the G80's SPs can co-issue in certain circumstances. The trouble is that "overclocked in the box" versions of the 8800 GTX are available now with very similar specifications. Take the king of all X's, the XFX GeForce 8800 GTX XXX Edition. This card has a 630MHz core clock, 1.46GHz shader clock, and 1GHz memory.
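For anyone who wants to verify those peak figures, they fall straight out of the clocks and unit counts quoted above. A minimal sketch, assuming the usual counting of 2 FLOPS per stream processor per clock for the MADD, plus 1 more when the co-issued MUL is included:

```python
# Back-of-the-envelope peak figures for the GeForce 8800 GTX vs. 8800 Ultra,
# using the clocks and unit counts quoted above.

def mem_bandwidth_gb_s(effective_clock_mhz, bus_width_bits=384):
    # effective GDDR3 clock x bus width in bytes
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

def shader_gflops(sp_clock_ghz, sps=128, flops_per_clock=2):
    # flops_per_clock=2 counts the MADD only; use 3 to include the co-issued MUL
    return sps * sp_clock_ghz * flops_per_clock

print(mem_bandwidth_gb_s(1800))   # GTX:   900 MHz x2  -> ~86.4 GB/s
print(mem_bandwidth_gb_s(2160))   # Ultra: 1080 MHz x2 -> ~103.7 GB/s
print(shader_gflops(1.35), shader_gflops(1.35, flops_per_clock=3))  # 345.6, 518.4
print(shader_gflops(1.50), shader_gflops(1.50, flops_per_clock=3))  # 384.0, 576.0
```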

    So the Ultra is—and this is very technical—what we in the business like to call a lousy value. Flagship products like these rarely offer stellar value propositions, but those revved-up GTX cards are just too close for comfort.

    The saving grace for this product, if there is one, may come in the form of hot-clocked variants of the Ultra itself. Nvidia says the Ultra simply establishes a new product baseline, from which board vendors may improvise upward. In fact, XFX told us that they have plans for three versions of the 8800 Ultra, two of which will run at higher clock speeds. Unfortunately, we haven't yet been able to get likely clock speeds or prices from any of the board vendors we asked, so we don't yet know what sort of increases they'll be offering. We'll have to watch and see what they deliver.

    We do have a little bit of time yet on that front, by the way, because 8800 Ultra cards aren't expected to hit online store shelves until May 15 or so. I expect some board vendors haven't yet determined what clock speeds they will offer.

    In order to size up the Ultra, we've compared it against a trio of graphics solutions in roughly the same price neighborhood. There's the GeForce 8800 GTX, of course, and we've included one at stock clock speeds. For about the same price as an Ultra, you could also buy a pair of GeForce 8800 GTS 640MB graphics cards and run them in SLI, so we've included them. Finally, we have a Radeon X1950 XTX CrossFire pair, which is presently AMD's fastest graphics solution.

    I also prefer the Ultra to the option of running two GeForce 8800 GTS cards in SLI, for a variety of reasons. The 8800 GTS SLI config we tested was faster than the Ultra in some cases, but it was slower in others. Two cards take up more space, draw more power, and generate more heat, but that's not the worst of it. SLI's ability to work with the game of the moment has always been contingent on driver updates and user profiles, which is in itself a disadvantage, but SLI support has taken a serious hit in the transition to Windows Vista. We found that SLI didn't scale well in either Half-Life 2: Episode One or Supreme Commander, and these aren't minor game titles. I was also surprised to have to reboot in order to switch into SLI mode, since Nvidia fixed that issue in its Windows XP drivers long ago. Obviously, Nvidia has higher priorities right now on the Vista driver front, but that's just the problem. SLI likely won't get proper attention until Nvidia addresses its other deficits compared to AMD's Catalyst drivers for Vista, including an incomplete control panel UI, weak overclocking tools, and some general functionality issues like the Oblivion AA problem we encountered.

    That fact tarnishes the performance crown this card wears, in my view. I expect the Ultra to make more sense as a flagship product once we see—if we see—"overclocked in the box" versions offering some nice clock speed boosts above the stock specs. GeForce 8800 Ultra cards may never be killer values, but at least then they might justifiably command their price premiums.

    We'll be keeping an eye on this issue and hope to test some faster-clocked Ultras soon.

    MSI NX 8800 GTS - T2D320E-HD

    Not wanting to leave any enthusiasts out, those looking for the power of the GeForce 8800 family of cards on a budget have had their prayers answered in the form of the GeForce 8800 GTS 320MB card. With prices below $300, the power of the best NVIDIA has to offer is now available for everyone. With the only difference between the two GTS models being the amount of GDDR3 RAM (640MB vs. 320MB) the sub-$300 price is attractive and reasonable.

MSI took the reference design and upped the ante by increasing the core and memory clock speeds in their NX8800GTS-T2D320E-HD OC offering. The increase in raw speed on the card served as a good reminder that the marketing hype of "more memory = faster" is definitely not always the case, as was discovered when the card was put through its paces against its bigger, but slower, 640MB brother.

As the chart below shows, the NX8800GTS-T2D320E-HD OC is identical to the 640MB version except in the amount of RAM and the core and memory speeds. Both clocks get nice boosts that translate into impressively increased performance throughout a number of benchmarks and tests.
    Apart from a few ultra-high resolution instances, MSI's NX8800GTS-T2D320E-HD OC proved that, with a little extra oomph from end-user overclocking, raw speed still can and does have a major impact on gaming performance; while more memory certainly can't hurt, it isn't the be-all cure to ensure faster performance.

If you're considering a GeForce 8800 GTS, don't let the lesser amount of RAM fool you: these factory overclocked 320MB GTS cards are packing all the same heat and can perform almost as well as, and sometimes even better than, their bigger-but-slower brother.

    Nvidia GeForce 8800 GTX

DirectX 10 is sitting just around the corner, hand in hand with Microsoft's Windows Vista. It calls for a new unified architecture in the GPU department that neither hardware vendor had implemented until now, and it is not compatible with DX9 hardware. The NVIDIA G80 architecture, now known as the GeForce 8800 GTX and 8800 GTS, has been the known DX10 candidate for some time, but much of the rumors and information about the chip were just plain wrong, as we can now officially tell you today.

Well, we've talked about what a unified architecture is and how Microsoft is using it in DX10, with all the new features and options available to game designers. But just what does NVIDIA's unified G80 architecture look like?

    All hail G80!! Well, um, okay. That's a lot of pretty colors and boxes and lines and what not, but what does it all mean, and what has changed from the past? First, compared to the architecture of the G71 (GeForce 7900), you'll notice that there is one less "layer" of units to see and understand. Since we are moving from a dual-pipe architecture to a unified one, this makes sense. Those eight blocks of processing units there with the green and blue squares represent the unified architecture and work on pixel, vertex and geometry shading.

    The new flagship is the 8800 GTX card, coming in at an expected MSRP with a hard launch; you should be able to find these cards for sale today. The clock speed on the card is 575 MHz, but remember that the 128 stream processors run at 1.35 GHz, and they are labeled as the "shader" clock rate here. The GDDR3 memory is clocked at 900 MHz, and you'll be getting 768MB of it, thanks to the memory configuration issue we talked about before. There are dual dual-link DVI ports and an HDTV output as well.


    NVIDIA should be commended once again for being able to pull off a successful hard launch of a product that has been eagerly awaited for months now. Only time will tell us if supply is able to keep up with demand, but I'll be checking in during the week to find out!

    ATI Radeon X1950 Pro 256MB

The mainstream video card market actually comprises two different levels, separated by the old price-performance matrix. Cards like the GeForce 7600 and Radeon X1650-based products represent the entry-level section, while the top end offers video cards with greater performance and a higher price tag. NVIDIA has been a serious powerhouse at the upper range, first with the GeForce 7600 GT 256MB, and when that faded, the GeForce 7900 GS 256MB was quick to take its place. ATI has had a very tough time competing, especially as there was initially a huge gap between the Radeon X1600 XT and Radeon X1900 XT cards. This was bridged with the Radeon X1900 GT, but the subsequent Radeon X1950 Pro is the real deal, and better able to stem the NVIDIA tide.

The Radeon X1950 Pro is built on the 80nm RV570 graphics core, and sports a similar architecture to the lower-clocked, 90nm Radeon X1900 GT. The RV570 features 12 pixel pipelines, 12 texture units, 8 vertex shaders, and 12 ROPs. This may seem low for a high-end mainstream video card, but the Radeon X1950 Pro includes 3 pixel shaders per pipeline, for a total of 36. This can yield a serious performance edge, especially in SM3.0 games. The Radeon X1950 Pro features 256MB of onboard GDDR3 memory using a 256-bit link to the internal ring bus controller. This is the latest generation of 80nm ATI parts, and like the Radeon X1650 XT, the Radeon X1950 Pro supports HDCP and "native" CrossFire using internal connectors.
    The base architecture may be similar, but the Radeon X1950 Pro is clocked higher than current Radeon X1900 GT boards, and the RV570 core runs at 575 MHz, while the 256MB of GDDR3 memory is set at 1.38 GHz. This provides theoretical fill rates of 6.9 GPixels/s, 6.9 GTexels/s (standard) and 20.7 GTexels/s (shaded). This last figure helps illustrate just how powerful this type of design can be, given a game or application that really stresses its pixel shading abilities. The memory bandwidth is definitely high-end, as the 1.38 GHz memory clock and its 256-bit link translate into 44.2 GB/s of memory bandwidth - about on par with a GeForce 7950 GT. The Radeon X1950 Pro also includes support for AVIVO, up to 6X AA & 16X AF modes, 3Dc+ texture compression, and native support for CrossFire multi-GPU technology.
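Those theoretical peaks are easy to reproduce from the clocks and unit counts listed above, assuming the usual mapping of ROPs to pixel fill rate and texture units to texel rate; a short sketch:

```python
# Theoretical peak figures for the Radeon X1950 Pro, from the specs quoted above.

core_clock_mhz = 575
rops = 12                 # raster operators -> pixel fill rate
texture_units = 12        # -> standard texel fill rate
pixel_shaders = 36        # 12 pipelines x 3 shaders -> "shaded" texel rate
mem_clock_mhz = 1380      # effective GDDR3 clock
bus_width_bits = 256

print(core_clock_mhz * rops / 1000, "GPixels/s")                     # 6.9
print(core_clock_mhz * texture_units / 1000, "GTexels/s")            # 6.9
print(core_clock_mhz * pixel_shaders / 1000, "GTexels/s (shaded)")   # 20.7
print(mem_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9, "GB/s")        # ~44.2
```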

    The ATI version of the Radeon X1950 Pro is a standard design without any of the enhancements offered by their 3rd-party vendors. This is both a positive and a negative, as you know the card design is fully tested, compatible and rock solid, but you forgo any higher default clock speeds or nifty cooling apparatus. The card itself is a full-length PCI Express model, with a sleek red heatsink-fan covering virtually the entire PCB. We like this format, especially compared to Radeon X1950 Pro cooling designs with a taller heatsink-fan, as the ATI card offers a seamless install for adjacent peripherals.
The ATI Radeon X1950 Pro 256MB card is clocked at standard speeds, with its core set at 575 MHz and the onboard memory running at 1.38 GHz. The card offers the standard connectivity options, featuring two dual-link DVI connectors and an S-Video/HDTV-out port. The DVI output offers resolutions up to 2560x1600, VGA maxes out at 2048x1536, and HDTV-out runs up to 1080i. As with all Radeon X1950 Pro cards, the ATI version also requires external power through a single PCI Express connector. CrossFire is supported in native mode.

    The ATI Radeon X1950 Pro retail box includes a CrossFire bridge interconnect for future upgrades. Also included in the bundle are a Driver CD, composite and S-video cables, HDTV-out cable, and DVI to VGA adapters. ATI also offers a 1-year limited warranty and supports operating systems from Windows XP to MCE to Vista.

    MSI NX 8600 GTS T2D256E

    You can tell plenty from the model code of this new MSI graphics card. The 'NX8600GTS' part tells you that it uses the new Nvidia GeForce 8600GTS graphics chip along with 256MB of fast GDDR-3 graphics memory, while the 'OC' suffix flags up the fact that this is a factory overclocked graphics card.

    Nvidia launched the DirectX 10 GeForce 8800 GTS and GTX back in November 2006, so this mid-range chip has been a long time coming. The delay has given Nvidia time to move from a 90nm fabrication process to 80nm, so the GeForce 8600 uses faster clock speeds on the core, unified shaders and memory than we ever saw on the 8800. This gives us a taste of what we can expect when Nvidia launches the GeForce 8800 Ultra in a few weeks' time to nicely mess up the launch of the ATi Radeon HD 2900.

    Getting back to the MSI, the GeForce 8600GTS chip uses 289 million transistors, compared to the GeForce 8800 which has 691 million. The graphics core runs at 700MHz, the 32 unified shaders are clocked at 1.45GHz and the 256MB of memory has a speed of 2.1GHz, yet the power rating is fairly low at 71W so there's a single six-pin PCI Express power connector.

    MSI calls this an overclocked card, but the increase is fairly small: the reference speeds for an 8600GTS are 675MHz for the core and 2GHz for the memory.
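    For context, a quick back-of-the-envelope calculation (our own Python sketch, not MSI data) shows just how modest that factory overclock is against the reference speeds quoted above.

```python
# Sketch: compare MSI's factory overclock with Nvidia's reference 8600 GTS clocks.
reference = {"core_mhz": 675, "memory_mhz": 2000}   # reference speeds per the review
msi_oc    = {"core_mhz": 700, "memory_mhz": 2100}   # MSI's shipping speeds

for part in reference:
    gain = (msi_oc[part] - reference[part]) / reference[part] * 100
    print(f"{part}: {reference[part]} -> {msi_oc[part]} MHz (+{gain:.1f}%)")
# core: +3.7%, memory: +5.0% -- a fairly small bump, as noted above.
```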

    MSI has clearly given a fair amount of thought to the cooling on this model. The competing GeForce 8600GTS cards we've seen pictured use a slim-line cooler that looks very similar to that of a GeForce 7800GTX, but MSI has instead opted for a double-slot design that connects the heatsink to a finned radiator. The fan blows cooling air through a duct and across the cooler, but there's a sizeable gap between the heat exchanger and the vented bracket.

    We suspect this design is intended to quieten the cooler; however, it still seemed rather noisy to our ears, producing a continuous drone that was rather off-putting.

    The graphics card is bulky and noisy, and the MSI package is rather basic. There's a PCI-E power adapter, two DVI adapters, a splitter cable with Component and S-Video outputs, and an S-Video extension cable. Apart from some MSI utilities there is no software at all in the games department, where you would hope for, say, a voucher for Unreal Tournament 2007.

    During our testing we compared the MSI with a GeForce 7950GT, as that is the DirectX 9.0c part that the GeForce 8600GTS will inevitably replace. When we ran 3DMark06 and a few games on both cards in Windows XP and Windows Vista, we were surprised to find that the difference in performance was minimal.

    Sure, there were swings and roundabouts, but you couldn't say that either graphics card was better than the other, and for that matter both operating systems delivered the goods. While the MSI is undoubtedly a very competent performer, it came as a real surprise that it wasn't markedly better than the card it is due to replace.

    The reason is, of course, DirectX 10.

    The GeForce 8 series all support DirectX 10 and use a new design that replaces dedicated pixel and vertex shaders with more flexible unified shaders, which also handle DirectX 10's new geometry shader stage. That's a good move which will doubtless reap benefits once DirectX 10 games come to market, but that's still some months in the future.

    AMD Athlon 64 FX-62

    AMD's Athlon 64 FX-62 represents a major shift in design for AMD. The chip itself is straightforward; it's a dual-core performance CPU that offers a marginal performance increase over the older Athlon 64 FX-60. More importantly, the FX-62 is the flagship of AMD's new Socket AM2 platform, which introduces several new features to the AMD desktop lineup. This means that if you want to upgrade to this CPU, you'll also need a new motherboard. You'll get plenty of advanced features if you make the switch, but keep in mind that Intel's Core 2 Duo chips are right around the corner, and early tests have shown that AMD's hold on the performance belt might be slipping.

    The Athlon 64 FX-62 is not a revolutionary upgrade to AMD's old lineup. Aside from the new interface, the biggest change is the bump to 2.8GHz per core, a minor uptick from the FX-60's 2.6GHz. Despite the predictable CPU tweaking, the more important development is the FX-62's transition to AMD's new Socket AM2 platform. For the past two years, AMD's desktop chips have used either Socket 939 or the lower-end Socket 754 motherboards. With AM2, AMD introduces not only an entirely new pin layout for its desktop chips, it also brings support for DDR2 memory. AM2 will support 667MHz DDR2 memory for all of AMD's chips, and at least up to 800MHz memory when paired with compatible Athlon 64 X2 and Athlon FX CPUs. It's been rumored that AM2 can support up to 1,066MHz DDR2 memory as well, although AMD won't officially support it. For Intel's part, its 900-series chipsets have supported DDR2 at various clock speeds since their debut in 2004, but that support hasn't translated into performance wins, largely because DDR2 memory has higher latency than plain old DDR. DDR2 generally isn't slower than DDR, but it hasn't really offered a benefit either. The time is now ripe for AMD to switch, though: falling prices are making larger quantities of DDR2 memory more cost effective than DDR, and with Windows Vista and its 1GB system memory requirement, you can expect that PCs with 2GB and 4GB of memory will soon become the norm.
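    To put those DDR2 speed grades in perspective, here's a rough Python sketch of peak theoretical bandwidth per 64-bit memory channel; the figures follow from the data rates alone and are our own illustration, not AMD's numbers.

```python
# Sketch: peak theoretical bandwidth of the DDR2 speed grades Socket AM2 supports.
# Each channel is 64 bits (8 bytes) wide; double the result for dual-channel.
for data_rate_mts in (667, 800, 1066):
    per_channel_gbs = data_rate_mts * 8 / 1000        # MT/s * 8 bytes -> GB/s
    dual_channel_gbs = per_channel_gbs * 2
    print(f"DDR2-{data_rate_mts}: {per_channel_gbs:.1f} GB/s per channel, "
          f"{dual_channel_gbs:.1f} GB/s dual-channel")
# DDR2-667: 5.3 / 10.7 GB/s, DDR2-800: 6.4 / 12.8 GB/s, DDR2-1066: 8.5 / 17.1 GB/s
```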

    There's more to the Socket AM2 story (check back for a blog later), but as far as the Athlon 64 FX-62 is concerned, neither the new chip nor the new platform translates into remarkable performance gains. Its SysMark 2004 overall score is only 3.5 percent higher than the FX-60's. The FX-62's strongest improvement was on our multitasking test, where it showed a 10 percent performance gain. Otherwise, on our dual-core and gaming tests, the FX-62 turned in scores between 1 and 8 percent faster than the FX-60, barely overcoming the statistical margin of error. Still, right now the Athlon 64 FX-62 is technically the desktop CPU performance leader. But with recent brewings at Intel, that lead could change hands soon.

    At Intel's spring Developer Forum, Intel gave tech enthusiast site Anandtech the chance to test its upcoming Core 2 Duo desktop chip (then code-named Conroe) against an overclocked Athlon 64 FX-60. The results weren't in AMD's favor. We can't pass judgment based on prerelease testing, especially when it was conducted in an Intel-controlled environment (Anandtech acknowledged the possibility of Intel chicanery as well), but with no major performance boost from the Athlon 64 FX-62, and the fact that Intel's Core 2 Duo represents a brand-new architecture for the desktop, AMD's performance lead looks vulnerable. Intel's road map puts the release date of its next-generation Extreme Edition CPU in the second half of the year, and the company announced the official name of the mainstream Core 2 Duo chip this month. If you absolutely need more power now, the Athlon 64 FX-62 will deliver. But we feel it's worth waiting at least a month or two to see what Intel brings to the table.

    Intel Core 2 Duo E6700

    Intel announced its line of Core 2 Duo desktop CPUs today. If you're buying a new computer or building one of your own, you would be wise to see that it has one of Intel's new dual-core chips in it. The Core 2 Duo chips are not only the fastest desktop chips on the market, but also the most cost-effective and among the most power-efficient. About the only people these new chips aren't good for are the folks at AMD, who can claim the desktop CPU crown no longer.

    We've given the full review treatment to two of the five Core 2 Duo chips. You can read about the flagship Core 2 Extreme X6800 here and the entire Core 2 Duo series here. In this review, we examine the next chip down, the 2.67GHz Core 2 Duo E6700. While the Extreme X6800 chip might be the fastest in the new lineup, we find the E6700 the most compelling for its price-performance ratio. For about half the cost of AMD's flagship, the Athlon 64 FX-62, the Core 2 Duo E6700 gives you nearly identical, if not faster, performance, depending on the application.

    The Core 2 Duo is the first desktop chip family that doesn't use the NetBurst architecture, which has been the template for every Intel desktop design since the original Pentium 4. Instead, the Core 2 Duo uses what's called the Core architecture (not to be confused with Intel's Core Duo and Core Solo laptop chips, released this past January). The advances in the Core architecture explain why, even though the Core 2 Duo chips have lower clock speeds, they're faster than the older dual-core Pentium D 900 series chips. The Core 2 Extreme X6800, the Core 2 Duo E6700, and the Core 2 Duo E6600 represent the top tier of Intel's new line, and in addition to the broader Core architecture similarities, they all have 4MB of unified L2 cache. The lower end of the Core 2 Duo line, composed of the E6400 and the E6300, has a 2MB unified L2 cache.
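    For reference, here is a minimal Python sketch of the launch lineup as described above. Note that the clock speeds for the E6600, E6400 and E6300 are not stated in this review and are our own assumptions, included only to round out the table.

```python
# Sketch: the launch Core 2 Duo lineup as described in the review.
# Clock speeds marked "assumed" are not stated above and are our additions.
core2_lineup = {
    "Core 2 Extreme X6800": {"clock_ghz": 2.93, "l2_cache_mb": 4},
    "Core 2 Duo E6700":     {"clock_ghz": 2.67, "l2_cache_mb": 4},
    "Core 2 Duo E6600":     {"clock_ghz": 2.40, "l2_cache_mb": 4},  # clock assumed
    "Core 2 Duo E6400":     {"clock_ghz": 2.13, "l2_cache_mb": 2},  # clock assumed
    "Core 2 Duo E6300":     {"clock_ghz": 1.86, "l2_cache_mb": 2},  # clock assumed
}

for name, spec in core2_lineup.items():
    print(f"{name}: {spec['clock_ghz']:.2f} GHz, {spec['l2_cache_mb']}MB shared L2 cache")
```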

    We won't belabor each point here, since the blog post already spells them out, but the key is that it's not simply one feature that gives the Core 2 Duo chips their strength; rather, it's a host of design improvements across the chip and the way it transports data that improves performance. And our test results bear this out.

    On our gaming, Microsoft Office, and Adobe Photoshop tests, the E6700 was second only to the Extreme X6800 chip. Compared to the 2.8GHz Athlon 64 FX-62, the E6700 was a full 60 frames per second faster on Half-Life 2, it finished our Microsoft Office test 20 seconds ahead, and it won the Photoshop test by 39 seconds. On our iTunes and multitasking tests, the E6700 trailed the FX-62 by only 2 and 3 seconds, respectively. In other words, with the Core 2 Duo E6700 in your system, you'll play games more smoothly, get work done faster, and in general enjoy a better computing experience than with the best from AMD--and for less dough.

    For its own dual-core Athlon 64 X2 chips, AMD tells its hardware partners to prepare for a TDP of between 89 and 110 watts (although its Energy Efficient and Small Form Factor Athlon 64 X2 products, which have yet to hit the market in any quantity, go to 65 and 35 watts, respectively). Intel has caught flak in the past for providing fan makers with inadequate TDP ratings, which resulted in overly noisy fans for the Pentium D chips that had to spin exceedingly fast to cool the chips properly. But the Falcon Northwest Mach V desktop we reviewed alongside this launch came with stock cooling parts. It will be hard to tell exactly how well Intel's published specs live up to real-world requirements until the hardware has been disseminated widely, but the fact that a performance stickler like Falcon shipped the standard-issue cooling hardware suggests that Intel took note of the problems it had in the past.

    As for the surrounding parts, if you already have an Intel-based PC and would like to upgrade, Intel has made it easy. The Core 2 Duo chips use the same Socket LGA775 interface as the Pentium D 900 series. If you have an Intel motherboard using a 965 chipset, you're ready to go with Core 2 Duo and a single graphics card. If you want to run Intel and a dual graphics configuration, you have two options: Intel's 975 chipsets support ATI's CrossFire technology only, while for SLI you'll need a motherboard from Nvidia's nForce 500 series for Intel.
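    As a quick summary of those upgrade paths, here is a minimal Python sketch mapping each platform to the multi-GPU option it supports; it simply restates the compatibility rules above, and the helper function is our own illustration.

```python
# Sketch: multi-GPU support by Intel-compatible platform, per the review.
multi_gpu_support = {
    "Intel 965 chipset":           "single graphics card only",
    "Intel 975 chipset":           "ATI CrossFire only",
    "Nvidia nForce 500 for Intel": "Nvidia SLI",
}

def dual_gpu_options(platform: str) -> str:
    """Return the dual-graphics option for a given Core 2 Duo platform."""
    return multi_gpu_support.get(platform, "unknown platform")

print(dual_gpu_options("Intel 975 chipset"))  # ATI CrossFire only
```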

    For AMD, the outlook isn't great. Its so-called 4x4 design, which will let you run two Athlon 64 FX-62s in a single PC, might overtake a single Core 2 Extreme X6800 on raw performance. Details about 4x4's particulars are scant, but if a single Athlon 64 FX-62 costs about $1,031, two will have you crossing the $2,000 mark on chips alone, not to mention the motherboard, the larger case, and the cooling hardware required to run it all. AMD says it will drop prices this month to compete on the price-performance ratio. That might make for some compelling desktop deals, but for now, Intel has the superior technology.

    Intel Core 2 Extreme X6800

    Intel announced its line of Core 2 Duo desktop CPUs today. If you're buying a new computer or you're building one of your own, you would be wise to see that it has one of Intel's new dual-core chips in it. The Core 2 Duo chips include not only the fastest desktop chips on the market, but also the most cost-effective and among the most power-efficient. About the only people these new chips aren't good news for are the folks at AMD, who can claim the desktop CPU crown no longer.

    We've given the full review treatment to two of the five Core 2 Duo chips. You can read about the price-performance champ, the Core 2 Duo E6700, here, and the entire Core 2 Duo series here. In this review we'll examine Intel's flagship, the 2.93GHz Core 2 Extreme X6800, which is now the fastest desktop CPU you can buy.

    The Core 2 Duo represents a new era for Intel. It's the first desktop chip family that doesn't use the NetBurst architecture, which has been the template for every Intel desktop design since the original Pentium 4. Instead, the Core 2 Duo uses what's called the Core architecture (not to be confused with Intel's Core Duo and Core Solo laptop chips, released this past January). The advances in the Core architecture explain why, even though the Core 2 Duo chips have lower clock speeds, they're faster than the older dual-core Pentium D 900 series chips. The Core 2 Extreme X6800, the Core 2 Duo E6700, and the Core 2 Duo E6600 represent the top tier of Intel's new line, and in addition to the broader Core architecture similarities, they all have 4MB of unified L2 cache. The lower end of the Core 2 Duo line, composed of the $224 E6400 and the $183 E6300, has a 2MB unified L2 cache.

    We won't belabor each point here since the blog post already spells it out, but the key is that it's not simply one feature that gives the Core 2 Duo chips their strength, but rather a host of design improvements across the chip and the way it transports data that improves performance. And our test results bear this out.

    The Core 2 Extreme X6800 made a clean sweep of all our benchmarks. AMD's closest competition, the 2.8GHz Athlon 64 FX-62, came within 5 percent on our iTunes, multitasking, and Microsoft Office tests, but on our Half-Life 2 and Adobe Photoshop CS2 tests, AMD lost badly, by as much as 28 percent on Half-Life 2. At its asking price, Intel's new flagship processor might not be as compelling a deal as the only slightly slower Core 2 Duo E6700, but for enthusiasts and others with the passion and the wallet to ensure that they have the fastest chip out there, the Core 2 Extreme X6800 is now it.

    But there's even more to the Core 2 Duo story than performance. One of the key elements of the new chips is their power efficiency. We base our findings on a number called the thermal design power (TDP), which is the number that AMD and Intel each provide to system vendors and various PC hardware makers for determining how much power each chip will require, and thus the amount of heat they'll need to dissipate. On Intel's last generation of dual-core desktop chips, the Pentium D 900s, the TDP rating fell between 95 and 130 watts. But because the Core 2 Duo design incorporates power management techniques from Intel's notebook chips, its power requirements are much more forgiving. All but the Core 2 Extreme X6800 have a TDP of 65 watts, while the Extreme chip itself is only 75 watts.
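    To quantify that efficiency gain, here is a small Python sketch, our own arithmetic using only the TDP figures quoted in this review, comparing the new chips with the previous-generation Pentium D 900s.

```python
# Sketch: TDP comparison using the figures quoted in the review.
pentium_d_900_tdp = (95, 130)   # watts, low and high end of the Pentium D 900 range
core2_duo_tdp = 65              # watts, every Core 2 Duo part except the X6800
x6800_tdp = 75                  # watts, Core 2 Extreme X6800

for label, new_tdp in (("Core 2 Duo", core2_duo_tdp),
                       ("Core 2 Extreme X6800", x6800_tdp)):
    low, high = pentium_d_900_tdp
    saving = 100 * (1 - new_tdp / high)
    print(f"{label}: {new_tdp}W vs Pentium D 900's {low}-{high}W "
          f"({saving:.0f}% lower than the worst case)")
# Core 2 Duo: 50% lower than the 130W worst case; X6800: 42% lower.
```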

    For AMD, the outlook isn't great at the moment. Its so-called 4x4 design, which will let you run two Athlon 64 FX-62 chips in a single PC, might overtake a single Core 2 Extreme X6800 on raw performance. AMD says it's going to drop prices this month to compete on price-performance ratio. That might make for some compelling desktop deals, but for now Intel boasts the superior technology.