
Monday, October 15, 2007

Mobile Platform Wars: AMD vs. Intel

AMD has fallen on some hard times, dating back to the launch of Intel's Core 2 lineup in the fall of 2006. Many enthusiasts have been feeling quite anxious, holding out hope that Barcelona would mark the return of yesterday's AMD, where the K8 architecture basically scored a knockout punch for the underdog, but the chances of that occurring look increasingly slim. At least in terms of raw performance, Intel has a roadmap in place that should keep the heavyweight belt firmly in their grasp. However, as many people are ready to point out, performance isn't everything. Is there some truth to that statement, or is it a convenient phrase that merely serves as an excuse? That's what we're here to find out.

It's no secret that the mobile PC market trails the desktop and server markets quite a bit in terms of computational power. Quad-core desktop systems are becoming increasingly common, and octal-core workstations and servers are more affordable than ever before. Bounce back to the mobile market and you will find plenty of dual-core offerings, but only at lower clock speeds. Laptops also come with slower memory, hard drives (with the exception of the new solid-state models), graphics chips, and system buses. Not surprisingly, for about $1500 you can build a high-quality desktop system that is capable of outperforming even the fastest notebook currently on the market. On the other hand, you can't easily take such a system on the road with you - and you certainly can't use it in an airplane or car. And if you want to talk about performance per watt, many notebooks are able to offer performance competitive with desktop systems that consume two or three times as much power.

In fact, compared to one year ago, about the only significant changes to the desktop performance landscape are the addition of quad-core CPUs and extreme performance graphics chips, neither of which is really necessary for a large number of computer users. Businesses in particular don't require such amenities, as they are rarely running their computers at full load and they don't tend to run a lot of 3D applications (aka "games"). Adding higher performance parts to such an environment would simply increase energy usage without necessarily increasing productivity. Throw in the mobility factor of notebooks, and a lot of businesses are getting away from traditional desktops and moving toward laptops for most of their employees. (There are exceptions of course, so we are speaking about typical businesses - game developers, 3D animation studios, and other high-performance computing companies can and do continue to use desktops and workstations.)

A couple of months ago, AMD quietly launched the newest update to their mobile Turion X2 processor line. The latest addition is the TL-66, which increases the maximum clock speed to 2.3GHz, an admittedly small bump relative to the TL-64 that runs at 2.2GHz. However, the TL-66 also holds the distinction of being one of the first 65nm Turion X2 parts to hit the market. The Brisbane core was AMD's first 65nm part, and while it wasn't much faster than the previous 90nm offerings it did lower power requirements somewhat. With a more mature 65nm process, it certainly makes sense for AMD to migrate their mobile CPU production to the new fabrication facilities.
We've got HP's 6515b business laptop in-house for testing, equipped with both a TL-60 and a TL-66 processor, so we will be able to see exactly what has changed (if anything) with AMD's new mobile CPU.

Naturally, we also want to look at how AMD's fastest Turion X2 compares to Intel's latest Core 2 Duo laptop processors. As we want to keep the system configurations as similar as possible, we will be focusing on performance compared to HP's dv6500t, which is based on Intel's Santa Rosa platform. It's also noteworthy that both of these notebooks use integrated graphics, so we will also take a moment to look at the current state of IGPs. These are not strictly apples-to-apples comparisons, but by the time we're through with the benchmarks we should have a fair idea of how the two mobile platforms currently compare to each other.

HP Blackbird 002 Revisited

A few weeks back, we provided our initial review of HP's Blackbird 002. What we found was a very interesting and exotic design, but without more information on pricing and availability it was difficult to come to any final conclusions. In fact, we were almost left with more questions than answers, so we spent some time talking to HP Gaming's CTO (and VoodooPC founder) Rahul Sood and the Blackbird sales team. There are still some questions that we weren't able to get answered, but we did get a lot of good material and we felt it would be worthwhile to revisit the Blackbird as well as HP's Gaming division.

One of the first things that might be a bit confusing for some people is how HP Gaming relates to VoodooPC. While HP bought out VoodooPC last year, VoodooPC continues to exist as a separate brand (though still under the HP corporate umbrella). You can still go out and purchase a VoodooPC computer, and you will get the same thing that you always got from Voodoo: extreme attention to detail, premium components, and prices that might just leave you gasping for breath. VoodooPC is as much a status symbol as anything, and while the performance and construction are definitely top-notch, the simple truth is that we just don't see many people being willing to plunk down as much as $10,000 (or more!) on hardware that is going to be second-tier performance in 12 months.

This gets into one of those dirty little secrets about computers that some companies don't like to discuss. AnandTech of course isn't one of those companies, so let's air the dirty laundry. There are a few truths about extreme performance computer hardware. First, naturally, is that it costs quite a bit of money. Second, you generally get rapidly diminishing returns as you move up the performance ladder. Third, newer and faster products are always just six to twelve months away. Finally, if you take the top performing parts currently on the market and slap them together in a system, the difference in performance between something manufactured by a boutique computer shop (VoodooPC, Falcon Northwest, Alienware, etc.) and something built in your parents' basement is, generally speaking, negligible.

These aren't the only truths, of course. Another point that frequently comes up in enthusiast circles is that overclocking - particularly of CPUs - can save you a truckload of money. Practically speaking, there is no difference in performance between a QX6850 running at 3.0GHz and an overclocked Q6600 running at the same speed (9x333). With the right cooling, you can most likely push both processors up to around 3.5-3.6GHz (9x400), and performance will remain equal. What you do get with the QX6850 is more flexibility and (typically) slightly lower voltages. The unlocked multiplier on the QX6850 (and all of the Core 2 Extreme line) means that adjusting front side bus speeds isn't the only way to affect the CPU clock speed. However, it's difficult to find a good reason to spend over three times as much on the CPU just for that convenience. The best reason to purchase a Core 2 Extreme is honestly if you're not planning on overclocking and you want the best possible guaranteed performance. In that case, you might be more interested in a factory overclocked - and warrantied - system like the Blackbird.

The takeaway from all of this discussion is that our real question in regards to something like the Blackbird 002 is: what can it add to the computing experience that isn't directly related to raw performance? With a bit more time using the system, more configuration options available, and a lot more details on pricing, we should be able to answer that question.
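
As a quick footnote to the overclocking arithmetic above, a Core 2 chip's core clock is simply the base FSB frequency multiplied by the CPU multiplier. The sketch below just plugs in the example figures quoted above; it is illustrative only:

    #include <stdio.h>

    /* Core clock = base FSB frequency x CPU multiplier.
     * The figures below are simply the examples quoted in the text above. */
    static double core_clock_ghz(double fsb_mhz, double multiplier)
    {
        return fsb_mhz * multiplier / 1000.0;
    }

    int main(void)
    {
        printf("QX6850 stock (9x333)      : %.1f GHz\n", core_clock_ghz(333.0, 9.0));
        printf("Q6600 overclocked (9x333) : %.1f GHz\n", core_clock_ghz(333.0, 9.0));
        printf("Either chip at 9x400      : %.1f GHz\n", core_clock_ghz(400.0, 9.0));
        return 0;
    }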

Foxconn MARS: Lab Update

We recently took a first look at the Foxconn MARS motherboard and discovered that Foxconn is finally headed down the right path when it comes to product offerings in the enthusiast sector. Our initial testing of the MARS board revealed a product that is capable of competing with other mid-range enthusiast boards but does not stand out from the pack. However, the MARS board does provide a very competitive feature set, including a BIOS design that caters to the overclocking crowd. This is a vast departure for a company that is heavily involved in the OEM sector.
Foxconn has been busy over the past couple of weeks and is now ready to launch their new Quantum Force product line, which will focus on providing very good price-to-performance products for the enthusiast. With the initial launch of the Intel P35-based Foxconn MARS motherboard also comes the first official retail BIOS release for the board. Foxconn provided the P06 release to us a few days ago, and after testing it we decided to provide a quick update to our original article, which utilized the P03 BIOS.

We are glad to report that the P06 BIOS did not break, damage, or harm any objects or personnel in the labs this last week. To cut to the chase, the P06 BIOS provided some very minor performance improvements, reduced voltage requirements when overclocking, and generally remained extremely stable throughout testing. Foxconn has been extremely diligent in addressing problems and providing solutions quickly during our testing. This bodes well for future customer support in our opinion, although the proof is in the pudding, as we will find out shortly now that the boards are shipping.

Not all is perfect with this BIOS release, as we still cannot get 2x2GB or 4x2GB configurations to work correctly, and there are a few tuning improvements that we would like to see addressed quickly. These include improved 4:5 memory ratio performance when overclocking, the ability to manually set the memory straps, dropping the quick shutdown and reboot procedure after making minor FSB or memory changes, and further balancing of memory/chipset timings during overclocking (they tend to be either very tight or very loose). With that said, we are going to take a quick look at the board's performance with the P06 BIOS now.

Test Setup
Processor=Intel Core 2 Quad Q6600 - Quad-core 2.4GHz, 2x4MB Unified Cache, 9x Multiplier, 1066FSB
CPU Voltage=1.200V Stock
Cooling=Thermalright 120 eXtreme
Power Supply=OCZ 1000W
Memory=Corsair Twin2x2048-10000C5DF
Memory Settings=4-4-4-12 (DDR2-1066)
Video Cards=MSI HD 2900 XT 512MB
Video Drivers=ATI Catalyst 7.9
Hard Drive=Western Digital 7200RPM 750GB SATA 3Gb/s 16MB Buffer
Optical Drives=Plextor PX-B900A, Toshiba SD-H802A
Case=Cooler Master Stacker 830 Evo
BIOS=P03 / P06
Operating System=Windows Vista Home Premium 32-bit

Our test setup did not change except for the BIOS update, and all settings were kept the same, as much as possible, across the platforms tested. Our game tests are run at 1280x1024 HQ settings to ensure our MSI HD 2900 XT is not the bottleneck during testing. All results are reported in our charts and color-coded for easier identification. So, let's take a quick look at the results.

AMD's Newest TV Wonder: Clear QAM For The Masses

For a while now, we've been able to watch over the air (OTA) channels and analog cable on our PCs. TV tuners are nothing new, and the ability to turn an HTPC into a DVR is quite a nice trick. Unfortunately, there are limitations. Many current TV tuners lack the ability to tune in digital cable channels. For viewers in our area, this means anything above channel 75 is out of reach. But there are options for those who want to watch unencrypted digital cable (the channels that come with a basic digital cable subscription) on their PC. The least desirable option is to connect a cable box to the PC; this gets in the way of easily scheduling recordings and the like. Alternately, you can pick up a TV tuner that supports Clear QAM (the type of modulation used for digital cable). While a TV tuner that supports Clear QAM can tune in some digital cable channels, PC owners still won't be able to watch premium or pay-per-view content without a solution that supports a CableCARD. And even with a CableCARD, PC owners aren't able to take advantage of on demand video features. While technologically feasible, the industry has not yet settled on standards for opening up their networks to the two-way communication necessary for on demand and similar functionality.

Today, AMD joins Hauppauge and Pinnacle in offering Clear QAM TV tuners for the PC. This is basically a refresh of the TV Wonder 6xx lineup, as the only major difference is the addition of Clear QAM support for digital cable. This does come with some caveats, though. Let's take a look at AMD's new TV Wonder lineup.

Thursday, September 27, 2007

RAID Primer: What's in a number?

The majority of home users have experienced the agony of at least one hard drive failure. Power users often run into bottlenecks caused by their hard drives when they try to accomplish I/O-intensive tasks. Every IT person who has been in the industry for any length of time has dealt with multiple hard drive failures. In short, hard drives have long caused the majority of support headaches in standard desktop and server configurations, with little hope of improvement in the near term.

With the increased use of computers in the daily lives of people worldwide, the dollar value of data stored on the average computer has steadily increased. Even as MTBF figures have moved from 8000 hours in the 1980s (example: MiniScribe M2006) to the current levels of over 750,000 hours (Seagate 7200.11 series drives), this increase in data value has offset the relative decrease of hard drive failures. The increase in the value of data, and the general unwillingness of most casual users to back up their hard drive contents on a regular basis, has put increasing focus on technologies which can help users to survive a hard drive failure. RAID (Redundant Array of Inexpensive Disks) is one of these technologies.
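
As a rough illustration of what those MTBF figures imply, the annualized failure rate under a constant (exponential) failure-rate model is 1 - e^(-8760/MTBF). The sketch below simply plugs in the two figures quoted above; real drive populations don't fail at a constant rate, so treat the output as a ballpark only:

    #include <stdio.h>
    #include <math.h>

    /* Approximate annualized failure rate implied by an MTBF figure,
     * assuming a constant (exponential) failure rate - a simplification. */
    static double annualized_failure_rate(double mtbf_hours)
    {
        const double hours_per_year = 8760.0;
        return 1.0 - exp(-hours_per_year / mtbf_hours);
    }

    int main(void)
    {
        printf("8,000 hour MTBF   : ~%.0f%% per year\n", 100.0 * annualized_failure_rate(8000.0));
        printf("750,000 hour MTBF : ~%.1f%% per year\n", 100.0 * annualized_failure_rate(750000.0));
        return 0;
    }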

Drawing on whitepapers produced in the late 1970s, the term RAID was coined in 1987 by researchers at the University of California, Berkeley, in an effort to put into practice theoretical gains in performance and redundancy that could be made by teaming multiple hard drives in a single configuration. While their paper proposed certain levels of RAID, the practical needs of the IT industry have brought about several slightly differing approaches. Most common now are:

RAID 0 - Data Striping
RAID 1 - Data Mirroring
RAID 5 - Data Striping with Parity
RAID 6 - Data Striping with Redundant Parity
RAID 0+1 - Data Striping with a Mirrored Copy

Each of these RAID configurations has its own benefits and drawbacks, and is targeted for specific applications. In this article we'll go over each and discuss in which situations RAID can potentially help - or harm - you as a user.
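
To make the parity idea behind RAID 5 and RAID 6 a little more concrete, the sketch below shows single-stripe XOR parity, the mechanism RAID 5 relies on to rebuild a lost block. This is purely illustrative C; real arrays rotate parity across drives and do this work at the controller or driver level:

    #include <stdio.h>

    #define DRIVES 4   /* three data blocks plus one parity block per stripe */
    #define BLOCK  8   /* bytes per block, kept tiny for illustration */

    int main(void)
    {
        unsigned char stripe[DRIVES][BLOCK] = { "DATA-01", "DATA-02", "DATA-03", "" };

        /* Parity block = XOR of all data blocks in the stripe. */
        for (int d = 0; d < DRIVES - 1; d++)
            for (int b = 0; b < BLOCK; b++)
                stripe[DRIVES - 1][b] ^= stripe[d][b];

        /* Simulate losing drive 1, then rebuild its block from the survivors. */
        unsigned char rebuilt[BLOCK] = { 0 };
        for (int d = 0; d < DRIVES; d++) {
            if (d == 1)
                continue;   /* the failed drive */
            for (int b = 0; b < BLOCK; b++)
                rebuilt[b] ^= stripe[d][b];
        }

        printf("Rebuilt block from drive 1: %s\n", (char *)rebuilt);   /* prints DATA-02 */
        return 0;
    }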

HP Blackbird 002: Back in Black

Whether it's cars, aircraft, houses, motorcycles, or computers, people always seem to like hearing about the most exotic products on the planet. HP's latest and greatest desktop computer offering bears the name of one of the most mystical aircraft of all time, the SR-71 Blackbird. We can't say for sure whether the name actually comes from the famous surveillance aircraft or not, but we would venture to say it does. After all, besides the name, the two have quite a few other attributes in common.

The SR-71 Blackbird was on the cutting edge of technology, pushing the boundaries of what was deemed achievable. It was the first aircraft designed to reduce its radar signature, and while it largely failed in this respect, it helped pave the way for future stealth aircraft. Perhaps more notably, the Blackbird was the fastest aircraft ever produced, officially reaching speeds of Mach 3.2 and unofficially reaching even higher; the actual top speed remains classified to this day. The extremely high speeds required some serious out-of-the-box thinking to achieve, so the Blackbird was built from flexible panels that actually fit loosely together at normal temperatures; only after the aircraft heated up from air friction would the panels fit snugly, and in fact the SR-71 would leak fuel while sitting on the runway before takeoff. After landing, the surface of the jet was so hot (above 300°C) that maintenance crews had to leave it alone for several hours to allow it to cool down.

So how does all of that relate to the HP Blackbird 002? In terms of components and design, the Blackbird is definitely on the cutting edge, and it features several new "firsts" in computers. When we consider that the Blackbird comes from a large OEM that doesn't have a reputation for producing such designs, some of these firsts are even more remarkable. Talking about the temperatures that the SR-71 reached during flight was intentional, because the Blackbird 002 can put out a lot of heat. No, you won't need to let it cool down for several hours after running it, but the 1100W power supply is definitely put to good use. If electricity is the fuel of the 002, saying that it leaks fuel while sitting idle definitely wouldn't be an overstatement. And last but not least, the Blackbird 002 is fast - extremely fast - easily ranking among the best when it comes to prebuilt desktop computers.

Where did all of this come from? We are after all talking about HP, a company that has been in the computer business for decades, and during all that time they have never released anything quite like this. Flash back to about a year ago, when HP acquired VoodooPC, a boutique computer vendor known for producing extremely high-performance computers with exotic paint jobs and case designs - with an equally exotic price. The HP Blackbird 002 represents the first fruits of this merger, and while it may not be quite as exotic as the old VoodooPC offerings in all respects, it certainly blazes new trails in the world of OEM computers. There's clearly a passion for computer technology behind the design, and even if we might not personally be interested in purchasing such a computer, we can certainly appreciate all the effort that has gone into creating this latest "muscle car" - or pseudo-stealth aircraft, if you prefer.

Dell 2407WFP and 3007WFP LCD Comparison

Founded in 1984, Dell is one of the largest computer electronics companies in the world, currently ranking a strong #2 to HP in terms of computer systems shipped. When you sell that many computers, it's not at all surprising that you also sell quite a few displays. A large portion of Dell's sales come from the business sector, and businesses were one of the first areas that really pushed for the more compact LCDs. One of the goals of any successful business is to reduce costs and increase profit margins, and one way to accomplish that is by bringing manufacturing in-house. Back in the days of CRTs, many large OEMs would simply take a proven display and brand it with their own name, but with LCDs they've taken that a step further. What started as merely one component to be sold with any new computer system has grown into a sizable market all its own, and nearly every large OEM now has a line of LCDs that they manufacture and sell with their systems.

Apple was one of the first companies to come out with very large LCDs with their Cinema Display line, catering to the multimedia enthusiasts that have often appreciated Apple's systems. Dell followed their lead when they launched the 24" 2405FPW several years ago, except that with their larger volumes they were able to offer competing displays at much more attractive prices. In short order, the 800 pound gorilla of business desktops and servers was able to occupy the same role in the LCD market. Of course, while many enthusiasts wouldn't be caught running a Dell system, the most recent Dell LCDs have been received very favorably by all types of users - business, multimedia, and even gaming users feel right at home on a Dell LCD. Does that mean that Dell LCDs are the best in the world? Certainly not, but given their price and ready worldwide availability, they have set the standard by which most other LCDs are judged.

In 2006, Dell launched their new 30" LCD, matching Apple's 30" Cinema Display as the largest commonly available computer LCD on the market. Dell also updated most of their other LCD sizes with the xx07 models, which brought improved specifications and features. These displays have all been available for a while, but we haven't had a chance to review them until now. As we renew our LCD and display coverage on AnandTech, and given the number of users already familiar with the Dell LCDs, we felt it was important to take a closer look at some of these models in order to help establish our baseline for future display reviews.

We recently looked at the Gateway FPD display in our first LCD review in some time, and we compared it with the original Dell 24" LCD, the 2405FPW. In response to some comments and suggestions, we have further refined our LCD reviewing process and will be revisiting aspects of both of the previously tested displays. However, our primary focus is going to be on Dell's current 24" and 30" models, the 2407WFP and 3007WFP. How well do these LCDs perform, where do they excel, and where is there room for improvement? We aim to provide answers to those questions.

AMD's New Gambit: Open Source Video Drivers

As the computer hardware industry has matured, it has settled into a very regular and predictable pattern. Newer, faster hardware will come out, rivals will fire press releases back and forth showcasing that their product is the better one, price wars will break out, someone cheats now and then, someone comes up with an even more confusing naming scheme, etc. The fact of the matter is that in the computer hardware industry, there's very little that actually surprises us. We aren't psychic and can't predict when the above will happen or to whom, but we can promise you that it will happen to someone, and that it will happen again a couple of years after that. The computer hardware playbook is well established, and there's not much that goes on that deviates from it.

So we have to admit that we were more than a little surprised when AMD told us earlier this month that they intended to do something well outside of the playbook, something that we thought was practically impossible: they were going to officially back and provide support for open source drivers for their video cards, in order to establish a solid, full-featured open source Linux video driver. The noteworthiness of this stems from the fact that the GPU industry is incredibly competitive and consequently incredibly secretive about plans and hardware. To allow modern, functional open source video drivers to be made, a great deal of specifications must be released so that programmers may learn how to properly manipulate the hardware, and this flies in the face of the secretive nature of how NVIDIA and ATI go about their hardware and software development. Yet AMD has begun to take the steps required to pull this off, and we can't help but be befuddled by what's going on, nor can we ignore the implications.

Before we go any further, however, we should first talk quickly about what has led up to this, as there are a couple of issues that have directly led to what AMD is attempting to do. We'll start with the Linux kernel and the numerous operating system distributions based upon it.

Unlike Windows and Mac OS X, the Linux kernel is not designed for use with binary drivers, that is, drivers supplied pre-compiled by a vendor and plugged into the operating system as a kind of black box. While it's possible to make Linux work with such drivers, there are several roadblocks to doing so, among them the lack of a stable application programming interface (API) for writing such drivers. The main Linux developers do not want to hinder the development of the kernel, and a stable driver API would do just that by forcing them to avoid making any changes or improvements in that section of the code that would break the API. Furthermore, not supporting a stable driver API encourages device makers to release open source drivers, in line with the open source philosophy of the Linux kernel itself.

This is in direct opposition to how AMD and NVIDIA prefer to operate, as releasing open source drivers would present a number of problems for them, chief among them exposing how parts of their hardware work when they want to keep that information secret. As a result, both have released only binary drivers for their products, including their Linux drivers, while doing the best they can to work around any problems that the lack of a stable API may cause.

For a number of reasons, AMD's video drivers for Linux have been lackluster. NVIDIA has set the gold standard of the two, as their Linux drivers perform very close to their Windows drivers and are generally stable. Meanwhile, AMD's drivers have at times performed half as well, and there have been several notable stability issues. AMD's Linux drivers aren't by any means terrible (nor are NVIDIA's drivers perfect), but they're not nearly as good as they should be.

Meanwhile, the poor quality of the binary drivers has given AMD's graphics division a poor name in the open source community. While we have no reason to believe that this has significantly impacted AMD's sales, since desktop usage of Linux is still low (and gaming even lower), it's still not a reputation AMD wants to have, as it can eventually bleed over into the general hardware and gaming communities.

This brings us back to the present, and what AMD has announced. AMD will be establishing a viable open source Linux driver for their X1K and HD2K series video cards, and will continue to provide their binary drivers simultaneously. AMD will not be providing any of their current driver code for use in the open source driver - this would break licensing agreements and reveal trade secrets - rather, they want the open source driver built from the ground up. Furthermore, they will not be directly working on the driver themselves (we assume all of their on-staff programmers are "contaminated" from a legal point of view) and instead will be having the open source community build the drivers, with Novell's SuSE Linux division leading the effort.

With that said, their effort is just starting and there are a lot of things that must occur to make everything come together. AMD has done some of those things already, and many more will need to follow. Let's take a look at what those things are.

Saturday, September 15, 2007

Building a Better (Linux) GPU Benchmark

For those who follow our Linux reviews, we have made a lot of headway in the last two months. Our benchmarking has improved, our graph engine is top notch, and we are working closely with all the major manufacturers to bring a definitive resource for Linux hardware to our readers. Today, we want to introduce everyone to our Linux GPU benchmarks and how we will run them in the future. This isn't a comparative analysis yet, but we won't keep you waiting long for that.

The inherent flaw with any benchmark is that you, the reader, only receive a sampling of data - engineers and statisticians alike call this "data compression". When we sample data from a timedemo and format it into an average frames per second, we lose all sorts of valuable data, such as what the lowest frame rate was, what the highest was, when the largest dip in FPS occurred, what the image looked like, and the list goes on. There have been a few attempts to convey more than just an average FPS in video benchmarks, most notably with FRAPS. However, FRAPS does not entirely address the issue of reproducibility, and it runs on Windows only. Fortunately, we have been graced with some very talented programmers who worked with us to build a benchmarking utility similar to FRAPS (on Linux) that we may eventually port over to Windows as well. Consider this to be our experiment in advancing our benchmarking methods while using Linux as our guinea pig. Eventually, we anticipate releasing the benchmark complete with source to the public. Here is how our utility works, as explained by the lead developer, Wiktor Kopec.
"The program computes frames per second for an application that uses OpenGL or SDL. It also takes screenshots periodically, and creates an overlay to display the current FPS/time."This is accomplished by defining a custom SwapBuffers function. For executables that are linked to GL at compile time, the LD_PRELOAD environment variable is used to invoke the custom SwapBuffers function. For executables that use run-time linking - which seems to be the case for most games - a copy of the binary is made, and all references to libGL and the original glXSwapBuffers function are replaced by references to our library and the custom SwapBuffers function. A similar procedure is done for SDL. We can then do all calculations on the frame buffer or simply dump the frame at will."You can read more about SDL and OpenGL. SDL is a "newer" library bundled with most recent Linux games (Medal of Honor: AA, Unreal Tournament 2004). In many ways, SDL behaves very similarly to DirectX for Linux, but utilizes OpenGL for 3D acceleration.

Dell 2707WFP: Looking for the Middle Ground of Large LCDs

We've taken a look at several high-end 30" LCDs recently, like the HP LP3065 and the Dell 3007WFP. While these are undoubtedly nice monitors, many people have a few concerns with them. One of the major problems is that they require a dual-link DVI connection, so they essentially require a higher end graphics card than what many people have. Hooking them up to a notebook is also generally out of the question, with a few exceptions. They are also quite large, but with their 2560x1600 native resolution they still have a very fine pixel pitch. Some will think that's a good thing, while those who are more visually challenged [Ed: raises hand] might prefer a slightly lower native resolution.

Furthermore, while nearly everyone will agree that running your LCD at its native resolution is the best solution, gaming on a 30" LCD at 2560x1600 requires some serious graphics horsepower. Then there's the lack of input options on the 30" LCDs; due to a lack of any scaler ICs that can handle the native resolution, the displays only support dual-link DVI connections (or single-link with a very limiting 1280x800 resolution, with a few caveats).

This is not to say that 30" LCDs are bad; merely that they are not a solution that everyone finds ideal. Enter the 27" LCD panel. There are definitely people who would like something slightly larger than a 24" LCD, but they don't want to deal with some of the aforementioned problems with 30" LCDs. These people basically have a few options. First, they could always look at some of the 1080p HDTV solutions, which are currently available in 32", 37", 42", and several larger sizes. If resolution isn't a concern, there are plenty of other HDTV solutions out there, but those are less than ideal for computer work. The other option, and the one we'll be looking at today, is to get something like Dell's 27" 2707WFP.

We've already looked at Dell's 2407WFP and 3007WFP, so we refer back to the earlier review for anyone interested in additional information about Dell's other LCDs, warranty, and support policies. Our primary focus here is going to be on how the 2707WFP compares to both the slightly larger and slightly smaller offerings on the market.

One of the factors that many people are going to be interested in is the pixel pitch of the various LCD offerings. We've compiled a list of typical pixel pitch sizes for a variety of LCD panels and resolutions. Some people feel a smaller pixel pitch is always more desirable, and while that might be true for some uses, reading text on an extremely fine pixel pitch can at times be difficult for some of us. If you've used a 15" laptop with a 1920x1200 resolution, you will hopefully understand. We know plenty of users who find the typical 17" LCDs uncomfortable to use at the native 1280x1024 resolution, which is why many people prefer 19" LCDs. (Modifying the DPI setting of Windows can help in some areas, but there are quirks to changing the DPI from the default 96dpi setting.)
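
That pixel pitch list isn't reproduced here, but the figure is easy to derive yourself: pixel pitch is simply the physical diagonal divided by the diagonal resolution in pixels. The quick sketch below uses panel sizes and resolutions discussed in this article; treat the output as approximate, since panel makers quote slightly different active-area dimensions:

    #include <stdio.h>
    #include <math.h>

    /* Pixel pitch (mm) = physical diagonal / diagonal resolution in pixels. */
    static double pixel_pitch_mm(double diag_inches, int w, int h)
    {
        return (diag_inches * 25.4) / sqrt((double)w * w + (double)h * h);
    }

    int main(void)
    {
        printf("24\" 1920x1200: %.3f mm\n", pixel_pitch_mm(24.0, 1920, 1200)); /* ~0.270 */
        printf("27\" 1920x1200: %.3f mm\n", pixel_pitch_mm(27.0, 1920, 1200)); /* ~0.303 */
        printf("30\" 2560x1600: %.3f mm\n", pixel_pitch_mm(30.0, 2560, 1600)); /* ~0.252 */
        return 0;
    }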

As those pixel pitch figures show, the 27" LCDs currently boast the largest pixel pitch outside of HDTV offerings. However, the difference between a 15" or 19" pixel pitch and that of the 2707WFP is really quite small. If you're one of those who feel a slightly larger pixel pitch is preferable - for whatever reason - the 2707WFP doesn't disappoint. Dell has made some other changes relative to their other current LCD offerings, however, so let's take a closer look at this latest entrant into the crowded LCD market.

Low Power Server CPU Redux: Quad-Core Comes to Play

A couple of months ago, we took a look at the low voltage (LV) server CPU market. At the time, we focused on four-way solutions using two dual-core processors, since those represent the largest slice of the server pie. Our conclusion was that while the power savings brought about by using low voltage CPUs were real, processor choice was only one part of the equation. AMD came out ahead overall in performance per watt, not because they were faster or because their CPUs used less power, but rather because their platform as a whole offered competitive performance while using less power.

We discussed previously exactly what's involved in a low voltage part, but of course the picture is far bigger than just talking about power requirements. Take for example Intel's low-voltage Woodcrest parts; they are rated at 40W compared to the regular Woodcrest parts that are rated at 80W. The price premium for upgrading to a low-voltage part varies; in the case of AMD it's typically anywhere from $100 to $300 per CPU, while on the Intel side some low-voltage parts cost more, the same, or even less than the regular parts (i.e., the Xeon 5140 currently sells for about $450 while the low voltage Xeon 5148 only costs $400). Regardless of price, it's difficult to justify low-voltage processors in terms of power bill savings. An extra 40W of power in a device running 24/7 for an entire year works out to around $35 per year, so at the low end of the equation you would need a minimum of three years to recoup the investment (at which point it's probably time to upgrade the server). Other factors are usually the driving consideration.

Saving 40W per CPU socket may not save you money directly in terms of power bills, but generally speaking these chips are going into servers that sit in a datacenter. Air conditioning for the datacenter typically has costs directly related to the amount of power being consumed, so every 40W of power you can save could end up saving another 20W-40W of power in air conditioning requirements. That's still not even the primary concern for a lot of companies, though. Datacenters often run dozens or even hundreds of servers within a single large room, and the real problem is making sure that there's enough power available to run all of the equipment. The cost of building a datacenter is anything but cheap, and if you can pack more processing power into the same amount of space, that is where low-voltage parts can really become useful. Blade servers were specifically created to address this requirement, and if you can reduce the total power use of the servers by 20%, some companies could choose to run 20% more servers.

Of course, that doesn't mean that every company out there is interested in running a datacenter with hundreds of computers, so individual businesses need to look at what sort of server setup will best fit their needs. After determining that, they need to look at low-voltage CPUs and decide whether or not they would actually be helpful. Assuming low-voltage parts are desired, the good news is that it's extremely easy to get them in most modern servers. Dell, HP, and other large server vendors usually include low-voltage parts as an easy upgrade for a small price premium. And that brings us to our low-voltage CPU update.

Intel Quad G-Stepping

Intel doesn't seem to sit still these days, pushing the power and performance envelope further and further. Recently, Intel announced two new G-stepping quad-core parts. The new parts run at the extreme ends of the power consumption spectrum. The first is a 2.0GHz 1333FSB part that runs at 50W, while the second is a 3.0GHz 1333FSB part that runs at 120W. There are two main changes to the G-stepping parts, the first of which is power consumption: G-stepping introduces optimizations for idle state power. The second change involves enhancements to the Virtualization Extensions (VT), which mainly improve interrupt handling when virtualizing 32-bit Microsoft Windows operating systems.

Of course, we would be remiss if we didn't mention AMD's recently launched Barcelona processor here. AMD expects their new quad-core processor to run within the same power envelope as the previous dual-core Opterons, which means twice as many CPU cores potentially without increasing power requirements, resulting in a potential doubling of performance per watt at the socket level. Low-voltage (HE) Barcelona parts will still be available, but even the regular chips include many new enhancements to help with power requirements. We are doing our best to get some additional Barcelona servers in-house in order to test this aspect of the performance/power equation, and we hope to follow up in the near future.

One final item worth mentioning is that Intel's 45nm Harpertown refresh of Clovertown is due out in the very near future, which is one more item we can look forward to testing. Unlike the desktop world, however, acquiring and testing server products often requires a lot more time and effort. Even with the appropriate hardware, the sort of benchmarks we run on servers can often take many hours just to complete a single test, and there are many parameters that can be tuned to improve performance. Since there aren't a lot of early adopters in the server market, though, we should be able to provide you with results before any of the IT departments out there are ready to upgrade. Now let's get on to the testing.
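
As a quick footnote, the payback arithmetic used earlier in this article (an extra 40W running 24/7 costing roughly $35 per year) is easy to reproduce. The electricity rate of about $0.10 per kWh is our assumption rather than a quoted figure:

    #include <stdio.h>

    /* Annual cost of a constant load running 24/7.
     * The $0.10/kWh rate is an assumed figure, not a quoted one. */
    static double annual_cost_usd(double watts, double usd_per_kwh)
    {
        return watts / 1000.0 * 24.0 * 365.0 * usd_per_kwh;
    }

    int main(void)
    {
        printf("40W, 24/7, $0.10/kWh: $%.2f per year\n", annual_cost_usd(40.0, 0.10));
        return 0;
    }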

Wednesday, September 12, 2007

Optio X: A Look at Pentax's Ultra Thin 5MP Digicam

The Optio X certainly has a unique appearance, with a twistable body, and it is the slimmest model in the Optio series. With a stylish black and silver body, the Optio X is one of Pentax's latest feature-packed 5 megapixel digicams. The Optio X features a 3x optical zoom lens and 15 different recording modes as well as movie and voice memo functionality. In addition, the camera offers advanced functions such as manual white balance and 5 different bracketing options. In our review of this camera, we discovered several strengths and weaknesses. For starters, it isn't the fastest camera available, but in most cases it puts in a decent performance. For example, it has respectable startup and write speeds. Unfortunately, the autofocus can be somewhat slow; in our test, the camera took nearly a full second to focus and take a picture. With respect to image quality, the Optio X generally does a good job taking even exposures with accurate color. However, we found clipped shadows and highlights in some of our samples, in addition to jaggies along diagonal lines. Read on for more details of the ultra-thin Optio X.

ASUS P5N32-E SLI Plus: NVIDIA's 650i goes Dual x16

When the first 680i SLI motherboards were launched back in November, they offered an incredible array of features and impressive performance to boot. However, all of this came at a significant price of $250 or more during the first month of availability. We thought additional competition from the non-reference board suppliers such as ASUS, abit, and Gigabyte would drive prices down over time. To a certain extent the opposite happened with the non-reference board suppliers, as prices zoomed above the $400 mark for boards like the ASUS Striker Extreme. The reference board design from suppliers such as EVGA and BFG has dropped to near $200 recently, but we are still seeing $300 plus prices for the upper end ASUS and Gigabyte 680i boards.

ASUS introduced the P5N32-E SLI board shortly after the Striker Extreme as a cost-reduced version of that board in hopes of attracting additional customers. While this was a good decision, that board did not compete too well against the reference 680i boards in the areas of performance, features, and cost. With necessity being the mother of invention, ASUS quickly went to work on a board design that would offer excellent quality and performance at a price point at least half that of the Striker Extreme.

They could not get there at the time with the 680i SLI chipset, and the recently released cost-reduced 680i LT SLI chipset was not available, so ASUS engineered their own solution to meet a market demand for a sub-$200 motherboard offering the features and performance of the 680i chipset. They took the recently introduced 650i SLI SPP (C55) and paired it with the 570 SLI MCP (MCP55) utilized in the AMD 570/590 SLI product lines. ASUS calls this innovative melding of Intel SPP and AMD MCP based chipsets their Dual x16 Chipset with HybridUp Technology. Whatever you want to call it, we know it just flat out works, and it does so for around $185, as you will see in our test results shortly.

Before we get to our initial performance results and discussion of ASUS's P5N32-E SLI Plus hybrid board design, we first need to explain the differences between it and the 680i/680i LT motherboards. In an interesting turn of events, we find the recently introduced 680i LT SLI chipset also utilizing the nForce 570 SLI MCP with the new/revised 680i LT SPP, while the 680i SLI chipset utilizes the nForce 590 SLI MCP and the 680i SLI SPP.

With the chipset designations out of the way, let's get to the real differences. All three designs officially support front-side bus speeds up to 1333MHz, so the upcoming Intel processors are guaranteed to work, and each design offers very good to excellent overclocking capabilities, with the 680i SLI offering the best overclocking rates to date in our testing. Each board design also offers true dual x16 PCI Express slots for multi-GPU setups, with ASUS configuring the dual x8 capability of the 650i SPP as a single x16 setup and the second x16 slot being provided off the MCP, as in the other solutions. We are still not convinced of the performance advantages of the dual x16 designs over the dual x8 offerings in typical gaming or application programs. We only see measurable differences between the two solutions once you saturate the bus bandwidth at 2560x1600 resolutions with ultra-high quality settings, and even then the performance differences are usually less than 5%.

The 680i SLI and the ASUS hybrid 650i boards offer full support for Enhanced Performance Profile (SLI-Ready) memory at speeds up to 1200MHz, with the 680i LT only offering official 800MHz support. However, this only means you will have to tweak the memory speed and timings yourself in the BIOS, something most enthusiasts do anyway. We had no issue running all three chipset designs at memory speeds up to 1275MHz when manually adjusting the timings.

Other minor differences: the 680i SLI and ASUS Plus boards offer LinkBoost technology, which has shown zero to very minimal performance gains in testing. Both boards also offer a third x16 physical slot that operates at x8 electrical to provide "physics capability" - something else that has not been introduced yet. Fortunately this slot can be used for PCI Express devices up to x8 speeds, so it is not wasted. Each design also offers dual Gigabit Ethernet connections with DualNet technology and support for 10 USB devices. The 680i LT SLI design offers a single Gigabit Ethernet connection, support for 8 USB devices, and does not support LinkBoost or a third x16 physical slot.

What we basically have is the ASUS hybrid design offering the same features as the 680i SLI chipset at a price point near that of the feature-reduced 680i LT SLI setup. This leads us into today's performance review of the ASUS P5N32-E SLI Plus. In our article today we will go over the board layout and features, provide a few important performance results, and discuss our findings with the board. With that said, let's take a quick look at this hybrid solution and see how well it performs against the purebreds.
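
For reference on the dual x16 versus dual x8 question, first-generation PCI Express moves roughly 250MB/s per lane in each direction after 8b/10b encoding, so the theoretical ceilings are easy to tally. This ignores protocol overhead, which is part of why real-world differences are even smaller than the raw numbers suggest:

    #include <stdio.h>

    /* PCI Express 1.x: 2.5GT/s per lane with 8b/10b encoding, or roughly
     * 250MB/s per lane in each direction before protocol overhead. */
    static double pcie1_mb_per_s(int lanes)
    {
        return lanes * 250.0;
    }

    int main(void)
    {
        printf("x8 slot : ~%.0f MB/s per direction\n", pcie1_mb_per_s(8));
        printf("x16 slot: ~%.0f MB/s per direction\n", pcie1_mb_per_s(16));
        return 0;
    }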

Gigabyte GA-P35T-DQ6: DDR3 comes a knocking, again

The recent introduction of Intel's new P35 chipset brought with it the official introduction of the 1333FSB and DDR3 support for Intel processors. The P35 chipset is also the first chipset to officially support the upcoming 45nm CPU architecture. We reviewed the P35 chipset and the new Intel ICH9 Southbridge in detail and found the combination to offer one of the best, if not the best, performance platforms for Intel's Core 2 Duo family of processors.

This does not necessarily mean the P35 is the fastest chipset on paper or by pure design; it's just that the current implementation of this "wünder" chip by the motherboard manufacturers has provided us with the overall top performing chipset in the Intel universe at this time. Of course this could change at a moment's notice based upon new BIOS or chipset releases, but the early maturity and performance levels of the P35 have surprised us.

Our first look at DDR3 technology provides a glimpse of where memory technology is headed for the next couple of years. We do not expect widespread support for DDR3 until sometime in 2008, but with the right DDR3 modules we have seen performance equaling or bettering that of current DDR2 platforms. However, this does not mean DDR2 memory technology is stagnant; far from it, as we will soon see standard DDR2-1066 modules with fairly low latencies running at 1.8V, with overclocking capabilities up to or exceeding DDR2-1500 in some cases.

What immediate impact this will have on the DDR3 memory market is unclear right now. Based on our early information, we should see the more performance oriented DDR2 motherboards outperforming their DDR3 counterparts until DDR3 latencies and speeds are greatly improved. We do expect these improvements to come, just not quickly enough to hold off the initial onslaught of DDR2-1066 and what is shaping up to be some impressive overclocking capabilities.

That brings us to today's discussion of the Gigabyte GA-P35T-DQ6 motherboard, based upon the Intel P35 chipset with full DDR3 compatibility. This motherboard is Gigabyte's current flagship product, and we expect it to launch on or right after June 4th. Gigabyte was kind enough to provide us with a full retail kit for our preview article today.

The P35T-DQ6 motherboard is based on the same platform utilized by its DDR2 counterpart, the P35-DQ6, which has already provided an excellent performance alternative to the ASUS P5K series of motherboards. As we stated in our preview article, making a choice between the current P35 motherboards is difficult and is largely dependent upon the user's requirements. We initially found the ASUS boards to be slightly more mature, as they offer a performance oriented BIOS with some additional fine tuning options not available in the other boards. However, that is quickly changing as we receive BIOS updates and new board designs from other manufacturers. We will provide an answer as to which board we think best exemplifies the performance and capability of the P35 chipset in our roundup coming in the latter part of June.

In the meantime, we have our second DDR3 board in-house for testing and will provide some early results with this somewhat unique motherboard, which brings an excellent level of performance to the table. The question remains whether this board can outperform the ASUS P5K3 Deluxe, and we hope to provide some early answers to that question today. Let's take a quick glimpse at the Gigabyte GA-P35T-DQ6 now and see how it performs.

Sunday, September 9, 2007

Canon Digital Rebel XT: Hardly an Entry-Level DSLR

Recently, Canon introduced the EOS 350D (also called the Rebel XT) as their entry-level SLR, replacing the EOS 300D. Although our review model is silver and black, an all-black model is also available, giving the camera a more professional appearance. The new Rebel has a large 8 megapixel sensor that can shoot images as JPEG or RAW files. Other than the higher resolution, some of the exciting upgrades include a smaller and lighter body, the ultra-fast DIGIC II image processor, a larger buffer, selectable metering and AF modes, and Custom Functions. In fact, the 350D has so many improvements over the 300D that it actually shares more in common with the prosumer EOS 20D. Given that so many people are casting aside their fixed lens point-and-shoot cameras to venture into the digital SLR world, we thought it would be well worth a look at one of the most popular options.
In our review, we found that the 350D acts nothing like an entry-level camera. It has an instant startup time just like the 20D and its cycle/write times are nearly identical. The 350D is capable of capturing extraordinary detail and offers several parameters to adjust the in-camera processing levels. In our noise test, the 350D shows impressive noise control and produces surprisingly clean images throughout the ISO range. Read on for a full review of this remarkable camera to see why it might be your first digital SLR.

Saturday, September 8, 2007

Toshiba Satellite X205-S9359 Take Two: Displays and Drivers

Last week we took a first look at the Toshiba Satellite X205-S9359, a laptop sporting NVIDIA's fastest current mobile DirectX 10 offering. We took a look at gaming performance and general application performance and came away relatively impressed, but we weren't finished testing just yet. We're here today to wrap up our review of the X205, with a look at the LCD, additional performance benchmarks - including DirectX 10 results for a couple of games - and some commentary on other features of this laptop.
Jumping right into the thick of things, let's take a moment to discuss preinstalled software. Most OEMs do it, and perhaps there are some users out there who actually appreciate all of the installed software. Some of it can be truly useful, for instance applications that let you watch high-definition movies on the included HD DVD drive. If you don't do a lot of office work, Microsoft Works is also sufficient for the basics, although most people should probably just give in and purchase a copy of Microsoft Office. We're still a bit torn about some of the UI changes in Office 2007, but whether you choose the new version or stick with the older Office 2003, the simple fact of the matter is that most PCs need a decent office suite.

The standard X205 comes with a 60 day trial version of Office 2007 installed, however, which is unlikely to satisfy the needs of most users. If you don't need Office and will be content with Microsoft Works, there's no need to have a 60 day trial. Conversely, if you know you need Microsoft Office, there's no need to have Works installed, and you would probably like the option to go straight to the full version of Office 2007. There is an option to purchase Microsoft Office Home and Student 2007 for $136, but it's not entirely clear if that comes in a separate box or if the software gets preinstalled - and if it's the latter, hopefully Microsoft Works gets removed as well. As far as we can tell, Toshiba takes a "one-size-fits-all" approach to software, and we would really appreciate a few more options. In fact, what we would really appreciate is the ability to not have a bunch of extra software installed.

A snapshot of Windows Vista's Add/Remove Programs tool for the X205 we received, prior to installing or uninstalling anything, shows that the vast majority of the extra software is what we would kindly classify as "junk". All of it is free and easily downloadable for those who are actually interested in having things like WildTangent games on their computer. We prefer to run a bit leaner in terms of software, so prior to conducting most of our benchmarks we had to uninstall a lot of it, which took about an hour and several reboots. If this laptop is in fact intended for the "gamer on the go", which seems reasonable, we'd imagine that most gamers would also prefer to get a cleaner default system configuration. Alienware did an excellent job in this area, and even Dell does very well with their XPS line. For now, Toshiba holds the record for having the most unnecessary software preinstalled.

Gateway FX530: Mad Cows and Quad Core Overclocking

There are a few things that we tend to take for granted in life: death, taxes, and a lack of overclocking support on PCs from major OEMs. Certainly, there are many people that don't need overclocking, and those are exactly the people that tend to purchase name brand PCs in the first place. If your typical computer use doesn't get much more complex than surfing the Internet, the difference between a massively overclocked CPU and the stock configuration is hardly going to be noticeable. What's more, overclocking tends to come with drawbacks. System stability is frequently suspect, and outside of a few boutique computer shops that factory overclock systems, you will generally void your warranty by overclocking.

Conversely, overclocking gives those willing to take a chance the ability to squeeze extra performance out of even the top performing parts. Alternately, consumers can save money by purchasing a cheaper processor and running it at speeds that would normally cost two or three times as much. Intel's latest Core 2 processors have rekindled interest in overclocking, in part simply because they overclock so well. In the past, the benchmark for highly overclockable chips has generally been set at 50% or more, with good overclocking chips achieving a 25% to 50% overclock. Core 2 Duo blows away some of these old conventions, with some chips like the E4300 managing even massive 100% overclocks! And they manage this without breaking a sweat. With chips that overclock so well, it seems a shame to run them at stock speeds.

Over the years, we have seen a few factory overclocked systems, but rarely from a major OEM. The big OEMs like Dell, Gateway, HP, etc. tend to play it safe, but Gateway has broken with tradition by releasing a significantly overclocked Core 2 Extreme QX6700 system. What's more, they have done it at a price that is likely to turn a lot of heads - and yes, the factory warranty remains intact. We talked in the past about the type of people that actually can make use of a quad core system, and the people that are likely to want a quad core processor are often the people that stand to benefit the most from additional performance courtesy of overclocking. With Intel's QX6700 already reigning supreme as the fastest multi-core configuration on the market, why not add another 20% performance? We've seen similar configurations for sale from boutique manufacturers, often with astronomical prices. While the QX6700 certainly won't be cheap no matter how you slice it, Gateway offers their 20% overclock for a modest $100 price increase. Considering the price difference between the Q6600 and the QX6700 is $150 for a 266 MHz speed increase, doubling that speed increase for a mere $100 is a real bargain!

A super fast processor sounds great, especially if it still carries a factory warranty. However, warranties don't mean a lot if the system won't run stably. Besides the processor, there are many other components that can affect system performance. The type of work you plan on doing with the computer will also affect how much benefit a fast CPU gets you. We'll assume right now that anyone planning on purchasing a quad core system routinely needs a lot of CPU power, but unfortunately there are still CPU intensive tasks that can't properly utilize multiple processor cores. In order to see just how much faster this Gateway system is compared to other options, we will be comparing performance results with the test systems used in our AMD Quad FX article.
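For those who like to see the numbers spelled out, here is a quick back-of-the-envelope sketch in C++; the stock clocks are the published values and the dollar figures are simply the prices quoted above, used here as assumed inputs rather than anything Gateway publishes.

#include <cstdio>

int main() {
    const double q6600_ghz  = 2.40;   // Core 2 Quad Q6600 stock clock
    const double qx6700_ghz = 2.66;   // Core 2 Extreme QX6700 stock clock
    const double oc_factor  = 1.20;   // Gateway's advertised 20% factory overclock

    double qx6700_oc   = qx6700_ghz * oc_factor;            // roughly 3.2 GHz
    double step_mhz    = (qx6700_ghz - q6600_ghz) * 1000.0; // ~266 MHz for $150
    double oc_gain_mhz = (qx6700_oc - qx6700_ghz) * 1000.0; // ~533 MHz for $100

    std::printf("QX6700 overclocked: %.2f GHz\n", qx6700_oc);
    std::printf("Q6600 -> QX6700: +%.0f MHz for $150 (~$%.2f per MHz)\n",
                step_mhz, 150.0 / step_mhz);
    std::printf("Factory overclock: +%.0f MHz for $100 (~$%.2f per MHz)\n",
                oc_gain_mhz, 100.0 / oc_gain_mhz);
    return 0;
}

In other words, the factory overclock buys roughly twice the clock speed increase of the Q6600-to-QX6700 step for two thirds of the money, which is where the "bargain" framing comes from.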
Before we get to the actual performance, however, let's take a closer look at the FX530.

A Messy Transition (Part 3): Vista Buys Some Time


As we saw in part 1 of this series, large applications and games under Windows are getting incredibly close to hitting the 2GB barrier, the amount of virtual address space a traditional Win32 (32-bit) application can access. Once applications begin to hit these barriers, many of them will start acting up and/or crashing in unpredictable ways, which makes resolving the problem even harder. Developers can work around these issues, but none of the workarounds - short of building less resource intensive games or switching to 64-bit - will properly solve the problem without creating other serious issues.
Furthermore, as we saw in part 2, games are consuming greater amounts of address space under Windows Vista than under Windows XP. This makes Vista less suitable for gaming, even though it will be the version of Windows that carries the computing industry through the transition to 64-bit operating systems as the new standard. Microsoft knew about the problem, but up until now we were unable to get further details on what was going on and why. As of today that has changed.
Microsoft has published knowledge base article 940105 on the matter, and with it has finalized a patch to reduce the high virtual address space usage of games under Vista. From this and our own developer sources, we can piece together the problem that was causing the high virtual address space issues under Vista.
As it turns out, our initial guess that the issue was related to memory allocations being limited to the 2GB of user space for security reasons was wrong; the issue is simpler than that. One of the features of the Windows Display Driver Model (WDDM) in Vista is that video memory is no longer a limited-sharing resource that applications often take complete sovereign control of; instead the WDDM offers virtualization of video memory so that all applications can use what they think is video memory without needing to care about what else is using it - in effect removing much of the work of video memory management from the application. From both a developer's and a user's perspective this is great, as it makes game/application development easier and multiple 3D accelerated applications get along better, but it came with a cost.
All of that virtualization requires address space to work with; Vista uses an application's 2GB user allocation of virtual address space for this purpose, scaling the amount of address space consumed by the WDDM with the amount of video memory actually used. This feature is ahead of its time, however, as games and applications written to the DirectX 9 and earlier standards didn't have the WDDM to take care of their memory management, so applications did it themselves. This required the application to also allocate some virtual address space for its own management tasks, which is fine under XP.
However, under Vista this results in the application and the WDDM effectively playing a game of chicken: both are consuming virtual address space out of the same 2GB pool, and neither is aware of the other doing the exact same thing. Amusingly, given a big enough card (such as a 1GB Radeon HD 2900 XT), it's theoretically possible to consume all 2GB of virtual address space under Vista with just the WDDM and the application each trying to manage the video memory, which would leave no further virtual address space for anything else the application needs to do. In practice, both the virtual address space allocations for the WDDM and the application's video memory manager attempt to grow as needed, and ultimately crash the application as each starts passing 500MB+ of allocated virtual address space.
This obviously needed to be fixed, and for a multitude of reasons (such as Vista & XP application compatibility) such a fix needed to be handled by the operating system. That fix is KB940105, which is a change to how the WDDM handles its video memory management. Now the WDDM will not default to using its full memory management capabilities, and more importantly it will not be consuming virtual address space unless specifically told to by the application. This will significantly reduce the virtual address space usage of an application when video memory is the culprit, but at best it will only bring Vista down to the kind of virtual address space usage of XP.
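For developers who want to watch the squeeze happen in their own process, a minimal sketch along these lines (Win32 C++, assuming a 32-bit build; this is our own illustration, not anything from Microsoft's KB article) walks the address space with VirtualQuery and totals up how much of the 2GB user allocation is already committed or reserved.

#include <windows.h>
#include <cstdio>

int main() {
    MEMORY_BASIC_INFORMATION mbi;
    SIZE_T committed = 0, reserved = 0, freeSpace = 0;
    unsigned char* addr = 0;

    // Walk the default 2GB user portion of the address space, region by region.
    while (addr < (unsigned char*)0x7FFF0000 &&
           VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_COMMIT)       committed += mbi.RegionSize;
        else if (mbi.State == MEM_RESERVE) reserved  += mbi.RegionSize;
        else                               freeSpace += mbi.RegionSize;
        addr = (unsigned char*)mbi.BaseAddress + mbi.RegionSize;
    }

    std::printf("Committed: %u MB  Reserved: %u MB  Free: %u MB\n",
                (unsigned)(committed >> 20), (unsigned)(reserved >> 20),
                (unsigned)(freeSpace >> 20));
    return 0;
}

Run something like this periodically from inside a game and you can see the free figure shrink toward zero as textures stream in, which is exactly the behavior the patch is meant to rein in.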

Laptop LCD Roundup: Road Warriors Deserve Better

Two of the areas where we've seen the most growth in the last few years are notebooks and flat-panel displays. The reasons for the tremendous growth differ, of course. Notebooks are a hot item because people are becoming enamored with wireless networks and portability, while LCDs have become popular because few manufacturers are making CRTs anymore and the small footprint of LCDs is desired by many people. We're working on increasing our coverage of both of these sectors, but up until now we haven't actually taken a close look at where they intersect.

Since the first laptops began shipping, LCDs have been the de facto display standard. Years before most people were using LCDs on their desktop, laptops were sporting these thin, sleek, attractive displays. As anyone who used one of the earlier laptops can tell you, however, the actual quality of the LCD panels was often severely lacking. With the ramp up in production of both LCD panels and notebook computers, you might be tempted to assume that the quality of laptop displays has improved dramatically over the years. That may be true to a certain degree, but with power considerations being a primary factor in the design of most notebooks, compromises continue to be made.

Without even running any objective tests, most people could pretty easily tell you that the latest and greatest desktop LCDs are far superior to any of the laptop LCDs currently available. While desktop LCDs have moved beyond TN panels to such technologies as S-IPS, S-PVA, and S-MVA, we are aware of only a few laptop brands that use something other than a TN panel. (Unfortunately, we have not yet been able to get any of those laptops for review.) We have also complained about desktop LCDs that have reached the point where they are actually becoming too bright, in an apparent attempt to win the marketing war for maximum brightness. The same can't be said of laptops, as very few can even break the 200 cd/m2 mark. Individual preferences definitely play a role, but outside of photography and print work most people prefer a brightness setting of somewhere between 200 and 300 cd/m2.

Luckily, there are plenty of new technologies being worked on that aim to improve the current situation. Not only should we get brighter laptop panels in the near future, but color accuracy may improve and power requirements may actually be reduced relative to current models. LED backlighting is one technology that holds a lot of promise, and it has only just begun to show up on desktop LCDs. Dynamic backlighting - where the brightness of some LEDs can be increased or decreased in zones depending on what content is currently being shown - is another technology that we may see sooner rather than later. Then there are completely new display technologies like OLED.

With the current laptop landscape in mind, we have decided that it's time for us to put a bigger focus on the quality of laptop LCDs. To accomplish this we have put together a roundup of the current notebooks that we have in-house. Future laptop reviews will continue this trend by including a section covering display analysis and quality, but we wanted to build a repertoire of past notebook displays in the meantime. While we only have four laptops at present, it is also important to remember that there are only a few companies that actually manufacture LCD panels.
We would also expect any companies that release notebooks with higher-quality LCDs to make a bullet point out of the fact, which means that if you don't see any particular emphasis placed on the display panel in a notebook's specifications it probably has a panel similar to one of the laptops we're looking at today.

Killing the Business Desktop PC Softly

Despite numerous attempts to kill it, it is still alive and kicking. It is "fat" some say, and it hogs up lots of energy and money. To others it is like a mosquito bearing malaria: nothing more than a transmitter of viruses and other parasites. This "source of all evil" in the IT infrastructure is also known as the business desktop PC. Back at the end of the nineties, Larry Ellison (Oracle) wanted to see the PC die, and proposed a thin client device as a replacement dubbed the NC (Network Computer). Unfortunately for Oracle, the only thing that died was the NC, as the desktop PC quickly adapted and became a more tamable beast.

When we entered the 21st century, it became clear that the thin client was back. Server based computing (SBC), the prime example being Citrix Metaframe Presentation Servers, has become quite popular, and it has helped to reduce the costs of traditional desktop PC computing. What's more, you definitely don't need a full blown desktop client to connect to Citrix servers, so a thin client should be a more cost friendly alternative. When Microsoft Windows Server 2003 came out with a decent Terminal Server, SBC became even more popular for light office work. However, the good old PC hung on. First, as interfaces and websites became more graphically intensive, the extra power found in typical PCs made thin clients feel slow. Second, the easily upgradeable PC offered better specs for the same price as the inflexible thin client. Third and more importantly, many applications were not - and still are not - compatible with SBC.

That all could change in 2007, and this time the attempt on the PC's life is much more serious. In fact, the murder is planned by nobody less than the "parents" of the PC. Father IBM is involved, and so is mother Compaq (now part of HP). Yes, two of the most important companies in the history of the PC are ready to slowly kill the 25 year old. Will these super heavyweights finally offer a more cost friendly alternative to the desktop PC? Let's find out.

Silver Power Blue Lightning 600W

Most of our readers are probably not familiar with the company Silver Power, which is no surprise considering that this is a new brand name primarily targeting the European market. However, the parent company of Silver Power is anything but new and has been manufacturing a variety of power supplies for many years. MaxPoint is headquartered in Hamburg, Germany and they have ties to several other brands of power supplies, the most notable being Tagan.

The Tagan brand was established to focus more on the high-end gamers and enthusiasts, where quality is the primary concern and price isn't necessarily a limiting factor. Silver Power takes a slightly different route, expanding the product portfolio into the more cost-conscious markets. Having diverse product lines that target different market segments is often beneficial for a company, though of course the real question is whether or not Silver Power can deliver good quality for a reduced price.

We were sent their latest model, the SP-600 A2C "Blue Lightning" 600W power supply, for testing. This PSU delivers 24A on the 3.3V rail and 30A on the 5V rail, which is pretty average for a 600W power supply. In keeping with the latest power supply guidelines, the 12V power is delivered on two rails, each capable of providing up to 22A. However, that's the maximum power each 12V rail can deliver on its own; the total combined power that can be delivered on the 3.3V, 5V, and 12V rails is 585W, and it's not clear exactly how much of that can come from the two 12V rails, each of which is theoretically capable of delivering up to 264W.
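As a quick sanity check on those label figures - using only the numbers quoted above as assumed inputs - the per-rail and combined limits work out as follows:

#include <cstdio>

int main() {
    const double amps_12v_per_rail = 22.0;  // each of the two 12V rails
    const double amps_3v3          = 24.0;  // 3.3V rail
    const double amps_5v           = 30.0;  // 5V rail
    const double combined_limit_w  = 585.0; // combined 3.3V/5V/12V limit

    double per_12v_rail_w = 12.0 * amps_12v_per_rail;        // 264W per 12V rail
    double both_12v_w     = 2.0 * per_12v_rail_w;            // 528W if both were maxed
    double minor_rails_w  = 3.3 * amps_3v3 + 5.0 * amps_5v;  // ~229W on 3.3V + 5V together

    std::printf("Each 12V rail: %.0fW, both rails: %.0fW\n", per_12v_rail_w, both_12v_w);
    std::printf("3.3V + 5V fully loaded: %.1fW\n", minor_rails_w);
    std::printf("12V headroom under the %.0fW combined limit: %.1fW\n",
                combined_limit_w, combined_limit_w - minor_rails_w);
    return 0;
}

The takeaway is that the two 12V rails could in theory ask for 528W, but with the minor rails loaded the 585W combined ceiling leaves considerably less than that for 12V in practice.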

RAID Primer: What's in a number?

The majority of home users have experienced the agony of at least one hard drive failure in their lives. Power users often experience bottlenecks caused by their hard drives when they try to accomplish I/O-intensive tasks. Every IT person who has been in the industry for any length of time has dealt with multiple hard drive failures. In short, hard drives have long caused the majority of support headaches in standard desktop or server configurations today, with little hope of improvement in the near term.

With the increased use of computers in the daily lives of people worldwide, the dollar value of data stored on the average computer has steadily increased. Even as MTBF figures have moved from 8000 hours in the 1980s (example: MiniScribe M2006) to the current levels of over 750,000 hours (Seagate 7200.11 series drives), this increase in data value has offset the relative decrease of hard drive failures. The increase in the value of data, and the general unwillingness of most casual users to back up their hard drive contents on a regular basis, has put increasing focus on technologies which can help users survive a hard drive failure. RAID (Redundant Array of Inexpensive Disks) is one of these technologies.

Drawing on whitepapers produced in the late 1970s, the term RAID was coined in 1987 by researchers at the University of California, Berkeley in an effort to put into practice theoretical gains in performance and redundancy which could be made by teaming multiple hard drives in a single configuration. While their paper proposed certain levels of RAID, the practical needs of the IT industry have brought several slightly differing approaches. Most common now are:

RAID 0 - Data Striping
RAID 1 - Data Mirroring
RAID 5 - Data Striping with Parity
RAID 6 - Data Striping with Redundant Parity
RAID 0+1 - Data Striping with a Mirrored Copy

Each of these RAID configurations has its own benefits and drawbacks, and is targeted at specific applications. In this article we'll go over each and discuss in which situations RAID can potentially help - or harm - you as a user.
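As a rough guide to what those levels cost you in capacity, here is a minimal sketch of the textbook usable-capacity formulas for n identical disks of size s; the disk count and size used here are just example values, and real arrays lose a little more to metadata.

#include <cstdio>

int main() {
    const int    n = 4;    // example: four disks in the array
    const double s = 0.5;  // example: 500GB (0.5 TB) per disk

    double raid0  = n * s;        // striping: full capacity, no redundancy
    double raid1  = s;            // simple two-disk mirror: one disk's worth
    double raid5  = (n - 1) * s;  // one disk's worth of parity
    double raid6  = (n - 2) * s;  // two disks' worth of parity
    double raid01 = (n / 2) * s;  // stripe half the disks, mirror onto the other half

    std::printf("RAID 0:   %.2f TB (no failures tolerated)\n", raid0);
    std::printf("RAID 1:   %.2f TB (survives a drive failure)\n", raid1);
    std::printf("RAID 5:   %.2f TB (survives one drive failure)\n", raid5);
    std::printf("RAID 6:   %.2f TB (survives two drive failures)\n", raid6);
    std::printf("RAID 0+1: %.2f TB (survives at least one drive failure)\n", raid01);
    return 0;
}

The trade-off is plain from the numbers: the more failures a level can survive, the more raw capacity you give up to redundancy.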

OCZ Introduces DDR3-1800

Memory based on the exciting new Micron Z9 memory chips for DDR3 first appeared a couple of weeks ago, and we first looked at it in Super Talent & TEAM: DDR3-1600 is Here! As predicted in that review, it was only a matter of days until most of the major enthusiast memory makers began talking about their own products based on Micron Z9 chips. Some even announced fast availability of the new kits in the retail market.

The reasons for this are basic. All memory makers buy raw memory chips available in the open market. Some memory makers do not like to talk about the chips used in their DIMMs, as they consider that information proprietary, but this secrecy does not normally last very long. It is rare to see a memory manufacturer with a truly exclusive supply arrangement with a memory vendor, but several companies have been trying very hard to do just this, and we may see more of these attempts in the future.

The DIMM manufacturers then speed grade or "bin" the chips to create one or more speed grades from a single chip type. Memory chips are then surface-mounted on generic or proprietary circuit boards with SPD (Serial Presence Detect) chips programmed with generic code or custom SPD programming done by the DIMM maker. This is why the introduction of fast new chips like the Micron Z9 often circulates rapidly through the enthusiast memory market, as each manufacturer tries to introduce products based on the new chips with new twists that outdo the competition. This does not mean the memory you buy from Super Talent, for example, is exactly the same as the Micron Z9-based memory you buy from Corsair. Companies pride themselves on the sophistication of their speed-grading technology, their design and/or sourcing of PCBs, and their skill at programming the SPD.

Despite the real differences that emerge in memory performance from different DIMM manufacturers, the normal pattern is that one company successfully uses a new chip in a top-performing new DIMM, and then everyone in the market soon has a similar memory product based on the same chip. That is why every memory company has announced, or will soon be announcing, their own Micron Z9-based memory.

One of the more interesting of these announcements is OCZ DDR3-1800, rated at 8-8-8 timings at DDR3-1800, which is the fastest production DDR3 kit currently available. This new PC3-14400 Platinum Edition kit is specified to reach DDR3-1800 at 1.9V and is claimed to have substantial headroom above this speed. It certainly appears that OCZ is binning Micron Z9 chips for even higher memory speeds, along with possibly some other tweaks to squeeze more from these chips. The test results should tell us what these new DIMMs can actually do.
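For reference, both the PC3-14400 label and the absolute latency of an 8-8-8 kit fall out of simple arithmetic; here is a quick worked example, with the module width and rated speed as the only inputs.

#include <cstdio>

int main() {
    const double transfers_per_sec = 1800e6;                 // DDR3-1800: 1800 MT/s
    const double io_clock_hz       = transfers_per_sec / 2;  // 900 MHz I/O clock
    const int    cas_cycles        = 8;                      // the first "8" in 8-8-8
    const double bus_width_bytes   = 8.0;                    // standard 64-bit wide module

    double peak_mb_per_s = transfers_per_sec * bus_width_bytes / 1e6; // 14400 MB/s
    double cas_ns        = cas_cycles / io_clock_hz * 1e9;            // ~8.9 ns

    std::printf("Peak bandwidth: %.0f MB/s (hence the PC3-14400 label)\n", peak_mb_per_s);
    std::printf("Absolute CAS latency: %.2f ns\n", cas_ns);
    return 0;
}

Note that the roughly 8.9ns absolute CAS latency is in the same neighborhood as slower DDR3 kits with tighter timings, which is why raw clock speed alone never tells the whole performance story.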

µATX Overview: Prelude to a Roundup


Our upcoming series of µATX articles has traveled a long road (Ed: that's an understatement!). When we first envisioned a long-overdue look at the µATX form factor motherboards, we thought it would be your typical motherboard roundup with maybe a twist or two tossed in to keep it interesting. One thing led to another and before you knew it, our minds started to run rampant with additional items that we felt were important for the article. This led to scope creep, and those of us who manage projects - or who have been unlucky enough to be on a project that has featuritis - know what happens next.

That's right, we over-emphasized the new article features to the detriment of our primary focus: providing a motherboard roundup that featured the often ignored but market leading µATX form factor. What started out with adding a couple of features such as IGP video quality comparisons and midrange CPU performance turned into a maze of thoughts and ideas that led us to believe it would be quite easy to add additional tests without affecting the overall schedule too much. We were wrong, but we hope that our future motherboard articles will be better for it.

How did we get stuck in the quagmire of µATX hell? It began with innocent thoughts of adding budget to midrange CPU coverage, low to midrange graphics comparisons against the IGP solutions, High Definition playback comparisons utilizing not one but each competing standard, Windows XP versus Vista versus Linux, onboard audio versus add-in cards, and even tests of input devices and external storage items. It ended with our project scope changing from being motherboard specific to platform encompassing.

We started down that path, but despite periodic excitement, at times we also ended up with a dreaded case of paralysis by analysis syndrome. Don't get us wrong: we do not regret the effort that has been expended on this roundup; however, we sincerely regret the time it has taken to complete it and we apologize to those of you who have been waiting months for this information. It turns out that we ignored one of our favorite quotes from C. Gordon Bell: "The cheapest, fastest, and most reliable components are those that aren't there." That is one of the many factors that caused us problems, as it became quite obvious during testing that getting all of this equipment to work together and then benchmarking as planned was not exactly going to be a walk in the park.

We have been constantly waiting on that one BIOS or driver to fix a myriad of problems that we've discovered along the way. The manufacturers would ask - sometimes plead - for us to retest or wait as "that problem is being solved and a fix should be available immediately". Immediately, it turns out, means days and weeks, not hours. We also received several product revisions during the course of testing that required us to throw out the old results and start again. In the end, we hope our efforts paid off, and at least we have the knowledge that every supplier has had ample opportunity to fix any ills with their product.

Our experiences with a wide variety of components will be discussed extensively in a series of articles to be published over the coming month. However, at the end of the day, the star of this show is still the motherboard. If the CPU is the brain of a computer and the video card is its eyes, then the motherboard is the central nervous system.
It truly is the central focal point of the system, and having one that works correctly makes it much easier to put a system together.

As such, we are changing our testing emphasis from being primarily performance based to a combination of performance, features, stability, support, and those intangibles that we experience during testing that might set one board apart from another. While performance is important, do a few tenths of a second or an additional two frames per second in a benchmark really mean that much when you cannot get a USB port working due to a crappy BIOS release, or your system does not properly recover from the S3 sleep state when you are set to record the last episode of The Sopranos? We thought as much, so we are changing our vantage point on motherboard testing.

While we are performance enthusiasts at heart, the fastest board available is not worth much if the included features do not work as advertised or the board constantly crashes when trying to use an application. Our testing emphasis, especially between boards based on the same chipset, will be focused on stability and compatibility with a wide range of peripherals in both stock and overclocked conditions. Speaking of features, we will place a renewed emphasis on networking, storage, memory, and audio performance. More importantly, we will provide additional analysis on overclocking, energy consumption, cooling capabilities, layout, and power management features where applicable.

We also want to take this opportunity to put the manufacturers on notice: we will not countenance delays, patches, and numerous updates again, particularly on products that are available in the retail market! If a lemon of a motherboard gets released to consumers and it needs more BIOS tuning or perhaps an entirely new revision, we are going to do our best to point this fact out to the readers. We understand that it can be difficult to get every single peripheral to work properly, especially with new devices coming out all the time, but when a motherboard fails to work properly with a large number of USB devices, memory types, GPUs, etc. that product shouldn't be on the market.

At the end of this journey we will provide three different platform recommendations based on the various components we have utilized in testing. Our platforms are designed around HTPC, Gaming, and Home/Office centric configurations with a heavy emphasis on the systems being quiet, reliable, and affordable. Okay, we blew the budget on the HTPC configuration, but we will provide several alternatives to help control costs on that particular buildup. Let's find out what else is changing and exactly what will be included in our comprehensive review of the µATX motherboards and surrounding technologies.

Thursday, September 6, 2007

Zippy Serene (GP2-5600V)

With the Serene, we now have our second Zippy power supply in for review. For those unfamiliar with the company and its roots, we suggest reading our first Zippy review as well. Zippy has been around for quite some time, and in the server world they are recognized for offering some of the highest quality available on the market. Zippy is located in a suburb of Taipei called Xin Dian (Hsin Tien) and manufactures all of their power supplies in their factory there.

They are known for having extremely reliable server power supplies, but recently Zippy has made the step into the retail desktop PSU market with several high class offerings. The Gaming G1 power supply in our last review exhibited very high quality, but it could still use quite a bit of improvement in order to better target the retail desktop PC market. Today we will be looking at the Serene 600W (GP2-5600V), a power supply that was built with the goal of having the best efficiency possible. The package claims 86%, which is quite a lofty goal for a retail product.

As we have seen many times with other power supplies, the Serene comes with a single 12V rail. We have written previously that this does not conform to the actual Intel Power Supply Design Guidelines, but as we have seen, readers and manufacturers have a different opinion about this issue. Some say it is no problem at all - there are enough safety features that will kick in before something bad happens, i.e. overloading the power supply - while the other camp prefers to stick to the rules and has released power supplies with up to six 12V rails. While the lower voltage rails each have 25A at their disposal, the single 12V rail has 40A and should have no difficulty powering everything a decent system needs.

Given the name, one area that will be of particular interest to us is how quiet this power supply manages to run. Granted, delivering a relatively silent power supply that provides 600W is going to be a bit easier than making a "silent" 1000W power supply, but we still need to determine whether or not the Zippy Serene can live up to its name.
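To put that 86% claim in perspective, a little arithmetic - treating the 600W rating and the claimed efficiency as given - shows what it implies at full load:

#include <cstdio>

int main() {
    const double dc_output_w = 600.0;  // rated DC output
    const double efficiency  = 0.86;   // efficiency claimed on the package

    double ac_draw_w = dc_output_w / efficiency;  // ~698W pulled from the wall
    double heat_w    = ac_draw_w - dc_output_w;   // ~98W turned into heat inside the PSU

    std::printf("AC draw at full load: %.0fW\n", ac_draw_w);
    std::printf("Heat dissipated inside the PSU: %.0fW\n", heat_w);
    return 0;
}

Less waste heat means less airflow is needed to keep the unit cool, which is precisely why a genuinely high efficiency figure matters for a power supply that aims to be quiet.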

ASRock 4CoreDual-SATA2: Sneak Peek

When discussing current Intel chipsets, the phrase "budget sector" is somewhat of an oxymoron. While there are a lot of choices in the $45 to $60 range for Core 2 Duo compatible boards, these are mainly based on older designs that do not offer anything in the way of additional features, extended overclocking, or performance oriented chipsets. This is not to say they are in any way bad - our favorite Intel budget board in the lab is the VIA based ASRock 4CoreDual-VSTA - but rather that these boards are targeted at an audience that is price sensitive or just looking for the best bang for the buck.

ASRock has built a very good reputation on offering these types of solutions. The performance oriented crowd will often snub these products due to their sometimes quirky nature, but you cannot deny their value. In the case of the ASRock 4CoreDual-SATA2, this board allows you to move to the Core 2 Duo platform for a minimal cost. Besides offering good performance for a great price, this board also provides the capability to utilize DDR memory and an AGP graphics card.
We provided a series of reviews centered on the 775Dual-VSTA last year, which was the first board in the VIA PT880 based family of products from ASRock. That board was replaced by the 4CoreDual-VSTA last winter, which brought with it a move from the PT880 Pro to the Ultra version of the chipset along with quad core compatibility. ASRock is now introducing the 4CoreDual-SATA2 board, with the primary difference being a move from the VT8237A Southbridge to the VT8237S Southbridge that offers native SATA II compatibility.

Our article today is a first look at this new board to determine if there are any performance differences between it and the 4CoreDual-VSTA in a few benchmarks that are CPU and storage system sensitive. We are not providing a full review of the board and its various capabilities at this time; instead this is a sneak peek to answer numerous reader questions surrounding any differences between the two boards.

We will test DDR, AGP, and even quad core capabilities in our next article that will delve into the performance attributes of this board and several other new offerings from ASRock and others in the sub-$70 Intel market. While most people would not run a quad core processor in this board, it does have the capability, and our Q6600 has been running stable now for a couple of weeks, though we have run across a couple of quirks that ASRock is working on. The reason we even mention this is that with Intel reducing the pricing on the Q6600 to the $260 range shortly, it might just mean the current users of the 4CoreDual series will want to upgrade CPUs without changing other components (yet). In the meantime, let's see if there are any initial differences between the two boards besides a new Southbridge.