Thursday, September 27, 2007
RAID Primer: What's in a number?
With the increased use of computers in the daily lives of people worldwide, the dollar value of the data stored on the average computer has steadily increased. Even as MTBF figures have climbed from 8,000 hours in the 1980s (example: MiniScribe M2006) to current levels of over 750,000 hours (Seagate 7200.11 series drives), the rising value of that data has more than offset the relative decline in hard drive failure rates. This growing value, combined with the general unwillingness of most casual users to back up their hard drives on a regular basis, has put increasing focus on technologies that can help users survive a hard drive failure. RAID (Redundant Array of Inexpensive Disks) is one of these technologies.
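To put those MTBF figures in rough perspective, here is a quick back-of-the-envelope calculation of our own (it assumes a constant failure rate, which is the simplification behind vendor MTBF claims, and is not taken from any manufacturer's data):

    /* Back-of-the-envelope sketch: with a constant failure rate, the chance
     * that a drive fails within one year is 1 - exp(-hours_per_year / MTBF). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double hours_per_year = 8766.0;        /* 365.25 days */
        const double mtbf[] = { 8000.0, 750000.0 };  /* 1980s drive vs. current claim */

        for (int i = 0; i < 2; i++) {
            double p = 1.0 - exp(-hours_per_year / mtbf[i]);
            printf("MTBF %8.0f h -> ~%4.1f%% chance of failing within a year\n",
                   mtbf[i], 100.0 * p);
        }
        return 0;
    }

Run the numbers and an 8,000 hour MTBF works out to roughly a two-in-three chance of a failure within a year, while 750,000 hours is closer to one in a hundred. Even at today's figures, drive failures are common enough that riding one out is exactly the problem RAID sets out to solve.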
Drawing on whitepapers produced in the late 1970s, the term RAID was coined in 1987 by researchers at the University of California, Berkeley, in an effort to put into practice the theoretical gains in performance and redundancy that could be achieved by teaming multiple hard drives in a single configuration. While their paper proposed specific levels of RAID, the practical needs of the IT industry have since brought about several slightly differing approaches. The most common today are:
RAID 0 - Data Striping
RAID 1 - Data Mirroring
RAID 5 - Data Striping with Parity
RAID 6 - Data Striping with Redundant Parity
RAID 0+1 - Data Striping with a Mirrored Copy
Each of these RAID configurations has its own benefits and drawbacks, and is targeted for specific applications. In this article we'll go over each and discuss in which situations RAID can potentially help - or harm - you as a user.
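As a preview of the parity concept behind RAID 5, the sketch below is a toy illustration of our own (not code from any real RAID controller): the parity block is simply the XOR of the data blocks in a stripe, which is why the array can regenerate the contents of any single failed drive.

    /* A minimal sketch of the XOR parity idea behind RAID 5, using a toy
     * three-"disk" stripe in memory. This is our own illustration, not code
     * from any RAID controller. */
    #include <stdio.h>
    #include <string.h>

    #define NDISKS 3
    #define BLOCK  8   /* bytes per stripe unit in this toy example */

    /* The parity block is the XOR of every data block in the stripe. */
    static void compute_parity(unsigned char disks[NDISKS][BLOCK],
                               unsigned char parity[BLOCK])
    {
        memset(parity, 0, BLOCK);
        for (int d = 0; d < NDISKS; d++)
            for (int i = 0; i < BLOCK; i++)
                parity[i] ^= disks[d][i];
    }

    /* If one disk fails, XOR-ing the surviving disks with the parity block
     * regenerates the lost data. */
    static void rebuild(unsigned char disks[NDISKS][BLOCK],
                        unsigned char parity[BLOCK],
                        int failed, unsigned char out[BLOCK])
    {
        memcpy(out, parity, BLOCK);
        for (int d = 0; d < NDISKS; d++)
            if (d != failed)
                for (int i = 0; i < BLOCK; i++)
                    out[i] ^= disks[d][i];
    }

    int main(void)
    {
        unsigned char disks[NDISKS][BLOCK] = { "disk A!", "disk B!", "disk C!" };
        unsigned char parity[BLOCK], recovered[BLOCK];

        compute_parity(disks, parity);
        rebuild(disks, parity, 1, recovered);           /* pretend disk 1 died */
        printf("recovered: %.8s\n", (char *)recovered); /* prints "disk B!" */
        return 0;
    }

RAID 6 extends this idea with a second, independent parity calculation so that two simultaneous drive failures can be survived.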
HP Blackbird 002: Back in Black
The SR-71 Blackbird was on the cutting edge of technology, pushing the boundaries of what was deemed achievable. It was the first aircraft designed to reduce its radar signature, and while it would fail in this respect, it helped pave the way for future stealth aircraft. Perhaps more notably, the Blackbird was the fastest aircraft ever produced, officially reaching speeds of Mach 3.2 and unofficially reaching even higher. The actual top speed remains classified to this day. The extremely high speeds required some serious out-of-the-box thinking to achieve, so the Blackbird was built from flexible panels that actually fit loosely together at normal temperatures; only after the aircraft heated up from air friction would the panels fit snugly, and in fact the SR-71 would leak fuel while sitting on the runway before takeoff. After landing, the surface of the jet was so hot (above 300°C) that maintenance crews had to leave it alone for several hours to allow it to cool down.
So how does all of that relate to the HP Blackbird 002? In terms of components and design, the Blackbird is definitely on the cutting edge of design and technology, and it features several new "firsts" in computers. When we consider that the Blackbird comes from a large OEM that doesn't have a reputation for producing such designs, it makes some of these firsts even more remarkable. Talking about the temperatures that the SR-71 reached during flight was intentional, because the Blackbird 002 can put out a lot of heat. No, you won't need to let it cool down for several hours after running it, but the 1100W power supply is definitely put to good use. If electricity is the fuel of the 002, saying that it leaks fuel while sitting idle definitely wouldn't be an overstatement. And last but not least, the Blackbird 002 is fast - extremely fast - easily ranking among the best when it comes to prebuilt desktop computers.
Where did all of this come from? We are after all talking about HP, a company that has been in the computer business for decades, and during all that time they have never released anything quite like this. Flash back to about a year ago, when HP acquired VoodooPC, a boutique computer vendor known for producing extremely high-performance computers with exotic paint jobs and case designs - with an equally exotic price. The HP Blackbird 002 represents the first fruits of this merger, and while it may not be quite as exotic as the old VoodooPC offerings in all respects, it certainly blazes new trails in the world of OEM computers. There's clearly a passion for computer technology behind the design, and even if we might not personally be interested in purchasing such a computer, we can certainly appreciate all the effort that has gone into creating this latest "muscle car" - or pseudo-stealth aircraft, if you prefer.
Dell 2407WFP and 3007WFP LCD Comparison
Apple was one of the first companies to come out with very large LCDs in their Cinema Display line, catering to the multimedia enthusiasts that have often appreciated Apple's systems. Dell followed their lead when they launched the 24" 2405FPW several years ago, except that with their larger volumes they were able to offer competing displays at much more attractive prices. In short order, the 800-pound gorilla of business desktops and servers was able to occupy the same role in the LCD market. Of course, while many enthusiasts wouldn't be caught running a Dell system, the most recent Dell LCDs have been received very favorably by all types of users -- business, multimedia, and even gaming demands all feel right at home on a Dell LCD. Does that mean that Dell LCDs are the best in the world? Certainly not, but given their price and ready worldwide availability, they have set the standard by which most other LCDs are judged.
In 2006, Dell launched their new 30" LCD, matching Apple's 30" Cinema Display for the largest commonly available computer LCD on the market. Dell also updated most of their other LCD sizes with the xx07 models, which brought improved specifications and features. These displays have all been available for a while, but we haven't had a chance to review them until now. As we renew our LCD and display coverage on AnandTech, and given the number of users that are already familiar with the Dell LCDs, we felt it was important to take a closer look at some of these Dell LCDs in order to help establish our baseline for future display reviews.
We recently looked at the Gateway FPD in our first LCD review in some time, comparing it with the original Dell 24" LCD, the 2405FPW. In response to some comments and suggestions, we have further refined our LCD reviewing process and will be revisiting aspects of both of the previously tested displays. However, our primary focus is going to be on Dell's current 24" and 30" models, the 2407WFP and 3007WFP. How well do these LCDs perform, where do they excel, and where is there room for improvement? We aim to provide answers to those questions.
AMD's New Gambit: Open Source Video Drivers
As the computer hardware industry has matured, it has settled into a very regular and predictable pattern. Newer, faster hardware will come out, rivals will fire press releases back and forth showcasing that their product is the better one, price wars will break out, someone cheats now and then, someone comes up with an even more confusing naming scheme, etc. The fact of the matter is that in the computer hardware industry, there's very little that actually surprises us. We aren't psychic and can't predict when or to whom the above will happen, but we can promise you that it will happen to someone, and that it will happen again a couple of years after that. The computer hardware playbook is well established, and there's not much that goes on that deviates from it.
So we have to admit that we were more than a little surprised when AMD told us earlier this month that they intended to do something well outside of that playbook, something we thought was practically impossible: they were going to officially back and provide support for open source drivers for their video cards, in order to establish a solid, full-featured open source Linux video driver. The noteworthiness of this stems from the fact that the GPU industry is incredibly competitive and consequently incredibly secretive about plans and hardware. To allow modern, functional open source video drivers to be written, a great deal of specifications must be released so that programmers can learn how to properly manipulate the hardware, and this flies in the face of the secretive manner in which NVIDIA and ATI go about their hardware and software development. Yet AMD has begun to take the steps required to pull this off, and we can't help but be befuddled by what's going on, nor can we ignore the implications of it.
Before we go any further, however, we should first talk quickly about what has led up to this, as there are a couple of issues that have directly led to what AMD is attempting to do. We'll start with the Linux kernel and the numerous operating system distributions based upon it.
Unlike Windows and Mac OS X, the Linux kernel is not designed for use with binary drivers, that is, drivers supplied pre-compiled by a vendor and plugged into the operating system as a kind of black box. While it's possible to make Linux work with such drivers, there are several roadblocks to doing so, among them the lack of a stable application programming interface (API) for writing them. The main Linux developers do not want to hinder the development of the kernel, and a stable driver API would do just that by forcing them to avoid making any changes or improvements in that section of the code that would break the API. Furthermore, by not supporting a stable driver API, the kernel developers encourage device makers to release open source drivers instead, in line with the open source philosophy of the Linux kernel itself.
This is in direct opposition to how AMD and NVIDIA prefer to operate, as releasing open source drivers would present a number of problems for them, chief among them exposing how parts of their hardware work when they want to keep that information secret. As a result, both have released only binary drivers for their products, including their Linux drivers, and have done the best they can to work around any problems that the lack of a stable API may cause.
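As a very rough illustration of the two models being contrasted here, a native open source Linux driver is simply a kernel module built against whatever the current in-tree interfaces happen to be. The skeleton below is a generic example of our own, not anything from AMD's or NVIDIA's code; the MODULE_LICENSE tag is also how the kernel tells open source modules apart from proprietary ones, with some kernel symbols reserved for GPL-licensed modules.

    /* Generic loadable-module skeleton, for illustration only. An in-tree,
     * open source driver is compiled against the current kernel headers and
     * simply tracks internal API changes; a binary-only module has to chase
     * those changes from the outside. */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");  /* non-GPL modules are locked out of GPL-only symbols */
    MODULE_DESCRIPTION("Illustrative skeleton module");

    static int __init skeleton_init(void)
    {
        printk(KERN_INFO "skeleton: loaded\n");
        return 0;
    }

    static void __exit skeleton_exit(void)
    {
        printk(KERN_INFO "skeleton: unloaded\n");
    }

    module_init(skeleton_init);
    module_exit(skeleton_exit);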
For a number of reasons, AMD's video drivers for Linux have been lackluster. NVIDIA has set the gold standard of the two: their Linux drivers perform very close to their Windows drivers and are generally stable. AMD's drivers, meanwhile, have at times performed half as well, and there have been several notable stability issues. AMD's Linux drivers aren't by any means terrible (nor are NVIDIA's drivers perfect), but they're not nearly as good as they should be.
Meanwhile, the poor quality of these binary drivers has given AMD's graphics division a poor name in the open source community. While we have no reason to believe this has significantly impacted AMD's sales, since desktop usage of Linux is still low (and gaming even lower), it's still not a reputation AMD wants to have, as it can eventually bleed over into the general hardware and gaming communities.
This brings us back to the present, and what AMD has announced. AMD will be establishing a viable open source Linux driver for their X1K and HD2K series video cards, and will continue to provide their binary drivers simultaneously. AMD will not be providing any of their current driver code for use in the open source driver - this would break licensing agreements and reveal trade secrets - rather, they want their open source driver built from the ground up. Furthermore, they will not be directly working on the driver themselves (we assume all of their on-staff programmers are "contaminated" from a legal point of view) and instead will be having the open source community build the drivers, with Novell's SuSE Linux division leading the effort.
With that said, their effort is just starting and there are a lot of things that must occur to make everything come together. AMD has done some of those things already, and many more will need to follow. Let's take a look at what those things are.