Click

Sunday, September 6, 2009

A New Kind of Home Computer: Windows Home Server Preview

The Interface of WHS

Although we'll touch on specific points of the WHS GUI as we come to the various functions of the OS, we'll still spend a bit of time on the WHS interface, since it's one of the other critical components that separates WHS from other server products and makes it work. Because WHS needs to be usable by users who are only partially computer literate, several special considerations had to go into designing an interface for the OS. Furthermore, the entire thing needs to be able to run headless once a WHS server is set up.

Microsoft has opted to go with a single application to control all of the functionality of WHS, the simply-titled Windows Home Server Console. As we alluded to earlier, the console actually runs on the server and is controlled from the clients via a specialized RDP client. Clients that install the full connector suite (used for enabling backups) get this specialized client, which launches the console on the server and then transparently uses RDP to display it on the client as if it were a local application. Because this is done via RDP, clients running other OSs can connect to and control the server via normal RDP; in this case they'll get the entire desktop of the server. At this point Microsoft is seriously entertaining the idea of pushing WHS onto non-Windows households, especially those with Macs, since an official RDP client is available for that platform.

The console effectively breaks up administration into six tasks: backups, user accounts, shared folders, server storage/drive management, network status, and WHS settings. As far as all of these interfaces go, Microsoft isn't working with any new human-computer interaction memes; rather, everything is scaled down to be as simple as possible without losing effectiveness. This means there's little we can say that's remarkable about the interface: it looks like Windows, and there's a lack of buttons to push or things to break.

We're not completely sold on the effectiveness of the interface, but we're torn as to why. We don't think Microsoft could have made the interface any simpler without removing features, but that doesn't preclude making it better. The interface is effectively a listing of a bunch of things to do, with help menus available that explain what each and every last thing does. It gets the job done, but a certain degree of computer literacy is required to understand what's going on. We'd say Microsoft has done better at simplifying complex interfaces with Vista MCE, which manages to break down complex tasks such as storing recordings into simple steps very well.

To that extent, organizations like Geek Squad will probably get a good amount of business out of setting up WHS; it's not by any means hard, but there will be a sizable minority of potential customers who lack the literacy required to do it themselves. However, once set up, WHS is by all indications capable of continuing on indefinitely on its own; even its automatic update function has been revised for headless operation so that it can install any and all updates without human intervention (which is not the case today with XP or Vista). This is the reason we're torn, since most WHS servers probably won't need administration for 99.9% of their lives. The interface, especially for backups and user accounts, is good enough that once the server is set up it should be possible for more or less anyone to handle what little administrative duties remain.

On the whole Microsoft could have done a better job on making the interface accessible for everyone, but it's good enough for now.


Windows Vista Update: RC1/5728 Preview

At long last, the light is at the end of the tunnel. After a development period of several years, the longest for any single consumer version of Windows, the end is near for Windows Vista. While it's clearly not ready to be delivered into the hands of users quite yet, Vista is finally at a point where we can begin talking about what will happen, and not what may.

Although Microsoft uses the Release Candidate nomenclature for Vista builds 5600 and above, including build 5728 we're looking at today, the reality of the situation is that the shipping version of Windows Vista will not be these builds or even a few builds down the line. Given the complexity of an operating system, there are still messy quirks and bona fide bugs in these release candidates, and it's going to be at least another month before we're talking about Microsoft having released a final version, and even then there will be a good amount of post-launch patching to be done as Vista ends up in the hands of the ultimate bug hunters, everyday users.

With that said, this is the first time that we can say without flinching that Vista is in an acceptable state for general use. Compatibility on the x86 version is remarkably improved over what we saw earlier, and in our testing we only managed to come up with a single program - non-commercial at that - that simply wouldn't function correctly under Vista no matter what. Otherwise, everything could be made to work under Vista given enough cajoling, which is an enormous feat given the amount of under-the-hood work the operating system has received compared to Windows XP.

User Account Control (UAC) has not changed much since build 5472, which is not necessarily a bad thing. Like Windows overall, UAC is usable at this point, and not nearly the nuisance it was as of Beta 2. It still has rough spots, and we'll get to those in a bit, but at this point enthusiasts are the only group that will have problems with it.

Hardware support and in-the-box drivers are also coming together, no doubt due to the portability of drivers between Vista and XP in most cases. A quick run-through of our lab turned up only two pieces of hardware that weren't supported under Vista: a Hauppauge TV tuner that had three of the four drivers it needed, and our PhysX card, both of which should have full support soon. All things considered, this will likely be the least-painful Windows transition on the driver front, as vendors have been on top of the few key kernel/driver changes for a while.

At this point we've been using RC1 for nearly a month, and the newer build 5728 for over two weeks, and while we're ready to switch back to XP until Vista is completed due to some video issues, Vista is ready to be taken seriously.

Hardware Virtualization: the Nuts and Bolts


The second generation: Intel's EPT and AMD's NPT

As we discussed in "memory management", managing the virtual memory of the different guest OSes and translating it into physical pages can be extremely CPU intensive.



Without shadow pages we would have to translate virtual memory (blue) into "guest OS physical memory" (gray) and then translate the latter into the real physical memory (green). Luckily, the "shadow page table" trick avoids the double bookkeeping by making the MMU work with a virtual memory (of the guest OS, blue) to real physical memory (green) page table, effectively skipping the intermediate "guest OS physical memory" step. There is a catch though: each update of the guest OS page tables requires some "shadow page table" bookkeeping. This is rather bad for the performance of software-based virtualization solutions (BT and Para), but it wreaks havoc on the performance of the early hardware virtualization solutions. The reason is that you get a lot of those ultra heavy VMexit and VMentry calls.
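
For the curious, here's a minimal Python sketch of what that bookkeeping looks like (a toy model of our own, not actual VMM code; all page numbers are invented):

    # Toy model of guest virtual -> guest physical -> host physical translation.
    guest_page_table = {0: 7, 1: 3}     # guest virtual page -> guest "physical" page
    host_page_table  = {7: 42, 3: 19}   # guest "physical" page -> real physical page

    # Without shadow paging: two lookups per translation.
    def translate_twice(vpage):
        return host_page_table[guest_page_table[vpage]]

    # Shadow page table: the VMM pre-composes both tables so the MMU
    # does a single lookup (guest virtual -> real physical).
    shadow_page_table = {v: host_page_table[g] for v, g in guest_page_table.items()}

    assert translate_twice(0) == shadow_page_table[0] == 42

    # The catch: every time the guest edits its page table, the VMM must
    # patch the shadow table too -- on early hardware virtualization that
    # means a costly VMexit/VMentry round trip per update.
    guest_page_table[1] = 7                    # guest remaps a page...
    shadow_page_table[1] = host_page_table[7]  # ...and the VMM must follow suit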

The second generation of hardware virtualization, AMD's nested paging and Intel's EPT technology, partly solves this problem with brute hardware force.



EPT or Nested Page Tables is based on a "super" TLB that keeps track of both the Guest OS and the VMM memory management.

As you can see in the picture above, a CPU with hardware support for nested paging caches both the virtual memory (guest OS) to physical memory (guest OS) translation and the physical memory (guest OS) to real physical memory translation in the TLB. The TLB has a new VM-specific tag, called the Address Space IDentifier (ASID). This allows the TLB to keep track of which TLB entry belongs to which VM. The result is that a VM switch does not flush the TLB. The TLB entries of the different virtual machines all coexist peacefully in the TLB… provided the TLB is big enough, of course!
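
The mechanism is easy to picture in code; here's a toy sketch of our own (invented entries, not real hardware state) of a TLB keyed by (ASID, virtual page):

    # Toy ASID-tagged TLB: entries are keyed by (asid, virtual_page) so a
    # VM switch just changes the current ASID instead of flushing the table.
    tlb = {}

    def tlb_fill(asid, vpage, ppage):
        tlb[(asid, vpage)] = ppage

    def tlb_lookup(asid, vpage):
        return tlb.get((asid, vpage))   # None models a TLB miss

    tlb_fill(asid=1, vpage=0, ppage=42)   # VM 1's translation
    tlb_fill(asid=2, vpage=0, ppage=99)   # VM 2's translation for the same vpage

    # Switching between VMs leaves both entries intact:
    assert tlb_lookup(1, 0) == 42
    assert tlb_lookup(2, 0) == 99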

This makes the VMM a lot simpler and completely annihilates the need to update the shadow page tables constantly. If we consider that the hypervisor has to intervene for each update of the shadow page tables (one per running VM), it is clear that nested paging can seriously improve performance (up to 23% according to AMD). Nested paging is especially important if you have more than one (virtual) CPU per VM. Multiple CPUs have to sync the page tables often, and as a result the shadow page tables have to be updated a lot more too. The performance penalty of shadow page tables gets worse as you use more (virtual) CPUs per VM. With nested paging, the CPUs simply synchronize TLBs as they would have done in a non-virtualized environment.

There is only one downside: nested paging or EPT makes the virtual to real physical address translation a lot more complex if the TLB does not have the right entry. For each step we take in the blue area, we need to do all the steps in the orange area. Thus, four table searches in the "native situation" have become 16 searches (for each of the four blue steps, four orange steps).

In order to compensate, a CPU needs much larger TLBs than before, and TLB misses are now extremely costly. If a TLB miss happens in a native (non-virtualized) situation, we have to do four searches in the main memory. A TLB miss then results in a performance hit. Now look at the "virtualized OS with nested paging" TLB miss situation: we have to perform 16 (!) searches in tables located in the high latency system RAM. Our performance hit becomes a performance catastrophe! Fortunately, only a few applications will cause a lot of TLB misses if the TLBs are rather large.
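
To put numbers on that, here is the arithmetic under the simplified model above (a quick tally, not a microarchitectural simulation):

    # TLB-miss cost under the simplified model in the text: every level of
    # the guest page walk itself has to be translated through the nested
    # (host) tables.
    native_levels = 4                    # 4-level page walk, non-virtualized
    native_cost = native_levels          # 4 memory references on a miss

    nested_cost = native_levels * native_levels  # each guest step needs a host walk
    print(native_cost, nested_cost)      # -> 4 16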


Intel's Core 2 Extreme & Core 2 Duo: The Empire Strikes Back

The architecture is called Core, the processor family is Core 2, and the product names are Core 2 Duo and Core 2 Extreme. In the past we've talked about its architecture and even previewed its performance, but today is the real deal. We've all been waiting for this day, the day Intel lifts the last remaining curtain on the chip that is designed to re-take the performance crown from AMD, to return Intel to its days of glory.

It sure looks innocent enough:


Core 2 Duo (left) vs. Pentium D (right)

What you see above appears to be no different than a Pentium D. Honestly, unless you flip it over there's no indication of what lies beneath that dull aluminum heat spreader.


Core 2 Duo (left) vs. Pentium D (right)

But make no mistake, what you see before you is not the power hungry, poor performing, non-competitive garbage (sorry guys, it's the truth) that Intel has been shoving down our throats for the greater part of the past 5 years. No, you're instead looking at the most impressive piece of silicon the world has ever seen - and the fastest desktop processor we've ever tested. What you're looking at is Conroe and today is its birthday.

Intel's Core 2 launch lineup is fairly well rounded as you can see from the table below:

CPU | Clock Speed | L2 Cache
Intel Core 2 Extreme X6800 | 2.93GHz | 4MB
Intel Core 2 Duo E6700 | 2.66GHz | 4MB
Intel Core 2 Duo E6600 | 2.40GHz | 4MB
Intel Core 2 Duo E6400 | 2.13GHz | 2MB
Intel Core 2 Duo E6300 | 1.86GHz | 2MB

As the name implies, all Core 2 Duo CPUs are dual core as is the Core 2 Extreme. Hyper Threading is not supported on any Core 2 CPU currently on Intel's roadmaps, although a similar feature may eventually make its debut in later CPUs. All of the CPUs launching today also support Intel's Virtualization Technology (VT), run on a 1066MHz FSB and are built using 65nm transistors.

The table above features all of the Core 2 processors Intel will be releasing this year. Early next year Intel will also introduce the E4200, a 1.60GHz part with only an 800MHz FSB, a 2MB cache, and no VT support. The E4200 will remain a dual core part, as single core Core 2 processors won't debut until late next year. On the opposite end of the spectrum, Intel will introduce Kentsfield in Q1 next year, a Core 2 Extreme branded quad core CPU.

Core 2 Extreme vs. Core 2 Duo

Previously Intel had differentiated its "Extreme" line of processors by giving them larger caches, a faster FSB, Hyper Threading support, and/or higher clock speeds. With the Core 2 processor family, the Extreme version gets a higher clock speed (2.93GHz vs. 2.66GHz) and this time around it also gets an unlocked multiplier. Intel officially describes this feature as follows:

Core 2 Extreme is not truly "unlocked". Officially (per the BIOS Writers Guide), it is "a frequency limited processor with additional support for ratio overrides higher than the maximum Intel-tested bus-to-core ratio." Currently, that max tested ratio is 11:1 (aka 2.93G @ 1066 FSB). The min ratio is 6:1. However, do note that the Core 2 Extreme will boot at 2.93G unlike prior generation XE processors which booted to the lowest possible ratio and had to be "cranked up" to the performance ratio.

In other words, you can adjust the clock multiplier higher or lower than 11.0x, which hasn't been possible on a retail Intel chip for several years. By shipping the Core 2 Extreme unlocked, Intel has taken yet another page from AMD's Guide to Processor Success. Unfortunately for AMD, this wasn't the only page Intel took.
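
For reference, the resulting core clocks follow directly from the quad-pumped 1066MHz bus; a quick sketch of the arithmetic (ratios per the BIOS Writers Guide quote above):

    # Core clock = bus clock x multiplier. The 1066MHz FSB is quad-pumped,
    # so the underlying bus clock is 1066 / 4 MHz.
    bus_mhz = 1066 / 4

    for ratio in range(6, 12):           # supported ratios 6:1 through 11:1
        print(f"{ratio}:1 -> {bus_mhz * ratio / 1000:.2f}GHz")
    # 11:1 lands at ~2.93GHz, the X6800's stock clock.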

Manufacturing Comparison

The new Core 2 processors, regardless of L2 cache size, are made up of 291 million transistors on a 143 mm^2 die. This makes the new chips smaller and cheaper to make than Intel's Pentium D 900 series. The new Core 2 processors are also much smaller than the Athlon 64 X2s despite packing more transistors thanks to being built on a 65nm process vs. 90nm for the X2s.

CPU | Manufacturing Process | Transistor Count | Die Size
AMD Athlon 64 X2 (2x512KB) | 90nm | 154M | 183 mm^2
Intel Core 2 | 65nm | 291M | 143 mm^2
Intel Pentium D 900 | 65nm | 376M | 162 mm^2

Intel's smaller die and greater number of manufacturing facilities give it greater pricing flexibility than AMD has.
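
A back-of-the-envelope illustration of why die size matters for pricing (naive area division over a 300mm wafer, ignoring edge loss and yield, so treat these as rough upper bounds):

    import math

    wafer_area = math.pi * (300 / 2) ** 2   # 300mm wafer, area in mm^2

    for name, die_mm2 in [("Core 2", 143), ("Pentium D 900", 162),
                          ("Athlon 64 X2", 183)]:
        print(f"{name}: ~{wafer_area / die_mm2:.0f} dies per wafer (upper bound)")
    # Smaller dies -> more candidates per wafer -> lower cost per chip.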

System Buyers Guide: PCs under $800

AMD HTPC

Everyone asks for HTPC component recommendations, and then when we publish them readers can't wait to throw rocks at them. Perhaps this is because the HTPC, more than any other computer class, is a very personal machine. It needs to meet the specific needs and demands of end users, who vary widely in what they plan to do with their new HTPC. So let's first talk about our concept for these two HTPC configurations.

We are assuming the user already has the HDTV (likely) or display he plans to feed, along with a sound system for that HDTV. The motherboards we recommend can reasonably feed audio signals for your Blu-ray movies, but they are not integrated audio amplifiers. Since most end-users are on cable or satellite for TV, we will make no recommendations at all for a TV tuner. Of the many possible uses of an HTPC, storing, playing, and streaming movies is what the great majority of end-users do; that is mostly what their HTPC systems are used for, and that is where we have concentrated our recommendations. In general, the processing power in both systems has increased since our December 2009 guide, while costs have gone down a bit.

AMD HTPC System
Hardware | Component | Price
Processor | AMD Phenom II X3 710 (2.6GHz x 3, 3x512KB L2, 6MB L3 Cache) | $119
Cooling | CPU Retail HSF | -
Video | On-Board | -
Motherboard | ASUS M3N78-EM | $90
Memory | 4GB DDR2-800 - G.Skill F2-6400CL5D-4GBPQ | $37
Hard Drive | Western Digital Caviar Green WD10EACS 1TB SATA 3.0Gb/s - OEM | $105
Optical Drive | LG BD/HD DVD / 16x DVD+/-RW GGC-H20L - Retail | $110
Audio | On-Board | -
Case | Silverstone LC13B-E Black Aluminum/Steel ATX HTPC Case (after $15 rebate) | $100
Power Supply | PC Power & Cooling Silencer PPCS500 500W ATX12V/EPS12V, SLI/CrossFire Ready, 80 PLUS Certified, Active PFC - Retail (after $25 rebate) | $50
Base System Total | | $611
Keyboard and Mouse | Logitech Cordless Desktop EX110 Black USB RF Wireless Keyboard & Optical Mouse | $30
Operating System | Microsoft Vista Home Premium OEM | $99
Complete System Bottom Line | | $740

The CPU chosen for the AMD HTPC computer is the new triple core Phenom II X3 710 with 6MB of L3 cache. You get the expanded processing power of the Phenom II, which is always useful in an HTPC, at the same price as the older Phenom CPU chosen in the December guide. The three CPU cores each run at 2.6GHz, each with a 512KB L2 cache, plus a shared 6MB L3 cache - the same L3 cache size shared by Phenom II quad-core processors. We hesitate to call a Phenom II X3 CPU a low-end chip, but this is certainly the most reasonable Phenom II you can buy. It has plenty of power, however, to drive your AMD HTPC most anywhere you choose to go.

With DDR2-800 so reasonable these days we equipped the HTPC with 4GB of G.Skill memory. We aren't really interested in overclocking this HTPC (though it's technically still possible), and spending additional money on even higher performance RAM just doesn't make sense. 4GB of memory, however, does make perfect sense in an HTPC box.

The $90 ASUS M3N78-EM is based on the NVIDIA GeForce 8300 chipset. The board features one PCI-E x16 slot, one PCI-E x1 slot, two PCI slots, 8GB memory support, NVIDIA Gigabit LAN, 7.1 HD audio, 12 USB ports, five 3Gb/s SATA ports with RAID support, IEEE 1394a, one eSATA port, HDMI/DVI/VGA output, and full support for the Phenom 140W processors. This board offers overclocking capabilities along with being a top notch HTPC capable board. We highly recommend the GF8200/8300 series for the HTPC market due to hardware accelerated Blu-ray/H.264 playback, multi-channel LPCM output, and very good application performance.

As we discussed in the HTPC introduction, we did not include a TV tuner in the configuration since most end-users are now using their cable and satellite feeds. Few users, therefore, have any real need for a TV tuner card. There's something else to consider here: the US government mandated deadline to end analog broadcasts (now set for June) means older/cheaper analog tuner cards are useless unless you have an analog cable/satellite signal. If you truly need a digital TV tuner, one fairly unique option is the HD HomeRun from Silicondust USA. This is a dual HDTV tuner/recorder that functions over a network and provides ATSC/QAM support. The price of $169 is more than many other options, but this is arguably a more flexible overall solution - particularly with the mandated move to digital and away from analog.

What's the point of having an HTPC if you don't have a lot of storage space? To that end, we selected a newly affordable 1TB (1000GB) Western Digital Caviar Green WD10EACS SATA hard drive at just $105. The WD Green is a variable speed energy saving design that we found to be among the quietest drives we have ever evaluated. For an HTPC, quiet operation is paramount and this WD Green will not disappoint. The WD Green is a bit slower than true 7200RPM 1TB drives, but the real performance difference is very minor. Another excellent HD option is the Seagate Barracuda 7200.11 ST31000333AS 7200RPM 1TB at $110. Performance of this 1TB drive has been exemplary in early testing at AnandTech, and the drive has proved to be reasonably quiet. Seagate also makes a super-reliable 1TB drive optimized for video storage and retrieval called the Seagate SV35.3 ST31000340SV 1TB at $150. This "video" Seagate features 24x7 reliability with > 1 million hours MTBF and improved read/write reliability. For those willing to pay the small premium the "video" Seagate would be a good alternate choice.
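
To put that MTBF figure in perspective, here's a rough conversion to an annualized failure rate under the usual exponential-lifetime assumption (our own arithmetic, not a Seagate specification):

    import math

    mtbf_hours = 1_000_000          # "> 1 million hours" from the spec sheet
    hours_per_year = 24 * 365       # 24x7 operation

    # Exponential lifetime model: P(failure within a year) = 1 - e^(-t/MTBF)
    afr = 1 - math.exp(-hours_per_year / mtbf_hours)
    print(f"Annualized failure rate: ~{afr:.2%}")   # ~0.87%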

The optical drive is certainly an upgrade over the entry and budget systems, since a reasonable HTPC requires Blu-ray playback capabilities. The LG Black 6X Blu-ray SATA drive fits the bill without breaking the bank. It provides 6X Blu-ray playback along with fast recording and playback of DVD and CD media. The current price is around $110, but this drive sometimes goes on sale for $100, so look out for specials. There are also Blu-ray options from Lite-On near the $100 mark, including a 6X Blu-ray player at $105. We do not have much experience with this Lite-On drive, but Lite-On drives in the past have proved reliable. That would make the Lite-On Black 6X Blu-ray SATA a more reasonably priced alternative where every penny counts.

Our choice for an HTPC case is the audio component look of the Silverstone LC13B-E, an extremely flexible design with two silent fans that delivers silent power when combined with the PC Power and Cooling 500W Silencer power supply. This solid Silverstone case can handle either ATX or Micro ATX motherboards, with space for four internal hard drives in addition to two 5.25" external bays and two 3.5" external bays. If your plans for your HTPC include lots of components and storage, the Silverstone is an excellent choice. If you prefer a small cube case, the Lian Li PC-V350B is a gem of a small black aluminum case. The Lian Li is our choice for the Intel HTPC system on the next page, and you can find more information on that case there.

Since most will place their HTPC near their HDTV or big screen monitor, a wired keyboard and mouse are not really very useful in most setups. Control is more often from across the room, so a wireless RF Logitech keyboard and mouse were selected. At just $30 for the pair, the Logitech Cordless Desktop EX110 wireless keyboard and mouse is a great value. RF wireless is also preferred for HTPC use, since it does not require the "line of sight" that IR wireless needs.

The final price of the AMD HTPC comes to just $740. That has to be considered a bargain considering the triple core Phenom II CPU, 4GB of memory, and 1TB hard drive all housed in a quiet Silverstone HTPC case with a PC Power and Cooling Silencer 500W PSU. You can certainly spend even less on a basic HTPC box, but we doubt you can build a more powerful or quiet system for the same money.


Seagate 7200.10 500GB: Hitting the Sweet Spot



Seagate announced the Barracuda 7200.10 series over a year ago as the successor to their Barracuda 7200.9 series, to much surprise, as that particular product line had only been marketed for a very short period. The Barracuda 7200.10 series quickly became their flagship product for personal desktop solutions, with the 7200.9 series quickly being relegated to a value performance offering. The Barracuda 7200.10 was also the first desktop-centric hard drive to feature perpendicular magnetic recording (PMR) technology.

While Seagate has not introduced capacities larger than 750GB or revised their product lineup in the past year, that does not mean they have not been busy. Their new 7200.11 series, which features their next-generation PMR technology, will be introduced shortly. Along with several new product enhancements, Seagate will increase their per-platter capacities to 250GB, and early testing shows sustained transfer rates just over 100MB/s, not to mention improved seek times. The new 7200.11 series will feature capacities from 250GB to 1TB.



While that is an interesting bit of history as well as a brief look at what's coming down the pipeline, we are here today to discuss the Seagate Barracuda 7200.10 500GB drive. Historically, our Buyer's Guides have allowed 500GB hard drive recommendations in only the highest end systems, where price is less of an issue. As hard drive capacities have now soared to the 1TB range, 500GB drives have finally reached a point where they are a solid if not spectacular value for the majority of end-users, who use their systems for everything from video/audio encoding to gaming. We are now seeing the 500GB drive category quickly becoming the new sweet spot for the desktop from both a cost and performance viewpoint.

Indeed, a quick look at today's prices shows that several 500GB hard drives have fallen below the $0.25-per-gigabyte level. The Seagate 500GB 7200.10 drive in particular has reached $0.24 per gigabyte, making it one of the better values on the market today. However, we have to be honest here: the most recent 500GB desktop-centric drives from Samsung and Western Digital offer a cost per gigabyte of $0.22. Our next storage article will take a look at those two drives.
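
The cost-per-gigabyte math is simple division; a quick sketch using the figures above (note the dollar prices here are inferred from the quoted per-gigabyte numbers, not separately sourced):

    drives = {
        "Seagate 7200.10 500GB": 120,   # ~$0.24/GB implies roughly this price
        "Samsung/WD 500GB":      110,   # ~$0.22/GB likewise
    }

    for name, price in drives.items():
        print(f"{name}: ${price / 500:.2f} per GB")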

Drive capacity is only one variable in the equation, however. Now that pricing has become very competitive, the consumer is able to look more closely at the feature set of each drive. Cache size, platter capacity (and number of platters), noise, thermals, power consumption, and warranty/support are only a few of the categories manufacturers need to address in order to earn a recommendation in our labs.

In this review, we will see how much attention Seagate has paid to these categories when developing their 7200.10 500GB hard drive, and if it shines above the direct competition from Hitachi's T7K500 in this increasingly crowded field. Let's find out how the Seagate 7200.10 500GB performs against other SATA based drives in its class.

The New Theater 650 TV Tuner Solution from ATI

Introduction

TV Tuners are becoming more and more popular as home theaters and computers start to merge. Already, many countries outside the US are making the move to integrated computer and home theater/entertainment centers in their homes instead of separate components, particularly in parts of Asia where space is limited. Of course, many people in the US are also beginning to see the benefits of combining their TVs and computers into one unit, and it seems reasonable to predict that this will be the norm in the near future.

We recently reviewed NVIDIA's DualTV Media Center Edition TV tuner card, and in the article, we looked briefly at the ATI Theater 550 Pro (again). ATI has had success with their Theater cards in the past, and now they are unveiling a new addition to the series, the Theater 650. This is the newest TV tuner chip/card from ATI, and like the 550 it's still a single tuner card (unlike NVIDIA's DualTV MCE), but there are some new features with this one that set it apart from the rest.

One of the most notable features of this card is its digital capability: it is one of the first products to properly combine digital and analog TV reception, recording, and encoding in hardware in a single solution. It boasts much better filtering capabilities as well; for example, it has a new motion adaptive 3D comb filter for better image quality. There are a few other features of the Theater 650, and of course we'll be looking at all of them further in the review.

We've chosen to limit the comparisons to only cards that are compatible with Windows Media Center Edition, in order to keep consistency between TV tuner applications. We will be comparing the Theater 650 to the older Theater 550, as well as NVIDIA's DualTV MCE. We'll be looking at not only image quality, but also CPU utilization between these three cards.

We very much appreciated all of the comments and suggestions on the last TV tuner article (the NVIDIA DualTV MCE) and hope to provide better coverage of this card and its features this time around. Reader feedback is very important to us here at AnandTech, and we are very interested in what our readers want to see in a TV tuner card review. That said, in this review of ATI's Theater 650, we'll be looking at the card, its features, and how it compares to a couple of other solutions in both performance and image quality. So without further fanfare, let's look at the ATI Theater 650 Pro.

Revisiting Linux Part 1: A Look at Ubuntu 8.04


Back in the early part of 2008 we decided that we wanted to take a fresh look at Linux on the desktop. To do so we would start with a “switcher” article, giving us the chance to start anew and talk about some important topics while gauging the usability of Linux.

That article was supposed to take a month. As I have been continuously reminded, it has been more than a month. So oft delayed but never forgotten, we have finally finished our look at Ubuntu 8.04, and we hope it has been worth the wait.

There are many places I could have started this article, but the best place to start is why this article exists at all. Obviously some consideration comes from the fact that this is my job, but I have been wanting to seriously try a Linux distribution for quite some time. The fact that so much time has transpired between the last desktop Linux article here at AnandTech and my desire to try Linux makes for an excellent opportunity to give it a shot and do something about our Linux coverage at the same time.

After I threw this idea at Anand, the immediate question was which distribution of Linux we should use. As Linux is just an operating system kernel (more colloquially, it is the combination of the Linux kernel and the GNU toolset, hence the less common name GNU/Linux), this leaves a wide variety of actual distributions out there. Each distribution is its own combination of GNU/Linux, applications, window managers, and more, forming a complete operating system.

Since our target was a desktop distribution with a focus on home usage (rather than being exclusively enterprise focused), the decision was Ubuntu, which has established a strong track record of being easy to install, easy to use, and well supported by its user community. The Linux community has a reputation for being hard to get into for new users, particularly when it comes to getting useful help that doesn't involve being told to read some esoteric manual (the RTFM mindset), and this is something I wanted to avoid. Ubuntu also has a reputation for not relying on the CLI (Command-Line Interface) as much as some other distributions, which is another element I was shooting for – I may like the CLI, but only when it easily allows me to do a task faster. Otherwise I'd like to avoid the CLI when a GUI is a better way to go about things.

I should add that while we were fishing for suggestions for the first Linux distro to take a look at, we got a lot of suggestions for PCLinuxOS. On any given day I don't get a lot of email, so I'm still not sure what that was about. Regardless, while the decision was to use Ubuntu, it wasn't made without considering other distributions. Depending on the reception of this article, we may take a look at other distros.

But with that said, this article serves two purposes for us. It’s first and foremost a review of Ubuntu 8.04. And with 9.04 being out, I’m sure many of you are wondering why we’re reviewing anything other than the latest version of Ubuntu. The short answer is that Ubuntu subscribes to the “publish early, publish often” mantra of development, which means there are many versions, not all of which are necessarily big changes. 8.04 is a Long Term Support release; it’s the most comparable kind of release to a Windows or Mac OS X release. This doesn’t mean 9.04 is not important (which is why we’ll get to it in Part 2), but we wanted to start with a stable release, regardless of age. We’ll talk more about this when we discuss support.

The other purpose of this article is that it's also our baseline "introduction to Linux" article. Most components of desktop distributions do not vary wildly, so much of what we talk about here is going to be applicable in future Linux articles. Linux isn't Ubuntu, but matters of security, some of the applications, and certain performance elements are going to apply to more than just Ubuntu.



Linux and L2 Cache; Sempron vs. Athlon

As AMD rolls out its newest Sempron processor line, many readers are asking us whether the reduced cache Socket 754 Sempron 3100+ really compares with already shipping Athlon 64 single channel solutions. Today we take two single channel, 1.8GHz processors with differing L2 cache and compare them in the same Linux benchmarks we have used in the past. The Athlon 64 2800+ and the Sempron 3100+ are nearly identical processors, apart from the 256KB difference in L2 cache. There is also a $20 delta between the two retail products, so today we decide whether the $20 difference between the two processors is worth the sacrifice of level two cache and 64-bit addressing. We have provided benchmarks of another 1.8GHz 32-bit processor from AMD, as well as the Athlon 64 3000+, for reference only.

Update: This article got pushed live prematurely. If you read it before 12PM EST on the 18th, you read an incomplete, unfinished article.

Performance Test Configuration

Processor(s):
AMD Athlon 64 2800+ (130nm, 1.8GHz, 512KB L2 Cache)
AMD Athlon 64 3000+ (130nm, 2.0GHz, 512KB L2 Cache)
AMD Sempron 3100+ (130nm, 1.8GHz, 256KB L2 Cache)
AMD Athlon XP 2200+ (130nm, 1.8GHz, 256KB L2 Cache, 266FSB)

RAM: 2 x 512MB PC-3200 CL2 (400MHz)
Memory Timings: Default
Motherboards: Chaintech ZNF-250 (nForce3, Socket 754); DFI NFII Infinity (nForce2, Socket 462)
Operating System(s): SuSE 9.1 Professional (32-bit), Linux 2.6.4-52-default

Compiler:
linux:~ # gcc -v
Reading specs from /usr/lib/gcc-lib/i586-suse-linux/3.3.3/specs
Configured with: ../configure --enable-threads=posix --prefix=/usr --with-local-prefix=/usr/local --infodir=/usr/share/info --mandir=/usr/share/man --enable-languages=c,c++,f77,objc,java,ada --disable-checking --libdir=/usr/lib --enable-libgcj --with-gxx-include-dir=/usr/include/g++ --with-slibdir=/lib --with-system-zlib --enable-shared --enable-__cxa_atexit i586-suse-linux
Thread model: posix
gcc version 3.3.3 (SuSE Linux)

Libraries:
linux:~ # /lib/libc.so.6
GNU C Library stable release version 2.3.3 (20040405), by Roland McGrath et al.
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Configured for i686-suse-linux. Compiled by GNU CC version 3.3.3 (SuSE Linux). Compiled on a Linux 2.6.4 system on 2004-04-05.
Available extensions:
GNU libio by Per Bothner
crypt add-on version 2.1 by Michael Glad and others
linuxthreads-0.10 by Xavier Leroy
GNU Libidn by Simon Josefsson
NoVersion patch for broken glibc 2.0 binaries
BIND-8.2.3-T5B
libthread_db work sponsored by Alpha Processor Inc
NIS(YP)/NIS+ NSS modules 0.19 by Thorsten Kukuk
Thread-local storage support included.
Report bugs using the `glibcbug' script to .

Even though we are using 1GB of memory in a dual channel configuration, the Socket 754 platform will only operate in single channel mode. Fortunately for AMD, since the memory controller is directly on the processor, we do not see large latency increases going from dual channel to single channel mode. Of the two processors we're comparing, only the Athlon 64 2800+ can run 64-bit binaries, so for the sake of the experiment we will only look at 32-bit binaries today. We have looked at 32-bit versus 64-bit performance in the past, and we will revisit it again in a few weeks, so today we will just focus on 32-bit performance.

Also keep in mind that the GCC 3.3.3 included with SuSE 9.1 Pro has many options back-ported from the official 3.4.1 tree. Our results with this GCC 3.3.3 are much more optimized than those of the standard GCC 3.3.3.


AMD and Linux: Reaching for the 64-bit Trophy

Without a viable 64-bit Windows solution available today, enthusiasts and neophytes alike are looking to Linux for new opportunities. Is Linux mature enough to take advantage of the same technology released to the public only months ago? The answers are more complicated than many of us originally thought, particularly considering the competition.

To get a well-rounded breakdown of where Linux is going, and where it trumps (or fails against) Windows, we took the two largest 64-bit Linux distributions, their 32-bit counterparts, and the Windows XP 64-bit public beta for a test drive. The way that we are running the benchmarks is slightly unusual: we do not recompile or optimize benchmarks per hardware platform. Our goal is to see which out-of-the-box operating system performs the best with as much support as possible. Thus, we use RPMs and binaries packaged with or compiled for the specific operating system tested.

Performance Test Configuration

Processor(s): Athlon 64 3500+ Socket 939 (2.2GHz, 512KB Cache)
RAM: 2 x 512MB Mushkin PC3500 Level II
Memory Timings: Default
Hard Drives: Seagate 120GB 7200RPM IDE (8MB buffer)
Video AGP & IDE Bus Master Drivers:
Linux NVIDIA Core Logic: 1.0-275
Linux NVIDIA Graphics: 1.0-5332
Windows 64-bit Graphics: 57.30
Windows 64-bit Core Logic: 4.34a
Video Card(s): NVIDIA GeForceFX 5600SE 128MB
Operating System(s): SuSE 9.1 Professional (32/64-bit); Fedora Core 2 (32/64-bit); Windows XP SP1 (32/64* bit)
Motherboards: NVIDIA nForce3 250 Reference Board

*Windows XP SP1 64-bit is the February 2004 open beta release.

We attempted to keep our test configuration as close to our CPU/Motherboard/Memory Windows test configuration as possible. The only major changes that we adopted for this analysis are the processor, an IDE rather than SATA hard drive, and the NVIDIA GeForceFX video card. We opted for an NVIDIA card over an ATI card for these benchmarks primarily because of 64-bit Linux driver support. We have a Linux video card roundup planned for the future, so in that article we can take a better look at where the particular differences lie in video processing.

Building a Linux PVR Part I - MythTV Setup and Install



Introduction

Once in a while, we get so excited about writing an article that we completely lose focus and end up with a 10,000 word epic instead of a concise little review. This two-part Linux TiVo article ended up being one of those articles. Of course, we aren't really building a Linux TiVo, but rather something as close as we can come with some rudimentary hardware and free software, such as Linux and MythTV.

Consider the cost of a TiVo: service runs anywhere from $100 to $600 per year depending on which DVR and subscription you buy. Building a moderate MythTV machine for around $500 actually saves us money in the long run. Building our own device also allows us to upgrade the hardware easily and reconfigure the software at will.
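
A quick payback sketch using those figures (the build cost and service range come from the paragraph above; the arithmetic is ours):

    build_cost = 500                      # our MythTV box
    for yearly_service in (100, 600):     # TiVo DVR + subscription range
        print(f"${yearly_service}/yr service: breaks even in "
              f"{build_cost / yearly_service:.1f} years")
    # Somewhere between ~10 months and 5 years, ignoring the TiVo hardware itself.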

Ultimately, we would like our machine to be comparable to a TiVo device; but in actuality, we really would like it to perform as well as or better than a machine based on the same hardware running the newest Windows Media Center (Anand wrote a small MCE introduction 18 months ago). Recently, we obtained a new whitebox Media Center device with hardware similar to that found in Part I of our Linux TiVo experiment. Our goal for Part I of the article is to get Linux, MythTV, and all the trimmings working successfully so that we can square both machines off against each other, and then compare encoding, image quality, and functionality. We are publishing Part I: The Installation today, but expect Part II: The Comparison in just a few days.

If all goes well, we will probably follow up with a Part III some weeks later, dissecting Freevo, GeexBox, etc.

Building a Linux PVR, Part 2: Microsoft's MCE 2004

Introduction

A few weeks ago, we introduced the first of a series of articles on building a home made PVR, "Building a Linux PVR Part I - MythTV Setup and Install". Today, we bring you the second part of the series, which focuses on Microsoft's Windows XP Media Center Edition 2004 and how it compares to the Linux-based MythTV.

When Microsoft first introduced their Media Center Edition of Windows, many saw this as a great opportunity to acquire a cheap PVR, since it was combined with a PC that could be used for the usual day-to-day tasks, such as word processing or browsing the Internet. Microsoft, however, decided to bundle MCE with custom PCs built from a short list of supported hardware by big names in the industry, such as Hewlett-Packard and Gateway. It could not be bought off store shelves by PC enthusiasts who already had hardware capable of PVR operations, nor did Microsoft plan on supporting hardware beyond that from the few names it worked with.

Fast forward to today where Microsoft has begun selling OEM versions of their Media Center Edition to "Mom and Pop" shops to be installed on only Media Center Edition certified machines. This is a step forward, since it gives more power to those smaller shops. Media Center Edition still does not have support for the long list of hardware that MythTV does, but Microsoft has expanded their driver list quite a bit from their first release.

Although we installed MythTV from scratch in the previous review, we will use KnoppMyth in this half of the analysis. KnoppMyth installs cleanly and easily, but does not offer as much support as getting your hands dirty with a "from scratch" install.


Doom3 Linux and Windows Battlegrounds

Introduction

Doom3 was a turning point for a lot of us as it marked an important milestone in next generation game engines. We have been keeping a very close eye on id's Linux adventure, and at the core of id's Linux development is Timothee Besset, the Linux port maintainer.

"I'm getting surprisingly good performance compared to the Windows version."

Timothee Besset, Linuxgames.com [1]

This sounds like a wonderful opportunity to put Doom3 through its paces. We crafted this entire analysis around Timothee's expectations.

Our goals for this analysis are twofold. First, we want to take the newest working video cards that we can find and test their performance on Linux using Doom3. This is something of a continuation of last week's GPU roundup, as the Doom3 engine will ultimately become the next cornerstone for Linux first-person shooter games. This includes exhaustive image quality (IQ) testing. Second, we wish to run a comparative analysis on how Doom3 performs and looks on Linux versus Windows.

Gaming Laptop Roundup

Standard Gaming Performance

Starting the benchmarks, we'll cut right to the chase and begin with gaming performance. That's what you buy one of these laptops for, after all. As mentioned in our last article, we have recently updated our laptop gaming benchmarks. We use built-in performance tests on Company of Heroes, Crysis, Devil May Cry 4, Enemy Territory: Quake Wars, and Unreal Tournament 3. For Assassin's Creed, GRID, Mass Effect, and Oblivion we benchmark a specific scene using FRAPS. In all tests, we run each benchmark at least four times, discard the top result, and report the highest remaining score. We will use resolution scaling graphs to compare the different laptop configurations, as that will allow us to examine how the GPU and CPU affect performance. At lower resolutions we should become more CPU limited, while the higher resolutions and detail settings should put more of a bottleneck on the GPU.
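
Our scoring rule is simple enough to express in code; here's a minimal sketch of "discard the top result, report the highest remaining" with invented sample numbers:

    def benchmark_score(runs):
        """Drop the single best run, then report the best of the rest."""
        ordered = sorted(runs, reverse=True)
        return ordered[1]          # second-highest = highest remaining score

    fps_runs = [61.2, 59.8, 60.5, 60.1]    # at least four runs per test
    print(benchmark_score(fps_runs))        # -> 60.5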

The Gateway notebooks obviously offer a lot of bang for the buck, even if they don't top the performance charts. The Alienware m15x is faster overall, but the margin of victory over the P-7811 is only about 7% (around 25% over the P-171XL). As we mentioned in our Gateway P-7811 review, there also appears to be a driver glitch with the P-171XL in Devil May Cry 4 - which incidentally is the only game where we don't see significant performance improvements from SLI at higher resolutions. The Sager NP9262 is clearly in a league of its own when it comes to performance. If you've ever wondered why people would consider purchasing a heavy SLI notebook, gaming performance that's almost twice as fast as the closest single GPU solution is the answer. Unfortunately, that means the CPU becomes more of a bottleneck, which is why we see several games where the Sager laptop has an almost flat line.


NVIDIA 9500 GT: Mainstream Graphics Update


Even though sub-$100 hardware sells in very high volume, we don't often see a lot of heated debate surrounding it. People don't usually get excited about mainstream and low end hardware. The battle over who can run the newest game with the coolest effects at the highest resolution, while not applicable to most people, tends to generate quite a bit of interest. There is a lot of brand loyalty in this industry, and people like to see the horse they backed come out on top. Others, while not siding with a particular company, jump on the wagon of whatever company has the fastest part at any given time. I myself am a fan of the fastest hardware out at any given time. I get excited by how far we've come, and how much closer the top of the line gets us to the next step. Keeping up with top of the line hardware is more like attending a sporting event or taking in a play: the struggle itself is entertainment value.

For some, knowing what's best does have relevance. For many, many others, it is more important to keep track of hardware that, while cheap, is as capable as possible. And that is where we are today.

At the end of July, NVIDIA released their GeForce 9500 GT. This part (well, the GDDR3 version anyway) is almost a drop-in replacement for the 8600 GT as far as the specifications go. In fact, the prices are nearly the same as well.

No, it isn't that exciting. But even these very low end add-in cards are head and shoulders above integrated graphics solutions. While we'd love to see everything get more performance, the price of the 8600 GT has dropped significantly over time. We have now gotten to a point where people who aren't willing or able to spend above $100 on a graphics card can get good experiences in modern games. At least software and hardware complexity tend to parallel each other, to the point where the disparity in how new a title can be played on cheap hardware isn't getting any worse.

So with so many similarities, why release this part? Because there won't be an endless supply of G84 hardware going forward; thus the G96 comes along with nearly the same specs, selling at the same price. The decreased die size of the 65nm G96 (as opposed to the 80nm G84) will also help increase profits for NVIDIA and board partners on this hardware while it sells at the same price point. There are rumors that NVIDIA will even move the G96 to 55nm later this year, further increasing their savings and possibly enabling passive cooling solutions. But we will have to wait a while yet to find out whether that will actually happen.
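
The expected area saving from the 80nm-to-65nm move can be estimated with ideal scaling (a first-order sketch of our own; real layouts never shrink perfectly):

    # Ideal die-area scaling: area goes with the square of the feature size.
    shrink = (65 / 80) ** 2
    print(f"Ideal G96 area vs. G84: {shrink:.0%}")   # ~66% of the original area
    # A further move to 55nm would cut it again: (55/65)^2 is ~72% of the 65nm die.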

Before we get into the 9500 GT itself, let's take a look at the state of the industry that brought us to this point.


Faster Graphics For Lower Prices: ATI Radeon HD 4770

First things first: the Radeon HD 4770 is faster than existing 4800 series hardware (namely the 4830). Yes, this is by design.

We hate to start another article complaining about naming (there seems to be some sort of pervasive renaissance of poor naming this year), but let's talk about why exactly we are in this situation with a look back at something from our RV670 coverage:

At least it's ironic.

Yes, the problem is born out of AMD's attempt at sensible, appropriate naming. The problem is that AMD seems to want to associate that "family" number with the physical GPU rather than with a performance class. This is despite the fact that they generally use increasing numbers for "families" that are faster. Thus, the 40nm RV740 needs a new family name, and they can't really choose 49xx, presumably (we surmise) because people would be more upset if they saw a high number and got lower performance than if they saw a lower number and got higher performance. So Radeon HD 4770 it is.


When we brought up our issues with the naming scheme, AMD was quick to respond that naming is one of the most contentious parts of bringing a graphics card to market. People get passionate about the issue. Passion is great, but not if it confuses, misleads, or distracts the end user. And that's what a decision like this does. There is no practical reason this card shouldn't have been named 4840 to reflect where its performance falls. After all, the recently released 4890 is host to quite a few tweaks to the physical layout of the chip, and it isn't called the 4970.

At the same time, that trailing zero is doing nothing on all current AMD hardware. There is an extra digit in there that could allow AMD to shift some things around in their naming scheme to retain all the information they want to convey about architecture generation, process revision, performance class, and specific performance within that class. If we are going to have a model number system, then for it to have real value to both the informed and the casual graphics card user it needs to be built to properly represent the underlying hardware AND be strictly related to performance. With this move, AMD joins NVIDIA in taking too many liberties with naming, to the detriment of the end user.

Now that that's taken care of, what we have today is a 40nm GPU (the first) paired with 512MB of RAM on a $110 card. The package delivers performance at a level between the 4830 and the 4850. First indications were that this would be a $99 part, and the performance we see with this card at that "magic" price would have been terrific; it's still not bad at a 10% higher price. AMD has indicated that there should be some $10 mail-in rebates available for those who are interested in the extra bonus hassle and upfront cost to get the cash.