
Friday, April 26, 2013

Apple MacBook Pro with Retina Display Review

Pros
  • Screen, screen, screen!
  • Thin
  • Strong performance
  • Excellent speakers

Cons
  • Very expensive for performance level
  • Increased costs for repair and battery replacement
  • Few applications ready for Retina at this time
  • Performance issues with some browsing

Quick Take
The MacBook Pro with Retina Display is a bit of the past and a bit of the future, wrapped up into one tasty, toasty present. Brilliant screen meets refined design.

    Overview

    The next-generation MacBook Pro with Retina Display - it's a mouthful of a name. Apple's latest notebook represents the culmination of a number of trends from one of the industry's most visible players - from unibody aluminum construction to soldered-in components; from solid-state storage to the much-vaunted Retina Display.
    Apple clearly has a vision for where they want to take portable computing, and while impressive, it has its drawbacks, too. Let's jump into things by taking a look at the MacBook Pro's most talked about feature: its stunningly high resolution display.
    Apple started the trend of ultra-high resolution screens with the iPhone 4, back in 2010. The iPhone 4's display doubled each dimension of pixels over its predecessor from 480x320 to 960x640. The iPad 3 did the same thing - the best-selling tablet jumped from 1024x768 to 2048x1536.
A Retina-enabled MacBook Pro follows the same trajectory. Previously, the 15-inch MacBook Pro shipped with a standard resolution of 1440x900. This new MacBook Pro, then, uses 2880x1800, which works out to more than a 5 megapixel image. In terms of sharpness, that comes to roughly 220 pixels per inch, compared to 326ppi for the iPhone and 264ppi for the iPad.
So why are they still called Retina? Admittedly, "Retina" screens are a marketing concept, but there is real science behind the nomenclature.
    It has to do with how your eye works, and how you use your specific device. You hold a phone closer than a tablet, and you'll probably hold a tablet closer to you than you would your laptop. So despite the MacBook Pro employing a lower pixel density than its more mobile counterparts, it still gets to lay claim to the Retina Display name.
If you're handy with math, you can figure out that your HDTV is probably pretty close to Retina quality too, in terms of your ability to distinguish individual pixels at a typical viewing distance.
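To make the Retina math concrete, here's a rough sketch of the pixel-density and viewing-distance arithmetic. It assumes the common rule of thumb of roughly one arcminute of visual acuity (about 60 pixels per degree); the diagonal sizes are approximate and the thresholds are illustrative, not Apple's official figures.

import math

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch, from resolution and diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

def retina_distance_in(density_ppi, pixels_per_degree=60):
    """Approximate viewing distance (inches) beyond which individual pixels
    blur together, assuming ~1 arcminute of visual acuity."""
    inches_per_degree_at_1in = 2 * math.tan(math.radians(0.5))  # ~0.0175
    return pixels_per_degree / (density_ppi * inches_per_degree_at_1in)

# Approximate panel sizes; the HDTV row illustrates the living-room point above.
screens = {
    "iPhone (960x640, 3.5 in)":  (960, 640, 3.5),
    "iPad (2048x1536, 9.7 in)":  (2048, 1536, 9.7),
    "rMBP (2880x1800, 15.4 in)": (2880, 1800, 15.4),
    "HDTV (1920x1080, 50 in)":   (1920, 1080, 50.0),
}

for name, (w, h, diag) in screens.items():
    d = ppi(w, h, diag)
    print(f"{name}: {d:.0f} ppi, looks 'Retina' beyond ~{retina_distance_in(d):.0f} inches")

The results line up with the figures above: the lower the pixel density, the farther back you have to sit before the display earns the label.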

    The screen - oh my god, the screen

While the sharpness plays a role in how good the screen looks - and the new MacBook Pro's screen looks better than that of any other notebook that has ever existed, bar none - so does the panel technology. I want to make that clear - if display quality is paramount to you, for whatever reason, this is the only laptop you should remotely be considering. It's simply that good. Apple uses IPS screens in their next-gen MBP, just like in the iPhone and iPad. It's a welcome step up from the TN panels they've used in the past, which suffered from color distortion and poor viewing angles.
    *Note - see the comments at the end of the article for a couple of notes on color accuracy.
    These new screens fix all of that.
You might hope that with such a high resolution display, we've finally entered the era of resolution independence. Regrettably, that's not quite the case. As a result, Apple has been forced to hack together a way to balance the sharpness of the display with the usability of the UI. Mind you, "hack together" makes it sound worse than it is; as these solutions go, it is really quite elegant, and quite a bit better than simply changing the DPI settings in Windows.
When you boot the MacBook Pro up for the first time and dive into the resolution settings, you'll be confronted with a new settings pane. Apple forces you to choose between two options: one is tuned by default for the Retina Display ("Best for Retina display"), while the other ("Scaled") lets you choose between five different resolution settings.
Unlike traditional screens, there aren't any resolution numbers here. At least, not at first. Inside of the 'Scaled' option, you get to choose between five different display modes: 'Larger Text', which Apple says "Looks like 1024x640"; one step up, which "Looks like 1280x800"; the 'Best for Retina' default, which "Looks like 1440x900"; a fourth, which "Looks like 1680x1050"; and 'More Space', which "Looks like 1920x1200".
    A warning pops up beneath any non-default resolution that "Using a scaled resolution may reduce performance." This is because Apple doesn't simply scale any resolution beneath 2880x1800 up to the native resolution of the panel - they do a little scaling wizardry.
    For the 1680x1050 and 1920x1200 modes, OS X actually renders the display at 3360x2100 and 3840x2400, respectively. They do this in order to supersample the ultra-high (9.21MP!) resolution and maximize the clarity of the non-native resolution. Clear it is, too; it's probably the clearest screen we've seen for an LCD displaying non-native imagery.
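As a rough illustration of the scaling described above (my own sketch of the arithmetic, not Apple's implementation), each "Looks like" mode is rendered into a backing store at twice its stated resolution in each dimension, then resampled to the panel's native 2880x1800:

NATIVE = (2880, 1800)  # the rMBP's physical panel resolution

# The five "Looks like" options exposed under the Scaled setting
looks_like = [(1024, 640), (1280, 800), (1440, 900), (1680, 1050), (1920, 1200)]

for w, h in looks_like:
    bw, bh = 2 * w, 2 * h            # UI rendered at 2x in each dimension
    megapixels = bw * bh / 1e6
    if (bw, bh) == NATIVE:
        note = "1:1 with the panel (no resampling)"
    elif bw > NATIVE[0]:
        note = f"supersampled, downscaled by {bw / NATIVE[0]:.2f}x"
    else:
        note = f"upscaled by {NATIVE[0] / bw:.2f}x"
    print(f"Looks like {w}x{h} -> rendered at {bw}x{bh} ({megapixels:.2f} MP), {note}")

The 9.2MP figure mentioned above is the 3840x2400 backing store used for the 'More Space' mode.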

    Retina vs. non-retina

Applications that are "Retina-aware", however, get to employ even more trickery! If you're mucking about in software such as Aperture, iMovie, Final Cut Pro X, or most other Apple applications (Adobe has promised Photoshop updates, but they've not yet been released), the UI elements get doubled, but the media - photos, videos, etc. - get displayed on a 1:1 basis. If you're editing, for example, a 3000x2000 image in Aperture, you'd get to see the entire image displayed on screen, while the UI remains clearly visible. It's a neat sort of hybrid resolution that lets crafty developers really take advantage of super pixel-dense displays.
Software that isn't Retina-aware, however, doesn't fare nearly as well. Anything that isn't rendered on screen by some sort of OS API looks fuzzy. That means that any web browsing, unless you use the included Safari browser, isn't going to look so hot. A lot of legacy applications will look similarly fuzzy unless they're updated.
    Compare this, meanwhile, with how things are handled in Microsoft's Windows OS. When you install Windows via Apple's Boot Camp software, then install the Boot Camp drivers, Apple makes a few modifications for you. The DPI is changed, for example, making fonts and some UI elements look larger than normal - it's a pretty clunky result.
You do have a lot more freedom to set how you want things displayed, however, including the ability to push the screen to its native, 2880x1800, eye-searing max. Seriously. Eye-searing. It's sort of interesting, in an academic sense, to run the OS at that resolution, but it's pretty uncomfortable. Windows doesn't by default allow you to pick a pixel-doubled 1440x900 resolution (where each point maps to a 2x2 block of physical pixels), either, which is puzzling. Unless you really need to stay in Windows, you should probably avoid it; unlike on prior Intel-based Macs, OS X just plain looks better.
The real exception to this is Metro. The Start Screen and Metro applications look gorgeous at the full 2880x1800 resolution, with things rendered at human-readable sizes. Everything just looks pretty. Still, Metro isn't supremely useful on the desktop just yet, but that's a story for another day.
Viewing angles are solid - you can lay the display flat against a table and not experience the color shift and distortion you find on other screens. Backlighting was similarly commendable, with zero noticeable light bleed - everything is surprisingly uniform.
    According to our measurements, the average static contrast ratio was roughly 945:1, which is quite good for a mobile display. Parts of the screen ranged from 827:1 to 1048:1, but on the whole, the differences are completely unnoticeable to the naked eye.
    One of the specifications picked up by a lot of tech blogs and papers after the WWDC announcement was the fact that the new MacBook Pro with Retina (abbreviated herein as rMBP for brevity) featured a "less glossy" screen. It's true - the display is less glossy. That's because Apple finally managed to rid themselves of that ridiculous extra panel of glass in front of the LCD.
    I have never been a fan of pushing screens in that direction, since it adds a frustrating amount of extra gloss, shine and reflection, not to mention thickness and weight. In this respect, the rMBP is very similar to the MacBook Airs. The new panel has glass bonded directly to the screen. It's still glossy, but it's actually usable at angles, unlike some MacBooks in the recent past (glare monsters).

    Impressively thin

    I know it seems overboard, but I really can't speak highly enough about the display on this computer. This is the measure by which future displays will be judged.
    The rest of the rMBP's design is still impressive, if subdued. It looks mostly like its predecessor, save for the fact that it's about a quarter of an inch thinner. Coming in at 0.71 inches, the new MacBook Pro is just three hundredths of an inch thicker than the MacBook Air line - of course, the rMBP doesn't follow the same wedge-shaped design; it runs straight in all directions, apart from some tapering at the edge.
It all adds up to an impressively thin profile. There are definitely thinner notebooks on the market, but none that can match the same feature set. Similarly, the new rMBP weighs 4.46 pounds - not the lightest we've seen for a 15-inch notebook, but still impressive. Users coming from an older MBP will appreciate the weight reduction, while those jumping ship from a MacBook Air may find it a bit clunky in comparison.
As a whole, the build quality is impressive; the machine feels like a solid block of aluminum. There's little to no give anywhere on the computer, and the hinges are stiff without being exasperating. Fun note: thanks to the engineering changes to the screen, there wasn't an easy way for Apple to emblazon their logo on the bottom bezel, so it got moved to the underside of the machine. The pure minimalism of the design is impressive as a result.

    Ports and features

    The MacBook Pro with Retina Display has a full two Thunderbolt ports. This underused high-speed interconnect is looking to come into its own over the next year, as we've seen a number of companies prepping compatible products for release (let's hope they actually make it to market).
These can serve as mini-DisplayPort ports, too, with no special adapter required beyond a mini-DP to DP cable. They're located on the left side of the notebook. An HDMI port on the right, the first on an Apple portable, means that you can hook up three external displays. The built-in screen makes it four. I have a USB 3.0-to-HDMI adapter sitting here, but haven't tried it yet; five displays would be weirdly impressive. The MagSafe adapter has been shrunk down to fit into the smaller chassis; Apple swapped the "L" style connector for the older "T" style one.
    While the "T" style adapter had issues with fraying, it looks like Apple addressed that by sheathing the connector in the same aluminum as the rest of the notebook.
    Speaking of USB, Apple has finally made the jump from USB 2.0 to USB 3.0. It has taken them an unforgivably long time to make the switch, which was delayed until Intel added support natively into their Ivy Bridge chipsets. There's one USB port on either side. A headphone jack on the left and SD card slot on the right round out the port selection.
    There is no optical drive on this notebook. It's part of the way that Apple saved both thickness and weight, and given the trends, unlikely to be missed by most people. OS X still supports the ability to use the optical drive on another networked computer, however, so between that and cheap USB drives, you should be good to go if you really need to read discs.
    The expansion issue is probably Apple's most controversial decision. That is to say, the new MacBook Pro with Retina Display can't be upgraded. Period. The RAM is soldered down, the CPU is soldered down. The GPU is on-board. The SSD features a proprietary shape and port (though at least it isn't soldered down, too). Even the battery, which lost easy swappability with the advent of the unibody MacBook construction, is glued directly to the chassis.
    OWC and other companies will probably come up with a compatible third-party SSD, just like they did with the Air. That does little to change the static nature of the rest of the machine - you'd better decide up front how much memory you're going to need.
    Fortunately, 8GB of RAM is the default shipping option - which it should be, at that price - and for most people, that's going to be more than enough. Despite what many enthusiasts think, most people never bother upgrading the memory on their laptops, and RAM, past the first weeks of ownership, rarely out and out fails.
    What is most regrettable about this new design is the battery. Since Apple glues the battery straight onto the body of the machine, getting the battery replaced means that the entire top portion of the machine will need to be replaced. That brings extra cost, which gets passed directly onto the consumer - in this case, it'll be a $199 fee, or $70 more than the other portables. Even though heavy use should see three or more years out of the battery before noticeable degradation sets in, it's an annoying principle.


Facebook's "Open Compute" Server tested


Facebook Technology Overview
Facebook had 22 million active users in the middle of 2007; fast forward to 2011 and the site now has 800 million active users, with 400 million of them logging in every day. Facebook has grown exponentially, to say the least! To cope with this kind of exceptional growth and at the same time offer a reliable and cost effective service requires out-of-the-box thinking. Typical high-end, brute force, ultra redundant software and hardware platforms (for example Oracle RAC databases running on top of a few IBM Power 795 systems) won't do, as they're too complicated, power hungry, and most importantly far too expensive for such extreme scaling.
Facebook first focused on thoroughly optimizing their software architecture, which we will cover briefly. The next step was the engineers at Facebook deciding to build their own servers to minimize the power and cost of their server infrastructure. Facebook Engineering then open sourced these designs to the community; you can download the specifications and mechanical CAD designs at the Open Compute site.
The Facebook Open Compute server design is ambitious: “The result is a data center full of vanity free servers which is 38% more efficient and 24% less expensive to build and run than other state-of-the-art data centers.” Even better is that Facebook Engineering sent two of these Open Compute servers to our lab for testing, allowing us to see how these servers compare to other solutions in the market.
As a competing solution we have an HP DL380 G7 in the lab. Recall from our last server clash that the HP DL380 G7 was one of the most power efficient servers of 2010. Is a server "targeted at the cloud" and designed by Facebook engineering able to beat one of the best and most popular general purpose servers? That is the question we'll answer in this article.

Corsair Obsidian 350D Case Review


Introducing the Corsair Obsidian 350D
It seems like just yesterday we were talking about Corsair's gargantuan Obsidian 900D, a behemoth designed with the single goal of housing as much computer as you can possibly imagine. The Obsidian 900D supersized the already successful 800D (along with its price tag), and judging from the comments left on the review it's exactly what a lot of the watercooling enthusiasts were waiting for.
What you may not be aware of is the fact that the 900D ran...a little late. I had one of the early review units, and it had actually been sitting in my living room for some time before the new embargo date hit and gave me a deadline. That's part of the reason why we're seeing another case from Corsair as quickly as we are; had the 900D been on time, this still would've seemed like a pretty quick turnaround. Proving someone over there has a sense of humor, though, Corsair is following up their largest case with their smallest.
I'm actually a little disappointed that the campaign around the 350D was basically subsumed by the 900D, because of the two cases I think the micro-ATX 350D is actually the more interesting one. With the 900D, the sky is really the limit as to what you can put in it (or more accurately, your wallet is the limit). The 350D, on the other hand, is a case for people who thrive on limitations. That's not to say the case has limitations, per se, but when you're confined to the micro-ATX standard you start having to make creative decisions. As you'll see, Corsair made a few of their own that make the 350D a particularly interesting specimen in what's often one of the most diverse enclosure categories.
Corsair Obsidian 350D Specifications
Motherboard Form Factor: Mini-ITX, Micro-ATX
Drive Bays: External: 2x 5.25"; Internal: 3x 2.5", 2x 3.5"
Cooling: Front: 1x 140mm intake fan (supports 2x 120mm/140mm); Rear: 1x 120mm exhaust fan; Top: 2x 120mm/140mm fan mounts; Side: none; Bottom: none
Expansion Slots: 5
I/O Ports: 2x USB 3.0, 1x Headphone, 1x Mic
Power Supply Size: ATX
Clearances: HSF: 160mm; PSU: 200mm; GPU: 300mm
Dimensions: 17.3" x 8.3" x 17.7" (440mm x 210mm x 450mm)
Weight: 13.3 lbs. / 6.1 kg
Special Features: USB 3.0 via internal header; removable drive cages; removable filters on intakes and bottom; supports 280mm radiators
Price (MSRP): $99 without window / $109 with window
What needs to be considered in evaluating the Corsair Obsidian 350D is that this case is pretty clearly designed to capitalize on liquid cooling. While my experiences with Corsair's closed loop coolers have been inconsistent, everyone benefits from them having a 280mm cooler like the H110 in their lineup. The existence of a 280mm cooler in Corsair's portfolio doesn't necessarily demand they include a place to mount it in all subsequent case designs, but it makes a convincing argument.
The reviewer's guide makes a big deal about using the 350D for water cooling, both with Corsair's products and with custom loops. There are five total fan mounts, and all of them support radiators: the top of the case features two 120mm/140mm mounts, the front of the case features another pair of 120mm/140mm mounts (and the 3.5" drive cage is removable), and then the rear of the case features a 120mm fan mount. What does surprise me is that Corsair opted not to include an additional fan mount beneath the drive cage, in the bottom of the case. It feels like a missed opportunity.

Intel's Return to DRAM: Haswell GT3e to Integrate 128MB eDRAM?

We've known for a while now that Intel will integrate some form of DRAM on-package for the absolute highest end GPU configurations of its upcoming Haswell SoC. Memory bandwidth is a very important enabler of GPU (and multi-core CPU) performance, but delivering enough of it typically required very high speed interfaces (read: high power) and/or very wide interfaces (read: large die areas). Neither of the traditional approaches to scaling memory bandwidth is low power or cost effective, which has kept them out of ultra mobile and integrated processor graphics.
The days of simple performance scaling by throwing more transistors at a design are quickly coming to an end. Moore's Law will continue but much like the reality check building low power silicon gave us a while ago, building high performance silicon will need some out of the box thinking going forward.
Dating back to Ivy Bridge (3rd gen Core/2012), Intel had plans to integrate some amount of DRAM onto the package in order to drive the performance of its processor graphics. Embedding DRAM onto the package adds cost and heat, and allegedly Paul Otellini wasn't willing to greenlight the production of a part that only Apple would use so it was canned. With Haswell, DRAM is back on the menu and this time it's actually going to come out. We've referred to the Haswell part with embedded DRAM as Haswell GT3e. The GT3 refers to the GPU configuration (40 EUs), while the lowercase e denotes embedded DRAM. Haswell GT3e will only be available in a BGA package (soldered-on, not socketed), and is only expected to appear alongside higher TDP (read: not Ultrabook) parts. The embedded DRAM will increase the thermal load of the SoC, although it shouldn't be as painful as including a discrete GPU + high speed DRAM. Intel's performance target for Haswell GT3e is NVIDIA's GeForce GT 650M. 
What we don't know about GT3e is the type, size and speed of memory that Intel will integrate. Our old friend David Kanter at RealWorldTech presented a good thesis on the answers to those questions. Based on some sound logic and digging through the list of papers to be presented at the 2013 VLSI Technology Symposium in Kyoto, Kanter believes that the title of this soon to be presented Intel paper tells us everything we need to know:
"A 22nm High Performance Embedded DRAM SoC Technology Featuring Tri-Gate Transistors and MIMCAP COB"
According to Kanter's deductions (and somewhat validated by our own sources), Haswell GT3e should come equipped with 128MB of eDRAM connected to the main SoC via a 512-bit bus. Using eDRAM vs. commodity DDR3 makes sense as the former is easier to integrate into Intel's current fabs. There are also power, manufacturability, and cost concerns that resulted in the creation of Intel's own DRAM design. The interface width is a bit suspect as that would require a fair amount of area at the edges of the Haswell die, but the main takeaway is that we're dealing with a parallel interface. Kanter estimates the bandwidth at roughly 64GB/s, not anywhere near high-end dGPU class but in the realm of what you can expect from a performance mainstream mobile GPU. At 22nm, Intel's eDRAM achieves a density of around 17.5Mbit/mm^2, which works out to be ~60mm^2 for the eDRAM itself. Add in any additional interface logic and Kanter estimates the total die area for the eDRAM component to be around 70 - 80mm^2. Intel is rumored to be charging $50 for the eDRAM adder on top of GT3, which would deliver very good margins for Intel. It's a sneaky play that allows Intel to capture more of the total system BoM (Bill of Materials) that would normally go to a discrete GPU company like NVIDIA, all while increasing utilization of their fabs. NVIDIA will still likely offer better performing solutions, not to mention the benefits of much stronger developer relations and a longer history of driver optimization. This is just the beginning, however.
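For the curious, the area and bandwidth figures above are easy to sanity-check with some back-of-the-envelope arithmetic. Note that the 1 GT/s effective transfer rate below is purely an assumption chosen to reproduce Kanter's ~64GB/s estimate, not a confirmed specification.

# Rough sanity check of the eDRAM estimates discussed above.
capacity_mbit = 128 * 8                  # 128MB expressed in Mbit
density_mbit_per_mm2 = 17.5              # Intel's 22nm eDRAM density
array_area_mm2 = capacity_mbit / density_mbit_per_mm2
print(f"eDRAM array: ~{array_area_mm2:.0f} mm^2 "
      "(interface logic pushes the total toward 70-80 mm^2)")

bus_width_bits = 512                     # Kanter's estimated interface width
transfer_rate_gts = 1.0                  # ASSUMPTION: effective GT/s, not a confirmed spec
bandwidth_gb_s = (bus_width_bits / 8) * transfer_rate_gts
print(f"Bandwidth: ~{bandwidth_gb_s:.0f} GB/s over a {bus_width_bits}-bit bus at {transfer_rate_gts:.0f} GT/s")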
Based on leaked documents, the embedded DRAM will act as a 4th level cache and should work to improve both CPU and GPU performance. In server environments, I can see embedded DRAM acting as a real boon to multi-core performance. The obvious fit in the client space is to improve GPU performance in games. At only 128MB I wouldn't expect high-end dGPU levels of performance, but we should see a substantial improvement compared to traditional processor graphics. Long term you can expect Intel to bring eDRAM into other designs. There's an obvious fit with its mobile SoCs, although there we're likely talking about something another 12 - 24 months out.
AMD is expected to integrate a GDDR5 memory controller in its future APUs, similar to what it has done with the PlayStation 4 SoC, as its attempt to solve the memory bandwidth problem for processor based graphics.


Securifi's Almond+ 802.11ac Touchscreen Wi-Fi Router Integrates ZigBee and Z-Wave



It isn't often that we write about products seeking crowd funding. We had written about ioSafe's Indiegogo campaign for the N2 NAS back in September 2012, and the review of that product went out yesterday. Unless a product has already been demonstrated in its full working state and is guaranteed to ship, we are hesitant to provide dedicated publicity and hype to ideas and concepts that may never reach the consumer.
We have also recently ramped up our coverage of home automation technologies. In one of the initial pieces, we were bullish on the upcoming 802.11ah Wi-Fi standard for the Internet of Things revolution happening right now. 802.11ah standardization and devices are a good 2 to 3 years away, and in the meanwhile, Z-Wave and ZigBee will extend their reach further into the home.
One of the primary roadblocks to adoption of home automation technologies is the need for consumers to invest in a dedicated controller (very much similar to investing in a wireless router for Wi-Fi, only much costlier). Securifi, a consumer networking startup, aims to solve this problem by launching a Wi-Fi router with both Z-Wave and ZigBee radios. Securifi is no stranger to the router world. They launched the Almond touchscreen router last year and it has proved to be very popular on Amazon.

NVIDIA R319 Series Beta Driver 320.00 Available


NVIDIA's driver numbering can be a bit of a mystery at times, but after the R313 Series that encompasses all version 313.x and 314.x releases, NVIDIA is jumping ahead to their R319 Series drivers. Naturally, that means the first beta release of R319 is...320.00. Like I said, it can be a bit confusing at times. The good news is that the drivers as usual are available for all recent desktop and laptop GPUs.
OS support is a bit of a change from some releases. Windows XP and even Vista support look like they're finally starting to disappear, or at least they're not a high priority, so the current beta driver is only available for Windows 7 and 8 on laptops, in both 32-bit and 64-bit versions. Desktops on the other hand get the full set of support for everything from XP to Windows 8 in 32-bit and 64-bit form.
The big headliner for this series of drivers is that these are the "game ready" drivers for Dead Island: Riptide, Neverwinter, and Star Trek. NVIDIA is also listing performance improvements for single and SLI configurations for a variety of games, including Dirt: Showdown (up to 18%), Tomb Raider (up to 8%), and StarCraft II (up to 6%), though that's with a GTX 660 so your mileage may vary depending on your specific GPU. Other titles receiving performance tweaks include Sniper Elite V2, Metro 2033, Far Cry 3, Deus Ex: Human Revolution, F1 2012, Assassin's Creed III, Battlefield 3, and BioShock: Infinite.

AMD Radeon HD 7990 Review: 7990 Gets Official

Officially canonized back in 2008 with AMD’s “small die” strategy, dual-GPU cards have since become a staple of AMD’s product lineup. Filling a small-but-important niche for AMD, dual-GPU cards allow AMD to both deliver ultra-enthusiast performance levels their traditional single-GPU products can’t offer, and at the same time compete with NVIDIA’s big die flagship cards without AMD needing to produce a big die GPU of their own. As a result, though these cards aren’t necessarily obligatory, with each generation we’re left eagerly awaiting just what AMD has in store for their capstone product.
Of course with that said, like so many other facets of the 7000 series, the dual-GPU situation has played out rather unusually in the past year. In a typical year we would see AMD release a standard design, and then later on partners like Asus and PowerColor would release their own custom designs in the name of product differentiation and squeezing out just a bit more performance. Instead the 7000 series has played out in reverse: Asus and PowerColor released their designs first. Consequently, up until this point the 7990 has been “officially unofficial”, reflecting the fact that the first 7990s were AMD sanctioned products, but not based on AMD designs.
But at long last the 7990 is becoming fully official. AMD is getting into the game with their own 7990 design, and perhaps more importantly they're doing so while bringing to bear the kind of engineering resources that only a GPU manufacturer can provide. This isn't going to be the first 7990 – that honor belongs to PowerColor's 7990 – but this is unquestionably the most important 7990. For AMD and their partners going official doesn't just mean that AMD is taking a greater role in matters, but as we'll see it means changing the rules of the game entirely.
AMD GPU Specification Comparison

                      | Radeon HD 7990 | Radeon HD 7970 GHz Edition | Radeon HD 7970 | Radeon HD 6990
Stream Processors     | 2 x 2048       | 2048                       | 2048           | 2 x 1536
Texture Units         | 2 x 128        | 128                        | 128            | 2 x 96
ROPs                  | 2 x 32         | 32                         | 32             | 2 x 32
Core Clock            | 950MHz         | 1000MHz                    | 925MHz         | 830MHz
Boost Clock           | 1000MHz        | 1050MHz                    | N/A            | N/A
Memory Clock          | 6GHz GDDR5     | 6GHz GDDR5                 | 5.5GHz GDDR5   | 5GHz GDDR5
Memory Bus Width      | 2 x 384-bit    | 384-bit                    | 384-bit        | 2 x 256-bit
VRAM                  | 2 x 3GB        | 3GB                        | 3GB            | 2 x 2GB
FP64                  | 1/4            | 1/4                        | 1/4            | 1/4
Transistor Count      | 2 x 4.31B      | 4.31B                      | 4.31B          | 2 x 2.64B
PowerTune Limit/TDP   | 375W           | 250W+                      | 250W           | 375W
Manufacturing Process | TSMC 28nm      | TSMC 28nm                  | TSMC 28nm      | TSMC 40nm
Architecture          | GCN            | GCN                        | GCN            | VLIW4
Launch Date           | 04/23/2013     | 06/22/2012                 | 01/09/2012     | 03/11/2011
Launch Price          | $999           | $499                       | $549           | $699
Diving right into the thick of things, like the officially unofficial cards before it, AMD’s 7990 is a dual-Tahiti part, placing two of AMD’s flagship GPUs on a single PCB to make a single card. AMD has held nothing back and these are fully enabled GPUs, so each GPU has all 2048 stream processors, 32 ROPs, and their full 384-bit memory buses present. Joining these GPUs is 6GB of GDDR5 RAM, split up between the two GPUs for the 7900-series standard of 3GB of VRAM per GPU.
The big question with any dual-GPU card of course is what kinds of clockspeeds it can run at, and as it turns out the 7990 can clock rather high. The 7990 is a PowerTune Boost part like the 7970GE it's based on, giving the card a base clockspeed of 950MHz and a boost clock of 1000MHz. Meanwhile the memory is clocked at 6GHz, the same as the 7970GE. As a result the 7990 is surprisingly close to being a 7970GE Crossfire setup on a single card, clocked just 50MHz below AMD's single-GPU flagship card. In fact this is better than some of the earlier 7990s such as PowerColor's, which were clocked lower and simultaneously lacked PT Boost.
But perhaps the most defining aspect of AMD's 7990, and the thing that sets it apart from the unofficial 7990s that came before it, is the TDP. AMD's 7990 has an official TDP of just 375W, which although common for official dual-GPU cards, is quite a bit lower than the TDPs of the unofficial 7990s. As the GPU manufacturer AMD has the ability to do fine-grained binning that their partners cannot, so while Asus and PowerColor have essentially been putting together cards that really are two 7970s on a single card – right down to the TDP – official 7990s get the advantage of AMD's binning process, significantly reducing power consumption. The end result is that while an unofficial 7990 would be a 450W+ part, AMD can deliver the same or better performance while consuming much less power, putting the 7990 within the all-important 375W envelope that OEMs and boutique builders look for.

While we’re on the subject of power, this is the first official AMD dual-GPU part to include AMD’s ZeroCore power technology, which was introduced with the GCN family. ZeroCore as you might recall allows AMD to almost completely shut off slave GPUs when they’re not in use, which in turn allows AMD to further reduce their idle power consumption. The biggest benefits are found in multi-card setups since this allows the fans on those slave cards to be shut down, but even on the 7990 it still provides a benefit by allowing AMD to curtail their idle power consumption. Consequently this pushes the idle TDP of the 7990 down to around 20W, which is greater than a single card, but a clear improvement over 6990 and earlier AMD dual-GPU cards.
Moving on to product stacks and competition, it comes as no great surprise that AMD is placing their newest flagship part directly opposite NVIDIA’s flagship cards. AMD doesn’t produce a GPU equivalent to GTX Titan’s massive GK110 GPU, so the 7990 is AMD’s official answer to both Titan and NVIDIA’s own dual-GPU card, the nearly year-old GTX 690. In the case of the GTX 690 it’s a rather straightforward matchup since both cards are based on the same principles, while against Titan AMD needs to make a case about raw performance versus the inherent simplicity of a single-GPU solution over a dual-GPU solution.
Along those lines, since AMD is placing the 7990 against NVIDIA’s flagships they will also be pricing it directly against NVIDIA’s flagships, setting the MSRP for the 7990 at $999. This steep price tag raised some ire with the GTX 690 and with GTX Titan, and it likely will here once more. But with single 7970GEs still regularly going for $400-$500 and the fact that AMD is throwing in their best Tahiti chips into 7990, there’s little incentive to charge less. A 7970GE CF setup will be both faster and cheaper, but as a pair of those cards take up 6 slots after accounting for cooling needs, AMD can bank on the fact that the 7990 is essentially the same size as a 7970GE, charging a premium for the size advantage.
Ultimately customers interested in the 7990 will have a bit of time to sit on the matter and decide if they want one. The 7990 is being launched ahead of its actual retail availability, with AMD telling us the cards will hit etailers within two weeks. Meanwhile all of AMD's usual partners will be participating on this 7990, so expect to see 7990 cards from all of the major AMD partners, sold at all of the major etailers.


Finally, AMD has been having a blast with game bundles over the last few months, and they won’t be stopping with the 7990. In a game bundle that quite frankly I cannot recall being rivaled by anything else done in the last 20 years, AMD will be bundling the 7990 with 8 different games from the current and past Never Settle bundles. All of AMD’s current bundle titles are included: Crysis 3, Bioshock Infinite, Tomb Raider, and Far Cry 3: Blood Dragon. Along with that AMD is also packing in the best games out of their previous bundles: Far Cry 3, Hitman: Absolution, Sleeping Dogs, and Deus Ex: Human Revolution. Simply put, 7990 buyers will be well-stocked for games to play on their new video card.
Meanwhile, on a housekeeping note, AMD will be changing how vouchers are distributed for the 7990; rather than having etailers distribute the vouchers with qualifying purchases, AMD's partners will be packing the vouchers into the product box. Though the etailers have been good about including vouchers, they do at times forget them. So for the 7990, AMD and their partners aren't going to be taking any chances.
April 2013 GPU Pricing Comparison

AMD                        | Price | NVIDIA
AMD Radeon HD 7990         | $1000 | GeForce GTX Titan / GTX 690
PowerColor Radeon HD 7990  | $900  | -
Radeon HD 7970 GHz Edition | $450  | GeForce GTX 680
Radeon HD 7970             | $390  | -
-                          | $350  | GeForce GTX 670
Radeon HD 7950             | $300  | -