Intel’s Xe-HP Now Available to Select Customers

As part of Intel’s next generation Xe graphics strategy, one of the key elements is going to be the offerings for the enterprise market. For commercial enterprises that need hardcore compute resources, Intel’s Xe-HP products are expected to compete against NVIDIA’s Ampere and AMD’s CDNA. The Xe-HP design, as we’ve already seen in teaser photos, is built to leverage both scale-up and scale-out by using a multi-tile strategy paired with high-bandwidth memory.


As far as we’re aware, Intel’s timeline for Xe-HP is to enable general availability sometime in 2021. The early silicon built on Intel’s 10nm Enhanced SuperFin process is in-house, working, and has been demonstrated to the press in a first-party video transcode benchmark. The top offering is a quad-tile solution, with a reported peak performance somewhere in the 42+ TFLOP (FP32) range in NEO/OpenCL-based video transcode. This would be more than twice as much as NVIDIA’s Ampere A100.
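For a rough sense of scale, here is the back-of-the-envelope math behind that comparison; the A100 figure is NVIDIA’s published standard (non-tensor) FP32 rating, not anything Intel provided:

    # Peak FP32 throughput comparison (vendor peak ratings, not measured results)
    xe_hp_quad_tile_tflops = 42.0   # Intel's demonstrated quad-tile figure
    a100_fp32_tflops = 19.5         # NVIDIA A100 standard FP32 rating
    print(xe_hp_quad_tile_tflops / a100_fp32_tflops)   # ~2.15x, i.e. "more than twice"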



In an announcement today, Intel says that it is already offering its dual-tile variant to select customers, though not directly. Intel is going to offer Xe-HP use through its development cloud infrastructure, DevCloud. Approved partners will be able to spin up instances on Intel’s service, then compile and run Xe-HP compatible software in order to gauge both performance and code adaptability for their workflows.


Normally, when a hardware provider offers a cloud-based program for unreleased products, there is a tendency to suspect that the hardware isn’t actually there – that the back-end is obfuscated, and could just as easily be a bank of FPGAs emulating what the customer thinks they’re using. This is part of the problem with non-transparent cloud services. However, Intel has confirmed to us that the hardware on the back-end is indeed its Xe-HP silicon, running over a custom PCIe interface and powered through its Xeon infrastructure.



One of the common elements to new silicon is finding bugs and edge cases. All hardware vendors do their own validation testing, however in recent years Intel has made the point that its customers, due to the scale of their workloads and deployments, can test far deeper and wider than Intel does – up to a scale of 20-50x. But that is only true when those partners have the hardware in hand, perhaps early engineering samples at lower frequencies; by using DevCloud, some of those big partners can attempt some of those workflows in preparation for a bigger direct shipment, and optimize the whole process.


Intel did not state what the requirements were to get access to Xe-HP in the cloud. I suspect that if you have to ask, then you probably don’t qualify. In other news, Intel’s Xe-LP solution, Iris Xe MAX, is available in DevCloud for public access.






Source: AnandTech – Intel’s Xe-HP Now Available to Select Customers

Intel Launches Xe-LP Server GPU: First Product Is H3C’s Quad GPU XG310 For Cloud Gaming

Following the formal launch of Intel’s first discrete GPU in over two decades, the DG1, this morning Intel is launching the server counterpart to that chip, the very plainly named “Intel Server GPU”. Previously referred to as SG1, the Intel Server GPU is based on the same Xe-LP architecture design as the DG1, but aimed at the server market. And like the consumer DG1, Intel is planning on taking an interesting, somewhat conservative tack with their new silicon, chasing after specific markets that are well suited for Xe-LP’s significant investment into video encode hardware.


One such market that Intel has decided to chase with their new silicon is the Android game streaming market. The company sees both gaming and video as growth markets, and game streaming is the perfect intersection of the two. So, with an expectation that Android cloud gaming is going to grow significantly in the coming years – especially in China – the company is positioning the hardware and developing a suitable software stack to use the Server GPU for hosting Android games in the cloud.


As for the Server GPU itself, it is virtually identical to the DG1 that was unveiled barely two weeks ago. That is to say that it’s essentially a discrete version of Tiger Lake’s integrated GPU, offering 96 EUs and 24 ROPs, and bound to a 128-bit LPDDR4X memory bus. The one notable difference here is that while DG1 laptop implementations are all using 4GB of VRAM (thus far), Intel expects Server GPU installations to get 8GB per GPU.



In any case, the mobile versions of this silicon get what Intel estimates to be around NVIDIA MX350-levels of performance, which is admittedly nothing to write home about. The Server GPU should do better than this thanks to its higher clockspeeds and TDPs, though not dramatically so. As a result, the best use cases for Intel’s first discrete GPU are not big, shader-bottlenecked games or other rendering tasks, but rather tasks that take advantage of the dGPU’s compute capabilities and/or extensive media encode hardware.


This brings us back to Android cloud gaming, which by its very nature relies heavily on video encoding. The name of the game, as Intel sees it, is all about enabling high-density server installations for game streaming at a lower total cost of ownership (read: cheap), with Intel taking a particular pot-shot at rival NVIDIA over their software licensing costs. Ultimately, with their Server GPU they believe they can meet those specific demands.


And while the Server GPU isn’t going to be setting any performance records, it should still be well ahead of most mobile SoCs – and particularly the kind of lower-performance SoCs that go into the bulk of Android phones. So even at its modest performance levels, Intel is estimating that a single Server GPU can power between 10 and 20 game instances, and that number scales up significantly when you start talking about multiple GPUs.



To that end, Intel has even worked on developing the necessary Linux drivers and modules to support Android game streaming workloads, in order to bootstrap the market. Which for Intel means not only developing the base kernel drivers, but also the necessary tools for gaming-focused virtual machines, and even some internal projects to improve game streaming performance. Of particular note here is what Intel is calling Project Flip Fast, which is Intel’s optimized client game streaming stack. Among other things, the company has enabled zero-copy transfers between the host and the guest in order to avoid the performance hit from copying data between the two levels.



But Intel can only take their Server GPU so far on their own. Not unlike their CPU market, Intel isn’t interested in selling whole systems – or in this case video cards. So the company is working with third parties to bring the Server GPU to market in commercial products.



In conjunction with today’s launch of the Server GPU, Intel is also announcing their first customer and the first product using the GPU: H3C’s new XG310 game streaming card. The pathfinder product for the Server GPU, the XG310 is a quad GPU card aimed specifically at the Android game streaming market. The card offers 8GB of LPDDR4X per GPU, for a total of 32GB of on-board VRAM. Meanwhile connectivity is provided via a PCIe 3.0 x16 connection, which from the looks of the card is being fed into a PLX PCIe switch before being distributed out to the individual GPUs.



The XG310 is intended to be sold to server OEMs and game service providers, and Intel has already enrolled Tencent to show off the card. According to the company, a single XG310 card is able to run 60 streams of Arena of Valor at 720p30, and doubling the number of cards doubles the number of streams, which feeds back into Intel’s overall TCO argument.
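Those per-card numbers also line up with the per-GPU estimate quoted earlier; as a quick sanity check using only Intel’s own figures:

    # Streaming density math from Intel's quoted Arena of Valor figures
    streams_per_card = 60              # 720p30 streams per XG310 card
    gpus_per_card = 4
    print(streams_per_card / gpus_per_card)   # 15 streams per GPU, within the 10-20 instance estimate
    print(streams_per_card * 2)               # 120 streams with two cards, per the linear scaling claim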



Ultimately, as things stand, H3C’s Intel-based card is being validated with servers now. And, according to Intel, it’s already shipping worldwide, with orders having already come in. The first customer for it is likely to be Tencent, who will be introducing their own cloud gaming service. Otherwise, though Intel isn’t talking about any other customers at this time, it’s clear that they’re eager to line up more Android game streaming providers in the future. As well, this style of multi-GPU card would also be a good fit for Intel’s more traditional high-density media encode customers, who were previously buying things like Intel’s Visual Compute Accelerator cards.




Source: AnandTech – Intel Launches Xe-LP Server GPU: First Product Is H3C’s Quad GPU XG310 For Cloud Gaming

IBM and AMD to Advance Confidential Computing

Earlier this year, we wrote a news segment about AMD and Google’s announcement about bringing confidential virtual machines to the cloud using features enabled through 2nd Gen AMD EPYC processors. At the time, AMD and Google marketed the announcement as the first commercially available cloud confidential VMs, powered through AMD’s Secure Encrypted Virtualization (SEV) feature. After writing up the news, I got a rather sternly worded email from IBM, stating that AMD/Google were not the first – IBM claims to have been offering confidential VMs to its client base for almost two years at this point. The difference I could find is that Google’s offering is more open to the public, whereas IBM’s solution is more strictly a B2B arrangement.


Today AMD and IBM are combining their efforts in this space. In a press release today, the two companies have announced a multi-year joint development agreement to advance the use of confidential computing in the cloud, with a nod to accelerating artificial intelligence.


In the combined press release, the agreement is based upon a vision of open-source software, open standards, and open system architectures to drive confidential computing advancements, through virtualization and encryption, across a wide range of markets such as high-performance computing and enterprise-critical environments. The goal of this project is to protect sensitive data, especially datasets used for AI training as well as incoming data for inference. Both companies have openly discussed different cloud computing models, such as private clouds, public clouds, and hybrid clouds, and according to analyst research presented through IBM and AMD, securing sensitive data is still a barrier to entry for organizations looking at deploying hybrid and scalable cloud strategies. Ultimately the goal here is to enable hybrid cloud with the same security as a private cloud, but accessible to more ecosystems local to where it is required.


The announcement today states that engagement between AMD and IBM is already underway. It does not state how long this ‘multi-year joint development agreement’ will last, or whether the goal is to evolve into something more open with other members.




Source: AMD


 



Source: AnandTech – IBM and AMD to Advance Confidential Computing

Apple Intros First Three ‘Apple Silicon’ Macs: Late 2020 MacBook Air, 13-Inch MacBook Pro, & Mac Mini

As previously announced by Apple this summer, the company is embarking on a major transition within its Mac product lineup. After almost a decade and a half of relying on Intel’s x86 processors to serve at the heart of every Mac, the company is going to be shifting to relying on its own, in-house designed Arm processors to power their now un-PC computers. At the time Apple set the start of the transition at the end of this year, and right on cue, today Apple announced the first three Apple Silicon-powered Macs: the Late 2020 editions of the MacBook Air, the 13-Inch MacBook Pro, and the Mac Mini.

With three of the lower-end devices within the Mac family leading the way, Apple is starting small for their Arm transition. The Mac Mini is of course the smallest and most integrated of Apple’s desktop-style computers. Meanwhile the MacBook Air and MacBook Pro are Apple’s two 13.3-inch laptops, focused on portability and performance respectively. Fittingly, these are also the areas where performance-per-watt is generally the most critical, as Apple is very strongly power-constrained on these platforms, and thus performance-limited as well.



Source: AnandTech – Apple Intros First Three ‘Apple Silicon’ Macs: Late 2020 MacBook Air, 13-Inch MacBook Pro, & Mac Mini

Apple Announces The Apple Silicon M1: Ditching x86 – What to Expect, Based on A14

Today, Apple has unveiled their brand-new MacBook line-up. This isn’t an ordinary release – if anything, the step that Apple is making today is something that hasn’t happened in 15 years: The start of a CPU architecture transition across their whole consumer Mac line-up.


Thanks to the company’s vertical integration across hardware and software, this is a monumental change that nobody but Apple can so swiftly usher in. The last time Apple ventured into such an undertaking in 2006, the company had ditched IBM’s PowerPC ISA and processors in favour of Intel x86 designs. Today, Intel is being ditched in favour of the company’s own in-house processors and CPU microarchitectures, built upon the ARM ISA.


The new processor is called the Apple M1, the company’s first SoC design for Macs. With four large performance cores, four efficiency cores, and an 8-core GPU, it features 16 billion transistors on the new 5nm process node. Apple is starting a new SoC naming scheme for this new family of processors, but at least on paper it does look a lot like an A14X.


Today’s event contained a whole ton of new official announcements, but also (in typical Apple fashion) was lacking in detail. Today, we’re going to be dissecting the new Apple M1 news, as well as doing a microarchitectural deep dive based on the already-released Apple A14 SoC.



Source: AnandTech – Apple Announces The Apple Silicon M1: Ditching x86 – What to Expect, Based on A14

The Apple Fall 2020 Mac Event Live Blog: 10am PST (18:00 UTC)

Today Apple is expected to pull the trigger on new ‘Apple Silicon’ MacBooks. Years in the making, today we should be hearing about a slew of new devices from the Cupertino company which ditch x86 processors in favour of their own in-house designs.


We don’t know exactly what Apple has in store for us, but an upsized chip variant of the A14, maybe an A14X, is going to be a likely bet. Whatever Apple presents today, following the event, expect an in-depth microarchitectural exploration of the A14 and the Firestorm cores – with us attempting to put into context Apple’s big bet on Apple Silicon and what the competitive landscape might look like.



Source: AnandTech – The Apple Fall 2020 Mac Event Live Blog: 10am PST (18:00 UTC)

Compute eXpress Link 2.0 (CXL 2.0) Finalized: Switching, PMEM, Security

One of the more exciting connectivity standards over the past year has been CXL. Built upon a PCIe physical foundation, CXL is a connectivity standard designed to handle much more than what PCIe does – aside from simply acting as a data transfer mechanism from host to device, CXL supports three branches, known as IO, Cache, and Memory. As defined in the CXL 1.0 and 1.1 standards, these three form the basis of a new way to connect a host with a device. The new CXL 2.0 standard takes it a step further.


CXL 2.0 is still built upon the same PCIe 5.0 physical standard, which means that there aren’t any updates in bandwidth or latency, but it adds some much-needed functionality that customers are used to with PCIe. At the core of CXL 2.0 are the same CXL.io, CXL.cache and CXL.memory intrinsics, dealing with how data is processed and in what context, but with added switching capabilities, added encryption, and support for persistent memory.



Source: AnandTech – Compute eXpress Link 2.0 (CXL 2.0) Finalized: Switching, PMEM, Security

Kioxia Announces XG7 PCIe 4.0 Client SSDs

Last week, Kioxia rounded out their lineup of PCIe 4.0 enterprise and datacenter SSDs with the announcement of the XD6. Now, they’re bringing PCIe 4.0 support to their client NVMe product line with the new XG7 and XG7-P M.2 NVMe SSDs. The XG7 family doubles the sequential read speed and increases sequential write speed by 60% compared to the XG6 series, thanks to the PCIe 4.0 support and a new SSD controller. Based on XG6 specs, this should be at least 6.3GB/s reads and 4.6GB/s writes for the XG7. That may not be quite fast enough for high-end consumer SSDs sold at retail, but for an OEM-only client SSD it’s still extremely fast.
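For reference, this is roughly how those figures fall out of Kioxia’s stated improvements; the XG6 baselines below are our approximations of its rated specs, not new XG7 numbers:

    # Estimating XG7 sequential speeds from the stated generational gains
    xg6_seq_read_gbps = 3.2    # approximate XG6 rating (assumption)
    xg6_seq_write_gbps = 2.9   # approximate XG6 rating (assumption)
    print(xg6_seq_read_gbps * 2.0)    # ~6.4 GB/s: "doubles the sequential read speed"
    print(xg6_seq_write_gbps * 1.6)   # ~4.6 GB/s: "+60% sequential write speed"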


For the first time, Kioxia (formerly Toshiba Memory) is introducing the higher performance/higher capacity -P models at the same time as the base models in their XG series. The XG7 will be available in capacities from 256GB to 1TB, and the XG7-P will offer 2TB and 4TB capacities. This likely makes the XG7-P the first client OEM SSD to add a 4TB NVMe option.


Full performance specs for the XG7(-P) family have not yet been announced, but in terms of features Kioxia is implementing NVMe 1.4 support and TCG Opal and TCG Pyrite security options. The XG7 and XG7-P will start appearing in pre-built systems in 2021. Unlike the past two generations, we’re not getting review samples of the XG7, so we’ll have to wait until next year to get first-hand experience with Kioxia’s PCIe 4.0 solution.



Source: AnandTech – Kioxia Announces XG7 PCIe 4.0 Client SSDs

Micron Announces 176-layer 3D NAND

Just in time for Flash Memory Summit, Micron is announcing their fifth generation of 3D NAND flash memory, with a record-breaking 176 layers. The new 176L flash is their second generation developed since the dissolution of Micron’s memory collaboration with Intel, after which Micron switched from a floating-gate memory cell design to a charge-trap cell. Micron’s previous generation 3D NAND was a 128-layer design that served as a short-lived transitional node for them to work out any issues with the switch to charge trap flash. Micron’s 128L flash has had minimal presence on the market, so their new 176L flash will in many cases serve as the successor to their 96L 3D NAND as well.


Micron is still withholding many technical details about their 176L NAND, with more information planned to be shared at the end of the month. But for now, we know their first 176L parts are 512Gbit TLC dies, built using string stacking of two 88-layer decks—Micron would seem to now be in second place behind Samsung for how many layers of NAND flash memory cells they can fabricate at a time.



The switch to a replacement gate/charge trap cell design seems to have enabled a significant reduction in layer thickness: the 176L dies are 45µm thick, about the same total thickness as Micron’s 64L floating-gate 3D NAND. A 16-die stacked package comes in at less than 1.5mm thick, suitable for most mobile and memory card use cases. As with previous generations of Micron 3D NAND, the chip’s peripheral logic is mostly fabricated under the NAND memory cell stacks, a technology Micron calls “CMOS under Array” (CuA). This has repeatedly helped Micron deliver some of the smallest die sizes, and Micron estimates their 176L 512Gbit die is about 30% smaller than the best their competitors currently offer.
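As a rough check on that packaging claim (the die thickness and die count are Micron’s figures; the headroom left for substrate and mold compound is our inference):

    # Silicon contribution to a 16-die stacked NAND package
    die_thickness_mm = 0.045   # 45 microns per 176L die
    dies_per_package = 16
    print(die_thickness_mm * dies_per_package)   # 0.72 mm of stacked silicon, leaving room under the ~1.5 mm limit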



The 176L NAND supports an interface speed of 1600MT/s, up from 1200MT/s for their 96L and 128L flash. Read and write (program) latency are both improved by over 35% compared to their 96L NAND, or by over 25% compared to their 128L NAND. Micron cites an overall mixed workload improvement of about 15% compared to their UFS 3.1 modules using 96L NAND.
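In bandwidth terms, and assuming the usual 8-bit-wide NAND channel, the interface bump works out as follows:

    # Per-channel NAND interface bandwidth from the transfer rate
    channel_width_bytes = 1                     # NAND interfaces are 8 bits wide
    print(1600 * channel_width_bytes / 1000)    # 1.6 GB/s per channel for 176L NAND
    print(1200 * channel_width_bytes / 1000)    # 1.2 GB/s per channel for 96L/128L NAND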


Micron’s 176L 3D NAND has already started volume production and is shipping in some Crucial-branded consumer SSD products. However, Micron hasn’t specified which specific Crucial products are now using 176L NAND (or their 128L NAND, for that matter), so we expect that this is a fairly low-volume release for now. Still, over the next year we should see 176L NAND production ramp up to higher volumes than their 128L process ever reached, and we can expect a wide range of products based on this 176L NAND to be released, replacing most of what’s currently using their 96L NAND.




Source: AnandTech – Micron Announces 176-layer 3D NAND

Razer Book 13: It’s a 4K 16:10 Notebook, with 3840×2400 Resolution!

According to Razer, among the top requests from its users (aside from a toaster) is a more work-focused, commercial-style product using the same design identity as its popular Razer Blade and Blade Stealth series. Despite this going against the grain of the ‘gamer’ ethos that Razer is fond of, the company has finally let loose with its first-generation Razer Book line – aimed specifically at users who want the Razer design and feel, but aimed at workflow rather than specifically at gaming. The cherry on top is the display.



Source: AnandTech – Razer Book 13: It’s a 4K 16:10 Notebook, with 3840×2400 Resolution!

MSI Confirms Fire at PCB Factory in Bao’an, China

After initial reports through third-party sources on social media, MSI has confirmed to AnandTech that there has been an incident at one of its largest manufacturing facilities in Bao’an, China. MSI reports that no injuries were caused, and no damage was done to any production lines. This facility/factory is responsible for manufacturing PCBs for a wide range of components, including (but not limited to) motherboards, graphics cards, laptops, servers, small-form-factor devices, and industrial installations.


MSI confirmed that a large amount of smoke was generated at the Bao’an facility on the afternoon of November 5th. The company also states that emergency measures were enacted, and the fire department was called out and able to get the incident under control. MSI claims that no production lines were damaged in the incident, and there should not be any issues with future production. We suspect that there shouldn’t be any issues with storage/stock/distribution of models either, however usually with an incident like this, investigations happen and extra training is enacted, which may have knock-on effects on production.


The Bao’an facility is one of a number of locations that MSI uses for its components. For example, notebooks and pre-built desktops are assembled in Shanghai, but those require components from elsewhere. A video was posted to Reddit showing the extent of the fire.


It should be noted that the supply of RTX and Radeon GPUs due for sale over the next few weeks is likely to already be in the distribution channel. So if there is any hiccup in production (MSI says there won’t be), it would more likely be seen around a month from now. However, that won’t stop the company potentially redirecting stock to its higher margin markets if needed.


We have reached out to MSI for more information, with an expected official statement to follow.



Source: AnandTech – MSI Confirms Fire at PCB Factory in Bao’an, China

AMD Zen 3 Ryzen Deep Dive Review: 5950X, 5900X, 5800X and 5600X Tested

When AMD announced that its new Zen 3 core was a ground-up redesign and offered complete performance leadership, we had to ask them to confirm if that’s exactly what they said. Despite being less than 10% the size of Intel, and very close to folding as a company in 2015, the bets that AMD made in that timeframe with its next generation Zen microarchitecture and Ryzen designs are now coming to fruition. Zen 3 and the new Ryzen 5000 processors, for the desktop market, are the realization of those goals: not only performance per watt and performance per dollar leaders, but absolute performance leadership in every segment. We’ve gone into the new microarchitecture and tested the new processors. AMD is the new king, and we have the data to show it.



Source: AnandTech – AMD Zen 3 Ryzen Deep Dive Review: 5950X, 5900X, 5800X and 5600X Tested

The Xbox Series X Review: Ushering In The Next Generation of Game Consoles

What makes a console generation? The lines have been blurred recently. We can state that the Xbox Series X, and its less-powerful sibling, the Series S, are the next generation consoles from Microsoft. But how do you define the generation? Just three years ago, Microsoft launched the Xbox One X, the most powerful console in the market, but also with full compatibility with all Xbox One games and accessories. With multiple tiers of consoles and mid-generation refreshes that were significantly more powerful than their predecessors – and in some cases, their successors – the generational lines have never been this blurred before.

Nonetheless, the time for a “proper” next generation console has finally arrived, and Microsoft is fully embracing its tiered hardware strategy. To that end, Microsoft is launching not one, but two consoles, with the Xbox Series X and the Xbox Series S each targeting a different slice of the console market in both performance and price. Launching on November 10, 2020, the new Xboxes bring some serious performance upgrades, new designs, and backwards compatibility for not only the Xbox One, but also a large swath of Xbox 360 games and even a good lineup of games from the original 2001 Xbox. The generational lines have never been this blurred before, but for Microsoft the big picture is clear: it’s all Xbox.



Source: AnandTech – The Xbox Series X Review: Ushering In The Next Generation of Game Consoles

Kioxia Announces XD6 Datacenter SSDs: PCIe 4.0 and EDSFF At Scale

Kioxia (formerly Toshiba Memory) is announcing the new XD6 series datacenter NVMe SSDs, featuring PCIe 4.0 support and using the EDSFF E1.S form factors. The XD6 is Kioxia’s first mass-produced product to use an EDSFF form factor, and may end up being one of the first widely-available products using an E1.S form factor.


Kioxia started their PCIe 4.0 transition earlier this year with the CD6 and CM6 NVMe SSDs, and also launched the related PM6 24G SAS SSD. Those drives are mostly designed for the storage appliance and traditional server markets, using the well-established 2.5″ SSD form factor, supporting a broad range of capacities and endurance tiers and offering advanced features like dual-port support and multiple namespaces. The new XD6 series focuses more specifically on the hyperscale datacenter market and is based on a new controller architecture that is separate from that used in the CD6/CM6/PM6 family, but includes the same upgrade to 96L 3D TLC NAND flash memory.


Driven by the needs of hyperscalers like Facebook, the XD6 is tuned more for power efficiency than raw single-drive performance. The capacity options are more limited, and only one endurance tier (1 DWPD) is offered. Many of the more advanced reliability features like dual-port support and multiple sector sizes including Protection Information have been omitted, reflecting the fact that hyperscale datacenters apply the “cattle, not pets” mindset to entire racks or more—redundancy is managed in software on a broader scale, and it’s not cost-effective at that scale to treat individual drives as mission-critical.



Compared to the previous-generation XD5 SSDs, the most obvious and significant change the XD6 brings is the switch from M.2 22110 and 2.5″/7mm U.2 form factors to EDSFF E1.S. The introduction of EDSFF to Kioxia’s product line with the XD6 was hinted at in their roadmap over a year ago, but we didn’t expect it to be such an aggressive transition: the M.2 option has been dropped and the 2.5″ U.2 version will be launching after the EDSFF versions.


Hyperscalers are the main driving force behind the new EDSFF family of form factors, and within that market segment the EDSFF form factors are catching on quite well. Transitioning to PCIe Gen4-capable platforms also provides a convenient opportunity to adopt a new form factor that combines the density of M.2 with the hot-swappability and higher power support of U.2 drives. The XD6 uses the E1.S form factor (1U height, short length) with three different thicknesses: 9.5mm with a heatspreader case, and 15mm or 25mm with different heatsink sizes. All three versions of the XD6 use the same PCB and offer the same performance and power consumption; the different thickness options primarily trade off between density and airflow requirements.















Kioxia Enterprise and Datacenter NVMe SSD Specifications

Model | XD6 | XD5 | CD6 | CM6
Form Factor | EDSFF E1.S (9.5mm, 15mm, or 25mm); U.2 later | 2.5″ 7mm U.2; M.2 22110 | 2.5″ 15mm U.3 | 2.5″ 15mm U.3
Interface, Protocol | PCIe 4 x4, NVMe 1.3c | PCIe 3 x4, NVMe 1.2.1 | PCIe 4 x4, NVMe 1.4 | PCIe 4 x4, NVMe 1.4
NAND Flash | 96L 3D TLC | 64L 3D TLC | 96L 3D TLC | 96L 3D TLC
Capacities | 1.92TB, 3.84TB | 960GB, 1.92TB, 3.84TB / 1.92TB, 3.84TB | 960GB, 1.92TB, 3.84TB, 7.68TB, 15.36TB / 800GB, 1.6TB, 3.2TB, 6.4TB, 12.8TB | 960GB, 1.92TB, 3.84TB, 7.68TB, 15.36TB, 30.72TB / 800GB, 1.6TB, 3.2TB, 6.4TB, 12.8TB
Write Endurance (DWPD) | 1 | 1 / <1 | 1 / 3 | 1 / 3
Sequential Read | 6.5 GB/s | 2.7 GB/s | 6.2 GB/s | 6.9 GB/s
Sequential Write | 2.4 GB/s | 895 MB/s | 4 GB/s | 4.2 GB/s
Random Read IOPS | 850k | 250k | 1M | 1.4M
Random Write IOPS | 90k | 21k | 85k / 250k | 170k / 350k
Power (typical active) | 14 W | 7 W | 19 W | 19 W

Note: Performance and Power values are for the highest-performing capacities of each model. Where a model lists two sub-tiers (separated by "/"), the capacity and endurance entries are given in the same order.

The combined effects of PCIe 4.0 support, newer NAND flash memory, and a new form factor supporting higher power levels mean the XD6 is dramatically faster than the preceding XD5. Sequential performance is up by a factor of 2.5, random IO performance is up by a factor of four. But power draw has only doubled, from 7W to 14W—so the XD6 is clearly not sacrificing any power efficiency to deliver this improved performance.
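Using the rated values from the spec table above, the generational ratios work out roughly as follows:

    # XD6 vs XD5, ratios computed from the rated figures above
    print(6.5 / 2.7)      # sequential read:  ~2.4x
    print(2.4 / 0.895)    # sequential write: ~2.7x
    print(850 / 250)      # random read:       3.4x
    print(90 / 21)        # random write:     ~4.3x
    print(14 / 7)         # active power:      2.0x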


The XD6 was designed to comply with an Open Compute Project (OCP) standard for NVMe cloud SSDs. This standard sets performance and power targets and requires many features that are declared optional by the base NVMe specification but are valuable for hyperscale datacenter usage. The XD6 is currently in qualification with big customers like Facebook but general availability is still months away. (Not that there’s much demand for EDSFF drives yet outside of the hyperscale segment, given the very limited options for purchasing servers with PCIe 4.0 E1.S slots.)


The Rest Of Kioxia’s SSD Lineup


As mentioned above, Kioxia’s high-end enterprise and datacenter product lines have already been updated this year and are still near the beginning of what are usually the longest product cycles in the whole lineup. The CM6 and CD6 added U.3 support with backwards compatibility with U.2 ports, but those two families and the PM6 SAS SSDs are still using the familiar 2.5″/15mm form factor. The CD6 and the new XD6 are splitting the datacenter segment, with the CD6 serving as the 2.5″ option and the XD6 as the EDSFF E1.S option.


The decision to drop the lowest 960GB capacity option from the XD series means the XD6 is even less suitable than the XD5 for server boot drive usage. Kioxia has also marketed the client XG series for this purpose, but those drives lack power loss protection and are thus considered unsuitable by many customers (unless used in a system with provisions for off-drive power loss protection). This is a small but important product segment, with currently very few options offering the ideal combination of low capacity, small form factor (M.2 2280 instead of 22110) and power loss protection. Kioxia is at least considering developing a drive for this segment, but if they do we won’t be seeing it anytime soon.


Kioxia’s HK6 is the end of the road for their enterprise/datacenter SATA products and won’t be getting a successor, but Kioxia hasn’t quite discontinued it yet. (The client SG6 SATA SSD has already reached EOL.)



Moving to the client/consumer end of the product line, the XG series of client NVMe drives is due for an update. The XG6 was the first drive released with 96L 3D TLC and it’s now pretty old. The XG7 has not been formally announced by Kioxia but did pass compliance testing with UNH-IOL over the summer and is listed on their NVMe Integrator’s List. The XG7 will be a PCIe Gen4 NVMe SSD, and based on typical Toshiba/Kioxia strategy they might not be in a hurry to release it until PC OEMs are shipping Gen4-capable pre-built desktops and notebooks in large volume. We haven’t heard much new about low-end client/consumer SSDs. UNH-IOL’s NVMe Integrator’s List shows the AG1 as a M.2 2230 or BGA SSD, so that’s possibly a BG series replacement. XT1 is also listed for their XFMEXPRESS form factor. Both are PCIe 3 drives that have not been officially announced by Kioxia, and went through compliance testing almost a year ago.


Aside from a few region-specific models that use third-party controllers, Kioxia doesn’t currently have retail consumer SSD products. The XG and BG families of client NVMe drives have been OEM-only for most of their history, and there’s no sign of Kioxia changing that strategy.






Source: AnandTech – Kioxia Announces XD6 Datacenter SSDs: PCIe 4.0 and EDSFF At Scale

Apple Announces Event for November 10th: Arm-Based Macs Expected

We don’t normally publish news posts about Apple sending out RSVPs for product launch events, but this one should be especially interesting.


This morning Apple has sent notice that they’re holding an event next Tuesday dubbed “One more thing.” In traditional Apple fashion, the announcement doesn’t contain any detailed information about the content expected; but as Apple has already announced their updated iPads and iPhones, the only thing left on Apple’s list for the year is Macs. Specifically, their forthcoming Arm-powered Macs.


As previously announced by Apple back at their summer WWDC event, the company is transitioning its Mac lineup from x86 CPUs to Arm CPUs. With a two-year transition plan in mind, Apple is planning to start the Arm Mac transition this year, and to wrap things up in 2022.



For the new Arm Macs, Apple will of course be using their own in-house designed Arm processors, the A-series. As we’ve seen time and time again from the company, Apple’s CPU design team is on the cutting-edge of Arm CPU cores, producing the fastest Arm CPU cores for several years running now, and more recently even overtaking Intel’s x86 chips in real-world Instruction Per Clock (IPC) rates. Suffice it to say, Apple believes they can do better than Intel by designing their own CPUs, and especially with the benefits of vertical integration and total platform control, they might be right.



Apple has been shipping early Arm Macs to developers since the summer, using modified Mac Minis containing their A12Z silicon. We’re obviously expecting something newer, but whether it’s a variant of Apple’s A14 SoC, or perhaps something newer and more bespoke specifically for their Macs, remains to be seen.


In the meantime, because this is a phased transition, Apple will be selling Intel Macs – including new models – alongside the planned Arm Macs. So although Apple will no doubt focus on their new Arm Macs, I wouldn’t be the least bit surprised to see some new Intel Macs announced alongside them. Apple will be supporting Intel Macs for years to come, and in the meantime they need to avoid Osborning their x86 systems.


As always, we’ll have a live blog of the events next Tuesday, along with a breakdown of Apple’s announcements afterwards. So please be sure to drop in and check that out.



Source: AnandTech – Apple Announces Event for November 10th: Arm-Based Macs Expected

A Broadwell Retrospective Review in 2020: Is eDRAM Still Worth It?

Intel’s first foray into 14nm was with its Broadwell product portfolio. It launched into the mobile market with a variety of products, however the desktop offering in 2015 was extremely limited – only two socketed desktop processors ever made it to retail, and in limited quantities. This is despite users waiting for a strong 14nm update to Haswell, but also because of the way Intel built the chip. Alongside the processor was 128 MB of eDRAM, a sort of additional cache between the CPU and the main memory. It caused quite a stir, and we’re retesting the hardware in 2020 to see if the concept of eDRAM is still worth the effort.



Source: AnandTech – A Broadwell Retrospective Review in 2020: Is eDRAM Still Worth It?

Intel’s DG1 GPU Coming to Discrete Desktop Cards Next Year; OEM-Only

Alongside today’s launch of Intel’s DG1-based Iris Xe MAX graphics for laptops, the company is also quietly confirming that DG1 will be coming to desktop video cards as well, albeit in a roundabout way.


Though still in the early stages, a heretofore unnamed third party has reached an agreement with Intel to produce DG1-based desktop cards. These cards, in turn, will be going into OEM desktop systems, and they are expected to appear early next year.


The very brief statement from Intel doesn’t contain any other details. The company isn’t saying anything about the specifications of the OEM desktop cards (e.g. clockspeeds), nor are they naming the third party that will be making the cards, or any OEMs who might be using the cards. For today at least, this is a simple notification that there will be OEM cards next year.


As for the market for such cards, there are a couple of avenues. OEMs could decide to treat the cards similarly to how Iris Xe MAX is being positioned in laptops, which is to say as a cheap add-in accelerator for certain GPU-powered tasks. Intel has baked a significant amount of video encode performance into the Xe-LP architecture, so the cards could be positioned as video encode accelerators. This would be very similar to Intel’s own plans, as the company will be selling a DG1-based video encode card for servers called the SG1.


Alternatively, the third party may just be looking to sell the DG1 card to OEMs as simple entry-level discrete graphics cards. Based on what we know about Xe MAX for laptops, DG1 is not expected to be significantly more powerful than Tiger Lake integrated graphics. However, as pointed out by our own Dr. Ian Cutress, it should be a good bit better than the Gemini Lake Atom’s integrated GPU.



Sadly, the OEM card probably won’t be as fancy as Intel’s DG1 development card




Source: AnandTech – Intel’s DG1 GPU Coming to Discrete Desktop Cards Next Year; OEM-Only

Intel’s Discrete GPU Era Begins: Intel Launches Iris Xe MAX For Entry-Level Laptops

Today may be Halloween, but what Intel is up to is no trick. Almost a year after showing off their alpha silicon, Intel’s first discrete GPU in over two decades has been released and is now shipping in OEM laptops. The first of several planned products using the DG1 GPU, Intel’s initial outing in their new era of discrete graphics is in the laptop space, where today they are launching their Iris Xe MAX graphics solution. Designed to complement Intel’s Xe-LP integrated graphics in their new Tiger Lake CPUs, Xe MAX will be showing up in thin-and-light laptops as an upgraded graphics option, and with a focus on mobile creation.



Source: AnandTech – Intel’s Discrete GPU Era Begins: Intel Launches Iris Xe MAX For Entry-Level Laptops

NVIDIA Launches Call of Duty Game Bundle for GeForce RTX 3080 & 3090 Cards

For those of you fortunate enough to be able to find one of NVIDIA’s new GeForce RTX 30 series video cards, the GPU maker has launched a new game bundle for the much sought-after cards. For their latest promotion, NVIDIA is bundling the forthcoming Call of Duty: Black Ops Cold War with GeForce RTX 3080 & GeForce RTX 3090 cards, highlighting the launch of the latest Call of Duty game as well as its support for various NVIDIA technologies. This latest game bundle replaces the Watch Dogs bundle, which ended yesterday.


NVIDIA has offered Call of Duty bundles a few times in the past, so including this year’s game is quickly becoming a regular tradition for the company. With Call of Duty being one of the biggest games on the market every year, NVIDIA and its developer relations team have worked with developer Treyarch to implement several technologies, including ray tracing support, NVIDIA’s DLSS, and NVIDIA’s new Reflex latency-reduction tech. So as with a lot of NVIDIA’s RTX series game bundles, it’s designed to show off the capabilities of the hardware as much as it is an extra kicker to add to the value of NVIDIA’s cards.









NVIDIA Current Game Bundles (October/November 2020)

Video Card (incl. systems and OEMs) | Bundle
GeForce RTX 3090 | Call of Duty: Black Ops Cold War
GeForce RTX 3080 | Call of Duty: Black Ops Cold War
GeForce RTX 3070 | None
GeForce RTX 20 Series (All) | None
GeForce GTX 16 Series (All) | None

NVIDIA is including the standard edition of Call of Duty with the top two cards of their new RTX 30-series lineup, the RTX 3080 and RTX 3090, as well as new desktop systems that include those cards. Which at face value is a bit surprising; though game bundles with high-end cards aren’t unusual, RTX 30 series card sales are going so well that NVIDIA hardly needs to include extra swag to sell their cards. In fact, if you can even get an RTX 3080 or RTX 3090 card then you’re fortunate given how quickly they sell out. On the flip side, however, this may change in November when AMD launches its rival RX 6800/6900 series cards, as the company has made it known that they’re gunning for NVIDIA – so perhaps it’s never too early to sweeten the pot.


At any rate, this is NVIDIA’s only game bundle for the moment. The company is not running the Call of Duty bundle for the recently-launched RTX 3070, nor are they including any games with their previous-generation RTX 20-series or GTX 16-series cards.


As always, codes must be redeemed via the GeForce Experience application on a system with a qualifying graphics card installed. The Call of Duty bundle runs from today through December 10th, and more information and details can be found in the terms and conditions. Be sure to verify the participation of any vendors purchased from, as NVIDIA will not give codes for purchases made from non-participating sellers.



Source: AnandTech – NVIDIA Launches Call of Duty Game Bundle for GeForce RTX 3080 & 3090 Cards

Sabrent Rocket Nano Rugged IP67 Portable SSD Review: NVMe in a M.2 2242 Enclosure

Portable bus-powered SSDs are a growing segment of the direct-attached storage market. The ongoing glut in flash memory (and the growing confidence of flash vendors in QLC) has brought down the price of these drives. Sabrent, a computer peripherals and accessories manufacturer, has made a name for itself in the space by catering to niche segments such as ultra-high capacity and compact SSDs. The company sent over a bunch of unique external SSDs to put through our strenuous review process. The first product we are going to take a look at is the Rocket Nano Rugged 2TB USB 3.2 Gen 2 drive. Read on to find out how it stacks up against the rest of the competition.



Source: AnandTech – Sabrent Rocket Nano Rugged IP67 Portable SSD Review: NVMe in a M.2 2242 Enclosure