AMD Reports Q3 2021 Earnings: Records All Around

Continuing our earnings season coverage for Q3’21, today we have the yin to Intel’s yang, AMD. The number-two x86 chip and discrete GPU maker has been enjoying explosive growth ever since AMD kicked off its renaissance of sorts a couple of years ago, and that trend has been continuing unabated – AMD is now pulling in more revenue in a single quarter than they did in all of 2016. Consequently, AMD has been setting various records for several quarters now, and their latest quarter is no exception, with AMD setting new high water marks for revenue and profitability.


For the third quarter of 2021, AMD reported $4.3B in revenue, a massive 54% jump over the year-ago quarter, when the company made just $2.8B in a then-record quarter. That makes Q3’21 both the best Q3 and the best quarter ever for the company, continuing a trend that has seen the company’s revenue grow for the last 6 quarters straight – and this despite a pandemic and seasonal fluctuations.


As always, AMD’s growing revenues have paid off handsomely for the company’s profitability. For the quarter, the company booked $923M in net income – coming within striking distance of their first $1B-in-profit quarter. This is a 137% increase over the year-ago quarter, underscoring how AMD’s profitability has been growing even faster than their rapidly rising revenues. Helping AMD out has been a strong gross margin for the company, which has been holding at 48% over the last two quarters.











AMD Q3 2021 Financial Results (GAAP)
  Q3’2021 Q3’2020 Q2’2021 Y/Y
Revenue $4.3B $2.8B $3.45B +54%
Gross Margin 48% 44% 48% +4pp
Operating Income $948M $449M $831M +111%
Net Income $923M $390M $710M +137%
Earnings Per Share $0.75 $0.32 $0.58 +134%
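The year-over-year column follows directly from the quarterly figures; as a quick sanity check, here is a minimal sketch using the rounded numbers from the table above:

```python
# Year-over-year growth for AMD's Q3'21 vs Q3'20 GAAP figures (as reported above)
def yoy_growth(current, year_ago):
    """Percentage change versus the year-ago quarter, rounded to a whole percent."""
    return round((current - year_ago) / year_ago * 100)

revenue = yoy_growth(4.3, 2.8)            # billions of dollars
operating_income = yoy_growth(948, 449)   # millions of dollars
net_income = yoy_growth(923, 390)         # millions of dollars
eps = yoy_growth(0.75, 0.32)              # dollars per share

print(revenue, operating_income, net_income, eps)  # 54 111 137 134
```

Each result matches the Y/Y column, confirming the table is internally consistent.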


Breaking down AMD’s results by segment, we start with Computing and Graphics, which encompasses their desktop and notebook CPU sales, as well as their GPU sales. That division booked $2.4B in revenue for the quarter, $731M (44%) more than in Q3 2020. Accordingly, the segment’s operating income is up quite a bit as well, going from $384M a year ago to $513M this year. Though in a mild surprise, it is down on a quarterly basis, which AMD is ascribing to higher operating expenses.


As always, AMD doesn’t provide a detailed breakout of information from this segment, but they have provided some selective information on revenue and average selling prices (ASPs). Overall, client CPU sales have remained strong; client CPU ASPs are up on both a quarterly and yearly basis, indicating that AMD has been selling a larger share of high-end (high-margin) parts – or as AMD likes to call it, a “richer mix of Ryzen processor sales”. For their earnings release AMD isn’t offering much commentary on laptop versus desktop sales, but it’s noteworthy that the bulk of the company’s new consumer product releases in the quarter were desktop-focused, with the Radeon RX 6600 XT and Ryzen 5000G-series APUs.



Speaking of GPUs, AMD’s graphics and compute processor business is booming as well. As with CPUs, ASPs for AMD’s GPU business are up on both a yearly and quarterly basis, with graphics revenue more than doubling over the year-ago quarter. According to the company this is being driven by both high-end Radeon sales as well as AMD Instinct sales, with data center graphics revenue more than doubling on both a yearly and quarterly basis. AMD began shipping their first CDNA2-based accelerators in Q2, so for Q3 AMD has been enjoying that ramp-up as they ship out the high-margin chips for the Frontier supercomputer.












AMD Q3 2021 Reporting Segments
  Q3’2021 Q3’2020 Q2’2021

Computing and Graphics

Revenue $2398M $1667M $2250M
Operating Income $513M $384M $526M

Enterprise, Embedded and Semi-Custom

Revenue $1915M $1134M $1600M
Operating Income $542M $141M $398M


Moving on, AMD’s Enterprise, Embedded, and Semi-Custom segment has yet again experienced a quarter of rapid growth, thanks to the success of AMD’s EPYC processors and demand for the 9th generation consoles. This segment of the company booked $1.9B in revenue, $781M (69%) more than what they pulled in for Q3’20, and 20% ahead of an already impressive Q2’21. The gap between the CG and EESC groups has also further closed – the latter is now only behind AMD’s leading group by $483M in revenue.


And while AMD intentionally doesn’t separate server sales from console sales in their reporting here, the company has confirmed that both are up. AMD’s Milan server CPUs, which were launched earlier this year, have become the majority of AMD’s server revenue, pushing them to their 6th straight quarter of record server processor revenue. And semi-custom revenue – which is primarily the game consoles – is up not only on a yearly basis, but on a quarterly basis as well, with AMD confirming that they have been able to further expand their console APU production.



Looking forward, AMD’s expectations for the fourth quarter and for the rest of the year have been bumped up yet again. For Q4 the company expects to book a record $4.5B (+/- $100M) in revenue, which, if it comes to pass, will be 41% growth over Q4’20. AMD is also projecting a 49.5% gross margin for Q4; if they exceed that even slightly, it would be enough to push them to their first 50% gross margin quarter in company history. Meanwhile AMD’s full year 2021 projection now stands at a 65% year-over-year increase in revenue versus their $9.8B FY2020, which is 5 percentage points higher than their forecast from the end of Q2.


As for AMD’s ongoing Xilinx acquisition, while the company doesn’t have any major updates on the subject, they are confirming that they’re making “good progress” towards securing the necessary regulatory approvals. To that end, they are reiterating that it remains on-track to close by the end of this year.


Finally, taking a break from growing the company by 50% every year, AMD is scheduled to hold their AMD Accelerated Data Center Premiere event on Monday, November 8th. While AMD isn’t giving up too much information in advance, the company is confirming that we’ll hear more about their CDNA2 accelerator architecture, which, along with powering the Frontier supercomputer, will be going into their next generation Radeon Instinct products. As well, the company will also be delivering news on their EPYC server processors, which were most recently updated back in March with the launch of the 3rd generation Milan parts. As always, AnandTech will be virtually there, covering AMD’s announcements in detail, so be sure to drop by for that.



Source: AnandTech – AMD Reports Q3 2021 Earnings: Records All Around

AnandTech Interviews Mike Clark, AMD’s Chief Architect of Zen

AMD is calling this time of the year its ‘5 Years of Zen’ period, marking that back in 2016 it started to give the press the first taste of its new microarchitecture which, in hindsight, ultimately saved the company. How exactly Zen came to fruition has been largely hidden from view all these years, with some of the key people popping up from time to time: Jim Keller, Mike Clark, and Suzanne Plummer hitting the headlines more often than most. But when AMD started to disclose details about the design, it was Mike Clark front and center in front of those slides. I remember asking him for all the details at the time, and as part of the 5 Years of Zen messaging, AMD offered Mike up for a formal interview on the topic.



Source: AnandTech – AnandTech Interviews Mike Clark, AMD’s Chief Architect of Zen

Kingston KC3000 PCIe 4.0 NVMe Flagship SSD Hits Retail

Kingston had previewed their 2021 flagship PCIe 4.0 x4 M.2 NVMe SSD (codename “Ghost Tree”) at CES earlier this year. Not much was divulged at that time other than the use of the Phison E18 controller. The product is hitting retail shelves today as the KC3000. The M.2 2280 SSD will be available in four capacities ranging from 512GB to 4TB. Kingston also provided us with detailed specifications.





















Kingston KC3000 SSD Specifications
Capacity 512 GB | 1024 GB | 2048 GB | 4096 GB
Controller Phison E18
NAND Flash Micron 176L 3D TLC NAND
Form-Factor, Interface M.2-2280, PCIe 4.0 x4, NVMe 1.4 (single-sided for 512 GB and 1024 GB, double-sided for 2048 GB and 4096 GB)
DRAM 512 MB DDR4 | 1 GB DDR4 | 2 GB DDR4 | 4 GB DDR4
Sequential Read 7000 MB/s (all capacities)
Sequential Write 3900 MB/s | 6000 MB/s | 7000 MB/s | 7000 MB/s
Random Read IOPS 450K | 900K | 1M | 1M
Random Write IOPS 900K | 1M | 1M | 1M
Avg. Power Consumption 0.34 W | 0.33 W | 0.36 W | 0.36 W
Max. Power Consumption 2.7 W (R), 4.1 W (W) | 2.8 W (R), 6.3 W (W) | 2.8 W (R), 9.9 W (W) | 2.7 W (R), 10.2 W (W)
SLC Caching Yes
TCG Opal Encryption No
Warranty 5 years
Write Endurance 400 TBW (0.44 DWPD) | 800 TBW (0.44 DWPD) | 1600 TBW (0.44 DWPD) | 3200 TBW (0.44 DWPD)
MSRP ? (?¢/GB) | ? (?¢/GB) | ? (?¢/GB) | ? (?¢/GB)


SSDs based on Phison’s E18 controller have been entering the market steadily over the last few months. While early ones like the Sabrent Rocket 4 Plus and Mushkin Gamma Gen 4 came with Micron’s 96L flash, the newer ones such as the Corsair MP600 PRO XT and Kingston’s KC3000 are using 176L NAND. The KC3000’s 0.44 DWPD endurance rating slightly edges ahead of the MP600 PRO XT’s 0.38 DWPD despite similar component choices. Claimed performance numbers are similar to those achieved by other E18 SSDs with a similar NAND configuration – 7GBps for sequential accesses, and up to 1M IOPS for random accesses. The thermal solution involves an overlaid graphene aluminum heat-spreader that still keeps the thickness down to 2.21mm for the single-sided SKUs, and 3.5mm for the double-sided ones. On the power consumption side, the 4TB version can consume as much as 10.2W. On the positive side, all SKUs support a 5mW deep sleep mode.
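The DWPD (drive writes per day) figures above derive from the TBW rating spread over the five-year warranty period. A rough sketch of that conversion (the exact rounding convention Kingston uses is an assumption):

```python
# Convert a TBW endurance rating to drive writes per day (DWPD)
# over the 5-year warranty period. Capacities in GB, TBW in decimal terabytes.
WARRANTY_DAYS = 5 * 365

def dwpd(tbw_tb, capacity_gb):
    """Full drive writes per day implied by the TBW rating."""
    total_writes_gb = tbw_tb * 1000  # decimal TB -> GB
    return total_writes_gb / (capacity_gb * WARRANTY_DAYS)

for tbw, cap in [(400, 512), (800, 1024), (1600, 2048), (3200, 4096)]:
    print(f"{cap} GB: {dwpd(tbw, cap):.2f} DWPD")
# Each capacity works out to roughly 0.43 DWPD, in line with
# Kingston's rated 0.44 (the small gap is down to rounding).
```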


Kingston is targeting the KC3000 towards both desktops and notebooks. Primary storage-intensive use-cases include 3D rendering and 4K content creation. In this market, the drive is going up against established competition like the Samsung 980 PRO, and Western Digital’s SN850. Both of these SSDs have lower endurance numbers and don’t have 4TB options, giving the KC3000 an edge for consumers looking at those aspects specifically.



Source: AnandTech – Kingston KC3000 PCIe 4.0 NVMe Flagship SSD Hits Retail

Apple's M1 Pro, M1 Max SoCs Investigated: New Performance and Efficiency Heights

Last week, Apple unveiled their new generation MacBook Pro laptop series, a new range of flagship devices that bring significant updates for the company’s professional and power-user oriented user base. The new devices particularly differentiate themselves in that they’re now powered by two new additional entries in Apple’s own silicon line-up, the M1 Pro and the M1 Max. We covered the initial reveal in last week’s overview article of the two new chips, and today we’re getting the first glimpses of the performance we can expect from the new silicon.



Source: AnandTech – Apple’s M1 Pro, M1 Max SoCs Investigated: New Performance and Efficiency Heights

The ASRock X570S PG Riptide Motherboard Review: A Wave of PCIe 4.0 Support on A Budget

At Computex 2021, AMD and its motherboard partners officially unveiled a new series of AM4 motherboards for Ryzen 5000 processors. From a technical standpoint, the new X570S chipset is really not that different from the original X570 launched back in 2019; the main user-facing difference is that all X570S models feature a passively cooled chipset. Some vendors have opted to refresh existing models, while others are releasing completely new variants, such as the ASRock X570S PG Riptide we are reviewing today. Aimed at the budget end of the extreme chipset segment, the X570S PG Riptide features a Killer-based 2.5 GbE controller, dual PCIe 4.0 x4 M.2 slots, and support for up to 128 GB of DDR4-5000.



Source: AnandTech – The ASRock X570S PG Riptide Motherboard Review: A Wave of PCIe 4.0 Support on A Budget

Intel Reports Q3 2021 Earnings: Client Down, Data Center and IoT Up

Kicking off another earnings season, Intel is once again leading the pack of semiconductor companies in reporting their earnings for the most recent quarter. As the company gets ready to go into the holiday quarter, they are coming off what’s largely been a quiet quarter for the chip maker, as Intel didn’t launch any major products in Q3. Instead, Intel’s most recent quarter has been driven by ongoing sales of existing products, with most of Intel’s business segments seeing broad recoveries or other forms of growth in the last year.


For the third quarter of 2021, Intel reported $19.2B in revenue, a $900M improvement over the year-ago quarter. Intel’s profitability has also continued to grow – even faster than overall revenues – with Intel booking $6.8B in net income for the quarter, dwarfing Q3’2020’s “mere” $4.3B. Unsurprisingly, that net income growth has been fueled in part by higher gross margins; Intel’s overall gross margin for the quarter was 56%, up nearly 3 percentage points from last year.
















Intel Q3 2021 Financial Results (GAAP)
  Q3’2021 Q2’2021 Q3’2020
Revenue $19.2B $19.7B $18.3B
Operating Income $5.2B $5.7B $5.1B
Net Income $6.8B $5.1B $4.3B
Gross Margin 56.0% 53.3% 53.1%
Client Computing Group Revenue $9.7B -4% -2%
Data Center Group Revenue $6.5B flat +10%
Internet of Things Group Revenue $1.0B +2% +54%
Mobileye Revenue $326M flat +39%
Non-Volatile Memory Solutions Group $1.1B flat -4%
Programmable Solutions Group $478M -2% +16%


Breaking things down by Intel’s individual business groups, most of Intel’s groups have enjoyed significant growth over the year-ago quarter. The only groups not to report gains are Intel’s Client Computing Group (though this is their largest group) and their Non-Volatile Memory Solutions Group, which Intel is in the process of selling to SK Hynix.



Starting with the CCG then, Intel’s core group is unfortunately also the only one struggling to grow right now. With $9.7B in revenue, it’s down just 2% from Q3’2020, but that’s something that stands out when Intel’s other groups are doing so well. Further breaking down the numbers, platform revenue overall is actually up 2% on the year, but non-platform revenue – “adjacencies” as Intel terms them, such as their modem and wireless communications product lines – are down significantly. On the whole this isn’t too surprising since Intel is in the process of winding down its modem business anyhow as part of that sale to Apple, but it’s an extra drag that Intel could do without.


The bigger thorn in Intel’s side at the moment, according to the company, is the ongoing chip crunch, which has limited laptop sales. With Intel’s OEM partners unable to source enough components to build as many laptops as they’d like, it has the knock-on effect of reducing their CPU orders, even though Intel itself doesn’t seem to be having production issues. The upshot, at least, is that desktop sales are up significantly versus the year-ago quarter, and that average selling prices (ASPs) for both desktop and notebook chips are up.



Meanwhile, Intel’s Data Center Group is enjoying a recovery in enterprise spending, pushing revenues higher. DCG’s revenue grew 10% year-over-year, with both sales volume and ASPs increasing by several percent on the back of their Ice Lake Xeon processors. A bit more surprising here is that Intel believes they could be doing even better if not for the chip crunch; higher margin products like servers are typically not impacted as much by these sorts of shortages, since server makers have the means to pay for priority.


Unfortunately, unlike Q2, Intel isn’t providing quarter-over-quarter (i.e. versus the previous quarter) figures for their breakdowns. So while overall DCG revenue is flat on a quarterly basis, it sounds like Intel hasn’t really recovered from the hit they took in Q2. Meanwhile, commentary on Intel’s earnings call suggests that sales of the largest (XCC) Ice Lake Xeons have been softer than Intel first expected, which has kept ASP growth down in an otherwise DCG-centric quarter.



The third quarter was also kind to Intel’s IoT groups and their Programmable Solutions Group. All three groups are up by double-digit percentages on a YoY basis, particularly the Internet of Things Group (IoTG), which is up 54%. According to Intel, that IOTG growth is largely due to businesses recovering from the pandemic, with a similar story for the Mobileye group thanks to automotive production having ramped back up versus its 2020 lows.


Otherwise, Intel’s final group, the Non-Volatile Memory Solutions Group, was the other declining group for the quarter. At this point Intel has officially excised the group’s figures from their non-GAAP reporting, and while they’re still required to report those figures in GAAP reports, they aren’t further commenting on a business that will soon no longer be theirs.


Finally, tucked inside Intel’s presentation deck is an interesting note: Intel Foundry Services (IFS) has shipped its first revenue wafers. Intel is, of course, betting heavily on IFS becoming a cornerstone of its overall chip-making business in the future as part of its IDM 2.0 strategy, so shipping customers’ chips for revenue is an important first step in that process. Intel has laid out a very aggressive process roadmap leading up to 20A in 2024, and IFS’s success will hinge on whether they can hit those manufacturing technology targets.



For Intel, Q3’2021 was overall a decent quarter for the company – though what’s decent is relative. With the DCG, IOTG, and Mobileye groups all setting revenue records for the quarter (and for IOTG, overall records), Intel continues to grow. On the flip side, however, Intel missed their own revenue projections for the quarter by around $100M, so in that respect they’ve come in below where they intended to be. And judging from the 7% drop in the stock price during after-hours trading, investors are taking note.


Looking forward, Intel is going into the all-important Q4 holiday sales period, typically their biggest quarter of the year. At this point the company is projecting that it will book $18.3B in non-GAAP revenue (excluding NSG), which would be a decline of 5% versus Q4’2020. Similarly, the company is expecting gross margins to come back down a bit, forecasting a 53.5% margin for the quarter. On the product front, Q4 will see the launch of the company’s Alder Lake family of processors, though initial CPU launches and their relatively low volumes tend not to move the needle too much.



On that note, Intel’s Innovation event is scheduled to take place next week, on the 27th and 28th. The two day event is a successor-of-sorts to Intel’s IDF program, and we should find out more about the Alder Lake architecture and Intel’s specific product plans at that time.




Source: AnandTech – Intel Reports Q3 2021 Earnings: Client Down, Data Center and IoT Up

Intel Reaffirms: Our Discrete GPUs Will Be On Shelves in Q1 2022

Today is when Intel does its third-quarter 2021 financial disclosures, and there’s one little tidbit in the earnings presentation about its upcoming new discrete GPU offerings. The earnings are usually a chance to wave the flag of innovation about what’s to come, and this time around Intel is confirming that its first-generation discrete graphics with the Xe-HPG architecture will be on shelves in Q1 2022.


Intel has slowly been disclosing the features for its discrete gaming graphics offerings. Earlier this year, the company announced the branding for its next-gen graphics, called Arc, and with that the first four generations of products: Alchemist, Battlemage, Celestial, and Druid – it’s easy to see that we’re going ABCD here. Intel did technically state at that disclosure, back in August 2021, that Alchemist would be coming in Q1; the reaffirmation of the date today in the financial disclosures indicates that they’re staying as close to this date as possible.



Intel has previously confirmed that Alchemist will be fully DirectX 12 Ultimate compliant – meaning that alongside RT, it will offer variable-rate shading, mesh shaders, and sampler feedback. This will make it comparable in core graphics features to current-generation AMD and NVIDIA hardware. Although it has taken a few years now to come to fruition, Intel has made it clear for a while now that the company has intended to become a viable third player in the discrete graphics space. Intel’s odyssey, as previous marketing efforts have dubbed it, has been driven primarily by developing the Xe family of GPU microarchitectures, as well as the GPUs based on those architectures. Xe-LP was the first out the door last year, as part of the Tiger Lake family of CPUs and the DG1 discrete GPU. Other Xe family architectures include Xe-HP for servers and Xe-HPC for supercomputers and other high-performance compute environments.



The fundamental building block of Alchemist is the Xe Core. For manufacturing, Intel is turning to TSMC’s N6 process. Given Intel’s Q1’22 release timeframe, Intel’s Alchemist GPUs will almost certainly be the most advanced consumer GPUs on the market with respect to manufacturing technology. Alchemist will be going up against AMD’s Navi 2x chips built on N7, and NVIDIA’s Ampere GA10x chips built on Samsung 8LPP. That said, as AMD can attest to, there’s more to being competitive in the consumer GPU market than just having a better process node. In conjunction with the use of TSMC’s N6 process, Intel is reporting that they’ve improved both their power efficiency (performance-per-watt) and their clockspeeds at a given voltage by 50% compared to Xe-LP. Note that this is the sum total of all of their improvements – process, logic, circuit, and architecture – so it’s not clear how much of this comes from the jump to TSMC N6 from Intel 10SF, and how much comes from other optimizations.



Exactly what performance level and pricing Intel will be pitching its discrete graphics to is currently unknown. The Q1 launch window puts CES (held the first week of January) as a good spot to say something more.


Related Reading




Source: AnandTech – Intel Reaffirms: Our Discrete GPUs Will Be On Shelves in Q1 2022

SK Hynix Announces Its First HBM3 Memory: 24GB Stacks, Clocked at up to 6.4Gbps

Though the formal specification has yet to be ratified by JEDEC, the memory industry as a whole is already gearing up for the upcoming launch of the next generation of High Bandwidth Memory, HBM3. Following announcements earlier this summer from controller IP vendors like Synopsys and Rambus, this morning SK Hynix is announcing that it has finished development of its HBM3 memory technology – and according to the company, becoming the first memory vendor to do so. With controller IP and now the memory itself nearing or at completion, the stage is being set for formal ratification of the standard, and eventually for HBM3-equipped devices to start rolling out later in 2022.


Overall, the relatively lightweight press release from SK Hynix is roughly equal parts technical details and boasting. While there are only 3 memory vendors producing HBM – Samsung, SK Hynix, and Micron – it’s still a technically competitive field due to the challenges involved in making deep-stacked and TSV-connected high-speed memory work, and thus there’s a fair bit of pride in being first. At the same time, HBM commands significant price premiums even with its high production costs, so memory vendors are also eager to be first to market to cash in on their technologies.


In any case, both IP and memory vendors have taken to announcing some of their HBM wares even before the relevant specifications have been announced. We saw both parties get an early start with HBM2E, and now once again with HBM3. This leaves some of the details of HBM3 shrouded in a bit of mystery – mainly that we don’t know what the final, official bandwidth rates are going to be – but announcements like SK Hynix’s help narrow things down. Still, these sorts of early announcements should be taken with a small grain of salt, as memory vendors are fond of quoting in-lab data rates that may be faster than what the spec itself defines (e.g. SK Hynix’s HBM2E).


Getting into the technical details, according to SK Hynix their HBM3 memory will be able to run as fast as 6.4Gbps/pin. This would be double the data rate of today’s HBM2E, which formally tops out at 3.2Gbps/pin, or 78% faster than the company’s off-spec 3.6Gbps/pin HBM2E SKUs. SK Hynix’s announcement also indirectly confirms that the basic bus widths for HBM3 remain unchanged, meaning that a single stack of memory is 1024-bits wide. At Hynix’s claimed data rates, this means a single stack of HBM3 will be able to deliver 819GB/second worth of memory bandwidth.
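That 819GB/second figure falls straight out of the per-pin data rate and the 1024-bit stack width; a quick sketch of the arithmetic:

```python
# Per-stack HBM bandwidth: bus width (bits) x per-pin data rate (Gbps),
# divided by 8 to convert bits to bytes.
def stack_bandwidth(bus_width_bits, pin_rate_gbps):
    """Bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_bandwidth(1024, 6.4))  # HBM3:  819.2 GB/s
print(stack_bandwidth(1024, 3.6))  # HBM2E: 460.8 GB/s (SK Hynix's off-spec parts)
print(stack_bandwidth(1024, 2.0))  # HBM2:  256.0 GB/s
```

The same formula reproduces the per-stack figures in SK Hynix’s comparison table below.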












SK Hynix HBM Memory Comparison
  HBM3 HBM2E HBM2
Max Capacity 24 GB 16 GB 8 GB
Max Bandwidth Per Pin 6.4 Gb/s 3.6 Gb/s 2.0 Gb/s
Number of DRAM ICs per Stack 12 8 8
Effective Bus Width 1024-bit
Voltage ? 1.2 V 1.2 V
Bandwidth per Stack 819.2 GB/s 460.8 GB/s 256 GB/s


SK Hynix will be offering their memory in two capacities: 16GB and 24GB. This aligns with 8-Hi and 12-Hi stacks respectively, and means that at least for SK Hynix, their first generation of HBM3 memory is still the same density as their latest-generation HBM2E memory. This means that device vendors looking to increase their total memory capacities for their next-generation parts (e.g. AMD and NVIDIA) will need to use memory with 12 dies/layers, up from the 8 layer stacks they typically use today.


What will be interesting to see in the final version of the HBM3 specification is whether JEDEC sets any height limits for 12-Hi stacks of HBM3. The group punted on the matter with HBM2E, where 8-Hi stacks had a maximum height but 12-Hi stacks did not. That in turn impeded the adoption of 12-Hi stacked HBM2E, since it wasn’t guaranteed to fit in the same space as 8-Hi stacks – or indeed any common size at all.


On that matter, the SK Hynix press release notably calls out the efforts the company put into minimizing the size of their 12-Hi (24GB) HBM3 stacks. According to the company, the dies used in a 12-Hi stack – and apparently just the 12-Hi stack – have been ground to a thickness of just 30 micrometers, minimizing their thickness and allowing SK Hynix to properly place them within the sizable stack. Minimizing stack height is beneficial regardless of standards, but if this means that HBM3 will require 12-Hi stacks to be shorter – and ideally, the same height as 8-Hi stacks for physical compatibility purposes – then all the better for customers, who would be able to more easily offer products with multiple memory capacities.



Past that, the press release also confirms that one of HBM’s core features, integrated ECC support, will be returning. The standard has offered ECC since the very beginning, allowing device manufacturers to get ECC memory “for free”, as opposed to having to lay down extra chips with (G)DDR or using soft-ECC methods.


Finally, it looks like SK Hynix will be going after the same general customer base for HBM3 as they already are for HBM2E. That is to say high-end server products, where the additional bandwidth of HBM3 is essential, as is the density. HBM has of course made a name for itself in server GPUs such as NVIDIA’s A100 and AMD’s MI100, but it’s also frequently tapped for high-end machine learning accelerators, and even networking gear.


We’ll have more on this story in the near future once JEDEC formally approves the HBM3 standard. In the meantime, it’s sounding like the first HBM3 products should begin landing in customers’ hands in the later part of next year.



Source: AnandTech – SK Hynix Announces Its First HBM3 Memory: 24GB Stacks, Clocked at up to 6.4Gbps

The Huawei MateBook 16 Review, Powered by AMD Ryzen 7 5800H: Ecosystem Plus

Having very recently reviewed the Matebook X Pro 2021 (13.9-inch), our local PR in the UK offered me a last-minute chance to examine the newest addition to their laptop portfolio. The Huawei MateBook 16, on paper at least, comes across as a workhorse machine designed for the office and on the go. A powerful CPU that can go into a high-performance mode when plugged in, and sip power when it needs to. No discrete graphics to get in the way, and a massive 84 Wh battery designed for an all-day workflow. It comes with a color-accurate, large 3:2 display, and direct screen sharing with a Huawei smartphone, tablet, or monitor means there’s a lot of potential if you buy into the ecosystem. The question remains – is it any good?



Source: AnandTech – The Huawei MateBook 16 Review, Powered by AMD Ryzen 7 5800H: Ecosystem Plus

Google Announces Pixel 6, Pixel 6 Pro: The New Real Flagship Pixels

Today, after many weeks, even months, of leaks and teasers, Google has finally announced the new Pixel 6 and Pixel 6 Pro – their new flagship line-up of phones for 2021, carrying over into next year. The two phones had been teased on numerous occasions and have one of the worst leak records of any phone ever, so today’s event revealed few unknowns. Still, Google has put on the table a pair of very interesting phones – arguably the most interesting Pixel phones the company has ever released.



Source: AnandTech – Google Announces Pixel 6, Pixel 6 Pro: The New Real Flagship Pixels

The Arm DevSummit 2021 Keynote Live Blog: 8am PT (15:00 UTC)

This week seems to be Arm’s week across the tech industry. Following yesterday’s Arm SoC announcements from Apple, today sees Arm kick off their 2021 developer’s summit, aptly named DevSummit. As always, the show is opening up with a keynote being delivered by Arm CEO Simon Segars, who will be using the opportunity to lay out Arm’s vision of the future.


Arm chips are already in everything from toasters to PCs – and Arm isn’t stopping there. So be sure to join us at 8am PT (15:00 UTC) for our live blog coverage of Arm’s keynote.



Source: AnandTech – The Arm DevSummit 2021 Keynote Live Blog: 8am PT (15:00 UTC)

Apple Announces M1 Pro & M1 Max: Giant New Arm SoCs with All-Out Performance

Today’s Apple Mac keynote has been very eventful, with the company announcing a new line-up of MacBook Pro devices, powered by two different new SoCs in Apple’s Silicon line-up: the new M1 Pro and the M1 Max.


The M1 Pro and Max both follow up on last year’s M1, Apple’s first-generation Mac silicon that ushered in the beginning of Apple’s journey to replace x86-based chips with their own in-house designs. The M1 had been widely successful for Apple, showcasing fantastic performance at never-before-seen power efficiency in the laptop market. But although the M1 was fast, it was still a relatively small SoC – one that also powers devices such as the iPad Pro line-up – with a correspondingly low TDP, so it naturally lost out to larger, more power-hungry chips from the competition.


Today’s two new chips look to change that situation, with Apple going all-out for performance, with more CPU cores, more GPU cores, much more silicon investment, and Apple now also increasing their power budget far past anything they’ve ever done in the smartphone or tablet space.



Source: AnandTech – Apple Announces M1 Pro & M1 Max: Giant New Arm SoCs with All-Out Performance

The Apple 2021 Fall Mac Event Live Blog 10am PT (17:00 UTC)

Following last month’s announcement event for Apple’s newest iPhone and iPad line-ups, today we’re seeing Apple hold its second fall event, where we expect the company to talk about all things Mac. Last year’s event was a historic one, with Apple introducing the M1 chip and the new Macs it powered, marking the company’s move away from Intel’s x86 chips and taking their future into their own hands with their own custom Arm silicon. This year, we’re expecting more chips and more devices, with even more performance to be revealed. Stay tuned as we cover tonight’s show.



Source: AnandTech – The Apple 2021 Fall Mac Event Live Blog 10am PT (17:00 UTC)

TSMC Roadmap Update: 3nm in Q1 2023, 3nm Enhanced in 2024, 2nm in 2025

TSMC has introduced a brand-new manufacturing technology roughly every two years over the past decade. Yet as the complexity of developing new fabrication processes is compounding, it is getting increasingly difficult to maintain such a cadence. The company has previously acknowledged that it will start producing chips using its N3 (3 nm) node about four months later than the industry is used to (i.e., Q2), and in a recent conference call with analysts, TSMC revealed additional details about its latest process technology roadmap, focusing on their N3, N3E, and N2 (2 nm) technologies.


N3 in 2023


TSMC’s N3 technology will provide full node scaling compared to N5, so its adopters will get all of the performance (10% – 15%), power (-25% to -30%), and area (1.7x higher density for logic) enhancements that they have come to expect from a new node in this day and age. But these advantages will come at a cost. The fabrication process will rely extensively on extreme ultraviolet (EUV) lithography, and while the exact number of EUV layers is unknown, it will be greater than the 14 used for N5. The extreme complexity of the technology will further add to the number of process steps – bringing it to well over 1,000 – which will further increase cycle times.
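As a rough illustration of what those scaling figures mean in practice, the sketch below applies TSMC’s quoted numbers to a hypothetical die. The 80/20 logic/other split and the 1.2x density gain for non-logic blocks are assumptions for illustration only – real dies mix in SRAM and analog, which historically scale far less than logic, and TSMC has not quoted figures for them.

```python
# Back-of-the-envelope sketch of TSMC's quoted N5 -> N3 full-node scaling
# (10-15% performance, 25-30% power reduction, 1.7x logic density).
# The mixed-die estimate is hypothetical: the 80/20 logic split and the
# 1.2x gain for non-logic blocks are assumed values, not TSMC figures.

def n3_logic_area(n5_area_mm2: float, density_gain: float = 1.7) -> float:
    """Area of a pure-logic N5 block if ported unchanged to N3."""
    return n5_area_mm2 / density_gain

def n3_die_area(n5_area_mm2: float, logic_fraction: float = 0.8,
                logic_gain: float = 1.7, other_gain: float = 1.2) -> float:
    """Naive mixed-die shrink: logic scales 1.7x, everything else less."""
    logic = n5_area_mm2 * logic_fraction / logic_gain
    other = n5_area_mm2 * (1 - logic_fraction) / other_gain
    return logic + other

print(f"100 mm2 of N5 logic  -> {n3_logic_area(100):.1f} mm2 on N3")
print(f"100 mm2 mixed N5 die -> {n3_die_area(100):.1f} mm2 on N3")
```

The gap between the two results is why headline density claims rarely translate into equally smaller chips.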


As a result, while mass production of the first chips using TSMC’s N3 node will begin in the second half of 2022, the company will only begin shipping them to an undisclosed client for revenue in the first quarter of 2023. Many observers, however, had expected these chips to ship in late 2022.


“N3 risk production is scheduled in 2021, and production will start in second half of 2022,” said C.C. Wei, CEO of TSMC. “So second half of 2022 will be our mass production, but you can expect that revenue will be seen in first quarter of 2023 because it takes long — it takes cycle time to have all those wafer out.”


N3E in 2024


Traditionally, TSMC offers performance-enhanced and application-specific process technologies based on its leading-edge nodes several quarters after their introduction. With N3, the company will be changing its tactics somewhat, introducing a node called N3E that can be considered an enhanced version of N3.


This process node will feature an improved process window along with performance, power, and yield enhancements. It is unclear whether N3 meets TSMC’s expectations for PPA and yield, but the very fact that the foundry is talking about improving yields indicates that there is room for improvement beyond traditional yield-boosting methods.


“We also introduced N3E as an extension of our N3 family,” said Wei. “N3E will feature improved manufacturing process window with better performance, power and yield. Volume production of N3E is scheduled for one year after N3.”


TSMC has not commented on whether N3E will be compatible with N3’s design rules, design infrastructure, and IPs. Meanwhile, since N3E will serve customers a year after N3 (i.e., in 2024), there will be quite some time for chip designers to prepare for the new node.


N2 in 2025


TSMC’s N2 fabrication process has largely been a mystery so far. The company has confirmed that it was considering gate-all-around field-effect transistors (GAAFETs) for this node, but has never said that the decision was final. Furthermore, it has never previously disclosed a schedule for N2. 


But as N2 gets closer, TSMC is slowly locking down additional details. In particular, the company is now formally confirming that the N2 node is scheduled for 2025, though it is not elaborating on whether this means high-volume manufacturing (HVM) in 2025 or shipments in 2025.


“I can share with you that in our 2-nm technology, the density and performance, will be the most competitive in 2025,” said Wei.



Source: AnandTech – TSMC Roadmap Update: 3nm in Q1 2023, 3nm Enhanced in 2024, 2nm in 2025

TSMC to Build Japan's Most Advanced Semiconductor Fab

Fabs are well-known for being an expensive business to be in, so any time a new fab is slated for construction, it tends to be a big deal – especially amidst the current chip crunch. To that end, TSMC this week has announced plans to build a new, semi-specialized fab in Japan to meet the needs of its local customers. The semiconductor manufacturing facility will focus on mature and specialty fabrication technologies that are used to make chips with long lifecycles for automakers and consumer electronics. The fab will be Japan’s most advanced fab for logic when it becomes operational in late 2024, and if the rumors about planned investments are correct, it could also be Japan’s largest fab for logic chips.


“After conducting due diligence, we announce our intention to build a specialty technology fab in Japan, subject to our board of directors approval,” announced CC Wei, chief executive officer of TSMC, during a conference call with investors and financial analysts. “We have received a strong commitment to support this project from both our customers and the Japanese government.”


Comes Online in Late 2024


TSMC’s fab in Japan will process 300-mm wafers using a variety of specialty and mature nodes, including a number of 28 nm technologies as well as its 22ULP process for ultra-low-power devices. These nodes are not used to make leading-edge ASICs and SoCs, but they are widely used by the automotive and consumer electronics industries, and will continue to be used for years to come – not only for existing chips, but for upcoming solutions as well.


“This fab will utilize 20 nm to 28 nm technology for semiconductor wafer fabrication,” Wei added. “Fab construction is scheduled to begin in 2022 and production is targeted to begin in late 2024, further details will be provided subject to the board approval.”


While TSMC disclosed the specialized nature of the fab, its schedule, and the fact that it gained support from clients and the Japanese government, the company is not revealing anything beyond that. In fact, while it confirmed that the cost of the semiconductor production facility is not included in its $100 billion three-year CapEx plan, it refused to give any estimates about its planned investments in the project.


Meanwhile, there are many things that make this fab special for TSMC, Japan, and the industry.


The Most Advanced Logic Fab in Japan


In late 2005, AMD and Intel started shipping their first dual-core processors, and the CPU frequency battle was officially over. Intel was getting ready to introduce its first 65 nm chips in early 2006 when, all of a sudden, Panasonic said that it had started volume production of the world’s first application processors using a 65 nm technology, which it co-developed with Renesas – putting Panasonic a couple of months ahead of mighty Intel. In mid-2007, Panasonic again beat Intel to the punch by several months with its 45 nm fabrication process.


But with its 32 nm node, Panasonic fell 9 – 10 months behind Intel. And while the company did a half-node shrink of that process, it ultimately pulled the plug on 22 nm, following other Japanese conglomerates that had opted out of the process technology race even earlier. By now, all Japanese automotive and electronics companies outsource their advanced chips to foundries, who in turn build the majority of them outside of Japan.


By bringing a 22ULP/28nm-capable fab to Japan, TSMC will not only bring advanced logic manufacturing back to the country, but will also be operating the most advanced logic fab in Japan. TSMC is also constructing an R&D center in Japan and cooperates with the University of Tokyo on various matters, so its presence in the country is growing, which is good news for the local semiconductor industry.


Previously, TSMC concentrated its fabs and R&D facilities in Taiwan, but it looks like its rapid growth – fueled by surging demand for semiconductors as well as geopolitical matters – is compelling the foundry to diversify its production and R&D locations.


What is particularly interesting is that, according to a Nikkei report, the Japanese production facility will be co-funded by TSMC, the Japanese government, and Sony. This marks another major strategy shift for TSMC, which tends to fully own its fabs. In fact, if the Nikkei report is to be believed, the whole project will cost around $7 billion (though it is not said whether this is the cost of the first phase of the fab, or a potential multi-year investment).


To put the number into context, SMIC recently announced plans to spend around $8.87 billion on a fab with a planned capacity of around 100,000 300-mm wafer starts per month (WSPM). TSMC’s facility will presumably cost less and will be built in a country with higher operating costs, so it may well not be a GigaFab-level facility (which have capacities of ~100K WSPM). But still, we are talking about a sizable fab that could have a capacity of tens of thousands of wafer starts per month, which would make it Japan’s biggest 300-mm logic facility ever. Just for comparison, the former Panasonic fab in Uozu (now controlled by Tower Semiconductor and Nuvoton) has a capacity of around 8,000 WSPM.
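As a purely hypothetical sketch of that comparison, dividing SMIC’s announced budget by its planned capacity gives a cost per wafer start, which can then be applied to the reported $7 billion figure. TSMC has confirmed no capacity numbers, and mature-node fabs are generally cheaper per wafer start than SMIC’s project, so the implied capacity below is an illustration only.

```python
# Hypothetical cost-per-capacity comparison using the figures cited above:
# SMIC's ~$8.87B fab for ~100K wafer starts per month (WSPM), versus the
# ~$7B Nikkei reports for the TSMC Japan fab. The implied TSMC capacity is
# an illustration - TSMC has confirmed no numbers, and mature-node fabs
# cost less per wafer start than leading-edge ones.

smic_cost_per_wspm = 8.87e9 / 100_000          # ~$88.7K per monthly wafer start
implied_tsmc_wspm = 7e9 / smic_cost_per_wspm   # if costs scaled linearly

print(f"SMIC: ${smic_cost_per_wspm / 1e3:.1f}K per WSPM")
print(f"Implied TSMC Japan capacity: ~{implied_tsmc_wspm:,.0f} WSPM")
```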


TSMC has not formally confirmed any numbers about its Japanese fab, but the company tends to build rather large production facilities that can be expanded if needed. Meanwhile, a fab in Japan that can serve the needs of local automotive and electronics conglomerates promises to help them avoid chip shortages in the future. This would also leave TSMC free to assign its 28 nm Taiwanese and Chinese production lines to other applications, including PCs, which is important for the whole industry.



Source: AnandTech – TSMC to Build Japan’s Most Advanced Semiconductor Fab

G.Skill Unveils Premium Trident Z5 and Z5 RGB DDR5 Memory, Up To DDR5-6400 CL36

With memory manufacturers racing to push out DDR5 in time for the upcoming launch of Intel’s Alder Lake processors, G.Skill has unveiled its latest premium Trident Z5 kits. The latest Trident kits are based on Samsung’s new DDR5 memory chips and range in speed from DDR5-5600 to DDR5-6400, with latencies of either CL36 or CL40. Meanwhile, G.Skill has also used this opportunity to undertake a complete design overhaul from its previous DDR4 memory, with a fresh new look and plenty of integrated RGB.









G.Skill Trident Z5 DDR5 Memory Specifications
  Speed Latencies Voltage Capacity
DDR5-6400 36-36-36-76 / 40-40-40-76 ??? 32 GB (2 x 16 GB)
DDR5-6000 36-36-36-76 / 40-40-40-76 ??? 32 GB (2 x 16 GB)
DDR5-5600 36-36-36-76 / 40-40-40-76 ??? 32 GB (2 x 16 GB)


Looking at performance, the top SKU comes in at a speedy DDR5-6400, with latencies of either CL 36-36-36-76 or CL 40-40-40-76. The lower-rated DDR5-6000 and DDR5-5600 kits are available with the same latencies, and all six combinations will be offered as 32 GB kits comprising 2 x 16 GB memory modules. The new G.Skill Trident Z5 and Z5 RGB memory kits also feature the latest Samsung memory ICs, with G.Skill hand-screening the memory chips to ensure maximum stability and performance.
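To put the rated latencies in absolute terms, first-word CAS latency can be converted to nanoseconds from the data rate and CL figure. This is standard DDR arithmetic, not a G.Skill-supplied number: DDR transfers data twice per clock, so the I/O clock in MHz is half the MT/s rating.

```python
# First-word CAS latency in nanoseconds for the announced Trident Z5 kits.
# latency_ns = CL / clock_MHz * 1000 = 2000 * CL / data_rate (MT/s),
# since the DDR I/O clock runs at half the transfer rate.

def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    return 2000 * cl / data_rate_mts

for rate in (6400, 6000, 5600):
    for cl in (36, 40):
        print(f"DDR5-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

By this measure, DDR5-6400 CL36 lands at 11.25 ns, which is in the same absolute-latency neighborhood as fast DDR4 kits despite the much higher CL number.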


At the time of writing, G.Skill hasn’t confirmed the operating voltages of each kit. G.Skill also hasn’t unveiled its pricing at this time, but it did say that the Trident Z5 and Trident Z5 RGB kits will be available from November.


Meanwhile, in terms of aesthetics, the G.Skill Trident Z5 DDR5 memory features a new design compared with previous Trident Z series kits. The Trident Z5 comes with a new dual-texture heat spreader design and is available with either a black top bar (Z5) or a new translucent RGB light bar (Z5 RGB). It’s also available in black and silver, with a black brushed aluminum insert across both colors, making it stand out.



With the RGB-enabled G.Skill Trident Z5 RGB DDR5 memory kits, the lighting can be customized via G.Skill’s Trident Z lighting control software, or synced via third-party RGB software from motherboard vendors such as ASRock, ASUS, GIGABYTE, and MSI.




Source: AnandTech – G.Skill Unveils Premium Trident Z5 and Z5 RGB DDR5 Memory, Up To DDR5-6400 CL36

The EVGA Z590 Dark Motherboard Review: For Extreme Enthusiasts

Getting the most out of Intel’s Core i9-11900K primarily relies on two main factors: premium cooling for the chip itself, and a solid motherboard acting as the foundation. And while motherboard manufacturers such as EVGA can’t do anything about the former, they have quite a bit of experience with the latter.

Today we’re taking a look at EVGA’s Z590 Dark motherboard, which is putting EVGA’s experience to the test as one of a small handful of LGA1200 motherboards geared for extreme overclocking. A niche market within a niche market, few people really have the need (or the means) to overclock a processor within an inch of its life. But for those that do, EVGA has developed a well-earned reputation with its Dark series boards for pulling out all of the stops in helping overclockers get the most out of their chips. And even for the rest of us who will never see a Rocket Lake chip pass 6GHz, it’s interesting to see just what it takes with regard to motherboard design and construction to get the job done.



Source: AnandTech – The EVGA Z590 Dark Motherboard Review: For Extreme Enthusiasts

The Be Quiet! Pure Loop 280mm AIO Cooler Review: Quiet Without Compromise

Today we’re taking our first look at German manufacturer Be Quiet’s all-in-one (AIO) CPU liquid coolers, with a review of their Pure Loop 280mm cooler. True to their design ethos, Be Quiet! has built the Pure Loop to operate with as little noise as is reasonably possible, making for a record-quiet cooler that also hits a great balance between overall performance, an elegant appearance, and price.



Source: AnandTech – The Be Quiet! Pure Loop 280mm AIO Cooler Review: Quiet Without Compromise

AMD Launches Radeon RX 6600: More Mainstream Gaming For $329

AMD this morning is once again expanding its Radeon RX 6000 family of video cards, this time with the addition of a second, cheaper mainstream offering: the Radeon RX 6600. Being announced and launched this morning, the Radeon RX 6600 is aimed at the mainstream 1080p gaming market, taking its place as a second, cheaper alternative to AMD’s already-released Radeon RX 6600 XT. Based on the same Navi 23 GPU as its sibling, the Radeon RX 6600 comes with 28 CUs’ worth of graphics hardware, 8GB of GDDR6 VRAM, and a 32MB Infinity Cache, with prices starting at $329.



Source: AnandTech – AMD Launches Radeon RX 6600: More Mainstream Gaming For $329

Netgear Updates Orbi Lineup with RBKE960 Wi-Fi 6E Quad-Band Mesh System

Mesh networking kits / Wi-Fi systems have become quite popular over the last few years. Despite competition from startups such as eero (now part of Amazon) and Plume (with forced subscriptions), as well as big companies like Google (Google Wi-Fi and Nest Wi-Fi), Netgear’s Orbi continues to enjoy popularity in the market. Orbi’s use of a dedicated backhaul provides a tangible benefit over Wi-Fi systems that use shared backhauls. However, the costs associated with the additional radio have meant that Orbi Wi-Fi systems have always carried a premium compared to the average offerings in the space.


Netgear introduced their first Wi-Fi 6E router – the Nighthawk RAXE500 – at the 2021 CES. Priced at $600, the router utilized a Broadcom platform (BCM4908 network processing SoC + BCM46384 4-stream 802.11an/ac/ax radio). Today, the company is updating the Orbi lineup with a Wi-Fi 6E offering belonging to the AXE11000 class. Based on Qualcomm’s Hawkeye (IPQ8074) / Pine (QCN9074) platform, the company is touting their RBKE960 Orbi series to be the world’s first quad-band Wi-Fi 6E mesh system.


Netgear’s high-end Orbi kits have traditionally been tri-band solutions, with a second 5 GHz channel serving as a dedicated backhaul. With Wi-Fi 6E, a tri-band solution is mandated – 2.4 GHz, 5 GHz, and 6 GHz support are all needed for certification. The 6 GHz channel, as discussed previously, opens up multiple 160 MHz channels that are free of interference. The RBKE960 series supports the three mandated bands and also retains a dedicated 5 GHz backhaul, making it a quad-band solution with combined Wi-Fi speeds of up to 10.8 Gbps across all four bands.



Netgear has opted to retain 5 GHz for the backhaul in order to maximize range. While the 6 GHz band is interference-free, the power restrictions prevent the communication in those channels from having as much range as the existing 5 GHz ones. Having a dedicated backhaul ensures that all the ‘fronthaul’ channels are available for client devices (shared backhauls result in a 50% reduction in speeds available for client devices for each additional node / satellite). The benefits of Wi-Fi 6E and what consumers can expect from the 6GHz band have already been covered in detail in our Nighthawk RAXE500 launch piece. The Orbi RBKE960 series supports up to seven 160 MHz channels, allowing for interference-free operation even in dense apartments with multiple neighbors.
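The airtime penalty of a shared backhaul can be sketched with simple arithmetic: each satellite hop on a shared backhaul halves the rate available to clients, because the same radio must re-transmit the traffic it just received. The 1,200 Mbps base link rate below is an assumed figure for illustration, not a Netgear specification.

```python
# Illustrative sketch of why a dedicated backhaul matters in a mesh.
# Shared backhaul: each hop halves client-visible throughput, since the
# radio spends half its airtime relaying. Dedicated backhaul: the
# client-facing bands are unaffected by hop count.
# The 1200 Mbps base rate is an assumed value, not a Netgear spec.

def shared_backhaul_rate(base_mbps: float, hops: int) -> float:
    """Client-visible rate after n satellite hops on a shared backhaul."""
    return base_mbps * 0.5 ** hops

def dedicated_backhaul_rate(base_mbps: float, hops: int) -> float:
    """Dedicated backhaul radio: client-facing rate stays constant."""
    return base_mbps

for hops in range(3):
    print(f"{hops} hops: shared={shared_backhaul_rate(1200, hops):.0f} Mbps, "
          f"dedicated={dedicated_backhaul_rate(1200, hops):.0f} Mbps")
```

This simplified model ignores real-world factors like channel contention and range, but it captures why Orbi dedicates a radio to the backhaul.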


The RBKE960 supports 16 Wi-Fi streams, making for an extremely complex antenna design. Netgear has made improvements based on past experience to the extent that the new Orbi RBKE960 performs better than the Orbi RBK850 even for 5GHz communication (the larger size of the unit also plays a part in this).



In terms of hardware features, the router sports a 10G WAN port, 3x 1GbE, and 1x 2.5GBASE-T ports. The satellite doesn’t have the WAN port, but retains the other three. The 2.5GBASE-T port can be used to create an Ethernet backhaul between the router and the satellite. On the software side, the new Orbi creates four separate Wi-Fi networks for different use-cases.



The reduced range in the 6GHz band means that large homes might require multiple satellites to blanket the whole area with 6GHz coverage.


Installation and management are handled via the Orbi app. Netgear also includes the NETGEAR Armor cyber-security suite with integrated parental controls – some features in Armor are subscription-based.



Netgear is also introducing an ‘Orbi Black Edition’ available exclusively on Netgear’s own website. With the RAXE500 setting the stage with its $600 price point, it is no surprise that the RBSE960 satellite costs the same (trading the WAN port and other features for an extra 4×4 radio). A kit with a router and a single satellite (RBKE962) is priced at $1100, while the RBKE963 (an additional satellite) bumps up the price tag to $1500. With home Wi-Fi becoming indispensable thanks to the work-from-home trend among other things, Netgear believes consumers will be ready to fork out what is essentially the price of a high-end smartphone or notebook for a reliable and future-proof Wi-Fi solution.



Source: AnandTech – Netgear Updates Orbi Lineup with RBKE960 Wi-Fi 6E Quad-Band Mesh System