Predators: Acer Launches 24.5 & 27-Inch Fast IPS 240 Hz Monitors

Acer Japan has unleashed the company’s first Predator displays to use Fast IPS panels, which offer a 240 Hz refresh rate along with the advantages inherent to IPS technology, including rich colors and wide viewing angles. The new 24.5-inch and 27-inch Acer Predator 240 Hz IPS LCDs also support VESA’s Adaptive-Sync technology and are DisplayHDR 400-certified.


Acer’s lineup of Fast IPS monitors currently consists of two models: the 24.5-inch Predator XB253QGXbmiiprzx and the 27-inch Predator XB273GXbmiiprzx (not a typo). General characteristics of the displays are similar to those of other Fast IPS-based LCDs available today: a 1920×1080 resolution, 400 nits peak luminance, a 1000:1 contrast ratio, 178°/178° viewing angles, a 1 ms GtG response time (which can be reduced further to 0.1 ms – 0.5 ms with overdrive, depending on the model), and a 240 Hz maximum refresh rate with VESA’s Adaptive-Sync technology as well as NVIDIA’s G-Sync Compatible certification on top. The LCDs can display 16.78 million colors and reproduce 99% of the sRGB color space, just like other monitors that use the same panels.


For connectivity, the new Acer Predator 240 Hz monitors have one DisplayPort 1.2a connector, two HDMI 2.0b inputs, and a quad-port USB 3.0 hub. On the audio side of things, the LCDs have 2 W stereo speakers and a headphone output.


As is traditional for Acer Predator monitors aimed at esports professionals and hardcore gamers, the displays come equipped with aggressive-looking stands that can adjust height, tilt, and swivel. The LCDs can also work in portrait mode.

Acer’s Fast IPS Displays with a 240 Hz Refresh Rate

| | XB273GXbmiiprzx | XB253QGXbmiiprzx |
|---|---|---|
| Panel | 27-inch class IPS | 24.5-inch class IPS |
| Native Resolution | 1920 × 1080 | 1920 × 1080 |
| Maximum Refresh Rate | 240 Hz | 240 Hz |
| Dynamic Refresh Technology | VESA Adaptive-Sync, NVIDIA G-Sync Compatible certified | VESA Adaptive-Sync, NVIDIA G-Sync Compatible certified |
| VRR Range | DP: 50 Hz – 240 Hz (?), HDMI: 56 Hz – 240 Hz (?) | DP: 50 Hz – 240 Hz (?), HDMI: 56 Hz – 240 Hz (?) |
| Brightness | Standard: 350 cd/m², HDR: 400 cd/m² | Standard: 400 cd/m², HDR: 400 cd/m² |
| Contrast | 1000:1 | 1000:1 |
| Viewing Angles | 178°/178° horizontal/vertical | 178°/178° horizontal/vertical |
| Response Time | 1 ms GtG (0.1 ms with overdrive) | 1 ms GtG (0.5 ms with overdrive) |
| Pixel Pitch | ~0.3113 mm | ~0.2825 mm |
| Pixel Density | ~82 PPI | ~90 PPI |
| Color Gamut Support | 99% sRGB | 99% sRGB |
| Inputs | 1 × DisplayPort 1.2a, 2 × HDMI 2.0b | 1 × DisplayPort 1.2a, 2 × HDMI 2.0b |
| Audio | Headphone output | Headphone output |
| USB | 4-port USB 3.0 hub | 4-port USB 3.0 hub |
| Stand | Height: +/- 115 mm, Tilt: -5° to 20°, Swivel: -20° to 20°, Pivot: -90° to 90°, built-in cable management | Height: +/- 115 mm, Tilt: -5° to 25°, Swivel: -20° to 20°, Pivot: -90° to 90° |
| Warranty | 3 years | 3 years |
| MSRP | ? | ? |
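The pixel density and pixel pitch figures above follow directly from the shared 1920×1080 resolution and the two panel diagonals. A minimal sketch of that arithmetic (assuming the nominal 27-inch and 24.5-inch diagonals; actual panel dimensions may differ slightly):

```python
import math

def ppi_and_pitch(h_px: int, v_px: int, diagonal_in: float):
    """Return (pixels per inch, pixel pitch in mm) for a given resolution and diagonal."""
    diagonal_px = math.hypot(h_px, v_px)   # screen diagonal measured in pixels
    ppi = diagonal_px / diagonal_in        # pixels per inch along the diagonal
    pitch_mm = 25.4 / ppi                  # size of one pixel in millimeters
    return ppi, pitch_mm

for name, diag in (("XB273GX (27-inch)", 27.0), ("XB253QG (24.5-inch)", 24.5)):
    ppi, pitch = ppi_and_pitch(1920, 1080, diag)
    print(f"{name}: ~{ppi:.0f} PPI, ~{pitch:.4f} mm pixel pitch")
# -> ~82 PPI / ~0.3113 mm and ~90 PPI / ~0.2825 mm, matching the table
```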

So far, only Acer Japan has introduced the company’s first 240 Hz Fast IPS Predator-branded displays, with plans to start selling them as early as this week; we are not sure about the plans of Acer’s divisions in other countries. The smaller 24.5-inch Predator XB253QGXbmiiprzx is expected to be priced at ¥46,000 (think an MSRP of around $430 in the States), whereas the larger 27-inch Predator XB273GXbmiiprzx is projected to cost ¥55,000 (so expect an MSRP of about $500 in the USA).




Sources: Acer, PC Watch



Source: AnandTech – Predators: Acer Launches 24.5 & 27-Inch Fast IPS 240 Hz Monitors

AMD Financial Analyst Day 2020 Round-Up: Laying A Path For Bigger & Better Things

AMD’s first Financial Analyst Day since 2017 has just wrapped up. In the last three years AMD has undergone a dramatic change, launching its Zen CPU architecture and greatly improving the trajectory of a company that was flirting with bankruptcy only a few years ago. And now that AMD’s foundation is once again secure, the company has gathered to talk to its loyal (and, looking at stock prices, now much richer) investors about how it’s planning to use this success to push into bigger and better things.


We’ve been covering FAD 2020 throughout the afternoon, and we have seen AMD make a number of announcements and roadmap reveals throughout the event. The individual announcements are below, and now that the event has wrapped up, we want to provide a quick summary of what AMD is going to be up to over the next five years.



Source: AnandTech – AMD Financial Analyst Day 2020 Round-Up: Laying A Path For Bigger & Better Things

AMD Clarifies Comments on 7nm / 7nm+ for Future Products: EUV Not Specified

As part of AMD’s Financial Analyst Day 2020, the company gave the latest updates for its CPU and GPU roadmap. A lot of this we have seen before, with the company talking out to Zen 4 and Genoa on its datacenter CPU product line, out to Zen 3 and Ryzen 4000 with the consumer product line, and now with the RDNA/CDNA split between consumer and compute graphics. In previous graphs of a similar nature, AMD used the term ‘7nm+’ when referring to products beyond the first iteration of 7nm. AMD has today clarified to us that this does not mean they are using TSMC’s N7+ process node for those items.


TSMC has three high-level versions of its 7nm process:


  • N7, the basic initial version, which uses DUV-only tools (so no EUV)
  • N7P, the second-generation version of N7, which is also DUV-only
  • N7+, a version of N7 that uses EUV for a number of layers in the metal stack

This nomenclature has been finalized within the past year or so.


Before this, AMD had presented various CPU and GPU roadmaps to the public. For Zen 2 hardware, such as the Ryzen 3000 series (Matisse), AMD had labeled this as ‘7nm’, which was widely interpreted to mean TSMC’s N7 process. For future products, such as Zen 3, AMD’s slides listed ‘7nm+’, which everyone had understood to mean ‘a better version of 7nm’.



Example Roadmap from CES 2018



From Next Horizons in July 2019


Because AMD labeled those products as 7nm+, and TSMC called its EUV version of 7nm N7+, one of the obvious assumptions people made is that where AMD wrote 7nm+, the product would be on the N7+ process. We have since learned that this is not entirely correct.


In order to avoid confusion, AMD is dropping the ‘+’ from its roadmaps. In speaking with AMD, the company confirmed that its next generations of 7nm products are likely to use process enhancements and the best high-performance libraries for the target market; however, it is not explicitly stating whether this would be N7P or N7+, just that it will be ‘better’ than the base N7 used in its first 7nm line.



This doesn’t necessarily mean that AMD isn’t going to be using EUV in the future – we were told it will be decided on a case-by-case basis, and that at this time AMD is simply not specifying which version of TSMC’s 7nm it plans to use. More will be detailed at future events.


Interested in more of our AMD Financial Analyst Day 2020 Coverage? Click here.



Source: AnandTech – AMD Clarifies Comments on 7nm / 7nm+ for Future Products: EUV Not Specified

Updated AMD Ryzen and EPYC CPU Roadmaps March 2020: Milan, Genoa, and Vermeer

Everyone is interested in roadmaps – they give us a sense of what is coming in the future, and for investors, they set a level of expectation as to where the company might be in one to five years. Today at AMD’s Financial Analyst Day, the company gave the latest updates on the CPU side of the business, for both consumer and enterprise.


AMD stated that its CPU roadmaps for its enterprise portfolio are going to offer more vision into the future than its consumer side for a couple of reasons: the enterprise market is built on a longer product cycle, so it helps system planners to know publicly what is in the pipe, and from an investor standpoint the enterprise market ultimately offers the bigger financial opportunity.


To that end, AMD confirmed what we essentially knew, with Zen 3 based Milan coming in ‘late 2020’.


Zen 4 based Genoa has already been announced as the CPU to power the El Capitan supercomputer, and in this roadmap AMD has put it as coming out by 2022. We asked AMD for clarification, and they stated that in this sort of graph, we should interpret it to mean that the full Genoa stack should be formally launched by the end of 2022. Given AMD’s recent 12-15 month cadence with the generations of EPYC, and the expected launch of Milan late this year, we would expect to see Genoa in early 2022.



Astute readers might notice that Milan / Zen 3 is listed as ‘7nm’, where previously it was listed as ‘7nm+’. We’ve got a whole news post on why AMD has made this change, but the short of it is that AMD initially used ‘7nm+’ to mean ‘an advanced version of 7nm’. When TSMC named its EUV version of 7nm N7+, people assumed the two were the same, so AMD wanted to clarify that Milan is on a version of 7nm, with the exact version to be disclosed at a later date. In the future the company will avoid using ‘+’ so this doesn’t happen again (!). We also have Genoa listed as a 5nm product.


Harder numbers about Milan and Genoa are expected to be unveiled closer to their respective launch times.


On the consumer side, AMD said a little less, with its roadmap only going out to Zen 3, which has the codename ‘Vermeer’ for the desktop product.



In this graph, we see that the Zen 3 product is on the far right, but so is the date – 2021. Does this mean Zen 3 for consumers is coming in 2021? We asked AMD to clarify, and were told that we should interpret this as meaning that the full range of Zen 3 consumer products, such as desktop CPUs, HEDT CPUs, mobile APUs, and consumer APUs, should all be available by the end of 2021. The company clarified that Zen 3 will hit the consumer market ‘later this year’, meaning late 2020.


So here comes a pertinent question – what is going to come first in 2020? Zen 3 for enterprise is listed as ‘late 2020’, and Zen 3 for consumer is ‘later this year’. AMD makes a lot more money on its enterprise products than its consumer products, and while it enjoys a healthy performance lead in both, it really wants to push its market share in enterprise a lot more to drive home the bigger financial potential. With this in mind, I highly suspect that given AMD’s lead in the consumer market, we might see the company push more of its Zen 3 silicon into the enterprise market as a priority, with only a limited 2020 consumer release. I could be wrong, but we will find out closer to the time.


Interested in more of our AMD Financial Analyst Day 2020 Coverage? Click here.



Source: AnandTech – Updated AMD Ryzen and EPYC CPU Roadmaps March 2020: Milan, Genoa, and Vermeer

AMD's RDNA 2 Gets A Codename: “Navi 2X” Comes This Year With 50% Improved Perf-Per-Watt

While AMD’s Financial Analyst Day is first and foremost focused on the company’s financial performance – it’s right there in the title – this doesn’t stop the company from dropping a nugget or two of technical information along the way, to help excite investors on the future of the company.


One such nugget this year involves AMD’s forthcoming RDNA 2 family of client GPUs. The successor to the current RDNA (1) “Navi” family, RDNA 2 has been on AMD’s roadmap since last year. And it’s been previously revealed that, among other things, it will be the GPU architecture used in Microsoft’s forthcoming Xbox Series X gaming console. And while we’re still some time off from a full architecture reveal from AMD, the company is offering just a few more details on the architecture.


First and foremost, RDNA 2 is when AMD will fill out the rest of its consumer product stack, with their eye firmly on (finally) addressing the high-end, extreme performance segment of the market. That segment is small in volume, but it’s impossible to overstate how important it is for AMD to be seen there, competing with the best of the best from other GPU vendors. While AMD isn’t talking about specific SKUs or performance metrics at this time, RDNA 2 will include GPUs that address this portion of the market, with AMD aiming for the performance necessary to deliver “uncompromising” 4K gaming.



But don’t call it “Big Navi”. RDNA 2 isn’t just a series of bigger-than-RDNA (1) chips. The GPUs, which will make up the “Navi 2X” family, also incorporate new graphics features that set them apart from earlier products. AMD isn’t being exhaustive here – and indeed they’re largely confirming what we already know from the Xbox Series X announcement – but hardware ray tracing as well as variable rate shading are on tap for RDNA 2. This stands to be important for AMD at multiple levels, not the least of which is closing the current feature gap with arch-rival NVIDIA.



And AMD didn’t stop there, either. Even to my own surprise, AMD isn’t just doing RDNA (1) with more features; RDNA 2 will also deliver on perf-per-watt improvements. All told, AMD is aiming for a 50% increase in perf-per-watt over RDNA (1), which is on par with the improvements that RDNA (1) delivered last year. Again speaking at a high level, these efficiency improvements will come from several areas, including microarchitectural enhancements (AMD even lists improved IPC here), as well as optimizations to physical routing and unspecified logic enhancements to “reduce complexity and switching power.”
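Since AMD pitches RDNA 2’s 50% target as on par with the gain RDNA (1) delivered over GCN, the two claimed steps compound neatly. A quick sketch of that back-of-the-envelope math (these are AMD’s stated targets, not measured results):

```python
gcn_to_rdna1 = 1.50    # RDNA (1) vs. GCN: the ~50% perf-per-watt gain delivered last year
rdna1_to_rdna2 = 1.50  # RDNA 2 vs. RDNA (1): AMD's stated 50% target

cumulative = gcn_to_rdna1 * rdna1_to_rdna2
print(f"RDNA 2 vs. GCN perf-per-watt, if both claims hold: {cumulative:.2f}x")
# -> 2.25x: roughly the same work for ~44% of the power, or ~2.25x the work in the same power budget
```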



Process nodes will also play some kind of role. While AMD is still going to be on a 7nm process – and they are distancing themselves from saying that they’ll be using TSMC’s EUV-based “N7+” node – the company has clarified that they will be using an enhanced version of 7nm. What those enhancements are, we aren’t sure (possibly TSMC’s N7P?), but AMD won’t be standing still on process tech.


This strong focus on perf-per-watt, in turn, will be a key component of how AMD can launch itself back into being a fully viable, top-to-bottom competitor with NVIDIA. While AMD is already generally at parity with NVIDIA on power efficiency, part of that parity comes from an atypical advantage in manufacturing nodes that AMD can’t rely on keeping. NVIDIA isn’t standing still for 2020, and neither can AMD. Improving power efficiency for RDNA 2 (and beyond) will be essential for convincingly beating NVIDIA.



Overall, AMD has significant ambitions with RDNA 2, and it shows. The architecture will be the cornerstone of a generation of consoles, and it will be AMD’s first real shot in the last few years at taking back the flagship video card performance crown. So we’re eagerly waiting to see what else RDNA 2 will bring to the table, and when this year the first video cards based on the new architecture will begin shipping.



Source: AnandTech – AMD’s RDNA 2 Gets A Codename: “Navi 2X” Comes This Year With 50% Improved Perf-Per-Watt

AMD's 2020-2022 Client GPU Roadmap: RDNA 3 & Navi 3X On the Horizon With More Perf & Efficiency

As has become something of a tradition for AMD, this year’s Financial Analyst Day included a high level update to the company’s GPU roadmap. The last roadmap we saw from AMD, unveiled back at the Radeon RX 5700 XT launch last summer, went as far as RDNA 2. Now with RDNA 2 (Navi 2X) set to launch this year, AMD has extended the roadmap to include what’s next. And what’s next is RDNA 3.


The successor to RDNA 2, RDNA 3 will build off of what AMD will achieve in the coming quarters with their forthcoming GPU architecture. With RDNA 2 not off the ground yet, for obvious reasons AMD is saying very little about RDNA 3 at this time, especially with regards to features. But what we do know is that, like RDNA 2, AMD is targeting continual perf-per-watt increases, as power consumption remains the ultimate bottleneck to total GPU performance.


Helping AMD get there will be a new process node. For now the company is not disclosing which node it will be, using “Advanced Node” as a catch-all for what they decide to use. Coming off of TSMC’s 7nm process they will have several options to use, including TSMC’s 6nm and 5nm processes. And given TSMC’s roadmaps, it’s more or less inevitable that this will be the point where AMD begins using an EUV-based process for their GPUs, as AMD has indicated that this year’s RDNA 2 will not be using TSMC’s EUV-based 7nm+ process.


Overall AMD has a lot of work ahead of them. While RDNA (1) and Navi 1X helped to reinvigorate AMD last year, the company is still struggling to reestablish itself as a fully viable top-to-bottom competitor to market leader NVIDIA. AMD currently trails on hardware features, and while their overall perf-per-watt is competitive, that is due in part to an atypical advantage in manufacturing nodes. So continuing to quickly iterate their GPU architecture to improve both features and overall perf-per-watt – including moving to newer manufacturing nodes – is exactly what the company needs to do in order to become the market leader they desire to be. On the whole it’s very Zen-like, and this is clearly intentional.


Meanwhile it’s interesting to see that AMD is going to keep the Navi architecture name leading into RDNA 3. With the architecture set to include numerous new features compared to RDNA (1) as well as a new process node, we’d normally see a naming break at some point along that line. As more details come out about AMD’s next two GPU architectures, we’ll find out more about just what AMD is changing and improving.




Source: AnandTech – AMD’s 2020-2022 Client GPU Roadmap: RDNA 3 & Navi 3X On the Horizon With More Perf & Efficiency

AMD Unveils CDNA GPU Architecture: A Dedicated GPU Architecture for Data Centers

Over the last decade, the industry has seen a boom in demand for GPUs for the data center. Driven in large part by rapid progress in neural networking, deep learning, and all things AI, GPUs have become a critical part of some data center workloads, and their role continues to grow with every year.


Unfortunately for AMD, they’ve largely been bypassed in that boom. The big winner by far has been NVIDIA, who has gone on to make billions of dollars in the field. Which is not to say that AMD hasn’t had some wins with their previous and current generation products, including the Radeon Instinct series, but their share of that market and its revenue has been a fraction of what NVIDIA has enjoyed.


AMD’s fortunes are set to change very soon, however. We already know that AMD (as a supplier to Cray) has scored two big supercomputer wins with the United States – totaling over $1 billion for CPUs and GPUs – so there have been a lot of questions on just what AMD has been working on that has turned the heads of the US government. The answer, as AMD is revealing today, is their new dedicated GPU architecture for data center compute: CDNA.


The Compute counterpart to the gaming-focused RDNA, CDNA is AMD’s compute-focused architecture for data center and other uses. Like everything else being presented at today’s Financial Analyst Day, AMD’s reveal here is at a very high level. But even at that high level, AMD is making it clear that there’s a fission of sorts going on in their GPU development process, leading to CDNA and RDNA becoming their own architectures.



Just how different these architectures are (and over time, will be) remains to be seen. AMD has briefly mentioned that CDNA is going to have fewer “graphics bits”, so it’s likely that these parts will have limited (if any) graphics capabilities, making them quite dissimilar from RDNA GPUs in some ways. So broadly speaking, AMD is now on a path similar to what we’ve seen from other GPU vendors, where compute GPUs are increasingly becoming a distinct class of product, as opposed to repurposed gaming GPUs.


AMD’s goals for CDNA are simple and straightforward: build a family of big, powerful GPUs that are specifically optimized for compute and data center usage in general. This is a path AMD already started down with GPUs such as Vega 20 (used in the Radeon Instinct MI50/MI60), but now with even more specialization and optimization. A big part of this will of course be machine learning performance, which means supporting faster execution of smaller data types (e.g. INT4/INT8/FP16), and AMD even goes as far as to explicitly mention tensor ops. But this can’t come at the cost of traditional FP32/FP64 compute either; the supercomputers that AMD’s GPUs will be going into will be doing a whole lot of high precision math. So AMD needs to perform well across the compute and machine learning spectrum, across many data types.


To get there, AMD will also need to improve their performance-per-watt, as this is an area where they have frequently trailed. Today’s Financial Analyst Day announcement isn’t going into any real detail on how AMD is going to do this – beyond the obvious improvements in manufacturing processes, at least – but AMD is keenly aware of their need to improve.


All the while CDNA will also differentiate itself with features, including some things only AMD can do. Enterprise-grade reliability and security will be one leg here, including support for ever-popular virtualization needs.



But AMD will also be leaning on their Infinity Fabric to give them an edge in performance scaling and CPU/GPU integration. Infinity Fabric has been a big part of AMD’s success story thus far on the CPU side of matters, and AMD is applying the same logic to the GPU side. This means using IF to not only link GPUs to other GPUs, but also to link GPUs to CPUs. Which is something we’ve already seen in the works for AMD’s supercomputer wins, where both systems will be using IF to team up four GPUs with a single CPU.


AMD’s big win, however, will be a bit further down the line, when their 3rd gen Infinity Fabric is ready. It’s at that point where AMD intends to deliver a fully unified CPU/GPU memory space, fully leveraging their ability to provide both the CPUs and GPUs for a system. Unified memory can take a few different forms, so there are some important details missing here that will be saved for another day, but ultimately having a unified memory space should make programming heterogenous systems a whole lot easier, which in turn makes incorporating GPUs into servers an even better choice.


And since CDNA is now its own branch of AMD’s GPU architecture – with command of it falling under data center boss Forrest Norrod, interestingly enough – it also has its own roadmap with multiple generations of GPUs. With AMD treating Vega 20 as the branching point here, the company is revealing two generations of CDNA to come, aptly named CDNA (1) and CDNA 2.



CDNA (1) is AMD’s impending data center GPU. We believe this to be AMD’s “Arcturus”, and according to AMD it will be optimized for machine learning and HPC uses. This will be an Infinity Fabric-enabled part, using AMD’s second-generation IF technology. Keeping in mind that this is a high level overview, at this point it’s not super clear whether this part is going into either of AMD’s supercomputer wins; but given what we know so far about the later El Capitan – which is now definitely using CDNA 2 – CDNA (1) may be what ends up in Frontier.


Following CDNA (1), of course, is CDNA 2. AMD is not sharing too much in the way of details here – after all, they haven’t yet shipped the first CDNA let alone the second – but they have confirmed that it will incorporate AMD’s third generation Infinity Fabric. As well, it will use a newer manufacturing node, which AMD is calling “Advanced Node” for now, as they are not disclosing the specific node they intend to use. So in a few different respects, CDNA 2 will be the pièce de résistance of AMD’s heterogeneous compute plans, where they finally get to have a unified, coherent memory system across discrete CPUs and GPUs.


As for shipping dates, while AMD isn’t disclosing exact dates at this time, the roadmap itself only extends to the end of 2022, meaning that AMD expects to be shipping CDNA 2 in volume by then. This aligns fairly well with this week’s El Capitan announcement, which has the supercomputer being delivered in 2023.


Overall, AMD has some significant ambitions for their future data center GPUs. And while they have a lot of catching up to do to realize those ambitions, they’ve certainly laid out a promising roadmap to get there. AMD isn’t wrong about the importance of the data center market from both a technology perspective and a revenue perspective, and having a dedicated branch of their GPU architecture for it may be just what AMD needs to finally find the success they seek.




Source: AnandTech – AMD Unveils CDNA GPU Architecture: A Dedicated GPU Architecture for Data Centers

AMD Moves From Infinity Fabric to Infinity Architecture: Connecting Everything to Everything

Another element to AMD’s Financial Analyst Day 2020 was the disclosure of how the company intends to evolve its interconnect strategy with its Infinity Fabric (IF). The plan over the next two generations of products is for the IF to turn into its own architectural design, no longer just between CPU-to-CPU or GPU-to-GPU, and future products will see a near all-to-all integration.


AMD introduced its Infinity Fabric with the first generation of Zen products, loosely describing it as a superset of HyperTransport that allows for fast connectivity between the different chiplets within AMD’s enterprise processors, as well as between sockets in a multi-socket server. With Rome and Zen 2, the company unveiled its second generation IF, providing more speed but also GPU-to-GPU connectivity.



This second generation design allowed two CPUs to be connected, as well as four GPUs to be connected in a ring; however, the CPU-to-GPU connection was still based on PCIe. With the next generation, now dubbed the Infinity Architecture, the company is scaling it up to allow not only an almost all-to-all connection (6 links per GPU) for up to eight GPUs, but also CPU-to-GPU connectivity. This should allow for markedly improved operation between the two, such as unified memory spaces and the benefits that come with them. AMD is citing a considerable performance uplift with this paradigm.
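To put the “6 links per GPU, up to eight GPUs” figure in context, a short sketch of the link counting is below. AMD hasn’t disclosed the actual topology, so this only counts links rather than reproducing AMD’s wiring:

```python
def total_links(gpus: int, links_per_gpu: int) -> int:
    # Each physical link joins two GPUs, so the total is half the sum of per-GPU links.
    return gpus * links_per_gpu // 2

print(total_links(8, 7))  # 28 links: a true all-to-all mesh needs 7 links per GPU
print(total_links(8, 6))  # 24 links: the "almost all-to-all" arrangement with 6 links per GPU
```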



AMD and LLNL recently disclosed that the new El Capitan supercomputer will use the latest generation Infinity Architecture, with one Zen 4-based Genoa EPYC CPU paired with four GPUs. This puts the timeline for this feature in the ballpark of early 2022.



Interested in more of our AMD Financial Analyst Day 2020 Coverage? Click here.





Source: AnandTech – AMD Moves From Infinity Fabric to Infinity Architecture: Connecting Everything to Everything

AMD Discusses ‘X3D’ Die Stacking and Packaging for Future Products: Hybrid 2.5D and 3D

One of AMD’s key messages at its Financial Analyst Day 2020 is that the company wants to remain on the leading edge of process node technology as well as use the latest packaging technology in its newest products. To that end, AMD discussed how it has surged forward with not only 2.5D interposer designs in its GPUs, but also stacked memory and chiplet implementations. The next stage of this journey, according to AMD, is a new X3D die stacking and packaging technology.


The nature of the Financial Analyst Day means that AMD didn’t go into too much detail here, aside from a few diagrams, but the company was clear that it sees its aggressive roadmap for chiplet and 3D integration leading to this X3D design, where the X stands for ‘hybrid’. AMD’s diagrams show four main compute chiplets, arranged in a 2×2 pattern, each paired with a 4-high stacked die. All of these chips then sit on a large interposer underneath.



Representation of AMD’s diagram


In this case, it seems that the ‘die stacking’ element points to HBM or some form of memory, while the compute chiplets in the middle are only one high, but all connected through the interposer. AMD is claiming that this level of integration offers a 10x increase in bandwidth density, allowing more data to be shuttled from the memory stacks into the cores (and hopefully from storage into the memory stacks too).


The new packaging technology was listed as ‘future’, with no confirmed date. With AMD today announcing its new CDNA architecture for programmable compute graphics solutions, that sort of product would fit in really well here. With AMD’s prowess in CPU chiplet design, there could also be additional scope in future enterprise CPU developments as well. Graphics is a bit more farfetched, as the chiplet paradigm in graphics is a tricky one to solve.


We asked AMD for more information, such as whether X3D correlates to any of TSMC’s latest packaging developments like CoWoS or LIPINCON; however, AMD stated that more would be detailed closer to the time at a dedicated event. We have requested an AMD Architecture Day as soon as travel allows. More to come as we find out about it.


Interested in more of our AMD Financial Analyst Day 2020 Coverage? Click here.




Source: AnandTech – AMD Discusses ‘X3D’ Die Stacking and Packaging for Future Products: Hybrid 2.5D and 3D

AMD Shipped 260 Million Zen Cores by 2020

Today’s Financial Analyst Day 2020 from AMD is full of small nuggets of information. With the company building its foundation on its new x86 Zen high-performance architecture, keeping track of the finances is one good marker of how well its products are doing. Another marker is how many chips are in the wild. To that end, AMD’s CTO Mark Papermaster presented this graph:



Since the launch of the first Zen products in 2017, the company states that it has shipped 260,000,000 Zen cores to date. It is worth noting that this is cores, not chips, and so there’s a mix of everything from 2-core to 64-core products in there; the figure covers consumer, enterprise, commercial, and mobile products. With the launch of the Zen 2 based consoles later this year, this number is expected to shoot up by a significant margin.


Reading off this graph, we get the following numbers:


2017-2018: ~30m cores

2018-2019: 80m cores (~110m total)

2019-2020: 150m cores (~260m total)
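A quick sanity check on those cumulative totals (the yearly figures are our own readings of AMD’s chart, so treat them as approximations):

```python
yearly_cores_m = {"2017-2018": 30, "2018-2019": 80, "2019-2020": 150}  # millions, read off the slide

running = 0
for period, cores in yearly_cores_m.items():
    running += cores
    print(f"{period}: +{cores}m cores, ~{running}m cumulative")
# Ends at ~260m, matching AMD's stated 260 million Zen cores shipped to date.
```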


Interested in more of our AMD Financial Analyst Day 2020 Coverage? Click here.




Source: AnandTech – AMD Shipped 260 Million Zen Cores by 2020

Buffalo Launches Miniature Rugged External SSD w/ USB Type-A & Type-C

SSDs can survive drops and other kinds of hostile treatment much better than hard drives, but they can still be broken if their PCB or one of their chips gets damaged. For those who want to reduce the risk of losing their data, Buffalo has introduced a new family of SSDs — the SSD-PSMU3 — that is specifically designed to withstand drops. Unlike typical rugged devices, the new drives are rather miniature and more closely resemble flash drives.


Buffalo’s SSD-PSMU3 series SSDs are designed to endure the MIL-STD-810G 516.6 Procedure IV drop test, known as the ‘transit drop’ test. This means that the device was tested to survive six face drops, eight corner drops, and 12 edge drops from a height of around 1.2 meters, remaining in working condition and suffering no physical or internal damage. The drives measure 33×9.5×59.5 mm and weigh 15 grams (dimensions and weight akin to those of a box of PEZ mints), so it should not be particularly hard to make them rugged enough to survive drops from 1.2 meters.



The SSD-PSMU3 drives are offered in 120 GB, 250 GB, 480 GB, and 960 GB capacities and use a USB 3.2 Gen 1 Micro-B interface that connects to hosts via a USB Type-A or a USB Type-C cable. Buffalo rates the drives for about 430 MB/s of throughput, but considering the interface used, we are probably looking at something nearer ~400 MB/s in the real world once the link’s 8b/10b encoding and USB protocol overhead are taken into account.
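That estimate falls straight out of the link-rate arithmetic. A minimal sketch is below; the exact protocol overhead varies by workload and bridge chip, so the final figure is an approximation rather than a Buffalo specification:

```python
line_rate_gbps = 5.0           # USB 3.2 Gen 1 signaling rate (5 Gbit/s)
encoding_efficiency = 8 / 10   # 8b/10b line coding: only 8 of every 10 bits carry data
payload_mb_per_s = line_rate_gbps * 1e9 * encoding_efficiency / 8 / 1e6

print(f"Post-encoding ceiling: {payload_mb_per_s:.0f} MB/s")  # 500 MB/s
# Buffalo's 430 MB/s rating, and our ~400 MB/s real-world expectation, both sit below this
# ceiling once USB packet/protocol overhead is accounted for.
```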



The rugged SSDs fully support Buffalo’s SecureLock Mobile2 technology, which encrypts data using AES-256, though it is unclear whether the encryption is done in hardware or software. In addition, the drives support SMART and can be used with Buffalo’s Mimamori Signal software, which predicts failures of storage components based on SMART data.



Set to be available in white, aquamarine, and pink, Buffalo’s rugged SSD-PSMU3 drives will hit the shelves in Japan starting March 4. The cheapest 120 GB drive will cost ¥5,700 ($54) without VAT, whereas the highest capacity 960 GB model will be priced at ¥22,300 ($210) without taxes.




Source: Buffalo (via Hermitage Akihabara)




Source: AnandTech – Buffalo Launches Miniature Rugged External SSD w/ USB Type-A & Type-C

SSSTC Launches CL1 M.2-2230 SSD: SMI, Up to 512 GB, Up to 2 GB/s

As notebooks get thinner and smaller, PC manufacturers require smaller components, and therefore demand tinier SSDs as well as densely packed SoCs. BGA SSDs are of course among the most compact storage devices around, but modular drives offer flexibility for PC manufacturers as well as end users. To address this need, in recent years SSD vendors have started to offer M.2-2230 form-factor drives for client computers. This week, Kioxia-owned SSSTC unveiled its new CL1 M.2-2230 SSD, which is not only fast but also interesting for a few other reasons.


The SSSTC CL1 M.2-2230 SSD is based on Silicon Motion’s SM2263XT controller (NVMe, quad-channel, DRAM-less, TCG, AES, PCIe 3.0 x4) and carries 128 GB, 256 GB, and 512 GB of usable 3D TLC NAND memory. The drive is rated for up to 2000 MB/s sequential read speed as well as up to 1100 MB/s sequential write speed, which is in line with what other ultra-compact SSDs offer.


Solid State Storage Technology Corp. (SSSTC) is the former SSD division of Lite-On that was acquired by Kioxia Holdings (formerly Toshiba Memory) last September. Toshiba Memory itself often used rebadged or customized Phison controllers for its SSDs, so the use of SMI’s SM2263XT is interesting. What is also noteworthy is that Kioxia’s own BG4 SSD (available in M.2-2230 and BGA M.2-1620 form-factors) is actually rated for higher sequential speeds. As a result, Kioxia now has two products competing for the same niche market segment.


SSSTC’s CL1 drives will likely be available shortly, but prices are currently unknown.




Source: SSSTC (via Hermitage Akihabara)



Source: AnandTech – SSSTC Launches CL1 M.2-2230 SSD: SMI, Up to 512 GB, Up to 2 GB/s

ATP’s DDR4-3200 Industrial DIMMs: Up to 128GB @ 1.2V for AMD & Intel

ATP has unveiled its latest memory modules for servers and industrial applications, boasting a 3200 MT/s data transfer rate, an industry-standard voltage, and capacities ranging from 2 GB to 128 GB. The modules are available in various form-factors and configurations to address a variety of designs.


ATP’s family of DDR4-3200 memory modules for server, embedded, and industrial applications runs at a standard 1.2 V and is validated to work with AMD’s EPYC 7002-series as well as Intel’s 2nd Generation Xeon Scalable CPUs; the modules are also ready for AMD’s upcoming Milan and Genoa CPUs, as well as Intel’s Cooper Lake and Ice Lake processors. ATP uses a variety of certified memory chips for the different modules, with capacities ranging from 2 GB to 128 GB (e.g., the former uses 4 Gb chips, whereas the latter relies on 16 Gb dies).



The industrial-grade modules from ATP use special PCBs featuring thicker gold contacts, PCB underfill, conformal coating, and anti-sulfur resistors that are meant to protect DIMMs from shock/vibration, electromagnetic disturbance, humidity, and harsh chemicals in the air. Also, like other industrial components they are rated for extreme temperatures from –40°C to +85°C. Last but not least, these modules undergo module-level test during burn-in (TDBI) to reveal weak DIMMs that can produce errors.



Given that ATP’s DDR4-3200 modules are aimed at a variety of designs, they come in LRDIMM, RDIMM, UDIMM, UDIMM ECC, SO-RDIMM, SO-DIMM, SO-DIMM ECC, Mini-RDIMM, and Mini-UDIMM ECC form-factors. Meanwhile, unbuffered DDR4-3200 modules are available in SO-DIMM, UDIMM, ECC UDIMM, ECC SO-DIMM, and RDIMM configurations.


The new DDR4-3200 DIMMs from ATP are expected to be available shortly, at prices that will depend on configurations and form-factors.




Source: ATP



Source: AnandTech – ATP’s DDR4-3200 Industrial DIMMs: Up to 128GB @ 1.2V for AMD & Intel

Going Big: Iiyama Intros 43-inch ProLite X4372UHSU-B1 4K Monitor

Using a TV-sized display as a monitor always seemed like a fanciful idea. Until one day it wasn’t. Thanks to the increasing commoditization of LCD panels and the continual downward pressure that has put on monitor prices, demand for large format monitors has been growing just as fast as monitors themselves. And while these kinds of large monitors are still far from ubiquitous, they’ve become an increasingly common sight in the monitor market.


Besides making them more accepted in general, one of the benefits of the normalization of large format monitors is that it’s enticed more manufacturers to enter the field. And now, Iiyama, a respected display maker, has become the latest vendor to jump into the market, introducing their own 42.5-inch monitor for work and play.


Iiyama’s ProLite X4372UHSU-B1 is a 42.5-inch monitor featuring an IPS panel with a 3840×2160 resolution. The display features a typical brightness of 450 nits, a 1300:1 contrast ratio, a 4 ms response time, and a 60 Hz refresh rate. The monitor can reproduce 1.07 billion colors and is listed as supporting HDR, but the manufacturer doesn’t say how much of the DCI-P3 gamut the monitor can reproduce, only noting that the LCD can cover 85% of the NTSC color gamut.



The manufacturer is positioning its ProLite X4372UHSU-B1 monitor for a wide range of applications, including CAD/CAM, entertainment, photography, and visualization. To that end, the monitor supports picture-by-picture and picture-in-picture capabilities, and comes with a total of four inputs: two DisplayPort 1.2 inputs as well as two HDMI 2.0 ports. The monitor also has a DisplayPort output for daisy-chaining another LCD. In addition, the device has a quad-port USB hub with two USB 3.0 and two USB 2.0 connectors. On the audio side of matters, the LCD has two 9 W speakers, a line-in, and a headphone output.



Like many other large-sized monitors, the Iiyama ProLite X4372UHSU-B1 comes with a modest stand that can only adjust tilt. The good news, at least, is that it supports VESA mounts, so it can be used with third-party stands if necessary.

Iiyama’s 42.5-Inch Display

| | ProLite X4372UHSU-B1 |
|---|---|
| Panel | 42.5″ IPS |
| Resolution | 3840 × 2160 |
| Refresh Rate | 60 Hz |
| Variable Refresh Rate | - |
| Response Time | 4 ms |
| Brightness | 450 cd/m² |
| Contrast | 1000:1 typical |
| Viewing Angles | 178°/178° horizontal/vertical |
| Pixel Density | 104 PPI (~0.245 mm pixel pitch) |
| Colors | 1.07 billion |
| Inputs | 2 × DisplayPort 1.2, 2 × HDMI 2.0, 1 × DisplayPort out |
| USB Hub | 4-port USB Type-A hub (2 × USB 3.0, 2 × USB 2.0) |
| Audio | Audio input, headphone output |
| Stand | Tilt: 1° to 8° |
| Launch Date | Q1 2020 |
| Launch Price | ~€480 |

The ProLite X4372UHSU-B1 is currently available from European retailers for around €480.





Source: Iiyama (via Guru3D)



Source: AnandTech – Going Big: Iiyama Intros 43-inch ProLite X4372UHSU-B1 4K Monitor

TSMC & Broadcom Develop 1,700 mm2 CoWoS Interposer: 2X Larger Than Reticles

With transistor shrinks slowing and demand for HPC gear growing, as of late there has been an increased interest in chip solutions larger than the reticle size of a lithography machine – that is, chips bigger than the maximum size at which a single chip can be produced. We’ve already seen efforts such as Cerebras’ truly massive 1.2 trillion transistor wafer scale engine, and they aren’t alone. As it turns out, TSMC and Broadcom have also been playing with the idea of oversized chips, and this week they’ve announced their plans to develop a supersized interposer to be used in Chip-on-Wafer-on-Substrate (CoWoS) packaging.


Overall, the proposed 1,700 mm² interposer is twice the size of TSMC’s 858 mm² reticle limit. Of course, TSMC can’t actually produce a single interposer this large all in one shot – that’s what the reticle limit is all about – so instead the company is essentially stitching together multiple interposers, building them next to each other on a single wafer and then connecting them. The net result is that an oversized interposer can be made to function without violating reticle limits.


The new CoWoS platform will initially be used for a new processor from Broadcom for the HPC market, and will be made using TSMC’s EUV-based 5 nm (N5) process technology. This system-in-package product features ‘multiple’ SoC dies as well as six HBM2 stacks with a total capacity of 96 GB. According to Broadcom’s press release, the chip will have a total bandwidth of up to 2.7 TB/s, which is in line with what Samsung’s latest HBM2E chips can offer.
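Dividing Broadcom’s headline numbers across the six HBM2 stacks gives the implied per-stack figures, assuming identical stacks (a simple division of the published totals, not a Broadcom disclosure):

```python
stacks = 6
total_capacity_gb = 96
total_bandwidth_tb_s = 2.7

print(f"Per stack: {total_capacity_gb / stacks:.0f} GB, "
      f"{total_bandwidth_tb_s * 1000 / stacks:.0f} GB/s")
# -> 16 GB and 450 GB/s per stack, in line with current HBM2E offerings

reticle_limit_mm2 = 858
print(f"Two stitched reticles: {2 * reticle_limit_mm2} mm^2")  # ~1,716 mm^2, i.e. the ~1,700 mm^2 interposer
```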


By doubling the size of SiPs using its mask stitching technology, TSMC and its partners can throw a significantly higher number of transistors at compute-intensive workloads. This is particularly important for HPC and AI applications, which are developing very fast these days. It is noteworthy that TSMC will continue refining its CoWoS technology, so expect SiPs larger than 1,700 mm² going forward.


Greg Dix, vice president of engineering for the ASIC products division at Broadcom, said the following:


“Broadcom is happy to have collaborated with TSMC on advancing the CoWoS platform to address a host of design challenges at 7nm and beyond. Together, we are driving innovation with unprecedented compute, I/O and memory integration and paving the way for new and emerging applications including AI, Machine Learning, and 5G Networking.”




Source: TSMC



Source: AnandTech – TSMC & Broadcom Develop 1,700 mm2 CoWoS Interposer: 2X Larger Than Reticles

El Capitan Supercomputer Detailed: AMD CPUs & GPUs To Drive 2 Exaflops of Compute

Back in August, the United States Department of Energy and Cray announced plans for a third United States exascale supercomputer, El Capitan. Scheduled to be installed in Lawrence Livermore National Laboratory (LLNL) in early 2023, the system is intended primarily (but not exclusively) for use by the National Nuclear Security Administration (NNSA), who uses supercomputers in their ongoing nuclear weapons modeling. At the time the system was announced, The DOE and LLNL confirmed that they would be buying a Shasta system from Cray (now part of HPE), however the announcement at the time didn’t go into any detail about what hardware would actually be filling one of Cray’s very flexible supercomputers.

But as of today, the wait is over. This afternoon the DOE and HPE are announcing the architectural details of the supercomputer, revealing that AMD will be providing both the CPUs and accelerators (GPUs), as well as revising the performance estimate for the supercomputer. Already expected to be the fastest of the US’s exascale systems, El Capitan was originally commissioned as a 1.5 exaflop system seven months ago. However thanks to some late configuration changes, the DOE now expects the system to reach 2 exaflops once it’s fully installed, which would cement its place at the top of the US’s supercomputer inventory.



Source: AnandTech – El Capitan Supercomputer Detailed: AMD CPUs & GPUs To Drive 2 Exaflops of Compute

EVGA Launches B5 Modular PSUs: 80Plus Bronze At Up to 850 W

EVGA this week has introduced a new family of entry-level, 80Plus Bronze power supplies. Promising to bring together solid performance, a rich feature set, and a relatively low price, EVGA’s modular B5-series PSUs are designed to tick all of the boxes expected of a basic PSU in 2020.


The EVGA B5-series PSU family includes 550 W, 650 W, 750 W, and 850 W models, all compliant with the latest ATX12V v2.52/EPS12V specifications. All are fully modular: the most powerful SKU has six 8-pin PCIe power connectors, the 750 W model features four, the 650 W flavor has three, whereas the entry-level 550 W model has two. Naturally, the PSUs have SATA as well as Molex plugs too. The new power supplies meet the 80Plus Bronze requirements, so they are mandated to be at least 82% efficient under a 20% or 100% load and 85% efficient under a 50% load (when tested at 115 V; the corresponding 230 V floors are 81%/85%/81%).
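For a sense of what those efficiency floors mean at the wall, here is a small sketch for the 850 W model using the 115 V 80Plus Bronze minimums quoted above (certification floors, not EVGA’s measured efficiency):

```python
def wall_draw(dc_output_w: float, efficiency: float) -> float:
    """AC power pulled from the wall to deliver a given DC output at a given efficiency."""
    return dc_output_w / efficiency

rated_w = 850  # the top B5 model
for load_pct, eff in ((0.20, 0.82), (0.50, 0.85), (1.00, 0.82)):
    dc = rated_w * load_pct
    ac = wall_draw(dc, eff)
    print(f"{load_pct:.0%} load: {dc:.0f} W out, ~{ac:.0f} W from the wall, ~{ac - dc:.0f} W lost as heat")
```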



EVGA’s latest B5-series PSUs use 100% Japanese capacitors on the 750 W and 850 W models (and some on the less powerful SKUs), and feature a no-fan/ECO mode that shuts down the 135 mm fluid-dynamic bearing fan under low and medium loads. In addition, the power supplies feature a comprehensive set of protection technologies that includes OVP (Over Voltage Protection), UVP (Under Voltage Protection), OCP (Over Current Protection), OPP (Over Power Protection), SCP (Short Circuit Protection), and OTP (Over Temperature Protection).

EVGA B5-Series PSUs Output Specifications

| Rail | 550 W | 650 W | 750 W | 850 W |
|---|---|---|---|---|
| +3.3 V | 24 A | 24 A | 24 A | 24 A |
| +5 V | 24 A | 24 A | 24 A | 24 A |
| +3.3 V & +5 V combined | 120 W | 120 W | 120 W | 120 W |
| +12 V | 45.8 A (550 W) | 54.1 A (650 W) | 62.5 A (750 W) | 70.8 A (850 W) |
| -12 V | 0.5 A (6 W) | 0.5 A (6 W) | 0.5 A (6 W) | 0.5 A (6 W) |
| +5 Vsb | 3 A (15 W) | 3 A (15 W) | 3 A (15 W) | 3 A (15 W) |
| Total Power | 550 W | 650 W | 750 W | 850 W |

EVGA’s B5-series power supplies measure 150x50x86 mm, which is pretty typical for mid-capacity PSUs, so they should easily fit into virtually any standard ATX case. And as mentioned earlier, the PSUs are fully modular, so they provide additional flexibility to system builders.











EVGA B5-Series PSUs Connectivity Specifications

  • ATX 24-pin: 1 (all models)
  • EPS 4+4-pin: 1 or 2, depending on model
  • PCIe 6+2-pin: 2 (550 W), 3 (650 W), 4 (750 W), 6 (850 W)
  • SATA: 6 or 9, depending on model
  • 4-pin Molex: 3 (all models)
  • Floppy: 1 (all models)

EVGA’s B5-series PSUs are immediately available directly from the company. The cheapest model, the 550 W version, is priced at $79.99, whereas the most powerful 850 W SKU is priced at $129.99. The power supplies are covered by a five-year warranty, which is typical for inexpensive PSUs.





Source: EVGA



Source: AnandTech – EVGA Launches B5 Modular PSUs: 80Plus Bronze At Up to 850 W

Intel CFO: Our 10nm Will Be Less Profitable than 22nm [Morgan Stanley Transcription]

This week at Morgan Stanley’s Analyst Conference, Intel’s CFO, George Davis, sat down to discuss where Intel’s future profitability lies. No stranger to the odd comment about how Intel manages its money, Mr. Davis was in fine form, explaining that Intel is going to be in for a rough time when it comes to the leading edge. Among the statements made, Mr. Davis confirmed that Intel’s new 10nm node will be less profitable than its 22nm node, let alone its 14nm node.



Source: AnandTech – Intel CFO: Our 10nm Will Be Less Profitable than 22nm [Morgan Stanley Transcription]

UNISOC Unveils T7520 SoC for 5G Smartphones: Octa-Core, 6nm EUV

UNISOC, formerly Spreadtrum Semiconductor, has announced its first mobile application processor with an integrated 5G modem. Dubbed the T7520, the SoC also happens to be one of the world’s first chips to be made using TSMC’s 6 nm process technology, which uses extreme ultraviolet lithography (EUVL) for several layers.


The UNISOC T7520 application processor packs four high-performance Arm Cortex-A76 cores, four energy-efficient Arm Cortex-A55 cores, as well as an Arm Mali-G57 GPU with a display engine that supports multiple screens with a 4K resolution and HDR10+. Furthermore, the SoC integrates a new NPU that is said to offer a 50% higher TOPS-per-Watt rate than the company’s previous-generation NPU. In addition, the chip features a four-core ISP that supports up to 100 MP sensors and multi-camera processing capability. Finally, the AP also features the company’s latest Secure Element processor that supports ‘most of crypto algorithms’ and can handle compute-intensive security scenarios, such as encrypted video calls.


One of the key features of the UNISOC T7520 is of course its integrated 2G/3G/4G/5G-supporting modem, which supports 5G NR TDD+FDD carrier aggregation, as well as uplink and downlink decoupling for enhanced coverage. All told, the T7520’s modem is designed to offer peak uplink speed of 3.25 Gbps.


The high level of integration of the T7520 SoC is designed to enable smartphone manufacturers to build more reasonably priced 5G handsets, which will inevitably increase their popularity and the adoption of the technology. Meanwhile, the use of TSMC’s 6 nm fabrication technology (known as N6) should allow UNISOC to produce the AP more cheaply than it could using non-EUV fabrication processes.


UNISOC did not announce when it plans to start shipments of its T7520 application processor, though it is reasonable to expect it to become available this year.




Source: UNISOC



Source: AnandTech – UNISOC Unveils T7520 SoC for 5G Smartphones: Octa-Core, 6nm EUV

Western Digital Introduces WD Gold Enterprise SSDs

On what would have been the first day of the Open Compute Project’s annual Global Summit, Western Digital is bringing out a new line of enterprise SSDs. The WD Gold brand for enterprise drives is getting an SSD counterpart to the existing WD Gold enterprise hard drives. WD’s color-based drive branding now features both SSDs and hard drives in almost every product segment: Blue and Green mainstream consumer drives, Black for high-end consumer, Red for NAS systems, and Gold for enterprise. The only one missing an SSD option is the WD Purple family for video surveillance recording (though there is a WD Purple microSD card).


The new WD Gold SSD isn’t anything new technologically; it’s basically a re-branding of a portion of the Ultrastar DC SN640 product line. Where the WD Gold differs is in the target markets: Like other WD (color) products, the WD Gold SSD is intended for channel and retail sales rather than the large-scale direct B2B sales model used for Western Digital’s Ultrastar datacenter drives and their client OEM drives. The WD Gold SSD will make Western Digital’s enterprise SSD technology more accessible to small and medium enterprise customers.



















Western Digital WD Gold SSDs

| | 960 GB | 1.92 TB | 3.84 TB | 7.68 TB |
|---|---|---|---|---|
| Form Factor | 2.5″ U.2, 7 mm | 2.5″ U.2, 7 mm | 2.5″ U.2, 7 mm | 2.5″ U.2, 7 mm |
| Interface | PCIe 3.0 x4, NVMe 1.3 | PCIe 3.0 x4, NVMe 1.3 | PCIe 3.0 x4, NVMe 1.3 | PCIe 3.0 x4, NVMe 1.3 |
| Controller | WD proprietary | WD proprietary | WD proprietary | WD proprietary |
| NAND | 96-layer BiCS4 3D TLC | 96-layer BiCS4 3D TLC | 96-layer BiCS4 3D TLC | 96-layer BiCS4 3D TLC |
| Sequential Read | 3000 MB/s | 3100 MB/s | 3100 MB/s | 3100 MB/s |
| Sequential Write | 1100 MB/s | 2000 MB/s | 1800 MB/s | 1800 MB/s |
| Random Read (4 kB) | 413k IOPS | 472k IOPS | 469k IOPS | 467k IOPS |
| Random Write (4 kB) | 44k IOPS | 63k IOPS | 63k IOPS | 65k IOPS |
| 70/30 Mixed R/W | 111k IOPS | 194k IOPS | 174k IOPS | 187k IOPS |
| Power (Active) | Configurable 10/11/12 W limit | Configurable 10/11/12 W limit | Configurable 10/11/12 W limit | Configurable 10/11/12 W limit |
| Power (Idle) | 4.6 W | 4.62 W | 4.94 W | 4.95 W |
| Encryption | AES-256 | AES-256 | AES-256 | AES-256 |
| Power Loss Protection | Yes | Yes | Yes | Yes |
| Write Endurance | 1.4 PB (0.8 DWPD) | 2.8 PB (0.8 DWPD) | 5.61 PB (0.8 DWPD) | 11.21 PB (0.8 DWPD) |
| Warranty | Five years | Five years | Five years | Five years |

The WD Gold SSD is based on the same hardware as the Ultrastar DC SN640 series, but the WD Gold product line doesn’t include as many options. The SN640 comes in two endurance tiers: 0.8 drive writes per day and 2 DWPD. The WD Gold SSD line only includes the 0.8 DWPD drives, and only the U.2 form factor versions: a total of four capacity options from 960 GB up to 7.68 TB. These drives use the latest Western Digital/Kioxia 96-layer 3D TLC NAND flash memory and one of Western Digital’s own in-house NVMe controller designs.
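The petabyte endurance figures in the table are simply the 0.8 DWPD rating projected across each drive’s capacity and the five-year warranty. A minimal sketch of that conversion:

```python
def endurance_pb(capacity_gb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total writes allowed: capacity x drive-writes-per-day x days in the warranty period."""
    total_gb = capacity_gb * dwpd * 365 * warranty_years
    return total_gb / 1_000_000  # GB -> PB, decimal units as used in the spec table

for cap_gb in (960, 1920, 3840, 7680):
    print(f"{cap_gb} GB: ~{endurance_pb(cap_gb, 0.8):.2f} PB")
# -> ~1.40, ~2.80, ~5.61, ~11.21 PB, matching the table's endurance column
```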


The technical specs for the WD Gold SSDs are identical to the matching Ultrastar DC SN640 models. The performance is limited largely by the PCIe 3.0 interface and the power/thermal constraints of the 2.5″/7mm U.2 form factor: these drives idle just under 5W and can draw up to 12 W under load, with configurable power states to throttle down to 10 or 11 W for high-density deployments that can’t quite keep them cool at the full 12W each.


The WD Gold SSDs are planned to ship starting in early Q2. Pricing has not been announced.



Source: AnandTech – Western Digital Introduces WD Gold Enterprise SSDs