ASUS Takes Over Intel NUC Brand, Begins the Next Era for the Next Unit of Computing

As of September 1, ASUS has officially taken over Intel’s NUC brand and product range. Per the companies’ previously announced agreement, ASUS has become the de facto heir to Intel’s NUC business, taking on Intel’s support obligations along with a non-exclusive license to build further NUCs. ASUS’s newly introduced NUC series, in turn, features a diverse array of NUC PCs and compute elements aimed at a broad spectrum of needs, from business to gaming and beyond, all powered by Intel processors.


“Starting September 1st, NUC becomes a proud member of the ASUS product lineup, setting off on an exhilarating journey ahead,” reads an ASUS statement on Twitter. “Delve into NUC product specifics on the official ASUS website.”


ASUSTeK’s compact NUC offerings are designed for a wide variety of usage scenarios, including typical productivity at home or in the office, gaming, edge computing, commercial, and even professional applications. ASUS also plans to keep providing NUC compute elements for custom commercial applications.


Although ASUS has the rights to offer NUC PCs based on 10th, 11th, 12th, and 13th Generation Core CPUs, all of the systems featured on the ASUS NUC website are powered by Intel’s 13th Generation Core ‘Raptor Lake’ processors. As noted previously, ASUS’s agreement does require the company to continue supporting the older NUCs (including hardware replacements, as warranties dictate), but it does not require them to keep selling those older models.


In any case, with Intel’s 13th Generation Core CPUs having been out for a year now, the change in ownership is as good a time as any to tidy up the product line. Though like similar business handoffs in the past, this won’t be an overnight change; none of the ASUS NUCs yet carry ASUS’s logotype, and at least one machine still carries Intel’s.



ASUS acquired the NUC line after Intel decided to step away from this segment in mid-July. The two companies signed a non-exclusive agreement that allows ASUS to manufacture existing NUC models and develop new ones, while leaving the door open for other PC makers to license NUC designs as well. Meanwhile, as a top PC maker, ASUS is uniquely positioned to manage this line of products on a large scale.


The acquisition of Intel’s NUC product range makes a great deal of sense for ASUS, which has been losing market share in recent years as it tried to focus on profitability. By taking over Intel’s NUC business, ASUS is in a good position to increase its market share, while giving the NUC concept a new lease on life from a manufacturer that is more directly connected to the pre-built PC market.




Source: AnandTech – ASUS Takes Over Intel NUC Brand, Begins the Next Era for the Next Unit of Computing

Samsung Unveils Industry's First 32Gbit DDR5 Memory Die: 1TB Modules Incoming

Samsung early on Friday revealed the world’s first 32 Gb DDR5 DRAM die. The new memory die is made on the company’s 12 nm-class DRAM fabrication process and not only offers increased density, but also lowers power consumption. The chip will allow Samsung to build record-capacity 1 TB RDIMMs for servers, as well as lower the costs of high-capacity memory modules.


“With our 12nm-class 32 Gb DRAM, we have secured a solution that will enable DRAM modules of up to 1 TB, allowing us to be ideally positioned to serve the growing need for high-capacity DRAM in the era of AI (Artificial Intelligence) and big data,” said SangJoon Hwang, executive vice president of DRAM product & technology at Samsung Electronics.


32 Gb memory dies not only enable Samsung to build a regular, single-rank 32 GB module for client PCs using just eight single-die memory chips, but they also allow for higher-capacity DIMMs that were not previously possible. We are talking about 1 TB memory modules using forty 8-Hi 3DS packages, each stacking eight 32 Gb memory devices. Such modules may sound like overkill, but for artificial intelligence (AI), Big Data, and database servers, more DRAM capacity can easily be put to good use. Eventually, 1 TB RDIMMs would allow for up to 12 TB of memory in a single-socket server (e.g. AMD’s EPYC 9004 platform), something that cannot be done now.
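
For those curious about the arithmetic, here is a quick sketch of how forty stacked packages add up to a 1 TB module. Note that the 32-data/8-ECC package split is our assumption based on typical DDR5 RDIMM organization, not something Samsung has detailed:

```python
# Back-of-the-envelope math for a 1 TB DDR5 RDIMM built from forty
# 8-Hi 3DS packages of 32 Gb dies. The 32 data / 8 ECC package split
# is an assumption based on standard DDR5 RDIMM organization.
GBIT_PER_DIE = 32
DIES_PER_PACKAGE = 8        # 8-Hi 3DS stack
PACKAGES = 40               # per module
DATA_PACKAGES = 32          # remaining 8 assumed to carry ECC

raw_gb = GBIT_PER_DIE * DIES_PER_PACKAGE * PACKAGES / 8
data_tb = GBIT_PER_DIE * DIES_PER_PACKAGE * DATA_PACKAGES / 8 / 1024

print(f"{raw_gb:.0f} GB raw (incl. ECC)")   # 1280 GB
print(f"{data_tb:.0f} TB usable")           # 1 TB

# Twelve such DIMMs, one per channel on a 12-channel platform like
# EPYC 9004, give the 12 TB per-socket figure cited above:
print(f"{12 * data_tb:.0f} TB per socket")  # 12 TB
```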


With regards to power consumption, Samsung says that using the new dies it can build 128 GB DDR5 RDIMMs for servers that consume 10% less power than current-generation modules built around 16 Gb dies. This drop in power consumption can be attributed both to the 12 nm-class DRAM production node and to avoiding the use of 3D stacked (3DS) chips that pack two 16 Gb dies into a single package.


Samsung is not disclosing the speed bins of its 32 Gb memory dies, but 16 Gb dies made on the same 12 nm-class technology offer a 7200 MT/s data transfer rate.


Samsung intends to start mass production of 32 Gb memory dies by the end of 2023, but for now the company isn’t detailing when it plans to offer finished chips to customers. It’s likely that the company will start with client PCs first, though whether that translates into any cost savings remains to be seen.


Meanwhile, for servers, it usually takes a while for platform developers and vendors to validate and qualify new memory components. So while Samsung has 1 TB RDIMMs in its future, it will take some time before we see them in shipping servers.




Source: AnandTech – Samsung Unveils Industry’s First 32Gbit DDR5 Memory Die: 1TB Modules Incoming

The ASUS TUF Gaming 850W Gold PSU Review: Tough But Fair

Though ASUS as a company needs no introduction to regular AnandTech readers, even for us it’s easy to overlook just how vast their range of product lines is these days. As the company has moved beyond PC motherboards and core components, it has kept diversifying over the years, establishing whole subsidiary brand names in the process, such as the “Republic of Gamers” or “ROG”. Nowadays, the ASUS logo can be found on almost every PC component and peripheral there is, from mouse pads to gaming laptops.


One of the many series of products ASUS supplies under its brand name – and one that, somehow, we’ve never reviewed up until now – is a rather extensive array of power supply units. The company splits its units into three main series, the ROG, the TUF Gaming, and the Prime, all of which target the higher segments of the market. In fact, ASUS is fairly rare in this respect; unlike most other manufacturers, ASUS largely stays out of the low-to-middle range of the market altogether, instead focusing on the more lucrative premium and gaming segments.


Today’s review focuses on ASUS’s TUF Gaming series, which is, in our opinion, the most versatile series that the company currently markets. The TUF Gaming units are designed with long-term reliability and high performance in mind, and they are marketed accordingly. The new 850W Gold variant of this series aligns with Intel’s ATX 3.0 design guidelines, with 80Plus Gold certification and a 10-year manufacturer’s warranty as the major highlights, and it retails at a reasonable price.



Source: AnandTech – The ASUS TUF Gaming 850W Gold PSU Review: Tough But Fair

MSI Goes Compact with New GeForce RTX 40-Series Gaming Slim Cards

Modern graphics cards are notorious for massive cooling systems and dimensions that are hard to fit into mainstream PC chassis and almost impossible to fit into the compact PC cases that are rather popular among gamers. MSI this week attempted to address these concerns by announcing a family of GeForce RTX 40-series graphics cards called Gaming Slim that are meant to be slimmer than a typical add-in board. But are they?


Judging by MSI’s GeForce RTX 4070 Ti Gaming X Slim (which measures 307×125×51 mm and weighs 1086 grams), it is indeed considerably more compact than MSI’s GeForce RTX 4070 Ti Gaming X Trio (337×140×62 mm @ 1594 grams). No less important, the Gaming Slim is nearly as fast as its full-fat counterpart, with a maximum GPU clock of 2745 MHz vs. 2760 MHz.
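
A quick bit of math on MSI’s published dimensions puts the size reduction in perspective (the volume comparison below is our own arithmetic, not an MSI figure):

```python
# Comparing the two RTX 4070 Ti cards by volume and weight,
# using MSI's published dimensions (mm) and weights (g).
def volume_liters(length_mm, width_mm, height_mm):
    return length_mm * width_mm * height_mm / 1_000_000

slim = volume_liters(307, 125, 51)   # Gaming X Slim: ~1.96 L
trio = volume_liters(337, 140, 62)   # Gaming X Trio: ~2.93 L

print(f"Slim occupies {slim / trio:.0%} of the Trio's volume")  # ~67%
print(f"...and weighs {1086 / 1594:.0%} as much")               # ~68%
```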



With that said, despite the “slim” designation, we’re still looking at a graphics card that is 2.5 slots (51 mm) wide. So the most densely packed of systems, those that can’t take a card over 2 slots, will still need to look elsewhere. For everything else, it will certainly be easier to assemble a PC with MSI’s Gaming Slim graphics cards thanks to their reduced dimensions.


MSI says that its Gaming Slim-badged graphics cards will provide the signature features of its Gaming-branded cards, including a factory-overclocked GPU and an enhanced cooling system, but will be thinner and lighter, enabling ‘flexible system assembly.’ MSI further noted that it would offer all of its Gaming GeForce RTX 40-series products in a Gaming Slim flavor, including the GeForce RTX 4090, RTX 4080, RTX 4070, and RTX 4060.


MSI says that its Gaming Slim graphics cards will come in black and white versions, will use the company’s latest cooling solutions (a Tri Frozr triple-fan cooler with Torx Fan 5.0 fans and up to eight heatpipes), and will feature addressable RGB LEDs.



For now, the company has Gaming Slim versions of the GeForce RTX 4070 Ti, GeForce RTX 4070, and GeForce RTX 4060 Ti, so the higher-end products will probably be available at a later date. Meanwhile, we can only wonder how compact MSI will manage to make NVIDIA’s GeForce RTX 4090, and whether the company’s GeForce RTX 4090 Gaming Slim will end up more compact than NVIDIA’s GeForce RTX 4090 Founders Edition.




Source: AnandTech – MSI Goes Compact with New GeForce RTX 40-Series Gaming Slim Cards

Minisforum Teases AMD Ryzen-Based Tablet

Having received worldwide recognition for its compact desktop computers, Minisforum is looking beyond PCs to grow its business. Earlier this year the company introduced its first tablet, and now it is prepping another one, this time with an AMD Zen 4-based accelerated processing unit (APU) inside.


The tablet that Minisforum plans to introduce will be a 2-in-1 device with a detachable keyboard and a stylus, according to the company’s presentation in China, which was caught by Liliputing. For now, Minisforum plans to use AMD’s eight-core Ryzen 7 7840U APU with its Radeon 780M GPU as the foundation for the device, though by the time the product shows up the firm might decide to go with something else.


Specifications of the device have not been touched upon, which is not particularly surprising given that it is in the early stages of development. Meanwhile, the company stressed that it will take advantage of the AI co-processor built into AMD’s Ryzen 7 7840U and will therefore support Windows AI capabilities.


For now, there is little point in speculating about what to expect from Minisforum’s tablet, though the company is known for offering products with decent specifications at moderate price points.


While we do not know much about Minisforum’s tablets, the very fact that the company is going this route is remarkable. Keeping in mind that Minisforum is known primarily for PCs, notebooks were arguably the most logical way to expand its business. Yet the notebook market seems to be too crowded, so the company has opted for Windows tablets instead, a market that remains largely untapped because Microsoft’s platform is not particularly popular among tablet users.


The choice of AMD’s Ryzen platform for a tablet is also noteworthy, since we have not seen AMD-based tablets for quite a while. Perhaps, because the company makes so many systems based on AMD’s APUs, it knows the platform well enough that it decided to use AMD’s Ryzen 7 7840U for a tablet, a form factor this processor has not been tapped for just yet.




Source: AnandTech – Minisforum Teases AMD Ryzen-Based Tablet

Hot Chips 2023: Intel Details More on Granite Rapids and Sierra Forest Xeons

With the annual Hot Chips conference taking place this week, many of the industry’s biggest chip design firms are at the show, talking about their latest and/or upcoming wares. For Intel, it’s a case of the latter, as the company is at Hot Chips to talk about its next generation of Xeon processors, Granite Rapids and Sierra Forest, which are set to launch in 2024. Intel has previously revealed these processors on its data center roadmap – most recently updating it in March of this year – and for Hot Chips the company is offering a bit more in the way of technical details for the chips and their shared platform.

While there’s no such thing as an “unimportant” generation for Intel’s Xeon processors, Granite Rapids and Sierra Forest promise to be one of Intel’s most important updates to the Xeon Scalable hardware ecosystem yet, thanks to the introduction of area-efficient E-cores. Already a mainstay of Intel’s consumer processors since the 12th generation Core (Alder Lake) family, E-cores will finally come over to Intel’s server platform with the upcoming 6th generation Xeon Scalable processors. Though unlike consumer parts, where both core types are mixed in a single chip, Intel is going for a purely homogeneous strategy, giving us the all P-core Granite Rapids and the all E-core Sierra Forest.

As Intel’s first E-core Xeon Scalable chip for data center use, Sierra Forest is arguably the more important of the two chips. Fittingly, it’s Intel’s lead vehicle for their EUV-based Intel 3 process node, and it’s the first of the two Xeons set to come out; according to the company, it remains on track for an H1’2024 release. Granite Rapids, meanwhile, will follow “shortly” behind on the same Intel 3 process node.



Source: AnandTech – Hot Chips 2023: Intel Details More on Granite Rapids and Sierra Forest Xeons

GEEKOM Mini IT13 Packs Core i9 into 4×4 NUC Chassis: A 14-Core NUC

While Intel’s classic 4×4 NUCs have been pretty powerful systems capable of handling demanding workloads, the company never cared to install its top-of-the-range CPUs into its compact PCs. GEEKOM apparently decided to fix this, and this week the company introduced its Mini IT13: the industry’s first 4×4 desktop with an Intel Core i9 processor, offering 14 CPU cores inside.


The Mini IT13 from GEEKOM measures 117 mm × 112 mm × 49.2 mm, making it as small as Intel’s classic NUC systems. Despite its compact size, it can pack Intel’s mobile-focused 14-core Core i9-13900H (6P+8E cores, 20 threads, up to 5.40 GHz, 24 MB cache, 45 W), which comes with an integrated Xe graphics processing unit (Xe-LP, 96 EUs or 768 stream processors at up to 1.50 GHz).


To maintain consistent CPU performance and avoid overheating and performance drops even under significant loads, the system employs a blower-style cooler, which produces up to 43.6 dBA of noise, so the machine is not exactly whisper quiet, to say the least.



The compact PC supports up to 64 GB of DDR4 memory through two SODIMMs, an M.2-2280 SSD with a PCIe 4.0 x4 interface, an M.2-2242 SSD with a SATA interface, and an additional 2.5-inch HDD or SSD for more extensive storage.


As far as connectivity is concerned, the GEEKOM Mini IT13 comes with a Wi-Fi 6E + Bluetooth 5.2 module, a 2.5 GbE port, two USB4 connectors, three USB 3.2 Gen 2 ports, one USB 2.0 Type-A connector, two HDMI 2.0 outputs (in addition to two DisplayPort outputs supported through USB4), an SD card reader, and a TRRS audio jack for headphones.



Although GEEKOM does not directly mention it, the USB4 ports potentially allow for connecting an external graphics card in an eGFX enclosure, making the Mini IT13 quite a decent gaming machine. Meanwhile, even without an external graphics card, the unit can support up to four displays simultaneously.


Interestingly, the GEEKOM Mini IT13 does not cost an arm and a leg. The cheapest version, with a Core i5-13500H, 16 GB of RAM, and a 512 GB SSD, can be purchased for $499, whereas the most expensive model, with a Core i9-13900H, 32 GB of memory, and 2 TB of solid-state storage, costs $789.




Source: AnandTech – GEEKOM Mini IT13 Packs Core i9 into 4×4 NUC Chassis: 14-Cores NUC

AMD Teases FSR 3 and Hypr-RX: Updated Radeon Performance Technologies Available in September

Alongside this morning’s announcement of the new Radeon RX 7800 XT and RX 7700 XT video cards, AMD is also using their Gamescom launch event to deliver an update on the state of their Radeon software stack. The company has been working on a couple of performance-improvement projects since the launch of the Radeon RX 7000 series in late 2022, including the highly anticipated FSR 3, and they’re finally offering a brief update on those projects ahead of their September launches.


FSR 3 Update: First Two Games Available in September


First and foremost, AMD is offering a bit of a teaser update on FidelityFX Super Resolution 3 (FSR 3), their frame interpolation (frame generation) technology that is the company’s answer to NVIDIA’s DLSS 3 frame generation feature. First announced back at the Radeon RX 7900 series launch in 2022, at the time AMD only offered the broadest of details on the technology, with the unspoken implication being that they had only recently started development.


At a high level, FSR 3 was (or rather, will be) AMD’s open-source take on frame interpolation, similar to FSR 2 and the rest of the FidelityFX suite of game effects. With FSR 3, AMD is putting together a portable frame interpolation technique that they’re dubbing “AMD Fluid Motion Frames”, one which, unlike DLSS, is not vendor-proprietary and can work on a variety of cards from multiple vendors. Furthermore, the source code for FSR 3 will be freely available as part of AMD’s GPUOpen community.


Since that initial announcement, AMD has not had anything else of substance to say on the state of FSR 3 development. But at last, the first shipping version of FSR 3 is in sight, with AMD expecting to bring it to the first two games next month.



Ahead of that launch, they are offering a small taste of what’s to come, with some benchmark numbers for the technology in action on Forspoken. Using a combination of Fluid Motion Frames, Anti-Lag+, and temporal image upscaling, AMD was able to boost 4K performance on Forspoken from 36fps to 122fps.


Notably, AMD is using “Performance” mode here, which for temporal upscaling means rendering at one-quarter the desired resolution of a game – in this case, rendering at 1080p for a 4K/2160p output. So a good deal of the heavy lifting is being done by temporal upscaling, but not all of it.
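
For reference, here is a small sketch of how the upscaling presets map output resolution to render resolution. The per-axis scale factors below follow FSR 2’s published presets, which we’re assuming carry over unchanged to FSR 3’s upscaling component:

```python
# Per-axis scale factors for FSR 2's temporal upscaling presets
# (assumed to carry over to FSR 3's upscaling component).
PRESETS = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w, out_h, preset):
    scale = PRESETS[preset]
    return round(out_w / scale), round(out_h / scale)

# "Performance" mode halves each axis, i.e. one-quarter the pixels:
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```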



AMD has also published a set of numbers without temporal upscaling, using their new native anti-aliasing mode, which renders at the desired output resolution and then uses temporal techniques for AA, and combines that with Fluid Motion Frames. In that case, performance at 1440p goes from 64fps to 106fps.


For the time being, these are the only two sets of data points AMD is providing. Otherwise, the screenshots included in their press deck are not nearly high enough in quality to make any kind of meaningful image quality comparisons, and AMD hasn’t published any videos of the technology in action. So convincing visual evidence of FSR 3 in action is, at least ahead of today’s big reveal, lacking. But it is a start nonetheless.


As for the technical underpinnings, AMD has answered a few questions relating to FSR3/Fluid Motion Frames, but the company is not doing a deep dive on the technology at this time. So there remains a litany of unanswered questions about its implementation.


With regards to compatibility, according to AMD, FSR 3 will work on any RDNA (1) architecture GPU or newer, or equivalent hardware. RDNA (1) is a DirectX feature level 12_1 architecture, which means equivalent hardware spans a pretty wide gamut, potentially going back as far as NVIDIA’s Maxwell 2 (GTX 900) architecture. That said, I suspect there’s more to compatibility than just DirectX feature levels, but AMD isn’t saying much more about system requirements. What they are saying, at least, is that while FSR 3 will work on RDNA (1) architecture GPUs, they recommend RDNA 2/RDNA 3 products for the best performance.


Along with targeting a wide range of PC video cards, AMD is also explicitly noting that they’re targeting game consoles as well. So in the future, game developers will be able to integrate FSR3 and have it interpolate frames on the PlayStation 5 and Xbox Series X|S consoles, both of which are based on AMD RDNA 2 architecture GPUs.


Underpinning FSR 3’s operation, in our briefing AMD made it clear that it would require motion vectors, similar to FSR 2’s temporal upscaling, as well as rival NVIDIA’s DLSS 3 interpolation. The use of motion vectors is a big part of why FSR 3 requires per-game integration – and a big part of delivering high quality interpolated frames. Throughout our call “optical flow” did not come up, but, frankly, it’s hard to envision AMD not making use of optical flow fields as well as part of their implementation. In which case they may be relying on D3D12 motion estimation as a generic baseline implementation, since it wouldn’t require accessing each vendor’s proprietary integrated optical flow engine.


What’s not necessary, however, is AI/machine learning hardware. Since AMD is targeting the consoles, they wouldn’t be able to rely on it anyhow.


AMD also says that FSR 3 includes further latency reduction technologies (which are needed to hide the latency of interpolation). It’s unclear if this is something equivalent to NVIDIA’s Reflex marker system, or if it’s something else entirely.



The first two games to get FSR 3 support will be the aforementioned Forspoken, as well as the recently launched Immortals of Aveum. AMD expects FSR 3 to be patched into both games here in September – presumably towards the later part of the month.



Looking farther forward, AMD has lined up several other developers and games to support the technology, including RPG-turned-technology-showcase Cyberpunk 2077. Given that Forspoken and Immortals of Aveum are essentially going to be the prototypes for testing FSR 3, AMD isn’t saying when support for the technology will come to these other games. Though with plans to make it available soon as an Unreal Engine plugin, the company certainly has its eyes on enabling wide-scale deployment of the technology over time.


For now, this is only a brief teaser of details. I expect that AMD will have a lot more to say about FSR 3, and to disclose, once the FSR 3 patches for Forspoken and Immortals of Aveum are ready to be released.


Hypr-RX: Launching September 6th


Second up, we have Hypr-RX, AMD’s smorgasbord feature that combines the company’s Radeon Super Resolution (spatial upscaling), Radeon Anti-Lag+ (frame queue management), and Radeon Boost (dynamic resolution scaling). All three technologies are already available in AMD’s drivers today; however, they can’t all currently be used together. Super Resolution and Boost both touch the rendering resolution of a game, and Anti-Lag steps on the toes of Boost’s dynamic resolution adjustments.



Hypr-RX, in turn, is designed to bring all three technologies together to make them compatible with one another, and to put the collective set of features behind a single toggle. In short, if you turn on Hypr-RX, AMD’s drivers will use all of the tricks available to improve game performance.


Hypr-RX was supposed to launch by the end of the first half of the year, a date that has since come and gone. But, although a few months late, AMD has finally finished pulling together the feature, just in time for the Radeon RX 7800 XT launch.


To that end, AMD will be shipping Hypr-RX in their next Radeon driver update, which is due on September 6th. This will be the launch driver set posted for the RX 7800 XT, bringing support for the new card and AMD’s newest software features all at once. On that note, it bears mentioning that Hypr-RX requires an RDNA 3 GPU, meaning it’s only available for Radeon RX 7000 video cards as well as the Ryzen Mobile 7040HS CPU family.



Taking a quick look at performance, AMD has released some benchmark numbers showing both the latency and frame rates for several games on the RX 7800 XT. In all cases latency is down and framerates are up, varying on a game-by-game basis. As these individual technologies are already available today, there’s not much new to say about them, but given the overlap in their abilities and the resulting technical hurdles, it’s good to see that AMD finally has them playing nicely together.



But with FSR 3 and its frame interpolation abilities soon to become available, AMD won’t be stopping there for Hypr-RX. The next item on AMD’s to-do list is to add Fluid Motion Frame support to Hypr-RX, allowing AMD’s drivers to use frame interpolation (frame generation) to further improve game performance.


This is a bigger and more interesting challenge than it may first appear, because AMD is essentially promising to (try to) bring frame interpolation to every game at the driver level. FSR 3 requires that the technology be built into each individual game, in part because it relies on motion vector data. That motion vector data is not available to driver-level overrides, which is why Hypr-RX’s Radeon Super Resolution component is only a spatial upscaling technology.


Put another way: AMD apparently thinks they can do frame interpolation without motion vectors, and still achieve a good enough degree of image quality. It’s a rather audacious goal, and it will be interesting to see how it turns out in the future.


Wrapping things up, AMD’s current implementation of Hypr-RX will be available on September 6th as part of their new driver package. Meanwhile Hypr-RX with frame interpolation is a work in progress, and will be coming at a later date.




Source: AnandTech – AMD Teases FSR 3 and Hypr-RX: Updated Radeon Performance Technologies Available in September

AMD Announces Radeon RX 7800 XT & Radeon RX 7700 XT: Enthusiast-Class RDNA3 For 1440p Gaming

With the Gamescom convention taking place in Germany this week, AMD is using Europe’s largest video game trade show as the venue for their latest Radeon video card announcements. This morning the company is announcing the long-awaited middle members of the Radeon RX 7000 series, the Radeon RX 7800 XT and Radeon RX 7700 XT. Aimed at the 1440p gaming market and based on AMD’s new Navi 32 GPU, the new cards are designed to slot into the middle of AMD’s product stack, offering a set of potent RDNA 3 architecture video cards for gamers who don’t need the bleeding-edge performance (and wallet-bleeding costs) of the Radeon RX 7900 series cards.

With today’s announcement being just that – an announcement – the retail launch of these cards will follow in a couple of weeks, on Wednesday, September 6th. That date also not-so-coincidentally happens to be the release date for Bethesda’s ARPG Starfield, for which AMD is the game’s exclusive PC hardware partner, and which AMD will be offering with the new Radeon cards as part of the company’s latest game bundle. So for AMD, the stars are aligning to make September 6th a big day for the company’s GPU division.

But as for us, we’re here to talk about hardware, so let’s discuss the Radeon RX 7800 XT and 7700 XT, as well as the Navi 32 GPU that underpins them.



Source: AnandTech – AMD Announces Radeon RX 7800 XT & Radeon RX 7700 XT: Enthusiast-Class RDNA3 For 1440p Gaming

Samsung 990 Pro SSD Gets 4TB Model

When Samsung launched its 990 Pro family of SSDs for the retail market last year, it only introduced 1 TB and 2 TB models, surprisingly omitting a premium high-capacity 4 TB version. Now the company is about to correct this wrong by launching a 4 TB version this fall, the world’s largest SSD supplier revealed in an X post.




“You wanted it so badly, we had no choice but to deliver,” the company’s post reads. “The 4TB 990 PRO by #SamsungSSD is coming. Same blazing-fast storage with double the max capacity for gaming, video, 3D editing, and more. Stay tuned for more details.”


From a performance point of view, Samsung’s 990 Pro 4 TB drive offers up to 7,450 MB/s sequential read speeds and up to 6,900 MB/s sequential write speeds, which is in line with what the 1 TB and 2 TB models offer. As for random operations, the SSD achieves 1,400,000 IOPS for reads and 1,550,000 IOPS for writes, which is comparable to the performance of other flagship SSDs.



Since Samsung’s 990 Pro is a family of SSDs with a PCIe 4.0 x4 interface, the 4 TB drive may not attract the attention of owners of shiny new PCIe 5.0-capable systems based on AMD’s Ryzen 7000-series or Intel’s 12th and 13th Generation Core processors. However, there are loads of PCs with M.2 slots featuring a PCIe 4.0 interface just waiting for an upgrade, and a 4 TB SSD makes a lot of sense these days given the low prices of 3D TLC NAND.
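
Some quick interface math (our own arithmetic) also shows why sticking with PCIe 4.0 costs this drive little in practice:

```python
# Usable bandwidth of a PCIe 4.0 x4 link vs. the 990 Pro's rated reads.
GT_PER_LANE = 16          # PCIe 4.0 signaling rate, GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding overhead
LANES = 4

link_gb_s = GT_PER_LANE * ENCODING * LANES / 8
print(f"PCIe 4.0 x4 limit: ~{link_gb_s:.2f} GB/s")   # ~7.88 GB/s

# 7,450 MB/s sequential reads already sit near the interface ceiling:
print(f"Link utilization: ~{7.45 / link_gb_s:.0%}")  # ~95%
```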


According to Samsung’s spec sheets, the company intends to offer two versions of the 990 Pro 4 TB SSD: one with a simple graphene heat spreader to maximize compatibility with laptops, and another with a larger aluminum heatsink to ensure consistent performance under high loads.


Samsung has not formally disclosed the MSRP of its 990 Pro 4 TB SSD, though expect it to be priced in line with market realities. The drive is covered by a five-year limited warranty with an endurance rating of 2,400 terabytes written (TBW).




Source: AnandTech – Samsung 990 Pro SSD Gets 4TB Model

Sony Unveils The PlayStation Portal: A Remote Play Handheld For PlayStation 5

For all of their ups and downs in the handheld game console space over the years, one of Sony’s bigger successes has been their local game streaming support, better known as Remote Play. Allowing the PS3 and PS4/PS5 consoles to be remotely played on the PlayStation Portable and PS Vita respectively, it’s been a defining feature of Sony’s consoles for the past decade and a half. And while Sony is no longer making dedicated gaming handhelds, the company is still eager to leverage their remote play capabilities to provide new experiences and sell new hardware. To that end, this week Sony unveiled their dedicated remote play companion device for the PS5: the PlayStation Portal.


The PlayStation Portal is designed to enable portable gaming experiences for PlayStation 5 owners. It comes equipped with an eight-inch, 1080p LCD display, with remote play able to stream games at up to 60 fps. And while the PlayStation Portal has its own system-on-chip that runs its operating system and connects to the Internet over Wi-Fi, the device is not designed to run games locally; it can only stream them from a PlayStation 5.


Designed to extend the PS5 experience as much as reasonably possible, the PlayStation Portal comes with built-in controllers that closely resemble the design and functionality of the PS5’s DualSense controllers. These controllers provide gamers with familiar haptic feedback and adaptive triggers, ensuring a consistent gaming experience. Additionally, the device uses the PlayStation 5’s home screen, offering a dedicated section for media playback.


Avid readers will certainly ask about latency, since the Portal is a remote gaming device. A review from IGN has demonstrated the device’s minimal latency during gameplay.



Meanwhile, the PlayStation Portal will not be compatible with Sony’s anticipated cloud streaming service for PS5 titles, according to The Verge. This means that the handheld is only able to stream games already installed on a user’s PS5 console, and from nowhere else.


Despite the overall simplicity of the device, Sony has also made a notably odd choice with regards to audio capabilities. In short, the handheld lacks Bluetooth audio support. Instead, Sony is using the Portal to introduce its proprietary PlayStation Link wireless technology, which promises to deliver lossless, lag-free audio. As a result, the handheld is not compatible with existing wireless headsets from Apple, Beats, Samsung, and even Sony itself. In order to get wireless audio out of the Portal, gamers will have to use Sony’s new wireless headphones and earbuds, which are being released alongside the handheld and will be the first audio devices with PlayStation Link support. Thankfully, for those who prefer wired audio, the device also includes a 3.5mm headphone jack.


While many details about the PlayStation Portal have been shared, Sony still hasn’t disclosed some specifications, such as the expected battery life. However, indications suggest that Sony is aiming for a battery duration comparable to its DualSense controller, which is around seven to nine hours, according to CNET. At any rate, Sony has left itself plenty of time to work out these details; for the moment, the device lacks a public launch date, with Sony saying the Portal will be released “later this year.”




Source: AnandTech – Sony Unveils The PlayStation Portal: A Remote Play Handheld For PlayStation 5

Samsung Launches 57-Inch Odyssey Neo G9: An Ultimate Curved Gaming LCD

Samsung has begun taking pre-orders for its new curved 57-inch Odyssey Neo G9 gaming monitor, with the aim of raising the bar for ultrawide gaming displays. Dubbed the “world’s first dual UHD gaming monitor,” the 57-incher’s 7,680 x 2,160 LCD panel is designed to be as wide as two typical 32-inch 4K displays.


First unveiled back at CES 2023, the Samsung Odyssey Neo G9 model G95NC is based upon a unique VA LCD panel featuring a 1000R curvature, a 7,680 × 2,160 resolution, and a 32:9 aspect ratio. The panel supports a variable refresh rate of up to 240 Hz with AMD’s FreeSync Premium Pro on top, along with a 1 ms GtG response time.


To improve color reproduction and contrast, the display features Samsung’s Quantum Mini LED technology, which uses Mini LED-based backlighting for better contrast (though the company does not disclose the number of dimming zones it uses for the monitor). Meanwhile, quantum dots are used to widen the gamut of the backlight, reaching 95% of the DCI-P3 color space. Samsung claims the display offers DisplayHDR 1000 compliance – indicating a peak luminance of 1000 nits in HDR mode and support for at least the HDR10 format.



To ensure that the display runs at its full resolution of 7,680 × 2,160 with a refresh rate of up to 240 Hz, Samsung’s Odyssey Neo G9 must be connected to its host using either DisplayPort 2.1 with UHBR 13.5 (which is currently only available on AMD’s Radeon RX 7900-series graphics cards) or HDMI 2.1 in FRL 12 mode, with DSC in use in either case. In addition to display inputs, the unit has a USB hub.
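
Some napkin math shows why DSC is unavoidable here; the figures below are our own arithmetic, assuming 10-bit color and ignoring blanking overhead:

```python
# Uncompressed bandwidth for 7,680 x 2,160 @ 240 Hz with 10-bit color
# (30 bits per pixel), ignoring blanking intervals for simplicity.
payload_gbps = 7680 * 2160 * 240 * 30 / 1e9
print(f"Uncompressed: ~{payload_gbps:.0f} Gbps")   # ~119 Gbps

# Raw link rates of the two supported interfaces (4 lanes each):
dp_uhbr_13_5 = 13.5 * 4   # DisplayPort 2.1 UHBR 13.5 -> 54 Gbps
hdmi_frl_12 = 12.0 * 4    # HDMI 2.1 FRL 12           -> 48 Gbps
print(f"DP 2.1 UHBR 13.5: {dp_uhbr_13_5:.0f} Gbps raw, "
      f"HDMI 2.1 FRL 12: {hdmi_frl_12:.0f} Gbps raw")

# Neither link can carry ~119 Gbps uncompressed, so DSC (visually
# lossless compression, typically up to ~3:1) makes 240 Hz possible:
print(f"With 3:1 DSC: ~{payload_gbps / 3:.0f} Gbps needed")  # ~40 Gbps
```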


The monitor supports capabilities like picture-in-picture and picture-by-picture, so it can be used with multiple PCs at once. An interesting feature the product supports is Auto Source Switch+, which allows the monitor to instantly switch to newly connected devices without flipping through input sources.



While the 57-inch ultrawide Samsung Odyssey Neo G9 is very impressive, some people might prefer something more classic. For them, Samsung has its new curved 4K 55-inch Odyssey Ark (G97NC) with a 165 Hz maximum refresh rate and AMD’s FreeSync Premium Pro badge.


Samsung says that both the 57-inch Odyssey Neo G9 and the 55-inch Odyssey Ark are now available for pre-order worldwide. Thus far, pre-orders have opened up in the UK and Australia, among other regions; however, Samsung has yet to list the monitor on its US website.




Source: AnandTech – Samsung Launches 57-Inch Odyssey Neo G9: An Ultimate Curved Gaming LCD

Beelink GTR7 mini-PC Review: A Complete AMD Phoenix Package at 65W

The increasing popularity of small form-factor (SFF) PCs has allowed a number of second- and third-tier vendors to market their wares. While the trend was kickstarted by the Intel NUC in the early 2010s, recent years have seen various Asian companies such as Beelink, Chuwi, GEEKOM, MinisForum, etc. focus solely on these types of systems. Beelink has been slowly gaining popularity over the last five years or so with its wide range of products using the latest processors from Intel and AMD. The company became one of the first vendors to launch an AMD Phoenix (Zen 4 CPU + RDNA3 iGPU in a power envelope suitable for notebooks) mini-PC lineup when it announced the GTR7 product line. Read on for a detailed look at the performance profile and value proposition of Beelink’s entry-level configuration – the GTR7 7840HS.



Source: AnandTech – Beelink GTR7 mini-PC Review: A Complete AMD Phoenix Package at 65W

NVIDIA Reports Q2 FY2024 Earnings: $13B Revenue Blows Past Records On Absurd Data Center Demand

NVIDIA this afternoon has announced their results for the second quarter of their 2024 fiscal year, delivering what’s arguably the most anticipated earnings report of the season. Riding high on unprecedented demand for their data center-class GPUs for use in AI systems, NVIDIA’s revenues have been on a rapid rise – as well as their standing on Wall Street.


For the second quarter of their 2024 fiscal year, NVIDIA booked $13.5 billion in revenue, which is a 101% increase over the year-ago quarter. The company has, at this point, shaken off the broader slump in technology spending on the back of an explosion in demand for their data center products, and to a lesser extent the latest generation of their consumer GeForce graphics products. As a result, this is a quarter for the record books, as NVIDIA has set new records for everything from revenue to net income.


NVIDIA Q2 FY2024 Financial Results (GAAP)
                    Q2 FY2024   Q1 FY2024   Q2 FY2023   Q/Q       Y/Y
Revenue             $13.5B      $7.2B       $6.7B       +88%      +101%
Gross Margin        70.1%       64.6%       43.5%       +5.5ppt   +26.6ppt
Operating Income    $6.8B       $2.1B       $499M       +218%     +1263%
Net Income          $6.1B       $2.0B       $656M       +203%     +843%
EPS                 $2.48       $0.82       $0.26       +202%     +854%
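
The growth percentages in the table follow directly from the reported figures; here is a quick sketch reproducing a few of them (small deviations from the table stem from rounding in the inputs):

```python
# Reproducing Q/Q and Y/Y growth from NVIDIA's reported figures.
def growth(current, prior):
    return f"{(current / prior - 1):+.0%}"

# Total revenue, $B (Q2 FY2024 vs. Q1 FY2024 and Q2 FY2023):
print(growth(13.5, 7.2))      # +88% Q/Q
print(growth(13.5, 6.7))      # +101% Y/Y

# Data center revenue, $M (from the segment table further below):
print(growth(10_323, 4_284))  # +141% Q/Q
print(growth(10_323, 3_806))  # +171% Y/Y
```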


Driven by their highly profitable, high-margin data center products, NVIDIA achieved a GAAP gross margin of 70.1% for the quarter. Coupled with their record revenue, this has resulted in NVIDIA booking a blistering $6.1B in net income, an 843% improvement over the year-ago quarter, and more than a trebling of their net income versus just the previous quarter.


And while high margins are not unheard of for fabless semiconductor companies like NVIDIA, it’s all but unheard of for a company of this scale to hit those kinds of margins. In the span of just a year, NVIDIA has gone from earning $6 billion a quarter in revenue to keeping $6 billion a quarter as profit. Suffice it to say, it’s very good to be NVIDIA right now – or at least, it’s good to be working in NVIDIA’s data center product teams right now.


Things seem set to continue going NVIDIA’s way, as well. Having handily beaten their already very bullish $11B revenue projection for Q2, the company is projecting a further 18%+ jump in revenue for Q3, to $16B, which would come with a 71.5% GAAP gross margin if NVIDIA’s projections pan out. This would set a new round of records for NVIDIA, which in just the last quarter became a trillion-dollar market capitalization company, and as of this moment is already knocking on $1.3 trillion in after-hours trading. But with lofty projections will also come lofty expectations to perform, and to maintain that kind of performance for more than a handful of quarters.


NVIDIA Market Segment Results


NVIDIA Market Platform Results, Q2 FY2024 (GAAP)
                             Q2 FY2024   Q1 FY2024   Q2 FY2023   Q/Q     Y/Y
Data Center                  $10,323M    $4,284M     $3,806M     +141%   +171%
Gaming                       $2,486M     $2,240M     $2,042M     +11%    +21%
Professional Visualization   $379M       $295M       $496M       +28%    -24%
Automotive                   $253M       $296M       $220M       -15%    +15%
OEM & IP                     $66M        $77M        $140M       -14%    -53%


Diving into the performance of NVIDIA’s individual market segments, the bellwether of NVIDIA’s product portfolio remains their data center segment. That segment posted $10.3B in revenue for Q2, not just setting a new segment record, but smashing the old record in the process.


NVIDIA’s data center segment has grown by leaps and bounds over the past year in particular, on the back of developments with large language models (LLMs) in the AI space, and the subsequent spike in demand for high-performance processors that can train and run those models. According to the company, the bulk of this additional demand has come from a mix of cloud service providers and consumer internet companies, with data center compute product revenue growing by 195% year-over-year. At this point NVIDIA is full speed ahead with the production of Hopper architecture (GH100) based products, and if a report from the Financial Times is correct, the company is now looking to triple its GH100 production, in anticipation of shipping over 1.5M units in 2024.


The jump in sales in their data center processors has also spurred on similar growth in NVIDIA’s other data center product segments as well. Networking revenue for the company was up 94% year-over-year, as customers have been buying up increasing amounts of InfiniBand hardware to wire up their GPU installations. Unfortunately, NVIDIA doesn’t provide a further breakdown here of how much of this increase is in the form of bundled sales – customers buying DGX SuperPods and other NVIDIA products that come with InfiniBand hardware installed – and how much of that is ad-hoc networking equipment sales. But either way the success of NVIDIA’s data center GPUs is good news for their networking division.


But NVIDIA’s success in the data center compute market also means that the company’s overall revenues have become increasingly imbalanced. In the last couple of years NVIDIA has gone from being primarily a gaming company to primarily a compute company to almost entirely a compute company. NVIDIA’s compute and networking segment sales – one of NVIDIA’s two canonical reporting segments – now make up 77% of their overall revenue, and the disparity is increasing. So while NVIDIA is doing well on the whole, the lopsided success driven by the generative AI market means that they are, at least for the moment, not very well diversified with regards to revenue.


Speaking of things that aren’t data center GPUs, NVIDIA’s gaming market segment recorded $2.5B in revenue for Q2. This is up a “mere” 22% versus the year-ago quarter, coming on the back of the launch of NVIDIA’s GeForce RTX 40 series products. Now that NVIDIA has finished releasing the full product stacks for both mobile and desktop, the company is enjoying a surge in sales as gamers pick up the new hardware, and retailers have largely finished selling off old GeForce RTX 30 stock.


And while NVIDIA’s gaming revenue pales in comparison to the data center, this is otherwise a good quarter for that market segment. While it does not end up being anything near a record due to the most recent cryptocurrency rush blowing up NVIDIA’s gaming revenues a couple of years back, excluding those quarters, this would be one of NVIDIA’s best quarters for the gaming segment on a revenue basis. Diving a bit into NVIDIA’s historical data, gaming sales have grown by about $1.2B in the last 4 years, falling just short of doubling NVIDIA’s revenues there. Though it goes without saying that gamers are less enthused about the current state of video card prices that are allowing NVIDIA to afford such revenue growth.


Moving down the list, NVIDIA’s professional visualization segment finds itself in a weaker spot. The ramp of Ada Lovelace architecture workstation products has helped, especially in quarterly revenue, but at $379M in revenue, year-over-year revenue is down 24%. The professional visualization market has seemingly reached its saturation point, and while revenue ebbs and flows from one quarter to the next, NVIDIA has not been able to grow it significantly over the past several years.


The automotive segment, meanwhile, is NVIDIA’s final market segment to show growth for the quarter. That segment booked $253M in revenue for Q2, up 15% from the year-ago quarter. According to NVIDIA, the bump in revenue was primarily driven by sales of self-driving platforms, tempered by lower overall car sales (particularly in China).


Finally, NVIDIA’s OEM & Other segment was another that saw significant declines, dropping 53% to $66M. The company hasn’t offered any further details with this quarter’s financial results release, but in the previous quarter the drop was attributed to declines in GeForce MX GPU sales.


Looking Forward: To $16B Of Revenue In Q3


Given the rapid tear NVIDIA has been on in growing its revenues and profitability over the past year, half of the anticipation with recent NVIDIA earnings releases has not just been how well they’ve performed, but how well they expect to perform in the future. And at least for the next quarter, NVIDIA is projecting another set of record results.


For the third quarter of their 2024 fiscal year, NVIDIA is projecting $16 billion (plus or minus 2%) in revenue. That would be a 169% year-over-year jump in total revenue for the company, eclipsing the 101% growth of Q2. So long as NVIDIA’s data center sales remain high, the company seems set to remain on a growth spurt through the rest of the year, as Q2 is the first quarter where NVIDIA has been shipping Hopper architecture products in large volumes – meaning that Q2 is essentially the start of the Hopper architecture era from an NVIDIA sales perspective. And should NVIDIA beat their own projections by more than a fraction, then the company will book more revenue in Q3’24 than they did in all of FY2021.


The further growth expected in data center sales should push NVIDIA’s gross margins higher as well. The company is projecting a GAAP gross margin of 71.5% for the third quarter, beating Q2’s already impressive figure.


As for what NVIDIA is doing with their newfound riches, where they aren’t already investing more into data center GPU production to try to catch up with demand, NVIDIA is sinking their cash into stock buybacks. Already in the midst of a share repurchase program with $3.95 billion left, this week the company’s board of directors has authorized NVIDIA to buy back an additional $25 billion in shares.


Besides making NVIDIA slightly more private by removing outstanding shares, this is almost certain to further boost NVIDIA’s stock price, which, like the company itself, has been on a tear this year. At the time of their Q1 earnings report, NVIDIA’s stock was hovering around $307 a share, for a market cap of around $755 billion. Now the price is at $471, and in after-hours trading it has jumped a further 7% to $505 on the back of NVIDIA beating the street with this earnings report. As a result, NVIDIA is closing in on a market capitalization of $1.3 trillion, almost 4x the valuation of rivals AMD and Intel combined.


For the moment, at least, it would seem the sky’s the limit for data center GPU sales. NVIDIA is already unable to keep up with demand for Hopper products, and that won’t be changing in the near future. So, for as long as they can last, let the good times roll.




Source: AnandTech – NVIDIA Reports Q2 FY2024 Earnings: $13B Revenue Blows Past Records On Absurd Data Center Demand

Synopsys Surpasses $500M/Year in AI Chip Revenue, Expects Further Rapid Growth

Demand for generative artificial intelligence (AI) applications is so high that NVIDIA’s high-performance compute GPUs, like the A100 and H100, are reportedly sold out for quarters to come. Dozens of companies are developing AI-oriented processors these days and, as in the gold rushes of old, the tool suppliers are some of the biggest winners. As part of their Q3 earnings report, Synopsys, one of the leading suppliers of electronic design automation (EDA) tools and chip IP, disclosed that it has already booked over half a billion dollars in AI-related revenue in the last year.


“AI chips are a core value stream for Synopsys, already accounting on a trailing 12-month basis for well over $0.5 billion,” said Aart J. de Geus, the outgoing chief executive of Synopsys, at the conference call with analysts and investors (via SeekingAlpha). “We see this growth continuing throughout the decade.”


Rising demand for diverse generative AI applications is propelling the AI server market’s growth, which is projected to go from $30 billion in 2023 to an impressive $150 billion by 2027, according to the head of Foxconn. The market for AI processors is poised to expand at a similar pace, and Synopsys is projecting it to exceed $100 billion by 2030.


“Use cases for AI are proliferating rapidly, as are the number of companies designing AI chips,” said de Geus. “Novel architectures are multiplying, stimulated by vertical markets, all wanting solutions optimized for their specific application. Third parties estimate that today’s $20 billion to $30 billion market for AI chips will exceed $100 billion by 2030.”


AI processors are set to become a sizable part of the semiconductor market in general. In fact, sales of AI chips may account for 10% of the whole semiconductor market several years down the road. Furthermore, they will be a major driver of semiconductor market growth, as they will enable new types of applications, such as self-driving vehicles.


“In this new era of ‘smart everything,’ these chips in turn drive growth in surrounding semiconductors for storage, connectivity, sensing, AtoD and DtoA converter, power management,” said the head of Synopsys. “Growth predictions for the entire semi market to pass $1 trillion by 2030 are thus quite credible.”


Perhaps the most amusing part about Synopsys earning over $500 million from AI chips in about a year is that a significant part of the company’s revenue comes from AI-enabled EDA tools. Essentially, the company is selling EDA software that uses artificial intelligence to develop artificial intelligence chips.


Sources: Synopsys, SeekingAlpha.




Source: AnandTech – Synopsys Surpasses $500M/Year in AI Chip Revenue, Expects Further Rapid Growth

The ASUS ROG Strix Scar 17 (2023) Laptop Review: Mobile Ryzen 9 7945HX3D with 3D V-Cache Impresses

With all the success and esteem that AMD’s 3D V-Cache on their desktop CPUs has garnered over the last year, it was only a matter of time before we saw a mobile-ready version hit the retail shelves. Last month AMD announced their first mobile processor using 3D V-Cache, the Zen 4 architecture Ryzen 9 7945HX3D, a 16-core chip with a combined 128 MB of L3 cache across both core complex dies (CCDs). Similar to other dual-CCD Ryzen 7000 chips with 3D V-Cache, such as the Ryzen 9 7950X3D, one of the CCDs comes with a large 96 MB of L3 cache, while the other CCD comes with the standard 32 MB.

Looking to put their best foot forward for this important mobile launch, AMD has sampled us with the ASUS ROG Strix Scar 17 (G733PYV), a premium and highly powerful desktop replacement-class (DTR) gaming notebook, and also the sole initial launch system for the new chip. Alongside the new Ryzen chip, the ROG Strix Scar 17 incorporates a 17-inch, 1440p@240Hz IPS display that’s driven by NVIDIA’s powerful RTX 4090 laptop graphics card, 1TB of PCIe 4.0 x4 M.2 storage, 32 GB of DDR5-4800 memory, and Wi-Fi 6E connectivity.

Designed as a complete desktop replacement for gamers looking for a little more flexibility in where they can game, the ASUS ROG Strix Scar 17 is a premium example of a DTR gaming laptop and an understandable launch platform to showcase AMD’s Ryzen 9 7945HX3D processor to the market. We’re taking a closer look at the Ryzen 9 7945HX3D, with all the benefits of the 96 MB of 3D V-Cache on one of its CCDs, and how it performs within ASUS’s flagship ROG Scar 17 gaming notebook.



Source: AnandTech – The ASUS ROG Strix Scar 17 (2023) Laptop Review: Mobile Ryzen 9 7945HX3D with 3D V-Cache Impresses

Arm to Be Public Once More, Files for IPO on Nasdaq

The ongoing saga of ownership of Arm appears to be finally nearing its end, as Arm has announced this afternoon that the company has made its long-awaited filing for an initial public offering (IPO) on the Nasdaq exchange. Share prices and listing dates have not been set as of this time, but Arm has secured the ARM ticker symbol for the new offering.


The SoftBank-owned chip IP designer, whose designs are at the core of virtually every smartphone and countless other embedded computers, has been hanging in a state of limbo since early 2022, when NVIDIA’s acquisition of the company was called off due to regulatory pressure. At the time, SoftBank announced that they would instead take Arm public – a much more challenging and less profitable endeavor – using the last 18 months to prepare for an IPO.


SoftBank originally acquired Arm in 2016 as a growth vehicle for the investment firm, paying roughly $32 billion for the chip designer. The company then began shopping Arm around in 2020 after other SoftBank investments such as WeWork turned sour, and SoftBank looked to shore up its balance sheets. Ultimately, the group found a potential buyer in NVIDIA, who was offering $40 billion for Arm, only for that exchange to never come to pass as regulators deemed Arm too critical of a company to be held by NVIDIA – or presumably any other single tech company, for that matter.


The failure of the NVIDIA acquisition has left Arm in a state of limbo for the past year and a half. While there’s little doubt that SoftBank will be able to find investors on the open market, there’s a good deal more doubt over whether SoftBank would be able to sell any stake in Arm at a profit, given their relatively high 2016 buy-in and the fact that NVIDIA’s top offer was only 25% above that. SoftBank’s plans seem to have softened in the meantime – the IPO filing indicates that SoftBank will be retaining voting control over Arm, so they’re not divesting themselves of Arm entirely – but the company is still looking to turn a profit on Arm, and IPO timing is an important factor in accomplishing that.


At this point, Arm has not announced how many shares of the company will be sold or at what price, as those details will be determined later. Meanwhile, according to a report from Reuters on Friday, SoftBank re-acquired the outstanding 25% stake of Arm held by its Vision Fund unit in a deal valuing Arm at $64 billion. This is consistent with other reports that SoftBank is aiming for an IPO valuation between $60 billion and $70 billion, far better than NVIDIA’s offer and well over what the investment firm acquired Arm for in the first place. These reports also claim that SoftBank is courting NVIDIA, Intel, and other tech companies as initial investors, which would result in Arm being partially held by what amounts to a quasi-consortium of tech companies.


A successful IPO should also provide some stability for the engineering side of Arm, though it won’t alleviate investment pressures entirely. As a public company, investors will be pushing Arm to further grow the company and raise revenues – a familiar spot for the previously-public chip designer – but now Arm will be able to develop products without the looming prospect of being sold to another company, and the change in priorities that would come from that. Ultimately, Arm is going to have to find ways to drive growth without making customers flinch from royalty rates, a tricky task given the success of RISC-V MCUs and other alternative processor designs.




Source: AnandTech – Arm to Be Public Once More, Files for IPO on Nasdaq

SK hynix Begins Sampling HBM3e, Volume Production Planned For H1 2024

SK hynix on Monday announced that it had completed initial development of its first HBM3E memory stacks, and has begun sampling the memory to a customer. The updated (“extended”) version of the high bandwidth memory technology is scheduled to begin shipping in volume in the first half of next year, with hardware vendors such as NVIDIA already lining up to incorporate the memory into their HPC-grade compute products.


First revealed by SK hynix back at the end of May, HBM3E is an updated version of HBM3 that is designed to clock higher than current HBM3, though specific clockspeed targets seem to vary by manufacturer. For SK hynix, as part of today’s disclosure the company revealed that their HBM3E memory modules will be able to hit data transfer rates as high as 9 GT/sec, which translates to a peak bandwidth of 1.15 TB/sec for a single memory stack.
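
The peak bandwidth figure falls straight out of HBM’s interface math; here is a minimal sketch, assuming the standard 1024-bit-wide HBM stack interface:

```python
# Peak bandwidth of one HBM3E stack at a 9 GT/s data rate, assuming
# the standard 1024-bit HBM interface width.
BUS_WIDTH_BITS = 1024
DATA_RATE_GT_S = 9.0

bandwidth_gb_s = DATA_RATE_GT_S * BUS_WIDTH_BITS / 8
print(f"{bandwidth_gb_s:.0f} GB/s per stack")          # 1152 GB/s
print(f"~{bandwidth_gb_s / 1000:.2f} TB/s per stack")  # ~1.15 TB/s
```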


Curiously, SK hynix has yet to reveal anything about the planned capacity for their next-gen memory. Previous research from TrendForce projected that SK hynix would mass produce 24 GB HBM3E modules in Q1 2024 (in time to address applications like NVIDIA’s GH200 with 144GB of HBM3E memory), boosting capacity over today’s 16GB HBM3 stacks. And while this still seems likely (especially with the NV announcement), for now it remains unconfirmed.




TrendForce HBM Market Projections (Source: TrendForce)


Meanwhile, SK hynix also confirmed that its HBM3E stacks are set to use its Advanced Mass Reflow Molded Underfill (MR-MUF) technology, which the company says improves heat dissipation by 10%. But thermals are not the only benefit MR-MUF can provide: the technology implies the use of an improved underfill between layers, which both improves thermals and reduces the thickness of HBM stacks, allowing the construction of 12-Hi stacks that are only as tall as 8-Hi modules. This does not automatically mean that we are dealing with 12-Hi HBM3E stacks here, of course.


At present, SK hynix is the only high volume manufacturer of HBM3 memory, giving the company a very lucrative position, especially with the explosion in demand for NVIDIA’s H100 and other accelerators for generative AI. And while the development of HBM3E is meant to help SK hynix keep that lead, they will not be the only memory vendor offering faster HBM next year. Micron also threw their hat into the ring last month, and where those two companies are, Samsung is never too far behind. In fact, all three companies seem to be outpacing JEDEC, the organization that is responsible for standardizing DRAM technologies and various memory interfaces, as that group has still not published finalized specifications for the new memory.




Source: AnandTech – SK hynix Begins Sampling HBM3e, Volume Production Planned For H1 2024

Zotac Taps Desktop and Laptop GeForce RTX 4070s For New SFF Zbox PCs

Zotac this week introduced two new compact desktops, both packing versions of NVIDIA’s higher-end GeForce RTX 4070-series graphics processors. The Zbox Magnus One is an upgradable, desktop-style SFF machine with an 8.3-liter chassis and includes a desktop GeForce RTX 4070 card inside. Meanwhile the Zbox Magnus is a tiny machine that’s close to a NUC in size and construction, and includes a soldered-down GeForce RTX 4070 Laptop GPU.


Zbox Magnus One: Not So Small, But Powerful


Zotac has been offering its Zbox Magnus One small form-factor desktops for some time (and we even reviewed one of them), and the new ERP74070C and ERP74070W models pack Intel’s latest Core i7-13700 processor paired with NVIDIA’s GeForce RTX 4070 graphics card with 12 GB of GDDR6X memory (AD104, 5888 CUDA cores, 29 FP32 TFLOPS). The machine supports up to 64 GB of DDR5-5600 memory using two SO-DIMM modules, two M.2-2280 SSDs with a PCIe 4.0 x4 interface, and even one 2.5-inch drive with a SATA interface.
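As a point of reference, that 29 TFLOPS figure falls out of the usual cores × 2 ops × clock calculation; the ~2.48 GHz boost clock used below is NVIDIA’s reference figure for the desktop RTX 4070, not one Zotac quotes, so consider it an assumption:

```python
# Peak FP32 throughput: each CUDA core can retire one FMA (2 FP32 ops) per clock.
def fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
    return cuda_cores * 2 * boost_clock_ghz / 1000

# Desktop RTX 4070: 5888 cores at ~2.475 GHz (NVIDIA reference boost, assumed)
print(round(fp32_tflops(5888, 2.475), 1))  # ~29.1 TFLOPS, i.e. the quoted 29
```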



In addition, the new Zbox Magnus One ERP74070 boasts rich connectivity, including a Wi-Fi 6 + Bluetooth 5.2 adapter, a 2.5 GbE Killer-enabled port, a regular GbE port, one Thunderbolt 4 port on the back, seven USB 3.0/3.1 connectors (including one USB Type-C on the front), an SD card slot with UHS-II support, five display outputs (three DisplayPort 1.4a and one HDMI 2.1 on the graphics card, plus one HDMI on the motherboard), and a TRRS audio connector for headsets.



The unit measures 265.5 mm (10.45 inches) × 126 mm (4.96 inches) × 249 mm (9.8 inches), so it is not that small, truth be told. But it has an indisputable advantage over other SFF desktops: it can be upgraded with a more powerful 65 W CPU and a more powerful graphics card, as long as its 500 W 80 Plus Platinum power supply can handle them.


As it traditionally does, Zotac will offer the Zbox Magnus One both as a barebones kit and as a fully-configured machine with Windows pre-installed.


Zbox Magnus: Tiny Yet Mighty


As for the Zbox Magnus EN3740C, it is based upon Intel’s Core i7-13700HX CPU as well as NVIDIA’s GeForce RTX 4070 Laptop GPU (AD106, 4608 CUDA cores at up to 2175 MHz, 20 FP32 TFLOPS) with 8 GB of GDDR6. The use of a laptop-spec GPU means that it’s not as powerful as its desktop counterpart, but it operates within a considerably lower thermal envelope. The machine can be equipped with up to 64 GB of DDR5-4800 memory using two modules and a couple of M.2-22110 SSDs with a PCIe 4.0 x4 interface.
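The laptop part’s throughput figure checks out the same way, this time using the 2175 MHz boost clock Zotac does quote:

```python
# Same cores x 2 ops x clock formula as above, with the quoted laptop clock.
print(round(4608 * 2 * 2.175 / 1000, 1))  # ~20.0 TFLOPS, matching the quoted 20
```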


Connectivity-wise, the small PC is not far behind its bigger brother as it has a Wi-Fi 6 + Bluetooth 5.2 adapter, one 2.5 GbE Killer-enabled port, one GbE connector, one Thunderbolt 4 port, five USB 3.1 ports, three display outputs (two DisplayPorts, one HDMI), an SD card reader with UHS-II support, and a TRRS audio connector for headsets.



Since the PC uses soldered-down mobile components, it cannot be upgraded (at least not by the end user). On the bright side, it measures just 210 mm × 203 mm × 62.2 mm (8.27 in × 7.99 in × 2.45 in) and offers quite a lot from such a compact package.


As with all of Zotac’s PCs, the Zbox Magnus EN3740C will be available both as a barebones kit and as a complete system with memory, an SSD, and Windows installed.




Source: AnandTech – Zotac Taps Desktop and Laptop GeForce RTX 4070s For New SFF Zbox PCs

Intel Cuts Some R&D Positions in California to Reduce Costs

As Intel continues to refocus on its core competencies, the company has been no stranger to shedding business units and jobs in the process. And while the roughly 132,000-employee company hasn’t enacted any single massive layoff, there have been numerous cuts at all levels over the past couple of years, with these layoffs now extending to R&D.


The Sacramento Inno reported this week that Intel is set to lay off 140 employees: 89 from its Folsom, California campus and 51 from San Jose. The Folsom cuts span 37 job classifications, but most prominently impact roles titled ‘engineer’ and ‘architect.’ More specifically, the layoffs include 10 GPU software development engineers, eight system software development engineers, six cloud software engineers, six product marketing engineers, and six system-on-chip design engineers.


The reductions are intended to decrease Intel’s operational costs and pave a path back to profitability. Still, it is surprising that Intel decided to cut its workforce at one of its key sites; the company’s long-term success depends on its R&D prowess, and software is as important as hardware to Intel’s business.


Intel’s Folsom site has historically been pivotal for various research and development endeavors, including SSDs, graphics processors, software, and chipsets. Since Intel sold its 3D NAND and SSD business to SK hynix in late 2021, engineers working on those products have either joined Solidigm, been reassigned to other projects, left of their own accord, or been laid off. The recent layoffs of GPU specialists are somewhat unexpected, given that Intel’s long-term plans still call for developing GPUs for every segment of the market, from datacenter accelerators to integrated GPUs.


Intel is headquartered in California, where it employs over 13,000 people, more than the roughly 12,000 it employs in Arizona but fewer than the 20,000 in Oregon, the company’s two major manufacturing sites. As of early 2022, the Folsom site employed 5,300 individuals; counting these latest reductions, almost 500 positions have been eliminated from the Folsom R&D campus this year, following earlier rounds of layoffs in January, March, and May.


Meanwhile, according to the Inno, Intel’s notification to state authorities hints at the possibility of internal relocation for some affected employees.




Source: AnandTech – Intel Cuts Some R&D Positions in California to Reduce Costs