Computex 2020 Cancelled; Coming Back In 2021

After a protracted battle with the SARS-CoV-2 virus, this year’s Computex trade show has finally succumbed to the pathogen.


One of the world’s largest IT trade shows – and frequently a venue for major PC-related announcements – Computex 2020 was scheduled to take place last week. However, due to the coronavirus and all of the health and travel restrictions borne from it, back in March the show was delayed and rescheduled for late September. But as it turns out, even a three-month delay won’t be quite enough to make the show work, and as a result event organizer TAITRA has given up on plans to host the trade show this year.


Calling the latest change in plans a “rescheduling” of Computex, TAITRA has officially moved the show to June 1st through 5th of 2021. So while the show overall has not been cancelled and there will be another Computex next year, for all practical purposes the 2020 show has been cancelled.


In the brief announcement, TAITRA cited the ongoing travel restrictions as the primary reason for cancelling the 2020 show. Taiwan is still largely barring foreign nationals from entering the country, which, if still in place in September, would pose an obvious obstacle to attending the trade show. At the same time, the original plan to reschedule the show for September was always a bit of a dicey proposition, as the delay put the show out of sync with annual product release cycles and fewer companies were planning to attend, leading TAITRA to scale down the show accordingly.


Notably, this makes 2020 the first year that Computex has been cancelled entirely. Even in the SARS outbreak of 2003, the show was successfully moved to September. Which goes to show how much more serious and disruptive SARS-CoV-2 has turned out to be.



Source: AnandTech – Computex 2020 Cancelled; Coming Back In 2021

Sony Teases PlayStation 5 Design

Today during Sony’s “The Future of Gaming” show where the company and its partners revealed a slew of next-generation game titles, we also had a first glimpse of the physical design of the new PlayStation 5.



The new console is a significant departure for Sony’s console hardware, which has retained a standard black design aesthetic ever since the PlayStation 2 (although different colour scheme variants have been available). The new PlayStation 5 immediately stands out with its white-and-black design, as well as for the fact that Sony is seemingly presenting the new console in a primarily vertical standing position.


The console’s looks are defined by a rounded white body that envelops a central glossy black middle section like some sort of cape. The top of the black middle section emits a blue light, illuminating the white side panels as well as the ventilation grilles.



Today’s teaser also showcased for the first time what the console’s cooling hardware might look like. The new design looks to have ventilation grilles across the whole top of the console as well as the top half of the front of the device, curving along the top corner of the design, with the grilles present on both lateral sides. We don’t know whether these are exhausts or intakes, or maybe even both, as we haven’t yet seen the back side of the new unit.


Sony’s presentation only showed the console in an upright position, so the console may well be intended to be used primarily in this orientation.



Another hint that the console might not be designed to be used in a horizontal position is the odd “hump” that appears where the Blu-ray disc drive is located. It’s a pretty unusual asymmetric design choice that will inevitably spark a lot of discussion.


Sony is also announcing a Digital Edition of the PlayStation 5 which doesn’t feature a disc drive, getting rid of the hump in the design. Digital distribution has gained a ton of popularity over the last few years, and Sony now releasing a digital-only console certainly suggests that the company expects this trend to continue and grow.



Alongside the two new PS5 variants, Sony also announced several new accessories for the console: the new DualSense controller, which we’ve known about for some time now, a new DualSense charging station which charges up to two controllers at a time, a stereoscopic HD camera, a media remote, and a new headset dubbed the PULSE 3D Wireless Headset.



3D audio is meant to be a big part of the new PlayStation 5 experience thanks to the console’s new raytraced audio hardware capabilities – so Sony releasing a first-party headset tied in with the console release isn’t too big of a surprise.


The Sony PlayStation 5 is scheduled to launch this holiday season at an as-yet undisclosed price. It is powered by a custom AMD SoC employing eight Zen 2 cores at up to 3.5 GHz, a customised RDNA 2-based GPU with 36 CUs running at up to 2.23 GHz, and a new ultra-fast SSD and storage architecture that is said to be multiple times faster than the best PC storage devices on the market.



Source: AnandTech – Sony Teases PlayStation 5 Design

Jim Keller Resigns from Intel, Effective Immediately

Intel has just published a news release on its website stating that Jim Keller has resigned from the company, effective immediately, due to personal reasons. Jim Keller was hired by Intel two years ago for the role of Senior Vice President of Intel’s Silicon Engineering Group, after a string of successes at Tesla, AMD, Apple, AMD (again), and P.A. Semi. As far as we understand, Jim’s goal inside Intel was to streamline a lot of the product development process on the silicon side, as well as to provide strategic platforms through which future products can be developed and optimized for market. We also believe that Jim Keller has had a hand in looking at Intel’s manufacturing processes, as well as a number of future products.


Intel’s press release today states that Jim Keller is leaving the position on June 11th due to personal reasons. However, he will remain with the company as a consultant for six months in order to assist with the transition.


As a result of Jim’s departure, Intel has realigned some of its working groups internally with a series of promotions.


  1. Sundari Mitra, the former CEO and founder of NetSpeed, will lead a newly created IP Engineering Group.
  2. Gene Scuteri will head the Xeon and Networking Engineering Group.
  3. Daaman Hejmadi will lead the Client Engineering Group, focused on SoC execution.
  4. Navid Shahriari will continue to lead the Manufacturing and Product Engineering Group.

Jim Keller’s history in the industry has been well documented – his work has had a significant effect in a number of areas that have propelled the industry forward. This includes work on Apple’s A4 and A5 processors and AMD’s K8 and Zen high-level designs, as well as Tesla’s custom silicon for self-driving, which analysts and Tesla’s competitors have said puts the company up to seven years ahead.


In our interview with Jim Keller, conducted several weeks after he took the job at Intel, we learned that Keller went into the company with a spanner in hand. Keller has repeatedly said that he’s a fixer more than a visionary, and that Intel would allow him to effect change at a larger scale than he ever had previously.


From our interview:


JK: I like the whole pipeline, like, I’ve been talking to people about how do our bring up labs and power performance characterization work, such as how does our SoC and integration and verification work? I like examining the whole stack. We’re doing an evaluation on how long it takes to get a new design into emulation, what the quality metrics are, so yeah I’m all over the place.


We just had an AI summit where all the leaders for AI were there, we have quite a few projects going on there, I mean Intel’s a major player in AI already, like virtually every software stack runs on Xeon and we have quite a few projects going on. There’s the advanced development stuff, there’s nuts and bolts execution, there’s process and methodology bring up. Yeah I have a fairly broad experience in the computer business. I’m a ‘no stone unturned’ technical kind of person – when we were in Haifa and I was bugging an engineer about the cleanliness of the fixture where the surface mount packages plug into the test boards.


Jim’s history has shown that he likes to spend a few years at a company and then move on to different sorts of challenges. His two-year stint at Intel has been one of his shortest tenures, and only recently Forbes published a deep exposé on Jim, stating that ‘Intel is betting its chips on microprocessor mastermind Jim Keller’. So the fact that he is leaving this early, even by the standards of his previous roles, is somewhat out of character.


Intel’s press release on the matter suggests that this has been known about for long enough to rearrange some of the working groups to cover Jim’s role. Jim will be serving at Intel for at least another six months, it seems, in the role of a consultant, so it might be that long before he lands another spot in the industry.


It should be noted that Jim Keller is still listed to give one of the keynote addresses at this year’s Hot Chips conference on behalf of Intel. We will update this story if that changes.


Related Reading


 



Source: AnandTech – Jim Keller Resigns from Intel, Effective Immediately

ASUS ROG Maximus XII Hero Wi-Fi Review: The Tale of Two Motherboards

Some of the recent discussion around motherboard design centers on whether motherboard manufacturers are actually adhering to the CPU vendor’s specifications. If a motherboard manufacturer improves the base power delivery and cooling, should they be allowed to go beyond Intel’s suggested turbo power limits, for example? The question is actually rather moot, given that the vendors have been doing this for over a decade in one form or another, to varying degrees of extremity. As this practice has come more into the public light, especially with Intel’s high-end processors going north of 250 watts, companies like ASUS have come under increased scrutiny. That is why, at least with the Maximus XII Hero we are testing today, ASUS offers two options on the first boot: Intel Recommended, and ASUS Optimized.
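For context, a simplified view of what the “Intel Recommended” option implies: the processor may draw up to a short-term limit (PL2) so long as an exponentially weighted moving average of package power stays below the long-term limit (PL1) over a time window τ. The specific figures below are Intel’s commonly published recommendations for the Core i9-10900K, quoted here as an illustrative assumption rather than as ASUS’s documentation:

\[
\mathrm{PL1} = 125\ \mathrm{W}, \qquad \mathrm{PL2} = 250\ \mathrm{W}, \qquad \tau = 56\ \mathrm{s}
\]
\[
\bar{P}_n = \alpha P_n + (1-\alpha)\,\bar{P}_{n-1}, \qquad \text{turbo above PL1 (capped at PL2) only while } \bar{P}_n \le \mathrm{PL1}
\]

An “optimized” setting typically just raises PL1 and PL2 and/or stretches τ out indefinitely, which is where both the extra performance and the extra heat come from.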



Source: AnandTech – ASUS ROG Maximus XII Hero Wi-Fi Review: The Tale of Two Motherboards

GeIL Unveils Orion Series Memory, Up to DDR4-4000 with 32 GB Memory Modules

GeIL has announced its newest family of DDR4 modules, the Orion series. Available in two versions, one standard and one for AMD platforms, the Orion series offers SKUs ranging from single 8 GB sticks up to 64 GB kits with two matching 32 GB memory modules. Meanwhile the new modules will be available at memory speeds ranging from DDR4-2666 up to DDR4-4000.



Clad in either Racing Red or Titanium Grey for something a bit more subtle, GeIL’s Orion series of DDR4 memory is offered in kits specially designed for AMD’s platforms. And for hardware purists (or closed case owners) out there, the Orion range omits the use of RGB LEDs for a more clean-cut look. Meanwhile it’s interesting to note that, at least going by the photos provided by GeIL, the Orion modules look surprisingly tall for otherwise simple, RGB-free memory. Unfortunately we don’t have the physical dimensions of the DIMMs, but users with low-clearance coolers and the like may want to double-check that there will be sufficient room.


On to the technical specifications: GeIL plans to make the Orion flexible, with both single-module and dual-channel kits available. These range from 8 GB to 32 GB modules, with the highest-spec kit topping out at 64 GB of DDR4-4000 with latencies of CL18 and an operating voltage of 1.35 V.









GeIL Orion DDR4 Memory Specifications

Speed        Latency                      Voltage
DDR4-2666    19-19-19-43                  1.20 V
DDR4-3000    16-18-18-36                  1.35 V
DDR4-3200    16-18-18-36 / 22-22-22-52    1.20 – 1.35 V
DDR4-3600    18-20-20-40 / 18-22-22-42    1.35 V
DDR4-4000    18-24-24-44                  1.35 V

Available configurations: 8 GB (1 x 8 GB), 16 GB (1 x 16 GB), 16 GB (2 x 8 GB), 32 GB (1 x 32 GB), 32 GB (2 x 16 GB), and 64 GB (2 x 32 GB)

At present, GeIL hasn’t unveiled pricing for any kits in its Orion series, nor has it provided details of when they will hit retail channels.


Related Reading




Source: AnandTech – GeIL Unveils Orion Series Memory, Up to DDR4-4000 with 32 GB Memory Modules

ZADAK Announces First PCIe SSD: The Spark RGB M.2, NVMe Up to 2 TB

ZADAK, a company that up until now has primarily been known for its memory modules, has just announced its first-ever PCIe 3.0 SSD. The ZADAK Spark PCIe 3.0 x4 M.2 is exactly what it says on the tin – a PCIe 3.0 x4 M.2 SSD – and like so many other products these days, it includes integrated RGB LED lighting, which is built into the included aluminium heatsink.


In terms of performance metrics and specifications, the ZADAK Spark RGB PCIe 3.0 x4 M.2 is rated for sequential read speeds of up to 3,200 MB/s, while sequential write speeds go up to 3,000 MB/s. Meanwhile the drive will be available in three different capacities: 512 GB, 1 TB, and 2 TB.



One of the drive’s more distinctive design features is its integrated RGB LEDs, which look to be mounted on the underside of the SSD. This gives the Spark RGB PCIe 3.0 x4 M.2 SSD more of an underglow, as opposed to a direct light source from the top of the black and silver aluminum heatsink. And rather than reinventing the wheel by developing its own lighting control system, ZADAK has opted to make the integrated RGB lighting compatible with the major motherboard manufacturers’ existing ecosystems. As a result, the RGB lighting can be used with ASRock, ASUS, MSI, and GIGABYTE’s RGB customization software, allowing users to sync the drive’s RGB lighting with compatible RGB-lit motherboards and memory modules.


Unfortunately, ZADAK hasn’t released a list of detailed specifications for the drive, so we don’t currently have any information on the controller type, the thickness of the heatsink, or the type of NAND flash being used. But we do know that the ZADAK Spark RGB PCIe 3.0 x4 M.2 SSD is set to be available in late July, with the 512 GB model starting at $119, while the 2 TB version will go for $389.


Related Reading




Source: AnandTech – ZADAK Announces First PCIe SSD: The Spark RGB M.2, NVMe Up to 2 TB

Intel Discloses Lakefield CPUs Specifications: 64 Execution Units, up to 3.0 GHz, 7 W

Over the past 12 months, Intel has slowly started to disclose information about its first hybrid x86 platform, Lakefield. This new processor combines one ‘big’ CPU core with four ‘small’ CPU cores, along with a hefty chunk of graphics, with Intel setting out to deliver a new computing form factor. Highlights for this processor include its small footprint, due to new 3D stacking ‘Foveros’ technology, as well as its low standby SoC power, as low as 2.5 mW, which Intel states is 91% lower than previous low power Intel processors. Today’s announcement comes in two parts: first, the specifications.



Source: AnandTech – Intel Discloses Lakefield CPUs Specifications: 64 Execution Units, up to 3.0 GHz, 7 W

Patriot's New 32GB Modules Available: UDIMM up to DDR4-3600, SODIMM up to DDR4-3000

Patriot has released new 32GB DDR4 memory modules in its VIPER GAMING STEEL series, complementing the 32GB offerings of the Blackout series, as well as for the first time offering such a capacity in SODIMM format at speeds of up to DDR4-3000.



The biggest addition to Patriot’s repertoire is the new 32GB SODIMM modules, allowing laptop and SFF PC users with corresponding memory slots to double up on the maximum configurable memory all whilst retaining high performance speeds. The small form-factor modules are available in their new 32GB size at DDR4-3000, -2666 and -2400 speeds with timings ranging from 18-20-20-43 at 1.25V for the higher frequency SKU to 15-15-15-35 at 1.2V for the lowest frequency part.


Pricing for the new SODIMM modules lands at $145 for the DDR4-3000 modules and $140 for the -2666 and -2400 variants; they are available now on Amazon and Newegg.



Although Patriot already had 32GB modules available in its UDIMM Blackout series, it’s expanding that offering to the Steel series, which essentially adds a more stylish heat spreader.


The new 32GB modules are available as single modules or as a 2x32GB kit at speeds of up to DDR4-3600 and timings of 18-22-22-42 at 1.35V. The kit is now also available on Amazon and Newegg, and goes for $310.


Related Reading:




Source: AnandTech – Patriot’s New 32GB Modules Available: UDIMM up to DDR4-3600, SODIMM up to DDR4-3000

Electromigration: Why AMD Ryzen Current Boosting Won't Kill Your CPU

Electromigration is an issue that affects all electronics – the act of electrons bumping into silicon or copper atoms and moving them out of the crystal lattice raises the resistance of the wire, requiring more voltage, which exacerbates the issue. With modern processors, built on the nanometer scale, it becomes ever more important to keep the rate of electromigration low, as wires are only dozens of atoms wide. Things that affect the rate of electromigration include voltage, current, and temperature.
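For reference, the classic first-order model for electromigration lifetime is Black’s equation; the article’s analysis may rely on more detailed vendor models, but the dependencies are the same:

\[
\mathrm{MTTF} = A \, J^{-n} \exp\!\left(\frac{E_a}{k_B T}\right)
\]

Here J is the current density through the wire, n an empirical exponent (typically between 1 and 2), E_a the activation energy of the metal, k_B Boltzmann’s constant, and T the absolute temperature. Expected lifetime falls off as a power law with current density and roughly exponentially with temperature, which is why the current and thermal behaviour discussed below matter so much.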


So when it was recently discovered that motherboard manufacturers on AMD’s AM4 platform are adjusting the default current values as detected by Ryzen’s power delivery co-processors, increasing thermals and ultimately resulting in more power being delivered to the CPU, what does this mean? What does this mean for performance? What does this mean for the longevity of the processor? Does this affect electromigration to the extent that I should be worried? Here’s our take on the matter.



Source: AnandTech – Electromigration: Why AMD Ryzen Current Boosting Won’t Kill Your CPU

Cerebras’ Wafer Scale Engine Scores a Sale: $5m Buys Two for the Pittsburgh Supercomputing Center

One of the highlights of Hot Chips 2019 was the presentation of the Cerebras Wafer Scale Engine – an AI processor chip that was as big as a wafer, containing 1.2 trillion transistors and spanning over 46,225 square millimetres of silicon. This was enabled through breakthrough techniques in cross-reticle patterning, and with the level of redundancy built into the design, it ensured a yield of 100% every time. The first WSE system, the CS-1, was put out on display at Supercomputing 2019, where we got a chance to dig into the design with Andrew Feldman, the founder and CEO of Cerebras.



Unfortunately I never got around to writing up my discussions with Andrew; however, what we did learn at the time is that the CS-1 is a fully integrated 15U chassis that requires 20 kW of power, pushed to the chip through 12x 4 kW power supplies (with some redundancy built in). The chip is mounted vertically for the sake of ease of access, which is quite bizarre in the modern world of computing. Most of the chassis was custom built for the CS-1, including the tooling and a fair amount of commercial 3D printing. Andrew also said at the time that while there was no minimum order quantity for the CS-1, each one would cost ‘a few million’.


Today’s announcement from the Pittsburgh Supercomputing Center (PSC) helps narrow that number down to perhaps ~$2 million. Through a $5 million grant from the National Science Foundation (NSF) to the PSC, a new AI supercomputer will be built, called Neocortex. At the heart of Neocortex will be hardware built in partnership with Cerebras and Hewlett Packard Enterprise.


Specifically, there will be two CS-1 machines at the heart of Neocortex. The CS-1 supports asynchronous models through TensorFlow and PyTorch, with the software platform able to optimize the size of the workloads for the available area on the CS-1 Wafer Scale Engine.



Each front panel half is machined from a single piece of aluminium


The pair of CS-1 machines will be coupled with an ‘extreme’ shared-memory HPE Superdome Flex server, which contains 32 Xeon CPUs, 24 TB of DDR4, 205 TB of storage, and 1.2 Tbps of network interfacing. Neocortex is expected to be used to enable AI researchers to train their models, covering areas such as healthcare, disease, power generation, transportation, as well as pressing issues of the day.


The machine will be installed in late 2020. PSC has stated that access to Neocortex will be available to researchers in the US at no cost.


When we spoke to Cerebras last year, the company stated that they already had orders in the ‘strong double digits’. When pressed, I managed to narrow that down to somewhere from ‘12 to several dozen’. A number of machines were ordered for Argonne National Laboratory at the time, and I suspect others are now investing.


Interestingly enough, at Hot Chips 2020 this year, the company is set to disclose its second generation Wafer Scale Engine. At a guess, I would suggest that this is slightly further away from commercialization than the first WSE was when it was announced, but the company seems to have had substantial interest in its technology.


Related Reading





Source: AnandTech – Cerebras’ Wafer Scale Engine Scores a Sale: $5m Buys Two for the Pittsburgh Supercomputing Center

ID-Cooling Aims Low: 47mm Low-Profile CPU Cooler with 130W TDP

One part of the industry that requires millimeter precision is building systems for small form factor designs – being able to take advantage of every small bit of volume inside a chassis while maximizing performance and minimizing noise is critical to the success of these systems. ID-Cooling has thrown another hat into the ring when it comes to cooling the processor, something that can be a tough job in such an enclosed space. The new IS-47K is designed with a maximum height of 47mm, and is apparently rated for CPUs up to 130 W.



Featuring six copper heatpipes and a 92mm PWM fan measuring 15mm thick, the IS-47K situates the fan in between the copper contact plate of the cooler and the heatsink, pushing air up through the aluminium fins, from CPU to outside. The whole element is nickel plated, along with a ‘metal frosted’ frame to keep the dimensions nice and snug. Judging by the renders, this cooler is designed to sit just on top of the rear IO, with a stepped type of cooling to facilitate rear connectors that come back a fair way into the socket area.



It should be noted that while this cooler is rated to support a TDP of 130 W, some processors will surpass that 130 W limit during turbo modes. Users will have to adjust their BIOSes accordingly.



The IS-47K offers brackets for all Intel LGA115x/1200 sockets, as well as AMD AM4. It comes bundled with ID-Cooling’s own TG25 thermal grease, rated at 10.5 W/mK. ID-Cooling claims full memory compatibility with all mini-ITX motherboards, as it doesn’t go over the top of any memory modules. This means that the double-height G.Skill modules, enabling double-density on certain motherboards, should be suitable.


The IS-47K will be sold for $45 at the end of June.


Source: ID-Cooling


Related Reading


 




Source: AnandTech – ID-Cooling Aims Low: 47mm Low-Profile CPU Cooler with 130W TDP

Intel Kaby Lake-G GPU Driver Updates Left In Limbo, Currently Unsupported

While the retail shelf life of Intel’s unusual Kaby Lake-G processor has pretty much passed at this point, it looks like it has become the gift that keeps on giving when it comes to confusion about how support for the combined Intel/AMD chip will work. First spotted by Tom’s Hardware, AMD’s latest Radeon driver installer doesn’t include support for the chip’s AMD “Vega M” GPU, and as a result there are currently no up-to-date drivers available for the platform. And while Tom’s Hardware did get a cryptic-but-promising response from AMD about future driver support, for the moment it’s not clear what’s going to happen or how long-term driver support for the processor will work.


A one-off collaboration between Intel and AMD, Intel’s Kaby Lake-G processor combined a quad-core Intel Kaby Lake CPU with a discrete AMD Polaris-based GPU, all on a single package. With the AMD dGPU covering for Intel’s traditionally weak integrated graphics, Kaby Lake-G gave Intel an interesting chip that could deliver great compute performance and much stronger graphics performance as well.


However since the GPU portion of Kaby Lake-G came from outside Intel, the chip has always existed in an odd place where it’s never been fully embraced by either manufacturer. Even as an Intel-sold and Intel-branded product, Kaby Lake-G’s Radeon roots were never really hidden, and indeed the chip’s GPU drivers have clearly been a derivative of AMD’s standard driver set since the very beginning. But this has also meant that Intel has been reliant on AMD to provide those drivers, and for reasons that are not entirely public or transparent, this hasn’t been handled well. After a very long break between GPU driver updates, Intel essentially gave up on putting any kind of façade on the source of their GPU drivers, and began directing users to install AMD’s Radeon drivers, which at the same time gained official support for the chip.



And that was the end of that. Or so we thought.


Instead, as spotted by the Tom’s Hardware crew, Kaby Lake-G support has once again gone missing from AMD’s drivers. As a result, it’s not possible to install current drivers for the hardware – and even finding drivers that can be installed is a bit of an easter egg hunt.


When they reached out to Intel about the matter – and specifically, about updated drivers for Hades Canyon, Intel’s Kaby Lake-G based NUC –  Tom’s Hardware did get a promising, but nonetheless cryptic response from the chipmaker:


We are working to bring back Radeon graphics driver support to Intel NUC 8 Extreme Mini PCs (previously codenamed “Hades Canyon”).


And for the moment, this is where things stand, with no official explanation as to what’s going on. Driver support for Kaby Lake-G hangs in limbo, as Intel and AMD seem to be unable to sort out responsibility for the chip.


Joint projects like these are some of the most difficult in the industry, as having multiple vendors involved in a single product means that there’s some degree of cooperation required. Which is easier said than done when it involves historic rivals like Intel and AMD. Still, I would have expected that driver support is something that would have been hammered out in a contract early on – such that AMD was committed to delivering (and being paid for) the necessary 5 years of drivers – rather than the current situation of Intel and AMD seemingly dancing around the issue.


In the meantime, here’s to hoping that Kaby Lake-G’s driver situation gets a happier ending in due time.



Source: AnandTech – Intel Kaby Lake-G GPU Driver Updates Left In Limbo, Currently Unsupported

Lion Semi: How High-Efficiency ICs Enable Fast-Charging

The last few years have seen quite a large shift in the mobile market as smartphone vendors have engaged in a literal arms-race aiming for the fastest charging phones possible. In only a few years we’ve seen phones go from what used to be considered “fast charging” at rates of up to 18W to newly advertised 65W rates. What a lot of consumers often misunderstand, however, is that these new fast-charging systems aren’t primarily enabled by new battery technologies, but rather by new advances in charging systems that have become more and more efficient.


There are many different solutions to increasing charging efficiency, but today’s topic centers on a young start-up called Lion Semiconductor that specialises in a very different voltage conversion technology for power ICs: switched-capacitor voltage converters. The San Francisco-based start-up is seeing increasing success in today’s mobile market, where it is enabling vendors to build some of today’s fastest-charging phones.
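As a rough illustration of why this approach matters – the figures here are generic assumptions about 2:1 charge pumps, not Lion Semiconductor’s specifications – an ideal 2:1 switched-capacitor stage halves the input voltage and doubles the output current, and practical implementations are commonly quoted in the 97–98% efficiency range:

\[
V_{\mathrm{out}} \approx \tfrac{1}{2} V_{\mathrm{in}}, \qquad I_{\mathrm{out}} \approx 2 I_{\mathrm{in}}, \qquad P_{\mathrm{loss}} = (1-\eta) P_{\mathrm{in}}, \quad \eta \approx 0.97\text{–}0.98
\]

Because the cable, connector, and board traces upstream of the converter only have to carry roughly half the current that ultimately reaches the battery, their I²R losses drop by about a factor of four compared with delivering the same power at battery voltage – which is a large part of what makes 40 W+ charge rates manageable without excessive heat.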



Source: AnandTech – Lion Semi: How High-Efficiency ICs Enable Fast-Charging

ASRock Rack Offers Rome mATX Motherboard with only 6 Memory Channels

One of the items that makes a motherboard immediately stand out is the number of memory slots it has. For mainstream platforms, having two or four memory slots – dual-channel memory at one DIMM per channel (1 DPC) or two modules per channel (2 DPC) respectively – is normal. If we saw a motherboard with three, it would be a little odd.


We’ve seen high-end desktop platforms have three (Nehalem) or four (almost everything else) memory channels, so we see either 3/4 or 6/8 memory slots respectively for 1 DPC and 2 DPC. When moving into server hardware, Intel’s Xeons have six channels, while AMD’s EPYC has eight channels, so 6/8 and 12/16 for 1 DPC and 2 DPC are obvious.


So what happens when a motherboard displays a different number of memory slots than expected? This is what happens with the new ASRock Rack ROME6U-2L2T motherboard. It supports AMD EPYC processors, both Naples and Rome, which have eight-channel memory. Even at 1 module per channel, we expect a minimum of eight memory slots. But for this motherboard, there are only six.
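To put a back-of-the-envelope number on what that trade-off costs – assuming DDR4-3200, which Rome supports at 1 DPC – dropping from eight populated channels to six reduces peak theoretical memory bandwidth by 25%:

\[
8 \times 8\,\mathrm{B} \times 3200\,\mathrm{MT/s} = 204.8\,\mathrm{GB/s} \qquad \text{vs.} \qquad 6 \times 8\,\mathrm{B} \times 3200\,\mathrm{MT/s} = 153.6\,\mathrm{GB/s}
\]

That is the price paid for squeezing the platform into a micro-ATX footprint.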



Source: AnandTech – ASRock Rack Offers Rome mATX Motherboard with only 6 Memory Channels

AMD Confirms That SmartShift Tech Only Shipping in One Laptop For 2020

Launched earlier this year, AMD’s Ryzen 4000 “Renoir” APUs brought several new features and technologies to the table for AMD. Along with numerous changes to improve the APU’s power efficiency and reduce overall idle power usage, AMD also added an interesting TDP management feature that they call SmartShift. Designed for use in systems containing both an AMD APU and an AMD discrete GPU, SmartShift allows for the TDP budgets of the two processors to be shared and dynamically reallocated, depending on the needs of the workload.
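As a purely illustrative sketch of the idea – this is not AMD’s actual SmartShift algorithm, and the budget figures, floors, and utilization heuristic below are assumptions for illustration only – dynamic TDP sharing boils down to splitting one platform power budget between two processors according to demand:

# Purely illustrative sketch of dynamic TDP sharing between an APU and a dGPU.
# The total budget, per-device floors, and proportional-to-utilization policy
# are assumptions for illustration; AMD's real SmartShift logic is not public.

def reallocate_tdp(total_budget_w, cpu_util, gpu_util,
                   cpu_floor_w=15.0, gpu_floor_w=20.0):
    """Split a shared power budget in proportion to demand, while
    guaranteeing each processor a minimum allocation."""
    shareable = total_budget_w - cpu_floor_w - gpu_floor_w
    demand = cpu_util + gpu_util
    if demand == 0:
        cpu_extra = gpu_extra = shareable / 2
    else:
        cpu_extra = shareable * (cpu_util / demand)
        gpu_extra = shareable * (gpu_util / demand)
    return cpu_floor_w + cpu_extra, gpu_floor_w + gpu_extra

# Example: a GPU-bound game leaves the CPU partly idle, so most of the
# shareable headroom flows to the GPU.
print(reallocate_tdp(80.0, cpu_util=0.35, gpu_util=0.95))

The real implementation lives in firmware and platform hardware, but the principle – one budget, shifted toward whichever processor the workload is stressing – is the same.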



As SmartShift is a platform-level feature that relies upon several aspects of a system, from processor choice to the layout of the cooling system, it is a feature that OEMs have to specifically plan for and build into their designs. Meaning that even if a laptop uses all AMD processors, it doesn’t guarantee that the laptop has the means to support SmartShift. As a result, only a single laptop has been released so far with SmartShift support, and that’s Dell’s G5 15 SE gaming laptop.



Now, as it turns out, Dell’s laptop will be the only laptop released this year with SmartShift support.


In a comment posted on Twitter relating to an interview given to PCWorld’s The Full Nerd podcast, AMD’s Chief Architect of Gaming Solutions (and Dell alumnus) Frank Azor has confirmed that the G5 15 SE is the only laptop set to be released this year with SmartShift support. According to the gaming frontman, the roughly year-long development cycle for laptops combined with SmartShift’s technical requirements meant that vendors needed to plan for SmartShift support early on. And Dell, in turn, ended up being the first OEM to jump on the technology, leading to them being the first laptop vendor to release a SmartShift-enabled laptop.




Azor’s comment further goes on to confirm that AMD is working to get more SmartShift-enabled laptops on the market in 2021; there just won’t be any additional laptops this year. Which leaves us in an interesting situation where Dell, normally one of AMD’s more elusive partners, has what’s essentially a de facto exclusive on the tech for 2020.



Source: AnandTech – AMD Confirms That SmartShift Tech Only Shipping in One Laptop For 2020

GIGABYTE Announces Z490 Aorus Ultra G2 Motherboard, with G2 Esports Tie-In

Collaborations between hardware vendors and esports teams aren’t a new thing, but they are becoming an increasingly common trend within the industry. GIGABYTE and G2 Esports have announced the Z490 Aorus Ultra G2 motherboard, a limited edition version of the Z490 Aorus Ultra model with a few refinements. The new board updates the aesthetics to a mixture of red, silver, and black, and bundles an ESS Sabre ES9280CPRO USB Type-C DAC with the board.


Using the GIGABYTE Z490 Aorus Ultra motherboard as its foundation, the limited edition Z490 Aorus Ultra G2 opts for red aluminium heatsink fins which cool the 12-phase power delivery, with a red, silver, and black aesthetic throughout. It includes two areas of integrated RGB LED lighting: the slash marks on the rear panel cover, and the G2 eye built into the chipset heatsink. There are three full-length PCIe 3.0 slots which operate at x16, x8/x8, and x8/x8+x4, along with three PCIe 3.0 M.2 slots and six SATA ports.



Included in the feature set is an Intel I225 2.5 gigabit Ethernet controller, along with an Intel AX201 Wi-Fi 6 interface. Primarily targeting the mid-range of the Z490 market, the Z490 Aorus Ultra G2 has three USB 3.2 G2 Type-A, one USB 3.2 G2 20 Gbps Type-C, two USB 3.2 G1 Type-A, and four USB 2.0 ports on the rear panel. Also present is a DisplayPort 1.4 video output, with five 3.5 mm audio jacks and an S/PDIF optical output, which are controlled by a Realtek ALC1220-VB HD audio codec.



What separates the Z490 Aorus Ultra G2 from the non-G2 Esports branded model is the accessories bundle. Included is a G2- and GIGABYTE-branded ESS Sabre ES9280CPRO USB DAC, which features a USB Type-C connector and a 3.5 mm output, allowing gamers and audiophiles to benefit from higher quality audio. The bundle also includes an engraved aluminium plaque signed by G2’s CS:GO prodigy kennyS.


The GIGABYTE Z490 Aorus Ultra G2 is set to be available in the US, UK, Germany, France, Spain, Poland and Russia. However, GIGABYTE hasn’t announced pricing, nor do any of the major vendors such as Amazon or Newegg have it listed. 


Related Reading




Source: AnandTech – GIGABYTE Announces Z490 Aorus Ultra G2 Motherboard, with G2 Esports Tie-In

The Biostar Racing Z490GTN Review: $200 for Comet Lake mini-ITX

Small form factor boards are always a key talking point for any desktop market. Mini-ITX boards usually make up around 10% of sales for any given generation, and because these boards end up in lower-cost systems, there tends to be a focus on the cheaper end of the spectrum, even when it comes to the Z-series chipset, which is the one with all the bells and whistles. With Intel’s new Comet Lake-S processors, ranging from Celeron all the way up to Core i9, and with the new socket for Comet Lake, there will be renewed demand from those looking to build a small form factor Intel system. One of the popular low-cost mini-ITX boards in each generation comes from BIOSTAR, and today we’re testing the Z490GTN.



Source: AnandTech – The Biostar Racing Z490GTN Review: $200 for Comet Lake mini-ITX

Amazon Makes AMD Rome EC2 Instances Available

After many months of waiting, Amazon today has finally made available their new compute-oriented C5a AWS cloud instances based on the new AMD EPYC 2nd generation Rome processors with new Zen2 cores.


Amazon had announced way back in November their intentions to adopt AMD’s newest silicon designs. The new C5a instances scale up to 96 vCPUs (48 physical cores with SMT), and were advertised to clock up to 3.3GHz.



The instance offerings scale from 2 vCPUs with 4GB of RAM, up to 96 vCPUs, with varying bandwidth to elastic block storage and network bandwidth throughput.



The actual CPU being used here is an AMD EPYC 7R32, a custom SKU that’s seemingly only available to Amazon / cloud providers. Due to the nature of cloud instances, we don’t actually know the exact core count of the part, or whether this is a 64- or 48-core chip.



We quickly fired up an instance to check the CPU topology, and we’re seeing that the chip has two quadrants populated with the full 2 CCDs (four CCXs in total per quadrant), and two quadrants with seemingly only a single CCD populated (only two CCXs per quadrant).
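For readers who want to repeat this kind of check on their own instance, a minimal sketch is below; it simply groups vCPUs by which ones share an L3 cache (i.e. by CCX) using the standard Linux sysfs layout, and assumes cache index3 is the L3 on this platform:

from collections import defaultdict
from pathlib import Path

# Group logical CPUs by the set of CPUs they share an L3 cache with.
# On Zen 2, each such group corresponds to one CCX.
ccx_groups = defaultdict(list)
for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    l3 = cpu_dir / "cache" / "index3" / "shared_cpu_list"
    if l3.exists():
        ccx_groups[l3.read_text().strip()].append(cpu_dir.name)

for shared, cpus in ccx_groups.items():
    print(f"L3 shared by CPUs {shared}: {len(cpus)} vCPUs in this CCX")

Tools like lscpu or numactl --hardware will give a similar picture with less effort; the point is simply that the CCD/CCX layout is visible from inside the guest.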


I quickly ran some tests, and the CPUs are idling at 1800MHz and boost up to 3300MHz maximum. All-core frequencies (96 threads) can be achieved at up to 3300MHz, but will throttle down to 3200MHz after a few minutes. Compute heavy workloads such as 456.hmmer will run at around 3100MHz all-core.


While it is certainly possible that this is a 64-core chip, Amazon’s offering of 96 vCPU metal instances points against that. On the other hand, the 96 vCPU configuration’s 192GB of memory wouldn’t immediately match up with the memory channel count of the Rome chip unless the two lesser chip quadrants also each had one memory controller disabled. Either that, or there are simply two further CCDs that can’t be allocated – which makes sense for the virtualised instances but would be weird for the metal instance offering.


The new C5a Rome-based instances are available now in eight sizes in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Singapore) regions.


Related Reading:




Source: AnandTech – Amazon Makes AMD Rome EC2 Instances Available

Airy3D's DepthIQ: A Cheap Camera Depth Sensing Solution

Over the last few years, we’ve seen a lot of new technologies in the mobile market trying to address the problem of gathering depth information with a camera system. There have been various solutions from different companies, ranging from IR dot-projectors and IR cameras (structured light) and stereoscopic camera systems, to the more modern dedicated time-of-flight sensors. One big issue with these various implementations has been the fact that they all use quite exotic hardware solutions that can significantly increase the bill of materials of a device, as well as influence its industrial design choices.


Airy3D is a smaller, newer company that has to date only been active on the software front, providing various imaging solutions to the market. The company is now ready to transition to a hybrid business model, describing itself as a hardware-enabled software company.


The company’s main claim to fame right now is the “DepthIQ” platform – a hardware-software solution that promises to bring high-quality depth sensing to single cameras at a much lower cost than any other alternative.



At the heart of Airy3D’s innovation is a piece of hardware added to existing sensors on the market, called a transmissive diffraction mask, or TDM. The TDM is an additional transmissive layer manufactured on top of the sensor, shaped with a specific profile pattern, that is able to encode the phase and direction of light that is then captured by the sensor.


The TDM in essence imposes a diffraction pattern (via the Talbot effect) onto the resulting picture, one that differs based on the distance of a captured object. The neat thing Airy3D is able to do here is employ software algorithms that decode this pattern, transforming the raw 2D image capture into a 3D depth map as well as a 2D image with the diffraction pattern compensated out.


Airy3D’s role in the manufacturing chain of a DepthIQ-enabled camera module is designing the TDM grating, which it then licenses out to sensor manufacturers, who integrate it into their sensors during production. In essence, the company would be partnering with any of the big sensor vendors such as Sony Semiconductor, Samsung LSI or Omnivision in order to produce a complete solution.


I was curious whether the company had any limits in terms of the resolution the TDM can be manufactured at, since many of today’s camera sensors employ 0.8µm pixel pitches and we’re even starting to see 0.7µm sensors coming to market. The company sees no issues in scaling the TDM grating down to 0.5µm – so there’s still a ton of leeway for future sensor generations for years to come.


Adding a transmissive layer on top of the sensor naturally doesn’t come for free, and there is a loss in sharpness. The company is quoting MTF sharpness reductions of around 3.5%, as well as a reduction in the sensitivity of the sensor due to the TDM, in the range of 3-5% across the spectral range.


 
Camera samples without, and with the TDM


The company shared with us some samples from a camera system using the same sensor, once without the TDM, and once with the TDM employed. Both pictures are using the exact same exposure and ISO settings. In terms of sharpness, I wouldn’t say there are major, immediately noticeable differences, but we do see that the image with the TDM employed is darker, a result of the sensor’s reduced quantum efficiency.



The software processing is said to be comparatively light-weight compared to other depth-sensor solutions, and can be done on a CPU, GPU, DSP or even small FPGA.



The resulting depth discernment the solution is able to achieve from a single image capture is quite astounding – and there’s essentially no limit to the resolution that can be achieved, as it scales with the sensor resolution.



More complex depth sensing solutions can add anywhere from $15 to $30 to the BOM of a device. Airy3D sees this technology finding its biggest adoption in the low- and mid-range, as the higher end is usually able to absorb the cost of other solutions, and is also unlikely to be willing to make any sacrifice in image quality on the main camera sensors. A cheaper device, for example, would be able to offer depth-sensing face unlocking with just a simple front camera sensor, which would represent notable cost savings.


Airy3D says they have customers lined up for the technology, and see a lot of potential for it in the future. It’s an extremely interesting way to achieve depth sensing given it’s a passive hardware solution that integrates into an existing camera sensor.



Source: AnandTech – Airy3D’s DepthIQ: A Cheap Camera Depth Sensing Solution

Corsair Issues Recall for Recent SF Series Power Supplies

Following an unexpected uptick in RMA requests, Corsair has initiated a limited recall for some of the manufacturer’s SF series of small form factor PSUs. The SFX power supplies, which were most recently revised in 2018 with the introduction of the SF Platinum series, have quickly become some of the most popular SFX power supplies on the market due to their high quality as well as Corsair’s reputation for support. The latter of which, as it turns out, is getting put to the test, as the company has discovered an issue in a recent run of the PSUs.


As noted by the crew over at Tom’s Hardware, Corsair has posted a notice to its forums alerting users of the recall. According to the company, an investigation of RMA’d PSUs has turned up an issue with PSUs made in the last several months. When in an environment with both “high temperatures, and high humidity”, the PSUs can unexpectedly fail. The fault is apparently a highly variable one – Corsair’s notice reports units failing both out of the box and later on – but thankfully seems to be relatively benign overall, as the problem is on the AC side of the transformer, well before any power is fed to PC components.


Ultimately, while it’s not an issue that Corsair believes will impact every SF series PSU, it’s enough of an issue that the company has initiated a voluntary recall/replacement program for swapping out the affected PSUs. According to the company, the issue is only present in PSUs manufactured within the last several months – from October of 2019 to March of 2020 – with lot codes 194448xx to 201148xx. PSUs manufactured before that window are unaffected, as are PSUs manufactured afterwards. The lot codes can be found on the PSU’s packaging, or if you’re like a certain editor-in-chief who has thrown out their box, it can be found on the PSU sticker itself.
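For anyone checking a pile of units, the comparison boils down to whether the leading four digits of the lot code fall between 1944 and 2011. A quick sketch of that check is below – it assumes those four digits encode the year and week of manufacture, which matches the range Corsair quotes; when in doubt, defer to Corsair’s forum post:

def is_recalled(lot_code: str) -> bool:
    """Return True if an SF-series lot code falls in the recalled
    194448xx .. 201148xx range (October 2019 through March 2020)."""
    digits = "".join(ch for ch in lot_code if ch.isdigit())
    if len(digits) < 4:
        raise ValueError(f"Unrecognised lot code: {lot_code!r}")
    year_week = int(digits[:4])        # e.g. '2011...' -> 2020, week 11
    return 1944 <= year_week <= 2011

print(is_recalled("20104801"))  # True  -> made inside the recall window
print(is_recalled("20150032"))  # False -> made after March 2020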



Meanwhile, it’s worth noting that as part of the recall program, Corsair is offering to ship out replacement PSUs in advance. And of course, shipping costs in both directions are being picked up by Corsair.


All told, it’s extremely rare to see a recall notice put out for power supplies, and particularly high-end units like Corsair’s SF series. Which, if nothing else, makes it a notable event.


The full details on the program, including how to identify affected PSUs, can be found on Corsair’s forums. Affected users can then file a ticket for an advance RMA over on Corsair’s support website.



Source: AnandTech – Corsair Issues Recall for Recent SF Series Power Supplies