Apple Announces iPhone 12 Series: mini, Regular, Pro & Pro Max, all with 5G

A little later in the year than usual, but today we finally saw the announcement of Apple’s newest line-up of iPhones. This time around we didn’t get two, or even three phones, but a total of four new devices ranging both in size as well as in pricing. The iPhone 12 series is a major leap for Apple as they represent the company’s first ever 5G devices, preparing the company for the next generation of cellular networks for the better part of this decade.


The iPhone 12 Pro and 12 Pro Max are both straightforward upgrades to the 11 Pro series, whilst the regular iPhone 12 represents the mainstream option as a successor to the iPhone 11. The new entry in the line-up is the iPhone 12 mini – an incredibly exciting device for people who are looking for a more diminutive form-factor device, being smaller and more light-weight than even the iPhone SE released earlier in the year.


Thanks to the new A14 SoC, we’re seeing upgraded performance across the board, as well as greatly improved image processing across the camera systems, with the iPhone 12 Pro Max in particular standing out in this regard.



Source: AnandTech – Apple Announces iPhone 12 Series: mini, Regular, Pro & Pro Max, all with 5G

More On NVIDIA Quadro Brand Retirement: Embracing the Graphics and Compute Overlap

Now that NVIDIA’s second GTC event of the year has wrapped up, we’ve finally gotten a chance to follow up on last week’s announcement of NVIDIA’s RTX A6000 video card, and what that means for NVIDIA’s Quadro brand. In short, NVIDIA has confirmed that the Quadro brand is going away for sure, and as we suspected, it’s largely due to the overlap between graphics and compute.


As a quick refresher, last week NVIDIA launched their new professional visualization-focused video card, the RTX A6000. Based on the new GA102 GPU, the card ticks all the boxes for a high-end, pro-grade video card; and under normal circumstances, it would be part of NVIDIA’s Quadro family of products. However the card was notably excluded from the Quadro family in something of a last-minute change. At the time it wasn’t clear just what this meant for the Quadro brand as a whole, but now that GTC has wrapped we’ve been given some better insights into what’s going on.


First and foremost, NVIDIA has confirmed that the Quadro brand is being retired, or “streamlined” as the company calls it. Similar to the Tesla brand a couple of years back, the brand is set to be slowly retired from the market, as new professional visualization cards are released without the Quadro branding. Going forward, all of these cards will be given brand-less names, such as the “NVIDIA RTX A6000” and “NVIDIA A40”.



The more interesting aspect to this change is why: why would NVIDIA retire one of its oldest video card brands after so long? After all, the market for pro cards isn’t going away, and it remains a tidy, profitable business for NVIDIA. At the time we suspected that this had to do with the increasing overlap in NVIDIA’s product lines between professional visualization cards and compute cards, and the company has since confirmed that our hunch was correct.


As NVIDIA has continued to expand into the compute market, their professional visualization (ProViz) and compute products have increasingly overlapped in terms of features and pricing. As NVIDIA already charges “full” price for both their compute and ProViz cards, there are little-if-any feature differences between the two: desktop ProViz cards have the same access to compute features as compute cards. And compute cards, though almost exclusively server-mounted, can be provisioned as a virtual ProViz card as well.


One consequence of this has been that NVIDIA’s own messaging on which cards can do which tasks has become unfocused, to say nothing of potentially confusing customers. If you need an actively-cooled desktop card for neural network prototyping, for example, what card do you buy? Previously it was a Quadro card, despite the fact that it was a ProViz part. Similarly, the ex-Tesla V100 makes a great part for provisioning a virtual Quadro instance, even though it’s not a Quadro part.


As a result, NVIDIA has opted to go the route of essentially merging their compute and ProViz hardware lineups in an effort to simplify their offerings. NVIDIA wants there to be a single brand – NVIDIA – which covers both markets, reflecting the flexibility of their cards and (largely) eliminating questions over which cards can be used for graphics or compute. At the same time, this also allows NVIDIA to reduce the number of hardware SKUs offered, as they no longer need overlapping products at the fringes of these markets.


Ultimately, the market for ProViz and the market for computing have quickly become one and the same. Though the two differ in their specific needs, they still use the same NVIDIA hardware and pay the same NVIDIA “premium”. So both are set to become a single product line to cover the needs of all of NVIDIA’s professional and commercial customers, whatever their graphics and compute needs.



Source: AnandTech – More On NVIDIA Quadro Brand Retirement: Embracing the Graphics and Compute Overlap

The Apple 2020 Fall iPhone Event Live Blog 10am PT (17:00 UTC)

Today Apple is holding its second fall 2020 launch event – only a few weeks after the traditional September launch, which saw the unveiling of a new Apple Watch and a new line of iPads, including the new iPad Air sporting the 5nm Apple A14 SoC. What was missing from the September event was any announcement of new iPhones – whose timing seems to have slipped slightly this year.


Today’s event should cover the new iPhones, and if industry reports are accurate, we should be seeing quite a slew of new devices in the form of two “regular” iPhones and two Pro models, for a total of four devices. It should mark the first time in 3 years that Apple introduces a new iPhone design, and this generation should be the first to support 5G connectivity.


As always, we’ll be live-blogging the event and offering live commentary on Apple’s newest revelations.


The event starts at 10am PDT (17:00 UTC).



Source: AnandTech – The Apple 2020 Fall iPhone Event Live Blog 10am PT (17:00 UTC)

Imagination Announces B-Series GPU IP: Scaling up with Multi-GPU

It has been almost a year since Imagination announced its brand-new A-Series GPU IP, a release which at the time the company called its most important in 15 years. The new architecture indeed marked some significant updates to the company’s GPU IP, promising major uplifts in performance and competitiveness. Since then, other than a slew of internal scandals, we’ve heard very little from the company – until today’s announcement of the next generation of IP: the B-Series.

The new Imagination B-Series is an evolution of last year’s A-Series GPU IP release, further iterating through microarchitectural improvements, but most importantly, scaling the architecture up to higher performance levels through a brand-new multi-GPU system, as well as the introduction of a new functional safety class of IP in the form of the BXS series.



Source: AnandTech – Imagination Announces B-Series GPU IP: Scaling up with Multi-GPU

ASRock Industrial's NUC 1100 BOX Series Brings Tiger Lake to UCFF Systems

Intel’s introduction of the Tiger Lake U-series processors with support for a range of TDPs up to 28W has resulted in vendors launching a number of interesting systems with a twist on the original NUC’s 100mm x 100mm ultra-compact form-factor (UCFF). Notable among these have been GIGABYTE’s BRIX PRO (3.5″ SBC form-factor) and ASRock Industrial’s STX-1500 mini-STX board, with the latter adopting the embedded versions of the Tiger Lake-U processors. ASRock Industrial also happens to be one of the first to adopt the Tiger Lake-U series for traditional UCFF systems with the launch of their NUC 1100 BOX series.


Intel’s Tiger Lake-based NUCs (Panther Canyon and Phantom Canyon) are an open secret in tech circles, but are yet to be officially announced. ASRock Industrial’s Tiger Lake NUCs such as the NUC BOX-1165G7 have also been hinted at in Intel’s marketplace – a retail follow-up to the embedded market-focused iBOX 1100 and NUC 1100 solutions. Today’s announcement makes the Tiger Lake NUCs from ASRock Industrial official. The company is launching three models in this series – NUC BOX-1165G7, NUC BOX-1135G7, and NUC BOX-1115G4. The specifications are summarized in the table below.


ASRock Industrial NUC 1100 BOX (Tiger Lake-U) Lineup
Model            NUC BOX-1115G4           NUC BOX-1135G7           NUC BOX-1165G7
CPU              Core i3-1115G4           Core i5-1135G7           Core i7-1165G7
                 2C/4T                    4C/8T                    4C/8T
                 1.7 – 4.1 GHz (3.0 GHz)  0.9 – 4.2 GHz (2.4 GHz)  1.2 – 4.7 GHz (2.8 GHz)
                 12 – 28 W (28 W)         12 – 28 W (28 W)         12 – 28 W (28 W)
GPU              UHD Graphics (48 EU)     Iris Xe Graphics (80 EU) Iris Xe Graphics (96 EU)
                 @ 1.25 GHz               @ 1.3 GHz                @ 1.3 GHz
DRAM             Two DDR4 SO-DIMM slots; up to 64 GB of DDR4-3200 in dual-channel mode
Motherboard      4.02″ x 4.09″ UCFF
Storage          SSD: 1 x M.2-2280 (PCIe 4.0 x4 (CPU-direct) or SATA III)
                 DFF: 1 x SATA III port (for 2.5″ drive)
Wireless         Intel Wi-Fi 6 AX200 (2x2 802.11ax Wi-Fi + Bluetooth 5.1 module)
Ethernet         1 x GbE port (Intel I219-V); 1 x 2.5 GbE port (Intel I225-LM)
USB              Front: 1 x USB 3.2 Gen 2 Type-A, 2 x USB 3.2 Gen 2 Type-C
                 Rear: 2 x USB 3.2 Gen 2 Type-A
Display Outputs  1 x HDMI 2.0a; 1 x DisplayPort 1.4
                 2 x DisplayPort 1.4 (using front panel Type-C ports)
Audio            1 x 3.5mm audio jack (Realtek ALC233)
PSU              External (90 W)
Dimensions       117.5 mm (L) x 110 mm (W) x 47.85 mm (H)
MSRP             ?                        ?                        ?

A striking aspect of the NUC 1100 BOX-series chassis is its similarity to the 4X4 BOX-4000U series.



According to the products’ datasheet, ASRock Industrial plans to get the two Type-C ports in the front panel certified for USB4. Since the certification plan is still pending, they are being advertised as USB 3.2 Gen 2 for now. They also believe that Thunderbolt 3 devices can be used in the front Type-C ports (since Intel claims four USB4 / Thunderbolt 4 ports on Tiger Lake) – that would be interesting to test out, given the logo on the chassis only indicates SuperSpeed 10Gbps with DP-Alt Mode support.


The key updates compared to the existing NUCs from various vendors (based on Comet Lake-U) are support for four simultaneous 4Kp60 displays along with the 2.5 GbE wired LAN interface. The performance advantages provided by the 10nm Tiger Lake parts with their new microarchitecture should help the NUC BOX-1100 series gain an edge over the 4X4 BOX-4000U series (based on the Renoir APUs) in single-threaded workloads. On the multi-threaded and GPU-intensive side, it is shaping up to be an interesting tussle – one we hope to analyze in more detail in our hands-on review.


Since the unit also targets the embedded market, it has the usual bells and whistles, including an integrated watchdog timer and an on-board TPM. Pricing is slated to be announced towards the end of October 2020.



Source: AnandTech – ASRock Industrial’s NUC 1100 BOX Series Brings Tiger Lake to UCFF Systems

The Acer Nitro 5 Review: Renoir And Turing On A Budget

Acer has had a big year in 2020, thanks to their close relationship with AMD. Acer has long been a strong AMD partner, through the good times and the bad, and right now is about as good a time to be an AMD partner as it can be. AMD’s Renoir platform has been a revolution for their mobile device efforts. The company has had strong desktop offerings ever since it launched the Ryzen platform in 2017, but those successes did not translate over to the laptop space. With the latest Ryzen 4000 series processors, aka Renoir, all of that has changed.



Source: AnandTech – The Acer Nitro 5 Review: Renoir And Turing On A Budget

AMD Teases Radeon RX 6000 Card Performance Numbers: Aiming For 3080?

As part of today’s Zen 3 desktop CPU announcement from AMD, the company also threw in a quick teaser from the GPU side of the company in order to show off the combined power of their CPUs and GPUs. The other half of AMD is preparing for their own announcement in a few weeks, where they’ll be holding a keynote for their forthcoming Radeon RX 6000 video cards.


With the recent launch of NVIDIA’s Ampere-based GeForce RTX 30 series parts clearly on their minds, AMD briefly teased the performance of a forthcoming high-end RX 6000 video card. The company isn’t disclosing any specification details of the unnamed card – short of course that it’s an RDNA2-based RX 6000 part – but the company did disclose a few choice benchmark numbers from their labs.



Dialing things up to 4K at maximum quality, AMD benchmarked Borderlands 3, Gears of War 5, and Call of Duty: Modern Warfare (2019). And while these are unverified results being released for marketing purposes – meaning they should be taken with a grain or two of salt – the implied message from AMD is clear: they’re aiming for NVIDIA’s GeForce RTX 3080 with this part.


Assuming these numbers are accurate, AMD’s Borderlands 3 performance is practically in lockstep with the 3080. However, the Gears 5 results are a bit more modest, and 73fps would have AMD trailing by several percent. Finally, Call of Duty does not have a standardized benchmark, so although 88fps at 4K looks impressive, it’s impossible to say how it compares to other hardware.


Meanwhile, it’s worth noting that as with all vendor performance teases, we’re likely looking at AMD’s best numbers. And of course, expect to see a lot of ongoing fine tuning from both AMD and NVIDIA over the coming weeks and months as they jostle for position, especially if AMD’s card is consistently this close.


Otherwise, the biggest question that remains for another day is which video card these performance numbers are for. It’s a very safe bet that this is AMD’s flagship GPU (expected to be Navi 21), however AMD is purposely making it unclear if this is their lead configuration, or their second-tier configuration. Reaching parity with the 3080 would be a big deal on its own; however if it’s AMD’s second-tier card, then that would significantly alter the competitive landscape.


Expect to find out the answers to this and more on October 28th, when AMD hosts their Radeon RX 6000 keynote.



Source: AnandTech – AMD Teases Radeon RX 6000 Card Performance Numbers: Aiming For 3080?

AMD Ryzen 5000 and Zen 3 on Nov 5th: +19% IPC, Claims Best Gaming CPU

Dr. Lisa Su, the CEO of AMD, has today announced the company’s next generation mainstream Ryzen processor. The new family, known as the Ryzen 5000 series, includes four parts and supports up to sixteen cores. The key element of the new product is the core design, with AMD’s latest Zen 3 microarchitecture promising a 19% raw increase in performance-per-clock, well above recent generational improvements. The new processors are socket-compatible with existing 500-series motherboards, and will be available at retail from November 5th. AMD is laying down a clear marker, calling one of its halo products ‘The World’s Best Gaming CPU’. We have the details.



Source: AnandTech – AMD Ryzen 5000 and Zen 3 on Nov 5th: +19% IPC, Claims Best Gaming CPU

AMD Zen 3 Announcement by Lisa Su: A Live Blog at Noon ET (16:00 UTC)

One of the most anticipated launches of 2020 is now here. AMD’s CEO, Dr. Lisa Su, is set to announce and reveal the new Ryzen 5000 series processors using AMD’s new Zen 3 microarchitecture. Aside from confirming the product is coming this year, there are very few concrete facts to go on: we are expecting more performance as well as a competitive product. The presentation is scheduled to last 30 minutes, so we hope there is some juicy information to go on.


Come back at Noon ET for reporting and analysis at AnandTech.



Source: AnandTech – AMD Zen 3 Announcement by Lisa Su: A Live Blog at Noon ET (16:00 UTC)

Western Digital Launches New WD Black NVMe SSDs And Thunderbolt Dock

Today Western Digital is announcing a major expansion of their WD Black family of gaming-oriented storage products. In a digital event later today on Twitch, Western Digital will introduce their first PCIe Gen4 SSD, a new high-end PCIe Gen3 SSD, and their first Thunderbolt Dock.


WD Black SN850 PCIe Gen4 SSD



The new WD Black SN850 is Western Digital’s first PCIe 4 SSD and the successor to their WD Black SN750. The SN850 features Western Digital’s second generation in-house NVMe SSD controller and can hit speeds of 7GB/s (sequential) and 1M IOPS (random). The SN850 will initially be available as a standard M.2 NVMe SSD, suitable for gaming PCs and expected to work in the upcoming Sony PS5. Western Digital is also working on a version of the WD Black SN850 that will add a heatsink and RGB lighting. The plain M.2 version will be hitting the market later this fall with capacities from 500GB to 2TB, while the RGB+heatsink version likely will not be ready until next year.


WD Black SN850 Specifications
Capacity            500 GB       1 TB         2 TB
Form Factor         M.2 2280, single-sided (optional heatsink)
Interface           PCIe 4.0 x4 NVMe
Controller          Western Digital in-house, second generation
NAND Flash          SanDisk 3D TLC
Sequential Read     7000 MB/s
Sequential Write    4100 MB/s    5300 MB/s    5100 MB/s
Warranty            5 years
Write Endurance     300 TB       600 TB       1200 TB
                    (0.3 DWPD)   (0.3 DWPD)   (0.3 DWPD)
MSRP (no heatsink)  $149.99      $229.99      $449.99

 


WD Black AN1500 SSD: PCIe Gen4 Speeds for Gen3 Systems


For gamers on desktops that only support PCIe Gen3 speeds, Western Digital is introducing a new high-end SSD option. The WD Black AN1500 PCIe 3 x8 add-in card SSD puts two of their SN730 SSDs (OEM equivalents of the SN750) in a RAID-0 configuration for increased performance and capacity. The AN1500 uses the Marvell 88NR2241 NVMe RAID chip, which we reported on earlier this week as part of HPE’s new RAID1 card for server boot drives. Thanks to that hardware RAID capability, the AN1500 operates as a single drive with a PCIe 3.0 x8 uplink allowing for read speeds of 6.5GB/s and write speeds of 4.1GB/s. Since the AN1500 internally uses a pair of SN730/SN750 M.2 SSDs, the AN1500’s capacities are doubled: the smallest model is 1TB and the largest option is 4TB. The card is armored by a substantial aluminum heatsink and backplate that match the recent WD_BLACK design language, including customizable RGB lighting around the edge.



Single-chip NVMe SSD controllers supporting a PCIe 3 x8 interface do exist, but they’re only used in high-end enterprise SSDs. That means the WD Black AN1500 is the first consumer NVMe SSD capable of using an 8-lane interface, without the hassle of software RAID as used by competing NVMe RAID solutions. The AN1500 does not require PCIe port bifurcation support from the host system, and is also usable (with reduced performance) in PCIe slots that only provide four lanes of PCIe.
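The x8 uplink is the whole point here. A rough bandwidth calculation – a sketch using the well-known PCIe 3.0 and 4.0 link parameters, not figures from Western Digital – shows why four Gen3 lanes would bottleneck the AN1500’s rated reads while eight lanes leave headroom:

```python
# Back-of-the-envelope PCIe link bandwidth, illustrating why the AN1500
# needs a Gen3 x8 link to sustain its rated 6.5 GB/s reads.
# Illustrative only; real throughput is further reduced by protocol overhead.

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Raw one-direction link bandwidth in GB/s after line-code overhead."""
    # Per-lane transfer rate (GT/s) and line-coding efficiency per generation
    rates = {3: (8.0, 128 / 130), 4: (16.0, 128 / 130)}
    gt_per_s, coding = rates[gen]
    # GT/s * coding = payload Gbit/s per lane; * lanes; / 8 -> GB/s
    return gt_per_s * coding * lanes / 8

x4 = pcie_bandwidth_gbs(3, 4)  # ~3.94 GB/s -> would bottleneck 6.5 GB/s reads
x8 = pcie_bandwidth_gbs(3, 8)  # ~7.88 GB/s -> headroom for 6.5 GB/s reads
print(f"Gen3 x4: {x4:.2f} GB/s, Gen3 x8: {x8:.2f} GB/s")
```

The same arithmetic also shows why the card still works, just slower, in an x4 slot: the link simply caps reads at roughly 3.9 GB/s instead of the rated 6.5 GB/s.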


WD Black AN1500 Specifications
Capacity               1 TB      2 TB      4 TB
Form Factor            PCIe add-in card
Interface              PCIe 3.0 x8
Controller             2x WD in-house NVMe + Marvell 88NR2241 RAID-0
NAND Flash             SanDisk 3D TLC
Sequential Read        6500 MB/s
Sequential Write       4100 MB/s
4kB Random Read IOPS   760k      780k      780k
4kB Random Write IOPS  690k      700k      710k
Power                  Read: 15.7 W / Write: 12.8 W / Idle: 8.5 W
Warranty               5 years
MSRP                   $299.99   $549.99   $999.99

 


WD Black D50 Thunderbolt 3 Game Dock


The WD Black family of products for external storage is also getting a new member. The current lineup consists of the P10 portable hard drive, P50 portable SSD, and D10 desktop 3.5″ external hard drive. The obvious gap is a desktop-oriented external SSD, but the new Western Digital WD Black D50 goes a bit beyond that: rather than merely provide Thunderbolt-attached NVMe storage, the D50 is a full Thunderbolt 3 dock providing a variety of port expansion. The D50 Game Dock will be available with either 1TB or 2TB of NVMe storage, and in a dock-only version without built-in storage. None of the three models are intended to allow the user to upgrade the storage. Customizable RGB lighting is of course present.



The WD Black D50’s natural competition will be Seagate’s similar FireCuda Gaming Dock. Seagate’s dock comes with a 4TB hard drive, an empty M.2 PCIe slot for the user to install the SSD of their choice, and slightly more ports. The WD Black D50 Game Dock is smaller overall, provides power to a connected laptop, and is intended to be used in a vertical orientation – it has a weighted base to help keep it upright.


The WD Black D50 with no built-in storage has a MSRP of $319.99, the 1TB model is $499.99, and the 2TB model is $679.99.



As Western Digital continues moving their WD Black brand toward a focus specifically on gaming, the products have inevitably been infected with RGB lighting. Western Digital’s own WD_BLACK Dashboard software for Windows can control these lighting elements, but Western Digital is also working to integrate with other RGB control systems. They currently have support for Gigabyte RGB Fusion 2.0, MSI Mystic Light Sync and ASUS Aura, and support for Razer Chroma RGB will be ready soon.



Source: AnandTech – Western Digital Launches New WD Black NVMe SSDs And Thunderbolt Dock

Intel Confirms Rocket Lake on Desktop for Q1 2021, with PCIe 4.0

In a blog post on Medium today, Intel’s John Bonini has confirmed that the company will be launching its next-generation desktop platform in Q1 2021. This is confirmed as Rocket Lake, presumably under Intel’s 11th Gen Core branding, and will feature PCIe 4.0 support. After several months of Rocket Lake and PCIe 4.0 support being mentioned in rumors (and on Z490 motherboards), this note from Intel is the primary source that confirms it all.


The blog post doesn’t go into any further detail about Rocket Lake. From our side of the fence, we assume this is another 14nm processor, with questions as to whether it is built upon the same Skylake architecture as the previous five generations of 14nm, or is a back-port of Intel’s latest Cove microarchitecture designs. As for the PCIe 4.0 support, there’s no specific indication at this time that there will be an increase in PCIe lane counts from the CPU, although that idea has been floated. Some motherboards, such as the ASRock Z490 Aqua, seem to have been built with the idea of a PCIe 4.0-specific M.2 storage slot, which when in use makes the PCIe 3.0 slot no longer accessible.


It is notable in the blog that John Bonini (VP/GM for Intel’s Desktop/Workstation/Gaming) cites high processor frequencies as a key metric for high performance in games and popular applications, mentioning Intel’s various Turbo Boost technologies. In the same paragraph, he then cites overclocking Intel’s processors to 7 GHz, failing to mention that this sort of overclocking isn’t done for the sake of gaming or workflow. The blog post also seems to bounce between talking about enthusiast gamers on the bleeding edge squeezing out every bit of performance at the top end, and casual gamers on mobile graphics; it comes across as erratic and unfocused. Note that this blog post is published on Medium rather than Intel’s own website, for whatever reason, and also seems to change font size mid-paragraph in the version we were sent.


The reason why this blog post is being published today, in my opinion, is two-fold. Firstly, recent unconfirmed leaks regarding Intel’s roadmap have placed the next generation of desktop processors firmly into that Q1/Q2 crossover in 2021. By coming out and confirming a Q1 launch window, Intel is at least putting those rumors to bed. The second reason is down to what the competition is announcing: AMD has a Zen 3 related presentation on October 8th, and so with Intel’s footnote, we at least know what’s going on with both team blue and team red.


Related Reading




Source: AnandTech – Intel Confirms Rocket Lake on Desktop for Q1 2021, with PCIe 4.0

The NZXT N7 Z490 Motherboard Review: From A Different Direction

It’s been nearly two years to the day since NZXT last released a motherboard, which was the Z370 N7. NZXT initially used ECS as its motherboard OEM, but has opted to use ASRock this time round for the new N7 model. This has the same N7 infused armor, albeit using a combination of metal and plastic instead of just metal, which reduces the overall cost. Aiming for the mid-range market, NZXT’s N7 Z490 features 2.5 GbE, Wi-Fi 6, dual M.2, and four SATA ports, and we give it our focus in this review.



Source: AnandTech – The NZXT N7 Z490 Motherboard Review: From A Different Direction

Insights into DDR5 Sub-timings and Latencies

Today we posted a news article about SK hynix’s new DDR5 memory modules for customers – 64 GB registered modules running at DDR5-4800, aimed at the preview systems that the big hyperscalers start playing with 12-18 months before anyone else gets access to them. It is interesting to note that SK hynix did not publish any sub-timing information about these modules, and as we look through the announcements made by the major memory manufacturers, one common theme has been a lack of detail about sub-timings. Today we can present information across the full range of DDR5 specifications.
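One reason sub-timings matter: a higher CAS latency in clock cycles does not automatically mean a slower module, because the clock runs faster. As a quick sketch, the standard conversion to absolute latency in nanoseconds looks like this (the DDR5-4800 CL value below is purely illustrative, since real sub-timings are exactly what hasn’t been published):

```python
# Convert a CAS latency (in clock cycles) plus data rate (MT/s) into
# absolute latency in nanoseconds -- the usual way to compare modules
# across generations. The CL40 figure for DDR5-4800 is an assumption
# for illustration, not a published spec.

def cas_latency_ns(cl_cycles: int, data_rate_mt_s: int) -> float:
    # DDR transfers twice per clock, so clock period (ns) = 2000 / data rate
    return 2000 * cl_cycles / data_rate_mt_s

print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(40, 4800))  # DDR5-4800 CL40 -> ~16.7 ns
```

By this measure, a hypothetical DDR5-4800 CL40 module would have a notably higher absolute first-word latency than common DDR4-3200 CL16 parts, which is why the unpublished sub-timings are the interesting part of these announcements.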



Source: AnandTech – Insights into DDR5 Sub-timings and Latencies

Marvell and HPE Introduce NVMe RAID Adapter for Server Boot Drives

In 2018 Marvell announced the 88NR2241 Intelligent NVMe Switch: the first—and so far, only—NVMe hardware RAID controller of its kind. Now that chip has scored its first major (public) design win with Hewlett Packard Enterprise. The HPE NS204i-p is a new RAID adapter card for M.2 NVMe SSDs, intended to provide RAID-1 protection to a pair of 480GB boot drives in HPE ProLiant and Apollo systems.


The HPE NS204i-p is a half-height, half-length PCIe 3.0 x4 adapter card designed by Marvell for HPE. It features the 88NR2241 NVMe switch and two M.2 PCIe x4 slots that connect through the Marvell switch. This is not a typical PCIe switch as often seen providing fan-out of more PCIe lanes, but one that operates at a higher level and natively understands the NVMe protocol.


The NS204i-p adapter is configured specifically to provide RAID-1 (mirroring) of two SSDs, presenting them to the host system as a single NVMe device. This is the key advantage of the 88NR2241 over other NVMe RAID solutions: the host system doesn’t need to know anything about the RAID array and continues to use the usual NVMe drivers. Competing NVMe RAID solutions in the market are either SAS/SATA/NVMe “tri-mode” RAID controllers that require NVMe drives to be accessed using proprietary SCSI interfaces, or are software RAID systems with the accompanying CPU overhead.


Based on the provided photos, it looks like HPE is equipping the NS204i-p with a pair of SK hynix NVMe SSDs. The spec sheet indicates these are from a read-oriented product tier, so the endurance rating should be 1 DWPD (somewhere around 876 TBW for 480GB drives).
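That ~876 TBW estimate follows directly from the DWPD rating: drive writes per day, times capacity, times the days in the warranty period. A quick sketch of the arithmetic:

```python
# Sanity-check of the endurance estimate quoted above: a 1 DWPD rating
# over a 5-year warranty on a 480 GB drive works out to ~876 TB written.

def endurance_tbw(capacity_gb: float, dwpd: float, warranty_years: float) -> float:
    # capacity (GB) * full-drive writes per day * days in warranty, GB -> TB
    return capacity_gb * dwpd * warranty_years * 365 / 1000

print(endurance_tbw(480, 1.0, 5))  # -> 876.0 TB
```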


This solution is claimed to offer several times the performance of SATA boot drive(s), and can achieve high availability of the OS and log storage without using up front hot-swap bays on a server. The HPE NS204i-p is now available for purchase from HPE, but pricing has not been publicly disclosed.


 


Related Reading




Source: AnandTech – Marvell and HPE Introduce NVMe RAID Adapter for Server Boot Drives

DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix

Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumer. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK Hynix is launching today, or at least the chips that go into these modules.



Source: AnandTech – DDR5 is Coming: First 64GB DDR5-4800 Modules from SK Hynix

USB 3.2 Gen 2×2 State of the Ecosystem Review: Where Does 20Gbps USB Stand in 2020?

USB has emerged as the mainstream interface of choice for data transfer from computing platforms to external storage devices. Thunderbolt has traditionally been thought of as a high-end alternative. However, USB has made rapid strides in the last decade in terms of supported bandwidth – from a top speed of 5 Gbps in 2010, the ecosystem moved to devices supporting 10 Gbps in 2015. Late last year, we saw the retail availability of 20 Gbps support with USB 3.2 Gen 2×2 on both the host and device sides. Almost a year down the line, how is the ecosystem shaping up in terms of future potential? Do the Gen 2×2 devices currently available in the retail market live up to their billing? What can consumers do to take advantage of the standard without breaking the bank? Read on to find out.
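For reference when weighing these speed grades, the headline line rates convert to usable payload ceilings roughly as follows – a back-of-the-envelope sketch using the published line codings (8b/10b for the 5 Gbps grade, 128b/132b for the 10 and 20 Gbps grades); real-world throughput is lower still due to protocol overhead:

```python
# Rough payload-bandwidth ceiling for the USB 3.x speed grades, after
# subtracting line-code overhead. Protocol overhead (packet framing,
# flow control) reduces achievable throughput further in practice.

def usb_payload_mb_s(lanes: int, gbaud_per_lane: float, coding: float) -> float:
    # lanes * symbol rate * coding efficiency = payload Gbit/s; /8 -> GB; *1000 -> MB/s
    return lanes * gbaud_per_lane * coding * 1000 / 8

gen1  = usb_payload_mb_s(1, 5.0, 8 / 10)      # USB 3.2 Gen 1:   ~500 MB/s
gen2  = usb_payload_mb_s(1, 10.0, 128 / 132)  # USB 3.2 Gen 2:   ~1212 MB/s
gen22 = usb_payload_mb_s(2, 10.0, 128 / 132)  # USB 3.2 Gen 2x2: ~2424 MB/s
print(f"{gen1:.0f} / {gen2:.0f} / {gen22:.0f} MB/s")
```

These ceilings explain the roughly ~2 GB/s sequential figures typically quoted for retail Gen 2×2 enclosures.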



Source: AnandTech – USB 3.2 Gen 2×2 State of the Ecosystem Review: Where Does 20Gbps USB Stand in 2020?

NVIDIA BlueField-2 DPUs Set to Ship In 2021, Roadmaps BlueField-3&4 By 2023

Continuing this morning’s run of GTC-related announcements, NVIDIA is offering yet another update on the state of their Data Processing Unit (DPU) project. An initiative inherited from Mellanox as part of that acquisition, NVIDIA and Mellanox have been talking up their BlueField-2 DPUs for the better part of the last year. And now the company is finally nearing a release date, with BlueField-2 DPUs sampling now, and set to ship in 2021.


Originally hatched by Mellanox before the NVIDIA acquisition, the DPU was Mellanox’s idea for the next generation of SmartNICs, combining their networking gear with a modestly powerful Arm SoC to offload various tasks from the host system, such as software-defined networking and storage, as well as dedicated acceleration engines. Mellanox had been working on the project for some time, and while the original BlueField products saw a relatively low-key release last year, the company has been hard at work on BlueField-2, which NVIDIA has since elevated to a much greater position.



This second generation of DPU-accelerated hardware goes under the BlueField-2 name, and the two companies have been talking about it for most of the past year. The BlueField-2 is based on a custom SoC that uses 8 Arm Cortex-A72 cores along with a pair of VLIW acceleration engines. All of this is then paired with a ConnectX-6 DX NIC for actual network connectivity. At a high level, the DPU is intended to be the next step in the gradual movement towards domain-specific accelerators within the datacenter, offering a more specialized processor that can offload networking, storage, and security workloads from the host CPU.


Coming off of their success broadening the applications for GPUs in the datacenter market, it’s easy to see NVIDIA’s interest in the DPU project: this is another piece of silicon they can sell to server builders and datacenter operators, and it further undermines the importance of the one thing NVIDIA doesn’t have, a server-class CPU. So although not a project started by NVIDIA, it’s a project they’re fully embracing and expanding upon.


As the bulk of today’s DPU announcement is a recap for NVIDIA, the actual product plans for BlueField-2 have not changed. NVIDIA will be releasing two DPU-equipped cards, the BlueField-2 and the BlueField-2X. The former is a more traditional SmartNIC with the DPU and two 100Gb/sec Ethernet/InfiniBand ports. This allows it to be used for networking as well as storage tasks like NVMe-over-Fabrics.



Meanwhile the larger BlueField-2X incorporates a DPU as well as one of NVIDIA’s Ampere GPUs for further acceleration via in-network computing, as NVIDIA likes to call it. NVIDIA hasn’t disclosed the GPU used on the BlueField-2X, but if these renders are accurate, then the number of memory chips indicates it’s GA102, the same chip going into NVIDIA’s high-end video cards. That would make the BlueField-2X a very potent card with regards to compute performance.



And NVIDIA’s plans don’t stop with the BlueField-2 products. The company has planned out a series of successors to BlueField-2, which will be released as the BlueField-3 and BlueField-4 families in successive years. BlueField-3 will be a souped-up version of BlueField-2, with separate DPU and DPU + GPU cards. Meanwhile BlueField-4 will be the first part where NVIDIA’s influence makes it into the core silicon, with the company planning a single high-performance DPU that should significantly outperform the earlier discrete DPU + GPU designs. All told, NVIDIA is expecting BlueField-4 to offer 400 TOPS of AI performance.



All of this, in turn, will come with NVIDIA’s traditional embrace of both hardware and software. The company is looking to mirror its CUDA strategy with DPUs, offering the Data Center Infrastructure-on-a-Chip Architecture (DOCA) as the software stack and programming model for BlueField-2 and later DPUs. This means assembling high-grade SDKs for developers to use, and then extending support for those SDKs and libraries over multiple generations. NVIDIA is clearly just getting DOCA off the ground, but if history is any indication, software will play a huge role in the growth of the SmartNIC market, just like it did for GPUs a decade prior.



Wrapping things up, the first BlueField-2 cards are now sampling to NVIDIA’s partners. Meanwhile commercial shipments will kick off in 2021, and BlueField-3 shipments may follow as soon as 2022.




Source: AnandTech – NVIDIA BlueField-2 DPUs Set to Ship In 2021, Roadmaps BlueField-3&4 By 2023

Quadro No More? NVIDIA Announces Ampere-based RTX A6000 & A40 Video Cards For Pro Visualization

NVIDIA’s second GTC of 2020 is taking place this week, and as has quickly become a tradition, one of CEO Jensen Huang’s “kitchenside chats” kicks off the event. As the de facto replacement for GTC Europe, this fall’s virtual GTC is a bit of a lower-key event relative to the spring edition, but it’s still one that is seeing some new NVIDIA hardware introduced to the world.


Starting things off, we have a pair of new video cards from NVIDIA – and a launch that seemingly indicates that NVIDIA is getting ready to overhaul its professional visualization branding. Being announced today and set to ship at the end of the year is the NVIDIA RTX A6000, NVIDIA’s next-generation, Ampere-based professional visualization card. The successor to the Turing-based Quadro RTX 8000/6000, the A6000 will be NVIDIA’s flagship professional graphics card, offering everything under the sun as far as NVIDIA’s graphics features go, and chart-topping performance to back it up. The A6000 will be a Quadro card in everything but name, quite literally.


NVIDIA Professional Visualization Card

Specification Comparison
  | A6000 | A40 | RTX 8000 | GV100
CUDA Cores | 10752 | 10752 | 4608 | 5120
Tensor Cores | 336 | 336 | 576 | 640
Boost Clock | ? | ? | 1770MHz | ~1450MHz
Memory Clock | 16Gbps GDDR6 | 14.5Gbps GDDR6 | 14Gbps GDDR6 | 1.7Gbps HBM2
Memory Bus Width | 384-bit | 384-bit | 384-bit | 4096-bit
VRAM | 48GB | 48GB | 48GB | 32GB
ECC | Partial (DRAM) | Partial (DRAM) | Partial (DRAM) | Full
Half Precision | ? | ? | 32.6 TFLOPS | 29.6 TFLOPS
Single Precision | ? | ? | 16.3 TFLOPS | 14.8 TFLOPS
Tensor Performance (FP16) | ? | ? | 130.5 TFLOPS | 118.5 TFLOPS
TDP | 300W | 300W | 295W | 250W
Cooling | Active | Passive | Active | Active
NVLink | 1x NVLink 3 (112.5GB/sec) | 1x NVLink 3 (112.5GB/sec) | 1x NVLink 2 (50GB/sec) | 2x NVLink 2 (100GB/sec)
GPU | GA102 | GA102 | TU102 | GV100
Architecture | Ampere | Ampere | Turing | Volta
Manufacturing Process | Samsung 8nm | Samsung 8nm | TSMC 12nm FFN | TSMC 12nm FFN
Launch Price | ? | ? | $10,000 | $9,000
Launch Date | 12/2020 | Q1 2021 | Q4 2018 | March 2018

The first professional visualization card to be launched based on NVIDIA’s new Ampere architecture, the A6000 will have NVIDIA hitting the market with its best foot forward. The card uses a fully-enabled GA102 GPU – the same chip used in the GeForce RTX 3080 & 3090 – and with 48GB of memory, is packed with as much memory as NVIDIA can put on a single GA102 card today. Notably, the A6000 is using GDDR6 here and not the faster GDDR6X used in the GeForce cards, as 16Gb density RAM chips are not available for the latter memory at this time. As a result, despite being based on the same GPU, there are going to be some interesting performance differences between the A6000 and its GeForce siblings, as it has traded memory bandwidth for overall memory capacity.
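As a rough sanity check on that bandwidth-for-capacity trade, peak memory bandwidth is simply the per-pin data rate multiplied by the bus width. A back-of-the-envelope sketch, using the card figures from the spec table:

```python
# Rough memory-bandwidth comparison for GA102-based cards.
# Bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte.
def mem_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return data_rate_gbps * bus_width_bits / 8

a6000 = mem_bandwidth_gbps(16, 384)      # 48GB of 16Gbps GDDR6
rtx3090 = mem_bandwidth_gbps(19.5, 384)  # 24GB of 19.5Gbps GDDR6X
print(f"A6000:    {a6000:.0f} GB/s")     # 768 GB/s
print(f"RTX 3090: {rtx3090:.0f} GB/s")   # 936 GB/s
```

So the A6000 gives up roughly 18% of the RTX 3090’s peak bandwidth in exchange for doubling the memory capacity.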


In terms of performance, NVIDIA is promoting the A6000 as offering nearly twice the performance (or more) of the Quadro RTX 8000 in certain situations, particularly tasks taking advantage of the significant increase in FP32 CUDA cores or the similar performance increase in RT core throughput. Unfortunately NVIDIA has either yet to lock down the specifications for the card or is opting against announcing them at this time, so we don’t know what the clockspeeds and resulting performance in FLOPS will be. Notably, the A6000 only has a TDP of 300W, 20W lower than the GeForce RTX 3090, so I would expect this card to be clocked lower than the 3090.



Otherwise, as we saw with the GeForce cards launched last month, Ampere itself is not a major technological overhaul of the previous Turing architecture. So while newer and significantly more powerful, there are not many new marquee features to be found on the card. Along with the expanded number of data types supported in the tensor cores (particularly BFloat16), the other changes most likely to be noticed by professional visualization users are decode support for the new AV1 codec, as well as PCI-Express 4.0 support, which will give the cards twice the bus bandwidth when used with AMD’s recent platforms.


Like the current-generation Quadro, the upcoming card also gets ECC support. NVIDIA has never listed GA102 as offering ECC on its internal pathways – this is traditionally limited to their big, datacenter-class chips – so this is almost certainly partial support via “soft” ECC, which offers error correction against the DRAM and DRAM bus by setting aside some DRAM capacity and bandwidth to function as ECC. The cards also support a single NVLink connector – now up to NVLink 3 – allowing for a pair of A6000s to be bridged together for more performance and to share their memory pools for supported applications. The A6000 also supports NVIDIA’s standard frame lock and 3D Vision Pro features with their respective connectors.
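For a sense of what “soft” ECC costs, consider the classic SECDED arrangement of 8 check bits per 64 data bits. NVIDIA has not published the exact scheme or overhead used on GA102, so the figures below are purely illustrative:

```python
# Illustrative overhead of "soft" DRAM ECC, where check bits are carved out of
# regular memory capacity rather than stored on dedicated extra chips.
# The SECDED ratio (8 check bits per 64 data bits) is an assumption for
# illustration; NVIDIA's actual scheme is undisclosed.
def soft_ecc_usable_gb(total_gb: float, data_bits: int = 64, check_bits: int = 8) -> float:
    """Capacity left for data once check bits are taken from regular DRAM."""
    return total_gb * data_bits / (data_bits + check_bits)

print(f"{soft_ecc_usable_gb(48):.1f} GB usable of 48 GB with ECC enabled")
```

Under that assumption, enabling ECC would leave roughly 42.7GB of the A6000’s 48GB available, with a matching cut to effective bandwidth.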


For display outputs, the A6000 ships with a quad-DisplayPort configuration, which is typical for NVIDIA’s high-end professional visualization cards. Notably, this generation that leaves the A6000 in a bit of an odd spot, since DisplayPort 1.4 is slower than the HDMI 2.1 standard also supported by the GA102 GPU. I would expect that it’s possible for the card to drive an HDMI 2.1 display with a passive adapter, but this will depend on how NVIDIA has configured the card and whether HDMI 2.1 signaling will tolerate such an adapter.
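To put numbers on that gap, the two standards’ published link rates and line codings work out as follows (DP 1.4’s HBR3 mode runs four 8.1Gbps lanes with 8b/10b coding, while HDMI 2.1’s FRL mode runs four 12Gbps lanes with more efficient 16b/18b coding):

```python
# Effective display-link bandwidth after line-code overhead.
def effective_gbps(lanes: int, per_lane_gbps: float, code_data: int, code_total: int) -> float:
    """Usable link bandwidth in Gbps after encoding overhead."""
    return lanes * per_lane_gbps * code_data / code_total

dp14 = effective_gbps(4, 8.1, 8, 10)      # DisplayPort 1.4, HBR3, 8b/10b
hdmi21 = effective_gbps(4, 12.0, 16, 18)  # HDMI 2.1, FRL, 16b/18b
print(f"DP 1.4:   {dp14:.2f} Gbps")   # 25.92 Gbps
print(f"HDMI 2.1: {hdmi21:.2f} Gbps") # ~42.67 Gbps
```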


Finally, the A6000 will be the first of today’s video cards to ship. According to NVIDIA, the card will be available in the channel as an add-in card starting in mid-December – just in time to make a 2020 launch. The card will then start showing up in OEM systems in early 2021.


NVIDIA A40 – Passive ProViz


Joining the new A6000 is a very similar card designed for passive cooling, the NVIDIA A40. Based on the same GA102 GPU as the A6000, the A40 offers virtually all of the same features as the active-cooled A6000, just in a purely passive form factor suitable for use in high density servers.



By the numbers, the A40 is a similar flagship-level graphics card, using a fully enabled GA102 GPU. It’s not quite a twin to the A6000, but other than the cooling difference, the only other change under the hood is the memory configuration. Whereas the A6000 uses 16 Gbps GDDR6, A40 clocks it down to 14.5 Gbps. Otherwise NVIDIA has not disclosed expected GPU clockspeeds, but with a 300W TDP, we’d expect them to be similar to the A6000.


Overall NVIDIA is no stranger to offering passively cooled cards; however it’s been a while since we last saw a passively cooled high-end Quadro card. Most recently, NVIDIA’s passive cards have been aimed at the compute market, with parts like the Tesla T4 and P40. The A40, on the other hand, is a bit different and a bit more ambitious, and a reflection of the blurring lines between compute and graphics in at least some of NVIDIA’s markets.


The most notable impact here is the inclusion of display outputs, something that was never on NVIDIA’s compute cards for obvious reasons. The A40 includes three DisplayPort outputs (one fewer than the A6000), giving the server-focused card the ability to directly drive a display. In explaining the inclusion of display I/O in a server part, NVIDIA said that they’ve had requests from users in the media and broadcast industry, who have been using servers in places like video trucks, but still need display outputs.


Ultimately, this serves as something of an additional feature differentiator between the A40 and NVIDIA’s official PCIe compute card, the PCIe A100. As the A100 lacks any kind of video display functionality (the underlying A100 GPU was designed for pure compute tasks), the A40 is the counterpoint to that product, offering a server card with explicit video output support. And while it’s not specifically aimed at the edge compute market, where the T4 still reigns supreme, make no mistake: the A40 is still capable of being used as a compute card. Though lacking some of the A100’s specialty features like Multi-Instance GPU (MIG), the A40 is fully capable of being provisioned as a compute card, including support for the Virtual Compute Server vGPU profile. So the card is a potential alternative of sorts to the A100, at least where FP32 throughput is the primary concern.


Finally, like the A6000, the A40 will be hitting the streets in the near future. Designed to be sold primarily through OEMs, NVIDIA expects it to start showing up in servers in early 2021.


Quadro No More?


For long-time observers, perhaps the most interesting development from today’s launch is what’s not present: NVIDIA’s Quadro branding. Despite being aimed at their traditional professional visualization market, the A6000 is not being branded as a Quadro card, a change that was made at nearly the last minute.


Perhaps because of that last-minute change, NVIDIA hasn’t issued any official explanation for their decision. At face value it’s certainly an odd one, as the Quadro brand is one of NVIDIA’s longest-lived brands, second only to GeForce itself. NVIDIA still controls the lion’s share of the professional visualization market as well, so at face value there seems to be little reason for NVIDIA to shake up a very stable market.



With all of that said, there are a couple of factors in play that may be driving NVIDIA’s decision. First and foremost is that the company has already retired one of its other product brands in the last couple of years: Tesla. Previously used for NVIDIA’s compute accelerators, Tesla was retired and never replaced, leaving us with the likes of the NVIDIA T4 and A100. Of course, Tesla is something of a special case, as the name has increasingly become synonymous with the electric car company, despite in both cases being selected as a reference to the famous scientist. Quadro, by comparison, has relatively little (but not zero) overlap with other business entities.


But perhaps more significant than that is the overall state of NVIDIA’s professional businesses. An important cornerstone of NVIDIA’s graphics products, professional visualization is a fairly stable market – which is to say it’s not a major growth market in the way that gaming and datacenter compute have been. As a result, professional visualization has been getting slowly subsumed by NVIDIA’s compute parts, especially in the server space where many products can be provisioned for either compute or graphics needs. In all these cases, both Quadro and NVIDIA’s former Tesla lineup have come to represent NVIDIA’s “premium” offerings: parts that get access to the full suite of NVIDIA’s hardware and software features, unlike the consumer GeForce products which have certain high-end features withheld.


So it may very well be that NVIDIA doesn’t see a need for a specific Quadro brand much longer, because the markets for Quadro (professional visualization) and Tesla (computing) are one and the same. Though the two differ in their specific needs, they still use the same NVIDIA hardware, and frequently pay the same high NVIDIA prices.


At any rate, it will be interesting to see where NVIDIA goes from here. Even with the overlap in audiences, branding segmentation has its advantages at times. And with NVIDIA now producing GPUs that lack critical display capabilities (GA100), it seems like making it clear what hardware can (and can’t) be used for graphics is going to remain important going forward.



Source: AnandTech – Quadro No More? NVIDIA Announces Ampere-based RTX A6000 & A40 Video Cards For Pro Visualization

NVIDIA Gives Jetson Nano Dev Kit a Trim: 2GB Model For $59

As part of this morning’s fall GTC 2020 announcements, NVIDIA is revealing that they are releasing an even cheaper version of their budget embedded computing board, the Jetson Nano. Initially introduced back in 2015 as the Jetson TX1, an updated version of NVIDIA’s original Jetson kit with their then-new Tegra X1 SoC, the company has since kept the Jetson TX1 around in various forms as a budget option. Most recently, the company re-launched it in 2019 as the Jetson Nano, their pint-sized, $99 entry level developer kit.


Now, NVIDIA is lowering the price tag on the Jetson Nano once again with the introduction of a new, cheaper SKU. Dubbed the Jetson Nano 2GB, this is a version of the original Jetson Nano with 2GB of DRAM instead of 4GB. Otherwise the performance of the kit remains unchanged from the original Nano, with 4 Cortex-A57 CPU cores and the 128 CUDA core Maxwell GPU providing the heavy lifting for CPU and GPU compute, respectively.


NVIDIA Jetson Family Specifications
  | Xavier NX (15W) | Xavier NX (10W) | AGX Xavier | Jetson Nano
CPU | 4x/6x Carmel @ 1.4GHz or 2x Carmel @ 1.9GHz | 4x/6x Carmel @ 1.2GHz or 2x Carmel @ 1.5GHz | 8x Carmel @ 2.26GHz | 4x Cortex-A57 @ 1.43GHz
GPU | Volta, 384 Cores @ 1100MHz | Volta, 384 Cores @ 800MHz | Volta, 512 Cores @ 1377MHz | Maxwell, 128 Cores @ 920MHz
Accelerators | 2x NVDLA | 2x NVDLA | 2x NVDLA | N/A
Memory | 8GB LPDDR4X, 128-bit bus (51.2 GB/sec) | 8GB LPDDR4X, 128-bit bus (51.2 GB/sec) | 16GB LPDDR4X, 256-bit bus (137 GB/sec) | 2/4GB LPDDR4, 64-bit bus (25.6 GB/sec)
Storage | 16GB eMMC | 16GB eMMC | 32GB eMMC | 16GB eMMC
AI Perf. | 21 TOPS | 14 TOPS | 32 TOPS | N/A
Dimensions | 45mm x 70mm | 45mm x 70mm | 100mm x 87mm | 45mm x 70mm
TDP | 15W | 10W | 30W | 10W
Price | $399 | $399 | $699 | 4GB: $99, 2GB: $59

Meanwhile, though not mentioned in NVIDIA’s official press release, it looks like the company has simplified the carrier board a bit as part of their process of getting the price tag down. Relative to the original 4GB Nano, the Nano 2GB is pictured without a DisplayPort output, and with one fewer USB port. Furthermore those USB ports are no longer blue, hinting that they are USB 2.0 instead of USB 3.0. Finally, the barrel power connector has been replaced with a USB Type-C connector, and it looks like various pins have also been removed.


Overall, NVIDIA is pitching the cost-reduced Jetson Nano as a true starter kit for embedded computing, suitable for early training and learning. Despite receiving a minor neutering, the Nano 2GB can still run all of NVIDIA’s Jetson SDKs, allowing it to be used as a stepping stone of sorts towards learning NVIDIA’s ecosystem, and eventually moving on to their more powerful products such as their GPU accelerators and Jetson Xavier NX kits. Ultimately, with their efforts to position it as a starter kit for teaching purposes, I imagine NVIDIA is gunning for the educational market, particularly with the continued uptick in STEM-focused programs.


The kit will go on sale later this month through NVIDIA’s usual distribution channels.





Source: AnandTech – NVIDIA Gives Jetson Nano Dev Kit a Trim: 2GB Model For $59

NVIDIA Delays GeForce RTX 3070 Launch to October 29th

In a brief news post made to their GeForce website last night, NVIDIA has announced that they have delayed the launch of the upcoming GeForce RTX 3070 video card. The high-end video card, which was set to launch on October 15th for $499, has been pushed back by two weeks. It will now be launching on October 29th.


Indirectly referencing the launch-day availability concerns for the RTX 3080 and RTX 3090 last month, NVIDIA is citing a desire to have “more cards available on launch day” for the delay. NVIDIA does not disclose their launch supply numbers, so it’s not clear just how many more cards another two weeks’ worth of stockpiling will net them – it likely still won’t be enough to meet all demand – but it should at least improve the odds.



NVIDIA GeForce Specification Comparison
  | RTX 3070 | RTX 3080 | RTX 3090 | RTX 2070
CUDA Cores | 5888 | 8704 | 10496 | 2304
ROPs | 96 | 96 | 112 | 64
Boost Clock | 1.73GHz | 1.71GHz | 1.7GHz | 1.62GHz
Memory Clock | 16Gbps GDDR6 | 19Gbps GDDR6X | 19.5Gbps GDDR6X | 14Gbps GDDR6
Memory Bus Width | 256-bit | 320-bit | 384-bit | 256-bit
VRAM | 8GB | 10GB | 24GB | 8GB
Single Precision Perf. | 20.4 TFLOPS | 29.8 TFLOPS | 35.7 TFLOPS | 7.5 TFLOPS
Tensor Perf. (FP16) | 81.3 TFLOPS | 119 TFLOPS | 143 TFLOPS | 59.8 TFLOPS
Tensor Perf. (FP16-Sparse) | 163 TFLOPS | 238 TFLOPS | 285 TFLOPS | 59.8 TFLOPS
TDP | 220W | 320W | 350W | 175W
GPU | GA104 | GA102 | GA102 | TU106
Transistor Count | 17.4B | 28B | 28B | 10.8B
Architecture | Ampere | Ampere | Ampere | Turing
Manufacturing Process | Samsung 8nm | Samsung 8nm | Samsung 8nm | TSMC 12nm “FFN”
Launch Date | 10/29/2020 (orig. 10/15/2020) | 09/17/2020 | 09/24/2020 | 09/20/2018
Launch Price | MSRP: $499 | MSRP: $699 | MSRP: $1499 | MSRP: $499, Founders $599
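As a quick check on the table, NVIDIA’s FP32 ratings follow directly from the shader count and boost clock, at 2 FMA operations per CUDA core per clock:

```python
# Reproducing the table's FP32 figures: TFLOPS = cores * 2 FMA ops * boost clock.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    """Peak single-precision throughput in TFLOPS."""
    return cuda_cores * 2 * boost_ghz / 1000

print(f"RTX 3070: {fp32_tflops(5888, 1.73):.1f} TFLOPS")   # ~20.4
print(f"RTX 3080: {fp32_tflops(8704, 1.71):.1f} TFLOPS")   # ~29.8
print(f"RTX 3090: {fp32_tflops(10496, 1.7):.1f} TFLOPS")   # ~35.7
```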

Interestingly, this delay also means that the RTX 3070 will now launch after AMD’s planned Radeon product briefing, which is scheduled for October 28th. NVIDIA has already shown their hand with respect to specifications and pricing, so the 3070’s price and performance are presumably locked in. But this does give NVIDIA one last chance to react – or at least, distract – should they need it.



Source: AnandTech – NVIDIA Delays GeForce RTX 3070 Launch to October 29th