ASUS & GIGABYTE Prep Mini-ITX GeForce GTX 1660 Super Cards

Last week NVIDIA introduced its latest GeForce GTX 1660 Super performance-mainstream GPU. There are plenty of designs to choose from, and both ASUS and GIGABYTE are now set to offer small form factor designs.



ASUS has two new GeForce GTX 1660 Super boards that are 17.4 centimeters (6.9 inches) long. The ASUS Phoenix PH-GTX1660S-6G and Phoenix PH-GTX1660S-O6G cards are based on NVIDIA’s TU116 GPU with 1408 CUDA cores, carry 6 GB of GDDR6 memory, share the same PCB design with one 8-pin auxiliary PCIe power connector, feature three display outputs (DVI-D, DP 1.4, HDMI 2.0b), and use the same dual-slot cooling system with one dual ball bearing fan. The only difference between the two is their clocks, and even those are pretty close: up to 1815 MHz vs. 1830 MHz in OC mode.
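For context, the roughly 5 TFLOPS of single-precision throughput quoted for these cards (see the table below) follows directly from the core count and boost clock, counting one FMA (two FLOPs) per CUDA core per clock; a quick back-of-the-envelope sketch:

```python
# Peak FP32 throughput estimate for the GTX 1660 Super boards discussed above.
CUDA_CORES = 1408  # per the spec table below

def fp32_tflops(boost_clock_mhz: float) -> float:
    """Peak single-precision TFLOPS: cores x 2 FLOPs/clock x boost clock."""
    return CUDA_CORES * 2 * boost_clock_mhz * 1e6 / 1e12

for name, clock in [("NVIDIA reference", 1785),
                    ("ASUS PH-GTX1660S-6G", 1815),
                    ("ASUS PH-GTX1660S-O6G", 1830),
                    ("GIGABYTE Mini ITX OC", 1800)]:
    print(f"{name}: {fp32_tflops(clock):.2f} TFLOPS")
# The reference clock works out to ~5.03 TFLOPS, i.e. the ~5 TFLOPS in the table.
```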



GIGABYTE has a more ‘canonical’ GeForce GTX 1660 Super Mini ITX OC 6G (GV-N166SIXOC-6GD) board that is exactly 17 centimeters long. The card has NVIDIA’s TU116 GPU clocked at up to 1800 MHz, 6 GB of 14 Gbps GDDR6 RAM, uses a dual-slot single-fan cooler with a heat pipe that can stop the fan in idle mode, has an 8-pin PCIe power connector, and offers four display outputs (3×DP 1.4, 1×HDMI 2.0b).




















NVIDIA GeForce GTX 1660 Super Graphics Cards for Mini-ITX

|  | NVIDIA Reference | ASUS Phoenix PH-GTX1660S-6G | ASUS Phoenix PH-GTX1660S-O6G | GIGABYTE GV-N166SIXOC-6GD |
|---|---|---|---|---|
| CUDA Cores | 1408 | 1408 | 1408 | 1408 |
| ROPs | 48 | 48 | 48 | 48 |
| Core Clock | 1530 MHz | 1530 MHz (?) | 1530 MHz (?) | 1530 MHz (?) |
| Boost Clock | 1785 MHz | 1815 MHz | 1830 MHz | 1800 MHz |
| Memory Clock | 14 Gbps GDDR6 | 14 Gbps GDDR6 | 14 Gbps GDDR6 | 14 Gbps GDDR6 |
| Memory Bus Width | 192-bit | 192-bit | 192-bit | 192-bit |
| VRAM | 6 GB | 6 GB | 6 GB | 6 GB |
| Single Precision Perf. | 5 TFLOPS | ~5 TFLOPS | ~5 TFLOPS | ~5 TFLOPS |
| Display Outputs | - | 1×DVI-D, 1×DP 1.4, 1×HDMI 2.0b | 1×DVI-D, 1×DP 1.4, 1×HDMI 2.0b | 3×DP 1.4, 1×HDMI 2.0b |
| TGP | 125 W | ? | ? | ? |
| GPU | TU116 (284 mm²) | TU116 (284 mm²) | TU116 (284 mm²) | TU116 (284 mm²) |
| Transistor Count | 6.6B | 6.6B | 6.6B | 6.6B |
| Architecture | Turing | Turing | Turing | Turing |
| Manufacturing Process | TSMC 12nm “FFN” | TSMC 12nm “FFN” | TSMC 12nm “FFN” | TSMC 12nm “FFN” |
| Launch Date | 10/29/2019 | Q4 2019 | Q4 2019 | Q4 2019 |
| Launch Price | $229 | ? | ? | ? |

All three graphics cards are listed on ASUS’ and GIGABYTE’s websites, so expect them to be available shortly. Pricing-wise, they should not be much more expensive than NVIDIA’s $229 MSRP for the GeForce GTX 1660 Super.



Related Reading


Sources: ASUS (1, 2), GIGABYTE



Source: AnandTech – ASUS & GIGABYTE Prep Mini-ITX GeForce GTX 1660 Super Cards

Western Digital Ultrastar DC SS540 SAS SSDs: Up to 15.36 TB, Up to 3 DWPD

Western Digital has introduced its new series of SSDs designed for mission-critical applications, including OLTP, OLAP, hyperconverged infrastructure (HCI), as well as software-defined storage (SDS) workloads. The Ultrastar DC SS540 drives are aimed at mixed and write-intensive workloads and can be configured accordingly. Since the SSDs use a SAS 12 Gbps interface, they are drop-in compatible with existing machines.


The Western Digital Ultrastar DC SS540 is based on the company’s sixth-generation dual-port SAS 12 Gbps platform co-developed with Intel as well as 96-layer 3D TLC NAND memory (presumably, also from Intel) and comes in a 2.5-inch/15 mm form-factor. The new SSDs are drop-in compatible with existing servers that support 9, 11, and 14 W per drive power options (SKUs with higher power consumption offer higher random read/write speeds).


As is traditional for SAS SSDs from Western Digital and Intel, the Ultrastar DC SS540 supports extended error correction code (ECC with a 1×10^-17 bit error rate) to ensure high performance and data integrity, exclusive-OR (XOR) parity in case a whole NAND die fails, and parity-checked internal data paths. In addition, the Ultrastar SS540 complies with the T10 Data Integrity Field (DIF) standard, which requires all interconnect buses to have parity protection (on the system level), as well as a special power loss data management feature that does not use supercapacitors. As usual, Western Digital’s Ultrastar SS540 will be available in different SKUs with capabilities like instant secure erase and/or TCG+FIPS encryption to conform with various security requirements.



The manufacturer plans to offer the Ultrastar DC SS540 rated for 1 or 3 drive writes per day (DWPD) to target different workloads. The former will offer capacities between 960 GB and 15.36 TB, whereas the latter will feature capacities from 800 GB to 6.4 TB. The new lineup does not include drives rated for 10 DWPD or for less than 1 DWPD, so those who need higher or lower endurance (because they run extremely write-intensive or extremely read-intensive workloads, respectively) will have to use previous-generation offerings from Western Digital. When it comes to warranty and reliability, the drives are rated for a 0.35% annual failure rate (AFR) and 2.5 million hours MTBF, and are covered by a five-year limited warranty (or the max PB written, whichever occurs first).
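For a rough sense of what those ratings allow, lifetime writes are simply capacity × DWPD × warranty days; note that this back-of-the-envelope arithmetic lands in the same ballpark as, but not exactly on, the official max-PB figures in the table below, which Western Digital rates per SKU:

```python
# Approximate lifetime writes implied by a DWPD rating over a 5-year warranty.
WARRANTY_DAYS = 5 * 365

def lifetime_writes_tb(capacity_tb: float, dwpd: float) -> float:
    """Total terabytes that may be written under the endurance rating."""
    return capacity_tb * dwpd * WARRANTY_DAYS

print(lifetime_writes_tb(6.4, 3))    # ~35,040 TB for the 6.4 TB, 3 DWPD SKU
print(lifetime_writes_tb(15.36, 1))  # ~28,032 TB for the 15.36 TB, 1 DWPD SKU
```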


As far as sustained performance is concerned, the Ultrastar DC SS540 is rated for up to 2130 MB/s sequential read/write speeds, up to 470K random read IOPS, and up to 240K random write IOPS, depending on the exact model, which is generally in line with the performance of the Ultrastar DC SS530 SSDs launched last year. Traditionally, higher-capacity SSDs are slightly slower when it comes to writes and mixed workloads, but those who need maximum performance can always use more drives to hit desired speeds.


Western Digital’s Ultrastar DC SS540 SSDs are currently sampling and being qualified by select clients of the company. The manufacturer plans to start commercial shipments of the drives in Q1 2020.

























Western Digital Ultrastar DC SS540 Series Specifications

|  | 3 DWPD | 1 DWPD |
|---|---|---|
| Capacities | 6.4 TB, 3.2 TB, 1.6 TB, 800 GB | 15.36 TB, 7.68 TB, 3.84 TB, 1.92 TB, 960 GB |
| Form Factor | 2.5″/15 mm | 2.5″/15 mm |
| Interface | SAS 6/12 Gb/s, dual port for 12 Gb/s | SAS 6/12 Gb/s, dual port for 12 Gb/s |
| Controller | Proprietary | Proprietary |
| NAND | 96-layer 3D TLC NAND | 96-layer 3D TLC NAND |
| Sequential Read | 2116 ~ 2130 MB/s | 1985 ~ 2130 MB/s |
| Sequential Write | 1008 ~ 2109 MB/s | 1985 ~ 2130 MB/s |
| Random Read (4 KB) | 237K ~ 470K IOPS | 237K ~ 470K IOPS |
| Random Write (4 KB) | 128K ~ 240K IOPS | 79K ~ 110K IOPS |
| Mixed Random R/W (70:30 R:W, 4 KB), Max IOPS | 182K ~ 300K IOPS | 143K ~ 200K IOPS |
| Read/Write Latency (average) | 140/60 µs ~ 150/80 µs | 140/90 µs ~ 150/300 µs |
| Power, Idle | 3.7 W (<15 TB) – 4.7 W (>15 TB) | 3.7 W (<15 TB) – 4.7 W (>15 TB) |
| Power, Operating | 9 W, 11 W, 14 W (configurable) | 9 W, 11 W, 14 W (configurable) |
| Endurance (DWPD) | 3 | 1 |
| Max. PB Written | 6.4 TB: 36,150 TB; 3.2 TB: 17,150 TB; 1.6 TB: 9,410 TB; 800 GB: 4,700 TB | 15.36 TB: 30,110 TB; 7.68 TB: 15,050 TB; 3.84 TB: 7,000 TB; 1.92 TB: 3,760 TB; 960 GB: 1,880 TB |
| Encryption | AES-256 (?), TCG + FIPS | AES-256 (?), TCG + FIPS |
| Power Loss Protection | Yes | Yes |
| MTBF | 2.5 million hours | 2.5 million hours |
| Warranty | Five years or max PB written (whichever occurs first) | Five years or max PB written (whichever occurs first) |
| Models | WUSTR6464ABSS20x, WUSTR6432BSS20x, WUSTR6416BSS20x, WUSTR6480BSS20x | WUSTVA1A1BSS20x, WUSTVA176BSS20x, WUSTVA138BSS20x, WUSTVA119BSS20x, WUSTVA196BSS20x |

Legend for model numbers (example: WUSTR6464ASS201 = 6.4 TB, SAS 12 Gb/s, TCG):

W = Western Digital
U = Ultrastar
S = Standard
TR = NAND type/endurance (TM = TLC/mainstream endurance, TR = TLC/read-intensive)
64 = full capacity of the family (6.4 TB)
64 = capacity of this model (15 = 15.2 TB, 76 = 7.6 TB, 38 = 3.84 TB, 32 = 3.2 TB, 19 = 1.92 TB, 16 = 1.6 TB, 96 = 960 GB, 80 = 800 GB, 48 = 480 GB, 40 = 400 GB)
A = generation code
S = small form factor (2.5” SFF)
S2 = interface, SAS 12 Gb/s
Last digit = encryption setting (0 = Instant Secure Erase, 1 = TCG Enterprise encryption, 4 = No encryption/Secure Erase, 5 = TCG + FIPS)

Related Reading:


Source: Western Digital




Source: AnandTech – Western Digital Ultrastar DC SS540 SAS SSDs: Up to 15.36 TB, Up to 3 DWPD

MLPerf Releases Official Results For First Machine Learning Inference Benchmark

Since launching their organization early last year, the MLPerf group has been slowly and steadily building up the scope and the scale of their machine learning benchmarks. Intending to do for ML performance what SPEC has done for CPU and general system performance, the group has brought on board essentially all of the big names in the industry, ranging from Intel and NVIDIA to Google and Baidu. As a result, while the MLPerf benchmarks are still in their early days – technically, they’re not even complete yet – the group’s efforts have attracted a lot of interest, which vendors are quickly turning into momentum.

Back in June the group launched its second – and arguably more interesting – benchmark set, MLPerf Inference v0.5. As laid out in the name, this is the MLPerf group’s machine learning inference benchmark, designed to measure how well and how quickly various accelerators and systems execute trained neural networks. Designed to be as much a competition as it is a common and agreed upon means to test inference performance, MLPerf Inference is intended to eventually become the industry’s gold standard benchmark for measuring inference performance across the spectrum, from low-power NPUs in SoCs to dedicated, high-performance inference accelerators in datacenters. And now, a bit over 4 months after the benchmark was first released, the MLPerf group is releasing the first official results for the inference benchmark.



Source: AnandTech – MLPerf Releases Official Results For First Machine Learning Inference Benchmark

NVIDIA Gives Jetson AGX Xavier a Trim, Announces Nano-Sized Jetson Xavier NX

Since its launch earlier this decade, NVIDIA’s Jetson lineup of embedded system kits has remained one of the odder success stories for the company. While NVIDIA’s overall Tegra SoC plans have gone in a very different direction than first planned, they’ve seen a lot of success with their system-level Jetson kits, as customers snatch them up both as dev kits and for use in commercial systems. Now in their third generation of Jetson systems, this afternoon NVIDIA is outlining their plans to diversify the family a bit more, announcing a physically smaller and cheaper version of their flagship Jetson Xavier kit, in the form of the Jetson Xavier NX.


Based on the same Xavier SoC that’s used in the titular Jetson AGX Xavier, the Jetson Xavier NX is designed to fill what NVIDIA sees as a need for both a cheaper Xavier option, as well as one smaller than the current 100mm x 87mm board. In fact the new Nano-sized board is quite literally that: the size of the existing Jetson (TX1) Nano, which was introduced earlier this year. Keeping the same form factor and pin compatibility, the Jetson Xavier NX sports the same 45mm x 70mm dimensions, making it a bit smaller than a credit card.



Compared to the full-sized Jetson AGX Xavier, NVIDIA is aiming the Jetson Xavier NX at customers who need to do edge inference in space-constrained use cases where the big Xavier won’t do. Since it’s based on the same Xavier SoC, the Jetson Xavier NX uses the same Volta GPU, and critically, the same NVDLA accelerator cores as the original. As a result, for inference tasks the Jetson Xavier NX should be significantly faster than the Jetson Nano and various Jetson TX2 products – currently NVIDIA’s most widely used embedded Jetson – none of which have hardware comparable to NVIDIA’s dedicated deep learning accelerator cores.



Not that Jetson Xavier NX is a wholesale replacement for Jetson AGX Xavier, however. The smaller Xavier board is taking a shave both in performance and in I/O for a mix of product segmentation, power consumption, and pin compatibility reasons. Notably, the Xavier SoC used in the NX loses out on 2 CPU cores, 2 GPU SMs, and perhaps most important to heavy inference users, half of the chip’s memory bandwidth. As a result the Jetson Xavier NX should still be significantly ahead of Jetson TX1/TX2, but it will definitely trail the full-fledged Jetson AGX Xavier.













NVIDIA Jetson Family Specifications

|  | Xavier NX (15W) | Xavier NX (10W) | AGX Xavier | Jetson Nano |
|---|---|---|---|---|
| CPU | 4x/6x Carmel @ 1.4GHz or 2x Carmel @ 1.9GHz | 4x Carmel @ 1.2GHz or 2x Carmel @ 1.5GHz | 8x Carmel @ 2.26GHz | 4x Cortex-A57 @ 1.43GHz |
| GPU | Volta, 384 Cores @ 1100MHz | Volta, 384 Cores @ 800MHz | Volta, 512 Cores @ 1377MHz | Maxwell, 128 Cores @ 920MHz |
| Accelerators | 2x NVDLA | 2x NVDLA | 2x NVDLA | N/A |
| Memory | 8GB LPDDR4X, 128-bit bus (51.2 GB/sec) | 8GB LPDDR4X, 128-bit bus (51.2 GB/sec) | 16GB LPDDR4X, 256-bit bus (137 GB/sec) | 4GB LPDDR4, 64-bit bus (25.6 GB/sec) |
| Storage | 8GB eMMC | 8GB eMMC | 32GB eMMC | 16GB eMMC |
| AI Perf. | 21 TOPS | 14 TOPS | 32 TOPS | N/A |
| Dimensions | 45mm x 70mm | 45mm x 70mm | 100mm x 87mm | 45mm x 70mm |
| TDP | 7.5W – 15W (?) | 7.5W – 15W (?) | 30W | 10W |
| Price | $399 | $399 | $999 | $129 |

All told, for inference applications NVIDIA is touting 21 TOPS of performance at the card’s full power profile of 15 Watts. Alternatively, at 10 Watts – which happens to be the max power state for Jetson Nano – this drops down to 14 TOPS as clockspeeds are reduced and two more CPU cores are shut off.
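Those two operating points imply essentially the same efficiency; a trivial check using only the figures quoted above (and nothing beyond them):

```python
# Inference efficiency implied by NVIDIA's quoted Jetson Xavier NX figures.
profiles = {"15 W profile": (21, 15), "10 W profile": (14, 10)}  # (TOPS, watts)

for name, (tops, watts) in profiles.items():
    print(f"{name}: {tops / watts:.2f} TOPS per watt")
# Both modes land at ~1.4 TOPS/W, so the 10 W profile mostly trades peak
# throughput (and two CPU cores) rather than efficiency.
```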


Otherwise, the Jetson Xavier NX is designed to slot right in with the rest of the Jetson family, as well as NVIDIA’s hardware and software ecosystem. The embedded system board is being positioned purely for use in high-volume production systems, and accordingly, NVIDIA won’t be putting together a developer kit version of the NX. Since the current Jetson AGX Xavier will be sticking around, it will fill that role, and NVIDIA is offering software patches for developers who need to specifically test against Jetson Xavier NX performance levels.


The Jetson Xavier NX will be shipping from NVIDIA in March of 2020, with NVIDIA pricing the board at $399.






Source: AnandTech – NVIDIA Gives Jetson AGX Xavier a Trim, Announces Nano-Sized Jetson Xavier NX

AOC Reveals Q27T1 Monitor: With 'Style'

AOC is about to start sales of its new Q27T1 display. It comes with a ‘designed by Studio F. A. Porsche’ logo which aims to convey a sense of style, decent specifications, and a ‘reasonable’ price. The monitor is targeted at office workers, yet it may also appeal to gamers and multimedia enthusiasts as it supports AMD’s FreeSync variable refresh rate technology.


AOC’s Porsche Design Q27T1 display comes in a sleek chassis with slim bezels on three sides and an asymmetric stand. The monitor is very thin, yet it has integrated cable management with a special cover.



Characteristics-wise, the AOC Q27T1 is a good performance-mainstream display: it has a 27-inch IPS panel featuring a 2560×1440 resolution, 350 nits brightness, a 1300:1 contrast ratio, a variable refresh rate between 48 Hz and 75 Hz, and viewing angles of 178º. The display is claimed to cover 107% of the sRGB and 90% of the NTSC color gamuts. To make it more comfortable for work, the monitor has an anti-glare coating. As for inputs, the LCD has two HDMI 1.4 connectors, one DisplayPort 1.2, a line in, and a headphone output.
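The pixel density and pitch listed in the spec table below follow from the diagonal and resolution; a quick sanity check (pure geometry, not an AOC-provided formula):

```python
import math

# Pixel density and pitch of a 27-inch 2560x1440 panel such as the Q27T1.
width_px, height_px, diagonal_in = 2560, 1440, 27.0

ppi = math.hypot(width_px, height_px) / diagonal_in  # pixels per inch along the diagonal
pitch_mm = 25.4 / ppi                                # size of one pixel in millimetres

print(f"{ppi:.0f} PPI, {pitch_mm:.4f} mm pixel pitch")
# ~109 PPI and ~0.233 mm, in line with the 109 PPI / 0.2331 mm figures in the table below.
```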



The Q27T1 is not AOC’s first Porsche Design monitor. About two and a half years ago the company introduced its PDS-series 23-inch and 27-inch LCDs designed for the same style-minded audience and offering a Full-HD resolution. One of the peculiarities of those displays was an external PSU with a proprietary connection that integrated an HDMI interface and a power cable into the same wire, which is not particularly practical. The new monitor not only features improved specifications, but also does not use proprietary connections (though it still has an external PSU).






















AOC Porsche Design 27-Inch Display

|  | Q27T1 |
|---|---|
| Panel | 27″ IPS with anti-glare coating |
| Native Resolution | 2560 × 1440 |
| Maximum Refresh Rate | 75 Hz |
| Dynamic Refresh Rate | AMD FreeSync (48 Hz ~ 75 Hz) |
| Response Time | 5 ms (gray-to-gray) |
| Brightness | 350 cd/m² |
| Contrast | 1300:1 |
| Viewing Angles | 178°/178° horizontal/vertical |
| Color Gamut | 107% sRGB, 90% NTSC |
| Pixel Pitch | 0.2331 × 0.2331 mm |
| PPI | 109 PPI |
| Inputs | 1 × DisplayPort 1.2, 2 × HDMI 1.4, 1 × Line-In |
| Audio | 3.5-mm headphone jack |
| Color | Gray + Silver |
| Power Consumption | Standby: 0.3 W; Maximum: 30 W |
| Additional Information | Link |
| Price | $299 (?) |

AOC will start sales of the Q27T1 display later this month, in time for holiday shopping season. The monitor will cost £279 in the UK, so it is reasonable to assume that it will carry a $299 MSRP in the US.


Related Reading


Source: AOC




Source: AnandTech – AOC Reveals Q27T1 Monitor: With ‘Style’

Intel’s EMIB Now Between Two High TDP Die: The New Stratix 10 GX 10M FPGA

The best thing about manufacturing Field Programmable Gate Arrays (FPGAs) is that you can make the silicon very big. The nature of the repeatable unit design can absorb issues with a process technology, and as a result we often see FPGAs be the largest silicon dies that enter the market for a given manufacturing process. When you get to the limit of how big you can make a piece of silicon (known as the reticle limit), the only way to get bigger is to connect that silicon together. Today Intel is announcing its latest ‘large’ FPGA, and it comes with a pretty big milestone with its connectivity technology.



Source: AnandTech – Intel’s EMIB Now Between Two High TDP Die: The New Stratix 10 GX 10M FPGA

GlobalFoundries and SiFive to Design HBM2E Implementation on 12LP/12LP+

GlobalFoundries and SiFive announced on Tuesday that they will be co-developing an implementation of HBM2E memory for GloFo’s 12LP and 12LP+ FinFET process technologies. The IP package will enable SoC designers to quickly integrate HBM2E support into designs for chips that need significant amounts of bandwidth.


The HBM2E implementation by GlobalFoundries and SiFive includes the 2.5D packaging (interposer) designed by GF, with the HBM2E interface developed by SiFive. In addition to HBM2E technology, licensees of SiFive also gain access to the company’s RISC-V portfolio and DesignShare IP ecosystem for GlobalFoundries’ 12LP/12LP+, which will enable SoC developers to build RISC-V-based devices on GloFo’s advanced fab technology.


GlobalFoundries and SiFive suggest that the 12LP+ manufacturing process and the HBM2E implementation will be primarily used for artificial intelligence training and inference applications for edge computing, with vendors looking to optimize for TOPS-per-milliwatt performance.


For GlobalFoundries, it is important to land customers who need specialized process technologies and may not be ready for leading-edge processes from TSMC and Samsung Foundry for cost or other reasons. As for SiFive’s involvement, this is a bit trickier – RISC-V itself isn’t likely to be used for the core logic in deep learning accelerators, but it is a solid architecture to use for the embedded CPU cores needed to control the dataflows within an accelerator.


SiFive’s HBM2E interface and custom IP for GlobalFoundries’ 12LP and 12LP+ technology are being developed at GF’s Fab 8 in Malta, New York. The two companies expect that they’ll be able to wrap up their work in the first half of 2020, at which point the IP will become available for licensing.


Related Reading:


Source: GlobalFoundries




Source: AnandTech – GlobalFoundries and SiFive to Design HBM2E Implementation on 12LP/12LP+

Seagate: 18 TB HDD Due in First Half 2020, 20 TB Drive to Ship in Late 2020

Seagate last week clarified its high-capacity HDD roadmap during its earnings call with analysts and investors. The company is on track to ship its first commercial HAMR-based hard drives next year, but only in the back half of the year. Before that, Seagate intends to ship its 18 TB HDDs.


It is expected that Seagate’s 18 TB hard drive will be based on the same nine-platter platform that is already used for the company’s Exos 16 TB HDD, which means that it will be relatively easy for the company to kick off mass production of 18 TB hard drives. Overall, Seagate’s HDD roadmap published in September indicates that the company’s 18 TB drive will use conventional magnetic recording (CMR) technology. In addition to this product, Seagate’s plans also include a 20 TB HDD based on shingled magnetic recording (SMR) technology that is due in 2020.



Seagate says that its Exos 16 TB hard drives are very popular among its clients and even expects to ship more than a million such drives in its ongoing quarter, which ends in December. The launch of its 18 TB HDD will maintain Seagate’s capacity leadership in the first half of next year before Western Digital starts volume shipments of its MAMR+CMR-based 18 TB and MAMR+SMR-based 20 TB hard drives.


Seagate itself will be ready with its HAMR-based 20 TB drive late in 2020. Right now, select Seagate customers are qualifying HAMR-based 16 TB HDDs, so they will likely be ready to deploy 20 TB HAMR drives as soon as they are available. It is noteworthy that Seagate is readying HAMR HDDs with both one and two actuators, so as to offer the right performance and capacity for different customers. This would follow Seagate’s current dual-actuator MACH.2 drives, which the company started shipping for revenue last quarter.


Dave Mosley, CEO of Seagate, said the following:


“We are preparing to ship 18 TB drives in the first half of calendar year 2020 to maintain our industry capacity leadership. We are also driving areal density leadership with our revolutionary HAMR technology, which enables Seagate to achieve at least 20% areal density CAGR over the next decade. We remain on track to ship 20 TB HAMR drives in late calendar year 2020.


As drive densities increase, multi-actuator technology is required to maintain fast access to data and scale drive capacity without compromising performance. We generated revenue from our MACH.2 dual actuator solutions for the first time in the September quarter. We are working with multiple customers to qualify these drives, including a leading US hyperscale customer, who is qualifying the technology to meet their rigorous service level agreements without having to employ costly hardware upgrades. We expect to see demand for dual actuator technology to increase as customers transition to drive capacities above 20 TB.”


Related Reading:


Source: Seagate




Source: AnandTech – Seagate: 18 TB HDD Due in First Half 2020, 20 TB Drive to Ship in Late 2020

Dell Introduces UltraSharp 27-Inch 4K Monitor (UP2720Q) With Integrated Colorimeter

Just in time for this week’s Adobe MAX conference, Dell has introduced an updated version of its popular 27-inch 4K UltraSharp professional display. The latest iteration of Dell’s pro monitor, the UltraSharp 27 4K PremierColor Monitor (UP2720Q) is shaking things up by taking the already factory-calibrated monitor family and integrating a colorimeter for even further calibration options, as well as Thunderbolt 3 support. At the same time, however, Dell is also dropping HDR support, making this (once again) a purely SDR display.


Like its predecessors, the UltraSharp 27 4K PremierColor Monitor UP2720Q is particularly aimed at photographers, designers, and other people with color-critical workloads. The LCD comes factory calibrated to a Delta E < 2 accuracy so as to be ready for work out of the box, and is equipped with a light shielding hood.


Under the hood, the UP2720Q is based on a 10-bit IPS panel featuring a 3840×2160 resolution. The now purely SDR monitor offers a typical brightness of 250 nits, a 1300:1 contrast ratio, a 6 ms GtG response time, 178°/178° viewing angles, and has a 3H anti-glare hard coating. Being aimed at graphics and photography professionals, the LCD can display 1.07 billion colors and covers 100% of the Adobe RGB color gamut, 98% of DCI-P3, and 80% of BT.2020. Furthermore, the monitor can display two color gamuts at once when its Picture-by-Picture capability is used.



The key new feature of the UP2720Q is its built-in colorimeter, which is compatible with CalMAN software and allows users to ensure that they are working with the most accurate colors possible. Typically, monitors used for graphics and photo editing need to be recalibrated every few months, and an integrated colorimeter stands to make that task much easier.



The monitor can connect to host PCs using a DisplayPort 1.4, two HDMI 2.0 inputs, or, new to this latest model, a Thunderbolt 3 connector. The display has an additional TB3 port to daisy chain another TB3 device, and also includes a USB 3.2 Gen 2 hub and a headphone output. The Thunderbolt 3 port can supply its host PC with up to 90 W of power, enough for high-end 15.6-inch laptops.



Just like other professional monitors, the UltraSharp 27 4K PremierColor Monitor UP2720Q has a stand that can adjust height, tilt, and swivel. Besides, the display can be used in portrait mode.



Dell’s PremierColor Monitor UP2720Q will be available starting from January 15, 2020, at a price of $1,999.99.

















The Dell UltraSharp PremierColor 27-Inch Monitor Specs

|  | UP2720Q | UP2718Q |
|---|---|---|
| Display Size | 27-inch | 27-inch |
| Panel Type | 10-bit IPS | 10-bit IPS |
| Resolution | 3840×2160 | 3840×2160 |
| Refresh Rate | 60 Hz | 60 Hz |
| Response Time | 8 ms typical, 6 ms GtG in fast mode | 6 ms GtG |
| Contrast Ratio | 1300:1 | 1000:1 (SDR), 20,000:1 (HDR) |
| Brightness | 250 nits | 400 nits (SDR), 1000 nits (HDR) |
| Color Gamut | 100% AdobeRGB, 98% DCI-P3, 80% BT.2020 | 100% AdobeRGB, 98% DCI-P3, 77% BT.2020 |
| HDR | No | Yes |
| Stand | Height adjustment (130 mm), Tilt (-5° to 21°), Swivel (-45° to 45°), Pivot (-90° to 90°) | Height adjustment (145 mm), Tilt (-5° to 21°), Swivel (-45° to 45°), Pivot (-90° to 90°) |
| Connectivity | 2 × HDMI 2.0, 1 × DisplayPort 1.4, 1 × Thunderbolt 3 (upstream), 1 × Thunderbolt 3 (downstream), 1 × headphone output, USB 3.2 Gen 2 hub | 2 × HDMI 2.0, 1 × DisplayPort 1.4, 1 × Mini DisplayPort 1.4, 1 × headphone output, USB 3.0 Gen 1 hub |
| Availability | January 2020 | May 2017 |
| Price | $1999.99 | $1999.99 |

Related Reading:


Source: Dell



Source: AnandTech – Dell’s Introduces UltraSharp 27-Inch 4K Monitor (UP2720Q) With Integrated Colorimeter

Dell’s Latitude 7220 Rugged Extreme Tablet Gets Quad-Core CPUs & 1000-Nits Display

Dell has introduced a new version of its high-end 12-inch Latitude Rugged Extreme tablet, the Latitude 7220 Rugged Extreme. Like other ruggedized PCs, the new 7220 is designed to reliably work in harsh environments, offering protection against scrapes, drops, and material ingress of all kinds. At a high level, Dell’s latest ruggedized tablet largely carries over their earlier designs – making it compatible with ‘most’ of the accessories developed for them – but now it has been upgraded to a quad-core 8th Generation Intel Core CPU, a 1000-nit display, as well as the latest in connectivity technologies.


Dell has a history of offering fully-rugged tablets that goes back to 2015, and with the Latitude 7220 RE they are now on their third-generation tablet. Just like its predecessors, the Latitude 7220 Rugged Extreme tablet comes in a MIL-STD-810G-certified 24-mm thick chassis designed to withstand operating drops, thermal extremes (-29°C to 63°C / -20°F to 145°F), dust, sand, humidity, blowing rain, vibration, functional shock, and all other kinds of physical impact. The tablet is also MIL-STD-461F certified, meaning that the Latitude 7220 RE is both designed to avoid leaking electromagnetic interference, as well as being able to resist it.



Because of the significant chassis bulk required to meet the durability requirements for a ruggedized device, the Latitude 7220 Rugged Extreme is neither small nor light; the device weighs in at 1.33 kilograms, which is comparable to a full-blown 13.3-inch notebook. In fact, the new model is 60 grams heavier than the previous-generation Latitude 7212 Rugged Extreme.


From a technology perspective, one of the key improvements with the Latitude 7220 RE is its new display panel, which offers a peak luminance of 1000 nits and should be bright enough to provide decent image quality even under direct sunlight. The brightness of the screen can now be regulated on the front panel of the tablet, which should be rather convenient. Meanwhile, the tablet no longer has a dedicated Windows button, but the latter is still present on Dell’s optional IP65-rated keyboard cover with kickstand.


Under the hood of the new tablet is Intel’s 8th Generation Core i3/i5/i7 (Whiskey Lake) processor, which offers two or four cores along with Intel’s UHD Graphics 630. Depending on the exact tablet SKU, that CPU can be accompanied by 8 GB or 16 GB of LPDDR3 memory and a PCIe 3.0 x4-based Class 35 or Class 40 M.2 SSD, with capacities ranging from 128 GB to 2 TB. The system can be powered by two hot-swappable batteries, each with a 34 Wh capacity (by default, the system includes only one), though Dell isn’t promoting specific battery life figures since they expect the tablet’s customers to have a pretty varied range of use cases.



Meanwhile, Dell’s latest ruggedized tablet has also received a communications upgrade. The tablet not only offers Wi-Fi 5/6 and Bluetooth (which can be hardware disabled for military-bound devices), but also can include an optional Qualcomm Snapdragon X20 4G/LTE modem, as well as a FirstNet module to access networks for first responders.


As for wired I/O, the Latitude 7220 Rugged Extreme includes a USB 3.1 Type-C connector that can be used for charging and external display connectivity, a USB 3.0 Type-A port, an optional micro RS-232 port, a POGO connector for the keyboard, and a 3.5-mm audio jack for headsets. The tough tablet also features rear and front cameras, an SD card reader, an optional contactless smart card reader, as well as a touch fingerprint sensor. Meanwhile, notably unlike its predecessor, the 7220 no longer includes a GbE port, VGA, HDMI, or various other legacy I/O options.


As far as security is concerned, Dell’s Latitude 7220 Rugged Extreme can be configured to cover all the bases. The tablet has a fingerprint reader, Dell’s ControlVault advanced authentication, Intel vPro remote management, a TPM 2.0 module, optional encryption for SSDs, and NIST SP800-147 secure platform.



































Specifications of the Dell Latitude 12 Rugged Extreme Tablets

|  | Latitude 12 7212 (2017) | Latitude 7220 (2019) |
|---|---|---|
| LCD Diagonal | 11.6″ | 11.6″ |
| Resolution | 1920×1080 | 1920×1080 |
| Display Features | Outdoor-readable display with gloved multi-touch, AG/AR/AS/polarizer coatings, and Gorilla Glass | Brightness: 1000 cd/m²; outdoor-readable, anti-glare, anti-smudge, polarizer, glove-capable touchscreen |
| CPU | Dual-core 6th Gen Intel Core i5 CPUs (Skylake-U) or dual-core 7th Gen Intel Core i3/i5/i7 CPUs (Kaby Lake-U) | Intel Core i7-8665U (4C/8T, vPro), Intel Core i5-8365U (4C/8T, vPro), or Intel Core i3-8145U (2C/4T) |
| Graphics | Intel HD Graphics 520/620 (24 EUs) | Intel UHD Graphics 630 (24 EUs) |
| RAM | 8 GB or 16 GB LPDDR3 | 8 GB or 16 GB LPDDR3-2133 |
| Storage | 128 GB SATA Class 20 SSD; 256 GB SATA Class 20 SSD Opal 2.0 SED; 256 GB SATA Class 20 SSD; 256 GB PCIe NVMe Class 40 SSD Opal 2.0 SED; 512 GB SATA Class 20 SSD Opal 2.0 SED; 512 GB SATA Class 20 SSD; 512 GB PCIe NVMe Class 40 SSD; 1 TB SATA Class 20 SSD; 1 TB PCIe NVMe Class 40 SSD | M.2 PCIe 3.0 x4 SSDs — Class 35: 128 GB; Class 40: 256 GB, 512 GB, 1 TB, 2 TB; Class 40 SED: 256 GB, 512 GB, 1 TB |
| Wireless (Wi-Fi, Bluetooth options) | Intel Dual Band Wireless-AC 8265 with Bluetooth 4.2 + vPro; Intel Dual Band Wireless-AC 8265 without Bluetooth; Qualcomm QCA61x4A 802.11ac Dual Band (2×2) Wireless Adapter + Bluetooth 4.1 | Intel Wireless-AC 9560, 2×2, 802.11ac with Bluetooth 5.0; Intel Wi-Fi 6 AX200, 2×2, 802.11ax with MU-MIMO and Bluetooth 5.0; Intel Wi-Fi 6 AX200, 2×2, 802.11ax with MU-MIMO, without Bluetooth |
| Mobile Broadband (optional) | Qualcomm Snapdragon X7 LTE-A (DW5811e Gobi5000) for Worldwide, AT&T, Verizon, or Sprint (Windows 7 and 10 options); Dell Wireless 5816e multi-mode Gobi 5000 4G LTE WAN card (Japan/ANZ only) | DW5821E Qualcomm Snapdragon X20 4G/LTE wireless WAN card for AT&T, Verizon, Sprint |
| GPS | Dedicated u-blox NEO-M8 GPS card | Dedicated u-blox NEO-M8 GPS card |
| Additional | Dual RF pass-through (Wi-Fi and mobile broadband), near field communication (NFC) | ? |
| USB | 1 × USB Type-C w/ DP, PD; 1 × USB 3.0 Type-A | 1 × USB 3.1 Type-C w/ DP, PD; 1 × USB 3.0 Type-A |
| Cameras | Front: front-facing camera; rear: rear-facing camera with flash LED | Front: 5 MP RGB + IR FHD webcam with privacy shutter; rear: 8 MP camera with flash and dual microphone |
| Security | Optional security includes: TPM 2.0; ControlVault advanced authentication; Dell Security Tools; Dell data protection encryption; contactless SmartCard reader; fingerprint reader; steel-reinforced cable lock slot | Optional security includes: TPM 2.0; ControlVault advanced authentication; Dell Security Tools; Dell data protection encryption; contactless/contacted SmartCard reader; fingerprint reader; NIST SP800-147 secure platform; Dell Backup and Recovery |
| Other I/O | TRRS audio jack, micro RS-232 (optional), POGO, SD card reader, etc. | TRRS audio jack, micro RS-232 (optional), POGO, SD card reader, etc. |
| Battery | 34 Wh primary battery | 34 Wh primary (ExpressCharge), 34 Wh secondary (optional) |
| Width | 312 mm / 12.3 inches | 312.2 mm / 12.29 inches |
| Height | 203 mm / 8 inches | 203 mm / 8 inches |
| Thickness | 24 mm / 0.96 inches | 24.4 mm / 0.96 inches |
| Weight | 1270 grams (tablet) | 1330 grams (tablet) |
| Operating System | Microsoft Windows 10 Pro 64-bit; Microsoft Windows 10 Pro with Windows 7 Professional downgrade (64-bit, Skylake CPU required) | Microsoft Windows 10 Pro 64-bit |
| Regulatory and Environmental Compliance (both models) | MIL-STD-810G: transit drop (48″/1.22 m; single unit; 26 drops), operating drop (36″/0.91 m), blowing rain, blowing dust, blowing sand, vibration, functional shock, humidity, salt fog, altitude, explosive atmosphere, thermal extremes, thermal shock, freeze/thaw, tactical standby to operational. Operating thermal range: -20°F to 145°F (-29°C to 63°C). Non-operating thermal range: -60°F to 160°F (-51°C to 71°C). IEC 60529 ingress protection: IP-65 (dust-tight, protected against pressurized water). | |
| Hazardous Locations | ANSI/ISA.12.12.01 certification capable (Class I, Division 2, Groups A, B, C, D), CAN/CSA C22.2 | ANSI/ISA.12.12.01 certification capable (Class I, Division 2, Groups A, B, C, D) |
| Electromagnetic Interference | MIL-STD-461F certified | MIL-STD-461F and MIL-STD-461G |
| Optional Accessories | Dell Desktop Dock for the Rugged Tablet, Dell Dock WD15, Dell Power Companions, kickstand and rugged RGB backlit keyboard cover, shoulder strap, soft and rigid handle options, chest harness, cross strap, active pen, backpack modules, Dell monitors, Dell wireless keyboard and mouse | Rugged tablet dock, keyboard with kickstand, Havis vehicle dock, PMT vehicle dock, Gamber-Johnson vehicle dock, carrying accessories, scanner module, extended I/O module, Dell monitors (with USB-C or over a USB-C-to-DP adapter), Dell wireless keyboards and mice |
| Price | - | Starting at $1,899 |

The Dell Latitude 7220 Rugged Extreme tablet is now available from Dell starting at $1,899.



Related Reading:


Source: Dell

Some images are made by Getty Images and distributed by Dell



Source: AnandTech – Dell’s Latitude 7220 Rugged Extreme Tablet Gets Quad-Core CPUs & 1000-Nits Display

Intel Launches Xeon-E 2200 Series for Servers: 8 Cores, up to 5.0 GHz

The Xeon-E family from Intel replaced the Xeon E3-1200 parts that were commonplace in a lot of office machines and small servers. The Xeon-E parts are almost direct analogues of the current leading consumer processor hardware, except with ECC memory support and support for vPro out of the box. Today’s launch is a secondary launch, with Intel having released the Xeon-E 2200 series some time ago for the cloud market, but this launch marks general availability for consumers and the small-scale server market.



Source: AnandTech – Intel Launches Xeon-E 2200 Series for Servers: 8 Cores, up to 5.0 GHz

Samsung Confirms Custom CPU Development Cancellation

The fate of Samsung’s custom CPU development efforts has been making the rounds of the rumour mill for almost a month, and now we finally have confirmation from Samsung that the company has stopped further development work on its custom Arm architecture CPU cores. This public confirmation comes via Samsung’s HR department, which last week filed an obligatory notice letter with the Texas Workforce Commission, warning about upcoming layoffs of Samsung’s Austin R&D Center CPU team and the impending termination of their custom CPU work.


The CPU project, currently said to number around 290 team members, started off sometime in 2012 and has produced the custom ARMv8 CPU microarchitectures from the Exynos M1 in the Exynos 8890 up to the latest Exynos M5 in the upcoming Exynos 990.


Over the years, Samsung’s custom CPU microarchitectures had a tough time in differentiating themselves from Arm’s own Cortex designs, never being fully competitive in any one metric. The Exynos-M3 Meerkat cores employed in the Exynos 9810 (Galaxy S9), for example, ended up being more of a handicap to the SoC due to its poor energy efficiency. Even the CPU project itself had a rocky start, as originally the custom microarchitecture was meant to power Samsung’s custom Arm server SoCs before the design efforts were redirected towards mobile use.


In a response to Android Authority, Samsung confirmed the choice was based on business and competitive merits. A few years ago, Samsung had told us that custom CPU development was significantly more expensive than licensing Arm’s CPU IP. Indeed, it’s a very large investment to make given the uphill battle of not only designing a core that matches Arm’s IP, but actually beating it.


Beyond the custom CPU’s competitiveness, the cancellation likely is tied to both Samsung’s and Arm’s future CPU roadmaps and timing. Following Deimos (Cortex-A77) and Hercules (Cortex-A78?), Arm is developing a new high-performance CPU on the new ARMv9 architecture, and we expect a major new v9 little core to also accompany the Matterhorn design. It’s likely that Samsung would have had to significantly ramp up R&D to be able to intercept Arm’s ambitious design, if even possible at all given the area, performance, and efficiency gaps.


In practice, the end result is bittersweet. On one hand, the switch back to Cortex-A CPUs in future Exynos flagship SoCs should definitely benefit SLSI’s offerings, hopefully helping the division finally achieve SoC design wins beyond Samsung’s own Mobile division – or dare I hope, even fully winning a Samsung Galaxy design instead of only being a second-source alongside Qualcomm.


On the other hand, it means there’s one less custom CPU development team in the industry which is unfortunate. The Exynos 990 with the M5 cores will be the last we’ll see of Samsung’s custom CPU cores in the near future, as we won’t be seeing the in-development M6. M6 was an SMT microarchitecture, which frankly quite perplexed me as a mobile targeted CPU – I definitely would have wanted to see how that would have played out, just from an academic standpoint.


The SARC and ACL activities in Austin and San Jose will continue, as Samsung’s SoC, AI, and custom GPU teams are still active; the latter project seems to be continuing alongside the new AMD GPU collaboration and IP licensing for future Exynos SoCs.


Related Content:




Source: AnandTech – Samsung Confirms Custom CPU Development Cancellation

LG’s E9, C9 & B9 OLED TVs to Get NVIDIA G-Sync via Firmware Update

Back in September, LG and NVIDIA teamed up to enable G-Sync variable refresh rate support on select OLED televisions. Starting this week and before the end of the year LG will issue firmware updates that add support for the capability on the company’s latest premium OLED TVs.


LG’s 2019 OLED TVs have been making waves throughout the gaming community since their launch earlier this year. The TVs are among the first to support HDMI 2.1’s standardized variable refresh rate technology, adding a highly demanded gaming feature to LG’s already popular lineup of TVs. This has put LG’s latest generation of TVs on the cutting edge, and, along with Microsoft’s Xbox One X (the only HDMI-VRR source device up until now), the duo of devices has been serving as a pathfinder for HDMI-VRR in general.


Now, NVIDIA is getting into the game by enabling support for HDMI-VRR on recent video cards, as well as working with LG to get the TVs rolled into the company’s G-Sync Compatible program. The two companies have begun rolling out the final pieces needed for variable refresh support this week, with LG releasing a firmware update for their televisions, while NVIDIA has started shipping a new driver with support for the LG TVs.


On the television side of matters, LG and NVIDIA have added support for the 2019 E9 (65 and 55 inches), C9 (77, 65 and 55 inches), and B9 (65 and 55 inches) families of TVs, all of which have been shipping with variable refresh support for some time now.


The more interesting piece of the puzzle is arguably on the video card side of matters, where NVIDIA is enabling support for the TVs on their Turing generation of video cards, which covers the GeForce RTX 20 series as well as the GeForce GTX 16 series of cards. At a high level, NVIDIA and LG are branding this project as adding G-Sync Compatible support for the new TVs. But, as NVIDIA has confirmed, under the hood this is all built on top of HDMI-VRR functionality. Meaning that as of this week, NVIDIA has just added support for HDMI’s variable refresh standard to their Turing video cards.


While HDMI-VRR was introduced as part of HDMI 2.1, the feature is an optional extension to HDMI and is not contingent on the latest standard’s bandwidth upgrades. This has allowed manufacturers to add support for the tech to HDMI 2.0 devices, which is exactly what has happened with the Xbox One X and now NVIDIA’s Turing video cards. Which in the case of NVIDIA’s cards came as a bit of a surprise, since prior to the LG announcement NVIDIA never revealed that they could do HDMI-VRR on Turing.


At any rate, the release of this new functionality gives TV gamers another option for smooth gaming on big-screen TVs. Officially, the TVs are part of the G-Sync Compatible program, meaning that on top of the dev work NVIDIA has done to enable HDMI-VRR, they are certifying that the TVs meet the program’s standards for image stability (e.g. no artifacting or flickering). Furthermore, as these are HDR-capable OLED TVs, NVIDIA is also supporting HDR gaming as well, covering the full gamut of features available in LG’s high-end TVs.


Ultimately, LG is the first TV manufacturer to work with NVIDIA to get the G-Sync Compatible certification, which going into the holiday shopping season will almost certainly be a boon for both companies. So it will be interesting to see whether other TV makers will end up following suit.



Related Reading:


Source: LG, NVIDIA



Source: AnandTech – LG’s E9, C9 & B9 OLED TVs to Get NVIDIA G-Sync via Firmware Update

Google To Acquire Fitbit for $2.1 Billion

Google on Friday announced that it had reached an agreement to buy Fitbit, a leading maker of advanced fitness trackers. Google stressed that data obtained and processed by Fitbit’s devices will remain in appropriate datacenters and will not go elsewhere.


Under the terms of the agreement, Google will pay $2.1 billion in cash, valuing the company at $7.35 per share. In accordance with the deal, Google will become the sole owner of Fitbit, owning its IP and handling all hardware and software development and distribution.


The takeover of Fitbit is the latest step in Google’s ongoing strategy to make its Android platform more attractive to consumers. Fitbit has more than 28 million active users, and while the company is far from holding the lion’s share of the wearables market, it has a significantly bigger presence than the small number of Wear OS devices that Google’s partners have been able to sell.


Overall, this is the second major wearables-related acquisition for Google this year. Earlier this year the company also bought technology and R&D personnel from watch maker Fossil.


James Park, co-founder and CEO of Fitbit, said the following:


“Google is an ideal partner to advance our mission. With Google’s resources and global platform, Fitbit will be able to accelerate innovation in the wearables category, scale faster, and make health even more accessible to everyone. I could not be more excited for what lies ahead.”


The transaction is expected to close in 2020, and from there expect Google to integrate Fitbit’s IP into the Android platform.


Related Reading:


Source: Google/Fitbit press release




Source: AnandTech – Google To Acquire Fitbit for $2.1 Billion

Western Digital Begins Shipments of 96-Layer 3D QLC-Based SSDs, Retail Products

Western Digital announced this week that it has started shipments of its first products based on 3D QLC NAND memory. The initial devices to use the highly-dense flash memory are retail products (e.g., memory cards, USB flash drives, etc.) as well as external SSDs. Eventually, high-density 3D QLC NAND devices will be used to build high-capacity SSDs that will compete against nearline hard drives.


During Western Digital’s quarterly earnings conference call earlier this week, Mike Cordano, president and COO of the company, said that in the third quarter of calendar 2019 (Q1 FY2020) the manufacturer “began shipping 96-layer 3D QLC-based retail products and external SSDs.” The executive did not elaborate on which product lines now use 3D QLC NAND, though typically we see higher-capacity NAND first introduced in products such as high-capacity memory cards and external drives.


Western Digital and its partner Toshiba Memory (now called Kioxia) were among the first companies to develop 64-layer 768 Gb 3D QLC NAND back in mid-2017 and even started sampling these devices back then, but WD/Toshiba opted not to mass produce the NAND. Meanwhile, in mid-2018, Western Digital introduced its 96-layer 1.33 Tb 3D QLC NAND devices, which could either enable storage products with considerably higher capacities or cut the costs of drives when compared to 3D TLC-based solutions.


At present, Western Digital’s 1.33 Tb 3D QLC NAND devices are the industry’s highest-capacity commercial NAND chips, so from this standpoint the company is ahead of its rivals. But while it makes great sense to use 1.33 Tb 3D QLC NAND for advanced consumer storage devices, these memory chips were developed primarily for ultra-high-capacity SSDs that could rival nearline HDDs for certain applications.


It is hard to say when Western Digital will commercialize such drives, as the company is only starting to qualify 96-layer 3D QLC NAND for SSDs, but it will definitely be interesting to see which capacity points will be hit with said memory.


On a related note, Western Digital also said that in Q3 2019 (Q1 FY2020), bit production of 96-layer 3D NAND exceeded bit production of 64-layer 3D NAND.


Related Reading:


Source: Western Digital


 



Source: AnandTech – Western Digital Begins Shipments of 96-Layer 3D QLC-Based SSDs, Retail Products

Rambus Demonstrates GDDR6 Running At 18 Gbps

While GDDR6 is currently available at speeds up to 14Gbps, and 16Gbps speeds are right around the corner, if the standard is going to have as long a lifespan as GDDR5, then it can’t stop there. To that end, Rambus this week demonstrated operation of its GDDR6 memory subsystem at a data transfer rate of 18 GigaTransfers/second, a new record for the company. Rambus’s controller and PHY can deliver a peak bandwidth of 72 GB/s from a single 32-bit GDDR6 DRAM chip, or a whopping 576 GB/s from a 256-bit memory subsystem, which is what we’re commonly seeing on graphics cards today.
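For reference, those bandwidth figures are simply the per-pin data rate multiplied by the bus width; a quick back-of-the-envelope sketch:

```python
# Peak GDDR6 bandwidth: per-pin data rate x bus width, divided by 8 bits per byte.
def gddr6_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a GDDR6 interface."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(18, 32))   # 72.0  -> one x32 GDDR6 device at 18 Gbps
print(gddr6_bandwidth_gb_s(18, 256))  # 576.0 -> a 256-bit graphics-card-style subsystem
```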


The Rambus demonstration involved the company’s silicon-proven GDDR6 PHY implemented using one of TSMC’s 7 nm process nodes, accompanied by Northwest Logic’s GDDR6 memory controller and GDDR6 chips from an unknown maker. According to a transmit eye screenshot published by Rambus, the subsystem worked fine and the signals were clean.


Both GDDR6 controller and PHY can be licensed from Rambus by developers of SoCs, so the demonstration is both a testament to how well the company’s highly-integrated 7 nm GDDR6 solution works, and a means to promote their IP offerings.


It is noteworthy that Rambus, along with Micron and a number of other companies, has been encouraging the use of GDDR6 memory in products besides GPUs for quite some time. Various accelerators for AI, ML, and HPC workloads as well as networking gear and autonomous driving systems greatly benefit from the technology’s high memory bandwidth and are therefore a natural fit for GDDR6. The demonstration is meant to show companies developing SoCs that Rambus has a fine GDDR6 memory solution implemented using a leading-edge process technology that can be easily integrated (with the help of engineers from Rambus) into their designs.


For the graphics crowd, Rambus’ demonstration gives a hint of what to expect from upcoming implementations of GDDR6 memory subsystems and indicates that GDDR6 still has some additional room for growth in terms of data transfer rates.


Related Reading:


Source: Rambus



Source: AnandTech – Rambus Demonstrates GDDR6 Running At 18 Gbps

GIGABYTE Enhances Aorus RGB Memory with Aorus Memory Boost Capability

One of the advantages of having a highly-integrated product stack is the ability to fine-tune the performance of your devices when they work together. On the one hand, this allows a vendor to extract higher performance while ensuring maximum compatibility, and thus differentiate from rivals. On the other hand, it enables the company to sell more products per end user, sometimes at a premium. This is exactly what GIGABYTE is doing with its Aorus motherboards and the Aorus Memory Boost feature of its Aorus RGB Memory modules.


GIGABYTE, which introduced its first DIMMs in mid-2018, is a relative newcomer to the memory market, so its DDR4 product lineup is currently limited to seven SKUs conservatively rated for DDR4-2666, DDR4-3200, and DDR4-3600 operation (whereas faster kits announced at CES 2019 are yet to be launched commercially). Meanwhile, the company appears to have something of a secret weapon in the form of a special SPD setting called Aorus Memory Boost (AMB) that slightly increases the speed of its top-of-the-range DRAM modules.


When used with select AMD X570 and Intel Z390-based GIGABYTE Aorus motherboards, the Aorus RGB Memory DDR4-3600 with CL18 19-19-39 16 GB dual-channel kits (GP-AR36C18S8K2HU416R and GP-AR36C18S8K2HU416RD) can automatically set themselves to DDR4-3700 or DDR4-3733 (Intel/AMD) mode when their AMB profile is activated.



It is unclear whether such an overclock affects timings in a bid to slightly increase data transfer rates, or whether GIGABYTE can guarantee stable operation at higher clocks on its motherboards due to the superior design of the latter, but it is evident that the company’s modules work better with its platforms. The latter would not be particularly surprising, as Aorus-branded motherboards are engineered to feature overclocking headroom for CPU and DRAM, so GIGABYTE does not really take many risks here. Meanwhile, we can only wonder whether GIGABYTE’s Aorus Memory Boost will be available on its higher-end DDR4-4000+ modules, which are harder to overclock while guaranteeing long-term stability.


GIGABYTE’s Aorus RGB Memory DDR4-3600 16 GB dual-channel kits are already listed on the company’s website, so expect them to hit retail shortly. Prices are unknown, but it remains to be seen whether the manufacturer decides to capitalize on the Aorus Memory Boost and sell the modules at a slight premium.


Related Reading:


Sources: GIGABYTE (1, 2), TechPowerUp



Source: AnandTech – GIGABYTE Enhances Aorus RGB Memory with Aorus Memory Boost Capability

Sony to Build New Fab to Boost CMOS Sensor Output

Sony this week has revealed that the company will be building a new semiconductor fab to boost output of its CMOS sensors, as part of a broader effort to respond to growing demand for these products. The company will build the new fab at its Nagasaki Technology Center and expects it to tangibly increase their production of CMOS wafers.


Being one of the leading suppliers of CMOS camera sensors for smartphones, Sony earns billions of dollars selling them. In the third quarter (Q2 FY2019) Sony’s Imaging and Sensing Solutions (I&SS) division earned $2.871 billion in revenue (up 56.3% year-over-year) and $706 million in profits*. As of late March 2018, Sony’s CMOS production capacity was 100 thousand 300-mm wafer starts per month, and the company is gradually increasing its output by improving efficiency of its fab space utilization and outsourcing part of the production. But that may not be enough.


In the coming years demand for CMOS sensors is going to grow because of several factors: smartphones now use not two (for main and selfie cameras), but three or even more camera modules; smartphone sensors are getting larger; and more devices are going to get computer vision support, requiring more sensors there as well.



In order to satisfy demand for such products, the company constantly improves its fabs and expects to boost their total output capacity to around 138 thousand wafer starts per month by late March 2021. Furthermore, Sony plans to invest billions of dollars (PDF, page 152) in fab upgrades as well as building an additional fab (or even fabs) at its Nagasaki Technology Center. The new manufacturing facility (or facilities) is expected to start production sometime during the company’s 2021 fiscal year, which starts on April 1, 2021. That being said, it is reasonable to expect that Sony is aiming to start construction of the facility in the coming months.


It is noteworthy that Sony’s semiconductor division (which is now called I&SS) reportedly has not invested anything in brand-new production facilities for 12 years. The company did acquire a semiconductor fab from Toshiba and then re-purposed it to make sensors in 2016, but this was not a new fab. Apparently, Sony now forecasts such high demand for sensors in the coming years that it has decided to invest in all-new production lines.


Sony’s statement reads as follows:


“We expect demand for our image sensors to continue to increase from next fiscal year as well due to the adoption of multi-sensor cameras and larger- sized sensors by smartphone makers.


In order to respond to this strong demand, we have further improved the efficiency of space utilization in our existing factories and have raised our production capacity target for the end of March 2021 from 130,000 wafers per month to 138,000 wafers per month.


Moreover, we have decided to move forward in stages with the investment we had been considering to build new fabs at our Nagasaki facility to accommodate demand from the fiscal year beginning April 1, 2021.


Through this action, we are working to continue growing the I&SS business so as to achieve the mid-range targets we established at the IR Day this year: 60% revenue share of the image sensor market and 20-25% ROIC in the fiscal year ending March 31, 2026.”


*For the sake of clarity, it is necessary to note that Sony’s I&SS division still produces some chips for Sony’s needs, so not 100% of its revenue comes from image sensors


Related Reading:


Sources: Sony, Sony, Nikkei, ElectronicsWeekly, Reuters



Source: AnandTech – Sony to Build New Fab to Boost CMOS Sensor Output

Western Digital Launches WD Red SA500 Caching SSDs for NAS

Western Digital has introduced its new WD Red SA500 family of specialized SSDs, which are designed for caching data in NAS devices. The drives are available in four different capacities from 500 GB to 4 TB to satisfy demands of different customers. To maximize their compatibility, the SSDs feature a SATA 6 Gbps interface and come in M.2-2280 or 2.5-inch/7-mm form-factors.


Now that many desktop PCs have either been replaced by laptops or are so small that they cannot house a decent number of capacious hard drives, NAS use is gaining traction among those individuals and small businesses who need to store fairly large amounts of data. To provide such customers with high performance (comparable to that of internal storage), many NASes these days feature a 10GbE network adapter as well as a special bay (or bays) for a caching SSD. However, the vast majority of client SSDs on the market were not designed for pure caching workloads, which are more write-heavy than typical consumer workloads. Seagate, with its IronWolf 110, was the first company to launch an SSD architected for NAS caching early this year, and now Western Digital follows suit with its WD Red SA500 family, which is broader than that offered by its rival.



While it’s not being disclosed by the company, Western Digital’s WD Red SA500 SSDs are based on Marvell’s proven 88SS1074 controller, and paired with the company’s 3D TLC NAND memory. When it comes to capacities, the new WD Red SA500 drives are available in two form-factors: M.2-2280 models offer 500 GB, 1 TB, and 2 TB capacities, whereas 2.5-inch/7-mm SKUs can store 500 GB, 1 TB, 2 TB and 4 TB of data.



Performance-wise, the WD Red SA500 offers up to 560 MB/s sequential read speeds, up to 530 MB/s sequential write speeds, and up to 95K/85K random read/write IOPS, which is in line with advanced client SATA SSDs. But the key difference between typical client drives equipped with the same controller and the WD Red SA500 is a special firmware optimized for more evenly mixed workloads and engineered to ensure longevity. By contrast,  client SSDs are tailored mostly for fast reads.


As far as endurance is concerned, the WD Red SA500 SSDs are rated for 0.32 – 0.38 DWPD over a five-year warranty period, which is in line with that of modern desktop drives. This is admittedly not especially high for a drive that can fill itself in under an hour, but presumably Western Digital is confident that the caching algorithms in modern NASes are not so aggressive that the drives will be extensively rewritten. Moreover, at the end of the day we are talking about consumer as well as SMB-class NASes, where the expected workloads are lower than with enterprise systems.
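Those DWPD figures are just the TBW ratings from the table below spread over the five-year warranty; a quick sketch of the arithmetic:

```python
# Derive DWPD from a total-bytes-written (TBW) rating over a 5-year warranty.
WARRANTY_DAYS = 5 * 365

def dwpd(tbw_tb: float, capacity_tb: float) -> float:
    """Drive writes per day implied by a TBW rating."""
    return tbw_tb / (capacity_tb * WARRANTY_DAYS)

for capacity_tb, tbw_tb in [(0.5, 350), (1, 600), (2, 1300), (4, 2500)]:
    print(f"{capacity_tb} TB: {dwpd(tbw_tb, capacity_tb):.2f} DWPD")
# Roughly 0.38, 0.33, 0.36 and 0.34 DWPD; WD's own table rounds these slightly
# differently, but they fall in the quoted 0.32 - 0.38 DWPD range.
```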





























The WD Red SA500 Caching SSDs for NAS

|  | WD Red SA500 (500 GB / 1 TB / 2 TB / 4 TB) |
|---|---|
| Model Numbers | ? |
| Controller | Marvell 88SS1074 |
| NAND Flash | 3D TLC NAND |
| Form Factors, Interface | M.2-2280 (500 GB – 2 TB) and 2.5-inch/7-mm (500 GB – 4 TB), SATA 6 Gbps |
| Sequential Read | 560 MB/s |
| Sequential Write | 530 MB/s |
| Random Read IOPS | 95K |
| Random Write IOPS | 85K / 82K (model-dependent) |
| Pseudo-SLC Caching | ? |
| DRAM Buffer | Yes, capacity unknown |
| TCG Opal Encryption | ? |
| Power Consumption | Avg. active: 52 / 60 / 60 mW; max read: 2050 / 2550 / 3000 mW; max write: 3350 / 3750 / 3800 mW; slumber: 56 mW; DEVSLP: 5–7 / 5–12 mW (values vary by capacity) |
| Warranty | 5 years |
| MTBF | 2 million hours |
| TBW | 350 TB (500 GB), 600 TB (1 TB), 1300 TB (2 TB), 2500 TB (4 TB) |
| DWPD | 0.38 (500 GB), 0.32 (1 TB), 0.35 (2 TB), 0.34 (4 TB) |
| UBER | 1 in 10^17 |
| Additional Information | Link |
| MSRP | M.2: $72 (500 GB), ? (1 TB), $297 (2 TB); 2.5-inch: $75 (500 GB), ? (1 TB), ? (2 TB), $600 (4 TB) |

Western Digital’s WD Red SA500 SSDs are currently available directly from the company, with broader availability expected in November. The cheapest 500 GB model costs $72 – $75 depending on the form-factor, the top-of-the-range M.2 2 TB SKU is priced at $297, whereas the highest-capacity 4 TB 2.5-inch model carries a $600 price tag.



Related Reading:


Source: Western Digital



Source: AnandTech – Western Digital Launches WD Red SA500 Caching SSDs for NAS

Samsung Develops Intel Lakefield-Based Galaxy Book S: Always-Connected x86 PC

Among several items at its developers conference this week, Samsung revealed that it was working on a version of its always-connected Galaxy Book S laptop powered by Intel’s Lakefield processor. When it becomes available in 2020, the notebook is expected to be the first mobile PC powered by Intel’s hybrid SoC, which contains a mix of high-performance and energy-efficient cores.


There are many laptop users nowadays who want their PCs to be very sleek, offer decent performance, be always connected to the Internet, and to last for a long time on a charge. Modern premium x86-based notebooks are very compact and can be equipped with a 4G/LTE modem, but even configured properly, the extra radio brings a hit to battery life over a non-modem model. The immediate solution is of course to use Intel’s low-power/energy-efficient Atom SoCs or Qualcomm’s Snapdragon processors tailored for notebooks, but this will have an impact on performance.



To offer both performance and energy efficiency for always-connected notebooks, Intel has developed its Lakefield SoC that features one high-performance Ice Lake core, four energy-efficient Tremont cores, as well as Gen 11 graphics & media cores. Internally, Intel’s Lakefield consists of two dies — a 10 nm Compute die and a 14 nm Base die — integrated into one chip using the company’s Foveros 3D packaging technology to minimize its footprint. Courtesy of Foveros, the chip measures 12×12 mm and can be integrated into a variety of emerging always-connected devices.


As it turns out, Samsung’s upcoming version of the 13.3-inch Galaxy Book S will be the first to use Intel’s Lakefield, where it will be paired with Intel’s 4G/LTE modem to offer Internet connectivity everywhere.


Samsung is not disclosing pricing or availability details for its Lakefield-powered Galaxy Book S; but since Intel plans to start production of the SoC this quarter, expect the machine to launch in 2020.



Related Reading:


Source: Intel/Samsung



Source: AnandTech – Samsung Develops Intel Lakefield-Based Galaxy Book S: Always-Connected x86 PC