ADATA at CES 2018: Incoming XPG SX8200 SSDs, From 240 GB to 1.92 TB in M.2

LAS VEGAS, NV — At CES 2018, ADATA demonstrated its new high-end XPG SX8200 SSDs. The new drives use Silicon Motion’s SM2262 controller as well as 3D TLC NAND. The SSDs will be available in configurations featuring up to 2 TB of raw 3D TLC NAND memory and promise to offer up to 3.2 GB/s sequential read speed.


Silicon Motion quietly introduced its latest-generation controllers at Computex 2017, with multiple vendors showing off SSDs powered by the SM2262 and SM2263XT controllers. Back then, some vendors promised that a new breed of SMI-based drives was just around the corner and that the products would ship in 2017. As sometimes happens, the final stages of development took a bit longer than expected, so we are going to see a host of new SSDs in 2018 instead. At least on paper, SM2262-based SSDs look very fast. In addition, the improved performance of the controller (vs. its predecessors) enables drive developers to use sophisticated LDPC ECC algorithms, enhancing the endurance of SSDs that use 3D TLC memory.



ADATA will likely be among the first companies to ship SM2262-based drives, with the XPG SX8200 being its first offering featuring the controller. The XPG SX8200 will be available in 240 GB, 480 GB, 960 GB, and 1.92 TB configurations and will thus be able to address users with vastly different capacity needs. ADATA’s current-generation consumer M.2 drives offer capacities up to 1 TB, leaving the highest end of the market to companies like Samsung. With the XPG SX8200, ADATA is announcing a 1.92 TB version that will inevitably compete against Samsung’s top-of-the-line 960 Pro SSD for end-users who need a lot of non-volatile storage space.


As for the performance of the XPG SX8200, ADATA is sharing official figures from Silicon Motion: up to 3.2 GB/s sequential read speed and up to 1.7 GB/s sequential write speed. The manufacturer does not publish random performance numbers, but Silicon Motion expects SM2262-based drives to hit up to 370K/300K random read/write 4K IOPS. Both Silicon Motion and ADATA cite peak numbers; real-world performance of the drives will be different.













ADATA XPG SX8200 Brief Specifications

Capacity: 240 GB / 480 GB / 960 GB / 1.92 TB
Controller: Silicon Motion SM2262
NAND Flash: 3D TLC NAND
Form Factor, Interface: M.2-2280, PCIe 3.0 x4, NVMe 1.2
Sequential Read: 3200 MB/s
Sequential Write: 1700 MB/s
Random Read IOPS: up to 370K (SMI data)
Random Write IOPS: up to 300K (SMI data)
Pseudo-SLC Caching: Supported
DRAM Buffer: Yes, capacity unknown

ADATA plans to start selling the XPG SX8200 sometime in February, starting with the lower capacities. The company did not announce anything at CES concerning offerings based on the more powerful SM2262EN controller. Meanwhile, the fact that it assigned the SM2262-based SSDs to the XPG SX8200 series suggests that the XPG 9200 series (for even higher performance) remains a spare number in the product stack, to be used when the time is right.


Related Reading




Source: AnandTech – ADATA at CES 2018: Incoming XPG SX8200 SSDs, From 240 GB to 1.92 TB in M.2

Samsung Starts Mass Production of 16Gb GDDR6 Memory ICs with 18 Gbps I/O Speed

This week, Samsung announced that it has started mass production of its GDDR6 memory chips for next-generation graphics cards and other applications. The new chips will be available in 16 Gb densities and will feature an interface speed significantly higher than what the fastest GDDR5 and GDDR5X ICs can offer.


GDDR6 is a next-generation specialized DRAM standard that will be supported by all three leading makers of memory. Over time, the industry will introduce a great variety of GDDR6 ICs for different application, performance, and price requirements. What Samsung is announcing this week is its first 16 Gb GDDR6 IC, which features an 18 Gbps per-pin data transfer rate and offers up to 72 GB/s of bandwidth per chip. A 256-bit memory subsystem comprised of such DRAMs will have a combined memory bandwidth of 576 GB/s, whereas a 384-bit memory subsystem will hit 864 GB/s, outperforming existing HBM2-based 1.7 Gbps/3072-bit memory subsystems that offer up to 652.8 GB/s. The added expense with GDDR6 will be in the power budget, much like current GDDR5/5X technology.
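
These figures are straightforward to sanity check: total bandwidth is simply the per-pin data rate multiplied by the bus width, divided by eight bits per byte. A minimal illustration in Python (our own arithmetic, not a vendor tool):

```python
# Sanity check: bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8.
def bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int) -> float:
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(18, 32))     # one 32-bit GDDR6 chip: 72.0 GB/s
print(bandwidth_gb_s(18, 256))    # 256-bit subsystem: 576.0 GB/s
print(bandwidth_gb_s(18, 384))    # 384-bit subsystem: 864.0 GB/s
print(bandwidth_gb_s(1.7, 3072))  # Titan V's HBM2: 652.8 GB/s
```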












GPU Memory Math: GDDR6 vs. HBM2 vs. GDDR5X

| | Theoretical GDDR6 256-bit | Theoretical GDDR6 384-bit | NVIDIA Titan V (HBM2) | NVIDIA Titan Xp | NVIDIA GeForce GTX 1080 Ti | NVIDIA GeForce GTX 1080 |
|---|---|---|---|---|---|---|
| Total Capacity | 16 GB | 24 GB | 12 GB | 12 GB | 11 GB | 8 GB |
| B/W Per Pin | 18 Gbps | 18 Gbps | 1.7 Gbps | 11.4 Gbps | 11 Gbps | 11 Gbps |
| Chip/Stack Capacity | 2 GB (16 Gb) | 2 GB (16 Gb) | 4 GB (32 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) |
| No. Chips/KGSDs | 8 | 12 | 3 | 12 | 11 | 8 |
| B/W Per Chip/Stack | 72 GB/s | 72 GB/s | 217.6 GB/s | 45.6 GB/s | 44 GB/s | 44 GB/s |
| Bus Width | 256-bit | 384-bit | 3072-bit | 384-bit | 352-bit | 256-bit |
| Total B/W | 576 GB/s | 864 GB/s | 652.8 GB/s | 547.7 GB/s | 484 GB/s | 352 GB/s |
| DRAM Voltage | 1.35 V | 1.35 V | 1.2 V (?) | 1.35 V | 1.35 V | 1.35 V |

The new GDDR6 architecture enables Samsung to support new and higher data transfer rates with non-esoteric memory form factors. To increase the interface speed, GDDR6 memory was redesigned both internally and externally. While details about the new standard will be covered in a separate article, two key things about the new memory tech are that GDDR6 features a x8/x16 per-channel I/O configuration, and that each chip now has two channels. By contrast, GDDR5/GDDR5X ICs feature a x16/x32 I/O config as well as one channel per chip. So while GDDR6 chips physically continue to feature a 16-/32-bit wide bus, it now works differently than in prior generations, as it consists of two independent channels.


In addition to higher performance, Samsung’s 16 Gb GDDR6 chips also operate at 1.35 V, down 13% from the 1.55 V required by high-performance GDDR5 ICs (e.g., 9 Gbps, 10 Gbps, etc.). According to Samsung, the lowered voltage enables it to cut the energy consumption of GDDR6 components by 35% when compared to ultra-fast GDDR5 chips, an improvement Samsung attributes to its new low-power circuit design. Meanwhile, based on information we have from Micron and SK Hynix, their GDDR6 DRAMs will also operate at 1.35 V.
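
As a rough sanity check of that 35% figure: dynamic power scales approximately with the square of voltage (P ≈ C·V²·f), so the voltage reduction alone accounts for only around two-thirds of the claimed savings, with the rest presumably coming from the low-power circuit design Samsung mentions. A back-of-the-envelope sketch:

```python
# Dynamic power scales roughly with V^2 (P ~ C * V^2 * f), all else equal.
v_gddr5, v_gddr6 = 1.55, 1.35
saving_from_voltage = 1 - (v_gddr6 / v_gddr5) ** 2
print(f"{saving_from_voltage:.1%}")  # ~24.1%: voltage alone explains only
# part of the claimed 35%; the rest comes from the new circuit design.
```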


Samsung uses one of its 10nm-class process technologies to produce its GDDR6 components. The company claims that its 16 Gb ICs bring about a 30% manufacturing productivity gain compared to its 8 Gb GDDR5 chips made using its 20 nm process technology. Typically, a productivity gain for Samsung means an increase in the number of chips per wafer, so the company has managed to make its 16 Gb ICs smaller than its previous-gen 8 Gb ICs. The company does not elaborate on how it achieved this, but it looks like the new chips are not only made using a finer process technology, but also have other advantages over their predecessors, such as a new DRAM cell structure or an optimized architecture.



Samsung’s 16 Gb GDDR6 chips come in FBGA180 packages, just like all industry-standard GDDR6 memory components from other manufacturers.


Samsung did not disclose when it plans to ship its GDDR6 DRAMs commercially, but since it has already started mass production, it is highly likely that the company’s clients are ready to build products featuring the new memory.


Related Reading




Source: AnandTech – Samsung Starts Mass Production of 16Gb GDDR6 Memory ICs with 18 Gbps I/O Speed

The SilverStone SX800-LTI SFX-L 800W PSU Review: Big PSU, Small Niche

Small form factor and living room gaming systems are becoming more and more popular, with ever-increasing capabilities – and power requirements. SilverStone’s latest SFX-L PSU, the SX800-LTI, brings 80Plus Titanium efficiency levels and a maximum power output of 800 Watts for those seeking to build a compact but extra powerful gaming behemoth.



Source: AnandTech – The SilverStone SX800-LTI SFX-L 800W PSU Review: Big PSU, Small Niche

HyperX at CES 2018: Fury SSD with RGB, for Bling

LAS VEGAS, NV – At CES, HyperX had on display its new HyperX Fury RGB SSD, which adds RGB LEDs to the Fury line of drives. These drives are set to hit the market in Q3.


HyperX showcased the new drive with its added RGB LEDs on the shell of the 7 mm, 2.5-inch SATA SSD. The RGB LEDs are located on the top of the SSD, with a large area above and below the large “X” lit up, as well as the HyperX symbol in the middle. The RGB LEDs are powered via a micro-USB port, which also provides the data path to synchronize the LEDs with the system the drive is attached to.



 

The HyperX Fury RGB will use Toshiba’s 3rd-generation 3D NAND (BiCS) flash, which has a far higher areal density than 2D NAND while also using less power. The drive is SATA based, with speeds rated at 550 MB/s reads and 520 MB/s writes. HyperX did not share which controller it will be using. We do know the HyperX Savage line of SSDs used a Phison controller (PS3110-S10), so perhaps an updated version will make its way to the Fury RGB. There are three capacities for the drives: 240GB, 480GB, and 960GB. Pricing was not listed, but these will be available in Q3 2018.



Related Reading:


 



Source: AnandTech – HyperX at CES 2018: Fury SSD with RGB, for Bling

Huawei Pushes G.hn Powerline Networking with the WiFi Q2 Whole-Home Wi-Fi Solution

Mesh networking / whole-home Wi-Fi systems have seen rapid growth over the last couple of years. Almost all vendors in the consumer networking space have one or more offerings in that hot segment. At CES 2018, Huawei threw its hat into the ring with the WiFi Q2 Whole-Home Wi-Fi Solution. Huawei is no stranger to consumer networking equipment, but its presence in the North American market is minimal.


In 2016, Huawei introduced the Q1, a single-band router with a bundled powerline-based Wi-Fi extender that utilized HomePlug AV (200 Mbps) as the backhaul. With the WiFi Q2, Huawei is going in for a major overhaul of both the internals and the industrial design. The new device looks more like the Netgear Orbi kits, but the similarities end there. Unlike other vendors, who use very similar hardware for both the base / main router and the satellites, Huawei uses completely different platforms for the two. Like most of the mesh kits targeting the low-end and mid-range market, there is no dedicated wireless backhaul channel. Instead, the kit uses powerline and Wi-Fi for the base-satellite communication.


The main router is a 2×2 802.11ac + 1 Gbps G.hn PLC unit, using the Realtek RTL8197FS for the 2.4 GHz radio and the Realtek RTL8812B for the 5 GHz radio. The G.hn chipset is the HiSilicon Hi5630. The upgrade from the Q1 to the Q2 thus marks a departure from HomePlug AV to G.hn for the powerline communication (PLC) aspect.


The satellite, on the other hand, is a pure HiSilicon solution, with a HiSilicon Hi1151 to handle both 2×2 2.4 GHz and 5 GHz radio duties. The G.hn chipset from the main router is retained.


The WiFi Q2 comes in two flavors – a base and a number of satellites, or multiple base stations (main routers). In a configuration with a base and a satellite, the backhaul is a dedicated PLC link, but a configuration with two main routers can use both PLC and the 5 GHz (867 Mbps) channel. The 5 GHz channel can be shared with clients. The system supports load balancing and can switch automatically according to the connection quality of the PLC and mesh Wi-Fi links. The 3-base package supports Ethernet backhaul.



Huawei WiFi Q2 Triple Base Pack


From the perspective of networking industry observers, the key update here is the shift from HomePlug to G.hn for the PLC segment. The HomePlug Powerline Alliance has stopped working on updates to the standard – there are going to be no new chipsets to improve performance and cater to upcoming consumer / service provider requirements. On the other hand, G.hn still has legs to go (theoretically) beyond what the best HomePlug chipsets can offer. Even though we didn’t hear of any new G.hn silicon at CES 2018, there has been talk of chipsets with 2 Gbps theoretical throughput being trialed by service providers. HiSilicon’s Hi5630 is still a 1 Gbps G.hn chipset, similar to the Marvell 88LX3142 / 88LX2718 used in the Arris RipCurrent products. The status of the various suppliers in the G.hn market is pretty interesting – DS2 was purchased by Marvell, which has now sold the division off to MaxLinear. CopperGate’s fate is unknown after having been purchased by Sigma Designs, which is currently in the middle of divesting its non-Z-Wave assets as part of its acquisition by Silicon Labs. In this context, Huawei’s decision to source the PLC chipset from its subsidiary, HiSilicon, is a strategic one.


Huawei compared the performance of HiSilicon’s Hi5630 with the HomePlug AV products that it was shipping and saw some obvious advantages (similar to what we saw in our coverage of HomePlug AV vs. G.hn). Given that HomePlug AV2 is essentially a dead end because of the absence of any roadmap from HomePlug silicon vendors, going with G.hn was a no-brainer for Huawei. In the company’s tests, G.hn’s wider frequency band, higher speed, stronger interference resistance, and lower latency made the decision much easier.



Huawei’s G.hn vs. HomePlug Testing Results


At this juncture, it is clear that G.hn will be the technology of choice for PLC backhaul purposes in whole-home networking systems. However, the absence of support from high-profile vendors such as Qualcomm and Broadcom is an issue that might make other consumer networking equipment vendors avoid PLC backhaul altogether.


Coming back to the WiFi Q2, it is clear that the system is targeting the low-end to mid-range market segment, currently served by the likes of Google WiFi. That said, the product stands out from the crowd, thanks to its PLC backhaul. Pricing and other launch information for the Huawei WiFi Q2 is not available yet.



Source: AnandTech – Huawei Pushes G.hn Powerline Networking with the WiFi Q2 Whole-Home Wi-Fi Solution

ZOTAC at CES 2018: Gemini Lake 'Credit Card' Pico PCs

LAS VEGAS, NV — Last year Intel introduced its Compute Card initiative, aimed mostly at manufacturers of specialized PCs and smart devices that benefit from high integration, easy installation, and standardized dimensions and interfaces. This has, apparently, shown makers of consumer computers an opportunity in ultra-small desktop PCs. This year at CES, ZOTAC demonstrated its new generation of ultra-small ZBOX Pico PCs, which look like a pile of credit cards but still offer a rather decent feature set and connectivity. The first is the PI226, which is very small; in addition, there is the larger ZBOX Pico PI336 with enhanced connectivity.


ZBOX Pico PI226: A Credit Card-Sized Desktop



ZOTAC’s ZBOX Pico PI226 is based on Intel’s Celeron N4000 SoC, which has two cores and the UHD 600 graphics engine, and is also the most ‘affordable’ mobile Gemini Lake chip, listed by Intel at $107. Thanks to the new SoC, the ZBOX Pico PI226 offers somewhat higher general-purpose performance as well as improved media processing capabilities compared to its predecessor, the ZBOX Pico PI225 launched last year. Just like its predecessor, the ZBOX Pico PI226 comes in a black metallic chassis and does not require any active cooling. The listed TDP of the Celeron N4000 SoC is just 6.5 W, and ZOTAC has positioned the TDP such that this amount of heat can be dissipated by convection alone in this chassis.



The credit card-sized computer is equipped with 4 GB of LPDDR4 memory, 32 GB of eMMC storage, and a microSD card reader to expand storage capacity. Wireless connectivity of the tiny PC includes an 802.11ac Wi-Fi + Bluetooth 4.2 module, whereas wired connectivity comprises two USB 3.0 Type-C ports and a micro-USB power header. ZOTAC plans to bundle a USB-C dongle with an HDMI and two USB Type-A ports with the Pico PI226, just like it does with its current-generation ZBOX Pico PI225.


ZBOX Pico PI336: A Palm-Sized Desktop



ZOTAC’s ZBOX Pico PI336 is considerably larger than the Pico PI226, but is still unbelievably small for a desktop computer. This one is based on the quad-core Celeron N4100 with the UHD 600 iGPU and thus offers higher performance in multi-threaded applications when compared to the PI226. It has the same RAM/storage configuration, with 4 GB of LPDDR4 memory, 32 GB of eMMC NAND flash and a microSD card reader.



Where the ZBOX Pico PI336 clearly surpasses the Pico PI226 is connectivity. In addition to 802.11ac Wi-Fi, Bluetooth 4.2, and a USB 3.0 Type-C port, it is equipped with a GbE connector, two USB 3.0 Type-A headers, an HDMI 2.0 output, a DisplayPort 1.2, and a 3.5-mm TRRS audio jack.
















Preliminary Specifications of ZOTAC’s Gemini Lake Mini PCs

| | ZBOX Pico PI226 | ZBOX Pico PI336 |
|---|---|---|
| CPU | Intel Celeron N4000: 2 cores, 1.1–2.6 GHz, 4 MB, 6.5 W TDP | Intel Celeron N4100: 4 cores, 1.1–2.4 GHz, 4 MB, 6.5 W TDP |
| iGPU | UHD 600, 12 EUs at 650 MHz | UHD 600, 12 EUs at 700 MHz |
| Memory | 4 GB LPDDR4 | 4 GB LPDDR4 |
| Storage | 32 GB eMMC | 32 GB eMMC |
| Card Reader | microSD/SD | microSD/SD |
| Wireless | 802.11ac Wi-Fi + BT 4.2 | 802.11ac Wi-Fi + BT 4.2 |
| Ethernet | – | 1 × Gigabit Ethernet (RJ45) |
| Display Outputs | HDMI 1.4 via USB-C | 1 × DisplayPort 1.2, 1 × HDMI 2.0 |
| Audio | via USB-C/HDMI/DP | 1 × TRRS connector |
| USB | 2 × USB 3.1 Type-C with DP 1.2, 2 × USB 3.0 Type-A on dongle | 1 × USB 3.1 Type-C, 2 × USB 3.0 Type-A |
| PSU | External | External |
| OS | Microsoft Windows 10 or none | Microsoft Windows 10 or none |

ZOTAC plans to start selling the new ZBOX Pico PI226 and PI336 sometime in the second quarter. Pricing has not been announced, but since Intel has not changed the pricing of its SoCs since the Apollo Lake generation, it makes sense to expect pricing of the Pico PI226 to be in the same ballpark as that of the Pico PI225. The latter hit the market in November and is available for less than $200. ZOTAC’s ZBOX Pico PI3-series PCs also cost around $200, so expect the new Pico PI336 to retail for a similar amount of money.


Related Reading




Source: AnandTech – ZOTAC at CES 2018: Gemini Lake ‘Credit Card’ Pico PCs

Kingston at CES 2018: Nucleum, a Portable 7-in-1 USB-C Dock for Notebooks

LAS VEGAS, NV — Notebook manufacturers, Apple in particular, were heavily criticized in 2016/2017 for introducing notebooks that feature only USB Type-C connectors. For end-users, such a transition meant an additional investment into new peripherals or docks. For hardware manufacturers, however, the transition to USB-C opens up new opportunities. At CES, Kingston demonstrated its first USB Type-C dock, which offers seven ports as well as power pass-through.


The Kingston Nucleum is a rather compact, sleek device made of aluminum and plastic to match the design of Apple’s MacBook/MacBook Pro and other silver/metallic consumer laptops with USB Type-C ports. The dock has two USB 3.0 Type-A ports (one supports charging), one USB Type-C header, an HDMI 1.4 display output (max resolution of 3840×2160 at 30 Hz), an SD card reader, a microSD card reader, and a USB Type-C power input.



The Nucleum supports power delivery pass-through of up to 60 W, which is enough to power and charge a 13” laptop (such as a modern MacBook Pro). Larger and more power-hungry machines demand more power and will take longer to charge through the dock.










Kingston’s Nucleum 7-in-1 USB-C Dock (C-HUBC1-SR-EN)

Main Connection: USB 3.0 Type-C at 5 Gbps with power delivery
Display Outputs: HDMI 1.4 (max resolution of 3840×2160 at 30 Hz)
USB: 1 × USB 3.0 Type-A (5 V / 1500 mA charging), 1 × USB 3.0 Type-A, 1 × USB 3.0 Type-C
Power Input: up to 60 W
Dimensions: 127 × 45 × 14.2 mm (5 × 1.77 × 0.56 inches)
Weight: 142 grams (5 ounces)

Kingston’s Nucleum USB-C dock is already available at Amazon.com for $79.99. In the coming months Kingston will expand availability of the device to markets outside the U.S.




Related Reading




Source: AnandTech – Kingston at CES 2018: Nucleum, a Portable 7-in-1 USB-C Dock for Notebooks

ZOTAC at CES 2018: Workstation Mini-PCs with NVIDIA Quadro

LAS VEGAS, NV — ZOTAC’s new lineup of workstations consists of three systems based on Intel’s quad-core Core i5-7500T processor and one of three GPUs: the NVIDIA Quadro P1000 with 4 GB of GDDR5, the NVIDIA Quadro P3000 with 6 GB of GDDR5, and the NVIDIA Quadro P5000 with 16 GB of GDDR5. These are professional-level graphics solutions, which ZOTAC uses in the MXM module form factor. Each of the three machines uses a different chassis, depending on performance, expandability, and power draw. The ZBOX P1000 workstation is the smallest, whereas the ZBOX P5000 is the largest and the most powerful.


All three workstations from ZOTAC share the same concept: they are fully integrated SFF PCs that support all modern connectivity technologies, including Gigabit Ethernet, 802.11ac Wi-Fi, USB 3.0 Type-A, USB 3.0 Type-C, SD/microSD, DisplayPort 1.2, HDMI 2.0, and others. The goal is for these systems to be quickly deployable without significant customization. All the systems support up to 32 GB of DDR4-2400 memory (two SO-DIMMs), feature one M.2-2280 PCIe/SATA slot for SSDs, and offer one 2.5” bay for another storage device.

















Preliminary Specifications of ZOTAC’s Workstations (ZBOX Mini PCs with NVIDIA Quadro P-Series GPUs)

CPU: Intel Core i5-7500T (4C/4T, 2.7–3.3 GHz, 6 MB, 35 W)
GPU: NVIDIA Quadro P1000 (640 CUDA cores, 4 GB GDDR5), Quadro P3000 (1280 CUDA cores, 6 GB GDDR5, 192-bit, 75 W), or Quadro P5000 (2048 CUDA cores, 16 GB GDDR5, 256-bit, 100 W)
Memory: 2 × DDR4 SO-DIMM slots, up to 32 GB
Storage: 1 × M.2-2280 slot for a PCIe/SATA SSD, 1 × 2.5″ SSD/HDD bay
Card Reader: SD/microSD
Wireless: 802.11ac Wi-Fi + BT 4.2
Ethernet: 1 × Gigabit Ethernet (RJ45)
Display Outputs: ? × DisplayPort 1.2 and ? × HDMI 2.0, or 2 × DisplayPort 1.2 and 2 × HDMI 2.0, depending on model
Audio: 3.5 mm audio-in, 3.5 mm audio-out
USB: USB 3.0 Type-A, USB 3.? Type-C
PSU: External
OS: Microsoft Windows 10 or none

The demonstration of ZBOX workstations with NVIDIA Quadro GPUs at CES shows that ZOTAC is interested in offering professional-grade systems. The PCs demonstrated at CES 2018 were based on Intel’s quad-core Core i5-7500T processor, which comes across as a rather unorthodox choice for a workstation, but an understandable one given that ZOTAC specializes in gaming and SFF PCs and simply has said chips (and the supporting PCH) in stock. If demand for ZOTAC-made workstations is high enough, the company might develop Xeon-based machines at some point in the future. It is noteworthy, however, that a 2018 workstation does not support Thunderbolt 3. Workstation workloads need storage space, and ZOTAC’s machines can hardly offer a lot of it (an M.2 SSD and a 2.5”/7-mm HDD will give 4 TB in total).


Another interesting takeaway from ZOTAC’s workstation announcement is NVIDIA’s Quadro P1000 graphics solution in the MXM form factor. Professional MXM solutions are manufactured and sold only by NVIDIA itself, so the module is not a custom-made card by ZOTAC. However, the Quadro P1000 is not listed among the professional GPUs for laptops that NVIDIA offers.



ZOTAC plans to start selling its workstations in Q2 2018. Pricing will depend on multiple factors, including purchase volumes.


Related Reading




Source: AnandTech – ZOTAC at CES 2018: Workstation Mini-PCs with NVIDIA Quadro

The ASUS NovaGo: Two Minutes with Snapdragon 835 and Windows

LAS VEGAS, NV – Late last year, at Qualcomm’s Snapdragon Tech Event in Hawaii, we had the formal introduction of the first devices using the new Windows on Snapdragon platform – Qualcomm’s dream of bringing mobile technology to laptops to provide ‘Always Connected PCs’, linked through an LTE data connection. Qualcomm pitches the upsides of this technology as laptops with 20+ hours of battery life from a smartphone processor and, through its work with Microsoft, a full version of proper Windows running on the system. The devices use native apps from the Windows Store for best performance; 32-bit x86 apps, however, are machine translated into instructions that the Snapdragon SoC can process. It’s a lot of technology in a tiny device, and Qualcomm would seem to be the first CPU manufacturer to actually pull off x86 translation for the consumer market.


All of that aside, one of the first devices that should enter the market is the ASUS NovaGo. This is a premium 13-inch laptop/360-degree 2-in-1 design with features such as Windows Hello and a built-in fingerprint sensor, maintaining ASUS’ laptop quality and claiming up to 22 hours of battery life. We have wanted to get our hands on one for a while, and I managed to get a couple of minutes with one at the ASUS suite.



Truth be told, the main fear we have had with these devices is responsiveness. Smartphones running Android can be fast, but with something much bigger like Windows, it was not always on the cards that we would get the same level of responsiveness as, say, a Y-series Intel design. Back when we saw a super-early demo behind closed doors at IFA, it wasn’t the fastest, but on the NovaGo at least, everything seemed in order. Basic applications were quick and easy to open, with no visible lag to my untrained eye. Using the natively compiled version of Edge, the best website in the world loaded as it normally does, and I was able to navigate the device as I would an Intel-based laptop. Being familiar with ASUS’ device designs, there were no surprises in the feel of the keyboard and touchpad either. Port support extends to a 3.5mm jack, an HDMI port, and two USB 3.0 ports.



A quick look through the system settings showed eight Snapdragon 835 cores, the Adreno graphics, and (if I remember correctly) a PCIe-based SSD for storage. With the machine connected to the internet, I tried downloading CPU-Z, even in 32-bit form, but it required adding it through the Store page to get it to work. Alas, the Wi-Fi at the Las Vegas Encore was not doing me any favors with the Windows Store, so I was unable to go down that route; at some point I obviously want to see the effects of x86 translation.


We were told by Qualcomm at the event a number of interesting things about the design of the platform, and how it has changed since we last met with the company. The Windows scheduler is configured to deal with cores of different performance levels, and it knows which programs are running where and how to handle them for performance and power, much like a good Android-based scheduler. This was one of our worries, but we were categorically told that any internal concerns they ever had are now fixed and it should run like a well-oiled machine.



In gaming, Qualcomm stated that with modern APIs, the Adreno GPU is natively compiled and doesn’t need translation. As a result, due to the way that Adreno works, for some titles it ends up being more computationally efficient than other solutions and puts less work on the CPU, allowing more of the power budget to go to the GPU for an overall better frame rate. Obviously we want to get hold of the device and test the claim, but it offers an interesting prospect.



As for the NovaGo, with the addition of LTE and if it stands up to the battery life claims, it could be a neat little device depending on the price. ASUS said they expect it to launch sometime during Q2.


Related Reading




Source: AnandTech – The ASUS NovaGo: Two Minutes with Snapdragon 835 and Windows

Meizu Announces M6s with Exynos 7872

Today Meizu launched the new M6s, successor to last year’s M5s. The new M6s brings significant upgrades to the entry-level smartphone, with a new SoC, screen, and camera.


The M6s is among the first smartphones in its price category to include an SoC with an ARM big core. The Exynos 5 (mid-range series) 7872 is Samsung’s first SoC below the high end to adopt Cortex-A73 cores alongside the usual A53 cores. The new 2x A73 + 4x A53 configuration runs at 2.0 and 1.6 GHz respectively, resulting in expected performance far ahead of the M5s’ MT6753, which only had A53 cores at up to 1.3 GHz. The GPU is a new Mali-G71MP1 running at up to a very high 1.2 GHz, likely to compensate for the fact that it only has a single core. The SoC is also manufactured on a 14nm process, so we should expect large overall efficiency and battery life gains.

















Technical Specifications

| | Meizu M6s | Meizu M5s |
|---|---|---|
| SoC | Exynos 5 7872: 2x Cortex-A73 @ 2.0 GHz + 4x Cortex-A53 @ 1.6 GHz, Mali-G71MP1 @ 1.2 GHz, 14nm | MediaTek MT6753: 4x Cortex-A53 @ 1.3 GHz + 4x Cortex-A53 @ 1.0 GHz, Mali-T720MP2 @ 546 MHz, 28nm |
| RAM | 3GB LPDDR3 | 3GB LPDDR3 |
| NAND | 32 / 64GB (eMMC 5.1) + microSD | 16 / 32GB (eMMC 5.1) + microSD |
| Display | 5.7-inch 1440 x 720 IPS LCD (18:9) | 5.2-inch 1280 x 720 IPS LCD |
| Dimensions | 148.2 x 72.8 x 8.3 mm, 143 grams (TBC) | 148.2 x 72.5 x 8.4 mm, 143 grams |
| Modem | Exynos (integrated), 2G / 3G / 4G LTE (Category 7): FDD-LTE / TD-LTE / TD-SCDMA / WCDMA / CDMA / GSM | MediaTek (integrated), 2G / 3G / 4G LTE (Category 4): FDD-LTE / TD-LTE / TD-SCDMA / WCDMA / CDMA / GSM |
| SIM Size | 2x NanoSIM (dual standby) | 2x NanoSIM (dual standby) |
| Front Camera | 8MP | 5MP, f/2.0 |
| Rear Camera | 16MP Samsung, f/2.0, dual-tone LED flash | 13MP 1/3.06″ OmniVision OV13853, 1.12µm pixels, f/2.2, PDAF, dual-tone LED flash |
| Battery | 3070 mAh | 3000 mAh |
| Connectivity | 802.11b/g/n, BT 5.0, GPS/GNSS (BeiDou, Galileo), microUSB 2.0 | 802.11b/g/n, BT 4.0 LE, GPS/GNSS, microUSB 2.0 |
| Launch OS | Meizu Flyme OS 6 (Android 7.0) | Meizu Flyme OS 5.1 (Android 5.1) |
| Launch Price (No Contract) | ¥999 / ¥1199 RMB ($155 / $189 USD, 127€ / 152€ EUR) | ¥799 / ¥999 RMB ($120 / $150 USD, 101€ / 127€ EUR) |

This is also the first time we’ve seen an Exynos SoC released with integrated CDMA capability, confirming the rumours that SLSI is finally transitioning towards a world modem and properly competing against other SoC vendors in CDMA markets such as China and the US. The SoC is also a fully integrated connectivity platform, integrating Wi-Fi up to 802.11n, Bluetooth 5, and FM radio without having to rely on external combo chips.



The M6s’ screen keeps the rather low-end 720p resolution of the M5s but transitions to an 18:9 aspect ratio, thus increasing the vertical (portrait) resolution to 1440 pixels.


The cameras have also seen an upgrade: the main shooter now uses a higher-resolution 16MP Samsung sensor with an f/2.0 lens system, and the front camera is an unspecified 8MP unit.



The transition to an edge-to-edge display and the removal of the front-facing physical home button have obliged Meizu to move the fingerprint sensor to the side of the device, near the power button. To make up for the lack of the multi-function home button that Meizu devices usually ship with, Meizu has reintroduced a software Halo button for navigation, which is also pressure sensitive on the screen.


The M6s comes in black, blue, gold, and silver in 32 and 64GB variants for ¥999 / ¥1199 RMB respectively (about $155 / $189 USD), making the phone a very attractive proposition and good value for money.




Source: AnandTech – Meizu Announces M6s with Exynos 7872

ZOTAC at CES 2018: AMD Raven Ridge APU in a ZBOX MA551 Mini-PC

LAS VEGAS, NV — ZOTAC is preparing a small form-factor PC based on AMD’s Ryzen processors with integrated Vega graphics. The ZOTAC ZBOX MA551 will be among the first compact computers powered by AMD’s code-named ‘Raven Ridge’ chips, and the system design should allow it to support all AM4 APUs as well as a comprehensive set of connectivity features.


ZOTAC’s ZBOX MA551 will exist in at least two variants, equipped with AMD’s quad-core Ryzen 3 2200G and Ryzen 5 2400G APUs with Radeon Vega integrated graphics. The chips are rated to dissipate a maximum of 65 W (based on AMD’s TDP data), and ZOTAC outfits the APUs with a cooling system featuring a large copper heatsink and a blower. The cooler looks like a GPU cooler, so its capacity likely exceeds 65 W, which would enable ZOTAC to install APUs with a higher TDP, or to sustain better boost behavior. So far AMD has announced only two Raven Ridge SoCs for desktops, but if the company rolls out APUs with higher power and cooling requirements, the MA551 will be ready to house them.



ZOTAC’s ZBOX MA551 comes in a matte black metallic enclosure, and the internal architecture of the mini-PC looks very simple, allowing the user to easily install key components and potentially upgrade them. The mini-PC can be equipped with up to 32 GB of DDR4-2400 memory using two SO-DIMMs, an M.2-2280 PCIe/SATA SSD, and a separate 2.5” storage device.



When it comes to connectivity, the ZBOX MA551 is outfitted with an 802.11ac + Bluetooth 4.2 module, a gigabit Ethernet connector, four USB 3.0 Type-A headers, a USB Type-C port, three display outputs (two DisplayPort 1.2, one HDMI 2.0) and an SD/microSD card reader.

















Preliminary Specifications of ZOTAC’s Ryzen APU-Based SFF PC (MA551)

CPU: AMD Ryzen 3 2200G (4C/4T, 3.5–3.7 GHz, 6 MB cache, 65 W TDP) or AMD Ryzen 5 2400G (4C/8T, 3.6–3.9 GHz, 6 MB cache, 65 W TDP)
iGPU: Radeon Vega, 8 CUs / 512 SPs at up to 1100 MHz (2200G) or 11 CUs / 704 SPs at up to 1250 MHz (2400G)
Memory: 2 × DDR4-2400 SO-DIMM slots, up to 32 GB
Storage: 1 × M.2-2280 slot for a PCIe/SATA SSD, 1 × 2.5″ SSD/HDD bay
Card Reader: SD/microSD
Wireless: 802.11ac Wi-Fi + BT 4.2
Ethernet: 1 × Gigabit Ethernet (RJ45)
Display Outputs: 1 × DisplayPort 1.2, 2 × HDMI 2.0
Audio: 3.5 mm audio-in, 3.5 mm audio-out
USB: 4 × USB 3.0 Type-A, 1 × USB 3.? Type-C
PSU: External
OS: Microsoft Windows 10 or none

ZOTAC plans to start selling its Raven Ridge-based ZBOX MA551 sometime in the second quarter, after AMD makes the processors widely available. Pricing is as yet unknown.



Related Reading




Source: AnandTech – ZOTAC at CES 2018: AMD Raven Ridge APU in a ZBOX MA551 Mini-PC

ASRock at CES 2018: Hands-On with the ASRock X399M Taichi

LAS VEGAS, NV – While smaller motherboards are fun to look at and an engineering challenge, they still represent a small part of the market. The main benefit for a motherboard manufacturer in pushing out one of the smaller form factor motherboards is being the only one (or one of the only ones) with a product in that segment. Thus ASRock is first to market with a micro-ATX motherboard for AMD’s Ryzen Threadripper processors. It is a big socket on a tiny motherboard.


We reported on ASRock announcing this board earlier during CES, but we met with ASRock at their suite and got some hands-on time. If there is one thing easy to spot on this motherboard, it is the socket – it does not leave a lot of room for anything else. Even though ASRock is known for esoteric designs such as the X299-ITX/ac, ASRock categorically stated that with a socket this big, mini-ITX is impossible without some major compromises, such as dropping to two DRAM slots.



The X399M Taichi lives up to the Taichi name and comes well equipped with the basics without going overboard. The motherboard itself is fairly hefty, if only for the socket and heatsink arrangement, but ASRock feels it has had a good swing at the functionality. With 64 PCIe lanes from the processor to play with, it made sense to offer three x16 PCIe slots – even though in most cases only one or two would be used (or perhaps a GPU and two of those quad M.2 cards!) – while still leaving 16 PCIe lanes over for other things. That budget provides a U.2 port, three M.2 slots, 802.11ac Wi-Fi, more USB 3.0 ports, USB 3.1, Purity Sound audio, and plenty besides.



The key differentiating features, aside from the size of the board, come in a number of areas. First, two of the M.2 slots are found on the top right-hand side of the board: ASRock felt that users would prefer three M.2 drives in total and only four memory slots, rather than one M.2 drive and eight memory slots. Another change is the secondary EPS 8-pin connector, which appears in an odd location at the top right of the board. ASRock also felt it necessary on a board like this to include eight SATA ports, making the X399M Taichi a potential catch-all for any storage requirements. Ultimately the only thing missing here is 5GbE or 10GbE Ethernet from Aquantia; ASRock uses dual Intel controllers instead. Not to be left out, there are some RGB LEDs on the board as well.



ASRock intends to ship the X399M Taichi sometime in Q1, with the price still to be determined.


Related Reading




Source: AnandTech – ASRock at CES 2018: Hands-On with the ASRock X399M Taichi

ADATA at CES 2018: A 1U AIC Server for 5G Comms, with 36x 8TB M.3 Drives

LAS VEGAS, NV – I had no idea what an M.3 drive was until I visited ADATA at CES. Having lived through IDE, SATA, and now M.2, I had never put much thought into what the next physical storage implementation would be, but M.3 has a nice ring to it.



The M.3 drive on display was listed as the IM3P33EI – an enterprise-level component name for a 2TB drive using PCIe 3.0 x4 and supporting NVMe 1.3, LDPC ECC, and RAID engines. One of the prominent features of the drive is support for live hot-swapping of what is a PCIe device, allowing it to be used in the 1U server. The drive was using a Silicon Motion SM2262G controller, and the server designed by AIC offers a set of 36 hot-swappable M.3 slots. The information on the drive showed that it would be available in capacities up to 1920GB; however, we were told that the form factor and server should be sufficient to take up to 8TB per M.3 module.



The server was described to us as a custom design by AIC specifically for Samsung, for 5G communications, using two Intel Xeon Scalable processors, up to 12 memory slots per processor, dual redundant hard drives, and – in order to be able to drive 144 PCIe lanes for the storage – dual PCIe switches. It was unclear which model of PCIe switch was being used due to the heatsinks; however, I would suspect that these were PLX9000-series parts (probably 9290 models), which are not cheap. We were told that a fully kitted-out server would have a ‘list’ price (note, this is a product specifically for Samsung) of around $250,000. It is half-surprising that the drives are not Samsung, though.
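
The lane math explains why the switches are unavoidable: 36 drives at PCIe 3.0 x4 each need 144 lanes for storage alone, while each Xeon Scalable CPU exposes 48 PCIe 3.0 lanes, so even two sockets fall short. A back-of-the-envelope check (our arithmetic, not AIC’s documentation):

```python
# Lane budget for the 36-drive backplane (PCIe 3.0 x4 per NGSFF drive).
drives, lanes_per_drive = 36, 4
lanes_needed = drives * lanes_per_drive  # 144 lanes for storage alone
cpu_lanes = 2 * 48                       # two Xeon Scalable CPUs, 48 lanes each
print(lanes_needed, cpu_lanes)           # 144 vs. 96: the CPUs cannot feed
# every drive directly, hence the pair of PCIe switches to fan the lanes out.
```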



So despite being in awe for a couple of minutes, my amazement at seeing an ‘M.3’ drive was short-lived. It turns out that this drive is not actually called ‘M.3’ at all: Samsung initially tried to market this form factor, a competitor to Intel’s ‘ruler’ storage form factor, as M.3, but PCI-SIG was not having any of it. So despite what it said at ADATA’s suite at CES, AIC’s specification sheet describes the drive as ‘NGSFF’.





Related Reading




Source: AnandTech – ADATA at CES 2018: A 1U AIC Server for 5G Comms, with 36x 8TB M.3 Drives

GIGABYTE at CES 2018: A First Look at X470

LAS VEGAS, NV – One of the announcements from AMD’s Tech Day prior to CES was that of a new chipset coming to the Ryzen market. The purpose of the new chipset, called X470, is iterative updates: better memory support, lower power consumption, and a couple of other things to be announced closer to launch, around April.



At GIGABYTE’s suite, the company had on hand an ‘X470’ Aorus Gaming 7 WiFi, with the chipset name taped over. The initial view of the motherboard was that of a standard high-end AM4 motherboard: M.2 heatsinks, PCIe reinforcement, DRAM slot reinforcement, LEDs between the DRAM slots, the plastic LED section near the 24-pin connector, a few buttons for overclocking, and heatsinks indicative of GIGABYTE’s Aorus brand. With an iterative update, we were not expecting much change.



The most obvious change of the few was the heatsink – GIGABYTE is set to go back to a bare-metal, many-finned design for the power delivery heatsinks. This might not be the most aesthetically pleasing design, but it is one that offers better power delivery cooling than the plastic shrouds we sometimes see on high-end motherboards hiding a small metal mass. GIGABYTE stated that it will be using the latest International Rectifier solution for the power delivery, and is ready for users to crank up the frequency when they want to. To add to the story on power and heatsinks, the rear of the motherboard also has a retention/rigidity plate around the power delivery, which may provide additional support.



GIGABYTE also has an integrated rear-panel backplate on this X470 design, to save users the embarrassment of having to disassemble the PC after forgetting to fit an I/O shield. On this rear panel there is a power switch, 802.11ac Wi-Fi, and the usual sets of USB and Ethernet connections. It is worth noting that there are two USB 3.1 (10 Gbps) ports here, plus one header on the board as well. We were told that these are all native, which would suggest an increase in USB 3.1 support on the X470 chipset.



In April we will see the launch of AMD’s second-generation Ryzen processors, based on Zen+ cores and built on GloFo’s 12nm process. While X470 will be optimized for these parts, the motherboards will still accept first-generation Ryzen (and 300-series motherboards will accept second-generation chips with a BIOS update). There appears to be no specific NDA/embargo around X470 for the motherboard manufacturers; however, GIGABYTE was the only one comfortable showing hardware at CES. When asked, the other manufacturers stated that with the launch several months away, they were not ready to show anything.



Related Reading




Source: AnandTech – GIGABYTE at CES 2018: A First Look at X470

ASUS at CES 2018: How To Remove Multi-Monitor Bezels, Safely

LAS VEGAS, NV – Do you ever buy more than one monitor, and then find that your brain adjusts such that instead of focusing on the next headshot you end up looking directly into a bezel? Or perhaps you bought two monitors, and have a post-it with crosshairs in the middle? ASUS apparently has a solution for you – at least for specific monitors.


Introducing ASUS’ Bezel Free Kit. The design is overtly simple: by placing a flexible plastic bi-prism where two monitors meet, the image at the junction is refracted just enough that a gamer’s brain will not see the bezel. Normally bending light is very difficult, but ASUS sidesteps the issue by simply not showing the bezel at all.



As an initial concept, this sounds great. Looking at the set of monitors with the kit applied and not applied did make a difference for sure, even if the 130-degree monitor angle is really tight compared to how most multi-monitor gaming setups are arranged. When looking directly at the bi-prism it is very obvious that it is there, but during normal gameplay, in peripheral vision, it did seem to make a difference. The bi-prism has rubber mounts at the top and the bottom, which fit the depth of the monitor very easily, with no fuss and no scratches.



With and Without


ASUS stated that it will sell the kit as a pair, and it works initially with the ROG Swift PG258Q. If a user has happened to buy two or three of those, this now becomes an optional accessory. ASUS said the kit works on a couple of other similarly sized ROG monitors, and it is looking at expanding the design to bigger monitors as well. While this doesn’t mean there will be a future universal kit for all monitors (even all ASUS monitors) of the same size, it is something that we are likely to see other vendors offer in due course.



Related Reading




Source: AnandTech – ASUS at CES 2018: How To Remove Multi-Monitor Bezels, Safely

Synaptics at CES 2018: In-screen Fingerprint & OLED DDIC

At CES 2018 we had the opportunity to visit Synaptics’ booth and check out the new technologies on offer. One of the big stories of CES 2018 in terms of mobile coverage was, of course, Synaptics’ and Vivo’s demonstration of the first under-screen fingerprint reader. The industry had been waiting for some time to see this technology brought to market, so for Synaptics to be the first to actually achieve it is a major feat that deserves congratulations. We covered the technical details of the new sensor in our announcement article a few weeks ago, so please read that for more information on the sensor itself. The short story is that Synaptics’ implementation is based on a CMOS sensor that sits underneath the OLED screen and captures the fingerprint through the OLED stack with the help of illumination from the screen itself.



In practice, the sensor in the unnamed Vivo flagship smartphone behaved exactly as advertised, and the experience was generally pretty flawless. In the Vivo device, the FS9500 is found underneath the Samsung Display AMOLED panel at a 45° angle to achieve better surface-area reach. The area of the sensor is, I think, the weak point of the implementation: it is much smaller than traditional fingerprint sensors because it is limited by the CMOS sensor size, which is only 4x5mm, so both finger positioning and a more thorough registration phase become more important. I had become accustomed to the haptic feedback of the Galaxy S8’s pressure-sensitive under-screen home button, and had such a feature been implemented in the Vivo, I imagine the experience would have been even more distinguished.


While the under-screen fingerprint sensor got most of the attention for Synaptics, what I think is the far bigger and more far-reaching story for the mobile industry was a small demonstration in the corner of the booth. Synaptics was showcasing its new R66455 and R66451 OLED display driver ICs. Back in late 2014, Synaptics acquired Renesas’ DDIC business unit, which turned out to be a match made in heaven.


For a bit of back-story: if you’ve followed AnandTech reviews where we talk about displays, you will have noted that on many devices the screens are run by a Renesas-based DDIC solution. In fact, if you have an LCD-based smartphone from this decade, there’s a pretty good chance that it has a Renesas/Synaptics display driver IC. However, until now, if you had a device with an AMOLED screen, it was almost certainly powered by a Samsung LSI DDIC solution. In the early days of the Galaxy S3, Samsung Display was still dual-sourcing DDICs between its LSI division and the Korean company MagnaChip. The latter, however, was dropped as screen resolutions increased and its DDIC offerings could no longer keep up with SLSI’s developments. To this day the SLSI solutions enjoy such a technological lead that rival panel manufacturers like LG are still missing a key component in the quest to compete with Samsung’s AMOLED panels. Devices such as the LG V30 or the Google Pixel 2 XL, which come with LG panels, are still handicapped in terms of display quality as they lack a sufficiently capable DDIC. For example, if one has noticed that the LG panels become washed out, or suffer from black crush or “tarnishing” that becomes more visible at low brightness levels, the reason lies in the way the DDIC drives the panel, showing that it lacks more advanced brightness control techniques.





Synaptics has been working hard to catch up in the OLED DDIC market, and on paper at least, it looks like it has managed to catch up with Samsung. The R66455 and R66451 are, respectively, FHD+ and WQHD+ (20:9 aspect ratio) capable OLED display drivers, and they integrate advanced features such as Smooth Dimming. Smooth Dimming – or Smart Dimming, as Samsung calls it – is PWM emission control. This essentially means that instead of solely lowering the subpixel voltage to control brightness, the driver keeps the same voltages but modulates the emission pulse width to achieve lower brightness levels. This is important as it does not limit the effective bit depth of the DACs controlling the pixel voltages, and thus still allows for full colour bit depth even at lower brightness levels. As a side effect, I suspect this also attenuates colour non-uniformities of the OLED panel itself that might otherwise be more visible at lower drive voltages.
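
To illustrate the idea (a toy model of PWM dimming, not Synaptics’ actual control scheme): brightness is set by the emission duty cycle, while the DAC’s full code range stays available for colour.

```python
# Toy model: output level = (colour code / full scale) * emission duty cycle.
# Because the drive amplitude is untouched, all 2^bits codes remain distinct
# at any brightness, instead of being squeezed into a smaller voltage range.
def output_level(code: int, duty_cycle: float, bits: int = 8) -> float:
    full_scale = (2 ** bits) - 1
    return (code / full_scale) * duty_cycle

# At 50% brightness, codes 255 and 254 still map to different output levels.
print(output_level(255, 0.5), output_level(254, 0.5))
```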




The DDICs promise advanced image processing for the sub-pixel rendering techniques required by panel pixel layouts such as the by-now-standard diamond PenTile. Synaptics is also looking ahead, implementing support for newer needs such as driving complex shapes – round corners, for instance – and the unfortunate screen notches that seem to be catching on.


What is important, though – and this can’t be reiterated enough – is that with Synaptics offering a competitive high-end DDIC, a key component is now available that will enable third-party panel manufacturers such as JOLED (JDI, Sony, Panasonic) and various Chinese firms to start competing against Samsung’s dominance in the market.


Synaptics says the R66455 and R66451 are currently sampling to panel manufacturers, and OEMs are exploring solutions for future products.




Source: AnandTech – Synaptics at CES 2018: In-screen Fingerprint & OLED DDIC

Zotac at CES 2018: ZBOX MAGNUS Upgraded with Coffee

LAS VEGAS, NV — This year at CES, ZOTAC demonstrated its new flagship compact gaming PC, featuring both an upgrade and a redesign. The new ZBOX MAGNUS is powered by Intel’s six-core Coffee Lake Core i7 CPU and offers higher performance in applications that can take advantage of the increased core/thread count. The system retains the NVIDIA GeForce GTX 1080 GPU from its two predecessors (the ZBOX MAGNUS EN1080 and the EN1080K). However, the new GPU is a discrete card, not an MXM module.


We recently reviewed the Core i7-7700 version of this mini-PC, but the new ZOTAC ZBOX MAGNUS is powered by Intel’s Core i7-8700 processor (6C/12T, 3.2/4.6 GHz, 12 MB, 65 W) on a custom Intel Z370-based motherboard with two DDR4 SO-DIMM slots, one M.2-2280 slot for an SSD, and one SATA connector for a 2.5” storage device. The new system features a custom GeForce GTX 1080 card instead of the MXM module used by its predecessors – a difference that has a major impact on the system’s internal design.



The MXM module inside the previous-generation ZBOX MAGNUS PCs enabled ZOTAC to adopt a compact liquid cooling system (LCS): there was enough space in the chassis to mount a big radiator above the CPU and the GPU. In the Coffee Lake version, the custom GeForce GTX 1080 card is mounted using a riser card and occupies the space previously taken up by the LCS radiator. As a result, the new ZBOX MAGNUS relies solely on air cooling. Choosing a custom card over an MXM module has its pros and cons. A card can potentially be upgraded, which is a major advantage, and cards give ZOTAC a bit more flexibility in terms of design. On the other hand, liquid cooling is more efficient and quieter than two air coolers – but it is also heavier, which has an impact on shipping costs.


Another noteworthy thing about the new ZBOX MAGNUS is that it needs only one external power adapter, whereas all of its predecessors used two 180 W power supplies. It is unknown whether the single power connector is merely a feature of the prototype ZOTAC used for press photos, or whether the company now uses one high-performance PSU instead of two moderate ones.



As for connectivity, everything is nearly identical to the previous-generation high-end ZBOX MAGNUS PCs. The new system has two Gigabit Ethernet controllers, an 802.11ac Wi-Fi/BT 4.2 module, an SDXC card reader, four USB 3.0 Type-A ports at the rear, and two USB 3.1 Gen 2 ports (1x Type-A and 1x Type-C) at the front. The graphics card is equipped with three DisplayPort 1.4 outputs, one HDMI 2.0b port, and one dual-link DVI-D connector. The front HDMI port is no longer a feature – a removal that makes it a tad inconvenient to hook up VR headsets. It is not an insurmountable issue, however, as an HDMI extension cable can serve the same purpose if the system is installed in a particularly tight location.

















ZOTAC’s ZBOX MAGNUS with Coffee Lake (8th Generation Core i7 CPU)

CPU: Intel Core i7-8700 (6C/12T, 3.2–4.6 GHz, 12 MB, 65 W)
GPU: NVIDIA GeForce GTX 1080 (2560 CUDA cores, 8 GB GDDR5X)
Memory: 2 × DDR4 SO-DIMM slots, up to 32 GB
Storage: 1 × M.2-2280 slot for a PCIe/SATA SSD, 1 × 2.5″ SSD/HDD bay
Card Reader: SD/microSD
Wireless: 802.11ac Wi-Fi + BT 4.2
Ethernet: 2 × Gigabit Ethernet (RJ45)
Display Outputs: 3 × DisplayPort 1.2, 1 × HDMI, 1 × DVI-D
Audio: 3.5 mm audio-in, 3.5 mm audio-out
USB: 4 × USB 3.0 Type-A, 1 × USB 3.1 Type-A, 1 × USB 3.1 Type-C
PSU: External
OS: Microsoft Windows 10 or none

Overall, the redesign is a mixed bag. Removing the need for a second power adapter is very welcome, but we would have been happier if ZOTAC had addressed some of the other feedback from our EN1080 / EN1080K reviews – in particular, a flagship PC in 2018 should include some Thunderbolt 3 ports. It is all the more puzzling given that ZOTAC has multiple other mini-PCs with Thunderbolt 3 capability. The cooling solution also seems like a downgrade, though we will hold off on a final verdict until we can compare it against the EN1080 / EN1080K ourselves.


ZOTAC did not announce pricing or availability timeframe for the Coffee Lake-based ZBOX MAGNUS, but it is logical to expect its arrival later this year.


Related Reading




Source: AnandTech – Zotac at CES 2018: ZBOX MAGNUS Upgraded with Coffee

ASRock at CES 2018: Ultra Quad M.2 PCIe Card

LAS VEGAS, NV – Sometimes, one M.2 PCIe drive is not enough. Some motherboards come with three M.2 slots for NVMe SSDs, but that might not be enough either. To get more, users need add-in cards, and these typically come in single, dual, or quad arrangements. Actually, there’s currently only one company that offers a consumer-focused quad M.2 card. Now ASRock is joining the market.



There is nothing too complex about an M.2 PCIe add-in card: the PCIe lanes on the slot finger go directly to the drives in question. As long as there is sufficient power and sufficient cooling, there is not much more to it than that. ASRock’s card positions the M.2 drives at a 45-degree angle, provides a PCIe 6-pin connector for power, uses its metallic shroud as an additional heatsink, and then puts in a fan to direct airflow out of the exhaust. This is a variable-speed fan that reacts to a thermal sensor, so it will speed up if the case is warm. If you wanted overkill M.2 cooling, this is it.



There is another feature that ASRock has had to put on the card for anyone who wants to install more than one of these Ultra Quad M.2 PCIe Cards in a system. A set of four DIP switches on the PCB enables ASRock’s software and the system to determine which is the first Quad M.2 PCIe card, which is the second, and so on. Otherwise, the enumeration of drives in certain situations might not be guaranteed.
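
Four switches are enough to distinguish up to 16 cards, since they can be read as a 4-bit binary ID. As a purely hypothetical illustration of such a decoding (my sketch, not ASRock’s actual scheme):

```python
# Hypothetical: read the four DIP switches as a 4-bit binary card ID so that
# drives enumerate in a deterministic order across multiple identical cards.
def card_id(switches: list) -> int:
    return sum(int(on) << i for i, on in enumerate(switches))

print(card_id([True, False, False, False]))  # -> 1 (first card)
print(card_id([False, True, False, False]))  # -> 2 (second card)
```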



ASRock lists the add-in card as having support for Intel’s VROC technology, although that requires Intel PCIe SSDs in order to function as well as the VROC module. ASRock also stated that these cards support AMD systems without any similar requirements.


We suspect that ASRock will release the card sometime within Q1. Price is still to be determined.


Related Reading




Source: AnandTech – ASRock at CES 2018: Ultra Quad M.2 PCIe Card

ADATA at CES 2018: Jellyfish DRAM

LAS VEGAS, NV – Over the years, DRAM heatsinks have gone from non-existent, to gaudy, to death traps with jagged edges, to some form of LED hell. In the middle we saw some DRAM vendors produce modules for water cooling with pre-applied water cooling pipes and heatsinks, but these were not overly popular. ADATA thinks they have the next step in DRAM cooling technology.



The concept of these ‘Jellyfish’ modules is that by using a sealed, clear plastic case around the memory chips, the case can be filled with a non-conductive liquid, such as the 3M Novec/Fluorinert fluids we have seen in custom PCs and server cooling. These chemicals – basically long-chain carbon compounds with funny bits on the end – are liquid at room temperature but can change state into a gas and rise. That state change absorbs a lot of the energy being produced, and as long as that energy is removed and the chemical condenses back to a liquid by cooling down, the liquid forms a convection current: in essence, a heat pump. Even if the chemical does not change state from liquid to gas, with the right viscosity, warm liquid will still set up a convection current.
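
For a sense of scale, a back-of-the-envelope estimate using representative (assumed, FC-72-class) fluorocarbon properties shows why even a spoonful is useful when the vapor recondenses continuously:

```python
# Assumed, representative values for an FC-72-class fluorocarbon fluid:
volume_ml = 15.0            # roughly one tablespoon per module
density_g_per_ml = 1.7      # ~1.7x the density of water
latent_heat_j_per_g = 88.0  # heat absorbed per gram vaporized (assumed)
heat_j = volume_ml * density_g_per_ml * latent_heat_j_per_g
print(f"{heat_j:.0f} J")    # ~2244 J: a module dissipating ~4 W would take
# ~9 minutes to boil it all off, so continuous recondensation on the case
# wall keeps even a small volume of liquid effective indefinitely.
```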



So cue ADATA’s Jellyfish: bare DRAM modules, in a plastic case, with about a spoonful of 3M Fluorinert. The modules also have a dozen LEDs for good measure.



Normally this 3M liquid is not cheap: over 10 years ago, when enthusiasts realized that mineral oil PCs were more hassle than they were worth, a number of people experimented with it at a cost of $300 per gallon (they didn’t state US or UK gallon). According to ADATA, the 3M liquid is now about $100 per litre. If you want to do the conversion, $300 per US gallon works out to about $79 per litre, so it is more expensive today than it used to be. A spoonful of this stuff per module will still add some cost to a build. But as a prototype, it is certainly an interesting change from all the LED DRAM we see out there.



One caveat, which ADATA noted, is that the prototype was not completely filled. This would mean that when placed vertically in a case, a couple of DRAM chips would not be covered. Jellyfish is still in its early stages of development, so if it ever comes to retail, there might be some changes. ADATA stated that due to the design, they have a patent on this technology, so it will be interesting to see if anything can come from that.


Related Reading




Source: AnandTech – ADATA at CES 2018: Jellyfish DRAM

Ambarella at CES 2018: Announcing CV1 and CV22 SoCs with CVflow CNN engines

At CES 2018 in Las Vegas we had the pleasure of attending Ambarella’s booth tour, demonstrating the newest products in camera SoC solutions. Ambarella has to date been most widely known as the silicon provider powering the camera capabilities of products from GoPro and DJI. As its traditional customers look for more vertical integration and other silicon alternatives, Ambarella is looking to diversify its product lines and customer base. The CV1 is a major effort towards gaining traction in the EVA (embedded vehicle autonomy) space.


The CV1 is the first of a new family of computer vision processors which implement Ambarella’s “CVflow” architecture. CVflow is a new convolutional neural network (CNN) inference acceleration IP developed in-house by the Santa Clara company. Over the last couple of months we’ve seen a lot of machine learning announcements and IP developments, as neural network engines become the new “must-have” feature for differentiating silicon offerings.


In CEVA’s recent NeuPro announcement I briefly addressed the fact that we’re seeing a wide spectrum of CNN engine architecture implementations – at one end of the spectrum we have more programmable and (claimed) flexible DSP-like architectures, while at the other extreme we have more fixed-function accelerators that claim higher performance and efficiency. Each company has its own view on the benefits and disadvantages of either approach, but the general consensus I’ve noted among all of them is that marketing claims and specifications are quite a mess. For this reason Ambarella was quite tight-lipped when queried about the CVflow engine architecture, and didn’t want to disclose any more in-depth specifications beyond the fact that it is gravitating towards less fixed-function processing with its IPs.



We saw two CV1 evaluation platforms demonstrated: a long-range platform with two CV1 chips, meant for long-range imaging, which requires a wider camera baseline for its stereo cameras, and a smaller, shorter-range platform with a single CV1 chip. The platforms were demonstrated on a prototype car, with the long-range platforms mounted on the roof and the short-range units on the sides of the car, all units working in unison to enable EVA capabilities.




Ambarella CV1 – stereovision obstacle detection and monocular object classification demo


The most interesting demonstration video was the output of a working system, highlighting live obstacle detection and object recognition. In the first half of the above video we see the obstacle detection highlighted – the processing for this is done via the stereo cameras and a disparity mapping engine in the CV1 SoC. This allows for detection of generic obstacles, such as cars, the curb, or other random objects that the car needs to avoid while driving. The hardware block for the disparity mapping is a fixed-function IP from Ambarella and promises high performance for the task.
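
The disparity engine’s output maps to depth via the standard stereo relation Z = f·B/d, which is also why the long-range platform needs the wider camera baseline mentioned above: for a fixed minimum resolvable disparity, usable range scales linearly with the baseline. A quick illustration with made-up numbers (not Ambarella’s):

```python
# Depth from stereo: Z = f * B / d (focal length in pixels, baseline in
# meters, disparity in pixels). Doubling the baseline doubles the maximum
# range achievable at a given minimum resolvable disparity.
def depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(depth_m(1400, 0.30, 1.0))  # wide 30 cm baseline -> 420 m at 1 px
print(depth_m(1400, 0.10, 1.0))  # narrow 10 cm baseline -> 140 m at 1 px
```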


In the second half of the video we see the object detection highlighted. The processing for this is done via a monocular camera system and implemented as a CNN on the CVflow engine. The vision capabilities here allow the system to distinguish between different types of obstacles and objects in the scene, giving “intelligence” to the EVA system.


Ambarella also demonstrated a “SuperDrone” at the show, using a CV1 platform to fly through an obstacle course. The impressive part is that the drone did this autonomously, with no pre-defined path beyond a set of programmed target way-points, performing the path calculations as well as obstacle avoidance on the fly.


The CV1 is a 14nm SoC that has been in production since last summer, and it includes Ambarella’s capable ISP and camera pipelines. The design is meant more for “live” imaging use cases and doesn’t have the higher-end video recording capabilities that we’re accustomed to – and that’s where the CV22 comes into play.


The CV22 is the second chip in the CVflow family. The SoC is a newer-generation product and is implemented in a 10nm process. Like the CV1, these chips are manufactured by Samsung Foundry.


The CV22 offers top-of-the-line 4K60 AVC and HEVC video encoding capabilities, for which Ambarella claims the lowest bitrates (at the same quality) in the industry with the help of SmartAVC and SmartHEVC variable bitrate controls. The ISP has a total throughput of 800MP/s, and this can be configured and budgeted across up to 4 camera interfaces and sensors. The CVflow engine is also of a newer generation and promises 4x the processing power of the CV1 implementation. The SoC is powered by a quad-core Cortex-A53 cluster for general processing capabilities. The CV22 is sampling to customers in the upcoming quarter.
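
That 800MP/s budget is easy to put in context with some quick arithmetic (our own, not Ambarella’s guidance): a single 4K60 stream consumes roughly 500MP/s on its own, while four sensors sharing the budget get about 200MP/s each – comfortably above 1080p60 per camera.

```python
# Pixel-rate budget for the CV22's quoted 800 MP/s ISP (illustrative only).
def mp_per_s(width: int, height: int, fps: int) -> float:
    return width * height * fps / 1e6

print(mp_per_s(3840, 2160, 60))  # single 4K60 stream: ~497.7 MP/s
print(800 / 4)                   # four sensors sharing 800 MP/s: 200.0 each
print(mp_per_s(1920, 1080, 60))  # 1080p60 per camera: ~124.4 MP/s
```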



At the CES booth we also saw demonstrations of existing solutions, such as the S5L IP camera SoC pitted against a commercially available Arlo Pro 2 with a competitor’s SoC. The key advantage of the Ambarella solution was active power consumption, as it was able to show 68% lower operating power – a massive difference considering that these IP camera platforms are targeted at battery operation.


Overall it was great to see Ambarella’s new silicon announcements at the show, and I’m looking forward to seeing what kind of products companies will be able to develop with SoCs such as the CV22. The key takeaway for me was the demonstration and application of CNNs in real-world “killer” use cases, of which we haven’t seen too many in the mobile space… if you don’t count Animoji, of course.



Source: AnandTech – Ambarella at CES 2018: Announcing CV1 and CV22 SoCs with CVflow CNN engines