Micron Discontinues Lexar Business, Plans to Focus on Higher-Margin Products

Micron this week announced plans to discontinue its Lexar removable media storage business as a part of the company’s strategy to shift to higher-margin NAND flash-based products. The company intends to sell all or part of its Lexar business division, but promises to support existing customers during the transition period.


Lexar was spun off from Cirrus Logic in 1996 and then acquired by Micron in 2006 in a bid to market NAND flash media. Approximately two years ago, Micron cut down its supply of NAND memory to the spot market in a bid to concentrate on building its own products and thus earn higher profit margins. Last year the company announced plans to work with its clients on software for their storage offerings, improving its profit margins further, this time from various SSDs. In addition, the company disclosed plans to develop special memory solutions for emerging automotive applications (which will complement its embedded portfolio). This week Micron went even further and disclosed plans to cease selling Lexar-branded products to consumers and OEMs as a part of its strategy to increase “opportunities in higher value markets and channels.” The portfolio of Lexar products includes memory cards and card readers, USB flash drives, and even SSDs.


Given the competition in the retail removable media and storage drive market, withdrawing from such businesses may be logical for Micron, which feels increasing pressure from Samsung, Western Digital (SanDisk) and others amid a lack of market growth in terms of NAND bits (at least, according to its own predictions). Meanwhile, the withdrawal also means that Micron will have to concentrate on production of SSD-grade memory, whereas NAND for removable storage will be sold on the open market, if the company continues to develop it at all. If someone buys the Lexar operations from Micron, the latter will likely sign some kind of exclusive supply agreement with the new owner, which means that it would keep developing the aforementioned NAND. SSD-grade memory is more expensive than chips for memory cards or USB flash drives, and for about a year Micron was the only company to sell its SSD-grade 3D NAND to third-party drive makers, possibly earning higher margins than it would by selling removable storage devices. However, NAND for the latter is typically used to test-drive new production technologies and/or architectures before deploying them to make memory for SSDs.



Micron will not be the only NAND flash producer that does not offer its own memory cards and USB flash drives. SK Hynix and Intel also do not sell such products under their own trademarks, unlike Samsung, Toshiba and Western Digital. That said, while it will be sad to see Micron’s Lexar gone (assuming that nobody buys it), Micron’s withdrawal from the removable storage business is not exactly surprising.



Source: AnandTech – Micron Discontinues Lexar Business, Plans to Focus on Higher-Margin Products

Western Digital Announce BiCS4 3D NAND: 96 Layers, TLC & QLC, Up to 1 Tb per Chip

Western Digital on Tuesday formally announced its fourth-generation 3D NAND memory, developed as part of the Western Digital/Toshiba joint venture. The fourth-generation BiCS NAND flash chips from Western Digital feature 96 layers, will come in several capacities, and will use both TLC and QLC architectures. The company expects to start volume production of BiCS4 chips in 2018.


NAND dies belonging to the fourth-generation BiCS 3D NAND family will use 96 word-line layers to minimize die size and maximize fab output, which at this point represents the largest layer count in the flash memory industry. Furthermore, the range of BiCS4 NAND die configurations will be considerably more diverse than BiCS3, which currently only includes 256 Gb and 512 Gb dies. Western Digital plans to offer BiCS4 components based on TLC (triple level cell) and QLC (quadruple level cell) configurations, with capacities ranging from 256 Gb to 1 Tb.


It is noteworthy that Western Digital’s BiCS4 lineup will include QLC NAND, which has been discussed by Western Digital (and SanDisk before that) for several years, but which is only now set to become a reality in the coming quarters. To store four bits per cell (with 16 voltage states), Western Digital had to use a “thick” process technology alongside multi-layer 3D NAND to keep per-bit costs down. The company is not specifying how many program/erase cycles its 3D QLC NAND will handle, but various industry predictions over the years have suggested 100 – 150 P/E cycles as a reasonable goal for QLC NAND, which is considerably lower than the approximately 1,000 P/E cycles supported by TLC NAND. Given such endurance, it is logical to expect 3D QLC NAND to be used primarily for removable storage as well as for ultra-high capacity datacenter drives aimed at so-called near-WORM (write once read many) storage applications. For example, Toshiba last year discussed a QLC-based datacenter SSD with 100 TB capacity for WORM applications.
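As a quick check on that cell math (purely illustrative arithmetic, not Western Digital data), the bits-per-cell figure follows directly from the number of distinguishable voltage states:

```python
import math

# Bits stored per cell = log2(number of voltage states)
tlc_states, qlc_states = 8, 16
print(math.log2(tlc_states))  # 3.0 -> TLC stores three bits per cell
print(math.log2(qlc_states))  # 4.0 -> QLC stores four bits per cell

# All else being equal, going from 3 to 4 bits per cell yields
# roughly 33% more capacity from the same cell array.
print(4 / 3 - 1)              # ~0.33
```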



Western Digital plans to begin sampling select 96-layer BiCS4 3D NAND configurations in the second half of this year, but the manufacturer does not specify which dies will sample when. As for mass production, Western Digital intends to start volume manufacturing of its 96-layer 256 Gb 3D NAND in 2018, with other dies to follow later. Based on Western Digital’s earlier announcements, the company will gradually introduce more sophisticated BiCS4 96-layer configurations in 2018 and 2019, before moving to BiCS5 sometime in 2020. As such, it makes sense to expect the highest-capacity BiCS4 ICs to ship later rather than sooner.



Finally, Western Digital did not disclose whether it uses NAND string stacking technology to assemble its 96-layer 3D NAND dies, but it is a likely scenario given what industry publications have been predicting.



Source: AnandTech – Western Digital Announce BiCS4 3D NAND: 96 Layers, TLC & QLC, Up to 1 Tb per Chip

Western Digital My Passport SSD Mini-Review

Flash-based external direct-attached storage (DAS) devices have evolved rapidly over the last few years. Starting with simple thumb drives that could barely saturate USB 2.0 bandwidth, we now see high-performance external SSDs. The full performance of this new crop of external storage devices can only be realized using the USB 3.1 Gen 2 interface. Western Digital’s My Passport SSD is an external SSD with a USB 3.1 Gen 2 Type-C interface. It caters to the mainstream market and comes in three capacities – 256GB, 512GB, and 1TB. In this review, we take a look at the 1TB version.



Source: AnandTech – Western Digital My Passport SSD Mini-Review

Qualcomm Announces Snapdragon Wear 1200 SoC: LTE Categories M1 & NB1 for Wearables & Smart Trackers

Back around this time last year, Qualcomm introduced their Snapdragon Wear 1100, the company’s first SoC specifically designed for budget, low-power wearable devices. The humble SoC featured just a single Cortex-A7 CPU core and LTE Cat 1 support, but for the market Qualcomm had designed it for, this was more than sufficient. Now at MWC Shanghai 2017, the company is launching an even lower-power successor to the Snapdragon Wear 1100, the aptly named Snapdragon Wear 1200.


The Snapdragon Wear 1200 is an interesting development from Qualcomm, as while the name can be a bit deceiving, it’s the first in a new generation of products for the company. Taking the basic principles of the 1100, Qualcomm has integrated a new modem that supports new, ultra-low-power communication modes for LTE standardized in the last year: LTE Category M1 and Category NB1. In fact this is Qualcomm’s first SoC to support the 3GPP’s Low Power WAN technologies.










Qualcomm Snapdragon Wear SoCs
             | Snapdragon Wear 1200 | Snapdragon Wear 1100 | Snapdragon Wear 2100
SoC          | 1x Cortex-A7 @ 1.3GHz, fixed-function GPU | 1x Cortex-A7 @ 1.2GHz, fixed-function GPU | 4x Cortex-A7 @ 1.2GHz, Adreno 304
Process Node | 28nm LP | 28nm LP | 28nm LP
RAM          | LPDDR2 | LPDDR2 | LPDDR3-800 MT/s
Display      | Simple 2D UI | Simple 2D UI | Up to 640×480 @ 60fps
Modem        | Qualcomm (integrated), 2G (E-GPRS) / LTE (Cat M1 & Cat NB1) | Qualcomm (integrated), 2G / 3G / LTE (Category 1, 10/5 Mbps) | Qualcomm X5 (integrated), 2G / 3G / LTE (Category 4, 150/50 Mbps); Connected version only
Connectivity | 802.11b/g/n/ac, BT 4.2 LE, GPS/GLONASS/Galileo/BeiDou | 802.11b/g/n/ac, BT 4.1 LE, GPS/GLONASS/Galileo/BeiDou (Wi-Fi and BT optional) | 802.11b/g/n (2.4GHz), BT 4.1 LE, NFC, GPS/GLONASS/Galileo/BeiDou, USB 2.0; Connected and Tethered versions

Briefly touching on the specs of the Wear 1200, the core processor is almost unchanged from the Wear 1100. The SoC is still powered by a single Cortex-A7 CPU core, paired with a simple display controller that is just barely a fixed-function GPU. It is meant to be a low-power (and low-cost) SoC, through and through.


The big change here for Qualcomm is on the modem side. Whereas the Wear 1100 shipped with a multi-mode 2G / 3G / LTE Cat 1 modem – as low-power a design as one could get at the time – the Wear 1200 incorporates a much more power-efficient and very much forward-looking modem, one that supports only basic 2G (E-GPRS) functionality, along with the aforementioned LTE Cat M1 and NB1 standards.


This is the first product announcement to cross our desk supporting these new standards, and ultimately the Snapdragon Wear 1200 will be the first of many devices/chips that we see do so. As part of the 3GPP’s Release 13, the standards body has been focusing on reducing power consumption, complexity, and cost for radios in IoT devices, wearables, and other simple devices as part of its LPWAN initiative.


At a high level, LTE Cat M1 is designed to be a relatively straightforward, further power-optimized form of LTE. The max data rate is just 1Mbps up and down – and the Wear 1200 doesn’t even reach those speeds – using tricks like a minimum-width 1.08MHz channel and half-duplex communication to cut power consumption, all the while still being compatible with existing LTE networks. LTE Cat NB1 takes this a step further, going with a stand-alone LTE-derived narrowband implementation that uses just a 180kHz channel, which combined with other technologies offers the lowest amount of bandwidth (max 250Kbps) but also the lowest power consumption and improved range.








3GPP Low Power WAN LTE Standards
              | LTE Cat M1     | LTE Cat NB1
Network Type  | LTE-Compatible | Separate Band
Bandwidth     | 1.08MHz        | 180kHz
Peak Download | 1 Mbps         | 250 Kbps
Peak Upload   | 1 Mbps         | 250 Kbps / 20 Kbps

For Qualcomm and other wearable/IoT device manufacturers, these new standards will be a significant part of making the Internet of Things live up to its name, by allowing even the lowest-power, lowest-cost devices to have LTE network functionality. Unsurprisingly then, one of the first places we’re going to see it deployed is in low-cost wearables, where Internet access is beneficial, but battery life concerns are significant.


As for the Wear 1200 in particular, Qualcomm’s wearable-class SoC keeps a low profile even for M1/NB1 devices. The SoC can support 300Kbps down and 375Kbps up for Cat M1, and just 20Kbps down and 60Kbps up for Cat NB1. And no, the latter isn’t a typo: NB1 devices are expected to send data as much as (if not more than) they receive it, so the Wear 1200’s data rates vary accordingly. Meanwhile, despite the limited bandwidth these standards offer, the Wear 1200 supports 15 RF bands along with some notable LTE features, particularly VoLTE. Interestingly however, while Qualcomm supports the latest low-power IoT standards, they don’t support the equivalent low-power 2G standard, EC-GSM. The Wear 1200 does support 2G in the form of E-GPRS, so there is GSM backwards compatibility for when LTE isn’t available, but the future Qualcomm is planning for is very much one where LTE is everywhere and 2G won’t be needed.
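To put those modem rates in perspective, here is a rough airtime estimate for a small uplink report at the Wear 1200's quoted speeds. The 200-byte payload is a hypothetical example, and the figures ignore protocol overhead and retransmissions:

```python
# Quoted Wear 1200 uplink rates (kbps): 375 for LTE Cat M1, 60 for Cat NB1
uplink_kbps = {"Cat M1": 375, "Cat NB1": 60}

payload_bits = 200 * 8  # hypothetical 200-byte location/status report

for mode, kbps in uplink_kbps.items():
    airtime_ms = payload_bits / (kbps * 1000) * 1000
    print(f"{mode}: ~{airtime_ms:.0f} ms of transmit time")
# Cat M1: ~4 ms of transmit time
# Cat NB1: ~27 ms of transmit time
```

Even at NB1's modest 60Kbps, a small report spends only tens of milliseconds on the air, which is the whole point of these standards: the radio can spend the vast majority of its life asleep.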


Modems aside, the Wear 1200 is otherwise a function-optimized design. Like its predecessor, the SoC supports Wi-Fi and Bluetooth for various connectivity options, along with the standard GPS/GLONASS/BeiDou/Galileo satellite positioning systems. The new SoC retains the same small size as its predecessor, with the chip measuring 79mm². Overall, Qualcomm is touting a 10-day standby battery life for the SoC, which would be a significant improvement over the 7-day standby of the Wear 1100.


Finally, looking at the broader picture, the wearables market is still trying to figure out what it wants to be – and what consumers will actually buy – and for the Wear 1200 Qualcomm is particularly interested in courting the “kid watch” market. A segment of the larger smart tracker market, kid watches are already a significant business in Asia – and especially China – where it’s not uncommon to give your kid a limited-functionality watch that allows you to contact them, while the watch works in conjunction with geofencing applications to keep tabs on their whereabouts. Driven in part by demographics and in part by technology, Qualcomm expects the market for kid tracking watches to grow further, with SoCs like the Wear 1200 improving the utility of these devices and bringing their cost down. These improvements would also filter down to other parts of the smart tracker market, such as pet tracking and elderly tracking devices.


In fact the company is hitting the ground running: along with the launch of today’s SoC, they are also partnering with Borqs and Quanta to develop smart tracker/kid watch reference designs, so that hardware manufacturers can get a jump on developing Wear 1200-based trackers. And like the Snapdragon Wear 1200 itself, these reference designs are available and shipping today. So while Qualcomm isn’t specifically commenting on when their customers’ consumer devices will be ready, it will almost certainly be before the end of the year.



Source: AnandTech – Qualcomm Announces Snapdragon Wear 1200 SoC: LTE Categories M1 & NB1 for Wearables & Smart Trackers

Qualcomm Announces Snapdragon 450 Midrange SoC

Kicking off today is the second Mobile World Congress of the year, MWC Shanghai. As the de facto home of smartphone manufacturing, and home to an increasing number of major mobile device vendors, the tradeshow has taken on increased importance in recent years. This year is no exception, with several different announcements of note coming out of the show.


Starting things off for everyone is Qualcomm, who is at the show to announce their latest mainstream Snapdragon 400 series SoC: the Snapdragon 450. The successor to Qualcomm’s 2016 Snapdragon 435, the Snapdragon 450’s biggest claim to fame is also its smallest: it will be the first Snapdragon 400 series SoC to be fabbed at 14nm, finally moving Qualcomm’s mainstream SoC lineup off of 28nm LP and on to a more recent and more power efficient manufacturing node.


Qualcomm Midrange Snapdragon Family
SoC              | Snapdragon 450 | Snapdragon 435 | Snapdragon 625
CPU              | 4x A53 @ 1.8GHz + 4x A53 @ 1.8GHz | 4x A53 @ 1.4GHz + 4x A53 @ 1.4GHz | 4x A53 @ 2.0GHz + 4x A53 @ ? GHz
Memory           | 1x 32-bit LPDDR3 | 1x 32-bit LPDDR3 @ 800MHz (6.4GB/s b/w) | 1x 32-bit LPDDR3 @ 933MHz (7.45GB/s b/w)
GPU              | Adreno 506 | Adreno 505 | Adreno 506
Encode/Decode    | 1080p H.264 & HEVC (Decode) | 1080p H.264 & HEVC (Decode) | 2160p H.264 & HEVC (Decode)
Camera/ISP       | Dual ISP, 13MP + 13MP (Dual), 21MP (Single) | Dual ISP, 8MP + 8MP (Dual), 21MP (Single) | Dual ISP, 24MP
Integrated Modem | “X9 LTE” Cat. 7, 300Mbps DL / 150Mbps UL, 2x20MHz C.A. (DL & UL) | “X9 LTE” Cat. 7, 300Mbps DL / 100Mbps UL, 2x20MHz C.A. (DL & UL) | “X9 LTE” Cat. 7, 300Mbps DL / 150Mbps UL, 2x20MHz C.A. (DL & UL)
USB              | 3.0 w/QuickCharge 3.0 | 2.0 w/QuickCharge 3.0 | 3.0 w/QuickCharge 3.0
Mfc. Process     | 14nm | 28nm LP | 14nm
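The bandwidth figures in the table fall straight out of the bus width and the effective DDR data rate; a quick sanity check (illustrative arithmetic only; the Snapdragon 450's own memory clock isn't listed above):

```python
def lpddr3_bandwidth_gbs(bus_width_bits: int, clock_mhz: float) -> float:
    # DDR memory transfers data on both clock edges:
    # bytes/s = (bus width in bytes) * clock * 2
    return (bus_width_bits / 8) * (clock_mhz * 1e6) * 2 / 1e9

print(lpddr3_bandwidth_gbs(32, 800))  # ~6.4 GB/s  (Snapdragon 435)
print(lpddr3_bandwidth_gbs(32, 933))  # ~7.46 GB/s (Snapdragon 625, listed as 7.45)
```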

At a high level, the Snapdragon 450 is a very straightforward successor to the 435. Qualcomm has taken most of the 435’s design principles and brought them forward for the smaller Snapdragon 450. For example, we’re still looking at an octa-core ARM Cortex-A53 implementation; however, thanks to the 14nm process, Qualcomm has been able to bump up the maximum clockspeed from 1.4GHz to 1.8GHz. Similarly, Qualcomm has updated the GPU from an Adreno 505 on the Snapdragon 435 to an Adreno 506 on the Snapdragon 450, with the more powerful GPU said to offer 25% better performance.



Meanwhile, more significant upgrades have been made to the ISPs and the USB controller. Similar to the Snapdragon 435, the 450 supports a single camera at up to 21MP. However, if it’s used in a dual camera configuration – as is increasingly popular these days for artificial bokeh and telephoto modes – then it can handle a pair of 13MP sensors, up from 8MP on the Snapdragon 435; a notable improvement, as 13MP increasingly seems to be the baseline for midrange phones. Qualcomm’s video processor blocks have also been improved, in part to keep up with the improved sensors, and as a result the 450 can now capture video at up to 1080p60, doubling the maximum framerate over the Snapdragon 435’s 1080p30 limit. Meanwhile the USB controller has been upgraded from USB 2.0 to USB 3.0, allowing for much faster transfers from Snapdragon 450 devices. And, like its predecessor, the 450 also supports Qualcomm’s QuickCharge 3.0 tech over said USB port.



Cellular connectivity is once again provided by Qualcomm’s integrated X9 modem, which supports LTE Category 7 down and Category 13 up, for a maximum of 300Mbps down and 150Mbps up respectively. Interestingly, on the Snapdragon 435, Qualcomm limited that SoC to just 100Mbps up despite the fact that Category 13 allows for 150Mbps; so this is the first X9-equipped Snapdragon 400 SoC to actually be able to hit 150Mbps up, going by Qualcomm’s specifications. The 450 also retains the 435’s Hexagon DSP, however like so many other parts of the SoC, the 450’s DSP has been further enhanced to reduce power consumption.


Last but not least, Qualcomm is promising some solid battery life improvements with the Snapdragon 450 over its 435 predecessor. While the company has invested some of their 14nm gains in improving clockspeeds throughout the chip, they’ve also reserved a lot of those gains for reducing overall power consumption, a philosophy similar to what they did with the Snapdragon 835 this year. To that end, the company is promising that Snapdragon 450 phones will be able to deliver 4 hours more battery life relative to 435 phones.



Overall it’s interesting to note just how much the Snapdragon 450 sounds like Qualcomm’s Snapdragon 625, their 14nm SoC from 2016. Both chips use an octa-A53 CPU configuration, X9 LTE modem, and Adreno 506 GPU. In fact the Snapdragon 450 is even pin-compatible with the Snapdragon 625, which means that handset manufacturers can immediately begin working with the new SoC in existing designs. However, given this close similarity, I’m also left to wonder whether the Snapdragon 450 is a new die or a cut-down 625. In any case, the two chips still have some differences between them: particularly that the Snapdragon 625 clocks higher and features a more powerful ISP and video decode block.


Wrapping things up, as is often the case with Qualcomm’s SoC reveals, today’s announcement comes ahead of vendor sampling and wide release. The company will begin commercial sampling in Q3 of this year, and the chip should show up in retail devices by the end of the year.




Source: AnandTech – Qualcomm Announces Snapdragon 450 Midrange SoC

AMD’s Radeon Vega Frontier Edition Formally Launches: Air Cooled For $999, Liquid Cooled for $1499

After what appears to be a very unusual false start, AMD has now formally launched their new Radeon Vega Frontier Edition card. First announced back in mid-May, the unusual card, which AMD is all but going out of their way to dissuade their usual consumer base from buying, will be available today for $999. Meanwhile its liquid cooled counterpart, which was also announced at the time, will be available later on in Q3 for $1499.


Interestingly, both of these official prices are some $200-$300 below the prices first listed by SabrePC two weeks ago in the false start. To date AMD hasn’t commented on what happened there, however it’s worth noting that SabrePC is as of press time still listing the cards for their previous prices, with both cards reporting as being in-stock.


AMD Workstation Card Specification Comparison
                      | Radeon Vega Frontier Edition | Radeon Pro Duo (Polaris) | Radeon Pro WX 7100 | Radeon Fury X
Stream Processors     | 4096 | 2 x 2304 | 2304 | 4096
Texture Units         | ? | 2 x 144 | 144 | 256
ROPs                  | 64 | 2 x 32 | 32 | 64
Base/Typical Clock    | 1382MHz | N/A | N/A | N/A
Peak/Boost Clock      | 1600MHz | 1243MHz | 1243MHz | 1050MHz
Single Precision      | 13.1 TFLOPS | 11.5 TFLOPS | 5.7 TFLOPS | 8.6 TFLOPS
Half Precision        | 26.2 TFLOPS | 11.5 TFLOPS | 5.7 TFLOPS | 8.6 TFLOPS
Memory Clock          | 1.89Gbps HBM2 | 7Gbps GDDR5 | 7Gbps GDDR5 | 1Gbps HBM
Memory Bus Width      | 2048-bit | 2 x 256-bit | 256-bit | 4096-bit
Memory Bandwidth      | 483GB/sec | 2x 224GB/sec | 224GB/sec | 512GB/sec
VRAM                  | 16GB | 2 x 16GB | 8GB | 4GB
Typical Board Power   | <300W | 250W | 130W | 275W
GPU                   | Vega (1) | Polaris 10 | Polaris 10 | Fiji
Architecture          | Vega | Polaris | Polaris | GCN 1.2
Manufacturing Process | GloFo 14nm | GloFo 14nm | GloFo 14nm | TSMC 28nm
Launch Date           | 06/2017 | 05/2017 | 10/2016 | 06/24/15
Launch Price          | Air: $999 / Liquid: $1499 | $999 | $649 | $649

Meanwhile AMD has also posted the final specifications for the card, confirming the 1600MHz peak clock. Sustained performance is a bit lower, with AMD publishing a “Typical clock” of 1382MHz. It’s worth noting that this is the first time AMD has used this term – they’ve previously used the term “base clock”, which is generally treated as the minimum clockspeed a card under a full gaming workload should run at. AMD is typically very careful in their word choice (as any good Legal department would require), so I’m curious as to whether there’s any significance to this distinction. At first glance, “typical clock” sounds a lot like NVIDIA’s “boost clock”, which is to say that it will be interesting to see how often Vega FE can actually hit & hold its boost clock, and whether it falls below its typical clock at all.


Feeding the GPU is AMD’s previously announced dual-stack HBM2 configuration, which is now confirmed to be a pair of 8-layer, 8GB “8-Hi” stacks. AMD has the Vega FE’s memory clocked at just under 1.9Gbps, which gives the card a total memory bandwidth of 483GB/sec. And for anyone paying close attention to AMD’s naming scheme here, they are officially calling this “HBC” memory – a callback to Vega’s High Bandwidth Cache design.
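The headline numbers in the spec table can be reproduced from the unit counts and clocks. A small sanity-check sketch; the FP16 figure assumes double-rate packed math, which is what the quoted 26.2 TFLOPS implies:

```python
def tflops(stream_processors: int, clock_ghz: float) -> float:
    # Each stream processor performs one FMA (2 FLOPs) per clock at FP32
    return stream_processors * 2 * clock_ghz / 1000

def hbm2_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(tflops(4096, 1.600))             # ~13.1 TFLOPS FP32 at the 1600MHz peak clock
print(tflops(4096, 1.600) * 2)         # ~26.2 TFLOPS FP16 (packed math)
print(tflops(4096, 1.382))             # ~11.3 TFLOPS FP32 at the 1382MHz typical clock
print(hbm2_bandwidth_gbs(1.89, 2048))  # ~484 GB/s, matching the quoted 483GB/sec
```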



As for power consumption, AMD lists the card’s typical board power as “< 300W”. This is consistent with the earlier figures posted by retailers, and perhaps most importantly, this is AMD’s official typical board power, not the maximum board power. So we are looking at a fairly high TDP card, and given that AMD has had a great deal of time to sit and work on their reference blower designs over the last few years, I’m anxious to see what that means for this initial air-cooled card.


For display outputs, the Vega FE devotes its entire second slot to airflow, so all of the display connectors are found on the first slot. Typical for AMD cards of the past couple of years, we’re looking at 3x DP 1.4 ports along with 1x HDMI port. AMD is also throwing in a passive DP to SL-DVI adapter in the box.


Moving on, let’s talk about the software setup for the Vega FE. As this is a card meant for (in part) game developers, AMD has opted to give the card access to both their pro and gaming drivers. Taking things one step further however, rather than making them separate downloads and installations, AMD has merged both drivers into a single install. Users can now download a single driver package and simply switch between driver modes in AMD’s control panel, allowing quick access to both driver types.



Unfortunately AMD hasn’t released much more in the way of detailed information on how driver switching works. In particular, it’s not clear whether switching requires a reboot or not; I would assume not, but it remains to be seen. Ultimately the primary purpose of this switch is for allowing game developers to switch modes for testing, going from pro driver mode for development to gaming mode for testing. The difference, I suspect, is less about driver code, and more about what driver optimizations are enabled. Games can get away with numerous hacks and optimizations in the name of performance, whereas professional applications need deterministic accuracy.


Otherwise, the driver situation touches on probably what remains one of the least-clear points of this product launch: who is the Radeon Vega Frontier Edition for? AMD is doing everything they can to encourage their typical Radeon consumer base to wait for the forthcoming Radeon RX Vega cards. In the meantime the company is stating that the card is “For Data Scientists, Immersion Engineers, and Product Designers”, and certainly the pricing is closer to a professional card than a consumer card. Complicating matters is that AMD has been posting performance figures for SPECviewperf, Creo, and other distinctly professional workloads, the kinds that typically go hand-in-hand with certified drivers. And at least for the moment, it doesn’t appear that AMD’s drivers have been certified (not that we’d expect them to be for a new architecture).



At a high level the Vega FE seems to compete with NVIDIA’s Titan Xp – and certainly that’s how AMD is choosing to pitch it – though it doesn’t help that NVIDIA hasn’t done a great job of establishing clear market segmentation either since the launch of the GeForce GTX 1080 Ti. The Titan Xp is most certainly a partial gaming card (albeit a very expensive one), whereas AMD is focused more on professional visualization use cases that NVIDIA is not. Where the two overlap is on the compute front, where both the Vega FE and Titan Xp are essentially “entry-level” cards for production compute work. Otherwise, it may be better to treat the Vega FE as a beta testing card, especially given the “Frontier” branding and the fact that AMD is clearly attempting to build out a more complete ecosystem ahead of the future RX Vega and Instinct cards.


As for compute users in particular, AMD will be releasing the ROCm driver a bit later this week, on the 29th. Vega FE has a lot of potential for a compute card, thanks to its high number of SPs combined with equally high clocks. However serious compute users will need to code for its capabilities and idiosyncrasies to get the best possible performance on the card, which is all the more reason for AMD to get cards out now so that developers can get started. Compute will be the long-tail of the architecture: AMD can tweak the graphics performance of the card via drivers, but it’s up to developers to unlock the full compute capabilities of the Vega architecture.


Wrapping things up, for anyone interested in picking up the Vega FE, AMD is currently only linking to Newegg’s storefront, where both the air cooled and water cooled cards are listed as “coming soon”. Otherwise SabrePC lists the cards in stock, albeit at prices over AMD’s MSRP.




Source: AnandTech – AMD’s Radeon Vega Frontier Edition Formally Launches: Air Cooled For $999, Liquid Cooled for $1499

Honor 9 Makes Its Way West: Launching Today in Europe for £380/€450

A couple of weeks back at an event in China, Huawei’s Honor sub-brand announced their flagship smartphone for 2017: the Honor 9. Following in the footsteps of the Honor 8 before it, the Honor 9 continues in Honor’s tradition of offering flagship-level smartphones with high-end components at a more mainstream price. At the time of the reveal, the Honor 9 was only being released in China. But now a short few weeks later, Honor is announcing that it is making its way west for its European launch, which kicks off today.


Honor’s latest flagship is a 5.15-inch phone that, at first glance, looks a lot like parent company Huawei’s recently-launched P10 smartphone. The Honor phone gets the same latest-generation Kirin 960 SoC from Huawei’s HiSilicon division, and the 5.15-inch 1080p display is only a hair larger than the P10’s 5.1-inch display. This is similar to what we have seen in past generations, with the Honor flagship serving as a more value-priced alternative for consumers who are after the latest Huawei tech.


Honor Flagship Phones
             | Honor 9 | Honor 8
SoC          | HiSilicon Kirin 960: 4x Cortex-A53 @ 1.8GHz + 4x Cortex-A73 @ 2.4GHz, ARM Mali-G71 MP8 | HiSilicon Kirin 950: 4x Cortex-A72 @ 2.3GHz + 4x Cortex-A53 @ 1.8GHz, Mali-T880 MP4 @ 900MHz
Display      | 5.15-inch 1920×1080 IPS LCD | 5.2-inch 1920×1080 IPS LCD
Dimensions   | 147.3 x 70.9 x 7.45 mm, 155 grams | 145.5 x 71.0 x 7.45 mm, 153 grams
RAM          | 4GB / 6GB | 3GB / 4GB
NAND         | 64GB + microSD | 32GB / 64GB (eMMC) + microSD
Battery      | 3200 mAh (12.23 Wh), non-replaceable | 3000 mAh (11.46 Wh), non-replaceable
Modem        | HiSilicon LTE (Integrated), 2G / 3G / 4G LTE (Category 12/13) | HiSilicon Balong (Integrated), 2G / 3G / 4G LTE (Category 6)
SIM Size     | 2x NanoSIM | 2x NanoSIM
Wireless     | 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GLONASS/BDS | 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GLONASS/BDS
Connectivity | USB 2.0 Type-C, 3.5mm headset | USB 2.0 Type-C, 3.5mm headset
Launch OS    | Android 7.1 with EMUI 5.1 | Android 6.0 with EMUI 4.1

Rounding out the specifications, depending on the configuration the phone is paired with either 4GB or 6GB of RAM. All (listed) SKUs come with 64GB of NAND for storage, along with a microSD card slot for additional storage. For wired connectivity, Honor offers a USB 2.0 Type-C port as well as a 3.5mm audio jack. Powering the phone is a 3200mAh battery, which works out to being a bit bigger than the battery in last year’s Honor phone. Finally, Honor has thankfully moved the fingerprint sensor for this year’s model; rather than being on the rear, it’s now on the front of the phone, where it’s easier to access.
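For reference, the mAh and Wh ratings in the spec table are consistent with a typical ~3.82 V nominal cell voltage; the conversion below is illustrative arithmetic, with the voltage inferred rather than quoted by Honor:

```python
def watt_hours(capacity_mah: float, nominal_voltage: float = 3.82) -> float:
    # Energy (Wh) = capacity (Ah) * nominal cell voltage (V); 3.82 V is an inferred value
    return capacity_mah / 1000 * nominal_voltage

print(watt_hours(3200))  # ~12.2 Wh, matching the Honor 9's 12.23 Wh rating
print(watt_hours(3000))  # ~11.5 Wh, matching the Honor 8's 11.46 Wh rating
```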


Meanwhile, like the Honor 8 before it, the Honor 9 gets a dual camera implementation. This time, however, instead of matching color and monochrome cameras, the monochrome camera gets a resolution boost, resulting in a 12MP color (RGB) camera paired with a 20MP monochrome camera. As you’d expect with the significant resolution increase for one of the cameras, along with the more powerful Kirin 960 SoC, the camera is the second major focal point for Honor’s promotion of the phone. Specifically, the company is promising improved low-light photography thanks to the improved monochrome camera.



As for the build of the phone, Honor has once again gone with glass for the front and back. In fact the overall design and construction of the phone appear to be very similar to the Honor 8 in this respect; the edge-to-edge glass is composed of 15 layers, with a 2.5D curve at the edges. New to this year’s model is what Honor is calling “Glacier Grey”, which joins their other color options.


As for the launch of the phone, the Honor 9 is available today in much of Europe, including Germany, Belgium, Italy, Russia, and the UK. Interestingly, different countries are getting slightly different phone configurations: the UK gets a 4GB RAM / 64GB NAND configuration for £379.99, meanwhile the rest of Europe is reportedly getting a 6GB RAM / 64GB NAND configuration for €449.99. As for a US release, Honor isn’t announcing anything at this time. However given Honor’s recent struggles, at this point a US release should not be considered a given.



Source: AnandTech – Honor 9 Makes Its Way West: Launching Today in Europe for £380/€450

ASUS & Sapphire Release Pascal & Polaris-based Cryptocurrency Mining Cards

Even during the most bullish Bitcoin days, video card partners had shied away from creating SKUs specifically for cryptocurrency mining, and that has remained the case since – until now. With the Ethereum mining mania hitting new heights (ed: and arguably new lows), add-in board vendors ASUS and Sapphire have released mining-specific video cards, with variants based on NVIDIA’s GP106 GPU and AMD’s RX 470 & RX 560 video cards. Being built for high hash rates rather than visual graphics horsepower, these cards are distinctively sparse in their display output offerings.


ASUS has outright labelled their cards as part of their new “MINING Series,” with product pages for MINING-P106-6G and MINING-RX470-4G advertising hash-rate production and cost efficiency features. Something to note is that ASUS has chosen to use the GPU codename of GP106, rather than the NVIDIA GTX 1060 branding. The GP106-based card has no display outputs, while the RX 470 card supports only a single DVI-D output despite humorously having HDMI and DisplayPort cut-outs on the PCIe bracket. Both cards are specified at reference clocks.



Meanwhile, Overclockers UK are listing 5 Sapphire MINING Edition SKUs, with 4 RX 470 variants differentiated by memory manufacturer and VRAM size: RX 470s with 4GB of non-Samsung (11256-35-10G) or Samsung memory (11256-36-10G), RX 470s with 8GB of non-Samsung (11256-37-10G) or Samsung memory (11256-38-10G), as well as an RX 560 Pulse MINING Edition card (11267-11-10G). None of the RX 470 variants offer any display outputs, while the RX 560 has a single DVI-D.



In the Overclockers UK product descriptions, cards with Samsung memory are specified for an additional 1 MH/s (mega-hashes per second) over the non-Samsung counterparts, highlighting the importance of memory bandwidth and quality in current Ethereum mining. In addition, the descriptions state a short 1 year warranty and, interestingly, CrossFire support for up to 2 GPUs. It remains to be seen whether these cards can be paired with standard video cards for the purpose of increased graphical performance.


Other SKU listings have surfaced in the wild: Sapphire RX 470 4GB with non-Samsung (11256-21-21G) and Samsung memory (11256-31-21G) on Newegg, and MSI P106-100 MINER 6G on NCIX. The Newegg Sapphire RX 470s, unlike the ones listed on Overclockers UK, both have single DVI outputs and 180 day limited warranties. However, the MSI mining card is completely bare of any details.


Looking back, Bitcoin, Litecoin, and Dogecoin – as well as many others – have all waxed and waned. Yet video card manufacturers had remained the last holdouts in the PC component market when it came to offering cryptocurrency-specific SKUs; other segments have already seen tailored chassis, PSUs, and motherboards, both new and old. In the past, surging cryptomining demand has resulted in periodic supply issues, with consequences like $900 R9 290Xs. Now, ASUS and Sapphire seem intent on tackling the current Polaris and Pascal shortages from the most direct angle possible: cryptomining cards.


While drastic on some level, it’s representative of the difficult problem faced by both the GPU manufacturers (AMD and NVIDIA) and their video card partners. Mining-inflated demand restricts supply to such an extent that scarcity and artificially high prices infuriate standard consumers looking to purchase video cards. However, overproduction could easily lead into an intractably congested channel after the cryptomining craze has ceased, not to mention potential RMA/warranty headaches or unintentional flooding of the secondary market with used mining cards of variable health.


By offering cryptomining cards with limited warranties, restricted display outputs, and presumably lower manufacturing costs, vendors are hoping to capitalize on mining demand while satisfying standard consumers and avoiding undue damage to their brand or revenue. Given these aggressive and forthright efforts by ASUS and Sapphire, it would not be surprising if other add-in board vendors followed suit with a few mining-specific products of their own.



Source: AnandTech – ASUS & Sapphire Release Pascal & Polaris-based Cryptocurrency Mining Cards

Google Fined €2.42B by European Commission for Antitrust Violations

This morning Google has become a new record holder in the European Union; unfortunately however it’s not a good record to hold.


Capping off a multi-year investigation, the European Commission – the EU’s executive body – has ruled that Google has violated the EU’s antitrust laws with the company’s shopping service and how it is promoted. As a consequence of this ruling, the EU is levying a €2,424,495,000 (~$2.73B) fine against Google, along with requiring the company to cease its anti-competitive activities in the next 90 days under threat of further fines. This fine is, in turn, now the largest antitrust fine ever levied by the EU, easily surpassing Intel’s €1.06 billion fine in 2009.


The EU has been investigating Google for several years now – and indeed hasn’t been the only body to do so over the years – and based on how the investigation was proceeding, it has been expected for some time that the European Commission would rule against Google. Overall, the Commission bases the size of the fine on the revenue of the offending business – in this case Google’s shopping comparison service – where it can levy a fine of up to 30% of revenue over the offending period of time. So while Google’s fine is quite large, it also represents an equally significant amount of time – over 9 years in the case of Germany and the UK.


From an antitrust standpoint, the crux of the Commission’s argument has been that Google has leveraged their dominance of the search market to unfairly prop up and benefit their shopping comparison service. Specifically, that in their search results Google listed their own shopping service and its results ahead of competing services, severely harming competitors, who saw traffic drops of up to 92% depending on the specific country in question.



For the time being, Google has 90 days to fix the issue. The Commission isn’t recommending a specific remedy, but they expect Google to pick a reasonable remedy and to explain it to the Commission. Ultimately what regulators are looking for is that Google give competitors “equal treatment” – that is, that competing shopping comparison services receive equal footing in Google’s search results, following the same methods and processes that Google uses to place their own service. Should Google not comply, then the Commission has the option of levying a further fine of up to 5% of Alphabet’s average daily global turnover.


Meanwhile Google has the option of appealing the ruling to the courts, and while they’ve yet to make a decision, they’ve already published their own rebuttal to the Commission’s ruling, indicating that an appeal is likely. In their rebuttal, Google has stated that “While some comparison shopping sites naturally want Google to show them more prominently, our data show that people usually prefer links that take them directly to the products they want, not to websites where they have to repeat their searches.” The company has also noted that they do have competition, particularly from companies like eBay and Amazon.


Finally, along with today’s ruling, the European Commission has also noted that they still have other, ongoing cases against Google that they are continuing to investigate. These include issues over the Android operating system – where the Commission is concerned that “Google has stifled choice and innovation in a range of mobile apps and services by pursuing an overall strategy on mobile devices to protect and expand its dominant position in general internet search” – and Google’s AdSense unit, where there are concerns that Google’s policies have reduced choice in the ad market. As a result, even if Google doesn’t appeal today’s fine, their legal wrangling with the EU is not yet over.




Source: AnandTech – Google Fined €2.42B by European Commission for Antitrust Violations

The Intel SSD 545s (512GB) Review: 64-Layer 3D TLC NAND Hits Retail

64-layer 3D NAND has arrived with Intel as the first to market. The new Intel SSD 545s is a mainstream consumer SATA SSD that greatly improves on last year’s disappointing Intel SSD 540s. Intel hasn’t quite beaten Samsung’s entrenched 850 EVO, but the SSD market is definitely getting more competitive with this new generation of 3D NAND flash memory.
 



Source: AnandTech – The Intel SSD 545s (512GB) Review: 64-Layer 3D TLC NAND Hits Retail

Memblaze Launches PBlaze5 SSDs: Enterprise 3D TLC, Up to 6 GB/s, 1M IOPS, 11 TB

Memblaze has introduced its new generation of server-class NVMe SSDs for mixed and mission critical workloads. The PBlaze5 SSDs are based around Micron’s 3D eTLC memory and paired with a Microsemi Flashtec controller. The SSDs come in PCIe 3.0 x8 AIC or 2.5” U.2 form-factors, carry up to 11 TB of 3D TLC NAND, and feature sequential read performance of up to 6 GB/s as well as random read performance of up to 1M IOPS.


The Memblaze PBlaze5 700 and 900-series SSDs are based on Microsemi’s Flashtec PM8607 NVMe2016 controller, which features 16 compute cores and 32 NAND flash channels, and supports everything one might expect from a contemporary SoC for server SSDs (LDPC 550 bit/4KB ECC with a 1×10^-17 uncorrectable bit error rate, NVMe 1.2a, AES-256, PCIe 3.0 x8/PCIe 3.0 x4 dual-port, etc.), along with a host of enterprise-grade features. Memblaze further outfits the drives with their own MemSpeed 3.0 and MemSolid 3.0 firmware-based technologies. MemSpeed 3.0 ensures consistent performance and QoS, and adds further priority queue management optimizations over the previous version. As for MemSolid 3.0, it is a stack of reliability and security features for the PBlaze5 900-series drives, which we will touch upon later.



Both the 700 and 900-series drives use the same kind of memory — Micron’s 32-layer 3D eTLC NAND flash (384 Gb dies). Memblaze tells us that the 3D eTLC memory offers higher endurance and reliability, but it does not go into further detail.




Given the same controller and the same kind of memory, performance and power consumption numbers for the PBlaze5 700 and 900-series SSDs are close (the 900-series offers around 50% higher random write performance). The 2.5″ PBlaze5 D700/D900 drives feature sequential read speeds of up to 3.2 GB/s, sequential write speeds of up to 2.4 GB/s, as well as up to 760K random read IOPS. The PCIe card-based PBlaze5 C700/C900 offer considerably higher read performance thanks to their twice-as-wide interface (PCIe 3.0 x8): up to 6 GB/s sequential reads, 2.4 GB/s sequential writes, and 1.042M random read IOPS. As for power consumption, all the drives use from 7 to 25 W of power, depending on the configuration, workload and settings. However, the similarities between the PBlaze5 700 and 900-series SSDs end here.
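The gap between the U.2 and add-in-card models lines up with their host interface limits. A rough PCIe 3.0 throughput estimate, accounting only for 128b/130b line encoding and ignoring other protocol overhead:

```python
def pcie3_bandwidth_gbs(lanes: int) -> float:
    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding
    return lanes * 8e9 * (128 / 130) / 8 / 1e9

print(pcie3_bandwidth_gbs(4))  # ~3.9 GB/s -> the x4 U.2 drives' 3.2 GB/s reads fit within the link
print(pcie3_bandwidth_gbs(8))  # ~7.9 GB/s -> leaves headroom for the x8 cards' 6 GB/s reads
```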




The PBlaze5 700 drives are designed for datacenters that require maximum performance, high density and capacity at low power and moderate cost. To that end, the PBlaze5 700-series drives are rated for 1 DWPD over five years and come with reliability features that are consistent with other SSDs for hyperscale datacenters.



By contrast, the PBlaze5 900-series drives are aimed at mission critical environments (databases, financial transactions, analytics, etc.) that need enhanced reliability. In addition to extended error correction (with a 1×10^-17 uncorrectable bit error rate), the PBlaze5 900-series also supports T10 Data Integrity Field (DIF)-compliant end-to-end data path protection, which results in a Silent Bit Error Rate (SBER) lower than 10^-23. In addition, the 900-series takes full advantage of all MemSolid 3.0 enhancements, offering features like crypto erase, background scan protection, firmware encryption (one of the first SSDs to support this feature), whole disk encryption, metadata protection, read disturb protection, dual-port capability (U.2 drives only), and so on. For those who need to precisely manage the power consumption of their SSDs, the MemSolid 3.0-based drives offer distinct 15, 20 and 25 W modes. As for endurance, Memblaze guarantees 3 DWPD over five years for its PBlaze5 900-series SSDs.
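To make the error-rate specifications a bit more tangible, here is what a 10^-17 uncorrectable bit error rate implies for the largest 11 TB drive. This is back-of-the-envelope arithmetic only; real-world behavior depends on workload and the drive's error handling:

```python
drive_bytes = 11e12          # 11 TB drive (largest PBlaze5 capacity)
bits_per_full_read = drive_bytes * 8
uber = 1e-17                 # <1 uncorrectable error per 10^17 bits read

errors_per_full_read = bits_per_full_read * uber
print(errors_per_full_read)      # ~0.00088 expected uncorrectable errors per full-drive read
print(1 / errors_per_full_read)  # ~1,100 full-drive reads per expected uncorrectable error
```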


Memblaze PBlaze5 Series Specifications
                             | PBlaze5 D700 | PBlaze5 C700 | PBlaze5 D900 | PBlaze5 C900
Form Factor                  | 2.5″ U.2 Drive | HHHL AIC | 2.5″ U.2 Drive | HHHL AIC
Interface                    | PCIe 3.0 x4 | PCIe 3.0 x8 | PCIe 3.0 x4 | PCIe 3.0 x8
Capacities                   | 2 TB / 3.6 TB / 4 TB / 8 TB / 11 TB | 2 TB / 3.6 TB / 4 TB / 8 TB / 11 TB | 2 TB / 3.2 TB / 4 TB / 8 TB | 2 TB / 3.2 TB / 4 TB / 8 TB
Controller                   | Microsemi Flashtec PM8607 NVMe2016 (all models)
Protocol                     | NVMe 1.2a (all models)
NAND                         | 3D Enterprise TLC NAND memory (all models)
Sequential Read              | 3.2 GB/s | 6 GB/s | 3.2 GB/s | 6 GB/s
Sequential Write             | 2.4 GB/s | 2.4 GB/s | 2.4 GB/s | 2.4 GB/s
Random Read (4 KB) IOPS      | 760,000 | 1,042,000 | 760,000 | 1,042,000
Random Write (4 KB) IOPS     | 210,000 | 210,000 | 304,000 | 304,000
Read Latency                 | 90 µs (all models)
Write Latency                | 15 µs (all models)
Power                        | 7 W idle / 23 W operating (all models)
ECC                          | LDPC 550 bit/4 KB (all models)
Endurance                    | 1 DWPD | 1 DWPD | 3 DWPD | 3 DWPD
Dual-Port Support            | + (900-series U.2 only)
Uncorrectable Bit Error Rate | <1 bit error per 10^17 bits read
Silent Bit Error Rate        | <1 bit error per 10^23 bits read
End-to-End Data Protection   | T10 DIF/DIX
Crypto Erase                 | +
Firmware Signature           | +
PCIe ECRC                    | +
Encryption                   | AES-256
Power Loss Protection        | Yes
Proprietary Technologies     | MemSpeed 3.0 | MemSpeed 3.0 | MemSpeed 3.0, MemSolid 3.0 | MemSpeed 3.0, MemSolid 3.0
MTBF                         | 2.1 million hours
Warranty                     | Five years

Traditionally, Memblaze does not publicly list the pricing of their enterprise SSDs, as pricing is dependent in part on the number ordered and just how the customer wants the drives configured. The company is currently working with its partners on deploying the PBlaze5 drives, and actual volume shipments will begin after their clients validate the SSDs with their respective applications.



Source: AnandTech – Memblaze Launches PBlaze5 SSDs: Enterprise 3D TLC, Up to 6 GB/s, 1M IOPS, 11 TB

Samsung Begins Production of Exynos i T200 SoC for Miniature IoT Devices

Samsung on Thursday said it had begun to mass-produce its first SoC for miniature IoT devices, the Exynos i T200. Aimed at devices that do not need a lot of compute power, but require ultra-low standby power consumption, the first Exynos i SoC integrates processing, connectivity, security and other capabilities.


The Samsung Exynos i T200 SoC uses one ARM Cortex-R4 CPU core and one ARM Cortex-M0+ CPU core for real-time processing and microcontroller applications, with both cores running at 320MHz. For connectivity, the chip also contains an 802.11b/g/n single-band (2.4 GHz) Wi-Fi controller and supports the IoTivity protocol, which enables interoperability between IoT devices over various protocols. In addition, Samsung’s Exynos i SoC has a security hardware block called the Security Sub-System (SSS) as well as a physical unclonable function (PUF) for secure data storage and device authentication.


The Exynos i T200 chip is made using one of Samsung’s “low-power 28 nm HKMG” process technologies, though Samsung does not specify exactly which one. As for packaging, the SoC comes in an FCBGA form-factor.



Samsung did not indicate if and when it plans to start using the i T200 chip internally. Since Samsung also sells Exynos SoCs to third parties, it is possible that the Exynos i T200 ends up in devices from other manufacturers. As for pricing, the SoC uses Cortex-R4 and Cortex-M0+ cores, which are very small and optimized for low cost; therefore, it is unlikely that the Exynos i T200 will be expensive.



Source: AnandTech – Samsung Begins Production of Exynos i T200 SoC for Miniature IoT Devices

GlobalFoundries Details 7 nm Plans: Three Generations, 700 mm², HVM in 2018

Keeping an eye on the ever-evolving world of silicon lithography, GlobalFoundries has recently disclosed additional details about its 7 nm generation of process technologies. As announced last September, the company is going to have multiple generations of 7 nm FinFET fabrication processes, including those using EUV. GlobalFoundries now tells us that its 7LP (7 nm leading performance) technology will extend to three generations and will enable its customers to build chips that are up to 700 mm² in size. Manufacturing of the first chips using their 7LP fabrication process will ramp up in the second half of 2018.


GlobalFoundries 7LP Platform
                          | 7nm Gen 1 | 7nm Gen 2 | 7nm Gen 3
Lithography               | DUV | DUV + EUV | DUV + EUV
Key Features              | Increased performance, lower power, higher transistor density vs. 14LPP | Increased yields and lower cycle times | Performance, power and area refinements
Reasons for EUV Insertion | – | To reduce usage of quadruple and triple patterning | To improve line-edge roughness, resolution, CD uniformity, etc.
HVM Start                 | 2H 2018 | 2019 (?) | 2020 (?)

7 nm DUV


First and foremost, GlobalFoundries reiterated the specs of their first-gen 7 nm process, which relies on deep ultraviolet (DUV) lithography with argon fluoride (ArF) excimer lasers operating at a 193 nm wavelength. The company’s 7 nm fabrication process is projected to bring an over 40% improvement in frequency potential over the 14LPP manufacturing technology that GlobalFoundries uses today, assuming the same transistor count and power. The tech will also reduce the power consumption of ICs by 60% at the same frequency and complexity.


For their newest node, the company is focusing on two ways to reduce the power consumption of chips: implementing superior gate control, and reducing voltages. To that end, chips made using GlobalFoundries’ 7LP technology will support voltages of 0.65 – 1 V, which is lower than ICs produced using the company’s 14LPP fabrication process today. In addition, 7LP semiconductors will feature numerous work functions for gate control.



When it comes to costs and scaling, the gains from 7LP are expected to be a bit atypical compared to the usual process node advancement. On the one hand, 7 nm DUV will enable over 50% area scaling over 14LPP, which is not surprising given the fact that the latter uses 20 nm BEOL interconnects. However, since 7 nm DUV involves more layers that require triple and quadruple patterning, according to the foundry the actual die cost reduction will be in the range of 30% to 45%, depending on the application.
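That 30% to 45% range is consistent with a simple cost model in which the >50% area scaling is partly offset by higher per-wafer patterning cost. The wafer-cost premiums below are my own illustrative assumptions, not GlobalFoundries figures, and the model ignores yield:

```python
def relative_die_cost(area_scaling: float, wafer_cost_premium: float) -> float:
    # New die cost relative to old = (relative die area) * (relative wafer cost)
    return area_scaling * (1 + wafer_cost_premium)

for premium in (0.10, 0.40):  # assumed 10%-40% extra wafer cost from triple/quadruple patterning
    cost = relative_die_cost(0.5, premium)  # ~50% area scaling vs. 14LPP
    print(f"{premium:.0%} wafer cost premium -> {1 - cost:.0%} die cost reduction")
# 10% wafer cost premium -> 45% die cost reduction
# 40% wafer cost premium -> 30% die cost reduction
```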


GlobalFoundries’ 7 nm platform is called 7LP for a reason — the company is targeting primarily high-performance applications, not just SoCs for smartphones, which contrasts with TSMC’s approach to 7 nm. GlobalFoundries intends to produce a variety of chips using the tech, including CPUs for high-performance computing, GPUs, mobile SoCs, chips for aerospace and defense, as well as automotive applications. That said, in addition to improved transistor density (up to 17 million gates per mm² for mainstream designs) and frequency potential, GlobalFoundries also expects to increase the maximum die size of 7LP chips to approximately 700 mm², up from the roughly 650 mm² limit for ICs the company is producing today. When it comes to maximum die sizes, the ultimate constraints are imposed by the lithography tools themselves.
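Combining the quoted density and maximum die size gives a ballpark upper bound on logic complexity (illustrative arithmetic only):

```python
gates_per_mm2 = 17e6  # up to 17 million gates per mm^2 for mainstream 7LP designs
max_die_mm2 = 700     # ~700 mm^2 maximum 7LP die size

print(gates_per_mm2 * max_die_mm2 / 1e9)  # ~11.9 billion gates in a maximal 7LP die
```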


Advertised PPA Improvements of New Process Technologies (GlobalFoundries)
Data announced by companies during conference calls, press briefings and in press releases
               | 7nm Gen 1 vs 14LPP | 7nm Gen 2 vs Gen 1 | 7nm Gen 3 vs Gen 1/2
Power          | >60% | same* | lower
Performance    | >40% | same* | higher
Area Reduction | >50% | none | yes
*Better yields could enable fabless designers of semiconductors to bin chips for extra performance or lower power.

GlobalFoundries has been processing test wafers using its 7 nm process technology for clients for several quarters now. The company’s customers are already working on chips that will be made using the 7 nm DUV process technology, and the company intends to start risk production of such ICs early in 2018. Right now, clients are using version 0.5 of GlobalFoundries’ 7 nm process design kit (PDK), and later this year the foundry will release PDK v0.9, which will be a nearly final version of the kit. Keep in mind that large customers of GlobalFoundries (such as AMD) do not need the final version of the PDK to start development of their CPUs or GPUs for a given node; hence, when GF talks about plans to commercialize its 7LP manufacturing process, it means primarily early adopters — large fabless suppliers of semiconductors.



In addition to its PDKs, GlobalFoundries has a wide portfolio of IP licenses for its 7LP platform, including ARM CPU cores, high-speed SerDes (including 112G), and 2.5D/3D packaging options. When it comes to large customers, GlobalFoundries is ready for commercial production of chips using its 7 nm DUV fabrication process in 2018.



Fab 8 Ready for 7LP, Getting Ready for EUV


Speaking of high volume manufacturing using their 7LP DUV process, it is necessary to note that earlier this year GlobalFoundries announced plans to increase the production capacity of their Fab 8. Right now, the output of Fab 8 is around 60,000 wafer starts per month (WSPM), and the company expects to increase it by 20% for 14LPP process technology after the enhancements are complete.



The expansion does not involve physical enhancement of the building, which may indicate that the company intends to install more advanced scanners with increased output capabilities. GlobalFoundries naturally does not disclose details about the equipment it uses, but newer scanners with higher output and better overlay and focus performance will also play their role in HVM using 7 nm DUV that relies on quadruple patterning for select layers.



In addition to more advanced ASML TWINSCAN NXT DUV equipment, GlobalFoundries plans to install two TWINSCAN NXE EUV scanners in Fab 8 in the second half of this year. This is actually a big deal because current-generation fabs were not built with EUV tools in mind, and EUV equipment takes up more space than DUV equipment because of the light source and other aspects.



EUV: Many Problems Solved, But Concerns Remain


The usage of multi-patterning for ultra-thin process technologies is one of the reasons why the industry needs lithography that uses an extreme ultraviolet wavelength of 13.5 nm. As avid readers know, the industry has been struggling to develop EUV tools suitable for HVM, and while significant progress has been made recently, EUV is still not quite up to scale. This is exactly why GlobalFoundries is taking a cautious approach to EUV that involves multiple generations. Keep in mind that GlobalFoundries does not seem to have official names for the different iterations of its 7 nm process technologies. The only thing that the company is talking about now is its “7LP platform with EUV compatibility.” Therefore, all our generation-related musings here are just for a better understanding of what to expect.



ASML has developed several generations of EUV scanners and has demonstrated light sources with 205 W of power. The latest TWINSCAN NXE scanners with recent upgrades have demonstrated an availability that exceeds 60%, which is good enough to start their deployment, according to GlobalFoundries. Eventually, availability is expected to increase to 90%, in line with DUV tools.


Meanwhile, there are still concerns about protective pellicles (films) for EUV photomasks, mask defects, as well as EUV resists. On the one hand, current pellicles can handle productivity rates of up to 85 wafers per hour (WpH), which is well below the 125 WpH planned for this year. Basically, this means that existing pellicles cannot handle the powerful light sources required for HVM, and any defect on a pellicle can affect wafers and dramatically lower yields. Intel has demonstrated pelliclized photomasks that could sustain over 200 wafer exposures, but we do not know when such pellicles are expected to enter mass production. On the other hand, powerful light sources are required for satisfactory line-edge roughness (LER) and local critical dimension (CD) uniformity, primarily because of imperfections in the resists.


7 nm EUV Gen 1: Improving Yields, Reducing Cycles


Given all the EUV-related concerns, GlobalFoundries will start by inserting EUV for select layers in a bid to reduce the usage of multi-patterning (and eliminate quadruple patterning in general, if possible), thereby improving yields. At this time the company is not disclosing when it plans to start using EUV tools for manufacturing, only stating that they’ll do so “when it is ready.” It is unlikely that EUV will be ready in 2018, so it is logical to expect the company to use EUV tools no sooner than 2019.



Such an approach makes a lot of sense because it enables GlobalFoundries to increase yields for its customers and to learn more about what it will take to get EUV ready for HVM. In the best-case scenario, GlobalFoundries will be able to produce designs developed for 7 nm DUV with multi-patterning using its 7 nm EUV tech. However, one should keep in mind two factors. First, semiconductor developers release new products every year. Second, GlobalFoundries will begin to insert EUV tools into production at least a couple of quarters after the launch of the first 7 nm DUV chips. Therefore, it is highly likely that the first EUV-based chips produced at GlobalFoundries will be new designs rather than chips originally fabbed on the all-DUV process.



7 nm EUV Gen 2: Higher Transistor Density, Improved Line-Edge Roughness


Depending on how fast the industry addresses the current EUV challenges related to masks, pellicles, CD uniformity, LER and other issues, GlobalFoundries will eventually roll out another generation of its 7 nm EUV process.


The second-gen 7 nm EUV manufacturing technology from GlobalFoundries will feature improved LER and better resolution, which the company hopes will enable higher transistor densities with lower power and/or higher performance. Though given the experimental nature of the tech behind this process, as you’d expect, GlobalFoundries is not saying when the remaining problems will be resolved and when it can offer the appropriate services to its customers.


Finally, 3rd Gen 7LP will likely introduce some new design rules to enable geometry scaling and/or higher frequencies/lower power, but in general I’m expecting the transition to this process to be relatively seamless for IC designers. After all, the majority of layers will still use DUV. The only question is whether GlobalFoundries will need to install additional TWINSCAN NXE scanners at Fab 8 for its 2nd Gen 7 nm EUV process technology, which would also indicate that the number of layers processed using EUV has increased.


5 nm EUV: Adjustable Gate-All-Around FETs


A week before GlobalFoundries disclosed its 7LP platform plans, IBM and its Research Alliance partners (GlobalFoundries and Samsung) demonstrated a wafer processed using a 5 nm manufacturing process. ICs on the wafer were built using silicon nanosheet transistors (aka gate-all-around FETs [GAA FETs]), and it looks like these will be the building blocks of future semiconductors. The big question, of course, is when.



GAA FETs developed by IBM, GlobalFoundries, and Samsung stack silicon nanosheets in such a way that every transistor now has four gates. The key thing about GAA FETs is that the width of the nanosheets can be adjusted within a single manufacturing process, or even within the IC design, to fine-tune performance or power consumption. When it comes to performance/power/area (PPA) improvements, IBM claims that compared to a 10 nm manufacturing process, the 5 nm technology offers a 40% performance improvement at the same power and complexity, or 75% power savings at the same frequency and complexity. However, keep in mind that while IBM participates in the Alliance, announcements by IBM do not reflect the actual process technologies developed by GlobalFoundries or Samsung.



IBM, GlobalFoundries, and Samsung claim that adjustments to the GAA FETs were made using EUV, which is logical as the three companies use an ASML TWINSCAN NXE scanner at the SUNY Polytechnic Institute’s NanoTech Complex (in Albany, NY) for their R&D work. Technically, it is possible to produce GAA FETs using DUV equipment (assuming that it is possible to get the right CD, LER, cycle times, etc.), but it remains to be seen how heavily the 5 nm process and designs will rely on EUV tools.












Industry FinFET Lithography Roadmap, HVM Start (2016 – 2021)
Data announced by companies during conference calls, press briefings and in press releases
GlobalFoundries: 14LPP, 7 nm DUV, 7 nm with EUV*, 5 nm (?)
Intel: 14 nm, 14 nm+, 14 nm++, 10 nm, 10 nm+, 10 nm++
Samsung: 14LPP, 14LPC, 10LPE, 10LPP, 8LPP, 10LPU, 7LPP, 6 nm* (?)
SMIC: 28 nm**, 14 nm in development
TSMC: CLN16FF+, CLN16FFC, CLN10FF, CLN16FFC, CLN7FF, CLN12FFC, CLN12FFC/CLN12ULP, CLN7FF+, 5 nm* (?)
UMC: 28 nm**, 14 nm, no data
*Exact timing not announced
**Planar

None of the three members of the Research Alliance has talked about a timeframe for 5 nm HVM, but a wild guess would put 5 nm EUV in 2021 (if not later).



Some Thoughts


Wrapping things up, based on recent announcements it’s looking increasingly likely that EUV will in fact make it out of the lab and into high-volume production. In just the past couple of weeks, GlobalFoundries and two of its development partners have made several announcements regarding EUV, increasingly calling it a part of their future. This does not mean that they do not have a Plan B with multi patterning, but it looks like EUV is now a part of the mid-term future, not the long-term one. Still, it’s telling that no one is giving a deadline for EUV beyond “when it is ready.”



As GlobalFoundries (like other foundries) has said before, the insertion of EUV equipment into its manufacturing flow will be gradual. The company plans to install two scanners this year and to use them for mass production several quarters down the road, but GlobalFoundries has not made any further announcements beyond that. Ultimately, while the future for EUV is looking brighter, the technology is still not ready for prime time, and for the moment no one knows quite when it will finally meet all of the necessary metrics for volume production.


Finally, speaking of the 7LP platform in general, it is interesting that GlobalFoundries will be primarily targeting high-performance applications with the new technology, and not mobile SoCs like some other contract fabs. This is despite the fact that the 7LP platform supports ultra-low voltages (0.65 V) and should be able to address mobile applications. So while the 7LP manufacturing process looks rather competitive from a performance/power/area point of view, it remains to be seen just how GlobalFoundries’ partners will use the capabilities of the new process.


Related Reading:




Source: AnandTech – GlobalFoundries Details 7 nm Plans: Three Generations, 700 mm², HVM in 2018

TYAN Shows Two Skylake-SP-Based HPC Servers with Up to 8 Xeon Phi/Tesla Modules

At ISC 17 this week, TYAN demonstrated two new HPC servers based on the latest Intel Xeon processors for high-performance computing and deep learning workloads. The new HPC machines can integrate four or eight Intel Xeon Phi co-processors or the same number of NVIDIA Tesla compute cards, as well as over 10 storage devices.


The new TYAN FT77D-B7109 and FT48B-B7100 are 4U dual-processor machines compatible with Intel’s latest Xeon processors featuring the Skylake-SP (LGA3647) microarchitecture. Since Intel has not yet formally launched the aforementioned CPUs, TYAN has not yet opened up the servers, and little is known about their internal architecture.


The higher-end TYAN FT77D-B7109 server uses a dual PCIe root complex topology (enabled by PLX PCIe switches) to support up to eight Intel Xeon Phi coprocessor modules or up to eight NVIDIA Tesla accelerators, depending on customer needs. Since the machine is positioned for HPC, AI, machine learning, and oil & gas exploration, expect it to have tens of DIMM slots for terabytes of DDR4 memory. The server can also fit 14 hot-swappable 2.5” storage devices with SATA 6 Gbps or U.2/NVMe interfaces (only four bays support U.2) that work in RAID 0, 1, 5, or 10 modes. As for connectivity, the FT77D-B7109 has two 10 GbE ports and a GbE port for IPMI.


The TYAN FT48B-B7100 is a slightly different 4U/2P design that supports up to four Intel Xeon Phi or NVIDIA Tesla compute cards, as well as up to 10 hot-swappable 2.5” SAS/SATA storage devices operating in various RAID modes. TYAN is positioning the server as a cost-effective solution for research institutions, industrial automation, and video capture applications, which is why it has seven PCIe x16 slots overall to fit additional cards for various I/O needs. The machine only has two GbE connections, used for both networking and IPMI.


In addition to HPC machines, TYAN also showcased two Intel Xeon Processor Scalable Family-based dual-socket cloud platforms. The smaller GT75B-B7102 is a 1U machine with up to 10 hot-swap 2.5” storage devices (including four U.2 drives). The larger TN76-B7102 is a 2U server supporting 12 hot-swap 2.5” SSDs or HDDs with a SATA or SAS interface.


TYAN has not announced when exactly it plans to start selling the new servers, but expect the company to start rolling out its new units after Intel makes the new CPUs available later this summer.


Related Reading:




Source: AnandTech – TYAN Shows Two Skylake-SP-Based HPC Servers with Up to 8 Xeon Phi/Tesla Modules

FreeTail EVOKE Series CompactFlash Cards Capsule Review

Digital cameras and camcorders employ memory cards (flash-based removable media) for storage of captured content. There are different varieties of memory cards catering to various performance levels. CompactFlash (CF) became popular in the late 90s, but has now been overtaken by Secure Digital (SD) cards. Despite that, cameras with CF card slots are still being introduced into the market. Today, we will be taking a look at a couple of CF cards in the EVOKE series from FreeTail Tech.


Introduction


CompactFlash (CF) was introduced back in 1994 as a mass storage device format, and it turned out to be the most successful of the first set of such products. Electrically, it is based on a parallel ATA (PATA) interface, which means that there is a hard transfer rate cap of 167 MBps. However, this is more than sufficient even for current-day 4K video encodes.
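

As a quick illustration of why the PATA ceiling is not a practical problem, consider a few common 4K recording bitrates. The codec figures below are generic examples for this sketch, not data from this review:

# Illustrative check of 4K recording bitrates against the CF interface cap (~167 MB/s).
# The codec bitrates are generic example figures, not measurements from this review.
PATA_CAP_MBPS = 167  # megabytes per second

example_codecs_mbit = {
    "4K long-GOP H.264 (100 Mbit/s)": 100,
    "4K XAVC-class (150 Mbit/s)": 150,
    "4K ALL-I high-bitrate (400 Mbit/s)": 400,
}

for name, mbit in example_codecs_mbit.items():
    mbyte_per_s = mbit / 8  # megabits -> megabytes per second
    verdict = "fits" if mbyte_per_s < PATA_CAP_MBPS else "exceeds"
    print(f"{name}: {mbyte_per_s:.1f} MB/s -> {verdict} the {PATA_CAP_MBPS} MB/s cap")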


Having been overtaken by Secure Digital (SD) cards in terms of shipment volume, CF cards are not very attractive on a price-per-GB basis. However, certain cameras leave the consumer with no choice. FreeTail Tech is aiming to serve this market segment with the EVOKE series of CF cards – the main push is in terms of value for money.


There are two members in the EVOKE series – the EVOKE and the EVOKE Pro. The former is the 800x variant, while the Pro is the 1066x variant. There are three capacity points in each – 64GB, 128GB, and 256GB. FreeTail sent over the 800x and 1066x 256GB models for review.



Testbed Setup and Testing Methodology


Evaluation of memory cards is done on Windows with the testbed outlined in the table below. The USB 3.1 Type-C port enabled by the Intel Alpine Ridge controller is used for benchmarking purposes on the testbed side. CF cards are read using the Lexar Professional Workflow CFR1 CompactFlash UDMA 7 USB 3.0 Reader. The reader was placed in the Lexar Professional Workflow HR2 hub and uplinked through its USB 3.0 port with the help of a USB 3.0 Type-A female to Type-C male cable.













AnandTech DAS Testbed Configuration
Motherboard GIGABYTE Z170X-UD5 TH ATX
CPU Intel Core i5-6600K
Memory G.Skill Ripjaws 4 F4-2133C15-8GRR

32 GB (4 × 8 GB)

DDR4-2133 @ 15-15-15-35
OS Drive Samsung SM951 MZVPV256 NVMe 256 GB
SATA Devices Corsair Neutron XT SSD 480 GB

Intel SSD 730 Series 480 GB
Add-on Card None
Chassis Cooler Master HAF XB EVO
PSU Cooler Master V750 750 W
OS Windows 10 Pro x64
Thanks to Cooler Master, GIGABYTE, G.Skill and Intel for the build components

The full details of the reasoning behind choosing the above build components can be found here.





Sequential Accesses


FreeTail claims speeds of up to 160 MBps reads and 85 MBps writes for the 800x card. The 1066x one comes in at 160 MBps reads and 150 MBps writes. However, real-world speeds are bound to be lower. For most applications, that really doesn’t matter as long as the card is capable of sustaining the maximum possible rate at which the camera it is used in dumps data. We use fio workloads to emulate typical camera recording conditions: the workload is run on a fresh card, and again after simulating extended usage. Instantaneous bandwidth numbers are graphed. This gives an idea of performance consistency (whether there is appreciable degradation in performance as the amount of pre-existing data increases and/or the card is subject to wear and tear in terms of the amount and type of NAND writes). Further justification and details of the testing parameters are available here.
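

For readers who want to reproduce the gist of the sequential test at home, the sketch below captures the idea: stream large, incompressible chunks to the card and log the instantaneous bandwidth of each chunk. This is not our actual fio job; the path and the chunk/total sizes are placeholder assumptions.

# Minimal sketch of a camera-style sequential write test with per-chunk bandwidth logging.
# Not the actual fio job used in this review; CARD_PATH and the sizes are assumptions.
import os
import time

CARD_PATH = "E:/bench.bin"            # assumed mount point of the card under test
CHUNK = 16 * 1024 * 1024              # 16 MiB per write, roughly video-recorder-sized bursts
TOTAL = 2 * 1024 * 1024 * 1024        # write 2 GiB in total

buf = os.urandom(CHUNK)               # incompressible data, like encoded video

with open(CARD_PATH, "wb", buffering=0) as f:
    written = 0
    while written < TOTAL:
        t0 = time.perf_counter()
        f.write(buf)
        os.fsync(f.fileno())          # make sure the chunk actually hits the card
        dt = time.perf_counter() - t0
        written += CHUNK
        print(f"{written >> 20:5d} MiB written at {CHUNK / dt / 1e6:6.1f} MB/s")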


[Charts: fio sequential workload, fresh state – FreeTail 1066x 256GB | FreeTail 800x 256GB | Lexar 1066x 128GB]



[Charts: fio sequential workload, simulated used state – FreeTail 1066x 256GB | FreeTail 800x 256GB | Lexar 1066x 128GB]



In the fresh state, the cards exhibit very good consistency. The 1066x variant shows that it can handle sustained writes at around 110 MBps, and reads at around 135 MBps. The corresponding numbers for the 800x variant are 70 MBps and 130 MBps. The other card that we have evaluated before (the Lexar 1066x 128GB) shows better consistency with reads, though overall benchmark numbers are roughly the same between the two 1066x cards.


In the used card scenario, we see that the 800x card has no trouble retaining write consistency, but the 1066x card would occasionally drop to around 80 MBps from its 110 MBps fresh-state figure. The read behavior is more interesting. Both cards start off with numbers similar to the fresh case (around 130 MBps), but end up at around 100 MBps after reading around one-sixth of the card capacity. The Lexar 1066x card doesn’t have any such issues.


AnandTech DAS Suite – Performance Consistency


The AnandTech DAS Suite involves transferring large amounts of photos and videos to and from the storage device using robocopy. This is followed by selected workloads from PCMark 8’s storage benchmark in order to evaluate scenarios such as importing media files directly into multimedia editing programs such as Adobe Photoshop. Details of these tests from the perspective of memory cards are available here.


In this subsection, we deal with performance consistency while processing the robocopy segment. The graph below shows the read and write transfer rates to the memory card while the robocopy processes took place in the background. The data for writing to the card resides in a RAM drive in the testbed. The first three sets of writes and reads correspond to the photos suite. A small gap (for the transfer of the videos suite from the primary drive to the RAM drive) is followed by three sets for the next data set. Another small RAM-drive transfer gap is followed by three sets for the Blu-ray folder. The corresponding graphs for similar cards that we have evaluated before are available via the drop-down selection.
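

The harness itself is straightforward; a simplified version of the timing logic is sketched below. The paths are placeholders, and this is not the exact script used for the DAS suite.

# Simplified sketch of timing one robocopy pass and deriving an average transfer rate.
# SRC/DST are placeholder paths; this is not the exact DAS suite harness.
import os
import subprocess
import time

SRC = r"R:\Photos"   # data set staged on the RAM drive
DST = r"E:\Photos"   # destination folder on the memory card

def dir_size_bytes(path: str) -> int:
    return sum(os.path.getsize(os.path.join(root, name))
               for root, _, files in os.walk(path) for name in files)

size = dir_size_bytes(SRC)
t0 = time.perf_counter()
# /E copies subfolders; the /N* and /NP switches silence per-file output so console I/O doesn't skew timing.
# robocopy uses non-zero exit codes to report success (1 = files copied), hence check=False.
subprocess.run(["robocopy", SRC, DST, "/E", "/NFL", "/NDL", "/NJH", "/NJS", "/NP"], check=False)
elapsed = time.perf_counter() - t0
print(f"Copied {size / 1e6:.0f} MB in {elapsed:.1f} s -> {size / elapsed / 1e6:.1f} MB/s average")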


[Charts: robocopy transfer rates over time – FreeTail 1066x 256GB | FreeTail 800x 256GB | Lexar 1066x 128GB]



Both cards show that they can sustain 25 MBps+ even with a large number of small files. Large files (typical videos) bring out the cards’ best performance.


AnandTech DAS Suite – Bandwidth


The average transfer rates for each workload from the previous section are graphed below. Readers can get a quantitative number to compare the FreeTail 1066x 256GB CF card against the ones that we have evaluated before.



[Charts: robocopy average bandwidth – Photos Read/Write, Videos Read/Write, Blu-ray Folder Read/Write]



The Lexar 1066x card has a slight edge in the write workloads, but reads often favor the FreeTail cards.


We also look at the PCMark 8 storage bench numbers in the graphs below. Note that the bandwidth numbers reported in the results don’t involve idle time compression. Results might appear low, but that is part of the workload characteristic. Note that the same testbed is being used for all memory cards. Therefore, comparing the numbers for each trace should be possible across different cards.



[Charts: PCMark 8 storage bench average bandwidth – Photoshop Light/Heavy, After Effects, and Illustrator traces (read and write)]



Performance Restoration


The traditional memory card use-case is to delete the files on it after the import process is completed. Some prefer to format the card instead, either using the PC or through the options available in the camera menu. The first option is not a great one, given that flash-based storage devices run into bandwidth issues if garbage collection (processes such as TRIM) is not run regularly. Different memory cards have different ways to bring them back to a fresh state. Based on our experience, CF cards have to be formatted after all the partitions are removed using the ‘clean’ command in diskpart.
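

For reference, that refresh procedure can be scripted. The sketch below drives diskpart in script mode; the disk number is an assumption and must be verified against ‘list disk’ first, since pointing it at the wrong disk will wipe that drive.

# Sketch of the clean-then-reformat refresh step via diskpart's script mode (Windows, elevated prompt).
# CARD_DISK_NUMBER is an assumption -- verify it with diskpart's "list disk" before running.
import os
import subprocess
import tempfile

CARD_DISK_NUMBER = 2  # placeholder: the wrong number will wipe the wrong drive

script = (
    f"select disk {CARD_DISK_NUMBER}\n"
    "clean\n"                       # remove all partitions and signatures
    "create partition primary\n"
    "format fs=exfat quick\n"
    "assign\n"                      # give the fresh volume a drive letter
)

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)  # /s runs the commands from the file
finally:
    os.remove(script_path)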


In order to test the effectiveness of the performance restoration process, we run the default sequential workloads in CrystalDiskMark before and after the formatting. Note that this is at the end of all our benchmark runs, and the card is in a used state at the beginning of the process. The corresponding screenshots for similar cards that we have evaluated before are available via the drop-down selection.


[Screenshots: CrystalDiskMark before/after formatting – FreeTail 1066x 256GB | FreeTail 800x 256GB | Lexar 1066x 128GB]



We find that CF cards don’t have significant performance loss after being subject to our stress test. Therefore, the performance gain from the refresh process is also minimal across all our tested cards.


Concluding Remarks


The FreeTail 800x and 1066x cards perform as well as the Lexar 1066x cards for almost all relevant content capture workloads. The Lexar card does have the edge in some of the atypical benchmarks that are part of the PCMark 8 storage bench, but it is highly unlikely that CF cards are going to be subjected to such scenarios (SD cards are a different story, as they are often used in embedded systems and mobile devices).


In addition to raw performance and consistency, pricing is also an important aspect. This is particularly important in the casual user and semi-professional markets, where the value-for-money metric often trumps benchmark numbers. The table below presents the relevant data for the FreeTail 1066x and 800x 256GB CF cards and other similar ones that we have evaluated before. The cards are ordered by the $/GB metric; a quick check of that arithmetic follows the table.








CF Cards – Pricing (as on June 15, 2017)
Card Model Number Capacity (GB) Street Price (USD) Price per GB (USD/GB)
FreeTail 800x 256GB FTCF256A08 256 145 0.57
FreeTail 1066x 256GB FTCF256A10 256 171 0.67
Lexar 1066x 128GB LCF128CRBNA1066 128 110 0.86
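

For reference, the $/GB column is simply street price divided by capacity:

# Price-per-GB figures from the table above (street prices as of June 15, 2017).
cards = {
    "FreeTail 800x 256GB":  (145, 256),
    "FreeTail 1066x 256GB": (171, 256),
    "Lexar 1066x 128GB":    (110, 128),
}
for name, (usd, gb) in sorted(cards.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${usd / gb:.2f}/GB")  # prints 0.57, 0.67 and 0.86 respectively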

We find that the FreeTail cards handily beat the Lexar one in the value proposition metric. Based on our testing, we have no qualms in recommending either FreeTail card for purchase. Semi-professional and casual users will find the pricing to be very attractive.


FreeTail Tech is offering a 10% discount code on Amazon for AnandTech readers. Please enter the code ANAND101 at checkout.




Source: AnandTech – FreeTail EVOKE Series CompactFlash Cards Capsule Review

TYAN Announces AMD EPYC TN70A-B8026 Server: 1P, 16 DIMMs, 26 SSDs, OCuLink

TYAN has introduced its first server and its first motherboard for AMD’s new EPYC processors. The company decided to take a cautious approach to AMD’s EPYC, and the initial machine will be a single-socket server for high-performance all-flash storage applications. Meanwhile, the new platforms from TYAN will be among the first to support OCuLink connections.


The first TYAN platform based on the AMD EPYC 7000-series processors capitalizes on the CPU’s primary advantage besides its core count (up to 32): the number of integrated PCIe 3.0 lanes (up to 128) that can be used to connect NVMe SSDs without any external switches or controllers. The TYAN TN70A-B8026 server is based on the S8026 motherboard, which has 16 DDR4 DIMM slots (two modules are supported per channel, for up to 1 TB of DDR4 in total), two M.2-22110 slots for SSDs (PCIe 3.0 x4), as well as eight SFF-8611 PCIe/OCuLink x8 connectors for 24 hot-swap SSDs in the U.2 form-factor. In total, the server supports 26 PCIe 3.0 x4 SSDs as well as two SATA devices.



The platform also supports five PCIe 3.0 x8 slots via 2U risers (these slots function when storage drives are not using their PCIe connections) as well as one PCIe 3.0 x16 OCP 2.0-capable slot for an EDR InfiniBand or a 100 GbE card. To support even the most power-hungry components, TYAN equips the TN70A-B8026 with redundant 770 W power supplies. As for management and networking, the machine is equipped with an AST2500 BMC with iKVM and Redfish support, two GbE ports (Broadcom BCM5720) for connectivity, and one GbE port for IPMI.



TYAN does not say which SSDs it’s going to use for the TN70A-B8026 or how many terabytes of storage in total the machine can support. What the company does say is that a pair of the SFF-8611 OCuLink x8 connectors can be re-configured (from the BIOS) to support up to 16 SATA 6 Gbps drives, which provides flexibility to server makers or value-add resellers who plan to use the TYAN S8026 motherboard or the TN70A-B8026 server barebones. In fact, the S8026 fits into regular cases that support E-ATX motherboards, so it can be used to build workstations with enhanced storage capabilities.






TYAN TN70A-B8026 Server Barebones SKUs
B8026T70AV16E8HR: 6 PCIe slots, 16 × 2.5″ SATA + 8 × 2.5″ NVMe bays, 770 W redundant PSU, UPC 635872043727
B8026T70AE24HR: 2 PCIe slots, 24 × NVMe bays, 770 W redundant PSU, UPC 635872043734

TYAN did not announce an MSRP or ETA for the TN70A-B8026 server or the S8026 motherboard. Since the server can be equipped with different CPUs and SSDs, its price can vary by orders of magnitude, and it does not make a lot of sense to make guesses at this point. Considering that high-endurance/high-capacity SSDs are quite expensive, a fully populated TYAN TN70A-B8026 machine could easily cross the $100K mark.


Related Reading:




Source: AnandTech – TYAN Announces AMD EPYC TN70A-B8026 Server: 1P, 16 DIMMs, 26 SSDs, OCuLink

Lexar Professional Workflow HR2 4-Bay Thunderbolt 2 / USB 3.0 Reader Hub Review

Content creators in the field often have to deal with large amounts of data spread over multiple flash media. Importing them into a computer for further processing has always been a challenge. Casual users can connect their cameras directly to a PC, while some might prefer taking the card out and using a card reader for this purpose. There are multiple options available in the card reader market. However, professionals who value cutting down the media import time need to opt for readers with a USB 3.0 and/or Thunderbolt interface. Lexar has a range of card readers and a 4-bay hub (the Lexar Professional Workflow HR2) to go with them. In addition to reviewing the hub, we also take the opportunity to develop a framework for reviewing flash-based storage media for non-PC applications in this piece.



Source: AnandTech – Lexar Professional Workflow HR2 4-Bay Thunderbolt 2 / USB 3.0 Reader Hub Review

Imagination Technologies Formally Puts Itself Up For Sale

The fate of Imagination Technologies has become something of a saga in recent months. The prolific IP vendor, Apple’s right-hand supplier for GPU designs and IP over the last decade, found itself on the rocks in April, when Apple announced they would be transitioning away from using Imagination’s IP and designs. Then in May, the company announced that they would be doubling down on the GPU business – their strongest business – by selling off their remaining Ensigma communications and MIPS CPU businesses. Now this morning, the company has announced that they have decided to go another route instead, and will be putting the entire company up for sale.


While the company as a whole was not formally up for sale until today, as you’d expect for a company in difficult circumstances like Imagination, that option has unofficially been on the table since the start. To that end, Imagination has reported that a number of parties have expressed an interest in buying the entire company. As noted in Imagination’s press release:


Imagination Technologies Group plc (LSE: IMG, “Imagination”, “the Group”) announces that over the last few weeks it has received interest from a number of parties for a potential acquisition of the whole Group. The Board of Imagination has therefore decided to initiate a formal sale process for the Group and is engaged in preliminary discussions with potential bidders.


At this time Imagination is not naming any suitors – and indeed is warning that a sale may not go through at all – though at this stage it’s difficult to imagine someone not taking advantage of the situation. Imagination’s PowerVR GPU IP alone is valuable to virtually all of the major SoC vendors, not to mention IP powerhouses and former customers such as Qualcomm, Intel, and of course, Apple.


Meanwhile, the MIPS and Ensigma businesses have yet to be sold, and a buyer could opt to pick those businesses up too. Otherwise, for the time being, Imagination is continuing their efforts to sell off those businesses, and they have already received proposals for both.


As for a potential price for the company, assuming Imagination were purchased wholesale, after today’s announcement the company’s market cap has jumped to £400M (~$500M USD). At about half of the company’s 52-week high, this would be significantly cheaper than it would have been had anyone attempted to purchase the company before the Apple split. The final price tag would then be somewhat higher, as a sale would almost certainly come with a premium over the company’s current stock price.


Finally, while the company looks for potential buyers, they are also continuing their dispute with Apple. At last report, the companies were still going through their contractual dispute resolution process. It’s not clear whether this process would be completed before Imagination finds a buyer.



Source: AnandTech – Imagination Technologies Formally Puts Itself Up For Sale

MSI Announces GeForce GTX 1080 Ti LIGHTNING Z

After posting a teaser video last week, MSI has followed up by announcing their latest ultra-high-end Lightning-branded graphics card: the MSI GeForce GTX 1080 Ti LIGHTNING Z. The triple-slot-width, triple-fan, and triple-8-pin power connector card comes equipped with all the latest in thermal solutions, overclocking design, and shiny colors. Yes, for those hoping that ‘Lightning’ correlates with ‘lighting,’ the LIGHTNING Z comes LED-strewn and slickly-hewn with Mystic Light RGB control, backplate, and alternate colored shroud highlights.


A key feature of the LIGHTNING Z is a BIOS switch that toggles “LN2 Mode,” which lifts power/current and thermal limits. The allure here is that extreme overclockers used to hard volt-modding (with pencil or otherwise) can simply flick the switch when necessary. At the same time, MSI also advertises Military Class 4 components, as well as card features such as V-Check Points, a hardware-based voltage measurement design, and Quadruple Overvoltage, a specialized auxiliary voltage system.















MSI GeForce GTX 1080 Ti LIGHTNING Z
Boost Clock 1721MHz (Lightning Mode)

1695MHz (Gaming Mode)

1582MHz (Silent Mode)
Base Clock 1607MHz (Lightning Mode)

1582MHz (Gaming Mode)

1480MHz (Silent Mode)
Memory Clock 11124MHz (Lightning/Gaming Mode)

11016MHz (Silent Mode)
VRAM 11GB GDDR5X

(352-bit)
TDP 250W
Outputs 2x DP1.4, 2x HDMI2.0b, 1x DL-DVI-D
Power Connectors 3x 8pin
Length 320mm
Width 2.5 Slot (61mm)
Weight 1.7kg
Cooler Type Open Air
Price TBA

Keeping the beast cool is MSI’s Tri-Frozr design, armed with 3 TORX 2.0 Fans (1 x 9cm, 2 x 10cm). Alongside the main heatsink/heatpipe complex, the card has a flatter memory/MOSFET heatsink and heatpipe, as well as a rear heatpipe in between the PCB and backplate. The custom PCB itself possesses 10 layers, 14 GPU power phases, and 3 memory power phases.



And as for Mystic Light, MSI’s LED control software enables users to synchronize and adjust lighting across devices, other components, and peripherals, even changing color schemes from the luxury of your smartphone.


MSI has not released pricing information at this time. The LIGHTNING Z is “expected to be available in July.”


Source: MSI



Source: AnandTech – MSI Announces GeForce GTX 1080 Ti LIGHTNING Z

Toshiba Selects Preferred Buyer For Memory Business

Toshiba has selected a consortium as their preferred bidder in the sale of Toshiba’s memory business. The consortium is led by the Innovation Network Corporation of Japan, an investment partnership between the Japanese government and 26 Japanese corporations. Other major partners in the winning consortium include Bain Capital and competing memory manufacturer SK Hynix. Toshiba hopes to have an agreement in place in time for their June 28 annual shareholder meeting and to close the deal by March 2018.


Meanwhile, Western Digital continues to object to Toshiba’s efforts to spin off and sell their portion of the Toshiba–SanDisk joint venture. Western Digital has not been able to keep pace in the bidding war for Toshiba’s memory business, and they are seeking to intervene in any attempt by Toshiba to conduct a sale without consent from Western Digital’s SanDisk subsidiary. In May, Western Digital initiated arbitration proceedings against Toshiba, and last week Western Digital filed for a preliminary injunction to prevent Toshiba from selling the memory business until the arbitration is resolved. A hearing on the injunction request is scheduled for July 14.


A profitable sale of the memory business is crucial to Toshiba’s financial health as other portions of the conglomerate are deeply troubled. Toshiba’s Westinghouse nuclear power subsidiary filed for Chapter 11 bankruptcy earlier this year after an annual loss of around $9 billion. Those losses and continued effects from previous accounting scandals forced Toshiba to put their thriving flash memory manufacturing business on the market as the only way to raise enough money in a short timeframe. The winning bid for Toshiba’s memory business is expected to be at least $18 billion. No matter who ends up buying the Toshiba memory business, the landscape of the flash memory market will be very different. Toshiba is currently the second-largest manufacturer of NAND flash memory, behind Samsung.



Source: AnandTech – Toshiba Selects Preferred Buyer For Memory Business