More States Than Ever Have High Obesity Rates, CDC Data Shows

More and more Americans across the country are living with obesity, new data from the Centers for Disease Control and Prevention shows. The number of states and territories with high obesity rates has more than doubled since 2018. There are also wide disparities in obesity prevalence among different groups of…

Read more…



Source: Gizmodo – More States Than Ever Have High Obesity Rates, CDC Data Shows

Your Name Director’s New Anime Has A Girl Romancing A Chair

While most anime have a muscle-bound hunk or an aloof loner as a protagonist’s romantic partner, the new film from the director of international box-office hit Your Name goes a slightly different route. Suzume, the film’s leading lady, romances a wooden chair. This isn’t an Onion article, that’s the for-realsies…

Read more…



Source: Kotaku – Your Name Director’s New Anime Has A Girl Romancing A Chair

[$] How to fix an ancient GDB problem

The GDB debugger has a long history; it was first created in 1986. It may thus be unsurprising that some GDB development happens over relatively long time frames but, even when taking that into account, the existence of an open bug first reported in 2007 may be a little surprising. At the 2022 GNU Tools Cauldron, GDB maintainer Pedro Alves talked about why this problem has been difficult to solve, and what the eventual solution looks like.

Source: LWN.net – [$] How to fix an ancient GDB problem

Black Panther: Wakanda Forever's Runtime Is Almost Forever

We now know the runtime for Black Panther: Wakanda Forever. According to the Hollywood Reporter, Disney has confirmed that the highly anticipated Marvel sequel runs two hours and 41 minutes, which is long but still shorter than Avengers: Endgame, which ran just over three hours.

Read more…



Source: Gizmodo – Black Panther: Wakanda Forever’s Runtime Is Almost Forever

What Parents Need to Know About Enterovirus D68

The CDC recently issued an alert to healthcare providers about enterovirus D68, which has turned up in children who were hospitalized with severe respiratory illnesses. This virus can also cause a form of paralysis known as acute flaccid myelitis. Most illnesses with this virus do not cause the paralysis, but it’s…

Read more…



Source: LifeHacker – What Parents Need to Know About Enterovirus D68

Mystery Hackers Are 'Hyperjacking' Targets for Insidious Spying

For decades, security researchers warned about techniques for hijacking virtualization software. Now one group has put them into practice. From a report: For decades, virtualization software has offered a way to vastly multiply computers’ efficiency, hosting entire collections of computers as “virtual machines” on just one physical machine. And for almost as long, security researchers have warned about the potential dark side of that technology: theoretical “hyperjacking” and “Blue Pill” attacks, where hackers hijack virtualization to spy on and manipulate virtual machines, with potentially no way for a targeted computer to detect the intrusion. That insidious spying has finally jumped from research papers to reality with warnings that one mysterious team of hackers has carried out a spree of “hyperjacking” attacks in the wild.

Today, Google-owned security firm Mandiant and virtualization firm VMware jointly published warnings that a sophisticated hacker group has been installing backdoors in VMware’s virtualization software on multiple targets’ networks as part of an apparent espionage campaign. By planting their own code in victims’ so-called hypervisors (the VMware software that runs on a physical computer to manage all the virtual machines it hosts), the hackers were able to invisibly watch and run commands on the computers those hypervisors oversee. And because the malicious code targets the hypervisor on the physical machine rather than the victim’s virtual machines, the hackers’ trick multiplies their access and evades nearly all traditional security measures designed to monitor those target machines for signs of foul play.

“The idea that you can compromise one machine and from there have the ability to control virtual machines en masse is huge,” says Mandiant consultant Alex Marvi. And even closely watching the processes of a target virtual machine, he says, an observer would in many cases see only “side effects” of the intrusion, given that the malware carrying out that spying had infected a part of the system entirely outside its operating system. Mandiant discovered the hackers earlier this year and brought their techniques to VMware’s attention. Researchers say they’ve seen the group carry out their virtualization hacking — a technique historically dubbed hyperjacking in a reference to “hypervisor hijacking” — in fewer than 10 victims’ networks across North America and Asia. Mandiant notes that the hackers, which haven’t been identified as any known group, appear to be tied to China.

Read more of this story at Slashdot.



Source: Slashdot – Mystery Hackers Are ‘Hyperjacking’ Targets for Insidious Spying

MSI Stealth 15M review: Coasting on its good looks

It’s only natural that a person’s tastes and preferences change over time. So after years of thirsting for big, beefy gaming laptops with shiny lights, I’ve started gravitating towards more understated all-rounders that don’t scream “Look at me.” And for the last few generations, MSI’s Stealth 15M line has been one of the best at balancing good performance with a discreet appearance. But unfortunately, it feels like MSI is coasting with the 2022 model. While there aren’t a ton of major flaws, things like the Stealth’s display, battery life and audio just aren’t quite as nice as I would’ve liked.

Design

For a gaming notebook, the Stealth 15M is about as incognito as it gets. It’s got a simple, somewhat boxy build with a matte black finish (which is a bit of a fingerprint magnet, by the way). The only visual flair, at least on the outside, is MSI’s dragon logo, which gets a new holographic treatment for 2022.

Then you open it up and you get MSI’s lovely Spectrum backlit keyboard, a big speaker grille that runs the width of the deck and a smallish touchpad. Along its sides, MSI includes a good selection of ports including four USB 3.2 ports (two Type-A and two Type-C, one of which supports DisplayPort), a full-size HDMI 2.1 connector and a combo headphone/mic jack. And with a weight of just under four pounds (3.96 lbs), the Stealth is actually a touch lighter than a lot of other 15-inch gaming laptops (and some 14-inch systems too).

Display, sound and webcam

On paper, the Stealth 15M’s screen looks like a perfect match for its specs. It’s a 15.6-inch IPS panel with a 144Hz refresh rate. It even has a matte finish to help prevent distracting reflections. The issue is that because it has somewhat dull colors and a tested brightness of around 250 nits, movies and games look kind of lifeless. Sure, if you like gaming in darker environments, it’s not a big deal. But its mediocre light output also means that in sunny rooms, it can be difficult to read text, especially if you’re someone who prefers dark mode apps.

The Stealth 15M features a 15.6-inch IPS display with a 1920 x 1080 resolution. Sadly, with a brightness of around 250 nits, it can look a bit dim and lifeless, particularly in sunny rooms.
Sam Rutherford/Engadget

As for audio, the Stealth features dual two-watt speakers that can get pretty loud, though they are lacking a bit of bass. Don’t get me wrong, they’re perfectly fine; I was just hoping for a little more considering the size of its grille. And then perched above the display is a 720p webcam, which is serviceable but doesn’t deliver the kind of quality you’d want for live streaming. It’s more so you can show your face during Zoom meetings, and that’s about it.

But once again, while nothing is egregiously bad, I feel like MSI is doing the bare minimum here. Its speakers are just ok, its webcam doesn’t even capture full HD and that big chin below the display makes the whole laptop look sort of dated.

Performance

The Stealth 15M has a great selection of ports, including two USB-A ports, two USB-C ports, a full-size HDMI 2.1 port and a combo headphone/mic jack.
Sam Rutherford/Engadget

When it comes to performance, the Stealth has plenty of oomph thanks to an Intel Core i7-1280P CPU and an NVIDIA RTX 3060 GPU. Our review unit even comes with a 1TB SSD and 32GB of RAM, the latter of which is arguably overkill given the rest of the system’s specs. However, you’ll want to make sure you figure out where the Stealth’s fan speed settings are in the MSI Center app, because when this thing spins up, you’re in for more than just a subtle whoosh.

In Shadow of the Tomb Raider at 1920 x 1080 and highest settings, the Stealth averaged 106 fps, which is just a tiny bit better than the 102 fps we got from the similarly-sized Alienware x14. Meanwhile, in Metro Exodus, the Stealth tied the Alienware’s performance, with both machines hitting 55 fps at full HD and ultra settings. So not exactly face-melting horsepower, but still more than enough to play modern AAA titles with plenty of graphical bells and whistles enabled.

Keyboard and touchpad

The Spectrum keyboard on the Stealth 15M has a soft, cushy press, though sadly, you can't adjust its color pattern like on a lot of other gaming laptops.
Sam Rutherford/Engadget

One thing I really like about the Stealth 15M is its Spectrum keyboard. Not only do the keys have a soft, cushy press, they let just the right amount of light leak out the sides, adding a little razzle dazzle without searing your retinas. And of course, you can turn everything off if you want to go fully undercover. Below that you get a touchpad that measures just four inches wide and two and a half inches tall, which can feel a bit cramped at times. That said, having an undersized touchpad isn’t as big of a deal as it might be on a more mainstream notebook. Most gamers will probably carry an external mouse since touchpads really aren’t ideal for gaming.

Battery life

Perhaps the biggest weakness of the Stealth 15M is its battery life. It comes with a 53.8Whr power cell, which feels frustratingly small compared to the Alienware x14, whose battery is 50 percent larger at 80Whr, despite both systems being about the same size. That results in some pretty disappointing longevity, with the Stealth lasting just four hours and 15 minutes on our local video rundown test versus 9:45 for the x14 and 5:42 for the more powerful Razer Blade 15.

Wrap-up

Unfortunately, it feels like MSI neglected the line with the 2022 Stealth 15M: aside from a new badge on its lid and a refreshed CPU and GPU, not a ton has changed from the previous model.
Sam Rutherford/Engadget

After using the Stealth 15M for a while, I’m not really mad, I’m just disappointed. I love the general design and aesthetic, and the Stealth delivers a great balance of performance and portability. In a lot of ways it feels like a more grown-up take on the thin-and-light gaming laptop.

The issue is that it almost feels like MSI has neglected the Stealth line. Compared to previous years, the main upgrades for 2022 are a refreshed CPU and GPU along with a new badge on its lid. That’s not nothing, but I know MSI can do better and I’m really hoping to see the Stealth get a full redesign sometime soon.

Ultimately, assuming you can stomach the short battery life, the value of the Stealth 15M hinges a lot on its price. I’ve seen this thing listed as high as $1,700 from retailers like Walmart, which is simply too much. At that point, you’re much better off going for a notebook with a slightly smaller screen like the Alienware x14 and getting very similar performance, or opting for Asus’ Zephyrus G14 and saving a couple hundred bucks in the process. But if you can nab the Stealth for under $1,400, a lot of the system’s trade-offs become a lot more palatable. I just wish this version of the Stealth felt more like James Bond and less like Agent Cody Banks.



Source: Engadget – MSI Stealth 15M review: Coasting on its good looks

Affordable Gigabyte B650 Motherboards Begin To Emerge For Mainstream AMD Zen 4 PCs

Affordable Gigabyte B650 Motherboards Begin To Emerge For Mainstream AMD Zen 4 PCs
AMD’s Zen 4-based Ryzen 7000 processors have attractive performance, but their value proposition is questionable. That’s due to the cost of entry: you need speedy DDR5 memory for the best performance, and the available X670 motherboards aren’t cheap. Inexpensive B650-based boards are on the way, with AMD holding a livestream on October 4th

Source: Hot Hardware – Affordable Gigabyte B650 Motherboards Begin To Emerge For Mainstream AMD Zen 4 PCs

Get Ready for Choreographed Bullet Time in The Matrix Dance Adaptation From Danny Boyle

Plug into a rather unique take on the Wachowski siblings’ The Matrix by way of Danny Boyle (28 Days Later), who’s directing a staged dance interpretation of the 1999 sci-fi hit. Yes, you read that correctly—dance, because we’re clamoring for a choreographed, performance-art adaptation of the action movie franchise.

Read more…



Source: Gizmodo – Get Ready for Choreographed Bullet Time in The Matrix Dance Adaptation From Danny Boyle

Why You Should See a Therapist Even If You Don’t ‘Need’ One

Although the stigma surrounding mental health and therapy is fading, it’s still there. Many people think of therapy as something you turn to only when you’re actively struggling with your mental health, or as something only weak and unsuccessful people use. For folks who do go to therapy for one reason or another, one…

Read more…



Source: LifeHacker – Why You Should See a Therapist Even If You Don’t ‘Need’ One

Meta reportedly suspends all hiring, warns staff of possible layoffs

As with many other industries, the tech sector has been feeling the squeeze of the global economic slowdown this year. Meta isn’t immune from that. Reports in May suggested that the company would slow down the rate of new hires this year. Now, Bloomberg reports that Meta has put all hiring on hold. 

CEO Mark Zuckerberg is also said to have told staff that there’s likely more restructuring and downsizing on the way. “I had hoped the economy would have more clearly stabilized by now, but from what we’re seeing it doesn’t yet seem like it has, so we want to plan somewhat conservatively,” Zuckerberg reportedly told employees. The company is planning to reduce budgets for most of its teams, according to Bloomberg. Zuckerberg is said to be leaving headcount decisions in the hands of team leaders. Measures may include moving people to other teams and not hiring replacements for folks who leave.

Meta declined to comment on the report. The company directed Engadget to a comment that Zuckerberg made during Meta’s most recent earnings call in July. “Given the continued trends, this is even more of a focus now than it was last quarter,” Zuckerberg said. “Our plan is to steadily reduce headcount growth over the next year. Many teams are going to shrink so we can shift energy to other areas, and I wanted to give our leaders the ability to decide within their teams where to double down, where to backfill attrition, and where to restructure teams while minimizing thrash to the long-term initiatives.”

In that earnings report, Meta disclosed that, in the April-June quarter, its revenue dropped by one percent year-over-year. It’s the first time the company has ever reported a fall in revenue.

Word of the hiring freeze ties in with a report from last week, which suggested that Meta has quietly been ushering some workers out the door rather than conducting formal layoffs. In July, it emerged that the company asked team heads to identify “low performers” ahead of possible downsizing.

Developing…



Source: Engadget – Meta reportedly suspends all hiring, warns staff of possible layoffs

Meta Announces Hiring Freeze, Warns Employees of Restructuring

Meta Platforms, the owner of Facebook and Instagram, said it will freeze hiring and restructure some teams in an effort to cut costs and shift priorities. From a report: Chief Executive Officer Mark Zuckerberg announced the social networking company’s freeze during a weekly Q&A session with employees, according to a person in attendance. He added that the company would reduce budgets across most teams, even teams that are growing, and that individual teams will sort out how to handle headcount changes — whether that means not filling roles that employees depart, shifting people to other teams, or working to “manage out people who aren’t succeeding,” according to remarks reviewed by Bloomberg. “I had hoped the economy would have more clearly stabilized by now, but from what we’re seeing it doesn’t yet seem like it has, so we want to plan somewhat conservatively,” Zuckerberg said.

Read more of this story at Slashdot.



Source: Slashdot – Meta Announces Hiring Freeze, Warns Employees of Restructuring

Razer Joins With Verizon And Qualcomm For A Cut Of Cloud Streaming Handheld Console Pie

Razer Joins With Verizon And Qualcomm For A Cut Of Cloud Streaming Handheld Console Pie
Verizon announced an Android-based handheld collaboration with Razer and Qualcomm during the Mobile World Congress keynote yesterday. The new gaming handheld will be officially unveiled at RazerCon on October 15, 2022.

The handheld gaming arena is quickly getting more congested, as more and more companies announce new devices. Logitech

Source: Hot Hardware – Razer Joins With Verizon And Qualcomm For A Cut Of Cloud Streaming Handheld Console Pie

AI is already better at lip reading than we are

They Shall Not Grow Old, a 2018 documentary about the lives and aspirations of British and New Zealand soldiers living through World War I from acclaimed Lord of the Rings director Peter Jackson, had its hundred-plus-year-old silent footage modernized through both colorization and the recording of new audio for previously non-existent dialog. To get an idea of what the folks featured in the archival footage were saying, Jackson hired a team of forensic lip readers to guesstimate their recorded utterances. Reportedly, “the lip readers were so precise they were even able to determine the dialect and accent of the people speaking.”

“These blokes did not live in a black and white, silent world, and this film is not about the war; it’s about the soldier’s experience fighting the war,” Jackson told the Daily Sentinel in 2018. “I wanted the audience to see, as close as possible, what the soldiers saw, and how they saw it, and heard it.”

That is quite the linguistic feat given that a 2009 study found that most people can only read lips with around 20 percent accuracy and the CDC’s Hearing Loss in Children Parent’s Guide estimates that, “a good speech reader might be able to see only 4 to 5 words in a 12-word sentence.” Similarly, a 2011 study out of the University of Oklahoma saw only around 10 percent accuracy in its test subjects.

“Any individual who achieved a CUNY lip-reading score of 30 percent correct is considered an outlier, giving them a T-score of nearly 80, three times the standard deviation from the mean. A lip-reading recognition accuracy score of 45 percent correct places an individual 5 standard deviations above the mean,” the 2011 study concluded. “These results quantify the inherent difficulty in visual-only sentence recognition.”

For humans, lip reading is a lot like batting in the Major Leagues — consistently get it right even just three times out of ten and you’ll be among the best to ever play the game. For modern machine learning systems, lip reading is more like playing Go — just round after round of beating up on the meatsacks that created and enslaved you — with today’s state-of-the-art systems achieving well over 95 percent sentence-level word accuracy. And as they continue to improve, we could soon see a day where tasks from silent-movie processing and silent dictation in public to biometric identification are handled by AI systems.

Context matters


One would think that humans would be better at lip reading by now, given that we’ve been officially practicing the technique since the days of the Spanish Benedictine monk Pedro Ponce de León, who is credited with pioneering the idea in the 16th century.

“We usually think of speech as what we hear, but the audible part of speech is only part of it,” Dr. Fabian Campbell-West, CTO of lip reading app developer, Liopa, told Engadget via email. “As we perceive it, a person’s speech can be divided into visual and auditory units. The visual units, called visemes, are seen as lip movements. The audible units, called phonemes, are heard as sound waves.”

“When we’re communicating with each other, face-to-face is often preferred because we are sensitive to both visual and auditory information,” he continued. “However, there are approximately three times as many phonemes as visemes. In other words, lip movements alone do not contain as much information as the audible part of speech.”

“Most lipreading actuations, besides the lips and sometimes tongue and teeth, are latent and difficult to disambiguate without context,” then-Oxford University researcher and LipNet developer, Yannis Assael, noted in 2016, citing Fisher’s earlier studies. These homophemes are the secret to Bad Lip Reading’s success.
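The many-to-one mapping behind those homophemes is easy to see in a toy table. The groupings below are illustrative (viseme inventories vary between studies), but the classic case is real: the bilabial consonants /p/, /b/ and /m/ all look identical on the lips, which is why "pat," "bat" and "mat" are indistinguishable to a lip reader.

```python
# Illustrative viseme -> phoneme groupings; real inventories vary by study,
# but each viseme covering several phonemes is the source of homophemes.
VISEME_TO_PHONEMES = {
    "bilabial":    ["p", "b", "m"],  # lips pressed together: pat/bat/mat
    "labiodental": ["f", "v"],       # upper teeth on lower lip: fan/van
    "rounded":     ["w", "r"],       # rounded lips (a rough grouping)
}

# Count how many phonemes collapse onto each visual class.
ambiguity = {v: len(ps) for v, ps in VISEME_TO_PHONEMES.items()}
print(ambiguity)  # {'bilabial': 3, 'labiodental': 2, 'rounded': 2}

# A lip reader seeing a bilabial closure cannot tell these words apart:
candidates = [p + "at" for p in VISEME_TO_PHONEMES["bilabial"]]
print(candidates)  # ['pat', 'bat', 'mat']
```

With only these three groups, seven phonemes already collapse into three visual classes; a full phoneme inventory makes the ambiguity far worse.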

What’s wild is that Bad Lip Reading will generally work in any spoken language, whether it’s stress-based like English or tonal like Vietnamese. “Language does make a difference, especially those with unique sounds that aren’t common in other languages,” Campbell-West said. “Each language has syntax and pronunciation rules that will affect how it is interpreted. Broadly speaking, the methods for understanding are the same.”

“Tonal languages are interesting because they use the same word with different tone (like musical pitch) changes to convey meaning,” he continued. “Intuitively this would present a challenge for lip reading, however research shows that it’s still possible to interpret speech this way. Part of the reason is that changing tone requires physiological changes that can manifest visually. Lip reading is also done over time, so the context of previous visemes, words and phrases can help with understanding.”

“It matters in terms of how good your knowledge of the language is because you’re basically limiting the set of ambiguities that you can search for,” Adrian KC Lee, ScD, Professor and Chair of the Department of Speech and Hearing Sciences at the University of Washington, told Engadget. “Say, ‘cold’ and ‘hold,’ right? If you just sit in front of a mirror, you can’t really tell the difference. So from a physical point of view, it’s impossible, but if I’m holding something versus talking about the weather, you, by the context, already know.”
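Lee's "cold"/"hold" example can be sketched as a tiny rescoring step. The bigram counts below are made up purely for illustration; the point is only that visually identical candidates get ranked by linguistic context rather than by anything visible on the lips.

```python
# Hypothetical bigram counts standing in for a real language model.
BIGRAM_COUNTS = {
    ("a", "cold"): 9, ("a", "hold"): 1,
    ("take", "hold"): 8, ("take", "cold"): 0,
}

def disambiguate(prev_word, candidates):
    # The candidates look identical on the lips; context breaks the tie.
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((prev_word, w), 0))

print(disambiguate("a", ["cold", "hold"]))     # cold  ("catch a cold")
print(disambiguate("take", ["cold", "hold"]))  # hold  ("take hold")
```

Real systems use far richer language models, but the shape of the trick is the same: the visual signal proposes, the context disposes.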

In addition to the general context of the larger conversation, much of what people convey when they speak comes across non-verbally. “Communication is usually easier when you can see the person as well as hear them,” Campbell-West said, “but the recent proliferation of video calls has shown us all that it’s not just about seeing the person; there’s a lot more nuance. There is a lot more potential for building intelligent automated systems for understanding human communication than what is currently possible.”

Missing a forest for the trees, linguistically

While human and machine lip readers have the same general end goal, the aims of their individual processes differ greatly. As a team of researchers from Iran University of Science and Technology argued in 2021, “Over the past years, several methods have been proposed for a person to lip-read, but there is an important difference between these methods and the lip-reading methods suggested in AI. The purpose of the proposed methods for lip-reading by the machine is to convert visual information into words… However, the main purpose of lip-reading by humans is to understand the meaning of speech and not to understand every single word of speech.”

In short, “humans are generally lazy and rely on context because we have a lot of prior knowledge,” Lee explained. And it’s that dissonance in process — the linguistic equivalent of missing a forest for the trees — that presents such a unique challenge to the goal of automating lip reading.

“A major obstacle in the study of lipreading is the lack of a standard and practical database,” said Mingfeng Hao of Xinjiang University. “The size and quality of the database determine the training effect of this model, and a perfect database will also promote the discovery and solution of more and more complex and difficult problems in lipreading tasks.” Other obstacles can include environmental factors like poor lighting and shifting backgrounds which can confound machine vision systems, as can variances due to the speaker’s skin tone, the rotational angle of their head (which shifts the viewed angle of the mouth) and the obscuring presence of wrinkles and beards.

As Assael notes, “Machine lipreading is difficult because it requires extracting spatiotemporal features from the video (since both position and motion are important).” However, as Hao’s 2020 A Survey on Lip Reading Technology explains, “action recognition, which belongs to video classification, can be classified through a single image,” while “lipreading often needs to extract the features related to the speech content from a single image and analyze the time relationship between the whole sequence of images to infer the content.” It’s an obstacle that requires both natural language processing and machine vision capabilities to overcome.
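Assael's point about spatiotemporal features can be made concrete with a toy 3D convolution: unlike a 2D filter applied frame by frame, the kernel below spans several frames at once, so its response depends on how the mouth moves over time, not just where it is in any single image. This is a from-scratch NumPy sketch, not any particular lipreading model.

```python
import numpy as np

# Toy "video": 8 frames of 16x16 grayscale mouth crops.
video = np.random.rand(8, 16, 16)

# A single 3x3x3 spatiotemporal kernel: it spans 3 frames as well as a
# 3x3 spatial patch, so it responds to motion, not just appearance.
kernel = np.random.rand(3, 3, 3)

def conv3d_valid(vol, k):
    """Naive valid-mode 3D cross-correlation over (time, height, width)."""
    t, h, w = vol.shape
    kt, kh, kw = k.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for m in range(out.shape[2]):
                out[i, j, m] = np.sum(vol[i:i+kt, j:j+kh, m:m+kw] * k)
    return out

features = conv3d_valid(video, kernel)
print(features.shape)  # (6, 14, 14): shrunk in time and space alike
```

Production models stack many such filters (and learn them), then hand the resulting per-frame feature sequence to a language-aware sequence model, which is where the NLP half of the problem comes in.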

Acronym soup

Today, speech recognition comes in three flavors, depending on the input source. What we’re talking about today falls under Visual Speech Recognition (VSR) research — that is, using only visual means to understand what is being conveyed. There’s also Automated Speech Recognition (ASR), which relies entirely on audio, i.e., “Hey Siri,” and Audio-Visual Automated Speech Recognition (AV-ASR), which incorporates both audio and visual cues into its guesses.
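A minimal way to see why AV-ASR helps is late fusion: score candidate words with each modality separately, then combine the scores. The numbers below are invented; note how the near-uniform visual scores (homophemes again) barely discriminate on their own, yet still sharpen the audio-only decision once the log-likelihoods are summed.

```python
import numpy as np

words = ["cold", "hold", "gold"]
# Invented per-word log-likelihoods from each single-modality recognizer.
audio_logp  = np.log([0.50, 0.30, 0.20])  # ASR: audio only
visual_logp = np.log([0.40, 0.30, 0.30])  # VSR: the mouth shapes look alike
# AV-ASR-style late fusion: sum log-likelihoods (multiply probabilities).
fused = audio_logp + visual_logp
print(words[int(np.argmax(fused))])  # cold
```

Real AV-ASR systems fuse the modalities earlier, inside the network, but the principle is the same: each channel covers for the other's blind spots, which is why the combination degrades gracefully in noise.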

“Research into automatic speech recognition (ASR) is extremely mature and the current state of the art is unrecognizable compared to what was possible when the research started,” Campbell-West said. “Visual speech recognition (VSR) is still at the relatively early stages of exploitation and systems will continue to mature.” Liopa’s SRAVI app, which enables hospital patients to communicate regardless of whether they can actively verbalize, relies on the latter methodology. “This can use both modes of information to help overcome the deficiencies of the other,” he said. “In future there will absolutely be systems that use additional cues to support understanding.”

“There are several differences between VSR implementations,” Campbell-West continued. “From a technical perspective the architecture of how the models are built is different … Deep-learning problems can be approached from two different angles. The first is looking for the best possible architecture, the second is using a large amount of data to cover as much variation as possible. Both approaches are important and can be combined.”

In the early days of VSR research, datasets like AVLetters had to be hand-labeled and -categorized, a labor-intensive limitation that severely restricted the amount of data available for training machine learning models. As such, initial research focused first on the absolute basics — alphabet and number-level identification — before eventually advancing to word- and phrase-level identification, with sentence-level being today’s state-of-the-art which seeks to understand human speech in more natural settings and situations.

In recent years, the rise of more advanced deep learning techniques, which train models on essentially the internet at large, along with the massive expansion of social and visual media posted online, has enabled researchers to generate far larger datasets, like the Oxford-BBC Lip Reading Sentences 2 (LRS2), which is based on thousands of spoken lines from various BBC programs. LRS3-TED gleaned 150,000 sentences from various TED programs while the LSVSR (Large-Scale Visual Speech Recognition) database, among the largest currently in existence, offers 140,000 hours of audio segments with 2,934,899 speech statements and over 127,000 words.

And it’s not just English: Similar datasets exist for a number of languages such as HIT-AVDB-II, which is based on a set of Chinese poems, or IV2, a French database composed of 300 people saying the same 15 phrases. Similar sets exist too for Russian, Spanish and Czech-language applications.

Looking ahead

VSR’s future could wind up looking a lot like ASR’s past, says Campbell-West: “There are many barriers for adoption of VSR, as there were for ASR during its development over the last few decades.” Privacy is a big one, of course. Though the younger generations are less inhibited with documenting their lives online, Campbell-West said, “people are rightly more aware of privacy now than they were before. People may tolerate a microphone while not tolerating a camera.”

Regardless, Campbell-West remains excited about VSR’s potential future applications, such as high-fidelity automated captioning. “I envisage a real-time subtitling system so you can get live subtitles in your glasses when speaking to someone,” Campbell-West said. “For anyone hard-of-hearing this could be a life-changing application, but even for general use in noisy environments this could be useful.”

“There are circumstances where noise makes ASR very difficult but voice control is advantageous, such as in a car,” he continued. “VSR could help these systems become better and safer for the driver and passengers.”

On the other hand, Lee, whose lab at UW has researched Brain-Computer Interface technologies extensively, sees wearable text displays more as a “stopgap” measure until BCI tech further matures. “We don’t necessarily want to sell BCI to that point where, ‘Okay, we’re gonna do brain-to-brain communication without even talking out loud,’” Lee said. “In a decade or so, you’ll find biological signals being leveraged in hearing aids, for sure. As little as [the device] seeing where your eyes glance may be able to give it a clue on where to focus listening.”

“I hesitate to really say, ‘Oh yeah, we’re gonna get brain-controlled hearing aids,’” Lee conceded. “I think it is doable, but you know, it will take time.”



Source: Engadget – AI is already better at lip reading than we are

Google Killing Stadia, Issuing Refunds for Hardware and Games

Stadia is not long for this world. Google’s cloud gaming service will persist for a few more months until it shuts down completely in January. The news came via a blog post by Stadia’s General Manager, Phil Harrison, where he goes into detail on how Google’s phasing out its cloud gaming service. Google will refund all…

Read more…



Source: Gizmodo – Google Killing Stadia, Issuing Refunds for Hardware and Games

Photos Show Hurricane Ian's Path of Destruction

Hurricane Ian made landfall near Florida’s Punta Gorda yesterday as a category 4 storm, bringing huge storm surges and high winds. It has since weakened into a tropical storm but is predicted to continue flooding parts of Florida, according to the National Hurricane Center.

Read more…



Source: Gizmodo – Photos Show Hurricane Ian’s Path of Destruction

Amazon’s self-branded TVs get fancier, with quantum dots, local dimming


Amazon’s Fire TV Omni QLED Series with Alexa widgets displayed. (credit: Amazon)

A year after it started pushing its own TVs, Amazon is expanding its lineup with pricier, more advanced options. The Fire TV Omni QLED Series announced yesterday at the invite-only Amazon hardware event shows the tech giant upping the ante with quantum dot displays and more evolved features for smart homes.

Amazon’s first self-branded TVs came last September, ranging from the more budget-friendly 4-Series, which originally started at $370 for 43 inches, to the Omni Series, which originally cost $1,100 for the largest model, at 75 inches. The 4K TVs aren’t particularly unique. They’re HDR TVs and include HDMI 2.1, with eARC for soundbars, and feature variable refresh rates from a mere 48–60 Hz at 4K. Amazon Alexa is also present, of course. Alexa can work when the TVs are off, enable voice control, and work with Alexa Routines, but that’s not an Amazon exclusive among modern TVs.

Amazon is paying a little more attention to image quality with the Omni QLED Series; it still avoids specific claims, though, like brightness or color coverage specs. The new 65- and 75-inch TVs use Samsung Display’s QLED technology with quantum dots for a claimed boost in color, plus full-array local dimming to boost contrast.

Read 10 remaining paragraphs | Comments



Source: Ars Technica – Amazon’s self-branded TVs get fancier, with quantum dots, local dimming

Chess Players Are Convinced That The Anal Bead Controversy Is Causing More Online Cheating

Remember the anal bead cheating controversy? As much as I’d like to forget, it turns out that a lot of people are still thinking about it. Some chess players have reported an uptick of computer-aided cheating in Chess.com matches ever since the story about Hans Niemann broke. One player has dubbed the recent surge in…

Read more…



Source: Kotaku – Chess Players Are Convinced That The Anal Bead Controversy Is Causing More Online Cheating