ChatGPT Is Getting on the AI Age Verification Bandwagon

When OpenAI first announced GPT-5.2 last month, it quietly disclosed a new safety feature it called “age prediction.” Considering ChatGPT proper isn’t exactly an “all ages” kind of tool, it makes sense that users under the age of 18 should have protections in place to shield them from harmful content. The company says that users who indicate they’re under 18 already receive an altered experience to “reduce exposure to sensitive or potentially harmful content,” but if the user doesn’t voluntarily share how old they are with OpenAI, how does the company enforce these protections? Here’s where age prediction comes in.

How age prediction for ChatGPT works

On Tuesday, OpenAI officially announced its new age prediction policy, which, like other age verification systems being used by the likes of Roblox, uses AI to guess how old a user is. If the system decides that a particular user is under the age of 18, OpenAI will adjust the experience accordingly, with the goal of keeping all interactions age-appropriate.

Here’s how it works: The new age prediction model looks at both the user’s behavior within the app and general account data. That includes things like how old the account is, what times of day the user accesses ChatGPT, usage patterns, and, of course, the age the user says they are. From all of this data, the model estimates how old the user likely is. If it thinks they’re over 18, they get the full experience; if it thinks they’re under 18, they get the “safer experience.” If the model isn’t confident either way, it defaults to the safer experience.
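OpenAI hasn’t published how the model actually weighs those signals, so treat the following as a purely hypothetical sketch of the decision flow described above, not OpenAI’s implementation: a handful of weak signals feed a confidence estimate, and anything short of a confident “adult” call falls back to the restricted experience. Every signal name, weight, and threshold here is invented for illustration.

    # Hypothetical sketch only; signal names, weights, and threshold are invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AccountSignals:
        stated_age: Optional[int]     # what the user told OpenAI, if anything
        account_age_days: int         # how old the account is
        late_night_ratio: float       # share of sessions between midnight and 5 a.m.
        usage_pattern_score: float    # made-up 0-1 "reads like an adult" score

    def estimate_adult_probability(s: AccountSignals) -> float:
        # Toy stand-in for the real classifier: nudge a base score up or down.
        score = 0.5
        score += 0.15 if s.account_age_days > 365 else -0.05
        score += 0.25 * s.usage_pattern_score
        score -= 0.20 * s.late_night_ratio
        if s.stated_age is not None:
            score += 0.15 if s.stated_age >= 18 else -0.50
        return max(0.0, min(1.0, score))

    def choose_experience(s: AccountSignals, threshold: float = 0.85) -> str:
        # Self-reported minors are restricted outright; everyone else only gets
        # the full experience when the model is confident they're 18 or older.
        if s.stated_age is not None and s.stated_age < 18:
            return "safer"
        if estimate_adult_probability(s) >= threshold:
            return "full"
        return "safer"  # low confidence defaults to the safer experience

    # A new, late-night account with no stated age falls back to "safer".
    print(choose_experience(AccountSignals(None, 30, 0.6, 0.4)))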

What’s restricted in the “safer” version of ChatGPT

That limited experience means ChatGPT will try to reduce the following content types for anyone the model thinks is under 18:

  • Graphic violence or gore

  • Viral challenges that might inspire “risky or harmful behaviors”

  • Role play that is sexual, romantic, or violent in nature

  • Self-harm descriptions

  • Content promoting “extreme” beauty standards, unhealthy dieting, or body shaming

The company says that its approach is informed by “expert input” as well as literature discussing child development science. (It’s not clear how much of that input comes from direct interviews and coordination with experts, and how much, if any, from independent research.) The company also acknowledges “known teen differences in risk perception, impulse control, peer influence, and emotional regulation” when compared to adults.

AI isn’t always great at age prediction

The biggest risk with any of these age prediction models is that they’ll sometimes get it wrong—confidently wrong answers are an unfortunate habit AI models all share. That cuts both ways: You don’t want someone too young accessing inappropriate content in ChatGPT, but you also don’t want someone older than 18 stuck with a limited account for no reason. If you end up in the latter situation, OpenAI has a solution for you: direct age verification through Persona. That’s the same third-party provider Roblox uses for its age verification, an effort that hasn’t gone very well thus far.

That doesn’t necessarily spell doom for OpenAI. Roblox tried overhauling its age verification system for a massive user base accustomed to a certain kind of multiplayer experience, and users abruptly found they couldn’t chat with others in their newly assigned age categories, which were often incorrect. ChatGPT’s age prediction, by contrast, only controls the experience of one user at a time. And if the prediction model alone isn’t enough, OpenAI will let you upload a selfie as an added verification step. Interestingly, OpenAI doesn’t say anything about the option to upload an ID for verification, which other companies, like Google, have provided.

I’m not necessarily a fan of age prediction models, as I think they often sacrifice user privacy in the name of creating age-appropriate experiences. But there’s little doubt that OpenAI has to do something to limit the full ChatGPT experience for younger users. Many of ChatGPT’s users are under 18, and much of the content they experience is wildly inappropriate, whether it be instructions on getting high, or advice on writing suicide notes. In some tragic cases, minors have taken their own lives after discussions with ChatGPT, leading to lawsuits against OpenAI.

I don’t have any great answers here. We’ll just have to see how this new age prediction model affects the user experience for minors and adults alike, and whether it actually manages to create a safer experience for younger, more impressionable users.

Macaque facial gestures are more than just a reflex, study finds

Recent advances in brain-computer interfaces have made it possible to more accurately extract speech from neural signals in humans, but language is just one of the tools we use to communicate. “When my young nephew asks for ice cream before dinner and I say ‘no,’ the meaning is entirely dictated by whether the word is punctuated with a smirk or a stern frown,” says Geena Ianni, a neuroscientist at the University of Pennsylvania. That’s why in the future, she thinks, neural prostheses meant for patients with a stroke or paralysis will decode facial gestures from brain signals in the same way they decode speech.

To lay a foundation for these future facial gesture decoders, Ianni and her colleagues designed an experiment to find out how neural circuitry responsible for making faces really works. “Although in recent years neuroscience got a good handle on how the brain perceives facial expressions, we know relatively little about how they are generated,” Ianni says. And it turned out that a surprisingly large part of what neuroscientists assumed about facial gestures was wrong.

The natural way

For a long time, neuroscientists thought facial gestures in primates stemmed from a neat division of labor in the brain. “Case reports of patients with brain lesions suggested some brain regions were responsible for certain types of emotional expressions while other regions were responsible for volitional movements like speech,” Ianni explains. We’ve developed a clearer picture of speech by tracing the origin of these movements down to the level of individual neurons. But we’ve not done the same for facial expressions. To fill this gap, Ianni and her team designed a study using macaques—social primates that share most of their complex facial musculature with humans.

56% of Companies Have Seen Zero Financial Return From AI Investments, PwC Survey Says

More than half of companies haven’t seen any financial benefit from their AI investments, according to PwC’s latest Global CEO Survey [PDF], and yet the spending shows no signs of slowing down. Some 56% of the 4,454 chief executives surveyed across 95 countries said their companies have realized neither higher revenues nor lower costs from AI over the past year.

Only 12% reported getting both benefits — and those rare winners tend to be the ones who built proper enterprise-wide foundations rather than chasing one-off projects. CEO confidence in near-term growth has taken a notable hit. Just 30% feel strongly optimistic about revenue growth over the next 12 months, down from 38% last year and nowhere near the 56% who felt that way in 2022.


Remote authentication bypass in telnetd

One would assume that most LWN readers stopped running network-accessible telnet services some number of decades ago. For the rest of you, this security advisory from Simon Josefsson is worthy of note:

The telnetd server invokes /usr/bin/login (normally running as root) passing the value of the USER environment variable received from the client as the last parameter.

If the client supplies a carefully crafted USER environment value being the string “-f root”, and passes the telnet(1) -a or --login parameter to send this USER environment to the server, the client will be automatically logged in as root bypassing normal authentication processes.
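In other words, this is a classic argument-injection bug: the client controls a value that login(1) ends up parsing as command-line options, and -f tells login to skip authentication because the user is supposedly already authenticated. The sketch below is hypothetical Python rather than the actual telnetd source, and the argv layout and helper names are assumptions, but it shows the vulnerable pattern and one simple way a server can refuse option-like USER values.

    # Hypothetical illustration of the bug class, not the telnetd source.
    import shlex

    LOGIN = "/usr/bin/login"

    def build_login_argv(client_user: str) -> list[str]:
        # Vulnerable pattern: the client-supplied USER value is appended verbatim
        # as the last parameter, so a value starting with "-" is parsed as options.
        return [LOGIN, client_user]

    def build_login_argv_hardened(client_user: str) -> list[str]:
        # One mitigation: refuse option-like values, so "-f root" can never reach
        # login's "already authenticated" (-f) code path.
        if not client_user or client_user.startswith("-"):
            raise ValueError(f"rejecting suspicious USER value: {client_user!r}")
        return [LOGIN, client_user]

    if __name__ == "__main__":
        crafted = "-f root"  # the value from the advisory
        print("vulnerable argv:", shlex.join(build_login_argv(crafted)))
        try:
            build_login_argv_hardened(crafted)
        except ValueError as err:
            print("hardened path:", err)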

Setapp Mobile To Close in February as Alternative iOS App Store Economics Prove Untenable

MacPaw, the Ukraine-based developer, has announced that Setapp Mobile — its alternative iOS app store for European Union users that launched in open beta in September 2024 — will shut down on February 16, 2026, citing “still-evolving and complex business terms” for alternative marketplaces that don’t fit its current business model.

Alternative iOS stores became possible under the Digital Markets Act but face challenges including Apple’s controversial Core Technology Fee, which Epic Games CEO Tim Sweeney has called “ruinous for any hopes of a competing store getting a foothold.”


Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared

An anonymous reader shares a report: Anthropic Chief Executive Dario Amodei predicted a future in which AI will spur significant economic growth — but could lead to widespread unemployment and inequality. Amodei is both “excited and worried” about the impact of AI, he said in an interview at Davos Tuesday. “I don’t think there’s an awareness at all of what is coming here and the magnitude of it.”

Anthropic is the developer of the popular chatbot Claude. Amodei said the government will need to play a role in navigating the massive displacement in jobs that could result from advances in AI. He said there could be a future with 5% to 10% GDP growth and 10% unemployment. “That’s not a combination we’ve almost ever seen before,” he said. “There’s gonna need to be some role for government in the displacement that’s this macroeconomically large.”

Amodei painted a potential “nightmare” scenario that AI could bring to society if not properly checked, laying out a future in which 10 million people — 7 million in Silicon Valley and the rest scattered elsewhere — could “decouple” from the rest of society, enjoying as much as 50% GDP growth while others were left behind. “I think this is probably a time to worry less about disincentivizing growth and worry more about making sure that everyone gets a part of that growth,” Amodei said. He noted that was “the opposite of the prevailing sentiment now,” but the reality of technological change will force those ideas to change.


AMD MI455X Could Combine HBM4 And LPDDR For Massive AI Memory Capacity

Let’s talk about transformers, dear readers. Not robots in disguise, but the neural network architecture that underpins basically every modern AI model. Transformers are smart, but they trade training efficiency for inference complexity. To help reduce the amount of compute needed for complex transformers, we use a thing called a Key Value
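For readers unfamiliar with the term, a key-value (KV) cache stores the keys and values already computed for earlier tokens during autoregressive decoding, so each new token only needs one attention step over the cache instead of recomputing the whole prefix. The cost is memory that grows with context length, which is exactly why a capacity-heavy design pairing HBM4 with LPDDR is attractive. Below is a minimal, hypothetical NumPy sketch of the idea; it’s illustrative only, not anyone’s production kernel.

    # Hypothetical single-head attention step with a KV cache (illustrative only).
    import numpy as np

    def attend(q, k_cache, v_cache):
        # The new token's query attends over every cached key/value pair.
        scores = k_cache @ q / np.sqrt(q.shape[-1])  # one logit per cached token
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ v_cache                     # attended output for the new token

    d = 8
    k_cache = np.empty((0, d))
    v_cache = np.empty((0, d))
    rng = np.random.default_rng(0)
    for step in range(5):
        k, v, q = rng.normal(size=(3, d))    # stand-ins for the new token's projections
        k_cache = np.vstack([k_cache, k])    # cache grows by one row per generated token
        v_cache = np.vstack([v_cache, v])
        out = attend(q, k_cache, v_cache)    # per-step cost scales with cache length
    print("cached tokens:", k_cache.shape[0], "output dim:", out.shape[0])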

OpenAI’s Mysterious Hardware Device Finally Has A Launch Timeframe

OpenAI has been tight-lipped about its upcoming AI-native hardware since acquiring Jony Ive’s design firm last year, although comments by the company’s chief global affairs officer Chris Lehane, reported by Axios, point to the secretive device being released sooner rather than later.

During an event at the World Economic

Russia May Ban GTA 6 Over Claims Of Immoral Content And Child Influence

To the surprise of few, leaked footage from the upcoming Grand Theft Auto 6 has incited a moral panic, but not from the usual Stateside suspects like disbarred attorney Jack Thompson. Instead, it is coming from Russia, with a recent interview from deputy chairman of the World Russian People’s Council Mikhail Ivanov decrying “destructive and

Netflix to pay all cash for Warner Bros. to fend off Paramount hostile takeover

Netflix agreed to pay all cash for Warner Bros. Discovery, amending its $72 billion deal in an attempt to fight off Paramount’s hostile takeover bid.

Netflix originally agreed to buy the company with a mix of cash and stock. To sweeten the offer for shareholders, Netflix and Warner Bros. today announced that Netflix will pay all cash instead. If successful, Netflix’s purchase will include HBO Max, WB Studios, and other assets.

The price is unchanged at $27.75 per share, and Warner Bros. is targeting an April 2026 shareholder vote. The original plan was for Netflix to buy each Warner Bros. share with $23.25 in cash and $4.50 in Netflix stock.

AI Agents ‘Perilous’ for Secure Apps Such as Signal, Whittaker Says

Signal Foundation president Meredith Whittaker warned that AI agents that autonomously carry out tasks pose a threat to encrypted messaging apps [non-paywalled source] because they require broad access to data stored across a device and can be hijacked if given root permissions.

Speaking at Davos on Tuesday, Whittaker said the deeper integration of AI agents into devices is “pretty perilous” for services like Signal. For an AI agent to act effectively on behalf of a user, it would need unilateral access to apps storing sensitive information such as credit card data and contacts, Whittaker said. The data that the agent stores in its context window is at greater risk of being compromised.

Whittaker called this “breaking the blood-brain barrier between the application and the operating system.” “Our encryption no longer matters if all you have to do is hijack this context window,” she said.

