UK Mulls Australia-Like Social Media Ban For Users Under 16

The UK government has launched a public consultation on whether to ban social media use for children under 16, drawing inspiration from Australia’s recently enacted age-based restrictions. “It would also explore how to enforce that limit, how to limit tech companies from being able to access children’s data and how to limit ‘infinite scrolling,’ as well as access to addictive online tools,” reports Engadget. “In addition to seeking feedback from parents and young people themselves, the country’s ministers are going to visit Australia to see the effects of the country’s social media ban for kids, according to Financial Times.”



New Linux Patch Improved NVMe Performance +15% With CPU Cluster-Aware Handling

Intel Linux engineers have been working on improving NVMe storage performance on today’s high-core-count processors. In situations where multiple CPUs end up sharing the same NVMe IRQ(s), performance penalties can arise if the IRQ affinity and the CPUs’ cluster topology do not align. A pending patch addresses this situation, and a 15% performance improvement was reported with it…
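The write-up doesn’t include the patch itself, but the mismatch it targets can be illustrated with a small script that checks whether each nvme interrupt’s CPU affinity stays within a single CPU cluster. The /proc and /sys paths below are standard Linux interfaces; treating cluster_cpus_list (exposed on roughly 5.16+ kernels) as the cluster boundary is an assumption made for this sketch, not something taken from the patch.

```python
#!/usr/bin/env python3
"""Sketch: flag NVMe IRQs whose CPU affinity spans more than one CPU cluster.

Assumption (not from the article): a cluster is approximated by the kernel's
/sys/devices/system/cpu/cpu*/topology/cluster_cpus_list file (kernels ~5.16+).
"""
from pathlib import Path


def parse_cpu_list(text: str) -> set[int]:
    """Expand a kernel cpulist string such as "0-3,8" into a set of CPU ids."""
    cpus: set[int] = set()
    for part in text.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def cluster_of(cpu: int) -> frozenset[int]:
    """Return the CPUs sharing a cluster with `cpu`, or just `cpu` itself
    if this kernel does not expose cluster topology."""
    path = Path(f"/sys/devices/system/cpu/cpu{cpu}/topology/cluster_cpus_list")
    try:
        return frozenset(parse_cpu_list(path.read_text()))
    except FileNotFoundError:
        return frozenset({cpu})


def main() -> None:
    for line in Path("/proc/interrupts").read_text().splitlines():
        if "nvme" not in line:
            continue
        irq = line.split(":", 1)[0].strip()
        if not irq.isdigit():
            continue
        affinity = parse_cpu_list(Path(f"/proc/irq/{irq}/smp_affinity_list").read_text())
        clusters = {cluster_of(cpu) for cpu in affinity}
        verdict = "within one cluster" if len(clusters) <= 1 else "spans clusters"
        print(f"nvme IRQ {irq}: cpus={sorted(affinity)} -> {verdict}")


if __name__ == "__main__":
    main()
```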

The FTC isn’t giving up on its antitrust case against Meta

The Federal Trade Commission lost its antitrust case against Meta last year, but the regulator hasn’t given up on its attempts to punish the social media company for its acquisitions of WhatsApp and Instagram. The FTC is appealing a ruling last year in which a federal judge found that the government hadn’t proven that Meta is currently operating as a monopoly. 

“Meta has maintained its dominant position and record profits for well over a decade not through legitimate competition, but by buying its most significant competitive threats,” the FTC’s Bureau of Competition Director Daniel Guarnera said in a statement. “The Trump-Vance FTC will continue fighting its historic case against Meta to ensure that competition can thrive across the country to the benefit of all Americans and U.S. businesses.”

The FTC originally filed antitrust charges against Facebook in 2020 during President Donald Trump’s first term in office. The government argued that by acquiring apps it once competed with, Instagram and WhatsApp, the company had depressed competition in the space and ultimately hurt consumers. A trial last year saw testimony from several current and former executives, including CEO Mark Zuckerberg and former COO Sheryl Sandberg, who spoke at length about the pressure to compete with TikTok. 

US District Judge James Boasberg was ultimately persuaded by Meta’s arguments, writing that the success of YouTube and TikTok prevented Meta from currently “holding a monopoly” even if the company had acted monopolistically in the past. If the FTC had won, it could have tried to force Meta to undo its acquisitions of WhatsApp and Instagram. Should it be successful in its appeal, that remedy could once again be on the table.

News of the FTC’s plan to appeal is also a blow to Zuckerberg, who has spent the last year courting Trump and hyping Meta’s plans to spend hundreds of billions of dollars on AI infrastructure in the United States. In a statement, Meta spokesperson Andy Stone said that the original ruling was “correct,” and that “Meta will remain focused on innovating and investing in America.”

This article originally appeared on Engadget at https://www.engadget.com/big-tech/the-ftc-isnt-giving-up-on-its-antitrust-case-against-meta-225020769.html?src=rss

Majority of CEOs Report Zero Payoff From AI Splurge

A PwC survey of nearly 4,500 CEOs found that over half report no revenue growth or cost savings from their AI investments so far, despite massive spending. Of the 4,454 business leaders surveyed, only 12% saw both lower costs and higher revenue, while 56% saw neither benefit. “26% saw reduced costs, but nearly as many experienced cost increases,” adds The Register. From the report: AI adoption remains limited. Even in top use cases like demand generation (22 percent), support services (20 percent), and product development (19 percent), only a minority are deploying AI extensively. Last year, a separate PwC study found that only 14 percent of workers indicated they were using generative AI daily in their work. Despite the CEOs’ responses, PwC concludes more investment is required. It claims that “isolated, tactical AI projects” often don’t deliver measurable value, and that tangible returns instead come from enterprise-wide deployments consistent with business strategy. […]

In terms of the broader picture, PwC says it found CEO confidence has hit a five-year low, with only 30 percent optimistic about revenue growth (down from 38 percent last year). PwC links this to growing geopolitical risk and intensifying cyber threats, as well as uncertainty over the benefits and downsides of AI. Unsurprisingly, concern remains over tariffs as the Trump administration continues its erratic approach to policy, with almost a third of company chiefs saying tariffs are expected to reduce their company’s profit margin in the year ahead. In the U.S., 22 percent indicate their corporation is highly or extremely exposed to tariffs. PwC warns that companies avoiding major investments due to geopolitical uncertainty underperform peers by two percentage points in growth and three points in profit margins.



Verizon starts requiring 365 days of paid service before it will unlock phones

Verizon has started enforcing a 365-day lock period on phones purchased through its TracFone division, one week after the Federal Communications Commission waived a requirement that Verizon unlock handsets 60 days after they are activated on its network.

Verizon was previously required to unlock phones automatically after 60 days due to restrictions imposed on its spectrum licenses and merger conditions that helped Verizon obtain approval of its purchase of TracFone. But an update applied today to the TracFone unlocking policy said new phones will be locked for at least a year and that each customer will have to request an unlock instead of getting it automatically.

The “new” TracFone policy is basically a return to the yearlong locking it imposed before Verizon bought the company in 2021. TracFone first agreed to provide unlocking in a 2015 settlement with the Obama-era FCC, which alleged that TracFone failed to comply with a commitment to unlock phones for customers enrolled in the Lifeline subsidy program. TracFone later shortened the locking period from a year to 60 days as a condition of the Verizon merger.


OpenAI is launching age prediction for ChatGPT accounts

OpenAI is the latest company to hop on the bandwagon of gating access by users’ age. The AI business is beginning a global rollout of an age prediction tool to determine whether or not a user is a minor. “The model looks at a combination of behavioral and account-level signals, including how long an account has existed, typical times of day when someone is active, usage patterns over time, and a user’s stated age,” the company’s announcement states. If an individual is incorrectly characterized by ChatGPT as underage, they will need to submit a selfie to correct the mistake through the Persona age verification platform.

Most AI companies have been willing to push new features first and then attempt to layer a patchwork of protections and safety guards on top of them after they cause harm. OpenAI was named in a wrongful death suit over a teen who allegedly used ChatGPT to plan his suicide, and only in the following months did it begin pondering automatic content restrictions for underage users and launch a mental health advisory council. In this instance, OpenAI is attempting to prepare for the launch of an “adult mode” that will allow users to create and consume content that would be dubbed NSFW. Considering how well a similar change has been going over at Roblox, another platform with a shaky history around protecting minors, it seems probable that underage users who want the adult experience will find ways to circumvent these tools.

This article originally appeared on Engadget at https://www.engadget.com/ai/openai-is-launching-age-prediction-for-chatgpt-accounts-222650340.html?src=rss

Google temporarily disabled YouTube’s advanced captions without warning

YouTubers have been increasingly frustrated with Google’s management of the platform, with disinformation welcomed back and an aggressive push for more AI (except where Google doesn’t like it). So it’s no surprise that creators have been up in arms over the suspicious removal of YouTube’s advanced SRV3 caption format. You don’t have to worry too much just yet—Google says this is only temporary, and it’s working on a fix for the underlying bug.

Google added support for this custom subtitle format around 2018, giving creators more customization options than regular users get with traditional captions. SRV3 (also known as YTT or YouTube Timed Text) allows for custom colors, transparency, animations, fonts, and precise positioning in videos. Uploaders using this format can color-code and position captions to help separate multiple speakers, create sing-along animations, or style them to match the video.
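Google has never published an official specification for srv3, but community reverse-engineering of the format (used by various caption-conversion tools) describes it as a small XML dialect. As a rough sketch of what an uploader-side tool might generate, the element and attribute names below (pen, wp, fc, bo, ap, ah, av, t, d) follow that community documentation and should be read as assumptions rather than a guaranteed schema.

```python
#!/usr/bin/env python3
"""Hedged sketch of generating an srv3/"YouTube Timed Text" caption file.

The tag and attribute names come from community reverse-engineering of this
unofficial, undocumented format, so treat them as assumptions.
"""
import xml.etree.ElementTree as ET

root = ET.Element("timedtext", format="3")
head = ET.SubElement(root, "head")

# A "pen" describes text styling: fc = text color, bo = background opacity.
ET.SubElement(head, "pen", id="1", fc="#FF4444", bo="0")
# A "wp" (window position) anchors the caption within the frame:
# ap = anchor point, ah/av = horizontal/vertical position in percent.
ET.SubElement(head, "wp", id="1", ap="7", ah="20", av="90")

body = ET.SubElement(root, "body")
# One cue: t = start time (ms), d = duration (ms); it references the pen and
# window position above, e.g. to color-code and place a second speaker's line.
cue = ET.SubElement(body, "p", t="0", d="3000", p="1", wp="1")
cue.text = "Speaker two, color-coded and positioned bottom-left"

ET.ElementTree(root).write("captions.srv3", encoding="utf-8", xml_declaration=True)
```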

Over the last several days, creators who’ve become accustomed to this level of control have been dismayed to see that YouTube is no longer accepting videos with this Google-created format. Many worried Google had ditched the format entirely, which could be problematic for all those previously uploaded videos.


Scammers Are Targeting Your Verizon Outage Refund

Verizon had a massive outage last week, leaving roughly two million customers unable to use their phones for nearly a day except over Wi-Fi or in SOS mode. As an apology for the inconvenience, the company promised to credit a meager $20 to the accounts of those affected—so naturally, scammers quickly got a phishing scheme up and running targeting people eligible for the credit. As reported by Android Authority, the Jones County Sheriff’s Office in Georgia has issued an alert about fake Verizon “credit” messages floating around.

Verizon credit phishing scam

According to the sheriff’s office, scammers may contact you via text or email—claiming to be from Verizon—with information about the outage credit. These messages contain phishing links, which may be set up to collect personal information or account credentials or deliver malware to your device. Clicking through will likely compromise your data in some way, especially if you enter any details on a malicious website.

If you are a Verizon customer who was affected by the outage, you will receive a text letting you know that your $20 credit is available to claim in the myVerizon app, which is why you may not be immediately suspicious of the scam. Plus, since you do need to claim the funds, you may be swayed by a message that sounds urgent. Don’t fall for it.

In general, you shouldn’t click links in unsolicited communication, and you should be suspicious of any messages that prompt you to click said links, even if they appear to be from a legitimate company about a legitimate matter. As evidenced by this phishing campaign and those like it, scammers can and will impersonate trusted brands and use real events to seem more believable.

Instead, always navigate directly to the app or official website—type in the URL and check it carefully or go through your password manager—and log in using your account credentials. Once logged in, you can see any legitimate communication and take action securely. Know that scammers can easily spoof websites, so if you click a phishing link, you may not realize that you’re on a fake page.

Meta’s Oversight Board Takes Up Permanent Bans In Landmark Case

An anonymous reader quotes a report from TechCrunch: Meta’s Oversight Board is tackling a case focused on Meta’s ability to permanently disable user accounts. Permanent bans are a drastic measure, locking people out of their profiles, memories, friend connections, and, in the case of creators and businesses, their ability to market and communicate with fans and customers. This is the first time in the Board’s five-year history that permanent account bans have been the subject of its focus, the organization notes.

The case being reviewed isn’t exactly one of an everyday user. Instead, the case involves a high-profile Instagram user who repeatedly violated Meta’s Community Standards by posting visual threats of violence against a female journalist, anti-gay slurs against politicians, content depicting a sex act, allegations of misconduct against minorities, and more. The account had not accumulated enough strikes to be automatically disabled, but Meta made the decision to permanently ban the account. The Board’s materials didn’t name the account in question, but its recommendations could impact others who post content that targets public figures with abuse, harassment, and threats, as well as users who have their accounts permanently banned without receiving transparent explanations.

Meta referred this specific case, which included five posts made in the year before the account was permanently disabled, to the Board. The Board says it’s looking for input about several key issues: how permanent bans can be processed fairly, the effectiveness of Meta’s current tools to protect public figures and journalists from repeated abuse and threats of violence, the challenges of identifying off-platform content, whether punitive measures effectively shape online behaviors, and best practices for transparent reporting on account enforcement decisions. […] Whether the Oversight Board has any real sway to address issues on Meta’s platform continues to be debated, of course. […] After the Oversight Board issues its policy recommendations to Meta, the company has 60 days to respond. The Board is also soliciting public comments on this topic. The report notes that Meta’s Oversight Board is able to overturn individual moderation decisions and offer recommendations, but is largely sidelined from major policy shifts driven by Mark Zuckerberg.



Flesh-eating flies are eating their way through Mexico, CDC warns

The US Centers for Disease Control and Prevention issued a health alert to clinicians Tuesday, warning that the savage, flesh-eating parasitic fly—the New World Screwworm—is not only approaching the Texas border, but also felling an increasing number of animals in the bordering Mexican state of Tamaulipas.

The advisory, released through the agency’s Health Alert Network, directs doctors, veterinarians, and other health workers to be on the lookout for patients with wounds teeming with ferocious maggots burrowing into their living flesh. The alert also provides guidance on what to do if any such festering wounds are encountered—namely, remove each and every maggot to prevent the patient from dying, and, under no circumstances, allow any of the parasites to survive and escape.

The New World Screwworm (NWS) is a fly that lays its eggs—up to 400 at a time—in the wounds, orifices, and mucous membranes of any warm-blooded animal. The eggs hatch into flesh-eating maggots, which look and act much like screws, twisting and boring into their victims while eating them alive.


Ayaneo Konkr Fit Handheld Debuts With 7-Inch OLED And Ryzen AI 9 HX 470

Ayaneo announced Konkr Pocket Fit today, marking the first Windows-based handheld model in Ayaneo’s Konkr sub-brand. While pricing and final storage and memory specifications are unknown due to the ongoing DRAM crisis and NAND shortage, this new OLED-equipped handheld seems set to compete with the OneXFly F1 Pro, but with a slightly souped-up…

ChatGPT Is Getting on the AI Age Verification Bandwagon

When OpenAI first announced GPT-5.2 last month, it quietly disclosed a new safety feature it called “age prediction.” Considering ChatGPT proper isn’t exactly an “all ages” kind of tool, it makes sense that users under the age of 18 should have protections in place to shield them from harmful content. The company says that users who indicate they’re under 18 already receive an altered experience to “reduce exposure to sensitive or potentially harmful content,” but if the user doesn’t voluntarily share how old they are with OpenAI, how does the company enforce these protections? Here’s where age prediction comes in.

How age prediction for ChatGPT works

On Tuesday, OpenAI officially announced its new age prediction policy, which, like other age verification systems being used by the likes of Roblox, uses AI to guess how old a user is. If the system decides that a particular user is under the age of 18, OpenAI will adjust the experience accordingly, with the goal of keeping all interactions age-appropriate.

Here’s how it works: The new age prediction model looks at both the user’s behavior within the app and general account data. That includes things like how old the account is, what times of day the user is accessing ChatGPT, usage patterns, and, of course, the age the user says they are. Looking at all this data, the model determines how old the user likely is. If the model thinks they’re over 18, they’ll get the full experience; if the model thinks they’re under 18, they’ll get the “safer experience.” If the model isn’t confident, it defaults to that safer experience.
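OpenAI hasn’t published the model or its thresholds, but the decision flow described above (blend several account signals into a single guess, then fall back to the restricted experience when confidence is low) can be sketched as a toy classifier. Every signal, weight, and cutoff below is invented for illustration and does not come from OpenAI.

```python
from dataclasses import dataclass

# Toy illustration of the decision flow described above. OpenAI has not
# published its model, so every signal, weight, and threshold here is invented.


@dataclass
class AccountSignals:
    stated_age: int            # the age the user claims
    account_age_days: int      # how long the account has existed
    late_night_ratio: float    # share of activity between midnight and 5 a.m.
    school_hours_ratio: float  # share of activity during weekday school hours


def predict_is_adult(s: AccountSignals) -> tuple[bool, float]:
    """Return (looks_like_an_adult, confidence) from a crude weighted blend."""
    score = 0.0
    score += 0.4 if s.stated_age >= 18 else -0.4
    score += 0.2 if s.account_age_days > 365 else -0.1
    score += 0.2 if s.late_night_ratio > 0.15 else 0.0
    score -= 0.3 if s.school_hours_ratio > 0.5 else 0.0
    return score > 0, min(abs(score), 1.0)


def choose_experience(s: AccountSignals, min_confidence: float = 0.5) -> str:
    """Adults with a confident prediction get the full experience; everyone
    else, including low-confidence predictions, defaults to the safer one."""
    is_adult, confidence = predict_is_adult(s)
    if is_adult and confidence >= min_confidence:
        return "full experience"
    return "safer experience"


# Example: a year-old account that claims to be 17 and is mostly active during
# school hours lands in the safer experience.
print(choose_experience(AccountSignals(17, 400, 0.02, 0.6)))
```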

What’s restricted in the “safer” version of ChatGPT

That limited experience means that for anyone the model thinks is under 18, ChatGPT will try to reduce the following content types:

  • Graphic violence or gore

  • Viral challenges that might inspire “risky or harmful behaviors”

  • Role play that is sexual, romantic, or violent in nature

  • Self-harm descriptions

  • Content promoting “extreme” beauty standards, unhealthy dieting, or body shaming

The company says that its approach is informed by “expert input” as well as literature discussing child development science. (It’s not clear how much of that input is from direct interviews and coordination with experts, and how much, if any, is from independent research.) The company also acknowledges “known teen differences in risk perception, impulse control, peer influence, and emotional regulation” when compared to adults.

AI isn’t always great at age prediction

The biggest risk with any of these age prediction models is that they’ll sometimes get it wrong—hallucination is an unfortunate habit AI models all share. That goes both ways: You don’t want someone too young accessing inappropriate content in ChatGPT, but you also don’t want someone older than 18 getting stuck with a limited account for no reason. If you experience the latter situation, OpenAI has a solution for you: direct age verification through Persona. This is the same third-party service Roblox uses for its age verification, which hasn’t gone very well thus far.

That doesn’t necessarily spell doom for OpenAI. Roblox overhauled age verification for a massive user base accustomed to a particular kind of multiplayer experience, and users suddenly found they couldn’t chat with people outside their newly assigned age categories, which were often incorrect. ChatGPT’s age prediction, meanwhile, only controls the experience of one user at a time. To that end, OpenAI will let you upload a selfie as an added verification step if the prediction model alone isn’t enough. Interestingly, OpenAI doesn’t say anything about the option to upload an ID for verification, which other companies, like Google, have provided.

I’m not necessarily a fan of age prediction models, as I think they often sacrifice user privacy in the name of creating age-appropriate experiences. But there’s little doubt that OpenAI has to do something to limit the full ChatGPT experience for younger users. Many of ChatGPT’s users are under 18, and much of the content they experience is wildly inappropriate, whether it be instructions on getting high, or advice on writing suicide notes. In some tragic cases, minors have taken their own lives after discussions with ChatGPT, leading to lawsuits against OpenAI.

I don’t have any great answers here. We’ll just have to see how this new age prediction model affects the user experience for minors and adults alike, and whether it actually manages to create a safer experience for younger, more impressionable users.

Macaque facial gestures are more than just a reflex, study finds

Recent advances in brain-computer interfaces have made it possible to more accurately extract speech from neural signals in humans, but language is just one of the tools we use to communicate. “When my young nephew asks for ice cream before dinner and I say ‘no,’ the meaning is entirely dictated by whether the word is punctuated with a smirk or a stern frown,” says Geena Ianni, a neuroscientist at the University of Pennsylvania. That’s why in the future, she thinks, neural prostheses meant for patients with a stroke or paralysis will decode facial gestures from brain signals in the same way they decode speech.

To lay a foundation for these future facial gesture decoders, Ianni and her colleagues designed an experiment to find out how neural circuitry responsible for making faces really works. “Although in recent years neuroscience got a good handle on how the brain perceives facial expressions, we know relatively little about how they are generated,” Ianni says. And it turned out that a surprisingly large part of what neuroscientists assumed about facial gestures was wrong.

The natural way

For a long time, neuroscientists thought facial gestures in primates stemmed from a neat division of labor in the brain. “Case reports of patients with brain lesions suggested some brain regions were responsible for certain types of emotional expressions while other regions were responsible for volitional movements like speech,” Ianni explains. We’ve developed a clearer picture of speech by tracing the origin of these movements down to the level of individual neurons. But we’ve not done the same for facial expressions. To fill this gap, Ianni and her team designed a study using macaques—social primates that share most of their complex facial musculature with humans.

Read full article

Comments