OpenAI Has No Moat, No Tech Edge, No Lock-in and No Real Plan, Analyst Warns

OpenAI faces four fundamental strategic problems that no amount of fundraising or capex announcements can paper over, according to analyst Benedict Evans: it has no unique technology, its enormous user base is shallow and fragile, incumbents like Google and Meta are leveraging superior distribution to close the gap, and its product roadmap is dictated by whatever the research labs happen to discover rather than by deliberate product strategy.

The company claims 800-900 million weekly active users, but 80% of them sent fewer than 1,000 messages across all of 2025, averaging fewer than three prompts a day, and only 5% pay. OpenAI has acknowledged what it calls a “capability gap” between what models can do and what people use them for — a framing Evans reads as a polite way to avoid admitting the absence of product-market fit. Gemini and Meta AI are meanwhile gaining share rapidly because the products look nearly indistinguishable to typical users, and Google and Meta already have the distribution to push them. Evans compares ChatGPT to Netscape — an early leader in a category where the products were hard to tell apart, overtaken by a competitor that used distribution as a crowbar.

On capex, Evans argues that Altman’s ambitions — claiming $1.4 trillion and 30 gigawatts of future compute — amount to an attempt to will OpenAI into a seat at a table where annual infrastructure spending may need to reach hundreds of billions. But a seat at the table is not leverage over it; he compares this to TSMC, which holds a de facto chip monopoly yet captures little value further up the stack.

OpenAI’s own strategy diagrams from late last year laid out a full-stack platform vision — chips, models, developer tools, consumer products — each layer reinforcing the others. Evans argues this borrows the language of Windows and iOS without possessing any of the underlying dynamics: no network effect, no lock-in preventing developers from calling a different model’s API, and no reason customers would know or care which foundation model powers the product they are using.


Read more of this story at Slashdot.

Several Meta Employees Have Started Calling Themselves ‘AI Builders’

An anonymous reader shares a report: Meta product managers are rebranding. Some are now calling themselves “AI builders,” a signal that AI coding tools are changing who gets to build software inside the company. One of them, Jeremie Guedj, announced the change in a LinkedIn post last week. “I still can’t believe I’m writing this: as of today, my full-time job at Meta is AI Builder,” he wrote.

Guedj has spent more than a decade as a traditional product manager, a role that sets the road map and strategy for products then built by engineering teams. He said that while his title in Meta’s internal systems still lists him as a product manager, his actual work is now full-time building with AI on what he calls an “AI-native team.” Another Meta product manager also lists “AI Builder” on her LinkedIn profile, while at least two other Meta engineers write the term in their bios, Business Insider found.



Here’s Why You Should Never Use AI to Generate Your Passwords

I’m a bit of a broken record when it comes to personal security on the internet: Make strong passwords for each account; never reuse any passwords; and sign up for two-factor authentication whenever possible. With these three steps combined, your general security is pretty much set. But how you make those passwords matters just as much as making each strong and unique. As such, please don’t use an AI program to generate your passwords.

If you’re a fan of chatbots like ChatGPT, Claude, or Gemini, it might seem like a no-brainer to ask the AI to generate passwords for you. You might like how they handle other tasks, so it might seem that something so high-tech yet accessible could produce secure passwords for your accounts. But large language models (LLMs) are not necessarily good at everything, and creating good passwords happens to be one of their weaknesses.

AI-generated passwords are not secure

As highlighted by Malwarebytes Labs, researchers recently investigated AI-generated passwords, and evaluated their security. In short? The findings aren’t good. Researchers tested password generation across ChatGPT, Claude, and Gemini, and discovered that the passwords were “highly predictable” and “not truly random.” Claude, in particular, didn’t fare well: Out of 50 prompts, the bot was only able to generate 23 unique passwords. Claude gave the same password as an answer 10 times. The Register reports that researchers found similar flaws with AI systems like GPT-5.2, Gemini 3 Flash, Gemini 3 Pro, and even Nano Banana Pro. (Gemini 3 Pro even warned the passwords shouldn’t be used for “sensitive accounts.”)

The thing is, these results seem fine on the surface. The passwords look uncrackable because they mix numbers, letters, and special characters, and password strength checkers might rate them as secure. But these generated passwords are inherently flawed, whether because they repeat across prompts or follow recognizable patterns. Researchers evaluated the “entropy” of these passwords, meaning the measure of their unpredictability, using both “character statistics” and “log probabilities.” If that all sounds technical, the important takeaway is that the results showed entropies of just 27 bits and 20 bits, respectively. Character-statistics tests look for an entropy of 98 bits, while log-probability estimates look for 120 bits. You don’t need to be an expert in password entropy to know that’s a massive gap.
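To make the gap concrete, here’s a minimal Python sketch of a character-statistics entropy estimate. This is a simplified stand-in for the researchers’ methodology, not their exact test: it pools the characters across a sample of generated passwords, measures the Shannon entropy of that distribution, and scales by password length.

```python
import math
from collections import Counter

def charstat_entropy_bits(passwords):
    # Shannon entropy of the pooled character distribution, scaled by the
    # average password length. A simplified estimator for illustration,
    # not the exact methodology used in the study.
    chars = [c for pw in passwords for c in pw]
    counts = Counter(chars)
    total = len(chars)
    per_char = -sum((n / total) * math.log2(n / total) for n in counts.values())
    avg_len = total / len(passwords)
    return per_char * avg_len

# A truly random 16-character password drawn from 94 printable symbols
# would score near 16 * log2(94), roughly 105 bits; repetitive,
# patterned output like this sample scores far lower.
sample = ["P@ssw0rd123!", "P@ssw0rd124!", "P@ssw0rd125!"]
print(round(charstat_entropy_bits(sample), 1))
```

Even this crude estimator flags the patterned sample as worth well under half the bits a genuinely random password of the same length would carry.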

Hackers can use these limitations to their advantage. Bad actors can run the same prompts as researchers (or, presumably, end users) and collect the results into a bank of common passwords. If chatbots repeat passwords in their generations, it stands to reason that many people might be using the same passwords generated by those chatbots—or trying passwords that follow the same pattern. If so, hackers could simply try those passwords during break-in attempts, and if you used an LLM to generate your password, it might match. It’s tough to say what that exact risk is, but to be truly secure, each of your passwords should be totally unique. Potentially using a password that hackers have in a word bank is an unnecessary risk.

It might seem surprising that a chatbot wouldn’t be good at generating random passwords, but it makes sense based on how they work. LLMs are trained to predict the next token, or data point, that should appear in a sequence. In this case, the LLM is trying to choose the characters that make the most sense to appear next, which is the opposite of “random.” If the LLM has passwords in its training data, it may incorporate that into its answer. The password it generates makes sense in its “mind,” because that’s what it’s been trained on. It isn’t programmed to be random.

It’s not hard to make a secure password

Meanwhile, traditional password managers are not LLMs. Instead, they are designed to produce a truly random sequence by taking cryptographic bits and converting them into characters. These outputs are not based on existing training data and follow no patterns, so the chances that someone else out there has the same password as you (or that hackers have it stored in a word bank) are slim. There are plenty of options out there, and most password managers come with secure password generators.

But you don’t even need one of these programs to make a secure password. Just pick two or three “uncommon” words, mix a few of the characters up, and presto: You have a random, unique, and secure password. For example, you could take the words “shall,” “murk,” and “tumble,” and combine them into “sH@_llMurktUmbl_e.” (Don’t use that one, since it’s no longer unique.)
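If you’d rather not do the mixing by hand, the same trick can be sketched in a few lines of Python. This is a hypothetical helper, not a tool from the article, with `secrets` supplying the randomness so the capitalizations and substitutions aren’t predictable:

```python
import secrets

def words_to_password(words):
    # Join a few uncommon words, then randomly swap some characters for
    # symbols and randomly capitalize others -- the article's trick,
    # with unpredictable (cryptographically random) mutations.
    subs = {"a": "@", "s": "$", "o": "0", "e": "3", "i": "!"}
    out = []
    for ch in "".join(words):
        if ch in subs and secrets.randbelow(3) == 0:   # ~1 in 3: substitute
            out.append(subs[ch])
        elif secrets.randbelow(3) == 0:                # ~1 in 3: capitalize
            out.append(ch.upper())
        else:
            out.append(ch)
    return "".join(out)

print(words_to_password(["shall", "murk", "tumble"]))
```

Each run produces a different mutation of the same word list, so no two people following the recipe end up with the same result.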

Passkeys may be even more secure than passwords

If you’re looking to boost your personal security even further, consider passkeys whenever possible. Passkeys combine the convenience of passwords with the security of 2FA: With passkeys, your device is your password. You use its built-in authentication to log in (face scan, fingerprint, or PIN), which means there’s no password to actually create. Without the trusted device, hackers won’t be able to break into your account.

Not all accounts support passkeys, which means they aren’t a universal solution right now. You’ll likely need passwords for some of your accounts, which means abiding by proper security methods to keep things in order. But replacing some of your passwords with passkeys can be a step up in both security and convenience—and avoids the security pitfalls of asking ChatGPT to make your passwords for you.

PayPal Warns Of Exposed Social Security Numbers In 6-Month Data Breach

PayPal just disclosed a data breach that exposed sensitive user information, including Social Security numbers. From July 1st, 2025 to December 12th, 2025, a software glitch in PayPal Working Capital (PPWC) loan applications allowed attackers to gain access to personal user information. While PayPal has since acknowledged and fixed the error,

With NIH in chaos, its controversial director is taking over CDC, too

Jay Bhattacharya, the director of the National Institutes of Health, is now also the acting director of the Centers for Disease Control and Prevention, an unusual arrangement that has drawn swift criticism from researchers and public health experts.

Bhattacharya’s new role comes amid a leadership shakeup in the Department of Health and Human Services under anti-vaccine Health Secretary Robert F. Kennedy Jr. It also marks the third leader for the beleaguered public health agency under Kennedy.

Susan Monarez, a microbiologist and long-time federal health official, held the position of acting director before becoming the first Senate-confirmed CDC director at the end of July. But she was in the role just shy of a month before Kennedy ousted her for—according to Monarez—refusing to rubber-stamp changes to vaccine recommendations made by Kennedy’s hand-picked advisors, who are overwhelmingly anti-vaccine themselves.

Read full article

Comments

Google Highlights Huge AI‑Driven Crackdown On Android Malware And Fraud Apps

Google is flexing its AI muscle in an effort to make its Android ecosystem more safe and secure, as malicious actors are constantly evolving the ways attacks are deployed against victims, including leveraging AI themselves. The company says it was able to use the “investments in AI and real-time defenses over the last year to maintain the

BleachBit 5.1.0 adds cookie manager, CLI negation, expert mode

The BleachBit 5.1.0 release adds a cookie manager to selectively remove cookies in browsers including Google Chrome and Mozilla Firefox. It cleans Chromium when installed as either of two kinds of Flatpak, and there are major improvements to the cleaners for Opera and LibreOffice. CLI negation adds exceptions to wildcard arguments. The .deb and .rpm packages are now signed for Debian, Fedora, openSUSE, Ubuntu, and Linux Mint.

AMC Theatres Will Refuse To Screen AI Short Film After Online Uproar

An anonymous reader shares a report: When will AI movies start showing up in theaters nationwide? It was supposed to be next month. But when word leaked online that an AI short film contest winner was going to start screening before feature presentations in AMC Theatres, the cinema chain decided not to run the content.

The issue began earlier this week with the inaugural Frame Forward AI Animated Film Festival announcing Igor Alferov’s short film Thanksgiving Day had won the contest. The prize package included a national two-week theatrical run for Thanksgiving Day. When word of this began hitting social media, however, some were dismayed by the prospect of exhibitors embracing AI content, with many singling out AMC Theatres for criticism.

Except the short is not actually programmed by exhibitors, exactly, but by Screenvision Media — a third-party company which manages the 20-minute, advertising-driven pre-show before a theater’s lights go down. Screenvision — which co-organized the festival along with Modern Uprising Studios — provides content to multiple theatrical chains, not just AMC. After The Hollywood Reporter reached out to AMC about the brewing controversy, the company issued this statement to THR on Thursday: “This content is an initiative from Screenvision Media, which manages pre-show advertising for several movie theatre chains in the United States and runs in fewer than 30 percent of AMC’s U.S. locations. AMC was not involved in the creation of the content or the initiative and has informed Screenvision that AMC locations will not participate.”



Tunic publisher claims TikTok ran ‘racist, sexist’ AI ads for one of its games without its knowledge

Indie publisher and developer Finji has accused TikTok of using generative AI to alter the ads for its games on the platform without its knowledge or permission. Finji, which published indie darlings like Night in the Woods and Tunic, said it only became aware of the seemingly modified ads after being alerted to them by followers of its official TikTok account.

As reported by IGN, Finji alleges that one ad that went out on the platform was modified so it displayed a “racist, sexualized” representation of a character from one of its games. While Finji does advertise on TikTok, the company told IGN that it has AI “turned all the way off,” but after CEO and co-founder Rebekah Saltsman received screenshots of the ads in question from fans, she approached TikTok to investigate.

A number of Finji ads have appeared on TikTok, some that include montages of the company’s games, and others that are game-specific like this one for Usual June. According to IGN, the offending AI-modified ads (which are still posted as if they’re coming directly from Finji) appeared as slideshows. Some images don’t appear to be that different from the source, but one possibly AI-generated example seen by IGN depicts Usual June’s titular protagonist with “a bikini bottom, impossibly large hips and thighs, and boots that rise up over her knees.” Needless to say (and obvious from the official screenshot used as the lead image for this article), this is not how the character appears in the game.

As for TikTok’s response, IGN printed a number of the platform’s replies to Finji’s complaints, in which it initially said, in part, that it could find no evidence that “AI-generated assets or slideshow formats are being used.” This was despite Finji sending the customer support page a screenshot of the clearly edited image mentioned above. In a subsequent exchange, TikTok appeared to acknowledge the evidence and assured the publisher it was “no longer disputing whether this occurred.” It added that it has escalated the issue internally and was investigating it thoroughly.

TikTok does have a “Smart Creative” option on its ad platform, which essentially uses generative AI to modify user-created ads so that multiple versions are pushed out, with the versions its audience responds to more positively used more often. Another option, “Automate Creative,” uses AI to automatically optimize things like music, audio effects, and general visual “quality” to “enhance the user’s viewing experience.” Saltsman showed IGN evidence that Finji has both of these options turned off, which a TikTok agent also confirmed for the ad in question.

After a number of increasingly frustrated exchanges in which TikTok eventually admitted to Saltsman that the ad “raises significant issues, including the unauthorized use of AI, the sexualization and misrepresentation of your characters, and the resulting commercial and reputational harm to your studio,” the Finji co-founder was offered something of an explanation.

TikTok said that Finji’s campaign used a “catalog ads format” designed to “demonstrate the performance benefits of combining carousel and video assets in Sales campaigns.” It said that this “initiative” helped advertisers “achieve better results with less effort,” but did not address the harmful content directly. Finji seemingly also opted into this ad format without knowing it had done so. TikTok declined to comment on the matter when approached by IGN.

Saltsman was told the issue could not be escalated any higher, with communication not resolved at the time of IGN publishing its report. In a statement to the outlet, Saltsman said she was “a bit shocked by TikTok’s complete lack of appropriate response to the mess they made.” She went on to say that she expected both an apology and clear reassurance of how a similar issue would not reoccur, but was “obviously not holding my breath for any of the above.”

This article originally appeared on Engadget at https://www.engadget.com/gaming/tunic-publisher-claims-tiktok-ran-racist-sexist-ai-ads-for-one-of-its-games-without-its-knowledge-185303395.html?src=rss

How Streaming Became Cable TV’s Unlikely Life Raft

Cable TV providers have spent the past decade losing tens of millions of households to streaming services, but companies like Charter Communications are now slowing that exodus by bundling the very apps that once threatened to replace them.

Charter added 44,000 net video subscribers in the fourth quarter of 2025, its first growth in that count since 2020, after integrating Disney+, Hulu, and ESPN+ directly into Spectrum cable packages — a deal that grew out of a contentious 2023 contract dispute with Disney. Comcast and Optimum still lost subscribers in the quarter, though both saw those losses narrow.

Charter’s Q4 numbers also got a lift from a 15-day Disney channel blackout on YouTube TV during football season, which drove more than 14,000 subscribers to Spectrum. Charter has been discounting aggressively — video revenue fell 10% year over year despite the subscriber gains. Cox Communications launched its first streaming-inclusive cable bundles last month, and Dish Network has yet to integrate streaming apps into its packages at all.



Wikipedia blacklists Archive.today, starts removing 695,000 archive links

The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog.

In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

“There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it,” stated an update today on Wikipedia’s Archive.today discussion. “There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users’ computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today’s operators have altered the content of archived pages, rendering it unreliable.”


This 27-Inch LG 4K Monitor Just Dropped to Under $200

We may earn a commission from links on this page. Deal pricing and availability subject to change after time of publication.

Gamers and multi-taskers shouldn’t sleep on the 27-inch LG 27UP650K-W Ultrafine 4K monitor—like most options from LG, it’s a versatile and visually striking display with appeal for multimedia, gaming, and office usage. Right now, it’s cheaper than it’s ever been at 30% off, bringing its price down from $279.99 to $196.99.

The monitor offers top-tier 4K clarity for a sub-$200 price tag, with native 3840×2160 resolution on a 27-inch IPS panel. Its strong color accuracy with HDR400 makes it equally suitable for creative work and media consumption. Additionally, it has wide viewing angles and reliable brightness of around 400 nits, which improves daytime visibility, though that’s modest compared to pricier monitors. Users can adjust the monitor’s pivot, tilt, and height, while HDMI and DisplayPort inputs make it a good fit for most desk setups.

Given its 60Hz refresh rate, it’s better suited to work, movies, and casual gaming; competitive gamers might find it limiting. It also lacks USB-C connectivity, a drawback for those who use laptops like a MacBook. It can’t match luxury displays, but if you’re looking for a monitor that gets it all done, whether that’s light gaming, office work, media consumption, or content creation, the 27-inch LG 27UP650K-W Ultrafine is a strong budget pick for productivity and casual gaming, particularly at less than $200 with its current discount.

Deals are selected by our commerce team

AMD Zen 6 Olympic Ridge Ryzen CPUs Rumored For A CCD Upgrade With 6 To 24 Cores

Let’s talk about AMD’s next-generation Zen 6 processors. It has already been all but confirmed that AMD will finally increase the core count of a Ryzen CCD from 8, where it has sat for a long time, to 12. However, a new Xwitter post from well-known hardware enthusiast and occasional leaker HXL seems to have revealed the core counts of the SKUs