PHP 8.0 End of Life Is Today, November 26, 2023

Slashdot reader sysadminafterdark writes:

Released on November 26, 2020, PHP 8 brought many optimizations and powerful features to the language. Fast forward to today, and PHP 8 is getting the boot in favor of 8.1, 8.2, and 8.3, with 8.4 in development. This leaves some websites at risk of breaking, plus potential security issues. Hearing of this news, I upgraded my own blog and wrote an article on how to add the Remi repository and update. I run Enterprise Linux (the best distro out there), so if you are standing up new boxes, just keep in mind the PHP in the repo is deprecated.

Read more of this story at Slashdot.



Source: Slashdot – PHP 8.0 End of Life Is Today, November 26, 2023

Ridley Scott Is Terrified of AI: 'It's a Technical Hydrogen Bomb'

“Several of your films have explored artificial intelligence,” Rolling Stone pointed out to 85-year-old Ridley Scott, before asking: “Does AI worry you?”

Ridley Scott: I always thought the world would end up being run by two corporations, and I think we’re headed in that direction. Tyrell Corp in Blade Runner probably owned 45-50% of the world, and one of his playthings was creating replication through DNA. Tyrell thinks he’s god and in the first Blade Runner has made a Nexus female. And the Nexus female will have a limited lifespan because AI will get dangerous. We have to lock down AI. And I don’t know how you’re gonna lock it down. They have these discussions in the government, “How are we gonna lock down AI?” Are you fucking kidding? You’re never gonna lock it down. Once it’s out, it’s out. If I’m designing AI, I’m going to design a computer whose first job is to design another computer that’s cleverer than the first one. And when they get together, then you’re in trouble, because then it can take over the whole electrical-monetary system in the world and switch it off. That’s your first disaster. It’s a technical hydrogen bomb. Think about what that would mean?

Rolling Stone: I wanted to ask you about what effect you think AI will have on Hollywood as it was a big sticking point in the writers’ strike, in particular. One fear is that studios will plug a book into AI, have it crap out an “adaptation,” and then pay actual screenwriters day rates to punch it up.
Ridley Scott: Yeah. They really have to not allow this, and I don’t know how you can control it. Another AI expert said, “We are way over-panicking. Of course, I have a computer that can defeat a chess master in an hour because we can feed him every conceivable move from data, and it’ll process 1,900 conceivable moves on what the person will do next in seconds, and the guy is in trouble.” There’s something non-creative about data. You’re gonna get a painting created by a computer, but I like to believe — and I’m saying this without confidence — it won’t work with anything particularly special that requires emotion or soul. With that said, I’m still worried about it.

The article also looks back more than 40 years, to when Ridley Scott was going to direct Dune in between filming Alien and Blade Runner. Scott says he had “a really good screenplay, had all the sets to go” — but the producer had wanted to save money by filming it in Mexico City, and Scott “didn’t love” the idea of spending a year there.




Source: Slashdot – Ridley Scott Is Terrified of AI: ‘It’s a Technical Hydrogen Bomb’

US Energy Department Funds Next-Gen Semiconductor Projects to Improve Power Grids

America’s long-standing Advanced Research Projects Agency (ARPA) is famous for developing the foundational technologies for the internet.

This week its energy division announced $42 million for projects enabling a “more secure and reliable” energy grid, “allowing it to utilize more solar, wind, and other clean energy.” More specifically, it funded 15 projects across 11 states to improve the reliability, resiliency, and flexibility of the grid “through the next-generation semiconductor technologies.”

Streamlining the coordinated operation of electricity supply and demand will improve operational efficiency, prevent unforeseen outages, allow faster recovery, minimize the impacts of natural disasters and climate-change-fueled extreme weather events, and reduce grid operating costs and carbon intensity.
Some highlights:

– The Georgia Institute of Technology will develop a novel semiconductor switching device to improve grid control, resilience, and reliability.
– Michigan’s Great Lakes Crystal Technologies will develop a diamond semiconductor transistor to support the control infrastructure needed for an energy grid with more distributed generation sources and more variable loads.
– Lawrence Livermore National Laboratory will develop an optically controlled semiconductor transistor to enable future grid control systems to accommodate higher voltage and current than state-of-the-art devices.
– California’s Opcondys will develop a light-controlled grid protection device to suppress destructive, sudden transient surges on the grid caused by lightning or electromagnetic pulses.
– Albuquerque’s Sandia National Laboratories will develop a novel solid-state surge arrester protecting the grid from very fast electromagnetic pulses that threaten grid reliability and performance.
America’s Secretary of Energy said the new investment “will support project teams across the country as they develop the innovative technologies we need to strengthen our grid security and bring reliable clean electricity to more families and businesses — all while combatting the climate crisis.”




Source: Slashdot – US Energy Department Funds Next-Gen Semiconductor Projects to Improve Power Grids

Continuing Commitment to Open Access, CERN Launches New Open Source Program Office

“The cornerstone of the open-source philosophy is that the recipients of technology should have access to all its building blocks…” writes the European Organization for Nuclear Research, “in order to study it, modify it and redistribute it to others.” This includes mechanical designs, schematics for electronics, and software code.
Ever since releasing the World Wide Web software under an open-source model in 1994, CERN has continuously been a pioneer in this field, supporting open-source hardware (with the CERN Open Hardware Licence), open access (with the Sponsoring Consortium for Open Access Publishing in Particle Physics — SCOAP3) and open data (with the Open Data Portal for the LHC experiments).

The CERN Open Data portal is a testimony to CERN’s policy of Open Access and Open Data. The portal allows the LHC experiments to share their data with a double focus: for the scientific community, including researchers outside the CERN experimental teams, as well as citizen scientists, and for the purposes of training and education through specially curated resources. The first papers based on data from the CERN Open Data portal have been published. Several CERN technologies are being developed with open access in mind. Invenio is an open-source library management package, now benefiting from international contributions from collaborating institutes, typically used for digital libraries. Indico is another open-source tool developed at CERN for conference and event management and used by more than 200 sites worldwide, including the United Nations. INSPIRE, the High Energy Physics information system, is another example of open source software developed by CERN together with DESY, Fermilab and SLAC.

And on Wednesday the European Organization for Nuclear Research launches its new Open Source Program Office “to help you with all issues relating to the release of your software and hardware designs.”
Sharing your work with collaborators in research and industry has many advantages, but it may also present some questions and challenges… The OSPO will support you, whether you are a member of the personnel or a user, to find the best solution by giving you access to a set of best practices, tools and recommendations. With representatives from all sectors at CERN, it brings together a broad range of expertise on open source practices… As well as supporting the CERN internal community, the OSPO will engage with external partners to strengthen CERN’s role as a promoter of open source.
Open source is a key pillar of open science. By promoting open source practices, the OSPO thus seeks to address one of CERN’s core ambitions: sharing our knowledge with the world. Ultimately, the aim is to increase the reach of open source projects from CERN to maximise their benefits for the scientific community, industry and society at large.
For Wednesday’s launch event “We will host distinguished open source experts and advocates from Nvidia, the World Health Organization and the Open Source Hardware Association to discuss the impact and future of open source.” There will be a live webcast of the event.




Source: Slashdot – Continuing Commitment to Open Access, CERN Launches New Open Source Program Office

A NASA Spacecraft Could Carry Your Name to Jupiter in 2024

An anonymous reader shared this report from the Washington Post:

In 2024, a new spacecraft will hurtle toward Jupiter in a bid to learn whether its moon Europa is capable of supporting life. The craft will carry more than high-tech sensors: It also will bear a poem and hundreds of thousands of human names.

Yours could be one of them.

NASA is asking people to submit their names ahead of the mission’s October 2024 launch. Those submitted by the end of 2023 will go into space on the Europa Clipper spacecraft, which should enter Jupiter’s orbit in 2030… They’ll eventually be stenciled onto a dime-sized microchip in microscopic writing, then attached to a metal plate engraved with the poem that will accompany the craft.
700,000 names have been submitted so far — and they’ll all be carried a distance of over 1.8 billion miles.
They’ll travel through space with a poem that ends by describing what we humans on earth are made of — including “a need to call out through the dark.”




Source: Slashdot – A NASA Spacecraft Could Carry Your Name to Jupiter in 2024

Google Maps' New Color Scheme Draws Criticism Online

Google Maps has added “a fresh color scheme, including a different look for parks and city blocks,” writes SFGate. “But it’s the changes to the app’s all-important road maps that are rankling online commentators…”
Previously, highways and freeways were depicted in bright yellow, which stood out against a stark white grid. Now, the app shows every road in various shades of gray, with major thoroughfares like Interstate 80 and Highway 1 showing up darker and thicker than other roadways. Raynell Cooper, an employee at the San Francisco Municipal Transportation Agency, called the new look “cartographically disappointing” in a Monday post to X, formerly known as Twitter. He added, “major local roads and limited-access highways (freeways) are basically indistinguishable.”
TechRadar has a side-by-side comparison of the old and new color schemes, quoting one Reddit user who says the new one is a bit harder to read quickly. “The toned down look is cute but not practical.”
And the Evening Standard shares more negative reactions, including one user who complained the new color scheme is “shockingly bad.”
“Hate it hate it hate it hate it. Yellow roads were so good, and everything was bright and cheery,” states another person on Reddit. “Now it’s depressing and the roads are hard to see when not fairly zoomed in, they just don’t pop like the yellow did.”
One Reddit user offered another complaint. “I think the water is a fairly significant change, it’s a much closer shade to the green of the land which makes it a little harder to differentiate at a quick glance.”
And another criticism came from a post on X. “15 years ago, I helped design Google Maps…” wrote designer Elizabeth Laraki. “Last week, the team dramatically changed the map’s visual design. I don’t love it.”

It feels colder, less accurate and less human. But more importantly, they missed a key opportunity to simplify and scale… Google Maps should have cleaned up the crud overlaying the map. So much stuff has accumulated on top of the map. Currently there are ~11 different elements obscuring it.
Tech blogger John Gruber writes, “This is a very long way of saying that Google Maps’s app design should be like Apple Maps.”




Source: Slashdot – Google Maps’ New Color Scheme Draws Criticism Online

America's Bowling Pins Face a Revolutionary New Technology: Strings

There’s yet another technological revolution happening, reports the Los Angeles Times. Bowling alleys across America “are ditching traditional pinsetters — the machines that sweep away and reset pins — in favor of contraptions that employ string.

“Think of the pins as marionettes with nylon cords attached to their heads. Those that fall are lifted out of the way, as if by levitation, then lowered back into place after each frame… European bowling alleys have used string pinsetters for decades because they require less energy and maintenance.

“All you need is someone at the front counter to run back when the strings tangle.”
String pinsetters mean big savings, maybe salvation, for an industry losing customers to video games and other newfangled entertainment. That is why the U.S. Bowling Congress recently certified them for tournaments and league play. But there is delicate science at play here. Radius of gyration, coefficient of restitution and other obscure forces cause tethered pins to fly around differently than their free-fall counterparts. They don’t even make the same noise. Faced with growing pushback, the bowling congress published new research this month claiming the disparity isn’t nearly as great as people think.
Using a giant mechanical arm, powered by hydraulics and air pressure, they rolled “thousands of test balls from every angle, with various speeds and spins, on string-equipped lanes,” according to the article:

They found a configuration that resulted in 7.1% fewer strikes and about 10 pins fewer per game as compared to bowling with traditional pinsetters… Officials subsequently enlisted 500 human bowlers for more testing and, this time, reported finding “no statistically significant difference.” But hundreds of test participants commented that bowling on strings felt “off.” The pins seemed less active, they said. There were occasional spares whereby one pin toppled another without making contact, simply by crossing strings.
Nothing could be done about the muted sound. It’s like hearing a drum roll — the ball charging down the lane — with no crashing cymbal at the end.
Still, one Northern California bowling alley spent $1 million to install the technology, and believes it will save them money — partly by cutting their electric bill in half. “We had a full-time mechanic and were spending up to $3,000 a month on parts.”
The article also remembers that once upon a time, bowling alleys reset their pins using pinboys, “actual humans — mostly teenagers… scrambling around behind the lanes, gathering and resetting by hand,” before they were replaced by machines after World War II.




Source: Slashdot – America’s Bowling Pins Face a Revolutionary New Technology: Strings

What Happened When California's State Government Examined the Risks and Benefits of AI?

An anonymous reader shared this report from the Los Angeles Times:

AI that can generate text, images and other content could help improve state programs but also poses risks, according to a report released by the governor’s office on Tuesday. Generative AI could help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments and answer questions about state services. Still, deploying the technology, the analysis warned, also comes with concerns around data privacy, misinformation, equity and bias. “When used ethically and transparently, GenAI has the potential to dramatically improve service delivery outcomes and increase access to and utilization of government programs,” the report stated…

AI advancements could benefit California’s economy. The state is home to 35 of the world’s 50 top AI companies, and data from PitchBook says the GenAI market could reach $42.6 billion in 2023, the report said. Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also top concerns, along with whether AI will take away jobs. “Given these risks, the use of GenAI technology should always be evaluated to determine if this tool is necessary and beneficial to solve a problem compared to the status quo,” the report said.




Source: Slashdot – What Happened When California’s State Government Examined the Risks and Benefits of AI?

Meta Knowingly Collected Data on Pre-Teens, Unredacted Evidence From Lawsuit Shows

The New York Times reports:

Meta has received more than 1.1 million reports of users under the age of 13 on its Instagram platform since early 2019 yet it “disabled only a fraction” of those accounts, according to a newly unsealed legal complaint against the company brought by the attorneys general of 33 states.

Instead, the social media giant “routinely continued to collect” children’s personal information, like their locations and email addresses, without parental permission, in violation of a federal children’s privacy law, according to the court filing. Meta could face hundreds of millions of dollars, or more, in civil penalties should the states prove the allegations. “Within the company, Meta’s actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed,” the complaint said, “and zealously protected from disclosure to the public….”

It also accused Meta executives of publicly stating in congressional testimony that the company’s age-checking process was effective and that the company removed underage accounts when it learned of them — even as the executives knew there were millions of underage users on Instagram… The lawsuit argues that Meta elected not to build systems to effectively detect and exclude such underage users because it viewed children as a crucial demographic — the next generation of users — that the company needed to capture to assure continued growth.

More from the Wall Street Journal:

An internal 2020 Meta presentation shows that the company sought to engineer its products to capitalize on the parts of youth psychology that render teens “predisposed to impulse, peer pressure, and potentially harmful risky behavior,” the filings show… “Teens are insatiable when it comes to ‘feel good’ dopamine effects,” the Meta presentation shows, according to the unredacted filing, describing the company’s existing product as already well-suited to providing the sort of stimuli that trigger the potent neurotransmitter. “And every time one of our teen users finds something unexpected their brains deliver them a dopamine hit….”

“In December 2017, an Instagram employee indicated that Meta had a method to ascertain young users’ ages but advised that ‘you probably don’t want to open this pandora’s box’ regarding age verification improvements,” the states say in the suit. Some senior executives raised the possibility that cracking down on underage usage could hurt Meta’s business… The states say Meta made little progress on automated detection systems or adequately staffing the team that reviewed user reports of underage activity. “Meta at times has a backlog of 2-2.5 million under-13 accounts awaiting action,” according to the complaint…

The unredacted material also includes allegations that Meta Chief Executive Mark Zuckerberg instructed his subordinates to give priority to boosting its platforms’ usage above the well being of users… Zuckerberg also repeatedly dismissed warnings from senior company officials that its flagship social-media platforms were harming young users, according to unsealed allegations in a lawsuit filed by Massachusetts earlier this month…

The complaint cites numerous other executives making public claims that were allegedly contradicted by internal documents. While Meta’s head of global safety, Antigone Davis, told Congress that the company didn’t consider profitability when designing products for teens, a 2018 internal email stated that product teams should keep in mind that “The lifetime value of a 13 y/o teen is roughly $270” when making product decisions.




Source: Slashdot – Meta Knowingly Collected Data on Pre-Teens, Unredacted Evidence From Lawsuit Shows

Google Confirms Its Schedule for Disabling Third-Party Cookies in Chrome – Starting in 2024

“The abolition of third-party cookies will make it possible to protect privacy-related data such as what sites users visit and what pages they view from advertising companies,” notes the Japan-based site Gigazine.

And this month “Google has confirmed that it is on track to start disabling third-party cookies across its Chrome browser in a matter of weeks,” writes TechRadar:

An internal email published online sees Google software engineer Johann Hofmann share with colleagues the company’s plan to switch off third-party cookies for 1% of Chrome users from Q1 2024 — a plan that was shared months ago and that, surprisingly, remains on track, given the considerable pushbacks so far… Hofmann explains that Google is still awaiting a UK Competition and Markets Authority consultation in order to address any final concerns before “Privacy Sandbox” gets the go-ahead.
The Register explores Google’s “Privacy Sandbox” idea:

Since 2019 — after it became clear that European data protection rules would require rethinking how online ads work — Google has been building a set of ostensibly privacy-preserving ad tech APIs known as the Privacy Sandbox… One element of the sandbox is the Topics API: that allows websites to ask Chrome directly what the user is interested in, based on their browser history, so that targeted ads can be shown. Thus, no need for any tracking cookies set by marketers following you around, though it means Chrome squealing on you unless you tell it not to…

Peter Snyder, VP of privacy engineering at Brave Software, which makes the Brave browser, told The Register in an email that the cookie cutoff and Privacy Sandbox remains problematic as far as Brave is concerned. “Replacing third-party cookies with Privacy Sandbox won’t change the fact that Google Chrome has the worst privacy protections of any major browser, and we’re very concerned about their upcoming plans,” he said. “Google’s turtle-paced removal of third-party cookies comes along with a large number of other changes, which when taken together, seriously harm the progress other browsers are making towards a user-first, privacy-protecting Web.

“Recent Google Chrome changes restrict the ability for users to modify, make private, and harden their Web experience (Manifest v3), broadcasting users’ interests to websites they visit (Topics), dissolving privacy boundaries on the Web (Related Sites), offloading the battery-draining costs of ad auctions on users (FLEDGE/Protected Audience API), and reducing user control and Web transparency (Signed Exchange/WebBundles),” Snyder explained. “And this is only a small list of examples from a much longer list of harmful changes being shipped in Chrome.”

Snyder said Google has characterized the removal of third-party cookies as getting serious about privacy, but he argued the truth is the opposite. “Other browsers have shown that a more private, more user-serving Web is possible,” he said. “Google removing third-party cookies should be more accurately understood as the smallest possible change it can make without harming Google’s true priority: its own advertising business.”

The Register notes that other browser makers such as Apple, Brave, and Mozilla have already begun blocking third-party cookies by default, while Google Chrome and Microsoft Edge “provide that option, just not out of the box.”

EFF senior staff technologist Jacob Hoffman-Andrews told The Register that “When Google Chrome finishes the project on some unspecified date in the future, it will be a great day for privacy on the web. According to the announcement, the actual phased rollout is slated to begin in Q3 2024, with no stated deadline to reach 100 percent. Let’s hope Google’s advertising wing does not excessively delay these critical privacy improvements.”

TechRadar points out that after the initial testing period in 2024, Google will begin its phased rollout of the cookie replacement program — starting in June.
Thanks to long-time Slashdot reader AmiMoJo for sharing the news.




Source: Slashdot – Google Confirms Its Schedule for Disabling Third-Party Cookies in Chrome – Starting in 2024

Why Do So Many Sites Have Bad Password Policies?

“Three out of four of the world’s most popular websites are failing to meet minimum requirement standards” for password security, reports Georgia Tech’s College of Computing. That means three out of four of the world’s most popular websites are “allowing tens of millions of users to create weak passwords.”

Using a first-of-its-kind automated tool that can assess a website’s password creation policies, researchers also discovered that 12% of websites completely lacked password length requirements. Assistant Professor Frank Li and Ph.D. student Suood Al Roomi in Georgia Tech’s School of Cybersecurity and Privacy created the automated assessment tool to explore all sites in the Google Chrome User Experience Report (CrUX), a database of one million websites and pages.

Li and Al Roomi’s method of inferring password policies succeeded on over 20,000 sites in the database and showed that many sites:
– Permit very short passwords
– Do not block common passwords
– Use outdated requirements like complex characters

The researchers also discovered that only a few sites fully follow standard guidelines, while most stick to outdated guidelines from 2004… More than half of the websites in the study accepted passwords with six characters or less, with 75% failing to require the recommended eight-character minimum. Around 12% had no length requirements, and 30% did not support spaces or special characters. Only 28% of the websites studied enforced a password block list, which means thousands of sites are vulnerable to cyber criminals who might try to use common passwords to break into a user’s account, also known as a password spraying attack.

Georgia Tech describes the new research as “the largest study of its kind.” (“The project was 135 times larger than previous works that relied on manual methods and smaller sample sizes.”)

“As a security community, we’ve identified and developed various solutions and best practices for improving internet and web security,” said assistant professor Li. “It’s crucial that we investigate whether those solutions or guidelines are actually adopted in practice to understand whether security is improving in reality.”

The Slashdot community has already noticed the problem, judging by a recent post from eggegick. “Every site I visit has its own idea of the minimum and maximum number of characters, the number of digits, the number of upper/lowercase characters, the number of punctuation characters allowed and even what punctuation characters are allowed and which are not.”
The limit of password size really torques me, as that suggests they are storing the password (they need to limit storage size), rather than its hash value (fixed size), which is a real security blunder. Also, the stupid dots drive me bonkers, especially when there is no “unhide” button. For crying out loud, nobody is looking over my shoulder! Make the “unhide” default.
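The storage-size point is easy to demonstrate: a cryptographic hash maps input of any length to a fixed-size digest, so a site that hashes passwords has no storage reason to cap their length. A minimal Python sketch (illustrative only — a production system should use a slow, salted scheme such as bcrypt, scrypt, or Argon2, not bare SHA-256):

```python
import hashlib

# A SHA-256 digest is always 32 bytes (64 hex characters),
# no matter how long the input password is.
for pw in ["hunter2", "x" * 10_000]:
    digest = hashlib.sha256(pw.encode()).hexdigest()
    print(len(pw), "->", len(digest))  # digest length is always 64
```

Since the stored value is fixed-size either way, a tight maximum-length rule is a hint (though not proof) that the site is keeping the password itself.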
“The ‘dots’ are bad security,” agrees long-time Slashdot reader Spazmania. “If you’re going to obscure the password you should also obscure the length of the password.” But in their comment on the original submission, they also point out that there is a standard for passwords, from the National Institute of Standards and Technology:
Briefly:
* Minimum 8 characters
* Must allow at least 64 characters.
* No constraints on what printing characters can be used (including high unicode)
* No requirements on what characters must be used or in what order or proportion
This is expected to be paired with a system which does some additional and critical things:

* Maintain a database of known compromised passwords (e.g. from public password dictionaries) and reject any passwords found in the database.
* Pair the password with a second authentication factor such as a security token or cell phone sms. Require both to log in.
* Limit the number of passwords which can be attempted per time period. At one attempt per second, even the smallest password dictionaries would take hundreds of years to try…
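The rules above can be sketched as a small validator. This is a hypothetical illustration of the NIST-style policy, with a three-entry toy blocklist standing in for a real compromised-password database:

```python
# Hypothetical NIST SP 800-63B-style password check:
# length 8 to 64+, no composition rules, reject known-compromised passwords.
BLOCKLIST = {"password", "12345678", "qwertyui"}  # stand-in for a real breach list

def password_ok(pw: str, max_len: int = 256) -> bool:
    if len(pw) < 8:                  # minimum 8 characters
        return False
    if len(pw) > max_len:            # must allow at least 64; cap can be generous
        return False
    if pw.lower() in BLOCKLIST:      # reject common/compromised passwords
        return False
    return True  # no character-class, ordering, or proportion requirements

print(password_ok("correct horse battery staple"))  # True: spaces are allowed
print(password_ok("password"))                      # False: blocklisted
print(password_ok("short"))                         # False: under 8 characters
```

Note what the validator deliberately does not do: no demand for digits, symbols, or mixed case, since the standard pairs a simple length-plus-blocklist rule with second factors and rate limiting rather than composition rules.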
Someone attempting to brute force a password from outside on a rate-limited system is limited to the rate, regardless of how computing power advances. If the system enforces a rate limit of 1 try per second, the time to crack an 8-character password containing only lower case letters is still more than 6,000 years.
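The arithmetic behind that figure checks out: at one guess per second, exhausting every 8-character lowercase password takes 26^8 seconds. A quick calculation:

```python
# Worst case: try every 8-character lowercase password at 1 guess/second.
keyspace = 26 ** 8                       # 208,827,064,576 candidate passwords
seconds_per_year = 365.25 * 24 * 3600
years = keyspace / seconds_per_year
print(f"{years:,.0f} years to exhaust")  # roughly 6,600 years
```

The average time to find a password is half the exhaustive figure, still comfortably over three millennia, which is why rate limiting is effective regardless of advances in attacker computing power.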




Source: Slashdot – Why Do So Many Sites Have Bad Password Policies?

How Python's New Security Developer Hopes To Help All Software Supply Chains

Long-time Slashdot reader destinyland writes: The Linux Foundation recently funded a new “security developer in residence” position for Python. (It’s funded through the Linux Foundation’s own “Open Software Security foundation”, which has a stated mission of partnering with open source project maintainers “to systematically find new, as-yet-undiscovered vulnerabilities in open source code, and get them fixed to improve global software supply chain security.”) The position went to the lead maintainer for the HTTP client library urllib3, the most downloaded package on the Python Package Index with over 10 billion downloads. But he hopes to create a ripple effect by demonstrating the impact of security investments in critical communities — ultimately instigating a wave of improvements to all software supply chains. (And he’s also documenting everything for easy replication by other communities…)

So far he’s improved the security of Python’s release processes with signature audits and security-hardening automation. But he also learned that CVE numbers were being assigned to newly-discovered vulnerabilities by the National Cyber Security Division of the America’s Department of Homeland Security — often without talking to anyone at the Python project. So by August he’d gotten the Python Software Foundation authorized as a CVE Numbering Authority, which should lead to more detailed advisories (including remediation information), now reviewed and approved by the responsible security response teams.

“The Python Software Foundation wants to help other Open Source organizations, and will be sharing lessons learned,” he writes in a blog post. And he now says he’s already been communicating with the curl project about his experiences to help them take the same step, and has even authored a guide to the process for other open source projects.




Source: Slashdot – How Python’s New Security Developer Hopes To Help All Software Supply Chains

Does OpenAI's Origins Explain the Sam Altman Drama?

Tech journalist Kara Swisher disagrees that Sam Altman’s (temporary) firing stemmed from a conflict between the “go-faster” people pushing for commercialization and a rival contingent wanting more safety-assuring guardrails. “He’s been talking about the problems,” Swisher said on CNN. “Compared to a lot of tech people, he’s talking about the problems. I think that’s a false dichotomy.”

At the same time, NPR argues, the firing and re-hiring of Sam Altman “didn’t come out of nowhere. In fact, the boardroom drama represented the boiling over of tensions that have long simmered under the surface of the company.”

The chaos at OpenAI can be traced back to the unusual way the company was structured. OpenAI was founded in 2015 by Altman, Elon Musk and others as a non-profit research lab. It was almost like an anti-Big Tech company; it would prioritize principles over profit. It wanted to, as OpenAI put it back then, develop AI tools that would “benefit humanity as a whole, unconstrained by a need to generate financial return.”

But in 2018, two things happened: First, Musk quit the board of OpenAI after he said he invested $50 million, cutting the then-unknown company off from more of the entrepreneur’s crucial financial backing. And secondly, OpenAI’s leaders grew increasingly aware that developing and maintaining advanced artificial intelligence models required an immense amount of computing power, which was incredibly expensive.

A year after Musk left, OpenAI created a for-profit arm. Technically, it is what’s known as a “capped profit” entity, which means investors’ possible profits are capped at a certain amount. Any remaining money is re-invested in the company. Yet the nonprofit’s board and mission still governed the company, creating two competing tribes within OpenAI: adherents to the serve-humanity-and-not-shareholders credo and those who subscribed to the more traditional Silicon Valley modus operandi of using investor money to release consumer products into the world as rapidly as possible in hopes of cornering a market and becoming an industry pacesetter… The question was, did Altman abandon OpenAI’s founding principles to try to scale up the company and sign up customers as fast as possible? And, if so, did that make him unsuited to helm a nonprofit created to develop AI products “free from financial obligations”?

Microsoft’s stock price hit an all-time high this week, reports the Wall Street Journal. (They also note that when OpenAI employees considered moving to Microsoft, CEO Satya Nadella “assured their potential colleagues that they wouldn’t even have to use Microsoft’s workplace-communications app Teams.”)

“But the ideal outcome for Microsoft was Altman going back to OpenAI as CEO, according to a person familiar with Nadella’s thinking. By opening Microsoft’s doors to the OpenAI team, Nadella increased Altman’s leverage to get his position back…”

Even after investing $13 billion, Microsoft didn’t have a board seat or visibility into OpenAI’s governance, since it worried that having too much sway would alarm increasingly aggressive regulators. That left Microsoft exposed to the risks of OpenAI’s curious structure… Microsoft has had to strike a tricky balance with OpenAI: safeguarding its investment while ensuring that its ownership stake remained below 50% to avoid regulatory pitfalls… AI is wildly expensive, and Microsoft’s spending is expected to soar as the company builds out the necessary computing infrastructure. And it’s unclear when or if it will be able to make back these upfront costs in added new revenue…
Nadella is banking on OpenAI’s independence leading to innovations that benefit Microsoft as much as humanity. But the uncertainty of the past week has shown the risks in one of the world’s most valuable companies outsourcing the future to a startup beyond its control.
When Chris Wallace asked Swisher whether we should be more concerned about the dangers of AI now — and of its potential to take jobs — Swisher had a different answer. “One of the concerns you should have is the consolidation of this into bigger companies. Microsoft really want to win here…”

But she didn’t let the conversation end without wryly underscoring the potential for AI. “I’d be concerned that there’s not enough innovation… It could be a good thing, Chris. Trust me, it could be a good thing. But it could also, you know, kill you.”
Thanks to Slashdot reader Tony Isaac for sharing the article.

Read more of this story at Slashdot.



Source: Slashdot – Does OpenAI’s Origins Explain the Sam Altman Drama?

As Doctor Who Turns 60, the TARDIS Flies Again Tonight

It was on November 23, 1963, that Doctor Who premiered on the BBC. And the many years since then have wrought their changes, writes the BBC:

Events on screen and off have shaped the character’s personality, their face changing to reflect Britain itself, and every version building on what has gone before. To truly understand Who, you have to know your history…

[T]he series was originally intended to teach children history as much as thrill them… [T]he Daleks were shouty miniaturised tanks, terrifying to a nation that had lived through World War 2… Scripts by the likes of Douglas Adams (who wrote The Hitchhiker’s Guide to the Galaxy) leaned into the show’s inherent strangeness… Interestingly, the new specials and series involve Marvel-owner Disney, who will stream it outside the UK and Ireland, in turn helping boost the budget.
The article handily summarizes the last 60 years. (“Perhaps the most shocking revelation of [2010 showrunner Steven Moffat’s] tenure was a hitherto unseen, past version of the Doctor, played by John Hurt. Other writers would take this idea and run with it…”) The article ends with the words, “Only time will tell.”

And elsewhere another BBC article notes that today “the TARDIS is set to return to BBC One and iPlayer.”
With David Tennant as the Fourteenth Doctor and Catherine Tate reprising her role as Donna Noble, the popular duo will make their spectacular return to mark the show’s 60th anniversary with three special episodes running each Saturday from 25 November…

Neil Patrick Harris as the Toymaker [is] set to cause all kinds of mayhem. It’s going to be an unmissable cosmic adventure, all before Ncuti Gatwa gets the keys to the TARDIS over the festive season.
Thanks to Alain Williams (Slashdot reader #2,972) for sharing the article.

Read more of this story at Slashdot.



Source: Slashdot – As Doctor Who Turns 60, the TARDIS Flies Again Tonight

How to Support Local Retailers on 'Small Business Saturday'

America celebrates “Small Business Saturday” today with special celebrations everywhere from Houston, Texas, to Buffalo, New York.
NBC News reports:
Sandwiched between Black Friday and Cyber Monday — historically the biggest and busiest retail days of the year — there’s another standout shopping event: Small Business Saturday. Started by American Express in 2010 and co-sponsored by the U.S. Small Business Administration since 2011, Small Business Saturday aims to create awareness about the impact shoppers have when they buy “small” year round, whether they physically visit stores or shop online.

This year, 85% of consumers say they’re likely to shop “small” during the holiday season, according to the American Express 2023 Shop Small Impact Study. That represents a multibillion dollar opportunity — consumers are expected to spend an estimated $125 billion at small businesses this holiday season, up 42% from $88 billion in 2022, as reported by Intuit QuickBooks.

Like CBS News, NBC has compiled its list of small businesses that can ship their products to you — and suggests leaving positive reviews online for your favorite small businesses. (“Amazon, for example, now adds badges to product pages on its site if items are sold by small businesses.”)
They also recommend interacting with your favorite small businesses on social media — while “the American Express small-business map allows you to input your zip code so it can recommend local shops in your area and beyond. Google also has a ‘small business’ filter on desktop and mobile, and one for Google Maps on mobile.”

The UK’s “Small Business Saturday” will happen next week, on the first Saturday in December.

Read more of this story at Slashdot.



Source: Slashdot – How to Support Local Retailers on ‘Small Business Saturday’

Ubuntu Budgie Switches to an Xfce Approach to Wayland

Last January the Register reported that the Budgie desktop environment was planning to switch from using GNOME to Enlightenment. But this week Budgie’s project lead David Mohammed and packaging guru Sam Lane “passed on news of a rift — and indeed possible divorce — between Budgie and Enlightenment,” the Register reported. “And it’s caused by Wayland.”

The development team of the Budgie desktop is changing course and will work with the Xfce developers toward Budgie’s Wayland future…

While Enlightenment does have some Wayland support, in the project’s own words this is “still considered experimental and not for regular end users.” Mohammed told us… “Progress though towards a full implementation currently doesn’t fit into the deemed urgent nature to move to Wayland (Red Hat dropping further X11 development, and questions as to any organisation stepping up, etc.)”

So, instead, Budgie is exploring different ways to build a Wayland-only environment. For now, as we mentioned when looking at Ubuntu’s 23.10 release, there’s a new windowing library, Magpie. Magpie 0.9 is what the project describes as “a soft-fork of GNOME’s mutter at version 43” — the term soft fork meaning it’s a temporary means to an end, rather than intended to form an on-going independent continuation.

For the future, though, Mohammed told us… “[T]he Budgie team has been evaluating options to move forward. XFCE are doing some really great work in this area with libxfce4windowing — a compatibility layer bridging Wayland and X11, allowing the move in a logical direction without needing a big-bang approach. To date, most of the current codebase has already been reworked and is ready for a Wayland-only approach without impacting further development and enhancements.”
Mohammed later told the Register, “It makes sense for the more dynamic smaller projects to work together where there are shared aims.”

Read more of this story at Slashdot.



Source: Slashdot – Ubuntu Budgie Switches to an Xfce Approach to Wayland

Cards Against Humanity's Black Friday Prank: Launching Its Own Social Media Site

Long-time Slashdot reader destinyland writes: The popular party game “Cards Against Humanity” continued their tradition of practical jokes on Black Friday. They created a new social network where users can perform only one action: posting the word “yowza.”

They then announced it on their official social media accounts on Instagram, Facebook, and X…
Regardless of what words you type into the window, they’re replaced with the word yowza. “For just $0.99, you’ll get an exclusive black check by your name,” reads an announcement on the site, “and the ability to post a new word: awooga.”

It’s a magical land where “yowfluencers” keep “reyowzaing” the “yowzas” of other users. And there’s also a tab for trending hashtags. (Although, yes, they all seem to be “yowza”.) But they’ve already gotten a write-up in the trade publication Advertising Age.

“With every bad thing happening in the world, social media is always right there, making it worse,” a spokesperson said…. “[W]e asked ourselves: Is there a way we could make a social network that doesn’t suck? At first, the answer was ‘no.’ The content moderation problem is just too hard. And then we thought, why not solve the content moderation problem by having no content? That’s Yowza….”
When creating your profile on the network there’s a dropdown menu for specifying your age and location — although all of the choices are yowza. More details from Advertising Age:

The company said the word “yowza” was the first that came to mind when its creative teams were brainstorming—and it just stuck. “It’s dumb, it’s ridiculous, it means nothing. It’s perfect,” the rep said.

And the service is still evolving, with fresh user upgrades. The official Yowza store will now also sell you the ability to post the word Shazam — for $29.99. (Also on sale are 100,000 followers — for 99 cents.) But there’s also an official FAQ which articulates the service’s deep commitment to protecting their users’ privacy.

Do you promise you won’t share my private information with the Chinese Communist Party, like TikTok?
Yowza.

Read more of this story at Slashdot.



Source: Slashdot – Cards Against Humanity’s Black Friday Prank: Launching Its Own Social Media Site

In Just 15 Months, America Made $37B In Clean Energy Investments In Fossil Fuel-Reliant Regions

America passed a climate bill in August of 2022 with incentives to build wind and solar energy in regions that historically relied on fossil fuels. And sure enough, since then “a disproportionate amount of wind, solar, battery and manufacturing investment is going to areas that used to host fossil fuel plants,” reports the Washington Post.

They cite a new analysis of investment trends from independent research firm Rhodium Group and MIT’s Center for Energy and Environmental Policy Research:

In Carbon County, Wyo. — a county named for its coal deposits — a power company is building hundreds of wind turbines. In Mingo County, W.Va., where many small towns were once coal towns, the Adams Fork Energy plant will sit on a former coal mining site and produce low-carbon ammonia… While communities that once hosted coal, oil or gas infrastructure make up only 18.6 percent of the population, they received 36.8 percent of the clean energy investment in the year after the Inflation Reduction Act’s passage. “We’re talking about in total $100 billion in investment in these categories,” said Trevor Houser, a partner at Rhodium Group. “So $37 billion investment in a year for energy communities — that’s a lot of money….”

Most significantly, 56.6 percent of investment in U.S. wind power in the past year has gone to energy communities, as well as 45.5 percent of the storage and battery investment… The analysis also found that significant amounts of clean energy investment were going to disadvantaged communities, defined as communities with environmental or climate burdens, and low-income communities. Many of the states benefiting are solidly Republican…

Josh Freed, senior vice president for climate and energy at the center-left think tank Third Way, is not sure whether the clean energy investments will make a difference for next year’s election. But in the long term, he argues, rural Republican areas will become more dependent on clean energy — potentially shifting party alliances and shifting the position of the Republican Party itself. “It’s going to change these fossil fuel communities,” he said.

Read more of this story at Slashdot.



Source: Slashdot – In Just 15 Months, America Made $37B In Clean Energy Investments In Fossil Fuel-Reliant Regions

Google Maps Error Misleads Row of Cars Into the Mojave Desert

“Every car we were driving with was heading that direction…” Shelby Easler says in a TikTok video, “so we assumed this was going somewhere…”

But SFGate reports that instead of a handy “alternate route,” Google Maps was leading her and her two passengers “far off the major highway and into Nevada’s fierce deserts on an off-roading trail.”
Easler’s car was not the only bushwhacker. In Shelby’s viral TikTok, a trail of cars closely follows behind them. “The first driver that turned around talked to us to tell us that the road gets washed out the higher into the mountain you get, and we have to turn around since the path leads nowhere. He was in a huge truck and was just driving straight through the bushes and shrubs to let people know to turn around,” Easler said.
1.5 million people have viewed Easler’s earlier footage of their road to nowhere. The off-roading trail was apparently only wide enough for traffic in one direction, and attempting to return the other way, “We were driving over bushes and rocks and a lot of the cars couldn’t even make it,” Easler says in the second video. “Which is kind of why our car broke down.”

They told SFGate that ultimately “We had to leave the car in Vegas, and it got towed to the service center of a dealership. They said the rear, right tire was coming off, and the alignment was messed up too. Low-key a pretty expensive fix.”

They eventually called the highway patrol to shut down the road that Google Maps was sending people to, because “With every car coming in, every single car was getting trapped.”

Read more of this story at Slashdot.



Source: Slashdot – Google Maps Error Misleads Row of Cars Into the Mojave Desert

EU, Chinese, French Regulators Seeking Info on Graphic Cards, Nvidia Says

Regulators in the European Union, China and France have asked for information on Nvidia’s graphics cards, with more requests expected in the future, the U.S. chip giant said in a regulatory filing. From a report: Nvidia is the world’s largest maker of chips used both for artificial intelligence and for computer graphics. Demand for its chips jumped following the release of the generative AI application ChatGPT late last year. The California-based company has a market share of around 80% via its chips and other hardware and its powerful software that runs them.

Its graphics cards are high-performance devices that enable powerful graphics rendering and processing for use in video editing, video gaming and other complex computing operations. The company said this has attracted regulatory interest around the world. “For example, the French Competition Authority collected information from us regarding our business and competition in the graphics card and cloud service provider market as part of an ongoing inquiry into competition in those markets,” Nvidia said in a regulatory filing dated Nov. 21.

Read more of this story at Slashdot.



Source: Slashdot – EU, Chinese, French Regulators Seeking Info on Graphic Cards, Nvidia Says