Sam Bankman-Fried Testifies, Says He 'Skimmed Over' FTX Terms of Service

An anonymous reader quotes a report from Ars Technica: Sam Bankman-Fried took the stand in his criminal trial today in an attempt to avoid decades in prison for alleged fraud at cryptocurrency exchange FTX and its affiliate Alameda Research. […] Some of the alleged fraud relates to how Alameda borrowed money from FTX. In testimony today, “Bankman-Fried said he believed that under FTX’s terms of service, sister firm Alameda was allowed in many circumstances to borrow funds from the exchange,” the WSJ wrote. Bankman-Fried reportedly said the terms of service were written by FTX lawyers and that he only “skimmed” certain parts. “I read parts in depth. Parts I skimmed over,” Bankman-Fried reportedly said after [U.S. District Judge Lewis Kaplan] asked if he read the entire terms of service document.

Prosecutor Danielle Sassoon asked Bankman-Fried if he had “any conversations with lawyers about Alameda spending customer money that was deposited into FTX bank accounts,” according to Bloomberg’s live coverage. “I don’t recall any conversations that were contemporaneous and phrased that way,” Bankman-Fried answered. “I had so many conversations with lawyers later when we were trying to reconcile things in November 2022,” Bankman-Fried also said. “There were conversations around Alameda being used as a payment processor, a payment agent for FTX. I frankly don’t recall conversations with lawyers or otherwise about the usage of the funds or the North Dimension accounts.” North Dimension was an Alameda subsidiary. The Securities and Exchange Commission has alleged that “Bankman-Fried directed FTX to have customers send funds to North Dimension in an effort to hide the fact that the funds were being sent to an account controlled by Alameda.” […]

In an overview of the alleged crimes, the indictment said Bankman-Fried “misappropriated and embezzled FTX customer deposits and used billions of dollars in stolen funds… to enrich himself; to support the operations of FTX; to fund speculative venture investments; to help fund over a hundred million dollars in campaign contributions to Democrats and Republicans to seek to influence cryptocurrency regulation; and to pay for Alameda’s operating costs.” He was also accused of making “false and fraudulent statements and representations to FTX’s investors and Alameda’s lenders.” SBF’s legal team decided that he would take the stand in his own defense — a decision legal observers consider risky, as he will have to face cross-examination from federal prosecutors. In a rather unusual move, Judge Kaplan sent the jury home for a day to conduct a hearing on whether certain parts of Bankman-Fried’s testimony are admissible.

During his testimony, Bankman-Fried discussed various aspects of the case, including FTX’s terms of service, loans from Alameda to him and other executives, a hack into FTX, and his use of the encrypted messaging service Signal.

Read more of this story at Slashdot.



Source: Slashdot – Sam Bankman-Fried Testifies, Says He ‘Skimmed Over’ FTX Terms of Service

Barcode Leads To Arrest of Texas Litterbug Behind 200 Pounds of Dumped Trash

“Illegal dumping is way too common, and often leads to no consequences,” writes Slashdot reader Tony Isaac. “In some urban neighborhoods, people dump entire truckloads of waste in ditches along the streets. Maybe authorities have found a way to make a dent in this problem.” Houston Chronicle reports: The Texas Game Wardens were recently able to track down and arrest a litterbug allegedly behind an illegal dumping of over 200 pounds of construction materials using a barcode left at the scene of the crime, according to a news release from the Texas Parks and Wildlife Department (TPWD). The pile of trash, which included sheetrock, housing trim, two-by-fours and various plastic items, was reportedly dumped along a bridge and creek on private land instead of being properly disposed of.

Hidden among the garbage, however, was a box containing a barcode that would help identify the person behind the heap. A Smith County Game Warden used the barcode to trace the materials to a local store, and ultimately to the owner of the credit card used for the purchase, TPWD said. The game warden interviewed the homeowner, who had reportedly just finished remodeling his home. “The homeowner explained that he paid someone familiar to the family who offered to haul off their used material and trash for a minimum fee,” Texas Games Wardens said in a statement. “Unfortunately, the suspect kept the money and dumped the trash onto private property.”

Working with the game warden, Smith County Sheriff’s Office environmental deputies eventually arrested the suspect on charges of felony commercial dumping. At the time of the arrest, the suspect’s truck was reportedly found loaded with even more building materials and trash, TPWD said. The state agency did not identify the suspect or disclose when or where they were arrested.

Read more of this story at Slashdot.



Source: Slashdot – Barcode Leads To Arrest of Texas Litterbug Behind 200 Pounds of Dumped Trash

iPhones Have Been Exposing Your Unique MAC Despite Apple's Promises Otherwise

Dan Goodin reports via Ars Technica: Three years ago, Apple introduced a privacy-enhancing feature that hid the Wi-Fi address of iPhones and iPads when they joined a network. On Wednesday, the world learned that the feature has never worked as advertised. Despite promises that this never-changing address would be hidden and replaced with a private one that was unique to each SSID, Apple devices have continued to display the real one, which in turn got broadcast to every other connected device on the network. […]

In 2020, Apple released iOS 14 with a feature that, by default, hid Wi-Fi MACs when devices connected to a network. Instead, the device displayed what Apple called a “private Wi-Fi address” that was different for each SSID. Over time, Apple has enhanced the feature, for instance, by allowing users to assign a new private Wi-Fi address for a given SSID. On Wednesday, Apple released iOS 17.1. Among the various fixes was a patch for a vulnerability, tracked as CVE-2023-42846, which prevented the privacy feature from working. Tommy Mysk, one of the two security researchers Apple credited with discovering and reporting the vulnerability (Talal Haj Bakry was the other), told Ars that he tested all recent iOS releases and found the flaw dates back to version 14, released in September 2020. “From the get-go, this feature was useless because of this bug,” he said. “We couldn’t stop the devices from sending these discovery requests, even with a VPN. Even in the Lockdown Mode.”

When an iPhone or any other device joins a network, it triggers a multicast message that is sent to all other devices on the network. By necessity, this message must include a MAC. Beginning with iOS 14, this value was, by default, different for each SSID. To the casual observer, the feature appeared to work as advertised. The “source” listed in the request was the private Wi-Fi address. Digging in a little further, however, it became clear that the real, permanent MAC was still broadcast to all other connected devices, just in a different field of the request. Mysk published a short video showing a Mac using the Wireshark packet sniffer to monitor traffic on the local network the Mac is connected to. When an iPhone running iOS prior to version 17.1 joins, it shares its real Wi-Fi MAC on port 5353/UDP.
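Apple has not documented how it derives these per-SSID private addresses, but the intended behavior can be illustrated with a sketch: a private MAC should be stable for a given network (so the router still recognizes the device) yet unlinkable across networks, with the locally administered bit set and the multicast bit cleared. The function name and the `device_secret` parameter below are hypothetical stand-ins for whatever per-device key material the OS actually uses.

```python
import hashlib

def private_wifi_address(ssid: str, device_secret: bytes) -> str:
    """Illustrative sketch of a per-SSID private MAC address.

    Apple's real derivation is undocumented; this only shows the idea:
    stable per network, different across networks, and marked as a
    locally administered unicast address.
    """
    digest = hashlib.sha256(device_secret + ssid.encode()).digest()
    octets = bytearray(digest[:6])
    # Set the locally administered bit (0x02), clear the multicast bit (0x01).
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{b:02x}" for b in octets)

secret = b"example-device-secret"
home = private_wifi_address("HomeWiFi", secret)
cafe = private_wifi_address("CafeWiFi", secret)

print(home)
print(home == private_wifi_address("HomeWiFi", secret))  # stable: True
print(home != cafe)  # differs per SSID
```

The point of the bug was that none of this mattered in practice: whatever address the derivation produced, the hardware MAC still leaked in another field of the multicast request.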

Read more of this story at Slashdot.



Source: Slashdot – iPhones Have Been Exposing Your Unique MAC Despite Apple’s Promises Otherwise

Meta's Threads App Has 'Just Under' 100 Million Monthly Active Users, Says Zuckerberg

“Threads is officially a success,” writes long-time Slashdot reader destinyland. 9to5Mac reports: During Meta’s quarterly earnings call today, CEO Mark Zuckerberg offered an update on Threads, saying that the service has “just under” 100 million monthly active users. When Threads launched in July, the app quickly rocketed to having 100 million users within just a few days. While that growth is believed to have slowed down, as expected when something takes off so quickly, Zuckerberg says the service is currently at almost 100 million active users. Note the difference in terms, too. Having 100 million “users” is one thing, while having 100 million monthly active users is quite different — and more impressive.

The number is also impressive when you consider that Threads isn’t available to the millions of people who live in the European Union. As noted by The Verge, Zuckerberg also reiterated today that Meta’s goal is to turn Threads into a “billion-person public conversations app” that is “a bit more positive” than some of the competition. According to Zuckerberg, Threads is on the way to achieving that goal.

Read more of this story at Slashdot.



Source: Slashdot – Meta’s Threads App Has ‘Just Under’ 100 Million Monthly Active Users, Says Zuckerberg

T-Mobile Walks Back Forced Plan Migration, Won't Make People Switch Plans After All

An anonymous reader quotes a report from CNET: T-Mobile caused a bit of a stir earlier this month when a leak revealed it planned to move people from older, cheaper plans to pricier ones starting with their November bill cycle. On Wednesday, the carrier officially walked back the changes with CEO Mike Sievert confirming that they would not happen. “We tend to do tests and pilots of things quite a bit to try to figure out what’s the right answer,” Sievert said on a company earnings call, in response to a question about industry pricing and how it could raise its average revenues per user, a key industry metric. “In this case, we had a test sell to try to understand customer interest in, and acceptance of, migrating off old legacy rate plans to something that’s higher value, for them and for us.”

Sievert noted that the company was doing training around this test and said it wasn’t planned to be a “broad, national thing.” In its statement confirming the leak, the company told CNET earlier this month that the notices it was sending out were going to “a small number” of its users, but the carrier never clarified what a “small number” actually meant and didn’t respond to that question when asked. At the time, the carrier said that the switch would generally see customers pay “an increase of approximately $10 per line” per month.

With the “plenty of feedback” the company received following the leak, Sievert said that T-Mobile has learned that this “particular test sell isn’t something that our customers are going to love.” He mentioned that no migrations of plans have actually rolled out. As for what will happen going forward, the carrier will continue to do tests and pilots for different changes, Mike Katz, T-Mobile’s president of marketing, strategy and products, said on the call.

Read more of this story at Slashdot.



Source: Slashdot – T-Mobile Walks Back Forced Plan Migration, Won’t Make People Switch Plans After All

Privacy Advocate Challenges YouTube's Ad Blocking Detection Scripts Under EU Law

“Privacy advocate Alexander Hanff has filed a complaint with the Irish Data Protection Commission (DPC) challenging YouTube’s use of JavaScript code to detect the presence of ad blocking extensions in the browsers of website visitors,” writes long-time Slashdot reader Dotnaught. “He claims that under Europe’s ePrivacy Directive, YouTube needs to ask permission to run its detection script because it’s not technically necessary. If the DPC agrees, it would be a major win for user privacy.” The Register reports: Asked how he hopes the Irish DPC will respond, Hanff replied via email, “I would expect the DPC to investigate and issue an enforcement notice to YouTube requiring them to cease and desist these activities without first obtaining consent (as per [Europe’s General Data Protection Regulation (GDPR)] standard) for the deployment of their -spyware- detection scripts; and further to order YouTube to unban any accounts which have been banned as a result of these detections and to delete any personal data processed unlawfully (see Article 5(1) of GDPR) since they first started to deploy their -spyware- detection scripts.”

Hanff’s use of strikethrough formatting acknowledges the legal difficulty of using the term “spyware” to refer to YouTube’s ad block detection code. The security industry’s standard defamation defense terminology for such stuff is PUPs, or potentially unwanted programs. Hanff, who reports having a Master’s in Law focused on data and privacy protection, added that the ePrivacy Directive is lex specialis to GDPR. That means where laws overlap, the specific one takes precedence over the more general one. Thus, he argues, personal data collected without consent is unlawful under Article 5(1) of GDPR and cannot be lawfully processed for any purpose.

With regard to YouTube’s assertion that using an ad blocker violates the site’s Terms of Service, Hanff argued, “Any terms and conditions which restrict the legal rights and freedoms of an EU citizen (and the point of Article 5(3) of the ePrivacy Directive is specifically to protect the fundamental right to Privacy under Article 7 of the Charter of Fundamental Rights of the European Union) are void under EU law.” Therefore, in essence, “Any such terms which restrict the rights of EU persons to limit access to their terminal equipment would, as a result, be void and unenforceable,” he added.

Read more of this story at Slashdot.



Source: Slashdot – Privacy Advocate Challenges YouTube’s Ad Blocking Detection Scripts Under EU Law

Humanity At Risk From AI 'Race To the Bottom,' Says MIT Tech Expert

An anonymous reader quotes a report from The Guardian: Max Tegmark, a professor of physics and AI researcher at the Massachusetts Institute of Technology, said the world was “witnessing a race to the bottom that must be stopped.” Tegmark organized an open letter published in April, signed by thousands of tech industry figures including Elon Musk and the Apple co-founder Steve Wozniak, that called for a six-month hiatus on giant AI experiments. “We’re witnessing a race to the bottom that must be stopped,” Tegmark told the Guardian. “We urgently need AI safety standards, so that this transforms into a race to the top. AI promises many incredible benefits, but the reckless and unchecked development of increasingly powerful systems, with no oversight, puts our economy, our society, and our lives at risk. Regulation is critical to safe innovation, so that a handful of AI corporations don’t jeopardize our shared future.”

In a policy document published this week, 23 AI experts, including two modern “godfathers” of the technology, said governments must be allowed to halt development of exceptionally powerful models. Gillian Hadfield, a co-author of the paper and the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, said AI models to be built over the next 18 months would be many times more powerful than those already in operation. “There are companies planning to train models with 100x more computation than today’s state of the art, within 18 months,” she said. “No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.”

The paper, whose authors include Geoffrey Hinton and Yoshua Bengio — two winners of the ACM Turing award, the “Nobel prize for computing” — argues that powerful models must be licensed by governments and, if necessary, have their development halted. “For exceptionally capable future models, eg models that could circumvent human control, governments must be prepared to license their development, pause development in response to worrying capabilities, mandate access controls, and require information security measures robust to state-level hackers, until adequate protections are ready.” The unrestrained development of artificial general intelligence, the term for a system that can carry out a wide range of tasks at or above human levels of intelligence, is a key concern among those calling for tighter regulation. Further reading: AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief

Read more of this story at Slashdot.



Source: Slashdot – Humanity At Risk From AI ‘Race To the Bottom,’ Says MIT Tech Expert

iFixit Now Sells Microsoft Surface Parts For Repair

iFixit has started selling genuine replacement parts for Microsoft Surface devices. From a report: The company now offers SSDs, batteries, screens, kickstands, and a whole bunch of other parts for 15 Surface products. Some of the devices on that list include the Surface Pro 9, Surface Laptop 5, Surface Go 4, Surface Studio 2 Plus, and others. You can check out the entire list of supported products and parts in this post on Microsoft’s website. In addition to supplying replacement parts, iFixit also offers disassembly videos and guides for each product, as well as toolkits that include things like an opening tool, tweezers, drivers, and more.

Read more of this story at Slashdot.



Source: Slashdot – iFixit Now Sells Microsoft Surface Parts For Repair

UK Regulator Trying To Block Release of Shell North Sea Documents

The UK’s oil and gas regulator is coming under fire from environmental groups for using lawyers to try to prevent the publication of five key documents relating to the environmental impact of Shell’s activities in the North Sea. From a report: At a hearing in December, a legal representative for the North Sea Transition Authority (NSTA) is expected to argue against the publication of documents that contain details about the risk of pollution as a result of decommissioning the Brent oilfield, which was operated by Shell for more than 40 years. It says it opposes publication “on a matter of process basis.” Shell has applied for an exemption from international rules that require all infrastructure to be removed from the field and the UK government is deciding whether it will allow the oil company to leave the 170-metre-high oil platform legs in place for the three platforms known as Bravo, Charlie and Delta.

A total of 64 concrete storage cells are contained in the leg structures, 42 of which have previously been used for oil storage and separation. Most of the cells are the size of seven Olympic swimming pools, and collectively still contain an estimated 72,000 tonnes of contaminated sediment and 638,000 cubic metres of oily water. Environmental groups believe the documents held by the NSTA would reveal new information about long-term environmental dangers that is relevant to other North Sea oil developments, including Equinor’s plans to develop Rosebank, the UK’s largest untapped field.

Read more of this story at Slashdot.



Source: Slashdot – UK Regulator Trying To Block Release of Shell North Sea Documents

Google Adds Generative AI Threats To Its Bug Bounty Program

Google has expanded its vulnerability rewards program (VRP) to include attack scenarios specific to generative AI. From a report: In an announcement shared with TechCrunch ahead of publication, Google said: “We believe expanding the VRP will incentivize research around AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone.” Google’s vulnerability rewards program (or bug bounty) pays ethical hackers for finding and responsibly disclosing security flaws.

Given that generative AI brings to light new security issues, such as the potential for unfair bias or model manipulation, Google said it sought to rethink how bugs it receives should be categorized and reported. The tech giant says it’s doing this by using findings from its newly formed AI Red Team, a group of hackers that simulate a variety of adversaries, ranging from nation-states and government-backed groups to hacktivists and malicious insiders to hunt down security weaknesses in technology. The team recently conducted an exercise to determine the biggest threats to the technology behind generative AI products like ChatGPT and Google Bard.

Read more of this story at Slashdot.



Source: Slashdot – Google Adds Generative AI Threats To Its Bug Bounty Program

Hyundai To Hold Software-Upgrade Clinics Across the US For Vehicles Targeted By Thieves

Hyundai said this week that it will set up “mobile clinics” at five U.S. locations to provide anti-theft software upgrades for vehicles now regularly targeted by thieves using a technique popularized on TikTok and other social media platforms. From a report: The South Korean automaker will hold the clinics, which will run for two to three days on or adjacent to weekends, in New York City; Chicago; Minneapolis; St. Paul, Minnesota; and Rochester, New York. The clinics will take place between Oct. 28 and Nov. 18. Hyundai said it will also support single-day regional clinics run by dealerships before the end of 2023, although it didn’t name locations or dates.

Read more of this story at Slashdot.



Source: Slashdot – Hyundai To Hold Software-Upgrade Clinics Across the US For Vehicles Targeted By Thieves

OpenAI Forms Team To Study 'Catastrophic' AI Risks, Including Nuclear Threats

OpenAI today announced that it’s created a new team to assess, evaluate and probe AI models to protect against what it describes as “catastrophic risks.” From a report: The team, called Preparedness, will be led by Aleksander Madry, the director of MIT’s Center for Deployable Machine Learning. (Madry joined OpenAI in May as “head of Preparedness,” according to LinkedIn.) Preparedness’ chief responsibilities will be tracking, forecasting and protecting against the dangers of future AI systems, ranging from their ability to persuade and fool humans (like in phishing attacks) to their malicious code-generating capabilities.

Some of the risk categories Preparedness is charged with studying seem more… far-fetched than others. For example, in a blog post, OpenAI lists “chemical, biological, radiological and nuclear” threats as areas of top concern as it pertains to AI models. OpenAI CEO Sam Altman is a noted AI doomsayer, often airing fears — whether for optics or out of personal conviction — that AI “may lead to human extinction.” But telegraphing that OpenAI might actually devote resources to studying scenarios straight out of sci-fi dystopian novels is a step further than this writer expected, frankly.

Read more of this story at Slashdot.



Source: Slashdot – OpenAI Forms Team To Study ‘Catastrophic’ AI Risks, Including Nuclear Threats

Google Exec Testifies Innovation Key To Avoid Becoming 'Next Road Kill'

Google executive Prabhakar Raghavan on Thursday detailed challenges the search and advertising giant faces from smaller rivals, describing efforts to avoid becoming “the next road kill.” From a report: Raghavan testified at the ongoing antitrust trial in the suit brought by the U.S. Justice Department and a coalition of state attorneys general, alleging Alphabet’s Google unlawfully abused its dominance in the search-engine market to maintain monopoly power. Raghavan, asked about a 1998 article about Yahoo!’s dominance of search at the time, said he was acutely aware rivals from Expedia.com to Instagram to TikTok competed for users’ attention.

“I feel a keen sense not to become the next road kill,” said Raghavan, a senior vice president at Google who reports to chief executive Sundar Pichai. Raghavan said Google had some 8,000 engineers and product managers working on search, with about 1,000 involved in search quality. Raghavan’s description of Google struggling to stay relevant clashed with the Justice Department’s depiction of a behemoth that broke antitrust law to retain dominance of online search and some aspects of advertising, including paying an estimated $10 billion annually to smartphone makers and wireless carriers to be the default search engine on devices. Google’s share of the search engine market is near 90%.

Read more of this story at Slashdot.



Source: Slashdot – Google Exec Testifies Innovation Key To Avoid Becoming ‘Next Road Kill’

The UK's Controversial Online Safety Bill Finally Becomes Law

An anonymous reader shares a report: The UK’s Online Safety Bill, a wide-ranging piece of legislation that aims to make the country “the safest place in the world to be online” received royal assent today and became law. The bill has been years in the making and attempts to introduce new obligations for how tech firms should design, operate, and moderate their platforms. Specific harms the bill aims to address include underage access to online pornography, “anonymous trolls,” scam ads, the nonconsensual sharing of intimate deepfakes, and the spread of child sexual abuse material and terrorism-related content.

Although it’s now law, online platforms will not need to immediately comply with all of their duties under the bill, which is now known as the Online Safety Act. UK telecoms regulator Ofcom, which is in charge of enforcing the rules, plans to publish its codes of practice in three phases. The first covers how platforms will have to respond to illegal content like terrorism and child sexual abuse material, and a consultation with proposals on how to handle these duties is due to be published on November 9th.

Read more of this story at Slashdot.



Source: Slashdot – The UK’s Controversial Online Safety Bill Finally Becomes Law

Infosys Founder Says India's Work Culture Must Change: 'Youngsters Should Work 70 Hours a Week'

NR Narayana Murthy, the founder of software consultancy giant Infosys, urged youngsters in India to work 70 hours a week if they want the nation to compete with other economies. From a report: Narayana Murthy, in conversation with former Infosys CFO Mohandas Pai, said that India’s work productivity is among the lowest in the world. In order to compete with countries like China, India’s youngsters must put in extra hours of work — like Japan and Germany did after World War 2.

He also blamed other issues like corruption in the government and bureaucratic delays, saying: “India’s work productivity is one of the lowest in the world. Unless we improve our work productivity, unless we reduce corruption in the government at some level, because we have been reading I don’t know the truth of it, unless we reduce the delays in our bureaucracy in taking this decision, we will not be able to compete with those countries that have made tremendous progress.” Murthy, 77, added his request to the youngsters of today. “So therefore, my request is that our youngsters must say, ‘This is my country. I’d like to work 70 hours a week.'”

Read more of this story at Slashdot.



Source: Slashdot – Infosys Founder Says India’s Work Culture Must Change: ‘Youngsters Should Work 70 Hours a Week’

Western Digital and Kioxia Scrap Memory Chip Merger Talks

Negotiations to merge Western Digital’s semiconductor memory business and Japan’s Kioxia Holdings have been terminated, Nikkei reported Thursday. From the report: The companies were aiming to reach an agreement by the end of October. By Thursday, U.S.-based Western Digital had notified Kioxia that it would exit the talks after the merger failed to secure approval from SK Hynix, an indirect shareholder in Kioxia. The companies were also unable to agree on the merger’s conditions with Bain Capital, Kioxia’s top shareholder. Kioxia, formerly known as Toshiba Memory, and Western Digital have both suffered a downturn in earnings amid headwinds in memory chips. They are each seeking capital infusions and other measures to help bolster operations.

Kioxia ranks third in global market share for NAND flash memory, while Western Digital ranks fourth. The proposed merger would have resulted in an entity that rivals market leader Samsung Electronics, and the companies had hoped the larger scale would lead to greater profits and growth. But SK Hynix officially declared its opposition to the deal on Thursday. SK Hynix had invested about 400 billion yen ($2.67 billion at current rates) in the Bain-led consortium that acquired what is now Kioxia from Toshiba. The South Korean company is now second only to Samsung in NAND memory, and was worried that the Western Digital-Kioxia merger would hurt its position while derailing partnerships it had been exploring with Kioxia.

Read more of this story at Slashdot.



Source: Slashdot – Western Digital and Kioxia Scrap Memory Chip Merger Talks

Bloomsbury Chief Warns of AI Threat To Publishing

The chief executive of Bloomsbury Publishing has warned of the threat of artificial intelligence to the publishing industry, saying tech groups are already using the work of authors to train up generative AI programmes. From a report: Nigel Newton, who signed Harry Potter author JK Rowling to Bloomsbury in the 1990s, also said ministers needed to act urgently to address competition concerns between large US tech groups and the publishing industry given their increasing market dominance in selling books across the world. The warning came as Bloomsbury reported its highest-ever first-half results on the back of the boom in fantasy novels, leading the publisher to boost its interim dividend. The group said revenues grew 11 per cent to $165.7mn, sending profits 11 per cent higher at $21.4mn, for the six months to August 31.

Newton pointed to the “huge” growth in fantasy novels, with sales of books by Sarah J Maas and Samantha Shannon growing 79 per cent and 169 per cent respectively in the period and demand for Harry Potter books, 26 years after publication, remaining strong. The next Maas book, scheduled for January, has already received “staggering” pre-orders of 750,000 copies for the hardback edition, he said, underscoring the resurgence of the book-selling industry. The group’s consumer division will also publish new books in the expanding Harry Potter franchise — such as a new Wizarding Almanac this autumn.

Read more of this story at Slashdot.



Source: Slashdot – Bloomsbury Chief Warns of AI Threat To Publishing

JPMorgan Says JPM Coin Now Handles $1 Billion Transactions Daily

JPMorgan Chase’s digital token JPM Coin now handles $1 billion worth of transactions daily and the bank plans to continue widening its usage, Global Head of Payments Takis Georgakopoulos said. From a report: “JPM Coin gets transacted on a daily basis mostly in US dollars, but we again intend to continue to expand that,” Georgakopoulos said Thursday in an interview on Bloomberg Television. JPM Coin enables wholesale clients to make dollar and euro-denominated payments through a private blockchain network. It’s one of the few examples of a live blockchain application by a large bank, but remains a small fraction of the $10 trillion in US dollar transactions moved by JPMorgan on a daily basis. The company also runs a blockchain-based repo application, and is exploring a digital deposit token to accelerate cross-border settlements.

Read more of this story at Slashdot.



Source: Slashdot – JPMorgan Says JPM Coin Now Handles $1 Billion Transactions Daily

Inside Google's Plan To Stop Apple From Getting Serious About Search

Google has worried for years that Apple would one day expand its internet search technology, and has been working on ways to prevent that from happening. From a report: For years, Google watched with increasing concern as Apple improved its search technology, not knowing whether its longtime partner and sometimes competitor would eventually build its own search engine. Those fears ratcheted up in 2021, when Google paid Apple around $18 billion to keep Google’s search engine the default selection on iPhones, according to two people with knowledge of the partnership, who were not authorized to discuss it publicly. The same year, Apple’s iPhone search tool, Spotlight, began showing users richer web results like those they could have found on Google.

Google quietly planned to put a lid on Apple’s search ambitions. The company looked for ways to undercut Spotlight by producing its own version for iPhones and to persuade more iPhone users to use Google’s Chrome web browser instead of Apple’s Safari browser, according to internal Google documents reviewed by The New York Times. At the same time, Google studied how to pry open Apple’s control of the iPhone by leveraging a new European law intended to help small companies compete with Big Tech. Google’s anti-Apple plan illustrated the importance that its executives placed on maintaining dominance in the search business. It also provides insight into the company’s complex relationship with Apple, a competitor in consumer gadgets and software that has been an instrumental partner in Google’s mobile ads business for more than a decade.

Read more of this story at Slashdot.



Source: Slashdot – Inside Google’s Plan To Stop Apple From Getting Serious About Search

AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief

An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology’s leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google’s AI unit, said the world must act immediately in tackling the technology’s dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.

“We must take the risks of AI as seriously as other major global challenges, like climate change,” he said. “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.” Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be “one of the most important and beneficial technologies ever invented.” However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.

“I think we have to start with something like the IPCC, where it’s a scientific and research agreement with reports, and then build up from there.” He added: “Then what I’d like to see eventually is an equivalent of a Cern for AI safety that does research into that — but internationally. And then maybe there’s some kind of equivalent one day of the IAEA, which actually audits these things.” The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were “directly applicable” to the technology, though “valuable lessons” could be drawn from existing institutions. Hassabis said the world was a long time away from “god-like” AI being developed but “we can see the path there, so we should be discussing it now.”

He said current AI systems “aren’t of risk but the next few generations may be when they have extra capabilities like planning and memory and other things … They will be phenomenal for good use cases but also they will have risks.”

Read more of this story at Slashdot.



Source: Slashdot – AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief