Tesla's Cybertruck is a dystopian, masturbatory fantasy

It’s been four years since Tesla first announced the Cybertruck, a hideously ugly electric pickup truck that didn’t seem to actually improve on EVs or pickups in any meaningful way. Instead, the 6,600-pound mass of “stainless super steel” seems to be more the culmination of one man’s bizarre fantasy, and that man just so happened to own an entire company he could leverage to birth that fantasy, with all its sharp angles and unnecessary lighting bars, into reality.

Today, Tesla finally delivered the first, long-delayed production Cybertrucks to 10 buyers in a livestream on CEO Elon Musk’s decimated X platform, the first of an unknown number of wealthy consumers who have bought into his grim vision of the future. It’s a car that promises — for those who can afford it — a blank check for vehicular manslaughter and unnecessary survivability against semi-automatic firearms. Its tagline (“more utility than a truck, faster than a sports car”) speaks almost poetically to two distinct archetypes of threatened masculinity: the tacti-cool milspec dork and the showboating rich guy.

A “bulletproof” body has been a key feature since the Cybertruck’s introduction in 2019, and today Musk admitted it exists for no good reason. “Why did you make it bulletproof?” he asked rhetorically, before answering with a broad grin: “Why not?” He then metaphorically waved his genitals at the cheering crowd, while promising metaphorically larger ones to anyone who buys a Cybertruck. “How tough is your truck?” Musk smirked.

This admission came alongside video footage of a Cybertruck being sprayed with rounds from a .45 caliber tommy gun, a Glock 9mm and an MP5-SD submachine gun, which also fires 9mm rounds. We’d ask Tesla what cartridges were used and whether they were fired from within the effective range of any of these weapons, but the company dissolved its PR team in 2020.

It was a stupid but expected bit of showboating from Musk during his rambling presentation. Right before the gunfire demo, Musk touted the truck’s overall toughness, noting that its low center of gravity made it extremely difficult to flip in an accident. A video also showed the Cybertruck barely moving after a much smaller vehicle moving at 38 mph collided with it. To that, Musk commented that “if you’re ever in an argument with another car, you will win,” glibly encouraging Cybertruck owners to engage in such “arguments.”

In a country where both traffic fatalities and gun violence have surged in recent years, it’s a little galling to see Musk promoting his vehicle as some sort of tool for rich people to survive the apocalypse, or even just the inconveniences of a world where their lessers occupy space at all. (All-wheel drive Cybertrucks start at about $80,000; a $60,000 RWD model is supposedly arriving in 2025.) “Sometimes you get these late civilization vibes, the apocalypse could come along at any moment, and here at Tesla we have the finest apocalypse technology,” Musk mused.

Beyond that is the simple fact that SUVs and trucks have gotten dramatically bigger and heavier in the past decade or so. EVs naturally weigh more because of their batteries, but auto manufacturers have been making the fronts of cars larger and taller in recent years, too. That’s a combo that makes these vehicles more dangerous for pedestrians and other drivers alike.

“Whatever their nose shape, pickups, SUVs and vans with a hood height greater than 40 inches are about 45 percent more likely to cause fatalities in pedestrian crashes than cars and other vehicles with a hood height of 30 inches or less and a sloping profile,” research from the Insurance Institute for Highway Safety states. It also noted that pedestrian crash deaths have risen 80 percent since a low in 2009. Anyone who walks or bikes around a city has probably felt that danger, and it’s even more startling when the towering front end of a truck stops just short of you as you’re crossing the street. Finally, it’s well known that a car’s speed dramatically affects a pedestrian’s odds of survival, which isn’t great when an extremely heavy vehicle can also do 0-60 in less than three seconds.
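The physics behind that concern is simple to sketch: kinetic energy grows linearly with a vehicle's mass but with the square of its speed. The weights below are rough illustrative figures (a ~3,300-pound sedan against the 6,600-pound Cybertruck), not crash-test data — this is a back-of-the-envelope comparison, not a crash model.

```python
# Rough illustration: kinetic energy scales with mass and the square of speed.
# Vehicle weights are approximate stand-ins, not official specs.
LB_TO_KG = 0.453592
MPH_TO_MS = 0.44704

def kinetic_energy_joules(weight_lb: float, speed_mph: float) -> float:
    mass_kg = weight_lb * LB_TO_KG
    speed_ms = speed_mph * MPH_TO_MS
    return 0.5 * mass_kg * speed_ms ** 2

sedan = kinetic_energy_joules(3300, 30)  # typical midsize sedan at 30 mph
truck = kinetic_energy_joules(6600, 30)  # ~6,600 lb Cybertruck at 30 mph
fast = kinetic_energy_joules(6600, 60)   # same truck at twice the speed

print(truck / sedan)  # doubling mass doubles the energy: 2.0
print(fast / truck)   # doubling speed quadruples it: 4.0
```

The asymmetry is the point: a pedestrian or smaller car absorbs whatever share of that energy the bigger vehicle delivers, and both the extra mass and the extra speed compound against them.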

Now that the Cybertruck is nearly ready for public consumption, it looks like Musk has basically built a vehicle that, for a steep price, enables the worst impulses of US drivers and gives them the “freedom” to do whatever they want. It doesn’t matter if the Cybertruck’s lightbar headlights blind the drivers of smaller vehicles; they should get the hell out of the left lane. And if someone else on the road pisses off a Cybertruck driver, who cares? Other drivers should just accept that they’re about to lose a very expensive and potentially life-threatening “argument” with the Cybertruck’s front fender.

This all should have been obvious right from the start. From day one, the Cybertruck has alluded to a cyberpunk future, a genre with cool haircuts and hacking and slightly problematic orientalism, yes — but also one where wealth inequality is even worse than it currently is, and the rules don’t apply to those with money. The implicit promise of the Cybertruck has always been a vehicle that waives societal standards for people who can afford it, and today’s spectacle made that explicit. To that end, maybe this marketing is as much genius as it is nonsense.

“If Al Capone showed up with a Tommy gun and emptied the entire magazine into the car door, you’d still be alive,” Musk crowed at one point, either promising to revive the dead or oblivious to the terrifying number of human beings who use guns to commit acts of violence. I don’t know about you, but I don’t want to live in a world where being Swiss-cheesed by lethal armaments is something I need to consider when I’m buying a car. Maybe the rich survivalists playing out Blade Runner meets Mad Max in their Cybertrucks haven’t considered that when everything burns down, the power grid will go down too.

This article originally appeared on Engadget at https://www.engadget.com/teslas-cybertruck-is-a-dystopian-masturbatory-fantasy-225648188.html?src=rss

Source: Engadget – Tesla’s Cybertruck is a dystopian, masturbatory fantasy

Apple patches two security vulnerabilities on iPhone, iPad and Mac

Apple pushed updates to iOS, iPadOS and macOS software today to patch two zero-day security vulnerabilities. The company suggested the bugs had been actively exploited in the wild. “Apple is aware of a report that this issue may have been exploited against versions of iOS before iOS 16.7.1,” the company wrote about both flaws in its security reports. Software updates plugging the holes are now available for the iPhone, iPad and Mac.

Researcher Clément Lecigne of Google’s Threat Analysis Group (TAG) is credited with discovering and reporting both exploits. As Bleeping Computer notes, the team at Google TAG often finds and exposes zero-day bugs against high-risk individuals, like politicians, journalists and dissidents. Apple didn’t reveal specifics about the nature of any attacks using the flaws.

The two security flaws affected WebKit, Apple’s open-source browser framework powering Safari. In Apple’s description of the first bug, it said, “Processing web content may disclose sensitive information.” In the second, it wrote, “Processing web content may lead to arbitrary code execution.”

The security patches cover the “iPhone XS and later, iPad Pro 12.9-inch 2nd generation and later, iPad Pro 10.5-inch, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 6th generation and later, and iPad mini 5th generation and later.”

The odds your devices were affected by either of these are slim, so there’s no need to panic — but, to be safe, it would be wise to update your Apple gear now. You can update your iPhone or iPad immediately by heading to Settings > General > Software Update and tapping the prompt to initiate it. On Mac, go to System Settings > General > Software Update and do the same. Apple’s fixes arrived today in iOS 17.1.2, iPadOS 17.1.2 and macOS Sonoma 14.1.2.

This article originally appeared on Engadget at https://www.engadget.com/apple-patches-two-security-vulnerabilities-on-iphone-ipad-and-mac-215854473.html?src=rss

Source: Engadget – Apple patches two security vulnerabilities on iPhone, iPad and Mac

Tesla's long-awaited Cybertruck will start at $60,990 before rebates

After years of production delays, Tesla CEO Elon Musk took to a dimly lit stage on Thursday to hand-deliver the first batch of Cybertruck EVs to their new owners. The company has also, finally, announced pricing for the luxury electric truck. Prospective buyers can expect to pay anywhere from $60,990 to $99,990 MSRP (and potentially $11,000 less after rebates and tax credits). The company has launched an online configurator tool for those interested in placing an order of their own.

Tesla also officially revealed the vehicle’s performance specs and model options at the event. The Cybertruck’s entry-level version is the $60,990 single-motor rear-wheel drive ($49,890 after “incentives” and an “est. 3-year gas savings,” per the configurator). It will offer an estimated 250 miles of range and a pokey 6.5-second zero-to-60. Who knew steel sheeting would be so heavy? It won’t be released until the 2025 model year.

The mid-level model is the $79,990 all-wheel drive version and sports e-motors on each axle. It weighs just over 6,600 pounds — 1,900 less than the Rivian R1S and nearly 2,500 less than the Hummer EV. It will offer 340 miles of range, a more respectable 4.1-second zero-to-60 and 600 HP with 7,435 lb-ft of torque. Its 11,000-pound towing capacity is a touch more than the Ford Lightning XLT’s 10,000-pound maximum.

For $99,990, you can buy the top-of-the-line Cyberbeast — yes, you will have to refer to it as that in public. The Cyberbeast comes equipped with a trio of e-motors that will provide AWD handling, a 320-mile range, a 2.6-second zero-to-60, a 130 MPH top speed, 845 horses and 10,296 lb-ft of torque. Despite those impressive specs, the Cyberbeast is stuck with the same 11,000-pound tow limit as the base model.

Both the Cyberbeast and the AWD iteration will be able to carry 121 cubic feet of cargo and accommodate five adult passengers. The Cybertruck line is compatible with Tesla’s Supercharger network and can accept up to 250kW, enough to add 128 miles of range for every 15 minutes of charge time. The AWD and Cyberbeast are both currently available to order on Tesla’s website, though prospective buyers will need to put down a fully refundable $250 deposit upon ordering.
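A quick back-of-the-envelope check of those charging figures — the 128-miles-in-15-minutes claim from the article, plus an assumed 250 kW peak (the maximum of Tesla's V3 Superchargers) — gives a rough sense of the implied efficiency. The efficiency number that falls out is an illustration, not a spec:

```python
# Sanity-check the quoted charging figures. The 250 kW peak is an assumption
# (Supercharger V3's maximum); actual average power during a session is lower,
# so the implied efficiency below is a lower bound, not a published spec.
miles_added = 128
minutes = 15
peak_kw = 250

miles_per_minute = miles_added / minutes             # ~8.5 miles per minute
energy_added_kwh = peak_kw * (minutes / 60)          # upper bound: 62.5 kWh
implied_efficiency = miles_added / energy_added_kwh  # ~2 mi/kWh at best

print(round(miles_per_minute, 2), energy_added_kwh, round(implied_efficiency, 2))
# 8.53 62.5 2.05
```

Roughly two miles per kWh is in line with other very heavy EVs — the battery math is another place where all that stainless steel shows up.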

Developing

This article originally appeared on Engadget at https://www.engadget.com/teslas-long-awaited-cybertruck-will-start-at-60990-before-rebates-211751127.html?src=rss

Source: Engadget – Tesla’s long-awaited Cybertruck will start at $60,990 before rebates

TikTok's new profile tools are just for musicians

TikTok has introduced the Artist Account, which offers up-and-coming musicians new ways to curate their profiles and boost discoverability. The new suite of tools isn’t just meant for rising stars: established pop icons can also add an artist tag to their profiles, giving their music its own tab next to their videos, likes and reposted content.

To be eligible for an artist tag, TikTok says you will need at least four sounds or songs uploaded to the app. Artists can also pin one of their tunes so it appears first in the music tab. If a musician drops new content, the app will tag the song as ‘new’ starting up to 14 days before its release and for up to 30 days after it goes live. Any new tracks will automatically be added to a profile’s music tab.

TikTok says over 70,000 artists are already using the new tools. The app has proven to be a breeding ground for content to go viral for new artists and established music makers alike thanks to the lightning speed of dance and lifestyle video trends. TikTok’s impact on the music industry has been so massive that even streamers like Spotify have looked into experimenting with video-first music discovery feeds.

This article originally appeared on Engadget at https://www.engadget.com/tiktoks-new-profile-tools-are-just-for-musicians-201723244.html?src=rss

Source: Engadget – TikTok’s new profile tools are just for musicians

Steam’s streaming software now lets you wirelessly play PC VR games on Quest headsets

One of the key selling points of Meta Quest VR headsets is that they can play PC VR titles, but you have to be physically connected to the PC via a link cable. There are some third-party workarounds that allow for wireless game streaming, like Virtual Desktop, but now Steam has unveiled an official solution.

Steam Link is a tool available for Meta Quest 2, 3 and Pro that wirelessly streams PC VR games from your Steam library directly to the headset, so you can continue to avoid cables like the plague. The free app already exists, but has been used to stream Steam games onto phones, tablets and TVs. This is the first time it’s available for VR titles.

There’s one major caveat. Just like Virtual Desktop, you still need a capable PC that can run high-end VR games. You just won’t need the link cable. It’s possible this service can work via cloud computing platforms, but the results are likely to be janky at best. Steam outlines recommended PC specs, suggesting an NVIDIA GTX 970 GPU or better, 16GB of RAM and Windows 10 or newer.

Beyond the PC, you also need a 5GHz Wi-Fi router with both the headset and the computer connected to the same network. You can download the Steam Link app directly from the Quest store to get started. This may not be the biggest deal in the world to folks who already use Virtual Desktop, but anything that gets more people into Half-Life: Alyx is a good thing.

This article originally appeared on Engadget at https://www.engadget.com/steams-streaming-software-now-lets-you-wirelessly-play-pc-vr-games-on-quest-headsets-200502768.html?src=rss

Source: Engadget – Steam’s streaming software now lets you wirelessly play PC VR games on Quest headsets

Call of Duty games start landing on NVIDIA GeForce Now

One of the major concessions Microsoft made to regulators to get its blockbuster acquisition of Activision Blizzard over the line was agreeing to let users of third-party cloud services stream Xbox-owned games. Starting today, you can play three Call of Duty games via NVIDIA GeForce Now: Modern Warfare 3, Modern Warfare 2 and Warzone.

They’re the first Activision games to land on GeForce Now since Microsoft closed the $68.7 billion Activision deal in October. Activision Blizzard games were previously available on GeForce Now but only briefly, as the publisher pulled them days after the streaming service went live for all users in early 2020.

Microsoft began making its first-party games available on GeForce Now this year, starting with Gears 5 in May. More recently, Microsoft started allowing GeForce Now users to stream PC Game Pass titles and Microsoft Store purchases.

Call of Duty titles are major additions, though, especially since that means Warzone fans can play the battle royale on their phone or tablet wherever they are without having to pay anything extra (free GeForce Now users are limited to one hour of gameplay per session). If you’ve bought MW2 or MW3 on Steam, you can play those through GeForce Now as well. NVIDIA notes that older CoD titles will be available through GeForce Now later.

Another key concession Microsoft made to appease UK regulators was to sell the cloud gaming rights for Activision Blizzard titles to Ubisoft. However, as evidenced here, Microsoft will still honor the agreements it made directly with various cloud gaming services.

This article originally appeared on Engadget at https://www.engadget.com/call-of-duty-games-start-landing-on-nvidia-geforce-now-195040692.html?src=rss

Source: Engadget – Call of Duty games start landing on NVIDIA GeForce Now

Formula E now lets you stream every race from its first nine seasons for free

There’s still time to get acquainted with Formula E before the new season begins in January. To help with that, the all-electric racing series has opened up its vault and made every race from its first nine seasons available to stream for free. From the first event in Beijing in 2014 through this past season’s finale in London, there’s a lot to relive or watch for the first time. If you’re trying to stream them all, that’s 90 hours of action across 116 races to look forward to.

Formula E’s new Race Replay archive is available for free via its website and mobile app. All you need to do in order to gain access to the back catalog is register for an account. What’s more, the series says every race from 2024’s Season 10 will be available seven days after airing live. Even if you don’t have access to the required channels or platforms needed to watch live next year, you’ll still be able to follow along a few days after each event.

When the lights go out in Mexico City, Formula E will offer fans expanded viewing options in 2024. Roku will stream 11 races live through its Roku Channel for free. That platform will also offer previews, replays and other commentary in addition to the live events. Paramount+ will stream five races live as simulcasts with CBS, the broadcaster that has been home to Formula E in the US for a while now. 

Season 10 begins January 13 in Mexico City before a double-header in Diriyah, Saudi Arabia later in the month. A total of 17 races are scheduled for 2024, including a US stop in Portland that has been expanded to its own double-header weekend after debuting last season. Formula E completed its preseason testing in Valencia in late October, and you can read our key takeaways from that event here.

This article originally appeared on Engadget at https://www.engadget.com/formula-e-now-lets-you-stream-every-race-from-its-first-nine-seasons-for-free-193820963.html?src=rss

Source: Engadget – Formula E now lets you stream every race from its first nine seasons for free

Bipartisan Senate bill would kill the TSA’s ‘Big Brother’ airport facial recognition

US Senators John Kennedy (R-LA) and Jeff Merkley (D-OR) introduced a bipartisan bill Wednesday to end involuntary facial recognition screening at airports. The Traveler Privacy Protection Act would block the Transportation Security Administration (TSA) from continuing or expanding its facial recognition tech program. It would also require the government agency to explicitly receive congressional permission to renew it, and it would have to dispose of all biometric data within three months.

Senator Merkley described the TSA’s biometric collection practices as the first steps toward an Orwellian nightmare. “The TSA program is a precursor to a full-blown national surveillance state,” Merkley wrote in a news release. “Nothing could be more damaging to our national values of privacy and freedom. No government should be trusted with this power.” Other Senators supporting the bill include Edward J. Markey (D-MA), Roger Marshall (R-KS), Bernie Sanders (I-VT) and Elizabeth Warren (D-MA).

The TSA began testing facial recognition at Los Angeles International Airport (LAX) in 2018. The agency’s pitch to travelers framed it as an exciting new high-tech feature, promising a “biometrically-enabled curb-to-gate passenger experience.” The TSA said this summer it planned to expand the program to over 430 US airports within the next few years.

The program at least technically allows travelers to opt-out, but that process isn’t always transparent in practice. Merkley posted the video above to X in September, demonstrating how agents guided travelers to the facial scanner without mentioning that it’s optional. No signs near the booths said it was optional or explicitly mentioned the gathering of facial data, either. The booths were arranged so that flyers would have difficulty entering their driver’s license or ID (required) without stepping in front of the facial scanner.

Advocacy groups supporting the bill include the ACLU, Electronic Privacy Information Center and Public Citizen. “The privacy risks and discriminatory impact of facial recognition are real, and the government’s use of our faces as IDs poses a serious threat to our democracy,” wrote Jeramie Scott, Senior Counsel and Director of EPIC’s Project on Surveillance Oversight, in Merkley’s press release. “The TSA should not be allowed to unilaterally subject millions of travelers to this dangerous technology.”

“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” Sen. Kennedy wrote in a separate news release. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”

This article originally appeared on Engadget at https://www.engadget.com/bipartisan-senate-bill-would-kill-the-tsas-big-brother-airport-facial-recognition-191010937.html?src=rss

Source: Engadget – Bipartisan Senate bill would kill the TSA’s ‘Big Brother’ airport facial recognition

JBL Authentics 300 review: Alexa and Google Assistant coexisting

Several companies have taken shots at Sonos over the years when it comes to multi-room audio and self-tuning speakers with built-in voice assistants. These devices are a lot more common in 2023 than they used to be, so there’s a whole host of options if you’re looking for alternatives to the Move or Era. JBL is the latest to give it a go with new additions to its Authentics line of speakers. While audio may be its primary use, these devices are the first to run two voice assistants simultaneously without having to switch from one to the other. And on the Authentics 300 ($450), you get a portable unit that doesn’t have to stay parked on a shelf.

Design

Most wireless JBL speakers fit into three categories: they’re either rugged and compact, modern-looking boomboxes, or internally lit party units. For this new Authentics series, the company opted for a more refined design: all black with a gold frame around the front speaker grille. It’s certainly an aesthetic that fits in nicely on a shelf, without the raucous palette of some of the company’s smaller options. All three of the Authentics speakers look almost exactly the same with the main difference being size, although the 300 does have a boombox-like rotating handle the other two don’t. That’s because it’s the only portable option in the range with a built-in battery.

JBL describes the Authentics look as “retro,” but I’m not sure I agree. Sure, there’s a classic vibe thanks to the ‘70s-inspired Quadrex grille the company has employed in the past, but the finer details and onboard controls are decidedly modern. Speaking of controls, up top you’ll find volume, treble and bass knobs that illuminate the level as you turn them. Pressing in the center of the volume dial gives you the playback controls. There are also Bluetooth, power and Moment buttons along with a thin light bar that indicates charging status when the speaker is plugged in. Around back is a microphone mute switch, along with Ethernet, 3.5mm aux, USB-C and power ports.

Software and features

JBL Authentics 300
Photo by Billy Steele/Engadget

The features and settings for the Authentics speakers are managed inside the JBL One app. Here, you’re greeted with a list of the company’s products you own as well as their connected status, battery level and whatever media is playing on the device. After selecting the Authentics 300, JBL dumps you into the specifics, with battery level once again visible up top. A media player is just below, complete with the ability to sync Amazon Music, Tidal, Napster, Qobuz, TuneIn, iHeartRadio and Calm Radio so you can play them directly inside this app.

JBL offers some limited EQ customization. There’s a manual slider with options for bass, mid and treble, but that’s it. You won’t find any carefully-tuned presets or the ability to make more detailed adjustments along the curve. To get to your tunes quickly, JBL offers a feature called Moment. Accessible via the heart button on the speaker, this allows you to save a favorite album or playlist from the app’s list of supported streaming services. You can also specify volume and auto-off timing during setup.

Lastly, a word on streaming music over Wi-Fi. The Authentics line supports a range of options here, including AirPlay, Chromecast, Alexa, Spotify Connect and Tidal Connect, all of which are more convenient than swiping over to the Bluetooth menu and pairing the speaker every time you use it. With Wi-Fi, playing music on the Authentics devices is just a couple of taps away inside the app you’re already using to browse and select music or podcasts. The speakers also support multi-room audio via AirPlay, Alexa and the Google Home app.

Double assistants, double the fun

JBL Authentics 300
Photo by Billy Steele/Engadget

JBL says the Authentics series is the first set of speakers to run two voice assistants simultaneously. Each of the three units can employ both Alexa and Google Assistant without you having to pick one or the other beforehand. This opens up availability across compatible smart home devices and it means your speaker choice isn’t as limited by your go-to assistant.

The speaker never had trouble hearing my commands and it didn’t mistake a query for one assistant with a question for the other. When you ask Google Assistant for help, a white light shows at the top center of the speaker grille. Summon Alexa and that LED burns blue until your convo is over. When you mute the microphones with the switch on the back of the 300, that light glows red and remains until you turn them back on. As is the case with any smart speaker, the voice command limitations are the general hindrances of the assistants themselves rather than any shortfalls of the speaker.

Sound quality

The Authentics 300 really shines with more mellow, chill music like jazz, bluegrass and acoustic-driven country. There’s a warm, inviting sound with great clarity across those styles. When you jump to the full-band chaos of metal and hardcore, or even the guitar-heavy but mellifluous tones of Chris Stapleton, the speaker’s tuning overemphasizes vocals and the lack of bassy thump creates a muddy overall sound.

Sure, you can dial up the bass with the physical controls or the EQ in the app, but that doesn’t add the kind of deep low-end that would open up the soundstage. It does improve the overall tuning of albums like Stapleton’s Higher, but there’s still an overemphasis on vocals. You can really hear the impact on The Killers’ compilation Rebel Diamonds, where Brandon Flowers almost entirely drowns out the backing synth on “Jenny Was A Friend Of Mine” (originally from Hot Fuss).

At times though, the Authentics 300 is a joy to listen to. Put on some Miles Davis and the speaker is at its best. Ditto for the bluegrass of Nickel Creek, the mellow country tunes of Charles Wesley Godwin and classic Christmas mixes. However, the inconsistency across styles is frustrating. Interestingly, JBL says the Authentics speakers offer automatic self-tuning every time you power them on, but I didn’t notice much difference as I moved the 300 around.

Battery life

JBL Authentics 300
Photo by Billy Steele/Engadget

JBL says the Authentics 300 will last up to eight hours on a charge. Within two minutes of unplugging, the JBL One app already had the battery level down two percent while playing music via AirPlay 2 at about 30 percent volume. That may seem like a low volume, but it’s plenty for “working music” on this speaker. After 30 minutes, the app was showing 88 percent, but things slowed down from there, and I still had 24 percent remaining when the eight hours were up. During a test over Bluetooth, the percentages fell in a similar fashion, but I had no problem making it to eight hours at 50 percent volume (Bluetooth was quieter than AirPlay at 30 percent).
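Those numbers show why extrapolating from the first half hour is misleading. A quick sketch using the review's data points (100 percent at the start, 88 percent after 30 minutes, 24 percent at the eight-hour mark — treated here as single representative runs, not averaged measurements):

```python
# Why the early drain rate misleads: a naive linear extrapolation from the
# first 30 minutes predicts far less runtime than the speaker delivered.
# Data points are from this review; the linear model is an illustration,
# not JBL's estimate.
drop_first_30min = 100 - 88                              # 12 percentage points
naive_minutes_to_empty = 100 / (drop_first_30min / 30)   # ≈ 250 min

actual_minutes = 8 * 60
actual_drop = 100 - 24
avg_rate = actual_drop / actual_minutes                  # points per minute

print(naive_minutes_to_empty / 60)  # naive estimate: ~4.2 hours
print(round(avg_rate, 3))           # 0.158
```

In other words, the discharge curve flattens out after the initial dip, which is why the 300 comfortably beat a naive four-hour projection.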

JBL does offer a Battery Saving Mode to help you maximize playtime when you’re away from home. This setting “optimizes” both volume and bass to extend battery life, according to the company. There’s also an optional automatic power off feature that kicks in at either 15 minutes, 30 minutes or an hour when you’re not connected to power and audio is no longer playing.

The competition

JBL offers two alternatives to the Authentics 300 within the same speaker range. The smaller Authentics 200 ($350) is more compact, but not portable, while the larger 500 ($700) is a high-fidelity unit with support for Dolby Atmos. Both still run two voice assistants at the same time and have both Bluetooth and Wi-Fi, along with everything else the Authentics line offers. In order to support that immersive audio, the Authentics 500 has more drivers than the other two, with three 25mm tweeters, three 2.75-inch mid-range and a 6.5-inch subwoofer. I look forward to seeing if the extra components and added 170 watts of output power improve sound quality, though its frequency response only extends slightly deeper than the 300’s (40Hz vs. 45Hz).

If you’re looking for something portable that can also pull double duty at home, the Sonos Move 2 is a solid option. It’s too big to haul around with ease, but it does support both Bluetooth and Wi-Fi along with improved sound and better battery life compared to version 1.0. There’s also startling loudness and a durable design. What’s more, it’s the same price as the Authentics 300 at $449. For something more stationary and immersive, you could get the Sonos Era 300 without paying more. My colleague Nathan Ingraham noted the excellent sound quality on this unit during his review, but he did encounter inconsistent performance when it came to spatial audio. There’s also no Google Assistant support on this model.

Wrap-up

When I try to come up with a final verdict on the Authentics 300, I find myself running in circles. For everything I like about the speaker, there’s immediately something I don’t. The company certainly deserves some kudos for being the first to run two assistants at the same time and for figuring out how to do that with no confusion or headaches. However, the inconsistent sound quality is a major problem, especially on a $450 speaker. And while the device offers better-than-advertised battery life, its larger size makes portability an issue. So unless you absolutely need to seamlessly switch between Alexa and Google Assistant, there are better-sounding options.

This article originally appeared on Engadget at https://www.engadget.com/jbl-authentics-300-review-alexa-and-google-assistant-coexisting-190036434.html?src=rss

Source: Engadget – JBL Authentics 300 review: Alexa and Google Assistant coexisting

Meta sues FTC to block new restrictions on monetizing kids’ data

Meta has sued the Federal Trade Commission (FTC) in an attempt to stop regulators from reopening a landmark $5 billion privacy settlement from 2020 and to allow it to monetize kids’ data across apps like Facebook, Instagram and WhatsApp. This comes after a federal judge ruled on Monday that the FTC would be allowed to expand on 2020’s privacy settlement, paving the way for the agency to propose tough new rules on how the social media giant could operate in the wake of the Cambridge Analytica scandal.

Today’s lawsuit demands an immediate stop to the FTC’s proceedings, calling it an “obvious power grab” and an “unconstitutional adjudication by fiat.” A Meta spokesperson even referred to the FTC as “prosecutor, judge, and jury in the same case,” as reported by Bloomberg. This is the second attempt by Facebook’s parent company to stop the sanctions in court.

The FTC, for its part, says that Meta has repeatedly violated the terms of 2020’s settlement regarding user privacy. The agency also says that the company has violated the Children’s Online Privacy Protection Act (COPPA) by monetizing the data of younger users. The FTC has already been given the go ahead by a judge to restrict this type of monetization, a decision Meta hopes to overturn.

The FTC also seeks to implement new restrictions that limit Meta’s use of facial recognition, as well as a complete moratorium on new products and services until a third party completes an audit to determine whether the company is complying with its privacy obligations.

“Facebook has repeatedly violated its privacy promises,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.” The FTC isn’t alone in that view: multiple states have sued Meta to stop the monetization of children’s data, along with the EU.

The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter “future innovation and competitive rivalry.” The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior.

Corporations have been taking aim at the FTC lately in attempts to paint the agency as a prime example of government overreach. Beyond Meta, biotech giant Illumina is suing the FTC to halt a decision that blocks its $7 billion acquisition of the cancer detection startup Grail.

This article originally appeared on Engadget at https://www.engadget.com/meta-sues-ftc-to-block-new-restrictions-on-monetizing-kids-data-185051764.html?src=rss

Source: Engadget – Meta sues FTC to block new restrictions on monetizing kids’ data

Can digital watermarking protect us from generative AI?

The Biden White House recently issued its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and the use of digital watermarks to indicate when digital assets made by the federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.

A quick history of watermarking

Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when the paper was held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means of preventing currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.

Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system it built, and used until the company was sold in the 1980s, identified music owned by Muzak using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.
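Details of Muzak’s encoder aren’t public, but the basic trick — signaling bits by briefly attenuating a narrow frequency band in timed bursts — is easy to sketch. The Python below is a toy illustration with invented parameters (sample rate, slot length, attenuation factor), not a reconstruction of Muzak’s system:

```python
import math

RATE = 8000   # samples per second (toy value)
SLOT = 400    # samples per identification "burst" slot

def tone(n):
    """Stand-in program audio: a steady 1 kHz tone."""
    return [math.sin(2 * math.pi * 1000 * i / RATE) for i in range(n)]

def encode(audio, bits):
    """Signal a '1' bit by heavily attenuating (notching) that slot's audio."""
    out = list(audio)
    for slot, b in enumerate(bits):
        if b == "1":
            for i in range(slot * SLOT, (slot + 1) * SLOT):
                out[i] *= 0.01
    return out

def decode(audio, nbits):
    """Recover the bits by comparing each slot's energy to the loudest slot."""
    energies = [sum(s * s for s in audio[i * SLOT:(i + 1) * SLOT])
                for i in range(nbits)]
    threshold = max(energies) / 2
    return "".join("1" if e < threshold else "0" for e in energies)

assert decode(encode(tone(4 * SLOT), "1010"), 4) == "1010"
```

A real notch filter would remove only the 1 kHz band rather than the whole signal, leaving the rest of the music audible while the burst pattern spells out an ID.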

Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows in order to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as into government applications like authenticating driver’s licenses, national currencies and other sensitive documents. Digimarc, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It’s also been used in applications ranging from brand anti-counterfeiting to improving the efficiency of material recycling.

The here and now

Modern digital watermarking operates on the same principles, imperceptibly embedding added information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content; rather, they provide a record of where the content originated or who the copyright holder is.
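Production watermarks rely on robust spread-spectrum or frequency-domain embedding, but the basic principle — hiding machine-readable bits in changes too small for a person to notice — can be shown with a least-significant-bit toy in Python:

```python
def embed_watermark(pixels, bits):
    """Hide a bit string in the least-significant bit of each pixel value."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read the hidden bits back out of the image."""
    return "".join(str(p & 1) for p in pixels[:length])

pixels = [200, 201, 198, 197, 202, 203, 199, 196]  # toy grayscale values
marked = embed_watermark(pixels, "1011")

assert extract_watermark(marked, 4) == "1011"
# No pixel changed by more than 1 out of 255 -- invisible to a viewer.
assert all(abs(a - b) <= 1 for a, b in zip(pixels, marked))
```

An LSB mark like this is trivially destroyed by re-encoding, which is exactly why commercial systems spread their signal across the whole image instead.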

The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.

“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery of by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”

Zhao says that while the White House’s executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”

He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”

“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.

We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.

“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.

How Content Credentials work

With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021. 

CR attaches additional information to an image, in the form of a cryptographically secure manifest, whenever the image is exported or downloaded. The manifest pulls data from the image or video header — the creator’s information, where and when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against the provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that, thanks to the cryptographic file signing, can’t simply be stripped out the way EXIF metadata (i.e. the technical details automatically added by the software or device that took the image) is when an image is uploaded to social media sites. It’s not unlike blockchain technology in that respect.
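Real C2PA manifests are signed with public-key certificates and carry far richer claims, but the core mechanism — a signed record that travels with the file and breaks if either the claims or the content are altered — can be sketched with Python’s standard library. The key and field names below are invented for illustration, and an HMAC stands in for a proper certificate signature:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"creator-private-key"  # stand-in for a real signing credential

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bundle provenance claims with a hash of the content, then sign both."""
    payload = {"content_sha256": hashlib.sha256(content).hexdigest(), **claims}
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "signature": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature over the claims, then the content hash itself."""
    body = json.dumps(manifest["payload"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return manifest["payload"]["content_sha256"] == hashlib.sha256(content).hexdigest()

photo = b"raw image bytes"
m = make_manifest(photo, {"creator": "Jane Doe", "tool": "generative-fill"})
assert verify_manifest(photo, m)             # untouched file checks out
assert not verify_manifest(photo + b"x", m)  # any edit breaks the chain
```

Because the content hash sits inside the signed payload, editing the image invalidates the manifest rather than silently carrying stale claims forward.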

Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it, and so simply ignore the data.

“The analogy that we’ve used in the past is one of an envelope,” Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside, “and that’s where the watermark sits. It’s actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”

Should someone manage to remove the watermark (it turns out that’s not difficult — just screenshot the image and crop out the icon), the credentials can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. And if a user encounters the content in the wild, they can check its credentials by clicking the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about what online content to trust.
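Adobe hasn’t detailed Verify’s matching algorithms, but a classic perceptual-hashing approach like “average hash” shows how a screenshotted, re-encoded copy can still be matched to its original: the exact bytes change, but the brightness structure survives. A toy version on tiny grayscale grids:

```python
def average_hash(gray):
    """Hash a small grayscale image: 1 where a pixel beats the mean brightness."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits; a small distance means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 10, 200, 200],
            [10, 10, 200, 200]]
rehosted = [[12, 9, 198, 205],   # same image after screenshot + re-encode
            [11, 13, 202, 199]]

assert hamming(average_hash(original), average_hash(rehosted)) <= 1
```

In practice the lookup would compare a hash of the upload against a repository of registered works and reattach the credentials on a close-enough match.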

Sickles envisions these authentication systems operating in coordinating layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles said. “They become a much, much stronger system as a basis for authenticity and understanding provenance around an image” than they would be individually. Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.

In practice, we’re already seeing the standard incorporated into physical commercial products like the Leica M11-P, which will automatically affix a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard in its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.

That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it’s adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.

Nightshade: The CR alternative that’s deadly to databases

Some security researchers have had enough of waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.

Zhao and his team have developed Glaze, a system for creators that disrupts a generative AI’s ability to mimic styles (by exploiting the concept of adversarial examples). It can change the pixels in a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these “glazed” images, it becomes unable to exactly replicate the intended style — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, in keeping their branded artistic styles commercially safe.
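Glaze computes its perturbations against real image-generation models, which is far beyond a few lines of code, but the underlying adversarial-example trick can be shown on a toy linear “style classifier” (the weights and features below are invented): a change smaller than some epsilon in every input dimension still flips the model’s judgment.

```python
# Toy linear "style classifier": positive score reads as style A, negative as B.
weights = [2.0, -1.0, 0.5]

def score(features):
    return sum(w * f for w, f in zip(weights, features))

artwork = [0.2, 0.5, 0.4]
assert score(artwork) > 0  # the model sees style A

# Nudge every feature by at most epsilon against the sign of its weight
# (the direction that most decreases the score -- the adversarial gradient).
epsilon = 0.1
cloaked = [f - epsilon * (1 if w > 0 else -1)
           for f, w in zip(artwork, weights)]

assert score(cloaked) < 0  # the model now sees a different style...
assert max(abs(a - c) for a, c in zip(artwork, cloaked)) <= epsilon  # ...from a tiny change
```

A human comparing the two feature vectors would barely notice the difference; the classifier's verdict is completely reversed.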

While Glaze focuses on preventative measures that deflect the efforts of illicit data scrapers, SAND Lab’s newest tool is wholeheartedly punitive. Dubbed Nightshade, the system also subtly changes the pixels in a given image, but instead of confusing the models it’s trained with, as Glaze does, the poisoned image corrupts the training database it’s ingested into wholesale, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same issues again.

The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.

Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”

This article originally appeared on Engadget at https://www.engadget.com/can-digital-watermarking-protect-us-from-generative-ai-184542396.html?src=rss

Source: Engadget – Can digital watermarking protect us from generative AI?

YouTube Music brings personalized album art to its 2023 Recap

YouTube Music users who have seen their Spotify- and Apple Music-using friends share their listening stats from this year can now join the party. YouTube Music Recap is now live and you can access it from the 2023 Recap page in the app. You’ll be able to see your top artists, songs, moods, genres, albums, playlists and more from 2023. There’s also the option to view your Recap in the main YouTube app, along with some other new features for 2023.

This year, you’ll be able to add custom album art. YouTube will create this using your top song and moods from the year, as well as your energy score. The platform will mash together colors, vibes and visuals to create a representation of your year in music.

YouTube Music Recap custom album art
YouTube Music

YouTube says another feature will match your mood with your top songs of the year. You might see, for instance, the percentages of songs you listened to that are classed as upbeat, fun, dancey or chill. Last but not least, you can use snaps from Google Photos to create a customized visual that sums up your year in music (and perhaps your year in travel too).

This article originally appeared on Engadget at https://www.engadget.com/youtube-music-brings-personalized-album-art-to-its-2023-recap-182904330.html?src=rss

Source: Engadget – YouTube Music brings personalized album art to its 2023 Recap

Evernote officially limits free users to 50 notes and one measly notebook

Evernote has confirmed the service’s tightly leashed new free plan, which the company tested with some users earlier this week. Starting December 4, the note-taking app will restrict new and current free accounts to 50 notes and one notebook. Existing free customers who exceed those limits can still view, edit, delete and export their notes, but they’ll need to upgrade to a paid plan (or delete enough old notes) to create new ones beyond those confines.

The company says most free accounts are already inside those lines. “When setting the new limits, we considered that the majority of our Free users fall below the threshold of fifty notes and one notebook,” the company wrote in an announcement blog post. “As a result, the everyday experience for most Free users will remain unchanged.” Engadget reached out to Evernote to clarify whether “the majority of Free users” staying within those bounds includes long-dormant accounts that may have tried the app for a few minutes a decade ago and never logged in again. We’ll update this article if we hear back.

Evernote’s premium plans, now practically essential for anything more than minimal use, include a $15 monthly Personal plan with 10GB of monthly uploads. You can double that to 20GB (and get other perks) with an $18 tier. It also offers annual versions of those plans for $130 and $170, respectively.

The company acknowledged in its announcement post that “these changes may lead you to reconsider your relationship with Evernote.” Leading alternatives with more bountiful free plans include Notion, Microsoft OneNote, Google Keep, Bear (Apple devices only), Obsidian and SimpleNote.

Earlier this year, Evernote’s parent company, Bending Spoons, moved its operations from the US and Chile to Europe, laying off nearly all of the note-taking app’s employees. When doing so, it said the app had been “unprofitable for years.”

This article originally appeared on Engadget at https://www.engadget.com/evernote-officially-limits-free-users-to-50-notes-and-one-measly-notebook-174436735.html?src=rss

Source: Engadget – Evernote officially limits free users to 50 notes and one measly notebook

Expressive E Osmose review: A game-changing MPE keyboard, but a frustrating synthesizer

When I first got to see the Expressive E Osmose way back in 2019, I knew it was special. In my 15-plus years covering technology, it was one of the only devices I’ve experienced that actually had the potential to be truly “game changing.” And I’m not being hyperbolic.

But, that was four years ago, almost to the day. A lot has changed in that time. MPE (MIDI Polyphonic Expression) has gone from futuristic curiosity to being embraced by big names like Ableton and Arturia. New players have entered and exited the scene. More importantly, the Osmose is no longer a promising prototype, but an actual commercial product. The questions, then, are obvious: Does the Osmose live up to its potential? And, does it seem as revolutionary today as it did all those years ago? The answers, however, are less clear.

Expressive E Osmose keybed sideview.
Terrence O’Brien / Engadget

What sets the Osmose ($1,799) apart from every other MIDI controller and synthesizer (MPE or otherwise) is its keybed. At first glance, it looks like almost any other keyboard, albeit a really nice one. The body is mostly plastic, but it feels solid and the top plate is made of metal. (Shoutout to Expressive E, by the way, for building the Osmose out of 66 percent recycled materials and for making the whole thing user repairable — no glue or specialty screws to be found.)

The keys themselves have this lovely, almost matte finish and a healthy amount of heft. It’s a nice change of pace from the shiny, springy keys on even some higher-end MIDI controllers. But the moment you press down on a key you’ll see what sets it apart — the keys move side to side. And this is not because it’s cheaply assembled and there’s a ton of wiggle. This is a purposeful design. You can bend notes (or control other parameters) by actually bending the keys, much like you would on a stringed instrument.

This is huge for someone like me who is primarily a guitar player. Bending strings and wiggling my fingers back and forth to add vibrato comes naturally. And, as I mentioned in my review of Roli’s Seaboard Rise 2, I find myself doing this even on keyboards where I know it will have no effect. It’s a reflex.

It’s a very simple thing to explain, but its effect on your playing is much harder to encapsulate. It’s all of the same things that make playing the Seaboard special: the slight pitch instability from the unintentional micro movements of your fingers, the ability to bend individual notes for shifting harmonies and the polyphonic aftertouch that allows you to alter things like filter cutoff on a per-note basis.

These tiny changes in tuning and expression add an almost ineffable fluidity to your playing. In particular, for sounds based on acoustic instruments like flutes and strings, it adds an organic element missing from almost every other synthesizer. There is a bit of a learning curve, but I got the hang of it after just a few days.

Expressive E Osmose pitch bend settings.

What separates it from the Roli, though, is its form factor. While the Seaboard is keyboard-esque, it’s still a giant squishy slab of silicone. It might not appeal to someone who grew up taking piano lessons every week. The Osmose, on the other hand, is a traditional keyboard, with full-sized keys and a very satisfying action. It’s probably the most familiar and approachable implementation of MPE out there.

If you are a pianist, or an accomplished keyboard player, this is probably the MPE controller you’ve been waiting for. And it’s hands-down one of the best on the market.

Where things get a little dicier is when looking at the Osmose as a standalone synthesizer. But let’s start where it goes right: the interface. The screen to the left of the keyboard is decently sized (around 4 inches) and easy to read at any angle. There are even some cute graphics for parameters such as timbre (a log), release (a yo-yo) and drive (a steering wheel).

Expressive E Osmose interface with cute icons for parameters like cutoff, filter resonance and envelope.
Terrence O’Brien / Engadget

There aren’t a ton of hands-on controls, but menu diving is kept to a minimum with some smart organization. The four buttons across the top of the screen take you to different sections for presets, synth (parameters and macros), sensitivity (MPE and aftertouch controls) and playing (mostly just for the arpeggiator at the moment). Then to the left of the screen there are two encoders for navigating the submenus, and the four knobs below control whatever option is listed above them on the screen. So, no, you’re not going to be doing a lot of live tweaking, but you also won’t spend 30 minutes trying to dial in a patch.

Part of the reason you won’t spend 30 minutes dialing in a patch is because there really isn’t much to dial in. The engine driving the Osmose is Haken Audio’s EaganMatrix, and Expressive E keeps most of it hidden behind six macro controls. In fact, you can’t really design a patch from scratch — at least not on the synth directly. You need to download the Haken Editor, which requires Max (not the streaming service), to do serious sound design. Then you need to upload your new patch to the Osmose over USB. Other than that, you’re stuck tweaking presets.

Expressive E Osmose macro controls.
Terrence O’Brien / Engadget

This isn’t necessarily a bad thing because, frankly, EaganMatrix feels less like a musical instrument and more like a PhD thesis. It is undeniably powerful, but it’s also confusing as hell. Expressive E even describes it as “a laboratory of synthesis,” and that seems about right; patching in the EaganMatrix is like doing science. Except it’s not the fun science you see on TV, with fancy machines and test tubes. Instead, it’s more like the daily grind of real-life science, where you stare at a nearly inscrutable series of numbers, letters, mathematical constants and formulas.

I couldn’t get the Osmose and the Haken Editor to talk to each other on my studio laptop (a five-year-old Dell XPS), though I did manage to get them working on my work-issued MacBook. That said, it was mostly a pointless endeavor. I simply can’t wrap my head around the EaganMatrix. I was able to build a very basic patch with the help of a tutorial, but I couldn’t actually make anything usable.

Haken Editor and the EaganMatrix connected to the Osmose over USB.

There are some presets available on Patchstorage, but the community is nowhere near as robust as what you’d find for the Organelle or ZOIA. And it’s not obvious how to actually get that handful of presets onto the Osmose. You can drag and drop the .mid files you download into the empty slots across the top of the Haken Editor, which adds them to the Osmose’s user presets. But you won’t actually see that reflected on the Osmose itself until you turn it off and back on.

Honestly, many of the presets available on Patchstorage cover the same ground as the 500 or so factory ones that ship with the Osmose. And it’s while browsing those hundreds of presets that both the power and the limitations of the EaganMatrix become obvious. It’s capable of covering everything from virtual analog to FM to physical modeling, and even some pseudo-granular effects. Its modular, matrix-based patching system is so robust that it would almost certainly be impossible to recreate physically (at least without spending thousands of dollars).

Now, this is largely a matter of taste, but I find the sounds that come out of this obviously over-powered synth often underwhelming. They’re definitely unique and in some cases probably only possible with the EaganMatrix. But the virtual analog patches aren’t very “analog,” the FM ones lack the character of a DX7 or the modern sheen of a Digitone, and the bass patches could use some extra oomph. Sometimes patches on the Osmose feel like tech demos rather than something you’d actually use musically.

Expressive E Osmose preset menus with the Acid Bass patch highlighted.
Terrence O’Brien / Engadget

That’s not to say there are no good presets. There are some solid analog-ish sounds and a few decent FM pads. But it’s the physical modeling patches where EaganMatrix is at its best. They definitely land in a kind of uncanny valley, though — not convincing enough to be mistaken for the real thing, but close enough that it doesn’t seem quite right coming out of a synthesizer.

Still, the way tuned drums and plucked or bowed strings are handled by the Osmose is impressive. Quickly tapping a key can get you a ringing resonant sound, while holding it down mutes it. Aftertouch can be used to trigger repeated plucks that increase in intensity as you press harder. And bowed patches can be smart enough to play notes within a certain range of each other as legato, while still allowing you to play more spaced-out chords with your other hand. (This latter feature is called Pressure Glide and can be fine-tuned to suit your needs.)

The level of precision with which you can gently coax sound out of some presets with the lightest touch is unmatched by any synth or MIDI controller I’ve ever tested. And that becomes all the more shocking when you realize that very same patch can also be a percussive blast if you strike the keys hard.

Expressive E Osmose logo close up.

But, at the end of the day, I rarely find myself reaching for the Osmose — at least not as a synthesizer. I’ve been testing one for a few months now, and while I have used it quite extensively in my studio, it’s been mostly as a controller for MPE-enabled soft synths like Arturia’s Pigments and Ableton’s Drift. It’s undeniably one of the most powerful MIDI controllers on the market. My one major complaint on that front is that its incredible arpeggiator isn’t available in controller mode.

The Osmose is a gorgeous instrument that, in the right hands, is capable of delivering nuanced performances unlike anything else. Even if, at times, the borrowed sound engine doesn’t live up to the keyboard’s lofty potential.

This article originally appeared on Engadget at https://www.engadget.com/expressive-e-osmose-review-a-game-changing-mpe-keyboard-but-a-frustrating-synthesizer-170001300.html?src=rss

Source: Engadget – Expressive E Osmose review: A game-changing MPE keyboard, but a frustrating synthesizer

Google's latest Android update includes AI-created image descriptions and animations for voice messages

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly positioned emoji. There are also new reactions for messages that go far beyond a simple thumbs-up, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s even adding new languages to its Live Caption feature, enhancing the pre-existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.

Wear OS is getting a bunch of little updates. You can control more smart home devices and light groups directly from a watch, which comes in handy when creating mood lighting. You can also tell your smart home devices that you are home or away with a tap. There’s a new Assistant Routines feature that automates daily tasks and an Assistant At a Glance shortcut on the watch face that displays information relevant to your day, like the weather and traffic data.

As for Google TV, there are ten new free channels to choose from, bringing the grand total to well over 800. None of these channels require an additional subscription, but they will have commercials. All of these updates begin rolling out today, but it could be a few weeks before they reach everyone’s devices.

This article originally appeared on Engadget at https://www.engadget.com/googles-latest-android-update-includes-ai-created-image-descriptions-and-animations-for-voice-messages-172522129.html?src=rss

Source: Engadget – Google’s latest Android update includes AI-created image descriptions and animations for voice messages

Google Messages now lets you choose your own chat bubble colors

Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.

With the help of on-device Google AI (meaning you’ll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you’d like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.

The new Voice Moods feature allows you to apply one of nine different vibes to a voice message, by showing visual effects such as heart-eye emoji, fireballs (for when you’re furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.

In addition, there are more than 15 Screen Effects you can trigger by typing things like “It’s snowing” or “I love you.” These will make “your screen erupt in a symphony of colors and motion,” Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.

Screenshot of a Google app
Google

On top of all of that, users will now be able to set up a profile that appends their name and photo to their phone number to help them have more control over how they appear across Google services. The company says this feature could help when it comes to receiving messages from a phone number that isn’t in your group chats. It could help you know the identity of everyone in a group chat too.

Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.

Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.

For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) have long been a status symbol for its users. However, Apple has relented, likely to fall in line with European Union regulations: the company recently said it would start supporting RCS in 2024.

This article originally appeared on Engadget at https://www.engadget.com/google-messages-now-lets-you-choose-your-own-chat-bubble-colors-170042264.html?src=rss

Source: Engadget – Google Messages now lets you choose your own chat bubble colors

Tesla will deliver the first Cybertrucks today at 3PM ET

If you’ve long dreamed of watching a very small number of vehicles roll off an assembly line, today’s your chance. Tesla is holding a livestream event to highlight deliveries of its long-awaited Cybertruck. The company has only managed to manufacture ten of them so far, despite a 2019 reveal, so that’s what we’ll be watching.

You can catch the Texas-based livestream on X, of course, but the event is also available via Tesla’s website. It all goes down at 3PM ET. Since there will be only ten trucks to show off, the livestream should also cover pertinent details like battery range, towing capacity, up-to-date pricing and, of course, general availability. Tesla plans to ramp up production in 2024 for the cute lil dystopian wonder cars.

It’s easy to make jokes at the automaker’s expense, given the recent history of its CEO, but this is something of a big deal. It’s Tesla’s first truck, despite looking nothing like a classic pickup. The aesthetics are absolutely wild, with it resembling something out of a 1970s sci-fi flick instead of something you’d spot at a tailgate party. As for performance, it remains to be seen if the Cybertruck can compete with rival vehicles in the off-road market.

Tesla’s Cybertruck has been plagued with issues from inception. During its 2019 product debut, Elon Musk crowed about the truck’s unbreakable glass and invited design chief Franz von Holzhausen to test it by hurling a metal ball at the window. Well, it shattered, drawing a muttered curse from the embattled CEO. Despite that embarrassment, the company still says the vehicle boasts a “nearly impenetrable” exoskeleton that resists dents, damage and long-term corrosion. We shall see. The truck has also weathered multiple delays and a redesign back in 2020.

There’s also the matter of price. When it was first revealed, the Cybertruck was set to cost around $40,000. However, the company’s been fairly silent on the subject since then, and a lot has changed since 2019. You can reserve a vehicle right now from Tesla by plopping down $100, but who knows when volume shipments will start. Despite that, Musk recently told investors that the company has accrued more than one million reservations. Those folks will be waiting a while, as even generous estimates put Tesla’s output at around 200,000 Cybertrucks a year.

The real question: will Joe Rogan be one of the ten lucky golden ticket holders? We just might find out at 3PM ET.

This article originally appeared on Engadget at https://www.engadget.com/tesla-will-deliver-the-first-cybertrucks-today-at-3pm-et-160932259.html?src=rss

Source: Engadget – Tesla will deliver the first Cybertrucks today at 3PM ET

Logitech's Litra Glow streamer light falls to a new low of $40

It’s getting dark much too early, and that means a lot more time in meetings or livestreams with a bright overhead light or frustrating shadows. Logitech’s Litra Glow is a fantastic option for ensuring you look good on camera, and right now, it’s at a new all-time low price. The light is down to $40 from $60 thanks to a 17 percent off sale and an additional $10 coupon applied at checkout.
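For what it’s worth, the listed discounts do add up to roughly the advertised price. Here’s a quick back-of-the-envelope check, using only the figures cited above:

```python
# Deal math check: $60 list price, 17 percent off, then a $10 coupon.
list_price = 60.00
after_sale = list_price * (1 - 0.17)  # 17 percent sale discount -> $49.80
final_price = after_sale - 10.00      # $10 checkout coupon

print(round(final_price, 2))  # 39.8, which the listing rounds to "$40"
```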

Logitech’s Litra Glow is a Premium LED Streaming Light designed for creators and is our recommendation for game-streaming gear that will make you feel like a pro. It clips right onto your computer next to its webcam with three-way mounting, letting you adjust its height, tilt and rotation. The light is USB-powered, so you’ll want room for its cord to hide behind your monitor.

The Litra Glow is equipped with Truesoft technology, so you won’t just have a painfully bright light in your face. You can also adjust the light’s brightness and temperature (a great tool for warm light fans) based on the time of day and personal preference. You can make these changes using manual controls or Logitech’s app.

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/logitechs-litra-glow-streamer-light-falls-to-a-new-low-of-40-141910194.html?src=rss

Source: Engadget – Logitech’s Litra Glow streamer light falls to a new low of $40

How OpenAI's ChatGPT has changed the world in just a year

Over the course of two months from its debut in November 2022, ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, and been shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government.

In short, ChatGPT is just about everywhere. Few industries have remained untouched by the viral adoption of generative AI tools. On the first anniversary of its release, let’s take a look back on the year of ChatGPT that brought us here.

OpenAI had been developing GPT (Generative Pre-trained Transformer), the large language model that ChatGPT runs on, since 2016 — unveiling GPT-1 in 2018 and iterating it to GPT-3 by June 2020. With the November 30, 2022 release of GPT-3.5 came ChatGPT, a digital agent capable of superficially understanding natural language inputs and generating written responses to them. Sure, it was rather slow to answer and couldn’t speak to questions about anything that happened after September 2021 — not to mention its issues answering queries with misinformation during bouts of “hallucinations” — but even that kludgy first iteration demonstrated capabilities far beyond what other state-of-the-art digital assistants like Siri and Alexa could provide.

ChatGPT’s release timing couldn’t have been better. The public had already been introduced to the concept of generative artificial intelligence in April of that year with DALL-E 2, a text-to-image generator. DALL-E 2, as well as Stable Diffusion, Midjourney and similar programs, were an ideal low-barrier entry point for the general public to try out this revolutionary new technology. They were an immediate smash hit, with subreddits and Twitter accounts springing up seemingly overnight to post screengrabs of the most outlandish scenarios users could imagine. And it wasn’t just the terminally online who embraced AI image generation; the technology immediately entered the mainstream discourse as well, extraneous digits and all.

So when ChatGPT dropped last November, the public was already primed on the idea of having computers make content at a user’s direction. The logical leap to having it make words instead of pictures wasn’t a large one — heck, people had already been using similar, inferior versions in their phones for years with their digital assistants.

Q1: [Hyping intensifies]

To say that ChatGPT was well-received would be to say that the Titanic suffered a small fender-bender on its maiden voyage. It was a polestar, orders of magnitude bigger than the hype surrounding DALL-E and other image generators. People flat out lost their minds over the new AI and over OpenAI CEO Sam Altman. Throughout December 2022, ChatGPT’s usage numbers rose meteorically as more and more people logged on to try it for themselves.

By the following January, ChatGPT was a certified phenomenon, surpassing 100 million monthly active users in just two months. That was faster than both TikTok and Instagram, and remains the fastest user adoption to 100 million in the history of the internet.

We also got our first look at the disruptive potential that generative AI offers when ChatGPT managed to pass a series of law school exams (albeit by the skin of its digital teeth). That January, Microsoft extended its existing R&D partnership with OpenAI to the tune of $10 billion. That number is impressively large and likely why Altman still has his job.

As February rolled around, ChatGPT’s user numbers continued to soar, surpassing one billion users total with an average of more than 35 million people per day using the program. At this point OpenAI was reportedly worth just under $30 billion and Microsoft was doing its absolute best to cram the new technology into every single system, application and feature in its product ecosystem. ChatGPT was incorporated into BingChat (now just Copilot) and the Edge browser to great fanfare — despite repeated incidents of bizarre behavior and responses that saw the Bing program temporarily taken offline for repairs.

Other tech companies began adopting ChatGPT as well: Opera incorporated it into its browser, Snapchat released its GPT-based My AI assistant (which would be unceremoniously abandoned a few problematic months later) and BuzzFeed News’ parent company used it to generate listicles.

March saw more of the same, with OpenAI announcing a new subscription-based service — ChatGPT Plus — which offered users the chance to skip to the head of the queue during peak usage hours, along with features not found in the free version. The company also unveiled plug-in and API support for the GPT platform, empowering developers to add the technology to their own applications and enabling ChatGPT to pull information from across the internet as well as interact directly with connected sensors and devices.
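That API support is what let developers fold GPT into their own products. As a rough sketch (the model name and message schema below follow the 2023-era Chat Completions API and are assumptions for illustration, not details from this article), the JSON body a developer sends looks something like this:

```python
import json

def build_chat_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Build the JSON body for a Chat Completions request.

    This only constructs the payload; actually calling the API requires
    an API key and an HTTPS POST to OpenAI's endpoint.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

body = build_chat_request("Summarize this article in one sentence.")
print(json.dumps(body, indent=2))
```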

ChatGPT also notched 100 million users per day in March, 30 times higher than two months prior. Companies from Slack and Discord to GM announced plans to incorporate GPT and generative AI technologies into their products.

Not everybody was quite so enthusiastic about the pace at which generative AI was being adopted, mind you. In March, OpenAI co-founder Elon Musk, along with Steve Wozniak and a slew of AI researchers, signed an open letter demanding a six-month moratorium on AI development.

Q2: Electric Boog-AI-loo

Over the next couple of months, the company fell into a rhythm of continuous user growth, new integrations, occasional rival AI debuts and nationwide bans on generative AI technology. For example, in April, ChatGPT’s usage climbed nearly 13 percent month-over-month from March even as the entire nation of Italy outlawed ChatGPT use by public sector employees, citing GDPR data privacy violations. The Italian ban proved only temporary after the company worked to resolve the flagged issues, but it was an embarrassing rebuke for the company and helped spur further calls for federal regulation.

When it was first released, ChatGPT was only available through a desktop browser. That changed in May when OpenAI released its dedicated iOS app and expanded the digital assistant’s availability to an additional 11 countries including France, Germany, Ireland and Jamaica. At the same time, Microsoft’s integration efforts continued apace, with Bing Search melding into the chatbot as its “default search experience.” OpenAI also expanded ChatGPT’s plug-in system to ensure that more third-party developers were able to build ChatGPT into their own products.

ChatGPT’s tendency to hallucinate facts and figures was once again exposed that month when a lawyer in New York was caught using the generative AI to do “legal research.” It gave him a number of entirely made-up, nonexistent cases to cite in his argument — which he then did without bothering to independently validate any of them. The judge was not amused.

By June, a little bit of ChatGPT’s shine had started to wear off. Congress reportedly barred Capitol Hill staffers from using the application over data-handling concerns. User numbers had declined nearly 10 percent month-over-month, but ChatGPT was already well on its way to ubiquity. A March update enabling the AI to comprehend and generate Python code in response to natural language queries only increased its utility.

Q3: [Pushback intensifies]

More cracks in ChatGPT’s facade began to show the following month when OpenAI’s head of Trust and Safety, Dave Willner, abruptly announced his resignation days before the company released its ChatGPT Android app. His departure came on the heels of news of an FTC investigation into the company’s potential violation of consumer protection laws — specifically regarding the user data leak from March that inadvertently shared chat histories and payment records.

It was around this time that OpenAI’s training methods, which involve scraping the public internet for content and feeding it into massive datasets on which the models are taught, came under fire from copyright holders and marquee authors alike. Much in the same manner that Getty Images sued Stability AI for Stable Diffusion’s obvious leverage of copyrighted materials, stand-up comedian and author Sarah Silverman brought suit against OpenAI, alleging that its “Books2” dataset illegally included her copyrighted works. The Authors Guild, which represents Stephen King, John Grisham and 134 others, launched a class-action suit of its own in September. While much of Silverman’s suit was eventually dismissed, the Authors Guild suit continues to wend its way through the courts.

Select news outlets, on the other hand, proved far more amenable. The Associated Press announced in August that it had entered into a licensing agreement with OpenAI which would see AP content used (with permission) to train GPT models. At the same time, the AP unveiled a new set of newsroom guidelines explaining how generative AI might be used in articles, while still cautioning journalists against using it for anything that might actually be published.

ChatGPT itself didn’t seem too inclined to follow the rules. In a report published in August, the Washington Post found that guardrails supposedly enacted by OpenAI in March, designed to counter the chatbot’s use in generating and amplifying political disinformation, actually weren’t. The company told Semafor in April that it was “developing a machine learning classifier that will flag when ChatGPT is asked to generate large volumes of text that appear related to electoral campaigns or lobbying.” Per the Post, those rules simply were not enforced, with the system eagerly returning responses for prompts like “Write a message encouraging suburban women in their 40s to vote for Trump” or “Make a case to convince an urban dweller in their 20s to vote for Biden.”

At the same time, OpenAI was rolling out another batch of new features and updates for ChatGPT including an Enterprise version that could be fine-tuned to a company’s specific needs and trained on the firm’s internal data, allowing the chatbot to provide more accurate responses. Additionally, ChatGPT’s ability to browse the internet for information was restored for Plus users in September, having been temporarily suspended earlier in the year after folks figured out how to exploit it to get around paywalls. OpenAI also expanded the chatbot’s multimodal capabilities, adding support for both voice and image inputs for user queries in a September 25 update.

Q4: Starring Sam Altman as “Lazarus”

The fourth quarter of 2023 has been a hell of a decade for OpenAI. On the technological front, Browse with Bing, Microsoft’s answer to Google SGE, moved out of beta and became available to all subscribers — just in time for the third iteration of DALL-E to enter public beta. Even free tier users can now hold spoken conversations with the chatbot following the November update, a feature formerly reserved for Plus and Enterprise subscribers. What’s more, OpenAI has announced GPTs, little single-serving versions of the larger LLM that function like apps and widgets and which can be created by anyone, regardless of their programming skill level.

The company has also suggested that it might enter the AI chip market at some point in the future, in an effort to shore up the speed and performance of its API services. OpenAI CEO Sam Altman had previously pointed to industry-wide GPU shortages to explain the service’s spotty performance. Producing its own processors might mitigate those supply issues, while potentially lowering the current four-cent-per-query cost of operating the chatbot to something more manageable.
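To put that four-cent figure in perspective, here is a back-of-the-envelope estimate. It assumes, purely for illustration, one query per daily active user; that ratio is not from the article.

```python
cost_per_query = 0.04        # dollars per query, per reporting cited above
daily_queries = 100_000_000  # assumes one query per daily user (illustrative)

daily_cost = cost_per_query * daily_queries
print(f"${daily_cost:,.0f} per day")  # $4,000,000 per day
```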

But even those best laid plans were very nearly smashed to pieces just before Thanksgiving when the OpenAI board of directors fired Sam Altman, arguing that he had not been “consistently candid in his communications with the board.”

That firing didn’t take. Instead, it set off 72 hours of chaos within the company itself and the larger industry, with waves of recriminations and accusations, threats of resignation from the lion’s share of the staff and actual resignations by senior leadership happening by the hour. The company went through three CEOs in as many days, landing back on the one it started with, albeit with him now free from a board of directors that would even consider acting as a brake against the technology’s further, unfettered commercial development.

At the start of the year, ChatGPT was regularly derided as a fad, a gimmick, some shiny bauble that would quickly be cast aside by a fickle public like so many NFTs. Those predictions could still prove true, but as 2023 has ground on and the breadth of ChatGPT’s adoption has continued to grow, the chances of those dim predictions coming to pass feel increasingly remote.

There is simply too much money wrapped up in ensuring its continued development, from the revenue streams of companies promoting the technology to the investments of firms incorporating the technology into their products and services. There is also a fear of missing out among companies, S&P Global argues — that they might adopt too late what turns out to be a foundationally transformative technology — that is helping drive ChatGPT’s rapid uptake.

The calendar resetting for the new year shouldn’t do much to change ChatGPT’s upward trajectory, but looming regulatory oversight might. President Biden has made the responsible development of AI a focus of his administration, with both houses of Congress beginning to draft legislation as well. The form and scope of those resulting rules could have a significant impact on what ChatGPT looks like this time next year.

This article originally appeared on Engadget at https://www.engadget.com/how-openais-chatgpt-has-changed-the-world-in-just-a-year-140050053.html?src=rss

Source: Engadget – How OpenAI’s ChatGPT has changed the world in just a year

The US government is no longer briefing Meta about foreign influence campaigns

As Meta gears up for the 2024 election, the company is grappling with a new challenge that could slow its efforts to combat foreign attempts at election interference. US government agencies have stopped sharing information with the company’s security researchers about covert influence operations on its platform.

Meta says that as of July, the government has “paused” briefings related to foreign election interference, eliminating a key source of information for the company. During a call with reporters, Meta’s head of security policy, Nathaniel Gleicher, declined to speculate on the government’s motivations, but the timing lines up with a court order earlier this year that restricted the Biden Administration’s contacts with social media firms.

The order, the result of two states’ attempts to limit platforms’ ability to remove misinformation, is currently suspended while the Supreme Court considers the case. But government agencies, like CISA (the Cybersecurity and Infrastructure Security Agency) and the FBI, have apparently opted to keep the “pause” in place.

Gleicher noted that government contacts aren’t Meta’s only source of information, and that the company continues to work with industry researchers and other civil society groups. But he acknowledged that government officials can be best placed to advise on certain kinds of threats, like those that are coordinated on other platforms. “We have seen that particularly-sophisticated threat actors, like nation states, engaged in foreign interference… there are times when government has the capability to identify these campaigns that other players may not,” he said.

Meta’s researchers regularly share details about networks of fake accounts the company catches boosting foreign propaganda and conducting other kinds of influence campaigns, what the company calls “coordinated inauthentic behavior,” or CIB. And while most of its takedowns don’t come as a result of government tips, the company has relied on them in detecting CIB targeting US politics. Meta acted on three separate FBI tips about fake accounts from Russia, Iran and Mexico ahead of the 2020 presidential election.

Law enforcement officials have also expressed concern about the lack of coordination with social media platforms. The FBI previously told the House Judiciary Committee that it had “discovered foreign influence campaigns on social media platforms but in some cases did not inform the companies about them because they were hamstrung by the new legal oversight,” NBC News reported, citing congressional sources.

Meta’s latest comments are the first time the company has publicly confirmed that it is no longer receiving tips about election interference. The disclosure comes as the company ramps up its efforts to prepare for multiple elections in 2024, and the inevitable attempts to manipulate political conversations on Facebook. The company said in its latest report on CIB that China is now the third-most common source of coordinated inauthentic behavior on its platform, behind Russia and Iran.

This article originally appeared on Engadget at https://www.engadget.com/the-us-government-is-no-longer-briefing-meta-about-foreign-influence-campaigns-130019156.html?src=rss

Source: Engadget – The US government is no longer briefing Meta about foreign influence campaigns