Austrian Audio is not exactly a household name. It was formed in 2017 following the closure of AKG’s offices in Vienna. And it’s mostly stuck to higher-end microphones and headphones. Now the company is dipping its toes into more consumer-friendly territory with the $199 MiCreator Studio, a combination condenser microphone and USB-C audio interface in an incredibly portable package.
At 155 x 60 x 37 mm the MiCreator Studio is quite small, but I’d say a touch shy of pocketable. It’s also surprisingly dense. 13 ounces isn’t exactly heavy, but it’s more than I expected the MiCreator to weigh just by looking at it. The heft makes it feel like a durable and well-made device. The only exception to this is the swappable faceplates. My review sample came with black and red plates; they look nice but feel a little flimsy. And the magnets that attach them to the sides are weak enough that I could see them coming off if the unit were tossed in a bag. At least it comes with a soft pouch, so everything will stay in one place even if it does get jostled around.
There are a pair of rubber feet on the bottom that allow the MiCreator to sit comfortably on almost any surface without sliding around. And the mic capsule is suspended by rubber shock absorbers, so vibration shouldn’t prove too much of a concern. If the tiltable head doesn’t give you enough flexibility, there is a screw mount underneath for attaching to a mic stand.
Controls on the unit itself are relatively spartan. On the front is a switch that flips the mic between high gain, low gain and mute. And there’s a knob for controlling monitoring levels or the balance between two sources when you have something plugged into the 3.5mm in/out jack on the back. Below that dual-purpose jack is a dedicated 3.5mm headphone output, and above it is a switch that sets the input level of the additional input.
The switches for changing the input level are probably my biggest gripe with the MiCreator. The difference between the high and low settings is pretty massive. Yes, obviously you can tweak the levels in your DAW, and most people will have no issue doing that. But personally I like getting my levels as close to right as possible without touching the gain in my DAW, for maximum flexibility.
I also found that the high gain setting still required me to get pretty close to the mic while speaking, which resulted in a lot of breath and mouth noises. That’s probably exactly what you want when recording ASMR videos, but it wasn’t ideal for podcasting. Again, a lot of this is easily rectified with a decent pop filter. But that’s one more thing to carry, and it slightly undercuts the portability factor.
Those minor nitpicks aside, the mic sounds great. It’s highly directional so, despite being a fairly sensitive condenser, background noise is rarely a problem. And it captures a healthy amount of midrange and low end. That’s essential for micing up, say, a guitar amp, and it tends to treat my lower vocal register well. But many will want to toss a low cut on their voice in post-production.
That lack of tweakability on the gain is an issue again when you plug an instrument directly into the second input. Something like a synth with a master volume that doesn’t alter tone is fine. But going direct in with my guitar was a little hit or miss. The most reliable approach was to go through my pedalboard and one of UA’s amp sim pedals for better control over volume and tone, rather than relying on amp plugins in a DAW. Austrian Audio gets a ton of credit, though, for including an instrument cable in the box that’s standard 1/4-inch TS on one side and 3.5mm TRRS on the other for plugging into the MiCreator. It might seem like a small thing, but it saves you from having to track down and order a rather unusual cable on your own.
The company also sent over one of its MiCreator Satellites. This is a second mic, without an interface, designed to pair with the MiCreator. It costs $99 but adds a lot of flexibility. For one, it’s the exact same mic as the MiCreator, so you can use them as a stereo pair or for two people in a simplified podcast setup. But the included cable is also long enough for you to put one mic right up against an amp while capturing some room tone with the other. Or you could mic an acoustic guitar with one and sing into the other. And the Satellite is truly tiny. This is one of the smallest, if not the smallest, full-fledged podcast studios you can get.
If there’s one feature I would have loved to see, it’s a standalone operating mode. If the MiCreator had a small battery and a microSD card slot so it could double as a field recorder, or capture an interview when hooking up your laptop or iPad is not really feasible, I could see it carving out a permanent spot in my day bag.
Still, for $199, or $299 when bundled with a Satellite, the MiCreator offers a surprising amount of value. Frankly, it’s better than it has any right to be at that price. It’s an excellent condenser USB microphone and a solid (if simple) audio interface in a small, rugged package. It can be a high-quality go-anywhere podcast studio, or the primary way a band records new material while out on tour.
Google has revealed a string of accessibility updates it’s rolling out for Maps, Search and Assistant, as well as greater availability of some camera-based Pixel features. One of the main focus areas this time around is wheelchair accessibility. A new option that’s gradually becoming available on iOS and Android will allow Maps users to request stair-free walking routes. This feature — which Google says will benefit those traveling with luggage and strollers as well — will be available globally, as long as the company has sufficient data for the region.
Google notes that if you have the wheelchair-accessible option enabled in your transit preferences, this will automatically be applied to walking routes too. Otherwise, when you request a walking route, you can access stair-free directions by tapping the three dots at the top of the screen and enabling the “wheelchair-accessible” option.
On a related note, wheelchair-accessible information will be available across more Google products, namely on Maps for Android Auto and cars with Google built in. When you search for a place and tap on it, a wheelchair icon will appear if the location has a step-free entrance, accessible restrooms, parking or seating.
It should be easier to find and support businesses owned by people with disabilities in Maps and Search too. If a business chooses to identify itself as “disabled-owned,” this will be mentioned in Maps and Search listings. Google previously rolled out similar Asian-owned, Black-owned, Latino-owned, LGBTQ+ owned, veteran-owned and women-owned business labels.
Elsewhere, Google is enabling screen reader capabilities in Lens in Maps (which was previously called Search with Live View), an augmented reality tool that’s designed to help you find things like ATMs, restrooms and restaurants with the help of your handset’s camera. When you’re in a perhaps unfamiliar place, you can tap the camera icon in the search bar and point your phone at the world around you.
“If your screen reader is enabled, you’ll receive auditory feedback of the places around you with helpful information like the name and category of a place and how far away it is,” Eve Andersson, senior director on Google’s Products for All team, wrote in a blog post. This Lens in Maps feature, which is geared toward blind and low-vision folks, will be available on iOS starting today and Android later this year.
On Pixel devices, the Magnifier app uses your camera to help you zoom in on real-world details from afar or to make text on menus and documents easier to read with the help of color filters, brightness and contrast settings. The app is available for Pixel 5 and later devices, but not the Pixel Fold.
Google also notes that the latest version of Guided Frame that arrived on Pixel 8 and Pixel 8 Pro earlier this month recognizes pets, dishes and documents in addition to faces to help people who are blind or have low vision take good-quality photos. The Guided Frame update is coming to Pixel 6 and Pixel 7 devices later this year.
Meanwhile, Google is offering more customizable Assistant Routines. The company says you’ll be able to add a Routine to your home screen as a shortcut, determine the size of it and customize it with your own images. “Research has shown that this personalization can be particularly helpful for people with cognitive differences and disabilities and we hope it will bring the helpfulness of Assistant Routines to even more people,” Andersson wrote. Google developers took inspiration from Action Blocks for this feature.
Last but not least, Google earlier this year added a feature to the desktop Chrome address bar that detects typos and suggests websites based on what the browser reckons you meant. The feature will be available on Chrome for iOS and Android starting today. The idea is to help folks with dyslexia, language learners and anyone who makes typos more easily find what they’re seeking.
YouTube’s rolling out a whole bunch of new features and design updates, three dozen in total. Some of these tools are for the web app, while others are for the smartphone app and smart TV software. These features aren’t game-changers by themselves, but they add up to an improved user experience. Let’s go over some of the more interesting ones.
It’s now easier to speed up videos for those who just can’t get enough of really fast podcast clips. Just hold your finger down on the video and it’ll automatically bump up the playback speed to 2x. This feature is also useful for searching through a video for a relevant portion, in addition to fast-paced playback. The tool’s available across web, tablets and mobile devices.
The app’s launching bigger preview thumbnails to help with navigation. There’s also a new haptic feedback component that vibrates when you hover over the original start point, so you never lose your place. This will help when perusing videos with your finger on a smartphone or tablet, as the current way to do this isn’t exactly accurate.
One of the more useful updates here is a new lock screen tool to avoid accidental interruptions while you watch stuff on your phone or tablet. This should be extremely handy for those who like to take walks or exercise while listening to YouTube, as the jostling typically interrupts whatever’s on-screen. In other words, your quiet meditation video won’t accidentally switch to some guy yelling about the end of masculinity as your phone sits in a pocket, purse or handbag.
Speaking of guys yelling about the end of masculinity, the company’s finally (finally) added a stable volume feature, which ensures that the relative loudness of videos doesn’t fluctuate too much. This tool’s automatically turned on once you snag the update.
Even the humble library tab has gotten a refresh. It’s now called “You” and relays a bit more data than before. You’ll have access to previously watched videos, playlists, downloads and purchases, all from one place. Again, this change impacts both the web and mobile apps.
The rest of the updates are design related, with on-screen visual cues that appear when creators ask you to subscribe, complete with dopamine-enhancing sparkles when you finally “smash that like button.” There’s even a new animation that follows the view count and like count throughout a video’s first 24 hours. Some design elements extend to the smart TV app, including a new vertical menu, video chapters, a scrollable description section and more.
YouTube’s latest update is a tiered release and the company says it could be a few weeks before it reaches every user throughout the globe. The popular streaming platform says more features are forthcoming, including a redesign of the YouTube Kids app.
YouTube’s constantly changing up its core features. The past year has seen an enhanced 1080p playback option for web users, and the company’s even announced a spate of AI-enhanced creator tools, among other updates. Evolve or die, right? The social media landscape, after all, is currently in the midst of something of a sea change.
If you have a pair of in-ear headphones, there’s a good chance they are using a technology that’s several decades old. Despite attempts to introduce different, exotic-sounding systems like planar magnetic, electrostatic and even bone conduction, most IEMs or in-ear headphones still use either balanced armature or dynamic drivers. But there’s another contender, promising high fidelity, low power consumption and a tiny physical footprint. The twist is, it’s a technology that’s been in your pocket for the last ten years already.
We’re talking about micro-electromechanical systems (MEMS), and it’s a technology that’s been used in almost every microphone in every cell phone since the 2010s. When applied to headphone drivers (the inverse of a microphone) the benefits are many. But until recently, the technology wasn’t mature enough for mainstream headphones. California-based xMEMS is one company pushing the technology and consumer products featuring its solid-state MEMS drivers are finally coming to market. We tested the high-end Oni from Singularity, but Creative has also confirmed a set of TWS headphones with xMEMS drivers will be available in time for the holidays.
Where conventional speakers and drivers typically use magnets and coils, MEMS uses piezos and silicon. The result, if the hype is to be believed, is something that’s more responsive, more durable and more consistent in fidelity. And unlike balanced-armature or dynamic drivers, MEMS drivers can be built on a production line with minimal-to-no need for calibration or driver matching, streamlining their production. xMEMS, for example, has partnered with TSMC, one of the largest producers of microprocessors, for its manufacturing process.
Of course, MEMS drivers lend themselves to any wearable that produces sound, from AR glasses to VR goggles and hearing aids. For most of us, though, it’s headphones where we’re going to see the biggest impact. Not least because the potential consistency and precision of MEMS should marry perfectly with related technologies such as spatial audio, where fast response times and perfect phase matching (two headphones being perfectly calibrated to each other) are essential.
For now, MEMS is best suited to earbuds, IEMS and TWS-style headphones but xMEMS hopes to change that. “The North Star of the company was to reinvent loudspeakers,” Mike Householder, Marketing & Business Development at the company told Engadget. “But to generate that full bandwidth audio in free air is a little bit more of a development challenge that’s going to take some more time. The easier lift for us was to get into personal audio and that’s the product that we have today.”
To look at, the first IEM to feature xMEMS’ solid-state drivers, Singularity’s Oni, seems like a regular, stylish high-end in-ear monitor. Once the music started to flow, though, there was a very clear difference. Electronic genres sounded crisp and impactful. The MEMS drivers’ fast transient response was evident in the sharp, punchy percussion of RJD2’s “Ghostwriter” and the Chemical Brothers’ “Live Again.” The latter’s mid- and high-end sections in particular shone through with remarkable clarity. Bass response was good, especially in the lower-mids, but perhaps not the strong point of the experience.
When I tried Metallica’s “For Whom the Bell Tolls,” I immediately noticed the hi-hats pushing through in a way I’d never heard before. The only way I can describe it is “splashy.” It didn’t sound weird, just noticeable. I asked Householder about this and he wasn’t surprised. “Yeah, the hi-hats, cymbals and percussion, you’re gonna hear it with a new level of detail that you’re really not accustomed to,” he said, adding that some of this comes down to the tuning of the supplied headphone amplifier (made by iFi), so it’s partly the EQ of that mixed with the improved clarity of high frequencies from the MEMS drivers.
The supplied amp/DAC held another surprise: it has a specific “xMEMS” mode. I originally planned to use my own, but it turns out I needed this particular DAC because the MEMS drivers require a 10-volt bias to work. I asked Householder if all headphones would require a special DAC (effectively ending their chances of mainstream adoption), but apparently xMEMS has developed its own amp “chip” that can both drive the speakers and supply the 10-volt bias. The forthcoming True Wireless buds from Creative, for example, obviously won’t need any additional hardware.
This is where things get interesting. While we don’t know the price for Creative’s TWS buds with xMEMS drivers, we can be sure they will be a lot cheaper than Singularity’s IEMs, which retail for $1,500. “You know, they’re appealing to a certain consumer, but you could just very easily put that same speaker into a plastic shell, and retail it for 150 bucks,” Householder told Engadget. The idea that xMEMS can democratize personal audio at every price point is a bold one. Not least because most audiophiles aren’t used to seeing the exact same technology from their IEMs show up in sub-$200 wireless products. Until we have another set to test, though, we can’t comment on the individual character each manufacturer can give them.
One possible differentiating factor for higher-end products (and competing MEMS-based products) is something xMEMS is calling “Skyline.” Householder described it as a dynamic “vent” that can be opened and closed depending on the listener’s needs. Similar to how open-back headphones are favored by some for their acoustic qualities, xMEMS-powered IEMs could use Skyline to open and close the earpiece to prevent occlusion, improve passive noise canceling and enable features like a “transparency” mode, where you want to temporarily let external, environmental noises come through.
For those who prefer on-ear or over-ear headphones, MEMS technology will likely be paired with legacy dynamic drivers, at least initially. “The first step that we’re taking into headphone is actually a two way approach,” Householder said. The idea being that a smaller dynamic driver can handle the low frequencies, while MEMS drivers currently don’t scale up so well. “It’s really the perfect pairing. The dynamic for the low end, let it do what it does best, and then we’ve got the far superior high frequency response [from MEMS],” he said. “But the long term vision is to eventually fully replace that dynamic driver.”
The ultimate goal would, of course, be a set of solid-state desktop speakers, but it seems we’re a little way out on that. For now, though, there’s a tantalizing promise that MEMS-based in-ears could modernize and maybe even democratize consumer audio, at least around a certain price point. Not to mention that xMEMS isn’t the only company in the game. Austrian startup Usound already showed its own reference-design earphone last year, and Sonic Edge has developed its own MEMS “speaker-in-chip” technology. With some competition in the market, there’s hope that the number of products featuring the technology will increase and improve steadily over the next year or so.
In a study from Oxford University, researchers found that a combination of wearable sensor data and machine learning algorithms can track the progression of Parkinson’s disease more accurately than traditional clinical observation. Monitoring movement data collected by sensor technology may not only improve predictions about disease progression but also allow for more precise diagnoses.
Parkinson’s disease is a neurological condition that affects motor control and movement. Although there is currently no cure, early intervention can help delay the progression of the disease in patients. Diagnosing and tracking the progression of Parkinson’s disease currently involves a neurologist using the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) to assess the patient’s motor symptoms by assigning scores to the performance of specific movements. However, because this is a subjective, human analysis, classification can be inaccurate.
In the Oxford study, 74 patients with Parkinson’s were monitored for disease progression over a period of 18 months. The participants wore wearables with sensors in different regions of the body: on the chest, at the base of the spine and on each wrist and foot. These sensors — which had gyroscopic and accelerometric capabilities — kept tabs on 122 different physiological measurements, and tracked the patients during walking and postural sway tests. Kinetic data was then analyzed by custom software programs using machine learning.
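As a rough illustration of the general approach (a hypothetical sketch with synthetic data and invented feature names, not the Oxford team’s actual pipeline), sensor-derived gait and sway measurements could be fed to an off-the-shelf model to estimate a motor-symptom score:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 122 sensor-derived measurements (step-time
# variability, arm-swing asymmetry, postural sway area and so on) for 74 patients.
n_patients, n_features = 74, 122
X = rng.normal(size=(n_patients, n_features))

# Hypothetical motor-symptom score the movement features should predict.
y = 2.0 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_patients)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", round(scores.mean(), 2))

In practice, the value comes from which features are extracted from the raw gyroscope and accelerometer streams and how well they track clinical outcomes over time, rather than from the particular model used.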
The sensor data collected by the wearables were compared to standard MDS-UPDRS assessments, which are considered the gold standard in current practice. In this study’s patients, that traditional test “did not capture any change,” while the sensor-based analysis “detected a statistically significant progression of the motor symptoms,” according to the researchers.
Having more precise data on the progression of Parkinson’s isn’t a cure, of course. But the incorporation of metrics from wearables could help researchers confirm the efficacy of novel treatment options.
ERNIE, Baidu’s answer to ChatGPT, has “achieved a full upgrade,” company CEO Robin Li told the assembled crowd at the Baidu World 2023 showcase on Tuesday, “with drastically improved performance in understanding, generation, reasoning, and memory.”
During his keynote address, Li demonstrated improvements to those four core capabilities on stage by having the AI create a multimodal car commercial in a few minutes based on a short text prompt, solve complex geometry problems and progressively iterate the plot for a short story on the spot. The fourth-gen generative AI system “is not inferior in any aspect to GPT-4,” he continued.
ERNIE 4.0 will offer an “improved” search experience resembling that of Google’s SGE, aggregating and summarizing information pulled from the wider web and distilling it into a generated response. The system will be multimodal, providing answers as text, images or animated graphs through an “interactive chat interface for more complex searches, enabling users to iteratively refine their queries until reaching the optimal answer, all in one search interface,” per the company’s press release. What’s more, the AI will be able to recommend “highly customized” content streams based on previous interactions with the user.
Similar to ChatGPT Enterprise, ERNIE’s new Generative Business Intelligence will offer a more finely tuned and secure model trained on each client’s individual data silo. ERNIE 4.0 will also be capable of “conducting academic research, summarizing key information, creating documents, and generating slideshow presentations,” and will enable users to search and retrieve files using text and voice prompts.
Baidu is following the example set by the rest of the industry and has announced plans to put its generative AI in every app and service it can manage. The company has already integrated some of the AI’s functions into Baidu Maps, including navigation, ride hailing and hotel bookings. It is also offering “low-threshold access and productivity tools” to help individuals and enterprises develop API plugins for the Baidu Qianfan Foundation Model Platform.
Baidu’s partner startups also showed off new product series that will integrate the AI’s functionality during the event, including a domestic robot, an All-in-One learning machine and a smart home speaker.
The Netflix Cup will see four pairs of Formula 1 drivers and PGA Tour golfers pairing up in a match play tournament that will take place in Las Vegas. You’ll be able to watch the event starting at 6PM ET on Tuesday, November 14 — just a few days before F1’s inaugural Las Vegas Grand Prix.
As things stand, The Netflix Cup is set to feature F1 drivers Alex Albon, Pierre Gasly, Lando Norris and Carlos Sainz. The golf pros who have lined up to take part are Rickie Fowler, Max Homa, Collin Morikawa and Justin Thomas. The tournament will see the pro-am pairs play an eight-hole match. The top two teams will duke it out on a final hole to try and win the Netflix Cup.
“The continued success of Drive to Survive has played a significant role in the growth of Formula 1 in the US, which has ultimately led to the addition of a third American race,” Emily Prazer, chief commercial officer of Las Vegas Grand Prix, Inc, said in a statement. “It’s only fitting that we kick off our inaugural race weekend with a fun event that can be streamed by F1 and PGA Tour fans around the globe.”
This is a logical way for Netflix to dip its toes into live sports streaming. It means that the company doesn’t have to immediately snap up expensive rights to high-profile leagues (many of which have deals with rival streaming services anyway) or to showcase lower-tier sports.
It’s also another example of Netflix’s cross-branding coming to the forefront. The company is placing more focus on its own properties with things like a Squid Game reality competition series and branded retail stores that will feature an obstacle course based on its biggest hit to date. Netflix is also said to be developing more video game adaptations of its shows and movies, such as Extraction and Black Mirror.
Netflix’s first livestreamed event was a Chris Rock standup special. However, the company ran into technical problems with its second planned livestream, a Love is Blind cast reunion. The company instead filmed the reunion and uploaded it to the platform as quickly as it could. Netflix will be hoping things go more smoothly this time around.
Alan Wake is coming to Fortnite in a cross-promotional event ahead of the 2010 game’s long-awaited sequel. Alan Wake: Flashback “reimagines Remedy Entertainment’s iconic story in Fortnite” as Epic Games and Remedy Entertainment introduce younger players to a franchise that faded in and out of public consciousness before some of them were born.
The game within a game appears to provide a quick recap of the events of the first title within Fortnite. “Troubled author Alan Wake embarks on a desperate search for his missing wife, Alice,” Epic’s description reads. “Following her mysterious disappearance from the Pacific Northwest town of Bright Falls, he discovers pages of a horror story he has supposedly written, but has no memory of.”
The surreal pairing becomes more logical when you consider Epic and Alan Wake developer Remedy have a working relationship. Remedy signed a publishing agreement with Epic in 2020 in a program covering up to 100 percent of a title’s development costs, including paying for quality assurance, localization and marketing. Once a game recovers its development costs, the companies split their profits 50/50. So, the Fortnite tie-in is a win-win for both companies’ bottom lines.
Alan Wake will also be a playable character via an Alan Wake Outfit. It will launch in the “Waking Nightmare” set available in the Fortnite shop beginning on October 26. Meanwhile, Alan Wake 2 launches for $50 on October 27 for PlayStation 5, Xbox Series X/S and PC via the Epic Store.
Elgato’s Stream Deck MK.2 is on sale for $130, a discount of $20 from the MSRP of $150. That’s 13 percent off and actually beats the sale price from last week’s Amazon Prime Day event. If you’re a podcaster or a livestreamer, this is a pretty good time to snag this highly useful streaming device.
This is the latest and greatest Stream Deck, and we said it sets a new standard for the industry when we placed it in our list of the best game streaming gear. Not to be confused with Valve’s Steam Deck, this similarly-named device boasts a hub of 15 LCD hotkeys that you can customize to your liking to simplify livestreaming, podcasting and related activities.
For instance, one button press can turn on a connected accessory, instantly mute a microphone, adjust the lights, trigger on-screen effects or activate audio clips, to name a few examples. You have 15 of these keys, and each can be customized as you see fit. You can even set them to perform in-game actions, like any standard keyboard shortcut.
Additionally, many users have found these devices useful for programming, media editing and any other profession/hobby that could use a bit of hotkey simplification. The buttons are also really satisfying to press.
The main reason you’d get this, however, is right in the name. It’s for streamers who have to moderate a fast-moving chat all while gaming or performing some other task. Each button has a tiny display to let you know at a glance the end result of each press. Over time, you won’t even need these mini displays, instead relying on simple muscle memory, just like keyboard hotkeys. Each of the major streaming platforms, like Twitch and YouTube, offers its own plugin for the device, complete with a set of commonly used adjustment options.
While many of the flashy, marquee mobility and transportation demos that go on at CES tend to be of the more… aspirational variety, Honda’s electric cargo hauler, the Autonomous Work Vehicle (AWV), could soon find use on airport grounds as the robotic EV trundles towards commercial operations.
Honda first debuted the AWV as part of its CES 2018 companion mobility demonstration, then partnered with engineering firm Black & Veatch to further develop the platform. The second-generation AWV was capable of being remotely piloted or following a preset path while autonomously avoiding obstacles. It could carry nearly 900 pounds of stuff onboard and tow another 1,600 pounds behind it, both on-road and off-road. Those second-gen prototypes spent countless hours ferrying building materials back and forth across a 1,000-acre solar panel construction worksite, both individually and in teams, as part of the development process.
This past March, Honda unveiled the third-generation AWV with a higher carrying capacity, higher top speed, bigger battery and better obstacle avoidance. On Tuesday, Honda revealed that it is partnering with the Greater Toronto Airports Authority to test its latest AWV at the city’s Pearson Airport.
The robotic vehicles will begin their residencies by driving the perimeters of airfields, using mounted cameras and an onboard AI, checking fences and reporting any holes or intrusions. The company is also considering testing the AWV as a FOD (foreign object debris) tool to keep runways clear, as an aircraft component hauler, people mover or baggage cart tug.
Apple has unveiled a new Apple Pencil. The latest model costs $79 ($69 for education) and it pairs and charges via a USB-C cable. It’ll be available in early November and it’s compatible with every iPad that has a USB-C port.
This is the company’s most budget-friendly Apple Pencil yet. It’s $20 less than the original model and $40 cheaper than the second-gen Apple Pencil. Apple says features of the new version include pixel-perfect accuracy, low latency and tilt sensitivity.
There’s no pressure sensitivity this time around, though, so if you want that feature, you’ll need to stick with either of the previous iterations. While you can attach the USB-C Apple Pencil magnetically to the side of your iPad for storage (in which case it will go into a sleep state to prolong the battery life), there’s no wireless charging support either. To top up the Pencil’s battery, you’ll need to slide back a cap to expose a USB-C port and plug in a charging cable.
Unlike the second-gen Pencil, you won’t be able to double tap the latest version to change drawing tools. Apple has also declined to offer free engraving this time around. However, if you have an M2-powered iPad, you’ll be able to take advantage of the hover feature that’s supported on the second-gen Pencil. That enables you to preview any mark you intend to make before it’s actually applied to your note, sketch, annotation and so on.
This is Apple’s latest step in its transition away from the Lightning port, which was largely prompted by European Union rules. The company started embracing USB-C on iPads several years ago, while it ditched the Lightning port in all iPhone 15 models. It’ll take Apple a while longer to move away from Lightning entirely. Several devices it sells — such as older iPhones, AirPods Max, Magic Mouse, Magic Trackpad and the first-gen Apple Pencil — still use that charging port. But this is another step toward an all-USB-C future, and one fewer charging cable you’ll need to carry around.
Microsoft’s new Copilot AI has wormed its way into nearly every aspect of Windows 11. There’s a bit of a learning curve, but don’t worry: we’ve got you covered. We’ve put together a primer on the company’s new AI assistant, along with step-by-step instructions on how to both enable and disable it on your Windows computer.
What does Microsoft Copilot do?
Microsoft’s Copilot is a suite of AI tools that work together to create a digital personal assistant of sorts. Just like other modern AI assistants, the tech is based on generative artificial intelligence and large language models (LLMs).
You can use Copilot to do a whole bunch of things to increase productivity or just have fun. Use the service to summarize a web page or essay, write an email, quickly change operating system settings, generate custom images based on text, transcribe audio or video, generate a screenshot and even connect to an external device via Bluetooth. It also does the sorts of things other AI chatbots do, like creating lists of recipes, writing code or planning itineraries for trips. Think of it as a more robust version of the pre-existing Bing AI chatbot.
How to enable Microsoft Copilot
Update your computer to the latest version of Windows 11
First of all, you need the latest Windows 11 update, so go ahead and download that first.
1. Head to Settings and look for the Windows Update option.
2. Follow the prompts and restart your computer if required.
You’re now ready to experience everything Copilot has to offer. If Microsoft just dropped an update, you may have to wait a bit before it reaches your region. You can also turn on the toggle in Windows Update to automatically install the latest updates as soon as they’re available.
Once your computer is updated, click the Copilot button
As for enabling the feature, click the Copilot button on the taskbar or press Win + C on the keyboard. That’s all there is to it.
How to disable Microsoft Copilot
Microsoft Copilot isn’t an always-on feature. Once it shows up in the taskbar, it only works when you ask it something. However, if you want to disable or delete the feature entirely, you have a couple of options.
The easiest method is to remove it from the taskbar. Out of sight, out of mind, right? Open up Settings and click on Personalization. Next, select the Taskbar page on the right side. Look for Taskbar Items and then click the Copilot toggle switch to remove it from the line-up. This ensures you won’t ever accidentally turn it on via the taskbar, but you can still call up the AI by pressing Win + C.
If you want to disable the toolset entirely, the process is a bit more involved. Start by opening a PowerShell window. Search for Windows PowerShell, right-click on the result and select the option to run as an administrator. Next, click Yes on the UAC prompt. This opens an elevated PowerShell window.
Paste the following into the window: reg add HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /t REG_DWORD /f /d 1
That should do it. After a restart, Copilot should no longer appear anywhere on your system.
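If you change your mind later, the same policy value can be removed to bring Copilot back. A quick sketch, assuming the value was created by the command above (restart again afterward): reg delete HKCU\Software\Policies\Microsoft\Windows\WindowsCopilot /v "TurnOffWindowsCopilot" /f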
What are the limitations of Copilot?
This is new technology, so the limitations are extensive. Like all modern LLMs, Microsoft’s Copilot can and will make up stuff out of thin air every once in a while, a phenomenon known as hallucination. It also doesn’t retain information from conversation to conversation, likely for security reasons. This means it restarts the conversation from a blank slate every time you close a window and open another one. It won’t remember anything about you, your preferences or even your favorite order from the coffee shop down the street. Finally, it doesn’t integrate with too many third-party sources of data, beyond the web, so you won’t be able to incorporate personal fitness data and the like.
What’s the difference between Github Copilot and Microsoft Copilot?
There is a primary difference between the two platforms, despite the similar names. GitHub Copilot is all about helping craft and edit code for developing software applications. Microsoft Copilot can whip up some rudimentary code, but that’s far from its specialty. If your primary use case for an AI assistant is code, go with GitHub. If you only dabble in coding, or have no interest at all, go with Microsoft.
WhatsApp just made logging in a much simpler and faster process, at least on Android devices. The Meta-owned chat application has launched passkey support for Android, which means users no longer have to use OTPs from two-factor authentication to be able to log into their account. Passkeys are a relatively new login technology designed to be resistant to phishing attacks, password leaks and other security vulnerabilities plaguing its older peers.
They’re made up of cryptographic pairs consisting of one public key and one private key that lives on the user’s device. Services that support passkeys don’t have access to that private key, and it also can’t be written down or given away. Without that private key, nobody else can log into somebody’s account. Now that WhatsApp has launched passkey support, users can log in using their device’s authentication procedure, so they can simply verify their identities with their face, fingerprint or their PINs.
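Under the hood, a passkey login is a challenge-response exchange: the service stores only the public key and checks a signature that the device produces with its private key after a local face, fingerprint or PIN check. Here’s a rough, illustrative sketch of that core idea in Python using the third-party cryptography package (this is not WhatsApp’s or Android’s actual implementation, just the underlying principle):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a key pair and shares only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())  # never leaves the device
public_key = private_key.public_key()                  # stored by the service

# Login: the service sends a random challenge...
challenge = os.urandom(32)

# ...the device signs it once the user passes the local biometric/PIN check...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the service verifies the signature with the stored public key.
# verify() raises InvalidSignature if the response was forged.
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("login verified")

Because the service never sees the private key, a leaked database or a phishing page has nothing reusable to steal, which is the property that sets passkeys apart from passwords and OTPs.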
While a lot of applications still don’t have passkey support, the list continues to grow. PayPal launched passkey logins for Android back in March, while TikTok rolled out support for the technology in July. More recently, 1Password rolled out passkeys to all its users on desktop and iOS after testing the login solution for three months.
Volta Trucks has declared bankruptcy in Sweden after four years in business. The EV manufacturer’s board announced the news in a statement that thanked its workers and pointed to its existing accomplishments and unattained potential. “We created the world’s first purpose-built 16-tonne all-electric truck, including a unique cab and chassis design, that would have contributed to decarbonising the environment and enhanced the health and safety and air quality of urban centres.” The company had piloted its delivery vehicle, Volta Zero, in five European countries and originally planned to expand to Los Angeles in mid-2023.
Volta Trucks blames its situation, in part, on that of another bankruptcy: Its battery supplier, Proterra, filed for bankruptcy protection in August following cost-trimming efforts. According to Volta Trucks, this turn of events reduced the number of vehicles it planned to produce and made raising the capital necessary to continue operations more challenging.
The EV industry has faced a great deal of layoffs and closures, especially from startups navigating the ever-evolving (and supply chain issues-plagued) field. Lordstown Motors declared bankruptcy in mid-2023 after five years in business, and Arrival has gone through multiple rounds of layoffs with all signs pointing toward bankruptcy — to name only two examples. As was the case with Lordstown Motors, Volta Trucks could seek a buyer for its existing technology.
Myspace is getting the documentary treatment, with a film currently in the works chronicling the rise and fall of arguably the first big social network. When it launched in 2003, you chose your top eight digital friends, and drama ensued. The platform went mainstream, becoming an important music promotional tool long before Bandcamp or even YouTube.
The movie will be a joint project between production companies Gunpowder & Sky and The Documentary Group. Gunpowder & Sky has produced documentaries like 69: The Saga of Danny Hernandez and Everybody’s Everything, about deceased rapper Lil Peep. The Documentary Group’s behind shows like Amend: The Fight for America and The Deep End, a series focusing on spiritual wellness guru Teal Swan.
Maybe, just maybe, we’ll even learn what Tom from Myspace’s last name is.
Web-swinging around New York City in Marvel’s Spider-Man might be the best game mechanic in recent times, but why not add wings? With the sequel, Insomniac did just that — and gave players two Spideys to control.
The team has also streamlined and expanded combat movesets and abilities. A lot of the gadgets from the first game return, but they’re easier than ever to access. Previously, if you wanted to use a gadget, you’d have to hold R1 and switch from your web-shooters to another option. Now, web shooters are always triggered by mashing R1, but you can hold R1 and hit one of the four face buttons to activate your slotted gadgets. It’s all further augmented by a compelling plot featuring the likes of Venom’s symbiote, the Lizard, Sandman, and more.
After a week with Meta and Ray-Ban’s latest $299 smart sunglasses, they still feel a little bit like a novelty. But Meta has improved the core features, with better audio and camera quality, as well as the ability to livestream directly from the frames. If you’re a creator or already spend a lot of time in Meta’s apps (Facebook, Instagram, even WhatsApp), there are plenty of reasons to give the second-generation shades a look. These Ray-Ban Meta smart glasses feel more like a finished product.
Analogue’s 3D aims to be the ultimate Nintendo 64 console tribute, playing original cartridges on modern 4K displays. All of Analogue’s machines use field-programmable gate arrays (FPGAs) coded to mimic the original hardware. Instead of playing often legally questionable ROM files, like most software emulators do, Analogue consoles play original media, without the downsides that software emulation often brings. The Analogue 3D is currently slated to ship in 2024, but there’s no price yet.
Snapchat has rolled out two new features, including the ability to embed content from the platform into a website. Users can now embed Lenses, Spotlight videos and public stories or profiles through their computer browser by clicking the embed button under share options. This will automatically copy the code — just as competitors like Instagram and TikTok have long allowed users to do.
After years of trying to broaden beyond a platform for sending pictures back and forth with friends, the option to embed is a logical next step for Snapchat. It builds on other features like articles and discovering local places of interest and, in 2022, Snapchat for Web.
Along with embeds, Snapchat has also launched an OpenAI-powered feature that lets users extend their snaps to include more of their possible surroundings. The tool is reminiscent of Photoshop’s Content-Aware Fill but, in this case, estimates what the entire border area looks like versus one targeted piece. Engadget has confirmed this feature is available for Snapchat+ subscribers.
The company has regularly been using AI tools as perks for its now five million-plus Snapchat+ subscribers. The company’s AI-powered Dreams feature — which lets users generate eight packs of “fantastical” images — is limited to one time only for regular users or one set per month for Snapchat+ subscribers. Anyone can buy extra packs for $0.99 each.
Snapchat was quick to hop on the AI boom, rolling out a chatbot called My AI using “OpenAI’s GPT technology that the authors have customized” back in February. Initially also available solely to Snapchat+ subscribers, My AI expanded to all global users two months later with everything from restaurant recommendations to photo responses (as has been the case for AI bots in 2023, not always appropriately).
Disney is turning Gargoyles, its animated cult classic from the ’90s, into a live-action TV series for its streaming service. It’s also teaming up with two of the most well-known names in horror films today to make it happen. According to The Hollywood Reporter, James Wan’s Atomic Monster production company and Gary Dauberman are in the early stages of developing a live-action Gargoyles for Disney+. You may know James Wan as the creator of The Conjuring franchise and as co-creator of the Saw and Insidious franchises, in addition to directing Aquaman. Dauberman, a frequent Wan collaborator who wrote the Annabelle movies, will serve as showrunner, writer and executive producer.
Gargoyles ran for three seasons from 1994 to 1997. It was more complex and darker in tone than your typical Disney cartoon and revolved around a clan of “gargoyles,” a species of nocturnal creatures that turn to stone during the day, along with police officer Elisa Maza. The clan used to live in a castle in Scotland before they were betrayed by humans and cursed to be frozen in stone. A thousand years later, the gargoyles wake up in New York City and choose to serve as its protectors at night.
Of course, whether a live-action Gargoyles is a good thing or a bad thing depends on how you’ve liked Disney’s remakes so far. We can only hope that Dauberman and Wan’s company do the show justice, especially since it will most likely use a lot of CGI to stay true to the source material. Disney has been getting a lot of flak over its use of CGI lately, which critics consider visually unappealing and subpar, including in movies like The Little Mermaid and Marvel’s Ant-Man and the Wasp: Quantumania.
Much like how Huawei developed its own HarmonyOS as an Android substitute, Xiaomi is about to pull a similar move to bolster its ecosystem — especially with its electric car due to arrive in the first half of next year. Dubbed “HyperOS,” this MIUI replacement will apparently be a blend of Android and Xiaomi’s very own “Vela” system, hence a “completely rewritten underlying architecture” that would supposedly allow users, vehicles and smart home devices — spanning over 200 product categories — to connect with one another seamlessly. It’s safe to assume that Xiaomi’s electric car will also feature HyperOS, thus going head to head with Huawei’s Aito line of EVs.
In a Weibo post, CEO Lei Jun said development on HyperOS dates back to 2017, with a mission to build “a unified, integrated system framework that supports the entire ecosystem of devices and applications.” The exec added that this new platform will debut on the upcoming Xiaomi 14 series smartphones, which have apparently entered production, though he stopped short of sharing a launch date (rumors say end of this month). Separately, when asked on X whether HyperOS will be heading to Xiaomi’s international line of products, Lei only responded with “stay tuned.” And so we shall.
Half of Bandcamp’s employees have lost their jobs following the company’s acquisition by Songtradr, according to SFGate and Variety. Songtradr spokesperson Lindsay Nahmiache has admitted to SFGate that only 58 of Bandcamp’s 118 employees received an offer during the transition. A remaining employee has confirmed Nahmiache’s statement to the publication, reporting that half of the company has disappeared from its Slack chatroom and that the account owned by co-founder and former CEO Ethan Diamond has been deactivated. Some former employees who didn’t receive offers have taken to social networks to reveal that they had been kept in the dark and were in limbo over the past couple of weeks.
Based on Songtradr’s statement to Variety, the move was financially motivated: “Over the past few years the operating costs of Bandcamp have significantly increased,” they said. “It required some adjustments to ensure a sustainable and healthy company that can serve its community of artists and fans. We are committed to keeping the existing Bandcamp services that fans and artists love, including its artist-first revenue share, Bandcamp Fridays and Bandcamp Daily. We are looking forward to welcoming Bandcamp into our musically aligned community. We share a deep passion for all things music and will continue to serve artists, labels and the fans who make it all possible.”
What the spokesperson said echoes an email written by Songtradr CEO Paul Wiltshire to the remaining Bandcamp employees. He said that Bandcamp has not been healthy financially, and that while its revenue has been consistent, its operating costs have “significantly increased making it impossible to continue running the business the way it has been.”
Songtradr purchased Bandcamp from Epic Games in September, merely a year and a half after the game developer’s surprise acquisition of the music company. Bandcamp employees had organized under Epic, and they’re now fighting for Songtradr to recognize their union. Members told SFGate that they will now negotiate severance packages with Epic, while nonmembers will receive six months of severance pay.
A lot has changed in the two years since Facebook released its Ray Ban-branded smart glasses. Facebook is now called Meta. And its smart glasses also have a new name: the Ray-Ban Meta smart glasses. Two years ago, I was unsure exactly how I felt about the product. The Ray-Ban Stories were the most polished smart glasses I’d tried, but with mediocre camera quality, they felt like more of a novelty than something most people could use.
After a week with the company’s latest $299 sunglasses, they still feel a little bit like a novelty. But Meta has managed to improve the core features, while making them more useful with new abilities like livestreaming and hands-free photo messaging. And the addition of an AI assistant opens up some intriguing possibilities. There are still privacy concerns, but the improvements might make the tradeoff feel more worth it, especially for creators and those already comfortable with Meta’s platform.
What’s changed
Just like its predecessor, the Ray-Ban Meta smart glasses look and feel much more like a pair of Ray-Bans than a gadget and that’s still a good thing. Meta has slimmed down both the frames and the charging case, which now looks like the classic tan leather Ray-Ban pouch. The glasses are still a bit bulkier than a typical pair of shades, but they don’t feel heavy, even with extended use.
This year’s model has ditched the power switch of the original, which is nice. The glasses now automatically turn on when you pull them out of the case and put them on (though you sometimes have to launch the Meta View app to get them to connect to your phone).
The glasses themselves now charge wirelessly through the nosepiece, rather than near the hinges. According to Meta, the device can go about four hours on one charge, and the case holds an additional four charges. In a week of moderate use, I haven’t had to top up the case, but I do wish there was a more precise indication of its battery level than the light at the front (the Meta View app will display the exact power level of your glasses, but not the case).
My other minor complaint is that the new charging setup makes it slightly more difficult to pull the glasses out of the case. It takes a little bit of force to yank the frames off the magnetic charging contacts and the vertical orientation of the case makes it easy to grab (and smudge) the lenses.
The latest generation of smart glasses comes in both the signature Wayfarer style, which starts at $299, as well as a new, rounder “Headliner” design, which sells for $329. I opted for a pair of Headliners in the blue “shiny jean” color, but there are tan and black variations as well. One thing to note about the new colors is that both the “shiny jeans” and “shiny caramel” options are slightly transparent, so you can see some of the circuitry and other tech embedded in the frames.
The lighter colors also make the camera and LED indicator on the top corner of each lens stand out a bit more than on their black counterparts. (Meta has also updated its software to prevent the camera from being used when the LED is covered.) None of this bothered me, but if you want a more subtle look, the black frames are better at disguising the tech inside.
New camera, better audio
Look closely at the transparent frames, though, and you can see evidence of the many upgrades. There are now five mics embedded in each pair, two in each arm and one in the nosepiece. The additional mics also enable some new “immersive” audio features for videos. If you record a clip with sound coming from multiple sources — like someone speaking in front of you and another person behind you — you can hear their voices coming from different directions when you play back the video through the frames. It’s a neat trick, but doesn’t feel especially useful.
The directional audio is, however, a sign of how dramatically the sound quality has improved. The open-ear speakers are 50 percent louder and, unlike the previous generation, don’t distort at higher volumes. Meta says the new design also has reduced the amount of sound leakage, but I found this really depends on the volume you’re listening at and your surrounding noise conditions.
There will always be some quality tradeoffs when it comes to open-ear speakers, but it’s still one of my favorite features of the glasses. The design makes for a much more balanced level of ambient noise than any kind of “transparency mode” I’ve experienced with earbuds or headphones. And it’s especially useful for things like jogging or hiking when you want to maintain some awareness of what’s around you.
Camera quality was one of the most disappointing features on the first-generation Ray-Ban Stories, so I was happy to see that Meta and Luxottica ditched the underpowered 5-megapixel cameras for a 12-megapixel ultra-wide.
The upgraded camera still isn’t as sharp as most phones, but it’s more than adequate for social media. Shots in broad daylight were clear and the colors were more balanced than snaps from the original Ray-Ban Stories, which tended to look over-processed. I was surprised that even photos I took indoors or at dusk — occasions when most people wouldn’t wear sunglasses — also looked decent. One note of caution about the ultra-wide lens, however: if you have long hair or bangs, it’s very easy for wisps of hair to end up in the edges of your frame if you’re not careful.
The camera also has a few new tricks of its own. In addition to 60-second videos, you can now livestream directly from the glasses to your Instagram or Facebook account. You can even use touch controls on the side of the glasses to hear a readout of likes and comments from your followers. As someone who had livestreamed to my personal Instagram account exactly once before this week, I couldn’t imagine myself using this feature.
But after trying it out, I found it was a lot cooler than I expected. Streaming a first-person view from your glasses is much easier than holding up your phone, and being able to seamlessly switch between the first-person view and the one from your phone’s camera is something I could see being incredibly useful to creators. I still don’t see many IG Lives in my future, but the smart glasses could enable some really creative use cases.
The other new camera feature I really appreciated was the ability to snap a photo and share it directly with a contact via WhatsApp or Messenger (but not Instagram DMs) using only voice commands. While this means you can’t review the photo before sending it, it’s a much faster and more convenient way to share photos on the go.
Meta AI
Two years ago, I really didn’t see the point of having voice commands on the Ray-Ban Stories. Saying “hey Facebook” felt too cringey to utter in public, and it just didn’t seem like there was much point to the feature. However, the addition of Meta’s AI assistant makes voice interactions a key feature rather than an afterthought.
The Meta Ray-Ban smart glasses are one of the first hardware products to ship with Meta’s new generative AI assistant built in. This means you can chat with the assistant about a range of topics. Answers to queries are broadcast through the internal speakers, and you can revisit your past questions and responses in the Meta View app.
To be clear: I still feel really weird saying “hey Meta” or “OK Meta,” and haven’t yet done so in public. But there is now, at least, a reason you may want to. For now, the assistant is unable to provide “real-time” information other than the current time or weather forecast. So it won’t be able to help with some practical queries, like those about sports standings or traffic conditions. The assistant’s “knowledge cutoff” is December 2022, and it will remind you of that for most questions related to current events. However, it hallucinated on a few of my questions, giving made-up (but nonetheless real-sounding) answers. Meta has said this kind of thing is an expected part of the development of large language models, but it’s important to keep in mind when using Meta AI.
Karissa Bell
Meta has suggested you should instead use it more for creative or general interest questions, like basic trivia or travel ideas. As with other generative AI tools, I found that the more creative and specific your questions, the better the answer. For example, “Hey Meta, what’s an interesting Instagram caption for a view of the Golden Gate Bridge” generated a pretty generic response that sounded more like an ad. But “hey Meta, write a fun and interesting caption for a photo of the Golden Gate Bridge that I can share on my cat’s Instagram account” was slightly better.
That said, I’ve been mostly underwhelmed by my interactions with Meta AI. The feature still feels like something of a novelty, though I appreciated the mostly neutral personality of Meta AI on the glasses compared to the company’s corny celebrity-infused chatbots.
And, skeptical as I am, Meta has given a few hints about intriguing future possibilities for the technology. Onstage at Connect, the company offered a preview of an upcoming feature that will allow wearers to ask questions based on what they’re seeing through their glasses. For example, you could look at a monument and ask Meta to identify what you’re looking at. This “multi-modal” search capability is coming sometime next year, according to the company, and I’m looking forward to revisiting Meta AI once the update rolls out.
Privacy
The addition of generative AI also raises new privacy concerns. First, even if you already have a Facebook or Instagram account, you’ll need a Meta account to use the glasses. While this also means they don’t require you to use Facebook or Instagram, not everyone will be thrilled at the idea of creating another Meta-linked account.
The Meta View app still has no ads and the company says it won’t use the contents of your photos or video for advertising. The app will store transcripts of your voice commands by default, though you can opt to remove transcripts and associated voice recordings from the app’s settings. If you do allow the app to store voice recordings, these can be surfaced to “trained reviewers” to “improve, troubleshoot and train Meta’s products.”
Karissa Bell
I asked the company if it plans to use Meta AI queries to inform its advertising and a spokesperson said that “at this time we do not use the generative AI models that power our conversational AIs, including those on smart glasses, to personalize ads.” So you can rest easy that your interactions with Meta AI won’t be fed into Meta’s ad-targeting machine, at least for now. But it’s not unreasonable to imagine that could one day change. Meta tends to keep new products ad-free in the beginning and introduce ads once they start to reach a critical mass of users. And other companies, like Snap, are already using generative AI to boost their ad businesses.
Are they worth it?
If any of that makes you uncomfortable, or you’re interested in using the shades with non-Meta apps, then you might want to steer clear of the Ray-Ban Meta smart glasses. Though your photos and videos can be exported to any app, most of the glasses’ key features work best when you’re playing in Meta’s ecosystem. For example, you can connect your WhatsApp and Messenger accounts to send hands-free photos or messages, but you can’t send texts via SMS or other apps (Meta AI will read out incoming texts, however). Likewise, the livestreaming abilities are limited to Instagram and Facebook, and won’t work with other platforms.
If you’re a creator or already spend a lot of time in Meta’s apps, though, there are plenty of reasons to give the second-generation shades a look. While the Ray-Ban Stories of two years ago were a fun, if overly expensive, novelty, the $299 Ray-Ban Meta smart glasses feel more like a finished product. The improved audio and photo quality better justify the price, and the addition of AI makes them feel like a product that’s likely to improve rather than a gadget that will start to become obsolete as soon as you buy it.