Your Clothes Are Making You Sick

https://www.wired.com/story/gadget-lab-podcast-603/

Have you ever put on a new shirt and discovered that it makes you feel itchy? Or taken off a new pair of pants at the end of the day to find that the fabric has given you a rash? This problem is increasingly common as more chemicals are added to our clothes when they’re dyed or treated with additives that make them resistant to stains, wrinkles, and odors. Some of these chemicals are irritants that can cause breathing problems or skin issues. Others are toxic enough to trigger life-altering autoimmune diseases. Since the fashion industry operates within loose regulations, the problem of toxic apparel isn’t going away anytime soon.

This week on Gadget Lab, we’re joined by journalist and author Alden Wicker. Her new book is called To Dye For: How Toxic Fashion Is Making Us Sick—and How We Can Fight Back. We discuss the wide range of chemicals, dyes, and treatments that go into our clothes and offer tips on how to avoid the worst offenders while shopping for a new wardrobe.

Show Notes

Alden’s book is To Dye For. It’s out this week from G.P. Putnam’s Sons; buy it wherever books are sold. She is the editor of the sustainable fashion publication EcoCult. Also read Alden’s reporting on the fashion industry for WIRED.

Recommendations

Alden recommends Vermont. Lauren recommends tzatziki sauce. Mike recommends The Creative Act: A Way of Being by Rick Rubin.

Alden Wicker can be found on Twitter @AldenWicker. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Ring the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

How to Listen

You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:

If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app by tapping here. We’re on Spotify too. And in case you really need it, here’s the RSS feed.

Transcript

Michael Calore: Lauren.

Lauren Goode: Mike.

Michael Calore: Lauren, I like this shirt that you’re wearing. Where did you get it?

via Wired Top Stories https://www.wired.com

June 29, 2023 at 03:13PM

AI Could Change How Blind People See the World

https://www.wired.com/story/ai-gpt4-could-change-how-blind-people-see-the-world/

For her 38th birthday, Chela Robles and her family made a trek to One House, her favorite bakery in Benicia, California, for a brisket sandwich and brownies. On the car ride home, she tapped a small touchscreen on her temple and asked for a description of the world outside. “A cloudy sky,” the response came back through her Google Glass.

Robles lost the ability to see in her left eye when she was 28, and in her right eye a year later. Blindness, she says, denies you small details that help people connect with one another, like facial cues and expressions. Her dad, for example, tells a lot of dry jokes, so she can’t always be sure when he’s being serious. “If a picture can tell 1,000 words, just imagine how many words an expression can tell,” she says.

Robles has tried services that connect her to sighted people for help in the past. But in April, she signed up for a trial with Ask Envision, an AI assistant that uses OpenAI’s GPT-4, a multimodal model that can take in images and text and output conversational responses. The system is one of several assistance products for visually impaired people to begin integrating language models, promising to give users far more visual details about the world around them—and much more independence.

Envision launched as a smartphone app for reading text in photos in 2018, and on Google Glass in early 2021. Earlier this year, the company began testing an open source conversational model that could answer basic questions. Then Envision incorporated OpenAI’s GPT-4 for image-to-text descriptions.

Be My Eyes, a 12-year-old app that helps users identify objects around them, adopted GPT-4 in March. Microsoft—which is a major investor in OpenAI—has begun integration testing of GPT-4 for its SeeingAI service, which offers similar functions, according to Microsoft responsible AI lead Sarah Bird.

In its earlier iteration, Envision read out text in an image from start to finish. Now it can summarize text in a photo and answer follow-up questions. That means Ask Envision can now read a menu and answer questions about things like prices, dietary restrictions, and dessert options.

Another Ask Envision early tester, Richard Beardsley, says he typically uses the service to do things like find contact information on a bill or read ingredients lists on boxes of food. Having a hands-free option through Google Glass means he can use it while holding his guide dog’s leash and a cane. “Before, you couldn’t jump to a specific part of the text,” he says. “Having this really makes life a lot easier because you can jump to exactly what you’re looking for.”

Integrating AI into seeing-eye products could have a profound impact on users, says Sina Bahram, a blind computer scientist and head of a consultancy that advises museums, theme parks, and tech companies like Google and Microsoft on accessibility and inclusion.

Bahram has been using Be My Eyes with GPT-4 and says the large language model makes an “orders of magnitude” difference over previous generations of tech because of its capabilities, and because products can be used effortlessly and don’t require technical skills. Two weeks ago, he says, he was walking down the street in New York City when his business partner stopped to take a closer look at something. Bahram used Be My Eyes with GPT-4 to learn that it was a collection of stickers, some cartoonish, plus some text, some graffiti. This level of information is “something that didn’t exist a year ago outside the lab,” he says. “It just wasn’t possible.”

via Wired Top Stories https://www.wired.com

July 5, 2023 at 06:09AM

Brilliantly Modded Game Boy Camera Is No Bigger Than a Cartridge

https://gizmodo.com/brilliantly-modded-game-boy-camera-is-no-bigger-than-a-1850595045

Twenty-five years after the Game Boy Camera debuted in 1998, fans are not only still enjoying lo-fi, black-and-white, pixelated photography, but they’re also re-engineering the camera to make it better, even going so far as to shrink it down to the size of a regular Game Boy cartridge.

The last time we had checked in with Christopher Graves, a Game Boy Camera fan who knows their way around a 3D printer and a circuit board, they were showing off a custom creation called the Game Boy Camera M. It turned a sacrificed Game Boy Pocket into a mirrorless shooter with swappable lenses, a custom shell with a leatherette wrapped finish, a rechargeable battery, and a repositioned action button repurposed as a shutter trigger.

At the time, the Game Boy Camera M seemed like the perfect tool for those who prefer shooting images at just 0.014 megapixels (128×112 pixels), but after seeing Graves’ latest creation, we’re not so sure anymore.
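For reference, converting the Game Boy Camera’s 128×112 frame into megapixels is a one-liner (the frame size comes from the article; the rest is plain unit conversion):

```python
# The Game Boy Camera's sensor resolution, expressed in megapixels.
width, height = 128, 112          # Game Boy Camera frame size in pixels
pixels = width * height           # 14,336 pixels in total
megapixels = pixels / 1_000_000   # 1 megapixel = one million pixels
print(f"{pixels} pixels = {megapixels:.6f} MP")  # 14336 pixels = 0.014336 MP
```

For comparison, even a first-generation iPhone shot at 2 megapixels, more than a hundred times as many pixels per frame.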

Those who spent any amount of time shooting with Nintendo’s Game Boy Camera will remember that it was a chunky accessory with a bulbous lens sticking out over the top of the Game Boy it was attached to. There’s no doubt Nintendo did everything it could to shrink the size of the original Game Boy Camera, but the company was limited by the technology in 1998, when all digital cameras were fairly bulky. By leveraging a quarter century of technological miniaturization, Graves was able to create a much sleeker alternative they’re calling the Game Boy Mini Camera.

Starting with schematics for a custom reflashable Game Boy Camera PCB made by Martin Refseth, Graves created a custom board that carried over the original Game Boy Camera’s sensor, memory map controller, and “a few capacitors that were just easier to harvest,” which all fits inside a custom shell the same size as a standard Game Boy game cartridge. The Game Boy Mini Camera can actually run two different ROMs, and is upgraded with flash memory so that there’s no risk of stored photos disappearing when a backup battery dies.

As for the Game Boy Mini Camera’s lens? If it looks oddly familiar to you, slightly sticking out of the top corner of the cartridge, that’s because it’s actually a lens from an iPhone XR. The lens is screwed into a custom threaded sleeve, allowing its position to be adjusted to change the camera’s focus. Graves is also testing other iPhone lenses, including one from the iPhone 14 that will stick out even further, and potentially one from an iPhone 5S, which would eliminate that small camera bump altogether.

Despite borrowing some optical hardware from the iPhone, the images captured by the Game Boy Mini Camera are still unmistakably low resolution and lacking in detail and color, which is exactly the aesthetic Game Boy Camera photographers are after. But with the redesign, the Game Boy Mini Camera can actually be slipped into a pocket when used with a handheld like the Game Boy Pocket.

Don’t whip out your credit card and start screaming at Graves to shut up and take your money just yet. As with their Game Boy Camera M, the Game Boy Mini Camera is described as another selfish project they made only for themselves, but they aren’t completely ruling out one day designing their own PCB schematics from scratch and making them available to other hobbyists who want to tackle a DIY project like this on their own.

via Gizmodo https://gizmodo.com

June 30, 2023 at 09:30AM

Google Says It’ll Scrape Everything You Post Online for AI

https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486

Google updated its privacy policy over the weekend, explicitly saying the company reserves the right to scrape just about everything you post online to build its AI tools. If Google can read your words, assume they belong to the company now, and expect that they’re nesting somewhere in the bowels of a chatbot.

“Google uses information to improve our services and to develop new products, features and technologies that benefit our users and the public,” the new Google policy says. “For example, we use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”

Fortunately for history fans, Google maintains a history of changes to its terms of service. The new language amends an existing policy, spelling out new ways your online musings might be used in the tech giant’s AI tools.

Previously, Google said the data would be used “for language models,” rather than “AI models,” and where the older policy mentioned only Google Translate, Bard and Cloud AI now make an appearance as well.

This is an unusual clause for a privacy policy. Typically, these policies describe ways that a business uses the information that you post on the company’s own services. Here, it seems Google reserves the right to harvest and harness data posted on any part of the public web, as if the whole internet is the company’s own AI playground. Google did not immediately respond to a request for comment.

The practice raises new and interesting privacy questions. People generally understand that public posts are public. But today, you need a new mental model of what it means to write something online. It’s no longer a question of who can see the information, but how it could be used. There’s a good chance that Bard and ChatGPT ingested your long forgotten blog posts or 15-year-old restaurant reviews. As you read this, the chatbots could be regurgitating some homunculoid version of your words in ways that are impossible to predict and difficult to understand.

One of the less obvious complications of the post ChatGPT world is the question of where data-hungry chatbots sourced their information. Companies including Google and OpenAI scraped vast portions of the internet to fuel their robot habits. It’s not at all clear that this is legal, and the next few years will see the courts wrestle with copyright questions that would have seemed like science fiction a few years ago. In the meantime, the phenomenon already affects consumers in some unexpected ways.

The overlords at Twitter and Reddit feel particularly aggrieved about the AI issue, and made controversial changes to lock down their platforms. Both companies turned off free access to their APIs, which had allowed anyone who pleased to download large quantities of posts. Ostensibly, that’s meant to protect the social media sites from other companies harvesting their intellectual property, but it’s had other consequences.

Twitter and Reddit’s API changes broke third-party tools that many people used to access those sites. For a minute, it even seemed Twitter was going to force public entities such as weather, transit, and emergency services to pay if they wanted to tweet, a move that the company walked back after a hailstorm of criticism.

Lately, web scraping is Elon Musk’s favorite boogieman. Musk blamed a number of recent Twitter disasters on the company’s need to stop others from pulling data off his site, even when the issues seem unrelated. Over the weekend, Twitter limited the number of tweets users were allowed to look at per day, rendering the service almost unusable. Musk said it was a necessary response to “data scraping” and “system manipulation.” However, most IT experts agreed the rate limiting was more likely a crisis response to technical problems born of mismanagement, incompetence, or both. Twitter did not answer Gizmodo’s questions on the subject.

On Reddit, the effect of API changes was particularly noisy. Reddit is essentially run by unpaid moderators who keep the forums healthy. Mods of large subreddits tend to rely on third-party tools for their work, tools that are built on now inaccessible APIs. That sparked a mass protest, where moderators essentially shut Reddit down. Though the controversy is still playing out, it’s likely to have permanent consequences as spurned moderators hang up their hats.

via Gizmodo https://gizmodo.com

July 3, 2023 at 12:12PM

Astronomers Baffled by Planet That Shouldn’t Exist

https://gizmodo.com/astronomers-baffled-by-planet-that-shouldn-t-exist-1850590452

The search for planets outside our Solar System – exoplanets – is one of the most rapidly growing fields in astronomy. Over the past few decades, more than 5,000 exoplanets have been detected and astronomers now estimate that on average there is at least one planet per star in our galaxy.

Many current research efforts aim at detecting Earth-like planets suitable for life. These endeavours focus on so-called “main sequence” stars like our Sun – stars which are powered by fusing hydrogen atoms into helium in their cores, and remain stable for billions of years. More than 90% of all known exoplanets so far have been detected around main-sequence stars.

As part of an international team of astronomers, we studied a star that looks much like our Sun will in billions of years’ time, and found it has a planet which by all rights it should have devoured. In research published today in Nature, we lay out the puzzle of this planet’s existence – and propose some possible solutions.

A glimpse into our future: red giant stars

Just like humans, stars undergo changes as they age. Once a star has used up all its hydrogen in the core, the core of the star shrinks and the outer envelope expands as the star cools.

In this “red giant” phase of evolution, stars can grow to more than 100 times their original size. When this happens to our Sun, in about 5 billion years, we expect it will grow so large it will engulf Mercury, Venus, and possibly Earth.
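To see why the inner planets are at risk, compare their orbital radii with a star swollen to 100 times the Sun’s size. A rough sketch, using approximate mean orbital distances (the values here are illustrative, not from the article):

```python
# Rough comparison of inner-planet orbital radii to a red giant's size.
# Orbital distances are approximate mean values; 1 AU is about 215 solar radii.
AU_IN_SOLAR_RADII = 215

orbits_au = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00}
red_giant_radius = 100  # in solar radii; red giants can grow well past this

for planet, a in orbits_au.items():
    r = a * AU_IN_SOLAR_RADII
    fate = "engulfed" if r < red_giant_radius else "outside (for now)"
    print(f"{planet}: orbit at ~{r:.0f} solar radii -> {fate}")
```

Mercury’s orbit sits inside even a 100-solar-radius giant; Venus and Earth lie farther out, which is why their fate depends on exactly how far the Sun ends up expanding.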

Eventually, the core becomes hot enough for the star to begin fusing helium. At this stage the star shrinks back to about 10 times its original size, and continues stable burning for tens of millions of years.

We know of hundreds of planets orbiting red giant stars. One of these is called 8 Ursae Minoris b, a planet roughly the mass of Jupiter in an orbit that keeps it only about half as far from its star as Earth is from the Sun.

The planet was discovered in 2015 by a team of Korean astronomers using the “Doppler wobble” technique, which measures the gravitational pull of the planet on the star. In 2019, the International Astronomical Union dubbed the star Baekdu and the planet Halla, after the tallest mountains on the Korean peninsula.
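A back-of-the-envelope sketch of what the “Doppler wobble” technique measures: for a circular, edge-on orbit, the star’s reflex velocity about the system’s center of mass. The stellar and planetary masses below are illustrative stand-ins, not Baekdu and Halla’s published parameters:

```python
import math

# Doppler-wobble sketch: the radial-velocity amplitude a planet induces on
# its star, for a circular, edge-on orbit. Values are illustrative only.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass in kg
M_JUP = 1.898e27         # Jupiter mass in kg
AU = 1.496e11            # astronomical unit in m

m_star = 1.5 * M_SUN     # assumed stellar mass
m_planet = 1.0 * M_JUP   # assumed planet mass
a = 0.5 * AU             # orbital radius, about half Earth's distance

# Planet's circular orbital speed, then the star's reflex speed about the
# common center of mass: v_star = (m_planet / m_total) * v_planet.
v_planet = math.sqrt(G * (m_star + m_planet) / a)
v_star = v_planet * m_planet / (m_star + m_planet)
print(f"Stellar wobble amplitude: {v_star:.1f} m/s")  # tens of m/s
```

Amplitudes of tens of meters per second are comfortably within reach of modern spectrographs, which is how close-in giant planets like this one are found.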

A planet that should not be there

Analysis of new data about Baekdu collected by NASA’s Transiting Exoplanet Survey Satellite (TESS) space telescope has yielded a surprising discovery. Unlike other red giants we have found hosting exoplanets on close-in orbits, Baekdu has already started fusing helium in its core.

Using the techniques of asteroseismology, which studies waves inside stars, we can determine what material a star is burning. For Baekdu, the frequencies of the waves unambiguously showed it has commenced burning helium in its core.

Sound waves inside a star can be used to determine whether it is burning helium. 
Image: Gabriel Perez Diaz / Instituto de Astrofisica de Canarias

The discovery was puzzling: if Baekdu is burning helium, it should have been much bigger in the past – so big it should have engulfed the planet Halla. How is it possible Halla survived?

As is often the case in scientific research, the first course of action was to rule out the most trivial explanation: that Halla never really existed.

Indeed, some apparent discoveries of planets orbiting red giants using the Doppler wobble technique have later been shown to be illusions created by long-term variations in the behaviour of the star itself.

However, follow-up observations ruled out such a false-positive scenario for Halla. The Doppler signal from Baekdu has remained stable over the last 13 years, and close study of other indicators showed no other possible explanation for the signal. Halla is real – which returns us to the question of how it survived engulfment.

Two stars become one: a possible survival scenario

Having confirmed the existence of the planet, we arrived at two scenarios which could explain the situation we see with Baekdu and Halla.

At least half of all stars in our galaxy did not form in isolation like our Sun, but are part of binary systems. If Baekdu once was a binary star, Halla may have never faced the danger of engulfment.

If the star Baekdu used to be a binary, there are two scenarios which can explain the survival of the planet Halla.
Graphic: Brooks G. Bays, Jr, SOEST/University of Hawai’i

A merger of these two stars may have prevented the expansion of either star to a size large enough to engulf planet Halla. If one star became a red giant on its own, it would have engulfed Halla – however, if it merged with a companion star it would jump straight to the helium-burning phase without getting big enough to reach the planet.

Alternatively, Halla may be a relatively newborn planet. The violent collision between the two stars may have produced a cloud of gas and dust from which the planet could have formed. In other words, the planet Halla may be a recently born “second generation” planet.

Whichever explanation is correct, the discovery of a close-in planet orbiting a helium-burning red giant star demonstrates that nature finds ways for exoplanets to appear in places where we might least expect them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

via Gizmodo https://gizmodo.com

July 4, 2023 at 10:06AM

AI pioneer Geoffrey Hinton isn’t convinced good AI will triumph over bad AI

https://www.engadget.com/ai-pioneer-geoffrey-hinton-isnt-convinced-good-ai-will-triumph-over-bad-ai-181536702.html

University of Toronto professor Geoffrey Hinton, often called the “Godfather of AI” for his pioneering research on neural networks, recently became the industry’s unofficial watchdog. He quit working at Google this spring to more freely critique the field he helped pioneer. He saw the recent surge in generative AIs like ChatGPT and Bing Chat as signs of unchecked and potentially dangerous acceleration in development. Google, meanwhile, was seemingly giving up its previous restraint as it chased competitors with products like its Bard chatbot.

At this week’s Collision conference in Toronto, Hinton expanded his concerns. While companies were touting AI as the solution to everything from clinching a lease to shipping goods, Hinton was sounding the alarm. He isn’t convinced good AI will emerge victorious over the bad variety, and he believes ethical adoption of AI may come at a steep cost.

A threat to humanity

Geoffrey Hinton at Collision 2023
University of Toronto professor Geoffrey Hinton (left) speaking at Collision 2023.
Photo by Jon Fingas/Engadget

Hinton contended that AI was only as good as the people who made it, and that bad tech could still win out. “I’m not convinced that a good AI that is trying to stop bad AI can get control,” he explained. It might be difficult to stop the military-industrial complex from producing battle robots, for instance, he says — companies and armies might “love” wars where the casualties are machines that can easily be replaced. And while Hinton believes that large language models (trained AI that produces human-like text, like OpenAI’s GPT-4) could lead to huge increases in productivity, he is concerned that the ruling class might simply exploit this to enrich themselves, widening an already large wealth gap. It would “make the rich richer and the poor poorer,” Hinton said.

Hinton also reiterated his much-publicized view that AI could pose an existential risk to humanity. If artificial intelligence becomes smarter than humans, there is no guarantee that people will remain in charge. “We’re in trouble” if AI decides that taking control is necessary to achieve its goals, Hinton said. To him, the threats are “not just science fiction”; they have to be taken seriously. He worries that society would only rein in killer robots after it had a chance to see “just how awful” they were.

There are plenty of existing problems, Hinton added. He argues that bias and discrimination remain issues, as skewed AI training data can produce unfair results. Algorithms likewise create echo chambers that reinforce misinformation and mental health issues. Hinton also worries about AI spreading misinformation beyond those chambers. He isn’t sure if it’s possible to catch every bogus claim, even though it’s “important to mark everything fake as fake.”

This isn’t to say that Hinton despairs over AI’s impact, although he warns that healthy uses of the technology might come at a high price. Humans might have to conduct “empirical work” into understanding how AI could go wrong, and to prevent it from wresting control. It’s already “doable” to correct biases, he added. A large language model AI might put an end to echo chambers, but Hinton sees changes in company policies as being particularly important.

The professor didn’t mince words in his answer to questions about people losing their jobs through automation. He feels that “socialism” is needed to address inequality, and that people could hedge against joblessness by taking up careers that could change with the times, like plumbing (and no, he isn’t kidding). Effectively, society might have to make broad changes to adapt to AI.

The industry remains optimistic

Google DeepMind's Colin Murdoch at Collision 2023
Google DeepMind CBO Colin Murdoch at Collision 2023.
Photo by Jon Fingas/Engadget

Earlier talks at Collision were more hopeful. Google DeepMind business chief Colin Murdoch said in a different discussion that AI was solving some of the world’s toughest challenges. There’s not much dispute on this front — DeepMind is cataloging every known protein, fighting antibiotic-resistant bacteria and even accelerating work on malaria vaccines. He envisioned “artificial general intelligence” that could solve multiple problems, and pointed to Google’s products as an example. Lookout is useful for describing photos, but the underlying tech also makes YouTube Shorts searchable. Murdoch went so far as to call the past six to 12 months a “lightbulb moment” for AI that unlocked its potential.

Roblox Chief Scientist Morgan McGuire largely agrees. He believes the game platform’s generative AI tools “closed the gap” between new creators and veterans, making it easier to write code and create in-game materials. Roblox is even releasing an open source AI model, StarCoder, that it hopes will aid others by making large language models more accessible. While McGuire in a discussion acknowledged challenges in scaling and moderating content, he believes the metaverse holds “unlimited” possibilities thanks to its creative pool.

Both Murdoch and McGuire expressed some of the same concerns as Hinton, but their tone was decidedly less alarmist. Murdoch stressed that DeepMind wanted “safe, ethical and inclusive” AI, and pointed to expert consultations and educational investments as evidence. The executive insists he is open to regulation, but only as long as it allows “amazing breakthroughs.” In turn, McGuire said Roblox always launched generative AI tools with content moderation, relied on diverse data sets and practiced transparency.

Some hope for the future

Roblox's Morgan McGuire at Collision 2023
Roblox Chief Scientist Morgan McGuire talks at Collision 2023.
Photo by Jon Fingas/Engadget

Despite the headlines summarizing his recent comments, Hinton’s overall enthusiasm for AI hasn’t been dampened after leaving Google. If he hadn’t quit, he was certain he would be working on multi-modal AI models where vision, language and other cues help inform decisions. “Small children don’t just learn from language alone,” he said, suggesting that machines could do the same. As worried as he is about the dangers of AI, he believes it could ultimately do anything a human could and was already demonstrating “little bits of reasoning.” GPT-4 can adapt itself to solve more difficult puzzles, for instance.

Hinton acknowledges that his Collision talk didn’t say much about the good uses of AI, such as fighting climate change. The advancement of AI technology was likely healthy, even if it was still important to worry about the implications. And Hinton freely admitted that his enthusiasm hasn’t dampened despite looming ethical and moral problems. “I love this stuff,” he said. “How can you not love making intelligent things?”

This article originally appeared on Engadget at https://ift.tt/g7tdj8B

via Engadget http://www.engadget.com

June 30, 2023 at 01:22PM

World’s Most Outrageously Long R/C Airplane Takes Flight [Video]

https://www.geeksaresexy.net/2023/07/04/worlds-most-outrageously-long-r-c-airplane-takes-flight-video/

Commercial airplanes have been getting longer to fit more passengers. However, there’s a limit to how long an airplane can be and still be safe to fly. To find out when length becomes a problem, YouTuber Peter Sripol did an experiment: he took a remote-controlled jet plane model and made its fuselage ridiculously long.

via [Geeks Are Sexy] Technology News https://ift.tt/FpTg95w

July 4, 2023 at 12:39PM