India’s Tech Obsession May Leave Millions of Workers Without Pay

https://www.wired.com/story/india-tech-obsession-millions-workers-without-pay/


Vaishali Kanal’s wages don’t depend on how much she works. They depend on whether her village has an internet connection. Kanal, 25, usually leaves her toddler at home in Palatpada, a remote village in western India, early in the morning, and goes to work on a nearby building site. But when we met on a scorching May afternoon, she was cradling her daughter in her arms. “If she is awake or crying, I take her with me,” she says. “It is tough to do backbreaking labor and take care of the toddler at the same time.”

Often she puts in a whole day of grueling labor but doesn’t get paid for it, due to a glitch in a government system that was supposed to help some of the country’s most marginalized people—like Kanal, a tribal farmer from Maharashtra’s Palghar district. Kanal is a worker under the Indian government’s Mahatma Gandhi National Rural Employment Guarantee Act, 2005, or MGNREGA, which gives rural workers a guaranteed income for working on public infrastructure projects, such as roads, wells, and dams. The aim of the scheme was to give people in rural areas employment opportunities close to home, so they wouldn’t have to move to cities to find work. With 266.3 million registered workers and 144.3 million active ones, it is probably the largest employment scheme in the world.

Until last year, workers’ attendance at their jobs was often marked on a physical muster roll by a village employment guarantee assistant or worksite supervisor. However, in January this year, the national government made it mandatory to log the attendance of workers on an app, the National Mobile Monitoring System (NMMS). The official on site now has to upload pictures of workers to the system to prove their attendance. But the app doesn’t work in remote areas with weak or infrequent internet connections. Critics of the policy say the lack of connectivity was an easily foreseeable problem, and that workers from marginalized groups are—not for the first time—being left behind in the government’s obsession with rolling out glossy but poorly thought-through technologies.

“The government’s focus is not on workers, but on technology, regardless of whether it helps workers,” says Brian Lobo, an activist working in Palghar.

The Ministry of Rural Development, which oversees the MGNREGA scheme, did not respond to a request for comment.

Workers using the NMMS have to have their photos taken twice—once when they start work, and once when the day is over. “If the internet is sluggish, the photos don’t get uploaded,” said Jagadish Bhujade, a guarantee assistant in the block of Vikramgad where Kanal’s village is located. “In our block, it is always a problem.”

Kanal says that there have been days she’s rushed to work, only to find that the internet is down and there’s no way to log in. “That means I have to walk back all the way to my home,” she says. “Some of the worksites are quite far, and I can’t afford to take the bus each time.”

via Wired Top Stories https://www.wired.com

June 6, 2023 at 01:08AM

Rich Nations Owe $192 Trillion for Causing Climate Change, New Analysis Finds

https://subscriber.politicopro.com/article/eenews/2023/06/06/rich-nations-owe-trillions-for-causing-climate-change-scientists-say-00100288


CLIMATEWIRE | A major question has emerged as the world strives to reduce greenhouse gases: How much money should rich nations pay to poor ones for raising Earth’s temperature?

Scientists have found an answer.

High carbon countries owe at least $192 trillion to low-emitting nations in compensation for their greenhouse gas pollution.

That’s the conclusion of a new paper published Monday in the journal Nature Sustainability by researchers Andrew Fanning and Jason Hickel.

“It is a matter of climate justice that if we are asking nations to rapidly decarbonise their economies, even though they hold no responsibility for the excess emissions that are destabilizing the climate, then they should be compensated for this unfair burden,” said Fanning in a statement.

The concept of climate reparations is a topic of global discussion. Low-income and developing nations have long argued that wealthy, high-emitting countries should help them with the costs of decarbonizing. More recently, the international community has begun to acknowledge that high-emitting nations should help other countries grapple with the damages they’ve suffered as a result of climate change, including the impacts of extreme weather events, rising seas and other climate consequences.

World leaders agreed last year at the U.N. climate talks in Egypt to establish a fund that would pay vulnerable countries for “loss and damage” associated with climate change. But the details of how the fund will operate — including which states are eligible for compensation, what kinds of damage the fund will cover and how the money will be disbursed — are still undecided.

A special committee tasked with hashing out these details is expected to present a proposal at the climate talks in the United Arab Emirates starting in November.

Meanwhile, activists, scientists and policy experts around the world are considering ways that climate aid — sometimes called reparations — could potentially be structured. The new paper presents one potential framework for climate compensation.

Nations participating in the Paris climate agreement are currently striving to keep global temperatures within 2 degrees Celsius of their preindustrial levels, and below 1.5 C if at all possible. So the researchers began by examining the carbon budget for both climate goals — that’s the amount of carbon the world can release without overshooting the temperature target.

Then they divided the carbon budgets into fair shares for every country. Each nation gets a slice of the budget according to its size and population.

Next, they examined each country’s cumulative emissions since the year 1960. The world had been emitting large quantities of greenhouse gases for decades beforehand — but by 1960, they said, researchers clearly understood the science of global warming and were beginning to communicate it to the public, as well.

Based on these historical emissions, the researchers then determined which countries have already used up their fair shares of the carbon budget. They also looked at how much more carbon each country is likely to emit between now and 2050, even if the world begins reducing emissions fast enough to meet the 1.5 C target.

The researchers divided the world into two groups. They lumped 39 high-emitting countries together, including the United States, Canada, the countries of Europe, Australia, New Zealand, Japan and Israel, in a group they refer to as the Global North. All of the other countries in the study, including the rest of Asia, the Americas and Africa, fell into the second group, which the researchers referred to as the Global South.

They found that all the countries in the Global North group had already exceeded their fair shares of the carbon budgets. The group had collectively blown through its 1.5 C budget back in 1986, and its 2 C budget was gone by 1995.

Even if nations worldwide manage to collectively reduce their net emissions to zero by 2050 and meet the 1.5 C temperature target, Global North countries would still overshoot their share of the budget by three times — and they’d use up half the Global South group’s budget in the process.

Fifty-five countries around the world would have at least 75 percent of their carbon budget used up by high emitters in this net-zero scenario. And 10 countries — all in sub-Saharan Africa — would sacrifice at least 95 percent of their carbon budgets.

The researchers then calculated the amount of money the overshooters would owe in compensation. They based their estimates on carbon prices, or the costs associated with excess emissions, established by the U.N.’s Intergovernmental Panel on Climate Change.
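The arithmetic behind the methodology can be sketched in a few lines. The numbers below are purely illustrative placeholders, not the study’s actual budgets, emissions data, or carbon prices:

```python
# Rough sketch of the compensation arithmetic described above, using
# made-up illustrative numbers (not the study's actual data or prices).

def fair_share(global_budget_gt, population_fraction):
    """A country's slice of the global carbon budget (GtCO2),
    allocated in proportion to its share of world population."""
    return global_budget_gt * population_fraction

def overshoot(cumulative_emissions_gt, share_gt):
    """Emissions beyond the fair share; zero if under budget."""
    return max(0.0, cumulative_emissions_gt - share_gt)

# Hypothetical country: 4% of world population, a 1.5 C global budget
# of 1,800 GtCO2 counted from 1960, and 400 GtCO2 already emitted.
share = fair_share(1800, 0.04)   # 72 GtCO2
excess = overshoot(400, share)   # 328 GtCO2 over budget

# Compensation owed = excess emissions x an assumed carbon price.
carbon_price_per_t = 100         # $/tCO2, illustrative only
owed = excess * 1e9 * carbon_price_per_t

print(f"fair share: {share:.0f} GtCO2, excess: {excess:.0f} GtCO2")
print(f"owed: ${owed / 1e12:.2f} trillion")
```

With these placeholder inputs, the hypothetical country owes $32.80 trillion; the paper applies the same logic country by country, using carbon prices drawn from the IPCC.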

They found that the overshooters would owe a total of $192 trillion to the rest of the world. The United States, the European Union and the United Kingdom alone would be responsible for about two-thirds of that total. And the United States would owe the single greatest debt of any country on the planet.

Meanwhile, India and the countries of sub-Saharan Africa would be owed around half the total compensation value.

The researchers noted that these figures only include compensation for “atmospheric appropriation.” They don’t include payments that rich countries may owe poorer countries for the costs associated with decarbonizing or adapting to climate change — those would be extra.

The researchers also noted that the study does not account for inequalities within high-emitting nations themselves, where the wealthiest people account for much greater shares of the carbon footprint.

“Responsibility for excess emissions is largely held by the wealthy classes who have very high consumption and who wield disproportionate power over production and national policy,” Hickel said in a statement. “They are the ones who must bear the costs of compensation.”

Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2023. E&E News provides essential news for energy and environment professionals.

via Scientific American https://ift.tt/KXhLZHU

June 6, 2023 at 11:07AM

Apple Announces Very Fancy ‘Facial Computer,’ Starts At $3,499

https://kotaku.com/apple-vr-headset-vision-pro-mixed-reality-mac-iphone-1850507156


All the rumors were true. Apple has a fancy headset it wants to sell you. The tech giant revealed its new mixed reality / virtual reality headset during its June 5 WWDC digital event, confirming the details of previous leaks.

During today’s Worldwide Developers Conference—Apple’s annual event where it talks about its future plans and updates—the iPhone maker announced its new headset: The Vision Pro. The new headset features impressive specs, but you better be ready to pay a lot for this advanced piece of hardware.

The Vision Pro is controlled using your hands, eyes, and voice. Apps and videos will appear to exist in the real world using the headset’s advanced augmented reality tech, which lets it overlay computer visuals over a real-time camera feed. Apple also showed how the headset can immerse you in fully digital environments like a typical VR headset, letting you watch Ted Lasso in the middle of space.

An interesting feature called “EyeSight” shows your eyes to other people when they get close, via a display on the front of the unit. But if you are in the middle of a game or an immersive app, the front display makes that clear to other people, letting them know you are busy. However, at any point, Apple says you can see through apps to see other people in the room, helping to make you feel less isolated when using the new headset.

Developing…

via Kotaku https://kotaku.com

June 5, 2023 at 01:44PM

‘The Risk of Extinction:’ AI Leaders Agree on One-Sentence Warning About Technology’s Future

https://gizmodo.com/ai-chatgpt-extinction-warning-letter-openai-sam-altman-1850486688


Over 350 AI executives, researchers, and industry leaders signed a one-sentence warning released Tuesday, saying that we should try to stop their technology from destroying the world.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement, released by the Center for AI Safety. The signatories include Sam Altman, the CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Geoffrey Hinton, the so-called “Godfather of AI,” who recently quit Google over fears about his life’s work.

As the public conversation about AI shifted from awestruck to dystopian over the last year, a growing number of advocates, lawmakers, and even AI executives united around a single message: AI could destroy the world and we should do something about it. What that something should be, specifically, is entirely unsettled, and there’s little consensus about the nature or likelihood of these existential risks.

There’s no question that AI is poised to flood the world with misinformation, and a large number of jobs will likely be automated into oblivion. The question is just how far these problems will go, and when or if they will dismantle the order of our society.

Usually, tech executives tell you not to worry about the threats of their work, but the AI business is taking the opposite tack. OpenAI’s Sam Altman testified before the Senate Judiciary Committee this month, calling on Congress to establish an AI regulatory agency. The company published a blog post arguing that companies should need a license if they want to work on AI “super intelligence.” Altman and the heads of Anthropic and Google DeepMind recently met with President Biden at the White House for a chat about AI regulation.

Things break down when it comes to specifics though, which explains the length of Tuesday’s statement. Dan Hendrycks, executive director of the Center for AI Safety, told the New York Times they kept it short because experts don’t agree on the details about the risks, or what, exactly, should be done to address them. “We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”

It may seem strange that AI companies would call on the government to regulate them, which would ostensibly get in their way. It’s possible that unlike the leaders of other tech businesses, AI executives really care about society. There are plenty of reasons to think this is all a bit more cynical than it seems, however. In many respects, light-touch rules would be good for business. This isn’t new: some of the biggest advocates for a national privacy law, for example, include Google, Meta, and Microsoft.

For one, regulation gives businesses an excuse when critics start making a fuss. That’s something we see in the oil and gas industry, where companies essentially throw up their hands and say “Well, we’re complying with the law. What more do you want?” Suddenly the problem is incompetent regulators, not the poor corporations.

Regulation also makes it far more expensive to operate, which can be a benefit to established companies when it hampers smaller upstarts that could otherwise be competitive. That’s especially relevant in the AI businesses, where it’s still anybody’s game and smaller developers could pose a threat to the big boys. With the right kind of regulation, companies like OpenAI and Google could essentially pull up the ladder behind them. On top of all that, weak nationwide laws get in the way of pesky state lawmakers, who often push harder on the tech business.

And let’s not forget that the regulation that AI businessmen are calling for is about hypothetical problems that might happen later, not real problems that are happening now. Tools like ChatGPT make up lies, they have baked-in racism, and they’re already helping companies eliminate jobs. In OpenAI’s calls to regulate super intelligence — a technology that does not exist — the company makes a single, hand-waving reference to the actual issues we’re already facing: “We must mitigate the risks of today’s AI technology too.”

So far though, OpenAI doesn’t actually seem to like it when people try to mitigate those risks. The European Union took steps to do something about these problems, proposing special rules for AI systems in “high-risk” areas like elections and healthcare, and Altman threatened to pull his company out of EU operations altogether. He later walked the statement back and said OpenAI has no plans to leave Europe, at least not yet.

Want to know more about AI, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to The Best Free AI Art Generators and Everything We Know About OpenAI’s ChatGPT.

via Gizmodo https://gizmodo.com

May 30, 2023 at 10:49AM

We’ve Got Power: Dream Chaser Spaceplane Aces Critical Test as Epic Maiden Mission Draws Near

https://gizmodo.com/we-ve-got-power-dream-chaser-spaceplane-aces-critical-1850495336


Sierra Space fired up its spaceplane in its assembly facility for the first time, signifying that the Dream Chaser shuttle could soon be ready for its first mission to low Earth orbit.

The Colorado-based company announced on Wednesday that it had successfully completed the first power-up of its Dream Chaser spaceplane. During the test, engineers simulated the power that would otherwise be generated by Dream Chaser’s solar arrays once the spaceplane is in orbit and its systems are turned on.

“This is a milestone that points to the future and is a key moment in a long journey for Dream Chaser,” Tom Vice, CEO of Sierra Space, said in the company’s statement. “With this significant achievement, our Dream Chaser spaceplane is poised to redefine commercial space travel, opening up new possibilities for scientific research, technological advancements, and economic opportunities in space.”

The Dream Chaser is an orbital spaceplane designed to fly to low Earth orbit, carrying crew and cargo to orbital pitstops such as the International Space Station (ISS). The futuristic-looking shuttle is designed to carry up to 12,000 pounds (5,443 kilograms) of cargo. As Dream Chaser cannot fly to space on its own, a big rocket, namely ULA’s Vulcan Centaur, is required to deliver the craft to low Earth orbit. Like NASA’s Space Shuttle, however, Dream Chaser is designed to survive atmospheric reentry and perform runway landings on the surface.

Sierra Space is targeting the end of 2023 for Dream Chaser’s first flight from Kennedy Space Center in Florida. This flight is under a supply mission contract with NASA to send cargo to the ISS. The company also wants to launch crewed Dream Chaser missions to its own space station, Orbital Reef, a collaboration with Jeff Bezos’ Blue Origin.

The company is preparing to ship the first Dream Chaser, named Tenacity, to NASA’s Neil Armstrong Test Facility in Ohio for testing ahead of its inaugural flight, SpaceNews reported. The exact date of its launch, however, has yet to be disclosed. The spaceplane is set to launch on board the second mission for ULA’s Vulcan Centaur, but the rocket’s first mission has been repeatedly postponed.

For more spaceflight in your life, follow us on Twitter and bookmark Gizmodo’s dedicated Spaceflight page.

via Gizmodo https://gizmodo.com

June 1, 2023 at 11:19AM

Please Don’t Make These 7 Nasty ChatGPT-Generated Vegan Recipes

https://gizmodo.com/chatgpt-ai-recipes-food-7-bad-vegan-recipes-1850496776


If the artificial intelligence hypebeasts are to be believed, ChatGPT-styled large language models are on the cusp of radically altering the global workforce, gutting entire industries, and, quite possibly, eliminating life on Earth as we know it. Just don’t ask it to prepare you a basic meal or a palatable cocktail.

Chefs and taste testers at World Of Vegan learned this lesson the hard way after testing more than 100 different spring recipes, date night dishes, and dessert ideas created by OpenAI’s ChatGPT. The results were “hilariously pitiful,” one chef said. Out of the dozens of recipes churned out, only one dish, cauliflower tacos, was deemed “successful” by World of Vegan’s team of chefs.

Many of the other failed dishes sound fine at first glance, but after a quick analysis, the chefs found numerous examples of situations where ingredients clashed or where cooked dishes wound up a soggy, nasty mess. A simple spring veggie wrap generated by ChatGPT, for example, ended up looking more like last night’s dinner after a brutal hangover. Oven-baked chocolate cupcakes, meanwhile, somehow wound up looking like a deep crater caused by an errant bombing run.

The sudden rise of advanced large language models to the mainstream in recent years has revealed how, in many areas, AI and software are outpacing robotics. Ironically, that doesn’t seem to be the case in food service. Rudimentary robots can already flip burgers and help assemble more complicated meals in the physical world while ChatGPT fails to put together beginner-level dishes.

“I imagine specially-programmed robots can cook up chef-created, Michelin Star-worthy meals, no problem,” World of Vegan Founder & CEO Michelle Cehn told Gizmodo. “However, when it comes to generating sensible—let alone innovative and delectable—recipes, the technology proved to be too immature in its development to consistently achieve our desired results.”

Chatbots aren’t entirely useless when it comes to food. The model can, in a pinch, help brainstorm a dish if you give it a handful of ingredients you happen to have on hand since the model is essentially predicting the most likely combinations of those individual elements. On the other hand, AI’s tendency to simply make shit up seemingly out of whole cloth could leave hungry users sorely disappointed. Oh, and don’t expect it to properly fill out your grocery list either.

“It may be okay, and even entertaining, to use ChatGPT to help whip up a recipe,” World of Vegan Blog assistant and taste tester Erin Wysocars said. “[But] be warned that it may still flop and those ingredients you wanted to save may turn into garbage anyway. May your stomach rumble and empty fridge serve as the lesson here!”

“For recipes, ChatGPT is a tool, but should not be the tool for finding human-generated and tested recipes,” Wysocars added.

Keep reading to see some of ChatGPT’s terrible concoctions. Recipes for each dish are included, should you dare tempt fate.

via Gizmodo https://gizmodo.com

June 2, 2023 at 06:07AM

YouTube’s recommendations are leading kids to gun videos, report says

https://www.engadget.com/youtubes-recommendations-are-leading-kids-to-gun-videos-report-says-231207580.html?src=rss

YouTube’s recommendations are leading young kids to videos about school shootings and other gun-related content, according to a new report. According to the Tech Transparency Project (TTP), a nonprofit watchdog group, YouTube’s recommendation algorithm is “pushing boys interested in video games to scenes of school shootings, instructions on how to use and modify weapons” and other gun-centric content. 

The researchers behind the report set up four new YouTube accounts posing as two 9-year-old boys and two 14-year-old boys. All accounts watched playlists of content about popular video games, like Roblox, Lego Star Wars, Halo and Grand Theft Auto. The researchers then tracked the accounts’ recommendations during a 30-day period last November.

“The study found that YouTube pushed content on shootings and weapons to all of the gamer accounts, but at a much higher volume to the users who clicked on the YouTube-recommended videos,” the TTP writes. “These videos included scenes depicting school shootings and other mass shooting events; graphic demonstrations of how much damage guns can inflict on a human body; and how-to guides for converting a handgun to a fully automatic weapon.”

As the report notes, several of the recommended videos appeared to violate YouTube’s own policies. Recommendations included videos of a young girl firing a gun and tutorials on converting handguns into “fully automatic” weapons and other modifications. Some of these videos were also monetized with ads.

In a statement, a YouTube spokesperson pointed to the YouTube Kids app and its in-app supervision tools, which “create a safer experience for tweens and teens” on its platform.

“We welcome research on our recommendations, and we’re exploring more ways to bring in academic researchers to study our systems,” the spokesperson said. “But in reviewing this report’s methodology, it’s difficult for us to draw strong conclusions. For example, the study doesn’t provide context of how many overall videos were recommended to the test accounts, and also doesn’t give insight into how the test accounts were set up, including whether YouTube’s Supervised Experiences tools were applied.”

The TTP report is far from the first time researchers have raised questions about YouTube’s recommendation algorithm. The company has also spent years working to reduce so-called “borderline” content — videos that don’t break its rules outright but may otherwise be unsuitable for mass distribution — from appearing in recommendations. And last year, the company said it was considering disabling sharing altogether on some such content.

This article originally appeared on Engadget at https://ift.tt/BDpIVn1

via Engadget http://www.engadget.com

May 16, 2023 at 06:24PM