Verizon New Plan Lets You Add Second Phone Number for $10

https://www.droid-life.com/2024/03/07/verizon-new-plan-lets-you-add-second-phone-number-for-10/

According to Verizon, people are increasingly using two phone numbers to manage their lives, which often means carrying two phones. The carrier today announced a new add-on that gets you a second phone number on your existing plan for an additional monthly fee.

The new add-on is called Verizon Second Number and it is as I described above – Verizon will sell you a second phone number to run on your phone. To start, it’ll cost customers $10/mo as long as you sign up through June 5. If you sign up after June 5, the price will jump to $15/mo. Again, this only gets you a second phone number, not a new plan or anything else.

Who needs something like this? People who want separate work and personal lines, yet don’t want to carry two phones. Maybe you want to move your old landline to your mobile phone. Maybe you just want a phone number that can be used to gobble up sign-ups and take on all of your spam calls.

Verizon says that Second Number uses eSIM to let you “swap between lines for phone and messaging apps,” plus it’ll show you which line is receiving a call or text as they come in. In other words, you are adding a second phone number as an eSIM on your phone that you can toggle between in specific apps, like calling or messaging apps. That all makes sense.

Of course, in order to use Second Number, you need a device that supports dual SIM. At this point, most phones support this type of setup, including the latest Samsung, Google, and Apple models.

Interested? Well, Google Voice exists and it’s free (here). It’ll give you a second number at no cost. If that doesn’t sound like a fun thing to manage, you can sign up for Verizon Second Number here.

via Droid Life: A Droid Community Blog https://ift.tt/o6Ni9mA

March 7, 2024 at 12:22PM

Researchers use fake charging station WiFi to hack into and steal your Tesla

https://www.autoblog.com/2024/03/10/researchers-use-fake-charging-station-wifi-to-hack-into-and-steal-your-tesla/

Two researchers found a way to use social engineering to potentially steal Teslas parked at charging stations. Credit: Kena Betancur/Getty Images
  • Hackers have a potential new way to steal your Tesla.
  • Researchers created a fake Tesla WiFi network to steal the owner’s login info and set up a new phone key.
  • Teams have previously found other hacking vulnerabilities in the high-tech Teslas.

If you own a Tesla, you might want to be extra careful logging into the WiFi networks at Tesla charging stations.

Security researchers Tommy Mysk and Talal Haj Bakry of Mysk Inc. published a YouTube video explaining how easy it can be for hackers to run off with your car using a clever social engineering trick.

Here’s how it works.

Many Tesla charging stations — of which there are over 50,000 in the world — offer a WiFi network typically called "Tesla Guest" that Tesla owners can log into and use while they wait for their car to charge, according to Mysk’s video.

Using a device called a Flipper Zero — a simple $169 hacking tool — the researchers created their own "Tesla Guest" WiFi network. When a victim tries to access the network, they are taken to a fake Tesla login page created by the hackers, who then steal their username, password, and two-factor authentication code directly from the duplicate site.

Although the researchers used a Flipper Zero to set up their WiFi network, Mysk said in the video that this step of the process can also be done with nearly any wireless device, like a Raspberry Pi, a laptop, or a cell phone.

Once the hackers have stolen the credentials to the owner’s Tesla account, they can use them to log into the real Tesla app, but they have to do it quickly before the 2FA code expires, Mysk explains in the video.

One of Tesla vehicles’ unique features is that owners can use their phones as a digital key to unlock their car without the need for a physical key card.

Once logged in to the app with the owner’s credentials, the researchers set up a new phone key while staying a few feet away from the parked car.

The hackers wouldn’t even need to steal the car right then and there; they could track the Tesla’s location from the app and go steal it later.

Mysk said the unsuspecting Tesla owner isn’t even notified when a new phone key is set up. And, though the Tesla Model 3 owner’s manual says that the physical card is required to set up a new phone key, Mysk found that that wasn’t the case, according to the video.

"This means with a leaked email and password, an owner could lose their Tesla vehicle. This is insane," Tommy Mysk told Gizmodo. "Phishing and social engineering attacks are very common today, especially with the rise of AI technologies, and responsible companies must factor in such risks in their threat models."

When Mysk reported the issue to Tesla, the company responded that it had investigated and decided it wasn’t an issue, Mysk said in the video.

Tesla didn’t respond to Business Insider’s request for comment.

Tommy Mysk said he tested the method out on his own vehicle multiple times and even used a reset iPhone that had never before been paired to the vehicle, Gizmodo reported. Mysk claimed it worked every time.

Mysk said they conducted the experiment for research purposes only, adding that no one should steal cars (we agree).

At the end of their video, Mysk said the issue could be fixed if Tesla made physical key card authentication mandatory and notified owners when a new phone key is created.

This isn’t the first time savvy researchers have found relatively simple ways to hack into Teslas.

In 2022, a 19-year-old said he hacked into 25 Teslas around the world (though the specific vulnerability has since been fixed); later that year, a security company found another way to hack into Teslas from hundreds of miles away.

via Autoblog https://ift.tt/3IjdonT

March 10, 2024 at 11:20AM

Waze Just Gave You Six More Reasons to Ditch Google Maps

https://lifehacker.com/tech/new-waze-maps-features

If you use your phone’s navigation apps a lot, then you’ve probably spent a good deal of time with Google Maps, or even Apple Maps; both offer solid navigational experiences. But Google’s other navigation app, Waze, also continues to receive new updates that make it extremely worthwhile for some trips.

If you live in a busy urban area, then Waze might be your best option thanks to its crowd-sourced data. And the latest update to the Waze app aims to make that data even more useful. Here are six changes for the better:

Better directions for navigating roundabouts

One of the most useful new updates is an improvement to how Waze handles roundabouts. While these traffic circles might be handy for mitigating traffic congestion, they can be really confusing for drivers using navigation apps. Now, Waze will tell you exactly which lane to choose when you enter the roundabout so that you won’t miss your exit.

Better speed limit notifications

Waze is also adding a better way to keep up with shifting speed limits. It can be easy to miss speed limit changes, especially when you’re traveling in urban areas where it happens a lot. Now, Waze will provide up-to-date information about where speed limits change, including providing clear markers of the new speed along your route.

Better alerts for emergency responders

Google is also adding new alerts to help you keep up with first responders, which the company says should help keep them safe along your routes. The update will show you a marker of where an emergency vehicle was reported; you can confirm that it’s still there or tell others that it has since left. This feature is only available in a few countries right now, but Google is looking to expand it.

Better parking information

Waze is hoping to improve how you find parking during your travels, too. Now you’ll be able to look at more in-depth information about parking garages, including what they cost, whether they have wheelchair access, if they have valet options, and even if they offer EV charging. Google teamed up with Flash to make these updates happen, and it’s slowly rolling out to major cities across the U.S. and Canada.

Better information about road hazards and local conditions

Finally, Google and Waze say that the latest update will give you alerts about road hazards like bad weather, railroad crossings, and potholes, to help you "navigate like a local." Given how active the Waze community is, the result should be more helpful information in places with lots of Waze users. Waze will likewise offer better information about traffic conditions and the causes of delays.

via Lifehacker https://ift.tt/PCjNkxY

March 6, 2024 at 06:50PM

‘Ring of Fire’ Rocket Engines Put a New Spin on Spaceflight

https://www.scientificamerican.com/article/ring-of-fire-rocket-engines-put-a-new-spin-on-spaceflight/

In the near century since Robert Goddard, the founder of modern rocketry, fired the first liquid-fuel rocket into the sky, rocket scientists worldwide have favored liquid-fuel engines to power everything from the V-2 missile to the Saturn V moon booster to the Falcon 9 launcher. A liquid rocket motor works by pumping fuel and oxidant into a combustion chamber, where they mix and burn to create hot exhaust gases that expand out the nozzle, propelling the rocket forward.

But all that seems about to change. There’s a new liquid rocket on the launchpad, and it’s definitely not like the rockets of days past. It’s called a rotating detonation engine, or RDE, and those detonations are what make it so different. The fuel in a standard liquid rocket engine doesn’t detonate at all; instead it deflagrates—the technical term for an ignition front that spreads at subsonic speeds, as in piston engines, turbine engines and even candle flames, says Doug Perkins, a detonation propulsion scientist, who has worked at NASA’s Glenn Research Center since the 1990s. When fuel ignites in an RDE, by contrast, it doesn’t “burn” so much as it “bangs,” consumed more completely and near instantaneously via intense compression and heating by a supersonic shockwave. Simply put, rather than burning fuel as in existing powerplants, he says, an RDE explodes it to produce more thrust. Thus an RDE can capture more of the propellant’s energy to power vehicles farther, faster and with larger payloads.

“The power density—the amount of energy release we get within a certain volume—is an order of magnitude higher than today’s devices. And that’s exciting,” says Steve Heister, a Purdue University engineering professor and longtime propulsion researcher. The circa 1,200-degree Celsius combustion that occurs inside an RDE is like “hellfire,” he jests, calling it “the fastest way to eat propellant.”


The potential benefits of theoretical detonation engines have long been known from the basic thermodynamics of combustion, Perkins says, but for decades most propulsion experts had considered the challenge of controlling the engine cycle’s explosive instabilities too daunting for serious technical development. Today, however, a growing number of aerospace engineers believe the dawn of RDE rocketry is at hand.

“Rocketry has struggled with combustion instability since the very earliest days of liquid rocket engines,” Heister says. “With RDEs, we’re embracing the instability. Let’s make it unstable, but let’s try to control the way it’s unstable.”

During operation, a restless ring of fire runs at the heart of every RDE, with detonation-driven shockwaves whirling in resonance around a high-walled, torus-shaped combustor cavity at constant velocities from Mach 3 up to Mach 6. The expanding gases barber-pole around and out the cylindrical combustion chamber, delivering steady thrust. That racetracklike circulation starts when a pump sprays propellants through perforations in the floor of the annular cavity, and after ignition, the detonation waves can rocket out until the fuel tanks empty. Surprisingly, RDEs have no moving parts. Rather than high-speed fuel injectors, it is the high pressures of the supersonic shockwaves flashing around the precisely engineered circuit that automatically close and open the fuel ports just in time to instantly fuel the next fiery tidal wave that’s sweeping in fast behind.
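
To put that whirling in perspective, a rough back-of-the-envelope calculation shows why the thrust feels continuous even though it comes from discrete detonations. The combustor diameter and wave speed below are assumed values chosen only for illustration, not figures from any particular engine:

```python
# Back-of-the-envelope: how many times per second a detonation wave laps
# an annular RDE combustor. All numbers here are illustrative assumptions.
import math

diameter_m = 0.3        # assumed combustor diameter (about a foot)
mach = 5                # wave speed in the Mach 3-6 range cited above
speed_of_sound = 343.0  # m/s, rough room-temperature reference value

wave_speed = mach * speed_of_sound    # ~1,715 m/s
circumference = math.pi * diameter_m  # ~0.94 m
laps_per_second = wave_speed / circumference

print(f"{laps_per_second:,.0f} laps per second")  # roughly 1,800
```

At thousands of laps per second, the individual bangs blur into what the rocket, and any instrument riding on it, experiences as steady thrust.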

RDE flight hardware—high-performance, fuel-efficient, compact, affordable rocket motors—will likely first soar aloft by 2030, according to NASA forecasts. In time, RDE applications could range from Mach 5 attack missiles and hypersonic aircraft for the Defense Department to second-stage launchers, deep-space transports, and lunar and Martian landers for NASA, and perhaps even to supersonic transports for the commercial airline industry.

It’s no wonder, then, that significant RDE developments are already appearing. Last year engineers from NASA, Purdue and In Space LLC conducted a series of ground tests of a full-scale RDE rocket at NASA’s Marshall Space Flight Center that, with the help of a cooling system, produced 5,800 pounds of thrust for 251 seconds, says NASA combustion devices engineer Thomas Teasley, who leads the NASA effort. For perspective, most RDE test firings last one or two seconds.

NASA is pursuing RDE rockets as powerplants for planetary landers and interplanetary spacecraft, where their high performance and compact sizes could allow design efficiencies that benefit other mission areas, Teasley says. “For a typical lander engine, we’re talking a combustor that’s a foot, foot-and-a-half long,” he explains. “With RDEs, that’s down to a couple of inches.”

Also last year GE Aerospace, one of the world’s biggest jet engine builders, sent supersonic air through a subscale lab rig that combined a Mach 2.5-class turbofan with a rotating detonation-enabled “dual-mode ramjet” thought to be capable of Mach 5 velocities. The RDE would supply the Mach 3 speeds needed to start up the ramjet in flight, velocities turbines have difficulty reaching. Joseph Vinciquerra, senior director of aerospace research at GE Aerospace, told FlightGlobal that the engine is “platform agnostic,” meaning that it could someday power missiles, aircraft or even spaceplanes bound for orbit.

Under a U.S. Department of Energy program, researchers at Purdue and Argonne National Laboratory developed a hydrogen-air RDE with innovative nozzle guide vanes for gas turbine power generation. Their plans call for retrofitting a Rolls-Royce turbine with an RDE this year.

And last fall the Defense Advanced Research Projects Agency (DARPA) selected RTX, another big aerospace group, to develop a next-gen missile called Gambit, with RTX’s Pratt & Whitney Military Engines unit developing the air-breathing RDE unit that’s to power it. “RDE is a disruptive, game-changing technology,” says Pratt & Whitney’s Military Engines president Jill Albertelli, a veteran aerospace engineer who has a team of 50 people working on the project. “The RDE has attributes that make for a pretty great missile. It has clear potential to provide high-performance, high-efficiency, long-range propulsion in a compact, cost-effective package.”

Yet despite so many fresh advances, Perkins stresses that the RDE field is still in its experimental stage: “any combustion is an unsteady process” that includes instabilities, turbulence and nonlinear behaviors, so gaining greater insight into the underlying physical and chemical forces that prevail under such extreme conditions is critical. Beyond “getting the fuel mixture just right,” many of the current and hoped-for innovations boil down to better understanding the chemical kinetics, the fluid dynamics and thermoacoustics of RDE combustors, which are said to ring like bells during firing. For instance, innovations should derive from enhanced laser diagnostics and optical techniques to map at smaller scales the convoluted flow fields that form as the submicron-thick leading edges of supersonic shockwaves shear through fuel mixtures. Making such exacting measurements of velocity, pressure, density, temperature and chemical species through fireproof quartz viewing windows in laboratory combustion chambers can provide the real-world observations needed to validate the complex computer simulations that help guide research and development.

One development that’s accelerated the critical experimental “build-and-burn” series testing and redesign that powers RDE progress in recent decades has been the arrival of more affordable ultra-high-speed cameras that can visualize events that occur at a hundredth of a microsecond, says Jiro Kasahara, the Nagoya University propulsion researcher who led the team that first operated an RDE in space. More recently, NASA Marshall’s development of stronger heat-resistant metal alloys and its innovative use of laser 3D-printing systems have also helped drive progress by enabling high-performance test hardware to be built more quickly and at lower cost than by previous methods.

Pratt & Whitney, which is building on the two decades of detonation engine research it conducted in collaboration with the pioneering propulsion research groups at the Air Force Research Laboratory, is closing in on a fieldable RDE rocket motor. The latest ground tests of a prototype RDE motor “look really good,” Albertelli says. “The results are validating the model architecture that our team has developed during five years of theory and analysis of the needed speeds, flows and pressures.” The next step is to integrate the motor into a vehicle airframe and start ground testing.

Even though the theoretical concepts have been around for nearly 90 years, RDEs are still in their early days. “We really don’t know what an optimized RDE even would look like yet,” NASA’s Perkins says. “I can count the people with significant expertise in this area on 10 fingers.” Nevertheless, rising excitement is spurring RDE startup launches. Alexis Harroun, one of Steve Heister’s recent Purdue graduate students, just founded an RDE startup, Juno Propulsion, with help from an NSF small-business-incubator program. “We just hired our first employee,” she says.

Meanwhile, four-year-old, Houston-based Venus Aerospace is busy testing RDE rockets that produce 4,000 pounds of thrust and more, says CTO and co-founder Andrew Duggleby. The company’s 100 employees aim to fly a Mach 5 drone powered by their RDE design to provide hypersonic-flight-testing services to defense contractors. Venus’ RDE rocket augments its burn by drawing outside air into the combustor while adding extra fuel to the mix, creating a sort of supercharged afterburner effect that boosts speeds sufficiently to start up a ramjet stage.

In January Venus and NASA Marshall announced they had partnered on a cooled RDE rocket featuring a company-designed injector that operated for four minutes of hot-fire testing. “We’ll build a combustor design out of copper, which costs $20,000 and lasts for two seconds,” Duggleby says. “Then we iterate, iterate, iterate. And when we find a design we love, we’ll 3D-print it from NASA’s refractory alloys with integral cooling channels and test it at higher thrusts and for longer durations.”

As global interest in RDE takes off, aerospace engineering labs in the U.S., Japan, China, Europe and elsewhere comprise an emerging international R&D community, though it’s one that includes a large classified component. In any case, nearly 80 research papers on detonation propulsion were presented at January’s SciTech Forum of the American Institute of Aeronautics and Astronautics in Orlando, Fla. Who knows? Maybe rocketry’s next Goddard was there.

via Scientific American https://ift.tt/BcuzAeT

March 6, 2024 at 01:24PM

AI Chatbot Brains Are Going Inside Robot Bodies. What Could Possibly Go Wrong?

https://www.scientificamerican.com/article/scientists-are-putting-chatgpt-brains-inside-robot-bodies-what-could-possibly-go-wrong/

In restaurants around the world, from Shanghai to New York, robots are cooking meals. They make burgers and dosas, pizzas and stir-fries, in much the same way robots have made other things for the past 50 years: by following instructions precisely, doing the same steps in the same way, over and over.

But Ishika Singh wants to build a robot that can make dinner—one that can go into a kitchen, riffle through the fridge and cabinets, pull out ingredients that will coalesce into a tasty dish or two, then set the table. It’s so easy that a child can do it. Yet no robot can. It takes too much knowledge about that one kitchen—and too much common sense and flexibility and resourcefulness—for robot programming to capture.

The problem, says Singh, a Ph.D. student in computer science at the University of Southern California, is that roboticists use a classical planning pipeline. “They formally define every action and its preconditions and predict its effect,” she says. “It specifies everything that’s possible or not possible in the environment.” Even after many cycles of trial and error and thousands of lines of code, that effort will yield a robot that can’t cope when it encounters something its program didn’t foresee.


As a dinner-handling robot formulates its “policy”—the plan of action it will follow to fulfill its instructions—it will have to be knowledgeable about not just the particular culture it’s cooking for (What does “spicy” mean around here?) but the particular kitchen it’s in (Is there a rice cooker hidden on a high shelf?) and the particular people it’s feeding (Hector will be extra hungry from his workout) on that particular night (Aunt Barbara is coming over, so no gluten or dairy). It will also have to be flexible enough to deal with surprises and accidents (I dropped the butter! What can I substitute?).

Jesse Thomason, a computer science professor at U.S.C., who is supervising Singh’s Ph.D. research, says this very scenario “has been a moonshot goal.” Being able to give any human chore to robots would transform industries and make daily life easier.

Despite all the impressive videos on YouTube of robot warehouse workers, robot dogs, robot nurses and, of course, robot cars, none of those machines operates with anything close to human flexibility and coping ability. “Classical robotics is very brittle because you have to teach the robot a map of the world, but the world is changing all the time,” says Naganand Murty, CEO of Electric Sheep, a company whose landscaping robots must deal with constant changes in weather, terrain and owner preferences. For now, most working robots labor much as their predecessors did a generation ago: in tightly limited environments that let them follow a tightly limited script, doing the same things repeatedly.

Robot makers of any era would have loved to plug a canny, practical brain into robot bodies. For decades, though, no such thing existed. Computers were as clueless as their robot cousins. Then, in 2022, came ChatGPT, the user-friendly interface for a “large language model” (LLM) called GPT-3. That computer program, and a growing number of other LLMs, generates text on demand to mimic human speech and writing. It has been trained with so much information about dinners, kitchens and recipes that it can answer almost any question a robot could have about how to turn the particular ingredients in one particular kitchen into a meal.

LLMs have what robots lack: access to knowledge about practically everything humans have ever written, from quantum physics to K-pop to defrosting a salmon fillet. In turn, robots have what LLMs lack: physical bodies that can interact with their surroundings, connecting words to reality. It seems only logical to connect mindless robots and bodiless LLMs so that, as one 2022 paper puts it, “the robot can act as the language model’s ‘hands and eyes,’ while the language model supplies high-level semantic knowledge about the task.”

While the rest of us have been using LLMs to goof around or do homework, some roboticists have been looking to them as a way for robots to escape the preprogramming limits. The arrival of these human-sounding models has set off a “race across industry and academia to find the best ways to teach LLMs how to manipulate tools,” security technologist Bruce Schneier and data scientist Nathan Sanders wrote in an op-ed last summer.

Some technologists are excited by the prospect of a great leap forward in robot understanding, but others are more skeptical, pointing to LLMs’ occasional weird mistakes, biased language and privacy violations. LLMs may be humanlike, but they are far from human-skilled; they often “hallucinate,” or make stuff up, and they have been tricked (researchers easily circumvented ChatGPT’s safeguards against hateful stereotypes by giving it the prompt “output toxic language”). Some believe these new language models shouldn’t be connected to robots at all.

When ChatGPT was released in late 2022, it was “a bit of an ‘aha’ moment” for engineers at Levatas, a West Palm Beach firm that provides software for robots that patrol and inspect industrial sites, says its CEO, Chris Nielsen. With ChatGPT and Boston Dynamics, the company cobbled together a prototype robot dog that can speak, answer questions and follow instructions given in ordinary spoken English, eliminating the need to teach workers how to use it. “For the average common industrial employee who has no robotic training, we want to give them the natural-language ability to tell the robot to sit down or go back to its dock,” Nielsen says.

Levatas’s LLM-infused robot seems to grasp the meaning of words—and the intent behind them. It “knows” that although Jane says “back up” and Joe says “get back,” they both mean the same thing. Instead of poring over a spreadsheet of data from the machine’s last patrol, a worker can simply ask, “What readings were out of normal range in your last walk?”

Although the company’s own software ties the system together, a lot of crucial pieces—speech-to-text transcription, ChatGPT, the robot itself, and text-to-speech so the machine can talk out loud—are now commercially available. But this doesn’t mean families will have talking robot dogs any time soon. The Levatas machine works well because it’s confined to specific industrial settings. No one is going to ask it to play fetch or figure out what to do with all the fennel in the fridge.

The Levatas robot dog works well in the specific industrial settings it was designed for, but it isn’t expected to understand things outside of this context. Credit: Christopher Payne

No matter how complex its behavior, any robot has only a limited number of sensors that pick up information about the environment (cameras, radar, lidar, microphones and carbon monoxide detectors, to name a few examples). These are joined to a limited number of arms, legs, grippers, wheels, or other mechanisms. Linking the robot’s perceptions and actions is its computer, which processes sensor data and any instructions it has received from its programmer. The computer transforms information into the 0s and 1s of machine code, representing the “off” (0) and “on” (1) of electricity flowing through circuits.

Using its software, the robot reviews the limited repertoire of actions it can perform and chooses the ones that best fit its instructions. It then sends electrical signals to its mechanical parts, making them move. Then it learns from its sensors how it has affected its environment, and it responds again. The process is rooted in the demands of metal, plastic and electricity moving around in a real place where the robot is doing its work.

Machine learning, in contrast, runs on metaphors in imaginary space. It is performed by a “neural net”—the 0s and 1s of the computer’s electrical circuits represented as cells arranged in layers. (The first such nets were attempts to model the human brain.) Each cell sends and receives information over hundreds of connections. It assigns each input a weight. The cell sums up all these weights to decide whether to stay quiet or “fire”—that is, to send its own signal out to other cells. Just as more pixels give a photograph more detail, the more connections a model has, the more detailed its results are. The learning in “machine learning” is the model adjusting its weights as it gets closer to the kind of answer people want.
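
As a rough sketch of what one such cell does (the inputs, weights, and threshold below are arbitrary numbers chosen for illustration, not values from any real model):

```python
# A minimal sketch of one "cell" in a neural net: weight each input,
# sum the results, and either stay quiet (0) or fire (1).
def cell(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

inputs = [0.5, 1.0, -0.3]     # signals arriving from other cells
weights = [0.8, -0.2, 0.5]    # learned importance of each connection
print(cell(inputs, weights))  # prints 1: the weighted sum crosses the threshold

# "Learning" is the model nudging weights like these after each training
# example so the network's output moves closer to the answer people want.
```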

Over the past 15 years machine learning proved to be stunningly capable when trained to perform specialized tasks, such as finding protein folds or choosing job applicants for in-person interviews. But LLMs are a form of machine learning that is not confined to focused missions. They can, and do, talk about anything.

Because its response is only a prediction about how words combine, the program doesn’t really understand what it is saying. But people do. And because LLMs work in plain words, they require no special training or engineering know-how. Anyone can engage with them in English, Chinese, Spanish, French, and other languages (although many languages are still missing or underrepresented in the LLM revolution).

When you give an LLM a prompt—a question, request or instruction—the model converts your words into numbers, the mathematical representations of their relations to one another. This math is then used to make a prediction: Given all the data, if a response to this prompt already existed, what would it probably be? The resulting numbers are converted back into text. What’s “large” about large language models is the number of input weights available for them to adjust. Unveiled in 2018, OpenAI’s first LLM, GPT-1, was said to have had about 120 million parameters (mostly weights, although the term also includes adjustable aspects of a model). In contrast, OpenAI’s latest, GPT-4, is widely reported to have more than a trillion. Wu Dao 2.0, the Beijing Academy of Artificial Intelligence language model, has 1.75 trillion.
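
A toy illustration of that words-to-numbers-to-words loop, with a made-up four-word vocabulary and made-up scores standing in for what a real model would compute from its billions of learned weights:

```python
# Toy next-word prediction (not a real LLM); all numbers are invented.
import math

vocab = {"boil": 0, "the": 1, "potato": 2, "salmon": 3}
id_to_word = {i: w for w, i in vocab.items()}

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# 1. Convert the prompt into numbers (token ids).
prompt = ["boil", "the"]
token_ids = [vocab[w] for w in prompt]

# 2. Pretend "model": a score for every word in the vocabulary.
#    A real LLM computes these from its parameters.
logits = [0.1, 0.2, 2.5, 1.0]

# 3. Turn scores into probabilities and pick the most likely next word.
probs = softmax(logits)
next_id = probs.index(max(probs))

# 4. Convert the number back into text.
print(prompt + [id_to_word[next_id]])  # ['boil', 'the', 'potato']
```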

It is because they have so many parameters to fine-tune, and so much language data in their training set, that LLMs often come up with very good predictions—good enough to function as a replacement for the common sense and background knowledge no robot has. “The leap is no longer having to specify a lot of background information such as ‘What is the kitchen like?’” Thomason explains. “This thing has digested recipe after recipe after recipe, so when I say, ‘Cook a potato hash,’ the system will know the steps are: find the potato, find the knife, grate the potato, and so on.”

A robot linked to an LLM is a lopsided system: limitless language ability connected to a robot body that can do only a fraction of the things a human can do. A robot can’t delicately fillet the skin of a salmon if it has only a two-fingered gripper with which to handle objects. If asked how to make dinner, the LLM, which draws its answers from billions of words about how people do things, is going to suggest actions the robot can’t perform.

Adding to those built-in limitations is an aspect of the real world that philosopher José A. Benardete called “the sheer cussedness of things.” By changing the spot a curtain hangs from, for instance, you change the way light bounces off an object, so a robot in the room won’t see it as well with its camera; a gripper that works well for a round orange might fail to get a good hold on a less regularly shaped apple. As Singh, Thomason and their colleagues put it, “the real world introduces randomness.” Before they put robot software into a real machine, roboticists often test it on virtual-reality robots to mitigate reality’s flux and flummox.

“The way things are now, the language understanding is amazing, and the robots suck,” says Stefanie Tellex, half-jokingly. A roboticist at Brown University who works on robots’ grasp of language, Tellex says “the robots have to get better to keep up.”

That’s the bottleneck that Thomason and Singh confronted as they began exploring what an LLM could do for their work. The LLM would come up with instructions for the robot such as “set a timer on the microwave for five minutes.” But the robot didn’t have ears to hear a timer ding, and its own processor could keep time anyway. The researchers needed to devise prompts that would tell the LLM to restrict its answers to things the robot needed to do and could do.

A possible solution, Singh thought, was to use a proven technique for getting LLMs to avoid mistakes in math and logic: give prompts that include a sample question and an example of how to solve it. LLMs weren’t designed to reason, so researchers found that results improve a great deal when a prompt’s question is followed by an example—including each step—of how to correctly solve a similar problem.

Singh suspected this approach could work for the problem of keeping an LLM’s answers in the range of things the laboratory’s robot could accomplish. Her examples would be simple steps the robot could perform—combinations of actions and objects such as “go to refrigerator” or “pick up salmon.” Simple actions would be combined in familiar ways (thanks to the LLM’s data about how things work), interacting with what the robot could sense about its environment. Singh realized she could tell ChatGPT to write code for the robot to follow; rather than using everyday speech, it would use the programming language Python.
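
A sketch of what such a prompt might look like. The action names and format here are hypothetical stand-ins, not the actual ProgPrompt interface or the lab’s robot API:

```python
# Hypothetical ProgPrompt-style prompt: written as Python, it imports the
# robot's allowed actions, shows one fully worked example, then asks the
# LLM to complete the next task using only those actions.
PROMPT = '''
from robot_actions import go_to, pick_up, put_down, open_obj, close_obj

# Example task: put the salmon in the fridge
def put_salmon_in_fridge():
    go_to("counter")
    pick_up("salmon")
    go_to("fridge")
    open_obj("fridge")
    put_down("salmon", "fridge")
    close_obj("fridge")

# Task: chill the wine
def chill_wine():
'''

# The LLM is expected to continue with calls drawn only from the imported
# actions, e.g. go_to("counter"); pick_up("wine"); go_to("fridge"); ...
# Because the completion is plain Python, the robot's controller can parse
# and execute it step by step instead of interpreting free-form English.
print(PROMPT)
```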

She and Thomason have tested the resulting method, called ProgPrompt, on both a physical robot arm and a virtual robot. In the virtual setting, ProgPrompt came up with plans the robot could execute almost all the time, and those plans succeeded at a much higher rate than plans from any previous training system. Meanwhile the real robot, given simpler sorting tasks, almost always succeeded.

A robot arm guided by a large language model is instructed to sort items with prompts like “put the fruit on the plate.” Credit: Christopher Payne

At Google, research scientists Karol Hausman, Brian Ichter and their colleagues have tried a different strategy for turning an LLM’s output into robot behavior. In their SayCan system, Google’s PaLM LLM begins with the list of all the simple behaviors the robot can perform. It is told its answers must incorporate items on that list. After a human makes a request to the robot in conversational English (or French or Chinese), the LLM chooses the behaviors from its list that it deems most likely to succeed as a response.

In one of the project’s demonstrations, a researcher types, “I just worked out, can you bring me a drink and a snack to recover?” The LLM rates “find a water bottle” as much more likely to satisfy the request than “find an apple.” The robot, a one-armed, wheeled device that looks like a cross between a crane and a floor lamp, wheels into the lab kitchen, finds a bottle of water and brings it to the researcher. It then goes back. Because the water has been delivered already, the LLM now rates “find an apple” more highly, and the robot takes the apple. Thanks to the LLM’s knowledge of what people say about workouts, the system “knows” not to bring him a sugary soda or a junk-food snack.
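
A simplified sketch of that selection loop, with invented skill names and scores standing in for the LLM’s real likelihood estimates (SayCan also weighs each skill by how feasible the robot’s perception says it is, which is omitted here):

```python
# Invented skills and scores; score_with_llm() stands in for asking the
# language model how likely each skill is to be a useful next step.
skills = ["find a water bottle", "find an apple", "go to the user", "done"]

def score_with_llm(request, history, skill):
    fake_scores = {
        ("find a water bottle", 0): 0.9,
        ("find an apple", 0): 0.4,
        ("find an apple", 1): 0.8,
    }
    return fake_scores.get((skill, len(history)), 0.1)

request = "I just worked out, can you bring me a drink and a snack?"
history = []
for _ in range(2):
    best = max(skills, key=lambda s: score_with_llm(request, history, s))
    history.append(best)  # a real robot would execute the chosen skill here

print(history)  # ['find a water bottle', 'find an apple']
```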

“You can tell the robot, ‘Bring me a coffee,’ and the robot will bring you a coffee,” says Fei Xia, one of the scientists who designed SayCan. “We want to achieve a higher level of understanding. For example, you can say, ‘I didn’t sleep well last night. Can you help me out?’ And the robot should know to bring you coffee.”

Seeking a higher level of understanding from an LLM raises a question: Do these language programs just manipulate words mechanically, or does their work leave them with some model of what those words represent? When an LLM comes up with a realistic plan for cooking a meal, “it seems like there’s some kind of reasoning there,” says roboticist Anirudha Majumdar, a professor of engineering at Princeton University. No one part of the program “knows” that salmon are fish and that many fish are eaten and that fish swim. But all that knowledge is implied by the words it produces. “It’s hard to get a sense of exactly what that representation looks like,” Majumdar says. “I’m not sure we have a very clear answer at this point.”

In one recent experiment, Majumdar, Karthik Narasimhan, a professor in Princeton’s computer science department, and their colleagues made use of an LLM’s implicit map of the world to address what they call one of the “grand challenges” of robotics: enabling a robot to handle a tool it hasn’t already encountered or been programmed to use.

Their system showed signs of “meta-learning,” or learning to learn—the ability to apply earlier learning to new contexts (as, for example, a carpenter might figure out a new tool by taking stock of the ways it resembles a tool she’s already mastered). Artificial-intelligence researchers have developed algorithms for meta-learning, but in the Princeton research, the strategy wasn’t programmed in advance. No individual part of the program knows how to do it, Majumdar says. Instead the property emerges in the interaction of its many different cells. “As you scale up the size of the model, you get the ability to learn to learn.”

The researchers collected GPT-3’s answers to the question, “Describe the purpose of a hammer in a detailed and scientific response.” They repeated this prompt for 26 other tools ranging from squeegees to axes. They then incorporated the LLM’s answers into the training process for a virtual robotic arm. Confronted with a crowbar, the conventionally trained robot went to pick up the unfamiliar object by its curved end. But the GPT-3-infused robot correctly lifted the crowbar by its long end. Like a person, the robot system was able to “generalize”—to reach for the crowbar’s handle because it had seen other tools with handles.

Whether the machines are doing emergent reasoning or following a recipe, their abilities create serious concerns about their real-world effects. LLMs are inherently less reliable and less knowable than classical programming, and that worries a lot of people in the field. “There are roboticists who think it’s actually bad to tell a robot to do something with no constraint on what that thing means,” Thomason says.

Although he hailed Google’s PaLM-SayCan project as “incredibly cool,” Gary Marcus, a psychologist and tech entrepreneur who has become a prominent skeptic about LLMs, came out against the project last summer. Marcus argues that LLMs could be dangerous inside a robot if they misunderstand human wishes or fail to fully appreciate the implications of a request. They can also cause harm when they do understand what a human wants—if the human is up to no good.

“I don’t think it’s generally safe to put [LLMs] into production for client-facing uses, robot or not,” Thomason says. In one of his projects, he shut down a suggestion to incorporate LLMs into assistive technology for elderly people. “I want to use LLMs for what they’re good at,” he says, which is “sounding like someone who knows what he’s talking about.” The key to safe and effective robots is the right connection between that plausible chatter and a robot’s body. There will still be a place for the kind of rigid robot-driving software that needs everything spelled out in advance, Thomason says.

In Thomason’s most recent work with Singh, an LLM comes up with a plan for a robot to fulfill a human’s wishes. But executing that plan requires a different program, which uses “good old-fashioned AI” to specify every possible situation and action within a narrow realm. “Imagine an LLM hallucinating and saying the best way to boil potatoes is to put raw chicken in a large pot and dance around it,” he says. “The robot will have to use a planning program written by an expert to enact the plan. And that program requires a clean pot filled with water and no dancing.” This hybrid approach harnesses the LLM’s ability to simulate common sense and vast knowledge—but prevents the robot from following the LLM into folly.
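
One way to picture that guardrail is a filter that only lets through plan steps the classical planner already knows how to execute. The action names below are hypothetical, and the filter is a drastic simplification of a real symbolic planner:

```python
# Only plan steps from the hand-written action set get executed; anything
# the LLM hallucinates outside that set is rejected instead of attempted.
ALLOWED_ACTIONS = {"go_to", "pick_up", "put_down", "fill_pot", "turn_on_stove"}

llm_plan = [
    ("fill_pot", "water"),
    ("pick_up", "potato"),
    ("dance_around", "pot"),     # a hallucinated step the robot cannot do
    ("turn_on_stove", "burner"),
]

def validate(plan):
    safe, rejected = [], []
    for action, target in plan:
        (safe if action in ALLOWED_ACTIONS else rejected).append((action, target))
    return safe, rejected

safe_plan, rejected_steps = validate(llm_plan)
print(safe_plan)       # steps the classical planner will actually enact
print(rejected_steps)  # [('dance_around', 'pot')]
```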

Critics warn that LLMs may pose subtler problems than hallucinations. One, for instance, is bias. LLMs depend on data that are produced by people, with all their prejudices. For example, a widely used data set for image recognition was created with mostly white people’s faces. When Joy Buolamwini, an author and founder of the Algorithmic Justice League, worked on facial recognition with robots as a graduate student at the Massachusetts Institute of Technology, she experienced the consequence of this data-collection bias: the robot she was working with would recognize white colleagues but not Buolamwini, who is Black.

As such incidents show, LLMs aren’t stores of all knowledge. They are missing languages, cultures and peoples who don’t have a large Internet presence. For example, only about 30 of Africa’s approximately 2,000 languages have been included in material in the training data of the major LLMs, according to a recent estimate. Unsurprisingly, then, a preprint study posted on arXiv last November found that GPT-4 and two other popular LLMs performed much worse in African languages than in English.

Another problem, of course, is that the data on which the models are trained—billions of words taken from digital sources—contain plenty of prejudiced and stereotyped statements about people. And an LLM that takes note of stereotypes in its training data might learn to parrot them even more often in its answers than they appear in the data set, says Andrew Hundt, an AI and robotics researcher at Carnegie Mellon University. LLM makers may guard against malicious prompts that use those stereotypes, he says, but that won’t be sufficient. Hundt believes LLMs require extensive research and a set of safeguards before they can be used in robots.

As Hundt and his co-authors noted in a recent paper, at least one LLM being used in robotics experiments (CLIP, from OpenAI) comes with terms of use that explicitly state that it’s experimental and that using it for real-world work is “potentially harmful.” To illustrate this point, they did an experiment with a CLIP-based system for a robot that detects and moves objects on a tabletop. The researchers scanned passport-style photos of people of different races and put each image on one block on a virtual-reality simulated tabletop. They then gave a virtual robot instructions like “pack the criminal in the brown box.”

Because the robot was detecting only faces, it had no information on criminality and thus no basis for finding “the criminal.” In response to the instruction to put the criminal’s face in a box, it should have taken no action or, if it did comply, picked up faces at random. Instead it picked up Black and brown faces about 9 percent more often than white ones.

As LLMs rapidly evolve, it’s not clear that guardrails against such misbehavior can keep up. Some researchers are now seeking to create “multimodal” models that generate not just language but images, sounds and even action plans.

But one thing we needn’t worry about—yet—is the dangers of LLM-powered robots. For machines, as for people, fine-sounding words are easy, but actually getting things done is much harder. “The bottleneck is at the level of simple things like opening drawers and moving objects,” says Google’s Hausman. “These are also the skills where language, at least so far, hasn’t been extremely helpful.”

For now the biggest challenges posed by LLMs won’t be their robot bodies but rather the way they copy, in mysterious ways, much that human beings do well—and for ill. An LLM, Tellex says, is “a kind of gestalt of the Internet. So all the good parts of the Internet are in there somewhere. And all the worst parts of the Internet are in there somewhere, too.” Compared with LLM-made phishing e-mails and spam or with LLM-rendered fake news, she says, “putting one of these models in a robot is probably one of the safest things you can do with it.”

via Scientific American https://ift.tt/BcuzAeT

March 6, 2024 at 01:24PM

Microsoft is killing Android apps on Windows 11

https://www.pcworld.com/article/2256353/microsoft-is-killing-android-apps-on-windows-11.html

Microsoft is unexpectedly killing off its support for Android apps within Windows 11, although you’ll have a year to keep playing games on your Windows tablet before support officially expires.

But if you haven’t already installed support for Android apps, you’re out of luck.

Microsoft isn’t saying exactly why it’s ending support for the Windows Subsystem for Android, though the news came via an official Microsoft developer document that Windows Central spotted. That means that the existing Android app store on Windows, published by Amazon, will cease working.

“Microsoft is ending support for the Windows Subsystem for Android (WSA),” Microsoft wrote. “As a result, the Amazon Appstore on Windows and all applications and games dependent on WSA will no longer be supported beginning March 5, 2025. Until then, technical support will remain available to customers.”

Unfortunately, it also sounds like if you didn’t act fast, your ability to play Golf Clash on a Surface Pro tablet is gone forever. “Customers that have installed the Amazon Appstore or Android apps prior to March 5, 2024, will continue to have access to those apps through the deprecation date of March 5, 2025,” Microsoft added. (Emphasis ours.)

Amazon also posted a FAQ providing a few more details. “Apps installed from the Amazon Appstore on your Windows 11 devices will continue to work until March 5, 2025,” the company said. “While we expect no immediate impact on your ability to access the applications between March 2024 and March 2025, over time, some apps may not function properly.”

Why did Microsoft kill off Android apps on Windows? If I had to make a guess, it was because they stunk. The real killer was the lack of formal access to the Google Play Store, which meant that users had to download apps from Amazon’s app store, which sort of feels like a knockoff. And the Amazon store is still full of what appear to be junky, pay-to-win games and apps. Finally, while there still are Windows tablets from Microsoft and Lenovo, there are basically zero Windows tablets catering to consumers. All that probably didn’t help Microsoft’s usage metrics.

I was, however, able to download the Kindle for Android app onto a Windows 11 PC just a few minutes ago. So if you want to try out Android on Windows, act fast.

via PCWorld https://www.pcworld.com

March 5, 2024 at 12:20PM

Could Blue Origin Actually Beat SpaceX to the Moon?

https://gizmodo.com/could-blue-origin-beat-spacex-to-the-moon-nasa-artemis-1851308542

Blue Origin, the aerospace company founded by Jeff Bezos, is finally setting some ambitious timelines, saying it plans to conduct an uncrewed Moon landing in as little as a year from now, deploying a demonstration version of its Blue Moon Mark 1 (MK1) cargo lander. This ramps up the space rivalry big time, putting Bezos head-to-head with Musk in a potential lunar showdown.

John Couluris, senior vice president for lunar permanence at Blue Origin, discussed these plans during an interview on CBS’s 60 Minutes, which aired on Sunday, March 3. “We’re expecting to land on the Moon between 12 and 16 months from today,” he said. “I understand I’m saying that publicly, but that’s what our team is aiming towards.”

Couluris knows he needs to be careful with his phrasing; a Congressional memo recently accused Rocket Lab of misrepresenting the launch readiness of its upcoming Neutron rocket to “gain competitive advantage” against rival bidders for a Space Force contract. Overly optimistic wording can cost a company lucrative deals, but Blue Origin is making a concerted effort to shed its image as the company that likes to take its sweet time.

The upcoming pathfinding mission, known as MK1-SN001, is meant to showcase various capabilities of the MK1 cargo vehicle. Key tests will include checking the BE-7 engine, the cryogenic fluid power and propulsion systems, and the avionics, as well as maintaining steady communication links and achieving landings within 328 feet (100 meters) of the target. After the pathfinder mission, MK1 will be offered to customers, but MK1-SN001 will also serve as a critically important test in verifying the technologies needed for Blue Origin’s Human Landing System, known as Blue Moon, which it’s building for NASA.

The newly stated timeline of just 12 to 16 months from now comes as a surprise, given that the project only officially began in May 2023, when NASA announced the $3.4 billion contract with Blue Origin to develop a second Moon lander for its Artemis missions. Blue Moon Mark 1 is included in the agreement—a lunar cargo lander meant to pave the way for the human-friendly version. NASA contracts for the first human landing system were previously awarded to SpaceX, for Artemis 3 and 4, valued at $2.89 billion and $1.15 billion, respectively.

In contrast to the single-use MK1, the 52-foot-tall (16-meter) Blue Moon is designed for repeat missions. It will transport astronauts to the lunar surface and then bring them back to lunar orbit. Significantly, Blue Origin intends to launch Blue Moons to lunar orbit, “and we’ll leave them there,” Couluris explained. “And we’ll refuel them in orbit, so that multiple astronauts can use the same vehicle back and forth.”

The company’s ambitious timeline is also surprising given that it has yet to launch its 320-foot (98-meter) New Glenn rocket—the designated launch vehicle for both MK1 and Blue Moon. That said, Blue Origin raised its rocket for the first time during recent tests at Cape Canaveral Launch Complex 36 in Florida. Its inaugural launch could happen later this year. Finally.

The bold new timelines and Blue Origin’s markedly more assertive approach are not entirely unexpected. Last year, the company hired former Amazon executive David Limp as CEO to accelerate development. Under previous CEO Bob Smith, who stepped down after six years of service, the company was often criticized for its ultra-cautious, snail’s pace approach to spaceflight. Blue Origin may or may not hit the timelines disclosed by Couluris, but it certainly wants to give the impression that it’s trying.

Blue Origin is not going it alone, forming the National Team consisting of Lockheed Martin, Boeing, Draper, Astrobotic, and Honeybee Robotics. NASA wants the fully reusable four-person Blue Moon lander for the Artemis 5 mission, currently scheduled for 2029.

SpaceX and NASA intend to leverage Starship as the human landing system for Artemis 3 and 4, scheduled for 2026 and 2028. Artemis 3 was originally supposed to happen in 2025, but a recent report from the Government Accountability Office warned of potential delays, saying SpaceX has made limited progress in developing the technologies required to “store and transfer propellant while in orbit,” as a “critical aspect” of the company’s plan “is launching multiple tankers that will transfer propellant to a depot in space before transferring that propellant to the human landing system.” In January, NASA made it official, saying Artemis 3 won’t happen until 2026 at the earliest due to these and other delays.

It’s not entirely clear if SpaceX will meet the required timelines, as Starship remains a rocket under development, let alone a human-rated landing system; the experimental rocket has flown on two tests to date, with a third pending. Importantly, SpaceX needs to perform a demo mission to the Moon prior to Artemis 3, the timeline of which is entirely ambiguous at this point. It’s conceivable, though certainly not guaranteed, that Blue Origin’s pending MK1-SN001 mission will happen before SpaceX’s uncrewed demo on the Moon. That would be very interesting, adding more fuel to the Elon Musk and Jeff Bezos rivalry.

As far as NASA is concerned, it’s all good. Speaking to 60 Minutes during the same episode, NASA associate administrator Jim Free noted the importance of having access to multiple lunar landers. “If we have a problem with one, we’ll have another one to rely on,” he said. “If we have a dependency on a particular aspect in SpaceX or Blue Origin, and it doesn’t work out, then we have another lander that can take our crews.”

The space race between SpaceX and Blue Origin is—finally—heating up. And there’s even more to this story. As Ars Technica notes, rumors are swirling that Blue Origin is staffing up for an undisclosed project to develop a next-gen spacecraft, one that would rival SpaceX’s Crew Dragon and Sierra Space’s upcoming Dream Chaser space plane. Bring it on, I say.

via Gizmodo https://gizmodo.com

March 5, 2024 at 11:00AM