NASA’s OSIRIS-REx Probe Successfully Stows Space-Rock Sample

https://www.scientificamerican.com/article/nasas-osiris-rex-probe-successfully-stows-space-rock-sample/


NASA’s pioneering OSIRIS-REx probe has bagged up its precious asteroid sample for return to Earth.

OSIRIS-REx has finished stowing the bits of the carbon-rich asteroid Bennu that it snagged last Tuesday (Oct. 20), successfully locking the material into the spacecraft’s return capsule, mission team members announced Thursday (Oct. 29).

And the sample appears to be substantial—far heftier than the 2.1 ounces (60 grams) the mission had set as a target, team members said. Indeed, OSIRIS-REx collected so much material on Oct. 20 that its sampling head couldn’t close properly; the head’s sealing Mylar flap was wedged open in places by protruding Bennu pebbles.

The OSIRIS-REx team noticed that issue last week when examining photos of the head and its collected sample; flakes of escaped asteroid material drifted through the frames. To minimize the amount lost, the team decided to expedite the precise and complex stowing procedure, which was supposed to happen next week.

So, over the course of 36 hours on Tuesday and Wednesday (Oct. 27 and Oct. 28), engineers directed OSIRIS-REx to deposit the sampling head, which sat at the end of the probe’s robotic arm, into the return capsule; tug on the head to make sure it was secured properly; sever connections with the robotic arm; and seal the return capsule by closing two latches.

This was all done while OSIRIS-REx was about 205 million miles (330 million kilometers) from Earth, meaning it took 18.5 minutes for each command to reach OSIRIS-REx, and another 18.5 minutes for each update from the probe to come back down to Earth. 
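For a quick sanity check on that delay, here’s the arithmetic (a minimal sketch using the article’s distance figure):

```python
# One-way light-travel time from OSIRIS-REx to Earth, using the
# article's distance of roughly 330 million kilometers.
SPEED_OF_LIGHT_KM_S = 299_792.458  # kilometers per second

delay_seconds = 330e6 / SPEED_OF_LIGHT_KM_S
print(f"One-way signal delay: {delay_seconds / 60:.1f} minutes")
# Prints about 18.3 minutes, matching the roughly 18.5-minute figure above.
```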

“We wanted to only attempt stow one time, and we wanted to make sure we were successful,” OSIRIS-REx mission operations manager Sandra Freund, of Lockheed Martin Space in Littleton, Colorado, said during a NASA news conference Thursday. “And we definitely were.”

The change of plans required a last-minute reallocation of time on NASA’s Deep Space Network (DSN), the system of radio telescopes that the agency uses to communicate with its far-flung probes. Because the stow operation was so important and so involved, OSIRIS-REx needed a large block of continuous DSN time, which other NASA missions sacrificed for the greater good.

It’s unclear exactly how much asteroid material now sits in OSIRIS-REx’s return capsule, which will come down to Earth in September 2023. The team canceled a planned post-sampling weighing procedure that would have involved spinning the probe, because this maneuver would have resulted in more sample loss. (Moving the arm—to photograph the sample and conduct the stow operation, for example—imparted grain-liberating acceleration, mission team members explained. So they wanted to minimize such motions.) 

But there’s definitely a lot of asteroid material on board, said mission principal investigator Dante Lauretta of the University of Arizona.

The sampling operation on Oct. 20 went extremely well, Lauretta said, and the head penetrated deep into Bennu’s surface—perhaps 19 inches (48 centimeters) or more. The team is confident that OSIRIS-REx pretty much filled its sampling head that day, meaning it likely backed away from Bennu with about 4.4 lbs. (2 kilograms) of collected material.

The losses over the ensuing days appear minimal by comparison—probably “tens of grams” in total, Lauretta said. And recent photos of the sampling head showed that it was still packed. Mission team members could only see 17% of the head’s volume in those photos, but they estimate that about 14.1 ounces (400 g) of Bennu material is jammed into that space, Lauretta said.

If that estimate is accurate, and if the 17% slice is representative of the entire sampling head, then OSIRIS-REx may have held onto more than 4.4 lbs. (2 kg) of sample. Lauretta’s overall prediction is more measured than that, but it’s still decidedly bullish.
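Spelled out, that back-of-the-envelope extrapolation looks like this (a sketch using only the figures quoted above):

```python
# Naive whole-head estimate from the article's figures: about 400 grams
# of material visible in the roughly 17% of the head the photos covered.
visible_mass_g = 400
visible_fraction = 0.17

estimated_total_g = visible_mass_g / visible_fraction
print(f"Naive whole-head estimate: {estimated_total_g:.0f} g")
# Prints about 2353 g -- more than 2 kg, if that 17% slice is representative.
```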

“I believe we still have hundreds of grams of material in the sample collector head—probably over a kilogram, easily,” Lauretta said during today’s news conference.

That would be great news. Such a large amount would allow lots of research groups to study the Bennu dirt and rock, and to perform a wide variety of experiments with the pristine cosmic sample. As an example, Lauretta pointed to organic chemistry—specifically, analyses involving sugars.

Sugars are “expected to be present in very low abundances [on asteroids like Bennu], requiring several grams of sample to extract them from,” Lauretta said. “And we thought that would not be feasible with the 15-gram allocation but something that does open up with the larger mass available for analysis.”

(The OSIRIS-REx science team gets to analyze up to 25% of the returned sample. If the total sample ended up being the targeted 60 grams, the team would get to study up to 15 grams of it.)

If all goes according to plan, such experiments will reveal a great deal about the solar system’s early days and the role that asteroids like Bennu may have played in helping life get going on Earth, by delivering lots of water and carbon-containing organic chemicals. Shedding light on such big questions is the chief goal of the $800 million OSIRIS-REx mission, which launched in September 2016 and arrived at Bennu in December 2018.

The mission’s next big steps involve gearing up for the return trip (though engineers are also trying to figure out if they can somehow get a rough mass estimate of the now-stowed sample). Orbital dynamics dictate that OSIRIS-REx must start heading for home between early March and May, and the current plan is to target the earliest part of that window, team members said today.

OSIRIS-REx is NASA’s first asteroid-sampling mission, but it’s not the first one in history. Japan’s Hayabusa mission delivered small bits of the stony asteroid Itokawa to Earth in 2010, and that probe’s successor, Hayabusa2, is scheduled to return a sample of the carbon-rich asteroid Ryugu this coming December. 

Copyright 2020 Space.com, a Future company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.


Gamers Forge Their Own Paths When It Comes to Accessibility

https://www.wired.com/story/accessibility-video-games-ablegamers


When Mark Barlet realized there weren’t many gaming resources available for a friend with multiple sclerosis, he and Stephen Spohn built a solution that would change countless lives. They created AbleGamers and turned a personal mission into a global vision of video game accessibility for all.

“AbleGamers hasn’t followed any path. We’ve created our own,” Spohn said. He’s AbleGamers’ COO and has spinal muscular atrophy, which attacks his muscles and limits movement from the neck down. “We entered an industry with a bunch of staircases and brought our own ramps.”

Spohn said the “secret sauce” of AbleGamers is to “do as much good as we possibly can.” That’s a tall order when you consider there are 46 million people with disabilities in the United States alone, according to Spohn.

AbleGamers’ impact on the disabled gaming community isn’t always well publicized. In 2011, the group unveiled the Adroit Switchblade, an accessible controller. Years later, Microsoft took notice of the controller’s potential and worked with AbleGamers in secret to create its spiritual successor, the Microsoft Xbox Adaptive Controller (XAC). This controller is a household staple for people with disabilities and is much more affordable than the Switchblade.

According to Brannon Zahand, Senior Gaming Accessibility program manager at Microsoft, this new controller was the key to breaking down the “unintentional barrier” that kept people from playing games. Along with AbleGamers, Microsoft worked with multiple organizations, including the Cerebral Palsy Foundation, to provide “an effective, customizable solution for gamers with limited mobility.”

AbleGamers offers Accessible Player Experiences, an intensive certification course for designing games with an eye toward accessibility. Every achievement within AbleGamers has come from “sheer determination and will.” The organization also supports innovative, specialized controllers and gives them to gamers in need.

The Microsoft gaming accessibility boot camp is another route to creating games for everyone, not just those who are able-bodied. Xbox accessibility guidelines are available to developers to “provide guardrails when developing their game and as a checklist for validating the accessibility of their title,” Zahand said.

“Game accessibility advocates, subject matter experts, and community members present to our teams on a variety of topics such as inclusive design best practices and various assistive technologies that can be leveraged by our products,” Zahand said.

Overcoming Visible and Invisible Hurdles

Alanah Pearce credits video games with helping her through severe effects of myalgic encephalomyelitis (ME) and tendinitis, including dizziness, joint pain, headaches, and nausea.

“There are days where getting out of bed ultimately isn’t an option, and it can be very frustrating, but video games are always an option for me, and always help me feel like I’m still able to ‘do’ something,” Pearce said.

She plays through her pain, often limiting her gameplay to an hour because rapidly tapping a controller causes her tendinitis to flare and swell. She said advances in games like Naughty Dog’s The Last of Us Part 2 (TLOU2) and the Microsoft XAC are great, but options are extremely limited.

“Largely, developers look at accessibility as an afterthought, so the limitations are self-imposed,” Pearce said. “I suppose it becomes too time-consuming at the end of a development cycle to implement accessibility options, where they should be in consideration from the very beginning.”

Zahand’s team is striving to accomplish this. During Inclusive Design Sprints, gamers with disabilities chat with Microsoft developers and share their experiences playing video games.

“Accessibility must be considered in product design from the very start,” Zahand said. “For game developers and studio teams, we emphasize the importance of partnering with the gaming and disability community throughout the development process.”

Mike Begum, aka “Brolylegs,” has also adapted to the limits of inaccessible gaming. He has arthrogryposis, a condition that limits muscle growth. He stays mobile with a special wheelchair that lets him lie on his stomach.

He’s mastered fighting games like Street Fighter by using an arm and parts of his face to manipulate a controller. Begum has used this technique since childhood and has traveled all over the country for esports competitions. Traveling by air can be especially painful, but he loves it.


AI has cracked a key mathematical puzzle for understanding our world

https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/

Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them as a mechanical-engineering undergrad, I’ve never used them in the real world.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a known PDE called Navier-Stokes that is used to describe the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.
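For the curious, here’s what the incompressible form of Navier-Stokes looks like (a standard textbook statement, not anything specific to the paper discussed below):

```latex
% Incompressible Navier-Stokes: momentum balance plus conservation of mass.
% u = velocity field, p = pressure, rho = density, nu = viscosity, f = external forces.
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla \cdot \mathbf{u} = 0
```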

These calculations are highly complex and computationally intensive, which is why disciplines that use a lot of PDEs often rely on supercomputers to do the math. It’s also why the AI field has taken a special interest in these equations. If we could use deep learning to speed up the process of solving them, it could do a whole lot of good for scientific inquiry and engineering.

Now researchers at Caltech have introduced a new deep-learning technique for solving PDEs that is dramatically more accurate than deep-learning methods developed previously. It’s also much more generalizable, capable of solving entire families of PDEs—such as the Navier-Stokes equation for any type of fluid—without needing retraining. Finally, it is 1,000 times faster than traditional numerical solvers, which would ease our reliance on supercomputers and increase our computational capacity to model even bigger problems. That’s right. Bring it on.

Hammer time

Before we dive into how the researchers did this, let’s first appreciate the results. In a gif accompanying the original article, you can see an impressive demonstration. The first column shows two snapshots of a fluid’s motion; the second shows how the fluid continued to move in real life; and the third shows how the neural network predicted the fluid would move. The third looks basically identical to the second.

The paper has gotten a lot of buzz on Twitter, and even a shout-out from rapper MC Hammer. Yes, really.

Okay, back to how they did it.

When the function fits

The first thing to understand here is that neural networks are fundamentally function approximators. (Say what?) When they train on a data set of paired inputs and outputs, they’re actually searching for the function, or series of math operations, that will transform one into the other. Think about building a cat detector. You train the neural network by feeding it lots of images of cats and things that are not cats (the inputs) and labeling each group with a 1 or 0, respectively (the outputs). The neural network then looks for the best function that can convert each image of a cat into a 1 and each image of everything else into a 0. That’s how it can look at a new image and tell you whether or not it’s a cat. It’s using the function it found to calculate its answer—and if its training was good, it’ll get it right most of the time.
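As a minimal sketch of that idea (assuming PyTorch; the images and labels here are random stand-ins, not real data):

```python
import torch
from torch import nn

# Toy "cat detector": learn a function mapping images to 1 (cat) or 0 (not cat).
images = torch.randn(64, 3 * 32 * 32)          # stand-in flattened 32x32 RGB images
labels = torch.randint(0, 2, (64, 1)).float()  # stand-in 0/1 labels

model = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 1))
loss_fn = nn.BCEWithLogitsLoss()               # standard loss for binary labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):                        # nudge the function toward the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# model(new_image) now approximates the image -> cat/not-cat function.
```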

Conveniently, this function approximation process is what we need to solve a PDE. We’re ultimately trying to find a function that best describes, say, the motion of air particles over physical space and time.

Now here’s the crux of the paper. Neural networks are usually trained to approximate functions between inputs and outputs defined in Euclidean space, your classic graph with x, y, and z axes. But this time, the researchers decided to define the inputs and outputs in Fourier space, which is a special type of graph for plotting wave frequencies. The intuition that they drew upon from work in other fields, says Anima Anandkumar, a Caltech professor who oversaw the research, is that something like the motion of air can actually be described as a combination of wave frequencies. The general direction of the wind at a macro level is like a low frequency with very long, lethargic waves, while the little eddies that form at the micro level are like high frequencies with very short and rapid ones.
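You can see that intuition with an ordinary Fourier transform (a sketch with NumPy; the “wind” signal here is synthetic):

```python
import numpy as np

# A synthetic "wind" signal: one slow, sweeping component (low frequency)
# plus small, rapid eddies (high frequency).
t = np.linspace(0, 1, 512, endpoint=False)
wind = np.sin(2 * np.pi * 2 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(wind)           # move the signal into Fourier space
freqs = np.fft.rfftfreq(512, d=1/512)  # the frequency of each coefficient

low = spectrum.copy();  low[freqs > 10] = 0    # keep only the slow sweep
high = spectrum.copy(); high[freqs <= 10] = 0  # keep only the eddies

slow_part = np.fft.irfft(low)   # recovers the slow, 2-cycle component
fast_part = np.fft.irfft(high)  # recovers the rapid, 40-cycle component
```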

Why does this matter? Because it’s far easier to approximate a Fourier function in Fourier space than to wrangle with PDEs in Euclidean space, which greatly simplifies the neural network’s job. Cue major accuracy and efficiency gains: in addition to its huge speed advantage over traditional methods, their technique achieves a 30% lower error rate when solving Navier-Stokes than previous deep-learning methods.
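Here’s the core trick in miniature: a layer that does its learned filtering in Fourier space (a toy one-dimensional sketch in the spirit of the paper’s Fourier layers; the real network learns its weights by training and works in higher dimensions):

```python
import numpy as np

def fourier_layer(x, weights, n_modes=16):
    """Toy 1-D spectral layer: transform to Fourier space, scale the
    lowest n_modes frequencies by (learned) complex weights, transform back."""
    spectrum = np.fft.rfft(x)
    out = np.zeros_like(spectrum)
    out[:n_modes] = spectrum[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out, n=x.shape[0])

# In the real network these weights come from gradient descent;
# here they're random stand-ins.
x = np.random.randn(512)
weights = np.random.randn(16) + 1j * np.random.randn(16)
y = fourier_layer(x, weights)
```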

The whole thing is extremely clever, and also makes the method more generalizable. Previous deep-learning methods had to be trained separately for every type of fluid, whereas this one only needs to be trained once to handle all of them, as confirmed by the researchers’ experiments. Though they haven’t yet tried extending this to other examples, it should also be able to handle every earth composition when solving PDEs related to seismic activity, or every material type when solving PDEs related to thermal conductivity.

Super-simulation

Anandkumar and the lead author of the paper, Zongyi Li, a PhD student in her lab, didn’t do this research just for the theoretical fun of it. They want to bring AI to more scientific disciplines. It was through talking to various collaborators in climate science, seismology, and materials science that Anandkumar first decided to tackle the PDE challenge with her students. They’re now working to put their method into practice with other researchers at Caltech and the Lawrence Berkeley National Laboratory.

One research topic Anandkumar is particularly excited about: climate change. Navier-Stokes isn’t just good at modeling air turbulence; it’s also used to model weather patterns. “Having good, fine-grained weather predictions on a global scale is such a challenging problem,” she says, “and even on the biggest supercomputers, we can’t do it at a global scale today. So if we can use these methods to speed up the entire pipeline, that would be tremendously impactful.”

There are also many, many more applications, she adds. “In that sense, the sky’s the limit, since we have a general way to speed up all these applications.”


This robotic hawk can shape-shift as it flies

https://www.popsci.com/story/technology/robotic-hawk-drone/

[Image: The LisHawk. (Enrico Ajanic, EPFL)]

Drones are common enough that it’s easy to picture their basic design variations. Fixed-wing drones look like miniature airplanes. Others use propellers—typically four of them—to pull themselves up into the sky, kind of like helicopters. A few drones combine those ideas by using props to get off the ground and then tilting as they fly, so the sides of the drone can act like wings and provide lift. Amazon’s package delivery drone does this, and so does a big cargo carrier from Bell.

Now picture how a bird soars through the air, with all the ways that its wings and tail can move. Biological flight like that feels pretty different from the way those other gadgets whiz around. But engineers in Switzerland have unveiled a robotic bird that emulates the way a hawk flies. Their results are published today in the journal Science Robotics.

Their goal was to create a bird-like drone capable of cruising long distances at high speed (like a fixed-wing plane) while remaining highly maneuverable. Their creation was inspired by a real bird called the northern goshawk.

“This bird hunts in forests, so it’s super agile,” says Enrico Ajanic, a doctoral student and roboticist at the Swiss Federal Institute of Technology in Lausanne. They wanted to be able to determine: “Why is this northern goshawk so agile? But at the same time, [it] can also be quite efficient—it’s also a migratory bird.” By creating a robot that can accomplish those varied flight goals, Ajanic says they can make a flying machine that’s the best of both worlds.

[Image: A northern goshawk. (Ondrej Prosicky / Deposit Photos)]

A drone that could do that, he argues, would be fantastic at cruising through an urban environment. “Big cities require a drone which can fly long distances, so you have to be very efficient,” he says. “But at the same time, you also need to avoid obstacles, because these cities are cluttered.”

The result is a creation made from carbon fiber and other materials that’s called LisHawk. At its largest, the wingspan is 3.4 feet across. In some ways, the robo-hawk is a lot like a real northern goshawk. Its tail, which can fan outward, is about the same length—around 0.8 feet. And its wing chord (the front-to-back width of the wing) is about a foot, roughly the same as its biological counterpart’s. The wings can extend outward or tuck inward. The tail can fan out, and move up and down and side to side. That morphing ability broadens the range of flying the LisHawk is good at; a typical drone can’t morph like that.

[Image: Enrico Ajanic and the LisHawk. (EPFL)]

There are challenges, though, with trying to duplicate nature with artificial materials. The main one is that the robotic hawk doesn’t flap its wings—it uses a propeller. “The propeller is quite efficient, and from a mechanical engineer point of view, it’s a simple system,” he says. A drone that flapped its wings would be difficult to create, and picturing the opposite scenario is just funny: a bird with a propeller sticking out of its beak.

Overall, Ajanic is pleased with how well they accomplished the goal of creating a shape-shifting, bio-inspired aircraft. He says that tech like this could be used with other drones to “improve their flight performance.”

For fast cruise flight, he says, the ideal configuration has the wings and tail tucked inward, a position in which the minimum speed is 17 miles per hour. For slower but more agile flight, with the tail and wings extended, the minimum speed drops to 9 mph. Changing wing shape in the air that way is a rarity in aviation: planes like the F-14, the fighter jet from the original Top Gun, did it.
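The standard lift equation hints at why extending the wings lowers that minimum speed (a textbook relation, not a calculation from the paper): setting lift equal to weight and solving for velocity shows that minimum speed shrinks as wing area grows.

```latex
% From the lift equation L = (1/2) rho V^2 S C_L, with lift L set equal to weight W:
% rho = air density, S = wing area, C_{L,max} = maximum lift coefficient.
V_{\min} = \sqrt{\frac{2W}{\rho \, S \, C_{L,\max}}}
% Extending the wings increases S, so the minimum flyable speed drops.
```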

The LisHawk follows in the feathery footsteps of a similar robo-bird called the PigeonBot, which debuted back in January. Unlike the PigeonBot, this robot-hawk doesn’t use actual feathers from a real bird—Ajanic’s team devised an artificial solution.

Realistically, we’re unlikely to see hawk- or pigeon-like robots zipping through cities anytime soon, if ever—fixed-wing drones and quadcopters remain the industry norm, and rural areas are both safer settings for them and friendlier from a regulatory standpoint. But that doesn’t change the fact that a robotic creation that finds its inspiration in biology is, to put it simply, very cool.


Geoengineering: A Horrible Idea We Might One Day Have to Do [Video]

https://www.geeksaresexy.net/2020/10/27/geoengineering-a-horrible-idea-we-might-one-day-have-to-do-video/

By the end of the 21st century, humanity is becoming desperate. Decades of heatwaves and droughts have led to unusually poor harvests, while the warming oceans yield fewer fish each year. In the tropical zones, millions suffer from famine and resource wars have made millions more flee to the north. As things quickly get worse, in an act of desperation, the world’s governments decide to enact an emergency plan…

It is far from certain that a grim scenario like this will play out. But the failure of world leaders to effectively address climate change makes it far from impossible.

So in the near future it might become necessary to try something radical to slow down rapid climate change: Geoengineering. Interventions so massive in scale that they might undo centuries of human behavior. Or make everything much worse.

What is geoengineering, is it really an option, and what if it goes wrong?

[In a Nutshell]

The post Geoengineering: A Horrible Idea We Might One Day Have to Do [Video] appeared first on Geeks are Sexy Technology News.


GM Super Cruise beats Tesla Autopilot again in latest hands-free test

https://www.autoblog.com/2020/10/28/tesla-autopilot-loses-gm-super-cruise-wins/


DETROIT — General Motors‘ Super Cruise once again edged Tesla’s Autopilot in an evaluation of 17 vehicles equipped with active driving assistance systems (ADAS) by Consumer Reports, the testing organization said on Wednesday.

A Tesla Model Y fitted with Autopilot finished “a distant second,” the group said, to a Cadillac CT6 equipped with Super Cruise, which GM is rolling out to more than 20 vehicles — including its new Hummer electric pickup truck — over the next three years.

Safety and insurance researchers have frequently warned of the risks of consumers overestimating ADAS systems’ abilities, a misconception reinforced by some automakers calling their products Autopilot, ProPilot or Co-Pilot.

In 2018, the Cadillac CT6 with Super Cruise scored higher than a Tesla Model 3 with Autopilot in a Consumer Reports test of just four vehicles equipped with ADAS.

In the latest test, conducted this summer on a track and on public roads, the Cadillac scored 69 points out of a possible 100, while the Tesla scored 57. A Lincoln Corsair equipped with Ford’s Co-Pilot 360 system finished third with 52.

The critical difference in the Super Cruise system is a driver-facing infrared camera that makes sure the driver is paying attention to the road and is ready to take over manual control when necessary, said Kelly Funkhouser, head of connected and automated vehicle testing at Consumer Reports.

The group noted that Autopilot can shut off abruptly in some situations, while Super Cruise did a better job of notifying the driver when the system is disengaging.

In recent European safety testing, a Tesla Model 3 with Autopilot placed sixth out of 10 systems, getting high marks for performance and ability to respond to emergencies, but falling short on its ability to maintain a driver’s focus on the road.


MIT tests autonomous ‘Roboat’ that can carry two passengers

https://www.engadget.com/mit-autonomous-roboat-ii-carries-passengers-140145138.html

We’ve heard plenty about the potential of autonomous vehicles in recent years, but MIT is thinking about different forms of self-driving transportation. For the last five years, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab have been working on a fleet of autonomous boats to deploy in Amsterdam. Last year, we saw the autonomous “roboats” that could assemble themselves into a series of floating structures for various uses, and today CSAIL is unveiling the “Roboat II.” What makes this one particularly notable is that it’s the first that can carry passengers.

The boat is pretty small, only two meters long, and can carry two passengers through the canals of Amsterdam. Roboat II has four propellers so it can move in any direction, and it also includes LiDAR, GPS and inertial sensors to help it navigate. While an individual boat looks rather tiny, the boats are modular, like the original Roboat. This means they can self-assemble into a larger vessel that’s commanded by a main “leader” boat.
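The reason four propellers buy you motion in any direction is thrust allocation: a mixer maps a desired forward, sideways, and turning command onto the individual thrusters. Here’s a minimal sketch (the thruster geometry is a generic four-thruster layout, not MIT’s actual configuration):

```python
import numpy as np

# Generic four-thruster layout: two thrusters push fore/aft, two push
# side-to-side, and each sits off-center so it also produces yaw torque.
# Rows: surge (forward), sway (sideways), yaw (turning). Columns: one per thruster.
ALLOCATION = np.array([
    [1.0,  1.0, 0.0,  0.0],
    [0.0,  0.0, 1.0,  1.0],
    [0.3, -0.3, 0.2, -0.2],  # lever arms give each thruster some yaw authority
])

def allocate(surge, sway, yaw):
    """Least-squares thruster commands for a desired body-frame motion."""
    desired = np.array([surge, sway, yaw])
    thrusts, *_ = np.linalg.lstsq(ALLOCATION, desired, rcond=None)
    return thrusts

print(allocate(surge=0.0, sway=1.0, yaw=0.0))  # pure sideways motion
```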

MIT looked at the original Roboat as a “quarter-scale” option, with the Roboat II being half-scale; the team is slowly working up to a full-scale option that can carry four to six passengers. That bigger version is already under construction in Amsterdam, but there’s no word on when it’ll be ready for testing. In the meantime, Roboat II seems like it can pretty effectively navigate Amsterdam — MIT says that it autonomously navigated the city’s canals for three hours collecting data and returned to its starting point with a margin of error of less than seven inches.

Going forward, the MIT team expects to keep improving the Roboat’s algorithms to make it better able to deal with the challenges a boat might encounter, like disturbances from currents and waves. They’re also working to make it more capable of identifying and “understanding” objects it comes across so it can better deal with the environment it’s in. Everything the half-scale Roboat II learns will naturally be applied to the full-scale version that’s being worked on now. There’s no word on when we might see that bigger Roboat out in the waters, though.
