What Is Quantum Gravity?

https://www.space.com/quantum-gravity.html

Gravity was the first fundamental force that humanity recognized, yet it remains the least understood. Physicists can predict the influence of gravity on bowling balls, stars and planets with exquisite accuracy, but no one knows how the force interacts with minute particles, or quanta. The nearly century-long search for a theory of quantum gravity — a description of how the force works for the universe’s smallest pieces — is driven by the simple expectation that one gravitational rulebook should govern all galaxies, quarks and everything in between.

“If there is no theory [of quantum gravity], then the universe is just chaos. It’s just random,” said Netta Engelhardt, a theoretical physicist at the Massachusetts Institute of Technology. “I can’t even say that it would be chaotic or random because those are actually legitimate physical processes.”

The edge of general relativity

At the heart of the thorniest problem in theoretical physics lies a clash between the field’s two greatest triumphs. Albert Einstein’s theory of general relativity replaced Isaac Newton’s notion of simple attraction between objects with a description of matter or energy bending space and time around it, and nearby objects following those curved paths, acting as if they were attracted to one another. In Einstein’s equations, gravity is the shape of space itself. His theory kept the traditional description of a smooth, classical universe — one where you can always zoom in further to a smaller patch of space. 

General relativity continues to ace every test astrophysicists throw at it, including situations Einstein never could have imagined. But most experts expect Einstein’s theory to fall short someday, because the universe ultimately appears bumpy, not smooth. Planets and stars are really collections of atoms, which, in turn, are made up of electrons and bundles of quarks. Those particles hang together or break apart by swapping other types of particles, giving rise to forces of attraction and repulsion. 

Electric and magnetic forces, for example, come from objects exchanging particles known as virtual photons. For example, the force sticking a magnet to the fridge can be described as a smooth, classical magnetic field, but the field’s fine details depend on the quantum particles that create it. Of the universe’s four fundamental forces (gravity, electromagnetism, and the strong and weak nuclear forces), only gravity lacks the “quantum” description. As a result, no one knows for sure (although there are plenty of ideas) where gravitational fields come from or how individual particles act inside them. 

The odd force out

The problem is that even though gravity keeps us stuck to the ground and generally acts as a force, general relativity suggests it’s something more — the shape of space itself. Other quantum theories treat space as a flat backdrop for measuring how far and fast particles fly. Ignoring the curvature of space for particles works because gravity is so much weaker than the other forces that space looks flat when zoomed in on something as small as an electron. The effects of gravity and the curvature of space are relatively obvious at more zoomed-out levels, like planets and stars. But when physicists try to calculate the curvature of space around an electron, slight as it may be, the math becomes impossible. 
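To get a rough sense of just how lopsided that comparison is, consider a standard back-of-the-envelope estimate (not from the article): the gravitational pull between two electrons is weaker than their electrostatic repulsion by about 43 orders of magnitude, no matter how far apart they are, since both forces fall off with the square of the distance.

```latex
\frac{F_{\text{grav}}}{F_{\text{elec}}}
  = \frac{G m_e^2 / r^2}{k_e e^2 / r^2}
  = \frac{G m_e^2}{k_e e^2}
  \approx \frac{(6.67\times10^{-11})(9.11\times10^{-31})^2}{(8.99\times10^{9})(1.60\times10^{-19})^2}
  \approx 2.4\times10^{-43}
```

At that scale, the warping of space an individual electron produces is completely swamped by the electromagnetic effects physicists actually measure, which is why treating space as a flat backdrop works so well in particle physics.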

In the late 1940s physicists developed a technique, called renormalization, for dealing with the vagaries of quantum mechanics, which allow an electron to spice up a boring trip in an infinite variety of ways. It may, for instance, shoot off a photon. That photon can split into an electron and its antimatter twin, the positron. Those pairs can then shoot off more photons, which can split into more twins, and so on. While a perfect calculation would require counting up the infinite variety of electron road trips, renormalization let physicists gather the unruly possibilities into a few measurable numbers, like the electron charge and mass. They couldn’t predict these values, but they could plug in results from experiments and use them to make other predictions, like where the electron is going.

Renormalization stops working when theoretical gravity particles, called gravitons, enter the scene. Gravitons also have their own energy, which creates more warping of space and more gravitons, which create more warping, and more gravitons, and so on, generally resulting in a giant mathematical mess. Even when physicists try to pile some of the infinities together to measure experimentally, they end up drowning in an infinite number of piles. 

“It effectively means that you need an infinite number of experiments to determine anything,” Engelhardt said, “and that’s not a realistic theory.”
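In very schematic terms (a caricature of the bookkeeping, not the actual field-theory calculation), the contrast between the two situations can be written like this:

```latex
% Renormalizable theory (e.g., electromagnetism): infinities absorbed into a few measured inputs
\text{prediction} = f\!\left(e_{\text{measured}},\, m_{\text{measured}}\right)

% Perturbative gravity: each order of approximation demands a new measured input
\text{prediction} = f\!\left(c_1,\, c_2,\, c_3,\, \dots\right), \qquad c_1, c_2, c_3, \dots \text{ never ending}
```

In the first case, a handful of numbers taken from experiment (the electron’s measured charge and mass) soak up all the infinities, and everything else can be predicted. In the second, the list of numbers that must be fixed by experiment never terminates, which is exactly the problem Engelhardt describes.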

The theory of general relativity says the universe is a smooth fabric, and quantum mechanics says it’s a bumpy mess of particles. Physicists say it can’t be both.

(Image credit: Shutterstock)

In practice, this failure to deal with curvature around particles grows fatal in situations where lots of mass and energy twist space so tightly that even electrons and their ilk can’t help but take notice — such as the case with black holes. But any particles very near — or worse, inside — the pits of space-time certainly know the rules of engagement, even if physicists don’t. 

“Nature has found a way to make black holes exist,” Robbert Dijkgraaf, director of the Institute for Advanced Study in Princeton, New Jersey, wrote in a publication for the institute. “Now it is up to us to find out what nature knows and we do not yet.” 

Bringing gravity into the fold

Using an approximation of general relativity (Engelhardt called it a “Band-Aid”), physicists have developed a notion of what gravitons might look like, but no one expects to see one anytime soon. One thought experiment suggests it would take 100 years of experimentation by a particle collider as heavy as Jupiter to detect one. So, in the meantime, theorists are rethinking the nature of the universe’s most fundamental elements. 

One theory, known as loop quantum gravity, aims to resolve the conflict between particles and space-time by breaking up space and time into little bits — an ultimate resolution beyond which no zooming can take place. 

String theory, another popular framework, takes a different approach and swaps out particles for fiber-like strings, which behave better mathematically than their point-like counterparts. This simple change has complex consequences, but one nice feature is that gravity just falls out of the math. Even if Einstein and his contemporaries had never developed general relativity, Engelhardt said, physicists would have stumbled upon it later through string theory. “I find that pretty miraculous,” she said.

And in recent decades, string theorists have uncovered further hints that they’re on a productive track, according to Engelhardt. Simply put, the idea of space itself may be distracting physicists from a more fundamental structure of the universe. 

Theorists discovered in the late 1990s that descriptions of a simple, box-like universe including gravity were mathematically equivalent to a picture of a flat universe with only quantum physics (and no gravity). The ability to jump back and forth between the descriptions suggests that space may not be a fundamental ingredient of the cosmos but rather a side effect that emerges from particle interactions.

As hard as it might be for us mortals embedded in the fabric of space to imagine, the relationship between space and particles might be something like the one between room temperature and air molecules. Physicists once thought of heat as a fluid that flowed from a warm room to a cool room, but the discovery of molecules revealed that what we sense as temperature “emerges” from the average speed of air molecules. Space (and equivalently, gravity) may similarly represent our large-scale experience of some small-scale phenomenon. “Within string theory, there are pretty good indications at this point that space is actually emergent,” Engelhardt said.
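The temperature analogy can be made concrete with a textbook result from kinetic theory (an illustration of emergence in general, not a statement about string theory). For an ideal monatomic gas, temperature is just the average kinetic energy of the molecules:

```latex
\frac{3}{2} k_B T = \left\langle \frac{1}{2} m v^2 \right\rangle
```

Here k_B is Boltzmann’s constant, m is the mass of a molecule and the brackets denote an average over all the molecules in the room. No single molecule has a temperature; the quantity exists only at the level of the crowd. The hint from string theory is that space, and with it gravity, may stand in a similar relationship to some still-unknown microscopic ingredients.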

But string theory’s universe in a box has a different shape from the one we see (although Engelhardt said this difference may not be a deal breaker, since quantum gravity could act the same way for all possible universe shapes). Even if lessons from the box universe do apply in reality, the mathematical framework remains rough. Physicists are a long way from cutting their theoretical ties to space and achieving an accurate description of quantum gravity in all its bumpy glory. 

While they continue to work out the substantial mathematical kinks in their respective theories, some physicists harbor hope that their astrophysical observations may someday nudge them in the right direction. No experiment to date has diverged from general relativity’s predictions, but in the future, a diverse array of gravitational-wave detectors sensitive to many wave sizes could catch the subtle whispers of gravitons. However, Engelhardt said, “my instinct would be to look at the cosmos rather than to look at particle colliders.”


via Space.com https://ift.tt/2CqOJ61

August 27, 2019 at 01:31PM

Why You Should Delete Your (Ancient) Foursquare Data

https://lifehacker.com/why-you-should-delete-your-ancient-foursquare-data-1837615177

Remember Foursquare? I used to use it (and the company’s other apps) to keep detailed, digital recordings of everywhere I went, which was the cool thing to do back in 2010. And while I don’t use Foursquare’s Android or iOS apps anymore, I’ve given the company a lot of information about me. Thankfully, it’s easy to see all the data the company has collected from you—and delete it.

You might be wondering why this is important right now. If you haven’t used the app in some time, it’s still a good privacy practice to review and manage your data. And why let your accounts sit dormant when you can take a few minutes to delete them? (That’s even factoring in the amount of time it might take you to reset the password you’ve long since forgotten.)

If you’re a current Foursquare user, reviewing and managing your data is even more important. As Intelligencer recently described:

“In addition to all of those active check-ins, at some point Foursquare began collecting passive data using a ‘check-in button you never had to press.’ It doesn’t track people 24/7 (in addition to creeping people out, doing so would burn through phones’ batteries), but instead, if users opt-in to allow the company to ‘always’ track their locations, the app will register when someone stops and determine whether that person is at a red light or inside an Urban Outfitters. The Foursquare database now includes 105 million places and 14 billion check-ins. The result, experts say, is a map that is often more reliable and detailed than the ones generated by Google and Facebook.”

Here’s an even more eye-opening figure from Intelligencer’s report: “All told, the company now has ‘interest profiles’ for over 100 million U.S. consumers.”

How to view all the data Foursquare’s apps have collected on you

This one’s easy. Simply visit your Foursquare Privacy Settings and click on the can’t-miss-it “Export My Data” button. You’ll then receive an email at the address associated with your Foursquare account confirming your data request.

Then, you’ll need to do a bit of waiting. As the email indicates, this data-request service isn’t immediate. Nor is it an “I’ll go watch a little Netflix while I wait” kind of deal. In the meantime, however, you can spend a little time tweaking your Foursquare privacy settings to lock down how your data is used, assuming you aren’t planning to quit the service entirely.

Tweak these Foursquare privacy settings

There aren’t a ton of options you can play with via Foursquare’s web-based account settings, but there are a few worth checking out. First off, consider what kind of activity you’re sharing with your social networks. Perhaps your Foursquare app is feeding into more social networks than you realized, or you simply don’t want to let the entire world (of your friends) know what you’re up to. Regardless of the reason, head to that section and disconnect any social networks from Foursquare that you want; I’ve removed all of them, but I’m a bit reclusive.

After that, jump down to “Connected Apps.” Like most other services, you can use this section to disconnect any apps that are tapping into your Foursquare data. I’d remove everything because, again, digital recluse, but it’s at least worth pruning out any sites or services you no longer use (like Klout, in my case).

Finally, click on “Privacy Settings.” The checked boxes should be pretty self-explanatory—disable whatever you want, like Foursquare’s background-location sharing or targeted advertising. Once you’ve done that, make sure you check out this link to delete any background location data Foursquare has already collected from you.

You can also go into the Foursquare or Swarm apps on your Android or iOS devices and disable background location-sharing, assuming you’re keeping the services around. If not…

Deleting your Foursquare account

There’s no way to pick and choose which location data you want Foursquare to have. If you’re done with the service, or want to remove all of your data from it, deleting your account is easy. Click through to Foursquare’s “Delete Account” page, look for the big red scary button, and give it a click.

You’ll have to enter your password to confirm that you want to delete your account, but that’s the only real hurdle you’ll have to leap over to finish the process. Again, if you’ve requested a data dump from Foursquare, make sure you receive that before you delete your account, or else that request won’t process.

To Foursquare’s credit, the process—on your end, at least—is instantaneous. You’ll be dumped back to Foursquare’s homepage, and attempting to log in with your old credentials will do nothing.

via Lifehacker https://lifehacker.com

August 27, 2019 at 01:22PM

Japan’s Asteroid Probe Packs Up and Prepares for Return to Earth

https://gizmodo.com/japans-asteroid-probe-packs-up-and-prepares-for-return-1837615474

A view of Ryugu’s surface, with the shadow of Hayabusa2 seen at far left.
Image: JAXA

Following a pair of successful touchdowns onto the surface of Ryugu, Japan’s Hayabusa2 probe is packing its precious cargo as it prepares to bring samples of the asteroid back to Earth.

Hayabusa2 performed two touchdowns after arriving at the asteroid in June 2018. The first, on February 21, 2019, collected samples directly from Ryugu’s surface, and the second, on July 11, 2019, collected material from deeper within the asteroid. With both touchdowns now in the history books, JAXA mission planners are shifting to the return phase.

Photographic and video evidence of both encounters suggests the efforts to gather materials were successful, but we won’t know for sure until the probe returns its sample canisters to Earth in late 2020.

Hayabusa2 still has some work to do around Ryugu, but JAXA is already preparing the probe for its 300-million-kilometer (186-million-mile) journey home. On Monday, JAXA conducted a successful procedure in which the sample chamber was placed inside the probe’s re-entry capsule, as the space agency announced in a tweet.

Unlike the previous Hayabusa mission in 2010, in which both the probe and the capsule re-entered Earth’s atmosphere, the current mission will have only the capsule endure atmospheric re-entry. Hayabusa2 itself is expected to stay in space and possibly participate in a yet-to-be-determined future mission.

JAXA is currently seeking permission from the Australian government to use its territory for the landing of the re-entry capsule. The Japanese space agency is targeting the restricted Woomera area, which requires special access permissions from Australia, along with approval to build an antenna station for tracking the descent of the jettisoned capsule. JAXA is in the midst of preparing the required documents, including collection and safety plans. The exact date and the precise landing area within the Woomera Prohibited Area are still to be determined.

A candidate recovery site in Australia’s Woomera Prohibited Area (WPA).
Image: JAXA

Assuming JAXA receives the required permissions, a recovery team will locate and retrieve the capsule after it lands, then deliver it to Japan for analysis. By studying the bits of dust and rock hopefully contained inside it, scientists hope to learn more about the origin of the solar system and possibly the organic materials that made life possible on Earth.

Before any of this happens, however, Hayabusa2 still has a major task to perform, namely the deployment of the MINERVA-II2 lander. Earlier in the mission, the probe deployed the two landers that made up MINERVA-II1, which led to some spectacular close-up views of the asteroid’s surface. As JAXA explained in a fact sheet (pdf), a trial run of the deployment will be performed on September 5, 2019, with further details about the mission to come later in the month.

Speaking of MINERVA-II1, JAXA received a signal from one of the landers on August 2, 2019. The Ryugu asteroid is currently traveling toward the Sun, which apparently caused both landers to wake from their “hibernation,” according to a JAXA press kit. With this unexpected reappearance of MINERVA-II1, the space agency is now devising a plan for what to do next with its revived landers. MINERVA-II1 is capable of hopping from one location to another on the surface, so hopefully there’s some more excitement to come.

via Gizmodo https://gizmodo.com

August 27, 2019 at 12:33PM

Amazon: 40% Off Select Amazing Grass Products + Free Prime Shipping

https://slickdeals.net/f/13341352-amazon-40-off-select-amazing-grass-products-free-prime-shipping?utm_source=rss&utm_content=ht&utm_medium=RSS2


via SlickDeals.net https://ift.tt/2eSubrS

August 27, 2019 at 08:16AM

Sony, Yamaha to launch autonomous EV made of big-screen TVs

https://www.autoblog.com/2019/08/26/sony-yamaha-sc-1-sociable-cart-autonomous-ev/

Sony and Yamaha have joined forces to create a new take on the future of entertainment, and it’s packaged in a windowless mobility cart. Known as the SC-1 Sociable Cart, the electric autonomous box features big-screen TVs inside and out and will launch strictly as an experiential tool rather than a product consumers can buy. 

The SC-1 is a direct result of the shift happening across the automotive market. Automakers have been hard at work producing electric vehicles with autonomous capabilities, and this thing has both technologies. The SC-1 stores energy in a lithium-ion battery and puts it to use with a DC electric motor. It can seat up to five passengers and has a top speed of just under 12 mph. Under certain conditions, the SC-1 can also be remote-controlled. 

At 123.4 inches long, the SC-1 is about 16 inches shorter than the Fiat 500. Its boxy design allows for a height of about six feet, and it’s about 51 inches wide. Because it uses cameras, sensors, and LIDAR to see and read its surroundings, there are no windows on the vehicle. That means there’s room for screens, both inside and out.

The body wears four 55-inch 4K LCD monitors, and a 49-inch 4K LCD monitor sits inside. The outside screens can play a variety of content, including advertising, while the interior screen can play a video of the exterior or stream a variety of different entertainment options. The interior is also set up to use augmented reality for an immersive experience. Upping the creep factor, the exterior cameras can scan people walking around, read their demographics, and aim specific ads at them. 

Sony and Yamaha plan to launch new services with the vehicle in Japan in fiscal 2019 at places such as golf courses, amusement parks and commercial facilities. At this time, it is not planned for sale.

via Autoblog https://ift.tt/1afPJWx

August 26, 2019 at 02:48PM

A Single Math Model Explains Many Mysteries of Vision

https://www.wired.com/story/a-single-math-model-explains-many-mysteries-of-vision

This is the great mystery of human vision: Vivid pictures of the world appear before our mind’s eye, yet the brain’s visual system receives very little information from the world itself. Much of what we “see” we conjure in our heads.

“A lot of the things you think you see you’re actually making up,” said Lai-Sang Young, a mathematician at New York University. “You don’t actually see them.”


Yet the brain must be doing a pretty good job of inventing the visual world, since we don’t routinely bump into doors. Unfortunately, studying anatomy alone doesn’t reveal how the brain makes these images up any more than staring at a car engine would allow you to decipher the laws of thermodynamics.

New research suggests mathematics is the key. For the past few years, Young has been engaged in an unlikely collaboration with her NYU colleagues Robert Shapley, a neuroscientist, and Logan Chariker, a mathematician. They’re creating a single mathematical model that unites years of biological experiments and explains how the brain produces elaborate visual reproductions of the world based on scant visual information.

“The job of the theorist, as I see it, is we take these facts and put them together in a coherent picture,” Young said. “Experimentalists can’t tell you what makes something work.”

Young and her collaborators have been building their model by incorporating one basic element of vision at a time. They’ve explained how neurons in the visual cortex interact to detect the edges of objects and changes in contrast, and now they’re working on explaining how the brain perceives the direction in which objects are moving.

Their work is the first of its kind. Previous efforts to model human vision made wishful assumptions about the architecture of the visual cortex. Young, Shapley, and Chariker’s work accepts the demanding, unintuitive biology of the visual cortex as is—and tries to explain how the phenomenon of vision is still possible.

“I think their model is an improvement in that it’s really founded on the real brain anatomy. They want a model that’s biologically correct or plausible,” said Alessandra Angelucci, a neuroscientist at the University of Utah.

Layers and Layers

There are some things we know for sure about vision.

The eye acts as a lens. It receives light from the outside world and projects a scale replica of our visual field onto the retina, which sits in the back of the eye. The retina is connected to the visual cortex, the part of the brain in the back of the head.

However, there’s very little connectivity between the retina and the visual cortex. For a visual area roughly one-quarter the size of a full moon, there are only about 10 nerve cells connecting the retina to the visual cortex. These cells make up the LGN, or lateral geniculate nucleus, the only pathway through which visual information travels from the outside world into the brain.

Not only are LGN cells scarce—they can’t do much either. LGN cells send a pulse to the visual cortex when they detect a change from dark to light, or vice versa, in their tiny section of the visual field. And that’s all. The lighted world bombards the retina with data, but all the brain has to go on is the meager signaling of a tiny collection of LGN cells. To see the world based on so little information is like trying to reconstruct Moby-Dick from notes on a napkin.

“You may think of the brain as taking a photograph of what you see in your visual field,” Young said. “But the brain doesn’t take a picture, the retina does, and the information passed from the retina to the visual cortex is sparse.”

But then the visual cortex goes to work. While the cortex and the retina are connected by relatively few neurons, the cortex itself is dense with nerve cells. For every 10 LGN neurons that snake back from the retina, there are 4,000 neurons in just the initial “input layer” of the visual cortex—and many more in the rest of it. This discrepancy suggests that the brain heavily processes the little visual data it does receive.

“The visual cortex has a mind of its own,” Shapley said.

For researchers like Young, Shapley, and Chariker, the challenge is deciphering what goes on in that mind.

Visual Loops

The neural anatomy of vision is provocative. Like a slight person lifting a massive weight, it calls out for an explanation: How does it do so much with so little?

Young, Shapley, and Chariker are not the first to try to answer that question with a mathematical model. But all previous efforts assumed that more information travels between the retina and the cortex — an assumption that would make the visual cortex’s response to stimuli easier to explain.

“People hadn’t taken seriously what the biology was saying in a computational model,” Shapley said.

Mathematicians have a long, successful history of modeling changing phenomena, from the movement of billiard balls to the evolution of space-time. These are examples of “dynamical systems”—systems that evolve over time according to fixed rules. Interactions between neurons firing in the brain are also an example of a dynamical system—albeit one that’s especially subtle and hard to pin down in a definable list of rules.

LGN cells send the cortex a train of electrical impulses one-tenth of a volt in magnitude and one millisecond in duration, setting off a cascade of neuron interactions. The rules that govern these interactions are “infinitely more complicated” than the rules that govern interactions in more familiar physical systems, Young said.

Individual neurons receive signals from hundreds of other neurons simultaneously. Some of these signals encourage the neuron to fire. Others restrain it. As a neuron receives electrical pulses from these excitatory and inhibitory neurons, the voltage across its membrane fluctuates. It only fires when that voltage (its “membrane potential”) exceeds a certain threshold. It’s nearly impossible to predict when that will happen.

“If you watch a single neuron’s membrane potential, it’s fluctuating wildly up and down,” Young said. “There’s no way to tell exactly when it’s going to fire.”
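The flavor of that unpredictability can be captured in a toy “leaky integrate-and-fire” neuron, a standard textbook simplification rather than anything from the researchers’ actual model (all the parameter values below are arbitrary). The cell’s membrane potential decays toward rest, gets nudged up by random excitatory pulses and down by inhibitory ones, and the neuron fires only when a threshold is crossed:

```python
import random

def integrate_and_fire(steps=1000, threshold=0.5, leak=0.95,
                       excit_rate=0.3, inhib_rate=0.15, pulse=0.1):
    """Toy leaky integrate-and-fire neuron driven by random input pulses."""
    v = 0.0                                # membrane potential
    spike_times = []
    for t in range(steps):
        v *= leak                          # passive decay back toward rest
        if random.random() < excit_rate:   # an excitatory pulse arrives
            v += pulse
        if random.random() < inhib_rate:   # an inhibitory pulse arrives
            v -= pulse
        if v >= threshold:                 # threshold crossed: the neuron fires
            spike_times.append(t)
            v = 0.0                        # reset after the spike
    return spike_times

print(integrate_and_fire()[:10])           # firing times differ on every run
```

Run it twice and the spike times come out different. Multiply that irregularity across millions of interconnected cells, each feeding back on the others, and the difficulty of treating the cortex as a dynamical system starts to come into focus.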

The situation is even more complicated than that. Those hundreds of neurons connected to your single neuron? Each of those is receiving signals from hundreds of other neurons. The visual cortex is a swirling play of feedback loop upon feedback loop.

“The problem with this thing is there are a lot of moving parts. That’s what makes it difficult,” Shapley said.

Earlier models of the visual cortex ignored this feature. They assumed that information flows just one way: from the front of the eye to the retina and into the cortex until voilà, vision appears at the end, as neat as a widget coming off a conveyor belt. These “feed forward” models were easier to create, but they ignored the plain implications of the anatomy of the cortex—which suggested “feedback” loops had to be a big part of the story.

“Feedback loops are really hard to deal with because the information keeps coming back and changes you, it keeps coming back and affecting you,” Young said. “This is something that almost no model deals with, and it’s everywhere in the brain.”

In their initial 2016 paper, Young, Shapley, and Chariker began to take these feedback loops seriously. Their model’s feedback loops introduced something like the butterfly effect: Small changes in the signal from the LGN were amplified as they ran through one feedback loop after another, a process known as “recurrent excitation,” ultimately producing large changes in the visual representation the model generates.
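A stripped-down way to see how a loop like that can blow up small signals (a generic illustration, not the team’s cortical model, with an arbitrary gain value):

```python
def settled_activity(external_input, feedback_gain=0.9, steps=200):
    """Toy feedback loop: each step, a unit receives the weak external input
    plus a fraction (feedback_gain) of its own previous activity, so it
    settles near external_input / (1 - feedback_gain)."""
    activity = 0.0
    for _ in range(steps):
        activity = feedback_gain * activity + external_input
    return activity

print(settled_activity(0.01))    # ~0.10: ten times the raw input
print(settled_activity(0.012))   # ~0.12: a small change in input, visibly amplified
```

Push the feedback gain closer to 1 and the same tiny differences in input produce ever larger differences in the settled response, which is the basic flavor of the amplification the model relies on.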

Young, Shapley, and Chariker demonstrated that their feedback-rich model was able to reproduce the orientation of edges in objects—from vertical to horizontal and everything in between—based on only slight changes in the weak LGN input coming into the model.

“[They showed] that you can generate all orientations in the visual world using just a few neurons connecting to other neurons,” Angelucci said.

Vision is much more than edge detection, though, and the 2016 paper was just a start. The next challenge was to incorporate additional elements of vision into their model without losing the one element they’d already figured out.

“If a model is doing something right, the same model should be able to do different things together,” Young said. “Your brain is still the same brain, yet you can do different things if I show you different circumstances.”

Swarms of Vision

In lab experiments, researchers present primates with simple visual stimuli — black-and-white patterns that vary in terms of contrast or the direction in which they enter the primates’ visual fields. Using electrodes hooked to the primates’ visual cortices, the researchers track the nerve pulses produced in response to the stimuli. A good model should replicate the same kinds of pulses when presented with the same stimuli.

“You know if you show [a primate] some picture, then this is how it reacts,” Young said. “From this information you try to reverse engineer what must be going on inside.”

In 2018, the three researchers published a second paper in which they demonstrated that the same model that can detect edges can also reproduce an overall pattern of pulse activity in the cortex known as the gamma rhythm. (It’s similar to what you see when swarms of fireflies flash in collective patterns.)

They have a third paper under review that explains how the visual cortex perceives changes in contrast. Their explanation involves a mechanism by which excitatory neurons reinforce each other’s activity, an effect like the gathering fervor in a dance party. It’s the type of ratcheting up that’s necessary if the visual cortex is going to create full images from sparse input data.

Currently Young, Shapley, and Chariker are working on adding directional sensitivity into their model—which would explain how the visual cortex reconstructs the direction in which objects are moving across your visual field. After that, they’ll start trying to explain how the visual cortex recognizes temporal patterns in visual stimuli. They hope to decipher, for example, why we can perceive the flashes in a blinking traffic light, but we don’t see the frame-by-frame action in a movie.

At that point, they’ll have a simple model for activity in just one of the six layers in the visual cortex—the layer where the brain roughs out the basic outlines of visual impression. Their work doesn’t address the remaining five layers, where more sophisticated visual processing goes on. It also doesn’t say anything about how the visual cortex distinguishes colors, which occurs through an entirely different and more difficult neural pathway.

“I think they still have a long way to go, though this is not to say they’re not doing a good job,” Angelucci said. “It’s complex and it takes time.”

While their model is far from uncovering the full mystery of vision, it is a step in the right direction—the first model to try to decipher vision in a biologically plausible way.

“People hand-waved about that point for a long time,” said Jonathan Victor, a neuroscientist at Cornell University. “Showing you can do it in a model that fits the biology is a real triumph.”

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.



via Wired Top Stories https://ift.tt/2uc60ci

August 25, 2019 at 07:06AM

Does Hyundai’s rooftop solar panel change the fuel-economy equation?

https://www.popsci.com/hyundai-hybrid-car-solar-panel/

The hybrid Sonata with a solar panel on its roof. (Image: Hyundai)

The new hybrid Hyundai Sonata isn’t available yet in the United States, but it offers something compelling enough to make headlines here—a solar panel on its roof. While the panel can’t produce nearly enough juice to give the car’s battery all it needs for regular travel, it does occupy what the company calls a "supporting role" for the vehicle.

Hyundai notes that the solar cells could provide a boost of about 808 miles annually. In other words, if you drove this car every day for a year, you’d be getting about 2.2 miles of daily travel from the sun, on average. Unlike with plug-in charging or previous attempts at cars with photovoltaics, the solar system shunts power to the Sonata’s battery even when you’re driving.
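A quick sanity check on that daily figure, using just the numbers above:

```python
annual_solar_miles = 808                   # Hyundai's claimed yearly range boost from the roof panel
daily_solar_miles = annual_solar_miles / 365
print(round(daily_solar_miles, 1))         # 2.2 miles of sun-powered driving per day, on average
```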

And in October of last year, Hyundai announced that it has been working on different types of solar cells for cars, one of which is designed to juice the battery of a car with an internal-combustion engine, "thereby improving fuel efficiency," the company argues.

Hyundai follows Toyota down the solar-panels-on-a-car road; the Japanese carmaker first offered a Prius with a solar roof around a decade ago, but in 2009, the MIT Technology Review referred to it as "underwhelming." That’s because the solar energy didn’t power the battery that drove the vehicle’s propulsion system, but just "ran a fan to ventilate the car," the publication noted.

Things got better in 2017, when Toyota started offering the Prius PHV with an optional solar roof in Japan. That solar system charges the main battery only when the car is parked, and can add, on average, 1.8 miles to the car’s driving distance each day, with a max of around 3.8 miles. Solar power can’t charge that main battery—called the traction battery—while the car is being driven, however. During that time, the power goes into the 12-volt auxiliary battery, which gives juice to car systems like the radio.

The demonstrator vehicle with solar cells on its hood, roof, and rear. (Image: Toyota)

But earlier this summer, Toyota announced something even better: a blue and white demonstration vehicle with solar cells on the hood, roof, and rear that (unlike the Prius PHV production car in Japan) could charge the main battery while the vehicle is in motion. Not only that, but the solar cells are more efficient (clocking in at 34 percent efficiency or more for the demonstration car, compared to 22.5 percent for the Prius PHV) and they produce nearly five times as much wattage. In short, this demonstrator vehicle offers a more capable solar system than the production car. "The trials aim to assess the effectiveness of improvements in cruising range and fuel efficiency of electrified vehicles equipped with high-efficiency solar batteries," Toyota said in its announcement.

Carmakers and others have much more affordable and more powerful solar panels to work with now than they did around ten or so years ago. "In the last decade, the prices of solar panels have dropped at least 60 percent," says Vikram Aggarwal, the CEO of EnergySage, a company that provides people with financial quotes for installing solar power. On top of that, solar panels are now more energy dense—capable of producing more wattage—and more efficient. In a nutshell: they pack more power and better efficiency while costing less, Aggarwal says.

All of that means that it now makes more sense for a carmaker to throw a solar panel on the back of a car, even if it’s a far cry from powering the whole vehicle. “You can cover every inch of the car’s exterior with solar cells—the total surface area is never going to be that much [of a power source],” Aggarwal observes, meaning that the panels are just going to be supplemental.

But Parth Vaishnav, an assistant research professor of engineering and public policy at Carnegie Mellon University, wonders if the cars themselves are truly the best place to take advantage of solar power. After all, he notes, solar energy produced at the utility level is much cheaper per watt than solar energy that comes from a person’s residential solar installation. And putting a solar panel on a car could hypothetically add cost for the carmaker, plus complexity and weight, and make the car more difficult to disassemble at the end of its life. Ultimately, a person interested in driving a car powered by the sun would be better served by plugging an electric vehicle into a power source that came from a large-scale solar installation. "If you wanted to deploy solar power," he reflects, "is the roof of a car the best place to put it?"

via Popular Science – New Technology, Science News, The Future Now https://www.popsci.com

August 23, 2019 at 12:35PM