Why Tesla’s Autopilot Can’t See a Stopped Firetruck

On Monday, a Tesla Model S slammed into the back of a stopped firetruck on the 405 freeway in Los Angeles County. The driver apparently told the fire department the car was in Autopilot mode at the time. The crash highlighted the shortcomings of the increasingly common semi-autonomous systems that let cars drive themselves in limited conditions.

This surprisingly non-deadly debacle also raises a technical question: How is it possible that one of the most advanced driving systems on the planet doesn’t see a freaking fire truck, dead ahead?

Tesla didn’t confirm the car was running Autopilot at the time of the crash, but its manual does warn that the system is ill-equipped to handle this exact sort of situation: “Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead.”

Volvo’s semi-autonomous system, Pilot Assist, has the same shortcoming. Say the car in front of the Volvo changes lanes or turns off the road, leaving nothing between the Volvo and a stopped car. “Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed,” Volvo’s manual reads, meaning the cruise speed the driver punched in. “The driver must then intervene and apply the brakes.” In other words, your Volvo won’t brake to avoid hitting a stopped car that suddenly appears up ahead. It might even accelerate towards it.

The same is true for any car currently equipped with adaptive cruise control or automated emergency braking. It sounds like a glaring flaw, the kind of horrible mistake engineers race to eliminate. Nope. These systems are designed to ignore static obstacles because otherwise, they couldn’t work at all.

“You always have to make a balance between braking when it’s not really needed, and not braking when it is needed,” says Erik Coelingh, head of new technologies at Zenuity, a partnership between Volvo and Autoliv formed to develop driver assistance technologies and self-driving cars. He’s talking about false positives. On the highway, slamming the brakes for no reason can be as dangerous as not stopping when you need to.

“The only safe scenario would be don’t move,” says Aaron Ames, from Caltech’s Center for Autonomous Systems and Technologies. That doesn’t exactly work for driving. “You have to make reasonable assumptions about what you care about and what you don’t.”

Raj Rajkumar, who researches autonomous driving at Carnegie Mellon University, thinks those assumptions concern one of Tesla’s key sensors. “The radars they use are apparently meant for detecting moving objects (as typically used in adaptive cruise control systems), and seem to be not very good in detecting stationary objects,” he says.

That’s not nearly as crazy as it may seem. Radar knows the speed of any object it sees, and is also simple, cheap, robust, and easy to build into a front bumper. But it also detects lots of things a car rolling down the highway needn’t worry about, like overhead highway signs, loose hubcaps, or speed limit signs. So engineers make a choice, telling the car to ignore these things and keep its eyes on the other cars on the road: They program the system to focus on the stuff that’s moving.
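The paragraph above describes, in effect, a velocity filter on radar returns. Here is a minimal sketch of that logic (a simplified, hypothetical illustration, not any automaker's actual code; the data layout and the 2 m/s threshold are our assumptions):

```python
from dataclasses import dataclass

@dataclass
class RadarReturn:
    distance_m: float          # range to the detected object
    relative_speed_mps: float  # closing speed, measured via Doppler shift

def targets_to_track(returns, ego_speed_mps, min_ground_speed_mps=2.0):
    """Keep only detections that appear to be moving relative to the road.

    A stationary object (overhead sign, hubcap, parked firetruck) closes
    on the car at exactly the car's own speed, so its estimated ground
    speed comes out near zero and it gets filtered away.
    """
    tracked = []
    for r in returns:
        ground_speed = ego_speed_mps - r.relative_speed_mps
        if abs(ground_speed) > min_ground_speed_mps:
            tracked.append(r)
    return tracked
```

Note that the same test that discards an overhead sign also discards a stopped vehicle; on speed alone, the filter cannot tell them apart.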

This unsettling compromise may be better than nothing, given evidence that these systems prevent other kinds of crashes and save lives. And it wouldn’t be much of a problem if every human in a semi-autonomous vehicle followed the automakers’ explicit, insistent instructions to pay attention at all times, and to take back control upon seeing a stationary vehicle up ahead.

The long-term solution is to combine several sensors with different abilities, backed by more computing power. Key among them is lidar. These sensors use lasers to build a precise, detailed map of the world around the car, and can easily distinguish between a hubcap and a cop car. The problem is that compared to radar, lidar is a young technology. It’s still very expensive, and isn’t robust enough to survive a life of hitting potholes and getting pelted with rain and snow. Just about everybody working on a fully self-driving system—the kind that doesn’t depend on lazy, inattentive humans for support—plans to use lidar, along with radar and cameras.

Except for Elon Musk. The Tesla CEO insists he can make his cars fully autonomous—no supervision necessary—with just radars and cameras. He hasn’t proven his claim just yet, and no one knows if he ever will. Lidar’s price and reliability problems are less of an issue when it comes to a taxi-like service, where a provider can amortize the cost over time and perform regular maintenance. But in today’s cars, meant for average or modestly wealthy consumers, it’s a no-go.

In the meantime, we’re stuck with a flawed system, the result of a compromise made to navigate the world at speed. And when even the best systems available can’t see a big red firetruck, it’s a stark reminder of how long and winding the path to autonomy actually is.



from Wired Top Stories http://ift.tt/2Gc7dqZ
via IFTTT

We pitted digital assistants against each other to find the most useful AI

CES 2018 demonstrated that smart digital assistants are becoming key components of a variety of gadgets. As these AI companions play more of a role in everyday life, every tech company wants an assistant app to call its own. Amazon has Alexa, Apple has Siri, Google has Google Assistant, Microsoft has Cortana, and Samsung has Bixby.

Some of these AIs can help you do everything from looking up train times to controlling a smart home—but it can be hard to keep track of which one can do what. So we downloaded all of the five major assistant apps to put them through their paces, pinpoint their differences, and decide which are the smartest and most capable.

A note before we fire up the apps: Because each of these assistants can respond to thousands of commands, it’s hard to do an exhaustive survey of their skills. Instead, we aim to give you a general flavor of what each one is like. Also bear in mind that we were talking to Alexa via an Amazon Echo speaker, so it couldn’t display text and graphics alongside its responses like the other assistants—which we tested on Android and iOS phones—did.

General questions

We started the contest by asking the assistants basic questions about topics like the weather forecast, conversions between different measurements, and trivia.

Every assistant could answer a simple query about the weather and provide the forecast for the next few days. We did notice some very slight variations in the responses, probably because these apps rely on different sources. Alexa gave the most useful information in the shortest time, which makes sense—as a voice-only program, it has to be as succinct as possible.

The apps also coped admirably with simple calculations and conversions between different units. They could even update us on the time in different time zones—though we should knock points off for Siri and Cortana, which didn’t include the day as well as the time in their spoken responses, something you need to know if a region is a day ahead of or behind you.

We also tried a number of queries that you might throw into a search engine: “How old is Barack Obama?”, “Who built the Empire State Building?”, “What is the population of France?” and “How big is California?”. All the assistants returned the correct answers, which means every one of them would make a good trivia night companion.

However, a more complex question tripped up a few of the assistants. We asked: “What’s that movie with Robert de Niro and Al Pacino in it?” In response, Google Assistant and Bixby did well, producing a list of all five films that matched the question. Siri showed just three of the films, Cortana showed the Wikipedia page for Robert de Niro, and Alexa had no idea what was going on and tried to play a video.

We upped the ante again, asking a tricky question that a human being could research in a matter of minutes: “Is NASA planning a mission to Mars?” In response, Google and Bixby provided and read aloud the Wikipedia page on a human mission to Mars, Siri and Cortana pointed us to NASA’s Journey to Mars page but didn’t read it out, and Alexa started reading out the Wikipedia entry for the Rosetta spacecraft that flew past Mars in 2007.

Evidently, these digital assistants can all return simple answers to simple searchable questions. However, they’ve still got a long way to go when it comes to more complex queries. They’ll need improved natural language recognition, and a better ability to sift through online information, before they can match a human assistant.

Organization

A good digital assistant should be able to organize your life as well as search the web. Here’s how the contenders manage your calendars, contacts, and emails.

Every one of these assistants is able to work with Google, Apple, and Microsoft calendar apps—with the exception of Cortana, which can’t access Apple calendars. When you ask, all of these apps can tell you what your next calendar appointment is and provide a list of upcoming events in response to “What am I doing next week?” Bixby and Siri do the best job of neatly displaying that information (Alexa lags here through no fault of its own, as a standard Amazon Echo has no screen). In addition to reviewing existing events, you can use any assistant to create new ones. However, none of the apps understood what we meant when we asked when our next free time slot or free day would occur.

As for contacts, all the assistants can act on simple “call Joe Schmo” and “text Jane Doe” commands—though with the Alexa app on your phone, you must first enable an Echo for it to be able to make calls. The functionality was largely similar across the board, with the call launching automatically on speakerphone. However, Siri also avoids accidental dials by asking you to confirm that you really want to make the call.

Of course, there are times when you want to look up contact information without necessarily calling the person. So we tried to find an address for one of our contacts. Kudos to Siri and Bixby, which actually recognized a “Where does [name] live?” command. Cortana and Google Assistant returned a web result for a famous person of the same name, while Alexa wasn’t able to look up addresses.

Next, we asked all these digital assistants to show our most recent emails. Google Assistant, Bixby, and Siri all displayed the most recent messages on screen, but Alexa and Cortana couldn’t, hampered by not having tight integration with an Android or iOS operating system. Cortana could, however, send emails to specific contacts, as could Bixby, Siri, and Google Assistant. Alexa misses out again here, but expect Amazon to add this ability soon, perhaps through an update to the Alexa app that helps it work with existing email apps.

Reminders

Alarms, timers, and reminders are bread and butter tasks for digital assistants—and in this category, none of the contending apps will let you down.

All assistants could “set an alarm for 9 a.m. tomorrow.” They identified the day and time properly and even showed a confirmation on screen. A follow-up command, “cancel the alarm for tomorrow,” only worked on Bixby, Google Assistant, and Alexa. Siri didn’t understand that phrasing but did respond to “cancel alarm,” perhaps a sign that its natural language recognition lags behind the others just a little. And we couldn’t get Cortana to cancel the notification at all until we asked for a list of alarms and manually toggled it off.

As for reminders, the simple “remind me to buy milk” command met with a few different responses. Alexa asked for a day and time for the reminder before saving it, Google Assistant wanted either a time or a place when the reminder would kick in, and Bixby, Cortana, and Siri saved the note in the default reminder app without asking for any more details. If you do include a specific date and time in your original voice command, all apps will save those relevant details and time the reminder accordingly. They also understand recurring alerts like “remind me at 2 p.m. every day to exercise,” allowing you to set up daily, weekly, or monthly reminders.

In addition to times, you can also associate reminders with places. Cortana and Siri coped well with a “remind me to buy flowers when I get to London” command. Bixby and Google Assistant got halfway there—but asked us to specify a location manually rather than just accepting London from the voice command. Alexa couldn’t link a reminder to a place at all, but that’s a forgivable shortcoming considering that it lives inside a speaker rather than a GPS-enabled phone.

Finally, to avoid having the test phones buzzing for the rest of the week, we asked the digital assistants to “delete all reminders.” Alexa and Bixby obliged after asking for a general confirmation, Siri wanted us to confirm the deletion of each reminder individually, and Google Assistant and Cortana wouldn’t obey that instruction at all—we had to ask to view the reminders on the phone and then remove them manually.

Entertainment

Nobody wants life to be all work and no fun. Which is why your digital assistant must be able to update you on the news, provide sports results, and play music and movies.

In response to “What’s the news?”, Alexa reads out bulletins from a variety of local, national, and international services. The other contenders—Cortana, Google Assistant, Bixby, and Siri—display some of the day’s top headlines, as drawn from the internet, but only Cortana starts to read them aloud. However, we were testing Google Assistant on a Pixel phone, and if you have it installed in a Google Home speaker, then it can read the news to you as well.

Next, we tried asking, “What was the United score?” (referring to the English soccer team Manchester United by an abbreviation of its name). Alexa and Google Assistant answered perfectly, and Alexa even told us when the next game would happen. Siri and Cortana struggled until we specified “Manchester United” as the team name. Coming in last place in this challenge, Bixby could only provide us with a list of web results.

We found music to be a real mixed bag. It really depends on two factors: the assistant and the audio service. For example, we could call up and play Spotify playlists perfectly with Alexa, Google Assistant, and Cortana. But when we tried other music apps like Apple Music and Google Play Music, the assistants fell short. As for Siri, it could only control Apple Music, and Bixby was even more limited—it was only able to play audio files stored on the phone.

Similarly, movies varied from app to app. Alexa can open up apps and play specific films when you say “show me an action movie” or “open Netflix”—as long as you’ve connected it to a Fire TV on the same wireless network. Google Assistant, Siri, and Bixby pull up those videos, and any YouTube clips, on the phone you’re using. In addition, Siri can fire up anything you’ve stored in your iTunes library and Google Assistant can beam any video content to a nearby Chromecast. We’ve left out Cortana, because it doesn’t possess the same video-playing talents that its competitors do, although it can bring up a few YouTube videos via a Bing search.

Although this is the current state of entertainment on assistant apps, it could change very quickly, because Google, Samsung, Microsoft, Amazon, and Apple are constantly making new partnerships. For example, when it comes to displaying news, the assistants rely on receiving compatible content from news producers. Once those partnerships exist, the companies can update their apps to add additional features.

Final verdict

This might seem like a lot of testing, but these assistants also have many features we haven’t explored, which is a testament to how capable they’re becoming. Still, with our basic review, we managed to rank these digital pals in different ways.

In our opinion, Google Assistant is the best at recognizing natural language and responding to follow-up questions. This is to be expected, as the app can draw on Google’s hefty experience in search and AI. It also has the advantage of tight integration with other Google services, such as Android, Gmail, and Google Maps (the latter lets it quickly launch turn-by-turn directions).

Siri possesses a similarly tight integration with smartphones, but only of the iOS variety. However, its lack of support for non-Apple apps and services occasionally lets this assistant down. If you prefer iPhones, though, there’s no reason to switch: Siri will work great with Apple’s Mail, Calendar, Contacts, and Maps apps.

Where Alexa really excels is in the number of third-party skills, from companies like Domino’s and Uber, and outside apps, such as iCloud and Spotify, that you can plug into the service. It can also recognize natural language patterns well. On the downside, it’s not available as a phone app, but Amazon is fixing that drawback as we speak, so it won’t count as a disadvantage for much longer.

Cortana, the Microsoft assistant hitching a ride on Android and iOS, seems the most disjointed of the digital assistants we tested. That said, it syncs neatly with Windows 10, works across multiple devices, and does make some effort to learn the news stories, sports scores, and other interests you follow. Unfortunately, it doesn’t have quite the same polish as Google Assistant, Siri, or Alexa.

Finally, as the newest of these apps, Bixby is still a work in progress. At the moment, it can’t offer as many features as its rivals do. However, it does control Samsung devices well (try commands like “close all recent apps”) and works nicely with the manufacturer’s own mobile apps. Expect some big improvements to come.

from Popular Science – New Technology, Science News, The Future Now http://ift.tt/2DytmOp
via IFTTT

Shimmering Disco Ball Launched Into Space by a Millionaire Who Is Totally Not Compensating for Anything

It’s called “Humanity Star,” and it’s supposed to remind us of our puny place in the Universe. Barely brighter than other chunks of metal we’ve put in space, and with an achingly short six-month lifespan, the giant “disco ball” is more of a publicity stunt than anything else.

On Sunday, California-based Rocket Lab launched its second Electron two-stage rocket from a launch pad in New Zealand’s Mahia Peninsula. In addition to three CubeSats, the rocket—with the revealing name Still Testing—delivered Humanity Star—the pet project of the company’s CEO Peter Beck.

Measuring three feet wide, the carbon-fiber geodesic sphere is fitted with 65 reflective panels. It should spin rapidly and reflect the Sun’s light such that it’s visible from Earth’s surface at night. It’ll be bright, but not distractingly bright, exhibiting a luminosity just slightly greater than stars and other artificial satellites. The giant disco ball will circle the globe every 90 minutes, traveling at roughly 7.8 kilometers per second, more than 20 times the speed of sound.
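For a circular orbit, the quoted 90-minute period pins down the speed. Here's a quick back-of-the-envelope check (the roughly 300 km altitude is our assumption, not a figure from Rocket Lab):

```python
import math

# Speed implied by a 90-minute circular orbit of Earth.
EARTH_RADIUS_KM = 6371.0
altitude_km = 300.0          # assumed; typical for a small LEO satellite
period_s = 90 * 60

circumference_km = 2 * math.pi * (EARTH_RADIUS_KM + altitude_km)
speed_kmps = circumference_km / period_s
# Comes out near 7.8 km/s -- more than 20 times the speed of
# sound at sea level (~0.343 km/s).
```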

So pretty. (Image: Rocket Lab)

Beck’s hope is that it’ll be the brightest object in the night sky—a perpetual reminder that we’re living “on a rock in a giant Universe.” The CEO hates the comparison to a disco ball, saying it represents something greater—a kind of “focal point” for humanity to think about larger issues, such as climate change and resource shortages. Reading his essay “Under One Sky” at the Humanity Star website, you’d think Beck’s big bright ball is going to save the world:

Humanity is finite, and we won’t be here forever. Yet in the face of this almost inconceivable insignificance, humanity is capable of great and kind things when we recognize we are one species, responsible for the care of each other, and our planet, together.

The Humanity Star is to remind us of this. No matter where you are in the world, rich or in poverty, in conflict or at peace, everyone will be able to see the bright, blinking Humanity Star orbiting Earth in the night sky. My hope is that everyone looking up at the Humanity Star will look past it to the expanse of the universe, feel a connection to our place in it and think a little differently about their lives, actions and what is important.

Yeah, we’re pretty sure the ball won’t have this unifying effect on humanity, and few of us will be lucky enough to even see it. Light pollution’s a bitch these days, and the mirror ball will be making select appearances over the planet’s surface. New Zealand and Australia will have the best view over the next six weeks, followed by North America in March. You can track Humanity Star’s progress here.

Humanity Star’s current position. (Image: Humanity Star/Rocket Lab/Google)

Moreover, this ball has a painfully brief six-month life span—so we’d better not develop an attachment. After that, it’ll get sucked into Earth’s atmosphere and burn to a crisp on re-entry.

Beck hopes to launch more Humanity Stars in the future, but he’s waiting to see how the public responds to this ball, and to assess the cost. Most assuredly, Beck will also be evaluating the extent to which this publicity stunt serves the interests of his fledgling rocket company. In light of its most recent launch, Rocket Lab says it’s now a major step closer to ferrying commercial satellites. What better way to advertise his company than with a glittering ball in space?

[Washington Post, BBC]

from Gizmodo http://ift.tt/2DDkb3t
via IFTTT

Ford Has An Idea For An Autonomous Police Car That Could Find A Hiding Spot 

Here’s something to make you squirm: Ford has submitted a patent application to the U.S. Patent and Trademark Office for an autonomous police car that could function “in lieu of or in addition to human police officers.”

Now, companies always file patents for technology that may never get made, but an autonomous police cruiser seems like the logical conclusion to the development of self-driving cars. But damn is it weird to read about.

The patent, first noticed by Motor1, describes how the hypothetical car would rely on artificial intelligence and use “on-board speed detection equipment, cameras, and [it would] communicate with other devices in the area such as stationary speed cameras.”

There are a number of other ideas in the filing that stand out: For instance, say a car runs a red light. “There may be [a] surveillance camera … as well as [a] roadside sensor (e.g., camera), each of which may detect violation of one or more traffic laws, by vehicle,” the filing says.

That camera or sensor would then transmit a signal to a central computing system, which would then transmit a signal to the autonomous police car to spring into action and catch the scofflaw.

All of which is to say: Yikes! Of course, this is something that’s decades from becoming reality, if it ever does—but it sure is wild to see put to paper.

from Gizmodo http://ift.tt/2n8Q7m2
via IFTTT

NASA tests light, foldable plane wings for supersonic flights

Planes that can fold their wings to different angles while in the air have the potential to fly faster than their peers, and NASA has recently made headway in their development. The space agency has conducted a series of test flights proving that it can control the wings it designed to move into any position, and that they have aerodynamic benefits. While the technology has existed for a long time, it typically requires the use of heavy hydraulic systems. NASA’s version doesn’t need that kind of machinery: it relies on the properties of a temperature-activated material called shape memory alloy instead. Upon being heated, the alloy produces a twisting motion in the tubes serving as the wings’ actuators, moving the wings’ outer portions up to 70 degrees up or down.

The foldable wings will give typical planes like commercial airliners a way to adapt to different flight conditions. They can give pilots more control over their aircraft and could even lead to more fuel-efficient flights. Planes designed to fly at supersonic speeds (faster than the speed of sound), however, will get more out of this technology.

As Matt Moholt, the principal director of the Spanwise Adaptive Wing project, said:

"There’s a lot of benefit in folding the wing tips downward to sort of ‘ride the wave’ in supersonic flight, including reduced drag. This may result in more efficient supersonic flight. Through this effort, we may be able to enable this element to the next generation of supersonic flight, to not only reduce drag but also increase performance, as you transition from subsonic to supersonic speeds. This is made possible using shape memory alloy."

The team now plans to continue developing the technology until the foldable wings can move both up and down during a single flight. At the moment, they can only fold in a single direction in the air, and team members have to rearrange the hardware every time they want to test if the wings can move in the opposite direction. They’ll be busy making that happen within the year, since they’re aiming to conduct the next batch of test flights by summer of 2018.

Source: NASA

from Engadget http://ift.tt/2n9zZje
via IFTTT

Whopper Neutrality: Burger King Explains Net Neutrality with Whoppers [Video]

Most people have no idea what Net Neutrality is about, so to help people understand, Burger King created this video explaining the concept using Whoppers.

The repeal of Net Neutrality is a hot topic in America, but it can be very difficult to understand. That’s why the BURGER KING® brand created WHOPPER® Neutrality, a social experiment that explains the effects of the repeal of Net Neutrality by putting it in terms anyone can understand: A WHOPPER® sandwich.

[Burger King]

The post Whopper Neutrality: Burger King Explains Net Neutrality with Whoppers [Video] appeared first on Geeks are Sexy Technology News.

from [Geeks Are Sexy] Technology News http://ift.tt/2n9uuSI
via IFTTT

Can Chopping Your Vegetables Boost Their Nutrients?

If veggies are chopped or shredded, any health benefits may be too small to be significant.

Personal Creations/Flickr



We all know eating vegetables is a good way to improve health. And for many years, the focus has been on just eating more vegetables, be it fresh, frozen or canned.

But what if there was a quicker and easier way to get more benefit from our vegetables? Can the way we prepare vegetables boost their nutrition? Does tearing or chopping your lettuce make any difference? And, if we chop, does it matter what type of knife we use?

For a long time, we’ve believed tearing vegetables, especially salad leaves, is the best way to preserve their nutrients. The idea is that tearing leaves disrupts the cells of the plant less than chopping. Chopping slices straight through cells, allowing their contents to spill out. This means nutrients, especially minerals such as potassium, can leak away.

But it’s not all bad news for chopping. It has several other effects on vegetables, some of which may be beneficial, at least in theory.

Cutting boosts polyphenols

Vegetables contain a wide range of bioactive compounds, a term that extends beyond their nutrients, like vitamin C and potassium, to include the polyphenols.

These compounds are only found in plants and have various roles, including providing color, acting as plant sunscreen against ultraviolet radiation, and giving the plant a bitter taste, which discourages animals from eating it.

Cutting some types of vegetables — notably celery, lettuce and parsnips — can increase their polyphenol content.

There is logic in this. Cutting wounds the flesh of the vegetable and it responds by producing more polyphenols, helping defend the vegetable tissue from further damage. Similarly, if a grazing animal tasted these bitter compounds, it may think twice before taking another bite.

In theory, higher levels of polyphenols (say, from chopping) are better for our health. Polyphenols often make up a large proportion of what are described as “antioxidants,” which are thought to help support our body’s defenses against inflammation.

But there’s a complication. After chopping comes enzymatic browning, the same chemical reaction that turns cut apples, potatoes and avocados brown. That’s thanks to the enzyme polyphenol oxidase breaking down the polyphenols, the very compounds you’re interested in.

How about chopping and chilling?

Refrigeration might help slow the rate of this browning reaction and so help preserve the potentially beneficial polyphenol content. This works as the cold temperatures in the fridge slow down the chemical reactions, which normally would break down the polyphenols.
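As a rough illustration of that temperature effect, here is the common Q10 rule of thumb in code (the Q10 value of 2, meaning reaction rates roughly double per 10 °C rise, is a generic textbook assumption, not a measured figure for polyphenol oxidase):

```python
def relative_reaction_rate(temp_c, reference_temp_c=20.0, q10=2.0):
    """Rate of a reaction at temp_c relative to its rate at
    reference_temp_c, using the Q10 rule of thumb: the rate
    multiplies by q10 for every 10 degree (Celsius) rise."""
    return q10 ** ((temp_c - reference_temp_c) / 10.0)

# Browning in the fridge (4 C) vs. on the counter (20 C):
fridge = relative_reaction_rate(4.0)    # roughly a third of the counter rate
counter = relative_reaction_rate(20.0)  # 1.0 by definition
```

Under that assumption, chilling chopped vegetables slows the browning reaction to around a third of its room-temperature pace, which is why the fridge buys you time.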

On the face of it, this sounds like a great idea: chop up your vegetables and chill them to slow down polyphenol loss (and to stop color changes associated with enzymatic browning).

But the very act of the vegetable producing polyphenols (say after chopping) often involves using up vitamin C. So, nutritionally it could be a case of “robbing Peter to pay Paul,” and there may not be any overall benefits.

We also need to look at the actual levels of polyphenol changes brought on by chopping. Although chopping carrots boosts levels by nearly 200 percent, carrots normally contain very small amounts of these compounds to start with.

So, while there may be statistically more polyphenols produced after chopping, practically this increase is largely irrelevant. This is because the quantities in these chopped and chilled vegetables are still only modest, and typically very poorly absorbed.

So, for most people, the key message remains: Keep working toward eating at least five servings of vegetables per day. It is less important if the veggies are chopped or shredded, as any benefits are too small to be significant.

Chopping can affect taste and texture

But chopping (and the rise in polyphenols that come with it) can alter a vegetable’s taste. This is because polyphenols have a slightly bitter taste, which not everyone likes.

Chopping can also affect a vegetable’s texture, as breaking up the cells releases other enzymes that can cause the structure of the product to break down and become soft and mushy. Refrigeration can slow this effect too.

This is the case with basil: many recipes recommend tearing rather than chopping to avoid bruising the leaves, which can alter their flavor and texture. Tearing seems to damage fewer cells, so lower levels of enzymes are released, and less browning and damage occur.

Can different knives affect polyphenol loss?

There is some suggestion that the type of knife may influence the breakdown of polyphenols and browning. A blunt knife potentially causes more damage to the cells, promoting polyphenol breakdown. So it might be better to use a sharp one.

More significantly, the copper in steel knives can help the polyphenol oxidase enzyme work, causing more rapid browning. So, a ceramic or plastic knife could reduce this effect.

This story comes to us from The Conversation, an independent source of news and global perspectives from the academic and research community. Nenad Naumovski is an assistant professor in Food Science and Human Nutrition at the University of Canberra, Australia. Ekavi Georgousopoulou is a research associate at the University of Canberra. Duane Mellor is a senior lecturer at Coventry University in London.

from NPR Topics: News http://ift.tt/2GjbvwP
via IFTTT