With a Commercial Printer, Researchers Manufacture Motion Sensors in Bulk

Using a commercial printer and some silver ink, researchers from Florida State University have found a novel way of producing motion sensors en masse. The low-profile sensors support new applications in wearable electronics, structural health monitoring, and, perhaps soon enough, microrobotics.

Lead researcher and doctoral candidate Joshua DeGraff assembled the technology from buckypaper — razor-thin, flexible sheets of durable carbon nanotubes.

DeGraff handles a buckypaper sensor. (Image Credit: Florida State)

In addition to a strip of seven micron-thin buckypaper, the sensor features silver ink electrodes printed with a common, commercially available Epson Stylus C88+ inkjet printer.

DeGraff, along with Richard Liang, professor and director of Florida State’s High-Performance Materials Institute, spoke with Tech Briefs about how the manufacturing method offers an important benefit for emerging technologies like wearables and robotics: scalability.

Tech Briefs: What are the characteristics of buckypaper?

Dr. Richard Liang: Buckypaper is a carbon-molecule thin film, about 10-15 microns in thickness. All the carbon nanotubes work in tandem. The material is extremely lightweight: 5 grams per square meter. By adjusting the contact between the carbon nanotubes, you achieve a wider range of conductivity and sensitivity.
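Those two figures imply a very low bulk density, which a quick back-of-the-envelope calculation makes concrete (this is our own arithmetic from the numbers Liang quotes, not a measurement from the study):

```python
# Figures quoted in the interview: 5 g/m^2 areal density,
# 10-15 microns thick. Using the lower end of the thickness range.
areal_density_g_per_m2 = 5.0
thickness_m = 10e-6

# Implied bulk density in g/cm^3 (1 m^3 = 1e6 cm^3).
bulk_density = areal_density_g_per_m2 / thickness_m / 1e6
print(round(bulk_density, 3))  # 0.5
```

At roughly 0.5 g/cm^3, the film is about half the density of water, consistent with Liang's description of an extremely lightweight material.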

Tech Briefs: How is the buckypaper sensor made?

Joshua DeGraff: We use the printer and silver ink to print patterned electrodes on low-profile plastic substrates. Then, we position our buckypaper films on the printed circuit and laminate it. Lamination holds everything in place and protects the components. We then crimp on low-profile electrical contacts for easy connection. I’m able to print out maybe 100 sensors at a time, and I can make them pretty fast.

Tech Briefs: What is the sensing element?

DeGraff: The buckypaper is the sensing element. It’s very sensitive — about eight times more sensitive than commercial sensors that are usually just made out of metallic prints. Its function is to provide the change in resistance and conductivity. The whole point of this is to have a low-profile sensor that’s scalable, so we can print these in large quantities in continuous fashion.

Dr. Liang: If you want a wearable sensor for everybody, scalability and affordability are key. If somebody wants to cover the whole human body with sensors, we can do it in a very affordable way. We don’t need a very expensive 3D printing machine. We don’t need to use very special conducting ink. You can make 100 sensors a day; that’s a very unique scalability.

Tech Briefs: What applications do you envision right away? What applications do you envision in the future?

DeGraff: Right away, I see the sensors in wearable technology. We were able to integrate our sensors into gloves. The sensors detect very small finger movements, and also large bending movements.

I also see the sensors in structural health monitoring soon. The sensors can basically detect “invisible” deformations and microstrains that we can’t see with the naked eye. They’re affordable, and we can use them to create sensor arrays. We’re doing more lifecycle tests, especially with structural health monitoring with carbon-fiber composites. Down the road, when we figure out how to integrate the sensor with the artificial muscles, we may see them in microrobotics and soft robotic systems.

Tech Briefs: Has buckypaper been used before as a sensor material? Is this a novel approach?

DeGraff: Buckypaper has been used in other sensors before, but the problem is the way those sensors are manufactured and commercialized. You want it to be a scalable process. You also want to have the mechanical properties, so you can have a highly sensitive sensor. We have a mixture of both here, and that’s why we have such a high gauge factor [how much resistance value changes as a material is strained or bent]. We can print out lots of sensors at a time, and we can even tailor them to different applications.
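The gauge factor DeGraff cites is simple to compute, and a tiny sketch shows what the bracketed definition means in practice. The numbers below are hypothetical, chosen only to illustrate the formula; they are not measurements from this sensor:

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor: fractional change in resistance per unit strain."""
    return (delta_r / r0) / strain

# Hypothetical example: a 1000-ohm sensor whose resistance rises
# by 4 ohms under 0.1% strain.
gf = gauge_factor(delta_r=4.0, r0=1000.0, strain=0.001)
print(gf)  # 4.0
```

A higher gauge factor means a larger resistance swing for the same strain, which is why DeGraff points to it as the headline sensitivity figure.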

Tech Briefs: What’s most exciting to you about this sensor?

DeGraff: I like the fact that it has a wider range of applications, and we can help people out in their daily lives, and not in just one sector, like aerospace. When it comes to wearable technology, athletes can track how intense their workouts are.

You can count your steps. You can have bed sheets that can tell, by your movements, how well you’re sleeping. You can help people who have carpal tunnel syndrome, who are going through treatment, and who need to know how well their self-rehabilitation is going. It can go from there to the structural health monitoring and the detecting of vibrations in buildings.

Tech Briefs: Do you have any advice for fellow engineers and sensor researchers?

DeGraff: If you have an idea and you have the materials, just try it. Instead of wondering whether or not it will work out, or reading if somebody else did it before, take the initiative; try things out yourself and see how it works out for you.

What do you think? Will scalable motion sensors improve adoption of wearables? Share your thoughts below.

from NASA Tech Briefs http://ift.tt/2AbqOYd
via IFTTT

Alex, the French Cooking Guy, Feeds His Ramen Noodle Jones

Make: contributor, mad food scientist and chef Alex, of French Guy Cooking, has been posting another one of his video series in which he obsessively explores a kitchen-related topic. This time it’s ramen noodles: how to cook them, how to make them from scratch, how to make a suitable broth, and how to produce your own dried noodles to store up for that next all-night hackathon.

from MAKE http://ift.tt/2zw9Pwp

YouTube’s Creepy Kid Problem Was Worse Than We Thought

Image: YouTube / Pexels / Gizmodo

YouTube says that it’s removed ads from some 2 million videos and over 50,000 channels that featured disturbing content aimed at kids. Some of that content actually exploited children in videos. And while we’ve long known that YouTube struggles to keep bad stuff off of its platform, the fact that tens of thousands of channels were involved in doing bad things to children is chilling.

The public outcry over YouTube’s creepy kid videos started a few weeks ago. Several reports highlighted seemingly kid-friendly videos that depicted scenes of Disney characters in bikinis flirting with other popular children’s characters and other generally inappropriate themes. The issue was compounded by disturbing videos of cartoon characters dealing with themes like torture and suicide popping up in the YouTube Kids app. There was also a rash of videos that showed kids being tied up, apparently hurt, or otherwise engaged in exploitative situations.

YouTube quickly addressed the issue by announcing plans to age-restrict and demonetize these kinds of videos. The company went beyond that and told Vice News that it “terminated more than 270 accounts and removed over 150,000 videos” as well as “turned off comments on over 625,000 videos targeted by child predators.” Additionally, YouTube says it “removed ads from nearly 2 million videos and over 50,000 channels masquerading as family-friendly content.”

News of YouTube’s action against millions of disturbing videos came on the heels of a separate but related controversy. Over the weekend, a number of people reported that typing “how to have” into the YouTube search bar prompted autofill suggestions that include phrases like “how to have s*x with kids” and “how to have s*x in school.” Those searches led to videos with titles like “Inappropriate Games You Played as a Kid” featuring a provocative thumbnail of young people kissing and “School Is Hard and So Is Your Math Teacher” with an image of a crying girl being touched by an older man. Those videos have 23 million and 117 million views, respectively, so it’s not hard to imagine why they showed up at the top of the search results.

YouTube says it has removed these vile suggested searches, which is good. But the persistence of these kinds of problems raises larger questions about YouTube’s capacity for moderating creepy content. I’m not talking about “spookin in the wrong neighborhood” or any other fun but weird but also slightly dark videos. I’m talking about the stuff that targets kids and appeals to bad people, like pedophiles.

The problem with all of these videos is how borderline they appear to be—even to humans. And yet, YouTube primarily depends on algorithms and filters to keep bad content off its platform. Videos and channels are removed by human moderators, but only after they’re flagged by users. In the meantime, you have disturbing autofill suggestions like “how to have s*x in school” showing up for everyone, as well as the countless questionable videos to which these searches lead. Removing all of these suggestions and videos seems like an impossible task, especially since over 400 hours of content are uploaded to YouTube every minute.
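The upload figure alone explains why purely manual review is off the table; a one-line calculation (our arithmetic, using the article's 400-hours-per-minute number) gives the daily volume:

```python
# 400 hours of video uploaded per minute, per the article.
hours_per_day = 400 * 60 * 24
print(hours_per_day)  # 576000
```

That is 576,000 hours of new content every day, far more than any human moderation team could screen end to end.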

The thing is, algorithms are inherently imperfect. Computers have a hard time identifying the uncanny valley that separates an innocuous video from one that’s entirely inappropriate. Sure, videos that violate YouTube’s terms of service—stuff that’s copyrighted, gruesome, illegal, full of nudity, or exploitative of children—can get flagged and removed. YouTube also now uses algorithmic filters to catch some of these videos before they’re published. The system isn’t perfect, but YouTube seems committed to it.

It’s inevitably hard to point fingers. Is it a tech company’s fault that humans are awful and abusive and exploitative? Of course not. YouTube does shoulder a tremendous burden when it comes to deciding how to let the right videos in and keep the bad ones out. Few are surprised when the platform fails to catch every creepy video. But the creepy videos are still a problem. Whether that says more about YouTube’s limitations or our own perversions remains to be seen.

from Gizmodo http://ift.tt/2AdazcE

Photoshop uses AI to make selecting people less of a hassle

Masking a human or other subject out of a scene is a pretty common trick nowadays, but it’s still arguably one of the hardest and lowest-tech parts of Photoshop. Adobe’s about to make that a lot easier, thanks to an upcoming AI-powered feature called Select Subject. Using it is pretty much idiot-proof: From the main or "Select and Mask" workspaces, you just need to click anywhere on the image, and it’ll automatically select the subject or subjects in the image. From there, you’re free to change the background or tweak the subject separately.

The tech is powered by Adobe’s AI platform, Sensei. "Complicated details around the subject aren’t an issue, because this feature is using machine learning to recognize the objects," Adobe Photoshop Product Manager Meredith Payne Stotzner says in the YouTube video (below). During the demo, she uses it to select a single person on the street, a group of volleyball players, a couple on the beach and a red panda.

In some cases, details like hair and fur aren’t properly selected, but using the tool would certainly give you a big head start. It pretty much eliminates the tedious hand-drawn or tweaked masking process, letting you focus on the fine details. Since it uses machine learning tech, it should also get better over time.
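Adobe hasn’t published how Select Subject works internally, and Sensei is proprietary. As a rough illustration of the step such a tool automates, the sketch below composites a subject onto a new background once a per-pixel mask already exists; the function name and the ready-made mask are our assumptions, not Adobe’s API:

```python
import numpy as np

def apply_mask(image, mask, background):
    """Composite the masked subject onto a new background.

    mask holds 1.0 where a model predicts "subject" and 0.0
    elsewhere; real mattes use fractional values at edges
    (hair, fur), which is exactly where Select Subject
    reportedly still struggles.
    """
    mask = mask[..., np.newaxis]  # broadcast over color channels
    blended = image * mask + background * (1.0 - mask)
    return blended.astype(image.dtype)
```

The hard part, of course, is producing the mask in the first place; that is the piece the machine learning model takes over from hand-drawn selection.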

There are already plenty of Photoshop plugins like Akvis SmartMask and Fluid Mask that can do something similar to Select Subject. However, it’ll be nice to have such a feature as part of Photoshop, rather than paying extra for a plugin. And the new feature is more than just a technology "sneak" — it’s an actual feature coming in a future Photoshop build. Adobe has yet to say exactly when it’ll arrive, however.

Source: Adobe (YouTube)

from Engadget http://ift.tt/2ncHIAj

Airbnb now lets you split the cost of rentals

Airbnb is taking a page out of Uber’s playbook.

The short-term rental startup has announced it will now let users split payments on the platform.

Airbnb users can split the cost of a listing with up to 16 people. Previously, the trip organizer would have to pay for the entire cost of the stay upfront.

Payments are split evenly by default. However, travelers can choose to pay for more than one person.
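CNNMoney doesn’t describe how Airbnb allocates the shares, so the sketch below is only an illustration of the even-split default: dividing in whole cents and letting the first few payers absorb the remainder so the shares always sum to the total.

```python
def split_evenly(total_cents, num_payers):
    """Split a charge as evenly as possible in whole cents."""
    base, remainder = divmod(total_cents, num_payers)
    # The first `remainder` payers each pay one extra cent.
    return [base + 1 if i < remainder else base
            for i in range(num_payers)]

# A hypothetical $500.00 stay split among 3 travelers.
print(split_evenly(50000, 3))  # [16667, 16667, 16666]
```

Working in integer cents sidesteps floating-point rounding, which matters when the shares have to reconcile exactly against the charged total.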

Related: Airbnb-branded apartment building to open near Walt Disney World

Last year, Airbnb CEO Brian Chesky asked users on Twitter what feature they’d most like to see in 2017. One of the top requests was the ability to split payments.

The tool will launch worldwide on Tuesday, with the exception of India and China. The current payment rules and regulations in those countries would make it difficult to roll out the feature, according to an Airbnb spokesman.

The option is part of Airbnb’s broader effort to attract more users. The startup recently tested the feature among 80,000 groups across 175 countries, and said 30% of reservations booked during the test led to one or more new users joining Airbnb.

But it’s hardly a new concept. Ride-hailing startups Uber and Lyft both offer a split fare option. Uber said its split fare tool is used during “hundreds of thousands” of rides every month, making it one of its more popular features.

Related: You can now book a restaurant reservation on Airbnb

The tool comes a few months after the company rolled out restaurant reservations within its app. Meanwhile, Airbnb announced earlier this month it is acquiring startup Accomable to help connect disabled travelers with more accessible listings.

The platform is also trying to expand beyond its business model. Earlier this year, it announced plans to build a 324-unit apartment building in Kissimmee, Florida in early 2018.

from Business and financial news – CNNMoney.com http://ift.tt/2jtJ0Sq

Google Waymo’s autonomous cars have driven 4 million miles on public roads

Lest anyone think that Waymo hasn’t been preparing to launch its own autonomous ride-sharing service at some point, the Google spinoff just announced that its self-driving cars have driven a collective 4 million miles on public roads. But it’s not just the milestone the company is celebrating, it’s the pace: it took the company 18 months to reach 1 million miles, then 14 months to reach 2 million, then eight months to reach 3 million, and finally just six months to reach the 4 million mile marker.
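The interval figures translate directly into an accelerating rate; the quick arithmetic below (ours, from the months quoted in the article) shows roughly how many miles per month each stretch represents:

```python
# Months Waymo took to log each successive million miles.
intervals_months = [18, 14, 8, 6]
miles_per_month = [round(1_000_000 / m) for m in intervals_months]
print(miles_per_month)  # [55556, 71429, 125000, 166667]
```

By this estimate, the fleet’s monthly pace roughly tripled between the first million miles and the fourth.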

While these have been on public roads, they’ve been restricted to cities in California’s Bay Area and around Phoenix, Arizona, with some testing in Austin, Texas and Kirkland, Washington. Waymo had been refining its autonomous tech in partnership with Lyft, though it’s unclear how much influence that has had.

In addition to real driving, the company has put its systems through 2.5 billion simulated miles in the last year, Waymo’s blog post boasted. With all that experience, the company says it’s been able to teach its vehicles enough to pull off “full autonomy,” and hinted that the public would soon “get to use Waymo’s driverless service to go to work, to school, to the grocery store and more.”

Waymo Blog

Written by David Lamb for Engadget.

from Autoblog http://ift.tt/2AEXVEm