What Is Wi-Fi 6 and When Will I Get It?

https://www.wired.com/story/what-is-wi-fi-6

In January of this year, at the annual multithousand-square-foot madhouse of consumer electronics in Las Vegas, manufacturers started slipping a new claim into their spec sheets: Supports Wi-Fi 6. New laptops and routers from HP, Dell, Asus—they would all support this new standard. The following month, when Samsung revealed its Galaxy S10 smartphone, it listed Wi-Fi 6 support among the many whiz-bang features of the fancy phone. “Wi-Fi 6” was now being included in a flagship product.

So … what is this new standard that everyone’s pledging to support? Wi-Fi 6 is the latest generation of wireless connectivity technology. It hasn’t really launched yet, but it will soon, so tech makers have been building support into devices this year as a means of future-proofing their products.

As with most new standards, its stewards say that Wi-Fi 6 will ultimately make our tech lives better and faster. That’s probably true. But keep in mind that the main objective with the launch of Wi-Fi 6 is to increase the performance and reliability of wireless connectivity at a network level, not necessarily on a single device or at a single access point. Sure, your Roku and your Nintendo Switch will see wireless speed gains, but a lot of the new computational intelligence behind Wi-Fi 6 will be devoted to handling streaming to multiple gadgets at once. It’s Wi-Fi for a world crowded with mobile gadgets, IoT devices, and connected equipment.

The Basics

The standards for Wi-Fi are established by the Institute of Electrical and Electronics Engineers, or IEEE, and devices are certified for these new standards by the Wi-Fi Alliance, which lists over 800 companies as sponsors or contributors. The list includes Apple, Microsoft, Google, Facebook, Intel, Qualcomm, Broadcom, Samsung, LG Electronics, and, well, hundreds more.

These groups lay the foundation for new radio technologies every five years or so, which means Wi-Fi 6 has been in the works since the last standard was released in 2014. The wireless networking standard most of us use today is referred to as IEEE 802.11ac. The upcoming standard is called IEEE 802.11ax.

But you can just call it Wi-Fi 6. That simplified moniker actually represents a change in how the Wi-Fi Alliance is branding these standards. Every Wi-Fi standard will get named in sequence from now on—especially nice since Wi-Fi 6 certainly rolls off the tongue a lot more easily than “802.11ax.”

“We decided to change the paradigm with Wi-Fi 6,” says Edgar Figueroa, the president and CEO of the Wi-Fi Alliance. “We’re done talking about technologies. Now we’re talking about generations.” So “Wi-Fi 6” will be used more broadly to describe which generation of Wi-Fi network you’re connected to, as will “Wi-Fi 4” and “Wi-Fi 5.” Meanwhile, Wi-Fi Certified 6 will refer to the certification program that device makers have to go through.

How It’s Different

Wi-Fi 6 is expected to usher in the first major update to dual-band support since the 2009 rollout of Wi-Fi 802.11n—or Wi-Fi 4, since we’re calling it that now. Wi-Fi 4 operates on both 2.4-GHz and 5-GHz bands. Wi-Fi 5, née 802.11ac, only uses bands in the 5-GHz spectrum. Wi-Fi 6, or 802.11ax, is supposed to optimize for the transmission frequencies of both 2.4-GHz and 5-GHz bands. Two of its marquee features are multi-user, multiple-input, multiple-output technology (MU-MIMO), and something called Orthogonal Frequency Division Multiple Access (OFDMA).

What the what?

Basically, this tech enables more devices to simultaneously operate on the same Wi-Fi channel, which improves the efficiency, latency, and data throughput of your wireless network. And while Wi-Fi 6 is designed to improve the performance of Wi-Fi networks on the whole, on your own device you might experience up to four times the capacity and four times the data throughput (the amount of data moved from one point to another) that you would with older wireless network standards. Figueroa says this could mean a throughput of 9 to 10 gigabits per second in optimal conditions.
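
To see where that headline number comes from, here is a back-of-the-envelope calculation. The parameters below (a 160-MHz channel, 1024-QAM, a 5/6 coding rate, 13.6-microsecond symbols, and eight spatial streams) are commonly cited maximums for 802.11ax rather than figures from this article, so treat this as a rough sketch in Python, not a spec citation.

    # Rough sketch of the theoretical 802.11ax peak rate, using commonly
    # cited PHY maximums (assumptions, not figures from the article).
    DATA_SUBCARRIERS_160MHZ = 1960   # usable data tones in a 160-MHz channel
    BITS_PER_SYMBOL_1024QAM = 10     # 1024-QAM carries 10 bits per tone
    CODING_RATE = 5 / 6              # highest coding rate in the standard
    SYMBOL_TIME_S = 13.6e-6          # 12.8-us OFDM symbol + 0.8-us guard interval
    MAX_SPATIAL_STREAMS = 8

    per_stream_bps = (DATA_SUBCARRIERS_160MHZ * BITS_PER_SYMBOL_1024QAM
                      * CODING_RATE / SYMBOL_TIME_S)
    peak_bps = per_stream_bps * MAX_SPATIAL_STREAMS

    print(f"Per stream: {per_stream_bps / 1e6:.0f} Mbps")  # ~1201 Mbps
    print(f"Peak rate: {peak_bps / 1e9:.1f} Gbps")         # ~9.6 Gbps

Multiplied out, that lands right around 9.6 Gbps, which is where the "9 to 10 gigabits per second in optimal conditions" figure comes from; real-world throughput, shared among many clients on imperfect channels, will be far lower.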

via Wired Top Stories https://ift.tt/2uc60ci

August 29, 2019 at 06:06AM

Georgia Tech researchers teach robots to be mechanical MacGyvers

https://www.engadget.com/2019/08/28/georgia-tech-researchers-teach-robots-to-be-mechanical-macgyvers/

There weren’t many sticky situations that ’80s television action hero MacGyver couldn’t slip out of with the help of a disposable lighter, penknife and two tabs of Alka Seltzer. His unconventional application of scientific principles and outside-the-box thinking were what made him such a formidable opponent week after week. Now, a team of researchers from Georgia Tech’s RAIL research lab is working to impart those same survival skills to robots.

The RAIL team, led by PhD student Lakshmi Nair, focused its research on teaching a robot to fabricate tools — hammers, screwdrivers, ladles and the like — out of whatever materials are handy. "So if a robot needs to solve the task that uses a hammer," Nair explained to Engadget. "They can combine a stick and a stone, for instance, to be able to make a hammer."

But this is much more than just showing the robot a picture of a hammer and telling it to make something that looks like it. "It’s situation-specific," Nair said. "So given a particular situation, if it wants to hammer something, it figures out which objects to put together. So you’re not giving it a specific example of a hammer, just telling it what the situation is."

Once the robot knows what it needs to do, it will evaluate the materials in its workspace based on their shape and how they can be attached to one another. By leveraging supervised machine learning, the RAIL team taught the system to match objects by their relative shapes and perceived functions.

"Essentially, the robot is taught to match form to function so it learns things like the concavity of bowls enables it to hold liquids, for instance," Nair said. The system is fed labeled examples of everyday objects so that when presented with a new set of items that it hasn’t seen before, it can use what it learned beforehand to reason about the unknown objects in front of it. Cups and tongs combine to create ladles, those same tongs and a coin become an improvised flathead screwdriver while piercing a rectangular foam block with a poker makes a DIY squeegee.

Although the robot does use a handheld spectrometer to determine whether an item is pierceable, it can’t effectively determine the material properties of what it’s looking at, which is why it fabricated a number of hammers out of foam blocks. "Right now, it just looks at shape as one of the main ways to reason about which part to use for the construction of the tool," Nair explained. "Some of our future ongoing work looks at incorporating material reasoning so that it would construct hammers or more sturdy material that can then be used in the actual application."

Nair’s team is not alone in its robotic tool building research. In fact, in 2017 a team from Tufts University suggested The MacGyver Test as a means of evaluating a robot’s resourcefulness and creativity — a practical alternative to the Turing Test, which scores a robot’s loquaciousness.

"The proposed evaluation framework," the Tuft’s team wrote, "based on the idea of MacGyver-esque creativity, is intended to answer the question whether embodied machines can generate, execute and learn strategies for identifying and solving seemingly unsolvable real-world problems."

But does that constitute cleverness or resourcefulness? In the TV show, MacGyver’s gadgets and contraptions were so effective because his plans exploited some underlying scientific or physics principle. Professor Nathan Michael, director of the RISLab at Carnegie Mellon University, argues that these systems are performing the same creative task as Richard Dean Anderson’s character.

"A lot of the problem-solving challenges that arise when systems are engaging in executive-level task planning share a lot of similarities to this idea of building tools," he told Engadget. "In fact, the underlying algorithmic framework employed for the idea of problem-solving itself is one basically trying to optimize or find a solution within a particular set of constraints subject to a particular condition."

The way that these problems are handled depends on the chosen methodology, which in turn "depends largely on the nature of the problem," he continued. "What we’re talking about here is kind of like a larger problem of reasoning. The question of cleverness or resourcefulness really comes down to, within that context, the ability for the system to quantify that which it can or cannot do, those resources that it can or cannot leverage, and its ability to find a way to solve that particular problem through the application of those resources."

There are plenty of places in the real world where such systems can find use, Nair said: "any application that involves repair and improvisation." In addition to helping out with light home repairs, Nair anticipates that this technology will make it into space before too long. "You have space exploration where you could potentially send these robots in beforehand and then have them use the resources available to construct habitats before humans get there," she continued. However, the system still faces a number of technological challenges, from clunky manipulator arms to cameras that can’t reliably detect metallic items, before we see it in space.

via Engadget http://www.engadget.com

August 28, 2019 at 12:06PM

Unix at 50: How the OS that powered smartphones started from failure

https://arstechnica.com/?p=1489117

Ken Thompson (sitting) and Dennis Ritchie (standing) in front of a PDP-11. Ritchie annotated this press image for Bell Labs as “an amusing photo” (https://www.bell-labs.com/usr/dmr/www/picture.html), and he joked that he had much “more luxuriant and darker hair” at the time of the photo than when it appeared in magazines like the March 1999 Scientific American (which, unfortunately, incorrectly swapped IDs for the two).

Maybe its pervasiveness has long obscured its origins. But Unix, the operating system that in one derivative or another powers nearly all smartphones sold worldwide, was born 50 years ago from the failure of an ambitious project that involved titans like Bell Labs, GE, and MIT. Largely the brainchild of a few programmers at Bell Labs, the unlikely story of Unix begins with a meeting on the top floor of an otherwise unremarkable annex at the sprawling Bell Labs complex in Murray Hill, New Jersey.

It was a bright, cold Monday, the last day of March 1969, and the computer sciences department was hosting distinguished guests: Bill Baker, a Bell Labs vice president, and Ed David, the director of research. Baker was about to pull the plug on Multics (a condensed form of MULTiplexed Information and Computing Service), a software project that the computer sciences department had been working on for four years. Multics was two years overdue, way over budget, and functional only in the loosest possible understanding of the term.

Trying to put the best spin possible on what was clearly an abject failure, Baker gave a speech in which he claimed that Bell Labs had accomplished everything it was trying to accomplish in Multics and that they no longer needed to work on the project. As Berk Tague, a staffer present at the meeting, later told Princeton University, “Like Vietnam, he declared victory and got out of Multics.”

Within the department, this announcement was hardly unexpected. The programmers were acutely aware of the various issues with both the scope of the project and the computer they had been asked to build it for.

Still, it was something to work on, and as long as Bell Labs was working on Multics, they would also have a seven-million-dollar mainframe computer to play around with in their spare time. Dennis Ritchie, one of the programmers working on Multics, later said they all felt some stake in the success of the project, even though they knew the odds of that success were exceedingly remote.

The cancellation of Multics meant the end of the only project that the programmers in the computer sciences department had to work on—and it also meant the loss of the department’s only computer. After the GE 645 mainframe was taken apart and hauled off, the department’s resources were reduced to little more than office supplies and a few terminals.

As Ken Thompson, another programmer working on the project, wryly observed for the Unix Oral History project, “Our personal way of life was going to go much more spartan.”

Luckily for computer enthusiasts, constraint can at times lead to immense creativity. And so the most influential operating system ever written was not funded by venture capitalists, and the people who wrote it didn’t become billionaires because of it. Unix came about because Bell Labs hired smart people and gave them the freedom to amuse themselves, trusting that their projects would be useful more often than not. Before Unix, researchers at Bell Labs had already invented the transistor and the laser, as well as any number of innovations in computer graphics, speech synthesis, and speech recognition.

Make way for Multics

Multics had started off hopefully enough, although even at first glance its goals were a bit vaguely stated and somewhat extravagant.

A collaboration involving GE, MIT, and Bell Labs, Multics was promoted as a project that would turn computing power into something as easy to access as electricity or phone service. Bell Labs researchers would have a jack in their office that would connect their terminal to the Multics mainframe, and they would be able to access—in real time—the mainframe’s entire resources. They would also be able to store files on the mainframe and retrieve them at will.

If all this sounds incredibly trivial, it’s evidence of how important these features rapidly became—even for simple computing tasks. But when Multics was first conceived in the early ’60s, file storage was a novelty, and “time sharing” (or the ability for multiple users to share access to a single computer’s resources) had only been done experimentally, not in a production environment with a high number of users.

Computers in the early 1960s ran programs one at a time, one after the other. A researcher at Bell Labs would write a program, convert it into whatever form of input the computer accepted (punch cards, paper tape, or magnetic media for really fancy machines), and drop it off at the computer center. A computer operator would queue up the program, run it, and then deliver the printed results and the original program to the researcher.

If there was a mistake in the code, the hassle of printing out punch cards, taking them down to the computer center, and then waiting around for results was rewarded with a printout saying something like “SYNTAX ERROR.” Perhaps you might also get a line reference or some other possibly helpful information.

As programs became more complicated, this method of debugging code became even more frustrating than it already was. But no company or university, even Bell Labs, was in a position to buy a mainframe for each individual researcher—in 1965, the GE 645 that Bell Labs used to develop Multics cost almost as much as a Boeing 737.

Thus, there was widespread interest in time sharing, which allowed multiple researchers to run programs on the mainframe at the same time, getting results immediately on their remote terminals. With time sharing, programs weren’t printed off on punch cards; they were written and stored on the mainframe. In theory, researchers could write, edit, and run their programs on the fly and without leaving their offices. Multics was conceived with that goal in mind. It kicked off in 1964 and had an initial delivery deadline of 1967.

MIT, where a primitive time sharing system called CTSS had already been developed and was in use, would provide the specs, GE would provide the hardware, and GE and Bell Labs would split the programming tasks.

via Ars Technica https://arstechnica.com

August 29, 2019 at 07:04AM

Duped In The Deli Aisle? ‘No Nitrates Added’ Labels Are Often Misleading

https://www.npr.org/sections/thesalt/2019/08/29/755115208/duped-in-the-deli-aisle-no-nitrates-added-labels-are-often-misleading?utm_medium=RSS&utm_campaign=news

Consumer groups are urging the USDA to change labeling rules for processed meats. They argue that "uncured" and "no nitrates added" labels may falsely lead people to believe these meats are healthier.

(Image credit: Foodcollection/Getty Images/Foodcollection)

via NPR Topics: News https://ift.tt/2m0CM10

August 29, 2019 at 04:11AM

Aptera returns with plans for 1,000-mile EV

https://www.autoblog.com/2019/08/28/aptera-3-wheel-electric-vehicle-revival/

A long time ago — especially in the relative timeframe of the modern EV — there was a promising, über-efficient electric car being developed by a company called Aptera. The three-wheeled, two-seat pod looked like the fuselage of a small plane. Time went on, and in 2011 Aptera began refunding deposits for the vehicle, called the 2e, before going bankrupt and closing up shop. The assets were bought by a Chinese company, Zap Jonway, and then the whole story kind of died off. We figured that’d be the end of it.

To the contrary, says an exclusive article from IEEE Spectrum. Aptera’s founders have reconvened, it says, and bought back its intellectual property in order to relaunch the brand and the Aptera 2e EV. The three reunited founders, Chris Anthony, Steve Fambro and Michael Johnson, told the publication they plan to build the world’s most efficient EV. To do that, they’ve completely updated the old design, relying on years of new manufacturing know-how (including additive manufacturing), improved materials and technology, and a more robust supply chain.

The new Aptera electric vehicle would use 50-kW in-wheel motors, likely at all three wheels (though it’ll test a two-motor setup as well). Battery packs will range from 40 to 100 kWh, which, in an aerodynamic, lightweight car (the 60-kWh version would weigh about 1,800 pounds), would mean up to 1,000 miles of range in the highest configuration.
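
As a quick sanity check on what those claims imply, the arithmetic below works out the consumption the 1,000-mile figure requires; the "typical EV" comparison number is a ballpark assumption of ours, not something from Aptera or the article.

    # What Aptera's claimed pack size and range imply about efficiency.
    battery_kwh = 100            # largest pack mentioned
    claimed_range_miles = 1000   # headline range for that pack

    implied_wh_per_mile = battery_kwh * 1000 / claimed_range_miles
    print(f"Implied consumption: {implied_wh_per_mile:.0f} Wh/mile")   # 100 Wh/mile

    typical_sedan_wh_per_mile = 275   # ballpark assumption for a mainstream EV sedan
    ratio = typical_sedan_wh_per_mile / implied_wh_per_mile
    print(f"Roughly {ratio:.1f}x as efficient as a typical EV")        # ~2.8x

In other words, 1,000 miles from 100 kWh means drawing only about 100 Wh per mile, roughly a third of what mainstream electric sedans consume, which is why the lightweight, ultra-aerodynamic three-wheel form factor is doing most of the work here.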

Aptera is still in the early stages of its second life. Now, it needs working prototypes, which require funding. It has just launched a crowdfunding campaign — with a $1,000 investment earning a spot in the reservation list — and is in talks with more traditional investors as well, in an effort to raise $2.5 million. With that, it’ll build three prototypes, with a potential unveiling next year. Aptera is also considering building a six-seat autonomous vehicle in the future, but first things first … again.

via Autoblog https://ift.tt/1afPJWx

August 28, 2019 at 03:44PM