Ukraine Army Using Steam Decks To Shoot Real Turrets In War With Russia

https://www.gamespot.com/articles/ukraine-army-using-steam-decks-to-shoot-real-turrets-in-war-with-russia/1100-6513768/


New footage shows Ukrainian military personnel piloting a remote-control turret with nothing other than a Steam Deck. As reported by PC Gamer, footage of the Steam Deck being used by the Ukrainian military first emerged via TRO Media on Instagram. Later footage, likely from the same event, appeared showing the turret explicitly being controlled by the portable PC.

The turret shown in the footage is a Shablya model, developed by Ukrainian firm Global Dynamics. A crowdfunding campaign via People’s Project raised 445,000 UAH (about $12,000) to supply 10 of the remote weapon stations to the Ukrainian armed forces.

Why use a video game device to control a remote turret? According to Bellingcat researcher Aric Toler, as quoted by PC Gamer, the device is perfect for this kind of usage. Toler said, “Totally native OS client, great controller you can use, touch screen, etc. It makes perfect sense for Steam Deck to be used, assuming the software is Linux-compatible (unless they went through the godawful process of dual-booting Windows on a Steam Deck).”

The relationship between video games and military technology goes back to the medium’s origins. The first flight simulators were developed for training military pilots. This is not the first time gaming controllers have been used for military purposes. Xbox controllers have been used to control submarines, photonics masts, and even giant laser cannons. On the more absurd side of these connections, War Thunder players leaked classified military documents in an effort to correct perceived inaccuracies in the combat-vehicle-themed game.

Got a news tip or want to contact us directly? Email news@gamespot.com

via GameSpot’s PC Reviews https://ift.tt/kdMeLbD

May 2, 2023 at 11:42AM

Daimler Trucks launches Rizon electric medium-duty truck in U.S.

https://www.autoblog.com/2023/04/28/daimler-medium-duty-electric-truck-mitsubishi-fuso-ecanter/


Daimler Trucks, which already owns Freightliner and Western Star, is putting a new brand on the U.S. market later this year. Dubbed Rizon, the medium-duty Class 4 and Class 5 haulers are battery-electric vehicles ranging from 15,995 to 17,995 pounds in gross vehicle weight. These are the kinds of city workhorses usually put to work as box trucks and refrigerated trucks for delivery and municipal fleets, dump trucks, and flatbeds. Daimler hasn’t addressed the source of Rizon’s line yet, but Freightwaves suspects these are rebadged versions of Japanese truckmaker Mitsubishi Fuso’s eCanter. Daimler Trucks owns 89.3% of Mitsubishi Fuso Truck and Bus Corporation, which launched the latest version of the eCanter for Europe last September. The additions expand Daimler Trucks’ commercial e-footprint, along with the Class 6 Freightliner eM2 106 and the Class 8 Freightliner eCascadia.

Three Rizon models will go on sale through a network of U.S. dealers in Q4, the e16M, e16L and e18L. The M model runs with two 83-kWh battery packs providing a range from 75 to 100 miles. The L models boast three larger 124-kWh packs extending range to anywhere from 110 to 160 miles on a charge. Daimler chose lithium iron phosphate (LFP) batteries for their extended durability and reliability compared to other chemistries. The company will warranty the high-voltage packs for 5 years or 120,000 miles, and the rest of the truck and powertrain for 5 years or 75,000 miles. The company says charging using a Level 2 AC system refills the battery in “five to six hours,” whereas using the DC fast charge system “will result in a full charge in 45 to 90 minutes.”
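Those charging claims line up with simple arithmetic. A minimal sketch, assuming the L models’ three packs total roughly 124 kWh usable (the article’s phrasing could also be read as 124 kWh per pack) and assuming typical charger outputs, which the article does not specify:

```python
# Back-of-envelope charge-time estimates for the Rizon L models.
# Assumptions (not stated in the article): the three packs total
# ~124 kWh usable, Level 2 AC delivers ~22 kW, and DC fast charging
# averages ~124 kW across the session.

def hours_to_full(capacity_kwh: float, charger_kw: float) -> float:
    """Ideal charge time in hours, ignoring charging taper and losses."""
    return capacity_kwh / charger_kw

CAPACITY_KWH = 124.0  # assumed usable total for the L models

ac_hours = hours_to_full(CAPACITY_KWH, 22.0)          # ~5.6 h, inside the quoted "five to six hours"
dc_minutes = hours_to_full(CAPACITY_KWH, 124.0) * 60  # ~60 min, inside the quoted 45-90 minutes
```

Real-world times will vary with charging taper, temperature, and starting state of charge, but the quoted figures are at least internally consistent under these assumptions.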

To help the fleet customers that Rizon is targeting first, Daimler Truck and distribution partner Velocity will offer consultation on charging and telematics access, plus a sales force and technicians trained on the ins and outs of commercial electric vehicles.

Sales start in Southern California, New York and Texas. These are also the states where Daimler just announced the launch of its charging infrastructure joint venture Greenlane. Developed with NextEra Energy Resources and private equity firm BlackRock, the $650 million initiative will build out a charging network along popular freight routes for medium- and heavy-duty battery-electric trucks, and will also include refueling stations for hydrogen fuel-cell trucks. When up and running, dedicated software will include a commercial vehicle reservation platform to make charging more efficient even before plugging in. Eventually, the project wants to make space for light-duty vehicles after opening the way for electric trucks and then hydrogen trucks.

The Rizon makes its debut at next week’s Advanced Clean Transportation (ACT) Expo, running May 1-4 in Anaheim, California.


via Autoblog https://ift.tt/Suqdl9D

April 28, 2023 at 04:12PM

ChatGPT Answers Patients’ Online Questions Better Than Real Doctors, Study Finds

https://gizmodo.com/chatgpt-ai-doctor-patients-reddit-questions-answer-1850384628


AI may not replace your doctor anytime soon, but it will probably be answering their emails. A study published in JAMA Internal Medicine Friday examined questions from patients and found that ChatGPT provided better answers than human doctors roughly four out of five times: a panel of medical professionals evaluated the exchanges and preferred the AI’s response in 79% of cases. ChatGPT didn’t just provide higher quality answers; the panel concluded the AI was more empathetic, too. It’s a finding that could have major implications for the future of healthcare.


“There’s one area in public health where there’s more need than ever before, and that is people seeking medical advice. Doctors’ inboxes are filled to the brim after this transition to virtual care because of COVID-19,” said the study’s lead author, John W. Ayers, PhD, MA, vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Diseases and Global Public Health.

“Patient emails go unanswered or get poor responses, and providers get burnout and leave their jobs. With that in mind, I thought ‘How can I help in this scenario?’” Ayers said. “So we got this basket of real patient questions and real physician responses, and compared them with ChatGPT. When we did, ChatGPT won in a landslide.”

Questions from patients are hard to come by, but Ayers’s team found a novel solution. The study pulled from Reddit’s r/AskDocs, where doctors with verified credentials answer users’ medical questions. The study randomly collected 195 questions and answers, and then had ChatGPT answer the same questions. A panel of licensed healthcare professionals with backgrounds in internal medicine evaluated the exchanges. The panel first chose which response they thought was better, and then evaluated both the quality of the answers and the empathy or bedside manner provided.

The results were dramatic. ChatGPT’s answers were rated “good” or “very good” more than three times more often than doctors’ responses. The AI was rated “empathetic” or “very empathetic” almost 10 times more often.

The study’s authors say their work isn’t an argument in favor of ChatGPT over other AI tools, and they say that we don’t know enough about the risks and benefits for doctors to start using chatbots just yet.

Physicians showed an overwhelming preference for AI-written responses.
Graphic: Courtesy of John W. Ayers

“For some patients, this could save their lives,” Ayers said. For example, if you’re diagnosed with heart failure, it’s likely you’ll die within five years. “But we also know your likelihood of survival is higher if you have a high degree of compliance to clinical advice, such as restricting salt intake and taking your prescriptions. In that scenario, messages could help ensure compliance to that advice.”

The study says the medical community needs to move with caution. AI is progressing at an astonishing rate, and as the technology advances, so do the potential harms.

“The results are fascinating, if not all that surprising, and will certainly spur further much-needed research,” said Steven Lin, MD, executive director of the Stanford Healthcare AI Applied Research Team. However, Lin stressed that the JAMA study is far from definitive. For example, exchanges on Reddit don’t reflect the typical doctor-patient relationship in a clinical setting, and doctors with no therapeutic relationship with a patient have no particular reason to be empathetic or personalized in their responses. The results may also be skewed because the methodology for judging quality and empathy was simplistic, among other caveats.

Still, Lin said the study is encouraging, and highlights the enormous opportunity that chatbots pose for public health.

“There is tremendous potential for chatbots to assist clinicians when messaging with patients, by drafting a message based on a patient’s query for physicians or other clinical team members to edit,” Lin said. “The silent tsunami of patient messages flooding physicians’ inboxes is a very real, devastating problem.”

Doctors started playing around with ChatGPT almost as soon as it was released, and the chatbot shows a lot of potential for use in healthcare. But it’s hard to trust the AI’s responses, because in some cases, it lies. In another recent study, researchers had ChatGPT answer questions about preventing cardiovascular disease. The AI gave appropriate responses for 21 out of 25 questions. But ChatGPT made some serious mistakes, such as “firmly recommending” cardio and weightlifting, which can be unsafe for some people. In another example, a physician posted a TikTok about a conversation with ChatGPT where it made up a medical study. These problems could have deadly consequences if patients take a robot’s advice without input from a real doctor. OpenAI, the maker of ChatGPT, did not respond to a request for comment.

“We’re not saying we should just flip the switch and implement this. We need to pause and do the phase one studies, to evaluate the benefits and discover and mitigate the potential harms.” Ayers said. “That doesn’t mean we have to delay implementation for years. You could do the next phase and the next required study in a matter of months.”

There is no question that tools like ChatGPT will make their way into the world of medicine. It’s already started. In January, a mental health service called Koko tested GPT-3 on 4,000 people. The platform connects users to provide peer-to-peer support, and briefly let people harness an OpenAI-powered chatbot in their responses. Koko said the AI messages got an overwhelmingly positive response, but the company shut the experiment down after a few days because it “felt kind of sterile.”

That’s not to say tools like ChatGPT don’t have potential in this context. Designed properly, a system that uses AI chatbots in medicine “may even be a tool to combat the epidemic of misinformation and disinformation that is probably the single biggest threat to public health today,” Lin said. “Applied poorly, it may make misinformation and disinformation even more rampant.”

Reading through the data from the JAMA study, the results seem clear even to a layperson. For example, one man said he gets arm pain when he sneezes, and asked if that’s cause for alarm. The doctor answered “Basically, no.” ChatGPT gave a detailed, five paragraph response with possible causes for the pain and several recommendations, including:

“It is not uncommon for people to experience muscle aches or pains in various parts of their body after sneezing. Sneezing is a sudden, forceful reflex that can cause muscle contractions throughout the body. In most cases, muscle pain or discomfort after sneezing is temporary and not a cause for concern. However, if the pain is severe or persists for a long time, it is a good idea to consult a healthcare professional for further evaluation.”

Unsurprisingly, all of the panelists in the study preferred ChatGPT’s answer. (Gizmodo lightly edited the details in the doctor’s example above to protect privacy.)

“This doesn’t mean we should throw doctors out of the picture,” Ayers said. “But the ideal message generating candidate may be a doctor using an AI system.”

If (and inevitably when) ChatGPT starts helping doctors with their emails, it could be useful even if it isn’t providing medical advice. Utilizing ChatGPT, doctors could work faster, giving patients the information they need without having to worry about grammar and spelling. Ayers said a chatbot could also help doctors reach out to patients proactively with health care recommendations—like many did in the early stages of the pandemic—rather than waiting for patients to get in touch when they have a problem.

“This isn’t just going to change medicine and help physicians, it’s going to have a huge value for public health, we’re talking en masse, across the population,” Ayers said.


via Gizmodo https://gizmodo.com

April 28, 2023 at 10:30AM

Watch Hyundai e-Corner execute a real 90-degree crab walk and 180-degree turn

https://www.autoblog.com/2023/04/27/watch-hyundai-e-corner-execute-a-real-90-degree-crab-walk-and-180-degree-turn/


Hyundai Group technology division and auto supplier Hyundai Mobis is hitting its marks in development of its e-Corner module. The module packages a wheel’s suspension, braking, and steering necessities into a free-standing assembly connected at the corner of a vehicle to an in-wheel motor. A vehicle fitted with four e-Corner modules would look and run just like a traditional EV in everyday use, but all of its driving functions are by-wire, a central ECU ensuring the modules work together. The magic of the system is the degree of rotation allowed by omitting links like half-shafts and steering racks. The system’s been demonstrated on public roads around the company campus on a Hyundai Ioniq 5. With each module able to rotate 90 degrees off-axis, the resulting maneuvers possible for the Ioniq 5 prototype look like a TikTok video or science fiction, take your pick.

There are a few small giveaways that something is up with the test car: The modules jack the body up some and push the tires out a bit, causing poke and funky wheel arch alignment, and the wheel arches have additional cutouts to make room for full wheel rotation. By the time Hyundai Mobis achieves a production unit, planned for 2025, those minor issues should be easily addressed with a more compact module or a chassis design that accounts for the module.

A short video showcases several driving modes. Crab Driving turns all modules 90 degrees in the same direction, moving the Ioniq 5 sideways into a parallel parking spot. The Zero Turn is the same as we’ve seen from Rivian’s Tank Turn or the prototype electric G-Wagen. It rotates the modules 45 degrees, front wheels turning in, rear wheels turning out, so the vehicle can rotate in place. The Pivot Turn rotates the rear modules 90 degrees, turning the rear wheels plus one front wheel so that the car can pivot around a stationary front wheel as if that wheel is the fixed point of a compass. This could come in handy in cramped parking lots to avoid making 12-point turns getting into and out of a spot. Finally, there’s Diagonal Driving, which we’d recognize as Hummer’s Crab Walk. Side note since this will come up in some bar somewhere, marine crabs walk sideways or forward, not diagonally, so we gotta give Hyundai the nod for the decapod-appropriate term.

If all goes well with the business case and production plans, Hyundai Mobis wants to start taking orders in 2025. It will be developing a Purpose Built Vehicle — the autonomous living-room-on-wheels kind — as another showcase for the e-Corner. Check out the vid to see what our future living rooms on wheels will be able to do.

via Autoblog https://ift.tt/HBQTRiF

April 27, 2023 at 03:18PM

TikTok may have generative AI avatars soon

https://www.engadget.com/tiktok-may-have-generative-ai-avatars-soon-065038031.html?src=rss

TikTok may soon let you create AI-stylized avatars not unlike what you can make with deep learning apps like Midjourney or Lensa, according to a Twitter thread from social media guru Matt Navarra seen by The Verge. Called AI Avatars, the tool lets you upload three to 10 photos of yourself and choose from five art styles. It will then generate up to 30 separate avatars in a couple of minutes. You can then download one, several or all of the images to use as a profile picture or in stories.

Though the styles are more limited than what you can get on Lensa, the results look pretty good — so the feature is bound to be popular. Likely for that reason, TikTok will only let you use it once a day, presumably to avoid overloading servers.


Though generative AI images seem like harmless fun, they’re not without some controversy. For both Lensa and Midjourney, artists have complained that the AI has sampled their work and borrowed from it a bit too liberally at times. And earlier this year, Getty launched a lawsuit against Stable Diffusion claiming it was scraping its data to generate art. 

This article originally appeared on Engadget at https://ift.tt/6xZFRsf

via Engadget http://www.engadget.com

April 26, 2023 at 02:53AM

Zozofit’s capture suit takes the guesswork out of body measuring

https://www.engadget.com/zozofits-capture-suit-takes-the-guesswork-out-of-body-measuring-140006295.html?src=rss

I’ve developed an odd fascination with body-measuring technology, especially as it relates to the fashion world. Many companies are working on infrastructure that will hopefully one day let us buy clothes custom-tailored for the exact contours of our bodies. That should make people like me, who feel very under-served by the traditional fashion industry, a lot happier. It should also help to reduce the waste generated by the overproduction of clothes nobody wants to buy, which is a problem both for businesses and the planet. So, when Zozofit, makers of the Zozosuit, asked if I wanted to try its skin-tight body-measuring outfit, which has now been repurposed as a fitness tool, I agreed, albeit with my usual degree of trepidation.

The Zozosuit isn’t new, but its makers are using this year as a form of soft relaunch, with a new focus on breaking into the US. It was actually set up back in 2018 by Japanese high-end fashion retailer Zozo as a way of launching a custom-clothing line. Users bought the suit, scanned their bodies and then could order clothes that, on paper, were tailored to better suit their bodies. And while the clothes weren’t custom-made, the idea was that the outfits would be a better fit for them than the usual mass-produced stuff. But that idea, great in theory, didn’t necessarily shake out that well in practice.

Fashion Network said that the cost and complexity involved in launching the suit ate away at the company’s otherwise healthy profits. QZ reported that while people bought the suits, which were sold at a deep discount, few went on to purchase the custom threads as Zozo had planned. It got worse: many reporters who tested the system, like Gizmodo’s Ryan F. Mandelbaum and the Economist’s Charlie Wells, found the clothes they had ordered didn’t actually fit. A better suit with higher-resolution dots for imaging was developed, but the project was subsequently put on ice.

Since then, Zozo has tried to open up its technology to third parties, but has now pivoted the technology toward something more fitness-focused. Since it already had the tech to make a body-measuring suit, it might as well be put to good use, or so the thinking goes. A number of health and fitness professionals advocate that people looking to get fitter measure their bodies instead of stepping on the scale. So it makes sense for this to be offered as an elegant alternative to wrestling with a tape measure on a weekly basis.

Buying a Zozosuit is easy enough: just give it your weight in pounds, as well as your height in feet and inches, and cough up $98 plus tax. Not long after, you’ll get a slender package which contains a skinsuit made out of polyester and spandex. It looks very much like a motion capture suit commonly used in the production of visual effects, and functionally does the same job. The suit comes in two parts, and the app will give you guidance on how to wear it, making sure that the waistband is pulled up high and covered by the top. You’ll need to try and keep everything as flat as you can, since visible creases will prevent you from taking an accurate scan.


As a 5’11”, 231-pound man, I did wonder if Zozo would have a suit large enough to cater for my body shape. The website has images of much more athletically-adept models wearing its clothing and you may be concerned there’s no option for bigger-sized folks. The suit I tried on was tight, as intended, but didn’t feel restrictive, and I don’t think you should be nervous that the company can’t accommodate your needs. Other users in a similar situation have documented a similar experience, including YouTuber The Fabric Ninja, who produced a “Plus-Size Review” in 2020. That said, I don’t think I could pull this off as some form of athleisure fashion statement, for all of the reasons you can probably presume.

Inside the package is a cardboard phone stand, which you’ll need to pop out and fold into place to prop your smartphone on. The Zozofit uses your handset’s primary camera, so you’ll need to set it on a table and then stand six feet or so away from it. Once activated, you’ll get voice guidance talking you through the setup and measurement process, and you’ll be asked to hold your arms slightly away from your body. The coach will then ask you to turn to every position on the clock, taking 12 images as you shuffle around in a circle. Once completed, you’ll be notified that you can pick up your phone and then wait 30 seconds or so for the model to process.

And you’ll get a headless 3D-mesh model of your body with various measurements labeled off the sides. These include measurements for your upper arms, chest, waist and hips, upper thigh and your calves. After you’ve pawed at your vital statistics, you’ll be invited to set some fitness goals based on those initial measurements. Interestingly, these are capped, I suspect to nudge you toward smaller, more sustainable goals and away from disappointment. It measured my waist at 46.6 inches, and you can only set the goal in inch-wide increments down to 41.6 inches or up to 51.6 inches. This will change in a later update, but I appreciated the more realistic form of goal-setting it promises.

You’ll also get the app’s rough calculation of your body fat percentage, which it clocked at 35.6 percent. Not long after, I jumped on my smart scale, which registered me at 31.6 percent. I suspect the imaging might struggle to be as accurate on a larger body, and I’d wager body fat percentages aren’t so easily calculated by sight alone; perhaps Zozo could look to remove those measurements which aren’t as reliable. The gap also dents the PR braggadocio the company is putting out, claiming that this setup is the “world’s most accurate at-home 3D body scanner.” (It says it has compared its results to several rivals on the market, as well as professional hand-measurements.)


Now, the company says that its body fat measurements use the US Navy Body Fat system, which calculates your body fat based on a series of body measurements. That method was developed to create a quick-and-dirty measurement to determine if someone was fit for service. (In the process of researching this, I learned that personnel describe it as the “rope and choke,” which isn’t relevant, but thought you’d appreciate the slang.) The company’s representatives told me that it has found that curvier bodies are more likely to see less accurate results than thinner ones, and that it is working on its algorithms to improve this situation.
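For reference, the circumference-based Navy method is easy to reproduce. A minimal sketch of the commonly published formula for men, with one caveat: the method needs a neck measurement, which the article never gives, so the 17-inch neck below is purely hypothetical.

```python
import math

def navy_body_fat_male(waist_in: float, neck_in: float, height_in: float) -> float:
    """US Navy circumference method for men, all measurements in inches.

    Commonly published form of the tape-test formula; the official
    procedure also specifies exactly how and where each circumference
    is taken.
    """
    return (86.010 * math.log10(waist_in - neck_in)
            - 70.041 * math.log10(height_in)
            + 36.76)

# Using the waist (44.6") and height (5'11" = 71") from the article,
# with a hypothetical 17" neck measurement:
bf = navy_body_fat_male(waist_in=44.6, neck_in=17.0, height_in=71.0)  # roughly 31 percent
```

Circumference-based estimates like this typically carry several percentage points of error either way, which is consistent with the spread between the app's and the smart scale's readings.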

With any health-and-fitness technology, there’s a question of how much you can rely upon the accuracy of its measurements. Few consumer-level devices offer the same level of data quality you can get from a much more expensive clinical tool. Straight after my first scan, I ran a second, to see the sort of variation you can expect from an imaging-based measurement. The margin is fairly small, only a few tenths of an inch difference between each scan, which seems fair to me. I’d say, too, that what matters more with these sorts of tools is the trend and direction of travel, rather than obsessing over the pinpoint accuracy of each individual measurement.

And, to test that, as soon as I’d run my second scan (and changed back into normal clothes), I asked a friend to help measure me with a tailor’s tape. And there was a wider delta than I think some people might expect, especially if they’re in need of millimeter-perfect measurements. For instance, the app measured my chest at 43.4-inches, while the tape clocked it in at 44. My upper arms measured 14.5-inches, compared to 14.2 and 14.3-inches inside the app. With my waist and hips, the app said they were 44.6 and 45.3-inches, respectively, while the tape measure clocked them in at 44.5-inches and 47-inches.

Partially, I think these divergences are because computer imaging, even with help, isn’t going to hit as perfectly as a tape measure. Not to mention that the suit pulls you in a little compared to normal clothes, which are far baggier by comparison. I’m sure, too, that the garb sits less well on a larger body compared to a smaller one, where there are fewer issues with terrain. Maybe I’m grading on a curve, but it’ll depend on what exactly users want to get out of this system.

The other question, and a likely more relevant one, is if squeezing into a Zozosuit is easier and less time-consuming than using a tape measure. It’s nice to have an automated process, and to have that data tracked over time, but nothing the app does could qualify as essential. That’s a fairly neat way to sum this up: if you’re a dedicated gym-goer looking for a more elegant way to monitor your vital statistics, then you may find some value here. I’m not sure how compelling this would be, however, if you’re expecting this to be the sum total of your fitness universe.

This article originally appeared on Engadget at https://ift.tt/WkQu1q2

via Engadget http://www.engadget.com

April 27, 2023 at 09:20AM

OpenAI improves ChatGPT privacy with new data controls

https://www.engadget.com/openai-improves-chatgpt-privacy-with-new-data-controls-174851274.html?src=rss

OpenAI is tightening up ChatGPT’s privacy controls. The company announced today that the AI chatbot’s users can now turn off their chat histories, preventing their input from being used for training data.

The controls, which roll out “starting today,” can be found in ChatGPT’s user settings, under a new section labeled Data Controls. After toggling the switch off for “Chat History & Training,” you’ll no longer see your recent chats in the sidebar.

Even with history and training turned off, OpenAI says it will still store your chats for 30 days. It does this to guard against abuse, and the company says it will review those chats only when it needs to. After 30 days, the company says it permanently deletes them.


OpenAI also announced an upcoming ChatGPT Business subscription in addition to its $20/month ChatGPT Plus plan. The Business variant targets “professionals who need more control over their data as well as enterprises seeking to manage their end users.” The new plan will follow the same data-usage policies as its API, meaning it won’t use your data for training by default. The plan will become available “in the coming months.”

Finally, the startup announced a new export option, letting you email yourself a copy of the data it stores. OpenAI says this will not only allow you to move your data elsewhere, but it can also help users understand what information it keeps.

Earlier this month, three Samsung employees were in the spotlight for leaking sensitive data to the chatbot, including recorded meeting notes. By default, OpenAI uses its customers’ prompts to train its models. The company urges its users not to share sensitive information with the bot, adding that it’s “not able to delete specific prompts from your history.” Given how quickly ChatGPT and other AI writing assistants blew up in recent months, it’s a welcome change for OpenAI to strengthen its privacy transparency and controls.

This article originally appeared on Engadget at https://ift.tt/xviR93p

via Engadget http://www.engadget.com

April 25, 2023 at 01:13PM