ChatGPT Answers Patients’ Online Questions Better Than Real Doctors, Study Finds

https://gizmodo.com/chatgpt-ai-doctor-patients-reddit-questions-answer-1850384628


AI may not replace your doctor anytime soon, but it will probably be answering their emails. A study published in JAMA Internal Medicine Friday examined questions from patients and found that ChatGPT provided better answers than human doctors four out of five times. A panel of medical professionals evaluated the exchanges and preferred the AI’s response in 79% of cases. ChatGPT didn’t just provide higher-quality answers; the panel concluded the AI was more empathetic, too. It’s a finding that could have major implications for the future of healthcare.

“There’s one area in public health where there’s more need than ever before, and that is people seeking medical advice. Doctors’ inboxes are filled to the brim after this transition to virtual care because of COVID-19,” said the study’s lead author, John W. Ayers, PhD, MA, vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Diseases and Global Public Health.

“Patient emails go unanswered or get poor responses, and providers get burnout and leave their jobs. With that in mind, I thought ‘How can I help in this scenario?’” Ayers said. “So we got this basket of real patient questions and real physician responses, and compared them with ChatGPT. When we did, ChatGPT won in a landslide.”

Questions from patients are hard to come by, but Ayers’s team found a novel solution. The study pulled from Reddit’s r/AskDocs, where doctors with verified credentials answer users’ medical questions. The study randomly collected 195 questions and answers, and then had ChatGPT answer the same questions. A panel of licensed healthcare professionals with backgrounds in internal medicine evaluated the exchanges. The panel first chose which response they thought was better, and then evaluated both the quality of the answers and the empathy or bedside manner provided.
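The study itself didn’t publish code, but the comparison it describes is straightforward to picture. Below is a minimal Python sketch of how such a setup could be wired together with the 2023-era OpenAI chat API; the model name, prompt handling, and the blinding helper are illustrative assumptions, not the researchers’ actual pipeline.

```python
# Illustrative sketch only: pair each patient question with the original
# physician reply and a freshly generated ChatGPT reply, then present the
# two answers to human raters in random order. Model choice, prompts, and
# data format are assumptions, not the study's published methodology.
import random

import openai  # 2023-era openai library (pip install openai==0.27.*)

openai.api_key = "YOUR_API_KEY"  # placeholder


def chatgpt_reply(question: str) -> str:
    """Generate a chatbot answer to a patient question."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return response["choices"][0]["message"]["content"]


def blinded_pair(question: str, physician_answer: str) -> dict:
    """Shuffle the two answers so raters can't tell which source wrote which."""
    answers = [("physician", physician_answer), ("chatgpt", chatgpt_reply(question))]
    random.shuffle(answers)
    return {"question": question, "answers": answers}


# Hypothetical r/AskDocs-style exchange, shown to raters as Answer A and Answer B.
pair = blinded_pair(
    "I get a sharp pain in my arm whenever I sneeze. Should I be worried?",
    "Basically, no.",
)
for label, (_source, text) in zip("AB", pair["answers"]):
    print(f"Answer {label}: {text[:120]}")
```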

The results were dramatic. ChatGPT’s answers were rated “good” or “very good” more than three times more often than doctors’ responses. The AI was rated “empathetic” or “very empathetic” almost 10 times more often.

The study’s authors say their work isn’t an argument in favor of ChatGPT over other AI tools, and they say that we don’t know enough about the risks and benefits for doctors to start using chatbots just yet.

Physicians showed an overwhelming preference for AI-written responses.
Graphic: Courtesy of John W. Ayers

“For some patients, this could save their lives,” Ayers said. For example, if you’re diagnosed with heart failure, it’s likely you’ll die within five years. “But we also know your likelihood of survival is higher if you have a high degree of compliance to clinical advice, such as restricting salt intake and taking your prescriptions. In that scenario, messages could help ensure compliance to that advice.”

The study says the medical community needs to move with caution. AI is progressing at an astonishing rate, and as the technology advances, so do the potential harms.

“The results are fascinating, if not all that surprising, and will certainly spur further much-needed research,” said Steven Lin, MD, executive director of the Stanford Healthcare AI Applied Research Team. However, Lin stressed that the JAMA study is far from definitive. For example, exchanges on Reddit don’t reflect the typical doctor-patient relationship in a clinical setting, and doctors with no therapeutic relationship with a patient have no particular reason to be empathetic or personalized in their responses. The results may also be skewed because the methods for judging quality and empathy were simplistic, among other caveats.

Still, Lin said the study is encouraging, and highlights the enormous opportunity that chatbots present for public health.

“There is tremendous potential for chatbots to assist clinicians when messaging with patients, by drafting a message based on a patient’s query for physicians or other clinical team members to edit,” Lin said. “The silent tsunami of patient messages flooding physicians’ inboxes is a very real, devastating problem.”

Doctors started playing around with ChatGPT almost as soon as it was released, and the chatbot shows a lot of potential for use in healthcare. But it’s hard to trust the AI’s responses, because in some cases, it lies. In another recent study, researchers had ChatGPT answer questions about preventing cardiovascular disease. The AI gave appropriate responses for 21 out of 25 questions. But ChatGPT made some serious mistakes, such as “firmly recommending” cardio and weightlifting, which can be unsafe for some people. In another example, a physician posted a TikTok about a conversation with ChatGPT where it made up a medical study. These problems could have deadly consequences if patients take a robot’s advice without input from a real doctor. OpenAI, the maker of ChatGPT, did not respond to a request for comment.

“We’re not saying we should just flip the switch and implement this. We need to pause and do the phase one studies, to evaluate the benefits and discover and mitigate the potential harms,” Ayers said. “That doesn’t mean we have to delay implementation for years. You could do the next phase and the next required study in a matter of months.”

There is no question that tools like ChatGPT will make their way into the world of medicine. It’s already started. In January, a mental health service called Koko tested GPT-3 on 4,000 people. The platform connects users for peer-to-peer support, and it briefly let people use an OpenAI-powered chatbot to help compose their responses. Koko said the AI messages got an overwhelmingly positive response, but the company shut the experiment down after a few days because it “felt kind of sterile.”

That’s not to say tools like ChatGPT don’t have potential in this context. Designed properly, a system that uses AI chatbots in medicine “may even be a tool to combat the epidemic of misinformation and disinformation that is probably the single biggest threat to public health today,” Lin said. “Applied poorly, it may make misinformation and disinformation even more rampant.”

Reading through the data from the JAMA study, the results seem clear even to a layperson. For example, one man said he gets arm pain when he sneezes, and asked if that’s cause for alarm. The doctor answered “Basically, no.” ChatGPT gave a detailed, five-paragraph response with possible causes for the pain and several recommendations, including:

“It is not uncommon for people to experience muscle aches or pains in various parts of their body after sneezing. Sneezing is a sudden, forceful reflex that can cause muscle contractions throughout the body. In most cases, muscle pain or discomfort after sneezing is temporary and not a cause for concern. However, if the pain is severe or persists for a long time, it is a good idea to consult a healthcare professional for further evaluation.”

Unsurprisingly, all of the panelists in the study preferred ChatGPT’s answer. (Gizmodo lightly edited the details in the doctor’s example above to protect privacy.)

“This doesn’t mean we should throw doctors out of the picture,” Ayers said. “But the ideal message generating candidate may be a doctor using an AI system.”

If (and inevitably when) ChatGPT starts helping doctors with their emails, it could be useful even if it isn’t providing medical advice. Utilizing ChatGPT, doctors could work faster, giving patients the information they need without having to worry about grammar and spelling. Ayers said a chatbot could also help doctors reach out to patients proactively with health care recommendations—like many doctors did in the early stages of the pandemic—rather than waiting for patients to get in touch when they have a problem.

“This isn’t just going to change medicine and help physicians, it’s going to have a huge value for public health, we’re talking en masse, across the population,” Ayers said.

via Gizmodo https://gizmodo.com

April 28, 2023 at 10:30AM

Watch Hyundai e-Corner execute a real 90-degree crab walk and 180-degree turn

https://www.autoblog.com/2023/04/27/watch-hyundai-e-corner-execute-a-real-90-degree-crab-walk-and-180-degree-turn/


Hyundai Group technology division and auto supplier Hyundai Mobis is hitting its marks in development of its e-Corner module. The module packages a wheel’s suspension, braking, and steering necessities into a free-standing assembly connected to an in-wheel motor at the corner of a vehicle. A vehicle fitted with four e-Corner modules would look and run just like a traditional EV in everyday use, but all of its driving functions are by-wire, a central ECU ensuring the modules work together. The magic of the system is the degree of rotation allowed by omitting links like half-shafts and steering racks. The system’s been demonstrated on public roads around the company campus on a Hyundai Ioniq 5. With each module able to rotate 90 degrees off-axis, the maneuvers the Ioniq 5 prototype is capable of look like a TikTok video or science fiction, take your pick.

There are a few small giveaways that something is up with the test car: The modules jack the body up some and push the tires out a bit, causing poke and funky wheel arch alignment, and the wheel arches have additional cutouts to make room for full wheel rotation. By the time Hyundai Mobis achieves a production unit, planned for 2025, those minor issues should be easily addressed with a more compact module or a chassis design that accounts for the module.

A short video showcases several driving modes. Crab Driving turns all modules 90 degrees in the same direction, moving the Ioniq 5 sideways into a parallel parking spot. The Zero Turn is the same as we’ve seen from Rivian’s Tank Turn or the prototype electric G-Wagen. It rotates the modules 45 degrees, front wheels turning in, rear wheels turning out, so the vehicle can rotate in place. The Pivot Turn rotates the rear modules 90 degrees, turning the rear wheels plus one front wheel so that the car can pivot around a stationary front wheel as if that stationary wheel were the fixed point of a compass. This could come in handy in cramped parking lots to avoid making 12-point turns getting into and out of a spot. Finally, there’s Diagonal Driving, which we’d recognize as Hummer’s Crab Walk. Side note since this will come up in some bar somewhere, marine crabs walk sideways or forward, not diagonally, so we gotta give Hyundai the nod for the decapod-appropriate term.
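To make the geometry concrete, here’s a toy Python sketch of how a by-wire controller might map those drive modes onto per-module steering angles. The angles and sign conventions are simply my reading of the description above, not Hyundai Mobis’ actual control logic.

```python
# Toy mapping of the described drive modes to e-Corner module angles.
# Positive values mean the module is rotated clockwise when viewed from above;
# FL/FR/RL/RR are the front-left, front-right, rear-left and rear-right corners.
# These numbers illustrate the article's description, not Hyundai Mobis specs.
from typing import Dict

MODES: Dict[str, Dict[str, float]] = {
    # All four modules turned 90 degrees the same way: the car slides sideways.
    "crab": {"FL": 90, "FR": 90, "RL": 90, "RR": 90},
    # Fronts toe in, rears toe out at 45 degrees: the car spins in place.
    "zero_turn": {"FL": 45, "FR": -45, "RL": -45, "RR": 45},
    # Rear modules at 90 degrees plus one steered front wheel, pivoting
    # around the other (stationary) front wheel.
    "pivot_turn": {"FL": 0, "FR": 90, "RL": 90, "RR": 90},
    # Every module at the same intermediate angle: diagonal driving.
    "diagonal": {"FL": 30, "FR": 30, "RL": 30, "RR": 30},
}


def command_angles(mode: str) -> Dict[str, float]:
    """Return the steering angle each e-Corner module would be driven to."""
    return MODES[mode]


print(command_angles("zero_turn"))  # {'FL': 45, 'FR': -45, 'RL': -45, 'RR': 45}
```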

If all goes well with the business case and production plans, Hyundai Mobis wants to start taking orders in 2025. It will be developing a Purpose Built Vehicle — the autonomous living-room-on-wheels kind — as another showcase for the e-Corner. Check out the vid to see what our future living rooms on wheels will be able to do.

via Autoblog https://ift.tt/HBQTRiF

April 27, 2023 at 03:18PM

TikTok may have generative AI avatars soon

https://www.engadget.com/tiktok-may-have-generative-ai-avatars-soon-065038031.html?src=rss

TikTok may soon let you create AI-stylized avatars not unlike what you can make with deep-learning apps like Midjourney or Lensa, according to a Twitter thread from social media guru Matt Navarra seen by The Verge. Called AI Avatars, the tool lets you upload three to 10 photos of yourself and choose from five art styles. It will then generate up to 30 separate avatars in a couple of minutes. You can then download one, several or all of the images to use as a profile picture or in stories.

Though the styles are more limited than what you can get on Lensa, the results look pretty good — so the feature is bound to be popular. Likely for that reason, TikTok will only let you use it once a day, presumably to avoid overloading servers.

Image: Matt Navarra

Though generative AI images seem like harmless fun, they’re not without some controversy. For both Lensa and Midjourney, artists have complained that the AI has sampled their work and borrowed from it a bit too liberally at times. And earlier this year, Getty launched a lawsuit against Stability AI, the maker of Stable Diffusion, claiming it was scraping Getty’s images to generate art.

This article originally appeared on Engadget at https://ift.tt/6xZFRsf

via Engadget http://www.engadget.com

April 26, 2023 at 02:53AM

Zozofit’s capture suit takes the guesswork out of body measuring

https://www.engadget.com/zozofits-capture-suit-takes-the-guesswork-out-of-body-measuring-140006295.html?src=rss

I’ve developed an odd fascination with body-measuring technology, especially as it relates to the fashion world. Many companies are working on infrastructure that will hopefully one day let us buy clothes custom-tailored for the exact contours of our bodies. That should make people like me, who feel very under-served by the traditional fashion industry, a lot happier. It should also help to reduce the waste generated by the overproduction of clothes nobody wants to buy, which is a problem both for businesses and the planet. So, when Zozofit, makers of the Zozosuit, asked if I wanted to try its skin-tight body-measuring outfit, which has now been repurposed as a fitness tool, I agreed, albeit with my usual degree of trepidation.

The Zozosuit isn’t new, but its makers are using this year as a form of soft relaunch, with a new focus on breaking into the US. It was actually set up back in 2018 by Japanese high-end fashion retailer Zozo as a way of launching a custom-clothing line. Users bought the suit, scanned their bodies and then could order clothes that, on paper, were tailored to better suit their bodies. And while the clothes weren’t custom-made, the idea was that the outfits would be a better fit for them than the usual mass-produced stuff. But that idea, great in theory, didn’t necessarily shake out that well in practice.

Fashion Network said that the cost and complexity involved in launching the suit ate away at the company’s otherwise healthy profits. QZ reported that while people bought the suits, which were sold at a deep discount, few went on to purchase the custom threads as Zozo had planned. It got worse: many reporters who tested the system, like Gizmodo’s Ryan F. Mandelbaum and the Economist’s Charlie Wells, found the clothes they had ordered didn’t actually fit. A better suit with higher-resolution dots for imaging was developed, but the project was subsequently put on ice.

Since then, Zozo has tried to open up its technology to third parties, but has now pivoted it toward something more fitness-focused. Since it already had the tech to make a body-measuring suit, it might as well be put to good use, or so the thinking goes. A number of health and fitness professionals advocate that people looking to get fitter measure their bodies instead of stepping on the scale. So it makes sense for this to be offered as an elegant alternative to wrestling with a tape measure on a weekly basis.

Buying a Zozosuit is easy enough: just give it your weight in pounds, as well as your height in feet and inches, and cough up $98 plus tax. Not long after, you’ll get a slender package which contains a skinsuit made out of polyester and spandex. It looks very much like a motion capture suit commonly used in the production of visual effects, and functionally does the same job. The suit comes in two parts, and the app will give you guidance on how to wear it, making sure that the waistband is pulled up high and covered by the top. You’ll need to try and keep everything as flat as you can, since visible creases will prevent you from taking an accurate scan.

Picture of two people with different body shapes wearing the Zozofit Zozosuit, used for making quick 3D scans of your body.
Zozofit

As a 5’11”, 231-pound man, I did wonder if Zozo would have a suit large enough to cater for my body shape. The website has images of much more athletically adept models wearing its clothing, and you may be concerned there’s no option for bigger-sized folks. The suit I tried on was tight, as intended, but didn’t feel restrictive, and I don’t think you should be nervous that the company can’t accommodate your needs. Other users in a similar situation have documented a similar experience, including YouTuber The Fabric Ninja, who produced a “Plus-Size Review” in 2020. That said, I don’t think I could pull this off as some form of athleisure fashion statement, for all of the reasons you can probably presume.

Inside the package is a cardboard phone stand, which you’ll need to pop out and fold into place to prop your smartphone on. The Zozofit uses your handset’s primary camera, so you’ll need to stand it on a table and then stand six feet or so away from it. Once activated, you’ll get voice guidance talking you through the setup and measurement process, and you’ll be asked to hold your arms slightly away from your body. The coach will then ask you to turn to every position on the clock, taking 12 images as you shuffle around in a circle. Once completed, you’ll be notified that you can pick up your phone and then wait 30 seconds or so for the model to process.

And you’ll get a headless 3D-mesh model of your body with various measurements labeled off to the sides. These include measurements for your upper arms, chest, waist and hips, upper thigh and your calves. After you’ve pawed at your vital statistics, you’ll be invited to set some fitness goals based on those initial measurements. Interestingly, these are capped, I suspect to keep you picking smaller, more sustainable goals and to avoid disappointment. It measured my waist at 46.6 inches, and you can only set the goal at inch-wide increments down to 41.6 inches or up to 51.6 inches. This will change in a later update, but I appreciated the more realistic form of goal-setting it promises.

You’ll also get the app’s rough calculation of your body fat percentage, which it clocked at 35.6 percent. Not long after, I jumped on my smart scale and it registered me as having 31.6 percent, and I suspect, too, the imaging might struggle to be as accurate when you’re dealing with such big figures. I’d wager, too, that body fat percentages might not be so easily calculated by sight alone, and perhaps Zozo could look to remove those measurements which aren’t as reliable. It may also dent the PR braggadocio the company is putting out, claiming that this setup is the “world’s most accurate at-home 3D body scanner.” (It says it has compared its results to several rivals on the market, as well as professional hand-measurements.)

Screenshots of measurements taken of Daniel Cooper inside the Zozofit app showing a 3D-wire mesh of his body with measurements overlaid.
Zozofit / Daniel Cooper

Now, the company says that its body fat measurements use the US Navy Body Fat system, which calculates your body fat based on a series of body measurements. That method was developed to create a quick-and-dirty measurement to determine if someone was fit for service. (In the process of researching this, I learned that personnel describe it as the “rope and choke,” which isn’t relevant, but thought you’d appreciate the slang.) The company’s representatives told me that it has found that curvier bodies are more likely to see less accurate results than thinner ones, and that it is working on its algorithms to improve this situation.
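Because the Navy method is just a set of published circumference equations, it’s easy to sanity-check the app’s figure yourself. Here’s a quick Python sketch using the commonly cited inch-based formulas; Zozofit’s exact implementation isn’t public, and the neck measurement below is a guess, since the app doesn’t report one.

```python
# Commonly published US Navy circumference equations (inch-based, log base 10).
# Zozofit's exact implementation isn't public, so treat this as illustrative.
import math


def navy_body_fat(sex: str, height_in: float, neck_in: float,
                  waist_in: float, hip_in: float = 0.0) -> float:
    """Estimate body fat percentage from tape measurements."""
    if sex == "male":
        return (86.010 * math.log10(waist_in - neck_in)
                - 70.041 * math.log10(height_in) + 36.76)
    # The female equation also uses the hip measurement.
    return (163.205 * math.log10(waist_in + hip_in - neck_in)
            - 97.684 * math.log10(height_in) - 78.387)


# Rough check against the figures in this review: a 71-inch-tall man with a
# 46.6-inch waist and an assumed 17-inch neck (the app doesn't report neck size)
# lands in the low-to-mid 30s percent, broadly in line with the numbers above.
print(round(navy_body_fat("male", height_in=71, neck_in=17, waist_in=46.6), 1))
```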

With any health-and-fitness technology, there’s a question of how much you can rely upon the accuracy of its measurements. Few consumer-level devices offer the same level of data quality you can get from a much more expensive clinical tool. Straight after my first scan, I ran a second, to see the sort of variation you can expect from an imaging-based measurement. The margin is fairly small, only a few tenths of an inch difference between each scan, which seems fair to me. I’d say, too, that what matters more with these sorts of tools is the trend and direction of travel, rather than obsessing over the pinpoint accuracy of each individual measurement.

And, to test that, as soon as I’d run my second scan (and changed back into normal clothes), I asked a friend to help measure me with a tailor’s tape. And there was a wider delta than I think some people might expect, especially if they’re in need of millimeter-perfect measurements. For instance, the app measured my chest at 43.4 inches, while the tape clocked it in at 44. My upper arms measured 14.5 inches, compared to 14.2 and 14.3 inches inside the app. With my waist and hips, the app said they were 44.6 and 45.3 inches, respectively, while the tape measure clocked them in at 44.5 inches and 47 inches.

Partially, I think these divergences are because computer imaging, even with help, isn’t going to be as precise as a tape measure. Not to mention that the suit pulls you in a little compared to normal clothes, which are far baggier by comparison. I’m sure, too, that the garb sits less well on a larger body compared to a smaller one, where there are fewer issues with terrain. Maybe I’m grading on a curve, but it’ll depend on what exactly users want to get out of this system.

The other question, and a likely more relevant one, is whether squeezing into a Zozosuit is easier and less time-consuming than using a tape measure. It’s nice to have an automated process, and to have that data tracked over time, but nothing the app does could qualify as essential. That’s a fairly neat way to sum this up – if you’re a dedicated gym-goer looking for a more elegant way to monitor your vital statistics, then you may find some value here. I’m not sure how compelling this would be, however, if you’re expecting this to be the sum total of your fitness universe.

This article originally appeared on Engadget at https://ift.tt/WkQu1q2

via Engadget http://www.engadget.com

April 27, 2023 at 09:20AM

OpenAI improves ChatGPT privacy with new data controls

https://www.engadget.com/openai-improves-chatgpt-privacy-with-new-data-controls-174851274.html?src=rss

OpenAI is tightening up ChatGPT’s privacy controls. The company announced today that the AI chatbot’s users can now turn off their chat histories, preventing their input from being used for training data.

The controls, which roll out “starting today,” can be found in ChatGPT’s user settings, under a new section labeled Data Controls. After toggling the switch off for “Chat History & Training,” you’ll no longer see your recent chats in the sidebar.

Even with the history and training turned off, OpenAI says it will still store your chats for 30 days. It does this to prevent abuse, with the company saying it will only review those conversations if it needs to monitor for abuse. After 30 days, the company says it permanently deletes them.

Screenshot of ChatGPT’s settings window showing the Chat History & Training toggle, alongside export data and delete account controls.
OpenAI

OpenAI also announced an upcoming ChatGPT Business subscription in addition to its $20 / month ChatGPT Plus plan. The Business variant targets “professionals who need more control over their data as well as enterprises seeking to manage their end users.” The new plan will follow the same data-usage policies as its API, meaning it won’t use your data for training by default. The plan will become available “in the coming months.”

Finally, the startup announced a new export option, letting you email yourself a copy of the data it stores. OpenAI says this will not only allow you to move your data elsewhere, but it can also help users understand what information it keeps.

Earlier this month, three Samsung employees were in the spotlight for leaking sensitive data to the chatbot, including recorded meeting notes. By default, OpenAI uses its customers’ prompts to train its models. The company urges its users not to share sensitive information with the bot, adding that it’s “not able to delete specific prompts from your history.” Given how quickly ChatGPT and other AI writing assistants blew up in recent months, it’s a welcome change for OpenAI to strengthen its privacy transparency and controls.

This article originally appeared on Engadget at https://ift.tt/xviR93p

via Engadget http://www.engadget.com

April 25, 2023 at 01:13PM

This Nigerian EV entrepreneur hopes to go head to head with Tesla

https://www.technologyreview.com/2023/04/21/1071359/mustapha-gajibo-nigeria-electric-vehicle-motorized-tricycles/

Nigerians have become accustomed to long lines for gasoline and wild fluctuations in bus fares. Though the country is Africa’s largest producer of oil, its residents don’t benefit from a steady supply.

Mustapha Gajibo, 30, is doing what he can to alleviate the problem: his startup, Phoenix Renewables Limited, is launching a homegrown electric-­vehicle industry in the northeastern city of Maiduguri. 

Gajibo dropped out of university in his third year to run it. His first project was converting the internal-combustion engines of commonly used vehicles in the city to electric versions. He focused on two types of vehicles that residents often pay to ride: seven-seat minibuses and the motorized tricycles known as kekes.

A "zero emission" vehicle with fake grass floor carpeting outside the warehouse of Phoenix Renewables
Phoenix Renewables maintains a fleet of a dozen retrofitted electric
minibuses capable of covering a distance of 150 kilometers on a charge.
FATI ABUBAKAR

He faced skepticism at first: limited power charging infrastructure has constrained the adoption of electric vehicles in the region. “Many people don’t believe that electric mobility is possible and commercially viable in the city of Maiduguri,” Gajibo says. But his electrification scheme has been gaining traction. The company now maintains a fleet of a dozen electric minibuses that can cover a distance of 150 kilometers on a charge and cost about $1.50 to power to full capacity. 

Building the necessary infrastructure is crucial to the success of the project. Gajibo and his cofounder Sadiq Abubakar Issa designed a 60-kilowatt-hour solar-powered charging station in the city and are looking at creating more.

Now, Gajibo has moved on from retrofitting internal-combustion vehicles to building electric vehicles from scratch. 

The first, introduced in 2021, is a 12-seat bus constructed from a number of locally sourced materials. It has a range of 212 kilometers and can be charged in 35 minutes via a solar-powered system integrated into the back. In a recent test run funded by the company, the buses transported 35,000 passengers in Maiduguri in just one month. 

Deborah Maidawa, an electrical building services engineer who lives in Maiduguri, believes Gajibo’s EVs are a good way to meet local needs. “Incorporating solar gives the vehicles an edge over other EVs that are springing up, and I believe they will flood the Nigerian market,” she says.

A brand-new gas-powered passenger minibus with automatic transmission can cost nearly 5 million naira (about $10,000). Gajibo says it will cost around the same to buy one of his solar-powered 12-seaters. He plans to roll out 500 units across eight Nigerian cities in the coming months and hopes this time he’ll be able to sell them. 

“Our products are quite affordable, and the cost of the vehicle is one of the major things we put into consideration,” he says. “The only way to achieve that is by fully designing and building these vehicles locally.”

State and local governments are now taking notice. In early 2022, for example, the governor of Borno State, where Maiduguri is situated, commended Gajibo’s work and awarded him 20 million naira (about $45,000) for research and development, as well as 15,000 square meters of land for a factory. The Nigerian government has expressed interest in having his company build electric patrol vehicles for the police and armed forces.

Mustapha Gajibo at a workbench with motor parts in the foreground.
FATI ABUBAKAR

Gajibo’s ultimate goal is to compete with Tesla and other bigger brands. “We want to have our vehicles driven in New York, London, Munich, and other big cities across the world,” he says.

Valentine Benjamin is a Nigerian travel journalist and photographer who reports on global health, social justice, politics, and development in Nigeria and sub-Saharan Africa.

via Technology Review Feed – Tech Review Top Stories https://ift.tt/mYi9z3h

April 21, 2023 at 04:30AM

AI Image Generator Is Making Wild And Horrifying Game Controllers

https://kotaku.com/midjourney-ai-art-ps5-nintendo-xbox-controller-1850363530

When it’s not stealing or plagiarizing, generative AI is improving quickly. Images that used to look uncanny now appear more natural and humanly imperfect. But it still struggles with plenty of things. Apparently video game controllers are one of them. Someone asked Midjourney for simple pictures of a person having fun playing video games, and got back some beautiful abominations.

A generative AI enthusiast asked the Midjourney community for help this week when a simple prompt returned some nightmares. “Mj has a real tough time with ‘playing video games’ apparently,” they posted on the project’s subreddit. “Any ideas how I could improve this? Prompt: female influencer relaxing playing PlayStation 5 having a blast”

While Midjourney managed to render a human with the right number of fingers, the controllers in her hands and how she was holding them looked like something out of a Cronenberg movie. The gamepads are overflowing with random buttons, triggers, and sticks, and not in a cool way. Microsoft’s adaptive controller looks sleek. Midjourney’s version hurts just to look at.

As many commenters suggested, one reason could be the overly broad prompt. While “playing” is intuitive to the average person, it’s vague when compared to what a search for it might reveal. The bigger culprit, though, is probably that there just aren’t many images of the backs of controllers compared to all the front-facing promotional shots companies release to sell them.

In that regard, the failed experiment potentially reinforces one of generative AI’s biggest weaknesses: It’s great at giving you variations on what already exists, but struggles to bridge the gaps in what’s missing. Or it borrows from existing sources in the wrong ways. Some of you might remember the infamous grip meme, and it certainly looks like that’s what Midjourney is recreating in the fourth image. Turns out the fake AI gamer girl is actually an extremely hardcore Armored Core fan.

via Kotaku https://kotaku.com

April 21, 2023 at 03:23PM