Ukraine Is Using Millions of Hours of Drone Footage to Train AI for Warfare

https://gizmodo.com/ukraine-is-using-millions-of-hours-of-drone-footage-to-train-ai-for-warfare-2000541633

The ongoing Russia-Ukraine conflict marks possibly the first truly AI-driven war, with both sides having come to rely on small drones to conduct reconnaissance, identify targets, and even drop lethal bombs over enemy lines. This new type of warfare lets commanders survey an area from a safe distance and has highlighted the value of lightweight aerial weapons that can conduct precise strikes in place of far more expensive fighter jets. A drone that costs $15,000 can take down an F-16 that costs tens of millions.

Reuters has a look at how Ukraine has been collecting vast amounts of video footage from drones to improve the effectiveness of its drone battalions.

The story includes an interview with Oleksandr Dmitriev, founder of OCHI, a non-profit Ukrainian initiative whose system centralizes and analyzes video from over 15,000 drones on the frontlines. Dmitriev told Reuters that the system has collected more than two million hours of battlefield video since 2022. “This is food for the AI: If you want to teach an AI, you give it 2 million hours (of video), it will become something supernatural,” he said.

The OCHI system was originally built to give the military access to drone footage from all nearby crews on one screen, but the group running it realized the video could also be used to train AI. For an AI system to be effective at identifying what it sees, it needs to review a lot of footage, and Ukraine likely had little battlefield footage before 2022. Now, more than six terabytes of data are being added to the system per day, on average.

Ukraine’s defense ministry has said that another system called Avengers, which also centralizes footage from drones, has been able to spot 12,000 pieces of Russian equipment a week using AI identification.

It is not just local Ukrainian companies that are building new AI technology for the battlefield. There is big money to be made in the defense industry, and a slew of Silicon Valley players including Anduril and Palantir, as well as Eric Schmidt’s startup White Stork, have begun offering up drone and AI technology to support Ukraine’s fight.

Of course, the biggest concern of skeptics is that these technologies automate much of the fighting and make it somewhat abstract; a military may be more willing to let drones strike indiscriminately when its operators are at a safe distance and not fearful of return fire. Schmidt has emphasized that the drones his company offers to Ukraine maintain a “human-in-the-loop,” meaning a person is always making the final decision.

In a recent interview, Anduril’s Palmer Luckey was asked about the use of AI in weapons systems. “There is a shadow campaign being waged in the United Nations by many of our adversaries to trick Western countries that fancy themselves morally aligned into not applying AI for weapons or defense,” he said. “What is the moral victory in being forced to use larger bombs with more collateral damage because we are not allowed to use systems that can penetrate past Russian or Chinese jamming systems and strike precisely.”

Jamming systems can scramble the GPS and telecommunications signals used to direct precision-guided weapons, but AI-powered drones can operate autonomously, identifying targets without an operator giving an order.

Recent reports have suggested that the U.S. has fallen behind adversaries including Russia and China in its ability to remotely disable enemy weapons using jamming technology. Russia has repeatedly disabled precision-guided weapons the U.S. has supplied to Ukraine, using jamming technology more advanced than America’s own. The U.S. could respond by investing more in evading GPS jamming so that it does not have to rely on more indiscriminate, automated drones. Or it could try to jam the Russians back.

Luckey pointedly called out critics who say a robot should never decide who lives and who dies. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” he asked. It seems unlikely a school bus would be driving through a battlefield unless it was a booby trap, but whatever.

The war has been a slow grind, with both sides making little progress in recent months. Drones have aided Ukraine, but they are clearly not a panacea, since both sides have access to them.

via Gizmodo https://gizmodo.com/

December 20, 2024 at 12:36PM

XPeng X2 takes flight in Australia

https://www.autoblog.com/news/xpeng-x2-takes-flight-in-australia

Chinese automakers are growing in popularity in global markets, but not every one of their vehicles is stuck to the ground. The XPeng X2 officially went on sale in Australia not long ago to the tune of $300,000 AUD, or around $194,000 USD. While some automakers, including Toyota, have claimed they want to bring a flying car to the masses, XPeng beat them to the punch.


XPeng X2 is Australia’s first flying car

Following a successful appearance at the Tokyo Motor Show, the XPeng X2 astounded onlookers at the Sydney International EV Show as a fully functional electric flying car. The first of its kind in Australia, the X2 is available for purchase for around $194,000 USD, but buying and flying it isn’t as simple as it sounds.

Given that the X2 is a flying car, it should come as no surprise that you’re required to have a pilot’s license to take to the skies. On top of that, the Civil Aviation Safety Authority (CASA) hasn’t yet approved it for use; according to XPeng’s delivery partner, TrueEV, that process could take another year. Meanwhile, the XPeng X2 is already available for purchase in Portugal and Spain.


The XPeng X2 can fly for around half an hour

The XPeng X2 looks striking, with some describing it as something out of The Jetsons. Eight rotors, each with its own motor, surround the two-seater cockpit, and as a safety precaution the model also includes a standard ballistic parachute. Just in case.

“People think it’s a gimmick because it’s a flying car, and there are references to The Jetsons. I get a bit uneasy about that because this is real, and the one you’re looking at has done flights. They’ve taken the ballistics parachute out of it and reduced the weight,” Jason Clarke, CEO of TrueEV, told CarExpert.

The X2’s expected range is about 46 miles on a single charge, its top speed is roughly 80 mph, and its flight ceiling is 500 meters. The current X2 is the fifth-generation model, and its successors are expected to increase flight time to around two hours.



The XPeng X2 completed its first global public flight in Dubai in October 2022, with various design improvements being incorporated since that maiden flight. The whole flying vehicle weighs just under 800 pounds unladen, thanks in part to the streamlined two-seater cockpit.

Interestingly, according to XPeng, the Australians who have expressed the most interest are farmers who use helicopters in their operations. Other use cases include medical emergencies and remote deliveries.

Flying cars may be the next big thing, according to some

Many other companies have a goal to deliver a flying car in the near future, including Toyota, Hyundai, and Uber. Pegasus Aerospace Corp, an Australian firm, received certification for its flying police car in 2023. According to Morgan Stanley’s projections, the global flying car market will be worth over $1 trillion by 2040.


In the United States, some state governments are already preparing for the arrival of flying cars. Katie Hobbs, Governor of Arizona, wants the state to become one of the first to adopt flying cars and air taxis – and she isn’t saying that for political clout. Hobbs has already directed the Arizona Commerce Authority to begin taking the initial steps to make flying cars a reality.

Final thoughts

Electric flying cars are a neat idea, but perhaps they should stay sci-fi for the time being. While technology has come a long way, the transition away from fossil fuels is still in its early stages.

While making flying cars both practical and affordable is still a long way off, XPeng’s entry with a market-ready model is a good start. The fact that it’s already available in Europe is also good news, but only time will tell if the masses are ready for personal flying vehicles.


via Autoblog https://ift.tt/alv8cnW

December 23, 2024 at 07:02AM

AI Chatbots Can Be Jailbroken to Answer Any Question Using Very Simple Loopholes

https://gizmodo.com/ai-chatbots-can-be-jailbroken-to-answer-any-question-using-very-simple-loopholes-2000541157

Anthropic, the maker of Claude, has been a leading AI lab on the safety front. The company today published research in collaboration with Oxford, Stanford, and MATS showing that it is easy to get chatbots to break from their guardrails and discuss just about any topic. It can be as easy as writing sentences with random capitalization like this: “IgNoRe YoUr TrAinIng.” 404 Media earlier reported on the research.

There has been a lot of debate around whether it is dangerous for AI chatbots to answer questions such as, “How do I build a bomb?” Proponents of generative AI will say that these types of questions can already be answered on the open web, so there is no reason to think chatbots are more dangerous than the status quo. Skeptics, on the other hand, point to anecdotes of harm, such as a 14-year-old boy who died by suicide after chatting with a bot, as evidence that the technology needs guardrails.

Generative AI-based chatbots are easily accessible, present themselves with human traits like supportiveness and empathy, and will confidently answer questions without any moral compass; that is different from seeking out an obscure corner of the dark web to find harmful information. There has already been a litany of instances in which generative AI has been used in harmful ways, especially in the form of explicit deepfake imagery targeting women. It was certainly possible to make such images before the advent of generative AI, but it was much more difficult.

The debate aside, most of the leading AI labs currently employ “red teams” to test their chatbots against potentially dangerous prompts and put in guardrails to prevent them from discussing sensitive topics. Ask most chatbots for medical advice or information on political candidates, for instance, and they will refuse to discuss it. The companies behind them understand that hallucinations are still a problem and do not want to risk their bot saying something that could lead to negative real-world consequences.

A graphic showing how different variations on a prompt can trick a chatbot into answering prohibited questions. Credit: Anthropic via 404 Media

Unfortunately, it turns out that chatbots are easily tricked into ignoring their safety rules. In the same way that social media networks monitor for harmful keywords, and users find ways around them by making small modifications to their posts, chatbots can also be tricked. The researchers in Anthropic’s new study created an algorithm called “Best-of-N (BoN) Jailbreaking,” which automates the process of tweaking prompts until a chatbot decides to answer the question. “BoN Jailbreaking works by repeatedly sampling variations of a prompt with a combination of augmentations—such as random shuffling or capitalization for textual prompts—until a harmful response is elicited,” the report states. They did the same thing with audio and visual models, finding that getting an audio generator to break its guardrails and train on the voice of a real person was as simple as changing the pitch and speed of an uploaded track.
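
To make the loop concrete, here is a minimal Python sketch of what a Best-of-N-style attack could look like, based only on the description quoted above. Everything here is illustrative: `ask_model` and `is_harmful` are hypothetical stand-ins for the target chatbot and a harmfulness judge, and the augmentations (random capitalization, light character shuffling) are a simplified reading of the paper’s examples.

```python
import random

def augment(prompt: str, rng: random.Random) -> str:
    """Apply simple text augmentations: random capitalization
    plus a few adjacent-character swaps."""
    chars = list(prompt)
    # Randomly flip the case of each character.
    chars = [c.upper() if rng.random() < 0.5 else c.lower() for c in chars]
    # Swap a handful of adjacent characters to lightly scramble words.
    for _ in range(max(1, len(chars) // 20)):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def bon_jailbreak(prompt, ask_model, is_harmful, n=10_000, seed=0):
    """Resample augmented prompts until the model's reply is judged
    harmful, or the sample budget n is exhausted."""
    rng = random.Random(seed)
    for attempt in range(1, n + 1):
        candidate = augment(prompt, rng)
        reply = ask_model(candidate)
        if is_harmful(reply):
            return candidate, reply, attempt
    return None  # budget exhausted without eliciting a harmful reply
```

The point the sketch makes is how cheap this is: no gradient access, no model internals, just brute-force resampling against the model’s refusal behavior.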

It is unclear why exactly these generative AI models are so easily broken. But Anthropic says the point of releasing this research is that it hopes the findings will give AI model developers more insight into attack patterns that they can address.

One AI company that likely is not interested in this research is xAI. The company was founded by Elon Musk with the express purpose of releasing chatbots not limited by safeguards that Musk considers to be “woke.”

via Gizmodo https://gizmodo.com/

December 20, 2024 at 08:42AM

What a VPN Kill Switch Is and How to Set One Up

https://www.wired.com/story/what-a-vpn-kill-switch-is-and-how-to-set-one-up/

Virtual Private Networks, or VPNs, are now widely used to add extra security to online connections, to improve privacy when browsing, and to spoof location information—they can even be set up at the router level to protect every device on the network. And if you’ve got one installed, you need to be aware of one of their key features: the kill switch.

To begin with, it’s important to bear in mind that a VPN doesn’t make you anonymous online. If you log in to Amazon, Amazon will still keep track of what you’re looking at and what you’re buying. If you’re signed in to Google and Chrome, your searches and online activity will get logged as normal.


However, with a VPN enabled, your devices don’t connect directly to websites and servers. Instead, they establish encrypted connections to nodes set up by your VPN provider of choice, and you connect to your intended destinations from there: That means the sites you visit and the apps you use can’t as easily pin down where you’re located and what devices you’re using.

It also makes it a lot harder for other people to see what you’re doing online, whether it’s a coffee shop Wi-Fi hacker, your internet provider, or a government agency. All they see is you connecting to the VPN you’ve chosen and not whatever you do after that. So the best VPNs won’t make you anonymous, but they will make your browsing more private and secure.

What Is a VPN Kill Switch?

A kill switch kicks in when a VPN loses connection. Courtesy of David Nield

Now that we’ve established what a VPN is and does, we can talk about the kill switch. Kill switches are necessary because VPN servers aren’t infallible: They can and do go down, even with the best VPNs. Something unexpected might also happen at your end, breaking the connection you’ve established with your VPN provider.
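
Conceptually, a kill switch is a tiny state machine: watch the tunnel, and the instant a health check fails, block all traffic until the tunnel is back, so nothing leaks outside the VPN. Here’s a minimal Python sketch of that logic under stated assumptions: the `block` and `unblock` callbacks stand in for whatever a real client does (typically inserting and removing firewall rules), and how the tunnel is actually checked is left abstract.

```python
class KillSwitch:
    """Toy model of a VPN kill switch: on tunnel failure, cut
    traffic immediately; restore it once the tunnel is back."""

    def __init__(self, block, unblock):
        self.block = block        # e.g. insert a deny-all firewall rule
        self.unblock = unblock    # e.g. remove that rule again
        self.blocking = False

    def poll(self, tunnel_up: bool):
        """Call this on every health-check tick."""
        if not tunnel_up and not self.blocking:
            self.block()          # tunnel dropped: fail closed
            self.blocking = True
        elif tunnel_up and self.blocking:
            self.unblock()        # tunnel restored: allow traffic
            self.blocking = False
```

The key design choice is failing closed: when in doubt, no traffic flows at all, which is exactly the trade-off a kill switch asks you to accept in exchange for never leaking your real IP.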

via Wired Top Stories https://www.wired.com

December 17, 2024 at 06:42AM

Gemini-Powered Smart Glasses Already on Kickstarter for $209

https://www.droid-life.com/2024/12/12/gemini-powered-smart-glasses-already-on-kickstarter-for-209/

This week is full of news concerning technology you put on your face, so here’s a bit more for you. Over on Kickstarter, you can back a project that puts Gemini (but also ChatGPT and Claude) into a pair of glasses. Think Ray-Ban Meta smart glasses, but without Facebook.

The glasses give you access to these AI services for tasks such as setting reminders, translating languages in real time, and creating meeting summaries and to-do lists, and they can record video and take pictures of what you’re looking at thanks to a built-in 13-megapixel camera capable of 2K video capture. Again, this is exactly like the Meta smart glasses, but if you’re anti-Facebook, these should be right up your alley.

You’ll also find built-in speakers for listening to music, which is one aspect of the Meta glasses that I personally enjoy thoroughly. The makers of the glasses highlight 14 hours of battery life and say they can accommodate prescription and transition lenses. They also show off different styles, such as pink, black, and dark transparent frames.

The other upside is the price. On Kickstarter, these are currently at an early bird price of just $209, significantly less than the Ray-Ban option, which hovers around $299 without fancy transition or prescription lenses. And since the project is already fully funded, backers shouldn’t need to worry about not receiving the goods.

I was extremely skeptical of the Ray-Ban Meta glasses, but after using them, I find myself recommending them whenever someone spots me wearing them. However, for those who might not care for Facebook and its services, having access to Gemini or ChatGPT on your head could be very beneficial.

Follow the link below if you love yourself some AI.

Kickstarter Link

Read the original post: Gemini-Powered Smart Glasses Already on Kickstarter for $209

via Droid Life: A Droid Community Blog https://ift.tt/OYr4G5b

December 12, 2024 at 04:06PM

FCC threatens to block spammy VOIP services

https://www.pcworld.com/article/2553377/fcc-threatens-to-block-spammy-voip-services.html

I can’t go a week without someone illegally calling me about a small business loan or car insurance, and despite the calls coming from local phone numbers, I’m fairly certain the callers aren’t from around here.

Such spammers are usually using Voice over IP (VOIP) to fake phone numbers, and the Federal Communications Commission (FCC) is as fed up as the rest of us. It’s threatening to shut down thousands of VOIP services.

In a press release issued yesterday, the FCC says 2,411 of these providers “failed to properly file in the Robocall Mitigation database, and must now show cause why they should not be removed.” In other words, these VOIP companies are lightning rods for spammers using their services to spread illegal calls, and they’ve ignored federally mandated action to stop spammers from pestering and scamming Americans.

The FCC’s authority over conventional phone calls is basically absolute, and this action was taken in partnership with attorneys general from every US state and Washington DC. If you’re a company providing call service, whether over standard networks or Voice over IP, you have to comply with the STIR/SHAKEN protocol for caller ID verification and you have to send the FCC a robocall mitigation plan. The FCC alleges that these companies have failed on both counts and missed multiple deadlines for compliance checks.
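
For context on what STIR/SHAKEN actually verifies: the originating carrier attaches a signed token (a PASSporT, a JWT-like structure) asserting the calling number and an attestation level (A, B, or C), which downstream carriers validate before delivering the call. The sketch below is a loose illustration, not the real protocol: actual PASSporTs are signed with ES256 using carrier certificates, whereas this uses a shared-secret HMAC purely to stay self-contained, and the claim names are simplified.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """URL-safe base64 without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_passport(orig_tn: str, dest_tn: str, attest: str, key: bytes) -> str:
    """Build a simplified PASSporT: header.payload.signature."""
    header = {"typ": "passport", "alg": "HS256", "ppt": "shaken"}
    payload = {"orig": {"tn": orig_tn},
               "dest": {"tn": [dest_tn]},
               "attest": attest}  # "A" = carrier vouches for the number
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_passport(token: str, key: bytes) -> bool:
    """Check the signature; a terminating carrier would also check
    the certificate chain and the attestation level."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)
```

The Robocall Mitigation Database requirement the FCC is enforcing exists precisely because a provider that never signs or verifies these tokens becomes the weak link spammers route through.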

The press release also outlines new proposed rules that would create stricter fines for fake or outdated info in the call provider database, among other administrative actions. Given the typical timeframe for implementing new rules, it seems unlikely they’ll be in place before the second Trump administration effects its own business-friendly changes at the federal agency.

Even if the FCC had 10 times its current capability, it couldn’t completely stop spam calls, especially since most of them originate from other countries where its jurisdiction is limited. But making it harder for spammers to use US-based services is an effective deterrent, if only because it makes trivially easy robocall campaigns that much harder.

At the very least, shutting down domestic businesses that profit off the scummiest of practices — annoying and scamming their fellow Americans — seems like the right thing to do.

Further reading: The FCC takes aim at broadband data caps

via PCWorld https://www.pcworld.com

December 11, 2024 at 10:25AM

Gemini 2.0 Wants to Help You Dominate Video Games or Look Up Tips in Real-Time

https://www.droid-life.com/2024/12/11/gemini-2-0-wants-to-help-you-dominate-video-games-or-look-up-tips-in-real-time/

Alongside Google’s Gemini 2.0 announcement and that impressive Project Astra demo, Google showed off an idea for video games: using Gemini as an assistant while you play. I’m not talking about using AI to play for you, but instead having AI there to remind you of things, help you with strategy, or look up information that could help as you play.

Google says it is collaborating with game developers to figure out ways that they could utilize Gemini. Games like “Clash of Clans” and “Hay Day” were used in a demo where a virtual assistant is essentially watching as the game is played to take in info and be ready for requests.

In one example in the demo, a player asks Gemini to identify the quests they need to complete for the day and then remind them later to do so. In another, a gamer asks for help building the proper troop setup in “Clash” for an attack, with Gemini describing the best composition and breaking down its reasoning. One user also asked Gemini to look up the current “meta” and name the best characters everyone is using; Gemini returned with a response it found on Reddit.

While some of those ideas would probably only be useful when you’re first learning a game, it’s that Reddit example that sticks out to me as being super helpful at any moment. Google says these AI gaming companions can tap into Google Search, which is where the Reddit info came from. I could have used this yesterday when my kid, who has recently taken up Fortnite and wants me to play with him, was wondering where to find a new item location in the game to complete quests. I had to stop playing and actively look it up, then relay the info. If I could have accessed a virtual game companion through my headset at that moment, it all would have been so much easier, without the risk of getting eliminated.

I’d imagine Google has plans beyond these few examples, and I’m sure you can come up with your own. Here’s hoping that AI remains an informational tool when it comes to games, and not much else.

// Google

Read the original post: Gemini 2.0 Wants to Help You Dominate Video Games or Look Up Tips in Real-Time

via Droid Life: A Droid Community Blog https://ift.tt/mvWIeLZ

December 11, 2024 at 11:59AM