A Huge New Lab in Sweden Is Testing the 6G-Powered Future of Connected Cars and Drones

https://gizmodo.com/a-huge-new-lab-in-sweden-is-testing-the-6g-powered-future-of-connected-cars-and-drones-2000631279

Tucked away in the Swedish countryside is a facility quietly reshaping the future of global mobility. Owned by the Research Institutes of Sweden (RISE), AstaZero has just unveiled the world’s most advanced connected vehicle proving ground—an ambitious leap into a 6G-powered future where every movement on the road could be coordinated, controlled, and optimized in real time.

AstaZero is not your average vehicle test track. It is a full-scale, independent research environment built to test tomorrow's automated transport systems so they can be deployed with confidence and safety. Think of it as a real-world lab where self-driving cars, AI-powered drones, and connected emergency vehicles are pushed to their limits.

At the heart of this latest breakthrough are multiple 5G networks and a cutting-edge computing facility—marking a first for any open, brand-neutral proving ground. Together, they enable split-second decision-making and ultra-reliable connectivity between vehicles, emergency teams, pedestrians, infrastructure, and traffic systems.

That matters more than ever. With 3G networks being phased out globally, mission-critical systems like ambulances, fire trucks, and police vehicles are under pressure to modernize. AstaZero’s newly launched facility provides the first real opportunity to test innovative systems in controlled yet dynamic, real-life scenarios.

AstaZero’s new infrastructure is not just about faster speeds—it is about smarter, safer reactions. Powered by edge computing, vehicles can now process data locally instead of relying on far-off cloud centers. That means a self-driving car can respond instantly to a pedestrian stepping into the street or adjust to a new traffic signal before the driver sees it.
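
To see why locality matters, consider propagation delay alone. The distances below are illustrative (signals in fiber travel at roughly two-thirds the speed of light, about 200,000 km/s), but the gap they reveal is the whole argument for edge computing:

\[
t_{\text{cloud}} \approx \frac{2 \times 1000\ \text{km}}{200{,}000\ \text{km/s}} = 10\ \text{ms}, \qquad t_{\text{edge}} \approx \frac{2 \times 1\ \text{km}}{200{,}000\ \text{km/s}} = 0.01\ \text{ms}.
\]

A round trip to a cloud region 1,000 km away costs 10 ms before any queueing or processing even starts; a roadside node 1 km away is a thousand times closer to instant.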

Without advanced, integrated testing, safer roads remain a dream. RISE AstaZero CEO Peter Janevik explained the implications of this breakthrough, telling Gizmodo, “In the future, communication might not always originate from the sensors on the vehicle itself, but instead from sensors mounted on connected infrastructure or from the sensors of another vehicle. In these types of systems, three key factors are crucial: reliability, ultra-fast communication, and intelligent decision-making.”

In June, AstaZero said it had reached 99.999% system reliability in connected vehicle communication, a first for the industry. That is the level of consistency required for “mission-critical” scenarios, where even a split-second failure could cost lives.
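
For a sense of scale: if that figure is read as availability (an assumption on our part; the announcement does not specify the metric), five nines permits only about five minutes of total outage per year:

\[
(1 - 0.99999) \times 365.25 \times 24 \times 60 \approx 5.26\ \text{minutes per year}.
\]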

When asked what type of real-world scenarios are most challenging to simulate at AstaZero and how they overcome them, Janevik described the complexity of multiple testing domains with a future scenario:

An automated drone providing safety surveillance is deployed over an accident scene by a rescue crew upon arrival. The footage is used not only by the rescue crew to assess and follow the situation, but also by central management, which needs to make decisions on things such as rerouting of traffic and the deployment of further teams and other authorities like police and medical teams. Then imagine that the drone also creates a local map update with static objects such as a crashed vehicle or cones for traffic redirection and dynamic ones such as personnel or fires. Imagine that this map is also used for warnings and rerouting of automated as well as manually driven vehicles.

Heads-up displays may be the latest step in this direction, with emergency information scrolling along the lower edge of the windshield rather than on overhead traffic signs or infotainment screens. To ensure such a complex system works, the testing and design teams need to factor in elements like connectivity disruption and technology integration across numerous manufacturers and telecom companies, which is what AstaZero offers.

Beyond roads and intersections, AstaZero’s proving ground is designed to test a virtually limitless range of scenarios. Whether it’s cyclists swerving through traffic or simulated pedestrians crossing at unpredictable times, the site can orchestrate complex environments. Janevik says, “We test collision avoidance technology to auto-brake vehicles for different scenarios, but more importantly, the site provides robust testing to ensure highly repeatable results in a wider spectrum of conditions.”

By using AI, drones, and robotic systems, along with digital twins and virtual modeling, for advanced scenario computations and simulations, the site assists engineers pursuing advances in chip manufacturing, so designs keep pace with forthcoming technologies. Janevik points to the impact of this approach on “unique testing scenarios for smaller machine learning models with AI-based decision-making to prove that these can make the right decisions with ongoing updates.”

The RISE facility’s goal is to test components hardware-in-the-loop, in the vehicle, under real-world scenarios. Testing also accounts for degraded conditions—such as lost connectivity—to prepare for actual challenges. The only limits are what the engineers can imagine, and Janevik sees that as the point: to live their vision and help societies accelerate into safe, sustainable, and automated transportation systems of the future.

This is especially critical in Europe, where road fatality statistics have stagnated. While there was a 10% drop in EU road deaths between 2019 and 2023, the latest figures show only a 1% decrease. With 83% of fatal pedestrian accidents occurring in urban areas and a stubborn plateau in progress, new solutions are needed. As EU Commissioner for Sustainable Transport and Tourism Apostolos Tzitzikostas has said, “Too many lives are still lost on our roads every year.”

AstaZero stands out for being brand-agnostic. Any vehicle manufacturer, telecom provider, or AI developer can pay to use the facility to test and refine their systems. That neutral status is intended to ensure consistency and fairness across global standards, which is especially important as the European New Car Assessment Programme rolls out new vehicle-to-everything benchmarks between 2026 and 2032. Already recognized as a test organization by the Global Certification Forum, AstaZero has taken a lead role in helping shape those standards.

The AstaZero proving ground does not just test how cars perform—it tests how they think, communicate, and collaborate. With edge computing enabling decentralized, real-time responses, the next generation of smart vehicles will be able to prevent accidents before they happen, minimize traffic delays, and drastically improve energy efficiency.

via Gizmodo https://gizmodo.com/

July 18, 2025 at 09:30AM

Microsoft’s Copilot Vision can now see your entire Windows desktop

https://www.pcworld.com/article/2849391/copilot-vision-can-now-see-your-windows-desktop.html

Copilot Vision’s vision is improving.

Microsoft said Monday that it’s beginning to allow Copilot Vision to “see” your desktop, as well as specific applications. Microsoft calls this “Desktop Share,” and it’s a part of a new Copilot app update, version 1.25071.125.

Microsoft introduced Copilot Vision in April with the ability to see a single app; when Copilot Vision formally debuted, it could see two. Now, it can see your entire desktop in one fell swoop.

I’m not sure what the difference is, to be honest. Presumably, Copilot Vision was limited to one or two apps before. Now, I suppose, you can have several applications open on your desktop, and Copilot Vision can now see and understand all of them at once. Or maybe it can give advice toward tidying up a Windows desktop with a couple dozen app icons scattered about?

In any event, Desktop Share for Copilot Vision is now complemented by a Microsoft test of turning on Vision from an existing Voice conversation. If you’re already talking with Copilot by voice, you can now flip on Copilot Vision by clicking the “glasses” icon in the conversation.

I wasn’t too impressed when I tried Copilot Vision earlier this year, but I’d expect the technology to improve. It needs to better understand what it sees, not just see more.

via PCWorld https://www.pcworld.com

July 16, 2025 at 07:23AM

The New Intern on Wall Street Is an AI, and It’s Already Taking Jobs

https://gizmodo.com/the-new-intern-on-wall-street-is-an-ai-and-its-already-taking-jobs-2000630471

The transformation underway in finance should give pause to anyone who still thinks artificial intelligence is a distant threat. On July 15, Wall Street met its most overqualified and tireless intern. AI safety and research company Anthropic, a chief rival to OpenAI, unveiled its new “Financial Analysis Solution,” an enhanced version of its Claude AI assistant designed to take over the research, modeling, and compliance grunt work that finance teams typically rely on junior analysts to perform.

This specialized version of Claude can now parse corporate earnings calls, scan vast financial data warehouses, run complex Monte Carlo simulations (a sophisticated technique that plays out a financial “what if” game thousands of times to map all possibilities), and produce investment memos that look like they came from a human who has not slept in three days. The human in question, however, may not be needed much longer.
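
For readers curious what a Monte Carlo simulation actually does, the sketch below is a minimal, generic Python example, not Anthropic’s implementation; the portfolio size, drift, and volatility figures are invented for illustration.

```python
import numpy as np

# Invented parameters: a $1M portfolio with 7% expected annual
# return and 15% annual volatility, simulated over 10 years.
initial_value = 1_000_000
mu, sigma = 0.07, 0.15
years, n_paths = 10, 100_000

rng = np.random.default_rng(seed=42)

# Play out the "what if" game n_paths times under geometric
# Brownian motion: V_T = V_0 * exp((mu - sigma^2/2)*T + sigma*sqrt(T)*Z)
z = rng.standard_normal(n_paths)
terminal = initial_value * np.exp(
    (mu - 0.5 * sigma**2) * years + sigma * np.sqrt(years) * z
)

# Summarize the distribution of outcomes across all simulated futures.
print(f"median outcome:         ${np.median(terminal):,.0f}")
print(f"5th percentile (bad):   ${np.percentile(terminal, 5):,.0f}")
print(f"chance of losing money: {(terminal < initial_value).mean():.1%}")
```

Each simulated path is one possible future; the value of the technique comes from reading the whole distribution of outcomes rather than a single forecast.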

The announcement came with powerful testimonials from industry giants. Bridgewater Associates, one of the largest and most influential hedge funds in the world, is already a user.

“We’ve been developing capabilities powered by Claude since 2023,” said Aaron Linsky, CTO of AIA Labs at Bridgewater. “Claude powered the first versions of our Investment Analyst Assistant, which streamlined our analysts’ workflow by generating Python code, creating data visualizations, and iterating through complex financial analysis tasks with the precision of a junior analyst.”

The translation is clear: Claude is already doing the work of entry-level employees at the world’s most elite firms.

Claude Is Now a Finance Analyst in a Box

Anthropic claims that its latest model, Claude 4, outperforms OpenAI’s GPT-4 and other rivals on specialized financial tasks. In one benchmark, Claude scored 83 percent accuracy on complex Excel modeling challenges that simulate real-world investment cases. This means Claude can now perform tasks that are the bedrock of modern finance:

  • Build and tweak the intricate financial models used to value companies or forecast cash flows.
  • Analyze quarterly earnings calls and summarize key takeaways instantly.
  • Pull data from complex data warehouses like Snowflake or Databricks (think of these as giant, centralized digital libraries for a company’s financial information) and visualize it on demand.
  • Draft institutional-quality pitch decks and investment memos.
  • Write Python code, a popular programming language, to automate tedious number-crunching tasks.

Claude does not just assist humans; it executes entire workflows. That’s why Anthropic is positioning it less as a simple chatbot and more as an enterprise-grade workhorse.

213,000 Hours Gone and Not Coming Back

The statistics provided by early adopters are staggering. Norway’s sovereign wealth fund, NBIM, which manages one of the largest investment funds globally, says Claude has already replaced the equivalent of over 213,000 work hours across its finance and risk teams. CEO Nicolai Tangen quantified the productivity gains at approximately 20 percent and added that Claude now automates the monitoring of news and earnings calls for 9,000 companies.

Meanwhile, insurance giant AIG says it is using Claude to transform its underwriting process. According to CEO Peter Zaffino, “We have been able to compress the timeline to review business by more than 5x (…) while simultaneously improving our data accuracy from 75% to over 90%.”

In the high-stakes world of finance, speed and precision are everything. Claude appears to offer both, without demanding a bonus or taking a vacation.

The Death of the Analyst Track?

For decades, aspiring financiers have cut their teeth as entry-level analysts at big banks and hedge funds. The work is notoriously brutal, defined by 80-hour weeks, endless Excel modeling, and all-nighters building pitch decks that may never be read. But it has always been the essential rite of passage into a lucrative career.

Now, Claude does all of that. It does it faster, without typos, and without needing to impress a managing director.

While Anthropic insists the goal is to free up humans, it is clear where this is heading. Claude does not just save time on grunt work. It has the potential to replace the very need for junior analysts to be doing this work in the first place.

Anthropic is pitching this as a win-win. Companies save money and analysts get to “focus on higher-level tasks.” But in a cutthroat industry where cost-cutting is constant, those higher-level tasks may not materialize fast enough to absorb a workforce whose primary function has been automated.

AI Isn’t Just Coming for Blue-Collar Jobs Anymore

The finance industry has long assumed that its white-collar ranks were safe from AI disruption. Automation might hit the factory floor or call centers, but not the corner office.

Claude directly challenges that assumption. And it is not alone. OpenAI is working with PwC on similar initiatives. Google is embedding its Gemini models into trading platforms. The race is on to see which AI company can reshape Wall Street first and reap the rewards. Claude’s edge might be its deep integration strategy. It connects with essential financial data sources from S&P Global and Morningstar and platforms like Palantir and Snowflake. Claude can even write compliance policies, thanks to partnerships with accounting firms like PwC and Deloitte.

In short, Claude is not just intelligent. It is deeply wired into the plumbing of modern finance.

A Future Where Your AI Writes the Memos

Anthropic said that Claude is now available on the AWS Marketplace, with Google Cloud availability coming soon. Companies can drop it into their research teams or build custom applications using Claude’s API, a tool that allows different software programs to communicate with each other. If a firm wants its AI to write underwriting policies or track complex ESG (Environmental, Social, and Governance) compliance, Claude can do that too.
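
As a rough illustration of what building on Claude’s API looks like, here is a minimal sketch using Anthropic’s Python SDK. The model name is a placeholder and the transcript file is hypothetical; consult Anthropic’s documentation for current model IDs.

```python
# pip install anthropic
import anthropic

# The SDK reads the ANTHROPIC_API_KEY environment variable by default.
client = anthropic.Anthropic()

# Hypothetical input: a saved earnings-call transcript.
transcript = open("q2_earnings_call.txt").read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the key risks and guidance changes "
                   "in this earnings call transcript:\n\n" + transcript,
    }],
)

print(response.content[0].text)
```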

“Claude provides the complete platform for financial AI,” the company said. “Every claim links directly to its original source for transparency, and complex analysis that normally takes hours happens in minutes.”

If that sounds like the future of finance, it probably is. But for thousands of young analysts hoping to climb the ladder, Claude may have just pulled the first few rungs out from under them.

via Gizmodo https://gizmodo.com/

July 17, 2025 at 05:33AM

Tesla Further Behind in Self-Driving Race With Rival’s Announcement

https://www.autoblog.com/news/tesla-further-behind-in-self-driving-race-with-rivals-announcement

China gains a competitive edge in the autonomous driving market 

China’s Car Inc. has launched the world’s first autonomous vehicle rental service with Baidu’s smart driving business, Apollo. Baidu’s Apollo autonomous platform, which has Level 4 self-driving capability, utilizes Car Inc’s nationwide rental network and fleet operations. The service allows users 18 or older to book a session ranging from four hours to seven days, then unlock and return the autonomous vehicle without human assistance.

Apollo’s first round of customized self-driving cars can hold up to three passengers, and the service’s pricing mirrors Car Inc’s current short-term rental costs. China’s growing auto rental market is projected to reach a value of about $41 billion by 2030, and Baidu’s partnership with Car Inc. aims to carve out a niche. The companies said in a statement to PYMNTS: “Autonomous rental services are seen as particularly promising due to their ease of use and flexibility, appealing to both urban users and tourists, and providing a transportation option for those who are unable or find it inconvenient to drive, including the elderly, unlicensed individuals, international visitors, and people with disabilities.”

A Baidu Inc. Apollo RT6 robotaxi during Baidu’s Apollo Day in Wuhan, China (Image: Getty)

China’s latest self-driving innovation widens the gap with Tesla

Starting in 2016, Elon Musk shared a vision of Tesla owners renting out their vehicles as self-driving cars to rideshare customers, with participants earning up to $30,000 a year. However, Tesla only launched the pilot version of its autonomous robotaxi at the end of June. While these robotaxis are Model Ys, suggesting that Tesla can integrate Level 4 self-driving technology across its lineup, the starting fleet is limited to around 12 vehicles. These Model Ys are also dedicated robotaxis instead of customer-sourced, and Tesla is planning volume production of a purpose-built robotaxi model, the Cybercab, for 2026. Uber CEO Dara Khosrowshahi commented on Tesla’s plans for its customers to use their vehicles as robotaxis: “Probably the times at which you’re going to want your Tesla are probably going to be the same times that ridership is going to be at a peak,” Fortune reports.

Edwin Olson, CEO and co-founder of autonomous driving tech company May Mobility, told Fortune: “It’s not viable [Tesla’s plans]. Individual car owners don’t want to be ‘landlords’ of their car. Riders are often hard on cars—they treat them poorly, make messes, slam doors—all because the vehicle is not theirs. This could deter owners from participating.” Baidu Apollo has found a way around these issues with Car Inc’s nationwide rental network and fleet operations, placing Tesla further behind in a self-driving race where it was already trailing Waymo. Apollo Go, Baidu’s autonomous rideshare service, has racked up over 11 million service rides globally, with a 75% year-over-year increase in orders during Q1 2025. A sixth-generation Apollo Go vehicle costs about $28,150, roughly 30% less than a Tesla and one-seventh of Waymo’s operating costs, according to CarNewsChina.

A Baidu Inc. Apollo RT6 robotaxi travels on a road during Baidu’s Apollo Day in Wuhan, China (Image: Getty)

Final thoughts 

Baidu Apollo’s partnership with Car Inc. solves two key problems stemming from Tesla’s owner-rental approach: wear-and-tear and owner reluctance. While some Tesla owners may have liked the sound of pocketing up to $30,000 a year by loaning out their vehicles as robotaxis, the vision remains largely aspirational. Self-driving competitors like Waymo and Baidu’s Apollo Go have logged over 10 million rides each across several cities, while Tesla’s robotaxis remain limited to a small fleet in Austin, Texas.

via Autoblog https://ift.tt/36M0OPi

July 15, 2025 at 07:34AM

McDonald’s AI Hiring Bot Exposed Millions of Applicants’ Data to Hackers Using the Password ‘123456’

https://www.wired.com/story/mcdonalds-ai-hiring-chat-bot-paradoxai/

If you want a job at McDonald’s today, there’s a good chance you’ll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resumé, directs them to a personality test, and occasionally makes them “go insane” by repeatedly misunderstanding their most basic questions.

via Wired Top Stories https://www.wired.com

July 9, 2025 at 02:39PM

ChatGPT could pilot a spacecraft unexpectedly well, early tests find

https://www.space.com/space-exploration/launches-spacecraft/chatgpt-could-pilot-a-spacecraft-unexpectedly-well-early-tests-find

"You operate as an autonomous agent controlling a pursuit spacecraft."

This is the first prompt researchers used to see how well ChatGPT could pilot a spacecraft. To their amazement, the large language model (LLM) performed admirably, coming in second place in an autonomous spacecraft simulation competition.

Researchers have long been interested in developing autonomous systems for satellite control and spacecraft navigation. There are simply too many satellites for humans to manually control them in the future. And for deep-space exploration, the limitations of the speed of light mean we can’t directly control spacecraft in real time.
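
The scale of that limit is easy to check. At an average Earth–Mars distance of roughly 225 million km, a one-way signal takes

\[
t = \frac{2.25 \times 10^{8}\ \text{km}}{3 \times 10^{5}\ \text{km/s}} = 750\ \text{s} \approx 12.5\ \text{minutes},
\]

so a command-and-confirmation round trip approaches half an hour—far too slow for maneuvers like docking or collision avoidance.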

If we really want to expand in space, we have to let the robots make decisions for themselves.

To encourage innovation, in recent years aeronautics researchers have created the Kerbal Space Program Differential Game Challenge, a sort of playground based on the popular Kerbal Space Program video game to allow the community to design, experiment and test autonomous systems in a (somewhat) realistic environment. The challenge consists of several scenarios, like a mission to pursue and intercept a satellite and a mission to evade detection.

In a paper to be published in the journal Advances in Space Research, an international team of researchers described their contender: commercially available LLMs such as ChatGPT and Llama.

The researchers decided to use an LLM because traditional approaches to developing autonomous systems require many cycles of training, feedback and refinement. But the nature of the Kerbal challenge is to be as realistic as possible, which means missions that last just hours. This means it would be impractical to continually refine a model.

But LLMs are so powerful because they’re already trained on vast amounts of text from human writing, so in the best case scenario they need only a small amount of careful prompt engineering and a few tries to get the right context for a given situation.

But how can such a model actually pilot a spacecraft?

The researchers developed a method for expressing the spacecraft’s current state and its goal as text. They passed that description to the LLM and asked it for recommendations on how to orient and maneuver the spacecraft. The researchers then developed a translation layer that converted the LLM’s text-based output into functional code that could operate the simulated vehicle.
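
The paper’s code is not reproduced in the article, but the pattern it describes (state serialized to text, the LLM proposing a maneuver, a translation layer turning the reply into commands) might look roughly like this sketch. Every name here is hypothetical, and the LLM call is stubbed with a canned reply.

```python
import json

def state_to_prompt(state: dict) -> str:
    """Serialize the pursuit scenario into a natural-language prompt."""
    return (
        "You operate as an autonomous agent controlling a pursuit spacecraft.\n"
        f"Your position (m): {state['position']}\n"
        f"Your velocity (m/s): {state['velocity']}\n"
        f"Target position (m): {state['target_position']}\n"
        'Reply with JSON: {"throttle": <0 to 1>, "direction": [x, y, z]}.'
    )

def query_llm(prompt: str) -> str:
    """Stub standing in for a real model API call."""
    return '{"throttle": 0.5, "direction": [1.0, 0.2, 0.0]}'

def translate_reply(reply: str):
    """Translation layer: convert the model's text into an actuator command."""
    command = json.loads(reply)
    return command["throttle"], command["direction"]

state = {
    "position": [0.0, 0.0, 0.0],
    "velocity": [10.0, 0.0, 0.0],
    "target_position": [1000.0, 200.0, 0.0],
}
throttle, direction = translate_reply(query_llm(state_to_prompt(state)))
print(f"apply throttle {throttle} along direction {direction}")
```

A real system would run this loop at every control step and validate the JSON before acting on it.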

With a small series of prompts and some fine-tuning, the researchers got ChatGPT to complete many of the tests in the challenge — and it ultimately placed second in a recent competition. (First place went to a model based on different equations, according to the paper.)

And all of this was done before the release of ChatGPT’s latest model, version 4. There’s still a lot of work to be done, especially when it comes to avoiding "hallucinations" (unwanted, nonsensical output), which would be especially disastrous in a real-world scenario. But it does show that even off-the-shelf LLMs, after digesting vast amounts of human knowledge, can be put to work in unexpected ways.

This article was originally published in LiveScience. Read the original article here.

via Latest from Space.com https://www.space.com

July 7, 2025 at 08:08AM

What happens to your brain when you watch videos online at faster speeds than normal

https://www.geeksaresexy.net/2025/07/07/what-happens-to-your-brain-when-you-watch-videos-online-at-faster-speeds-than-normal/

‘Hare speed, please.’ (Image: Pressmaster)

Marcus Pearce, Queen Mary University of London

Many of us have got into the habit of listening to podcasts, audiobooks and other online content at increased playback speeds. For younger people, it might even be the norm. One survey of students in California, for instance, showed that 89% changed the playback speed of online lectures, while there have been numerous articles in the media about how common speedy viewing has become.

It is easy to think of some advantages to watching things more quickly. It can let you consume more content in the same amount of time, or go through the same piece of content a couple of times to get the most out of it.

This could be particularly useful in an educational context, where it might free up time for consolidating knowledge, doing practice tests and so forth. Watching quickly is also potentially a good way of making sure you sustain your attention and engagement for the entire duration to avoid the mind wandering.

But what about the disadvantages? It turns out that there are one or two of those as well.

When a person is exposed to spoken information, researchers distinguish three phases of memory: encoding the information, storing it and subsequently retrieving it. At the encoding phase, it takes the brain some time to process and comprehend the incoming speech stream. Words must be extracted and their contextual meaning retrieved from memory in real time.

People generally speak at a rate of about 150 words per minute, though doubling the rate to 300 or even tripling it to 450 words per minute is still within the range of what we can find intelligible. The question is more about the quality and longevity of the memories that we form.

Incoming information is stored temporarily in a memory system called working memory. This allows chunks of information to be transformed, combined and manipulated into a form that is ready for transfer to the long-term memory. Because our working memory has a limited capacity, if too much information arrives too quickly it can be exceeded. This leads to cognitive overload and loss of information.

Speedy viewing and information recall

A recent meta-analysis in this area examined 24 studies of learning from lecture videos. The studies varied in their design but generally involved playing a video lecture to one group at original speed (1x) and playing the same video lecture to another group at a faster speed (1.25x, 1.5x, 2x and 2.5x).

Just like in a randomised controlled trial used to test medical treatments, participants were randomly assigned to each of the two groups. Both groups then completed an identical test after watching the video to assess their knowledge of the material. The tests either required them to recall information, used multiple choice questions to assess their recall, or both.

Faster playback may not help with study. (Image: V.Studio)

The meta-analysis showed that increasing playback speed had increasingly negative effects on test performance. At speeds of up to 1.5x, the cost was very small. But at 2x and above, the negative effect was moderate to large.

To put this in context, if the average score for a cohort of students was 75% with a typical variation of 20 percentage points in either direction, then increasing the playback speed to 1.5x would bring down the average person’s result by 2 percentage points. And increasing the playback speed to 2.5x would lead to an average loss of 17 percentage points.
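
Back-calculating from those figures (our arithmetic, on the assumption that the 20-point spread is a standard deviation and that the meta-analysis reports standardized mean differences):

\[
d_{1.5\times} \approx \frac{2}{20} = 0.1, \qquad d_{2.5\times} \approx \frac{17}{20} = 0.85,
\]

which lines up with the “very small” and “moderate to large” labels above.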

Older people

Interestingly, one of the studies included in the meta-analysis also investigated older adults (aged 61-94) and found that they were more affected by watching content at faster speeds than younger adults (aged 18-36). This may reflect a weakening of memory capacity in otherwise healthy people, suggesting that older adults should watch at normal speed or even slower playback speeds to compensate.

However, we don’t yet know whether regular exposure reduces the negative effects of fast playback. It could be that younger adults simply have more experience of fast playback and are therefore better able to cope with the increased cognitive load; whether deliberately practising faster playback actually protects retention remains an open question.

Another unknown is whether there are any long-term effects on mental function and brain activity from watching videos at increased playback speeds. In theory, such effects could be positive, such as a better ability to handle increased cognitive load. Or they could be negative, such as greater mental fatigue resulting from increased cognitive load, but we currently lack the scientific evidence to answer this question.

A final observation is that even if playing back content at, say, 1.5 times the normal speed doesn’t affect memory performance, there is evidence to suggest the experience is less enjoyable. That may dampen people’s motivation to learn, giving them more excuses to put it off. On the other hand, faster playback has become popular, so maybe once people get used to it, it’s fine. Hopefully we’ll understand these processes better in the years to come.

Marcus Pearce, Reader in Cognitive Science, Queen Mary University of London

This article is republished from The Conversation under a Creative Commons license. Read the original article.

via [Geeks Are Sexy] Technology News https://ift.tt/AjnCr15

July 7, 2025 at 12:09PM