A new policy document outlines China’s plan to create an internationally competitive BCI industry within five years, and proposes developing devices for both health and consumer uses.
Google Vids is the company’s web-based video editor, and you can now use it for free. Up until now, Vids was available only for paying Google Workspace subscribers, but this move makes it accessible to everyone. The pitch here is quite simple: Vids is a simple video editing tool that integrates extremely well with Google Drive. You can use basic editing tools and templates for free, and if you’re a paying Google subscriber, then you can use its AI features, too.
Before you get too excited about Google Vids, though, you should know that it’s not a replacement for professional editing software. It’s more intended as a good place to get started, a bit like what Windows Movie Maker used to be about 15 years ago. You can do a lot with Google Vids, but if you’re a professional, you’ll hit the limits of the app’s capabilities fairly quickly. The good news is that there are alternatives like DaVinci Resolve for those who want professional-grade editing software and don’t need to do their editing in a browser. But if that’s a bit overkill for your needs, Google Vids is still worth looking into, especially because of the new free tier.
Google Vids now has a generous free tier
Over the years, I’ve learned to lower my expectations for web-based video editing tools, but Google Vids is quite decent. My favorite thing about it is that if you have no idea how you want to present a video, it has several built-in templates to help you get started. For instance, there’s a template that lets you create a "year in review" style video. Within each template, you can insert premade scenes alongside your own footage. For example, a sourdough prep template I found had premade scenes showing an ingredient list (with text you can swap out for your own recipe), dough being prepped and shaped, and so on.
You also have the option of importing a presentation from Google Slides and converting it into a video. It’s a great use of the app’s integration with Google Drive, and even better, you can also easily import pictures from your Google Photos account if you need still shots. There’s even a handy feature that lets you search stock photo/video websites to get filler footage.
The tools available for basic edits are also quite intuitive. Even as a novice, I was able to easily add basic animations for text or transition effects, and modify on-screen elements like the background. Google Vids also lets you easily search for royalty-free music to add to your videos, which is a nice touch. The best bit is that you can use Google Vids via any browser, not just Chrome.
You can pay for AI integration
If you are a paying Google Workspace subscriber, then you can also use AI features in Google Vids. The basic editing tools remain the same, and all the AI stuff basically revolves around using a text prompt to generate ideas or videos. For instance, you can use a text prompt to generate a rough storyboard, or just ask Gemini to look at a Google Docs file to generate it even without a prompt. Vids also lets you throw in a script and generate an AI voiceover for your videos, and you have the option to choose from multiple types of voices.
Using Veo 3, you can also now make eight-second video clips from a text prompt, or convert still photos into video using AI. If that’s not enough, you could also input your entire script, and Vids will generate an AI avatar to read it aloud over your other footage, with no time restriction. I’ll presume Google has done its due diligence to not base any of these avatars on actual people, but still, it does mean that some of the talking heads you’re about to see online won’t resemble the person making the video at all. Be careful with what you assume is real.
The International Space Station (ISS) has been in orbit for over 26 years, housing astronauts at an altitude of 250 miles (400 kilometers) above Earth. But even at that distance, the space station can’t escape the drag of Earth’s atmosphere as oxygen molecules and other gases collide with it, causing it to lose altitude over time.
For the ISS to maintain its orbit, NASA and its partners perform the occasional reboost maneuver. This is typically done using the space station’s own thrusters (which are tiny and relatively weak) or with Russia’s Progress spacecraft and Northrop Grumman’s Cygnus. For the first time, however, starting in September, NASA will use SpaceX’s Dragon vehicle to help sustain the space station’s orbital altitude.
A boost kit in the trunk
SpaceX’s Dragon launched to the ISS on Sunday at 2:45 a.m. ET, carrying more than 5,000 pounds of supplies to the orbiting lab. The otherwise routine commercial resupply mission carried a little something extra this time around: a propellant system tucked inside Dragon’s trunk for a reboost demonstration.
Dragon’s boost kit will be used to maintain the altitude of the ISS starting in September through a series of burns planned throughout the fall, nudging the massive space station a little higher in its orbit.
The SpaceX spacecraft, while docked to the station, will use a propellant system independent of the one that fuels its own engines. Instead, the boost kit fuels two Draco engines in the spacecraft’s trunk, using an existing hardware and propellant system design, according to NASA.
Dragon’s main engines do not face the right direction to pull off the boost maneuvers, hence the need for additional engines aligned with the velocity vector of the ISS.
The rear-facing engines are connected to propellant tanks filled with hydrazine and nitrogen tetroxide, which ignite when they come in contact with one another. When it’s time to give the ISS a little boost, the engines will ignite and lightly adjust the space station’s altitude in low Earth orbit.
Multiple reboost options
NASA and SpaceX tested Dragon’s ability to reboost the ISS in November 2024 through a demonstration that lasted approximately 12 minutes. Dragon successfully adjusted the station’s orbit by 7/100 of a mile at apogee, the point at which it’s farthest away from Earth, and 7/10 of a mile at perigee, when it is closest to Earth.
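In round numbers, those fractions of a mile work out as follows; this is a quick sketch, with the mile-to-metre conversion constant being the only figure not taken from the article.

```python
# Convert the reboost deltas from the November 2024 demo to metres.
# The 7/100 and 7/10 mile figures come from the article above.
MILES_TO_METRES = 1609.344  # exact international mile

def to_metres(miles: float) -> float:
    return miles * MILES_TO_METRES

apogee_gain = to_metres(7 / 100)   # ~113 m
perigee_gain = to_metres(7 / 10)   # ~1127 m
print(f"Apogee +{apogee_gain:.0f} m, perigee +{perigee_gain:.0f} m")
```

Small as those numbers sound, they are enough to offset weeks of atmospheric drag on the station.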
“By testing the spacecraft’s ability to provide reboost and, eventually, attitude control, NASA’s International Space Station Program will have multiple spacecraft available to provide these capabilities for the orbital complex,” NASA wrote in a statement at the time.
The Dragon spacecraft will remain docked to the ISS until December—the longest period for a cargo mission—in order to pull off the reboost maneuvers in the coming months. The boost kit being used on this mission is a smaller version of one SpaceX is currently developing for the space station’s final deorbit.
The ISS is due to retire by 2030, and NASA plans on using a Dragon spacecraft to perform a series of deorbit burns that will lower the space station’s altitude until it burns up in Earth’s atmosphere. Until that moment comes, the ISS will get to enjoy a little boost from Dragon.
We all know that nutrition is one of the pillars of health. To live a long and healthy life, it’s important to get the details right, and one ongoing topic of dietary research is protein. As with carbohydrates and fats, both the amount and the quality of protein matter.
Nutritional research is tricky, often relying on observational studies that offer hints but not definitive answers. A recurring debate is whether plant proteins are healthier than animal proteins. Researchers from McMaster University in Ontario recently set out to explore that question.
Their study, published in Applied Physiology, Nutrition, and Metabolism, looked at data from nearly 16,000 adults. Surprisingly, it doesn’t seem to matter whether protein comes from plants or animals when it comes to overall mortality, though animal protein showed a slight protective effect against cancer deaths.
Confusion Around Eating Protein
The recommended dietary allowance (RDA) for protein in Canada and the U.S. is 0.8 grams per kilogram of body weight per day. Another way to approach protein amounts is the acceptable macronutrient distribution range (AMDR), which is broader, at 10 to 35 percent of daily calories. Most people already eat within this range, often consuming several times more protein than the RDA.
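To make those two guidelines concrete, here is a quick worked sketch; the 70 kg body weight and 2,000 kcal daily intake are illustrative assumptions, not figures from the study.

```python
# Worked example of the two protein guidelines mentioned above.
# Assumes a 70 kg adult eating 2,000 kcal/day (illustrative values).
KCAL_PER_GRAM_PROTEIN = 4  # protein supplies ~4 kcal per gram

def rda_grams(body_kg: float) -> float:
    """RDA: 0.8 g of protein per kg of body weight per day."""
    return 0.8 * body_kg

def amdr_grams(daily_kcal: float) -> tuple[float, float]:
    """AMDR: 10-35% of daily calories from protein, in grams."""
    return (0.10 * daily_kcal / KCAL_PER_GRAM_PROTEIN,
            0.35 * daily_kcal / KCAL_PER_GRAM_PROTEIN)

print(rda_grams(70))      # 56.0 g/day for a 70 kg adult
print(amdr_grams(2000))   # (50.0, 175.0) g/day at 2,000 kcal
```

The gap between 56 g and the top of the AMDR range is why "several times higher than the RDA" is still within guidelines.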
Debate remains over how much protein is optimal, especially for older adults. Some studies suggest more protein supports muscle health and longevity, while others raise concerns. One study linked high protein intake with a 75 percent increase in overall mortality and a four-fold higher cancer risk in adults aged 50–65.
Interestingly, that risk disappeared when the protein came from plants, lending support to guidelines like Canada’s Food Guide, which favors plant protein. But rather than comparing different amounts of protein, the McMaster researchers looked at people eating usual amounts of protein that differed only in source.
“There’s a lot of confusion around protein – how much to eat, what kind and what it means for long-term health. This study adds clarity, which is important for anyone trying to make informed, evidence-based decisions about what they eat,” said study co-author Stuart Phillips, professor at McMaster, in a press release.
To provide clarity, the team analyzed data from the Third National Health and Nutrition Examination Survey (NHANES III), which gathered dietary information from U.S. adults between 1988 and 1994. Close to 16,000 people, ages 19 and up, were included.
The researchers used advanced statistical methods, including the National Cancer Institute (NCI) method and multivariate Markov Chain Monte Carlo modeling, to account for daily fluctuations in protein intake.
“It was imperative that our analysis used the most rigorous, gold standard methods to assess usual intake and mortality risk. These methods allowed us to account for fluctuations in daily protein intake and provide a more accurate picture of long-term eating habits,” said Phillips.
Results showed no link between total, animal, or plant protein and risk of death from any cause, including cardiovascular disease and cancer. Even when both protein sources were analyzed together, the findings held steady. There was a hint that animal protein may slightly reduce cancer-related mortality, but no significant relationship emerged for plant protein.
Protein That Promotes Health and Longevity
The takeaway? The data don’t support the idea that one protein source is inherently more harmful or beneficial for longevity than another. If anything, animal protein may be mildly protective against cancer deaths, but the difference is small.
As with all observational studies, the findings can’t prove cause and effect. Still, when combined with decades of clinical trial evidence, the results offer reassurance for people who enjoy a mixed diet.
“When both observational data like this and clinical research are considered, it’s clear both animal and plant protein foods promote health and longevity,” concluded first author Yanni Papanikolaou in the statement.
Modern notebooks with integrated AI hardware are changing the way artificial intelligence is used in everyday life. Instead of relying on external server farms, these large language models, image generators, or transcription systems run directly on the user’s own device.
This is made possible by the combination of powerful CPUs, dedicated graphics processors and, at the center of this development, a Neural Processing Unit (NPU). An NPU is not just an add-on, but a specialized accelerator designed precisely for the calculation of neural networks.
It enables offline AI tools such as GPT4All or Stable Diffusion not just to run, but to respond quickly, with low energy consumption and consistent response times. Even with complex queries or multimodal tasks, performance remains stable. The days when AI was only conceivable as a cloud service are over.
Work where others are offline
As soon as the internet connection drops, conventional laptops are left idle. An AI PC, on the other hand, remains operational, whether in airplane mode above the clouds, deep in a rural dead zone, or on an overcrowded train without a stable connection.
In such situations, the structural advantage of locally running AI systems becomes apparent. Jan.ai or GPT4All can be used to create, check and revise texts, intelligently summarize notes, pre-formulate emails and categorize appointments.
With AnythingLLM, contracts or meeting minutes can be searched for keywords without the documents leaving the device. Creative tasks such as creating illustrations via Stable Diffusion or post-processing images with Photo AI also work, even on devices without a permanent network connection.
Even demanding projects such as programming small tools or the automated generation of shell scripts are possible if the corresponding models are installed. For frequent travelers, project managers, or creative professionals, this creates a comprehensive option for productive working, completely independent of infrastructure, network availability, or cloud access. An offline AI notebook does not replace a studio, but it does prevent downtime.
Sensitive content remains local
Data sovereignty is increasingly becoming a decisive factor in both personal and professional life. Anyone who processes business reports, develops project ideas, or analyzes medical issues cannot afford uncertainty about where that data ends up.
Public chatbots such as Gemini, ChatGPT, or Microsoft Copilot are helpful, but are not designed to protect sensitive data from misuse or unwanted analysis.
Local AI solutions, on the other hand, work without transmitting data to the internet. The models used, such as LLaMA, Mistral or DeepSeek, can be executed directly on the device without the content leaving the hardware.
This opens up completely new fields of application, particularly in areas with regulatory requirements, such as healthcare, legal work, or research. AnythingLLM goes one step further: it combines classic chat interaction with a local knowledge base of Office documents, PDFs, and structured data. This turns a chat AI into an interactive analysis tool for complex collections of information, locally, offline, and in compliance with data protection regulations.
NPU notebooks: new architecture, new possibilities
While traditional notebooks quickly reach their thermal or energy limits in AI applications, the new generation of Copilot PCs relies on specialized AI hardware. Models such as the Surface Laptop 6 or the Surface Pro 10 integrate a dedicated NPU directly into the Intel Core Ultra SoC, supplemented by high-performance CPU cores and integrated graphics.
The advantages are evident in typical everyday scenarios. Voice input via Copilot, Gemini, or ChatGPT can be analyzed without delay, image processing with AI tools takes place without cloud rendering, and even multimodal tasks, such as analyzing text, sound, and video simultaneously, run in real time. Microsoft couples the hardware closely with the operating system.
Windows 11 offers native NPU support, for example for Windows Studio Effects, live subtitles, automatic text recognition in images or voice focus in video conferences. The systems are designed so that AI does not function as an add-on, but is an integral part of the overall system as soon as it is switched on, even without an internet connection.
Productive despite dead zones
The tools for offline AI are now fully developed and stable in everyday use. GPT4All from Nomic AI is particularly suitable for beginners, with a user-friendly interface, uncomplicated model management, and support for numerous LLMs. Ollama is aimed at technically experienced users and offers terminal-based model management with a local API connection, ideal for providing your own applications or workflows directly with AI support. LM Studio, on the other hand, is characterized by its GUI focus. Models from Hugging Face can simply be searched for in the app, downloaded, and activated with a click.
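As an illustration of the kind of local API connection Ollama offers, here is a minimal Python sketch against its documented /api/generate endpoint on the default port; the model name "mistral" is just an example of a model you might have pulled locally.

```python
# Minimal client for Ollama's local HTTP API (default: localhost:11434).
# Assumes an Ollama server is running and the named model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False requests a single JSON reply instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Example (needs a running server): ask("mistral", "Summarize my notes.")
```

Because everything talks to localhost, the same script works on a plane or in a dead zone, which is exactly the point of these tools.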
The LM Studio chatbot not only provides access to a large selection of AI models from Hugging Face, but also allows the AI models to be fine-tuned. There is a separate developer view for this.
Jan.ai is particularly versatile. The minimalist interface hides a highly functional architecture with support for multiple models, context-sensitive responses, and elegant interaction.
Local tools are also available in the creative area. With suitable hardware, Stable Diffusion delivers AI-generated images within a few seconds, while applications such as Photo AI automatically improve the quality of screenshots or video frames. A powerful NPU PC turns the mobile device into an autonomous creative studio, even without Wi-Fi, cloud access, or GPU calculation on third-party servers.
What counts on the move
The decisive factor for mobile use is not just whether a notebook can run AI, but how confidently it can do this offline. In addition to the CPU and GPU, the NPU plays a central role. It processes AI tasks in real time, while at the same time conserving battery power and reducing the load on the overall system.
Devices such as the Galaxy Book with an RTX 4050/4070 or the Surface Pro 10 with an Intel Core Ultra 7 CPU demonstrate that even complex language models such as Phi-2, Mistral, or Qwen run locally, with smooth operation and without the typical latencies of cloud services.
Copilot as a system assistant complements this setup, provided the software can access it. When travelling, you can compose emails, structure projects, prepare images, or generate text modules, regardless of the network. Offline AI on NPU notebooks turns the train’s dining car, the airport gate, or a remote holiday home into a productive workspace.
Requirements and limitations
The hardware requirements are not trivial, however. Models such as LLaMA 2 or Mistral require several gigabytes of RAM; 16 GB is the practical minimum. Those working with larger prompts or context windows should plan for 32 or 64 GB. Storage needs add up as well, as many models occupy between 4 and 20 GB on the SSD.
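A back-of-the-envelope way to see where those file sizes come from: a model’s weights scale with parameter count times bytes per weight. The parameter counts below are illustrative, and the estimate ignores the extra RAM the runtime needs for context.

```python
# Rough size of a local model's weight file: billions of parameters
# times bytes per weight gives decimal gigabytes. Illustrative only;
# real files add metadata, and inference needs extra RAM for context.
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight

print(model_size_gb(7, 4))    # 3.5 GB: a 7B model at 4-bit quantization
print(model_size_gb(7, 16))   # 14.0 GB: the same model at 16-bit
```

This is why quantized models dominate on notebooks: the same 7B model that fits easily in 16 GB of RAM at 4-bit would crowd it at full precision.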
NPUs take care of inference, but depending on the tool, additional GPU support may be necessary, for example for image generation with Stable Diffusion.
Integration into the operating system is also important. Copilot PCs ensure deep integration between hardware, AI libraries, and system functions. Anyone working with older hardware will have to accept limitations.
The model quality also varies. Local LLMs do not yet consistently reach the level of GPT-4, but they are more controllable, more readily available and more data protection-friendly. They are the more robust solution for many applications, especially when travelling.
Offline AI under Linux: openness meets control
Offline AI also unfolds its potential on Linux systems, often with even greater flexibility. Tools such as Ollama, GPT4All, or LM Studio offer native support for Ubuntu, Fedora, and Arch-based distributions and can be installed directly from the terminal or as a Flatpak. The integration of open models such as Mistral, DeepSeek, or LLaMA works smoothly, as many projects rely on open source frameworks such as GGML or llama.cpp.
Browser interface for Ollama: Open WebUI is quickly set up as a Python program or in a Docker container and provides a user interface.
Anyone working with Docker or Conda environments can build customized model setups, activate GPU support, or fine-tune inference parameters. This opens up various scenarios, especially for developers: scripting, data analysis, code completion, or testing your own prompt structures.
In conjunction with tiling desktops, reduced background processes and optimized energy management, the Linux notebook becomes a self-sufficient AI platform, without any vendor lock-in, with maximum control over every file and every computing operation.
Offline, not at the cloud’s mercy
Offline AI on NPU notebooks is not a stopgap measure, but a paradigm shift. It offers independence, data protection, and responsiveness, even in environments without a network. Thanks to specialized chips, optimized software, and well thought-out integration in Windows 11 and the latest Linux kernel, new freedom is created for data-secure analyses, mobile creative processes, or productive work beyond the cloud.
The prerequisite for this is an AI PC that not only provides the necessary performance, but also anchors AI at a system level. Anyone relying on reliable intelligence on the move should no longer hope for the cloud, but choose a notebook that makes it superfluous.
Los Angeles, the city that’s long battled smog, now faces a new and ironic problem: EV fast chargers that actually make the air dirtier. According to a recent UCLA study, particulate matter levels near some charging cabinets spiked to 200 micrograms per cubic meter, roughly 16 times higher than levels at nearby gas stations. For a technology meant to clean up the city’s air, that’s a startling twist.
The culprit isn’t the cars themselves but the fast chargers’ cooling fans, which kick up dust, brake residue, and other fine particulates from the ground. Move just a few steps away, and pollution levels drop—but if you’re standing right next to the unit, you could be inhaling more than you would at a busy fuel pump.
Why Chargers Pollute More Than Pumps
The UCLA team compared 50 charging locations across Los Angeles County against gas stations, traffic intersections, and citywide background air. Gas stations averaged around 12 µg/m³ of PM2.5, intersections hovered near 10–11, and the citywide average was closer to 7–8. But chargers averaged 15, with peaks up to 200.
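The headline multiple is simply the peak charger reading divided by the gas-station average; a quick sketch with the figures quoted above makes the comparison explicit.

```python
# PM2.5 figures quoted from the UCLA study, in micrograms per cubic metre.
charger_peak = 200.0     # worst-case spike next to a charging cabinet
charger_avg = 15.0       # average across the 50 charger sites
gas_station_avg = 12.0   # average at gas stations

peak_ratio = charger_peak / gas_station_avg   # ~16.7, the "roughly 16x"
avg_ratio = charger_avg / gas_station_avg     # averages differ far less
print(f"peak ratio {peak_ratio:.1f}x, average ratio {avg_ratio:.2f}x")
```

The contrast between the two ratios is the story: typical charger air is only modestly worse than a gas station’s, but the short-lived spikes are an order of magnitude higher.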
It’s an unexpected setback for a sector that’s been under pressure to prove itself reliable and effective. Fortunately, the industry has already shown it can adapt quickly—just this summer, new data revealed major improvements in public EV charging reliability, after years of complaints about broken or slow stations. The next challenge may not be uptime, but air quality.
Health Risks, Policy Pressure, and Easy Fixes
Exposure to PM2.5 is linked to heart disease, asthma, and other respiratory problems, meaning those few minutes of waiting at the charger could carry risks, particularly for sensitive groups. But this is not an insurmountable problem. Engineers suggest adding filtration systems to cabinets or simply elevating exhaust vents to prevent dust clouds from being blown into people’s breathing space.
The timing matters. EV infrastructure growth has already hit snags—new charger installations declined earlier this year, slowed by political criticism and investor uncertainty. If public confidence in chargers is dented further by pollution concerns, it could give skeptics more ammunition to stall rollout. And that comes at a critical moment: federal regulators are still clashing over how strict future tailpipe standards should be, with some reports warning that nixing emissions rules could actually spike gas prices.
What It Means for Drivers
For EV drivers, the practical takeaway is simple: stand back from the charger while your car tops up. Just a few meters makes the difference between safe air and dirty air. Many drivers already sit inside their cars with the HVAC running, which effectively filters the particulates.
Zooming out, the study is a reminder that clean transport isn’t just about zero tailpipe emissions—it’s about the entire ecosystem. Fast charging is essential to mass EV adoption, but as Los Angeles shows, it’s also a system that needs smarter design. With relatively simple fixes available, researchers argue this problem is more hiccup than disaster—if policymakers, utilities, and automakers act before it becomes another headline.
In 2024, Microchip launched PIC64, a new portfolio of microprocessors that the Chandler, Arizona-based company claims could enable a generational leap in embedded processing performance for aerospace and defense applications.