Iranian Spies Accidentally Leaked Videos of Themselves Hacking

https://www.wired.com/story/iran-apt35-hacking-video


The most telling element of the video, Wikoff says, is the speed with which the hacker exfiltrates the accounts’ information in real time. The Google account’s data is stolen in around four minutes. The Yahoo account takes less than three minutes. In both cases, of course, a real account populated with tens or hundreds of gigabytes of data would take far longer to download. But the clips demonstrate how quickly that download process is set up, Wikoff says, and suggest that the hackers are likely carrying out this sort of personal data theft on a mass scale. “To see how adept they are at going in and out of all these different webmail accounts and setting them up to exfiltrate, it is just amazing,” says Wikoff. “It’s a well-oiled machine.”

In some cases, IBM’s researchers could see in the video that the same dummy accounts were also themselves being used to send phishing emails, with bounced emails to invalid addresses appearing in the accounts’ inboxes. The researchers say those bounced emails revealed some of the APT35 hackers’ targeting, including American State Department staff as well as an Iranian-American philanthropist. It’s not clear if either target was successfully phished. The dummy Yahoo account also briefly shows the phone number linked with it, which begins with Iran’s +98 country code.

In other videos the IBM researchers declined to show to WIRED, the researchers say the hackers appeared to be combing through and exfiltrating data from real victims’ accounts, rather than ones they created for training purposes. One victim was a member of the US Navy, and another was a two-decade veteran of the Greek Navy. The researchers say the APT35 hackers appear to have stolen photos, emails, tax records, and other personal information from both targeted individuals.

Screenshot: IBM

In some clips, the researchers say they observed the hackers working through a text document full of usernames and passwords for a long list of non-email accounts, from phone carriers to bank accounts, as well as some as trivial as pizza delivery and music streaming services. “Nothing was off-limits,” Wikoff says. The researchers note that they didn’t see any evidence that the hackers were able to bypass two-factor authentication, however. When an account was secured with any second form of authentication, the hackers simply moved on to the next one on their list.

The sort of targeting that IBM’s findings reveal fits with previous known operations tied to APT35, which has carried out espionage on behalf of Iran for years, most often with phishing attacks as its first point of intrusion. The group has focused on government and military targets that represent a direct challenge to Iran, such as nuclear regulators and sanctions bodies. More recently it has aimed its phishing emails at pharmaceutical companies involved in Covid-19 research and President Donald Trump’s reelection campaign.

via Wired Top Stories https://ift.tt/2uc60ci

July 16, 2020 at 05:12AM

OpenAI’s fiction-spewing AI is learning to generate images

https://www.technologyreview.com/2020/07/16/1005284/openai-ai-gpt-2-generates-images/

In February of last year, the San Francisco-based research lab OpenAI announced that its AI system could now write convincing passages of English. Feed the beginning of a sentence or paragraph into GPT-2, as it was called, and it could continue the thought for as long as an essay with almost human-like coherence.

Now, the lab is exploring what would happen if the same algorithm were instead fed part of an image. The results, which were given an honorable mention for best paper award at this week’s International Conference on Machine Learning, open up a new avenue for image generation, ripe with opportunity and consequences.

At its core, GPT-2 is a powerful prediction engine. It learned to grasp the structure of the English language by looking at billions of examples of words, sentences, and paragraphs, scraped from the corners of the internet. With that structure, it could then manipulate words into new sentences by statistically predicting the order in which they should appear.
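That idea of statistically predicting the next word can be sketched with a toy model. The example below is a hypothetical bigram counter, vastly simpler than GPT-2’s transformer, but it illustrates the same core mechanic: pick the continuation that most often follows the context in the training data.

```python
# Toy sketch of statistical next-word prediction (a bigram model, not
# GPT-2's actual architecture): count which word most often follows
# each word, then predict by picking the most frequent successor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count how often each following word appears.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" ("cat" follows "the" most often)
```

GPT-2 replaces the raw counts with a learned neural probability distribution over a huge vocabulary and a much longer context, but prediction still works token by token in this same left-to-right fashion.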

So researchers at OpenAI decided to swap the words for pixels and train the same algorithm on images from ImageNet, the most popular image bank for deep learning. Because the algorithm was designed to work with one-dimensional data, i.e., strings of text, they unfurled the images into a single sequence of pixels. They found that the new model, named iGPT, was still able to grasp the two-dimensional structures of the visual world. Given the sequence of pixels for the first half of an image, it could predict the second half in ways that a human would deem sensible.
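The “unfurling” step can be sketched in a few lines. This is an illustrative toy, not OpenAI’s code: iGPT additionally downsamples images and maps colors to a small palette, but the core move is the same, flattening a 2D grid into a 1D sequence so a text-style model can predict pixel i from pixels 0 through i-1.

```python
# Sketch of unfurling a 2D image into a 1D pixel sequence (raster order),
# so a sequence model built for text can treat each pixel as a "token".
import numpy as np

image = np.arange(12).reshape(3, 4)  # toy 3x4 "image" with pixel values 0..11

# Row-major flattening: row 0 left-to-right, then row 1, then row 2.
sequence = image.flatten()           # length 12 sequence of "tokens"

# Completion task: condition on the first half, predict the second half
# one pixel at a time, exactly as GPT-2 predicts the next word.
half = len(sequence) // 2
context, to_predict = sequence[:half], sequence[half:]
print(context.tolist())   # -> [0, 1, 2, 3, 4, 5]
```

The payoff of this framing is that nothing about the algorithm itself changes between text and images; only the tokens do.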

Below, you can see a few examples. The left-most column is the input, the right-most column is the original, and the middle columns are iGPT’s predicted completions. (See more examples here.)

Image: OpenAI

The results are startlingly impressive and demonstrate a new path for using unsupervised learning, which trains on unlabeled data, in the development of computer vision systems. Early computer vision systems in the mid-2000s trialed such techniques, but they fell out of favor as supervised learning, which uses labeled data, proved far more successful. The benefit of unsupervised learning, however, is that it allows an AI system to learn about the world without a human filter, and significantly reduces the manual labor of labeling data.

The fact that iGPT uses the same algorithm as GPT-2 also shows its promising adaptability across domains. This is in line with OpenAI’s ultimate ambition to achieve more generalizable machine intelligence.

At the same time, the method presents a concerning new way to create deepfake images. Generative adversarial networks, the most common category of algorithms used to create deepfakes in the past, must be trained on highly curated data. To get a GAN to generate a face, for example, its training data should only include faces. iGPT, by contrast, simply learns enough of the structure of the visual world across millions and billions of examples to spit out images that could feasibly exist within it. While training the model is still computationally expensive, offering a natural barrier to its access, that may not be the case for long.

OpenAI did not grant an interview request, and therefore did not provide additional context for future plans regarding its research. But in an internal policy team meeting that MIT Technology Review attended last year, its policy director Jack Clark mused about the future risks of GPT-style generation, including what would happen if it were applied to images. “Video is coming,” he said, projecting where he saw the field’s research trajectory going. “In probably five years, you’ll have conditional video generation over a five to ten second horizon. The sort of thing I’m imagining is eventually you’ll be able to put a photo of Angela Merkel as the condition, with an explosion next to her, and it will generate a likely output, which will be Angela Merkel getting killed.”

via Technology Review Feed – Tech Review Top Stories https://ift.tt/1XdUwhl

July 16, 2020 at 09:18AM