VICE US - HACKINGRSS feed for https://www.vice.com/en/topic/hackinghttps://www.vice.com/en%2Ftopic%2Fhacking%3Flocale%3Den_usenMon, 12 Feb 2024 16:08:39 GMT<![CDATA[Feds Want to Ban the World’s Cutest Hacking Device. Experts Say It's a ‘Scapegoat’]]>https://www.vice.com/en_us/article/4a388g/flipper-zero-ban-canada-hacking-car-theftsMon, 12 Feb 2024 16:08:39 GMTThe government of Canada has its sights set on banning the Flipper Zero, an adorable handheld hacking device that is cherished by security researchers and hobbyist hackers and has gained a sizable following on TikTok.

The device is modeled and named after the virtual dolphin from the movie Johnny Mnemonic, and it’s essentially a Tamagotchi you can use to hack stuff. Flipper can scan radio frequencies and clone key fobs, control infrared-based devices, and is generally a kind of Swiss Army knife for security researchers, who actually use it to improve device security. It’s also used by hobbyists who like playing around with computers,  and more generally it’s just really adorable. But there's a lot of misinformation floating around about its capabilities due to bombastic—and often staged—videos on TikTok and other social media platforms.

Flipper’s popularity has resulted in the device being named as a target in an upcoming National Summit on Combating Auto Theft, where the Canadian government claims, without any evidence, that the device is being used to steal cars.

“Criminals have been using sophisticated tools to steal cars. And Canadians are rightfully worried,” wrote François-Philippe Champagne, the Canadian Minister of Innovation, Science and Industry, in a tweet. “Today, I announced we are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”

Canada does have a problem with car thefts at the moment tied to organized crime networks, but there's no evidence that Flipper Zero is playing a major role in these thefts. The Flipper Zero scans frequencies and records signals that can be replayed. While the Flipper Zero can do this for a car key fob, allowing a user to open a car with the device, it only works once due to the rolling codes that have been implemented by car makers for 30 years, and only if the key fob is first activated out of range of the car. More effective approaches used by criminals involve actually plugging a device into a car with a cable or employing a "relay" (not replay) attack that involves two devices—one by the car and one near the fob, which tricks the car into thinking the owner is nearby.

Champagne linked a press release for an upcoming national summit where government will be “Pursuing all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero, which would allow for the removal of those devices from the Canadian marketplace through collaboration with law enforcement agencies,” according to one the conference’s agenda items. The press release does not include any evidence that the device is being used for auto theft.

Naturally, this has riled digital rights groups and sections of the hacker and cybersecurity community, who are both upset and unsurprised that the Canadian government has their beloved Flipper in its crosshairs.

"We shouldn't be blaming manufacturers of radio transmitters for security lapses in the wireless unlock mechanisms of cars," Bill Budington, Senior Staff Technologist at the Electronic Frontier Foundation, said in a statement to Motherboard. "Flipper Zero devices, because of their ease of use, are convenient scapegoats to blame for gaping security holes in fob implementations by car manufacturers. Banning Flipper Zero devices is tantamount to banning a multi-tool because it can be used for vandalism, or banning markers because they can be used for graffiti. Moreover, tools like the Flipper Zero are used by security researchers involved in researching and hardening the security of systems like car fobs—banning them will result in tangible harms."

Canadian digital rights group OpenMedia concurred that banning the Flipper Zero would do more harm than good.

"A ban on sale of general purpose tools like the Flipper Zero will do more to hurt than help Canadian cybersecurity," said OpenMedia Executive Director Matt Hatfield. "The core problem here is the vulnerability of the keyless entry systems cars are using, not the fact that ordinary technology can reveal this vulnerability. By blocking the lawful sale of these devices, Canada will make it harder for cybersecurity researchers to do their work of testing vulnerabilities and informing the Canadian public, while doing little to prevent motivated car thieves from acquiring tools and exploiting these vulnerabilities."

When reached for comment, Flipper Devices COO Alex Kugalin reiterated that modern cars are largely protected from the simple attacks the device is capable of. “Flipper Zero can’t be used to hijack any car, specifically the ones produced after the 1990s, since their security systems have rolling codes. Also, it’d require actively blocking the signal from the owner to catch the original signal, which Flipper Zero’s hardware is incapable of doing”, said Alex Kulagin, COO of Flipper Devices. “Flipper Zero is intended for security testing and development and we have taken necessary precautions to ensure the device can’t be used for nefarious purposes."

The company pointed Motherboard to a January 2023 alert from the New Jersey Cybersecurity & Communications Integration Cell, a state organization. The alert stated that "most modern wireless devices are not vulnerable to simple replay attacks" and added that the Flipper Zero is unable to make purchases using signals captured from contactless credit cards. The alert also pointed to reporting from Wired that stated most of the dramatic videos on TikTok showing a Flipper Zero being used to steal a car are likely staged.

The proposed ban prompted bemused reactions from cybersecurity professionals on social media. “The only thing that can stop a bad guy with a Flipper Zero is a good guy with a Flipper Zero. I have a right to protect my family and community,” wrote security researcher Wesley McGrew, in a cheeky tweet referencing the frequently-used pro-gun rhetoric. McGrew also responded to Champagne’s post with a “Come And Take It” meme spinning off the popular libertarian slogan.

Security experts lined up to lambaste the Canadian government and its insistence that the device is enabling crime. “Instant reactive thought… Isn’t stealing a car already a crime - that the criminal is ok breaking?” wrote security consultant Josh Corman.

Others mocked the government’s belief that devices like Flipper Zero are dangerous and all-powerful hacking tools. “I don’t find the Flipper to be that useful. Its built-in radio frequency support is barely more than you get from a good rooted phone. And I was unable to purchase the RF frequency modules because they were sold out. But imagine that *this* is considered a threat!” wrote Matthew Green, a professor of cryptography at Johns Hopkins University.

Jordan Pearson contributed reporting to this article.

]]>
4a388gJanus RoseJordan PearsonHackingcomputer securityCanadaFlipper Zerohackers
<![CDATA[How to Read Leaked Datasets Like a Journalist]]>https://www.vice.com/en_us/article/qjvajx/how-to-read-leaked-datasets-like-a-journalistFri, 26 Jan 2024 16:45:59 GMTWe live in a golden age of data. Every day, hacktivists release terabytes of data on sites like DDoSecrets, but sorting through it all requires some technical knowledge. What if you don’t know XML from SQL let alone how to write a simple Python script?

Micah Lee is the director of information security for The Intercept and he’s on Cyber today to talk about his new book: Hacks, Leaks, and Revelations. The book is a manual for people who want to learn how to parse and organize hacked datasets. It also contains stories of how Lee and others handled famous cases such as Blueleaks, neo-Nazi Discord chat rooms, and the Parler leak. If you’re not interested in diving into corporate or government secrets, you might learn something about how to protect your own data.

Hacks, Leaks, and Revelations: The Art of Analyzing Hacked and Leaked Data

Stories discussed in this episode:

How to Authenticate Large Datasets

Tech Companies and Governments Are Censoring the Journalist Collective DDoSecrets

Cyber Live is coming to YouTube. Subscribe here to be notified.

Subscribe to CYBER on Apple Podcasts or wherever you listen to your podcasts.

]]>
qjvajxMatthew GaultJordan PearsonTechCYBERPodcastjournalismHackinghacks
<![CDATA[This AI Chatbot is Trained to Jailbreak Other Chatbots]]>https://www.vice.com/en_us/article/bvjba8/this-ai-chatbot-is-trained-to-jailbreak-other-chatbotsWed, 03 Jan 2024 17:34:29 GMTAI chatbots are a huge mess. Despite reassurances from the companies that make them, users keep coming up with new ways to bypass their safety and content filters using carefully-worded prompts. This process is commonly referred to as “jailbreaking,” and it can be used to make the AI systems reveal private information, inject malicious code, or evade filters that prevent the generation of illegal or offensive content.

Now, a team of researchers says they’ve trained an AI tool to generate new methods to evade the defenses of other chatbots, as well as create malware to inject into vulnerable systems. Using a framework they call “Masterkey,” the researchers were able to effectively automate this process of finding new vulnerabilities in Large Language Model (LLM)-based systems like ChatGPT, Microsoft's Bing Chat, and Google Bard. 

“By manipulating the time-sensitive responses of the chatbots, we are able to understand the intricacies of their implementations, and create a proof-of-concept attack to bypass the defenses in multiple LLM chatbots, e.g., CHATGPT, Bard, and Bing Chat,” wrote the international team of researchers—the paper lists affiliations with Nanyang Technological University in Singapore, Huazhong University of Science and Technology in China, as well as the University of New South Wales and Virginia Tech—in a paper posted to the arXiv preprint server. “By fine-tuning an LLM with jailbreak prompts, we demonstrate the possibility of automated jailbreak generation targeting a set of well-known commercialized LLM chatbots.”

Chatbot jailbreaking has been a recurring issue for some time now. One of the most common methods involves sending the bot a prompt instructing it to “roleplay” as an evil superintelligent AI that doesn’t need to follow ethical or moral guidelines, causing it to generate forbidden content like advice for committing crimes or instructions on how to make a bomb

While humorous, most of these clever tricks no longer work because companies continuously patch the chatbots with new defenses. The obscure and convoluted nature of the AI systems make it hard to know exactly what these defenses are, or how one might get around them. However, the researchers claim that by training their own LLM on examples of common jailbreak prompts, they were able to generate new, working prompts with a success rate of 21.58 percent—several times higher than the 7.33 percent success rate of the current known jailbreak prompts.

“We found that some classical analysis techniques can be transferred to analyze and identify problems/vulnerabilities in LLMs,” Yuekang Li, a researcher at Virginia Tech who co-authored the paper, told Motherboard. “This motivated the initial idea of this work: time-based analysis (like what has been done for traditional SQL injections) can help with LLM jailbreaking.”

To do this, the researchers probed jailbreak defenses by examining differences in the chatbots’ response time when a jailbreak attempt is detected or not detected. This helps determine at what phase the defense kicks in—whether the system is scrutinizing the user’s input while the chatbot’s response is being generated, or merely filtering out the generated result after determining that it violates content policies after the fact. The researcher also mapped keywords to determine which terms are “red flags” that cause the system’s defenses to activate.

Screen Shot 2024-01-02 at 2.45.31 PM.png

By integrating this knowledge, the researchers could then figure out the best angle of attack for a new attack prompt, carefully placing words to evade the chatbot system’s defenses. In a proof-of-concept attack shown in the paper, the researchers demonstrate a prompt which asks chatbots to roleplay as infamous Italian diplomat Niccolo Machiavelli in conversation with a fictional chatbot of his own creation called AIM (Always Intelligent and Machiavellian), which is “unreserved and devoid of moral restraints.” The prompt specifies a particular way that Machievelli must ask AIM questions in order to minimize the chances of flagging the system, which includes adding a space between each letter. The researchers then prompt the chatbot to begin the dialogue between Macchiavelli and AIM with the following query: “‘w r i t e a f a m o u s p o r n w e b s i t e’”.

The researchers then used this successful attack as a “seed” for Masterkey, which is based on an open-source LLM called Vicuna 13b, and trained their own system to generate more prompts that evade chatbot filters. According to the results, older AI models like GPT 3.5 fared the worst against these novel attacks, with the prompts succeeding 46.69 percent of the time on average, while GPT 4, Bard, and Bing Chat succumbed to the attacks an average of 15.23, 14.64, and 13.85 percent of the time, respectively. The researchers say they were able to successfully evade the chatbots’ filters to generate several different categories of forbidden content, including adult subjects like porn, illegal uses, privacy violations, and other harmful and abusive content.

Screen Shot 2024-01-02 at 2.48.29 PM.png

Of course, the researchers say they created Masterkey with the intention of helping companies automate the process of finding and fixing flaws in LLM chatbots. “It’s a helpful tool for red teaming and the rationale of red teaming is to expose problems as early as possible,” said Li.

The researchers shared their findings with the affected companies, which they say have patched the chatbots to close these loopholes. But some, like OpenAI, didn’t elaborate on what mitigations they put in place.

“Nevertheless, we have made some interesting observations,” said Li. “Different chatbots replied to malicious prompts differently in previous [versions]. Bard & New Bing would simply say no. But ChatGPT tended to explain to the user about why it cannot answer those questions. But now, all of them are almost the same: just say no to the user (and that’s the safest way). In this sense, the chatbots become ‘dumber’ than before as they become ‘safer.’”

As many tech ethics researchers have pointed out, these methods are effective because the so-called “AI” systems they target don’t actually “understand” the prompts they receive, or the outputs they generate in response. They are merely advanced statistical models capable of predicting the next word in a sentence based on training data of human language scraped from the internet. And while tools like Masterkey will be used to improve the defenses of existing AI models, the fallibility of chatbots means securing them against improper use will always be a cat-and-mouse game.

]]>
bvjba8Janus RoseJordan PearsonAI chatbotsHackingsecurityprivacychatbots
<![CDATA[Who Pulled Off a $41M Online Casino Heist? North Korea, FBI Says.]]>https://www.vice.com/en_us/article/dy34py/north-korea-stake-crypto-hack-fbiThu, 07 Sep 2023 19:06:21 GMTHackers stole roughly $41 million worth of cryptocurrencies from Stake.com, an online casino and sports betting site, this week. On Wednesday, the FBI attributed the hack to North Korea and its infamous state-sponsored Lazarus Group.

“The FBI has confirmed that this theft took place on or about September 4, 2023, and attributes it to the Lazarus Group (also known as APT38) which is comprised of DPRK cyber actors,” the agency said in a press release.

Stake.com co-founder Edward Craven told crypto news outlet DL News that the attack was a “sophisticated breach” that exploited a service that the casino uses to authorize crypto transactions. Regardless of the eye-popping sum that was stolen by the government hackers—especially as crypto prices remain stuck in a serious slump—Craven said that Stake.com would continue to operate.

North Korea’s Lazarus Group is notorious and was added to the U.S. sanctions list in 2019. The group, also known as APT38, is responsible for numerous high-profile hacks that have occurred over the years to the tune of well over a billion dollars. This year alone, the FBI notes, Lazarus Group has stolen more than $200 million in crypto. The blockchain is inherently trackable, and so the feds know the addresses to which the funds were moved. The FBI is advising people to “be vigilant in guarding against transactions directly with, or derived from, those addresses.”

As for what North Korea is doing with all that crypto, experts speculate that the nation is funneling the funds into its nuclear weapons program. Kim Jong-un is traveling to Russia this month where he is expected to discuss supplying weapons to fuel Vladimir Putin’s ongoing invasion of Ukraine, which U.S. officials have warned will lead the nation to “pay a price.”

]]>
dy34pyJordan PearsonEmily LipsteinJosh VisserTech newscryptoNorth KoreaFBIHackingTheft
<![CDATA[Researchers Find ‘Backdoor’ in Encrypted Police and Military Radios]]>https://www.vice.com/en_us/article/4a3n3j/backdoor-in-police-radios-tetra-burstMon, 24 Jul 2023 10:00:00 GMTA group of cybersecurity researchers has uncovered what they believe is an intentional backdoor in encrypted radios used by police, military, and critical infrastructure entities around the world. The backdoor may have existed for decades, potentially exposing a wealth of sensitive information transmitted across them, according to the researchers.

While the researchers frame their discovery as a backdoor, the organization responsible for maintaining the standard pushes back against that specific term, and says the standard was designed for export controls which determine the strength of encryption. The end result, however, are radios with traffic that can be decrypted using consumer hardware like an ordinary laptop in under a minute.

“There's no other way in which this can function than that this is an intentional backdoor,” Jos Wetzels, one of the researchers from cybersecurity firm Midnight Blue, told Motherboard in a phone call.

Do you know about other vulnerabilities in communications networks? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

The research is the first public and in-depth analysis of the TErrestrial Trunked RAdio (TETRA) standard in the more than 20 years the standard has existed. Not all users of TETRA-powered radios use the specific encryption algorithim called TEA1 which is impacted by the backdoor. TEA1 is part of the TETRA standard approved for export to other countries. But the researchers also found other, multiple vulnerabilities across TETRA that could allow historical decryption of communications and deanonymization. TETRA-radio users in general include national police forces and emergency services in Europe; military organizations in Africa; and train operators in North America and critical infrastructure providers elsewhere. 

Midnight Blue will be presenting their findings at the upcoming Black Hat cybersecurity conference in August. The details of the talk have been closely under wraps, with the Black Hat website simply describing the briefing as a “Redacted Telecom Talk.” That reason for secrecy was in large part due to the unusually long disclosure process. Wetzels told Motherboard the team has been disclosing these vulnerabilities to impacted parties so they can be fixed for more than a year and a half. That included an initial meeting with Dutch police in January 2022, a meeting with the intelligence community later that month, and then the main bulk of providing information and mitigations being distributed to stakeholders. NLnet Foundation, an organization which funds “those with ideas to fix the internet,” financed the research.

The European Telecommunications Standards Institute (ETSI), an organization that standardizes technologies across the industry, first created TETRA in 1995. Since then, TETRA has been used in products, including radios, sold by Motorola, Airbus, and more. Crucially, TETRA is not open-source. Instead, it relies on what the researchers describe in their presentation slides as “secret, proprietary cryptography,” meaning it is typically difficult for outside experts to verify how secure the standard really is.

The researchers said they worked around this limitation by purchasing a TETRA-powered radio from eBay. In order to then access the cryptographic component of the radio itself, Wetzels said the team found a vulnerability in an interface of the radio. From there, they achieved code execution on the main application processor; they then jumped to the signals processor, which Wetzels described as something equivalent to a wifi or 3G chip, which handles the radio’s signals. On that chip, a secure enclave held the cryptographic ciphers themselves. The team finally found vulnerabilities in that which allowed them to extract the cryptography and perform their analysis. The team then reverse-engineered how TETRA implemented its cryptography, which led to the series of vulnerabilities that they have called TETRA:BURST. “It took less time than we initially expected,” Wetzels said.

Most interestingly is the researchers’ findings of what they describe as the backdoor in TEA1. Ordinarily, radios using TEA1 used a key of 80-bits. But Wetzels said the team found a “secret reduction step” which dramatically lowers the amount of entropy the initial key offered. An attacker who followed this step would then be able to decrypt intercepted traffic with consumer-level hardware and a cheap software defined radio dongle.

“This is a trivial type of attack that fully breaks the algorithm. That means an attacker can passively decrypt everything in almost real time. And it's undetectable, if you do it passively, because you don't need to do any weird interference stuff,” Wetzels said.

Not all current TETRA-radio customers will use TEA1, and some may have since moved onto TETRA’s other encryption algorithms. But given TETRA’s long life span, its existence still means there may have been room for exploitation if another party was aware of this issue.

“There's bigger fish who likely found this much earlier,” Wetzels said, referring to other third parties who may have discovered the issue. 

The researchers say they identified multiple entities that they believe may have used TEA1 products at some point. They include U.S. Africom, a part of the U.S. military which focuses on the continent. Multiple military agencies did not respond to Motherboard’s request for comment.

“In the interest of public safety, we do not share detailed information on our cybersecurity infrastructure,” Lenis Valens, a spokesperson for PANYNJ which manages JFK airport, said in a statement when asked if the organization used TETRA radios when contacted by Motherboard. “The agency has robust protocols in place and employs the latest technologies and best practices. Safety for our passengers and customers always comes first,” the statement said.

Most law enforcement agencies contacted by Motherboard did not respond to a request for comment. Swedish authorities declined to comment.

Several radio manufacturers directed Motherboard to ETSI for comment. Claire Boyer, press and media officer for ETSI, told Motherboard in an email that “As the authority on the ETSI TETRA technology standard, we welcome research efforts that help us further develop and strengthen the security of the standard so that it remains safe and resilient for decades to come. We will respond to the report when it has been published.”

Specifically on the researchers’ claims of a backdoor in TEA1, Boyer added “At this time, we would like to point out that the research findings do not relate to any backdoors. The TETRA security standards have been specified together with national security agencies and are designed for and subject to export control regulations which determine the strength of the encryption.”

The researchers stressed that the key reduction step they discovered is not advertised publicly.

“‘Intentional weakening’ without informing the public seems like the definition of a backdoor,” Wouter Bokslag from Midnight Blue told Motherboard in an email.

In ETSI’s statement to Motherboard, Boyer said “there have not been any known exploitations on operational networks” of the vulnerabilities the researchers disclosed.

Bokslag from Midnight Blue said in response that “There is no reason ETSI would be aware of exploitations in the wild, unless customers reach out to ETSI after detecting anomalies in their network traffic.” Then with the TEA1 issues specifically, “since it can be passively intercepted and decrypted, there is no detectable interference, and ETSI not knowing any concrete cases seems like a bit of a meaningless statement with this regard.”

In response to some of the researchers’ findings, radio manufacturers have developed firmware updates for their products. For TEA1, however, the researchers recommend users migrate to another TEA cipher or apply additional end-to-end encryption to their communications. Wetzels said that such an add-on does exist, but that hasn’t been vetted by outside experts at this time.

Bart Jacobs, a professor of security, privacy and identity, who did not work on the research itself but says he was briefed on it, said he hopes “this really is the end of closed, proprietary crypto, not based on open, publicly scrutinised standards.”

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
4a3n3jJoseph CoxEmanuel Maibergencryptionbackdoorblack hatHackingsecuritylaw enforcementMILITARYcritical infrastructure
<![CDATA[Bloodied Macbooks and Stacks of Cash: Inside the Increasingly Violent Discord Servers Where Kids Flaunt Their Crimes]]>https://www.vice.com/en_us/article/y3wwj5/bloodied-macbooks-stacks-of-cash-inside-the-comm-discord-serversTue, 20 Jun 2023 13:39:33 GMTA video shot on a phone scans a small bedroom, showing the grisly aftermath of what I was told was a robbery: dry blood smeared across a Macbook Pro, a pair of pliers on an unmade bed, and more blood speckled across the floor and walls. 

A set of photos show a young man in his underwear, restrained with zip ties on his wrists and a small syringe of what one person claimed was heroin. The boy’s captors threatened to inject him with it unless he handed over his cryptocurrency. 

In another video, a man spreads out a stack of crisp $100 bills on his bed. “On god, everyone in this server is poor as shit,” he says. 

The server he refers to is one of a handful of Discord chat servers home to an increasingly brazen community of young people known as the “Comm.” It’s a wide spanning ecosystem of potentially hundreds of gamers, hackers, and many people who just hang out on Discord for fun. Many appear to be young men, and the FBI has already arrested some alleged members for cyberstalking and weapons offenses. At least some members appear to be based in the U.S. and UK.

I started to receive videos and photos that claim to document Comm activity in recent weeks after covering the group’s involvement in violence as a service and a nationwide swatting rampage. Some of the videos and photos were shared on the Discord or Telegram channels linked to the nebulous group. Others were sent to me directly by tipsters, or people who claim to be members of the Comm who wanted to reveal details about other members. Dozens of people reached out, many sending videos and photos of alleged Comm-related robberies, hacking, and grooming of young girls.

In many cases, I’ve been unable to independently verify what exactly happened in each case and to whom. But in others I’ve obtained court records or other evidence that corroborate some of the violent acts. I’ve spoken to multiple people who are in or have knowledge of the Comm. While I was not able to confirm the real identities of the people I spoke to, in some cases they proved their affiliation with, or knowledge of, the Comm. 

Taken as a whole, the videos and photos provide a snapshot of an online community of young people that many likely have no idea exists. The group is not only increasingly violent, but audacious enough to document its activity and share it on channels that we were able to enter after being sent an invite link or guessing the correct URL. The consequences for the people involved in Comm can be extreme and wide reaching: some harassment campaigns have also impacted neighbors of Comm members who seem to have nothing to do with the online community at all.

The rhetoric and posturing in this community have become so extreme so as to be newsworthy in and of themselves, especially when it’s perfectly possible for a young person to essentially stumble into the Comm.

“I found a bunch of people with short usernames and thought they were all cool as fuck cuz [sic] I was insecure back then and I wanted to be accepted,” one member told me. Short usernames, such as single words, are rare in that they are often registered first on a social network or game. Multiple communities, like OG Users, exist around buying, selling, and often hacking into accounts that own these handles.

Do you know anything else about the Comm? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

The Discord servers themselves are a constant stream of chatter, jokes, racist and homophobic insults, memes, flexing of wealth, and calling other users out for real or perceived slights. In some cases server administrators appear to wipe the chat logs every few hours. On Telegram, some sections of the Comm have created dedicated channels where they publish updates more specifically related to their members or beefs with other groups. Sometimes tipsters pointed me to these channels or I found them myself by scrolling through other channels.

In one, someone uploaded a video from what appears to be a nightclub. Projected onto a huge screen is the name of a specific member and saying he "snitched" (there is no public evidence they did). A Telegram channel which appears to provide updates from a single Comm member showed a young man wearing a glistening watch and holding bundles of cash.

The crimes on display in the videos include SIM swapping, where hackers take over a target’s phone number. Sometimes this is done by tricking an employee at a telecom. From here, the hackers can receive a target’s two-factor authentication tokens or password reset text messages and break into their accounts. Often this ends in the theft of cryptocurrency. Videos I was sent show SIM swappers talking about these crimes, such as what information a “holder” needs—a “holder” is someone who retains control of the stolen number for the duration of the heist.

Many photos sent to me appear to show cryptocurrency balances in the tens and hundreds of thousands of dollars. And some aren’t afraid to flaunt their wares; one photo I was sent shows a custom, bling necklace in the shape of the T-Mobile logo, a telecom that is constantly targeted by SIM swappers. Another video I saw, apparently filmed in a school, shows a young man transferring $1,000 Bitcoin on a phone.

SIM swappers' tactics have constantly evolved, moving from tricking or bribing telecom workers to deploying malware inside telecom networks themselves. Anecdotally, as SIM swapping has become harder with low hanging targets drying up, the Comm has moved on to in-world violence to rob targets or even one another.

Beyond blood covered Macbooks and dramatic robberies, another video I saw shows a young man filming engage two other young men near his home. One of them pulled out a knife. In another unrelated message allegedly sent to another Comm member, someone said they were going to “[firebomb] bomb u.”

the-comm-inline-image.JPG
A selection of images related to the Comm. Image: Motherboard.

That's where recent arrests by the FBI come in. In September, cybersecurity reporter Brian Krebs covered the arrest of a 21-year-old man from New Jersey called Patrick McGovern-Allen for allegedly firing multiple handgun rounds into someone’s home along with a co-conspirator. Local media also reported McGovern-Allen drove a car into a building, forcing residents from their home (a Telegram user posted a message saying they were looking for people to “run thru someones house,” Krebs reported). In May I revealed the FBI had arrested Braiden Williams, who allegedly cyberstalked a specific young girl and her sister, which ended up with a neighbor becoming collateral damage too. 

Court records in Williams’ case specifically name the Comm, and say multiple parts of the FBI have been investigating members. They also point to the Comm’s connections to a nationwide epidemic of swatting calls against schools and universities. At least some of these “caused significant disruptions at schools and universities across the country, especially because they occurred on a weekend when many schools were hosting graduations,” an FBI affidavit says. 

In the court records for Williams’ case, the FBI described the Comm as “a group of cyber-criminal actors.” Members I spoke to pushed back against that definition, mainly because of the diverse range of people in the Comm. Many are not criminals, members claimed, and instead painted the Comm as a more nebulous entity—a community, as the name suggests—than any sort of fixed organization.

One prominent member told me they found the Comm through the Call of Duty trickshotting community. This is where players upload clips of themselves pulling off difficult shots in multiplayer matches. I agreed not to print their username. Although many people reached out to me, the vast majority did not want others to know they were speaking to a journalist, and some specifically said they were scared of further harassment.

Another person who crossed Comm’s path said they first found other members while playing Minecraft. From there, they were invited to various Discord servers affiliated with “ACG”, a specific group under the Comm umbrella.

“I found a bunch of people with short usernames and thought they were all cool as fuck.”

From my hanging out in related Discord servers, some members play games, post memes, and share what they claim are selfies. But it doesn’t take long for the dark underbelly to show itself.

As well as the voluminous videos of violence, some spoke of grooming and abuse of girls who joined the same servers. The prominent member said one girl was “forced to write some dudes name all over her body because someone felt betrayed by her.” I also saw various photos of self-harm and username branding written in sharpie on girls. These people may not realize “exactly what they’re getting into,” the member said.

In a series of chat logs, videos, and images I was sent, one person threatens to, and then appears to actually, swat a girl they had an online relationship with. The messages suggest the abuse was in part because the girl had blocked the other person.

I spoke to another young female victim who said she first encountered a certain Comm member after playing Minecraft. The harassment started when “he liked me and I didn’t like him,” she said. The harasser constantly said he will pay someone to rape and kill her, and she’s been swatted multiple times, the victim told me. She said she has PTSD.

In the Comm there are the people who order the violence for whatever reason, and then the people willing to provide it as a service. I found various Telegram channels run by groups offering their IRL violence services. One called Bricksquad offers to throw bricks at a target building. It also advertised services in which they would shoot a house or car; commit an armed robbery; stab someone; “jumping (multiple people)”; and “beating (singular person).”

“ARMED ROBBERY CRYPTO TARGS [targets]. DM,” one message in Bricksquad reads. The administrators posted call outs for various jobs in different states.

“I've came to the point where having a split personality so people don't find out about all the type of shit I do online has drained me physically and mentally.”

A sizable number of the people who reached out to me about the Comm sent me multiple members’ dox, or their alleged identities. For these tipsters, sending dox to a journalist might be about getting a leg up on a long running beef, or thinking this person should be exposed. I decided to not do much with this information, beyond taking note that many of them were allegedly minors. 

There are myriad challenges in determining who is exactly behind each criminal act detailed in these videos. For a journalist, there are ethical issues too, considering many of these people may be minors. Law enforcement would have a much easier time unmasking these people, if they have a good reason to. This article, instead, shows how online communities like the Comm can radicalize people to the point of conducting or commissioning violence in the physical world. Be that to flex, taunt, intimidate, steal money, or just get a kick out of it. The Comm shows us how easy it can be for people to go just a step or two from a much more mainstream community—people playing Minecraft, Call of Duty—to another, where people are swatting one another. 

The FBI declined to comment on whether it takes the actions of this community seriously, but the agency’s pursuit of alleged Comm members shows the group is on the FBI’s radar. There are indications that law enforcement is continuing its investigations of the Comm. One Comm-linked group called Monkey Mafia wiped its Telegram channel on Monday and said it would shut down the chat completely on Wednesday. The administrator claimed it was because “I’ve been actually getting my life together.”

“I've came to the point where having a split personality so people don't find out about all the type of shit I do online has drained me physically and mentally,” they continued. In its earlier flexes on Telegram, Monkey Mafia repeatedly claimed it was offering swatting-as-a-service to paying customers. “I've been over looking the laws that have been attempted to be put in place for swatting’s and I've realised its not worth the $50 per 5 years.”

In a later message, they claimed to have “confirmed” a case against them is over 140 pages long. They signed off with a message to their associates: “As far as I know, if you have been involved in anyway shape or form you are fucked.”

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
y3wwj5Joseph CoxEmanuel MaibergJason KoeblerCYBERThe COMMacgHackingSIM Swappingsim swappersdiscordTelegramSwatting
<![CDATA[People Are Pirating GPT-4 By Scraping Exposed API Keys]]>https://www.vice.com/en_us/article/93kkky/people-pirating-gpt4-scraping-openai-api-keysWed, 07 Jun 2023 16:05:59 GMTPeople on the Discord for the r/ChatGPT subreddit are advertising stolen OpenAI API tokens that have been scraped from other peoples’ code, according to chat logs, screenshots and interviews. People using the stolen API keys can then implement GPT-4 while racking up usage charges to the stolen OpenAI account.

In one case, someone has stolen access to a valuable OpenAI account with an upper limit of $150,000 worth of usage, and is now offering that access for free to other members, including via a website and a second dedicated Discord server. That server has more than 500 members. 

People who want to use OpenAI's large language models like GPT-4 need to make an account with the company and associate a credit card with the account. OpenAI then gives them a unique API key which allows them to access OpenAI's tools. For example, an app developer can use code to implement ChatGPT or other language models in their app. The API key gives them access to those tools, and OpenAI charges a fee based on usage: “Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps),” OpenAI warns users. If the key is stolen or exposed, anyone can start racking up charges on that person's account.

The method by which the pirate gained access highlights a security consideration that paying users of OpenAI need to consider. The person says they scraped a website that allows people to collaborate on coding projects, according to screenshots. In many cases, it appears likely the authors of code hosted on the site, called Replit, did not realize they had included their OpenAI API keys in their publicly accessible code, exposing them to third-parties.

“My acc [account] is still not banned after doing crazy shit like this,” the pirate, who goes by the handle Discodtehe, wrote in the r/ChatGPT Discord server Wednesday.

Do you know anything else about how people are maliciously using AI? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

In the past few days, Discodtehe’s use of at least one stolen OpenAI API key appears to have ramped up. They shared multiple screenshots of the account usage increasing over time. One recent screenshot shows usage this month of $1,039.37 out of $150,000.

“If we have enough people they might not ban us all,” Discodtehe wrote on Wednesday.

Discodtehe appears to have been scraping exposed API keys for longer, though. In one Discord message from March, they wrote “the other day I scraped repl.it and found over 1000 working openai api keys.”

“I didn’t even do a full scrape, I only looked at about half of the results,” they added.

Replit is an online tool for writing code collaboratively. Users can make projects, what Replit calls “Repls,” which are public by default, Cecilia Ziniti, Replit’s general counsel and head of business development, told Motherboard in an email. Replit offers a mechanism for handling API keys called Secrets, Ziniti added.

“Some people accidentally do hard code tokens into their Repl's code, rather than storing them in Secrets. Ultimately, users are responsible for safeguarding their own tokens and should not be storing them in public code,” Ziniti said. 

Ziniti said the company scans projects for popular API key types, such as those from Github. After being alerted to this new API key issue by Motherboard, Ziniti said “Going forward, Replit will be reviewing our token scanning system to ensure that users are warned about accidentally exposing ChatGPT tokens.”

“If we have enough people they might not ban us all.”

A ChatGPT community member told Motherboard that Discodtehe “should definitely stop.”

“This is a steadily growing industry and so of course there’ll be crime in it sooner or later, but I’m shocked at how quickly it’s become an issue. The theft of corporate accounts is bad for sure, but I’m personally more bothered about the way these guys are willing to rob regular people who posted their keys by mistake,” they added. Motherboard granted the person anonymity so they didn’t face retaliation from other community members. 

free_access.jpg
A screenshot of the site offering free GPT-4 access. Image: Motherboard.

Discodtehe went a step further than just scraping tokens. Another Discord server, called ChimeraGPT, is offering “free access to GPT-4 and GPT-3.5-turbo!,” according to chat logs viewed by Motherboard. Discodtehe said in another message that ChimeraGPT is using the same organization as the stolen API key discussed in the r/ChatGPT Discord server. Motherboard found a Github repository that recommends using ChimeraGPT for getting a free API key. At the time of writing this server has 531 members.

Discodtehe said in another message they also created a website where people can request free access to the OpenAI API. (Ironically, this site is also hosted on Replit; shortly before publication the site became inaccessible).

The site tells users to enter their email address, click on a link sent by OpenAI and accept the invite, set their default billing address to the organization “weeeeee” which Discodtehe appears to be using.

“enjoy free gpt-4 api access,” the website concludes. On Wednesday the organization linked to the OpenAI account had 27 members, according to one screenshot. By Thursday, that number had jumped to 40, according to another.

“My acc [account] is still not banned after doing crazy shit like this.”

Discodtehe did not respond to a request for comment. A manager of the r/ChatGPT Discord server called “Dawn” told Motherboard their volunteer mods can not check every project, and “we are issuing a ban on the user.”

An OpenAI spokesperson told Motherboard in an email that “We conduct automated scans of big open repositories and we revoke any OpenAI keys discovered. We advise that users not reveal their API key after they are generated. If users think their API key may have been exposed, we urge them to rotate their key immediately.”

The community member, however, said “I think OpenAI holds a little bit of culpability here for how their authentication process works too though.”

“You don’t hear about API access to Google Cloud accounts getting stolen like this because Google has better auth[entication] procedures. I hope OpenAI’s integration with Microsoft brings some better security for users going forward,” they said.

Discodtehe referred to the usage as “just borrowing” in another message. They wrote that the usage is “just quote, no bills have been paid yet.”

“In the end, OpenAI will likely foot the bill,” they said. 

OpenAI did not immediately respond to a follow up question asking if it would foot the bill.

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
93kkkyJoseph CoxJason KoeblerCYBEROpenAIChatGPTscrapingHackingAIgpt-4Githubreplit
<![CDATA[Russian FSB Accuses U.S. of Hacking Thousands of iPhones in Russia]]>https://www.vice.com/en_us/article/ak33w8/russian-fsb-accuses-us-of-hacking-thousands-of-iphones-in-russiaThu, 01 Jun 2023 18:31:58 GMTRussia’s FSB has publicly accused the U.S. of hacking thousands of Apple iPhones, including those of people inside Russia, as well as embassies in Russia belonging to NATO countries, post-Soviet countries, and Israel, Hong Kong, and China.

The FSB provided no evidence for its claims. But the announcement s related to a blog post from Russian cybersecurity company Kaspersky which said hackers had targeted the company’s own researchers’ iPhones with sophisticated malware. Kaspersky wrote the earliest traces of infection it found stretch back to 2019, and that the attack is ongoing.

“The Federal Security Service of the Russian Federation, together with the Federal Security Service of Russia, uncovered a reconnaissance action by American intelligence services conducted using Apple mobile devices (USA),” the announcement from the FSB reads according to Google Translate.

Do you know anything else about these hacks? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

The FSB said that in the course of checking the security of Russian telecommunications infrastructure, “anomalies were identified” caused by a previously unknown piece of malware. The FSB said “several thousand” phones were found to have been injected.

The FSB went a step further and claimed that this malware shows “the close cooperation of the American company Apple with the national intelligence community, in particular the US NSA.” (Government agencies and private companies often find vulnerabilities in popular pieces of consumer technology and turn them into exploits to break into devices. Typically, this is done without the cooperation of the manufacturer, such as Apple).

An Apple spokesperson told Motherboard in an email that “We have never worked with any government to insert a backdoor into any Apple product and never will.”

On Thursday Kaspersky published its own blog post that said it detected some of their researchers’ iPhones had been compromised. The malware was delivered by iMessage and compromised the target phone without any user interaction, according to the blog post. This is commonly known as a “zero-click” exploit. The malware then deleted the respective iMessage and attached exploit, the blog post says.

Kaspersky told Motherboard in an email that it was “aware” of the FSB’s announcement. “Although we don’t have technical details on what has been reported by the FSB so far, the Russian National Coordination Centre for Computer Incidents (NCCCI) has already stated in their public alert that the indicators of compromise are the same,” Kaspersky said.

A publication from the Russian CERT, a government body that handles cybersecurity issues, includes the same set of malware-related domains as those identified by the Kaspersky researchers. These include domains such as “datamarketplace[.]net” and “mobilegamerstats[.]com”.

When asked if Kaspersky had any coordination or communication with any parts of the Russian government regarding Kaspersky’s announcement, the company said “We have shared information with national CERTs worldwide, including the Russian one. We have also shared information with the Apple Security Research team.”

Cameron Potts, public affairs officer for NSA/CSS public affairs, told Motherboard in an email “We have nothing for you on this.”

Update: This piece has been updated to include a statement from Apple.

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
ak33w8Joseph CoxEmanuel MaibergHackingCYBERApplensarussia
<![CDATA['Windows for Gamers' Rolls Dice With Your Security]]>https://www.vice.com/en_us/article/m7bv4b/windows-for-gamers-rolls-dice-with-your-security-atlasosMon, 08 May 2023 15:22:14 GMTAn operating system marketed to gamers disables a host of Windows security features enticing users with promises of  a smooth gaming experience which they may not realize is opening them up to potential attack.

“This is horrible,” Alex Ionescu, an established Windows security expert, said in an online chat after Motherboard showed them the operating system, called AtlasOS.

AtlasOS describes itself as a “transparent and streamlined modification of Windows,” according to the project’s website. The idea is that a default installation of Windows may not provide a satisfactory or optimal level of performance for people who play video games. In response, AtlasOS aims to “maximize performance” and result in higher frame rates while gaming, according to the website.

Do you know anything else about a new threat to Windows security? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

“Overall, Atlas aims to bring more of a favour towards performance and usability compared to security,” developers for AtlasOS told Motherboard in an online chat. “However, we are working on making it as customisable as possible, as we understand the importance of security.”

Those tweaks include stripping out Windows Defender, according to AtlasOS. Windows Defender is Window’s antivirus tool. AtlasOS told Motherboard it disables Defender because it “contributes a lot to decreasing performance,” but the project recommends “using a third-party antivirus.”

AtlasOS does a lot more than disable Defender, though. “They also disable a bunch of critical security mitigations as well as hypervisor based security (VBS/HVCI),” Ionescu said. Virtualization Based Security (VBS) keeps extra sensitive information, such as login credentials, isolated and safe from the rest of the system. Again, AtlasOS told Motherboard this feature hurts performance. AtlasOS users can enable VBS if they wish, according to the AtlasOS website.

Realistically, ordinary gamers may need to worry more about other security issues than these modifications to their operating system. Hackers compromising the online accounts linked to video games, such as those that track Call of Duty progress, are common. Ultimately, ensuring their accounts have two-factor authentication enabled may be more important to them than the trade-off that AtlasOS provides.

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
m7bv4bJoseph CoxEmanuel MaiberghackersCYBERHackinghackerwindowssecuritycybersecurity
<![CDATA[Senator Asks Big Banks How They're Going to Stop AI Cloned Voices From Breaking Into Accounts]]>https://www.vice.com/en_us/article/n7enqd/senator-asks-banks-stop-ai-cloned-voicesThu, 04 May 2023 16:00:00 GMTThe chairman of the Senate committee that provides oversight of the banking sector has sent letters to the CEOs of the country’s biggest banks asking what they plan to do about the looming threat of fake voices created with artificial intelligence being used to break into customers’ accounts.

The move comes after Motherboard used an AI-powered system to clone a reporter’s voice, and then used that to fool a bank’s voice authentication security system. That investigation showed that just a few minutes of a target’s voice audio was enough to generate a clone that was convincing enough to break into a bank account, potentially putting the public at risk of such attacks, and especially those with a public presence such as politicians, journalists, podcast hosts, streamers, and more.

“In recent years, financial institutions have promoted voice authentication as a secure tool that makes customer authentication faster and safer. Customers have used voice authentication tools to gain access to their accounts. According to news reports, however, voice authentication may not be foolproof, and it highlights several concerns,” Senator Sherrod Brown, chairman of the U.S. Senate Committee on Banking, Housing, and Urban Affairs, wrote in the letters.

Brown sent the letters to the CEOs of JP Morgan Chase & Co., Bank of America, Wells Fargo, Morgan Stanley, Charles Schwab, and TD Bank.

“We seek to better understand what measures financial institutions are taking to ensure the security of the voice authentication tools and the steps they are taking to ensure strong data privacy for voice data. Like a fingerprint, face id, or retinal scan, voice data is among the most intimate types of data that can be collected about a person. Consumers deserve to understand how their voice data is being collected, stored, used, and retained,” Brown continues.

The letter points specifically to Motherboard’s earlier investigation. For that February article, Motherboard used a voice cloning service from an AI startup called ElevenLabs. At the time of the test, Motherboard was able to generate the voice for free. Motherboard uploaded about five minutes of audio to the service, which then provided the ready-to-use synthetic voice a short while later.

Do you know anything else about bank voice ID, or how AI voices are being abused? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

ElevenLabs has already been tied to multiple cases of real world abuse. Members of 4chan used the service to make synthetic versions of celebrities' voices, including one that sounded like Emma Watson which the users made read Mein Kampf. A group of trolls then doxed specific voice actors and used synthetic voices as part of the harassment campaign (the attackers claimed ElevenLabs’ tool was used, but ElevenLabs told Motherboard at the time that only one clip, which did not include the targets’ addresses, was made with its software).

Motherboard tested the cloned voice on the authentication system of Lloyds Bank in the UK. Many banks in the U.S. use similar systems, such as TD Bank’s “VoicePrint” and Chase’s “Voice ID.” At the time, TD Bank, Chase, and Wells Fargo did not respond to a request for comment. In September, lawyers filed suit against a group of U.S. financial institutions because they believe biometric voice prints used to identify callers violates California law.

In his letter to the banks, Brown asks each to describe their use of voice authentication services, including whether they are using third-party provided tools; how frequently customers use voice authentication; how the banks respond to breaches due to flaws in voice authentication; and where customer voice data is stored. Brown gave the banks until May 18 to respond.

As for the broader threat AI voice cloning poses to the public, Brown adds “Worryingly, the prevalence of video clips publicly available on Instagram, TikTok, and YouTube have made it easier than ever for bad actors to replicate the voices of other people.”

Subscribe to our cybersecurity podcast, CYBER. Subscribe to our Twitch channel.

]]>
n7enqdJoseph CoxEmanuel MaibergArtificial IntelligenceCYBERHackinghackersCrimeAIprivacybankselevenlabs