Best Infosec-Related Long Reads for the Week of 5/18/24

Hackers rescued a bricked Polish train, The double life of Incognito Market's founder, Tricking Wi-Fi networks into less secure connections, Cybercriminals are selling Indian police biometric data, AI fakes are used to recruit Indian voters, Indian fake news verification tools are a bust

Metacurity is pleased to offer our free and premium subscribers this weekly digest of the best long-form (and longish) infosec-related pieces we couldn’t properly fit into our daily news crush. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com.

Their Trains Were Stalled. These Hackers Brought Them Back to Life

The Wall Street Journal’s Jack Gillum and Karolina Jeznach tell the story of how a Polish ethical hacking group called Dragon Sector managed to unbrick a 175-ton passenger train operated by Dolnośląskie Rail in southwest Poland after the rail company turned to SPS, a rival to the original manufacturer Newag, for repair and refurbishment.

After a different broken train was returned from Newag, the group discovered that GPS coordinates in its software just happened to pinpoint boundaries on a map around Newag’s competitors.

It was an electronic leash that seemed to tether any repair work to the manufacturer. And it was a problem, the hackers say, that extended to other trains across Poland. One train on a different railway even had code that signaled a mechanical breakdown when the system was working fine.

“What Newag did,” [the railway’s lawyer, Mirosław Eulenfeld] said, “was truly gangster-like.”

Newag didn’t respond to inquiries seeking comment. In a previous statement, Newag had denied the software subterfuge, arguing its code was “clean” and that SPS ginned up a “conspiracy theory for the media” to avoid paying contract penalties.

With minutes to spare under the railway’s deadline, the hackers came up with programmatic workarounds that brought the locomotives back to life.

“In a true MacGyver-like fashion, the boys succeeded,” said [Monika Mieczkowska, the daughter of SPS’s owner], adding that they finished the job with 43 minutes left in the time allotted by SPS’s contract with the railway. “I was crying.”

Dragon Sector presented their findings to fellow hackers earlier this year at the annual Chaos Communication Congress in Germany. Although their presentation was full of technical findings—“We reverse-engineered based on traffic dumps and a Windows DLL”—it drew chuckles from a sympathetic audience who understood it took a group of techies to fix a European railway.

There have also been unspecified threats of legal action by Newag, the hackers said. Experts fear that could raise the stakes for hackers who use their skills to further the public interest.

He Trained Cops to Fight Crypto Crime—and Allegedly Ran a $100M Dark-Web Drug Market

Wired’s Andy Greenberg details the crimes of Lin Rui-siang, who was arrested by US authorities last week for running the dark web illegal narcotics marketplace Incognito Market, highlighting how Lin lived a double life as an expert cryptocurrency tracer until the long arm of the law caught up with him.

Over his years working as a cryptocurrency-focused intern at Cathay Financial Holdings in Taipei and then as a young IT staffer at St. Lucia’s Taiwanese embassy, Lin allegedly lived a double life as a dark-web figure who called himself “Pharoah” or “faro”—a persona whose track record qualifies as remarkably strange and contradictory even for the dark web, where secret lives are standard issue. In his short career, Pharoah launched Incognito, built it into a popular crypto black market with some of the dark web’s better safety and security features, then abruptly stole the funds of the market’s customers and drug dealers in a so-called “exit scam” and, in a particularly malicious new twist, extorted those users with threats of releasing their transaction details.

During those same busy years, Pharoah also launched a web service called Antinalysis, designed to defeat crypto money laundering countermeasures—only for Lin, who prosecutors say controlled that Pharoah persona, to later refashion himself as a crypto-focused law enforcement trainer. Finally, despite his supposed expertise in cryptocurrency tracing and digital privacy, it was Lin's own relatively sloppy money trails that, the DOJ claims, helped the FBI to trace his real identity.

Among all those incongruities, though, it's the image of Lin giving his cryptocurrency crime training in St. Lucia—which Lin proudly posted to his LinkedIn account—that shocked Tom Robinson, a cofounder of the blockchain analysis firm Elliptic, who has long tracked Lin's alleged Pharoah alter ego. “This is an alleged dark-net market admin standing in front of police officers, showing them how to use blockchain analytics tools to track down criminals online,” says Robinson. “Assuming he is who the FBI says he is, it's incredibly ironic and brazen.”

SSID Confusion: Making Wi-Fi Clients Connect to the Wrong Network

Researchers at the Belgian university KU Leuven demonstrated how a design flaw in the IEEE 802.11 standard enables an SSID (service set identifier) confusion attack that can trick Wi-Fi clients into connecting to a less secure network than the one they intended.

In our attack, when the victim wants to connect to the network TrustedNet, we trick it into connecting to a different network WrongNet that uses similar credentials. As a result, the victim’s client will think, and show the user, that it is connected to TrustedNet, while in reality it is connected to WrongNet. The root cause is that, although passwords or other credentials are mutually verified when connecting to a protected Wi-Fi network, the name of the network is not guaranteed to be authenticated. This is caused by a flaw in the 802.11 standard that underpins Wi-Fi.
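The unauthenticated network name is visible at the frame level: the SSID a client displays comes from beacon and probe-response frames, which carry it as plain bytes that any transmitter can set. Below is a minimal scapy sketch in Python of that fact alone, not the researchers’ full multi-channel attack; the interface name, MAC address, and SSID are illustrative assumptions.

```python
# Minimal sketch: the SSID in an 802.11 beacon is unauthenticated data.
# Hypothetical values throughout; requires a wireless card in monitor mode.
from scapy.all import RadioTap, Dot11, Dot11Beacon, Dot11Elt, sendp

SSID = "TrustedNet"              # the name the victim trusts
AP_MAC = "02:00:00:00:01:00"     # hypothetical transmitter MAC

beacon = (
    RadioTap()
    / Dot11(type=0, subtype=8,          # management frame, beacon subtype
            addr1="ff:ff:ff:ff:ff:ff",  # broadcast destination
            addr2=AP_MAC,               # transmitter address
            addr3=AP_MAC)               # BSSID
    / Dot11Beacon(cap="ESS+privacy")    # advertise a protected infrastructure network
    / Dot11Elt(ID="SSID", info=SSID.encode())
)

# "wlan0mon" is an assumed monitor-mode interface; loop=1 rebroadcasts indefinitely.
sendp(beacon, iface="wlan0mon", inter=0.1, loop=1, verbose=False)
```

On its own this is ordinary beacon spoofing; the researchers’ point is that even after a mutually authenticated handshake, nothing guarantees the SSID the client saw in these frames matches the network it actually keyed with.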

A common attack scenario is when networks use different SSIDs, but the same credentials, for each frequency band, e.g., for the 2.4 and 5 GHz bands. Often the 5 GHz band is preferred by clients and better secured [10]. However, our attack can downgrade clients to the less secure 2.4 GHz SSID. Furthermore, we demonstrate how our attack may cause a victim to automatically turn off its VPN and possibly allow the interception of the victim’s traffic. The vulnerability was assigned CVE-2023-52424.

[We] propose three possible mitigations against our attack: a modified version of beacon protection, avoiding credential reuse, and authenticating the network SSID.
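As context for the third mitigation: WPA2-Personal already binds the network name into key derivation by using the SSID as the PBKDF2 salt, so the same passphrase on differently named networks yields different pairwise master keys; handshakes that omit such a binding are what leave the SSID unauthenticated. A minimal sketch of that standard derivation in Python (the passphrase and network names are illustrative):

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # IEEE 802.11 PSK derivation: PBKDF2-HMAC-SHA1, 4096 iterations,
    # 256-bit key, with the SSID serving as the salt.
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

shared = "correct horse battery staple"      # illustrative passphrase
print(wpa2_pmk(shared, "TrustedNet").hex())
print(wpa2_pmk(shared, "WrongNet").hex())    # different key despite the same passphrase
```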

A Leak of Biometric Police Data Is a Sign of Things to Come

Wired’s Matt Burgess reveals that cybercriminals have started to advertise the sale of biometric police data from India on the heels of security researcher Jeremiah Fowler discovering a massive leak of Indian police applicants’ biometric data from a server operated by the police vendor ThoughtGreen Technologies.

Prateek Waghre, executive director of Indian digital rights organization Internet Freedom Foundation, says there is “vast” biometric data collection happening across India, but there are added security risks for people involved in law enforcement. “A lot of times, the verification that government employees or officers use also relies on biometric systems,” Waghre says. “If you have that potentially compromised, you are in a position for someone to be able to misuse and then gain access to information that they shouldn’t.”

It appears that some biometric information about law enforcement officials may already be shared online. Fowler says after the exposed database was closed down he also discovered a Telegram channel, containing a few hundred members, which was claiming to sell Indian police data, including of specific individuals. “The structure, the screenshots, and a couple of the folder names matched what I saw,” says Fowler, who for ethical reasons did not purchase the data being sold by the criminals so could not fully verify it was exactly the same data.

“We take data security very seriously, have taken immediate steps to secure the exposed data,” a member of ThoughtGreen Technologies wrote in an email to WIRED. “Due to the sensitivity of data, we cannot comment on specifics in an email. However, we can assure you that we are investigating this matter thoroughly to ensure such an incident does not occur again.”

In follow-up messages, the staff member said the company had “raised a complaint” with law enforcement in India about the incident, but did not specify which organization they had contacted. When shown a screenshot of the Telegram post claiming to sell Indian police biometric data, the ThoughtGreen Technologies staff member said it is “not our data.” Telegram did not respond to a request for comment.

Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve

In Wired, Nilesh Christopher and Varsha Bansal explain how, as nearly a billion Indians head to the polls in the country’s general election, Indian political parties are deploying AI-generated audio fakes, propaganda images, and AI parodies for personalized voter recruitment.

The scale and frequency of AI calls have increased considerably during this general election campaign, a seven-phase marathon that began in April and ends on June 1. Most of them are one-way blasts. In the southern Indian state of Tamil Nadu, a company called IndiaSpeaks Research Lab contacted voters with calls from dead politician J. Jayalalithaa, endorsing a candidate, and deployed 250,000 personalized AI calls in the voice of a former chief minister. (They had permission from Jayalalithaa’s party, but not from her family.) In another state, hundreds of thousands of calls have been made in the voice of Narendra Modi, cloned from speeches available online, endorsing a local candidate. Up north, political consultant Sumit Savara has been bombarding candidates and other political consultants with sales pitches for his AI services. “People in the hinterland of India are not on social media,” he says. “For them, these calls will make a difference.”

Vijay Vasanth, a Congress Party politician from Kanyakumari on the southernmost tip of India, was thrust into politics in 2021 after his father, former member of parliament H. Vasanthakumar, died of Covid. In an attempt to harness his father’s goodwill, Vasanth’s team resurrected him using AI and shared a two-minute-long video on WhatsApp and Instagram asking the people of Kanyakumari to vote for his son. “He will be the rightful heir to the love, affection, and faith that you had placed in me,” the dead politician says in the deepfake. Vasanth’s team also created AI video calls with his voice and likeness, to connect with voters in the remote parts of the coastal town—places where Vasanth was unable to make in-person visits—and make it look like he was speaking to them live.

The Congress Party has emerged as the most prolific sharer of AI clones, with numerous official accounts sharing illegal voice clone endorsements by Bollywood celebrities—drawing warnings from India’s election watchdog and at least three police complaints. Audio deepfakes are particularly pernicious, says Sam Gregory, executive director at nonprofit Witness. “When they manipulate an existing video, it is possible to track down the original and see how it has been manipulated,” he says. “The challenge with fake audio alone is that often there is no reference. There is no original to find.”

But sanctioned deepfakes muddy the waters too, even when people are happy to receive them. “Delightfully deceiving voters doesn't excuse the deception,” says Gregory. “Just because someone pulled the wool over your eyes while smiling at you doesn't change the fact that they deceived you.”

Fake news verification tools fail the test during elections in India

Rest of World’s Ananya Bhattacharya and contributor Fahad Shah walk through how WhatsApp “tip lines” have arisen as misinformation detection tools ahead of India’s general election and how tests of eleven of these tip lines showed that most failed to detect election misinformation.

The WhatsApp tip lines often require human intervention as the chatbots frequently struggle with verifying the obvious. An AI-generated image of Modi as a “saffron superhero” — donning a saffron cape in front of a saffron flag with the word “Om” on it — went unchecked by all the tip lines. None could verify the discernibly fake image.

The success rate of detection systems can vary widely, depending on the sophistication of the deepfake and the detection technology itself, Divyendra Singh Jadoun, who heads AI content-generation company Polymath Solution, told Rest of World. “Early detection systems had higher success rates against less sophisticated deepfakes, but as deepfakes improve, detection becomes a moving target requiring continual updates and training of detection models,” he said. Jadoun is creating AI content for at least half a dozen political campaigns this year.

Only two of the tip lines, Newschecker and The Quint’s WebQoof, caught the manipulation in a blatantly AI-generated video, where Modi can be seen dancing on stage at a concert — a video shared by the prime minister himself.

Newsmeter was easily able to identify an AI-generated video of opposition leader Rahul Gandhi and politician Akhilesh Yadav, calling it “a meme,” despite most chatbots not catching it. But the tip line said “No” when asked if it could verify Modi’s concert video.

There were also inconsistencies in responses from the same tip line.

Newschecker, for example, yielded different responses to Modi’s concert video when asked the same question from three separate devices. On one device, the tip line called it a “meme or edited video” that Modi had shared. On another, it said the video was made using the Viggle AI tool, that an X user named Rohan Pal (@rohanpal363) had shared it, and included a link to a related article.

The discrepancies between a chatbot’s responses for the same query are likely a bug, according to Bice. “Getting the right content based on someone’s unstructured, maybe semi-grammatical query, or matching an image to an available fact-check involves a lot of steps,” Bice said. These include sorting and representing the query in a preexisting set, comparing it against available fact-checked content in the system, and then returning that to the user.
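To make that pipeline concrete, here is a toy sketch in Python of the matching step, assuming a simple similarity-based retrieval; the stored claims, threshold, and scoring method are all hypothetical, and production tip lines use far richer retrieval. Even so, it shows how a small rephrasing can push a query below the match cutoff, one plausible source of the inconsistent replies described above.

```python
from difflib import SequenceMatcher

# Hypothetical corpus: normalized claim text -> published fact-check.
FACT_CHECKS = {
    "video of the pm dancing on stage at a concert is ai-generated": "fact-check A",
    "image of the pm as a saffron superhero is ai-generated": "fact-check B",
}

def best_match(query: str, threshold: float = 0.6):
    """Return the closest stored claim and its fact-check, or None below the cutoff."""
    query = query.lower().strip()
    score, claim = max(
        (SequenceMatcher(None, query, claim).ratio(), claim) for claim in FACT_CHECKS
    )
    return (claim, FACT_CHECKS[claim]) if score >= threshold else None

# A close paraphrase matches...
print(best_match("video of the pm dancing on stage at a concert, is it real?"))
# ...while a looser rephrasing of the same question may fall below the cutoff.
print(best_match("pm concert clip: real or fake?"))
```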
