Best Infosec-Related Long Reads for the Week of 5/11/24
The Russian consultant who pumps out disinformation, The expert who disproved the MyPillow guy is still waiting for his money, China tries to ramp up disinformation with AI news anchors, Crypto scam tweets struggle to find victims, Replacing public keys with something better, NYC to test error-prone gun detection system
Metacurity is pleased to offer our free and premium subscribers this weekly digest of the best long-form (and longish) infosec-related pieces we couldn’t properly fit into our daily news crush. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com.
Investigation: Who is Ilya Gambashidze, the man the US government accuses of running a Kremlin disinformation campaign?
Voice of America’s Matthew Kupfer presents the results of his investigation into Ilya Gambashidze, the US-sanctioned, Kremlin-aligned Russian political consultant accused of running a large-scale disinformation campaign aimed at undermining trust in democratic processes and institutions in the United States and beyond.
Since 2022, social media companies, researchers, Western governments and journalists have increasingly linked Gambashidze’s two companies to a project to spread disinformation in Ukraine, Europe, Latin America and the United States.
In November 2022, Meta announced it had removed more than 2,300 accounts and pages on Facebook and Instagram for “coordinated inauthentic behavior.”
The accounts shared content from websites posing as real media outlets and government pages. These articles “criticized Ukraine, praised Russia and argued that Western sanctions on Russia would backfire,” Meta wrote.
The targets of the disinformation were Europe and Ukraine. European officials later named the campaign “Doppelganger.”
Meta tied the campaign to two Russian firms: marketing company Social Design Agency, or SDA, and IT firm Structura, which together spent approximately $105,000 on advertising on Facebook and Instagram.
According to the Treasury Department, SDA was founded by Gambashidze (it is formally registered to a woman likely associated with his political work in Tambov).
Gambashidze was also co-founder of Structura, but today it belongs to Nikolai Tupikin, who is even less well-known. There is only one “official” photo of him from a Russian Telegram channel dedicated to startups. Little other information is available.
MyPillow fight
In Business Insider, Brent Crane profiles lifelong Republican and software forensics expert Bob Zeidman, who successfully disproved the claims of Mike Lindell, the arch-MAGA owner of MyPillow, after Lindell offered $5 million at a “Cyber Symposium” to anyone who proved him wrong about a purchased set of data supposedly showing China interfered in the 2020 election. Zeidman has yet to receive his $5 million.
I was interested in what impact, if any, Zeidman's experience with Lindell had made on his politics. Lindell, for all his cartoonish bombast, has clearly been following a path of election denial forged by Trump. It is Trump, after all, who has made his case for voter fraud the centerpiece of his reelection campaign. The fabrications have been wildly successful at firing up his base: Recent polling suggests that nearly 70% of Trump's supporters believe that the 2020 election was "stolen" by the Democrats.
Zeidman, though sympathetic, is not fully on board. "What Lindell is doing is hurting America and hurting Republicans, because it's dividing us and it's wrong," he told me. "I've had a few other Republicans say to me, 'OK, but you don't have to broadcast this.' And I say, 'No, it's the truth. The truth has to be out there. We allegedly stand for the truth, OK? Not the relative truth, but the absolute truth.'"
Zeidman requires constant stimulation. During meetings, he makes sure to sit by a door in case boredom forces him to flee. Whenever he's idle, he becomes "terribly depressed." To keep busy, he invents gizmos and software. He pens satirical political novels and computer-science textbooks and conservative op-eds on his Substack or for publications like the Cleveland Jewish News. He adores movies, though plot holes ruffle him. His favorite film is "Memento," because "it all connects together." He speaks regularly at engineering conferences and attends political ones.
Since young adulthood, Zeidman has "leaned conservative." But it was not until the presidency of George W. Bush that he started going to political events and closely following party politics. He describes himself as a "rare species": a Republican Jew. American support for Israel is one of his chief concerns. He remains a great fan of Bush.
He has always found Trump distasteful: "He's a really nasty guy." Yet he understands Trump's appeal as a totem of conservative grievance, much of which he shares: left-wing media bias, high taxes, political correctness. He pinpoints the genesis of Trump's rise to Mitt Romney's loss to Barack Obama in 2012.
"The Democratic Party attacked Romney so strongly on moral issues when this man was an outstanding moral leader," he said. "I think that pissed off so many people that they said, 'OK, whoever we put in office is going to be trashed, so let's find a guy who trashes back.' And that was Trump."
How China is using AI news anchors to deliver its propaganda
The Guardian’s Dan Milmo and Amy Hawkins analyze a pro-China disinformation network built on AI-generated news presenters that is ramping up ahead of the US presidential election, though experts say its real-world impact so far has been limited.
Beijing has already experimented with AI-generated news anchors. In 2018, the state news agency Xinhua unveiled Qiu Hao, a digital news presenter, who promised to bring viewers the news “24 hours a day, 365 days a year”. Although the Chinese public is generally enthusiastic about the use of digital avatars in the media, Qiu Hao failed to catch on more widely.
China is at the forefront of the disinformation element of the trend. Last year, pro-China bot accounts on Facebook and X distributed AI-generated deepfake videos of news anchors representing a fictitious broadcaster called Wolf News. In one clip, the US government was accused of failing to deal with gun violence, while another highlighted China’s role at an international summit.
In a report released in April, Microsoft said Chinese state-backed cyber groups had targeted the Taiwanese election with AI-generated disinformation content, including the use of fake news anchors or TV-style presenters. In one clip cited by Microsoft, the AI-generated anchor made unsubstantiated claims about the private life of the ultimately successful pro-sovereignty candidate – Lai Ching-te – alleging he had fathered children outside marriage.
Microsoft said the news anchors were created by the CapCut video editing tool, developed by the Chinese company ByteDance, which owns TikTok.
Clint Watts, the general manager of Microsoft’s threat analysis centre, points to China’s official use of synthetic news anchors in its domestic media market, which has also allowed the country to hone the format. It has now become a tool for disinformation, although there has been little discernible impact so far.
“The Chinese are much more focused on trying to put AI into their systems – propaganda, disinformation – they moved there very quickly. They’re trying everything. It’s not particularly effective,” said Watts.
Give and Take: An End-To-End Investigation of Giveaway Scam Conversion Rates
Researchers at UC San Diego, Chainalysis, and Google studied cryptocurrency giveaway scams to understand how scammers reach potential victims, how much they earn, and where bottlenecks might allow durable interventions.
Scams—fraudulent schemes designed to swindle money from victims—have existed for as long as recorded history. However, the Internet’s combination of low communication cost, global reach, and functional anonymity has allowed scam volumes to reach new heights. Designing effective interventions requires first understanding the context: how scammers reach potential victims, the earnings they make, and any potential bottlenecks for durable interventions.
In this short paper, we focus on these questions in the context of cryptocurrency giveaway scams, where victims are tricked into irreversibly transferring funds to scammers under the pretense of even greater returns.
Combining data from Twitter, YouTube and Twitch livestreams, landing pages, and cryptocurrency blockchains, we measure how giveaway scams operate at scale. We find that 1 in 1000 scam tweets, and 4 in 100,000 livestream views, net a victim, and that scammers managed to extract nearly $4.62 million from just hundreds of victims during our measurement window.
Beyond public key encryption
In his blog, A Few Thoughts on Cryptographic Engineering, cryptographer Matthew Green shares notes from a course he teaches at Johns Hopkins University, examining a handful of technologies developed over the past twenty years that go beyond public key cryptography.
In the mid-1980s, a cryptographer named Adi Shamir proposed a radical new idea. The idea, put simply, was to get rid of public keys.
To understand where Shamir was coming from, it helps to understand a bit about public key encryption. You see, prior to the invention of public key crypto, all cryptography involved secret keys. Dealing with such keys was a huge drag. Before you could communicate securely, you needed to exchange a secret with your partner. This process was fraught with difficulty and didn’t scale well.
Public key encryption (beginning with Diffie-Hellman and Shamir’s RSA cryptosystem) hugely revolutionized cryptography by dramatically simplifying this key distribution process. Rather than sharing secret keys, users could now transmit their public key to other parties. This public key allowed the recipient to encrypt to you (or verify your signature) but it could not be used to perform the corresponding decryption (or signature generation) operations. That part would be done with a secret key you kept to yourself.
While the use of public keys improved many aspects of using cryptography, it also gave rise to a set of new challenges. In practice, it turns out that having public keys is only half the battle — people still need to distribute them securely.
For example, imagine that I want to send you a PGP-encrypted email. Before I can do this, I need to obtain a copy of your public key. How do I get this? Obviously we could meet in person and exchange that key on physical media — but nobody wants to do this. It would be much more desirable to obtain your public key electronically. In practice this means either (1) we have to exchange public keys by email, or (2) I have to obtain your key from a third piece of infrastructure, such as a website or key server. And now we come to the problem: if that email or key server is untrustworthy (or simply allows anyone to upload a key in your name), I might end up downloading a malicious party’s key by accident. When I send a message to “you”, I’d actually be encrypting it to [someone else].
Solving this problem — of exchanging public keys and verifying their provenance — has motivated a huge amount of practical cryptographic engineering, including the entire web PKI. In most cases, these systems work well. But Shamir wasn’t satisfied. What if, he asked, we could do it better? More specifically, he asked: could we replace those pesky public keys with something better?
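Shamir's alternative became known as identity-based cryptography: a trusted key-generation center holds a master secret and derives each user's private key from their identity itself (say, an email address), so there is no public key to look up or verify. The toy sketch below illustrates only the key-derivation idea, not real identity-based encryption, which requires pairing-based math so that senders can encrypt using nothing but the identity and public system parameters; here HMAC stands in for that derivation, and the master secret and identities are invented for illustration.

```python
# Toy sketch of identity-based key derivation (NOT real IBE): a single
# key-generation center derives each user's secret key deterministically
# from their identity string, so the identity doubles as the public
# identifier and no key directory is needed.
import hmac
import hashlib

# HYPOTHETICAL master secret, known only to the key-generation center.
MASTER_SECRET = b"kgc-master-secret-for-illustration-only"

def derive_user_key(identity: str) -> bytes:
    """Derive the per-identity secret key the KGC would hand to a user
    after verifying they really own this identity."""
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

# The same identity always yields the same key; distinct identities
# yield independent keys.
alice_key = derive_user_key("alice@example.com")
assert alice_key == derive_user_key("alice@example.com")
assert alice_key != derive_user_key("bob@example.com")
```

The trade-off Green's course explores follows directly from this structure: the key-generation center can derive anyone's key, so it must be trusted completely — a property called key escrow that real IBE systems inherit as well.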
Internal Emails Reveal How a Controversial Gun-Detection AI System Found Its Way to NYC
In Wired, investigative journalist Georgia Gee walks through emails she obtained revealing how a company called Evolv persuaded New York City Mayor Eric Adams to test its problematic and error-prone gun detection system in the city's subway.
The Mayor’s Office has been keen to stress that it is not set on Evolv being a permanent fixture. “To be clear, we have NOT said we are putting Evolv technology in the subway stations,” Kayla Mamelak, deputy press secretary of the Mayor’s Office, tells WIRED in an email. “We said that we are opening a 90-day period to explore using technology, such as Evolv, in our subway stations.”
Civil rights and technology experts have argued that utilizing Evolv’s scanners in subway stations is likely to be futile. “This is Mickey Mouse public safety,” says Albert Fox Cahn, founder of the Surveillance Technology Oversight Project, a privacy advocacy organization. “This is not a serious solution for the largest transit system in the country.”
Moreover, deploying the company’s technology might not just be ineffective — it’s also likely to add more police officers to the daily rhythms of New Yorkers’ lives, advancing Adams’ pro-cop agenda. The NYC subway has 472 stations. “That is roughly 1,000 subway station entrances,” explains Sarah Kaufman, director of New York University’s Rudin Center for Transportation. “That means that Evolv would have to be at every single entrance in order to be effective, and that of course would require monitoring.”
According to the draft policy posted by the NYPD, the process surrounding weapons-detection technology in the subway is extremely vague, and still relies heavily on police officers. “The checkpoint supervisor will determine the frequency of passengers subject to inspection (for example, every fifth passenger or every tenth passenger),” the document reads. It will also be based on “available police personnel on hand to perform inspections.”