Best Infosec-Related Long Reads for the Week of 6/1/24

FBI-controlled encrypted phones took down drug kingpins, Open source investigator Bellingcat evolves as falsified information surges, Drone police pose a dilemma for poor residents, Twitter deplatforming reduced misinformation and disinformation, Fact-based inoculation needed to reduce misinformation, AI chop shop produced mounds of error-ridden 'news' articles


Image created by ByteDance on replicate.

Metacurity is pleased to offer our free and premium subscribers this weekly digest of the best long-form (and longish) infosec-related pieces we couldn’t properly fit into our daily news crush. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com.

Inside the Biggest FBI Sting Operation in History

In Wired, 404 Media’s Joseph Cox offers an excerpt from his new book Dark Wire, in which he recounts how drug kingpin Maximilian Rivkin, who went by the nickname Microsoft, and drug cartel operator Hakan Ayik were taken down in a sting operation built around their favored encrypted phone, Anom, which the FBI secretly operated.

The FBI, it turned out, had controlled Anøm for nearly the entire existence of the company. It all started back in 2018, when the agency shut down another encrypted phone company called Phantom Secure. Just as happened with EncroChat, the crackdown created a vacuum in the market, with stranded Phantom Secure users now grasping for a new phone of choice. And somewhat like Microsoft, a canny player in the encrypted phone industry smelled an opportunity in the exodus.

As it happened, one of Phantom Secure’s distributors had been on the verge of starting his own, rival encrypted phone service at the time of the crackdown. That distributor was Afgoo, the tech-savvy entrepreneur behind Anøm. To get in front of the FBI, Afgoo took the extraordinary step of approaching the agency with an offer: In exchange for the possibility of a reduced sentence, would the agency perhaps like the keys to his new encrypted phone startup? That would allow them to soak up refugees from Phantom Secure—and monitor all their communications.

The FBI and its partners at the Australian Federal Police, who had been involved in the Phantom Secure takedown, leaped at the chance. First they built a backdoor into Anøm’s encryption mechanism. Then the Australians ran a beta test: Afgoo handed out a few phones to criminal contacts in Australia. A hundred percent of the beta test customers used the phones to conduct crimes, and the police caught every word of it. The operation took off from there: The unwitting beta testers told their contacts about Anøm, and soon Afgoo was fielding demand for the phones overseas.

This word-of-mouth marketing was important, because the FBI needed to avoid any claims of entrapment; it helped that this organic sales model was just how many real encrypted phone companies worked. As Anøm grew, especially in Europe, the FBI started learning about drug shipments and assassination attempts in other countries, so it began passing tips to foreign law enforcement agencies through legal attachés. But as users flooded in—especially with the 2020 fall of EncroChat—they got overwhelmed; eventually, it made more sense to simply give some foreign partners direct access to the Anøm messages. Some were given credentials to Anøm’s surveillance backend, called Hola iBot. That’s where [Head of Operations for the Swedish police Ted] Esplund came in: Now he was being roped into the world’s biggest sting operation as well.

How to Lead an Army of Digital Sleuths in the Age of AI

Wired’s Samanth Subramanian talks with open-source investigation pioneer Eliot Higgins, founder of the NGO Bellingcat, whose staff of forty gathers evidence and hard facts using an evolving set of online forensic techniques to track conflicts in Ukraine and Gaza, a challenge made all the more difficult this year as AI-generated and otherwise falsified material surfaces around elections in the US, the UK, India, and dozens of other countries.

Is this AI-generated stuff at a stage of sophistication where even your team has to struggle to distinguish it?

Well, we explore the network of information around an image. Through the verification process, we’re looking at lots of points of data. The first thing is geo-location; you’ve got to prove where something was taken. You’re also looking at things like the shadows, for example, to tell the time of day; if you know the position of the camera, you’ve basically got a sundial. You also have metadata within the picture itself. Then images are shared online. Someone posts it on their social media page, so you look at who that person is following. They may know people in the same location who’ve seen the same incident.

You can do all that with AI-generated imagery. Like the Pentagon AI image that caused a slight dip in the stock market. [In May 2023, a picture surfaced online showing a huge plume of smoke on the US Department of Defense’s lawn.] You’d expect to see multiple sources very quickly about an incident like that. People wouldn’t miss it. But there was only one source. The picture was clearly fake.

My concern is that someone will eventually figure that out, that you’ll get a coordinated social media campaign where you have bot networks and fake news websites that have been around for a long time, kind of building a narrative. If someone were clever enough to say, “OK, let’s create a whole range of fake content” and then deliver it through these sites at the same time that claims an incident has happened somewhere, they’d create enough of a gap of confusion for an impact on the stock market, for panic to happen, for real news organizations to accidentally pick it up and make the situation much worse.
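The “metadata within the picture itself” that Higgins mentions is typically embedded EXIF data, such as the capture time and GPS coordinates written by the camera. Below is a minimal sketch, not from the article, of how that check can be done in Python with the Pillow library; the file name is hypothetical, and many platforms strip this metadata on upload, which is why the surrounding network of sources matters as much as the file.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_exif(path: str) -> dict:
    """Return human-readable EXIF tags for an image, including GPS data if present."""
    exif = Image.open(path).getexif()
    data = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # GPS coordinates live in a nested IFD keyed by tag 0x8825 (GPSInfo).
    gps_ifd = exif.get_ifd(0x8825)
    if gps_ifd:
        data["GPSInfo"] = {GPSTAGS.get(t, t): v for t, v in gps_ifd.items()}
    return data

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    for tag, value in read_exif("suspect_image.jpg").items():
        print(f"{tag}: {value}")
```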

The Age of the Drone Police Is Here

Wired’s Dhruv Mehrotra analyzed more than 22.3 million coordinates from the Drone as First Responder (DFR) program established by the Chula Vista Police Department, a high-tech model for police departments nationwide. The CVPD says the program reduces unnecessary police contacts, but it leaves residents, particularly in poor neighborhoods, feeling constantly under surveillance, with each drone flight passing above 13 census blocks and potentially exposing approximately 4,700 of the residents below to a drone’s camera.

While [a] survey found that residents are largely in favor of the DFR program, a majority are concerned that devices might record people not suspected of a crime or that the video might be shared with federal immigration authorities. In 2020, the San Diego Union-Tribune found that the Chula Vista Police Department had been sharing data from its network of license plate readers with US Immigration and Customs Enforcement as part of a partnership with Vigilant Solutions, a leading provider of the technology for law enforcement agencies around the country. In the uproar that followed, city officials said they’d removed immigration authorities’ access to the data, at least temporarily.

Constitutional law experts worry that without oversight, these public safety deployments will inevitably lead to excessive and potentially inappropriate drone usage. An ACLU report published in July 2023 cautioned that while police departments were using serious situations—fires, accidents, gun violence—as the basis for drone deployment, many were also using drones to investigate more mundane incidents. In Chula Vista, that included a “water leak” and someone “bouncing a ball against a garage.”

According to WIRED’s analysis, Chula Vista police have sent their drones to investigate hundreds of 911 calls for seemingly minor incidents like suspicious activity, loud music, public intoxication, vandalism, and shoplifting. Last July, for instance, a Chula Vista resident called 911 to complain about a party; police deployed a drone to investigate. En route to the alleged party, the drone flew over 11 blocks where approximately 2,500 people live, ultimately arriving at a house on a quiet suburban street in east Chula Vista.

Before returning to the station, the drone hovered above Roxanna Galvan's house for three minutes. Galvan, who works at the San Diego County Office of Education, recalls that her neighbors were hosting a party that night, but she was unaware of the drone’s presence until contacted by WIRED. “I don’t mind that CVPD is using drones to check out what’s going on,” she says, but it concerns her that they can send high-tech tools over her home without her knowledge.

In an effort to ease concerns about drones, the department uploads data about every flight to its transparency portal. Through the portal, residents can look up details about why a drone was in the sky at a particular time. While organizations like the ACLU have praised the department for its transparency, WIRED found that approximately one in 10 flights on the portal didn't list a reason for why they were flown. These unexplained flights weren’t assigned an incident number from the department—meaning they couldn’t be connected to a 911 call—and nearly 400 of them didn't come within half a mile of where any call in the preceding half hour originated.

Jay Stanley, author of the 2023 ACLU report, tells WIRED that he is concerned by the hundreds of unexplained drone flights listed by the department. “Considering how novel and sensitive this technology is, they—and other departments—should be scrupulous in their attention to detail when logging these activities,” he says. Nevertheless, Stanley believes that the department should still be “commended” for the amount of transparency that it does have.
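WIRED does not publish its analysis code, but the check it describes, whether any 911 call in the preceding half hour originated within half a mile of a drone’s flight path, can be sketched roughly as follows. The record shapes, field names, and the simple haversine distance are assumptions for illustration, not WIRED’s actual methodology.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

HALF_MILE_KM = 0.804672  # half a mile, expressed in kilometers

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

@dataclass
class Flight:            # hypothetical record shape
    start: datetime
    path: list           # list of (lat, lon) points along the flight

@dataclass
class Call:              # hypothetical record shape
    time: datetime
    lat: float
    lon: float

def is_unexplained(flight: Flight, calls: list, window=timedelta(minutes=30)) -> bool:
    """True if no 911 call from the prior half hour originated within half a mile of the flight path."""
    recent = [c for c in calls if timedelta(0) <= flight.start - c.time <= window]
    return not any(
        haversine_km(lat, lon, c.lat, c.lon) <= HALF_MILE_KM
        for c in recent
        for lat, lon in flight.path
    )
```

A flight that also lacks an incident number and returns True under a check like this would fall into the roughly 400 unexplained flights WIRED counted.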

Post-January 6th deplatforming reduced the reach of misinformation on Twitter

In Nature (subscription required, but summarized below by UC Riverside News), researchers from the George Washington University, the University of California, Riverside, Duke University, and Northeastern University used a panel of more than 500,000 active Twitter users and natural experimental designs to evaluate the effects of Twitter’s ban of 70,000 misinformation traffickers in response to the violence at the US Capitol on 6 January 2021. They found that the ban reduced the number of misinformation posts and pushed many of the remaining misinformation traffickers off the platform.

The researchers analyzed a panel of about 550,000 Twitter users in the United States who were active during the 2020 election cycle. This information was acquired by David Lazer, the corresponding author of the study, who is a professor of political science and computer and information science at Northeastern University in Boston. (Twitter was renamed X after it was acquired by billionaire Elon Musk in late 2022.)

A research team from Lazer’s laboratory collected Twitter posts through Twitter’s application programming interface, or API, which is a set of programmatic tools that allowed researchers to interact with Twitter's platform and gather tweets and other information about users of the platform. The users in the panel were verified as real people by cross-referencing with voter registration data.

The analysis found that those who had followed one or more of the 70,000 de-platformed accounts had been more frequent tweeters of URLs (Internet addresses) known to disseminate misinformation when compared with others in the panel of users. But after the de-platforming, which occurred between Jan. 6 and 12, 2021, these followers on average tweeted fewer misinformation URLs than the average for the whole panel.

The research also identified about 600 “super sharers” of misinformation in the panel who were in the top 0.1 percent of misinformation sharers in the months leading up to the Jan. 6 insurrection. The analysis found their ranks dropped by more than half after the de-platforming. Similarly, some 650 Q-Anon sharers in the panel dropped to about 200 two weeks after the de-platforming.

When monitoring their platforms, social media companies face a tradeoff between private economic interests and the public interest, said Diogo Ferrari, co-author of the paper and a UCR assistant professor of political science. Fake news posts increase engagement, which helps a platform’s bottom line. But curbing it “is good for democracy and democratic governance,” he said.
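The before-and-after comparison described above amounts to contrasting average misinformation-URL sharing per user before January 6 and after January 12, for followers of banned accounts versus the whole panel. Here is a rough pandas sketch of that contrast; the column names and the misinformation flag are placeholders for illustration, not the study’s actual data schema or estimation strategy.

```python
import pandas as pd

BAN_START, BAN_END = pd.Timestamp("2021-01-06"), pd.Timestamp("2021-01-12")

def misinfo_per_user(tweets: pd.DataFrame, followers_only: bool = False) -> float:
    """Mean number of misinformation-URL tweets per user.

    Expects one row per tweet with columns:
      user_id, date (datetime), misinfo_url (bool: links to a flagged domain),
      follows_deplatformed (bool: user followed at least one banned account).
    """
    if followers_only:
        tweets = tweets[tweets["follows_deplatformed"]]
    return tweets.groupby("user_id")["misinfo_url"].sum().mean()

def pre_post_comparison(tweets: pd.DataFrame) -> dict:
    """Compare average misinformation sharing before and after the Jan. 6-12 bans."""
    pre = tweets[tweets["date"] < BAN_START]
    post = tweets[tweets["date"] > BAN_END]
    return {
        "followers_pre": misinfo_per_user(pre, followers_only=True),
        "followers_post": misinfo_per_user(post, followers_only=True),
        "panel_pre": misinfo_per_user(pre),
        "panel_post": misinfo_per_user(post),
    }
```

The published study relies on more careful natural experimental designs than a raw comparison of means, but the quantities being compared are essentially these.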

Misinformation poses a bigger threat to democracy than you might think

In Nature, researchers from the University of Western Australia, King’s College London, the University of Cambridge, the Australian National University, the University of Melbourne, Harvard University, and the University of Potsdam write that the mechanisms that protect the public against misinformation have been undermined by the rise of populist movements over the past few years, and that fact-based and logic-based inoculation efforts are needed to counter this trend and protect democracy.

To be proactive — for example, if the misinformation is anticipated but not yet disseminated — psychological inoculation is a strong option that governments and public authorities can consider using. Inoculation involves a forewarning and a pre-emptive correction — or ‘prebunking’ — and it can be fact-based or logic-based.

To illustrate the former, the US administration led by President Joe Biden pre-empted Russian President Vladimir Putin’s justification for invading Ukraine in February 2022. In a public communication, citizens in several countries were forewarned. The administration explained how Putin would seek to make misleading claims about Ukrainian aggression in the Donbas region to rationalize his decision to invade, which might have served to limit the international community’s willingness to believe Putin’s claims when they were subsequently made.

Logic-based inoculation, by contrast, is useful even when false claims are not specifically known, because it aims to educate citizens more generally about misleading argumentation. The intervention focuses on whether arguments contain logical flaws (such as false dilemmas or incoherence) or misleading techniques (such as fearmongering or conspiratorial reasoning) rather than attempting to provide verificatory evidence for or against a specific claim.

As well as proving successful in the laboratory, large-scale field experiments (on YouTube, for example) have shown that brief inoculation games and videos can improve people’s ability to identify information that is likely to be of poor quality. Although some critics think that such interventions aim to “limit public discourse ... without consent” and do so “paternalistically” and “stealthily”, this is a misrepresentation of the interventions, which seek merely to educate and empower people to make informed judgements free from manipulation.

Further countermeasures that are compatible with democratic norms include accuracy prompts, which aim to focus users’ attention on the veracity of the information to reduce the sharing of misleading material online, and the implementation of friction elements, which briefly delay a person when they are interacting with information online to avoid them sharing the content without reading it first.

It Looked Like a Reliable News Site. It Was an A.I. Chop Shop

In the New York Times, Kashmir Hill and Tiffany Hsu profile BNN Breaking, a now-defunct purported news organization that had a publishing deal with Microsoft and whose international team of journalists used AI to churn out error-ridden and often politically slanted articles, sometimes multiple per minute.

“You should be utterly ashamed of yourself,” one person wrote in an email to Kasturi Chakraborty, a journalist based in India whose byline was on BNN’s story with Mr. Fanning’s photo.

Ms. Chakraborty worked for BNN Breaking for six months, with dozens of other journalists, mainly freelancers with limited experience, based in countries like Pakistan, Egypt and Nigeria, where the salary of around $1,000 per month was attractive. They worked remotely, communicating via WhatsApp and on weekly Google Hangouts.

Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”

But this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.

Mr. Bakir, who now works at a broadcast network called Rudaw, said that he had been skeptical of this approach but that BNN’s founder, a serial entrepreneur named Gurbaksh Chahal, had described it as “a revolution in the journalism industry.”

Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey. A business trend chaser, he created a cryptocurrency (briefly promoted by Paris Hilton) and manufactured Covid tests during the pandemic.

But he also had a criminal past. In 2013, he attacked his girlfriend at the time, and was accused of hitting and kicking her more than 100 times, generating significant media attention because it was recorded by a video camera he had installed in the bedroom of his San Francisco penthouse. The 30-minute recording was deemed inadmissible by a judge, however, because the police had seized it without a warrant. Mr. Chahal pleaded guilty to battery, was sentenced to community service and lost his role as chief executive at RadiumOne, an online marketing company.

After an arrest involving another domestic violence incident with a different partner in 2016, he served six months in jail.
