Best Infosec-Related Long Reads for the Weeks of 11/20/23 and 11/27/23

Ukraine battles Russia in electromagnetic warfare, Apple's iPhone hacking lab in Paris, No federal law on deepfake porn, X fails on Hamas war misinformation, Canadian gov't extracts data from phones


Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!

The Invisible War in Ukraine Being Fought Over Radio Waves

The New York Times’s Paul Mozur and Aaron Krolik reveal the battle raging in the invisible realm of electromagnetic waves over Ukraine, where Kyiv is stepping up its efforts to counter Russian superiority in electronic warfare.

To combat Russia’s decades of Soviet know-how in electronic attack and defense, Ukraine has turned to a start-up approach associated with Silicon Valley. The idea is to help the country’s tech workers quickly turn out electronic warfare products, test them and then send them to the battlefield.

This summer, Ukraine’s government hosted a hackathon for firms to work on ways to jam Iranian Shahed drones, which are long-range unmanned aerial vehicles that have been used to hit cities deep inside the country, said Mykhailo Fedorov, Ukraine’s digital minister.

At testing ranges outside Kyiv, drone makers pit their craft against electronic attack weapons. In a field in central Ukraine in August, Yurii Momot, 53, a former Soviet Union special forces commander and a founder of the electronic warfare firm Piranha, showed a new anti-drone gun built for the conflict.

The guns have had a checkered record in the war, but Mr. Momot’s version worked. Pointing it at a DJI Mavic, a common cheap reconnaissance drone, he pulled the trigger. The drone hovered motionless. Its navigation system had been swamped by a burst of radio signals from the gun.

“The whole system is more structured in Russia,” Mr. Momot said of Russia’s electronic warfare program, which he knows from his time with the Soviet army. “We’re catching up, but it will take a while.”

Other Ukrainian companies, such as Kvertus and Himera, are building tiny jammers or $100 walkie-talkies that can withstand Russian jamming.

Why Apple is working hard to break into its own iPhones

The Independent’s Andrew Griffin got an inside look at Apple’s Paris lab, where engineers work to break into iPhones in order to develop new security and privacy features such as Lockdown Mode and Advanced Data Protection for iCloud.

It is complicated and expensive work. But they are up against highly compensated hackers: in recent years, an advanced set of companies has grown up offering cyber weapons to the highest bidder, primarily for use against people working to better the world: human rights activists, journalists, diplomats. No piece of software better exemplifies the vast resources that are spent in this shadow industry than Pegasus, a highly targeted piece of spyware that is used to hack phones and surveil their users, though it has a host of competitors.

Pegasus has been around since at least 2016, and since then Apple has been involved in a long and complicated game of trying to shut down the holes it might exploit before attackers find and market another one. Just as with other technology companies, Apple works to secure devices against more traditional attacks, such as stolen passwords and fake websites. But Pegasus is an entirely different kind of threat, targeted at specific people and so expensive that it would only be used in high-grade attacks. Fighting it means matching its complexity.

It’s from that kind of threat that Lockdown Mode was born, though Apple does not explicitly name Pegasus in its materials. It works by switching off parts of the system, and users are explicitly warned when turning it on that they should do so only with good reason, since it severely restricts the way the phone works; FaceTime calls from strangers will be blocked, for instance, and so will most message attachments.

But Lockdown Mode is not alone. Recent years have seen Apple increase the rewards in its bug bounty programme, through which it pays security researchers for finding bugs in its software, after it faced sustained criticism over its relatively small payouts. And work on hardware technologies such as encryption – and testing it in facilities such as those in Paris – means that Apple is attempting to build a phone that is safe from attack in both hardware and software.

No Laws Protect People From Deepfake Porn. These Victims Fought Back

Bloomberg’s Olivia Carville and Margi Murphy tell the devastating tale of a horrific website that produced deepfake pornographic images of a group of young women from Levittown, a New York suburb, and even charged victims to remove the photos, conduct that no US federal law makes illegal.

No federal law criminalizes the creation or sharing of fake pornographic images in the US. When it comes to fake nudes of children, the law is narrow and pertains only to cases where children are being abused. And Section 230 of the Communications Decency Act protects web forums, social media platforms and internet providers from being held liable for content posted on their sites.

This legal landscape was problem enough for police and prosecutors when it took time and a modicum of skill to create realistic-looking fake pornography. But with billions of dollars of venture capital flowing into image-generating software powered by artificial intelligence, it’s gotten cheaper and easier to create convincing photos and videos of things that never happened. Tools such as Midjourney and Stability AI’s Stable Diffusion have been used to produce images of Pope Francis in a puffer jacket, actress Emma Watson as a mermaid and former President Donald Trump sprinting from a cadre of FBI agents.

The term “deepfake” was coined on a Reddit forum dedicated to fake porn made with deep-learning models. It’s now in the Oxford English Dictionary, defined as an image digitally manipulated to depict an individual doing something they didn’t. More than 15 billion such images have been created since April 2022, according to Everypixel Group, an AI photo company. The vendors that designed these tools have installed safety filters to ban the creation of explicit images, but because much of the software is open source, anyone can use it, build off it and deactivate the safeguards. Online security experts say more than 90% of deepfakes are pornographic in nature. Mark Pohlmann, founder and chief executive officer of Aeteos, a content moderation company, says he’s seen doctored images of girls as young as 3 dressed in leather, their hands tied together, their throats slit.

Like many technological advances, these AI tools edged their way into popular culture before lawmakers and law enforcement authorities understood their power. One man who did is Björn Ommer, a professor at Ludwig Maximilian University in Munich and co-creator of Stable Diffusion. Ommer says he told academic colleagues last year, before Stability AI released the software to the public, that he was “deeply concerned” it had the potential for great harm and wanted researchers to stress-test it first. But it was rushed out anyway, he says, to appease investors. (A spokesperson for Stability AI didn’t respond to questions about Ommer’s allegations but said the company is “committed to preventing the misuse of AI” and has taken steps to prohibit the use of its models for unlawful purposes.)

In October, the Biden administration issued an executive order seeking to prevent AI from producing child sexual abuse material or nonconsensual intimate imagery of real individuals, but it’s unclear how and when such restrictions would go into effect. More than a dozen states have passed laws targeting deepfakes, but not all of them carry criminal charges; some cover only election-related content. Most states have revenge porn laws, and a few, including New York, have amended them to include deepfakes. But some prosecutors say those laws apply only to intimate photos shared consensually. As for images pulled from social media and doctored to become sexual content, no law exists.

How Musk’s X Is Failing To Stem the Surge of Misinformation About Israel and Gaza

Aided by sophisticated interactive graphics, Bloomberg’s Davey Alba, Denise Lu, Leon Yin, and Eric Fan took a deep dive into how Elon Musk’s X, formerly Twitter, and specifically its Community Notes feature, which lets volunteers flag false posts, failed to stem the tide of fake information following the outbreak of war between Israel and Hamas.

In order to understand how X has moderated its platform as the Israel-Hamas war has carried on, Bloomberg analyzed hundreds of viral posts on X from Oct. 7, when news of Hamas’ attack on Israel first emerged, to Oct. 21, drawing from a publicly available database of Community Notes posted online. Bloomberg collected notes about the Israel-Gaza conflict that were rated “helpful” by users and an algorithm and thus visible to the public. Reporters filtered the data for keywords related to the conflict, including terms like “Gaza,” “Israel,” “Hamas” and “Palestine.” Bloomberg manually checked posts to confirm they were related to the conflict, then checked whether they contained false information — for example, claims that footage from a video game was actually of the war between Israel and Hamas, which is designated a terrorist organization by the US and the EU. Posts containing opinion or sharing fast-moving breaking news — even if they were later understood to be inaccurate — were not labeled as misinformation in the database.

X has said that it recently added more contributors to Community Notes and that it tries to notify people who have engaged with a post that later receives a note. The company also said that notes were appearing faster on the platform than they used to. But Bloomberg’s analysis found that there was still usually a significant delay before a Community Notes label became visible on the platform.

Across nearly 400 posts with misinformation that Bloomberg checked, a typical note took more than seven hours to show up, while some took as long as 70 hours — a crucial period of time during which a particular lie had the chance to travel far and wide on the platform. As Bloomberg has previously reported, notes only become visible to the public if users from a “diversity of perspectives” are able to agree that a note is “helpful.” A note may also be discarded even after it’s deemed helpful, if its support later goes off-balance and skews toward one side of the opinion spectrum.
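
For readers who want to poke at this themselves, X publishes the Community Notes data as downloadable TSV files, and the kind of filtering Bloomberg describes can be approximated in a few lines of pandas. The sketch below is a rough illustration under assumptions, not Bloomberg’s actual pipeline: the file names, the column names (summary, currentStatus, createdAtMillis, timestampMillisOfCurrentStatus) and the CURRENTLY_RATED_HELPFUL status value reflect my understanding of the public export and may not match the current schema exactly.

```python
# Minimal sketch: load the public Community Notes export, keep notes currently
# rated helpful, match conflict-related keywords, restrict to Oct. 7-21, 2023,
# and estimate how long each note took to reach its helpful status.
# File names, column names and status values are assumptions about X's public
# TSV export and may differ from the real schema.
import pandas as pd

notes = pd.read_csv("notes-00000.tsv", sep="\t")               # note text + creation time
status = pd.read_csv("noteStatusHistory-00000.tsv", sep="\t")  # per-note status history

df = notes.merge(
    status[["noteId", "currentStatus", "timestampMillisOfCurrentStatus"]],
    on="noteId", how="inner",
)

# Keep only notes the scoring algorithm currently rates helpful (publicly visible).
df = df[df["currentStatus"] == "CURRENTLY_RATED_HELPFUL"]

# Keyword filter mirroring the terms the reporters describe.
keywords = ["gaza", "israel", "hamas", "palestine"]
df = df[df["summary"].str.lower().str.contains("|".join(keywords), na=False)]

# Restrict to notes created between Oct. 7 and Oct. 21, 2023 (UTC).
created = pd.to_datetime(df["createdAtMillis"], unit="ms", utc=True)
start, end = pd.Timestamp("2023-10-07", tz="UTC"), pd.Timestamp("2023-10-22", tz="UTC")
df = df[(created >= start) & (created < end)]

# Rough proxy for the delay: hours from note creation to its current (helpful) status.
delay_hours = (df["timestampMillisOfCurrentStatus"] - df["createdAtMillis"]) / 3_600_000
print(f"{len(df)} conflict-related helpful notes; median delay {delay_hours.median():.1f} h")
```

Note that this only measures the lag from a note’s creation to its helpful status; the delay Bloomberg reports runs from the moment the misleading post itself appeared, which would require joining the notes against the posts’ own timestamps.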

Tools capable of extracting personal data from phones being used by 13 federal departments, documents show

Radio-Canada’s Brigitte Bureau uncovered how tools made by Cellebrite, Magnet Forensics, and Grayshift that are capable of extracting personal data from phones or computers are being used by 13 federal departments and agencies without the privacy impact assessments required by a federal government directive.

A directive from the Treasury Board of Canada Secretariat (TBS) requires that all federal institutions carry out what it calls a privacy impact assessment (PIA) prior to any new activity that involves the collection or handling of personal information, with the goal of identifying privacy risks and ways of mitigating or eliminating them.

According to the directive, which took effect in 2002 and was revised in 2010, federal departments must then provide a copy of their PIA to the TBS and the Office of the Privacy Commissioner.

Radio-Canada asked each of the federal institutions using the software if they had first conducted privacy impact assessments. According to their written responses, none did. The Department of Fisheries and Oceans said it intends to do so.

The fact that these assessments were never done "shows that it's just become normalized, that it's not a big deal to get into somebody's cell phone," said Evan Light, an associate professor at York University. "There's been a normalization of this really extreme capability of surveillance."

Some departments said a PIA wasn't necessary because they had already obtained judicial authorizations such as search warrants, which impose strict conditions on the seizure of electronic devices.

Others said they only use the material on government-owned devices — for example, in cases involving employees suspected of harassment.

Bonus Read: The Inside Story of Microsoft’s Partnership with OpenAI

Although not technically “infosec-related,” this in-depth piece from Charles Duhigg in The New Yorker is well worth reading for its inside look at the tumultuous and ultimately reversed firing of OpenAI CEO Sam Altman and its rich, detailed recounting of Microsoft’s efforts to roll out AI products.

The first time Microsoft tried to bring A.I. to the masses, it was an embarrassing failure. In 1996, the company released Clippy, an “assistant” for its Office products. Clippy appeared onscreen as a paper clip with large, cartoonish eyes, and popped up, seemingly at random, to ask users if they needed help writing a letter, opening a PowerPoint, or completing other tasks that—unless they’d never seen a computer before—they probably knew how to do already. Clippy’s design, the eminent software designer Alan Cooper later said, was based on a “tragic misunderstanding” of research indicating that people might interact better with computers that seemed to have emotions. Users certainly had emotions about Clippy: they hated him. Smithsonian called it “one of the worst software design blunders in the annals of computing.” In 2007, Microsoft killed Clippy.

Nine years later, the company created Tay, an A.I. chatbot designed to mimic the inflections and preoccupations of a teen-age girl. The chatbot was set up to interact with Twitter users, and almost immediately Tay began posting racist, sexist, and homophobic content, including the statement “Hitler was right.” In the first sixteen hours after its release, Tay posted ninety-six thousand times, at which point Microsoft, recognizing a public-relations disaster, shut it down. (A week later, Tay was accidentally reactivated, and it began declaring its love for illegal drugs with tweets like “kush! [I’m smoking kush in front the police].”)
