Best Infosec-Related Long Reads of the Week, 4/8/23

China's all-out spy war, Mites at CMU raise privacy concerns, IPVM is a surveillance tech reporting powerhouse, Advertiser access to medical visits, US TikTok ban would cripple foreign creators, more


Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec pieces and related articles that we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!

China Has Been Waging a Decades-Long, All-Out Spy War

Historian Calder Walton in Foreign Policy reviews how the Chinese government has spent nearly twenty years building intelligence and surveillance capabilities that tower over comparable operations in Western nations, unrestrained by political bodies or the law.

China’s foreign intelligence offensive has reached new levels since Xi took power in 2012. Its purpose involves what all intelligence agencies do: to understand the intentions and capabilities of foreign adversaries. But China’s offensive goes much further: to steal as many scientific and technical secrets from Western powers, principally the United States, as possible to advance China’s position as a superpower—challenging and overtaking the United States on the world stage.

China’s unprecedented economic boom this century has been fueled by an equally unprecedented theft of Western science and technology. Back in 2012, the director of the U.S. National Security Agency warned that cyber-espionage constituted the greatest transfer of wealth in history. China was—and remains—the greatest perpetrator. Beginning around 2013 or 2014, Chinese operatives carried out a massive hack of the U.S. Office of Personnel Management (OPM), which holds some of the most sensitive information in the U.S. federal government: information obtained during security clearances. This information is that which people often hide from their nearest and dearest—extramarital affairs and such. Chinese intelligence thus has millions of datapoints for potential blackmail, what the Russians call kompromat, to recruit agents with access to U.S. secrets. The OPM haul was followed, in 2017, by China’s hack of the credit rating bureau Equifax, which gave China sensitive data on approximately 150 million Americans. If you are an American, it is more likely than not that China has sensitive data about you.

Computer scientists designing the future can’t agree on what privacy means

MIT Tech Review’s Eileen Guo and Tate Ryan-Mosley delve into how Mites, a cutting-edge project by Carnegie Mellon University’s Institute for Software Research that seeks to build a safer and more secure IoT system using ubiquitous devices featuring microphones, infrared sensors, thermometers, and six other sensors that connect to the internet and share data wirelessly, has kicked up a privacy and ethics challenge.

In the video recording of the town hall obtained by MIT Technology Review, attendees asked how researchers planned to notify building occupants and visitors about data collection. Jessica Colnago, then a PhD student, was concerned about how the Mites’ mere presence would affect studies she was conducting on privacy. “As a privacy researcher, I would feel morally obligated to tell my participant about the technology in the room,” she said in the meeting. While “we are all colleagues here” and “trust each other,” she added, “outside participants might not.”

Attendees also wanted to know whether the sensors could track how often they came into their offices and at what time. “I'm in office [X],” Widder said. “The Mite knows that it's recording something from office [X], and therefore identifies me as an occupant of the office.” Agarwal responded that none of the analysis on the raw data would attempt to match that data with specific people.

At one point, Agarwal also mentioned that he had gotten buy-in on the idea of using Mites sensors to monitor cleaning staff—which some people in the audience interpreted as facilitating algorithmic surveillance or, at the very least, clearly demonstrating the unequal power dynamics at play.

A Tiny Blog Took on Big Surveillance in China—and Won

Amos Zeeberg in Wired tells the story of how John Honovich, founder of video surveillance technology publication IPVM, unearthed damning details on the surveillance gear coming out of China, focusing on the country’s giant vendors such as Hikvision, Dahua, and Huawei, catching the attention of policymakers and the intelligence community.

In December 2020, an IPVM employee made a blockbuster discovery. The reporter, who keeps his identity secret because of the harassment some IPVMers get for their controversial work, discovered that Huawei and a Chinese AI unicorn called Megvii had tested a literal “Uyghur alarm”: The system used AI to analyze people’s faces, and if it determined that a passerby was Uyghur, it could send an alert to authorities. At the time, Huawei wasn’t publicly known to be participating in China’s racial surveillance system. IPVM partnered with two Washington Post tech reporters to get the information out.

The Post published an article on the same day as IPVM and credited the security outfit with the discovery. Dozens of publications picked up the story. For the first time, an IPVM report was national news. Reacting to the Post report, US senator Ben Sasse from Nebraska said, “While Huawei sells contracts with fancy talk about connecting people around the world, they’re working to send Uyghurs to torture camps in China.” Senator Marco Rubio from Florida tweeted, “The sick people at @Huawei developing software to recognize the faces of #Uighur Muslims & alert the communist government of #China.” Antoine Griezmann, a French soccer star who had appeared in prominent ad campaigns for Huawei, canceled his sponsorship deal. Huawei released a statement saying it wasn’t involved in ethnicity detection, yet the Post reporters promptly found other documents on a Huawei website showing it had worked on race-detecting systems with at least four other partners besides Megvii.

I declined to share my medical data with advertisers at my doctor’s office. One company claimed otherwise

Ethnographic sociologist Alex Rosenblat draws on her own pregnancy to explain in STAT how she repeatedly checked the box on her office-visit sign-in forms stipulating that her permission was required before the medical practices could share her data with any outside party, and how, despite these declarations, her data was shared with advertisers, who targeted her with their pitches.

I methodically clicked “I decline” to the terms at each routine visit and kept a photo record, but that wasn’t enough to safeguard my consent. Staying in control of my data privacy is a burden that requires proactive attention. Pregnancy is exhausting, and I already had a very active toddler to run after, plus a full-time job. A patient seeking a long-awaited appointment with a specialist isn’t going to cancel, even if they are uncomfortable, because getting care is the priority. And yet, privacy harms add up. The Markup investigated hospitals that send your data to Facebook, Google and others when you visit their websites. The Federal Trade Commission recently fined GoodRx $1.5 million for doing the same and banned the company from sharing consumers’ sensitive health information for advertising when patients use its service to obtain discounts on prescription drugs.

In September, after revisiting a June 2022 article about Phreesia’s privacy practices, I wrote to its privacy inbox to confirm that it had no consent from me on record. To my surprise, the representative, a compliance analyst, simply offered to revoke my authorization. I was horrified and suddenly wracked with self-doubt. Had I accidentally clicked “I accept” when I was in pain or distracted, responding to a work email or coordinating a school pick-up? Revoke it, I indicated, but please show me the proof that I had accepted in the first place.

Global TikTok creators depend on U.S. viewers. A TikTok ban would be devastating

Rest of World’s Andrew Deck examines how a US ban on TikTok for security reasons threatens the livelihoods of TikTok creators outside the US, because advertisers highly prize US audiences and, for some creators, income from those viewers can represent up to 80% of their revenue.

The experience of TikTok creators like Natasha could be described as a kind of platform dependence, according to David Nieborg, an associate professor of media studies at the University of Toronto. While TikTok has provided unprecedented economic opportunity for creators globally, it has also made many reliant on the app for their livelihoods, and unsure of their economic future without it.

“Platform workers are always inherently in a deeply precarious position,” Nieborg told Rest of World. He describes the practice of migrating audiences to other platforms — whether Vine to Instagram, or TikTok to YouTube — as a common way for creators to mitigate risk.

Platforms owned by U.S. companies have the potential to reap huge benefits from a TikTok ban, in terms of both viewers and creators. Following India’s ban on TikTok and similar short-form video apps owned by Chinese companies, Instagram, YouTube, and a host of Indian-owned social media apps filled the vacuum left behind. Some U.S. TikTok creators have already claimed they are growing their audiences on Instagram Reels or YouTube Shorts — the TikTok clones launched by Meta and Google — to prepare for the ban.

U.K. National Cyber Force, Responsible Cyber Power, and Cyber Persistence Theory

Writing in Lawfare, Richard J. Harknett, Michael P. Fischerkeller, and Emily O. Goldman delve into the United Kingdom’s National Cyber Force (NCF) and its recently released document, “The National Cyber Force: Responsible Cyber Power in Practice,” highlighting how the UK approach closely aligns with US insights about cyberspace embodied in the defend forward strategy and the operational approach of persistent engagement, which they say is a testament to the explanatory power of cyber persistence theory (CPT).

The explanatory framework of CPT redefines security as seizing and sustaining the initiative in exploitation; that is, anticipating the exploitation of a state’s own digital vulnerabilities before they are leveraged against them, exploiting others’ vulnerabilities to advance their own security needs, and sustaining the initiative in this exploitation dynamic. States may choose not to abide by this logic or not operationalize it well. The consequence, however, will be cyber insecurity and a loss of relative power for those not persisting. Alternatively, states may choose to abide by the logic but do so in irresponsible ways that threaten peace and security—such as by using cyber-enabled ways and means to illicitly acquire intellectual property, circumvent international sanctions, and undermine confidence in democratic institutions. The U.K. has provided a helpful framework for distinguishing such irresponsible cyber behavior.
