Best Infosec-Related Long Reads for the Week of 12/2/23

The hidden story of the state-sponsored hack at HHS, College students subjected to massive tracking, China's expansion into destructive capabilities, Meta's long slog to E2EE, Silicon Valley's AI arms race

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at [email protected]. We’ll gladly credit you with a hat tip. Happy reading!

The Untold Story of a Massive Hack at HHS in Covid’s Early Days

Bloomberg’s Jordan Robertson and Riley Griffin tell the story of how a seemingly unremarkable March 15, 2020, DDoS cyberattack on the computer network of the US Department of Health and Human Services (HHS) was a smokescreen for a state-sponsored probe of computer networks associated with the US’s pandemic response.

[Former Chief Information Officer Jose Arrieta, former Chief Information Security Officer Janet Vogel, and two other officials involved in the response] believe the scope, complexity and timing of the attacks point to China. “I am confident and believe that this attack was a nation-state effort that was perpetrated by the CCP,” says Arrieta, referring to the Chinese Communist Party.

HHS-OIG’s official investigation didn’t reach a conclusion about China’s involvement. But the US government was investigating other cyberattacks it suspected were related to the pandemic. In the spring of 2020 the Cybersecurity and Infrastructure Security Agency warned of cyberattacks exploiting the pandemic for espionage purposes; that May, CISA and the FBI said they were investigating a significant number of attempts by China to steal data related to Covid-19 research. Two months later the US Department of Justice indicted two Chinese Ministry of State Security hackers for attacking a wide range of organizations, including companies developing coronavirus vaccine testing technology and treatments.

In the first months of the pandemic, hackers tied to Russia, Iran, Vietnam and North Korea also sought information pertaining to the coronavirus, according to cybersecurity experts. “It was the most exigent crisis for every government on Earth, and they needed answers and that’s what these hackers are for,” says John Hultquist, chief analyst for Alphabet Inc.-owned Mandiant Intelligence. “There’s not an intelligence agency on Earth that didn’t get in on this.”

He Wanted Privacy. His College Gave Him None

The Markup’s Tara García Mathewson examines how a typical US college, Mt. San Antonio College, subjects its students to a vast array of tracking technologies, including homework trackers, test-taking software, license plate readers, and facial recognition.

Few institutions collect as much data about the people inside of them as colleges and universities do. Residential campuses, in particular, mean students not only interact with their schools for academics, but for housing, home internet, dining, health care, fitness, and socialization. Still, whether living on campus or off, taking classes in person or remotely, students simply cannot opt out of most data collection and still pursue a degree.

For many students, that’s not a problem. They generally trust their institutions and see the online elements of higher education as convenient. Putting up with data collection seems like a necessary cost.

But even though [student Eric] Natividad’s preoccupation with data privacy makes him a bit of an anomaly among his classmates, he’s part of a growing group of college students arguing it shouldn’t be this way. Long written off as not caring about privacy because of their extensive sharing on social media, college students have become more organized and insistent about what they see as a right. At the University of Michigan, a student’s quest for greater transparency led to ViziBLUE, a website launched in 2020 that lets students see what personal information is collected and how it is used. When the COVID pandemic forced a huge portion of higher education to move online, students across the country protested the use of online exam-proctoring software that gathered information about their faces and homes. On many campuses, protesters have pressured university administrators to commit to banning the use of facial recognition technology before ever trying it.

China’s Hackers Are Expanding Their Strategic Objectives

In Lawfare, national security writer Alyza Sebenius delves into how China, infamous for massive economic espionage and intelligence-gathering operations, is expanding the scope of its hacking to target critical infrastructure and maintain disruptive capabilities.

This year, the United States has issued a series of warnings about China-based hacking incidents. In May, cyber agencies from each of the Five Eyes countries—the U.S., U.K., Australia, Canada, and New Zealand—issued a joint warning about a group of Chinese state-sponsored hackers, known as Volt Typhoon, that targeted “networks across U.S. critical infrastructure sectors.” Cyber agencies from the five countries warned that Volt Typhoon “could apply the same techniques against these and other sectors worldwide” and issued two dozen pages of technical detail about the group’s tactics. According to Microsoft, which initially detected the activity, Volt Typhoon had engaged in a multi-year campaign, beginning in mid-2021, aimed at hacking into critical infrastructure, including “communications, manufacturing, utility, transportation, construction, maritime, government, information technology, and education sectors.” This cyber operation—which is likely the activity to which Wales referred in his recent remarks—was aimed at the United States and Guam, a territory where the U.S. has strategic military bases.

In late September, the United States and Japan released an advisory about new activity by a China-linked hacking group known as BlackTech. According to the alert, the group targeted “government, industrial, technology, media, electronics, and telecommunication sectors, including entities that support the militaries of the U.S. and Japan.” The advisory—which was issued by the U.S. National Security Agency, Federal Bureau of Investigation, and CISA as well as the Japan National Police Agency and the Japan National Center of Incident Readiness and Strategy for Cybersecurity—did not identify the purpose of the attacks. It did, however, assess that the BlackTech hackers used sophisticated cyber tools to target routers to move “from international subsidiaries to headquarters in Japan and the U.S.—the primary targets.” BlackTech has operated since 2010, targeting public- and private-sector networks in the United States and East Asia and consistently modifying its cyber tools so that they are not flagged by security software, according to the alert. The United States and Japan urged network providers to take steps to “protect devices from the backdoors the BlackTech actors are leaving behind.”

The recent warnings of China’s hacking activity, particularly those issued in conjunction with Five Eyes countries and Japan, serve as a reminder that U.S. cybersecurity depends in part on the security of its partners’ key networks. An August Washington Post report alleged that China’s hackers penetrated classified Japanese military networks in 2020, creating alarm in Washington about the sensitive information that China could access on the networks of a U.S. intelligence partner. Notably, the United States’ resolve to work with allies to address hacking incidents is part of a broader trend in its approach to cybersecurity, which transcends the China threat. For example, in 2022 and 2023, CISA released eight alerts in conjunction with allies—frequently other members of the Five Eyes—to warn of new hacking activity by Russia, Iran, and China.

Why It Took Meta 7 Years to Turn on End-to-End Encryption for All Chats

Wired’s Lily Hay Newman goes behind the scenes of Meta’s seven-year effort to fulfill founder Mark Zuckerberg’s promise to provide end-to-end encryption (E2EE) across all its communications apps, culminating in last week’s announcement that Messenger now uses E2EE by default.

The challenge of building end-to-end encrypted services has to do with the fact that such systems inherently blind the servers that enable them to the activity they are facilitating. In other words, these systems have to somehow stand in the hallway on the first day of school and tell each student which classes to go to and how to get there without knowing who any of the kids are or what their course schedule is.

This especially poses challenges for syncing people's messages across multiple devices or repopulating their messages on a new device. Some encrypted chat apps like Signal address this issue by storing all your messages locally on your device and then providing a tool that helps you transfer that data trove from one device to the next over Bluetooth when you, say, get a new phone and want to switch everything over. This approach doesn't preserve your history if you lose the device where the data is saved, and many users around the world don't have resources or access to devices with enough storage space to preserve messages locally. People may want to auto-delete messages anyway, but for users who want to save their history, such schemes can be impractical.

With these considerations in mind, Meta engineers developed an encrypted storage protocol, dubbed Labyrinth, that allows the company to store users' chat histories and other communication data on its servers for ease of use, but in a form that is always encrypted and inaccessible to the company.

“There are two things users will experience” in the transition, Gail Kent, Messenger's global policy director, tells WIRED. “They will be asked to create secure storage and create a pin that will then enable them to add the data and the messages onto other devices and restore if they lose their device. Yet nobody apart from them has access to that message content. It seems like a tiny thing, but the innovation behind it is pretty extraordinary. And then the second thing that they’ll see is a line in their conversations that says this message is now end-to-end encrypted. And when they start a new conversation, it will say that at the start of the conversation.”
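The “secure storage plus PIN” model Kent describes can be illustrated with a minimal, hypothetical sketch: a key derived from a user-chosen PIN encrypts the chat history on the client before upload, so the server holds only ciphertext it cannot read, and the same PIN restores the history on a new device. This is not Meta’s Labyrinth protocol, only a generic illustration of the pattern under simplifying assumptions; a production design would add server-side guess limiting and hardened key vaulting, and all function names below are invented for the example.

```python
# Illustrative sketch only: a generic PIN-protected, client-side-encrypted
# backup. This is NOT Meta's Labyrinth protocol; names are hypothetical.
import os
import json
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(pin: str, salt: bytes) -> bytes:
    # Stretch the short PIN into a 256-bit key; a real system would also
    # rate-limit guesses (e.g., with hardware-backed key vaults).
    kdf = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1)
    return kdf.derive(pin.encode())

def encrypt_history(pin: str, messages: list[dict]) -> dict:
    # Encrypt the chat history on the client before upload, so the server
    # stores only ciphertext it cannot read.
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_key(pin, salt)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(messages).encode(), None)
    return {"salt": salt.hex(), "nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def decrypt_history(pin: str, blob: dict) -> list[dict]:
    # On a new device, the same PIN recovers the key and restores the history.
    key = derive_key(pin, bytes.fromhex(blob["salt"]))
    plaintext = AESGCM(key).decrypt(
        bytes.fromhex(blob["nonce"]), bytes.fromhex(blob["ciphertext"]), None
    )
    return json.loads(plaintext)

backup = encrypt_history("482913", [{"from": "alice", "text": "hi"}])
print(decrypt_history("482913", backup))
```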

Inside the A.I. Arms Race That Changed Silicon Valley Forever

In the New York Times, Karen Weise, Cade Metz, Nico Grant, and Mike Isaac take a deep dive into how OpenAI’s ChatGPT, released about one year ago, sparked a frenzy of activity across Silicon Valley and a headlong rush to release artificial intelligence platforms, raising concerns among AI pioneers about the overlooked dangers of unpredictable outcomes.

What played out at Google was repeated at other tech giants after OpenAI released ChatGPT in late 2022. They all had technology in various stages of development that relied on neural networks — A.I. systems that recognized sounds, generated images and chatted like a human. That technology had been pioneered by Geoffrey Hinton, an academic who had worked briefly with Microsoft and was now at Google. But the tech companies had been slowed by fears of rogue chatbots, and economic and legal mayhem.

Once ChatGPT was unleashed, none of that mattered as much, according to interviews with more than 80 executives and researchers, as well as corporate documents and audio recordings. The instinct to be first or biggest or richest — or all three — took over. The leaders of Silicon Valley’s biggest companies set a new course and pulled their employees along with them.

Over 12 months, Silicon Valley was transformed. Turning artificial intelligence into actual products that individuals and companies could use became the priority. Worries about safety and whether machines would turn on their creators were not ignored, but they were shunted aside — at least for the moment.

At Meta, Mark Zuckerberg, who had once proclaimed the metaverse to be the future, reorganized parts of the company formerly known as Facebook around A.I.

Elon Musk, the billionaire who co-founded OpenAI but had left the lab in a huff, vowed to create his own A.I. company. He called it X.AI and added it to his already full plate.

Satya Nadella, Microsoft’s chief executive, had invested in OpenAI three years before and was letting the start-up’s cowboys tap into its computing power. He sped up his plans to incorporate A.I. into Microsoft’s products — and give Google a poke in its searching eye.

“Speed is even more important than ever,” Sam Schillace, a top executive, wrote Microsoft employees. It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”