Best Infosec-Related Long Reads for the Week, 11/18/23

The story of the Mirai creators, Laws needed to restrain data brokers, Fears of cyberwar are making things worse, Farewell to coding in the AI era, The deadly disinformation front in the Hamas war


Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!

The Mirai Confessions: Three Young Hackers Who Built a Web-Killing Monster Finally Tell Their Story

Wired’s Andy Greenberg tells the extensive tale of the three young hackers behind the Mirai DDoS botnet, which brought down a vast swath of the internet in September 2016, including the website of noted cybersecurity journalist Brian Krebs—a misstep that ultimately attracted the attention of the FBI but ironically gave the hackers a path to becoming legitimate white hat security researchers.

After a typical sleepless night at his keyboard, 19-year-old Josiah White sat staring at the three flatscreen monitors he’d set up on a workbench in a messy basement storage area connected to the bedroom he shared with his brother in their parents’ house. He was surrounded by computer equipment—old hard drives and a friend’s desktop machine he had offered to fix—and boxes of his family’s toys and Christmas tree ornaments.

For weeks, a cyber weapon that he’d built with two of his young friends, Paras Jha and Dalton Norman, had wreaked havoc across the internet, blasting victims offline in one unprecedented attack after another. As the damage mounted, Josiah had grown accustomed to the thrills, the anxiety, the guilt, the sense that it had all gotten so absurdly out of hand—and the thought that he was now probably being hunted by law enforcement agencies around the world.

He’d reached a state of numbness, compartmentalizing his dread even as he read Bruce Schneier’s doomsday post and understood that it was describing his own work—and now, even as a White House press secretary assured reporters in a streamed press conference that the Department of Homeland Security was investigating the mass outage that had resulted directly from his actions.

But what Josiah remembers feeling above all else was simply awe—awe at the scale and chaotic power of the Frankenstein’s monster that he and his friends had unleashed. Awe at how thoroughly it had now escaped their control. Awe that the internet itself was being shaken to its foundations by this thing that three young hackers had built in a flurry of adolescent emotions, whims, rivalries, rationalizations, and mistakes. A thing called Mirai.

Data Brokers, Military Personnel, and National Security Risks

Lawfare contributors Justin Sherman, Hayley Barton, Aden Klein, Brady Allen Kruse, and Anushka Srinivasan offer some policy solutions stemming from the data brokerage research project they conducted at Duke University’s Sanford School of Public Policy, which documented how they purchased from data brokers sensitive data about active-duty members of the military, veterans, and their families.

Data brokers gather and sell data on U.S. military personnel, including sensitive, individually identified, and nonpublic information about finances, health conditions, political beliefs, children, and religion. Simultaneously, the sale of data focused on military personnel sits within the broader, multibillion-dollar data brokerage ecosystem that gathers and sells data on virtually every single American. Our ability to purchase sensitive, individually identified, nonpublic information about military personnel with almost no vetting, including from a .asia domain, for as low as $0.12 per record, underscores the substantial risk that a foreign or malign actor could acquire this data in order to inflict harm on the U.S. military and U.S. national security. Those harmful actions could include profiling, scamming, blackmail, coercion, outing, reputation-damaging, stalking and tailing, microtargeting, and conducting other analyses on members of the national security community. To respond to these problems, regulatory agencies and the Defense Department can take some measures, but it ultimately comes down to congressional changes to U.S. privacy law.

A strong, comprehensive U.S. privacy law, with robust controls on the data brokerage ecosystem, would be the most effective step to prevent harms from data brokerage for all Americans. For example, the American Data Privacy and Protection Act (ADPPA), introduced in 2022 in the 117th Congress (not yet reintroduced in the current Congress), includes provisions to generally prohibit companies from transferring individuals’ personal data without their affirmative express consent—and to establish a centralized registry through which consumers can opt out of the sale of some of their data by some third-party data brokers. It also includes requirements for companies to implement security practices to protect and secure personal data against unauthorized access. Such provisions could introduce new controls around the collection and use of personal data about Americans, encompassing members of the U.S. military and their families.

Is the Fear of Cyberwar Worse Than Cyberwar Itself?

In Lawfare, Tom Johansmeyer, Ph.D. candidate at the University of Kent, Canterbury, argues that the global insurance market has a cyberwar problem: insurers seek to avoid covering cyberwar incidents altogether out of concern that a cyber conflict could devastate their balance sheets, making the fear of cyberwar a bigger problem than the threat of cyberwar itself.

Insurers have become quite adept at handling day-to-day cyber losses, such as isolated ransomware attacks and breaches, I learned through 10 interviews I conducted with cyber insurance executives to support my ongoing doctoral research. Known as attritional losses, these are the sorts of claims insurers encounter and handle routinely, similar to slip-and-fall claims in liability classes of business and fender benders in auto.

Systemic risk, by contrast, is more concerning. Also known as “cyber catastrophe” risk, it involves cyberattacks affecting a large number of companies at the same time, resulting in a significant and reasonably simultaneous aggregation of losses. Cyber catastrophe is analogous to hurricanes, earthquakes, and other natural disasters—in which many insureds (and insurers) are hit at the same time.

The reinsurance industry helps insurers address systemic risks outside of cyber, with more than $600 billion in capital allocated for reinsurance globally. This support has been slow to gain ground in the cyber insurance sector, though. Rather than purchase cyber reinsurance designed to hedge against the risk of systemic events, as insurance companies do for property catastrophes, insurers have been more inclined to use proportional structures, through which they effectively give a share of their portfolios to reinsurers. This means that they cede both attritional and systemic risk to reinsurers.

Among the largest and most concerning systemic scenarios for both insurers and reinsurers is cyberwar. There is a persistent fear that cyberwar is virtually uninsurable and needs to be excluded. Leading reinsurer Munich Re, which is also a leader in the cyber reinsurance market, says that cyberwar “risk transfer is not possible” because “its consequences are so large and wide-reaching that private industry simply is not able to bear such a ruinous risk.”

A Coder Considers the Waning Days of the Craft

Writer and programmer James Somers writes a eulogy for what he argues might be an increasingly lost art of writing code in the era of artificial intelligence, when AI models such as GPT-4 can cleanly and quickly solve the programming problems he once took pride in working through on his own.

It wasn’t long before I caved. I was making a little search tool at work and wanted to highlight the parts of the user’s query that matched the results. But I was splitting up the query by words in a way that made things much more complicated. I found myself short on patience. I started thinking about GPT-4. Perhaps instead of spending an afternoon programming I could spend some time “prompting,” or having a conversation with an A.I.

In a 1978 essay titled “On the Foolishness of ‘Natural Language Programming,’ ” the computer scientist Edsger W. Dijkstra argued that if you were to instruct computers not in a specialized language like C++ or Python but in your native tongue you’d be rejecting the very precision that made computers useful. Formal programming languages, he wrote, are “an amazingly effective tool for ruling out all sorts of nonsense that, when we use our native tongues, are almost impossible to avoid.” Dijkstra’s argument became a truism in programming circles. When the essay made the rounds on Reddit in 2014, a top commenter wrote, “I’m not sure which of the following is scariest. Just how trivially obvious this idea is” or the fact that “many still do not know it.”

When I first used GPT-4, I could see what Dijkstra was talking about. You can’t just say to the A.I., “Solve my problem.” That day may come, but for now it is more like an instrument you must learn to play. You have to specify what you want carefully, as though talking to a beginner. In the search-highlighting problem, I found myself asking GPT-4 to do too much at once, watching it fail, and then starting over. Each time, my prompts became less ambitious. By the end of the conversation, I wasn’t talking about search or highlighting; I had broken the problem into specific, abstract, unambiguous sub-problems that, together, would give me what I wanted.

Having found the A.I.’s level, I felt almost instantly that my working life had been transformed. Everywhere I looked I could see GPT-4-size holes; I understood, finally, why the screens around the office were always filled with chat sessions—and how Ben had become so productive. I opened myself up to trying it more often.

‘Similar to a Virus’: Disinformation Becomes Deadly New Front in Israel-Hamas War

In Haaretz, Etan Nechin highlights how agents in the disinformation battle on conventional and social media surrounding the Israel-Hamas war are using AI and deepfakes to outmaneuver governments and mislead the public.

The ubiquity of disinformation is coupled with actors who work to degrade trust in mainstream media outlets, as both sides blame the media for not capturing their side of the story. This is fueled by conspiracy theorists who claim the media is “hiding” the truth from the public, such as completely baseless claims that Israel knew of the Hamas attack in advance and willingly allowed the terror group to massacre 1,200 of its citizens. Another widely spread falsehood online is that the IDF killed attendees at the Tribes of Nova trance music festival, despite Hamas terrorists actually filming themselves during the slaughter that left some 260 partygoers dead.

[Dr. Liran Antebi, director of the Advanced Technologies and National Security program at Tel Aviv’s Institute for National Security Studies] argues that a significant part of the problem is the asymmetric warfare between state actors, rogue states and individual actors.

“The issue with state policies and regulations lies in their reactive nature,” she says. “In democracies, shutting down the internet is not a feasible option. The combination of bureaucratic red tape and technological illiteracy among officials complicates the regulatory process. This makes it difficult, if not impossible, to keep pace with rapidly advancing technology.”
