
Best Infosec-Related Long Reads for the Week, 9/23/23

Your health data belongs to everyone, Dodgy data practices of genetic testing firms, The former drug smuggling king of data brokers, NCS implementation plan gaps, Zoomers who pioneer AI detection

Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at [email protected]. We’ll gladly credit you with a hat tip. Happy reading!


What Big Tech Knows About Your Body

In The Atlantic, tech reporter Yael Grauer delves into how the health data we generate online, whether by using mental health services such as BetterHelp, filling prescriptions, stepping on smart scales, or using smart thermometers, is available to advertisers and tech companies and is generally poorly protected.

All of this information is valuable to advertisers and to the tech companies that sell ad space and targeting to them. It’s valuable precisely because it’s intimate: More than perhaps anything else, our health guides our behavior. And the more these companies know, the easier they can influence us. Over the past year or so, reporting has found evidence of a Meta tracking tool collecting patient information from hospital websites, and apps from Drugs.com and WebMD sharing search terms such as herpes and depression, plus identifying information about users, with advertisers. (Meta has denied receiving and using data from the tool, and Drugs.com has said that it was not sharing data that qualified as “sensitive personal information.”) In 2021, the FTC settled with the period and ovulation app Flo, which has reported having more than 100 million users, after alleging that it had disclosed information about users’ reproductive health with third-party marketing and analytics services, even though its privacy policies explicitly said that it wouldn’t do so. (Flo, like BetterHelp, said that its agreement with the FTC wasn’t an admission of wrongdoing and that it didn’t share users’ names, addresses, or birthdays.)

Of course, not all of our health information ends up in the hands of those looking to exploit it. But when it does, the stakes are high. If an advertiser or a social-media algorithm infers that people have specific medical conditions or disabilities and subsequently excludes them from receiving information on housing, employment, or other important resources, this limits people’s life opportunities. If our intimate information gets into the wrong hands, we are at increased risk of fraud or identity theft: People might use our data to open lines of credit, or to impersonate us to get medical services and obtain drugs illegally, which can lead not just to a damaged credit rating, but also to canceled insurance policies and denial of care. Our sensitive personal information could even be made public, leading to harassment and discrimination.

The FTC, 1Health.io, and Genetic Data Privacy and Security

Lawfare contributing editor Justin Sherman lays out the June 2023 FTC complaint against genetic testing company 1Health.io, the first time the agency took an enforcement action focused on both genetic privacy and security, and its September 2023 final order on the company, highlighting the gaps in current US laws and regulations regarding genetic data privacy and harm prevention.

When the FTC filed a complaint against 1Health.io in June 2023, its enforcement concerns fell into three buckets: deceptive privacy and security promises, failing to notify consumers of changes to genetic data sharing with third parties, and publicly exposing consumers’ health and genetic information on the internet. It argued that the first bucket of practices was misleading, the second was unfair, and the third related to a deceptive practice.

Padlock icons and privacy-protective language abound on 1Health.io/Vitagene’s website, according to the FTC complaint, with the company making such statements as “we use the latest technology and exceed industry-standard security practices to protect your privacy.” Numerous other statements on the website’s main pages, in “Frequently Asked Questions,” and elsewhere expressed a company commitment to data privacy and security. 1Health.io also stated that (a) DNA samples collected through consumer testing kits and (b) the results of the DNA tests “are stored without your name or any other common identifying information.” The company promised that consumers can delete their information from all of the company’s servers “at any time.” And in similar form, 1Health.io/Vitagene said on multiple webpages that it destroyed consumers’ DNA saliva samples after analysis.

Little of this was true. Although the company claimed to have industry-exceeding data security measures, it in fact “did not exceed industry-standard security practices to protect the privacy of consumers’ sensitive personal information.” Instead, the company stored DNA results alongside consumers’ names and “other common identifying information” (and on a public server with no encryption, discussed further below). Consumers may have been able to reach out to request that their data be deleted, but 1Health.io “did not have an inventory of consumers’ information” and lacked the ability to delete the data of all consumers who requested it. It’s not entirely clear based on the complaint what happened when consumers filed deletion requests, other than the fact that the company could not actually process them. And when it came to saliva samples, the FTC complaint said, 1Health.io was similarly misleading consumers; its contract with the genotyping laboratory did not mandate DNA saliva sample destruction after results were produced.

The Man Who Trapped Us in Databases

In the New York Times Magazine, McKenzie Funk, author of the new book “The Hank Show,” delivers a colorful long read on Hank Asher, “the multimillionaire king of the data brokers” and “father of data fusion,” a turbulent and sometimes violent former drug smuggler who invented the LexID, the unique string of digits that follows us everywhere we go online.

In her follow-up call, [Martha] Barnett, a future president of the American Bar Association and the first female partner at Holland & Knight, one of Florida’s top law firms, asked Asher to explain why he wanted driver’s licenses. He described the machine he was building, some kind of public-records database for insurance companies and cops. “You know, I truly don’t understand what you’re doing,” Barnett recalled responding. “Maybe if I came to your facility, I’d better understand it.”

Asher picked her up at an airport in Palm Beach a day or two later. They sped south to Pompano and parked outside the unimpressive Database Technologies building, and Asher proudly walked Barnett upstairs and into the computer room. With Brubaker, he’d had a breakthrough idea to build supercomputers by stringing together dozens of consumer-grade P.C. processors in “massively parallel” systems that split big jobs into tiny pieces, a processor for every data point, a virtual eyeball on every pixel. “It’s kind of like teaching a thousand chickens to pull a wagon,” Asher once told Vanity Fair.

Barnett thought to herself that it looked like the computers were all sitting on bread racks. This, Asher told her, is where they would store all the data from the driver’s licenses if they could get them, where they would try to layer data set upon data set upon data set, fusing it all into one.

Asher started to explain again why he was collecting public records, but it didn’t click until he handed Barnett a newspaper clipping. It was about a child abduction in South Florida, a little girl who was grabbed in a parking lot. Witnesses saw it happen, but the details were murky. “They knew that she got into a dark car, either a navy blue or black car,” Barnett said. “They got part of the license tag but didn’t get the whole tag.”

Police detectives had come to Asher for help, and as they told him what they knew over the phone, he typed the fragments into his computer. His database, still young, was already a living, multidimensional thing compared with the static rolls of magnetic tape that LeGette had been delivering from Tallahassee. The vehicle registrations had been “keyed”: Each data field — plate number, VIN, make, model, color, year, owner’s last name, owner’s first name, owner’s street address — was now indexed and individually searchable.

Asher’s query, the few numbers of the license plate and the car’s color — black or blue — shot over to the computer racks, crawled through the indexes and came back, and a list of potential suspects appeared on his monitor. He read the names and addresses back to the cops. The perpetrator, it would turn out, was on the list, and within an hour, Barnett recalled, the man was in custody, the child safely returned to her family.

The Biden Administration’s Implementation Plan for the National Cybersecurity Strategy

Lawfare's Fellow in Technology Policy and Law, Eugenia Lostri, and Stephanie Pell, a Fellow in Governance Studies at the Brookings Institution and a Senior Editor at Lawfare, break down the implementation plan for the Biden administration’s National Cybersecurity Strategy, probing the gaps in the plan and highlighting the challenges in meeting some of the strategy’s objectives.

The implementation of the National Cybersecurity Strategy is an iterative, ongoing process. This first installment of the plan illustrates that progression on implementation of the various pillars varies. Some of the pillars—which are the broader objectives by which the administration seeks to achieve the strategy’s two fundamental shifts—started with programs that were underway at the time the cybersecurity strategy was originally published and are now continuing as initiatives in the implementation plan. But other pillars and their corresponding strategic objectives appear to be on a much slower implementation trajectory. As noted previously, the production of a legislative proposal for new software liability standards appears to be a long way off. Indeed, it is reasonable to consider whether fundamental questions have been asked and answered and key concepts defined that are foundational to such a proposal.

But it should not be assumed that just because the implementation of certain pillars appears farther along than others means that significant, demonstrable progress toward achieving the broader objectives of those pillars has occurred. Such assessments should not be viewed or conducted as compliance-like checklists. Evaluations should be made only in the context of reliable mechanisms and processes for measuring effectiveness. The very last part of the implementation plan therefore introduces three initiatives focused on assessing the effectiveness of the NCS. They involve the ONCD issuing a report on the effectiveness of implementing the strategy, identifying and applying lessons learned from “cyber incidents” to the implementation of the strategy, and aligning budgetary guidance with cybersecurity strategy implementation. And like other strategic objectives and initiatives discussed in the plan, these initiatives appear to require additional planning and metrics for assessment.

The AI Detection Arms Race Is On

In Wired, reporter Christopher Beam profiles the college students pioneering systems that detect AI-generated text, such as GPTZero, developed by Princeton student Edward Tian, as well as the counter-systems designed to fool those detectors, such as WorkNinja, created by Stanford freshman Joseph Semrai and his friends.

After almost 20 years of typing words for money, I can say from experience, writing sucks. Ask any professional writer and they’ll tell you, it’s the worst, and it doesn’t get easier with practice. I can attest that the enthusiasm and curiosity required to perpetually scan the world, dig up facts, and wring them for meaning can be hard to sustain. And that’s before you factor in the state of the industry: dwindling rates, shrinking page counts, and shortening attention spans (readers’ and my own). I keep it up because, for better or worse, it’s now who I am. I do it not for pleasure but because it feels meaningful—to me at least.

Some writers romanticize the struggle. [Writer John] McPhee once described lying on a picnic table for two weeks, trying to decide how to start an article. “The piece would ultimately consist of some five thousand sentences, but for those two weeks I couldn’t write even one,” he wrote. Another time, at age 22, he lashed himself to his writing chair with a bathrobe belt. According to Thomas Mann, “A writer is someone for whom writing is more difficult than it is for other people.” “You search, you break your heart, your back, your brain, and then—only then—it is handed to you,” writes Annie Dillard in The Writing Life. She offers this after a long comparison of writing to alligator wrestling.

The implication is that the harder the squeeze, the sweeter the juice—that there’s virtue in staring down the empty page, taming it, forcing it to give way to prose. This is how the greatest breakthroughs happen, we tell ourselves. The agony is worth it, because that’s how ideas are born.

The siren call of AI says, It doesn’t have to be this way. And when you consider the billions of people who sit outside the elite club of writer-sufferers, you start to think: Maybe it shouldn’t be this way.