Best Infosec-Related Long Reads for the Week, 11/4/23
Police watch AI cameras across America, SEO has ruined the internet, Congressional failure to protect kids from the internet, An online stalker targets college professors, Biden's AI whisperer, more
Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!
AI Cameras Took Over One Small American Town. Now They're Everywhere
404 Media’s Joseph Cox delved into a cache of materials on Fusus, an AI-powered system rapidly springing up across small-town America and major cities that funnels usually siloed cameras of all stripes into one central location, giving law enforcement unparalleled video surveillance. Cox focuses on one client, the Starkville Police Department in Mississippi, which has 480 integrated cameras connected to the Fusus network in a town of 25,000 people.
Starkville PD is far from the only police department that has turned to Fusus. The EFF reported nearly 150 jurisdictions are Fusus customers. The Thomson Reuters Foundation, which reported on Fusus using some of EFF’s records, pointed to an IPVM report that said Fusus has helped network more than 33,000 cameras across 2,400 U.S. locations.
And Fusus is expanding internationally. In March, the company announced its launch in the United Kingdom. Dave Maass, investigations director at the EFF, told 404 Media that Fusus had a “huge presence” at the recent International Association of Chiefs of Police (IACP) conference. During a talk at the conference, the chief of the Royal Bahamas Police Force logged into his account and opened a live stream from an officer’s body camera, Maass said.
Not all communities are embracing Fusus as swiftly as Starkville did. The city council of Columbia, Missouri, for example, voted 4-3 against purchasing Fusus, Columbia’s KBIA reported at the time. “It’s really disappointing to see how cavalier we’ve become about normalizing surveillance,” Second Ward Councilperson Andrea Waner reportedly said. The city is now revisiting the decision, however.
The people who ruined the internet
In The Verge, journalist Amanda Chicago Lewis takes a rollicking, colorful deep dive into the world of search engine optimization (SEO) and the people who have made careers in the field, whose work has distorted and degraded Google search results.
“The SEO people are just trying to make money,” said Peter Kent, the author of several dozen explanatory tech books, including SEO for Dummies and Bitcoin for Dummies. “The cryptocurrency people are trying to make money, but they’re also trying to overthrow, you know, the existing system.”
Kent has done his fair share of SEO jobs but also has something of an outsider’s perspective. For years, he’s been telling people that part of the SEO industry’s reputation problem is that 80 percent of SEOs are scammers.
“A lot of companies and individuals out there selling their services as SEO gurus don’t know what they’re doing or don’t really give a damn,” he explained. As a consultant, he’s often had businesses ask him to vet the work of other SEOs. “I would take a look at their site and determine the firm had done next to nothing and had been charging thousands a month for years on end.”
When I ran this 80 percent scam figure by other SEOs, most agreed it sounded accurate, though people were divided about what to ascribe to greed and what was just stupidity.
“It isn’t because they have a scammer’s heart,” said Bruce Clay. “It’s because they don’t have the real expertise.” Clay is an avuncular man with a mustache who is often credited with coining the phrase “search engine optimization” and is therefore called “the father of SEO.” He told me his agency never hires an SEO with less than a decade of experience.
Though Google publishes guidelines explaining how to do better in search (“Make your site interesting and useful”), the exact formula for how and why one website gets placed over another is top secret, meaning that SEO involves a lot of reverse engineering and guesswork. With no clear chain of cause and effect around why a site’s ranking has changed, a less talented practitioner can take on the mien of a premodern farmer, struggling to figure out how to make it rain. Should he do that dance he did last year the night before it poured? Or maybe sacrifice his firstborn?
The algorithm is just too opaque, too complicated, and too dynamic, making it easy for scammy SEOs to pretend they know what they’re doing and difficult for outsiders to sort the good SEOs from the bad. To make things even more confusing for, say, a small business looking to hire someone to improve its Google ranking, even a talented SEO might need a year of work to make a difference, which can make a good SEO look like a scammer when, in fact, the client was just being impatient or refusing to implement essential advice. “There’s a great deal of effort that’s required to do things to move the needle, and a lot of companies aren’t willing to put out the money for that, even though it may be worthwhile in the long run,” said John Heard, a longtime SEO based in Kansas.
Why Congress Keeps Failing to Protect Kids Online
In the Atlantic, Columbia Law School professor and former White House official Tim Wu argues that congressional dysfunction has ill served parents, teenagers, and children: Congress has failed to produce a single bill protecting children from the heightened rates of depression, anxiety, and suicide associated with the online world.
The case for legislative action is overwhelming. It is insanity to imagine that platforms, which see children and teenagers as target markets, will fix these problems themselves. Teenagers often act self-assured, but their still-developing brains are bad at self-control and vulnerable to exploitation. Youth need stronger privacy protections against the collection and distribution of their personal information, which can be used for targeting. In addition, the platforms need to be pushed to do more to prevent young girls and boys from being connected to sexual predators, or served content promoting eating disorders, substance abuse, or suicide. And the sites need to hire more staff whose job it is to respond to families under attack.
All of these ideas were once what was known, politically, as low-hanging fruit. Even people who work or worked at the platforms will admit that the U.S. federal government should apply more pressure. An acquaintance who works in trust and safety at one of the platforms put it to me bluntly over drinks one evening: “The U.S. government doesn’t actually force us to do anything. Sure, Congress calls us in to yell at us every so often, but there’s no follow-up.”
“What you need to do,” she said, “is actually get on our backs and force us to spend money to protect children online. We could do more. But without pressure, we won’t.”
The Lurker
In The Verge, journalist Erika Hayasaki tells the terrifying tale of university professor Janani Umamaheswar and her husband, law professor Alex Sinha, who were targeted by an online stalker known as “S.” The stalker lobbed false, racially motivated accusations against Umamaheswar, and later against Sinha and other university professors, across websites and social media platforms, following the couple from school to school, with little recourse offered by university administrators, law enforcement, or social media companies.
In the digital age, many threats to faculty and staff do not come just from those affiliated with campuses. They can come from individuals anywhere in the world, making harassers harder to track down or punish. Scholars now appear regularly in the press, maintain their own personal webpages, post frequently on social media, and are encouraged to write for broader audiences — these are now the expectations of a job once largely confined to campus and field.
The Professor Watchlist, launched in 2016 with an original roster of 200 scholars, has grown to include nearly 1,000 names, among them Angela Davis, Ibram X. Kendi, and Noam Chomsky. The site regularly posts photos and information about professors deemed radical for “advancing leftist propaganda in the classroom.” In recent years, as attacks on critical race theory, Black history, and books or courses addressing gender identity have exploded across the country, many educators feel even more under scrutiny and at risk of extremist threats.
Even for less famous academics, like Umamaheswar and Sinha, the very substance of their work already made them potential targets in this political climate. Umamaheswar’s publications included research into “policing and racial (in)justice in the media.” Sinha’s publications included titles on “racial discrimination in the United States.” The two of them had co-authored a paper on wrongful imprisonment. Both come from South Asian backgrounds, and it was not lost on either of them that S. is white. Based on their own knowledge of the criminal justice system, it seemed entirely plausible that law enforcement would not treat her behavior as a serious threat in the first place.
Biden’s Elusive AI Whisperer Finally Goes On the Record. Here’s His Warning.
Politico magazine contributing writer Nancy Scola profiles Bruce Reed, White House deputy chief of staff, longtime Democratic Party policy expert, and one of the key architects of the AI executive order released by the administration this week, exploring his belief that AI is poised to shake the foundations of society.
When Alondra Nelson, whom Biden named to lead the Office of Science and Technology Policy in 2022, arrived at the White House, she began working, under Reed’s guidance, on what they would end up calling a “Blueprint for an AI Bill of Rights.”
Nelson said that through the months-long process of working on the document, she came to realize that success in Biden’s White House, and in Reed’s office, meant embracing a straightforward way of speaking about complex topics. The ability to boil policy down to everyday language is, aides say, something Biden values in Reed. “He’s just plain-thinking enough to be truly brilliant,” says an ally of Reed’s, speaking without attribution to avoid being seen drawing attention to him, given his success working behind closed doors.
As Reed sees it, that plainspokenness is especially valuable when dealing with a Silicon Valley that has long found success in Washington by cloaking its endeavors in technical jargon.
“The industry got away with a lot of stuff because ‘It’s complicated to understand,’” says Reed. “And who wants to work on tech policy if you actually have to understand how these microscopic things work? But you don’t. You just have to bring a common-sense view to what is good about it and what’s not, and what we can do — and treat it with the same healthy scrutiny that we do everything else in American life.”
The AI bill of rights draft took months to wend its way through the White House’s approval processes, and when it was finally, after much delay, released last October, the final 73-page blueprint was plain-talking, pointing to, for example, algorithms that unfairly decide who gets what credit products — “too often, these tools are used to limit our opportunities and prevent our access to critical resources or services” — and politically savvy. The document takes a lighter-touch approach to law enforcement, arguing that some of the document’s stated principles, like transparency, might need to be interpreted differently in that context — alarming advocates who said that police use of AI tools like facial recognition is among their top concerns.
Reed was, says Nelson, particularly committed to ensuring that AI-powered algorithms don’t exacerbate bias and discrimination — and to telling a story about AI that the American public could understand. “Bruce is the standard bearer of holding together a narrative and a vision,” says Nelson, of AI having tremendous upsides but of the need for government to defend the rights of the American public as artificial intelligence gets ever more entrenched in American life.
Fake Nudes of Real Students Cause an Uproar at a New Jersey High School
The Wall Street Journal’s Julie Jargon highlights a shocking situation at Westfield High School in New Jersey, where boys were sharing AI-generated fake pornographic images of their female classmates, a practice only a handful of states have outlawed.
Digital bullying is widespread in schools across the U.S. Smartphones and their built-in cameras have already made the damage of digital harassment deeper and longer-lasting. While people have been able to doctor images with Photoshop and similar software for years, new AI image-makers make it easy to produce entirely fabricated photos. And any image can easily be shared widely on social and messaging platforms with a few taps.
“You would have needed an entire cluster of computers to generate images a few years ago. Now you just need an iPhone,” said Ben Colman, chief executive of Reality Defender, which works with companies and government agencies to detect AI-generated fake images.
Image generators from big companies—like OpenAI’s Dall-E and Adobe’s Firefly—have moderation settings that bar users from creating pornographic images. But a quick online search turns up dozens of results for face-swapping and “clothes removing” tools. Since these services likely use publicly available open-source software, moderation and technical guardrails are difficult, if not impossible, to enforce and implement, Colman said. It is almost impossible for the human eye to distinguish real from fake, he added.
More than 90% of such false imagery—known as “deepfakes”—is porn, according to image-detection firm Sensity AI. Tech firms including Snap and TikTok have pledged to work with government groups to stop such images from circulating. Snap and others say they ban AI-generated sexual images of minors and report them to the National Center for Missing and Exploited Children.