Best Infosec-Related Long Reads for the Week, 10/14/23

How data flows from apps to the feds, Top-notch AI voices are the new misinformation battlefield, Tech industry's attempt to undermine right-to-repair, The shape-shifting crypto war, more


Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec-related pieces we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!

How Ads on Your Phone Can Aid Government Surveillance

In this Wall Street Journal piece, complete with a sophisticated, interactive graphical data stream, Byron Tau, Andrew Mollica, Patience Haggin, and Dustin Volz discuss how they identified a network of brokers and advertising exchanges whose data was flowing from apps to the Defense Department and intelligence agencies through a company called Near Intelligence.

Near Intelligence, based in India with offices in the U.S. and France, was until earlier this year obtaining data from other brokers and advertising networks. It had several contracts with government contractors that were then passing that data to U.S. intelligence agencies and military commands, according to people familiar with the matter and documents reviewed by the Journal.

Near was surreptitiously obtaining data from numerous advertising exchanges, the people said, and claimed to have data about more than a billion devices. When contacted by the Journal, several ad exchanges said they have cut Near off for violations of their terms of service. The exchanges told the Journal that their data is meant to help target ads, not for other purposes.

Privacy, legal and compliance specialists inside Near warned the company’s leadership that it didn’t have permission to save real-time bidding data and resell it this way, especially in the wake of tough new European privacy standards that came into place in 2018, the people said. Those specialists also warned the company that indirect sales to intelligence-community clients were a reputational risk. Near’s leadership didn’t act on those warnings, the people said.

In an email viewed by the Journal, Near’s general counsel and chief privacy officer, Jay Angelo, wrote to CEO Anil Mathews that the company was facing three privacy problems. “We sell geolocation data for which we do not have consent to do so…we sell/share device ID data for which we do not have consent to do so [and] we sell data outside the EU for which we do not have consent to do so.”

In another message, Angelo called the transfer of European Union data a “massive illegal data dump,” adding that the U.S. federal government “gets our illegal EU data twice per day.”

‘A.I. Obama’ and Fake Newscasters: How A.I. Audio Is Swarming TikTok

In the New York Times, Stuart A. Thompson and Sapna Maheshwari delve into how convincing AI-generated voices have gained traction on TikTok since companies like ElevenLabs released a slate of new tools late last year, rapidly becoming a new weapon on the online misinformation battlefield.

Disinformation watchdogs have noticed the number of videos containing A.I. voices has increased as content producers and misinformation peddlers adopt the novel tools. Social platforms like TikTok are scrambling to flag and label such content.

The video that sounded like Mr. Obama was discovered by NewsGuard, a company that monitors online misinformation. The video was published by one of 17 TikTok accounts pushing baseless claims with fake audio that NewsGuard identified, according to a report the group released in September. The accounts mostly published videos about celebrity rumors using narration from an A.I. voice, but also promoted the baseless claim that Mr. Obama is gay and the conspiracy theory that Oprah Winfrey is involved in the slave trade. The channels had collectively received hundreds of millions of views and comments that suggested some viewers believed the claims.

While the channels had no obvious political agenda, NewsGuard said, the use of A.I. voices to share mostly salacious gossip and rumors offered a road map for bad actors wanting to manipulate public opinion and share falsehoods to mass audiences online.

“It’s a way for these accounts to gain a foothold, to gain a following that can draw engagement from a wide audience,” said Jack Brewster, the enterprise editor at NewsGuard. “Once they have the credibility of having a large following, they can dip their toe into more conspiratorial content.”

The Tech Industry Has a New Plan to Stop Right to Repair Laws

In 404 Media, Jason Koebler reports that, on the heels of California Governor Gavin Newsom signing the most expansive and important electronics right-to-repair law in the country, the tech industry is pushing for something called a “Memorandum of Understanding,” a cease-fire agreement between right-to-repair advocates and electronics makers that is, in reality, a blatant and cynical attempt to provide cover for challenging the California law and preempting other states from passing similar legislation.

There is a long history of MOUs in the repair world, and, generally speaking, they all seem to eventually favor big corporations.

After Massachusetts passed a 2013 law, automakers signed a national MOU that basically allowed that law to become national legislation (automakers agreed to comply with the Massachusetts law throughout the nation). This basically worked for a little while, until car manufacturers began to invent new ways of preventing repairs that were not covered by the MOU and which are the current subject of the 2020 lawsuit in Massachusetts. There, too, car manufacturers have signed a new MOU that they claim negates the need for legislation; actual car repair pros say the new MOU is “inadequate” and that they are not party to it.

John Deere and the agriculture industry, meanwhile, have signed a series of MOUs with lobbying groups that represent farmers that are nominally designed to make it easier for them to repair tractors. But Deere originally didn’t comply with its first promise, which was not technically an MOU but a bilateral “Statement of Principles,” between Deere’s dealers and the California Farm Bureau. A later MOU signed earlier this year between Deere and the American Farm Bureau Federation has a little more teeth but can become null and void if any state passes any law that is stronger than the MOU. As an anecdote for how this is going in practice, a farmer sent me a Signal message last weekend in the middle of the night to see if I could tell him where to find software that would allow him to “install and/or update corner post lights circuit board on 2008 John Deere 9870 combine … so hard to get help from John Deere without sacrificing first born lol.”

This is to say that voluntary MOUs are almost always less good than laws, which, you know, have the force of law behind them and penalties for not complying with them.

The Shapeshifting Crypto Wars

In the Lawfare blog, Susan Landau, Bridge Professor in The Fletcher School and Tufts School of Engineering, Department of Computer Science, examines how law enforcement’s fifty-year-old fight against encryption has shifted the basis of its arguments, from national security-based export controls, to “going dark” in criminal investigations, to its current iteration of fighting child sexual abuse material (CSAM).

Many national security leaders strongly agree with the move to broader availability of E2EE. In 2015, former NSA Director Mike McConnell, former Department of Homeland Security Secretary Michael Chertoff, and former Deputy Defense Secretary William Lynn wrote in the Washington Post that “the greater public good is a secure communications infrastructure protected by ubiquitous encryption at the device, server and enterprise level without building in means for government monitoring.” In 2016, former NSA and CIA Director Michael Hayden told an interviewer that “we are probably better served by not punching any holes into a strong encryption system—even well-guarded ones.” That same year, Robert Hannigan, the former head of Government Communications Headquarters, the U.K.’s signals intelligence agency, argued for keeping encryption strong: “I am not in favour of banning encryption. Nor am I [] asking for mandatory ‘back doors’.”

Such arguments are part of the reason behind the change in cryptographic export controls in 1999 and 2000. It’s why encryption controls did not return in the face of terrorist attacks such as in Paris and San Bernardino. In 2015, the Obama administration opted not to pursue a legislative “solution” to the encryption problem of locked devices. The decided lack of public statements by the U.S. national security establishment in support of law enforcement’s stance on encryption is also notable.

The argument for ending E2EE to prevent CSAE fails to respect that balance needed in weighing competing public interests. Curbing the use of E2EE to prevent CSAE [child sexual abuse and exploitation] would be like prohibiting the use of airbags because their deployment can injure short people. The action may help protect those shorter in stature but only at the higher societal cost of failing to prevent far more injuries of a more serious nature.

Yet despite the strong support of the national security community and economic arguments for wide public availability of E2EE, in the name of seeking to reduce the online spread of CSAE, some legislators are strongly pressing to stop the use of E2EE—but doing so in a way that disguises some of the consequences of proposed legislation.

Your old phone is safe for longer than you think

In the Washington Post, Shira Ovide explains to consumers why security experts believe smartphones can remain safe to use for up to seven or eight years, so long as users keep up with the security patches vendors release, with one caveat.

One exception was Tarah Wheeler, CEO of the information security company Red Queen Dynamics. She said we use our phones so much, and the security of newer devices is so much better, that it may not be worth keeping old phones for many years.

If you have an Android phone from Samsung or another company, it’s more complicated to figure out if your older phone is secure enough to keep using.

Samsung says it will generally keep fixing security flaws for up to five years for its newer smartphones. You need to dig into the fine print, though.

If you see your Samsung phone on this list for twice-a-year security updates, that’s a sign that Samsung is losing interest in your device and it’s on the path to becoming less secure. That includes models such as the Galaxy A20s from 2019.

If your Samsung phone isn’t on that list at all, it may be unsafe.

I also like the End of Life website, which shows whether Apple, Samsung and other companies are still fixing security flaws for a particular product.

Your Android phone may have less frequent security fixes than Samsung discloses, depending on your wireless carrier. Apple also may update the security less often for older operating systems like iOS 15 and 16.

Most security experts said that’s still plenty secure for most people.