Best Infosec-Related Long Reads of the Week, 2/18/23

Scammers are using fake receipts to target users of payment apps in South America, America's overclassification crisis, How two reporters secretly recorded "Jorge," AI's hacking threat


Metacurity is pleased to offer our free and paid subscribers this weekly digest of the best long-form infosec pieces and related articles that we couldn’t properly fit into our daily crush of news. So tell us what you think, and feel free to share your favorite long reads via email at info@metacurity.com. We’ll gladly credit you with a hat tip. Happy reading!

Scammers are creating fake receipts — and a digital shoplifting boom

José Luis Peñarredonda and Daniela Dib in Rest of World reveal how scammers in South America are using makeshift apps to craft fake receipts at scale, targeting users of payment apps, exploiting the rise of these apps among the unbanked, and marking a new era of digital shoplifting.

The prevalence of these sorts of scams is also related to the ease with which fake receipts and SMSes can be created. App stores unaffiliated with Google Play and Apple’s App Store — both of which tend to enforce policies that detect and ban malicious apps more rigorously — offer apps that allow users to create fake vouchers simply by typing in the target’s name and phone number, just as they would if they were making a genuine transaction. Then, scammers send these fake receipts over WhatsApp or show them directly to unsuspecting clerks.

Apart from being easy to use, these scam apps are also very easy to create, Diego Velazquez, a developer who posted a how-to video on his YouTube channel to prove the point, told Rest of World. Scam apps basically just copy templates based on the original app’s receipts, he said.

The Cult of Secrecy: America’s Classification Crisis

Patrick Radden Keefe in Foreign Affairs predicts that the wholesale leaks of classified data, such as the still-unidentified Shadow Brokers’ leak of NSA-developed “cyberweapons” or the leak of CIA hacking tools by an agency software engineer, Joshua Schulte, will continue because the overclassification of government secrets has grown out of control.

The math becomes simple. Combine the vast dimensions of the classified world with the huge numbers of people who need access to it to do their jobs, and factor in the increasing ease of copying and transferring enormous volumes of digital information, and it seems almost certain that wholesale leaks of classified data will continue. Decades of bad habits practiced by government agencies hooked on classification clearly undermine transparency and democratic accountability, and this impulse to classify indiscriminately is often justified by invoking national security. But as [Matthew Connelly, author of a new book The Declassification Engine] points out, when everything is secret, nothing is secret: the “very size of this dark state . . . has become its own security risk.”

If the dangers of excessive government secrecy are so widely acknowledged, why has nothing been done about it? One reason, Connelly suggests, is that the authority to classify has become a cherished prerogative of government power—a tool used by presidents, generals, and various chieftains of lesser fiefs to enshroud their decisions in mystery and ward off scrutiny or accountability. Reform efforts founder in the face of bureaucratic recalcitrance. But another challenge is the sheer volume of restricted documents: because the government classifies more quickly than it declassifies, the amount keeps growing every year. How do you begin to declassify all this information, and if you cannot, what becomes of the historical record? In his book, Connelly proposes what might just be an inspired solution—but only if the government takes him up on it.

The people who kill the truth

Gur Megiddo and Omer Benjakob in Haaretz explain how, as part of a major investigation by the Paris-based organization Forbidden Stories, they secretly recorded “Jorge,” in reality Tal Hanan, a 50-year-old former Israeli special forces operative, who showed them how he hacked the email and Telegram accounts of five victims in Kenya: presidential aide Kibet; Minister Chirchir; former National Assembly member James Omingo Magara; election campaign adviser Dennis Itumbi; and a political functionary named Simon Mbugua.

The pixelated Jorge turned out to be a first-rate salesman. In a jaw-dropping presentation to the clients, he revealed the array of tools at his disposal to achieve the ends for which the clients had approached him: cyberattacks; transnational disinformation campaigns; forged documents; incrimination of political adversaries; dissemination of fake reports; theft of bank documents.

Each of the tools was an instrument that could be used to break down resistance to political moves, or just to liquidate (in every nonphysical sense) the client’s political, personal or business rivals.

Without restraints, without morality and without discrimination, Jorge’s toolbox could be placed at the disposal of anyone ready to pay for it – even if its use resulted in an immediate danger to life.

The Age of AI Hacking Is Closer Than You Think

Bruce Schneier offers in Wired an excerpt from his new book, A Hacker's Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back, which argues that artificial intelligence will accelerate the speed, scale, scope, and sophistication of computer hacking.

In the absence of any meaningful regulation, there really isn’t anything we can do to prevent AI hacking from unfolding. We need to accept that it is inevitable, and build robust governing structures that can quickly and effectively respond by normalizing beneficial hacks into the system and neutralizing the malicious or inadvertently damaging ones.

This challenge raises deeper, harder questions than how AI will evolve or how institutions can respond to it: What hacks count as beneficial? Which are damaging? And who decides? If you think government should be small enough to drown in a bathtub, then you probably think hacks that reduce government’s ability to control its citizens are usually good. But you still might not want to substitute technological overlords for political ones. If you believe in the precautionary principle, you want as many experts testing and judging hacks as possible before they’re incorporated into our social systems. And you might want to apply that principle further upstream, to the institutions and structures that make those hacks possible.
