


This is David Z. Morris, filling in for Michael Casey to talk about so-called artificial intelligence, the threats it poses to the future – and how crypto could help mitigate them.

As Michael would surely agree, there are no real days off in crypto. I got my own reminder when I recently spent a long weekend at the fantastic Readercon fiction convention. Inevitably, I missed some important crypto stories, but I also got some up-close insight into another looming novelty: the existential threat that automated large language models (LLMs) like GPT-3 pose to the entire internet.


That might sound hyperbolic. But at Readercon, I met Neil Clarke, founder and editor of the top-tier science fiction magazine Clarkesworld, which, along with other fiction publications, has become a canary in the coal mine of A.I. run amok. The rise of ChatGPT has flooded these journals with fake GPT-generated story submissions, a plague so severe that Clarkesworld was forced to temporarily pause submissions this February, threatening the work and livelihoods of real authors.

“I’ve been calling it spam,” says Clarke, “because that’s what it is. I sometimes refuse to even call it ‘artificial intelligence.’ You can’t humanize these things. It’s not like the science fiction of movies where it’s aware. It’s a statistical [language] model.”

The mention of spam should raise the antennae of longtime cryptocurrency watchers: the same problem lies at the very origins of Bitcoin.

Between 1997 and 2002, computer scientist Adam Back developed the concept of “Hashcash,” primarily intended to combat email spam by requiring senders to attach a small proof-of-work – a tiny, unavoidable computational cost – to every message. Back and his ideas became foundational to the development of Bitcoin, and he’s now CEO of crypto developer Blockstream.

Two decades later, with robotic barbarian hordes poised to swamp human communication systems, it might be time to revisit the Hashcash concept.
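For readers who have never seen it in action, here is a minimal sketch of the Hashcash idea in Python. It is illustrative only – the real protocol uses a specific SHA-1-based, dated stamp format rather than this toy SHA-256 version – but it captures the key asymmetry: minting a stamp costs real compute, while checking one is nearly free.

```python
# Toy Hashcash-style proof-of-work stamp (illustration only; the real
# protocol uses SHA-1 and a dated, structured stamp format).
import hashlib
from itertools import count

DIFFICULTY = 16  # leading zero bits required; higher = more work to mint

def mint_stamp(resource: str) -> str:
    """Search for a nonce whose hash has DIFFICULTY leading zero bits (~2**16 tries here)."""
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - DIFFICULTY) == 0:
            return stamp

def verify_stamp(stamp: str, resource: str) -> bool:
    """Verification is a single hash -- cheap for the recipient, costly to forge in bulk."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return stamp.startswith(resource + ":") and digest >> (256 - DIFFICULTY) == 0

stamp = mint_stamp("submissions@example.com")   # the sender "pays" in CPU time
assert verify_stamp(stamp, "submissions@example.com")
```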

Large Language Hustlers

“ChatGPT came out in late November,” Clarke says, “and we immediately started seeing submissions using it. The first people to adopt it were the ones already submitting plagiarized works. It was readily embraced by people who were trying to make a quick buck off other people’s work.”

As they faced down the spam problem, Clarke says he and his team quickly realized the attack was coordinated. YouTube and TikTok channels focused on get-rich-quick schemes were promising viewers they could make thousands of dollars by submitting GPT-generated stories to fiction magazines like Clarkesworld. Clarkesworld pays a few hundred dollars per story, depending on length – not much more than beer money in some parts of the world, but extremely meaningful in others.

Those fraudulent promises from online grifters seem to have spread fast. Clarke says he received 54 AI-generated submissions in December. In January, he got 117 fake stories. In February, the number hit 514 before Clarke closed submissions midday on February 20.

“And that morning alone,” he says, “we had 50.”

Clarkesworld has a small staff, who normally review about 1,100 submissions a month. So the accelerating flood of trash threatened to overwhelm them, and solutions weren’t obvious.

“We have an open submission process, specifically designed to welcome in new writers and new voices,” says Clarke. “So we could close submissions from certain locations [to fight spam], but we also have legitimate authors coming in from those countries. And we’ve been told things like, ‘The payment for this story will cover my bills for a month.’”

“Authors like that are getting buried. The A.I. submissions hurt new authors, and authors who might not be from communities that are well-connected.” This is one clear way auto-generated content threatens to make the internet worse for human beings – particularly those at the margins.

“If you go back 15 or 20 years when we took submissions on paper,” Clarke says, “just the cost of postage was enough to decrease submissions from outside the U.S., Canada, and U.K. substantially. And as soon as you have digital submissions, we had this flood of international submissions.” That has led to a huge diversification of the fiction world – a creative renaissance that’s now threatened by the rise of LLMs.

Clarke is also a coder, which gave him useful tools for addressing the spam challenge. He began associating more metadata with submissions, such as whether they came through a VPN and the length of the user’s session. These and other criteria are now used as part of a “points system” that places stories more likely to be fake further down a review queue. This helps real authors get read first, but also ensures that every submission is eventually reviewed.

Finally, if a story is determined to be LLM-generated, the submitter is permanently banned from the system.

Those measures have helped Clarkesworld reopen submissions, for now – but a continued rise in the volume of spam they’re dealing with would mean the solution is only temporary.
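Clarke hasn’t published his exact criteria, and the signals and weights below are invented for illustration, but a triage queue of the kind he describes can be sketched in a few lines of Python: score each submission on suspicion, read the least suspicious first, and still read everything eventually.

```python
# Hypothetical sketch of a "points system" triage queue. The signals and
# weights are invented for illustration; Clarkesworld's actual criteria
# are not public.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Submission:
    suspicion: int                    # higher = more likely spam, reviewed later
    title: str = field(compare=False)

def suspicion_score(used_vpn: bool, session_seconds: int, prior_flags: int) -> int:
    points = 0
    if used_vpn:
        points += 2                   # assumed signal: came in through a VPN
    if session_seconds < 60:          # assumed signal: pasted and submitted almost instantly
        points += 3
    points += min(prior_flags, 5)     # assumed signal: history of flagged submissions
    return points

queue: list[Submission] = []
heapq.heappush(queue, Submission(suspicion_score(False, 900, 0), "A Door Into Winter"))
heapq.heappush(queue, Submission(suspicion_score(True, 20, 4), "The $500 Story Machine"))

while queue:                          # real authors surface first; nothing goes unread
    print(heapq.heappop(queue).title)
```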

“Worse than the Worst Human Writer”

One important aspect of Clarke’s experience is that the actual quality of the robotic submissions has been abysmally low. They’re almost instantly recognizable to a human reader, and have no actual chance of being published.

“ChatGPT3 was writing at a level below the worst human writers,” says Clarke, who after two decades as an editor knows exactly what the worst looks like. “GPT4 is getting closer to the worst human writers, but even that’s still rare.”

“The common thing is that they have perfect grammar, they have perfect spelling,” Clarke continues. “But the stories themselves don’t make a lot of sense. They jump over important things. They’ll start out with a basic premise, like ecological collapse, and introduce some scientists who can solve the problem, and then suddenly they’ve solved the problem. It’s missing the middle of the story, and bookending it with stereotypical openings and closings, done very poorly.”

That sounds a lot like Ted Chiang’s recent characterization of ChatGPT’s output as “a blurry JPEG of the web.” This manifest crappiness happily debunks much of the brain-dead hype around LLMs. But it also makes the image of talented (and wildly underpaid) editors being forced to sift through the dross all the more depressing.

The Promise of Small, Refundable Fees

Another option for reducing bad submissions is a submission fee. Clarke cites ethical and creative concerns, since a fee would limit access – but in fact, the problems largely boil down to the technical shortcomings of current global payments infrastructure.

Clarke says he would be willing to charge a submission fee if it could be easily refunded – for instance, to writers whose stories were accepted, or simply weren’t AI-generated. An ideal spam-blocking fee would also be quite small – certainly far lower than the $25 or $30 worth of postage that kept away developing-world authors in the pre-internet era.

But there’s no way to do that with current tech.

“Tell me a credit card company where I can refund almost all of it. I’d lose the account,” says Clarke. As any good crypto bro knows, credit cards also don’t play well with small payments. But those aren’t even the biggest problems.

“There are also problems with trying to take payments in different parts of the world,” Clarke continues. “There are a number of African countries that credit card companies won’t work with. So that would eliminate authors. I’ve also had people suggest identity services, but those also have nation-sized holes in them. We need something that works for everybody.”

If you’re reading this, you already know where we’re headed: at least in principle, cryptocurrency and related systems could help mitigate Clarkesworld’s fake submission problem.

Requiring a small payment for all submissions would reduce low-quality submissions, lightening editors’ workloads and compensating them for the spam that did come in. Because payments could be cheaply, quickly and easily returned to real authors, the cost to actual human writers would be marginal. And because these systems are not confined by national borders, no real writers would be crowded out by the robo-regurgitators.
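Nothing like this exists at Clarkesworld today, and the deposit amount and names below are invented, but the flow itself is simple enough to sketch: a tiny deposit held in escrow, returned to anyone who turns out to be human, forfeited by the spammers.

```python
# Toy sketch of a refundable submission deposit. The amounts, names and
# flow are assumptions for illustration -- not a real payment integration
# or an existing Clarkesworld feature.
class SubmissionEscrow:
    def __init__(self, deposit: float = 0.50):    # e.g. 50 cents' worth of a stablecoin
        self.deposit = deposit
        self.held: dict[str, float] = {}           # submission_id -> amount held
        self.spam_pool = 0.0                       # forfeited deposits offset editors' wasted time

    def submit(self, submission_id: str) -> None:
        """Author locks a tiny deposit when submitting."""
        self.held[submission_id] = self.deposit

    def refund(self, submission_id: str) -> float:
        """Story judged human-written (or accepted): the deposit goes straight back."""
        return self.held.pop(submission_id)

    def forfeit(self, submission_id: str) -> None:
        """Story flagged as LLM spam: the deposit is kept."""
        self.spam_pool += self.held.pop(submission_id)

escrow = SubmissionEscrow()
escrow.submit("story-001")
escrow.refund("story-001")      # a real author is out nothing
escrow.submit("story-002")
escrow.forfeit("story-002")     # the spammer pays for the editor's time
```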

Though it would take considerable elaboration, some version of the same system may someday serve parallel purposes in less boutique settings. One can imagine a Steem-like system of staking incentives being used to punish automated posting on forums or social media, for instance. More elaborate decentralized identity systems, such as SpruceID, are more challenging and, for now, more nascent, but could have even more profound potential.
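Again, purely to make the idea concrete – the parameters and mechanics here are imagined, not Steem’s actual design – a stake-and-slash posting rule might look something like this:

```python
# Imagined stake-and-slash rule for a forum or social platform. Not Steem's
# real mechanism; just a concretization of the staking idea above.
class StakedForum:
    def __init__(self, required_stake: float = 10.0, slash_fraction: float = 0.5):
        self.required_stake = required_stake      # tokens locked before an account can post
        self.slash_fraction = slash_fraction      # share of stake burned when flagged
        self.stakes: dict[str, float] = {}

    def join(self, user: str, stake: float) -> None:
        if stake < self.required_stake:
            raise ValueError("stake too small to post")
        self.stakes[user] = stake

    def can_post(self, user: str) -> bool:
        return self.stakes.get(user, 0.0) >= self.required_stake

    def flag_as_bot(self, user: str) -> None:
        """Moderation flag: burn part of the stake until spamming stops paying."""
        self.stakes[user] = self.stakes.get(user, 0.0) * (1 - self.slash_fraction)
```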

To be clear, none of this should be necessary. LLMs are quickly being revealed as little more than parlor tricks, whose real utility is probably limited, at least in the near term, to short-form customer service and clickbait entertainment. (Take, for instance, CNET’s disastrous experiment with using GPT to write news articles.)

The technology’s biggest impacts are instead seen in the spread of fourth-rate gibberish that wastes the time and brainpower of all the actual humans involved. But if this is what the god-princes of Silicon Valley see as the next frontier of venture capital riches, then that is the world we’ll have to live in. At the very least, crypto offers one hope for fighting back.

Edited by Ben Schiller.


