
Artificial intelligence is a liability


Comment Artificial intelligence, meaning large foundational models that predict text and can categorize images and speech, looks more like a liability than an asset.

So far, the dollar damage has been minor. In 2019, a Tesla driver who was operating his vehicle with the assistance of the carmaker’s Autopilot software ran a red light and struck another vehicle. The two occupants of that vehicle died, and last week the Tesla motorist was ordered to pay $23,000 in restitution.

Around the same time, Tesla issued a recall of two million vehicles to revise its Autopilot software in response to a US National Highway Traffic Safety Administration (NHTSA) investigation that found Autopilot’s safety controls lacking.

Twenty-three thousand dollars is not a lot for two lives, but the families involved are pursuing civil claims against the driver and against Tesla, so the cost may rise. And there are said to be at least a dozen lawsuits involving Autopilot in the US.

Meanwhile, in the healthcare industry, UnitedHealthcare is being sued because the nH Predict AI Model it acquired through its 2020 purchase of Navihealth has allegedly been denying necessary post-acute care to insured seniors.

Restraints required

Companies selling AI models and services clearly understand there’s a problem. They refer to “guardrails” put in place around foundational models to help them stay in their lane – even if these don’t work very well. Precautions of this sort would be unnecessary if these models didn’t contain child sexual abuse material and a panoply of other toxic content.

It’s as if AI developers read writer Alex Blechman’s viral post about tech companies interpreting the cautionary tale “Don’t Create the Torment Nexus” as a product roadmap and said, “Looks good to me.”

Of course there are older literary references that suit AI, such as Mary Shelley’s Frankenstein or Pandora’s Box – a particularly good fit given that AI models are frequently referred to as black boxes due to the lack of transparency about training material.

So far, the inscrutability of commercial models strewn with harmful content hasn’t taken too much of a toll on businesses. There’s a recent claim by Chris Bakke, founder and CEO at Laskie (acquired this year by a company calling itself X), that a GM chatbot used by a Watsonville, California, auto dealership was talked into agreeing to sell a 2024 Chevy Tahoe for $1 with a bit of prompt engineering. But the dealership isn’t likely to follow through on that commitment.

Still, the risk of relying on AI models is enough that Google, Microsoft, and Anthropic have offered to indemnify customers from copyright claims (which are numerous and largely unresolved). That’s not something you do unless there’s a chance of liability.

Regulation

Authorities are still trying to figure out how AI liability should be assessed. Consider how the European Commission framed the issue as it works toward formulating a workable legal framework for artificial intelligence:

“Current liability rules, in particular national rules based on fault, are not adapted to handle compensation claims for harm caused by AI-enabled products/services,” the Commission said [PDF] last year. “Under such rules, victims need to prove a wrongful action/omission of a person that caused the damage. The specific characteristics of AI, including autonomy and opacity (the so-called ‘black box’ effect), make it difficult or prohibitively expensive to identify the liable person and prove the requirements for a successful liability claim.”

And US lawmakers have proposed a Bipartisan AI Framework to “ensure that AI companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms.”

Don’t get too excited about seeing AI firm execs behind bars: The involvement of AI industry leaders in this process suggests any rules that emerge will be about as effective as other regulatory frameworks that have been defanged by lobbyists.

But excitement is part of the problem: There’s just so much hype about stochastic parrots, as AI models have been called.

AI models have real value in some contexts, as noted by security firm Socket, which has used ChatGPT to help flag software vulnerabilities. They’ve done wonders for speech recognition, translation, and image recognition, to the detriment of transcribers and CAPTCHA puzzles. They’ve reminded industry veterans of how much fun it was to play with Eliza, an early chatbot. They look like they have real utility in decision support jobs, provided there’s a human in the loop. And they’ve taken complex command line incantations, with their assorted flags and parameters, and turned them into equally complex text prompts that can go on for paragraphs.

But the automation enabled by AI comes at a cost. In a recent article for sci-fi trade magazine Locus, author and activist Cory Doctorow argued, “AI companies are implicitly betting that their customers will buy AI for highly consequential automation, fire workers, and cause physical, mental and economic harm to their own customers as a result, somehow escaping liability for these harms.”

Doctorow is skeptical that there’s a meaningful market for AI services in high-value businesses, due to the risks, and believes we’re in an AI bubble. He points to GM Cruise as an example, noting that the self-driving car company’s business model – in limbo due to a pedestrian injury and recall – amounts to replacing each low-wage driver with 1.5 more costly remote supervisors, without precluding the possibility of accidents and associated lawsuits.

Overload

At least there’s some potential for low-value businesses associated with AI. These involve paying monthly to access an API for inaccurate chat, algorithmic image generation that co-opts artists’ styles without permission, or generating hundreds of fake news sites (or books) in a way that “floods the zone” with misinformation.

It seems unlikely that Arena Group’s claim that its AI platform can reduce the time required to create articles for publications like Sports Illustrated by 80-90 percent will improve reader satisfaction, brand loyalty, or content quality. But perhaps generating more articles than humanly possible across the firm’s hundreds of titles will lead to more page views by bots and more programmatic ad revenue from ad buyers too naive to catch on.

Part of the problem is that the primary AI promoters – Amazon, Google, Nvidia, and Microsoft – operate cloud platforms or sell GPU hardware. They’re the pick-and-shovel vendors of the AI gold rush, who just want to sell their cloud services or number-crunching kit. They were all on board for the blockchain express and cryptocurrency supremacy until that delusion died down.

They’re even more enthusiastic about helping companies run AI workloads, useful or otherwise. They’re simply cloud seeding, hoping to drive business to their rent-a-processor operations. Similarly, machine-learning startups without infrastructure are hoping that breathy talk of transformational technology will inflate their company valuation to reward early investors.

The AI craze can also be attributed in part to the tech industry’s perpetual effort to answer “What comes next?” during a time of prolonged stasis. Apple, Google, Amazon, Meta, Microsoft, and Nvidia have all been doing their best to prevent meaningful competition, and since the start of the cloud and mobile era in the mid-2000s they’ve done so fairly well. Not that anti-competitive behavior is anything new – recall the 2010 industry settlement with the US Department of Justice over the agreements between Adobe, Google, Intel, Intuit, and Pixar to avoid poaching talent from one another.

Microsoft made much of its AI integration with Bing, long overshadowed by Google Search, claiming it is “reinventing search.” But not much has changed since then – Bing reportedly has failed to take any market share from Google, at a time when there’s widespread sentiment that Google Search – also now larded with AI – has been getting worse.

Bring on 2024

To find out what comes next, we’ll have to wait for the Justice Department and regulators elsewhere in the world to force changes through antitrust enforcement and lawsuits. Because Google has a lock on search distribution – through deals with Apple and others – and on digital advertising – through its deal with Meta (cleared in the US, still under investigation in Europe and the UK) and other activities that piqued the interest of the Justice Department – neither the search business nor the ad business looks amenable to new challengers, no matter how much AI sauce gets added.

AI is a liability not just in the financial sense but also in the ethical sense. It promises wage savings – despite being extremely expensive in terms of training, development and environmental impact – while encouraging indifference to human labor, intellectual property, harmful output, and informational accuracy. AI invites companies to remove people from the equation when they often add value that isn’t obvious from a balance sheet.

There’s room for AI to be genuinely useful, but it needs to be deployed to help people rather than get rid of them. ®
