No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI. Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.
Chair Khan didn't treat the question as an excuse to launch into the policymaker's verbal equivalent of a compulsory gymnastics exhibition.
Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act.
That's what made Chair Khan's remarks so significant to us: in proposing that Section 5 could be used to regulate AI training, she is opening the door to addressing these issues head-on.
The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans' consumer, privacy, labor and other rights.
Some industry leaders insist that chatbots' tendency to generate confident falsehoods can never be fixed, even as startups publish papers claiming to have solved the problem. Or, put more simply: today's chatbots lie, and no one can stop them.
That's a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm.
It's hard enough to attend your grandmother's funeral without the added pain of your airline's chatbot lying to you about the bereavement fare.
Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace.
If you can prove that your chatbot won't lie to your customers' users, you can also get an insurance company to write you a policy allowing you to indemnify your customers against claims arising from your chatbot's output.
Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency's thinking on how companies should present their chatbots to users.
We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions, especially if that guidance is backed up with muscular enforcement.
Published on www.eff.org on Fri, 28 Jun 2024 20:43:05 +0000.