At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating.
We trust their intentions, and know that those intentions will inform their actions.
We might not know someone personally, or know their motivations, but we can trust their behavior.
We don't know whether or not someone wants to steal, but maybe we can trust that they won't.
I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior.
Laws and security technologies are systems of trust that compel us to behave in trustworthy ways.
Social trust scales better, but embeds all sorts of bias and prejudice.
That's because, in order to scale, social trust has to be structured, system- and rule-oriented, and that's where the bias gets embedded.
The former is interpersonal trust, based on morals and reputation. The latter is a service, made possible by social trust.
Because of how large and complex society has become, we have replaced many of the rituals and behaviors of interpersonal trust with security mechanisms that enforce reliability and predictability: social trust.
We are forced to trust the local police, because they're the only law enforcement authority in town.
We are forced to trust some corporations, because there aren't viable alternatives.
There's illegality, where you mistakenly trust the AI to obey the law.
There are probably more ways trusting an AI can fail.
The more you can trust that your societal interactions are reliable and predictable, the more you can ignore their details.
Doctors, lawyers, accountants: these are all trusted agents.
The point of government is to create social trust.
To the extent a government improves the overall trust in society, it succeeds.
That's how we can create the social trust that society needs to thrive.
Originally published on www.schneier.com, Mon, 04 Dec 2023 12:58:06 +0000.