Societies with kings always have jesters: people who know exactly what the king wants to hear and are willing and able to repeat it, even when it is demonstrably wrong. In the first installment of this series, I mentioned that these stylistic sycophants are called the Glitterati. Tom Woods recently sent out a mailer discussing Yuval Harari.
Apparently, Mr. Harari took a swipe at Bitcoin, calling it an act of desperation and a sign of lost faith in the institutions of society. Here’s part of Woods’ response to that:
Your personal opinion of Bitcoin, pro or con, is not at issue here, so it is not necessary to send me a treatise on the subject. What is at issue, and what is simultaneously amusing and revealing, is his alarm at the prospect of more and more people coming to distrust fiat money.
First, he tells us that Bitcoin is based on "distrust of human institutions" -- but Bitcoin, not having been sent to us by Mars, is itself a human institution. Its stated purpose is to improve on existing human institutions. So when he says he hopes "humanity finds a way to build trustworthy human institutions," that's pretty much exactly what Bitcoin developers and supporters say they're doing.
If it's a decline in "trust in human institutions" he fears, perhaps he could trouble himself to take a moment to understand where that distrust has come from. Instead of blaming the victim, he might consider why someone might disapprove of fiat money and/or central banks.
That’s a philosophic smack-down of Harari by Woods. The real debate is over whether people have the right to choose how they interact with each other economically. Fiat money encroaches on economic freedom by interfering with the price mechanism that free agents rely on to adjust their economic behavior.
When fiat money gets printed, a tiny minority of society (cronies and banksters) receives the “new money” first, before prices rise in response to the influx. These first recipients get the benefit — economists call this the Cantillon effect. But what about the rest of us? What do we get?
We get the higher prices. Thanks.
Remember that the first popular member of the Glitterati was Machiavelli, who knew exactly what to write to win favor with the royalty of his day. By justifying harsh or deceptive rule over citizen-subjects, Machiavelli earned himself a seat at the table with royalty.
There is good reason to believe that Harari says things kings like, on purpose. The architects of the COVID response had other plans, such as Building Back Better by enshrining a patronage system for Green (Climate Change) or ESG schemes, and by convincing the world to adopt central bank digital currencies (CBDCs).
Once anyone could be put out of business for not being Green enough or “ESG” enough, and funding could be cut off on a whim because the currency is entirely digital, whoever is in charge gets a boon: their friends will trade loyalty for riches, while their enemies languish in poverty, strife, and hunger.
Also of note: Harari is the man who declared that free will is “over” because of Big Data (though his term is “Dataism”). Harari is a historian, not a master of philosophy, and he even claims that the notion of free will was created by religionists (Christians) to justify God’s punishments for wrong-doing on Earth.
But that claim gives away his lack of philosophical study. Before the Council of Nicaea in 325, there was Alexander of Aphrodisias (fl. ~200 AD), who philosophically defended free will against the Stoics. Alexander was an Aristotelian, the foremost ancient commentator on Aristotle’s works, and he was not writing as a Christian when he defended free will.
NOTE: Plato and Aristotle didn’t offer a formal philosophical defense of free will because it was taken for granted that you must personally choose to build your own character. They found the point uncontroversial, and it wasn’t until the Stoics later challenged free will that it needed a formal defense, which Alexander of Aphrodisias supplied.
Harari believes that silicon chips can “know you” even better than you know yourself. He writes about Angelina Jolie, whom he depicts as relying on a computer algorithm that told her to remove her own breasts (because she carries a gene mutation, BRCA1, that raises susceptibility to breast cancer).
Imagine AI telling boys they should remove their penises, or telling girls they should take male hormones. In such a world, the later pain would be “no one’s fault” because no person made the decision; a computer did. But in such a world, the computer algorithm becomes a “black box” for all manner of exploitation:
General: Mr. President, why are we invading these islands and taking all of their resources away from them by force?
President: The algorithm said so.
General: So the fact that you get personally rich and that you use the islanders as your personal slaves didn’t factor into the motivation?
President: No, the algorithm said that the greatest good would occur, for the greatest number of people, if we went ahead and plundered these islands. So I cannot be morally blamed for any harm caused. Oh yeah, I forgot to tell you: The algorithm said you should take a pay cut from now on.
The people would be told that they are “too stupid” to understand the guidance of the AI systems, and that they cannot blame their leaders for the hard times they must endure or for the pain and suffering they witness, whether their own or that of others sacrificed to the AI system’s definition of the Greater Good.
During the Crusades, soldiers convinced themselves they could harm others because “God wills it!” Today’s technocrats, with equally unjustifiable confidence, convince themselves that they may take from others because “AI said so.” But we’ve already seen bias from AI systems, so who’s fooling whom?
Twisted (or bought-off) programmers can get AI to agree to anything, and assurances that “someone is watching the algorithms and adjusting them for us” are not enough. That’s what Google and Facebook claimed when they showed bias against conservatives: they said they were periodically fixing the algorithms.
Reliance on BigData is just another excuse for the continued bad behavior of elites.
References
[2018 Harari essay] — https://www.theguardian.com/books/2018/sep/14/yuval-noah-harari-the-new-threat-to-liberal-democracy
[AI gone wrong: 21 times] — https://www.privateinternetaccess.com/blog/ai-gone-wrong/
[Embarrassing AI moments] — https://techcrunch.com/2024/02/23/embarrassing-and-wrong-google-admits-it-lost-control-of-image-generating-ai
Jeffrey Tucker’s theory about the lockdowns only begins to open the window on malevolent intent. Maybe I don’t fully understand his theory, but in my reading of the facts and events as I experienced them, only evil fruits came from the pandemic response, so malevolence was on the table from day one.