Tesla’s Elon Musk is battling on every front of our rapidly unfolding future and may have found the ultimate means of insulting our (artificial) intelligence.
Elon Musk, the CEO of Tesla and founder of SpaceX, famously warned us that artificial intelligence could lead to the extinction of humanity. That explains why he co-founded and funded the nonprofit OpenAI.
OpenAI appears to have defined for itself a twofold mission: to create a monster and to protect the public from it. Even as it boasts about the unparalleled prowess of its new AI model, called GPT-2, OpenAI has sent us a warning: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model.”
Human intelligence might interpret it this way: Our product is so powerful that we cannot put it in your or anyone else’s hands — not for the moment. While you’re waiting, you can have a lighter version, knowing that we are protecting you against the evil people of the world. In other words, there is nothing to fear. OpenAI is apparently committed to following Google’s recently abandoned maxim: “Don’t be evil.”
Jack Clark, head of policy at OpenAI, explained why this caution matters: “We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”
Here is today’s 3D definition:
Rigorous: For innovative thinkers and experimenters like Elon Musk, not totally casual.
The idea of building a road as one travels across it could only come from one of The Daily Devil’s Dictionary’s favorite hyperreal heroes, Elon Musk. With hyperreal innovators, you never know what to think, even if you can’t help reacting. That’s what they’re good at, making people react and believe they have changed the world or are about to do so.
The mission statement on the OpenAI website reads: “Discovering and enacting the path to safe artificial general intelligence.” So which is it: discovering (i.e., inventing something new) or protecting people from a category of human activity they call “artificial general intelligence”?
As with Albert Einstein’s theory of relativity, there appear to be two distinct varieties: general and special. Artificial general intelligence (AGI) is what threatens humanity because it can potentially be applied to any purpose. Instead of conveniently solving specific problems, it could have a direct impact on the way humans understand the world, or rather think that they understand the world.
Here is one definition of artificial general intelligence: “AGI is a single intelligence or algorithm that can learn multiple tasks and exhibits positive transfer when doing so, sometimes called meta-learning,” leading to “recursive self-improvement.” It’s the machine that improves, not the people who benefit from it. Only the machine will “know” what’s going on and how it got there. But the outcome will seem logical because it is logical, which of course doesn’t mean it’s true. It merely proves consistent with the patterns it finds in the data it receives. With theoretically unlimited data, it will keep learning and applying a logic that may derive either from the initial algorithm, created by humans, or from some extrapolation justified by the data.
Machine learning is the ultimate example of hyperreality. Because it comes from a process rather than the behavior of real things, even if it takes into account their behavior, its results will end up looking as credible as reality — simulating reality without being reality. That explains why a hyperreal human operator like Elon Musk is eager to affirm his fear of the toolbox he himself is investing in and why we should understand that his fear justifies the investment. This positions him and his enterprise as a benefactor of humanity since its aim is to identify and presumably prevent evil uses of the tools.
OpenAI has cast itself in a role similar to that of the military instructor teaching recruits to “know the enemy.” Musk is our guide to the future, putting us in the role of the hitchhikers of the galaxy.
AI has been in the works for some time. The meme grew out of Alan Turing’s groundbreaking and codebreaking work during World War II, when he cracked Nazi Germany’s Enigma code, an achievement indirectly credited with saving European civilization. World civilization was saved (and in a very real sense lost on the same occasion) by the two atomic bombs the US dropped unnecessarily on Japan.
Turing went on to devise the Turing Test. “Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions.” This was clearly special intelligence (sometimes called “weak AI”) rather than general intelligence (“strong AI”), and it depended on the correct notion that language at best delivers an approximation of meaning. Turing would not have claimed that machines can produce meaning, but he did believe they can sufficiently simulate the act of expressing intentions for humans to be fooled by their performance.
This actually sums up what any kind of artificial intelligence can achieve: Fooling humans into believing that the conclusions they reach and the outcomes they produce would have been achieved by humans, were they to benefit from the same amount of data before reaching and implementing their conclusions.
Logically there are two results here. The first is the value of the conclusion based on more refined methods of analysis (the result of “machine learning”). It will necessarily be superior to human judgment in terms of the amount of significant data handled before reaching a conclusion. This means it will be comprehensive without being comprehending. The second result is that, in the spirit of the Turing Test, the measure of success is the degree to which humans will be fooled.
The net result? Being rigorous means getting better at fooling people!
*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.