What we see is more than the physical environment we live in; what machines can “see” is only that.
Everyone knows robots can do things in the real world, but can they see and understand not only the things they do, but the world itself? From CB Insights we learn that, “With machine learning tools more broadly accessible, startups are developing computer vision to support a new wave of robotics.”
Here is today’s 3D definition:
The ability to observe the environment with a view to guiding the behavior of an animate (human) or inanimate (machine) being to achieve its programmed objectives
The thinkers and designers are absolutely right to place vision as the key to improved performance from future generations of robots and other devices powered by artificial intelligence (AI). Assessing the current state of progress, they have observed that: “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Technology experts routinely reassure us that we’re just at the beginning and that future generations will not only achieve the same skill levels as mature human beings, but easily surpass them. This yet-to-be-realized event in human history they call the “singularity.” The experts have been mobilizing their own “vision” to put a date on when that will happen as they speculate about the probable consequences for humanity, the most notable of which will be the replacement of “all human jobs by 2136.”
However far back one goes in history, from the first mechanical contraptions to tomorrow’s AI-driven robots, the role people have assigned to machines has been defined by the question “how to” and not “why.” It starts with an identified goal and may then expand, even exponentially, to address multiple goals. The “how to” approach nevertheless means focusing on mechanics alone: the vision that feeds the machine, constructing its supposed understanding of the environment, and the program of optimized actions that will logically follow, delivering through some form of mechanical agency the services we expect from it.
As Elon Musk tellingly observes, the experts are making their predictions on the basis of “extrapolations” rather than functional analysis. They neglect the fuzzy world of “understanding,” what we can only call “vision,” a very different, non-mechanical concept. To illustrate the difference, starting with the definition proposed above, we need to examine critically the notions conveyed by these words: observe, environment, behavior, programmed and objectives.
Humans — but apparently not technology experts, who prefer thinking like machines — possess a dynamic, interacting notion of past, present, future and personal involvement. Unlike machines, they don’t simply detect the elements of the environment, but also judge them in terms of past experience, current (and sometimes conflicting) goals and future outcomes, both predictable and desired.
As we have seen, programming mere physical “vision” for machines is a challenge we are currently far from meeting. Do the techno-pundits really think algorithms can duplicate the myriad choices — including the impact of cultural factors — that enter into the perception of experience? They don’t seem to have thought deeply about that.
But there is one other factor of vision they never seem to think about: proprioception. Why? Because though it guides us through every moment of our lives, we never think about it. You could say it thinks for itself, but not in the way even the most sophisticated machine can think.
Here is the medical definition of proprioception: “The ability to sense stimuli arising within the body regarding position, motion, and equilibrium.” This enables everything we do. This is the foundation of our vision in every sense of the word, starting with our relationship with the physical environment but including the projection of ourselves — our being, our personalities — into the future, including as well, the prospect of death.
My conclusion is simple: I will start taking seriously the experts’ predictions about machine intelligence and its impact on society as soon as I hear how they intend to tackle the question of the proprioception of machines.
*[In the age of Oscar Wilde and Mark Twain, another American wit, the journalist Ambrose Bierce, produced a series of satirical definitions of commonly used terms, throwing light on their hidden meanings in real discourse. Bierce eventually collected and published them as a book, The Devil’s Dictionary, in 1911. We have shamelessly appropriated his title in the interest of continuing his wholesome pedagogical effort to enlighten generations of readers of the news.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.