The pace of change in the artificial intelligence (AI) and machine learning arena is already breathtaking, and it promises to continue to upend conventional wisdom and surpass some of our wildest expectations as it proceeds on what appears at times to be an unalterable and pre-ordained course. Along the way, much of what we now consider to be “normal” or “acceptable” will change. Some technology companies are already envisioning what our collective AI future will look like and just how far the boundaries of normality and acceptability can be stretched.
In 2016, for example, Google produced a video that provided a stunningly ambitious and unsettling look at how some people within the company envision using the information it collects in the future. Shared internally within Google at the time, the video imagines a future of total data collection, where Google subtly nudges users into alignment with the company’s own objectives, custom-prints personalized devices to collect more data, and even guides the behavior of entire populations to help solve global challenges such as poverty and disease.
Entitled “The Selfish Ledger,” the nine-minute film maintained that the way we use our smartphones creates a constantly evolving representation of who we are, which it terms a “ledger,” positing that these data profiles can be built up, used to modify behaviors and transferred from one user to another. This ledger of our device use — the data on our actions, decisions, preferences, movements and relationships — is something that can be passed on to other users, much as genetic information is passed on through the generations.
Building on the ledger notion, the video presents a conceptual “Resolutions by Google” system in which Google prompts users to select a life goal and then guides them toward it in every interaction they have with their phone. The ledger’s requirement for ever more data and the presumption that billions of individuals would be just fine with a Google-governed world are unnerving. The video envisions a future in which goal-driven automated ledgers become widely accepted. It is the ledger, rather than an end user, that makes decisions about what might be good for the user, seeking to fill gaps in its knowledge in a “Black Mirror”-type dystopian reality.
Like other firms leading the pack in AI, Google is increasingly inquisitive about its users, assertive in how it wishes to interact with them, and aggressive in pushing the limits of what is considered an acceptable level of intrusion into their lives. Much of this may be welcomed, based on how we have already been “programmed” to accept the company’s unsolicited overtures and now consider them to be perfectly normal and acceptable.
As the ethical deployment of emerging technologies — and AI specifically — continues to be a subject of public discourse, Google appears to be unfazed by the potential ethical implications of its current products, practices and vision of the future, or by whether it is overstepping its bounds in proceeding apace to implement that vision. Google wants to understand and control the future before it occurs by, in essence, creating it and using AI and machine learning to help interpret and manage it. That is at once a welcome and a chilling proposition, but the truth is that our collective technological future is unfolding at lightning speed, and no single government or company can control it.
So, is Google to be commended for attempting to contain and craft the future, or should it be feared and resisted at every turn? Is there a middle ground? Will the fact that most consumers do not know the difference, or necessarily care, enable organizations like Google to do basically whatever they want? Is our great leap into the AI unknown meant to be purely exhilarating, or should we be intuitively cautious and approach it with care? The truth is that there is no single answer to these questions, nor is any answer necessarily right or wrong.
Artificial Intelligence Is Here
Artificial intelligence is already a fact of life, and its potential will grow exponentially, along with its applicability and impact. Just as manned flight could only have occurred once combustion engines technically enabled it, the use of graphics cards, the creation of custom hardware, the rise of cloud computing and the growth in computing capabilities — all occurring at the same time — have made AI a force to be reckoned with. Being able to rent cloud space or outsource computational resources means relative costs have come down to earth and will continue to do so. The widespread use of open-source, internet-based tools and the explosive growth in data generation have also made a big difference.
So much data is now generated globally each day that only gigantic infusions of data are likely to make a difference in the growth of artificial intelligence going forward. That implies that only the largest, most technically sophisticated firms with the capability to consume and process such volumes of data will benefit from it in a meaningful way in the future.
Attempting to govern AI will not be an easy or pretty process, for there are overlapping frames of reference and many of the sectors in which AI will have the most impact are already heavily regulated. It will take a long time to work through the various questions that are being raised. Many are straightforward questions about technology, but many others are about what kind of societies we want to live in and what type of values we wish to adopt in the future.
If AI forces us to look ourselves in the mirror and tackle such questions with vigor, transparency and honesty, then its rise will be doing us a great favor. History would suggest, however, that the things that should really matter will either get lost in translation or be left by the side of the road in the process.
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.