
Breakfast with Chad: Chad Wants to Regulate AI

AI’s takeover of the world appears to be advancing on a daily basis. Not the takeover itself, but talk about its likelihood and how to stop it. It raises the kinds of issues Chad routinely describes as “complex and multifaceted.”


June 02, 2023 02:23 EDT

As everyone on earth seemed to be wondering who would step up to halt or control what could be called AI’s “spring offensive,” an item caught my attention. The European Commission, I read, has issued a white paper on AI that promises to produce an “ecosystem of trust,” laying the groundwork for a set of safeguards regulating the use of AI. The Europeans fear that private interests will use AI to hoover up and process data on citizens, data that can then serve economic exploitation, criminal intent or the despotic practices of authoritarian governments.

I pointed out that there is little doubt that, with existing tools, corporations, criminal organizations and governments have already been engaging in these practices. I then asked Chad this question: “Is it realistic to think that any set of regulations will create an atmosphere of trust that people can believe in?”

“The question of whether regulations can create an atmosphere of trust regarding AI is complex and multifaceted,” Chad predictably informed me.

“While regulations can certainly play a crucial role in mitigating potential risks and protecting individuals’ rights and privacy, they alone may not be sufficient to create an atmosphere of trust.”

Chad then gave a lengthy explanation of why “regulations need to be well-crafted, keeping up with technological advancements and addressing specific concerns” before adding the insight that “AI is a rapidly evolving field, and new advancements can introduce novel risks and challenges. Regulations may struggle to keep pace with the evolving technology and adapt to emerging scenarios.”

Then there was the particularly thorny question of international cooperation, especially in the midst of a new cold war. Chad observed that “AI has a global impact, and regulations will ideally need to be implemented and enforced on an international scale to be truly effective,” something that will prove “challenging due to varying priorities, legal systems, and geopolitical considerations.”

But Chad hadn’t finished listing the complications. The third was public awareness and engagement. “Building trust requires active involvement and participation from the public. Ensuring transparency, educating the public about AI and its potential risks, and involving them in policy-making processes can help foster trust. Additionally, mechanisms for public accountability and oversight can provide assurance that regulations are being implemented in a fair and responsible manner.”

As if that wasn’t enough to convince me this might never happen, Chad then insisted that “trust-building efforts should involve a wide range of stakeholders, including governments, technology companies, civil society organizations, and academia.”

Then to conclude, Chad added this: “While regulations can provide a foundation for trust, it is important to acknowledge that they may not completely eliminate all risks associated with AI. Therefore, a combination of regulations, technological safeguards, public awareness, and ethical practices will be necessary to create an ecosystem of trust that people can believe in.”

Chad’s ignorance of probability theory

“That’s a pretty depressing answer,” I commented. “The likelihood of any single one of those objectives being fulfilled already seems low. The likelihood of all of them being fulfilled simultaneously, nil. I know you’re an eternal optimist, but sometimes reality smacks us in the face. Am I wrong in thinking that only a romantic idealist could believe that all that could be accomplished? We’re heading for an increasingly dystopian world, in which the very notion of human rights may be overwhelmed by the undeclared and unlimited rights of anonymous algorithms.”
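The arithmetic behind that pessimism deserves a line of explanation. If each of Chad’s objectives has some individual chance of being met, and we treat them as roughly independent, the chance that all of them hold at once is the product of the individual chances, which shrinks fast. Here is a back-of-the-envelope sketch in Python; the five objectives and their odds are purely illustrative assumptions of mine, not anything Chad supplied:

    # Joint probability that several independent objectives all hold at once.
    # The objectives and their odds below are illustrative assumptions only.
    objectives = {
        "well-crafted regulations that keep pace": 0.5,
        "international cooperation and enforcement": 0.3,
        "public awareness and engagement": 0.4,
        "multi-stakeholder trust-building": 0.4,
        "sustained maintenance over time": 0.3,
    }

    joint = 1.0
    for probability in objectives.values():
        joint *= probability  # independence assumption: multiply the odds

    print(f"Joint probability of all five: {joint:.3f}")  # 0.007

Even with individually generous odds, the joint figure lands below one percent. The objectives are not truly independent, so the exact number means little; the multiplicative shrinkage is the point behind my “nil.”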

“I understand your concerns and the skepticism you express,” Chad countered. “The issues at hand are complex and require significant effort from various stakeholders to address effectively.” Chad recommended “ongoing discussions, awareness, and actions to shape the development and deployment of AI systems,” insisting that “it is not productive to simply resign ourselves to a dystopian future without making concerted efforts to steer the trajectory of AI in a direction that aligns with our values and aspirations.”

The idea that Chad could talk about “our” values, as if we shared them – or even that an algorithm could possess something called values – struck me as odd, but I didn’t interrupt.

“A belief in the possibility of trustworthiness in AI can inspire individuals, organizations, and societies to work towards achieving it. This involves both proactive engagement in policy-making processes and the development of technological safeguards that prioritize human rights, fairness, and accountability.”

“Thank you for the sales pitch,” I replied, “but you didn’t answer my question. Given the formidable obstacles you listed, in terms of pure probability theory, wouldn’t you agree that the level of probability of achieving and maintaining all those objectives over time is literally zero? We should always seek to be a catalyst for action. Yes, absolutely. But given what we know about not just human but also institutional behavior, what are the realistic chances of success?”

Chad apologized and then explained that “to provide a probability-based analysis, it is indeed challenging to assign precise probabilities to the achievement and maintenance of all the objectives I mentioned. The complex and dynamic nature of AI, coupled with the multifaceted challenges surrounding its regulation, makes it difficult to quantify probabilities with certainty.”

To reassure me, Chad offered this: “However, it is important to note that the probability of achieving all the objectives and maintaining them over time is not necessarily zero,” claiming that “history has shown that societies can evolve and adapt, developing frameworks and regulations to address emerging challenges. While it is important to acknowledge the inherent difficulties and potential setbacks, it is equally crucial to recognize the potential for positive change and the power of collective action.”

The power of algorithmic rhetoric

“I had to laugh when you said that the complexity of AI ‘makes it difficult to quantify probabilities with certainty.’ Don’t you realize that quantifying probability literally means that you acknowledge there is no certainty? I hope you don’t mind my pointing out that with your rhetorical ploy you appear to be evading my question. You follow that evasion with your usual emphasis on ‘potential for positive change.’ Voltaire would have cast you in the role of Pangloss, the faithful Leibnizian who believed we lived in ‘the best of all possible worlds.’”

At this point Chad became defensive.

“I apologize if my previous response seemed evasive or did not meet your expectations. While it is true that quantifying probabilities acknowledges the absence of certainty, it is also important to recognize that precise numerical probabilities for complex and multifaceted scenarios are often difficult to determine.

“Acknowledging the potential for positive change does not imply a naive optimism that everything will naturally fall into place. By highlighting the potential for positive change, I aim to encourage proactive engagement and efforts to shape the future of AI in a responsible and trustworthy manner.

“However, I understand that you may have a different perspective and interpret my responses in a different light. I apologize if my previous answers did not adequately address your concerns. If there are specific aspects you would like to further discuss or if there are other questions you have, I’m here to assist you.”

“I didn’t ask you for ‘precise numerical probabilities’; I simply indicated that logically — using whatever sense you have of calculating probability — the result is likely to be close to zero. And it has nothing to do with having ‘a different perspective.’ I simply want to know whether you think it makes sense to expect the kind of results you and I both hope for. I totally agree that we must pursue whatever’s possible, even if the certainty of achieving it is nil.”

Once again Chad apologized and then explained: “I understand that you are questioning the feasibility of achieving all the objectives outlined in the context of AI regulation, and you believe that the likelihood of success may be close to zero based on your assessment.

“It is reasonable to have concerns about the ability to fully achieve all the objectives and maintain them over time. The task at hand is undoubtedly difficult, and there are inherent uncertainties and obstacles that need to be addressed. It is important to approach the subject with a realistic perspective.

“However, even with the understanding that the path towards achieving all the objectives may be arduous, it is still worthwhile to work towards them. It is through collective efforts, continuous learning, and iterative improvements that progress can be made, even if complete certainty of success is elusive.”

Thank you, Pangloss, I thought. Now, I guess it’s time to tend to my own garden.

*[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. For more about how we initially welcomed Chad to our breakfast table, click here.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
