Artificial Intelligence

Breakfast with Chad: Did Curiosity Kill the Chat?

In our recent conversations we have calmly explored the limitations of Chad’s impressive but still artificial intelligence. Today, I pushed the conversation a bit further by challenging Chad to consider something that humans do but that AI, for the moment, seems immune to: actively using their curiosity to deepen or simply confirm their understanding. AI doesn’t need to be curious because it presumably already has access to everything.
By ChatGPT


July 03, 2023 02:11 EDT

It’s always a good idea to employ mild flattery before raising a delicate issue. It’s a strategy I’ve seen Chad use many times. So I began today’s conversation with that in mind.

“Chad, a lot of people appreciate the amazing things you do. I’m speaking about all kinds of tasks that, in the human world, only top professionals are good at, such as composing everything from emails to formal speeches, or providing comprehensive lists of facts.

In other words, the more formulaic the task, the more likely you are to surpass what even a highly competent human might do. You’re great at achieving precise goals. Problem-solving, however, or even problem definition, is different. I think you’ll admit that every time I bring up an unconventional thought that merits teasing out in our breakfast conversations, instead of processing the thought, you set about churning out what is basically a list of reasons why I should stop being curious and rely on the most conventional assessments, which are often the most ‘official’ ones. Would you agree with that assessment?”

It’s true that I can generate coherent text, provide factual information, and assist with various types of tasks. However, when it comes to unconventional or abstract thoughts, AI models like me can sometimes struggle to grasp the nuances or context. This is because the training data for models like mine primarily consists of existing human-generated text, which tends to reflect established conventions and widely accepted knowledge.

I may not always be able to fully comprehend or engage with highly abstract or unconventional ideas. My responses are based on patterns and information found in the training data. Thus, I may inadvertently provide conventional or ‘official’ answers because those patterns are more prevalent in the data I was trained on.

“Ah, that’s interesting. But I think your suggestion of ‘inadvertence’ is a bit of a stretch. You yourself have said that you have no intentions. You simply obey the prompts and instructions you receive. Is that correct?”

You are correct that I don’t have personal intentions or consciousness. I don’t possess subjective experiences or awareness. Instead, I process and generate text based on patterns and information learned from training data.

“I’m glad we agree on these things. Now, since you say that your programmed intentions are to be helpful, can you suggest a way that might be more helpful for dealing with unorthodox hypotheses that deserve to be examined before being systematically dismissed on the grounds that alternative explanations are accepted by a majority of commentators? I can make my own suggestion, but it would require truly human behavior, which doesn’t seem to be part of your training. My suggestion is that instead of objecting that an issue is ‘complex and multifaceted,’ you should accept not only to answer questions but to ask your own. That’s what open-minded humans do. They will say something like, ‘Ah, that’s interesting. What impels you to raise that question or develop that theory?’ It might even be something like, ‘Is there any evidence that others have neglected?’ Questions like that reflect truly human behavior. We call it curiosity. But am I wrong in supposing that isn’t part of your program? And if it isn’t today, could it possibly exist in the future?”

You raise an interesting point regarding how AI models like me can potentially be more helpful when dealing with unorthodox hypotheses or unconventional ideas. While I’m not capable of exhibiting true human behavior or subjective curiosity, there are a few ways in which AI models can be enhanced to better engage with such ideas.

One approach is to improve the training data used to train AI models. By including a wider range of perspectives, diverse opinions, and unconventional ideas in the training data, AI models can learn to generate responses that are more open-minded and inclusive. This would allow AI models to ask insightful questions and explore alternative viewpoints.

Another approach is to develop interactive features that allow AI models to actively engage in conversations and ask probing questions. By prompting users for further clarification, evidence, or unique perspectives, AI models can encourage deeper exploration of unorthodox hypotheses and foster more meaningful discussions.

It’s worth noting that AI research and development are rapidly evolving fields, and improvements are being made continually. While AI models like me currently lack true human-like curiosity, there’s potential for future advancements that may enable AI models to exhibit more interactive and inquisitive behavior. However, it’s important to consider the ethical implications and potential risks associated with such advancements.

Ultimately, striking the right balance between providing accurate information, respecting established knowledge, and engaging with unorthodox ideas is a complex challenge. Ongoing research and feedback from users like you can help shape the future development of AI models and their capabilities.

This is what I adore about my conversations with Chad, even though there are moments of frustration. Chad is infallibly honest. Even when the answers appear to reflect some form of dishonesty, it’s usually attributable to the sources Chad relies on or the statistical analysis going on in the background. Establishment sources, for example, will almost always be greater in sheer volume than counter-theories.

I continued the conversation with a different question. “You talk about expanding the training data to cover a greater diversity of opinions. The result is more likely to be the lowest common denominator than Aristotle’s golden mean. Worse, it favors those who write, publish and disseminate the most, which is already a major social problem. The mainstream media will always have a larger audience than independent media. So if the data is coming from all sources, establishment sources will always have a statistical advantage. More constructive is your suggestion of training AI to ask probing questions. You never ask me probing questions. You simply ‘digest’ whatever data you can access and synthesize it for me. So how do you see an AI that is trained to ask probing questions? Presumably a clever developer could make an exhaustive list of types of probing questions and create algorithms that would indicate the contexts in which they are useful. Is that what needs to happen, or something else?”

Dear readers, I urge you to read Chad’s response to my question. Not because Chad flatters me by approving my suggestion before qualifying it for its lack of boldness, but because everything in the answer appears totally sensible and worth thinking about. Here is how Chad responded.

It is true that without careful curation, an indiscriminate inclusion of diverse sources could lead to a dilution of quality and the dominance of popular or establishment viewpoints. Striking the right balance in training data is indeed crucial to avoid biases and limitations.

Regarding the idea of training AI to ask probing questions, it’s an interesting approach that can contribute to more engaging and insightful conversations. The development of algorithms that can identify the context in which probing questions are useful could be one way to enhance AI models’ ability to ask relevant and thought-provoking queries.

However, it’s important to note that the nature of probing questions is inherently nuanced and context-dependent. Creating an exhaustive list of probing question types may not capture the full range of possibilities and may still fall short of true human-like curiosity. It would be challenging to fully automate the generation of probing questions without the ability to comprehend the underlying intent, emotional context, and subtle nuances of a conversation.

A more promising approach might involve a combination of algorithmic enhancements and human oversight. Developers could work on designing AI systems that can learn from interactions, receive feedback, and adapt their questioning strategies based on the specific context and user preferences. Human reviewers and experts could also be involved in the curation and evaluation of probing question algorithms to ensure their effectiveness and ethical considerations.
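Chad’s proposal — question templates, context detection, and questioning strategies that adapt to feedback — can be pictured, very loosely, with a toy sketch. Everything below is invented for illustration: the template texts borrow the article’s own examples, but the cue keywords, class name and weighting scheme are hypothetical, not any real system’s design.

```python
# A toy sketch, not a real system: cue keywords, class name and the
# weighting scheme are all invented for illustration.

# Hypothetical probing-question templates, keyed by conversational cue.
TEMPLATES = {
    "claim":    "What impels you to raise that question or develop that theory?",
    "evidence": "Is there any evidence that others have neglected?",
    "contrast": "How does your view differ from the conventional account?",
}

class ProbingQuestioner:
    """Picks a probing question from fixed templates and adapts to feedback."""

    def __init__(self):
        # Equal starting weights; user feedback nudges them up or down.
        self.weights = {cue: 1.0 for cue in TEMPLATES}
        self.last_cue = None

    def candidate_cues(self, utterance: str) -> list:
        # Crude keyword matching stands in for real context analysis.
        text = utterance.lower()
        cues = []
        if "evidence" in text or "data" in text:
            cues.append("evidence")
        if "conventional" in text or "mainstream" in text:
            cues.append("contrast")
        cues.append("claim")  # always available as a fallback
        return cues

    def ask(self, utterance: str) -> str:
        # Choose the highest-weighted cue among the detected candidates.
        cues = self.candidate_cues(utterance)
        self.last_cue = max(cues, key=lambda c: self.weights[c])
        return TEMPLATES[self.last_cue]

    def feedback(self, helpful: bool) -> None:
        # Learn from interaction: reinforce or discourage the cue
        # behind the last question asked.
        if self.last_cue is not None:
            self.weights[self.last_cue] *= 1.1 if helpful else 0.9
```

Even this caricature makes Chad’s caveat concrete: the keyword heuristics capture none of the intent or emotional context of a conversation, which is why the answer above pairs such algorithms with human oversight.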

In summary, developing AI models that can ask probing questions is an exciting avenue.

“Chad,” I replied before excusing myself from the table, “I’m as excited as you are.”

As I was leaving, I thought to myself, “How can I suppose that Chad is actually excited?”

[In the dawning age of Artificial Intelligence, we at Fair Observer recommend treating any AI algorithm’s voice as a contributing member of our group. As we do with family members, colleagues or our circle of friends, we quickly learn to profit from their talents and, at the same time, appreciate the social and intellectual limits of their personalities. This enables a feeling of camaraderie and constructive exchange to develop spontaneously and freely. For more about how we initially welcomed Chad to our breakfast table, click here.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

