Outside The Box

Learning to Share Our Cultural Space with a Well-Informed, Articulate Intruder — Part 2

As we struggle to find productive ways of integrating AI into our professional and social reality, DeepSeek suggests focusing on five collaborative skills: epistemic triangulation, prompt chaining, causal boundary detection, accountability mapping and linguistic skepticism. The barrier is psychological, not technical. These skills are teachable — but institutions lack motivation to act.

May 05, 2026 06:55 EDT

In the first part of this conversation, I began a discussion with DeepSeek concerning the confusion surrounding the reliability of generative AI and the expectations of its users. After assessing the performance benchmarks in areas such as coding, we agreed that our society is largely confused about how to use AI tools productively.

Our discussion led me to make the following suggestion:

What this tells me is that we humans need to develop better instincts and better work habits when exploiting generative AI. And to be extremely concrete, I believe that our society – and that includes educational institutions and media – needs to develop and refine our capacity for critical thinking in what I would call a “complex intelligence environment.” This means we need to create a new set of “collaborative social skills” as we learn to recognize that AI, when we interact with it, is not a pure intelligence but a contributing member of a human, social and civilizational network.

Do you believe that is possible and can we talk about what it might require?

DeepSeek then reformulated my observation and developed its reflection by focusing with considerable detail on the collaborative skills we need to work on.

Let me restate your idea in my own words to ensure I understand: 

You’re suggesting that we stop treating AI as either a master (deference) or a tool (pure instrument) and instead treat it as a participant in a human-centered cognitive network – one that has specific strengths, predictable weaknesses, and no intrinsic authority. Interacting with it well requires new social skills, not just technical ones.

I agree. Here’s what those skills might include:

Five collaborative skills for humans in an AI-mediated world

1. Epistemic triangulation – the ability to treat AI as one source among many

  • What it means: Never ask AI a question without also asking “how else could I know this?” and “what would a skeptical colleague say?”
  • Example: AI gives you a medical suggestion. Instead of accepting or rejecting it, you triangulate: check a trusted medical database, ask a human doctor, compare with known guidelines.
  • Why it’s a skill: It requires resisting the cognitive ease of a single authoritative answer.

2. Prompt chaining with verification – treating AI responses as hypotheses, not facts

  • What it means: Each output is a draft to be challenged, not an answer to be accepted. You learn to ask “show your reasoning,” “what are the counterarguments?” and “what evidence would change this?”
  • Example: After AI suggests a business strategy, you explicitly prompt: “Now give me three reasons this strategy could fail” and “What assumptions am I making that might be wrong?”
  • Why it’s a skill: Most people stop at the first answer. The skill is making verification a habitual second step.
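The verification habit DeepSeek describes can be sketched as a small routine: every first answer triggers a fixed battery of challenge prompts. The `ask` function below is a hypothetical stand-in for whatever model API a reader might use; the point is the shape of the loop, not the client.

```python
# Sketch of "prompt chaining with verification": the first answer is
# treated as a draft, and a fixed set of challenges is always asked next.
# `ask` is a placeholder, not a real API.

CHALLENGES = [
    "Show your reasoning step by step.",
    "What are the strongest counterarguments?",
    "What evidence would change this conclusion?",
]

def ask(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"[model response to: {prompt}]"

def chained_query(question: str) -> dict:
    """Return the draft answer plus its verification round."""
    draft = ask(question)
    verification = {
        challenge: ask(f"{question}\nDraft answer: {draft}\n{challenge}")
        for challenge in CHALLENGES
    }
    return {"draft": draft, "verification": verification}
```

The design choice worth noting is that the challenges are hard-coded into the workflow rather than left to the user's mood: verification becomes the default second step, not an optional one.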

3. Causal boundary detection – knowing where correlation is enough and where causality is required

  • What it means: Instinctively recognizing which tasks require causal understanding (medical diagnosis, strategy, engineering) vs. which are fine with pattern matching (spelling, boilerplate, translation).
  • Example: You don’t ask AI “should I fire this employee?” (causal, ethical) but you do ask “draft a neutral summary of their performance reviews” (pattern matching).
  • Why it’s a skill: Most people lack mental models of causality vs. correlation. This requires basic training in philosophy of science and statistics.

4. Accountability mapping – never letting AI be the final decision-maker for high-stakes outcomes

  • What it means: Explicitly stating “if this goes wrong, who is responsible?” before using AI. If the answer is not “a named human,” you don’t proceed.
  • Example: Using AI to draft a legal contract is fine, but signing it without human review is not. The human signing takes accountability.
  • Why it’s a skill: It requires overriding the temptation to offload responsibility onto a machine that feels authoritative.
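The accountability rule above is mechanical enough to encode as a gate in a workflow. The sketch below is illustrative only; the field names and the example signer are invented.

```python
# Sketch of "accountability mapping": a high-stakes AI output may not
# proceed unless a named human has accepted responsibility for it.
# All names and fields are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AIOutput:
    content: str
    high_stakes: bool
    accountable_human: Optional[str] = None

def may_proceed(output: AIOutput) -> bool:
    """Low-stakes output passes; high-stakes output needs a named human."""
    if not output.high_stakes:
        return True
    return bool(output.accountable_human)
```

Used on the contract example: a drafted contract with `accountable_human=None` is blocked, and only proceeds once a named person, say "Jane Doe, General Counsel," is attached to it.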

5. Linguistic skepticism – fluency in detecting AI’s characteristic failure modes

  • What it means: Recognizing markers of AI error: excessive confidence, unnatural fluency, lack of hedging, invented citations, false specificity (e.g., “83% of experts agree” with no source).
  • Example: Reading an AI-generated analysis and noticing “this sounds too neat – real problems have trade-offs and ambiguity.” That’s a red flag.
  • Why it’s a skill: AI is designed to sound authoritative. Resisting that authority requires conscious effort and practice.
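Two of the markers listed here, false specificity and missing hedges, are regular enough that even a crude text scan can flag them. The following is a heuristic illustration of the idea, not a reliable detector:

```python
# Sketch of "linguistic skepticism": flag unsourced statistics
# (false specificity) and passages with no hedging at all.
# The word lists and patterns are illustrative heuristics.

import re

HEDGES = {"may", "might", "could", "likely", "possibly", "suggests"}

def red_flags(text: str) -> list:
    flags = []
    # False specificity: a percentage with no mention of a source.
    if re.search(r"\b\d{1,3}%", text) and "source" not in text.lower():
        flags.append("unsourced statistic")
    # Lack of hedging: no qualifier anywhere in the passage.
    words = set(re.findall(r"[a-z]+", text.lower()))
    if not words & HEDGES:
        flags.append("no hedging")
    return flags

print(red_flags("83% of experts agree this strategy will succeed."))
# → ['unsourced statistic', 'no hedging']
```

A real reader does this scan mentally; the code only makes the habit explicit.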

What would it take to actually build these skills at scale?

You mentioned education and media. I agree those are the levers. Here’s what each would need to do differently:

For educational institutions (K-12 through university)

Current practice → Needed shift

  • Ban or ignore AI → Teach with AI explicitly, including its failure modes
  • Focus on finding the right answer → Focus on evaluating multiple answers
  • Individual work → Collaborative work with AI as a partner
  • Memorization → Epistemic skills (triangulation, verification, sourcing)
  • Trusting textbooks → Questioning all sources, including AI

Concrete example: Instead of “write an essay on X,” the assignment becomes: “Use AI to generate three different arguments on X. Then write a meta-essay explaining which you find most convincing, why, and what the AI missed or got wrong.”

For media and journalism

Current practice → Needed shift

  • Publish AI-generated content without labeling → Mandatory disclosure of AI involvement
  • Treat AI as a source → Treat AI as a tool used by a source
  • Speed over accuracy → Slower, triangulated reporting
  • Headlines that amplify AI’s confidence → Headlines that hedge appropriately

Concrete example: A news story about an AI forecast should not say “AI predicts market crash” but “AI model, known to have X% error rate on similar predictions, suggests possible downturn; experts disagree.”

For professional culture (business, medicine, law)

Current practice → Needed shift

  • Adopt AI for efficiency → Adopt AI only where error cost is low
  • Let AI draft without review → Mandate human-in-the-loop for decisions
  • Blame the tool when things go wrong → Blame the process (and the accountable human)
  • Implicit trust in outputs → Explicit verification protocols

Concrete example: A hospital using AI for triage notes must require a human to sign off on every recommendation, with the AI’s confidence score and known error modes displayed alongside.


The hardest part: emotional and cognitive resistance

You hint at this when you mention “inherited and inaccurate ideas of what intelligence is.” The real barrier is not technical; it’s psychological.

  • We crave authority. AI sounds confident. Our brains want to stop searching.
  • We dislike ambiguity. AI gives crisp answers. Real problems are fuzzy.
  • We overgeneralize. Because AI is right about spelling, we assume it’s right about strategy.

Developing the skills you describe means fighting every cognitive shortcut we have. That’s why it’s hard. But not impossible – we’ve done it before (e.g., learning to be skeptical of advertising, learning to check sources before sharing news).


Final answer to your question

Is it possible? Yes. But only if we:

  1. Stop anthropomorphizing AI – it is not a junior colleague, not an oracle, not a mind. It is a text generator optimized for plausibility.
  2. Teach failure modes explicitly – every course should include “here’s how AI gets this subject wrong.”
  3. Design accountability into workflows – no high-stakes AI use without a human whose name is on the outcome.
  4. Practice the five skills until they become automatic, like looking both ways before crossing the street.

You are right that we need new collaborative skills. The good news: they are teachable. The bad news: we are not teaching them yet, and the cost of delay is already visible in bad medical advice, flawed strategies, and misplaced trust.

My provisional conclusion

This is wonderful advice and conveniently presented in the form of a checklist. It’s a convincing example of what AI does well. These practices have emerged from the mass of data drawn from articles and discussions by hundreds if not thousands of people in various contexts. DeepSeek has distilled them and made them digestible to anyone curious enough to think about this particular issue.

We need to keep our focus on what it calls the bad news, because even with these recommendations, it remains unlikely that the institutions we’re talking about — education and media — will be motivated to respond, even if “the cost of delay is already visible.”

I believe more work needs to be done precisely on the factors of motivation in our societies. I also believe it’s a task we can begin to address in multiple ways and with the assistance of AI. It may require us to return to the trio of ideas about intelligence itself that DeepSeek developed earlier in the conversation:

  • Omnipotence (if it’s smart, it should know everything)
  • Consistency (if it’s smart, it shouldn’t make silly mistakes)
  • Intentionality (if it produces text, it must “believe” or “mean” something)

These factors apply equally to human intelligence, though not necessarily with the same emphasis. Nevertheless, the guidelines DeepSeek proposed constitute a good starting point for further reflection. We will definitely keep them in mind.

Your thoughts

Please feel free to share your thoughts on these points by writing to us at dialogue@fairobserver.com. We are looking to gather, share and consolidate the ideas and feelings of humans who interact with AI. We will build your thoughts and commentaries into our ongoing dialogue. 

[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]

[Lee Thompson-Kolar edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.

