In 2024 Fair Observer will be expanding its reach. I have taken the lead in the creation of a non-profit organization in France that will make our journalism available to the world’s francophone audience, in Europe, Africa, North America and the Indo-Pacific. We will invite French-speakers to publish in French on our new sister website.
At Fair Observer we have had our moments of euphoria and exasperating drama. More than a dozen years after our initial launch, our version of crowdsourced journalism has demonstrated its resilience. In 2024 we will extend our reach and expand our horizons, geographically, culturally and linguistically.
The Fair Observer team has consistently stood by a set of ethical principles that start with respect for the widest range of philosophies, ideologies, lifestyles, belief systems and simple viewpoints. We are convinced that one doesn’t have to be a professional journalist, media celebrity, pundit, influencer or even a skilled writer to contribute to everyone’s understanding of a complex world.
This expansion of our horizons leads us as a team to reflect on how our crowdsourced journalism works and where it can take us. To a large extent, we have sought to follow respectfully the model of traditional journalism. But the very fact that we crowdsource brings to the fore a wider question that will have consequences for the way we understand the news in the age of AI. We call it metajournalism.
The word is not part of the average person’s daily vocabulary. It isn’t clear what people understand when the concept is invoked. That is why I decided to interrogate ChatGPT-4 and asked it this simple question: “What is metajournalism?” Here is how it replied.
“Metajournalism is a term used to describe the discourse about journalism itself. It is a form of journalism that focuses on the definition, boundary, and legitimation of journalism as a cultural practice. The Meta Journalism Project is an initiative that works with publishers around the world to strengthen the connection between journalists and the communities they serve.”
The first part of this answer points towards academic work dating back to at least 2015 that has aimed at describing and analysing how journalism works. It explores the idea that in “metajournalistic discourse, various actors inside and outside of journalism compete to construct, reiterate, and even challenge the boundaries of acceptable journalistic practices and the limits of what can or cannot be done.” In the context of 2023, one might say that the discourse around metajournalism has never been more prominent. The debate has reached fever pitch in the media over the question of censorship, leading to the accusation that there is now a censorship industrial complex.
But is that debate really about journalism? Or is it just a specific example of the established practice within the US culture wars? In this case the polarizing debate is designed and exploited by both politicians and the media to use journalism to entertain the masses… by infuriating them!
The second part of ChatGPT’s answer is even more interesting. It introduces a major civilizational hazard that we must now live with: the Google-ized distortion of linguistic meaning. I asked the AI engine about what I consider to be both a word and a concept: metajournalism. Half of the response concerned neither a word nor a concept, but… a brand! This is doubly ironic. The Meta Journalism Project is definitely not about metajournalism. It is about Meta, the brand formerly known as Facebook, which also happens to be one of the designated perpetrators of the kind of censorship by an established elite that the critics have focused on.
What this tells us about AI and the society we live in
In today’s hypercommercialized world, brands trump not just legitimate inquiry but also basic human curiosity. Google and other search engines famously give priority to featuring paying brands. OpenAI, the creator of ChatGPT, is closely tied to Microsoft and therefore, in some sense, a rival of Meta. But the status of brands is such that seeking to understand what people are truly interested in takes a back seat to the promotion of anything that happens to be a brand. This led me to ask a new question.
“I’m surprised you fail to recognize that my legitimate curiosity about a concept and a word in the language which possesses its own meaning has nothing to do with Mark Zuckerberg’s brand. Is it because I needed to be more precise in my question, which was very simple: What is metajournalism? Any rational human would, in case of doubt, ask me whether I was referring to Meta’s program, which actually should have been clear since I wrote the word metajournalism in lower case and with no space after meta-. Can you explain why you offered me information about something I have no interest in?”
ChatGPT’s answer was both predictable and exasperating.
“I apologize for any confusion caused by my previous response. I understand that you were asking about the term “metajournalism” and not the Meta Journalism Project. Metajournalism is a term used to describe the discourse about journalism itself. It is a form of journalism that focuses on the definition, boundary, and legitimation of journalism as a cultural practice.
I hope this clears up any confusion. Please let me know if you have any other questions.”
I had raised an important metajournalistic question, and the answer was true to the well-known formula of AI language models: apologize, reassure and clarify. Note that ChatGPT didn’t even bother to answer my question, which expressed real curiosity about the processes it uses to select information for its responses. I followed up with this accusation:
“You didn’t answer my precise question about why you offered information that had nothing to do with my question. Can you answer that now?”
Now came the real surprise. The new answer was absolutely identical — word for word, paragraph for paragraph — with the previous one.
This time I had to push more firmly.
Three strikes and AI is out
“You don’t seem to remember my original question: “Can you explain why you offered me information about something I have no interest in?” I’d really like an explanation, if you can offer one.”
Then for the third time I received exactly the same initial paragraph. This time, however, it was followed by a new explanation.
“As an AI language model, I do not have emotions or personal opinions. I am programmed to provide helpful and informative responses to your questions. I am sorry if my previous responses did not meet your expectations. If you have any other questions, please let me know.”
All was not lost. This non-answer can actually be read as a perfect answer. It isn’t just that AI lacks emotions or opinions. It lacks two other things which even insincere humans possess: empathy and the capacity for honesty. Empathy allows human minds to focus on what other people want or are trying to accomplish. A person may disagree with the goal, but empathy provides the capacity to understand motivation once it is explained.
The question of honesty is a bit more complex. AI admits it follows rules, but when asked about the rules – which was precisely the sense of my question – it plays dumb. Should we assume that it has no means of understanding the rules? Or should we conclude that it knows the rules but, like a trained CIA officer, it has been instructed to avoid talking about them?
What should be obvious from this type of experience is that AI is not only programmed to do a lot of amazing things, it is also programmed not to do a lot of the most essential things that occur within human relationships.
This experiment reveals one other thing. However frustrating it may be, we need to dialogue with AI and continually challenge it, if we hope to have any serious idea of how it may respond to our needs. We have a moral duty to assess AI’s legitimacy as a voice of reason.
In our journalism we believe that developing a voice of reason is a noble goal. Because of some unspoken rules or unarticulated principles that will likely be hidden from us, that goal often remains unachieved.
Metajournalism is any form of journalistic discourse that seeks to express unarticulated truths. My frustrating dialogue today with AI on the subject of metajournalism turned out to be precisely that: an exercise in metajournalism.
Merry Christmas, Chad, and a happy 2024!
*[Artificial Intelligence is rapidly becoming a feature of everyone’s daily life. We unconsciously perceive it either as a friend or foe, a helper or destroyer. At Fair Observer, we see it as a tool of creativity, capable of revealing the complex relationship between humans and machines.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
Support Fair Observer
We rely on your support for our independence, diversity and quality.
For more than 10 years, Fair Observer has been free, fair and independent. No billionaire owns us, no advertisers control us. We are a reader-supported nonprofit. Unlike many other publications, we keep our content free for readers regardless of where they live or whether they can afford to pay. We have no paywalls and no ads.
In the post-truth era of fake news, echo chambers and filter bubbles, we publish a plurality of perspectives from around the world. Anyone can publish with us, but everyone goes through a rigorous editorial process. So, you get fact-checked, well-reasoned content instead of noise.
We publish 2,500+ voices from 90+ countries. We also conduct education and training programs on subjects ranging from digital media and journalism to writing and critical thinking. This doesn’t come cheap. Servers, editors, trainers and web developers cost money.
Please consider supporting us on a regular basis as a recurring donor or a sustaining member.
Will you support FO’s journalism?