Peter Isackson, Fair Observer’s Chief Strategy Officer, speaks with Yves Zieba, CEO of Syntezia Sàrl. They discuss how artificial intelligence is reshaping journalism, media ethics and the very business models that underpin the industry. Isackson brings his own curiosity to the discussion, noting he is currently writing about AI and media. Zieba responds with a wide-ranging assessment of opportunities, risks and ethical imperatives.
AI in media
Zieba begins by stressing that AI is not a marginal development but a disruptive transformation in media. He argues that ignoring it is perilous: Companies that choose not to adopt AI risk rapid collapse in audience reach, advertising revenue and subscriptions. By contrast, those that embrace AI have a chance to reimagine their models.
Hyper-personalization stands out as one of the most promising innovations. Zieba points to Netflix as an example of how tailoring content to individual preferences can revolutionize an entire industry. Applied to journalism, AI-driven personalization could help news organizations engage audiences more deeply, cut through the clutter of “infobesity” and foster stronger ties between readers and their preferred journalists.
Journalists using AI
AI, Zieba explains, can take over repetitive “robot tasks” such as data formatting or drafting routine reports. This automation frees journalists to focus on investigations, analysis and high-value reporting. With AI accelerating production, a single journalist might publish 200 articles per month compared to 20 in the past.
Isackson raises concerns that tailoring content may still resemble a one-way monologue rather than a dialogue. Zieba counters that AI enables new forms of dialogue by creating intermediary roles in public relations and public affairs. Moreover, smartphones and citizen journalism provide “eyes and ears everywhere,” extending journalists’ capacity while reinforcing the importance of professional standards.
Can AI be transparent?
One of the major risks Zieba highlights is “hallucination” — AI generating plausible but false information. He calls for the rise of “hallucination checkers,” akin to fact-checkers, as editorial teams now carry the added burden of ensuring accuracy in AI-assisted work.
Transparency, Zieba insists, is essential. Just as journalists disclose their sources, they must disclose the AI tools used in producing content. Trust, he argues, ultimately rests with the person who signs an article and the institution that guarantees editorial oversight. Smartphones, crowdsourcing and citizen reporting may broaden information flows, but Zieba still places greater trust in professionals who can uphold standards.
Isackson adds a cultural dimension, suggesting hallucination is not unique to machines — human culture itself thrives on interpretation and fabrication. For him, the deeper issue is whether AI can contribute constructively to social dialogue rather than merely providing streams of information.
AI is changing media
Zieba emphasizes that AI is changing both the volume and the nature of journalism. Some reporters embrace AI for its creative advantages, while others fear job losses. Local sports writers or niche reporters are particularly vulnerable. The tension, he says, is between risks and opportunities: Ignoring AI is dangerous, but uncritical adoption also brings hazards.
For Zieba, the balance lies in recognizing AI as a tool, not a value proposition. It accelerates reporting and analysis, but humans must continue to provide judgment, insight and ethics. He believes that the media has a broader social role as a public good, contributing to civic education, citizen journalism and collective trust.
Journalists forced to use AI?
Management and regulation often lag behind the reality of newsroom practices. Zieba notes that many editors officially sanction only a limited set of AI tools, even while reporters secretly use more powerful ones. This creates tension: executives and regulators are “three steps behind,” unsure how to handle ownership, responsibility or ethical standards.
The result is a creeping sense that journalists may be compelled to rely on AI tools — both because competitors are doing so and because management will eventually demand it. For Zieba, the real danger lies in not using AI at all, or in waiting too long to adopt it, a delay he calls potentially “lethal” for any media organization.
The Chief AI Officer in media
This organizational challenge has given rise to a new position: the Chief AI Officer (CAIO). Zieba describes the CAIO as a board-level role, reflecting the fact that AI’s disruptive influence cuts across finance, legal, human resources and editorial functions. Unlike a Chief Technology Officer, who may see AI as just another tool, the CAIO must take a strategic view of AI’s potential to reshape the entire company.
This is not simply a technical job. A CAIO must provide leadership, coordination and vision, ensuring that AI strategy aligns with broader organizational goals. Without such oversight, Zieba argues, companies will stumble in the face of rapid technological change.
Navigating flux
Asked about the future, Zieba is candid: “There are more unknowns than knowns.” He cites the rapid obsolescence of once-hyped practices like prompt engineering as proof that AI evolves at breathtaking speed. Rather than pretending to predict, he favors a pragmatic framework he calls “flux,” designed to help organizations live with uncertainty.
The discussion ends on a note of shared appreciation. Isackson and Zieba agree that the debate over AI in journalism is far from finished. What remains clear is that the stakes — trust, ethics and the survival of media organizations — could not be higher.
[Lee Thompson-Kolar edited this piece.]
The views expressed in this article/video are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.