New Zealand’s top news publisher, along with other major global players, has taken a stand against OpenAI, refusing to let it use their content to power the ChatGPT AI tool. Stuff, the publisher, claims its content is being harvested without permission and used to generate low-quality results. The question arises: is it wise to distance ourselves from AI in this way?

This issue takes us back to 2012, when Google was already causing concern among news publishers who feared that people were relying too heavily on the search engine for their news. At Project [R]evolution, an event focused on the impact of digital technology, Michael Jones, Google’s chief technology advocate, offered an interesting analogy: Google, he said, was like a neutral librarian who helps you find the right book but didn’t write it. Critics, meanwhile, accused Google of erecting a “walled garden” around online content.

Fast forward to today, and we find ourselves in a world dominated by generative AI applications like Google’s Bard and Microsoft’s Bing Chat. These AI services respond to user prompts and summarize information scraped from the internet, including news articles from publishers. Yet as Dr Merja Myllylahti, a senior lecturer in journalism and media, discovered, these tools are far from perfect when it comes to sourcing news from New Zealand: they often link to incorrect sources or random stories, raising concerns about the reliability of AI-generated news.

Some news organizations have responded by blocking AI platforms from accessing their content. CNN, Reuters, The Washington Post, Bloomberg, The New York Times, and The Guardian are among those that have made this decision, with Stuff, New Zealand’s biggest news publisher, joining them just this week. Stuff’s CEO, Laura Maxwell, stressed how much journalistic content adds to the value of generative AI tools.
She warned that without access to high-quality, reliable news content, these AI models risk feeding on a sea of misinformation and unverified material from the internet, ultimately a self-destructive cycle.

So, is distancing ourselves from generative AI the best solution? AI tools offer real benefits for digital news gathering and publishing. The BBC, for example, uses AI to adapt content based on the reader’s location within the UK, and local subscriber service BusinessDesk has seen significant time savings in article creation since implementing ChatGPT. However, challenges around copyright infringement and fair compensation for news publishers remain.

Stuff owner Sinead Boucher has been vocal about the potential negative impact of generative AI on the media industry and on society as a whole. Global tech companies, she believes, have taken advantage of news content to train their models, while little value flows back to the original creators. Boucher argues that licensing, or some other form of compensation, is necessary if journalism is to retain its value in the AI era. By shutting AI platforms out of their content, news publishers hope to regain control and secure fair treatment.

The issue at hand is not just access to information but the control and influence that tech giants hold over the digital news ecosystem. Boucher acknowledges past mistakes, when news organizations failed to recognize the true value of their content and reshaped their business models to suit the platforms. This time, she insists on fair compensation for content creators, to avoid a future in which AI models feed on low-quality, AI-generated content. The Fair Digital News Bargaining Bill, introduced in New Zealand, aims to address this by requiring platforms such as Google and Facebook to pay news publishers for circulating their news.
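The article doesn’t say how these outlets block the crawlers technically, but the standard mechanism is the robots exclusion protocol: a publisher lists disallowed crawlers in a robots.txt file at its site root. OpenAI has documented that its crawler identifies itself as GPTBot, and Google offers a Google-Extended token for opting out of AI training; compliant crawlers check this file before fetching pages. A minimal sketch of such a file (the rules shown are illustrative, not any specific publisher’s actual configuration):

```
# robots.txt — served at https://example-news-site.com/robots.txt
# Block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /

# Opt out of Google's AI-training uses (Bard etc.) without affecting Search
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is a voluntary convention: it only stops crawlers that choose to honour it, which is part of why publishers are also pressing for licensing and legislation.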
Ultimately, the goal is to strike a balance between AI technology and the preservation of quality journalism. Some may question whether Stuff’s stance is motivated by its recent introduction of paywalls, but Boucher asserts that the discussion goes beyond that: the technology is in its early stages, and profound decisions about how content is treated, and whether to participate, must be made without knowing exactly how things will develop. So the debate continues, and only time will tell how the relationship between generative AI and news publishers unfolds.