Long before the advent of generative AI, earlier forms of artificial intelligence supported the production and distribution of online news and information. From generating earnings reports and sports recaps to producing tags and transcriptions, larger newsrooms have used automation for years to streamline production and routine tasks. While these methods are gaining traction in larger news organizations, they remain significantly less prevalent in smaller newsrooms. Tech corporations are also progressively automating important news and information-related tasks, such as content recommendation and moderation, search result generation, and summarization.
Public discussion of AI's emergence has largely focused on how it may affect physical labor and operational jobs such as food service or manufacturing, on the assumption that creative work would be mostly untouched. Nevertheless, concerns have been raised that new, more accessible, and significantly more advanced "generative AI" systems like DALL-E, Lensa AI, Stable Diffusion, ChatGPT, Poe, and Bard could disrupt white-collar and media work, infringe copyright (inside and outside of newsrooms), misinform the public, and erode trust. At the same time, these technologies offer new avenues for sustainability and creativity in news production, from pitching articles and moderating comment sections to covering local events (with varied outcomes) and creating summaries or newsletters.
Some newsroom experiments with generative AI have come under fire for being opaque and prone to errors. News publishers, for their part, have accused companies of copyright and terms-of-service violations for using news content to train artificial intelligence tools; some publishers have struck deals with tech companies, while others have blocked web crawlers' access to their content. Generative AI tools also have the potential to further divert search engine traffic away from news content.
These changes present journalists, content providers, lawmakers, and social media platforms with new ethical and legal dilemmas, including how publishers incorporate AI into news production and distribution, what AI systems learn from news material, and how AI policies around the world shape both.
AI is now a mainstream technology. Research shows that a large share of organizations reported using AI by 2021, particularly enterprises in emerging economies. Professionals have begun to document AI's growing importance for technology companies and news publishers, both independently and in relation to one another. There is also growing evidence that AI is already widely used both in social media platforms' algorithms and in the production of routine news stories, though the latter is more common among larger, better-resourced publications.
The public and the media still know relatively little about artificial intelligence. Studies find that journalists' understanding of AI is at odds with how widely it is used in news. Audience-focused research on AI in journalism has likewise shown that readers struggle to distinguish human-written from AI-generated articles. Despite abundant evidence that AI technologies can perpetuate societal biases and facilitate the creation of misinformation, readers also perceive less media bias in, and greater credibility for, certain forms of AI-generated news.
The use of AI in journalism has received substantial theoretical attention. While evidence-based work can answer certain critical questions, it is typically more qualitative than quantitative, making it difficult to form a comprehensive picture. Much theoretical work has examined how AI is altering the face of journalism, how platform businesses shape both AI and the news industry, and what this means for journalism's value and its capacity to serve democratic goals. The media policy literature, meanwhile, has concentrated on European Union policy debates and on the need for transparency about AI news practices to build trust.
Future research should prioritize how AI changes the news people see, whether directly from publishers or indirectly via platforms. To better understand how these technological developments influence news practices worldwide, AI research should also look beyond the United States and other economically developed nations. On the policy side, comparing use cases could help establish global standards for AI-related transparency and disclosure in news.
57% of companies based in emerging economies reported AI adoption in 2021 (McKinsey, 2021).

67% of media leaders in 53 countries say they use AI for story selection or recommendations to some extent (Reuters Institute for the Study of Journalism, 2023).
Most governments have failed to keep pace with AI's rapid progress. Regulatory responses to new technologies such as AI differ from nation to nation and take many forms, including direct regulation, soft law (such as guidelines), and industry self-regulation. Russia and China are just two examples of governments that direct, or at least influence, their countries' artificial intelligence research and development. Some governments engage a range of stakeholders in an effort to foster innovation, while others focus on regulating AI to shield people from its harms. In contrast to countries like China, which asserts the state's right to collect and exploit residents' data, the European Union's privacy laws emphasize robust protections for citizens' data against both private companies and the government.
These divergences highlight the lack of agreement on the principles that should underpin AI laws or ethical frameworks, making a worldwide consensus on how to regulate the technology difficult to reach. Yet laws in one country can have far-reaching implications in another. Those putting forward policies and solutions must consider the many possible outcomes and acknowledge global differences, without sacrificing the democratic principles of a free press, an open internet, and free speech.
A lack of agreement on what counts as AI will undermine efforts to regulate the technology, making breaches much harder to detect and punish. Given the pace of innovation and the complexity of these systems, experts have argued that general solutions will not work and have instead advocated tailored ones. Underrepresented groups, such as those living in poverty or experiencing other forms of social exclusion, must be actively included in the process of making laws for AI.
Lastly, given the growing relevance of news material in training AI, policy and regulatory responses should likely account for the need to maintain a free and independent press. This bears directly on current debates over digital content copyright and fair-use updates, collective bargaining agreements, and other forms of support between publishers and the companies that build and sell these technologies.