Caution and consultation needed as AI is rolled out in media and entertainment
Media companies should not use Artificial Intelligence (AI) tools in ways that compromise ethical standards or reduce employment opportunities for journalists, and they must be transparent with their audiences about the use of such tools, says the union for Australian media workers.
The Media, Entertainment & Arts Alliance says that while AI has the potential to enhance and extend the work of journalists, there are also many risks associated with its adoption by media companies.
MEAA has been closely watching the development of ChatGPT and other software that uses large volumes of existing data to create or synthesise text, images, music or videos.
The most recent meeting of the MEAA National Media Section committee discussed the rapid advancement and distribution of AI and resolved to form a standing sub-committee to provide ongoing consideration of the impact of evolving technology.
At its core, media professionals must have a say in decisions by publishers and broadcasters about integrating AI into workflows. And media outlets must be upfront with their audiences, the public and the communities they serve about how AI material is being incorporated into editorial output.
“Caution must be exercised when adopting AI technology into journalism,” said MEAA Media Federal President, Karen Percy.
“A balance must be struck between the promise and opportunities of AI and the unique threats it poses to the public’s trust in the news they read, as well as to the work, income, rights and creative agency of media workers.
“Responsibly designed, AI has the potential to usefully supplement, extend and enhance our work, but it also has far-reaching consequences that need careful consideration, consultation and regulation.
“AI can provide efficiencies and new opportunities for storytelling techniques, but it also carries the potential for errors and ethical breaches, reduced editorial independence and control, and reduced job opportunities for media workers.”
A discussion paper was tabled at the NMS meeting which identified key issues for further consideration including moral rights and copyright of the originators of content used by AI; compensation for media professionals; and the importance of human involvement in labelling and fact-checking content produced by AI tools.
MEAA says that agencies should be as transparent as possible about their use of AI-generated content, so that readers, viewers and listeners understand what they are consuming and how it has been put together.
Content produced by AI must also not entrench racial, gender, class and other forms of bias.
“Fact-checking and proper sourcing of material is crucial to maintain trust and integrity in the media and must be undertaken by news organisations that want to use AI material,” Ms Percy said.
“How will ethical obligations be met in the AI environment?
“We also know that there are inherent biases built into all technology, so it’s important to ensure that existing discriminatory practices and shortfalls in diversity are not exacerbated.”
Media professionals need to have input into how and why media organisations decide to use AI.
MEAA encourages publishers and broadcasters to direct efficiencies introduced by AI into expanding original coverage and addressing poorly served sectors and communities.
Last update: March 24, 2023