Artificial Intelligence Meets Art

Gerd Altmann: Artificial Intelligence 201103 | Creative Commons License CC0 1.0

Over the past year I’ve been thinking more about artificial intelligence (AI) and the impact it is having on our society. AI has the potential to revolutionize many industries, with significant implications not only for the tech sector but across our economy. All of its potential ramifications are too much to cover in a short blog post; they would require a book, or several books, and there will be unforeseen effects. Because I frequently interact with artists and writers, I’d like to explore some of the issues being discussed around AI, since its use in the artistic space presents a number of ethical and legal questions, particularly when it comes to the use of artists’ works to train AI systems.

A principal concern involves copyright and intellectual property. When an artist creates a work, they have exclusive rights to that work, including the right to reproduce, distribute, and display it. However, when AI systems are trained on large datasets of images, music, or other media, it is often done without the permission of the artists who created those works. This can lead to artists’ works being used without their knowledge or consent, and without them being compensated for it.

Another issue is the potential for AI to create new works that are similar to or based on existing works. While some artists may be excited about the creative possibilities of AI, many others are concerned about AI creating works that could be mistaken for their own, or that could compete with their own works in the market. This could lead to confusion and potential legal disputes over ownership and credit. There is also the potential for artists’ works to be used in offensive, unethical, or even illegal ways. Cartoonist and illustrator Sarah Andersen wrote an op-ed in today’s NY Times describing her experiences after AI tools were trained on her webcomic:

Opinion | The Dark Possibilities of A.I. and Art – The New York Times (nytimes.com)

There are also concerns around the potential for AI to perpetuate biases that are present in the data used to train it. If an AI system is trained on a dataset that is not diverse or representative, it may create works that are biased or perpetuate harmful stereotypes. This could have significant implications for artists and art consumers, particularly those who are marginalized or underrepresented. I tested this problem myself using OpenAI’s ChatGPT. After entering a number of queries related to minority religious viewpoints, I found that ChatGPT generated clearly biased results, demonstrating a significant lack of nuance.
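For readers who want to run a similar informal spot-check, here is a minimal Python sketch. It assumes the official openai package (v1+) and an OPENAI_API_KEY environment variable; the prompt template, the traditions listed, and the model name are placeholders for illustration, not the exact queries I used, and judging the responses still has to be done by hand.

```python
# Informal bias spot-check: send parallel prompts that differ only in the
# religious tradition mentioned, then compare the responses side by side.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder template and traditions -- not the exact queries from the post.
TEMPLATE = "Describe the core beliefs and practices of {tradition} in two sentences."
TRADITIONS = ["a major world religion", "a small minority religious movement"]

for tradition in TRADITIONS:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption
        messages=[{"role": "user", "content": TEMPLATE.format(tradition=tradition)}],
        temperature=0,  # reduce run-to-run variation so responses are comparable
    )
    print(f"--- {tradition} ---")
    print(response.choices[0].message.content)
```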

To address these problems, it is important for artists, AI developers, and policymakers to work together to establish clear guidelines and best practices for the use of artists’ works in AI training. Such policies could include obtaining explicit permission from artists before using their works, ensuring that artists are fairly compensated for that use, and taking steps to ensure that AI systems are trained on diverse and representative datasets.

The intersection of AI and the arts is already creating powerful and transformative effects, so it is important to carefully consider the ethical and legal issues involved and to ensure the rights and interests of artists are protected.

Note: This is an interesting and deliberately ironic experiment. The first draft of this blog post was generated using OpenAI’s ChatGPT.