
Art, AI and AI Art – Part 1

[Featured image: alien lizards roller skating in a Pride march with Pride flags]

Introduction to Part 1

This post began as a limited review of what has become known as AI Art. To do that, I had to delve deeper into AI in general. Consequently, the post has grown significantly and is now in four parts. This post, Part 1, looks at AI in general. Part 2 will look at more specific issues raised by AI art, while Part 3 will look at AI art from the perspective of an artist. Part 4 will be a gallery of AI-generated images.

Links to each post will be here, once the series is complete.

What is AI?

It has felt almost impossible to avoid the topic of AI in recent weeks. Hardly a day passes, it seems, without some pronouncement, whether promoting the next big thing or predicting the end of the world. One thing seems clear, though. Whatever concerns exist, the push to develop this thing called ‘AI’ is now probably unstoppable.

Looking deeper into the topic, I soon hit problems. ‘AI’ seems to mean almost anything, depending on the motivation of the person using the term. As a descriptive term, it is almost useless without a qualifier. This isn’t unusual in public discourse. I’m not going to consider all the bad-faith uses of the term, though. I will limit myself to trying to get some clarity.

What is now called AI is different from the earlier idea of ‘expert systems’. Instead of depending on humans to write the rules it follows, AI finds rules and patterns in the raw data for itself, as sketched below. This has only become possible with the advent of so-called ‘big data’: data so large, fast-moving or complex that it is difficult or impossible to process using traditional methods. This is why research AIs are being used in, for example, drug research and cancer screening, drawing on data from health systems like the NHS.
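To make that distinction concrete, here is a minimal sketch in Python, using the scikit-learn library. The data and the fever threshold are invented purely for illustration. The first function is an expert-system-style rule written by a human; the second lets a model infer an equivalent rule from labelled examples.

```python
from sklearn.tree import DecisionTreeClassifier

# Expert-system style: a human writes the rule explicitly.
def expert_rule(temperature_c: float) -> str:
    return "fever" if temperature_c >= 38.0 else "normal"

# Machine-learning style: the rule is inferred from labelled examples.
readings = [[36.5], [37.0], [37.8], [38.2], [39.0], [40.1]]
labels = ["normal", "normal", "normal", "fever", "fever", "fever"]
model = DecisionTreeClassifier().fit(readings, labels)

# Both now classify a new reading, but only one rule was written by a person.
print(expert_rule(38.5))            # -> fever
print(model.predict([[38.5]])[0])   # -> fever (threshold learned from the data)
```

The learned threshold comes from the data, not from a programmer, and that is the essential difference.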

Weak and Strong AI

Essentially, there are two sorts of AI, ‘weak’ and ‘strong’. For a summary of the difference, I turned to AI. Here’s what the free version of ChatGPT said:

The terms “strong AI” and “weak AI” are used to describe different levels of artificial intelligence capabilities. They refer to the extent to which a machine or computer system can simulate human-like cognitive functions and intelligence.

Weak AI (Narrow AI)

Weak AI refers to artificial intelligence systems designed and trained for a specific or narrow range of tasks. These systems excel at performing well-defined tasks but lack general cognitive abilities and consciousness. Weak AI systems are prevalent in our daily lives. They are commonly found in applications like virtual personal assistants (e.g., Siri, Alexa), recommendation systems, image recognition, and natural language processing. They can appear intelligent within their specific domain, but do not possess real understanding or awareness.

Strong AI (General AI or AGI – Artificial General Intelligence)

Strong AI refers to artificial intelligence systems with the ability to understand, learn, and apply knowledge across a wide range of tasks and domains, just like a human being. A strong AI system would have cognitive abilities that match or exceed human intelligence. This includes not only performing specific tasks but also understanding context, learning from experience, and reasoning about various subjects. Strong AI would have a level of consciousness, self-awareness, and general problem-solving capabilities similar to humans.

Despite the claims of Google engineer Blake Lemoine, it seems clear that we are nowhere near achieving Strong AI (AGI). All apps and programs currently in use, including ChatGPT, are examples of Weak AI. They are ‘designed and trained for a specific or narrow range of tasks’. In other words, they are tools, just as much as the power looms of the late 18th century.

That does not mean that the introduction of even Weak AI is without risk. Like all disruptive technologies, it will create stress as older ways of working are replaced. Jobs will be lost, new jobs will be created, almost certainly not for the same people. If steam transformed the 19th century, and computers the 20th, Weak AI is likely to trigger a wave of change that will be an order of magnitude greater. We are beginning to see some of the potential in health care. It is already aiding in diagnostics, drug discovery, personalized treatment plans, and more.

Concerns raised in AI ‘training’

The ultimate goal in training an AI is to enable it to find patterns in data. Those patterns are then used to make accurate predictions or decisions on new, previously unseen data, as in the sketch below. This process requires huge computational resources and careful design. The vast amounts of data involved, whether in training or in day-to-day use, raise significant ethical concerns.
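As a rough illustration of what ‘accurate predictions on unseen data’ means in practice, here is a minimal Python sketch using scikit-learn and its bundled handwritten-digits dataset. The choice of model and dataset is mine, purely for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold back a quarter of the data so it stays genuinely unseen during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# The score that matters is on the held-out set, not on the training set.
print("accuracy on unseen data:", accuracy_score(y_test, model.predict(X_test)))
```

The point of holding back a test set is that accuracy on data the model has already seen proves very little.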

Ethical concerns

Examples include:

  • Biases present in training data are likely to lead to biased outcomes. There is a need for transparency and accountability in the training of AI systems, but the sheer volume of data involved makes this difficult (a minimal example of one simple bias check appears after this list). AI can help but…
  • Privacy concerns also arise in the use of training data. Improper handling of data or breaches in AI systems could lead to identity theft, unauthorized access to sensitive information, and other security risks. AI could also be used aggressively, to identify weak points in data protection, to break in and generally to create disruption.
  • Facial recognition systems could increase security by identifying potentially suspicious behaviour, but at the risk of lost privacy. Bias in the training data could lead to unequal treatment of some ethnic groups.
  • Adoption of AI in medical contexts potentially creates risk for patients. Avoiding risk requires rigorous testing, validation, and consideration of patient safety and privacy.
  • AI-generated content, such as art, music, and writing, raises questions about creativity and originality and challenges traditional notions of copyright or even ownership of the content.
  • Self-driving vehicles may enhance convenience and safety, but ethical questions arise in, for example, trading off the safety of the vehicle’s occupants against that of pedestrians or other road users.
  • AI can be used to optimize energy consumption, monitor pollution, and improve resource management. On the other hand, the energy demands of training AI models and running large-scale AI systems could contribute to increased energy consumption if not managed properly.
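On the bias point above, here is one minimal, invented example of the kind of check that transparency would require: comparing a system’s positive-outcome rates across demographic groups. Real audits are far more involved; the records below are made up purely for illustration.

```python
from collections import defaultdict

# Invented outcomes of some automated decision, tagged by demographic group.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome

rates = {group: positives[group] / totals[group] for group in totals}
print("positive-outcome rates:", rates)

# A large gap between groups is a prompt to inspect the training data,
# not proof of bias in itself.
print("parity gap:", max(rates.values()) - min(rates.values()))
```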

Fears about AI

Expressed fears of what AI might lead to range from the relatively mundane to the apocalyptic. The more dramatic risks generally relate to AGI, not narrow AI, although the distinction is often fudged. There are serious risks, of course, even with ‘Weak’ AI.

Impact on employment and skills

Widespread use of AI could lead to mass unemployment (although similar fears were expressed in the early years of the Industrial Revolution). This is one of the concerns of the Writers Guild of America in its present strike. Writers want artificial intelligence such as ChatGPT to be used only as a tool that can help with research or facilitate script ideas, not as a tool to replace them. The Screen Actors Guild has also expressed concern over the use of AI in its strike.

To a limited extent, they are too late. Forrest Gump, Zelig, Avatar, The Matrix and Toy Story all, in different ways, used elements of what any workable film AI will need. There has been talk for many years about using computer-generated avatars of deceased actors in live-action filming.

Military AI

Military use of AI could lead to life-and-death decisions being made by machines with no human involvement. Associated risks include the possible use of AI by bad actors in cyber warfare. ‘Deep fakes’, such as those of Volodymyr Zelenskyy and Barack Obama, are only the start.

Text-based AI

I have had only limited experience with text-based AI. I have used ChatGPT to support my research into AI, but I don’t know how it would cope with longer pieces like this blog post. None of the other programs I have tried have produced anything beyond banal superficialities. Too often the output is junk. There have been reports of the AI simply making things up, which, paradoxically, is probably the closest it gets to originality. In its current state of development, I would not believe any publicly available AI that claimed to generate original text.

Issues for education

Even so, packages like ChatGPT seem already to be causing problems in education. Unsurprisingly, plagiarism checkers are also being deployed, although they probably use AI too! So far as I can tell, these text generators don’t provide academic references as normally used, or any link to their original sources. If developers can crack that, then there will be a real problem, not just for schools and universities.

I asked ChatGPT to list some further reading on issues around the use of AI. It gave me what looks to be a good, varied list, probably as good as anything I might find by a Google search. I asked three times, getting differently formatted results each time. Version one gave me 2 or 3 titles under a range of headings, version two gave me a list of ten titles, while version three organised the list by books, articles and online resources. There is a significant overlap, but the content varies between the three versions. None of them are properly formatted to provide references, although enough information is given to find them.

I asked again, for a list in Harvard style, and got a list of 10 books and journal articles. When asked, I got details of the requirements for Harvard-style references. A further request gave me a list of other referencing systems in use in academic circles.
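For what it’s worth, the same requests can be scripted rather than typed into the web interface, which I used here. Below is a minimal sketch using OpenAI’s Python client; the model name, prompt wording and temperature value are my own illustrative assumptions. The temperature setting controls sampling randomness, which is also why my three identical requests above produced differently formatted lists.

```python
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # illustrative choice of model
    temperature=0.7,         # sampling randomness; repeated calls will vary
    messages=[{
        "role": "user",
        "content": "List ten books or journal articles on the ethics of AI, "
                   "formatted as Harvard-style references.",
    }],
)
print(response.choices[0].message.content)
```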

Conclusions

Fear of something like Strong AI has been with us for centuries. From Pygmalion and Pinocchio to the films 2001: A Space Odyssey and AI (itself a partial retelling of Pinocchio), similar tropes recur. I covered the same theme in my short stories, here and here.

Weak AI

The use of even weak AI raises many complex moral and philosophical questions. Some of these, such as the Trolley Problem, have long been interesting ways to explore ethical issues; now, faced with self-driving cars, they have become real. Using AI in any form of decision-making process will raise similar problems.

There is still a long way to go to get ‘intelligence.’ If it happens, I suspect it will be an accident. Eventually, though, I believe even ‘weak’ AI will be able to provide a convincing facsimile. Whether dependence on AI for critical systems is desirable is another question.

AI as a research tool

In the more limited area of text generation, ChatGPT, used as a research tool, appears to work as well as Google, without the need to trawl through pages of results. On the other hand, it is probably the later pages of a Google search which provide the equivalent experience to the chance find on the library shelves. Without careful choice of words, it seems probable that AI search tools will ignore the outliers and contrary opinions, reinforcing pre-existing biases. But then, the researcher has to actually look at that chance find on the shelf.

I asked ChatGPT to list some further reading on issues around the use of AI and to include some contrary views. This it duly did, giving 2 or 3 references under a range of sub-topics with an identified contrary view for each. As a research tool, I can see that AI will be very useful. Its use will require care, but no more care than is required in selecting and using reference material found in any other manner.

On the other hand, it seems likely that, whether we like it or not, within a few years, AI packages will be able to generate a good facsimile of original text. A good proportion of journalism is already generated in response to press releases. How long before we get AI-generated news reports created in response to AI-generated press releases about AI-developed products? All untouched by human hands…


