
Why is it art?

A wall painting at Lascaux cave

Since the cave paintings at Lascaux and similar locations were painted before concepts of composition were even thought of, are they art?

Some time ago, in a Facebook group, someone asked this good question.

My initial answer was as below.

I doubt if the people making them saw them as ‘art’. That’s imposing our view of the world on the makers. From our perspective, though, yes, they are art. Plus, places like Lascaux were painted by numerous hands, perhaps over centuries. They are not single compositions. What they show, though, is considered mark-making towards an idea.

I suspect the principles they were following were more in the line of magical thinking than aesthetics. They may have thought, “if I paint this animal being killed, we will be successful on our next hunt.” Another possibility is that it was an offering to the spirit world, giving thanks for a successful hunt. Both these have been found in modern times by anthropologists. Whatever it was, they seem to have had something guiding them. They were not just throwing pigment around at random.

Sadly, the discussion didn’t progress much. The rest of this post is based on the argument I was trying to make. It includes a few extra points that didn’t occur to me at the time.

I now think the original question was based on a false premise. We don’t know if they had ideas about composition. We do know that they are not random daubs on a wall. They represent a significant human achievement, only possible because of a great deal of effort and time. They meant something to their makers.

We can never know the motives of the makers or the guiding principles they were working under. We can only speculate. Our speculations cannot fail to be coloured by our own world view, just as the original questioner’s was.

The question of what was in the minds of the original makers of these paintings is a different question to whether they are ‘art.’ We don’t even know if the makers had a concept of art as an endeavour in its own right. There seems to have been an urge to decorate, shown in other finds from many similar cultures, but we still don’t know why it was done.

If we shift our viewpoint from looking at cave paintings to looking at scientific illustrations, it is perhaps clearer. Hooke made incredibly detailed and brilliantly executed drawings of what he saw in his microscope. These are hugely valuable in terms of their scientific intent. To see them as art means looking at them from the perspective of a separate set of values to those of the original maker. The one doesn’t negate the other. Both reference frameworks can apply simultaneously.

In the end, I suspect defining art is like defining a game. After all, what links tennis, golf, poker and Resident Evil? All games, but we would find it hard to describe the common characteristic. So in my view art includes Lascaux cave paintings, Neolithic rock carving, Medieval illuminated manuscripts, Rembrandt, Monet, Malevich, Hockney, Basquiat, and David Bailey.

Hooke is close enough in time for us to have some idea of his thought processes. I think we can feel reasonably confident that he had a sense of his work as having an aesthetic value beyond being ‘just’ a scientific illustration. We can’t know whether the creators of cave paintings had ideas or concepts of composition, or what they were thinking. We can be sure, though, that they were thinking…

This post is related to several others about meaning in art.

Neolithic art has also provided inspiration for many of my own prints.

Mixed media print - monotype and pastel
Rocking in Rhythm #1 mixed media pastel over monoprint
Collagraph print inspired by neolithic rock carvings
Hammer marks on Weardale

Asemic Writing

Some time ago, I linked to this wonderful video, featuring an artist book ‘Hushed Writing’ or Grafia Callada, by Spanish graphic designer and artist, Pepe Gimeno. I make no apologies for showing it again.

Since then, I have come across the work of the US artist Cecil Touchon, in particular his asemic writing. Some of Touchon’s work involves fragmentary text which he arranges into collage, often then painting the image. I’ve touched on the idea of fragmented text in this post on making stencils, but I confess I hadn’t thought of taking that idea further. It is the earlier work in which he transforms found texts by overwriting that seems to have strong affinities with Gimeno. They both produce pieces which have the structure and appearance of text, but without content.

Asemic writing is not the exclusive domain of Touchon. Indeed, the roots seem to go back to c. 800 CE and the Tang Dynasty. Since then, the Middle Ages and Renaissance saw the use of Pseudo-Kufic (imitation Arabic script) decoration. In the 20th century, many artists including Kandinsky and Man Ray experimented with it. In many ways, the pictographs of Adolph Gottlieb fall into this genre too. The abstract expressionist scribbles which appear in much contemporary art are surely also descended from this idea. However, in its approach, Gimeno’s Grafia Callada seems to remain unique.


Meditation

Minimalist Collage

Some time ago, I can’t remember how, I came across a reference to this book. The images intrigued me, since they seemed to have so much in common with 20th century abstraction, despite their origins in 17th century Rajasthan as aids to meditation.

Book cover Tantra Song

Paradoxically, the strongest visual affinity seemed to be with minimalism. Compare, for example, ‘Tremolo’ by Agnes Martin from 1962 (on the left) with this piece from the book.

The paradox stems from the symbolism of the Tantric paintings when compared with the aim of the minimalists to remove the self. To quote another minimalist, Frank Stella: ‘what you see is what you see.’

Mark Rothko, not a minimalist, described the myths of antiquity as “the eternal symbols upon which we must fall back to express basic psychological ideas.” This ties in very strongly with the work of an avowedly spiritual artist, Bill Moore. That isn’t surprising since he was an ordained Roman Catholic Priest.

“I often use cruciform shapes,” he says. “But, like Antoni Tàpies, I believe that the power of the cross goes far beyond its use in a Christian context. We’re drawn to what I call essential shapes, patterns and textures. They’re found in all kinds of civilizations and traditions. In fact, the geometric ratios that I use almost subconsciously are the same as those used in ancient Indian, Egyptian and Greek architecture, as well as medieval European cathedrals.”

https://frbillmoore.com/

The Tantric images seem to be made on similar terms. The symbols used have meaning for the devotees, although according to Jamme, these are not fixed. The images are the prop for meditation, beginning with whatever the image ‘means’. This expands and shifts as the mind explores itself.

This seems similar to the use of the Stations of the Cross in Christianity. While the images at each station can be quite elaborate, they can be reduced simply to a Roman numeral. It is the meditation on the meaning of each station that matters.

Another painter, Sir Terry Frost, (here) said: “To look at a painting which gives you the opportunity to have solitude, to be yourself and to be able to wander into reverie, is more than hedonistic, it’s spiritual”.

Until I came across that quote, I had never ‘got’ the work of Mark Rothko. I loved his way with paint, but the paintings themselves seemed shallow. Somehow the quote pinned down for me their essentially meditative nature. Which leads me back to the man himself:

“Art to me is an anecdote of the spirit, and the only means of making concrete the purpose of its varied quickness and stillness.”

However, despite all this, the artist who first sprang to mind when I looked at the Tantra paintings was Robert Motherwell. More specifically, it was the collage in this small catalogue from a show of his work in 2013.

(See also here)

Cover to Robert Motherwell Collage, published by Bernard Jacobson Gallery

I think it was probably the simplicity of these pieces which made me draw the parallels. They transcend their commonplace, everyday origins, encouraging a meditative response similar to that prompted by the Tantra paintings.

When I discovered it, the Terry Frost quote gave me a focus for thinking about my own work. Until then, I think I always had, at the back of my mind, the guilty feeling that I was just making patterns. Understanding that others can find meaning in something, even if I don’t embed it there myself, liberated me. I realised that there is no fixed meaning in abstract art. It does not require understanding. It just is. An artist may mentally attach meanings to the shapes and colours of their work, but even if they explicitly share those meanings, there can be no guarantee others will discover them or see the same things.

In the end, all art has the potential to be a subject for meditation. Even the flight of the eye across an image is a form of meditation, a form of reverie. That is as it should be, I think. Art without emotion seems an empty exercise.

Some images from Tantra Song

Further Reading

Tantra Song

The Atlantic

The Paris Review

Hyperallergic

Robert Leeming

Bill Moore

Modernist Missionary

Stations of the Cross

My Last Art Beats (video)

Robert Motherwell

Robert Motherwell, abstraction and philosophy

Robert Motherwell, early collages

Agnes Martin

MoMA biography

Abstract Minimalism

Terry Frost

Tate Bio


Finding your style

It is common for artists to be told of the importance of developing a consistent and coherent style. Galleries of course like this since it makes marketing so much easier if an artist can be nicely packaged up.

It wouldn’t be too far off the mark to say that pretty much every professional relationship that I had cultivated throughout the 1990s collapsed as a result of what happened to my work in Mayo. When people looked at the paintings, their jaws dropped. It was as if I’d betrayed them. How dare I take another path?

Stuart Shils – https://www.stuartshils.com/writing/reviews/aidan-dunne-the-irish-times/

The artist Stuart Shils, writing about the problems he had when his style changed after a visit to Ireland in 1998.

The artist Patrick Heron had similar problems after a change of direction.

[The gallery director] wrote to Heron complaining that he was just beginning to find a market for his still lives and now Patrick had to hit him with this. Most artists have to put up with gallery owners who would like them to stick to the latest selling line…

Patrick Heron by Michael McNay, Tate Publishing

Some other posts on style and the creative impulse.


What is profit?

assorted paintings

How do you price your work?

Pricing of work seems to be one of those black arts with no definitive answer. It should go without saying, of course, that you need to cover the cost of your materials. Beyond that, go on any art-based forum and you will find a myriad of answers. I’m going to stick my neck out here and say that most of these are simply wrong, especially in the handling of profit.

Artists in particular seem to think that the art business is somehow different to mere commerce. They are wrong. Artists need to eat just as much as car mechanics or window cleaners.

Profit

Let’s start by looking at profit. The key mistake, made by many, is to think that profit is the same as income. It isn’t. You will often see statements to the effect that profit is the money you have left after paying for business expenses.

Income is different. From income, you pay your personal costs – food, rent/mortgage and so on. It is the wage you take from your business, just as much as the wage you would be paid as an employee. Wages are an expense of the business, even for a sole trader. Profit, on the other hand, is the money you use to build your business – from profit you pay for tools and materials, advertising and promotion, web costs and studio rental. I don’t know why, but for some reason this distinction seems hard to grasp.

There is one further complication you do need to be aware of. In the UK, at least, the tax position of sole traders – which is what you would be as a solo artist – does not differentiate between income and profit. This only applies on your tax return. Do not make the mistake of applying the same approach in your accounting practices.

A common response in forums is that if the price of, say, a painting is made up from hourly income plus materials plus profit, the final figure will be too high. That may well be true, but if you ignore the difference between income and profit, you may effectively be paying people to take your work…
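To make the distinction concrete, here is a minimal sketch of the arithmetic in Python. Every figure is an illustrative assumption, not a recommendation; the point is simply that the wage and the profit margin are separate items, and dropping either one hides the true cost of a piece.

```python
# Illustrative pricing sketch. Every figure below is an assumption for the
# sake of the example, not a recommendation.

materials = 35.00       # paint, support, framing for one piece
hours = 12              # time spent making it
hourly_wage = 15.00     # the 'income' element: the wage you pay yourself
profit_margin = 0.20    # the 'profit' element: money retained to build the business

wage_cost = hours * hourly_wage
cost_price = materials + wage_cost               # true cost, wages included
selling_price = cost_price * (1 + profit_margin)

print(f"Cost price:    £{cost_price:.2f}")       # £215.00
print(f"Selling price: £{selling_price:.2f}")    # £258.00

# Set hourly_wage or profit_margin to zero and the price looks more attractive,
# but you are then either working for nothing or starving the business.
```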

It is common in start-ups for the business owner not to take an income in early years. They don’t however expect this situation to be permanent. Would you work for nothing for the rest of your life?

So, with that in mind, how do you calculate the price of an art object? That will be the subject of a future post.


Is Generative AI just a party trick after all?

Magician show clipart, vintage illustration

My last four posts on generative AI may have been late to the party, it seems. This story, from Vox, suggests that the public are not taking to AI with quite the enthusiasm of a few months ago.

Their conclusion?

Generative AI can do some amazing things. There’s a reason why Silicon Valley is excited about it and so many people have tried it out. What remains to be seen is whether it can be more than a party trick, which, given its still-prevalent flaws, is probably all it should be for now.

https://www.vox.com/technology/2023/8/19/23837705/openai-chatgpt-microsoft-bing-google-generating-less-interest

I’m not sure. My experience has been mixed. With care, I think text based generative AI could be a useful tool. My experiences with art AI were less positive. I totally failed to get anything which matched more than the simplest of briefs. I don’t think that will change very soon, certainly for the average user. For now, I think it is indeed a party trick. My grandson, at least, appreciated the pictures I made for him.


AI, Art and AI Art – Part 4

AI-generated portrait of a woman in green with a green, broad-brimmed hat, with flowers

This is the last of four posts dealing with AI and AI Art. It takes a different form to the previous three. In this post, I look only at the output from AI Art apps, without regard to how it works or what issues its use might raise. The post concludes with an overall assessment of AI art and my reactions.

In part 3, I briefly described how I tested the app, and mentioned the problems experienced with the output. This post was intended to expand on that, giving example text prompts and the resulting image. However, the theory and the practice have proved very different.

I’m sure I will be returning to the topic. I will try to come back here to add new links.

Is AI just a party trick?

Testing the AI art app

To recap, my initial aim in testing the AI art app was to push it as far as possible. I was not necessarily trying to generate useable images. The prompts I wrote:

  • brought together named people, who could never have met in real life, and put them in unlikely situations.
  • were sometimes deliberately vague.
  • were written with male and female versions.
  • used a variety of ethnicities and ages.
  • used single characters, and multiple characters interacting with each other.
  • used characters in combination with props and/or animals.
  • used a range of different settings.

I realised I also needed to test the capacity of the AI to generate an image to a precise brief. This is, I believe, the area where AI art is likely to have the most impact. Doing this proved much harder than I expected.

In essence, generating an attractive image with a single character does not require a complex prompt. I suspect this is already being used by self-publishers on sites like Amazon.

Creating more complex images, at least with Imagine AI, is much more difficult. There are ways around the problem, but these require use of special syntax. This takes the writing of the prompt into a form of coding for which documentation is minimal.

Talking to the AI art app

This problem of human-AI communication is not something I’ve given any real thought to, beyond fiddling with the text prompt. This paper addresses one aspect of it. From it, it became clear that the text prompt used in AI art apps, or the query typed into the likes of ChatGPT, is not used in its original form. The initial text (in what is termed Natural Language, or NL) has to be translated into logical form first. Only then can it be turned into actions by the AI, namely the image generation, although that glosses over a huge area of other complex programming.

This is a continuously evolving area of research. As things stand, the models used have difficulty in capturing the meaning of prompt modifiers. This mirrors my own difficulties. The paper is part of the effort to allow the use of Natural Language without the need to employ special syntax or terms.

Research into HCI

The research, described in this paper, points towards six different types of prompt modifier used in the text-to-image art community. These are:

  • subject terms,
  • image prompts,
  • style modifiers,
  • quality boosters,
  • repeating terms, and
  • magic terms.

This taxonomy reflects the community’s practical experience of prompt modifiers.

The paper’s author made extensive use of what he calls ‘grey literature’. Grey literature is material and research produced by organizations outside of the traditional commercial or academic publishing and distribution channels. In the case of AI art, much is available from the companies developing the apps. This guide from Stable Diffusion and this one both deal with prompt writing.

Both of them take a similar approach to preparing the text prompt. They suggest organising the content into categories, which could be mapped onto the list of prompt modifiers referred to above.
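As a rough illustration of that mapping, the sketch below assembles a prompt from the modifier categories listed above. The category names follow the paper’s taxonomy; the example terms are my own invention, and image prompts are omitted because they are supplied as images rather than text.

```python
# Assemble a text-to-image prompt from the taxonomy of prompt modifiers.
# The terms used here are illustrative only, not tested recipes.
prompt_parts = {
    "subject_terms": "a lighthouse on a rocky headland at dusk",
    "style_modifiers": "oil painting, impressionist, muted palette",
    "quality_boosters": "highly detailed, dramatic lighting",
    "repeating_terms": "waves, crashing waves",
    "magic_terms": "ethereal atmosphere",
}

prompt = ", ".join(prompt_parts.values())
print(prompt)
# a lighthouse on a rocky headland at dusk, oil painting, impressionist, ...
```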

The text-to-image community

As with any sphere of interest, there seems to be a strong online community. Given the nature of this particular activity, they post their images on social media. Some of this, to be tactful, is best described as ‘specialist’. Actual porn is generally locked down by the AI companies. That doesn’t stop people pushing the boundaries, of course. If you decide to explore the online world of AI art, expect lots of anime in the shape of big-chested young women in flimsy clothing. From the few images I’ve seen which made it past the barrier, the files used for training the app must have included NSFW (Not Safe For Work) images. What we get is not quite Rule 34, but skates close…

https://imgs.xkcd.com/comics/rule_34.png

What else?

It’s not all improbable girls, though. The Discord server for the Imagine AI art app has a wide range of channels. These include nature, animals, architecture, food, and fashion design as well as the usual SF, horror etc. The range of work posted is quite remarkable in isolation, but in the end quite samey. Posters tend not to share the prompt alongside the image. It isn’t clear therefore if this is a shortcoming in the AI, or a reflection of the comparatively narrow interests of those using the app.

Judging by the public response to AI, it seems unlikely that many artists in other media are using it with serious intent. That too will bias users towards a particular mindset. Reading between the lines of the posts on Discord, my guess is that they tend to be young and male. Again, this limited user base will affect the nature of the images made.

The output from the AI app

The problems I described above have prevented me from carrying out the sort of systematic evaluation I planned. A step-by-step description of the process isn’t practical. It takes too long. The highest quality model on Imagine is restricted to 100 image generations in a day, for example. I hit that barrier while testing one prompt, still without succeeding.

In addition, I did a lot of this work before I decided to write about it, so I only have broad details of the prompts I used. I posted many of those images on Instagram in an account I created specifically for this purpose.

https://www.instagram.com/ianbertram_ai_

Generic Prompts

I began with some generic situations, adding variations as shown in brackets at the end of each prompt. In some cases, I inserted named people into the scenario. An example:

  • A figure walking down a street (M/F and various ages, physique, ethnicities, hair style/colour, style of dress)

Capturing a likeness

I wanted to see how well the app caught the likeness of well-known people. Putting them in impossible, or at least unlikely, situations would push the app even further. An example:

  • Marilyn Monroe dancing the tango with Patrick Stewart. I also tried Humphrey Bogart, Donald Trump and Winston Churchill.

I discovered a way to blend the likenesses of two people. This enabled me to create a composite which can be carried through into several images. Without that, the AI would generate a new face each time. The numbers in the example are the respective weights given to the two people in making the image. If one is much better known than the other, the results may not be predictable, but they should still be consistent:

  • (Person A: 0.4) (Person B: 0.6) sitting at a cafe table.

Practical applications

I also wanted to test the possibility of using the app for illustrations such as book covers, magazine graphics etc. Examples:

  • Man in his 50s with close-cropped black hair and a beard, wearing a yellow three-piece suit, standing at a crowded bar
  • Woman in her 50s with dark hair, cut in a bob, wearing a green sweater, sitting alone at a table in a bar.
  • Building, inspired by nautilus shell, art nouveau, Gaudi, Mucha, Sagrada Familia

To really push things, I wrote prompts drawn from texts intended for other purposes. Examples:

  • Lyrics to Dylan’s Visions of Johanna
  • Extracts from the Mars Trilogy by Kim Stanley Robinson
  • T S Eliot’s The Waste Land

I tried using random phrases, drawn from the news and whatever else was around, and finally random lists of unrelated words.

Worked example

This post would become too long if I included examples of everything from the list above, which is already shortened. Instead, I will show examples from a single prompt and some of the variations as I developed it. The prompt is designed to create the base image for a book cover. The story relates to three young people who become emotionally entangled as a consequence of an SF event (a novel I’m currently writing).

Initial prompt:

Young man in his 20s, white, cropped brown hair, young woman, in her 20s, mixed race, afro, young woman in her 20s, white, curly red hair

This didn’t work: the output never showed three characters, often only one. If I wasn’t trying to get a specific image, they would be fine as generic illustrations.

Shifting away from photo realism, this one might have been nice, ethnicities apart, but for one significant flaw…

Next version

In order to get three characters, I obviously needed to be more precise. So I held off on the physical details in an attempt to get the basic composition right. After lots of fiddling and tweaking, I ended up with this:

(((Young white man))), 26, facing towards the camera, standing behind (((two young women))), both about 24, facing each other

The brackets are a way to add priority to those elements with strength from 1 (*) to 3 (((*))).
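Since the weight and bracket notations are just string conventions, they can be generated mechanically. The helper below is a minimal sketch based on my reading of how Imagine’s syntax behaves; I have found no formal documentation, so treat it as an assumption rather than a spec.

```python
def emphasise(term: str, strength: int = 1) -> str:
    # Wrap a prompt term in 1 to 3 pairs of brackets to raise its priority.
    strength = max(1, min(strength, 3))
    return "(" * strength + term + ")" * strength

def weighted(term: str, weight: float) -> str:
    # The blend notation used earlier, e.g. (Person A: 0.4).
    return f"({term}: {weight})"

prompt = (
    f"{emphasise('young white man', 3)}, 26, facing towards the camera, "
    f"standing behind {emphasise('two young women', 3)}, both about 24, facing each other"
)
print(prompt)
```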

The image I got wasn’t perfect, but it was workable and certainly the closest so far.

Refining the prompt

My next step was refining the appearance, which proved equally problematic:

(((young white man))), cropped brown hair, in his 20s, facing towards the camera, standing behind (((a young black woman))), ((afro hair)), in her 20s, facing (((a young white woman, curly red hair))), (((the two women are holding hands)))

I got nowhere with this. I usually got images where the man was standing between two black girls. In one a girl was wearing a bikini for some reason. In another she was wearing strange trousers, with one leg shorts, the other full length. I also got one with the composition I wanted, but with three women.

More attempts and tweaks failed. The closest to a usable image was this, using what is called by the app a pop art style. I eventually gave up. If there is a way to generate an image with three distinct figures in it, I have yet to find it.

This section is simply a slideshow of other images generated by the AI in testing. They are in no particular order, but show some of the possibilities, in terms of image quality. If only the image could be better matched to the prompt…

Consumer interest

I relaunched my Etsy shop to test the market, so far without success. I haven’t put a lot of effort into this, so it is probably not a fair test. References to sales from the shop are to the previous version. At the time of writing, no AI output has sold. This is the URL:

https://www.etsy.com/uk/shop/PanchromaticaDigital

I also noticed on Etsy, and in adverts on social media, what looks like a developing market in prompts, with offerings of up to 100 in a variety of genres. These are usually offered at a very low price. The differing syntax used by the different apps may be an issue, but I haven’t bought anything to check. I saw, too, a number offering prompts to generate NSFW images. I’m not sure how they bypass the AI companies’ restrictions. Imagine, at least, seems to vet both the prompt and the output.

Overall Conclusions

It’s art, Jim, but not as we know it

In Part 3, I asked ‘Is AI Art, Art?’ It’s clear that many of those in the AI art community consider the answer to be yes. They even raise similar concerns to ‘traditional’ artists about posting on social media, the risk of theft etc. The more I look, the more I think they have a point. The art, I believe, is not in the image alone, but in the entire process. It is not digital painting; it is, in effect, a new art form.

Making the images, and getting them to a satisfactory state, is a sort of negotiation with the AI. It requires skill and creative talent. It requires more than simple language fluency: an analytic approach to the language which allows the components of the description to be disaggregated in a specific way. Making AI art also requires an artistic eye, both to assess the images generated and to judge what is needed to refine them, in terms of the prompt as well as the image itself.

The State of the art

As things stand, AI art is far from being a click-and-go product. Paradoxically, it is that imperfection which triggers the creativity. It means users develop an addictive, game-like mindset, puzzling away at finding just the right form of words. In Part 3, I referred to Wittgenstein and his definition of games. This seemed a way into looking at the many forms taken by art. A later definition, by Bernard Suits, is “the voluntary attempt to overcome unnecessary obstacles.” This could be applied to poetry, for example, with poetic forms like the sonnet.

Writing the prompt is very similar: it needs to fit a prescribed format, with specific patterns of syntax. In this post, I wrote about breaking the creative block by working within self-imposed, arbitrary rules. The imperfect text-to-image system, as it currently stands, is, in effect, the unnecessary obstacle that triggers creative effort.

The future

It seems inevitable that the problems of human-AI communication will be resolved. AIs will then be able to understand natural language. I don’t know if we will ever get a fully autonomous AI art program. It certainly wouldn’t be high on my to-do list. We don’t need it. A better AI, able to understand natural language and generate art without the effort it currently takes, would be a mixed blessing. It would, however minimally, offer an opportunity for creativity to people who, for whatever reason, don’t believe themselves to be creative. On the other hand, too many jobs and occupations have already had the creativity stripped from them by automation and computerisation. Stronger AIs are going to accelerate that process.

It’s easy to say, ‘new jobs will be created’, but those jobs usually go to a different set of people. Development of better, but still weak, AIs will be disruptive. With genuine strong AI, all bets are off. We cannot predict what will happen. It is possible that so many jobs will be engineered away by strong AI that we will be grateful for the entertainment value, alone, of deliberately weak AI art apps and games.


AI, Art and AI Art – Part 3

AI generated image, Japanese art style showing two women walking with a tower on the horizon.

This is Part 3 of a series of linked posts considering AI. It looks at AI art from an artist’s perspective. Inevitably, it raises the probably unanswerable question, ‘what is art?’

Part 1 looked at AI in general. Some more specific issues raised by AI art are covered in Part 2. Part 4 will be about my experience with one app, including a gallery of AI-generated images.

Links to each post will be here, once the series is complete.

Is AI Art, Art?

Is the output from an AI art app actually ‘art’? I’m not sure if that is a debate I want to enter or if there is a definitive answer. Just think of the diversity of forms which are all presented under the banner of Art. What brings together as ‘art’ the cave paintings at Lascaux, Botticelli, Michelangelo, Rembrandt, J M W Turner, David Hockney, Damien Hirst, Alexander Calder, Piet Mondrian, John Martin, Olafur Eliasson, Beryl Cook, Pablo Picasso, Edward Hopper, Carl Andre, Kurt Schwitters and Roy Lichtenstein? Or any other random list of artists?

One way forward is suggested by the idea of family resemblances. When he considered the similar question “what is meant by a game?” the philosopher Ludwig Wittgenstein used the concept. He argued that the elements of games, such as play, rules, and competition, all fail to adequately define what games are. From this, he concluded that people apply the term game to a range of disparate human activities that bear to one another only what one might call family resemblances. By this he meant that things which could be thought to be connected by one essential common feature may in fact be connected by a series of overlapping similarities, where no one feature is common to all of them. This approach seems as if it would work for the list above. It is possible to trace a thread of connections which eventually encompasses all of them.

Whether such a thread could be extended to include work made by AI is not clear. I don’t intend to pursue it further here, but it may yet surface as a separate blog post.

See also

https://en.wikipedia.org/wiki/Art

https://www.tate.org.uk/research/tate-papers/08/nothing-but-the-real-thing-considerations-on-copies-remakes-and-replicas-in-modern-art

Thought Experiments

As I wondered how the idea of family resemblance applied to works generated via an AI app, I realised that the act of thinking about something can be as useful as actually reaching a conclusion. Asking open questions without having an answer in mind helps us tease out what things mean, what they involve, and to explore our personal boundaries. This is the approach I’m going to take here, with a series of thought experiments.

Non-human creation

Work by animals has in the past been accepted as art, notably that of Congo, a chimpanzee, and Pockets Warhol, a capuchin monkey. Congo, in particular, seems to have had some sense of composition and colour. He refused to add to paintings he felt were complete. However, it seems that animals cannot own the copyright to their work, at least in the US.

So, is the ability of animals, non-human intelligences, to create comparable with the production of art by computers? If not, what distinguishes one from the other?

One of the criticisms directed at AI art is that it lacks human emotion in its creation. That seems to argue against the acceptance of work by Congo or Pockets Warhol as art.

Or is that criticism too limited for other reasons? What about the emotional response which might be experienced by an observer? Is the emotional response to an image comparable to the response we might have to a beautiful view? In the latter case, there is no artist per se.

Alien Art

If human emotion in the creative process is the defining factor in art, can anything created by non-humans be art in human terms? I don’t believe so. To paraphrase Arthur C Clarke: the rash assertion that man makes art in his own image is ticking like a time bomb at the foundation of the art world. Obviously, if we stick to that view, we also exclude the work made by Congo or Pockets Warhol.

We don’t know whether life exists elsewhere in the universe, let alone intelligent life. But, for the sake of our thought experiment, let’s assume aliens are here on earth and that some of them are, in their terms, artists. For our purposes, let’s also assume that these hypothetical aliens see light in more or less the same range of frequencies as humans.

Going back to Arthur C Clarke, he discussed the potential impact of alien contact on human society in Profiles of the Future, originally published in 1962. Clarke also cites Toynbee’s Study of History. From our own history, we can predict that the response to alien contact would be dramatic. If alien art became known to us, it would inevitably also have an impact.

Such art would, by definition, be beyond our experience. It would be entirely new. We would know little of the cultural context for their art. Nor would we have access to the internal mental dialogue of these alien artists. What drives them is likely to be unknowable. Our relationship with any art they make can only be an emotional response – how it makes us feel. I suppose it could be argued that we have some common ground with primates, which helps us relate to Congo and Pockets Warhol. Lacking that common ground, would it be possible for humans to respond meaningfully to alien art?

How does your answer sit with arguments about cultural appropriation of art from other human cultures?

Animal or Human?

Closer to reality, suppose we set up a ‘blind viewing’ of work by Congo and work by other human artists.

Would our observer be able to identify which was which? On what basis? Quality or something else? If your answer is based on quality (i.e. good/bad) does that not imply art is only art if it is good? Who decides if it is good?

In case you are wondering, the image on the left is by Joan Mitchell, that on the right is by Congo.

What if Congo were still around and his work were used as a dataset to train an AI, which then generated work from that dataset? Would our observer be able to identify which was which? What about a three-way comparison, adding AI work to the Congo/human choice above?

See also this: https://www.nytimes.com/2022/08/18/t-magazine/art-activism-social-justice.html

Untouched by human hands…

Suppose, in some AI development lab, we link up a random text generator to an AI art app. The app is set up to take the random text and to generate an image from it. Each image is then automatically projected for a defined period of time, before the process is repeated with the next generation. Beyond the setup process, there is no human intervention.

What would an observer see? I suspect that, not knowing what was going on behind the scenes, they would see some remarkable images but also much dross and repetition. Isn’t dross and repetition, though, a characteristic component of almost all human endeavours? What does it mean if an AI does the same?

Are the individual images created in this way ‘art’? Would your view change, once you knew the origin of the image?

Ask yourself – what distinguishes an image generated by a human from one of otherwise comparable quality, generated by an AI? What happens if we compare the human dross with the AI dross?

Take a step back. Is the whole setup, from text generator to projection equipment and everything in between, a work of art?

Does that view change if the setup moves from the lab to an art gallery, where it is called an ‘installation’? Why? The same human being conceived the idea. (I can assure you I’m not an AI.)

What would happen if a proportion of the images were randomly drawn from real digital works? Would our observer be able to distinguish the ‘real thing’ from the AI images? On what basis would that distinction be made? What does it mean if they can’t separate them?

Original or reproduction?

Suppose, an AI app generates an image which looks as if it might be a photograph of a physical painting. Perhaps this one.

Suppose, further, that a human artist takes that flat image and paints an exact copy down to the last splash of paint.

How would an observer, ignorant of the order of events, see the painting and the digital image? Would it be unreasonable for them to assume the painting was the original and the digital image the copy? What does that say about the idea of the original? What if the AI image was the product of the random text generator? Does your view change if the painter wrote the original text prompt?

A further twist. Suppose that the digital image file was sent instead to a very sophisticated 3D printer to create a physical object that mimicked in every way the painting made by the artist. Where is the original, then?

For a long post on the difference between an original and a reproduction, go here.

Is AI art any good?

That is a question with several aspects.

Is it good, as art?

That can only be answered at all if you accept the output as art. On the other hand, I don’t think a definition of art based on artistic quality stands up. All it does is shift the definition elsewhere, without answering the original question.

Is it good, technically?

That is almost as hard to answer. Look at this image. Clearly the horse has far too many legs. Is that enough to say it is technically bad?

So what about this image from Salvador Dali? Mere technical adherence to reality is clearly not enough.

Is it good at doing what it claims to do?

This section is based almost entirely on my experience with one app, but from other reading I believe that experience to be typical.

The apps seem to have little difficulty in handling stylistic aspects, provided obviously that those styles form part of the training data. Generally, if you specify say 1950s comics, that’s pretty much what you get.

Other aspects are much less successful. That isn’t surprising if you consider the complexity of the task. What’s probably more surprising is that it works as often as it does.

AI has a known problem with hands, but I found other problems too. A figure would often have more than the standard quota of limbs, often bending in ways that would require a trip to A&E in real life. Faces were often a mess. Two mouths, distorted noses, oddly placed eyes all appear – even without the Picasso filter! Certain combinations of model and style seemed to work consistently better than others.

Having more than one main figure in an image, or a figure with props such as tables or musical instruments, commonly caused problems. Humans in cars, more often than not, had their heads through the windscreen – or the bonnet. Cars otherwise tended to be driverless.

In a group of figures, it is common for limbs to be missing, or to be shared between figures. A figure sitting in a chair might lose a leg or merge into the furniture. If they are holding an object, it might float, or have two hands on it, with a third one elsewhere.

How close does it get, matching the image to the prompt?

In Imagine AI, the app I have been using, it is possible to set a parameter which balances fidelity to the prompt against the quality of the image. I’m not sure how fidelity and quality are related, possibly through the allocation of processing resources.

I found getting specified attributes, like gender and ethnicity, applied to the correct figure surprisingly difficult. Changes in word order can result in major changes to the generated image. Sometimes even the number of figures was wrong. Where I succeeded, there was no guarantee that this would be retained in further iterations. Generally, figures in the background of a scene tended to be dressed in similar colours to the main character and to be of the same ethnicity.

Getting variations in the physique of figures seems to be simpler for males than females. It seems very easy for depictions of women to become sexualised, compared to the same prompt used for a male figure. This is presumably a function of the training data.

What about the pictorial qualities?

Despite all the caveats, I have been surprised by the quality of the output, even quasi-photographic images and, once the prompt is right, certain painting styles. Some styles still seem more likely to be successful than others, especially with faces and hands, or with props like tables. Even so, and probably with some post-processing, much of the output could stand against the work of commercial illustrators and graphic designers, especially at the low-cost end of the market. I have already noticed AI imagery in the cover design of self-published books on Amazon.

It is the mimicry of techniques like impasto which gives me the greatest doubts. I suppose it is early in the development of the field, but I saw no sign of anything which tried to use the essential characteristics of digital media in ways analogous, for example, to the use of grain in film photography. I suppose it could be argued that the widespread availability of reproductions has detached things like representations of impasto from their origins. In addition, digital imagery has been around for a limited period of time compared to traditional photography.

Impact on artists and art practice

As I said in Part 2:

For the future, much depends on the direction of development. Will these apps move towards autonomy, capable of autonomous generation of images on the basis of a written brief from a client? Or will they move towards becoming a tool for use by artists and designers, supporting and extending work in ‘traditional’ media? They are not mutually exclusive, so in the end the response from the market will be decisive.

I’m not sure that I would welcome a fully autonomous art AI. It wouldn’t do anything that humans can’t already do perfectly well. I can however see value in an AI enhanced graphic tool, which would have the capacity to speed up work in areas like advertising, film and TV.

Advertising and graphic design

In situations like this, where a quick turnaround is required, I can envisage an AI generating a selection of outline layouts based on a written brief from a client. This could be refined by, say, selecting an element and describing the changes needed. A figure could be moved, its pose altered, clothing changed and so on. Once the key elements were agreed and fixed in position, the AI could then make further refinements until the finished artwork is generated.

Obviously this process could be managed by non-artists, but would be very much enhanced if used under the direction of an artist, working as part of a team. If the changes were made during a discussion, via a touch screen and verbal instruction, the position of the artist in the team would be enhanced.

Print on Demand

Print on demand services are common. Artists upload their files to a website, which then handles the production and shipping of products made using the uploaded image. Orders are typically taken on the artist’s own website or sites like Etsy. Products typically offered range from prints to clothing to phone cases. AI could contribute at various points in the process.

At the moment, a template has to be set up by the artist for each product they want to offer, which seems a perfect use for AI, probably with fine-tuning by the artist.

Preparing the full description for each product can be a complex process, especially when SEO is taken into account. Again, an AI could take on much of the donkey work, enabling artists to spend more time in making art. It may even be possible to partly automate the production of the basic descriptive text for an image. If an image can be created from text, it should be possible to generate text from an image.
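As a sketch of that last idea, an off-the-shelf captioning model can already produce a first draft of descriptive text from an image. The example below uses the BLIP model via the Hugging Face transformers library; that choice, and the file name, are my assumptions rather than anything the print-on-demand services actually offer.

```python
# Draft a product description from an image with an off-the-shelf captioning model.
# BLIP via Hugging Face transformers is one possible choice; the file name is a placeholder.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("artwork.jpg").convert("RGB")
inputs = processor(image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)

draft = processor.decode(caption_ids[0], skip_special_tokens=True)
print(draft)  # a starting point for the listing text, not finished SEO copy
```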

Retailing

Many department stores offer a selection of images in frames ready to hang. The images themselves are rarely very distinctive and probably available in stores across the country. It is likely that the image forms a significant part of the selling price.

Assuming the availability of an AI capable of generating images to a reasonably high resolution, I can see stores, or even framing shops, offering a custom process.

“Tell us what you want in your picture, and we will print and frame it for you.”

Artists

Many artists already work digitally. I can see how an interactive AI as described above under Advertising and Graphic Design could be used to assist. A sketch drawing could be elaborated by an AI, acting effectively as a studio assistant. This could then be refined to a finished state by the artist.

Printmakers can already use digital packages like Photoshop to prepare colour separations for printing or silk screens. It should be possible with an AI to go beyond simple CMYK separations and create multiple files which can be converted into print screens or perhaps used to make Risograph prints.
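The ‘simple CMYK separation’ step already exists outside Photoshop too. A minimal sketch with the Pillow library is below; anything beyond this, such as AI-assisted separations into arbitrary spot colours, is speculation on my part, and the file names are placeholders.

```python
# Minimal CMYK separation with Pillow: one greyscale plate per channel,
# each of which could be output to a silk screen or a Riso master.
from PIL import Image

artwork = Image.open("artwork.png").convert("CMYK")
plates = dict(zip("CMYK", artwork.split()))

for name, plate in plates.items():
    plate.save(f"plate_{name}.png")   # e.g. plate_C.png, plate_M.png ...
```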

Testing the AI App

I looked at a range of apps, initially using only free versions and generally only the Android versions. Some of them were seriously hobbled by advertising or other limitations, so couldn’t be properly assessed.

Initially, I played with a series of different prompts to get a feel for how they worked. I then tried some standard prompts on all of them. I finally settled on Imagine, and paid a subscription. I’ll be basing the rest of this post on that app. I’ll include a couple of the worst horrors from others, but I won’t say which package produced them, since in all probability there will have been significant improvements that would make my criticism unfair.

The Imagine app in use.

My aim was as much to see what went wrong, as it was to generate usable images. I wrote prompts designed to push the AI system as far as possible. The prompts brought together named people, who could never have met in real life, and put them in unlikely situations. Some were deliberately vague. Others tried out the effect of male and female versions of the same prompt, different ethnicities and ages. I wrote prompts for single characters, for multiple characters interacting with each other, and for characters with props and/or animals and in different settings. I’ve given some examples below.

Imagine has different models for the image generation engine, plus a number of preset variations or styles. This adds extra complexity, so for some prompts, I ran them with different models, holding the style constant, and vice versa.

Outputs

Obviously, it isn’t enough to talk about these apps. The only test of their capabilities is to see what they produce. Part 4 will look at a selection, good and bad, of images and offer some thoughts on prompt writing as a creative process.

Conclusions

As with AI in general, AI art raises some interesting moral and philosophical questions. They may not be so fundamental as the Trolley Problem, but they will affect the livelihood of many people and will have a significant social impact. Finding a path through those questions, as the thought experiments show, will not be easy.

Much more quickly, though, we will get apps and packages that do specific jobs. Some are there already – colourising old B&W films for example. These are likely to have significant economic impact.


Art, AI and AI art – Part 2

An AI image of Aphrodite rising from the waves, after the original by Botticelli

Introduction to Part 2

This post began as a limited review of what has become known as AI Art. In order to do that, I had to delve deeper into AI in general. Consequently, the post has grown significantly and is now in four parts. Part 1 looked at AI in general. This post, Part 2, will look at more specific issues raised by AI art. Part 3 will look at the topic from the perspective of an artist. Finally, Part 4 will be a gallery of AI-generated images.

This isn’t a consumer review of the various AI art packages available. There are too many, and my budget doesn’t run to subscriptions to enough of them to make such a review meaningful. My main focus is on commonly raised issues such as copyright, or the supposed lack of creativity. I have drawn on only one AI app, Imagine AI, for which I took out a subscription. I tried a few others, using the free versions, but these are usually shackled by ads or have only a limited set of capabilities.

Links to each post will be here, once the series is complete.

How do AI art generators work?

What they do is take a string of text and, from that, generate pictures in various styles. How do they achieve that? The short answer is that I have no idea. So, I asked ChatGPT again! (Actually, I asked several times, in different ways.) I’ve edited the responses, so the text below is in my own words, using ChatGPT as a researcher.

In essence there are several steps, each capable of inserting bias into the output.

Data Collection and Preprocessing

The AI art generator is trained on a large dataset of existing images. This can include paintings, drawings, photographs, and more. Generally, each image is paired with a text that in some way describes what the image is about. The data can be noisy and contain inconsistencies, so a certain amount of preprocessing is required. The content of the dataset is critical to the images that can be produced. If it only has people in it, the model won’t be able to generate images of cats or buildings. If the distribution of ethnicities is skewed, so will be the eventual output.

Selection of Model Architecture

The ‘model’ is essentially the software that interprets the data and generates the images. There are numerous models in use. The choice of model is critical: it determines the kind of images the AI art generator can produce. In practice, the model may have several components. A base model might be trained on a large database of images, while a fine-tuning model is used to direct the base model’s output towards a particular aesthetic.

Training

During training, the AI model learns to capture the underlying patterns and features of the images in the dataset of artworks. How this is done depends on the model in use. It seems, however, that they all depend on a process of comparing randomly generated images with the dataset and refining the generated image to bring it as close as possible to the original.

Generating Art

After training, the AI can be used to generate new art. This is a significant task in its own right. The app’s AI model needs to understand the semantics of the text and extract relevant information about the visual elements mentioned. It then combines the information extracted from the text prompt with its own learned knowledge of various visual features, such as objects, colours, textures, and more. This fusion of textual and visual features guides the model in generating an image that corresponds to the given prompt.
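For a sense of what this step looks like in practice, here is a minimal sketch using the open-source diffusers library with a Stable Diffusion checkpoint. I am using it only as a representative example; Imagine AI is a commercial product and I have no knowledge of its internals.

```python
# Minimal text-to-image generation with the open-source 'diffusers' library.
# Stable Diffusion is used here purely as an illustrative example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "a lighthouse on a rocky headland at dusk, oil painting, impressionist",
    num_inference_steps=30,   # more denoising steps: slower, usually cleaner
    guidance_scale=7.5,       # how strongly the prompt steers the image
).images[0]

image.save("lighthouse.png")
```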

Fine-Tuning and Iteration

There is a skill in writing the text prompts. The writer needs to understand how the text to image element of the app works in practice. In use, therefore, there is often a need for fine-tuning. Artists may adjust the prompt or other parameters to achieve the results they have in mind. Feedback from this process may also help in development and refinement of the model.

Style Transfer and Mixing

Some AI art generators allow for style transfer or mixing. The AI will generate a new image based on the content of a specific piece, but in the style of another.

Post-Processing

The generated image may then be subject to further post-processing to achieve specific effects or to edit out artefacts such as extra fingers.

Is it really intelligent?

Many of these apps describe themselves as AI ‘Art’ generators. That is, I think, a gross exaggeration of their capabilities. There is little ‘intelligence’ involved. The system does not know that a given picture shows a dog. It knows that a given image from the training data is associated with a word or phrase, say dog. It doesn’t know that dogs have four legs. Likewise, it doesn’t know anatomy at all. It knows, perhaps, that dog images tend to have the shapes we identify as legs, broadly speaking one at each corner, but it doesn’t know why, or how they work, or even which way they should face, except as a pattern.

Importance of training data

Indeed, in the unlikely event of a corruption in the training data, such as identifying every image of a dog as a sheep, and vice versa, the AI would still function perfectly, but the sheep would be herding the dogs. If the dataset did not include any pictures of dogs, it could not generate a picture of a dog.

On top of that, if there is any scope for confusion in the text prompts, these programs will find it. To be fair, humans are not very good at understanding texts either, as five minutes on Twitter will demonstrate. Even so, I’m sure that art AI will get better, technically at least. It will even learn to count limbs and mouths.

Whatever we call it, we know real challenges are coming. Convincing ‘deep fake’ videos are already possible. I’m guessing that making them involves some human intervention at the end to smooth out the anomalies. That will change, at which point the film studios will start to invest.

We are still a long way from General AI though. An art AI can’t yet be redeployed on medical research, even if some of the pattern matching components are similar.

Is AI art, theft?

These apps do not generate anything out of nothing. They depend upon images created by third parties, which have been scraped from the web, with associated text. It is often claimed that this dependency is plagiarism or breach of copyright. There are several class-action lawsuits pending in the US, arguing just that.

Lawsuits

These claims include:

  • Violation of sections of the Digital Millennium Copyright Act (DMCA) covering the stripping of copyright-related information from images
  • Direct copyright infringement by training the AI on the scraped images, and reproducing and distributing derivative works of those images
  • Vicarious copyright infringement for allowing users to create and sell fake works of well-known artists (essentially impersonating the artists)
  • Violation of the statutory and common law rights of publicity related to the ability to request art in the style of a specific artist

Misconceived claims

It is difficult to see how they can succeed but, once cases get to court, aberrant decisions are not exactly rare. For what it’s worth, though, my comments are below. (IANAL)

  • The argument about stripping images of copyright information seems to be based on an assumption that the images are retained. If no version of an image exists without the copyright data, how is it stripped?
  • The link between the original data and the images created using the AI seems extremely tenuous and ignores the role of the text prompts, which are in themselves original and subject to copyright protection.
  • A style cannot be copyrighted. The law does not protect an idea, only the expression of an idea. In prosaic terms, the idea of a vacuum cleaner cannot be copyrighted, but the design of a given machine can be. If a given user creates images in the style of a known artist, that is not, of itself, a breach of copyright. If they attempt to pass off that image as the work of that artist, that is dishonesty on the part of the user, not the AI company. This is no different to any other case of forgery. Suing the AI company is like suing the manufacturer of the inks used by a forger.
  • If style cannot be protected, how can it be a breach to ask for something in that style?


Essentially, the claims seem to be based on the premise that the output is just a mash-up of the training data. They argue that the AI is basically just a giant archive of compressed images from which, when given a text prompt, it “interpolates” or combines the images in its archives to provide its output. The complaint actually uses the term “collage tool” throughout. This sort of claim is, of course, widely expressed on the internet. It rests, though, in my view, on a flawed understanding of how these programs really work. For that reason, the claim that the outputs are in direct correlation with the inputs doesn’t stand. For example, see this comparison of the outputs from two different AIs using the same input data.

As the IP lawyer linked above suggests:

…it may well be argued that the “use” of any one image from the training data is de minimis and/or not substantial enough to call the output a derivative work of any one image. Of course, none of this even begins to touch on the user’s contribution to the output via the specificity of the text prompt. There is some sense in which it’s true that there is no Stable Diffusion without the training data, but there is equally some sense in which there is no Stable Diffusion without users pouring their own creative energy into its prompts.

In passing, I have never found a match for any image I have generated using these apps on Google Lens or TinEye. I haven’t checked every one, only a sample, but enough to suggest the ‘use’ of the original data is, indeed, de minimis, since it cannot actually be identified. Usually, all I see is lots of other AI-generated images. I suspect this says more about originality than about any claims of copied styles.
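
For anyone wanting to run that kind of check themselves, a perceptual hash comparison is one rough approach. The sketch below uses the ImageHash and Pillow Python packages; the file names are hypothetical, and a small hash distance would only flag a possible visual match, not prove copying.

    # A rough similarity check using the `ImageHash` and `Pillow` packages
    # (pip install ImageHash Pillow). File names here are hypothetical.
    from PIL import Image
    import imagehash

    generated = imagehash.phash(Image.open("ai_generated.png"))
    candidate = imagehash.phash(Image.open("suspected_source.jpg"))

    # Perceptual hashes of visually similar images differ in only a few
    # bits, so a small Hamming distance suggests a near-duplicate.
    distance = generated - candidate
    print(f"Hamming distance: {distance}")
    if distance <= 8:
        print("Visually very similar - worth a closer look")
    else:
        print("No meaningful visual match")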

I suppose, if an artist consistently uses a specific motif, such as Terence Cuneo’s mouse, it could be argued there was a copyright issue, but even then I can’t see such an argument getting very far. If someone included a mouse in a painting with the specific intent of passing it off as by Cuneo, that is forgery, not breach of copyright.

Pre-AI examples

This situation isn’t unique. Long before AI was anything but science fiction, I saw an image posted on Flickr of a covered bridge somewhere in New England. The artist concerned had taken hundreds, perhaps thousands, of other people’s photos of the same bridge, a well-known landmark, all posted on Flickr, and digitally layered them together. He had not sought prior approval. The final image was a soft, misty concoction only just recognisable as a structure, let alone a bridge. The discussion was fierce, with multiple accusations of theft, threats of legal action and so on.

In practice, though, what was the breach? No one could positively identify their original work. Even if an individual photo were removed from the set, it seems highly unlikely that there would be any discernible effect on the final image. I would argue that the use of images from the internet to ‘train’ an AI is analogous to that artist’s use of the original photos of the bridge. In the absence of any identifiable and proven use of an image, there is no actionable breach.

Who has the rights to the image?

An additional complication, in the UK at least, stems from the fact that, unlike many countries, the law makes express provision for copyright protection of computer-generated works. Where a work is “generated by computer in circumstances where there is no human author”, the author of such a work is “the person by whom the arrangements necessary for the creation of the work are undertaken”. Protection lasts for 50 years from the date the work is made.

It could be argued that in the case of AI art packages, the person making the necessary arrangements is the person writing the text prompt. As yet, that hasn’t been tested in a UK court.

See Also

A paper produced by the Tate Legal and Copyright Department. I can give no assurance it is still current.

https://www.tate.org.uk/research/tate-papers/08/digitisation-and-conservation-overview-of-copyright-and-moral-rights-in-uk-law

Is AI use of training data moral?

Broader issues of morality are also often raised. There are two aspects to this.

There are moral rights within copyright legislation. Article 6bis of the Berne Convention says:

Independently of the author’s economic rights, and even after the transfer of the said rights, the author shall have the right to claim authorship of the work and to object to any distortion, mutilation or other modification of, or other derogatory action in relation to, the said work, which would be prejudicial to his honor or reputation.

If the use of a specific work in an AI-generated image cannot be identified, or even proven to be there in the first place, it is difficult to believe that its use in that way is ‘prejudicial to the author’s honor or reputation’.

Broader morality

There is also a broader moral issue. Is it ethical to use someone else’s work, unacknowledged and without remuneration, to create something else? As with most moral arguments, there is no definitive answer. This Instagram account is interesting in that respect.

There is a fine line between taking inspiration and copying. That line is not changed by the existence of AI. Copying of artistic works has a long tradition. As Andrew Marr says in this Observer article, “the history of painting is also the history of copying, both literally and by quotation, of appropriation, of modifying, working jointly and in teams, reworking and borrowing.”

The iconic portrait of Henry VIII is actually a copy. The original, by Hans Holbein, was destroyed by fire in 1698, but is still well known because of the many copies. It is probably one of the most famous portraits of any English or British monarch. Copying of other works has also been a long-standing method of teaching.

Is it acceptable to sell copies of other people’s work?

That, of course, begs the question of whether AI art is a copy. Setting that aside, it also takes us back to the issue of forgery, or the intent of the copyist. For many years, the Chinese village of Dafen is said to have supplied around 60% of the world’s oil paintings. Now the artists working there are turning to making original work for the Chinese market. Their huge sales of copies over the decades suggest that buyers have no objection to buying copies. None of those sales pretended to be anything but copies.

Is giclée a scam?

Many artists sell copies of their own work via so-called ‘giclée’ (i.e. inkjet) reproductions. The marketing of these reproductions often seems to stray close to the line, with widespread use of empty, if impressively arty-sounding, phrases – ‘limited edition fine art print’ and similar. I’ve even seen a reputable gallery offering a monoprint in an edition of 50. There was no explanation in the description of how this miraculous work of art was made. It was, of course, an inkjet reproduction. To be accurate, there was an explanation, but it was on a separate page, with no link from the sale page.

Ignoring the fact that these are not prints as an artist-printmaker would understand the term, the language and marketing methods used are designed to obscure the fact that these are not, of themselves, works of art, but copies of works of art.

In that context, I believe an anonymous painted copy of a Van Gogh is more honest about what it is than an inkjet reproduction of an oil painting by a minor modern artist. The painted copy at least has some creative input made directly into it, whereas the reproduction is pretty much a mechanical copy. I’ll return to this in Part 3.

Bias in AI art

The possibility of bias in AI in general is a real cause for concern. In the specific case of AI art, the problem may be less immediately obvious, but as AI art is used more widely, the representations it generates will become problematic if they are biased towards particular groups or cultures. If the datasets used to train AI models are not representative or diverse enough, the output is likely to be biased, or even unfair, in the representations it creates. One remedy would be greater transparency about data sources.

Issues likely to affect the dataset include:

  • A lack of balance in the representation of gender, age, ethnicity and culture (a rough check for this is sketched after this list)
  • A lack of awareness of historical bias, which can then become replicated in the AI
  • Labels attached to images during preprocessing or dataset creation that inaccurately describe their content, or that reflect subjective judgments, whose biases are then perpetuated in the model
  • Changes to the model after deployment, which may introduce bias if not properly managed and documented
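
A first, crude check for the balance problem is simply to count how often each label appears in the dataset’s metadata. The sketch below assumes the labels live in a CSV file with a ‘gender’ column, which is purely illustrative; real datasets are messier, but the principle is the same.

    # A crude balance check. Assumes the dataset metadata is a CSV file
    # with one row per image and a hypothetical 'gender' label column.
    import csv
    from collections import Counter

    counts = Counter()
    with open("dataset_labels.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[row["gender"]] += 1

    total = sum(counts.values())
    for label, n in counts.most_common():
        print(f"{label}: {n} images ({n / total:.1%})")
    # A heavily skewed distribution here is a warning that the trained
    # model is likely to reproduce the same skew in its output.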

Lack of transparency may lead to other problems:

  • AI systems often work as “black boxes”, providing results without explaining how those results were obtained
  • Difficulty in meeting regulatory requirements, for example on data sources
  • Poor documentation of data sources, data-handling procedures, preprocessing steps and the algorithms used in the AI (an illustrative record is sketched below)
  • Inability to demonstrate clear user consent mechanisms and adherence to data protection regulations (e.g. GDPR)

These can all lead to poor accountability and lack of trust.
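
None of this needs exotic tooling. A minimal, machine-readable provenance record kept alongside the model would go some way towards the documentation points above. The fields below are purely illustrative, not any published standard; real schemes such as model cards and datasheets for datasets are far more thorough.

    # An illustrative (non-standard) provenance record for a trained model.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelProvenance:
        model_name: str
        data_sources: list[str]
        licence_notes: str
        preprocessing_steps: list[str] = field(default_factory=list)
        known_biases: list[str] = field(default_factory=list)
        consent_basis: str = "unspecified"

    record = ModelProvenance(
        model_name="example-image-model-v1",
        data_sources=["publicly scraped web images (hypothetical)"],
        licence_notes="mixed; not individually cleared",
        preprocessing_steps=["deduplication", "caption cleaning"],
        known_biases=["over-representation of English-language captions"],
        consent_basis="none obtained",
    )
    print(json.dumps(asdict(record), indent=2))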

How does this relate to AI?

AI, as it currently stands, does not copy existing works. Nor does it collage together parts of multiple works. Somehow, and I do not pretend to understand the technical process, it manages to generate new images. They may become repetitive. They may, especially the pseudo-photographs, reveal their AI origin, but despite all that they somehow produce work which is not a direct copy – i.e. original.

For the future, much depends on the direction of development. Will these apps move towards autonomy, generating images on the basis of a written brief from a client? Or will they become a tool for artists and designers, supporting and extending work in ‘traditional’ media? The two are not mutually exclusive, so in the end the response from the market will be decisive.

Posted on 6 Comments

Art, AI and AI Art – Part 1

Alien lizards roller skating in a Pride march with Pride flags

Introduction to Part 1

This post began as a limited review of what has become known as AI Art. In order to do that, I had to delve deeper into AI in general. Consequently, the post has grown significantly and is now in four parts. This post, Part 1, looks at AI in general. Part 2 will look at more specific issues raised by AI art, while Part 3 will look at AI art from the perspective of an artist. Part 4 will be a gallery of AI-generated images.

Links to each post will be here, once the series is complete.

What is AI?

It has felt almost impossible to avoid the topic of AI in recent weeks. Hardly a day seems to pass without some pronouncement, whether promoting the next big thing or predicting the end of the world. One thing seems clear, though. Whatever concerns exist, the push to develop this thing called ‘AI’ is now probably unstoppable.

Looking deeper into the topic, I soon hit problems. AI seems to mean anything, depending on the motivation of the person using it. As a descriptive term, it is almost useless without a qualifier. This isn’t unusual in public discourse. I’m not going to consider all the bad-faith uses of the term, though. I will limit myself to trying to get some clarity.

What is now called AI is different from the former idea of ‘expert systems’. Instead of depending on humans to input rules to be followed, AI uses the raw data itself to find rules and patterns. This has only become possible with the advent of so-called ‘big data’, a term referring to data so large, fast-moving or complex that it is difficult or impossible to process using traditional methods. This is why research AIs are being used in, for example, drug research or cancer screening, using data from health systems like the NHS.
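
The contrast can be shown with a toy example. In the first approach a human writes the rule; in the second the program is given only the data and the outcomes and finds a rule for itself. The numbers and threshold below are invented purely for illustration, and the sketch assumes the scikit-learn package.

    # A toy contrast between a hand-written rule (expert system style)
    # and a rule learned from raw data. All numbers are invented.
    from sklearn.tree import DecisionTreeClassifier

    # Feature: lesion size in mm; label: 1 = refer, 0 = do not refer.
    sizes = [[2.0], [3.5], [5.0], [11.0], [14.0], [18.0]]
    labels = [0, 0, 0, 1, 1, 1]

    # Expert-system style: a human encodes the rule explicitly.
    def expert_rule(size_mm: float) -> int:
        return 1 if size_mm > 8.0 else 0

    # Machine-learning style: the model infers its own rule from the data.
    model = DecisionTreeClassifier(max_depth=1).fit(sizes, labels)

    print(expert_rule(9.0))            # rule written by a human
    print(model.predict([[9.0]])[0])   # rule discovered from the data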

Weak and Strong AI

Essentially, there are two sorts of AI, ‘weak’ and ‘strong’. For a summary of the difference, I turned to AI. Here’s what the free version of ChatGPT said:

The terms “strong AI” and “weak AI” are used to describe different levels of artificial intelligence capabilities. They refer to the extent to which a machine or computer system can simulate human-like cognitive functions and intelligence.

Weak AI (Narrow AI)

Weak AI refers to artificial intelligence systems designed and trained for a specific or narrow range of tasks. These systems excel at performing well-defined tasks but lack general cognitive abilities and consciousness. Weak AI systems are prevalent in our daily lives. They are commonly found in applications like virtual personal assistants (e.g., Siri, Alexa), recommendation systems, image recognition, and natural language processing. They can appear intelligent within their specific domain, but do not possess real understanding or awareness.

Strong AI (General AI or AGI – Artificial General Intelligence)

Strong AI refers to artificial intelligence systems with the ability to understand, learn, and apply knowledge across a wide range of tasks and domains, just like a human being. A strong AI system would have cognitive abilities that match or exceed human intelligence. This includes not only performing specific tasks but also understanding context, learning from experience, and reasoning about various subjects. Strong AI would have a level of consciousness, self-awareness, and general problem-solving capabilities similar to humans.

Despite the claims of Google engineer Blake Lemoine, it seems clear that we are nowhere near achieving Strong AI (AGI). All apps and programs currently in use, including ChatGPT, are examples of Weak AI. They are ‘designed and trained for a specific or narrow range of tasks’. In other words, they are tools, just as much as the power looms of the late 18th century.

That does not mean that the introduction of even Weak AI is without risk. Like all disruptive technologies, it will create stress as older ways of working are replaced. Jobs will be lost, new jobs will be created, almost certainly not for the same people. If steam transformed the 19th century, and computers the 20th, Weak AI is likely to trigger a wave of change that will be an order of magnitude greater. We are beginning to see some of the potential in health care. It is already aiding in diagnostics, drug discovery, personalized treatment plans, and more.

Concerns raised by AI ‘training’

The ultimate goal in training an AI is to enable it to find patterns in data. Those patterns are then used to make accurate predictions or decisions on new, previously unseen data. This process requires huge computational resources and careful design. The vast amounts of data involved, whether in training or in day-to-day use, raise significant ethical concerns.
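
In practice, ‘previously unseen data’ is simulated by holding part of the dataset back. A minimal sketch, using scikit-learn’s built-in handwritten-digits dataset (any labelled data would do):

    # Train on one portion of the data, then measure how well the learned
    # patterns transfer to data the model has never seen.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    # The score on the held-out set is the figure that matters: it
    # estimates performance on genuinely new data.
    print(f"Accuracy on unseen data: {model.score(X_test, y_test):.2%}")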

Ethical concerns

Examples include:

  • Biases present in training data are likely to lead to biased outcomes. There is a need for transparency and accountability in the training of AI systems, but the sheer volume of data involved makes this difficult. AI can help, but…
  • Privacy concerns also arise in the use of training data. Improper handling of data or breaches in AI systems could lead to identity theft, unauthorized access to sensitive information, and other security risks. AI could also be used aggressively, to identify weak points in data protection, to break in and generally to create disruption.
  • Facial recognition systems could increase security by identifying potentially suspicious behaviour but at the risk of loss of privacy. Bias in the training data could lead to inequality of treatment in dealings with some ethnicities.
  • Adoption of AI in medical contexts potentially creates risk for patients. Avoiding risk requires rigorous testing, validation, and consideration of patient safety and privacy.
  • AI-generated content, such as art, music, and writing, raises questions about creativity and originality and challenges traditional notions of copyright or even ownership of the content.
  • Self-driving vehicles may enhance convenience and safety, but ethical issues arise in, for example, weighing the safety of the vehicle’s occupants against that of pedestrians and other road users.
  • AI can be used to optimize energy consumption, monitor pollution, and improve resource management. On the other hand, the energy demands of training AI models and running large-scale AI systems could contribute to increased energy consumption if not managed properly.

Fears about AI

Expressed fears of what AI might lead to range from the relatively mundane to the apocalyptic. The more dramatic risks generally relate to AGI, not narrow AI, although the distinction is often fudged. There are serious risks, of course, even with ‘Weak’ AI.

Impact on employment and skills

Widespread use of AI could lead to mass unemployment (although similar fears were expressed in the early years of the Industrial Revolution). This is one of the concerns of the Writers Guild of America in their present strike. Writers wanted artificial intelligence such as ChatGPT to be used only as a tool that can help with research or facilitate script ideas, and not as a tool to replace them. SAG-AFTRA, the actors’ union, has also expressed concern over the use of AI in its own strike.

To a limited extent, they are too late. Forrest Gump, Zelig, Avatar, The Matrix, and Toy Story, all in different ways, used elements of what will need to be in any workable film AI. There has been talk for many years about using computer-generated avatars of deceased actors in live-action filming.

Military AI

Military use of AI could lead to life-and-death decisions being made by machines with no human involvement. Associated risks include the possible use of AI by bad actors in cyber warfare. ‘Deep Fakes’, such as those of Volodymyr Zelenskyy and Barack Obama, are only the start.

Text-based AI

I have had only limited experience with text-based AI. I have used ChatGPT to support my research into AI, but I don’t know how it would cope with longer pieces like this blog post. None of the other programs I have tried have produced anything beyond banal superficialities. Too often the output is junk. There have been reports of the AI simply making things up, which paradoxically is probably the closest it gets to originality. With the current state of development, I would not believe any publicly available AI which claimed to generate original text.

Issues for education

Even so, packages like ChatGPT seem already to be causing problems in education. Unsurprisingly, plagiarism checkers are also being deployed, although they probably use AI too! So far as I can tell, these text generators don’t provide academic references as normally used, or any link to their original sources. If developers can crack that, then there will be a real problem, not just for schools and universities.

I asked ChatGPT to list some further reading on issues around the use of AI. It gave me what looks to be a good, varied list, probably as good as anything I might find by a Google search. I asked three times, getting differently formatted results each time. Version one gave me 2 or 3 titles under a range of headings, version two gave me a list of ten titles, while version three organised the list by books, articles and online resources. There is a significant overlap, but the content varies between the three versions. None of them are properly formatted to provide references, although enough information is given to find them.

I asked again, for a list in Harvard style, and got a list of 10 books and journal articles. When asked, I got details of the requirements for Harvard-style references. A further request gave me a list of other referencing systems in use in academic circles.
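
For anyone who wants to repeat the experiment programmatically rather than through the chat interface, the sketch below uses the openai Python package (the v1-style client). The model name is only an example, and it assumes an API key is set in the environment.

    # A sketch using the `openai` Python package (v1-style client).
    # Assumes OPENAI_API_KEY is set; the model name is an example.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": "List further reading on the ethics of AI art, "
                           "formatted as Harvard-style references.",
            }
        ],
    )
    print(response.choices[0].message.content)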

Conclusions

Strong AI has been a common fear for centuries. From Pygmalion or Pinocchio to the films 2001 or AI (in itself a partial retelling of Pinocchio), similar tropes arise. I covered the same theme in my short stories, here and here.

Weak AI

The use of even weak AI raises many complex moral and philosophical questions. Some of these, such as the Trolley Problem, were once just interesting ways to explore ethical issues; now, faced with self-driving cars, they have become real. Using AI in any form of decision-making process will raise similar problems.

There is still a long way to go to get ‘intelligence.’ If it happens, I suspect it will be an accident. Eventually, though, I believe even ‘weak’ AI will be able to provide a convincing facsimile. Whether dependence on AI for critical systems is desirable is another question.

AI as a research tool

In the more limited area of text generation, ChatGPT, used as a research tool, appears to work as well as Google, without the need to trawl through pages of results. On the other hand, it is probably the later pages of a Google search which provide the equivalent experience to the chance find on the library shelves. Without careful choice of words, it seems probable that AI search tools will ignore the outliers and contrary opinions, reinforcing pre-existing biases. But then, the researcher has to actually look at that chance find on the shelf.

I asked ChatGPT to list some further reading on issues around the use of AI and to include some contrary views. This it duly did, giving 2 or 3 references under a range of sub-topics with an identified contrary view for each. As a research tool, I can see that AI will be very useful. Its use will require care, but no more care than is required in selecting and using reference material found in any other manner.

On the other hand, it seems likely that, whether we like it or not, within a few years, AI packages will be able to generate a good facsimile of original text. A good proportion of journalism is already generated in response to press releases. How long before we get AI-generated news reports created in response to AI-generated press releases about AI-developed products? All untouched by human hands…