Me and AI
I've been thinking about my art and AI. I don't use AI, as you may know--if one takes it as fundamental to AI that it learns, which seems reasonable as a necessary condition for AI.
There will be terrific works of art in which AI is beautiful and crucial. But there will be many more in which it is an inconsequential fashion statement. It's funny that programmed art is so affected by dev fashion. For the sake of strong work, it's important not to let programmer fashion dictate how we pursue excellence.
AI is not a silver bullet for creating great generative art, or computer art more broadly. It has great promise, but sometimes other approaches are preferable.
There's currently an AI gold-rush going on. I have seen a previous gold-rush: the dotcom gold-rush of 1996-2000. It's in the nature of a gold-rush that people flock to it, misunderstand it, and create silly work with it that is nonetheless praised.
For many years, I have created programmed, generative computer art, a type of art that is often associated with AI techniques.
The Trumpling characters (and other visual projects) that I am able to create have about them, as you may have noted, a range and quality that challenges more than a few art AIs. As art. As character. As expressive. As intriguing. As fascist chimera / Don Conway at http://vispo.com/aleph4/images/jim_andrews/aleph/slidvid12, for instance.
The thing is this: it takes me some doing to learn how to create those. Both in the coding/JavaScript--and then in the artistic use of Aleph Null in generating the visuals, the 'playing' of the instrument, as it were, cinematically. That takes constant upgrades and other additions to the source code, so that I can explore in new ways, continually. Or I can stop for a while and explore what is already present in the controls, the instrument.
Some of the algorithms I've developed will be developed further; my work is the creation of a "graphic synthesizer"--a term I believe I invented--a multi-brushed, multi-layered, multi-filled brushstroke where brushes have replaceable nibs and many, many parameters are exposed as granular controls. dbCinema was also a "graphic synthesizer" and a "langu(im)age processor" (another term I made up). I started dbCinema around 2005. I started Aleph Null in 2011. It's 2019 now. I've been creating graphic synthesizers for some time.
If I understand correctly, what AI has to offer in this situation is strong animation of the parameters. Its learning would be in creating better and better animations without cease. Well, no, not really. Not 'without cease'. It could be cyclic. And probably is.
It's as good as the training data--and what is done with the training data: which images are grouped together, how they're grouped, and so on.
Instead of using AI, my strategy is this:
- Create an instrument of generative art that allows me and other users to learn how to create strong art with it. There is learning going on, but it's by humans.
- Expose the most artistically crucial parameters (of the architecture below) as interactive controls, so that human decisions--especially my own--operate on some of those parameters. That is, Aleph Null and dbCinema are instruments that one plays.
- A control is allowed only if you can see the difference when you crank on it.
- The architecture: a 'brush + nib' paradigm, and layers, in an animation of frames. (There's a code sketch after this list.)
- A brushstroke: a shape mask to give the stroke its shape + a fill of the resulting shape. Any shape. Possibly an animated shape mask, so the shape changes + dynamic, somewhat random fills sampled from a folder of images--or, eventually, a folder of videos. There are text nibs, also, so that a brushstroke can be a letter, word, or longer string of text, possibly filled with samples of images.
- The paint that a brush uses can be of different types: a folder of images; a folder of videos; a complex, dynamic gradient; a color. A brush fills itself with paint from its paint palette (the brush samples from its paint source) and then renders at least one brushstroke per frame.
- Each brush has a path. Can be random, or exotic-function-generated. Can be a mouse path--or finger path.
- A brush is placed in and often moved around in a layer. Can be moved from layer to layer.
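To make the paradigm concrete, here's a rough sketch of it in JavaScript. The names here (Nib, Brush, Layer, and their methods) are invented for illustration--this is not the actual Aleph Null source:

```javascript
// A rough sketch of the brush + nib + layer paradigm.
// All names are illustrative, not the actual Aleph Null code.

class Nib {
  // A nib gives the brushstroke its shape: any shape mask,
  // possibly animated, possibly a letter, word, or string of text.
  constructor(clipShape) {
    this.clipShape = clipShape; // (ctx, t) => clips ctx to the stroke's shape at time t
  }
}

class Brush {
  constructor({ nib, paint, path }) {
    this.nib = nib;     // replaceable: swap nibs on the same brush
    this.paint = paint; // image folder, video folder, dynamic gradient, or color
    this.path = path;   // (t) => {x, y}: random, exotic-function, mouse or finger
  }
  stroke(ctx, t) {
    const { x, y } = this.path(t);
    ctx.save();
    ctx.translate(x, y);
    this.nib.clipShape(ctx, t); // mask the stroke to the nib's shape
    this.paint.fill(ctx, t);    // sample the paint source into the masked shape
    ctx.restore();
  }
}

class Layer {
  constructor(canvas) {
    this.ctx = canvas.getContext('2d');
    this.brushes = []; // brushes are placed here and can move between layers
  }
  renderFrame(t) {
    // each brush renders at least one brushstroke per frame
    for (const brush of this.brushes) brush.stroke(this.ctx, t);
  }
}
```

The parameters those objects expose--nib shape, paint source, path, layer--are the ones the interactive controls crank on.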
Where could AI help Aleph Null? One could either concentrate on making Aleph Null more autonomous or use/create AI that acts as a kind of assistant to the human player of the instrument.
If the former, i.e., if one concentrates on creating/using AI that makes Aleph Null more autonomous as an art machine--more autonomous from human input--then usually that requires an evaluation function, something that evaluates the quality of an image created by Aleph Null or used by Aleph Null, in order to 'learn' how to create quality work. Good data on which to base an evaluation function is difficult to come by. You could use the number of 'likes' an image acquires, for instance, if you can get that data from Facebook or wherever. Getting your audience to rate things is another way, which usually doesn't work very well.
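To be concrete about what I'm declining: the approach looks roughly like the sketch below, in which scoreImage, the weights, and the passed-in functions are all invented for illustration--none of this exists in Aleph Null:

```javascript
// A crude sketch of the evaluation-learning approach I'm not taking.

// An evaluation function: maps an image (as a feature vector) to a quality score.
// The weights would be fitted to whatever data you can get -- 'likes', ratings --
// and the result is only as good as that data.
function scoreImage(features, weights) {
  return features.reduce((sum, f, i) => sum + f * weights[i], 0);
}

// The learning loop: generate configurations, keep the ones that score well.
// generateConfiguration and renderFeatures are hypothetical, passed in by the caller.
function searchForBest(generateConfiguration, renderFeatures, weights, rounds) {
  let best = null;
  let bestScore = -Infinity;
  for (let i = 0; i < rounds; i++) {
    const config = generateConfiguration(); // a random-ish brushSet
    const score = scoreImage(renderFeatures(config), weights);
    if (score > bestScore) { bestScore = score; best = config; }
  }
  return best; // machine 'taste', inferred from like counts
}
```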
My strategy, instead of this sort of AI, will be to create 'gallery mode'. Aleph Null won't be displayed in galleries as an interactive piece until 'gallery mode' has been implemented. There'll be 'gallery mode' and 'interactive mode'. Currently, Aleph Null is always in 'interactive mode'. One of the pillars of 'gallery mode' is the ability to save configurations. If you like the way Aleph Null is looking, at any time, you can save that configuration. And you can 'play' it later, recall it. And you can create 'playlists' that string together different saved configurations. We normally think of a playlist as a sequence of songs to be played. This is much the same thing, only one is playing a sequence of Aleph Null configurations.
A configuration is a brushSet, i.e., a set of brushes that are configured in such and such a way.
Playlists will allow Aleph Null to display varietously without the gallery viewer having to interact with Aleph Null. Currently, in 'interactive mode', the only way Aleph Null will display varietously is if you get in there and change it yourself.
When you save a configuration, you also assign it a duration to play, so that when you play a playlist--a sequence of configurations--each configuration plays for its duration before transitioning to the next.
When Aleph Null is displayed in a gallery, by default, it will be in 'gallery mode'. It will remain in gallery mode, displaying a playlist of configurations, until the viewer clicks/touches Aleph Null. Then Aleph Null changes to 'interactive mode', i.e., it accepts input from the viewer and doesn't play the playlist anymore. It automatically reverts to 'gallery mode' when it has not had any user input for a few minutes.
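Here's a sketch of how I picture the two modes working. The names, the placeholder configurations, and the exact idle timeout are assumptions, not finished code:

```javascript
// A sketch of gallery mode / interactive mode switching.

const IDLE_MS = 3 * 60 * 1000; // 'a few minutes' of no user input

let mode = 'gallery';
let idleTimer = null;

// Illustrative placeholders -- in reality these are saved brushSets:
const savedConfigA = {}, savedConfigB = {};
function loadConfiguration(brushSet) { /* restore the brushes from the brushSet */ }

// A playlist: a sequence of saved configurations, each with a duration.
const playlist = [
  { brushSet: savedConfigA, durationMs: 60 * 1000 },
  { brushSet: savedConfigB, durationMs: 90 * 1000 },
];

function playPlaylist(i = 0) {
  if (mode !== 'gallery') return; // interactive mode: stop advancing
  const item = playlist[i % playlist.length];
  loadConfiguration(item.brushSet); // recall the saved configuration
  setTimeout(() => playPlaylist(i + 1), item.durationMs);
}

function onUserInput() {
  mode = 'interactive'; // a click/touch hands control to the viewer
  clearTimeout(idleTimer);
  idleTimer = setTimeout(() => {
    mode = 'gallery';   // no input for a few minutes: back to the playlist
    playPlaylist();
  }, IDLE_MS);
}

document.querySelector('canvas').addEventListener('pointerdown', onUserInput);
playPlaylist(); // gallery mode by default
```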
This idea of saving configurations and being able to play playlists, which are sequences of saved configurations/brushSets, is something I implemented in the desktop version of dbCinema. And this seems more supportive of creating quality art than an AI evaluation-learning model. Better because humans are saving things they like rather than software guessing/inferring what is likable.
Anyway, years ago, I decided that I probably wouldn't be using AI because I want to spend my time really making art and art-making software. One can spend a great deal of time programming a very small detail of an AI system. My work is not in AI; it's in art creation. The only possibility for me of incorporating AI into my work is if I can use it as a web service, i.e., I send an AI service some data and get the AI to respond to the data, rather than me having to write AI code.
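Something like the following, say--the endpoint URL and the response fields are made up for illustration:

```javascript
// Sketch of using AI as a web service rather than writing AI code.
// The URL and the shape of the response are invented for illustration.

async function suggestParameters(currentConfig) {
  const response = await fetch('https://example.com/ai-service', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ config: currentConfig }),
  });
  if (!response.ok) throw new Error(`AI service error: ${response.status}`);
  const data = await response.json();
  return data.suggestedParameters; // apply to the brushes, or ignore
}
```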
But, so far, I think my approach gives me better results than what I'd get going the AI route. The proof is in the pudding.