A Note on AI Art and Petitions

Okay, I’m not any kind of visual artist, but I need book covers like any other writer, and there is an impressive brouhaha going on amongst artists about AI art and how its datasets use other people’s art without their permission, or even knowledge. Meaning the copyright of any such art produced is legally murky.

I’ve commissioned individual artists in the past, and I’ve gotten covers from SelfPubBookCovers, which explicitly lists in their guidelines that AI art is not allowed, specifically to avoid copyright trouble for everyone. So I currently have no dog in this fight.

However, that whole “dubious ethics used to make pretty pictures” does strike a nerve. So… if anyone wants to poke around, I ran across some people who’re trying to lobby Congress in the US to tackle it for copyright laws, and their justifications for doing so. Here’s the info I’ve got if anyone wants to check it out.

Concept Art Association fundraiser to lobby against AI-art: https://www.gofundme.com/f/protecting-artists-from-ai-technologies

The woman artist behind this is in this video of three artists talking about AI: https://www.youtube.com/watch?v=Nn_w3MnCyDY

As of the 21st, Karla Ortiz reached out to contacts in the EU.

For anyone interested in Europe, here is an Italian fundraiser: https://www.gofundme.com/f/help-protect-our-art-and-data-from-ai-companies

Anyone who is an artist or has more info or opinions, feel free to comment what you think!


16 thoughts on “A Note on AI Art and Petitions”

  1. I dabble in art. I’m not nearly good enough to do book covers. But I really do not like the sound of this AI.

    I have no problem with digital, that’s just another medium. But this AI is definitely a very murky area for Copyright.

    Unfortunately, I can’t contribute anything.


  2. I just saw an article related to Machine Learning today….

    It covers why Machine Learning was applied to art in the first place– and it wasn’t to produce art, it was to identify it, since ‘style’ is such an… “I know it when I see it” thing.

    Besides that inaccuracy in their framing, the Concept Art folks are deliberately conflating two different things– sampling art AIs, and machine learning AI. They function entirely differently.

    It’s like, in music, if you combined remixing songs, and copyrighting a style or technique of music, and called them ‘music remixing.’

    One is defensible; the other would destroy all development in music. Imagine someone declaring that since they were the first to use a musical scale, no-one else was allowed to use it. Or someone who found a new way to tune their guitar claiming ownership of it, since all other uses would be “advanced music remixing.”

    They do seem to slightly mitigate the risk of multiple-artist projects being restricted, by tying the definition directly to work done by a human. I’m not sure how the various famous “painting animals” would be classified, though, and there is a large hole left for things like the computer animation used by Hollywood, and even video games.

    That kind of hopefully-unintended consequence would make it very difficult for the emerging threat of small studios to get established and challenge Hollywood.


      1. Sadly, yes– and my husband just pointed out it could catch video games, too.

        The big studios could probably manage something like the Disney-style lawsuits, but the folks who build their programs from the bottom up could have their rights invalidated, based on interpretation of whether the instructions used were sufficiently “real” art. (Especially since the folks making decisions are unlikely to be even a little familiar with technology.)


  3. There’s at least three different angles to look at this from, and not a lot of room for coming to a broad consensus.

    First, the lawyers, and what they currently say. Very particular way of thinking, and normally there would be some grounds for ‘trust the lawyers to figure it out’. Not right now; the profession has screwed itself over.

    Second, what do people sense about art and ‘art’? How does it work? There are definitely people who believe that they perceive the soul of the human artist in the work.

    Then there is the electrical engineer and computer scientist concept of information science. Which leads to some summaries that can sound pretty flippant to someone not versed in the theory. Then the information scientist types can get salty and all ‘respect muh authoriteh’, which can be a wee bit unpersuasive.

    Music is a good example for explaining what on earth the information science types think they are saying. Music is a mechanical vibration in air. (You can write music on paper, along with lyrics, but that is upstream of what I’m saying.) At different points around the performers, the vibration waves passing through the point are going to be different.

    When you strike an object, it vibrates. For the simple case of something like a drum head, if you don’t break it, no matter how you put the energy in, after a while the energy coming out will be distributed over the ‘natural frequencies’ of the drum head. The size, thickness, and something to do with the strength or tension of the drum head, or the string, or whatever, can be used to calculate the harmonic frequencies – the frequencies it vibrates at, all other things being the same. Your first harmonic frequency is the lowest. IIRC, the other harmonics are integer multiples of the first, so f2=2*f1, f3=3*f1, etc. There is going to be a lot of energy at the first harmonic frequency, and less at the higher ones. This is relevant because there is debate over how high in frequency humans can hear, and about whether standard digital music is subtly off in duplicating the harmonics of real music.
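    That integer-multiple relationship is easy to sketch. A toy illustration, assuming an idealized string or membrane mode where every overtone is an exact multiple of the fundamental (real instruments deviate somewhat):

```python
# Idealized harmonic series: each overtone is an integer multiple of the
# fundamental frequency f1, so f2 = 2*f1, f3 = 3*f1, and so on.
def harmonics(f1, n):
    """Return the first n harmonic frequencies (Hz) for fundamental f1."""
    return [k * f1 for k in range(1, n + 1)]

# An A2 string at 110 Hz: the overtones land at 220, 330, 440 Hz.
print(harmonics(110.0, 4))  # [110.0, 220.0, 330.0, 440.0]
```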

    EEs get into the picture when you use a microphone or transducer to convert the mechanical vibration in air to an electrical ‘vibration’ in a wire. A microphone is the type of transducer that converts audio vibrations into electrical vibrations: the audio vibrations make a magnet move, the moving magnet generates currents in a wire, and those currents are your analog signal.

    There are reasons EEs these days like to avoid relying on analog signals, so they are usually going to convert an analog signal to a digital signal, which brings us to the theory of information.

    Analog signals are continuous in time and in value; digital signals are discrete in both. If your analog signal has a highest value of 10^-4 volts and a lowest value of 0, then a value resolution of 10^-5 assigns values into 10 bins, and a resolution of 10^-6 assigns them into 100 bins. And it will be assigning values at some rate, the sampling frequency. The bit of information theory relevant here: if your sampling frequency is twice the highest frequency you care about, you are still losing information, but not information that you care about.

    The standard digital music assumption is that 44.1 kHz sampling gives you the first ~22 kHz of signal, which is everything that humans hear. May or may not be true. Probably isn’t entirely true. There is definitely vehement disagreement with it.

    Anyway, the EEs do have opinions about exactly how much information is contained in an audio sample captured at a given bit depth and sampling frequency. Computer scientists look at different ways to compress that sample – to discard information from it so that it can still be used in many ways, but will need less space. Anyway, we can copyright lyrics, and we can copyright performances, and I think we can copyright scores.

    Images are also things that we digitally sample, and we can recognize a jpeg of the Mona Lisa as perhaps falling under the same copyright as the Mona Lisa. An image file is either a single matrix for greyscale, or three for an RGB image.

    Images/matrices happen to be one of the major ways of doing AI: machine learning/deep learning/artificial neural networks (ANN). (Related are convolutional neural networks (CNN).) This is a scheme that requires you to have a significant dataset that you ‘know the meaning of’, when you want to know the meaning of similar data. You make a ‘model’ that has several layers (matrices); at a minimum you want an input layer to feed data into, and an output layer to output the ‘meaning of the input’. Typically, you have a bunch of intermediate layers, with a bunch of internal connections. How the layers/matrices/images in the model talk to each other depends on ‘weights’. The training process is 1) plug an image into the input, 2) see what it does to the output, 3) adjust the weights so that it gives the correct output. I’m kinda confused how it is supposed to ensure that the previous settings for the images still work? Maybe it processes all the images for each variation in the weights?
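    A toy sketch of that three-step training cycle, shrunk down to a single weight and a linear “model” standing in for the layers of matrices (an assumed illustration, not any particular framework). Training typically re-visits the whole dataset many times – each full pass is called an ‘epoch’ – and makes only a small nudge per example, which is part of how earlier examples stay learned:

```python
# Toy training loop: fit the weight w in the model y = w * x.
# Real networks have layers of weight matrices, but the cycle is the same:
# forward pass, measure the error, nudge the weights to shrink it.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, known answer): y = 2x

w = 0.0    # start with an uninformed weight
lr = 0.05  # learning rate: how big each nudge is

for epoch in range(200):        # each epoch re-visits ALL the examples,
    for x, target in data:      # so no single example dominates the weight
        pred = w * x                  # 1) plug the input into the model
        error = pred - target         # 2) compare the output to the answer
        w -= lr * error * x           # 3) adjust the weight to reduce error

print(round(w, 3))  # converges toward 2.0
```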

    Anyway, if the model and weights are one megabyte, and the images used to train it were one terabyte, one conclusion would be that the model does not contain all of the information that the training images contained. Is this persuasive?

    Well, the model size is obvious, but the size of the images used to train it is something that you would have to trust the person providing the model. And, you would also have to trust someone more skilled in this AI approach if they say that they cannot extract any of the original images.

    But, with all of the trust violations occurring, why must anyone extend that trust?


    1. I was looking at it from the perspective of: there are known instances of specific artists’ work being scraped for the various AIs when the artists had explicitly said “no, nay, never, xnay.”

      That’s a violation of copyright.


      1. There has also been at least one case of an artist doing a drawing livestream and having someone take a screencap of the unfinished drawing, running it through an AI art generator, and uploading “their” finished art piece before the original artist even finished theirs.


      2. “There has also been at least one case of an artist doing a drawing livestream and having someone take a screencap of the unfinished drawing, running it through an AI art generator, and uploading ‘their’ finished art piece before the original artist even finished theirs.”

        That would be just as wrong if they were just really fast, copied what the livestreamer was doing, got to a set point, did their own (faster) technique, and then uploaded their copied-directly-as-the-livestream-was-going copy.

        It’s also a jerk move style of theft that has been going on since at least the Renaissance, where someone would read something published in one area, travel, and then publish it as his own idea. (Sometimes not even bothering to rewrite stuff.)


      3. I’m pretty sure that would only work for the “remix” style AI (under derivative works), not the “Machine Learning” style– style is specifically excluded from protection.

        Most of the cases where it would be really useful are things that we already deal with in fan art– my husband just did a five minute lecture (which the kids kind of listened to) using the Final Fantasy 7 part I re-release cover– copy the building, and no matter who you put in front, it’s definitely covered by copyright.
        (linking below)
        Have “a guy standing on a rock with fire in the background,” and even if he’s holding a sword and wearing a long coat, it’s not covered. Have Aerith standing on a stone in front of the spray from a crashing wave, and it’s definitely covered. (And Square will eat your lunch for selling it, but they have repeatedly and publicly stated they love fan stuff for personal use… they probably released this one, actually….)


      4. The trouble with machine learning *making* art is that copying art and then releasing it for cheaper than the “original” art it’s based on is nothing new. It is only the method that is new. And laws have trouble keeping up with new methods. All machine learning enables is copying art styles and releasing the art faster than it used to be possible. *A lot* faster, mind you… but it’s still the same system.

        I honestly wish people who complain about AIs copying art would complain about people copying art. It’s the same thing, just different volume/speed.

        And for some reason people aren’t going to complain if a *person* copies the art style of an artist who didn’t want their work copied (especially if they don’t intend to profit off it)… but they’ll complain if a computer just… reads the style as part of its data set. People are *constantly* doing what machine learning AIs do when they are fed data… we just don’t think of ourselves as doing that. But we are…

        As someone who learned my relatively rare art style that involves math by… copying people… I fail to see the issue with *making* art this way. Profiting from it? Sure. But trying to prevent AIs from being fed data? LOLOLOL… figure out how to make people ignore data first. That’s where the real problem has always lain.


      5. Um.

        Picture Hollywood. Picture all the stuff they’ve been doing with VFX to make ever more horrible movies.

        Picture Hollywood making use of this tool.

        Consider, perhaps, that we do indeed need something laid down on who can use what for their datasets before someone like Disney puts it to widespread commercial use.


      6. The awfulness of Hollywood is that they can’t find a decent story with a pile of books, several guides, and five hundred Lead Assistants In Charge Of Finding A Story That Approaches 90s Animation.

        Not their FX.


      7. What do you think most VFX is? It’s using a lot of reference material gotten from somewhere to make something “new” out of it. Or using someone’s particle generator (usually based on real-life physics), etc.

        The entire VFX industry is built with copying in mind. This is just the next step. No one did anything when they had humans doing the copying, so why would they do anything when AI is now doing that job?

        All the AIs are doing is performing a human process *faster* and at greater volume. And it’s that speed and volume people feel threatened by more than anything else. The actual thing the AI is doing is less the issue.


  4. Honestly, for me one of the big reasons I go “oh, heck no!” to AI-generated anything is the simple fact that art, both visual and audio, is bound up in a lot of emotion. The very intent of art, from the artist’s point of view, is to express an emotion, something AI is incapable of. AI-generated art would be kind of … Soulless. No matter how perfect the composition, it would still feel off, missing some spark.

