I've written before about artificial intelligence (AI) and its impact on the art community. Commercial artists, especially, worry about AI bots mining their work for use in image-generation systems without compensation. But now there's a possible solution: Glaze.
Glaze was created jointly by an academic research group at the University of Chicago and several professional artists. The inventors say:
Glaze is a tool to help artists to prevent their artistic styles from being learned and mimicked by new AI-art models such as MidJourney, Stable Diffusion and their variants. It is a collaboration between the University of Chicago SAND Lab and members of the professional artist community, most notably Karla Ortiz. Glaze has been evaluated via a user study involving over 1,100 professional artists. [I participated in the study.]
Basically, the tool makes subtle changes to an artist's image, causing the AI to see it as having been made by another hand. (The Glaze team calls this change "cloaking," and it's nearly undetectable by human eyes.) When the AI sees a cloaked image of one of my paintings, for example, it may see my style as being more akin to Van Gogh's. If I upload many cloaked images to the Internet and these get incorporated into the AI's training dataset, over time the AI will "learn" that this Van Gogh-like look is my style. So, when someone prompts the AI to "paint a landscape in the style of Michael Chesley Johnson," it will generate one that looks like a Van Gogh.
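For readers who code, here's a toy sketch of the core idea. This is not Glaze's actual algorithm (Glaze computes an optimized perturbation that shifts the image's style features toward another artist's); it only illustrates the basic constraint the paragraph describes: every pixel change stays below a small visibility budget, so human eyes barely notice while a machine sees different numbers. The function name and the budget value are my own invention for illustration.

```python
# Conceptual illustration of "cloaking": nudge each pixel by a tiny,
# bounded amount. NOT Glaze's method -- Glaze optimizes the perturbation
# to mislead style-learning models; here the perturbation is just random.
import numpy as np

def cloak(image: np.ndarray, budget: float = 4.0, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` (pixel values 0-255) with every pixel
    shifted by at most `budget`, keeping the change nearly invisible."""
    rng = np.random.default_rng(seed)
    # In Glaze, this perturbation would be carefully optimized, not random.
    perturbation = rng.uniform(-budget, budget, size=image.shape)
    return np.clip(image + perturbation, 0.0, 255.0)

original = np.full((4, 4, 3), 128.0)  # a flat mid-gray "image"
cloaked = cloak(original)
# Every pixel moved, but never by more than the budget of 4/255.
assert np.max(np.abs(cloaked - original)) <= 4.0
```

The trade-off the post mentions follows from this picture: a larger budget (Glaze's higher "protection level") gives the optimizer more room to shift the style, but also makes the change more visible.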
But I see a couple of problems. First, the software is slow. The tool lets you choose the degree of protection, but the stronger the protection, the longer the process takes, up to many minutes per image. Also, because the tool runs locally rather than server-side, installation requires downloading several gigabytes of resource files. (And these will need to be updated periodically.) No doubt these issues will be addressed in future versions.
But I see a bigger problem, one that has to do with the proportion of cloaked to non-cloaked images in the dataset. Here's an example. Frank Frazetta was an extremely popular illustrator in the science fiction and fantasy industry for many decades. If I do a Google search on "Frank Frazetta images," I'm told there are 2.4 million results on the Internet. No doubt most if not all of these images are already in the dataset. Cloaking won't affect any of these images; it only works on new uploads. I expect it will take a long time for enough cloaked images to be added to make a difference for Frazetta. (Although perhaps the AI might develop a preference for "learning" from newer images rather than older, which would help.)
Will I use the tool? Probably not, since I'm not a commercial artist and don't share their concerns. But I do find it interesting to watch how this plays out. Maybe I'll worry more when 3D printers capable of spitting out actual oil paint are paired with AI image generation to create wall art in my style. But I'm too much of an experimenter to have a particular style, so maybe not.
If you need more background on how AI image generation works, read my other posts on AI as well as the FAQ on the Glaze site. You can also download the tool there.
Here are zoomed-in sections of each of the three images above. I can see a small difference in each, with the largest difference in the Level 75 version.
Original
Level 25 Cloaking - somewhat noticeable
Level 75 Cloaking - very noticeable