> The team is going to great lengths to make this ethical and fair (try and generate a photo of a copyrighted character like Hello Kitty or Darth Vader)
are you saying it won't work? if that's the case, that seems really silly. actually, it goes against everything I believe in (as well as my understanding of even the kindest meaning of the word "hacker"). it drives me up the wall, it makes my blood boil
who is going to stop me from drawing hello kitty myself?
it's not the tool's job to regulate my creativity. the law exists to regulate the use of my art, not the act of creating the art. I can draw hello kitty all I want and leave it in my drawer, if it floats my boat
limiting the tool just makes me never want to use it. you're like Sony fighting digital music in the 2000s. the future is right in front of you but you just can't see it.
GP works for Adobe, and Adobe's bread and butter are the professional creators who would love a world where there is hardware DRM on your eyes and you can't even see their creations or a likeness of them without paying a royalty (and one to "rent" the memories of the visualization, not to "own" the memories like we do now). While I largely agree with you, the GP post is exactly what I would expect from an Adobe person.
> "it's not the tool's job to regulate my creativity. the law exists to regulate the use of my art, not the act of creating the art. I can draw hello kitty all I want and leave it in my drawer, if it floats my boat"
And you, good sir, are one of the few people who keep my sanity in check. Thanks for calling out the tyrants :) - they are terrible and, just like the kings of old, want constant praise and their rings kissed for the blessing of imposing serfdom on all but their chosen nobility
I think there is a broader phenomenon in society of complete disregard for law and order, and of taking it into one's own hands. Enabled by Big Tech and giant ESG corporations, this is a method of sidestepping written laws, the constitution, and the judicial process, and force-feeding the public with an authoritarian whip. It is anything but 'democratic', hiding under ostensibly kind causes such as safety, climate, misinformation, etc.
It's not a good look for the tech industry and the ESG corporate culture. It is destroying the very hacker culture that made them.
> "the law exists to regulate the use of my art, not the act of creating the art"
Thanks airstrike, for dropping this truth bomb.
And just like that, Adobe expands its ministry of creativity powers. In the beginning, it was dollar bills on the no-can-do list. Now it's pictures of Darth Vader and hello kitty.
This runs into the core problem with technology--we answer "What can we do" before "Should we do it" and "What are the impacts"
Let's say you take your hello kitty dot art and make a poster promoting a commercial event. You then take it to FedEx Kinkos and use a self-service copy machine to make 1000 copies. You could reasonably argue that you are committing copyright infringement, and the photocopier / FedEx Kinkos isn't.
Now instead, you have AI generate a poster, and it generates a very similar image to hello kitty. It's arguably so similar that a reasonable person would say it's a copy. You take that poster and again make 1000 copies. Is there copyright infringement? If so, who, if anyone, is liable for damages?
Whoever put the poster up for display and reaped some reward out of it is liable for damages. Everyone else is just doing their job in the supply chain. We want supply chains to work for the good of the economy, which is a proxy for increasing availability and reducing prices of "goods and services" to the average person.
They use Firefly to generate a poster, and unbeknownst to them, the image it generated is a reasonable facsimile of a copyrighted/trademark character.
The person has inadvertently committed copyright infringement.
So does Firefly need to come with a warning?
The safer solution, to the chagrin of another commenter, is for Adobe to neuter the tool by only training on data in which Adobe has express permission to use.
Surely with all our contemporary AI prowess we can train a model that identifies "reasonable facsimiles of copyrighted/trademark characters" after generating them and alert the user that it could be argued as such. Still, let the user decide.
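A minimal sketch of what such a "flag, don't block" check could look like, assuming (purely hypothetically) that the generator exposes an embedding vector for each output image and that reference embeddings for known characters are available; no such Firefly feature is implied, and the toy vectors below stand in for real image features:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flag_possible_facsimiles(image_embedding, reference_embeddings, threshold=0.9):
    """Return names of reference characters the image closely resembles.

    The user sees the warning and decides what to do;
    generation itself is never blocked.
    """
    return [
        name
        for name, ref in reference_embeddings.items()
        if cosine_similarity(image_embedding, ref) >= threshold
    ]

# Toy embeddings standing in for real image features (an assumption).
refs = {"hello_kitty": [1.0, 0.0, 0.2], "darth_vader": [0.0, 1.0, 0.1]}
print(flag_possible_facsimiles([0.95, 0.05, 0.21], refs))  # → ['hello_kitty']
```

The threshold is the whole policy question in one parameter: set it high and only near-copies get flagged, set it low and you drown users in false positives.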
We do not need creative technology to regulate observance of copyright law.
(By the way I think the chagrined other commenter was yours truly ;-))
With that approach you risk ending up in a very frustrating loop of copyrighted works... A bit like picking a name in an MMORPG that's been out for a few months ends up being a hell of constantly getting your name requests rejected over and over again.
A simple warning that what’s been generated looks similar to something that’s copyrighted is not a bad idea. Then it’s up to the AI user to do their due diligence if they intend to use the resulting work for commercial purpose. Neutering the tool from the get go is a step too far.
Reminds me of the recent ChatGPT GPT-4 virtue signaling in the update notes: "Now the AI will refuse more prompts, so it's safer" - yeah, this was requested by exactly 0 real human beings (lawyers and politicians excluded, of course)
I hate limitations as much as the next person, but these tools are viewed as generators by company xyz. You don’t want Disney to sue Adobe because the tool can circumvent IP and abuse it.
So, the above was somewhat flip and terse, but the kind of lawsuit being avoided is also the kind of thing that provides clarity on legal issues and removes spaces of doubt. This can be broadly beneficial.
Giants battling it out can result in a clearer environment for everyone else that couldn’t afford legal risk in an environment of doubt.
Does your art live on Adobe.com? I tried to make this clear. Currently "who made it" isn't as clear as it is for paper and pencil. AI isn't seen as a plain tool yet; it's part of the artist.
There will be open source tools replicating this within months. You can build your own model based on billions of images on the web or use someone else's or contribute to one.
To expand on this, what we're seeing with LLaMa is that you can fine-tune your model using other models.
It's not clear that the quality will be exactly the same (in fact it will very likely be worse), but working generators are essentially ways to quickly generate training data. And I can't think of a legal argument for why generated output from a model would be less legal to use as training data than an unlicensed photo off of DeviantArt.
Nobody has really called out OpenAI on this, but OpenAI has a clause in its TOS that you won't use output to build a competing model. But that's... just in its TOS. If you don't have an OpenAI account, it's not immediately clear to me (IANAL) why you can't use any of the leaked training sets that other people have generated with ChatGPT to help align a commercial model.
Certainly if someone makes the argument that generators like Copilot/Midjourney aren't violating copyright by learning from their sources, it's very hard to make the argument that Midjourney/Copilot output is somehow different than that and their output can't be used to help generate training datasets.