> Surveys consistently showed that consumers believed artists deserved payment when AI generated content in their style.
It's interesting that "consumers" are generally for the expansion of IP laws. At the moment, I'm fairly certain that "style" is not something protected by copyright. I personally do not want it to be, and I'm sure there are many like me. Poorly thought-out IP laws lead to chilling effects, DRM, stupid and unnecessary litigation, and ultimately a loss of digital freedoms.
> What 325 Cold Emails to Artists Taught Us
I'm surprised 1% didn't respond with "EAT HOT FLAMING DEATH SPAMMER" for sending them unsolicited commercial email. ;)
I thought this was a great write-up on the current state of artists and AI engines. I'm honestly surprised by this nugget:
> A free Tess subscription to use their own model for brainstorming and scaling repetitive work (roughly 1 in 4 artists took advantage of this)
So based on the math I'm seeing... of the 21 artists in the system, only about 5 ("1 in 4") opted to use the tool for their own productivity? That seems really low and makes me wonder what the user experience for creation feels like. I would assume that if you decided to commit to this endeavor, you would want to see what the derivative results look like.
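For what it's worth, here's a quick sanity check on the figures quoted around the thread (the 325 and 21 come from the post, "roughly 1 in 4" from the quoted nugget; the rounding is my assumption):

```python
# Sanity check on the numbers quoted in the post.
artists_emailed = 325   # cold emails sent (from the post's title)
artists_signed = 21     # artists who actually joined the platform

signup_rate = artists_signed / artists_emailed
print(f"signup rate: {signup_rate:.1%}")        # ~6.5%

# "roughly 1 in 4" of the signed artists used the free subscription
used_tool = round(artists_signed / 4)
print(f"artists using the tool themselves: ~{used_tool} of {artists_signed}")
```

So the "1 in 4" claim does work out to about 5 artists, and the cold-email conversion is roughly 6.5%.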
I evaluated Tess.design about a year ago for an app I was building. At first I was excited because I wanted a service that compensated artists. However the number of artists was very limited and the blog post said “more will be added soon” but it had already been a year and it seemed like none had been added, not a good sign.
Then I tested out the image generation itself and I was unable to come up with prompts that achieved the kind of images I wanted. My only prior experience at the time was OpenAI API. With OpenAI I usually got what I wanted on the first or second try, but with Tess, I couldn’t get a usable result even after 20 tries.
So in addition to the limited number of artists, I think the quality of outputs vs. competing models was a huge factor. I needed to generate thousands of images, so I couldn’t afford to do dozens of attempts for each one.
Hopefully one day there will be a service that can match the quality of OpenAI Image API and Flux but with compensation for artists.
> …every image was traceable to a single consenting artist
> …fine-tune a Stable Diffusion base model.
So your entire business proposition was a lie, as you literally used a base model trained on billions of images by other artists too!
They took a base model, so something trained on stolen work, and then added a veneer of non-stolen work. I too would be skeptical of their legal position.
The 1 in 4 artists actually using the model for their own work is the most interesting data point here. If you're building a royalty system and 75% of the people being paid don't even want to use the tool themselves, that tells you something about the gap between "this is fair compensation" and "this is actually useful to my creative process." The royalty model might be the right thing ethically but it doesn't solve the adoption problem.
I love this write-up; it's one of the refreshing looks into how startup innovation happens on the ground. We're inundated with new products and startups so often that it's easy to forget that the people working on the product are taking a bet with no promise of future payoff. In this case it didn't work out, despite the team putting in their hard work, sweat, and clearly a lot of stress.
Startups are not for the weak but the process detailed here is how we've gotten some of the most transformative and innovative products in technology. Props on attempting this unique idea; very sad that it didn't work out, but sometimes the market just can't support certain ideas!
They failed because they gave advances that were never going to be paid back and expected artists to bring in customers.
The demand to produce something in an artist's style is low. The volume required to make it interesting to artists isn't present.
Pushback against AI is greatest among artists; you would be better off asking them for money to shut down AI.
The tech itself sounds interesting and would love that writeup.
Props for a postmortem, much like scientific studies that publish negative results.
This reminds me of the articles I occasionally see in the local newspaper about a restaurant that is closing down. So often it’s one that I’ve never heard of before that. To me, that’s the number one issue. If your likely customer base (or at least an audience member who reads a lot about the industry/market) hasn’t heard about your product, how are you going to have a successful business?
I wonder, did they pay the artists whose art was taken without payment or permission to train the base model they are promoting? I guess we know the answer :)
I mean there's no point; everyone still gets super mad even in the cases where models were trained only on content that a company owns or has paid for.
I wish artists would stop with the "it stole our work" bullshit and just be more honest about the "it can do what we do and we're terrified and scared for our future" part.
Because that I can 100% understand, and contrary to previous jobs just disappearing, we do live in "the future" and things like UBI or free cross-training should be available for this sort of thing.
> One engineer who left Kapwing in fall of 2025 said that the short-lived Tess investment contributed to burnout.
Don’t take this personally.
Even if you told this person to work constantly and they believed in you and the business, it's not totally your fault that they burned out. I say this as someone who has burned out twice, is currently burned out, and blames those I currently and formerly worked for. I know the problem is as much me as them. Yes, employers have a responsibility not to burn out their employees. But if they do, even when the employer is in a position of power and the employee felt they had no other choice (and I felt that both times), the employee can still choose not to work that much or care that much, for whatever that's worth. If someone is literally holding a gun to your head, it's different, of course.
I know of a developer that committed suicide and the toll that took on the employer. But the employer can’t take on all of that themselves.
I’m sorry that your business failed, but I hope that something good comes out of this.
Also- I’m not saying that any part of your responsibility in burning out this person was ok. Just that not all of it is your fault.
I'm not a native English speaker, but since when did 'lessons' become 'learnings'?
As somebody who occasionally gets tiny ASCAP checks, I think an ASCAP/BMI model might work for artists (and maybe even writers?). I guess this is more like SESAC, but maybe that's how this will end up working.
Are there successful non-AI artist platforms for works of art?
The individual who figures out how to do this will be both wealthy and beloved.
the spotify comparison is telling because spotify succeeded by being better than piracy, not by being more ethical. tess was trying to compete on ethics against tools that were just flat out better at the actual job.
i generate hundreds of images weekly for video content and the honest truth is i never think "i want this specific artist's style." i think "i need a documentary still that looks like 1970s film grain" or "i need a character that matches my last 50 frames." consistency and speed matter way more than provenance. the few times i tried artist-specific fine tunes the quality was noticeably worse than just prompting a good base model well.
the 6.5% artist signup rate buried in there is actually the real story. they cold emailed 325 high end editorial artists and got 21. those artists didn't want passive income from AI - they wanted AI to not exist in their market at all. paying someone royalties to automate away their livelihood is a weird value prop no matter how you frame it.
How about this: few want one artist's particular style reproduced; instead they want what they are vaguely seeing in their head, produced from a cacophony of styles.
This article is bullshit. You can't get a full model from training on just one artist's work. A pretrained model is required. The pretrained model was likely one which was indeed trained on the works of others without consent.
What's more, their reasoning for abandoning the company was to build out another company with a suspiciously similar idea...
> The timing wasn’t right. We depended on artists helping us to promote the platform, and they didn’t.
There's a certain arrogance to believing the timing "simply wasn't right". It looks really bad if you try it with any recent controversy:
* "The timing wasn't right to charge people for heated car seats"
* "The timing wasn't right to make Photoshop a subscription service"
* "The timing wasn't right to increase fees"
It's a way of talking yourself away from the fact that what you are making may, inherently, be disliked. The cited survey even seems to have been read as favourably as possible:
> Surveys consistently showed that consumers believed artists deserved payment when AI generated content in their style.
This doesn't mean people want artists' styles to be generated by AI. It could mean they think it's horrible, but that if it happens, artists should at least be compensated for it. In fact, the quoted survey even says 43% believe companies should ban copying artists' styles. I could make the exact opposite argument with the same data:
"Many consumers believe companies should ban copying styles, and this may be a more common opinion than measured as most people have no experience with modern AI tools and therefore no chance to have made an opinion yet. What is known is that the majority believe that if artists were to be copied, they should at least be compensated"
edit: formatting, typo