Jeremiah Krage’s Post

Creative Consultant & Culture Futurist focused on emerging tech; helping brands leverage character-based IP ✨ Human + AI advocate

Great tutorial on creating consistent fictional characters from Henry Daubrez 👀 👇

Henry Daubrez

Global VP of Design & Creative Innovation / Partner at DEPT® 🏴☠️ | CEO & CCO at DOGSTUDIO/DEPT® 🐕 | ATRBUTE-represented GenAI Artist 🎨 | Generative AI Angel Investor & Advisor 🤖

I have found a small trick for generating videos with my own consistent fictional characters through the very new "AI custom model" training in KLING AI. Normally, the feature lets you record a few minutes of your very real self making different expressions, upload that footage to train the platform on your face, and then apply it across different videos. Now, what happens if I don't want my face, but the face of a character I previously designed?

1/ First, I used Blendbox by Blockade Labs to create the character I was going to use for the training...quite frankly, Midjourney, Leonardo, Comfy, Everart, whatever...just create the person you want in your videos. Spoiler alert: you can't stray too far from the character you create, and portraits work best. In this case I framed a little too far out, but it still worked. I would try again with a closer view if it didn't eat so many credits in the first place.

2/ Once I had my character, I uploaded it into Kling's image-to-video and generated only 10-second videos, because the model trainer needs videos of at least 10 seconds. I tried different situations, expressions, and camera movements, and kept only the clips with no hallucination. The rest? Garbage.

3/ I headed to the "AI custom model" section, uploaded a front-facing video I had generated to start with, and then, in the next step, uploaded the others. Why did I use generations from Kling itself for the training and not from another video tool? Simply because hallucinations stay minimal and the resolution is high enough: the tool requires you to upload videos in the highest possible quality. If your videos aren't consistent, if the character is morphing, etc., the tool will tell you the video cannot be used.

4/ *Ting*, the model is now cooked and ready to use. You can head to the text-to-video section and start generating. Kling allows prompts of 2,500 characters and has decent prompt adherence, so you might want to really describe the character, setting, etc., because the model you just trained covers only the face.

Side note: I spent a ton of credits generating the training videos, and the training itself was another 999 credits, but as you upload the training data, Kling tells you when you have enough. In this case I settled about halfway, but if you have the courage and the credits to create more videos (I used 14 clips of 10 seconds each), you can likely get better results.

Side note 2: Now that you can generate the same character again and again, nothing prevents you from using Kling's lip-sync feature to get them to speak. Which I did...but you will never hear it, because I made a GIF out of it...you know, social media limitations...
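For anyone who would rather script step 2 than click through the UI, Kling also exposes a public API, and batching the training clips could look roughly like the sketch below. Everything in this post was done in the web UI, so treat this as a sketch only: the endpoint paths, field names, JWT auth scheme, and model identifier are assumptions based on Kling's published API docs, not something verified here.

```python
# Hypothetical sketch of step 2: batching 10-second image-to-video clips
# through Kling's REST API. Endpoint paths, field names, and the JWT auth
# scheme are assumed from Kling's public API docs -- verify before use.
# Requires: pip install requests pyjwt
import time

import jwt       # PyJWT
import requests

BASE_URL = "https://api.klingai.com"     # assumed API host
ACCESS_KEY = "your-access-key"           # placeholder credentials
SECRET_KEY = "your-secret-key"

def api_token() -> str:
    """Build the short-lived JWT the API expects (assumed scheme)."""
    now = int(time.time())
    payload = {"iss": ACCESS_KEY, "exp": now + 1800, "nbf": now - 5}
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def start_image_to_video(image_url: str, prompt: str) -> str:
    """Submit one image-to-video task; returns the task id."""
    resp = requests.post(
        f"{BASE_URL}/v1/videos/image2video",
        headers={"Authorization": f"Bearer {api_token()}"},
        json={
            "model_name": "kling-v1",  # assumed model identifier
            "image": image_url,        # the character portrait from step 1
            "prompt": prompt,          # vary expression / camera move per clip
            "duration": "10",          # trainer needs clips of at least 10 s
            "mode": "pro",             # highest quality for training data
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]["task_id"]

def wait_for_video(task_id: str, poll_seconds: int = 15) -> str:
    """Poll the task until it finishes; returns the generated video URL."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/v1/videos/image2video/{task_id}",
            headers={"Authorization": f"Bearer {api_token()}"},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        if data["task_status"] == "succeed":
            return data["task_result"]["videos"][0]["url"]
        if data["task_status"] == "failed":
            raise RuntimeError(data.get("task_status_msg", "generation failed"))
        time.sleep(poll_seconds)

# Example: loop over varied prompts to collect the training clips.
PORTRAIT_URL = "https://example.com/character-portrait.png"  # placeholder
prompts = [
    "front-facing portrait, neutral expression, slow push-in",
    "three-quarter view, warm smile, gentle pan left",
    "looking off-camera, surprised expression, static shot",
]
clip_urls = [wait_for_video(start_image_to_video(PORTRAIT_URL, p)) for p in prompts]
```

Keeping the prompts varied across expressions, angles, and camera moves mirrors what the trainer wants; you would still review each clip by hand and throw out anything that hallucinates before uploading.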
