The new generative AI video tools from Adobe, announced at Adobe MAX (https://lnkd.in/eBiwAV4u), have certainly got people talking. If you want more details, Creative Bloq has a write-up here: https://lnkd.in/eW6uQy2C, The Verge gives a solid overview here: https://lnkd.in/eMNgTRWe, and Scott Simmons offers a practical breakdown over on ProVideo Coalition here: https://lnkd.in/e6PWywfg.

These new tools hit both the excitement button and the existential-worry button in the AI creative space. When we press the excitement button, we get productivity game-changers like Generative Extend in Adobe Premiere. This could be huge for the type of content work I do at CreativeBloke (https://lnkd.in/eDE9XBaN), where I’m already using Generative Repair in Photoshop. More importantly, these new video production tools will really enhance the great work we do for clients as part of the international team of talented folks (and me) at StaticJoe Studio: https://staticjoe.com.

Then there’s the ‘existential worry’ button (which has to be large to fit all those feelings). After a few hours of ideation experiments in DALL-E last night, I’m amazed at how quickly AI image generation works, but there’s still that lingering sense of ‘negative discombobulation’ – as in ‘that would have taken me days to make,’ coupled with ‘nope, that’s not what I’m imagining, AI person, why can’t you see what’s in my head??’ Despite my years of professional experience, I can’t shake that feeling when using AI image and motion tools, especially when it comes to my own creative process. It makes me wonder about the human journey in creativity – that sense we have, as visual storytellers, of just knowing when something is right.
AI isn’t there yet, but the thought that it might one day ‘see what’s in Mike’s head’ does give me hope, and maybe I won’t need to press the ‘existential worry’ button so hard about the humanity of creation getting swamped by the sheer number of AI imagery tools appearing on this new frontier.

But back to the main point: these new Adobe tools are fantastic. From a production perspective, Adobe’s AI products have always been rooted in the commercial artist space, so they avoid the potential copyright issues we’ve seen with other AI tools. What do you think about these new Adobe tools? And, more importantly, should I start making T-shirts with ‘negative discombobulation’ on them?

#AdobeMax #AIinCreativity #GenerativeAI #ContentCreation #VideoEditing #CreativeWorkflows #StaticJoeStudio #CreativeBloke #VFX #TechInnovation
Mike Griggs’ Post
-
https://lnkd.in/df4cvWYs

Adobe's "Magic Fixup" represents a significant leap forward in the realm of photo editing, harnessing the power of artificial intelligence to transform the way we manipulate images. This innovative AI model, developed by Adobe Research, automates complex image adjustments while preserving the artistic intent behind every photo.

What sets Magic Fixup apart is its unique training on video data rather than static images. This approach allows the AI to understand the dynamic nature of objects and scenes, adapting to changes in light, perspective, and motion with unprecedented accuracy.

The core of Magic Fixup lies in its ability to perform 'spatial recomposition,' which enables users to rearrange elements within a scene quickly and seamlessly. For instance, if you wanted to move a piece of furniture in a room from a well-lit area to a shadowy corner, Magic Fixup would adjust the lighting on the object to match its new surroundings, maintaining a photorealistic result. This process, which could take hours of manual editing, is now achievable in mere seconds, thanks to the AI's sophisticated algorithms.

Moreover, Magic Fixup introduces 'perspective editing,' a feature inspired by the ZoomShop project. It allows for the alteration of scene perspective by rendering regions at different depths with varying focal lengths. This means that users can not only move objects around within an image but also change their size and orientation to fit the new perspective, all while the AI ensures the edits blend in naturally with the rest of the image.

The technology operates using two diffusion models that work in tandem: a detail extractor and a synthesizer. The detail extractor processes the reference image and a noisy version of it, producing features that guide the synthesis and preserve fine details from the original image. The synthesizer then generates the output conditioned on the user's coarse edit and the extracted details, ensuring that the final image remains true to the original while incorporating the desired changes.

Adobe's Magic Fixup is poised to revolutionize photo manipulation, making it accessible not just to professionals but to amateurs as well. Its ability to intelligently edit images while preserving natural reflections and lighting is a testament to the potential of AI in creative fields. As we move forward, tools like Magic Fixup will likely become indispensable in the workflow of photographers, designers, and artists, offering an unprecedented level of control and precision in digital image creation.
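The extractor-plus-synthesizer flow described above can be sketched in rough data-flow terms. This is a toy NumPy mock-up, not Adobe's implementation: the function names, the feature representation, and the simple blending stand in for real diffusion models, purely to show how the reference detail and the coarse user edit are combined.

```python
import numpy as np

rng = np.random.default_rng(0)

def detail_extractor(reference, noisy_reference):
    # Stand-in for the detail-extractor diffusion model: derive guidance
    # features from the reference image and a noised copy of it.
    return np.stack([reference, noisy_reference], axis=-1)

def synthesizer(coarse_edit, detail_features, strength=0.7):
    # Stand-in for the synthesizer: condition the output on the user's
    # coarse edit while pulling fine detail back from the reference features.
    preserved_detail = detail_features[..., 0]
    return strength * coarse_edit + (1 - strength) * preserved_detail

# Toy 8x8 grayscale "image" plus a coarse user edit (content shifted
# sideways, mimicking a crude spatial recomposition).
reference = rng.random((8, 8))
noisy_reference = reference + 0.1 * rng.standard_normal((8, 8))
coarse_edit = np.roll(reference, shift=2, axis=1)

features = detail_extractor(reference, noisy_reference)
output = synthesizer(coarse_edit, features)
print(output.shape)  # (8, 8)
```

The point of the sketch is only the wiring: the synthesizer never sees the raw reference, only the features the extractor produced from it, which is why fine detail can survive an aggressive coarse edit.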
This New Adobe AI ‘Magic Fixup’ Will Change How You Edit Photos Forever
https://www.youtube.com/
-
Free stock images + Generative AI + Photoshop = "Notion". "Notion" is about being lost, but having a plan. #photoshop #photoshopwork #photoshopedit #myart #freestockimages #generativeai #graphicdesign
-
I remember when I first started to learn how to make vector graphics using vector design programs. I wondered if I would ever achieve some form of mastery. I kept practicing, forcing myself to create digital designs. Eventually, I got the hang of the basics and started to layer on more complex techniques.

Looking back, I realize the hardest part of getting better was making myself practice. Practice designs seemed less important than my purpose-driven projects, but now I see that practice makes the purpose happen. A big shift in my practice lately is understanding how I can complement my creative process with generative AI.

💡 GenAI helped me design this moving collage – a stylized representation of small molecules and DNA shaping a human form. The tools I used: Midjourney, Adobe Photoshop (including its generative AI features), Adobe Illustrator, and Immersityai.

Let me know what you think! How are you making practice part of your purpose? #midjourney #generativeAI #designpractice #jefstorytellingarts
-
Brushstrokes and "traditional" digital workflows as a driver for generative AI are pretty incredible. I worked some more on that cyberpunk scene from yesterday. The workflow is still a bit hacky, but over the next few years we'll see the emergence of a new generation of creative tools that offer much higher control and customization based on the user's style and aesthetics. The basics of this: Photoshop painting + Krea AI + Magnific AI, plus Mischief (an old endless-zoom sketching tool) for that zoom. #art #ai
-
New releases from Midjourney can now take images and reimagine them, like #Veras by EvolveLAB. They are also entering texturing. From Midjourney:

Hi @here @everyone, today we’re testing our new **external image editor**, **image retexturing**, and **next-gen AI moderation** systems.

**The image editor** lets you upload images from your computer and then expand, crop, repaint, add, or modify things in the scene. We’re also releasing an **‘image retexturing mode’** that estimates the shape of the scene and then ‘retextures’ it so that all of the lighting, materials, and surfaces are different. All image editing actions can be controlled via text prompting and region selection. The editor is compatible with model personalization, style references, character references, and image prompting.

Along with these features, we’re testing a more nuanced and intelligent V2 AI moderator. This moderator holistically examines your prompts, images, painting masks, and output images. This might be the most intelligent AI moderator ever, but it’s still in early testing, and we’re trying to refine the rules it follows to be as good as possible.

All of these things are very new, and we want to give the community and human moderation staff time to ease into it gently, so for this first release phase, **we’re opening up these features to the following community groups:**
- People who have generated at least 10,000 images
- People with yearly memberships
- People who have been monthly subscribers for the past 12 months

(We tried to be more nuanced, but it’s complicated with the database, sorry!)

**Known issues:**
- If you ask for something extremely out of place, or ask for a tiny region to be changed, it might not give you what you want
- If you put a tiny head in a scene and ask it to outpaint, the body may be too big (so make the head bigger)

We hope you have fun testing these systems and watching the magic of Midjourney spill out into the world.
We hope it inspires you to bring beauty to your surroundings, whether it’s remodeling your living room or imagining new kinds of fashion based on your personalized model and styles you love.
-
I have started presenting my sketches to my clients with NewArc.ai renders! Don't be afraid to show that you are using AI to support your design. I even presented some DALL-E generated images to show some ideas – it's like presenting a moodboard with new images instead of what is already on the market. I do the accurate rendering first, then play with the creativity percentage and generate new images. My clients are fully aware that any AI-generated image then has to be redesigned on the last, with the right quotes, correct heel and toe spring, and real materials and stitches; all the feasibility issues have to be fixed, and everything has to stay within the target price and factory skills. AI for me is like a junior designer assisting me. I am really enjoying the process! #aifootweardesign #footweardesign
-
3D Asset Generation Using AI

I am excited to announce the latest update to DonatelloAI, my tool built with Evergine (https://evergine.com/) and TripoAI services (https://lnkd.in/eZiaNpFM). The new version 0.1.4 brings some intriguing new features that I'd love to share with you.

New features in version 0.1.4:
- New model gallery: store and manage your generated models, making it easier to compose complex scenes using your favorite assets.
- New toolbar: includes Move, Rotate, and Scale transformations, plus options for Wireframe or Solid render modes.
- Optimize and export generated models: reduce the number of faces, choose the model format (GLTF, USDZ, FBX, OBJ, STL), and select texture size and format (BMP, DPX, HDR, JPEG, OPEN_EXR, PNG, TARGA, TIFF, WEBP) to ensure compatibility with your favorite programs.

Video: https://lnkd.in/dKwkv9Zq
Explore the latest version of DonatelloAI here: https://lnkd.in/eTEEQqfA

DonatelloAI features:
- Text to 3D
- Image or sketch to 3D
- AutoRig and animation of generated models
- Stylization
- Optimized models and export to multiple formats: GLTF, USDZ, FBX, OBJ, STL
- Model gallery
- Compose large scenes with generated models

#Evergine #TripoAI #GenerativeAI #AITools
-
New feature alert! Move beyond just raster images and drop your own SVG on canvas! Upload your logo, edit, merge, and let AI fine-tune your vector files with ease: https://lnkd.in/dq5ZTdDf
-
AI is revolutionizing the creative world, especially for designers, by enhancing creativity, personalizing experiences, and boosting efficiency. At Adobe MAX 2024, new features like Firefly’s enhanced generative fill and Generative Match allow designers to edit images with just a few prompts and match styles seamlessly. Text to Vector makes vector creation easier than ever, while photorealistic 3D object creation in Adobe Substance takes realism to new heights. Tools like Project Stardust leverage AI for smart object recognition and editing, making complex adjustments simple.

These innovations enable designers to turn concepts into reality with ease, delivering high-quality results faster and more intuitively. AI not only acts as a collaborative partner but also offers continuous learning, keeping designers ahead of trends and pushing the boundaries of what's possible in the world of design.

#AdobeMAX2024 #CreativityUnleashed #DesignInspiration #AIinDesign #DigitalCreativity #AdobeFirefly #InnovationInDesign #GenerativeAI https://lnkd.in/gqjzSS8A
Adobe MAX 2024 - The Creativity Conference
adobe.com