Discover how Pixar blends art with cutting-edge technology in our exclusive interview with Dylan Sisson. Learn how HPC and AI revolutionise storytelling and visual effects, pushing boundaries in animation.
SC Conference Series’ Post
More Relevant Posts
-
Discover how Pixar Animation Studios pushes the boundaries of animation with high-performance computing and AI. Dylan Sisson explores the transformative impact on storytelling and visual experiences in an exclusive interview. Dive into the future of creativity at Pixar!
HPC Creates Cinematic Magic: Pixar's Technological Canvas • SC24
https://sc24.supercomputing.org
-
#Topics A flexible solution to help artists improve animation draws on 200-year-old geometric foundations MIT researchers have introduced a versatile technique that gives an animator the flexibility to see how different mathematical functions deform complex 2D or 3D characters. The new technique lets animators choose the function that best fits their vision for the animation. Credit: Massachusetts Institute of Technology Artists who bring to life heroes and villains in animated movies and video games could have more control over their animations, thanks to a new technique introduced by MIT researchers. Their method generates mathematical functions known as barycentric coordinates, which define how 2D and 3D shapes can bend, stretch, and move through space. For example, an artist using their tool could choose functions that make the motions of a 3D cat's tail fit their vision for the "look" of the animated feline. This gif shows how researchers used...
A flexible solution to help artists improve animation draws on 200-year-old geometric foundations - AIPressRoom
https://aipressroom.com
-
In my latest devlog, I share how I've structured a Third Person Controller that's both modular and maintainable by applying S.O.L.I.D. principles. The central PlayerController orchestrates the Input, Movement, Camera, and Animation systems, each handling its own responsibilities while communicating through shared data structures. This approach keeps the systems loosely coupled, making the controller easier to debug, extend, and modify. I'm now focusing on enhancing the MovementSystem with gravity, ground checks, and jump mechanics for more dynamic character movement. #GameDevelopment #Unity3D #SOLIDPrinciples #IndieDev
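The post's Unity C# isn't included, but the orchestration pattern it describes can be sketched language-agnostically — here in Python, with all class and field names assumed for illustration (the actual devlog code may differ):

```python
from dataclasses import dataclass

# Shared data structure the systems communicate through (assumed shape)
@dataclass
class PlayerState:
    move_input: tuple = (0.0, 0.0)
    position: tuple = (0.0, 0.0, 0.0)
    is_grounded: bool = True

class InputSystem:
    """Single responsibility: translate raw input into shared state."""
    def update(self, state: PlayerState, raw: tuple) -> None:
        state.move_input = raw

class MovementSystem:
    """Single responsibility: move the character based on shared state."""
    SPEED = 2.0
    def update(self, state: PlayerState, dt: float) -> None:
        x, y, z = state.position
        dx, dz = state.move_input
        state.position = (x + dx * self.SPEED * dt, y, z + dz * self.SPEED * dt)

class PlayerController:
    """Orchestrator: owns the systems, never reaches into their internals."""
    def __init__(self) -> None:
        self.state = PlayerState()
        self.input = InputSystem()
        self.movement = MovementSystem()

    def tick(self, raw_input: tuple, dt: float) -> None:
        self.input.update(self.state, raw_input)
        self.movement.update(self.state, dt)
```

Because each system only reads and writes the shared state, a CameraSystem or AnimationSystem can be added to the controller's tick without touching Input or Movement — the loose coupling the post describes.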
Structuring the Third Person Controller — A Deep Dive
medium.com
-
Animation v. Visualization There are a lot of groups that make "animation"-style demonstrative videos. However, this leaves a lot of questions: Who will testify to this? What is the foundation for saying confidently, "this is how it happened"? Does this support what your expert is saying? Here at Knott Laboratory, with both engineering experts and an in-house visualization team, we can establish how an incident occurred because our visualizations are based on engineering analysis. Visualizations require reconstructions. Here's a case where we had surveillance video. Applying videogrammetry to the footage, we reconstructed the incident in 3D virtual space to show everyone's movements. With this accurate reconstruction, you can now view the incident from alternative perspectives. Visualizations backed by engineering analysis allow Knott experts to show the incident in a more understandable way while maintaining the accuracy needed to defend them in court. Knott Laboratory Digital Media Forensics Jason Evans Jack Brannon
-
AI + Motion Graphics 😳 Today I'm testing Flux and a Cinema 4D stylized miniature icon LoRA - the prompt coherence is really impressive. I've brought everything into the latest version of Pika, which I'm also impressed with for its consistency and coherence. When it comes to these simple motion-design-style animations, I've found Pika beats Runway/Luma in terms of consistency and quality. What are your thoughts on working with Flux and Pika? And if you're interested in learning more about integrating AI into design and VFX workflows, send me a message. #ai #aidesign #design #animation #motiongraphics #flux #comfyui #pika
-
Generative Animation using a mixture of 3D and live-action footage as inputs for image-to-image diffusion in ComfyUI. No AnimateDiff or ToonCrafter, just a single ControlNet Union node (Tile+Scribble) and an unsampling workflow I developed called Frame-2-Frame. I trained a LoRA on about 100 of my own drawings that were in the style of traditional cel animation - lineart and flat coloring only, with a neutral background for easy chroma keying in post. Each generation outputs a sequence that can then be composited onto different backgrounds to create a scene. It's amazing the difference a day makes. I've managed to generate consistency across different object types (effects, objects, people, etc.) by narrowing the gap between minimum and maximum sigma in the KarrasScheduler. The reason for using image-to-image over other methods is better tracking of inputs, which lets me capture secondary action and follow-through on clothing, hair, expressions, changes in pose, and details like logos and text. The workflow isn't without its flaws. It's very sensitive to the inputs it's given, since it leans so heavily on them for tracking to the 2D output, but the best thing about it is that it's scalable, and each generation takes less than ten minutes for 120 frames. As always, a more detailed write-up will be shared shortly. #ai #animation #motiongraphics #stablediffusion #comfyui #filmmaking #vfx #technology #3D
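For context on the sigma-narrowing trick: ComfyUI's KarrasScheduler follows the Karras et al. (2022) schedule, so shrinking the gap between sigma_min and sigma_max keeps every denoising step closer to the input frame. A minimal sketch of that schedule (the numeric ranges below are illustrative, not the author's settings):

```python
import numpy as np

def karras_sigmas(n: int, sigma_min: float, sigma_max: float, rho: float = 7.0) -> np.ndarray:
    """Noise schedule from Karras et al. (2022); ComfyUI's KarrasScheduler
    exposes the same sigma_min / sigma_max / rho knobs."""
    ramp = np.linspace(0.0, 1.0, n)
    min_inv = sigma_min ** (1.0 / rho)
    max_inv = sigma_max ** (1.0 / rho)
    return (max_inv + ramp * (min_inv - max_inv)) ** rho

# A wide range re-noises the input heavily; a narrowed range keeps each
# generated frame closer to its source, stabilizing frame-to-frame identity.
wide = karras_sigmas(10, 0.03, 14.6)   # illustrative wide range
narrow = karras_sigmas(10, 1.0, 3.0)   # illustrative narrowed range
```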
-
#Topics A flexible solution to help artists improve animation | MIT News Artists who bring to life heroes and villains in animated movies and video games could have more control over their animations, thanks to a new technique introduced by MIT researchers. Their method generates mathematical functions known as barycentric coordinates, which define how 2D and 3D shapes can bend, stretch, and move through space. For example, an artist using their tool could choose functions that make the motions of a 3D cat's tail fit their vision for the "look" of the animated feline. This gif shows how researchers used their technique to provide a smoother motion for a cat's tail. Image: Courtesy of the researchers Many other techniques for this problem are inflexible, providing only a single option for the barycentric coordinate functions for a certain animated character. Each function may or may not be the best one for a particular animation. The artist would have to start from scratch with a new approach each time they want to try for a slightly different look. "As researchers, we can sometimes get stuck in a loop of solving artistic problems without consulting with artists. What artists care about is flexibility and the 'look' of their final produc...
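The MIT work generalizes barycentric coordinates to complex character cages; the classical triangle case it builds on can be sketched as follows (an illustration of the underlying math, not the researchers' code):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (wa, wb, wc) of 2D point p in triangle abc.
    The weights sum to 1 and reproduce p as a weighted average of the vertices."""
    (x, y), (xa, ya), (xb, yb), (xc, yc) = p, a, b, c
    det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
    wa = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
    wb = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
    return wa, wb, 1.0 - wa - wb

def deform(p, tri, tri_deformed):
    """Move p with the triangle: keep the same weights, apply them to the
    deformed vertex positions. This is how a cage drags a shape along."""
    w = barycentric(p, *tri)
    return tuple(sum(wi * v[k] for wi, v in zip(w, tri_deformed)) for k in range(2))
```

A point at the centroid keeps weights (1/3, 1/3, 1/3), so stretching the triangle stretches the point's position proportionally — the same principle, generalized to many-vertex cages and a family of weight functions, is what gives the MIT method its flexibility.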
A flexible solution to help artists improve animation | MIT News - AIPressRoom
https://aipressroom.com
-
Introducing ToonCrafter: Generative Cartoon Interpolation 🚀 I'm excited to share the latest breakthrough in animation technology! ToonCrafter is a novel method for creating smooth, high-quality transitions between frames in cartoons, requiring only a starting and an ending frame. The Challenge: Traditional interpolation methods struggle with the exaggerated movements and occlusions (covered parts) in cartoons, leading to jarring transitions and unnatural motion. The Solution: ToonCrafter takes advanced techniques from live-action video processing and adapts them to the unique demands of cartoon animation. By generating new frames rather than simply adjusting existing ones, ToonCrafter ensures fluid and natural transitions. Key Innovations:
Toon Rectification Learning: adapts live-action video methods to cartoons, addressing style differences and enhancing compatibility.
Dual-Reference 3D Decoder: maintains intricate details in the newly generated frames, ensuring high-quality output.
Flexible Sketch Encoder: lets users control the interpolation process through sketch inputs, offering greater creative flexibility.
Explore more by reading the official paper: https://lnkd.in/dRbeWFC8 More examples: https://lnkd.in/dcSnSF5d
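To see why simple interpolation fails on large cartoon motions, consider the naive cross-dissolve baseline that generative methods like ToonCrafter replace — a toy illustration, not ToonCrafter's method:

```python
import numpy as np

def linear_interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive cross-dissolve: blends pixel values instead of synthesizing
    motion, so a shape that moves far between frames appears in BOTH places
    at half intensity (a double-exposure 'ghost') rather than in between."""
    return (1.0 - t) * frame_a + t * frame_b

a = np.zeros((4, 4)); a[0, 0] = 1.0   # shape at top-left in the start frame
b = np.zeros((4, 4)); b[3, 3] = 1.0   # shape at bottom-right in the end frame
mid = linear_interpolate(a, b, 0.5)   # ghost at both corners, no motion
```

Generating a genuinely new in-between frame, as ToonCrafter does, avoids this ghosting by modeling where the shape moved rather than averaging where it was.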
-
Chroma 0217 - In Which the First Motion Capture was Made! In 1967, the world of computer graphics and animation witnessed a significant breakthrough with the creation of "The Stick Man," a pioneering project developed by Lee Harrison III. This project marked the first use of motion capture, also known as digital puppetry, and real-time computer graphics, setting a foundational milestone in the evolution of digital animation and visual effects. Lee Harrison, working out of his attic lab in Blue Bell, Pennsylvania, developed the "Animac" system. This innovative system allowed for the real-time manipulation and performance of digital figures, a concept that was groundbreaking at the time. The Animac system used potentiometers linked in a tree structure, a concept later known as "parenting," to control the line segments that represented the stick figure. These controls were attached to a harness, enabling a person's movements to be reflected in real-time on the screen. This method was a precursor to modern motion capture techniques, providing a crude yet effective way to animate based on human movement. The essence of digital puppetry lies in the manipulation and performance of digitally animated figures in a virtual environment, rendered in real-time by computers. In "The Stick Man," this was achieved by filming the real-time output from the Animac system onto 16mm film. This early form of motion capture provided a dynamic way to create animations based on human movement, demonstrating the potential of computers to create and manipulate animated sequences live. The significance of "The Stick Man" extends beyond its immediate impact. It laid the groundwork for more sophisticated motion capture systems that emerged in the 1980s and 1990s. For example, the 1981 film 'Adam Powers, The Juggler' utilized advanced computer graphics techniques to achieve more realistic animations. 
The principles established by Harrison's Animac system have influenced a wide range of applications in entertainment and beyond. These techniques allow for the creation of lifelike digital characters and complex animations that were unimaginable in Harrison's time. Today, digital puppetry and motion capture are integral to filmmaking, television production, interactive theme park attractions, and live theater. 💬 #AlphaChromatica #Chroma #HeyGanz #VFXHistory
-
Want to get ahead of the competition by 66%? Who wouldn't?! According to a study in the International Journal of Advanced Computer Science and Applications, 3D animation videos can increase customer engagement by 66%. See for yourself how our amazing team at IWEnvision brought Pete - Your Documentation VA - to life with this stunning 3D animation; the results speak for themselves. Ready to give your customers a 3D experience they won't forget? DM us, and let's make your vision a reality. Igor Mrenoshki, Nikolche Kuzmanovski, Goran Stojanovski #IWEnvision #3D #3DAnimation #3DSolutions