Collaborative Project Begins

27 Jan 2021

Today, Luke briefed us about the collaborative project that we have to do throughout this term. Each group must have at least four members and should include at least one person from another course. Completing the project doesn't really matter; what matters most is the process, and documenting on this blog how we collaborate and try to finish the project. We can either propose a new project or join another course's project. For the animation course, Luke suggested we do a previs project and use existing models from the asset library as much as possible.

Right after the briefing, I was thinking of proposing my own project. I have a few previs ideas for a sci-fi story because I really like this theme. But as I began to write the story outline, I saw that some of my classmates had already proposed their ideas and were looking for team members.

One that really caught my attention was Emma's idea. She proposed a sci-fi horror story, which is very good and quite close to what I'm interested in doing. She announced that she already had a sound designer on board and was still looking for more animators. So I thought it would be better to join her project instead of creating another one. I also felt very confident about working with her, having seen her strong performance last term.

I immediately contacted her to apply and shared my YouTube channel link with some samples of my animation work. I think I can fit the role and contribute to her project. I also mentioned that I'm quite new to Maya, as I used different software for my previous animations, but assured her that I would adapt quickly to any requirement of the project.

I'm now in the team and we're waiting for more people to join.

29 Jan 2021

By the end of the week, we had enough members for the project: what I think is a very good combination from three courses, with two animators, two VFX artists and two sound designers.

Group Leader: Emma Copeland
Animators: Emma Copeland, Kamil Fauzi
VFX Artists: Gherardo Varani, Antoni Floritt
Sound Designers: Callum Spence, Ben Harrison

We joined a new Discord server created by Emma.

Expectations of Professionalism

(Term 2: Week 1)

This term we are doing a collaborative project. When working with other people in a group, we expect a certain professionalism among the team members.

I have worked in groups during my previous studies and also in production work. There are many personalities and behaviours to be seen, but here are some of my expectations of professionalism. Some of them are also reminders to myself of areas where I need to improve.

Discipline with Time
I think this is one of the most important aspects of professionalism, as it can affect everything when working in a team. We need to stay disciplined with time and schedules: don't keep other people waiting for meetings, task completion, or delivery. This is also an area where I myself have to improve, because I often juggle too much personal, study and freelance work at once, which can affect my time discipline. There is a chapter in the Quran that specifically reminds us about the importance of time, and it has always been my reminder to manage mine properly.

Responsibility
Always be responsible for and committed to the tasks assigned to us. Take good care of the work as if it were our own. Complete it to the expected quality, find solutions when problems arise, and don't pass the responsibility on to others.

Honesty
Always be honest during work and don't slack off. Tell the truth about our capabilities and whether the work can be done within the timeframe. Don't hide problems from team members, as that can delay the work. When working on a confidential project, don't share or leak any information we have agreed to keep private.

Communication and Relationships
Respond promptly and properly to conversations, act courteously and use good manners. Be friendly with other team members and get to know their personalities so we can engage with them appropriately. Being too close or too open with certain people can sometimes change the atmosphere in the team.

Positivity
Set negative thinking aside and always act in good faith. Don't keep complaining without offering viable solutions. Don't be jealous of other team members' accomplishments; use them as encouragement to improve ourselves. Don't talk negatively behind another person's back, which can split the team.

Notes
We cannot remember everything, so it is very important to note down key points during discussions and task handovers. Being able to refer back and do the job accordingly improves work efficiency. This is something I have practised for a long time, taking notes and recording audio of meetings, especially with clients, since it sometimes happens that they change their minds and don't remember what they said.

Research & Presentation

(Term 1: Week 14)

Advantages of Real-time Rendering in Animation and Visual Effects Design

Introduction

Technology has advanced to the point where real-time rendering can reinvent entertainment production techniques, and the industry is undergoing a paradigm shift. The next evolution in content production must adapt to more flexible and interactive technology that can produce higher-quality results in an extremely short time.

From the very beginning, the animation and visual effects production process has been almost linear in nature, consisting of five phases: development, preproduction, production, postproduction and distribution. Each phase had to be complete before production could continue to the next. The process also suffers from the lack of immediate results inherent in the time-consuming rendering technique known as offline rendering. In traditional workflows, any significant change to characters or camera angles might send the production back to square one to redo the work and re-render the visuals.

At the same time, artistic demands are increasing. Directors are competing to realise previously impossible visions in their masterpieces, from virtual worlds to photorealistic digital humans and beasts. With all those demands, tight schedules and budgets, the mantra of "fix it in post" puts pressure on visual effects production around the world. It is common for artists to feel frustrated with long working hours and turnaround times.

What is Real-Time Rendering?

Real-time rendering is technology designed to process and display images on screen quickly. The term can refer to almost anything related to rendering, but it is most often associated with 3D computer graphics and computer-generated imagery (CGI). There are many real-time engines on the market, both free and commercial, such as Unreal Engine, Unity 3D, Blender's Eevee and CryEngine. Video game creators have been using this technology for decades to build interactive games that rapidly render 3D visuals while simultaneously accepting input, allowing the user to interact with characters and virtual worlds in real time.

Real-time rendering and offline rendering are the two major rendering types, and the main difference between them is speed. Offline rendering is popular in animation and film production because of its capability to produce realistic renders, at the cost of time. But as real-time engines become more capable of producing realistic imagery in less time, designers are starting to recognise the technology and implement it in their workflows.

As a brief comparison of rendering methods: offline renderers such as Mental Ray and Blender's Cycles use a technique called ray tracing, which produces realistic images using a method almost identical to how real lighting works. Rays of light are cast into the scene and bounce around to be reflected, refracted, or absorbed by objects. Since there are many possible paths along the bouncing route and a single ray per pixel is not very accurate, the renderer casts additional randomised rays from each pixel and averages the results. This whole process takes a lot of processing power and time to calculate.
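
To illustrate that averaging idea, here is a minimal Python sketch. It is not a real renderer: `radiance` is a hypothetical stand-in for tracing one full ray path, with a random term mimicking the variation between different bounce routes.

```python
import random

def radiance(sub_x, sub_y):
    """Hypothetical stand-in for tracing a single ray path through
    the scene and returning the light it gathers."""
    return 0.5 + random.uniform(-0.2, 0.2)

def pixel_colour(px, py, samples=64):
    # Cast several randomised (jittered) rays through the pixel
    # and average the results; the noise shrinks as the sample
    # count grows, which is exactly why offline renders take so long.
    total = 0.0
    for _ in range(samples):
        total += radiance(px + random.random(), py + random.random())
    return total / samples
```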

Figure 1: Ray Tracing
Figure 2: Rasterisation

The method used by real-time renderers such as Unreal Engine and Blender's Eevee, on the other hand, is designed to ease the processing burden with a process called rasterisation. The technique can be described as image manipulation in which the 3D geometry is projected onto the raster of pixels that makes up a 2D image. Each pixel's colour is determined by a shader based on the surface normal and light positions. Edges are then anti-aliased, and occlusion is determined by z-buffer values. To make the output look nicer, additional trickery is layered on, such as light maps, shadow maps, blurred shadows, screen-space reflections and ambient occlusion. Real-time rendering uses approximations of the behaviour of light and will not be as accurate as offline rendering. But modern real-time engines are more powerful than ever: they can now do hybrid rendering, performing real-time ray tracing with minimal samples and very few bounces to render only reflections and shadows, then combining the machine-learning-denoised result with the rasterised diffuse and other passes to produce a pleasing final image.
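
As a rough sketch of the z-buffer step in that pipeline (the names and resolution here are made up purely for illustration):

```python
# Minimal z-buffer sketch: for every fragment a shader produces,
# keep only the one nearest to the camera at each pixel.
WIDTH, HEIGHT = 320, 240
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
colour = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, rgb):
    # Occlusion test: nearer fragments overwrite farther ones
    if z < depth[y][x]:
        depth[y][x] = z
        colour[y][x] = rgb

# Two overlapping fragments at the same pixel:
write_fragment(10, 10, 5.0, (255, 0, 0))  # red, farther away
write_fragment(10, 10, 2.0, (0, 0, 255))  # blue, nearer: it wins
```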

Benefits for Production Workflow

In short, it is about interactivity, time and cost. One of the biggest advantages of using a real-time platform is that all departments can work immediately and simultaneously. It is now just 'production', and the term 'preproduction' is a thing of the past. Pre-visualisation work can be changed and updated with refined versions without needing to start from scratch in a new phase of production.

The review cycle is fast, as directors can give feedback in real time and no longer remain days or weeks behind the artists' work waiting for painfully slow renders. Creative decisions can be made faster, and artists can progress quickly with up-to-date direction and no delays between design iterations, preventing the loss of valuable production time and resources.

Because the real-time environment is less time-intensive, artists can pitch ideas and quickly experiment, testing hunches and concepts in ways they cannot in a traditional offline workflow. This gives the director additional perspectives as well. Early development processes like scriptwriting can also take advantage of the technology, using previs (previsualisation) to look at the sets, the characters and all the assets while crafting a great script.

None of this would be possible without real-time technology, as creating exploratory versions of a scene would take far too long and cost too much in time and money. With real-time rendering integrated, directors, designers and clients can instantly see how the end result of a project will look. They have complete freedom to experiment with ideas and make changes to things like characters, lighting and camera positions as they work, without having to wait for lengthy renders.

Benefits for Visual Creativity

Real-time rendering is being integrated into visual effects and animation to create virtual productions, digital humans, animated series and commercials. It is clear that this is not a distant dream or a passing trend but the future of filmmaking.

Epic Games, the creator of Unreal Engine, has shown how its engine enriches production by enabling interactive storytelling and real-time production. Broadcasting companies have achieved a new level of quality and creativity by connecting broadcast technology with real-time engines for environments. Virtual sets eliminate the high costs associated with physical sets, and complex scenes can be shot live without extensive post-production, further driving down costs.

Figure 3: Real-time virtual set

In weather news, for example, special effects and CG elements like rain and flooding can be added to a scene instantly and can interact with the weathercaster in real time, allowing greater flexibility in creative decision-making. In 2015, The Weather Channel introduced this kind of immersive mixed-reality experience to better explain the anatomy of a hurricane.

Figure 4: Weather-caster in virtual flood

In 2017, The Future Group used Unreal Engine and Ross Video technology, in combination with its own interactive mixed-reality platform, to produce Lost in Time, an episodic series in which contestants compete, navigate and interact with spectacular virtual environments in real time.

Figure 5: Contestants compete in a digital world of Lost in Time

In the same year, the visual effects studio The Mill produced a futuristic short film called 'The Human Race', featuring the 2017 Chevrolet Camaro ZL1 in a heated race with an autonomous Chevrolet FNR concept car. For this film, live video feeds and data were fed from the set into Unreal Engine together with The Mill's Cyclops virtual production toolkit. A CG car was then overlaid on top of the Blackbird, a proxy vehicle covered in tracking markers. Real-time technology gives viewers the ability to customise the car in the film as they watch it. This hybridisation of film and gaming opens possibilities in interactive creative storytelling: a film that you can play.

One of the most outstanding uses of real-time rendering in full production is Zafari (2018), a charming 52-episode animated series produced by Digital Dimension. The series revolves around a cast of quirky critter characters who are born with the skins of other types of animals. Zafari is the first episodic animated series to be rendered entirely in Unreal Engine.

The team wanted stunning visual effects with global illumination, subsurface scattering, motion blur, great-looking water and a lush jungle environment, without being overly expensive or time-consuming to render. The animation also features dynamic simulation for character fur, trees and vegetation. Traditionally this is not a simple task, but the studio was able to achieve it with the help of real-time rendering technology. Digital Dimension stated that they could do 20 test renders within half an hour, compared to two per day using previous iterations of the pipeline. This is the major benefit of a real-time engine over reliance on a render farm: shot iterations can be turned around extremely quickly.

One feature film that took advantage of real-time rendering was Rogue One: A Star Wars Story. Industrial Light & Magic's Advanced Development Group (ADG) used Unreal Engine to render the beloved, sarcastic droid K-2SO photorealistically in real time, bypassing the pre-rendering process. The droid had some great scenes and stole the hearts of millions of Star Wars fans. The group has its own GPU real-time render pipeline built on Unreal; ADG develops the tool to expand and enhance the creative process of storytelling through real-time visuals that approach motion-picture quality, while the film's other scenes were rendered with a standard offline renderer. By comparison, the real-time scenes are nearly identical to the shots rendered offline. Using the technology, the team could see the character on screen during shooting, with the added benefit of visualising the data in camera on set through virtual cinematography. The film also used real-time virtual sets via a system called SolidTrack, which lets the team build geometry for what will replace the blue screen, generating real-time graphics that represent what the final set extension is going to look like.

ILM continued to pursue real-time rendering in the production of Solo: A Star Wars Story. The team used StageCraft VR, a new virtual production system powered by Unreal Engine, to design and understand the physical dimensions of scenes when previsualising the stunts. Working with real-time assets in a virtual production environment means the team can move things around and play with different aspects of a sequence. ILM stated that the ability to work creatively in real time brings out something that is impossible with cumbersome, slow, pre-rendered assets.

These are just a few examples of real productions taking advantage of real-time rendering, and many more are expected to embrace the technology and get creative with it.

Conclusion

A real-time production pipeline has many advantages that can save tremendous amounts of time and money while still maintaining high-quality results. The technology is now powerful enough to produce visual fidelity that matches the aesthetic style of the creative vision.

Real-time rendering allows creators to achieve shots that are otherwise unachievable, and to obtain them quickly. And since the quality gap between offline and real-time rendering is narrowing to the point of being generally indistinguishable, we may see more major productions around the world invest in real-time engines and standardise on new tools that make it easier to integrate the technology with existing pipelines.

The idea of hitting render and seeing entire shots pop out in seconds may sound too good to be true, but this is the power of real-time filmmaking: the future of content creation in a world where storytellers are not restricted by technology but empowered by it. A world where the only limitations are creativity and imagination.

References

Sloan, K., 2017. Why Real-Time Technology is the Future of Film and Television Production, pp. 3-13.

Akenine-Möller, T., Haines, E. and Hoffman, N., 2018. Real-Time Rendering, Fourth Edition, Chapter 2: The Graphics Rendering Pipeline, pp. 11-14.

Evanson, N., 2019. How 3D Game Rendering Works, A Deeper Dive: Rasterization and Ray Tracing [Online] Available at: <https://www.techspot.com/article/1888-how-to-3d-rendering-rasterization-ray-tracing/> [Accessed: 10 January 2021]

Unity. Real-Time Rendering in 3D [Online] Available at: <https://unity3d.com/real-time-rendering-3d> [Accessed: 10 January 2021]

Unity. Real-time filmmaking, explained [Online] Available at: <https://unity.com/solutions/real-time-filmmaking-explained> [Accessed: 10 January 2021]

Novotny, J., 2018. How Does Eevee Work [Online] Available at: <https://blender.stackexchange.com/questions/120372/how-does-eevee-work> [Accessed: 10 January 2021]

Mirko, 2020. The Main Advantages of Real Time Engines vs Offline Rendering in Architecture [Online] Available at: <https://oneirosvr.com/real-time-rendering-vs-offline-rendering-in-architecture/> [Accessed: 10 January 2021]

Lampel, J., 2019. Cycles vs. Eevee – 15 Limitations of Real Time Rendering in Blender 2.8 [Online] Available at: <https://cgcookie.com/articles/blender-cycles-vs-eevee-15-limitations-of-real-time-rendering> [Accessed: 10 January 2021]

Failes, I., 2017. How Real-time Rendering Is Changing VFX And Animation Production [Online] Available at: <https://www.cartoonbrew.com/tools/real-time-rendering-changing-vfx-animation-production-153091.html> [Accessed: 10 January 2021]

Pimentel, K., 2018. Animated children's series ZAFARI springs to life with Unreal Engine [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/animated-children-s-series-zafari-springs-to-life-with-unreal-engine> [Accessed: 16 January 2021]

Failes, I., 2017. Upcoming Animated Series 'Zafari' Is Being Rendered Completely With The Unreal Game Engine [Online] Available at: <https://www.cartoonbrew.com/tools/upcoming-animated-series-zafari-rendered-completely-unreal-game-engine-153123.html> [Accessed: 16 January 2021]

The Mill. Blending live-action film and gaming for Chevrolet [Online] Available at: <https://www.themill.com/experience/case-study/chevrolet-the-human-race/> [Accessed: 18 January 2021]

Bishop, B., 2017. Rogue One's best visual effects happened while the camera was rolling [Online] Available at: <https://www.theverge.com/2017/4/5/15191298/rogue-one-a-star-wars-story-gareth-edwards-john-knoll-interview-visual-effects> [Accessed: 18 January 2021]

Seymour, M., 2017. Gene Splicer From 3lateral & ILM Rogue One on UE4 [Online] Available at: <https://www.fxguide.com/quicktakes/gene-splicer-from-3lateral-ilm-rogue-one-on-ue4/> [Accessed: 18 January 2021]

Morin, D., 2019. Unreal Engine powers ILM's VR virtual production toolset on "Solo: A Star Wars Story" [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/unreal-engine-powers-ilm-s-vr-virtual-production-toolset-on-solo-a-star-wars-story> [Accessed: 18 January 2021]

Polinchock, D., 2019. The Weather Channel Uses Immersive Mixed Reality to Bring Weather to Life [Online] Available at: <https://www.mediavillage.com/article/the-weather-channel-uses-immersive-mixed-reality-to-bring-weather-to-life/> [Accessed: 18 January 2021]

Lumsden, B., 2019. Virtual Production: The Future Group pushes XR to the limit [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/virtual-production-the-future-group-pushes-xr-limit> [Accessed: 18 January 2021]

Character Performance Animation

(Term 1: Week 9-14)

Our last animation task for this term is to make a 10-to-15-second character performance animation: the art of making a character or object move in such a way that the audience believes the subject can actually think for itself. The animation can be with or without dialogue.

When Luke briefed the task and showed us some examples, I immediately thought of basing my animation on one of my favourite movie quotes, one I like to use in real life as a silly joke with my former fellow animators while we were working hard to finish an animation.

It is from the final fight scene of The Matrix Revolutions (2003), when Smith says to Neo, "Why, Mr. Anderson? Why, why, why? Why do you do it? WHY DO YOU PERSIST?!" People who know the quote will answer, "Because I choose to," and we laugh as we persist in our passion for animation even when it is hard. I thought the quote would be great for an exaggerated character acting animation.

Okay, back to work. I remembered the quote from a very long time ago, so I went to YouTube to refresh my memory of the scene and realised that it was actually a lot longer!

The video could be cut down, but I think the sound of the rain and thunder in the scene is a bit distracting; I preferred cleaner dialogue audio.

Then I remembered another movie scene, from Johnny English (2003), where Johnny tries to unravel the mystery of how a thief could get into the highly secured room to steal the Queen's jewels.

I think this scene is perfect for the task. So, I extracted the dialogue from "Should I come in through the window" up to the moment Johnny almost falls into the hole behind him.

References

Luke wanted us to act out and record ourselves for reference. He said that a good animator must also know how to act, to better understand how the character moves.

I then recorded several videos of myself acting out the dialogue, referring to the original acting and attempting some tweaks. After many tries, I must say it was very hard to do. Rowan Atkinson's (Johnny's) acting, on the other hand, is brilliant, with his funny expressions and body language.

Original video of Rowan Atkinson's brilliant acting
My (terrible) acting

My acting was so bad that I decided to use the original video as the main reference; otherwise, the outcome of my animation would be just as bad. I still used my own acting video as reference for the parts where the original video does not show the actor.

Below are screenshots of the important keyframes from my acting video, arranged as a storyboard.

Storyboard

As we can see from the video and storyboard, there are several moving and holding motions throughout the scene. At frames 5 to 7 the character quickly raises his arms and hands, so I have to be careful with the character's limitations during the animation process. The character also changes his foot position on the floor several times, at frames 2, 8, 10, 11 and 16. From frames 16 to 20, the character makes a swinging motion with his body and hands before falling down.

Character and Scene Preparations

I used the same 'Thepp' character, which I can say is my favourite character this term, as I have used it in most of my assignments. I'm very comfortable with its rig and controllers, so I can focus on the animation itself, which matters more than constantly figuring out how to control the character during the animation process. The character also has pretty good controllers for lip-sync and facial expressions.

I’m back for the final animation!

As for character limitations, Thepp has only four fingers on each hand, but that shouldn't be a problem since the scene does not require him to count to ten or anything like that. I have also noticed that Thepp doesn't really show visible shoulder and hip shapes when they rotate or move, so I have to use more exaggerated movements or other tricks if I want the motion to be more visible. Also worth noting, Thepp has a very large, wide head, so it becomes a bit of a problem if he has to raise his arms straight up or touch the top of his head. Other than that, he can hit pretty much any pose without a problem.

As for the background, I decided to model a very simple, uncluttered backdrop so the audience would not get distracted and could focus on the character. Since Thepp is actually a mummy, I think a tomb-like room is perfect for him.

Backdrop

I imported the audio file into the timeline; the audio ends at frame 430, which means the scene duration will be about 18 seconds at 24 frames per second (430 ÷ 24 ≈ 17.9 seconds).

Animation Planning

1. Body poses and basic facial blocking
2. Body motions splining
3. Facial expressions detailing
4. Lip-sync: Jaw motions > lip shapes > tongue > teeth
5. Final polishing for body and face

Some animators may prefer to animate the lip-sync first, as it may be easier while the head has no animation yet. But I prefer to animate the body first, because not all of the lip-sync has to be animated: when the character is moving, the mouth is not always visible to the camera.

Full-body Blocking

As usual, during this process I used stepped tangents in the Graph Editor so I could focus on the pose-to-pose keyframes. I mostly referred to both my (horrible) acting and Rowan Atkinson's original acting when blocking each pose. I made several pose adjustments, since the character's anatomy is a bit different from a normal human's, and also exaggerated some poses to make them look more cartoony.
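
Switching the keys to stepped interpolation can also be scripted rather than done through the Graph Editor UI. Here is a small sketch in Maya Python; it only runs inside Maya and assumes the animated controls are selected, with the frame range taken from my scene:

```python
import maya.cmds as cmds

# Make every key on the selected controls hold its pose until the
# next key (stepped out-tangents), ideal for pose-to-pose blocking.
cmds.keyTangent(edit=True, time=(1, 430), outTangentType="step")
```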

Then I reached the point where I had to pose the character for the line "Should I drop down from the ceiling", where the character has to raise his arms and shoulders. I could raise the shoulders very slightly, but the polygons around them would break if I went further. So, to give the illusion that the character is raising his shoulders, I lowered his head by rotating the neck bone slightly forward.

For the arms, I had no option but to let them intersect his large head slightly to bring the pose closer to the reference. Since the view is from the front, the intersection is not very visible. I think the blocking poses for the dialogue part turned out pretty well.

The other challenge was the part where the character almost falls and tries to balance by swinging his arms. It was not much of a problem during blocking, since there were no in-between motions yet, but I had to spend more time on this section during splining to make the timing and direction of the swinging body and arm motions look proper and believable.

During this process I also included basic facial expressions like eye direction, eyebrows and some mouth opening, but no lip-sync yet.

First draft blocking
Final blocking

Full-body Splining

I was a bit nervous about converting the stepped tangents to auto tangents, since I had run into a weird rotation problem in the previous body mechanics task. Fortunately, this time the animation had no such issue. If I divide the animation into three sections, the first two were quite good and only needed minor tweaks to the animation curves in the Graph Editor to fix the timing for proper ease-in and ease-out.
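
The conversion from stepped back to smooth tangents can be scripted the same way (again a sketch that assumes the controls are selected inside Maya):

```python
import maya.cmds as cmds

# Convert the blocking keys to smooth interpolation for splining:
cmds.keyTangent(edit=True, time=(1, 430),
                inTangentType="auto", outTangentType="auto")
```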

The last section, on the other hand, was a different story: the swinging body and arms part I had already anticipated during blocking. I spent most of my time fixing the curves and even had to redo some of the original blocking poses many, many times to get the motion right.

Bad falling animation

Then the same thing happened with the falling animation: the timing and motion looked fake. After too many revisions I got a bit frustrated and settled for motion I think looks good enough, although I'm not really satisfied with it, since I still had several more animation stages to finish.

Throughout the animation, I added subtle in-between arcs each time the character changes from one pose to another, by modifying the animation curves or adding in-between poses. Examples are the bouncy, up-and-down motions when the hips and head rotate from one direction to another. This makes the animation look more cartoony and stylised.

Final splining without facial expression and lip-sync

Facial Expressions and Lip-sync

After the body animation was finished, I proceeded to finalise the facial expressions, such as eye blinks, eyebrow motion and additional fixes to the existing eye directions. For the eyelids, I made the eyes blink in the middle of the head rotating from one pose to another, but not on every rotation; it depends on the motion, to make it look more natural.

Facial animation without lip-sync yet

The lip-sync process was quite straightforward, and I applied the same step-by-step process as in the previous phoneme task: finish the jaw animation first, then the lip shapes, the tongue, and finally the teeth.

Below are step-by-step videos for the lip-sync animation:

1. Animate the jaw opening and closing by rotating its controller

Jaw motions without lip shape

2. Animate the lip shape on top of the jaw motion

Jaw with lip shape motions

3. Adding tongue bending and teeth up-and-down (close and open) motion

Jaw, lip, tongue and teeth motions

Polishing Small Details

At this stage, I mostly made additional adjustments to the follow-through motions, especially for the spine and arms: the tip of each body part (the child) should lag slightly behind the body part before it (the parent). I also polished small details, exaggerating the poses and timing of the finger motions a little more and adding a little squash and stretch to the eye shapes.
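
A simple way to get that lag is to shift each child control's keys slightly later down the chain. Here is a sketch in Maya Python; the control names are hypothetical, with one frame of offset per level just as an example:

```python
import maya.cmds as cmds

# Offset keys progressively along the chain so each child
# lags behind its parent (follow-through / drag):
chain = ["spine_ctrl", "arm_ctrl", "hand_ctrl", "finger_ctrl"]
for level, ctrl in enumerate(chain):
    cmds.keyframe(ctrl, edit=True, relative=True, timeChange=level)
```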

I also made more adjustments to the falling animation in terms of poses and timing, and extended the follow-through on the hand and foot a bit so it stays visible until the last frame before the character disappears behind the floor. I think the overall motion is a bit better now.

Lighting and Rendering

Since I'm very new to Maya, I have to admit I'm pretty weak with its lighting and rendering. I made a render in a previous animation task by following some quick tutorials, and discovered that the Maya Hardware and Software renderers are not very capable; many people suggest using Arnold instead. But I think what I did before was not done properly, since I only set up very basic lighting and the scene did not have any textures, which affects how the lighting and render look.

I watched several more tutorial videos on YouTube and read some forums to further my knowledge in this area. This time I gave the backdrop proper shaders and textures suitable for the Arnold renderer. The Arnold shaders had a noticeable effect on the look of the scene, like texture bump and colour bounce on the surfaces of objects. I used a Physical Sky as the main light for the hard shadows and a low-intensity SkyDome light as ambient fill to soften the shadows a bit.
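
For reference, a two-light setup like that can also be created with a few lines of script. This is only a sketch, assuming the MtoA (Arnold) plug-in is loaded so the node types exist; the intensity value is just an example:

```python
import maya.cmds as cmds

# SkyDome as a dim ambient fill to soften the main shadows:
dome = cmds.shadingNode("aiSkyDomeLight", asLight=True)
cmds.setAttr(dome + ".intensity", 0.3)  # example value, keep it low

# Physical Sky: a sky shader driving a second skydome's colour,
# which is roughly what the Arnold > Lights menu sets up:
sky = cmds.shadingNode("aiPhysicalSky", asTexture=True)
sun = cmds.shadingNode("aiSkyDomeLight", asLight=True)
cmds.connectAttr(sky + ".outColor", sun + ".color")
```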

What I managed to render is not that great, but the final look sufficed and suits the animation.

Final Animation – Playblast

Final Animation – Render

Phonemes

(Term 1: Week 9)

This week we learned about phonemes and how to animate a lip sync.

For this task, I'm back to using the 'Thepp' character, as I think it has a fairly good facial rig that is easy to control. This is my first experience animating lip-sync in Maya.

References

Below are some references from the internet that I used most when doing this animation task.

Credit: Phoneme mouth chart reference by Will Boyer
Credit: Phoneme mouth chart reference by Preston Blair

Importing Audio

The next preparation is to import the audio file by dragging it onto the timeline. Just like that, the voice is available in the timeline and can be heard when scrubbing and during playback.

Maya supports the .WAV and .AIFF audio formats on Windows. I started the file preparation on my Mac and tested several audio formats. I discovered that Maya on Mac supports .MP3 and .M4A audio with no problem, but when I moved the working files to continue on my Windows laptop, the audio files did not work and could not be imported into Maya. So it's best to just use the .WAV format for cross-platform compatibility, especially when working in a team with different machines.
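
Importing the audio can also be scripted instead of dragging the file, which is handy when re-linking sound after moving between machines. A sketch using Maya Python (the file path is illustrative):

```python
import maya.cmds as cmds
import maya.mel as mel

# Create a sound node from a .WAV starting at frame 1
# (the path here is just an example):
audio = cmds.sound(file="sounds/dialogue.wav", offset=1)

# Display its waveform in the time slider for scrubbing:
slider = mel.eval("$tmp = $gPlayBackSlider")
cmds.timeControl(slider, edit=True, sound=audio, displaySound=True)
```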

Animation Process

The first thing I tried was animating the mouth, jaw and lips at the same time at every intended frame. But after several keyframes, the process became quite tedious: selecting and repositioning each controller, scrubbing the timeline, and selecting each controller again, back and forth, was not fun at all. And the result was quite bad.

After several failed attempts, I thought: why not do and finish the animation part by part, like I learned in previous assignments when animating a character's body? As Luke always mentions in his lectures, animate and fix the hip motion (the main body) first before moving on to the other body parts.

Going back to my case here, the 'main body part' for lip-sync is the jaw. So I animated the jaw first, then the lips, the tongue and the teeth. I can assure you the animation process became so much easier.

Jaw controller, rotate it up & down to open & close based on the phoneme

Below are the basic steps I took when animating part by part (a small scripted example of step 1 follows the list):

Step 1: Animate the opening and closing motion of the jaw first.

Step 2: Animate the closing and opening motion of ‘O’ lip shape.

Step 3: Animate the teeth and tongue

Step 4: Animate the head and other parts
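
As a small illustration of step 1, the jaw keys can be set from script like this (a sketch only: the controller name, rotation axis, frames and angles are all hypothetical):

```python
import maya.cmds as cmds

# Key the jaw controller open and then closed around a phoneme:
cmds.setKeyframe("jaw_ctrl", attribute="rotateZ", time=95, value=0)
cmds.setKeyframe("jaw_ctrl", attribute="rotateZ", time=98, value=-15)  # open
cmds.setKeyframe("jaw_ctrl", attribute="rotateZ", time=102, value=0)   # close
```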

In this task, I didn't use the Graph Editor much, just some very minor tweaks to smooth out some motions, since the automatic curves turned out pretty good already.

The dialogue I chose above was not too fast, so I did a quick test with other audio containing faster dialogue. I discovered that sometimes you have to skip some lip shapes when the character talks fast, and make shapes only for the most prominent sounds. Too many drastic shape changes within one frame make the lip motions look janky and choppy.

Final Animation