Houdini – Cabin Model

Week: 24 – 31 Jan

Last week we were introduced to Houdini by Mehdi Daghfous. I had never used Houdini before. From this week forward, Mehdi will provide a weekly tutorial for us to practice and become familiar with the software.

Houdini introduction with Mehdi last week

This week's tutorial is about the basic interface and understanding the software's concept. Speaking of concept, Houdini is quite different from any other 3D software I've tried before. It is node-based and uses nodes for pretty much everything. It's like doing visual coding, connecting functions together to achieve what we want to create. There is also a lot of syntax we need to know to use throughout the software.

These are some of the terms I noted while watching Mehdi's tutorial:

  • OBJ : Objects
  • SOP : Surface Operators
  • DOP : Dynamics Operators
  • ROP : Render Operators
  • $HIP : Directory of the saved .hip file
  • $OS : Name of the current node (useful for export geo names)
  • $F4 : Frame number, zero-padded to four digits
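To see how these variables fit together, here is a small Python sketch (illustrative only, not Houdini's actual code) of how an output path like `$HIP/geo/$OS.$F4.bgeo` might expand for a given frame:

```python
# Illustrative sketch of Houdini-style variable expansion in an output path.
# The folder, node name, and frame values below are made-up examples.
def expand(path, hip, node_name, frame):
    return (path
            .replace("$HIP", hip)              # folder of the saved .hip file
            .replace("$OS", node_name)         # name of the current node
            .replace("$F4", f"{frame:04d}"))   # frame number, padded to 4 digits

out = expand("$HIP/geo/$OS.$F4.bgeo", "/projects/cabin", "cabin_export", 25)
# out == "/projects/cabin/geo/cabin_export.0025.bgeo"
```

This is why $OS is handy for export names: rename the node and every exported file follows automatically.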

We learned how to build a simple cabin model and got familiar with the fundamentals of the software along the way. During the modeling process, I found that Houdini has all the standard modeling tools found in other 3D software, but they are done through nodes. It's quite tricky at first to understand which nodes can be connected and where. But I can see the advantage: a node can be shared, which makes creating a number of procedural objects and instances with variations or similar parameters much faster.

The main interface of Houdini
Boolean operation
Boolean operation
Inverting normal

Final Model 1

This is my cabin model after following the tutorial. As my 'OCD' kicked in, I also organised the nodes and renamed everything accordingly.

Final Model 2

Mehdi wants us to add a wood-plank wall to the cabin ourselves, using the knowledge we learned before. Below is my cabin version 2.

Term 2 – Self-Reflection and Future Path

Throughout the first semester, I have learnt so much in terms of the technical aspects and knowledge of animation. I'm very grateful to learn so many things from the professionals at UAL and to practice techniques used by the industry. One of the things that I think contributed to my development is doing this blog as part of the learning process. The blog is where I write my thinking on certain topics and document the work process and problem-solving steps taken while completing the animation tasks. I really feel this is a very interesting learning process that has helped me not only think about making nice animation but also be more conscious of my progress. I'm actually quite weak at writing; sometimes it can take days for me to draft one blog entry before I can publish it. The blog has to some extent helped me improve my English as well, which can be very useful if I want to work at an international production company.

In technical terms, I'm actually a long-time 3ds Max and Blender user due to my previous working experience. I had barely used Maya before, so I took this opportunity to learn the software properly. It is different for someone who has never used any 3D software than for someone who already has other software workflows baked into their brain and muscle memory. During the first several weeks, I struggled to change my habits when navigating around the software, as I kept pressing the wrong buttons and shortcuts. It was a bit frustrating, but I have managed to improve in that regard and I'm more comfortable working with Maya now. I also learnt and practiced a lot of techniques from tutorial videos on how to do in Maya the things I already know in other software. To me, knowing the technical side is very important so I can use the full potential of the software when doing the animation itself. I also learnt how to use 3DEqualizer for camera tracking and rotomation, as it is used in the film industry.

Finishing the first term, I can say that I still have a lot to learn and improve. I'm very excited to continue learning in the next term at UAL.

Future Path

Regarding my future path, I'm very interested in two different branches of the field: character rigging (including animation) and cinematography in animation. This is why I attempted to combine those two elements in a personal animation several years ago, although not very successfully.

The trailer of my previous personal animation. The full animation is on my YouTube channel.

The characters in the animation don't have very good rigs, and it doesn't show the characters' faces, as I was quite weak with facial rigs at the time. But lately, I have grown to love character rigging and animation after my previous working experience and the character animation assignments during the first term at UAL. I'm really interested in gaining the deep knowledge and skill to produce sophisticated character rigs.

On the other side, I'm still very interested in the art of cinematography in storytelling. As you can see in my full animation, I focus quite a lot on how the camera moves and delivers the shots. I love how camera angle, movement and composition can convey the mood of a film or animation. Especially in 3D, the camera is flexible enough to play with any idea and give the audience a sense of drama, action and sensation.

I hope I can further my knowledge and excel in these two areas. Or maybe I should focus on only one. My heart is torn in two right now.

Collaborative Project Begins

27 Jan 2021

Today, Luke briefed us on the collaborative project we have to do throughout this term. Each group must have at least 4 people and collaborate with at least 1 person from another course. Completing the project does not really matter; the process, and the documentation on this blog of how we collaborate and try to finish the project, is what is most important. We can either propose a new project or join another course's project. For the animation course, Luke suggested we do a previs project and use existing models from the assets library as much as possible.

Right after the briefing, I was thinking of proposing my own project. I have a few previs ideas for a sci-fi story because I really like this theme. As I began to write the story outline, I saw some of my classmates had already proposed their ideas and were looking for team members.

One that really caught my attention was Emma's idea. She proposed a sci-fi horror story, which is very good and quite close to what I'm interested in doing. She announced that she already had a sound designer on board and was still looking for more animators. So I thought it would be better to join her project instead of creating another one. I also feel very confident working with her, as I saw her strong performance during the last term.

I immediately contacted her to apply and gave her my YouTube channel link for samples of my animation work. I think I can fit into the role and contribute to her project. I also mentioned that I'm quite new to Maya, as I used other software for my previous animations, but assured her that I would adapt quickly to any requirement of the project.

I'm now in the team, and we are waiting for more people to join.

29 Jan 2021

By the end of the week, we had enough members for the project, which I think is a very good combination from 3 courses: 2 animators, 2 VFX artists and 2 sound designers.

Group Leader: Emma Copeland
Animators: Emma Copeland, Kamil Fauzi
VFX Artists: Gherardo Varani, Antoni Floritt
Sound Designers: Callum Spence, Ben Harrison

We joined a new Discord Server created by Emma.

Expectations of Professionalism

(Term 2: Week 1)

This term we are going to do a collaborative project. When working with other people in a group, we expect a certain level of professionalism among team members.

I have worked in groups during my previous studies and also in production work. There are many personalities and behaviours to be seen, but here are some of my expectations of professionalism. Some of them are also reminders to myself to improve in this regard.

Discipline with Time
I think this is one of the most important things in professionalism. It can affect everything when working in a team. We need to always be disciplined with time and schedules. Don't make other people wait for meetings, task completion, or delivery. This is also an area where I myself have to improve, because I always take on too much work at once, between personal projects, study and freelancing, which can affect my time discipline. There is a chapter in the Quran that specifically reminds us about the importance of time. It is always my reminder to manage it properly.

Responsibility
Always be responsible and committed to the tasks assigned to us. Take good care of the work as if it were our own. Complete the work to the expected quality, find solutions when problems arise, and don't leave the responsibility to others.

Honesty
Always be honest during work and don't slack off. Tell the truth about our capability and whether the work can be done within the timeframe. Don't hide problems from team members, as that can cause delays. When working on a confidential project, don't share or leak any information we have agreed to keep.

Communication and Relationships
Respond promptly and properly to conversations, act courteously and use good manners. Be friendly with team members. Get to know their personalities so we can engage with them properly. Sometimes being too close or too open with certain people can change the atmosphere in the team.

Positivity
Leave negative thinking aside and always do good. Don't keep complaining without offering viable solutions. Don't be jealous of other team members' accomplishments; use them as encouragement to improve ourselves. Don't talk negatively behind someone's back, which can cause a split in the team.

Notes
We cannot remember everything, so it is very important to note down the important points during discussions and task delivery. It helps with efficiency when we can refer back and do the job accordingly. This is something I have practiced for a long time: taking notes and recording audio of meetings, especially with clients, since it sometimes happens that they change their minds and don't remember what they said.

Research & Presentation

(Term 1: Week 14)

Advantages of Real-time Rendering in Animation and Visual Effects Design

Introduction

The world has reached a level of technological advancement where real-time technology can reinvent entertainment production techniques. The industry must undergo a paradigm shift. The next evolution in content production must adapt to more flexible and interactive technology that can produce higher-quality results in an extremely short time.

Since the very beginning, the animation and visual effects production process has been almost linear in nature, basically consisting of five phases: development, preproduction, production, postproduction and distribution. Each phase had to be complete before production could continue to the next. It also suffers from delays in getting immediate results because of the time-consuming rendering technique known as offline rendering. In traditional workflows, any significant change to characters or camera angles might send the production back to square one to redo the work and re-render.

At the same time, artistic demands are also increasing. With previously impossible visions, from virtual worlds to photorealistic digital humans and beasts, directors are competing to come up with unique new ideas for their masterpieces. With all these demands, tight schedules and budgets, the mantra of "fix it in post" puts pressure on visual effects productions around the world. It is common for artists to feel frustrated with long working hours and turnaround times.

What is Real-Time Rendering

Real-time rendering is technology designed to quickly process and display images on screen. It can refer to basically anything related to rendering, but is most often associated with 3D computer graphics and computer-generated imagery (CGI). There are many real-time engines available, both free and commercial, such as Unreal Engine, Unity, Blender's Eevee and CryEngine. Video game creators have been using this technology for decades to create interactive games that rapidly render 3D visuals while simultaneously accepting user input, allowing players to interact with characters and virtual worlds in real time.

Real-time rendering and offline rendering are the two major rendering types. The main difference between the two is speed. Offline rendering is popular in animation and film production because of its capability to produce realistic renders, at the cost of time. But as real-time engines become more capable of producing realistic imagery in a shorter time, designers are starting to recognise the technology and implement it in their workflows.

As a brief comparison between the rendering methods: offline renderers such as Mental Ray and Blender's Cycles use a technique called ray tracing, which renders realistic images using a method almost identical to how real lighting works. It casts multiple rays of light that bounce around the scene, being reflected, refracted, or absorbed by objects. Since there are many possible paths along the bouncing route, and one ray per pixel is not very accurate, it casts additional randomised rays from that pixel and averages the results. This whole process takes a lot of processing power and time.
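The averaging step described above can be sketched in a few lines of Python. This is a toy Monte Carlo example, not any renderer's actual code; the "scene" is just a made-up function that sometimes hits a light:

```python
import random

def shade_pixel(sample_ray, n_samples, seed=0):
    """Average many randomised ray samples for one pixel (Monte Carlo)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += sample_ray(rng)
    return total / n_samples

# Toy "scene": a ray returns 1.0 when it happens to reach the light
# (30% of the time here), 0.0 otherwise. True brightness is 0.3.
hits_light = lambda rng: 1.0 if rng.random() < 0.3 else 0.0

rough = shade_pixel(hits_light, 4)      # few samples: noisy estimate
clean = shade_pixel(hits_light, 20000)  # many samples: converges near 0.3
```

This is exactly why ray tracing is slow: the noise only fades as the sample count, and therefore the computation, grows.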

Figure 1: Ray Tracing
Figure 2: Rasterisation

The method used by real-time renderers such as Unreal Engine and Blender's Eevee, on the other hand, is designed to ease the processing burden by using a process called rasterisation. The technique can be described as image manipulation in which the 3D geometry is projected onto a raster of pixels that makes up a 2D image. Each pixel's colour is determined by a shader based on the surface normal and light position. Edges are then anti-aliased, and occlusion is determined by z-buffer values. To make the output look nicer, additional trickery is added, such as light maps, shadow maps, blurred shadows, screen-space reflections, and ambient occlusion. Real-time rendering uses approximations of the behaviour of light and will not be as accurate as offline rendering. But modern real-time engines are more powerful than ever: they can now do hybrid rendering, performing real-time ray tracing with minimal samples and very few bounces to render only reflections and shadows, then combining a machine-learning denoiser's output with the rasterised diffuse and other passes to produce a nice final result.
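The z-buffer occlusion test mentioned above is simple enough to sketch. Below is a minimal illustrative Python version (not from any engine; the surface names are made up): for each fragment that lands on a pixel, keep its colour only if it is nearer than whatever is already stored there.

```python
# Minimal z-buffer sketch: a depth value and a colour per pixel.
WIDTH, HEIGHT = 4, 4
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
color = [["sky"] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, c):
    """Keep this fragment only if it is nearer than the current occupant."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

write_fragment(1, 1, z=5.0, c="wall")   # far surface drawn first
write_fragment(1, 1, z=2.0, c="chair")  # nearer surface occludes it
write_fragment(1, 1, z=9.0, c="floor")  # farther fragment is discarded
```

Unlike ray tracing, this needs no bouncing or averaging, just one comparison per fragment, which is why rasterisation is fast.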

Benefits for Production Workflow

In short, it is about interactivity, time and cost. One of the biggest advantages of using a real-time platform is that all departments can work immediately and simultaneously. It is now just 'production'; the term 'preproduction' is a thing of the past. Pre-visualisation work can be changed and updated with refined versions without starting from scratch in a new phase of production.

The review cycle is fast, as directors can give feedback in real time and no longer remain days or weeks behind the artists' work, because production does not have to wait for painfully slow renders. Creative decisions can be made faster, and artists can progress quickly with up-to-date direction and no delays between design iterations. This prevents the loss of valuable production time and resources.

Because real-time environments are less time-intensive, artists can pitch and quickly experiment with their ideas, testing hunches and concepts in a way they cannot in a traditional offline workflow. This gives the director additional perspectives as well. Early development processes like scriptwriting can also take advantage of the technology, using previs (previsualisation) to look at the sets, the characters, and all the assets to craft a better script.

All of this would never have been possible without real-time technology; creating so many versions of a scene would simply have taken too long and cost too much. With real-time rendering integrated, directors, designers and clients can instantly see how the end result of a project will look. They have complete control to experiment with ideas and make changes to things like characters, lighting, and camera positions as they work, without waiting for lengthy render times.

Benefits for Visual Creativity

Real-time rendering is being integrated into visual effects and animation to create virtual productions, digital humans, animated series, and commercials. It is clear that this is not a distant dream or a passing trend but the future of filmmaking.

Epic Games, the creator of Unreal Engine, has shown how its engine enriches production, enabling interactive storytelling and real-time production. Broadcasting companies have achieved a new level of quality and creativity by connecting broadcast technology with real-time engines for environments. Virtual sets eliminate the high costs associated with physical sets, and complex scenes can be shot live without extensive post-production, further driving down costs.

Figure 3: Real-time virtual set

In weather news, for example, special effects and CG elements like rain and flood can be added to a scene instantly and can interact with the weathercaster in real time, allowing greater flexibility in creative decision-making. In 2015, The Weather Channel introduced this kind of immersive mixed-reality experience to better explain the anatomy of a hurricane.

Figure 4: Weather-caster in virtual flood

In 2017, The Future Group used Unreal Engine and Ross Video technology, in combination with its own interactive mixed-reality platform, to produce Lost in Time, an episodic series in which contestants compete, navigate and interact with spectacular virtual environments in real time.

Figure 5: Contestants compete in a digital world of Lost in Time

In the same year, the visual effects studio The Mill produced a futuristic short film called 'The Human Race', featuring the 2017 Chevrolet Camaro ZL1 in a heated race with an autonomous Chevrolet FNR concept car. For this film, live video feeds and data were fed on set into Unreal Engine together with The Mill's Cyclops virtual production toolkit. The CG car was then overlaid on top of a proxy vehicle covered in tracking markers, called the Blackbird. The use of real-time technology gives viewers the ability to customise the car in the film as they watch it. This hybridisation of film and gaming opens possibilities in interactive storytelling: a film that you can play.

One of the most outstanding uses of real-time rendering in full production is Zafari (2018), a charming 52-episode animated series produced by Digital Dimension. The series revolves around a cast of quirky critter characters who are born with the skin of other types of animals. Zafari is the first episodic animated series to be fully rendered in Unreal Engine.

The team was looking to create stunning visuals with global illumination, subsurface scattering, motion blur, great water and a lush jungle environment, without being overly expensive and time-consuming to render. The animation also features dynamic simulation for character fur, trees and vegetation. Traditionally this is not a simple task, but the studio was able to achieve it with the help of real-time rendering technology. Digital Dimension stated they could do 20 test renders within half an hour, compared to 2 in a day with previous iterations of the pipeline. This is the major benefit of a real-time engine over reliance on a render farm: shot iterations can be turned around extremely quickly.

One feature film that took advantage of real-time rendering was Rogue One: A Star Wars Story. Industrial Light & Magic's Advanced Development Group (ILM ADG) used Unreal Engine to make photorealistic renders of the beloved, sarcastic droid K-2SO in real time, bypassing the pre-rendering process. The droid had some great scenes and stole the hearts of millions of Star Wars fans. The group has its own render pipeline built on Unreal, the ADG GPU real-time renderer. ADG developed the tool to expand and enhance the creative process of storytelling through real-time rendering of visuals that approach motion-picture quality, while the other scenes of the film were rendered with standard offline rendering. By comparison, the real-time rendered scenes are very nearly identical to the shots rendered offline. Using the technology, the team was able to see the character on screen during shooting, with the added benefit of visualising the data in camera on set through virtual cinematography. The film also used real-time virtual sets with a system called SolidTrack, which allowed the team to build the geometry of whatever would replace the blue screen, generating real-time graphics that give a representation of what the final set extension is going to be.

ILM continued pursuing real-time rendering in the production of Solo: A Star Wars Story. The team used StageCraft VR, a new virtual production system powered by Unreal Engine, to design and understand the physical dimensions of a scene when previsualising the stunts. Working with real-time assets in a virtual production environment means the team can move things around and play with different aspects of a sequence. ILM stated that the ability to work creatively in real time brings out something that is impossible with cumbersome, slow, pre-rendered assets.

These are just a few examples of real productions taking advantage of real-time rendering, and many more are expected to embrace the technology and get creative with it.

Conclusion

A real-time production pipeline has many advantages that can save a tremendous amount of time and cost while still maintaining high-quality results. The technology is now powerful enough to produce visual fidelity that matches the aesthetic style of the creative vision.

Real-time rendering allows creators to achieve shots that would otherwise be unachievable, and to obtain them quickly. And since the quality gap between offline and real-time rendering is narrowing, often to the point of being indistinguishable, we may see more major productions around the world invest in real-time engines and standardise on new tools that make it easier to integrate the technology with existing pipelines.

The idea of hitting render and seeing entire shots pop out in seconds may sound too good to be true, but this is the power of real-time filmmaking: the future of content creation, a world where storytellers are not restricted by technology but empowered by it, and where the only limitations are creativity and imagination.

References

Sloan, K., 2017. Why Real-Time Technology is the Future of Film and Television Production, pp.3-13.

Akenine-Möller, T., Haines, E. and Hoffman, N., 2018. Real-Time Rendering, Fourth Edition, Chapter 2: The Graphics Rendering Pipeline, pp.11-14.

Evanson, N., 2019. How 3D Game Rendering Works, A Deeper Dive: Rasterization and Ray Tracing [Online] Available at: <https://www.techspot.com/article/1888-how-to-3d-rendering-rasterization-ray-tracing/> [Accessed: 10 January 2021]

Unity. Real-Time Rendering in 3D [Online] Available at: <https://unity3d.com/real-time-rendering-3d> [Accessed: 10 January 2021]

Unity. Real-time filmmaking, explained [Online] Available at: <https://unity.com/solutions/real-time-filmmaking-explained> [Accessed: 10 January 2021]

Novotny, J., 2018. How Does Eevee Work [Online] Available at: <https://blender.stackexchange.com/questions/120372/how-does-eevee-work> [Accessed: 10 January 2021]

Mirko, 2020. The Main Advantages of Real Time Engines vs Offline Rendering in Architecture [Online] Available at: <https://oneirosvr.com/real-time-rendering-vs-offline-rendering-in-architecture/> [Accessed: 10 January 2021]

Lampel, J., 2019. Cycles vs. Eevee – 15 Limitations of Real Time Rendering in Blender 2.8 [Online] Available at: <https://cgcookie.com/articles/blender-cycles-vs-eevee-15-limitations-of-real-time-rendering> [Accessed: 10 January 2021]

Failes, I., 2017. How Real-time Rendering Is Changing VFX And Animation Production [Online] Available at: <https://www.cartoonbrew.com/tools/real-time-rendering-changing-vfx-animation-production-153091.html> [Accessed: 10 January 2021]

Pimentel, K., 2018. Animated children's series ZAFARI springs to life with Unreal Engine [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/animated-children-s-series-zafari-springs-to-life-with-unreal-engine> [Accessed: 16 January 2021]

Failes, I., 2017. Upcoming Animated Series 'Zafari' Is Being Rendered Completely With The Unreal Game Engine [Online] Available at: <https://www.cartoonbrew.com/tools/upcoming-animated-series-zafari-rendered-completely-unreal-game-engine-153123.html> [Accessed: 16 January 2021]

The Mill. Blending live-action film and gaming for Chevrolet [Online] Available at: <https://www.themill.com/experience/case-study/chevrolet-the-human-race/> [Accessed: 18 January 2021]

Bishop, B., 2017. Rogue One's best visual effects happened while the camera was rolling [Online] Available at: <https://www.theverge.com/2017/4/5/15191298/rogue-one-a-star-wars-story-gareth-edwards-john-knoll-interview-visual-effects> [Accessed: 18 January 2021]

Seymour, M., 2017. Gene Splicer From 3lateral & ILM Rogue One on UE4 [Online] Available at: <https://www.fxguide.com/quicktakes/gene-splicer-from-3lateral-ilm-rogue-one-on-ue4/> [Accessed: 18 January 2021]

Morin, D., 2019. Unreal Engine powers ILM's VR virtual production toolset on "Solo: A Star Wars Story" [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/unreal-engine-powers-ilm-s-vr-virtual-production-toolset-on-solo-a-star-wars-story> [Accessed: 18 January 2021]

Polinchock, D., 2019. The Weather Channel Uses Immersive Mixed Reality to Bring Weather to Life [Online] Available at: <https://www.mediavillage.com/article/the-weather-channel-uses-immersive-mixed-reality-to-bring-weather-to-life/> [Accessed: 18 January 2021]

Lumsden, B., 2019. Virtual Production: The Future Group pushes XR to the limit [Online] Available at: <https://www.unrealengine.com/en-US/spotlights/virtual-production-the-future-group-pushes-xr-limit> [Accessed: 18 January 2021]

Character Performance Animation

(Term 1: Week 9-14)

Our last animation task for this term is to make a 10- to 15-second character performance animation: the art of making a character or object move in such a way that the audience believes the subject can actually think for itself. The animation can be with or without dialogue.

When Luke briefed the task and showed us some examples, I immediately thought of basing my animation on one of my favourite movie quotes, one I like to use in real life as a silly joke with my former fellow animators while we worked hard to finish an animation.

It is from the final fight scene of The Matrix Revolutions (2003), when Smith says to Neo, "Why, Mr. Anderson? Why, why, why? Why do you do it? WHY DO YOU PERSIST?!" Those who know the quote will answer, "Because I choose to." And we laugh as we persist in our passion for animation, even when it is hard. I thought the quote would be great for an exaggerated character acting animation.

Okay, back to work. I remembered the quote from a very long time ago, so I went to YouTube to refresh my memory of the scene and realised that the actual scene was a lot longer!

The video could be cut, but the sound of rain and thunder in the scene is a bit distracting. I preferred a cleaner dialogue recording.

Then I remembered another movie scene, from Johnny English (2003), where Johnny tries to unravel the mystery of how a thief got into a highly secured room to steal the Queen's jewels.

I think this scene is perfect for the task. So I extracted the dialogue, beginning from "Should I come in through the window" until Johnny almost falls into the hole behind him.

References

Luke wanted us to act out and record ourselves for reference. He said a good animator must also know how to act, so we can better understand how a character moves.

I then recorded several videos of myself acting out the dialogue. I referred to the original acting and attempted some tweaks. After lots of tries, I must say it was very hard to do. Rowan Atkinson's (Johnny's) acting, on the other hand, is brilliant, with his funny expressions and body language.

Original video of Rowan Atkinson brilliant acting
My (terrible) acting

My acting was so bad that I decided to use the original video as the main reference; otherwise the outcome of my animation would be very bad as well. I will still use my own video as reference for the parts where the actor is not shown in the original video.

And below are screenshots of the important keyframes from my acting video, used as a storyboard.

Storyboard

As we can see from the video and storyboard, there are several moving and holding motions throughout the scene. At frames 5 to 7 the character quickly raises his arms and hands, so I have to be careful with the character's limitations during the animation process. The character also changes his foot position on the floor several times, at frames 2, 8, 10, 11, and 16. From frames 16 to 20, the character makes a swinging motion with his body and hands before falling down.

Character and Scene Preparations

I used the same 'Thepp' character, which I can say is my favourite character this term, as I used it in most of my assignments. I'm very comfortable with its rig and controllers, so I can focus more on the animation itself, which is more important than constantly figuring out how to control the character. The character also has pretty good controllers for lip-sync and facial expressions.

I’m back for the final animation!

As for character limitations, Thepp has only four fingers on each hand, but that shouldn't be a problem since the scene does not require him to count to ten on his fingers. I have also noticed that Thepp doesn't really have a visible shoulder or hip shape when they rotate or move, so I had to use more exaggerated movements or other methods to make the motion more visible. Another thing to note: Thepp has a very large, wide head, so it becomes a bit of a problem if he wants to raise his arms straight up or touch the top of his head. Other than that, he can do pretty much any pose with no problem.

As for the background, I decided to make a very simple, uncluttered backdrop model so the audience will not get distracted and can focus on the character. Since Thepp is actually a mummy, I think a tomb-like room is perfect for him.

Backdrop

I imported the audio file into the timeline; the audio ends at frame 430, which means the scene will be about 18 seconds long at 24 frames per second.

Animation Planning

1. Body poses and basic facial blocking
2. Body motion splining
3. Facial expressions detailing
4. Lip-sync: Jaw motions > lip shapes > tongue > teeth
5. Final polishing for body and face

Some animators may prefer to animate the lip-sync first, as it may be easier to animate while the head has no animation yet. But I prefer to animate the body first, because not all of the lip-sync has to be animated when the character is moving and the mouth is not visible to the camera.

Full-body Blocking

As usual, during this process I used ‘Step tangents’ in the Graph Editor so I could focus on the pose-to-pose keyframes. I mostly referred to both my (horrible) acting and the original acting by Rowan Atkinson when blocking each pose. I made several pose adjustments, since the character’s anatomy is a bit different from a normal human’s, and I also exaggerated some poses to make them look more cartoony.

And then I reached the point where I had to pose the character for the dialogue “Should I drop down from the ceiling”, where the character has to raise his arms and shoulders. I could raise the shoulders very slightly, but the polygons around them would break if I went further. So, to give the illusion that the character is raising his shoulders, I lowered his head instead by rotating the neck bone slightly forward.

For the arms, I had no option but to carefully intersect the arms with his large head a bit to make the pose look closer to the reference. Since the view is from the front, the intersection was not very visible. I think the blocking poses for the dialogue part turned out pretty good.

The other challenge was the part where the character almost falls down and tries to balance his body by swinging his arms. It was not much of a problem during the blocking process, since there were no in-between motions yet, but I had to spend more time on this section during the splining process to make the timing and direction of the swinging body and arm motions look proper and believable.

During this process I also included basic facial expressions like eye direction, eyebrows and some mouth openings, but no lip-sync yet.

First draft blocking
Final blocking

Full-body – Splining

I was a bit nervous to convert the ‘Step tangents’ to ‘Auto tangents’, since I had experienced weird rotation problems when doing the previous body-mechanics task. Fortunately, this time the animation had no such problem. If I divide the animation into three sections, the first two were quite good and only needed some minor tweaks to the animation curves in the Graph Editor to fix the timing for proper ease-in and ease-out.

The last section, on the other hand, was a different story. It was the swinging body and arms part that I had already predicted during the blocking stage. I spent most of the time fixing the curves, and even had to redo some of the original blocking poses multiple, multiple times to get the right motions.

Bad falling animation

And then the same thing happened with the falling animation: the timing and motion looked so fake. After too many revisions, I got a bit frustrated and settled for a motion that I think looks good enough, although I’m not really satisfied with it, since I still had several more animation stages to finish.

Throughout the animation, I added subtle in-between arc motions each time the character changes from one pose to another, by modifying the animation curves or by adding in-between poses. Examples of this are the bouncy, up-and-down motions when the hip and head rotate from one direction to another. It makes the animation look more cartoony and stylised.

Final splining without facial expression and lip-sync

Facial Expressions and Lip-sync

After the body animation was finished, I proceeded to finalise the facial expressions, such as eye blinks, eyebrow motions, and additional fixes to the existing eye directions. For the eyelids, I made the eyes blink in the middle of the head rotating from one pose to another, but not at every rotation; it depends on the motion, which makes it look more natural.

Facial animation without lip-sync yet

The lip-sync process was quite straightforward, and I applied the same step-by-step process as in the previous phoneme task: finish the jaw animation first, then the lip shapes, the tongue, and finally the teeth motions.

Below are step-by-step videos for the lip-sync animation:

1. Animate the jaw opening and closing by rotating its controller

Jaw motions without lip shape

2. Animate the lip shape on top of the jaw motion

Jaw with lip shape motions

3. Add the tongue bending and the teeth open-and-close motions

Jaw, lip, tongue and teeth motions

Polishing Small Details

In this stage, I mostly made additional adjustments to the follow-through motions, especially for the spine and arms. The tip of each body part (the child) should move slightly later than the part before it (the parent). I also polished every small detail, especially exaggerating the pose and timing of the finger motions a little more and adding a small squash and stretch to the eye shapes.
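The follow-through idea can be sketched in a few lines of plain Python (a toy illustration only; the joint names, key times and lag amount are made up, and Maya itself stores keys on animation curves):

```python
# Toy sketch of follow-through: offset keys down a joint chain so each
# child lags behind its parent by a few frames.
parent_keys = [0, 12, 24, 36]   # hypothetical keyframe times on the spine root
chain = ["spine_mid", "spine_top", "arm", "hand"]  # hypothetical joint names
lag_per_joint = 2               # frames of delay added per child

offset_keys = {}
for depth, joint in enumerate(chain, start=1):
    # Deeper joints accumulate more delay, so the tip trails the most.
    offset_keys[joint] = [t + depth * lag_per_joint for t in parent_keys]

print(offset_keys["hand"])  # → [8, 20, 32, 44]
```

In practice I did this by hand in the Graph Editor, shifting the child controllers' keys a couple of frames later, but the principle is the same.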

I also made some more adjustments to the falling-down animation in terms of poses and timing, and extended the follow-through motion of the hand and foot a bit so it lasts until the last frame before the character disappears behind the floor. I think the overall motion is a bit better now.

Lighting and Rendering

Since I’m very new to Maya, I have to admit that I’m pretty weak with its lighting and rendering engines. I made a render in a previous animation task by following some quick tutorials. I discovered that the Maya Hardware and Software renderers are not very reliable, and many people suggested using the Arnold renderer instead. But I think what I did before was not done properly, since I just placed very basic lighting, and the scene did not have any textures, which affects the lighting and rendering look.

I watched several more tutorial videos on YouTube and read some forums to further my knowledge in this area. This time I gave the backdrop a proper shader and texture suitable for the Arnold renderer. The Arnold shaders had a proper effect on the look of the scene, like texture bumps and colour reflections on the surfaces of the objects. I used a Physical Sky as the main light for the hard shadows, and a low-intensity SkyDome light as an ambient light to soften the shadows a bit.

What I managed to render is not that great, but the final look sufficed and suited the animation.

Final Animation – Playblast

Final Animation – Render

Phonemes

(Term 1: Week 9)

This week we learned about phonemes and how to animate a lip sync.

For this task, I’m back to using the ‘Thepp’ character, as I think it has a fairly good facial rig that is easier to control. This is my first experience animating a lip-sync in Maya.

References

Below are some references from the internet that I used the most when doing this animation task.

Credit: Phoneme mouth chart reference by Will Boyer
Credit: Phoneme mouth chart reference by Preston Blair

Importing Audio

The next preparation step is to import the audio file by dragging it onto the timeline. And just like that, the voice is available in the timeline and can be heard when scrubbing and during playback.

Maya supports the .WAV and .AIFF audio formats on Windows. I started the file preparation on my Mac and tested several audio formats. I discovered that Maya on Mac supports .MP3 and .M4A audio with no problem, but when I moved the working files to continue on my Windows laptop, the audio files did not work and could not be imported into Maya. So it is best to just use the .WAV format for cross-platform compatibility, especially when working in a team with different machines.
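Python’s built-in wave module can generate and inspect .WAV files, which is a handy way to double-check that a file really is an uncompressed .WAV before handing it to Maya (a rough sketch; the filename and sample settings are just examples):

```python
import struct
import wave

# Write a short 1-second mono .WAV file (44.1 kHz, 16-bit) of silence.
# The filename is arbitrary, just for this demonstration.
with wave.open("test_tone.wav", "wb") as wf:
    wf.setnchannels(1)        # mono
    wf.setsampwidth(2)        # 16-bit samples
    wf.setframerate(44100)    # 44.1 kHz
    silence = struct.pack("<h", 0) * 44100  # one second of zero samples
    wf.writeframes(silence)

# Read it back: if wave.open succeeds, the header is a valid PCM .WAV,
# the kind of file that imports cleanly on both Mac and Windows.
with wave.open("test_tone.wav", "rb") as wf:
    print(wf.getnchannels(), wf.getframerate(), wf.getnframes())
    # → 1 44100 44100
```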

Animation Process

The first thing I tried was to animate the mouth, jaw and lips at the same time at every intended frame. But after several keyframes, the process became quite tedious: selecting and repositioning each controller, scrubbing the timeline, then selecting each controller again, back and forth, was not fun at all. And the result was quite bad.

After several failed attempts, I thought: why not do and finish the animation part by part, like I learned in the previous assignments when animating a character’s body? As Luke always mentioned in his lectures, always animate and fix the hip motion (the main body) first before moving on to the other body parts.

So, going back to my case here, the ‘main body part’ for lip-sync is the jaw. So I started animating the jaw first, then the lips, the tongue and the teeth. I can assure you that the animation process became so, so much easier.

Jaw controller, rotate it up & down to open & close based on the phoneme

Below are the basic steps I took when doing the animation part by part:

Step 1: Animate the opening and closing motion of the jaw first.

Step 2: Animate the closing and opening motion of ‘O’ lip shape.

Step 3: Animate the teeth and tongue.

Step 4: Animate the head and other parts.

For this task, I didn’t use the Graph Editor much, just some very minor tweaks to smooth out certain motions, since I think the automatic animation curves turned out pretty good already.

The dialogue I chose above was not too fast, so I did a quick test with another audio clip with faster dialogue. I discovered that you sometimes have to ignore some lip shapes when the character talks fast, and make shapes only for the most prominent sounds. Too many drastic shape changes within one frame make the lip motion look janky and not smooth.
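The thinning idea could be sketched like this (a toy example only; the frames, phonemes and prominence scores are invented, and this is not a Maya feature, just the rule of thumb I followed by ear):

```python
# Toy sketch: thin out phoneme keys that land too close together,
# keeping only the most prominent shape in each cluster.
min_gap = 2  # frames; shapes closer than this tend to look janky

# (frame, phoneme, prominence) — louder or longer sounds score higher.
# All values here are hypothetical.
phonemes = [(10, "S", 1), (11, "OO", 3), (12, "D", 1), (16, "AH", 2)]

kept = []
for frame, shape, score in phonemes:
    if kept and frame - kept[-1][0] < min_gap:
        # Too close to the previous kept key: keep whichever is more prominent.
        if score > kept[-1][2]:
            kept[-1] = (frame, shape, score)
    else:
        kept.append((frame, shape, score))

print([shape for _, shape, _ in kept])  # → ['OO', 'AH']
```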

Final Animation

Research Planning & Notes

(Term 1: Week 8-13)

In this week’s class, Luke asked us about our research topics for the 1,500-word essay. I proposed to research real-time rendering, as I’m really interested in the topic because it’s getting more popular and advanced nowadays.

I was involved in using a real-time renderer, Unreal Engine, when I was working on a game with my friends about two years ago. I was the animator for the in-game animations and cinematics. The game’s visuals were not designed to look realistic, but I was blown away by the power of the engine, which could render almost everything I threw into the cinematics in real time, such as fog, reflections and volumetric lights.

My small experience with Unreal Engine when producing animations for the game

As I was getting interested in the engine and searching online for tutorials to improve my Unreal Engine knowledge, I found out that the game engine is getting popular in actual film and animation productions because of its ability to produce realistic rendering in a shorter time compared to offline rendering.

Back to my research topic: at first I wanted to focus on the methods real-time rendering uses to optimise processing power and rendering time compared to other rendering types. But I think that research would end up with too much technical jargon, too many terms and algorithms that I myself don’t understand, as I’m not into programming.

Luke said that real-time rendering is such a massive topic, so he wanted me to narrow it down to how it is used to produce visual effects (VFX) or how it is used in virtual production.

After that, I spent a few days thinking about my topic, because I don’t want it to be too big or too small.

Early Research

Title ideas:
– Real-time Rendering Methods in Virtual Production
– Real-time Rendering and How It Can Benefit Visual Production
– The Future of Visuals with Real-time Rendering
– Advantages of Real-time Rendering in Visual Creation

Question / research scope:
– What is Real-Time Rendering
– Technology for fast rendering
– Brief history of real-time rendering
– Has been around for decades in games
– Method / how it works
– Three conceptual stages: application, geometry, rasterization
– Type of real-time engine
– Integration in production
– What are the benefits
– Time
– Cost
– Previs for script writing, set building, motion capture, animation
– What is the future of rendering? Can it compete with other renderers?

Early references:

I found a book titled “Real-Time Rendering, Fourth Edition” by Tomas Akenine-Möller, Eric Haines and Naty Hoffman. It goes really in-depth and is very technical too. It’s not really suitable for my short essay, so I may use this book just for basic reference.

https://www.techspot.com/article/1888-how-to-3d-rendering-rasterization-ray-tracing/

https://unity3d.com/real-time-rendering-3d

https://unity.com/solutions/real-time-filmmaking-explained

https://www.designblendz.com/blog/what-is-real-time-rendering-and-how-does-it-wo

https://www.broadcastnow.co.uk/tech/how-real-time-game-engines-are-enhancing-production/5124990.article

https://80.lv/articles/integrating-real-time-rendering-into-film-production-pipeline/

Video credit: Eduonix Learning Solutions

After more thinking and looking at various references, here is the near-final outline for my research essay.

Research Outline

Title
Advantages of Real-time Rendering in Animation and Visual Effects Design

Introduction (200 words)
– Problem in production in terms of rendering time, cost, etc
– Solution with real-time rendering

What is Real-time Rendering (300 words)
– Description of what is real-time rendering
– Brief example of how real-time engine works
– Maybe compare to offline rendering
– Example of real-time engines
– Not too technical


Benefits to workflow and creativity (900 words)
– Can see result in short time / instantly
– Fast review / feedback cycle
– Can experiment with ideas
– Team can collaborate in real-time
– Save time & cost
– Short development time
– Some real-time engines are free

– Find examples of real productions using real-time rendering
– How it helps with creativity
– How it is used during production
– How it opens new types / styles of entertainment

Conclusion (100 words)
– Real-time rendering is the future, as it is getting more powerful

Other related links:

How Real-time Rendering Is Changing VFX And Animation Production

https://www.unrealengine.com/en-US/spotlights/animated-children-s-series-zafari-springs-to-life-with-unreal-engine

Upcoming Animated Series ‘Zafari’ Is Being Rendered Completely With The Unreal Game Engine

https://www.polygon.com/2017/3/1/14777806/gdc-epic-rogue-one-star-wars-k2so

https://www.theverge.com/2017/4/5/15191298/rogue-one-a-star-wars-story-gareth-edwards-john-knoll-interview-visual-effects

Gene Splicer From 3lateral & ILM Rogue One on UE4

https://www.unrealengine.com/en-US/spotlights/unreal-engine-powers-ilm-s-vr-virtual-production-toolset-on-solo-a-star-wars-story

https://www.unrealengine.com/en-US/spotlights/virtual-production-the-future-group-pushes-xr-limit

Body Mechanics

(Term 1: Week 7 & 8)

This week we learnt about body mechanics in animation and were assigned to do a short animation on this topic.

Body mechanics can be described as the way a character’s body changes and moves from one pose to another. Great body-mechanics animation is when the sequence of movement through the body is prepared and timed properly to describe the intended action, so the audience believes the character really moved. Some good examples that show full-body mechanics are gymnastics, parkour, and even something as simple as a character lifting a heavy object.

References

For this week’s task, I planned to do a short parkour action. It is different from the previous character animation tasks: the character will not be doing a repetitive animation like a walk cycle, but instead a variety of continuous actions from start to finish. It can be hard to animate such complex motion using only imagination, or by designing the action from scratch, so a good video reference from a parkour expert can help a lot.

I then proceeded to find references online. After searching for several hours, I noticed that there are actually many good parkour videos, but some of them are not suitable as animation reference because of shaky camera movements and extreme angles. Most of the videos are also fast-cut compilations of short parkour clips, which are very hard to follow. For my reference, I was looking for a video with a consistent camera angle.

I finally found a video by StuntsAmazing on YouTube that met my criteria. I took a small portion of the video as the reference for my animation.

Credit: Video by StuntsAmazing

Storyboard & Movement Observation

The following are the key poses that I identified from the video.

From the video and the storyboard, we can see the person doing two jumps that will require 360-degree rotations of the whole body during the animation process later.

For the first jump, the person lands with his left hand touching the ground first (pose 5), followed by his right hand (pose 6). This may require IK controls to keep the hands stuck to the ground while the body keeps moving during the animation process. There is follow-through and overlapping motion in the legs during the jump. At the end of the first jump, he lands his right foot first (pose 7) and then the left foot (pose 8).

The person continues with two small run cycles (poses 8 to 13). He then makes the second jump, which is a roll. During the jump, there is ‘squash and stretch’ happening: the person squeezes his legs to his body (pose 15) and extends them back when he lands on both hands (pose 16), followed by his back (pose 17) and then his left foot (pose 18).

Blocking

For this week’s task, we needed to get the blocking done first and then continue to the final animation after receiving feedback from Luke.

This time, I used another character called ‘Sam’, because I think his anatomy is much better for practising body-mechanics animation than the previous ‘Thepp’ character.

Handsome, isn’t he?

As usual, I began by placing down all the key poses based on the storyboard, using ‘Step tangents’ in the Graph Editor. I think the process was not too hard, since the character’s rigs are quite easy to control, even though I was new to this character. The only minor problem was that the reference video has some blurry frames because of motion blur, so I had to anticipate and predict the placement of some body parts, especially the hands and feet.

Step Tangents are beautiful

Blocking Video

Feedbacks

This is the feedback from Luke:
– Fix curve for the body / spines
– Add more overlapping motion to the legs

Polishing

After I received the feedback, I began by fixing the original blocking first, following the poses suggested by Luke.

Then I started the splining and polishing stage. Right after I converted the keyframes to ‘Auto tangents’, I instantly noticed some problems with the rotation animation, especially for the arms and feet: they were now doing a funny spinning motion throughout the animation.

I made this mistake during the pose-to-pose blocking, where I just rotated each body part without paying attention to its rotation direction. When the body does a full rotation in the positive direction, for example, I should rotate the other body parts in the same direction. I didn’t notice this problem before because ‘Step tangents’ do not show the in-between motion. This is something I should take note of to prevent the same mistake in the future.

I managed to fix some of the problems in the Graph Editor by aligning the curves to have the proper increments along each axis. But it became too complicated, since the rotation directions were mixed across axes and keyframes. The easiest way I found was to delete the problematic rotation keys and recreate them by rotating the part back to its intended position through the correct direction.
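The underlying fix is essentially what an Euler filter does (Maya’s Graph Editor has one under the Curves menu): shift each rotation key by whole turns so it stays within half a turn of the previous key, which removes the unwanted full spins between poses. A minimal sketch in plain Python, with made-up key values:

```python
# Toy sketch of an 'Euler filter' style fix: adjust each rotation key
# by multiples of 360 degrees so consecutive keys are never more than
# 180 degrees apart, removing accidental full spins.
def unwrap_degrees(keys):
    fixed = [keys[0]]
    for angle in keys[1:]:
        prev = fixed[-1]
        # Shift by whole turns until the step from the previous key is short.
        while angle - prev > 180:
            angle -= 360
        while angle - prev < -180:
            angle += 360
        fixed.append(angle)
    return fixed

# A hypothetical wrist keyed at 350 degrees instead of -10, causing a spin:
print(unwrap_degrees([0, 90, 350, 360]))  # → [0, 90, -10, 0]
```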

There were a lot of full rotations throughout the animation, and I spent most of my time fixing them. I think this was the biggest problem in this task, and I learnt a lot from it. The other parts were almost the same as the previous assignment: offsetting some keys to create overlapping motion, and changing the animation curves to control the slow in and out.

Final Animation