Indie Film Project – Camera Tracking Week

Week: 31 May – 6 Jun

Over the weekend, I quickly re-watched Dom's tutorials to recall some of the important technical aspects. I also watched several other tutorials on YouTube to learn more about the software.

This week, I was also doing camera tracking for all the footage of my personal project, so the timing was quite right: my mind was all about camera tracking this week, although the two projects use different tracking software, 3DEqualizer and Blender.

I have 8 shots to track for the indie film project and 12 shots for my personal project, so I called this week 'CAMERA TRACKING MAYHEM WEEK!'

https://youtu.be/vI9_s8oRB_w
The tutorial above won't let me embed the video here, but it helped me a lot to understand importing tracking data into Maya, the use of the camera group, and scaling.

The above was one of the earliest tutorials I watched for 3DEqualizer on YouTube. This one shows how to deal with blurry footage and rolling shutter, which is helpful as the footage I'm going to work on has the same problems.

I watched the above tutorial to learn how to import 3DEqualizer tracking into Blender, since I use Blender for my personal project. If Blender fails to track my footage, I will have a backup. After watching this, I learned that we cannot simply import 3DE tracking into Blender; we need to run some scripts for it to work properly. I didn't try this yet, but I have taken notes of it.

This one was quite a simple tutorial, but it teaches how to use the lineup controls to constrain markers to a plane to align the camera and track.

I watched many other 3DEqualizer tutorials and bookmarked them for future reference.

Tracking with 3DEqualizer

I then proceeded with the tracking process and managed to track 6 out of the 8 shots this week. It took me about 5 days to track all the shots while also doing camera tracking for my personal project's footage.

Along the way, I experienced many challenges: bad tracking, bad camera alignment, and software crashes. At the same time, I also learned a lot and discovered several tricks that helped me solve the camera movements. My final tracking is not 100% perfect, as there are several small vibrations in certain frames.

It would probably take too long to blog everything about each shot here, so I will just summarise the shots that I think required unique solving methods beyond the normal, standard ones.

The shots I was able to track:
CC_0100, CC_0200, CC_0500, CP_0100, CP_0200_0300 & CP_0400

Remaining two shots:
CC_0300 & CC_0400.

CC_0300 & CC_0400 were the hardest for me to track, with their camera shake, rotation, and motion blur. I really struggled to get a good track on them; it felt like those 2 shots refused to be defeated by me at this point. I'm a bit disappointed, but at the same time happy that I was able to solve 6 of the shots. I might return to those 2 shots in another week.

CC_0300: the camera alignment was okay in the first half of the shot
… and then the alignment went wrong towards the end

Among the shots I was able to track, the hardest were CP_0200_0300 and CC_0100. Which is really weird, because in Blender, CC_0100 was the easiest and fastest to solve; Blender quickly recognised the camera's zooming motion with changing focal length without a problem. It took me about 1 day to solve this shot in 3DE compared to approximately 30 minutes in Blender.

I was able to track this shot pretty quickly in Blender, but in 3DE it was one of the hardest; I was pulling my hair out doing it

I tried everything, from placing markers near each corner of the shot to make 3DEqualizer recognise the zoom, to changing the camera constraint and enabling dynamic focal length. None of that gave satisfactory results. It could track the camera movement, but not the zoom; it thought the camera was moving forward instead of zooming, even after I set the positional camera constraint to fixed.

So, as a special method for shot CC_0100, I finally decided to animate the focal length (zoom) value manually. To estimate the correct focal length, I put a 3D plane in the shot as a measurement object, aligned it with the road at the beginning of the shot, and keyframed the camera's focal length value right before the zoom started.

Before the zoom. I didn't follow the focal length that Luke provided, as it caused the perspective of the plane to not match the road
Zooming with blur
After the zoom. Estimating the focal length value by re-matching the 3D plane with the road

I then went frame by frame to find where the camera stopped zooming, and carefully changed the focal length value until the 3D plane's shape and perspective matched the road again. With full suspense, I clicked the 'Calc All From Scratch' button and… it worked!!! It solved the camera movement perfectly! I can't describe how happy I was at that moment.
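
The idea behind the measurement plane can be sketched as a tiny calculation. This is only an illustration of the pinhole relationship I was relying on, not the actual 3DE workflow: when the camera position is fixed, a zoom scales image features in proportion to focal length, so the post-zoom focal length can be estimated from how much a static feature (like the road width in pixels) grew. All the numbers below are hypothetical.

```python
# Illustrative sketch (not the actual 3DE workflow): for a fixed camera
# position, a pinhole camera scales image features in proportion to the
# focal length, so the new focal length can be estimated from the old one
# and the measured scale change of a static feature.

def estimate_zoomed_focal_length(f_before_mm, size_before_px, size_after_px):
    """Estimate the focal length after a zoom from the apparent size change
    of a static feature (valid only when the camera itself does not move)."""
    if size_before_px <= 0:
        raise ValueError("feature size must be positive")
    return f_before_mm * (size_after_px / size_before_px)

# Hypothetical numbers: a 24 mm lens, road width grows from 300 px to 600 px.
print(estimate_zoomed_focal_length(24.0, 300.0, 600.0))  # 48.0
```

In practice I still nudged the value by eye against the 3D plane, since blur makes the pixel measurement itself approximate.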

The mountain of deviations!

The final solve error isn't pretty because 3DE thinks the markers deviated so much, but since the movement was correct, I don't think I can rely on the number alone.

CP_0200_0300

The same can be said for CP_0200_0300. The deviation was very high, but this was the best I could get; it would not go any lower. Since the camera movement was correct, I ignored the number. The movement was quite challenging to track because of the camera rotation. I could not solve it in the first few tries, but changing the 'Positional Camera Constraint' to 'Fixed Camera Position Constraint' helped with the tracking.

Below are the rest of the shots that I managed to track without much problem.

CP_0100. The easiest and quite perfect
CC_0500
CP_0400
CC_0200.

I didn't have much of a problem with CC_0200, but I could not get a lower solve error; the camera movement broke whenever I deleted markers with high deviation values. There was a lot of manual tracking and repositioning of markers frame by frame by hand. I was very careful to predict their locations when the shot became blurry.

Videos

I exported the tracking data for 3 random shots that I managed to track into Maya first, just to test how they look with the car model.

CC_0200
CC_0500
CP_0100

I uploaded those 3 videos to our Discord server first. I haven't had time yet to finish the road geo and to realign and rescale all 6 shots in Maya, as I'm also finishing the camera tracking for my personal project's footage at the same time. I will update the team again after I finish the Maya files.

To be honest, my mind was very tired this week after doing all the tracking work in both the indie film and my personal project. Nevertheless, I'm very happy that I managed to track 6 out of 8 shots from this project.

Indie Film Project – Helping With Camera Tracking Tasks

Week: 24 – 30 May

Early this week, I tried to track several shots from the indie film project as personal practice, since I'm not signed up as a matchmover on this project. Last week, I noticed that the videos have a lot of motion blur, which Emma and Marianna had difficulty solving in 3DEqualizer. I wanted to see if I could do it in Blender using the knowledge I gained several weeks ago. After looking at all the videos again, I decided to try CC_0100 and CC_0200 first.

CC_0100

CC_0100 has a fast camera zoom at the beginning, which causes very heavy blur. The first method I tried was 'Detect Features', which automatically places markers on areas Blender thinks it can track for the whole duration of the video. Unfortunately, it failed to recognise the ground area, probably because of the blur during the zoom. I ran the solve anyway just to see if it could actually solve the camera movement with the automatic markers, but the result was very bad.

Blender failed to automatically detect the ground

I had no option but to place markers manually and track them frame by frame. I analysed the video again and noticed the image was sharp towards the end of the footage, so I decided to track backwards. There were actually many contrast areas where I could place markers, but most of them were small and completely disappeared during motion blur.

Skimming through the video, the only contrast areas that still had good visibility during motion blur were the road lines, so I placed several markers at their sharp corners.

I then carefully tracked all the markers backwards frame by frame and stopped whenever a marker drifted from its original place because of the blurry frames, then manually placed it by guessing the correct position. The road lines made it quite easy to predict the locations even when blurred, because I could still see the lines' corners.

The road lines overlap because of the motion blur, but we can still see the lines' corners and predict the locations for manual marker placement.
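
The "guessing" I did on blurry frames is roughly constant-velocity extrapolation: assume the marker keeps the velocity it had over the previous clean frames, then nudge the result by eye. A minimal sketch of that idea, with hypothetical coordinates:

```python
# Rough sketch of predicting a marker's position on a blurry frame:
# carry forward the velocity measured between the last two clean frames
# (constant-velocity extrapolation). The coordinates are hypothetical.

def predict_marker_position(prev_positions):
    """Extrapolate the next (x, y) position from the last two known ones."""
    if len(prev_positions) < 2:
        raise ValueError("need at least two previous positions")
    (x1, y1), (x2, y2) = prev_positions[-2], prev_positions[-1]
    return (x2 + (x2 - x1), y2 + (y2 - y1))  # carry the last velocity forward

# Marker moved from (100, 50) to (104, 53); predict the next blurry frame.
print(predict_marker_position([(100, 50), (104, 53)]))  # (108, 56)
```

The prediction only gives a starting point; the visible line corners are what let me correct it to the real position.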

After several tries, correcting positions and removing bad markers, I managed to solve the camera movement with a solve error of 0.89. It's not great, but that was the lowest I could get; any more cleanup broke the camera alignment.

Final camera solve

Video

CC_0200

Next, I proceeded to CC_0200. This footage has a lot of camera movement in both rotation and position. The same problems happened with this shot when using the automatic feature detection, so I used the same tricks as on CC_0100 to manually place and track the markers.

This shot was many times harder to track than CC_0100.

I placed some markers on the road line corners like I did on CC_0100.
So much blur
I carefully predicted and estimated the locations of markers on blurry frames.
I carefully predicted and estimated the locations of markers on blurry frames.
Final camera solve

It took me about 5 hours on this shot, but I finally managed to solve the camera with accurate movement, though with a very high solve error of 15.30 px. I could get a value lower than 1.0 by deleting high-deviation markers, but the camera movement would no longer align properly. I guess I cannot rely on the numbers alone; probably because of the heavy vibration of the original camera, Blender thinks the markers have bad deviation.
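
To make sense of why a "correct" solve can still report a big number: the solve error roughly measures, in pixels, how far each tracked marker sits from where the solved 3D camera re-projects its 3D point. A small illustration of that idea (not Blender's exact internal formula), with hypothetical marker positions:

```python
# Rough sketch of what a "solve error" measures: the pixel distance between
# where each tracked marker actually is and where the solved camera
# re-projects its 3D point. Illustration only, not Blender's exact formula.
import math

def rms_reprojection_error(tracked_px, reprojected_px):
    """Root-mean-square pixel distance between tracked and reprojected 2D points."""
    sq = [
        (tx - rx) ** 2 + (ty - ry) ** 2
        for (tx, ty), (rx, ry) in zip(tracked_px, reprojected_px)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical markers: half-pixel deviations give a sub-pixel error.
print(rms_reprojection_error([(100, 100), (200, 150)],
                             [(100.5, 100.0), (200.0, 149.5)]))  # 0.5
```

With a heavily vibrating camera, the markers I placed by hand on blurry frames inevitably sit a few pixels off, so the average shoots up even when the overall camera path is right.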

Video

I then sent both videos to our Discord server during today's class (26th May). Luke said the tracks were good and asked if I'd like to pick other shots and help the team with the camera tracking.

I'm not really sure, because the tracking I did before was done in Blender and not 3DEqualizer. I don't remember much about 3DEqualizer, since I only used it a few times during Term 1. I may need to re-watch Dom's tutorials to recall the important technical aspects of the software. I think this is a good opportunity to practice the software again, as 3DEqualizer is one of the industry standards for matchmove and tracking.

So I told Luke that I will try to track the same footage the other matchmovers are currently working on, as a backup, and see if I can get good results.

Personal Project – Video Plates v2

Week: 24 – 30 May

Early this week, I managed to re-shoot the video plates to replace the rejected version recorded last week. I got up early at 5 o'clock, like last time, to avoid the crowds heading out to work and school.

This time, I chose another place near my flat that was exposed to the early morning light and not covered by the buildings' shadows. This was to prevent the problem with the last videos, where I needed to increase the camera brightness, which overexposed the already bright areas in the video. With a properly lit location, I could also keep the ISO as low as possible to prevent grainy videos.

I carefully shot the videos. The location is actually one of the main paths people take to work, as there are several offices around this place. There is also a gym that opens early in the morning, and during the shoot there were already several people coming to exercise there. I could not spend too much time shooting as more and more people started passing by, so I shot as many variations as possible when there were no (or not many) people around.

After the shoot, I did a quick edit on my computer to check the scenes. This time I'm very satisfied with the result, as it has good camera angles and movements, and overall consistent brightness throughout the videos.

I cut each shot to its intended duration and exported them with an extra 10 to 15 frames each as safe frames for final cutting later. I will use this version of the background plates for the camera tracking process in Blender next.

Indie Film Project – Tasks

Week: 17 – 23 May

Today (19th May), we finally received the actual video footage. Luke made a briefing video and a spreadsheet detailing our tasks for each shot.

Our team members and tasks

There are a total of 8 videos divided into 2 categories: CC (car crash) and CP (car pass). The CC shots need additional VFX work for the car crash simulation and effects.

The flow for each shot will basically be: ‘Matchmove > Animation > VFX > Lighting & Rendering > Compositing‘. For the animation tasks, the other animators (Layne & Tina) and I need to wait for the footage to be tracked by the matchmovers (Emma, Marianna & Antoni) first, so that we can animate the car according to the camera angle and ground surface of the scene. We could actually start animating the car while the footage is being tracked, but the animations might need to be changed again when we receive the camera track, to match the path and direction of the road in the shots.

I checked the footage in the collaboration folder and noticed that all of the videos are blurry or have motion blur when the camera moves. I'm not an expert in camera tracking, but I'm a bit concerned that the videos could be very challenging to track.

About 2 days later, my concern proved true, as Emma and Marianna discussed for several days the trouble they were having with the blurry videos.

Although I'm not a matchmover on this project, I was thinking of trying to track some of the shots in Blender to see if I could solve them. This would also be good practice for me, since I'm doing camera tracking in my personal project as well.

Personal Project – Video Plates v1 (Rejected)

Week: 17 – 23 May

This week, I didn't spend much time working on my personal project, since I was a bit busy with the collaborative indie film project.

I actually planned to (and should have) finalise the scenes and actions for the storyboard as soon as possible so I could shoot the background plates (video footage) for the camera tracking process. I thought long and hard about designing a good action sequence for the story, but I still didn't have any concrete ideas. I felt a bit stuck, so I finally decided to follow some of the actions and shots from the old Killer Bean animation, since the motive of my project is not to design choreography but to do character animation and camera tracking.

Some of the interesting action sequences from Killer Bean

I quickly referred to the Killer Bean animation and drew new rough story sketches to finalise the actions, the number of shots, and the angles and camera movements for the video shoot. There will be 12 shots, and I'm targeting a maximum duration of 30 seconds for the whole animation.

I woke up as early as 5 am to shoot the background plate videos, to avoid the crowds coming out in the morning to go to work and school. I shot the videos using only my iPhone, since I don't have any better camera equipment. The angles were based on the story sketches I did the night before. I made several variations and alternatives for each shot as backups.

Right after the shoot, I went to my computer and made a quick edit to see the result and how the shots come together. Honestly, I'm not satisfied with the video. The location was not very good: since it was between two buildings, the area was in shadow, which forced me to increase the camera brightness during shooting and made the already bright areas overexposed.

Rejected background plates

I plan to do a re-shoot another day, probably during the weekend or early next week.

Personal Project – Character Rigging

Week: 10 – 16 May

As planned for this week, I proceeded with the character rigging stage after the 1 to 1 session with Luke.

Model cleanup & finalisation

Before the rigging process began, I decided to make several changes and improvements to both character models. The previous models were actually quite rushed, with some of their topology looking a bit messy. I fixed the meshes to have more quads rather than triangles, added necessary loop cuts in some areas such as the arms, body, and cloth, and ensured that layered objects have aligned segments so they can bend at the right areas without the meshes overlapping each other.

Last week version (left), modified version (right)
Last week version (left), modified version (right)

The most noticeable changes were to the shoe models, where I modified the lower and front shapes. I also removed the shoelaces on Zeo's shoes to make the design look cleaner. I didn't change the Agent character as much, since both of them share the same body components, other than a few tweaks to the topology of his cloth.

Last week version (left), modified version (right)
I implemented Luke's suggestion to give the knees and elbows a more rounded shape. At first, I added several more edges and faces to make a rounded cap, but it looked a bit too detailed and the area stood out too much for the simple character design I want to achieve. So I ended up with a much simpler rounded shape.
Last week version (left), modified version (right)

I changed and finalised the characters' colors as well. For this project, I want the characters to have a simple material color style, so I didn't apply any heavy textures to them.

Lastly, I combined some of the character parts that belong together into 1 object to make the rigging process easier. I also reorganised all the final objects in the outliner and renamed them properly so I can easily identify and find them during rigging later.

Honestly, it has been almost a year since I last did character rigging in Blender. I don't use Blender as much as my main 3D software, 3dsmax, so I have forgotten several crucial parts of rigging in Blender, like setting up IK, pole targets, and controllers. I quickly headed to YouTube and watched several rigging tutorials to refresh my memory.

A simple tutorial to understand the very basics of bones and skinning
This was the tutorial I watched last year when I first started learning rigging in Blender, so I watched it again for this project. Although the whole series is about the Rigify plugin, the first part covers setting up bones manually, which is what I decided to use for my characters.

Bone & controller placements

With everything ready, I started the rigging process. For this project, I decided to build the skeleton manually rather than using a template from Blender's Rigify plugin. Rigify is actually great, offering a variety of skeleton templates for biped, bird, and quadruped characters, but I want a more direct and flexible bone structure for my characters.

From my experience using Rigify several times before, the system creates proxies and instances when generating the final skeleton, which can be a bit tricky to revert in order to edit and regenerate the bones. On the plus side, Rigify makes all the controllers and automates some bone positions when posing the character, such as the arm twist. While it's great to have that, I prefer to create much simpler bones for this project.

I began by creating the main bone structure, which covers all the character's body parts.

The main skeletal
Hand and finger bones

After the main bone structure was completed, I created the controller bones for the root, pelvis, hands, feet, elbows, and knees.

Main controller bones

I then set up IK for the arms and legs, and pole targets for the elbows and knees.

IK & pole target setup

Lastly, I added the facial bones for the eyebrows, eyelids, eyeballs, and lips.

Facial bones

Skinning and vertex weight

During the skinning process, I used manual vertex weight values instead of the standard weight painting method. I prefer the manual value method, especially when the object is low-poly, since we can select the vertices easily.

For me, it is many times faster to skin this way. We can just select the vertices and enter the intended weight to quickly make them follow whichever bone we want, compared to using the weight painting brush, where we keep painting again and again to get the correct values for each body part.

The manual value method is especially effective when dealing with very narrow areas such as fingers. Using weight paint there instead can be very painful when it paints the wrong surfaces, since they are very close to each other.

With the manual method, if I want the vertices around the character's chest to follow the upper spine bone 100%, for example, I can just set those vertices' weight to 1.0 for that bone, and set the vertex loop between the upper and lower spine to 0.2 so that it follows the upper spine by 20% to create a soft, smooth bend. It's very fast, and I get consistent weights on all sides.

Vertex weight value
Entering vertex weight values manually can be faster than weight painting, especially for narrow areas such as fingers

The only annoying part of using manual values in Blender is that it does not automatically normalize the weights to a sum of 1.0 across multiple bones that share the same vertices. So I need to use the 'Normalize All' function every time I enter a new value for the selected vertices, so it deducts and recalculates the remaining weight for the other bones. If I forget to normalize, multiple bones can affect the same vertices at maximum weight, which can cause the shapes to not follow the intended bones properly.

I used manual vertex weight values when skinning in 3dsmax as well, but 3dsmax automatically normalizes the values. I'm not sure if I can do the same in Blender; I haven't found a way yet, other than using the 'Normalize All' function.

In Blender, we need to manually normalize the weights every time we enter a manual vertex value so that the weights do not overlap with other bones and their sum stays at 1.0
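
Conceptually, what 'Normalize All' does can be sketched in a few lines: rescale each vertex's bone weights so they sum to 1.0. This is only an illustration of the math (Blender also supports locking groups, which this bare version ignores); the bone names are hypothetical.

```python
# Sketch of what Blender's 'Normalize All' does conceptually: rescale a
# vertex's bone weights so they sum to 1.0. Bone names are hypothetical.

def normalize_weights(weights):
    """Rescale a {bone_name: weight} dict so the weights sum to 1.0."""
    total = sum(weights.values())
    if total == 0:
        return dict(weights)  # unweighted vertex: nothing to normalize
    return {bone: w / total for bone, w in weights.items()}

# A chest vertex set to 1.0 on spine_upper, with 1.0 accidentally left on
# spine_lower too; without normalizing, both bones pull at full strength.
print(normalize_weights({"spine_upper": 1.0, "spine_lower": 1.0}))
# {'spine_upper': 0.5, 'spine_lower': 0.5}
```

This is exactly why a forgotten normalize deforms badly: two bones each holding weight 1.0 on the same vertex is equivalent, after normalization, to a 50/50 split the rig never intended.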

Personal Project – 3D Modeling & 1 to 1 Session

Week: 3 – 9 May

Earlier this week, I began modelling the characters in Blender based on the sketches I did last week. The characters I designed share some body parts, such as the body, eyebrows, mouth, hands, and legs, so I can use the same models for those. The major differences between them are the eyes, cloth, and color.

So the plan was to model the main character, Zeo, first, and reuse his model for the Agent character with additional props. I mostly used the basic, standard modeling tools such as extrude, bevel, cut, and loop cut.

I also used several modifiers, such as 'Mirror' to make the characters symmetrical and 'Subdivision Surface' to make the low-poly shapes smoother. For some parts, like the cloth and belt, I used the 'Solidify' modifier to give the models depth.

Most parts are modeled for 1 side only
With ‘Mirror’ modifier for symmetry
‘Solidify’ modifier for object depth
‘Subdivision Surface’ modifier to make the low-poly shape smoother
Main character, Zeo
The enemy, Agent

I managed to finish both character models within 3 days. I haven't proceeded to rig the characters yet because I want to show them to Luke first during the 1 to 1 session with him tomorrow.

1 to 1 Session With Luke

This evening (6th May), I had a video call with Luke on Discord. I prepared the 3D models, tracking samples, and a rough storyboard (not final) for this session.

I shared my screen and first showed him the reference for my idea, Killer Bean. I plan to make a short action story like the Killer Bean animation, but with real video footage as the background. I then presented the 3D character models that I finished in Blender. Luke was okay with the idea and the characters. He suggested improving the topology by giving the elbows and knees a rounded cap shape so they can bend properly.

Character topology

Next, I showed my camera tracking test videos to demonstrate whether I'm capable of doing a 3D-character-on-real-footage project. The tracks were not 100% perfect, but I think I can improve them on the actual footage later.

I then presented the rough storyboard as the initial idea for the story. I plan to make a very short animation, sort of like a TV advertisement; a battery ad, to be exact. It will be around 15 to 30 seconds long. Both characters will be shooting at each other, like an action scene in The Matrix and Killer Bean, with the ending tagline “The BatteriX: No other batteries could challenge”. I described the actions I planned in the storyboard and mentioned that the scenes will probably change, since the story is not final yet. Luke suggested doing closer shots for some scenes and picking 2 to 3 shots that really show decent camera tracking and full-body action for the showreel.

Indie Film Project – Animation Test Feedbacks

Week: 3 – 9 May

This week, the actual footage was still unavailable and might still be being filmed. Everyone in the team has been doing practice and tests since last week as preparation for the actual work later.

The Ftrack page that Luke created was being populated with everyone's test videos from all departments. I watched some of them and was very impressed with their work. Watching other people's work always inspires me to keep improving.

I also uploaded the car animation test I did last week to Ftrack, and below is Luke's feedback on my animation.

Indie Film Project – Animation Test

Week: 26 Apr – 2 May

Last week, Luke created a new collaborative folder for the indie film project on his shared OneDrive for us to share and upload our working files. He also opened a new review page on Ftrack for anyone wanting to share videos for feedback.

Since we were still waiting for the actual footage to work on, Luke provided materials for all departments to practice with first. For the animation team, Luke made a tutorial video on how to animate a car with realistic motion and rumble. We can practice using the temporary car model from the collaborative folder.

I downloaded the car and road models, including their texture files. The car is rigged very nicely, with the necessary controllers to animate it, complete with automatic tire rotation when the car moves. I set up a new project folder and put everything in the respective directories.

Car model with rig
Road model

Even though this is just practice, I used the reference system when importing both models into a new, empty Maya file, which became the animation file. The animation is done on a model linked to the external file, so any update to the original car model is applied to the animation file automatically.

Referencing car and road model

I repositioned the car properly in a suitable location on the road. I then made a ‘camera and aim‘ and parented it to the car's root controller so the camera moves together with the car.

I used animation layers as suggested by Luke in his tutorial, creating 2 additional layers. The BaseAnimation layer is for the main car movement, while carbody_02 and carbody_03 are for the car's body animations.

Animation Layers

I started with the rumble animation on the carbody_02 layer. I made a short 60-frame sequence with a slight bumping motion, then looped it in the Graph Editor using Curves > Post Infinity > Cycle with Offset.

I actually used the same technique when I animated the space buggy during the Term 2 collaborative project.
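
The 'Cycle with Offset' extrapolation can be sketched outside Maya as a small evaluator. This is an illustration of the behaviour, not Maya's implementation: past the last keyframe the curve repeats, but each repeat is shifted by the value difference between the last and first keys, so a short "move forward" clip keeps travelling instead of snapping back. The keyframe numbers are hypothetical.

```python
# Sketch of 'Cycle with Offset' post-infinity extrapolation: past the last
# key the curve repeats, each cycle shifted by (last value - first value).
# Keys are a {frame: value} dict, linearly interpolated in between.

def cycle_with_offset(keys, t):
    """Evaluate the keyed curve at time t, extrapolating past the end
    with cycle-with-offset behaviour."""
    frames = sorted(keys)
    start, end = frames[0], frames[-1]
    length = end - start
    value_offset = keys[end] - keys[start]
    cycles = 0
    while t > end:            # wrap t back into the keyed range,
        t -= length           # counting how many full cycles we passed
        cycles += 1
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= t <= f1:     # linear interpolation between surrounding keys
            u = (t - f0) / (f1 - f0)
            base = keys[f0] + u * (keys[f1] - keys[f0])
            return base + cycles * value_offset

# Hypothetical car travel: frames 0..60 move the car from 0 to 10 units.
keys = {0: 0.0, 60: 10.0}
print(cycle_with_offset(keys, 90))  # 15.0 (halfway through the 2nd cycle)
```

For the rumble layer the first and last values are nearly equal, so the offset is close to zero and the bump pattern simply repeats in place; for the car's root the offset is the distance travelled, so the motion continues down the road.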

I then made another animation in the carbody_03 layer, a much longer sequence with a subtle, smoother wobble. The animations in both layers blend together to create a more organic motion that doesn't look too repetitive.

I tweaked both animations several times by adjusting the curves and keyframes in the Graph Editor until I was satisfied with the result. The objective was to make the rumble look very subtle but still noticeable and realistic.

After the rumble motion was finished, I animated the car's root in the BaseAnimation layer to make the car move down the road. I only animated a small portion and looped it again using the same function as before to create a continuous motion.

For the last part, I animated the camera using its target (aim). I manually added a subtle shake to the camera's aim to make the camera look like it is being held by hand.

Below is the final animation for this practice.

Final Animation

Personal Project – Rough Designs & Practices

Week: 26 Apr – 2 May

After I proposed my personal project to Luke last week, I started thinking more about the short animation's story, the characters, the actions, and suitable places where I can shoot the background footage.

I plan to use Blender for every part of this animation, as I learned this software can do compositing and camera tracking as well. I have limited experience with modelling and rigging in Blender, plus I have never done any camera tracking in it before. So I think this is a great opportunity to polish my skills and learn new techniques in this software.

Concept & Characters

Killer Bean by Jeff Lew

My main reference for the characters is the Killer Bean animations by Jeff Lew. I actually made a very simple short animation inspired by Killer Bean as a test when I started learning 3D modeling & rigging several years ago. I based the character designs on a battery and called it 'The BatteriX' as a parody of the Matrix films.

My old animation. Very very bad

The old animation was done in 3dsmax, so this time around, I want to redo everything from scratch in Blender. While I'm at it, I'm improving the character designs as well. I did some sketches as guides for the 3D modelling process, and the main character is now wearing clothes. Finally!

Main character, Zeo
The enemy, Agent

I plan to start modelling the characters early next week and hopefully finish before the 6th May session with Luke that I booked last week. I want to show them to Luke first to get his feedback before I rig the characters.

Camera Tracking Practice in Blender

I didn't proceed with the modelling yet, because I wanted to research and practice camera tracking first to see if I could do it properly in Blender, since I have never done it before. I watched several video tutorials on YouTube on how to do it. Below are some videos that really helped me understand the technique.

This last video is not really about camera tracking, but I did watch it, learned a few techniques, and understood more about Blender's tracking capabilities.

I then did some practice using footage I shot near my house and a test clip from the indie film project. I used both the manual and auto tracking functions in Blender.

After several tries, I managed to get a solve error of 0.45 px
The solve error on this video was very low (0.22 px), but the tracking was not very accurate. I guess we cannot rely on the numbers alone.
The most accurate tracking of all three since the video was very stable compared to the previous two.

I actually did more tests on several other videos, but the 3 above are the ones I think turned out quite well. They may not be 100% accurate, but I now understand how it works in Blender.