I’ve learned many good things throughout the Advanced and Experimental unit over these two terms: from learning new software, tools and techniques, to becoming more involved with friends and the team, to finally discovering what I really want to pursue. I discussed my thinking on a future path, which was still unclear at the time, in my old entry (https://kamil.myblog.arts.ac.uk/2021/02/01/self-reflection-and-future-path/).
By the end of this term, it has become much clearer that I want to focus my career on animation, especially character animation. I also discovered that I can somehow do camera tracking work quite nicely, although not at an expert level. I managed to track the footage from my personal project, as well as 6 out of 8 challenging shots from the Indie Film project, which I originally signed up for as an animator only, not as a matchmover. Before this, I always assumed I was very weak at tracking and had little interest in it. But thanks to Luke, who encouraged me to keep the idea of doing camera tracking in my personal project, I went on to try the same with the Indie Film project.
It’s completely different now: I’ve even proposed to include camera tracking as part of my final major project (FMP), combined with 3D character animation.
During this term, I was more active and engaged in conversations with others compared to previous terms. I also tried my best to assist friends who asked for help in the class and group Discord servers when they had technical or software problems. Probably because of this, several classmates started messaging me personally when they need help. I believe that by helping others, I also learn something new while trying to solve their problems.
Speaking of learning new things, in terms of techniques I’m now more knowledgeable about the tracking tools in Blender and 3DEqualizer, have discovered a few tricks for solving a challenging camera track, and have learned and come to understand more tools in the several 3D packages I used this term. I also learned how to use Houdini and was quite amazed by its capability, although I don’t plan to use it as my primary animation tool at the moment.
I hope that everything I have learned, now and in the future, is not only for my own benefit, but something I keep developing to contribute to and benefit other people as well. This is something I want to achieve with my future works and thesis. To me, that is the most important thing.
With 5 more days until the deadline, I decided to prioritise finalising all the shots that I managed to block last week; if I had time left, I would then work on the remaining shots. I had actually planned to finalise the animation right after the blocking last week, but I was very busy with the Indie Film project, preparing the Maya files with the road geo and final camera scaling so that I could send them to the team members as soon as possible for them to carry out their tasks.
Finalising Character Animation
Back to the personal project: I started with Shot11 again this time, as this shot has all the effects in it, so I can reuse the effect settings I set up here for the other shots.
During the previous blocking process, I intentionally used constant interpolation so I could focus on the blocking poses and not be distracted by the in-between motion. But to finalise the animation, I first needed to change it to another interpolation.
I selected all the keyframes in the Graph Editor and changed the interpolation to ‘Bezier’.
But with ‘Bezier’ interpolation, an ‘Automatic’ handle type was assigned, which made the character’s motion look a bit weird: the character kept moving even when it held the same pose across 2 adjacent keyframes, because Blender tries to smooth the curves by averaging their shapes based on the keyframe locations. It’s like adding a ‘Subdivision Surface’ modifier to smooth an object’s shape, which uses the locations of the edge loops and the object’s topology to average out the smoothed result.
To fix this, I changed the handle type from ‘Automatic’ to ‘Auto Clamped’. The animation looked better now, but a bit more work was needed. With the new interpolation, I have much more control over the timing, and can offset the animation so that certain parts move slower or faster by adjusting the keyframes and curves.
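For anyone who prefers scripting this instead of clicking through the Graph Editor, here is a minimal sketch using Blender’s Python API; it assumes the character’s action is on the active object (the loop is my own illustration, not part of my actual workflow):

```python
import bpy

# Switch every keyframe of the active object's action to Bezier
# interpolation with 'Auto Clamped' handles.
action = bpy.context.object.animation_data.action
for fcurve in action.fcurves:
    for key in fcurve.keyframe_points:
        key.interpolation = 'BEZIER'
        key.handle_left_type = 'AUTO_CLAMPED'
        key.handle_right_type = 'AUTO_CLAMPED'
```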
I paid attention to each movement in the sequence, including the subtle changes in the character’s facial expression, and animated small details such as the follow-through motion of the pistol bag. And since this scene has a slow-motion part, I animated the character’s finger pulling the trigger and the movement of the top part of the gun.
During this process, I constantly switched between the low-poly and subdivision modes to make sure the final shape didn’t overlap with other parts or objects.
I then proceeded to finalise the animation for the other 4 shots. The process was much the same as for Shot11.
Render Properties
Before proceeding with the effects, I wanted to make sure the render settings were correct first, so I could get proper render results, especially when testing how the effects look in render.
I changed several parameters in the ‘Render Properties’ rollout. I decided to use the ‘Cycles’ render engine instead of ‘Eevee’ because I want better glass reflections for the bullet trail effect. I also turned on ‘Transparent’ and ‘Transparent Glass’ so that the 3D character can be rendered over the footage and I can achieve the glass effect that I want.
Since Cycles generally takes much longer to render, I wanted to optimise it for faster renders without sacrificing too much quality. I reduced the render sampling from the default 64 down to 8 and test-rendered a frame. The render was quite fast, about 20 seconds per frame at 1920 x 1080, but the result was far too grainy.
I countered the grainy image by activating ‘Denoising’ with ‘OpenImageDenoise’. There are other denoiser options, such as NLM and OptiX, but ‘OpenImageDenoise’ is the fastest to calculate. I rendered again with 8 samples and denoising turned on, and this time the quality was a bit better, but the render time roughly doubled.
I still wasn’t quite satisfied, so I increased the sampling to 16 and rendered again. The render took about a minute now, but the quality was nice, with very minimal grain. I think it matches the background footage, which also has a bit of noise.
I then experimented with higher sampling to see if I could push the quality further within an acceptable render time. The quality kept improving, but it took around 2 to 5 minutes per frame. I also tried higher sampling without denoising, but none of the results or times fit what I wanted to achieve. So I went back down to 16 samples, which I think is the sweet spot between good quality and acceptable render time.
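For reference, the same settings can also be applied from Blender’s Python console; this is a rough sketch of what I described above (property names are for the Blender 2.9x series, which is what I’m using):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.film_transparent = True        # 'Transparent': character renders over the footage
scene.cycles.film_transparent_glass = True  # 'Transparent Glass' for the bullet trail look
scene.cycles.samples = 16                   # the sweet spot I settled on
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'  # fastest of the three denoisers
```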
Early Compositing Setup
Normally, compositing would be done in the last phase. But since I want to make the effects now, I needed to configure this first, so that all assets in the foreground and background layers, including the effects, appear over the background footage as intended.
Foreground and Background Layers
Since Blender has compositing capability, I decided to do this process directly in Blender instead of using separate compositing software such as Adobe After Effects or Nuke. I had never done this in Blender before and had always wanted to try it. Blender’s compositor is node-based, and after several tries and a bit of playing around, I find it very straightforward and easy to understand.
Below is the early version of the compositing nodes I set up for the shot to combine the character layer with the background plate.
Compositing nodes to make the character appear on the background plate
I made sure that ‘Convert Premultiplied’ on the Alpha Over node was turned on, so that the edges of any object in the layer blend smoothly with the background. These compositing nodes will be updated again later as I make the effects.
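The node setup in the screenshot roughly corresponds to this Python sketch; I built mine by hand in the Compositor, and the clip name here is hypothetical:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

plate = tree.nodes.new('CompositorNodeMovieClip')      # background footage
plate.clip = bpy.data.movieclips.get('shot11_plate')   # hypothetical clip name
fg = tree.nodes.new('CompositorNodeRLayers')           # character render layer
over = tree.nodes.new('CompositorNodeAlphaOver')
over.use_premultiply = True                            # 'Convert Premultiplied'
out = tree.nodes.new('CompositorNodeComposite')

tree.links.new(plate.outputs['Image'], over.inputs[1]) # bottom input: background
tree.links.new(fg.outputs['Image'], over.inputs[2])    # top input: character
tree.links.new(over.outputs['Image'], out.inputs['Image'])
```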
Spark Effect
With the render settings and basic compositing ready, I proceeded to make the spark, muzzle and bullet effects. I decided to build all the effects manually instead of using particle simulation.
For the spark, I used several low-poly spheres that I modified into oval shapes. I arranged them in several variations of size and position so that they look a bit random. The spark group was then parented to a dummy object (an ‘Empty’ in Blender) of the ‘Plain Axes’ type, so I can move the whole group anywhere I want, even while the objects in it are animated.
I then animated the oval spheres to start very small, grow, then shrink again until they disappear, all while moving outward. The spark animation is very short, only 8 frames.
I created a new material for the spark with an ‘Emission’ node so it glows by itself, even without lighting. I set the strength to 10.
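Both the parenting setup and the glow material come down to a few lines if scripted; here is a sketch assuming the spark spheres are the selected objects (the names are my own, hypothetical ones):

```python
import bpy

# Parent the spark spheres to a 'Plain Axes' Empty so the whole
# group can be repositioned even while the spheres are animated.
root = bpy.data.objects.new('SparkRoot', None)     # an Empty has no mesh data
root.empty_display_type = 'PLAIN_AXES'
bpy.context.collection.objects.link(root)
for sphere in bpy.context.selected_objects:
    sphere.parent = root

# Self-glowing material: an Emission shader with strength 10.
mat = bpy.data.materials.new('SparkEmission')
mat.use_nodes = True
nodes = mat.node_tree.nodes
nodes.clear()
emit = nodes.new('ShaderNodeEmission')
emit.inputs['Strength'].default_value = 10.0
output = nodes.new('ShaderNodeOutputMaterial')
mat.node_tree.links.new(emit.outputs['Emission'], output.inputs['Surface'])
```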
I updated the previous compositing nodes with 2 ‘Glare’ nodes, attached to the ‘Composite’ and ‘Viewer’ outputs, so the emission effect appears both in the render and in the viewport preview. I set the quality to ‘High’ and the size to the maximum of 9.
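Continuing the node sketch from earlier, each Glare node would look something like this in script form (‘Fog Glow’ is my assumption here; the screenshot doesn’t show which glare type I picked):

```python
glare = tree.nodes.new('CompositorNodeGlare')
glare.glare_type = 'FOG_GLOW'  # assumption: fog glow suits a soft emission bloom
glare.quality = 'HIGH'
glare.size = 9                 # the maximum size
```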
In Shot11, the character dodges the enemy’s bullets 3 times, and 2 of those dodges needed the spark effect. So I duplicated the spark, placed the 2 copies in the right spots and changed their animation timing, as they appear at different times in the scene.
Spark #1 in viewport preview
Spark #1 in render test
Spark #2 in viewport preview
Spark #2 in render test
Bullet Trail Effect
The bullet trail was created from a low-poly cylinder. I modified the object to imitate the shape of the trail effects from Killer Bean and The Matrix.
Low-poly cylinder
Now it’s a bullet trail!
Next, I created a bullet model and a new ‘Plain Axes’ Empty. I parented the trail model to the bullet, and then parented the bullet to the ‘Plain Axes’ object, the same method I used for the spark effect.
Bullet and its trail with the Array and Subdivision modifiers
I applied an ‘Array’ modifier with a count of 10 to make the trail longer, and added a ‘Subdivision’ modifier at level 2 for both viewport and render to smooth the shape.
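Scripted, the two modifiers with those exact settings would be (the object name is hypothetical):

```python
import bpy

trail = bpy.data.objects['BulletTrail']            # hypothetical object name
array = trail.modifiers.new('Array', 'ARRAY')
array.count = 10                                   # repeat the trail 10 times
subsurf = trail.modifiers.new('Subdivision', 'SUBSURF')
subsurf.levels = 2                                 # viewport level
subsurf.render_levels = 2                          # render level
```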
Array and Subdivision modifier settings
I then animated the bullet coming from off camera towards the character. And since this shot has a slow-motion part, I modified the bullet’s animation curve in the Graph Editor so it goes from fast at the beginning to suddenly moving really slowly during the second half of the scene.
Taper modifier
I added a ‘Simple Deform’ modifier set to ‘Taper’ to the trail model to give the effect a very subtle inflate and deflate motion, then adjusted its curve in the Graph Editor.
For the trail’s final look, I created a new material and tested various combinations of Metallic, Specular, Roughness and other values to get the result I wanted. After numerous trials and errors, I ended up with 0.0 for almost everything, except 0.6 for IOR and 1.0 for Transmission and Alpha.
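On a Principled BSDF, those final values would look like this in script form (socket names as in Blender 2.9x; this is just a record of the values above, not how I actually dialled them in):

```python
import bpy

mat = bpy.data.materials.new('BulletTrailGlass')
mat.use_nodes = True
bsdf = mat.node_tree.nodes['Principled BSDF']
for socket in ('Metallic', 'Specular', 'Roughness'):
    bsdf.inputs[socket].default_value = 0.0
bsdf.inputs['IOR'].default_value = 0.6
bsdf.inputs['Transmission'].default_value = 1.0
bsdf.inputs['Alpha'].default_value = 1.0
```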
Below are some of the render tests I did while experimenting with different parameter values.
And here is the final look for the bullet trail effect.
Bullet trail final look in render
Muzzle Flash
Similar to the 2 previous effects, I created the muzzle flash from a primitive object, a low-poly sphere to be exact. I modified the sphere into the shape below and applied a ‘Subdivision’ modifier to smooth it.
I animated the muzzle by scaling its size and adjusted the animation curve in the Graph Editor for a smooth transition from fast to slow motion.
Muzzle flash animation curve
Material with high emission in viewport preview
I then applied the same material as the spark effect, which glows on its own.
Muzzle flash final look in render
Final Compositing
Next, I imported the muzzle flash into the other shots and noticed some visual errors on the muzzle effect when rendered. It looked as if the effect was being clipped or masked by the character’s shape.
Muzzle visual error in render
After checking several places, I found that the problem came from the layers and the ‘Convert Premultiplied’ option on the Alpha Over node used to composite the background and foreground layers, so some tweaks were needed. ‘Convert Premultiplied’ doesn’t work well with objects that have a glowing effect or soft transparency, like the muzzle.
The previous compositing nodes
I could disable ‘Convert Premultiplied’ to fix the problem with the muzzle, but that would then cause a problem with the character, as I need to keep that option active to get a smooth alpha blend between the character and the background footage. So I created another scene collection and view layer just for the muzzle, to separate it in compositing and rendering.
New collection and layer for the muzzle
In the compositing window, I created a new ‘Render Layers’ node for the muzzle layer and another ‘Alpha Over’ node without ‘Convert Premultiplied’. I then connected the output of the old ‘Alpha Over’ node, together with the muzzle’s ‘Render Layers’ node, into the new ‘Alpha Over’.
This way, all the objects and the character in the foreground layer are blended properly with the background using ‘Convert Premultiplied’, while the muzzle is rendered and composited separately without it.
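Continuing the earlier node sketch, the fix looks roughly like this (the ‘Muzzle’ view layer name is my own label):

```python
# Second Render Layers node pointing at the muzzle-only view layer,
# composited over everything else WITHOUT premultiply conversion.
muzzle = tree.nodes.new('CompositorNodeRLayers')
muzzle.layer = 'Muzzle'                # hypothetical view layer name
over2 = tree.nodes.new('CompositorNodeAlphaOver')
over2.use_premultiply = False          # straight alpha for the soft glow

tree.links.new(over.outputs['Image'], over2.inputs[1])    # previous composite below
tree.links.new(muzzle.outputs['Image'], over2.inputs[2])  # muzzle on top
tree.links.new(over2.outputs['Image'], out.inputs['Image'])
```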
And below is the result.
Fixed!
As a final adjustment to the compositing, I added a ‘Bright/Contrast’ node after the foreground ‘Render Layers’ node to adjust the character’s brightness and contrast so that it matches the background footage. I also added a ‘Blur’ node on the background, with animated Gaussian values, to add a focus and depth illusion when the character is closer to the camera.
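And the final two adjustments, sketched the same way; the brightness, contrast, blur and frame values below are placeholders, since mine were tuned by eye:

```python
# Match the character's exposure to the plate.
bc = tree.nodes.new('CompositorNodeBrightContrast')
bc.inputs['Bright'].default_value = 0.0      # placeholder value
bc.inputs['Contrast'].default_value = 0.0    # placeholder value

# Animated Gaussian blur on the background for a shallow-focus illusion.
blur = tree.nodes.new('CompositorNodeBlur')
blur.filter_type = 'GAUSS'
blur.size_x = blur.size_y = 20               # placeholder pixel radius
blur.inputs['Size'].default_value = 0.0
blur.inputs['Size'].keyframe_insert('default_value', frame=1)    # placeholder frames
blur.inputs['Size'].default_value = 1.0
blur.inputs['Size'].keyframe_insert('default_value', frame=48)
```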
Final compositing nodes
Final Render
I applied the same process and steps to all the shots, and below are the final renders.
Shot07 final render
Shot08 final render
Shot09 final render
Shot10 final render
Shot11 final render
As we already know, there are many proven formulas and guides for animating bipeds, quadrupeds and even flying creatures, used by animators around the world. But even though many movies and games feature 6-legged creatures, there are not many extensive studies or formally documented guides for this type of creature’s basic movements, especially across different leg designs, timings, directions and movement patterns.
Creatures with multiple legs like this can become complicated to animate, so this is the area I want to study, and perhaps I can come up with a standard basic formula that is useful to others. My thesis will basically be a study of the basic movements of 6-legged creatures, while my FMP will be a short animation featuring these types of creatures that demonstrates the findings from the thesis.
Scopes
Focus on 6-legged creatures and their 5 basic movements, to understand the timings and patterns: walk and run cycles, jumping and landing, and turning. Note that some 6-legged creatures move sideways instead of forward.
Study 3 types of creatures with different leg designs:
Insect-type legs: segmented and usually spread around the body, as in ants, crabs, scorpions and spiders. Spiders have 8 legs, by the way, but I will probably include them as well, since they can help in understanding how multi-legged creatures move.
Horse-type legs: front legs usually bend backward, as in horses, cows and sheep.
Tiger-type legs: front legs usually bend forward, as in cats, tigers and lions.
The creatures studied for the last 2 types will be alien and fantasy creatures from movies and games, since 6-legged creatures with these leg designs don’t exist in the real world.
References
I will observe real creatures such as ants, cockroaches and crabs, either live or from videos, as well as existing animation, movies and games for the alien and fantasy creatures.
I will also refer to previous studies on bipeds and quadrupeds, as they still relate to this topic.
Aim & Benefits
My aim for this thesis is to discover and develop the basic keys, patterns and timing for 6-legged animation that can be applied to any real, alien or fantasy creature. And as I mentioned previously, perhaps this research can be useful and become a reference for other animators animating this type of creature. The research could later be extended to hybrid creatures with wings, creatures with more legs such as centipedes, or creatures whose hands double as legs.
_____________________________
FINAL MAJOR PROJECT
I’m actually very interested in character animation, especially stylised animation. So, without moving far from my interest and my thesis, I’m planning to do a stylised character animation combined with the knowledge of 6-legged creatures from my thesis.
Concept
A short animated story, or a set of clips, with stylised 6-legged creatures as characters and real video footage as the background, since I studied and explored this technique in my personal project in term 3.
Software
3D software: Blender
Camera tracking: 3DEqualizer and Blender
Editing & compositing: DaVinci Resolve and Final Cut Pro
Objective & Goal
I hope to further strengthen the skills I want to focus on for my career, which are character animation, rigging and matchmove, while at the same time demonstrating the formulas studied in my thesis.
With less than 2 weeks before the deadline, I was very eager to start the character animation phase, and I targeted animating at least 2 or 3 shots this week even though some shots had yet to be tracked. I wanted to do something other than tracking this week after last week’s ‘Camera Tracking Mayhem’, and doing animation would help refresh my mind a bit. Since I had about half of the shots tracked, I could slip into animation and discover any problems sooner, leaving me time to make changes.
I decided to begin with Shot11 first. In this scene the character (Zeo) dodges bullets several times, then jumps while slightly spinning his body in slow motion towards the end of the shot. I didn’t actually record the video in slow motion, but given the visuals and camera movement in the second half, I think I can animate the character in fake slow motion without it looking weird when combined with the non-slow-mo footage.
The first thing I did was match the size of the character with the environment by scaling the camera group together with its tracking markers. The markers’ positions on the floor were already correct, so to scale the camera properly without pushing the markers below the floor, I couldn’t simply scale the camera around its own pivot; I had to scale from the centre of the grid. A bit different from Maya and 3ds Max, Blender has something called the 3D Cursor that can be placed anywhere and used as a temporary pivot. So I placed the cursor at the centre, changed the ‘Transform Pivot Point’ to ‘3D Cursor’, and scaled from there.
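In script terms, the pivot switch is just two properties (with the cursor at the world origin):

```python
import bpy

bpy.context.scene.cursor.location = (0.0, 0.0, 0.0)   # snap cursor to the world centre
bpy.context.scene.tool_settings.transform_pivot_point = 'CURSOR'
# Scaling the camera group in the viewport now pivots around the origin,
# so the floor markers stay on the floor.
```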
Auto Keying button (in blue color)
Then I began making the key poses for the character. I used ‘Auto Keying’ to automatically key any changes I made to the controllers and bones.
During this process, I referred to the actions and poses that I had chosen from the Killer Bean animation.
After I got every single pose I wanted, I locked the full-body pose by selecting all the bones and controllers and keying them with ‘I > Location & Rotation’. This prevents a pose from changing when I alter the next or previous key poses. I also changed the F-Curve interpolation to ‘Constant’ (stepped) so I could focus on the blocking poses and not be distracted by the in-between motion.
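For the record, this pose-locking and stepped-interpolation pass could also be scripted, roughly like this (assuming the rig is in Pose Mode with all bones selected, and that it uses quaternion rotation):

```python
import bpy

# Key location and rotation on every selected controller/bone.
for bone in bpy.context.selected_pose_bones:
    bone.keyframe_insert('location')
    bone.keyframe_insert('rotation_quaternion')  # assumption: quaternion rig

# Constant (stepped) interpolation for clean blocking.
action = bpy.context.object.animation_data.action
for fcurve in action.fcurves:
    for key in fcurve.keyframe_points:
        key.interpolation = 'CONSTANT'
```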
Constant interpolation
I continued making all the key poses, and below are some of them.
I exaggerated some poses to make the action more readable during fast movement.
And of course, the exaggeration isn’t pretty from a different angle.
During the blocking process, I wasn’t really focusing on the timing yet, so it may be a bit off. I will adjust the timing more during the splining phase. Below is the first blocking pass of Shot11.
Shot11 blocking
Next, I proceeded with Shot10, then Shot09, Shot08 and Shot07. It seems my animation flow went from the back to the front, but I think that’s okay, since the last few shots contain most of the elements I want to highlight in my showreel. In the worst-case scenario, if I didn’t manage to finish all the shots, I would still have the last few shots I wanted ready before the deadline.
For all 4 of those shots, the animation process was almost the same as for Shot11, and below are the blocking videos:
This week I worked on the camera tracking for the shots that were confirmed last week. I did the tracking in Blender using the knowledge I gained in the week of 26 Apr – 2 May.
At the same time, I volunteered to do the camera tracking tasks for the collaborative indie film project to help the team, since they were struggling to track the blurry footage. The timing was quite right, as my mind was all about camera tracking this week, even though the two projects use 2 different tracking packages: Blender and 3DEqualizer.
I had 12 shots to track for the personal project and 8 shots for the indie film project. So I called this week ‘CAMERA TRACKING MAYHEM WEEK!’
I managed to track several shots of that project and am currently trying to solve the remaining ones. I can feel that I’ve started to grow an interest in camera tracking, which is something I didn’t expect, as I didn’t really like it before and even considered it my weakest skill.
For the tracking process, I began with the shot I thought was the easiest, to warm up, and then jumped to Shot11, which I considered the hardest. And it turned out to be true: I spent almost a day solving a camera with shifts in position, level and perspective. I could track the first half of the video quite nicely, but the second half still has several vibrations towards the end.
Shot 11, not perfect
After numerous tries, I decided not to waste any more time and moved on to the other shots, since the character will actually be jumping in the air, not touching the ground, during the second half of Shot11 anyway. I may try to fix the camera again when animating the shot.
Below is the footage I managed to track this week. Some of the shots were quite straightforward, and I solved them within the first or second try.
Shot 05
Shot 10
Shot 08
Shot 06
Shot 09
Shot 07
After using both packages across the personal and indie film projects, I can say that tracking in Blender is quite easy compared to 3DEqualizer, since Blender is also a full-fledged 3D package, so I can quickly jump into the 3D layout and add, modify and animate objects on the fly.
Blender’s tracker is quite limited in functions and settings, but it can still track quite nicely. 3DEqualizer, on the other hand, feels a bit clunky with its rather odd interface, and it can only ‘undo’ certain functions. Still, 3DEqualizer is very powerful, with more robust parameters and settings for accomplishing high-quality tracking, which is what makes it one of the industry standards for tracking and matchmove.
During the weekend, I quickly re-watched Dom’s tutorials to recall some of the important technical aspects. I also watched several other tutorials on YouTube to learn more about the software.
This week, I was also doing camera tracking for all the footage of my personal project. So the timing was quite right, as my mind would be all about camera tracking this week, even though the two projects use 2 different tracking packages: 3DEqualizer and Blender.
I had 8 shots to track for the indie film project and 12 for the personal project. So I called this week ‘CAMERA TRACKING MAYHEM WEEK!’
https://youtu.be/vI9_s8oRB_w The above tutorial won’t let me embed the video here, but it helped me a lot in understanding how to import tracking data into Maya, and the use of the camera group and scaling.
Above was one of the earliest 3DEqualizer tutorials I watched on YouTube. It shows how to deal with blurry footage and rolling shutter, which is helpful since the footage I’m going to work on has the same problems.
I watched the above tutorial just to learn how to import 3DEqualizer tracking into Blender, since I use Blender for my personal project; if Blender fails to track my footage, I will have a backup. After watching it, I learned that we cannot simply import a 3DE track into Blender: we have to run some scripts for it to work properly. I haven’t tried this yet, but it’s something I’ve taken note of.
This one was quite a simple tutorial, but it teaches how to use lineup controls to constrain markers to a plane to align the camera and track.
I watched many other 3DEqualizer tutorials and bookmarked them for future reference.
Tracking with 3DEqualizer
I then proceeded with the tracking process and managed to track 6 of the 8 shots this week. It took me about 5 days to get through them all while also doing the camera tracking for my personal project’s footage.
Along the way, I experienced many challenges: bad tracks, bad camera alignment and software crashes. At the same time, I learned a lot and discovered several tricks that helped me solve the camera movements. My final tracking is not 100% perfect, as there are still several small vibrations in certain frames.
It would take too long to blog everything for each shot here, so I’ll just summarise the shots that I think needed unique solving methods beyond the normal, standard ones.
The shots I was able to track: CC_0100, CC_0200, CC_0500, CP_0100, CP_0200_0300 & CP_0400
The remaining two shots: CC_0300 & CC_0400.
CC_0300 & CC_0400 were the hardest for me to track, with their camera shakes, rotation and motion blur. I really struggled to get a good track on them; at this point it almost felt like those 2 shots refused to be defeated. I’m a bit disappointed, but at the same time happy that I was able to solve 6 of the shots. I might return to those 2 in another week.
CC_0300; the camera alignment was okay in the first half of the shot… and then it went wrong towards the end
Among the shots I was able to track, the hardest were CP_0200_0300 and CC_0100. Which is really, really weird, because in Blender CC_0100 was the easiest and fastest to solve: Blender quickly recognised the camera zoom with its changing focal length without any problem. It took me about a day to solve this shot in 3DE, compared to approximately 30 minutes in Blender.
I was able to track this shot pretty quickly in Blender, but in 3DE it was one of the hardest; I was pulling my hair out doing it
I tried everything, from placing markers near each corner of the frame to make 3DEqualizer recognise the zoom, to changing the camera constraint and enabling dynamic focal length. None of it gave satisfactory results. It could track the camera movement, but not the zoom; it thinks the camera is moving forward instead of zooming, even after I set the positional camera constraint to fixed.
So, as a special method for shot CC_0100, I finally decided to animate the focal length (zoom) value manually. To get a correct estimate of the focal length, I put a 3D plane in the shot as a measurement object. I aligned the plane with the road at the beginning of the shot and keyframed the camera’s focal length right before the zoom started.
Before the zoom. I didn’t follow the focal length that Luke provided, as it made the plane’s perspective mismatch the road
Zooming with blur
After the zoom. Estimating the focal length value by re-matching the 3D plane with the road
I then went frame by frame to find where the camera stops zooming, and carefully changed the focal length value until the 3D plane’s shape and perspective matched the road again. In full suspense, I clicked the ‘Calc All From Scratch’ button and… it worked! It solved the camera movements perfectly! I can’t describe how happy I was at that moment.
The mountain of deviations!
The final solve error isn’t pretty, because 3DE thinks the markers deviated a lot, but since the movements were correct, I don’t think I can rely on the number.
CP_0200_0300
The same can be said for CP_0200_0300: the deviation was very high, but this was the best I could get, and it wouldn’t go any lower. Since the camera movement was correct, I ignored the number. The camera rotation made this shot quite challenging to track; I couldn’t solve it in the first few tries, but changing the ‘Positional Camera Constraint’ to ‘Fixed Camera Position Constraint’ helped with the tracking.
Below is the rest of the footage that I managed to track without much problem.
CP_0100. The easiest and quite perfect
CC_0500
CP_0400
CC_0200
I didn’t have much problem with CC_0200, but I couldn’t get the solve error any lower without the camera movement breaking when I deleted markers with high deviation values. There was a lot of manual tracking and repositioning of markers frame by frame by hand, and I was very careful to predict their locations when the shot became blurry.
Videos
I exported the tracking data for 3 random shots that I had managed to track into Maya first, just to test how they look with the car model.
CC_0200
CC_0500
CP_0100
I uploaded those 3 videos to our Discord server first. I didn’t have time yet to finish the road geo and the realigning and rescaling of all 6 shots in Maya, as I was also finishing the camera tracking for my personal project’s footage at the same time. I will update the team again once I finish the Maya files.
To be honest, my mind was very tired this week after all the tracking work on both the indie film and personal projects. Nevertheless, I’m very happy that I managed to track 6 out of the 8 shots for this project.
Early this week, I tried to track several shots from the indie film project as personal practice, since I wasn’t signed up as a matchmover on this project. Last week, I noticed that the videos have a lot of motion blur, which Emma and Marianna were finding difficult to solve in 3DEqualizer. So I wanted to see if I could do it in Blender, using the knowledge I picked up several weeks ago. After looking at all the videos again, I decided to try CC_0100 and CC_0200 first.
CC_0100
CC_0100 has a fast camera zoom at the beginning, which causes very heavy blur. The first method I tried was ‘Detect Features’, which automatically places markers on areas Blender thinks it can track for the whole duration of the video. Unfortunately, it failed to recognise the ground area, probably because of the blur during the zoom. I ran the ‘solve’ function anyway, just to see if it could solve the camera movement with the automatic markers, but the result was very bad.
Blender failed to automatically detect the ground
I had no other option but to place markers manually and probably track them frame by frame. I analysed the video again and noticed that the image was sharp towards the end of the footage, so I decided to track backwards. There were actually many contrast areas where I could place markers, but most of them were small and completely disappeared during the motion blur.
Skimming through the video, the only contrast areas that kept good visibility during the motion blur were the road lines. So I placed several markers at the sharp corners of the road lines.
I then tracked all the markers backwards frame by frame, carefully, stopping whenever a marker drifted from its original place because of the blurry frames, then manually repositioned it by guessing the correct position. The road lines made it quite easy to predict the locations even in the blur, because I could still see the lines’ corners.
The road lines became overlapped because of the motion blur, but we can still see the lines’ corners and predict the locations for manual marker placement.
After several tries, correcting positions and removing bad markers, I managed to solve the camera movement with a solve error of 0.89. It’s not great, but that was the lowest I could get before the camera alignment broke if I cleaned up any further.
Final camera solve
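For reference, the detect/track/solve workflow I described maps onto these Blender operators. This is only an outline, since the clip operators need the Movie Clip Editor as the active context, and the clip itself has to be loaded there first:

```python
import bpy

# Run from the Movie Clip Editor with the footage loaded.
bpy.ops.clip.detect_features()                             # auto markers (failed on the blurry ground here)
bpy.ops.clip.track_markers(backwards=True, sequence=True)  # track back from the sharp end frames
bpy.ops.clip.solve_camera()                                # then check the solve error
```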
Video
CC_0200
Next, I proceeded with CC_0200. This footage has a lot of camera movement, in both rotation and position. The same problem occurred with the auto-detected features on this shot, so I used the same tricks as on CC_0100 to manually place and track the markers.
This shot was many times harder to track than CC_0100.
I placed some markers on the road line corners like I did on CC_0100.
So much blur
I carefully predicted and estimated the locations of markers on blurry frames.
Final camera solve
It took me about 5 hours, but I finally managed to solve the camera with accurate movement, albeit with a very high solve error of 15.30 px. I could get the value below 1.0 by deleting high-deviation markers, but then the camera movement would no longer align properly. I guess I can’t rely on the numbers alone; probably because of the heavy shake of the original camera, Blender thinks the markers have bad deviation.
Video
I then sent both videos to our Discord server during today’s class (26th May). Luke said the tracks were good and asked if I’d like to pick other shots and help the team with the camera tracking.
I wasn’t really sure, because the tracking I had done was in Blender, not 3DEqualizer, and I don’t remember much about 3DEqualizer since I only used it a few times during Term 1. I may need to re-watch Dom’s tutorials to recall the important technical aspects of the software. Still, I think this is a good opportunity to practise the software again, as 3DEqualizer is one of the industry standards for matchmove and tracking.
So I told Luke that I would try to track the same footage the other matchmovers were currently working on, as a backup, and see if I could get good results.
Early this week, I managed to re-shoot the video plates to replace the rejected version recorded last week. As last time, I got up at 5 o’clock in the morning to avoid the crowds coming out to go to work and school.
This time, I chose another place near my flat that catches the early morning light and isn’t covered by the buildings’ shadows. This avoids the problem with the last videos, where I had to increase the camera brightness, which overexposed the already bright areas. A properly lit location also lets me keep the ISO as low as possible to prevent grainy video.
I shot the videos carefully. The location is actually one of the main paths people take to walk to work, as there are several offices around, and there is also a gym that opens early in the morning; during the shoot, several people were already arriving to exercise. I couldn’t spend too much time shooting as more and more people started passing by, so I shot as many variations as possible whenever there were no, or few, people around.
After the shoot, I did a quick edit on my computer to check the scenes. This time I’m very satisfied with the result: good camera angles and movement, and consistent brightness throughout the videos.
I cut each shot to its intended duration and exported it with an extra 10 to 15 frames as safety frames for the final cut later. I will use this version of the background plates for the camera tracking process in Blender next.
Today (19th May), we finally received the actual video footage. Luke made a briefing video and a spreadsheet about our tasks for each shot.
Our team members and tasks
There are a total of 8 videos, divided into 2 categories: CC (car crash) and CP (car pass). The CC shots will need additional VFX work for the car crash simulation and effects.
The flow for each shot will basically be: ‘Matchmove > Animation > VFX > Lighting & Rendering > Compositing‘. For the animation tasks, the other animators (Layne & Tina) and I will need to wait for the footage to be tracked by the matchmovers (Emma, Marianna & Antoni) first, so that we can animate the car according to the camera angle and ground surface of each scene. We could actually start animating the car while the footage is being tracked, but the animation might need to be changed again once we receive the camera track, to match the path and direction of the road in the shots.
I checked the footage in the collaboration folder and noticed that all of the videos are blurry or have motion blur when the camera moves. I’m not an expert in camera tracking, but I was a bit concerned that the videos could be very challenging to track.
About 2 days later, my concern proved true, as Emma and Marianna spent several days discussing the trouble they were having with the blurry videos.
Although I’m not a matchmover on this project, I was thinking of trying to track some of the shots in Blender to see if I could solve them. It would also be good practice for me, since I’m doing camera tracking in my personal project too.
This week, I didn’t spend much time working on my personal project, since I was a bit busy with the collaborative indie film project.
I had actually planned to finalise (and should have finalised) the scenes and actions for the storyboard as soon as possible, so I could shoot the background plates (video footage) for the camera tracking process. I thought long and hard about designing a good action sequence for the story, but I still didn’t have any concrete ideas. I felt so stuck that I finally decided to follow some of the actions and shots from the old Killer Bean animation, since the motive of my project is not to design choreography but to do character animation and camera tracking.
Some of the interesting action sequences from Killer Bean
I quickly referred to the Killer Bean animation and drew new rough story sketches to finalise the actions, the number of shots, and the angles and camera movements for the video shoot. There will be 12 shots, with a target duration of 30 seconds maximum for the whole animation.
I woke up as early as 5 am to shoot the background plate videos, to avoid the crowds coming out in the morning to go to work and school. I shot the videos using just my iPhone, since I don’t have any better camera equipment. The angles were based on the story sketches I did the night before. I made several variations and alternatives for each shot as backups.
Right after the shoot, I went to my computer and made a quick edit to see the result and how the shots came together. Honestly, I wasn’t satisfied with the video. The location wasn’t very good: sitting between two buildings, the area was in shadow, which forced me to increase the camera brightness during shooting and made the already bright areas overexposed.
Rejected background plates
I planned to do a re-shoot on another day, probably during the weekend or early next week.