I’ve learned many good things throughout the Advanced and Experimental Unit over these two terms: from learning new software, tools and techniques, to becoming more involved with friends and the team, to finally discovering what I really want to pursue. I discussed my thinking on a future path, which was still unclear at that time, in my old entry (https://kamil.myblog.arts.ac.uk/2021/02/01/self-reflection-and-future-path/).
By the end of this term, it has become clearer that I want to focus my career on animation, especially character animation. I also discovered that I am somehow able to perform camera tracking quite nicely, although not at an expert level. I managed to track the footage from my personal project as well as 6 out of 8 challenging shots from the Indie Film project, which I originally signed up for as an animator only, not as a matchmover. Before this, I always assumed that I was very weak at tracking and didn’t have much interest in it. But thanks to Luke, who encouraged me to keep the idea of doing camera tracking in my personal project, I was led to try doing the same with the Indie Film project.
Things are completely different now: I have even proposed including camera tracking as part of my final major project (FMP), in combination with 3D character animation.
During this term, I was more active and engaged in conversations with others compared to the previous terms. I also tried my best to assist friends who asked for help in the class and group Discord servers when they had technical and software problems. Probably because of this, several classmates started messaging me personally when they needed help. I believe that by helping others, I can also learn something new while trying to solve their problems.
Speaking of learning new things, in terms of techniques I’m now more knowledgeable about the tracking tools in Blender and 3DEqualizer, discovered a few tricks for solving a challenging camera track, and learned and now understand more tools in the several 3D software packages I used this term. I also learned how to use Houdini and was quite amazed by the software’s capability, although I don’t plan to use it as my primary animation tool at the moment.
With everything I have learned, now and in the future, I hope it is not only for my own benefit but something I keep developing further to contribute to and benefit other people as well. This is something I want to achieve with my future works and thesis. To me, that is the most important thing.
During the weekend, I quickly re-watched Dom’s tutorials to recall some of the important technical aspects. I also watched several other tutorials on YouTube to learn more about the software.
This week, I was also doing camera tracking for all the footage of my personal project, so the timing was quite right: my mind would be on nothing but camera tracking this week, even though the two projects use different tracking software, 3DEqualizer and Blender.
I have 8 shots to track for the indie film project and 12 shots for my personal project. So I called this week ‘CAMERA TRACKING MAYHEM WEEK!’
https://youtu.be/vI9_s8oRB_w The above tutorial won’t let me embed the video here, but it helped me a lot to understand more about importing tracking data into Maya, the use of camera groups, and scaling.
Above was one of the earliest tutorials I watched for 3DEqualizer on YouTube. This one shows how to deal with blurry footage and rolling shutter, which can be helpful as the footage I’m going to work on has the same problems.
I watched the above tutorial just to learn how to import 3DEqualizer tracking into Blender, since I use Blender for my personal project. In case Blender failed to track my footage, I would have a backup software for it. After watching this, I learned that we cannot simply import 3DE tracking into Blender; we have to use and run some scripts for it to work properly. I didn’t try this, but it is something I have taken note of.
This one was quite a simple tutorial, but it teaches how to use the lineup controls to constrain markers to a plane to align the camera and track.
I watched many other 3DEqualizer tutorials and bookmarked them for future reference.
Tracking with 3DEqualizer
I then proceeded with the tracking process and managed to track 6 out of the 8 shots this week. It took me about 5 days to track them all while also doing camera tracking on my personal project’s footage.
Along the way, I experienced many challenges: bad tracking, bad camera alignment, and software crashes. At the same time, I also learned a lot and discovered several tricks that helped me solve the camera movements. My final tracking is not 100% perfect, as there are several small vibrations in certain frames.
It would probably take too long to blog everything about each shot here, so I’ll just summarise the shots that I think needed unique solving methods beyond the normal, standard ones.
The shots that I was able to track: CC_0100, CC_0200, CC_0500, CP_0100, CP_0200_0300 & CP_0400
The remaining two shots: CC_0300 & CC_0400.
CC_0300 & CC_0400 were the hardest for me to track, with their camera shakes, rotation, and motion blur. I really struggled to get a good track on them. At this point it almost felt like those two shots refused to be defeated by me. I’m a bit disappointed, but at the same time happy that I was able to solve 6 of the shots. I might return to those two in another week.
Among the shots I was able to track, the hardest were CP_0200_0300 and CC_0100. Which is really, really weird, because in Blender CC_0100 was the easiest and fastest to solve. Blender quickly recognised the camera’s zooming motion with its changing focal length without a problem. It took me about a day to solve this shot in 3DE compared to approximately 30 minutes in Blender.
I tried everything: from placing markers near each corner of the shot to make 3DEqualizer recognise the zoom, to changing the camera constraint and enabling dynamic focal length. None of that gave satisfactory results. It could track the camera movement, but not the zoom. It thinks the camera is moving forward instead of zooming, even after I set the positional camera constraint to fixed.
So, as a special method for shot CC_0100, I finally decided to animate the focal length (zoom) changes manually. To get a correct estimate of the focal length, I put a 3D plane in the shot as a measurement object. I aligned the plane with the road at the beginning of the shot and keyframed the camera’s focal length value right before the zoom started.
I then went frame by frame to find where the camera stopped zooming, and carefully changed the focal length value until the 3D plane’s shape and perspective matched the road again. With full suspense, I clicked the ‘Calc All From Scratch’ button and… it worked!!! It solved the camera movement perfectly! I can’t describe how happy I was at that moment.
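What I did by eye with the measurement plane can also be reasoned about numerically. Under a simple pinhole-camera assumption, a pure zoom (camera position fixed) scales an object’s apparent size on screen linearly with focal length, so the end-of-zoom focal length can be estimated from a size ratio. A minimal sketch, with made-up numbers rather than values from the actual shot:

```python
def estimate_zoomed_focal_length(f_before, size_before, size_after):
    """For a pure zoom (camera position fixed), an object's apparent
    size on the image plane scales linearly with focal length, so the
    new focal length follows from the on-screen size ratio."""
    return f_before * (size_after / size_before)

# e.g. if the road's on-screen width grows from 120 px to 300 px
# during the zoom and the start focal length was 24 mm:
f_after = estimate_zoomed_focal_length(24.0, 120.0, 300.0)
print(f_after)  # 60.0
```

In practice I still verified the value visually against the plane, since lens distortion and imperfect alignment make a ratio like this only a starting estimate.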
The final solve error isn’t pretty, because 3DE thinks the markers deviated so much; but since the movement was correct, I don’t think I can rely on that number.
The same can be said for CP_0200_0300. The deviation was very high, but this was the best I could get and it would not go any lower. And since the camera movement was correct, I ignored the number. The camera movement was quite challenging to track because of the camera rotation. In the first few tries I couldn’t solve it, but then changing the ‘Positional Camera Constraint’ to ‘Fixed Camera Position Constraint’ helped with the tracking.
Below are the rest of the shots, which I managed to track without much problem.
I didn’t have much of a problem with CC_0200, but I couldn’t get a lower solve error without the camera movement breaking when I deleted markers with high deviation values. There was a lot of manual tracking and repositioning of markers frame by frame by hand. I was very careful to predict their locations when the shot became blurry.
Videos
I exported the tracking data for 3 random shots that I managed to track into Maya first, just to test how they look with the car model.
I uploaded those 3 videos to our Discord server first. I didn’t have time yet to finish the road geo or to realign and rescale all 6 shots in Maya, as I was also finishing the camera tracking for my personal project’s footage at the same time. I will update the team again after I finish the Maya files.
To be honest, my mind was very tired this week after doing all the tracking work on both the indie film and my personal project. Nevertheless, I’m very happy that I managed to track 6 out of 8 shots from this project.
Early this week, I tried to track several shots from the indie film project as personal practice, since I hadn’t signed up as a matchmover on this project. Last week, I noticed that the videos have a lot of motion blur, which made it difficult for Emma and Marianna to solve the tracking in 3DEqualizer. So I wanted to try and see whether I could do it in Blender using the knowledge I gained several weeks ago. After looking at all the videos again, I decided to try CC_0100 and CC_0200 first.
CC_0100
CC_0100 has a fast camera zoom at the beginning, which causes very heavy blur. The first method I tried was to use ‘Detect Features’ to automatically place markers in areas that Blender thinks it can track for the whole duration of the video. Unfortunately, it failed to recognise the ground area, probably because of the blur during the zoom. I proceeded with the ‘Solve’ function just to see if it could actually solve the camera movement with the automatic markers, but the result was very bad.
I had no other option but to place markers manually and probably track them frame by frame. I analysed the video again and noticed that the image was sharp towards the end of the footage, so I decided to track backwards. There were actually many contrast areas where I could place markers, but most of them were small and completely disappeared during motion blur.
When skimming through the video, the only contrast areas that still had good visibility during motion blur were the road lines. So I placed several markers at the sharp corners of the road lines.
I then tracked all the markers backwards frame by frame carefully, and stopped whenever a marker drifted from its original place because of the blurry frames. I manually repositioned the markers by guessing their correct positions. The road lines made it quite easy to predict the locations even when blurred, because I could still see the lines’ corners.
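The ‘guessing’ I did on blurry frames was essentially a constant-velocity extrapolation from the previous two readable frames, corrected by eye against the road-line corners. A tiny sketch of that idea (plain Python, not Blender’s actual tracker; the coordinates are hypothetical):

```python
def predict_marker(p_prev2, p_prev1):
    """Constant-velocity guess: assume the marker keeps moving with the
    same per-frame displacement it had between the last two readable
    frames, then refine the guess by eye on the blurry frame."""
    vx = p_prev1[0] - p_prev2[0]
    vy = p_prev1[1] - p_prev2[1]
    return (p_prev1[0] + vx, p_prev1[1] + vy)

# a marker moved from (100, 50) to (104, 53), so on the next
# (blurry) frame I would place it near:
print(predict_marker((100, 50), (104, 53)))  # (108, 56)
```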
After several tries, correcting positions and removing bad markers, I managed to solve the camera movement with a solve error of 0.89. It’s not great, but that was the lowest I could get before the camera alignment broke if I did any more clean-up.
Video
CC_0200
I then proceeded with the next video, CC_0200. This footage has a lot of camera movement, in both rotation and position. The same problems occurred with this shot when using the auto-detect features, so I used the same tricks as with CC_0100 to manually place and track the markers.
This shot was many times harder to track than CC_0100.
I spent about 5 hours on this shot and finally managed to solve the camera with accurate camera movement, but with a very high solve error of 15.30 px. I could get a value lower than 1.0 by deleting high-deviation markers, but then the camera movement would no longer align properly. I guess I cannot rely on the numbers alone; probably because of the heavy vibration of the original camera, Blender thinks the markers have bad deviation.
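As I understand it, the solve error is roughly a root-mean-square of the markers’ reprojection deviations in pixels, which would explain why a few shake-affected markers can inflate the number even when the solved motion looks correct. A small sketch of that metric (my assumption about how it works, not Blender’s exact implementation; the deviation values are made up):

```python
import math

def rms_solve_error(deviations_px):
    """Root-mean-square of per-marker reprojection deviations (pixels).
    Because deviations are squared, a few wildly deviating markers
    dominate the result even if most markers track well."""
    return math.sqrt(sum(d * d for d in deviations_px) / len(deviations_px))

steady = [0.5, 0.8, 0.6, 0.7]          # well-behaved markers
shaky  = [0.5, 0.8, 0.6, 0.7, 40.0]    # one marker thrown off by shake
print(round(rms_solve_error(steady), 2))  # a sub-pixel error
print(round(rms_solve_error(shaky), 2))   # blown up by the one outlier
```

This is why deleting the high-deviation markers lowers the number but can also throw away the constraints that kept the solve aligned.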
Video
I then sent both videos to our Discord server during today’s class (26th May). Luke said the tracks were good and asked if I’d like to pick other shots and help the team with the camera tracking.
I wasn’t really sure, because the tracking I did before was done in Blender, not 3DEqualizer. I don’t remember much about 3DEqualizer, since I only used it a few times during Term 1. I may need to re-watch Dom’s tutorials to recall the important technical aspects of the software. I see this as a good opportunity to practise the software again, as 3DEqualizer is one of the industry standards for matchmove and tracking.
So I told Luke that I would try to track the same footage that the other matchmovers are working on at the moment, as a backup, and see if I can get good results.
Today (19th May), we finally received the actual video footage. Luke made a briefing video and a spreadsheet about our tasks for each shot.
There are a total of 8 videos, divided into 2 categories: CC (car crash) and CP (car pass). The CC shots will need additional VFX work for the car crash simulation and effects.
The flow for each shot will basically be: ‘Matchmove > Animation > VFX > Lighting & Rendering > Compositing‘. For the animation tasks, the other animators (Layne & Tina) and I need to wait for the footage to be tracked by the matchmovers (Emma, Marianna & Antoni) first, so that we can animate the car according to the camera angle and the ground surface of the scene. We could actually start animating the car while the footage is being tracked, but the animations might need to be changed again when we receive the camera track, to match the path and direction of the road in the shots.
I checked the footage in the collaboration folder and noticed that all of the videos were blurry or had motion blur when the camera moved. I’m not an expert in camera tracking, but I was a bit concerned that the videos could be very challenging to track.
About 2 days later, my concern proved true, as Emma and Marianna spent several days discussing the trouble they were having with the blurry videos.
Although I’m not a matchmover on this project, I was thinking of trying to track some of the shots in Blender to see whether I could solve them. This would also be good practice for me, since I’m doing camera tracking in my personal project too.
This week, the actual footage was still unavailable and might still be being filmed. Everyone on the team has been doing practice and tests since last week in preparation for the actual work later.
The Ftrack page that Luke created was being populated with everyone’s test videos from all departments. I saw some of them and was very impressed with their work. Watching other people’s work always inspires me to keep improving myself.
I also uploaded the car animation test I did last week to Ftrack, and below is Luke’s feedback on my animation.
Last week, Luke created a new collaborative folder for the indie film project on his shared OneDrive for us to share and upload our working files. He also opened a new review page on Ftrack for anyone wanting to share videos for feedback.
Since we were still waiting for the actual footage to work on, Luke provided several materials for all departments to practise with first. For the animation team, Luke made a tutorial video on how to animate a car with realistic motion and rumble. We can practise using the temporary car model from the collaborative folder.
I downloaded the car and road models, including their texture files. The car was rigged very nicely with the necessary controllers, complete with automatic tyre rotation when the car moves. I set up a new project folder and put everything in the respective directories.
Even though this was just a practice, I used the reference system when importing both models into a new, empty Maya file, which would become the animation file. The animation is done on a model linked to the external file, so any update to the original car model is applied back to the animation file automatically.
I repositioned and placed the car properly in a suitable location on the road. I then created a ‘camera and aim‘ and parented it to the car’s root controller so the camera would move together with the car.
I used animation layers as suggested by Luke in his tutorial, creating 2 additional layers. The BaseAnimation layer is for the main car movement, while carbody_02 and carbody_03 are for the car body’s animations.
I started with the rumble animation first, on the carbody_02 layer. I made a short 60-frame sequence with a slight bumping motion, then looped the sequence in the Graph Editor using Curves > Post Infinity > Cycle with Offset.
I then made another animation on the carbody_03 layer, a much longer sequence with a subtler, smoother wobble. The animations in both layers blend together to create a more organic motion that doesn’t look too repetitive.
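The looping behaviour of Cycle with Offset can be sketched as a bit of maths: the keyed curve repeats, and each repeat is shifted by the curve’s net change over one cycle, so the motion carries on from where the last cycle ended instead of snapping back to the start. A rough Python illustration (it only samples whole frames and uses made-up key values, whereas Maya interpolates between keys):

```python
def cycle_with_offset(keys, period, t):
    """'Cycle with Offset'-style extrapolation: repeat a keyed curve,
    shifting each repeat by the net change over one cycle.
    `keys` maps frame -> value over one cycle of length `period`."""
    cycles, local_t = divmod(t, period)
    offset = (keys[period] - keys[0]) * cycles   # net drift per completed cycle
    return keys[local_t] + offset

# a 4-frame cycle that rises by 1.0 overall, sampled past its end
keys = {0: 0.0, 1: 0.4, 2: 0.7, 3: 0.9, 4: 1.0}
print(cycle_with_offset(keys, 4, 6))  # frame 6 = one full cycle + local frame 2
```

For the rumble layer the net change per cycle is zero, so Cycle with Offset behaves like a plain cycle; for the car’s travel along the road, the offset is what keeps it moving forward.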
I tweaked both animations several times by adjusting the curves and keyframes in the Graph Editor until I was satisfied with the result. The objective was to make the rumble look very subtle but still noticeable and realistic.
After the rumble motion was finished, I animated the car’s root on the BaseAnimation layer to make the car move down the road. I only animated a small portion of the movement and looped it again using the same function as before to create continuous motion.
For the last part, I animated the camera using its target (aim). I manually added a subtle shake motion to the camera’s aim to make the camera look like it is being held by hand.
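A common way to fake this kind of handheld jitter procedurally, instead of keying it by hand as I did, is to sum a few sine waves at unrelated frequencies so the combined motion never visibly repeats. A hedged sketch of that alternative (the frequencies, phase and amplitude are made-up illustrative values, not what I used):

```python
import math

def handheld_shake(t, amplitude=0.02, freqs=(1.3, 2.9, 6.1)):
    """Procedural handheld jitter for a camera aim target: average a few
    sine waves at unrelated frequencies. Returns an (x, y) offset to add
    to the aim position at time t (seconds); the offset is zero-centred
    and bounded by `amplitude`."""
    x = sum(math.sin(2 * math.pi * f * t) for f in freqs)
    y = sum(math.sin(2 * math.pi * f * t + 1.7) for f in freqs)
    return (amplitude * x / len(freqs), amplitude * y / len(freqs))

dx, dy = handheld_shake(0.25)
print(abs(dx) <= 0.02 and abs(dy) <= 0.02)  # True
```

Hand-keyed shake arguably still looks more natural, since a real operator reacts to the action, but the procedural version is handy when a long shot needs continuous wobble.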
During this week’s class (21st April), Luke gave a brief overview of the tasks each department will need to complete in the coming weeks. I took the animator role, so my task will be animating a car, presumably making it look realistic, since this 3D car will be composited into real footage. Luke said the videos are still being filmed at the moment, so we will need to wait a little until we receive them.
A few days later, Abbie set up a new Discord server for the project, and everyone involved quickly joined. There are several team members from the VFX course as well, and I’m glad to see Antoni & Ghera join this project. I teamed up with them during the Term 2 collaborative project, and they are very good and cooperative.
After thinking for several days and watching quite a lot of short animations since last week’s post, I decided to do 3D character animation in real video footage for my personal project. I really love character rigging and animation, and I’m quite weak at camera tracking. So with this project, I can practise character animation while also improving my camera tracking skills. For the collaborative project, I decided to join the indie film project as an animator. I then emailed my project proposal to Luke.
I then booked a session with Luke on 6th May for further advice and also to show him my initial progress on the personal project.
I’m now in Term 3. In this week’s class, Luke briefed us on what we have to do throughout the term. There should be a minimum of 2 projects: another collaborative project and a personal project. The collaborative one should be a teamwork project with students from other departments, while the personal project is solely our own idea, something we are interested in exploring that showcases our preferred style and aims at the skillset for the jobs we are looking for. Luke mentioned that there is a collaborative indie film project for anyone interested in getting involved as an animator, matchmover, VFX or render artist.
We will do a lot of self-directed learning and research for the projects, but at the same time we can book sessions with several mentors for their advice and guidance. During this term, we will also need to plan our thesis and final major project and pitch them in week 9.
Our task for this week is to think about what projects we want to do this term and why, then send an email proposal to Luke before next week’s class.
Personally, I’m not really sure yet what I want to do for my personal project, but I really like character animation. For the collaborative project, I will probably get involved in the indie film project. So I have until Monday to think about this. I’m going to watch short animations on YouTube to get some ideas.
First of all, I want to thank everyone on the team: Emma, Antoni, Ghera, Cally, and Ben. They have done a phenomenal job on this project, and I feel very fortunate to have been able to work with people who are so talented in their respective fields.
When we started this collaborative project, we were strangers to each other. Even Emma, who is on the same course as me, I had never talked to or met before. But throughout this project, everyone got to know and got along with each other to complete the work. Everyone was very open to suggestions and easy to collaborate with.
Group Leader: Emma Copeland
Animators: Emma Copeland, Kamil Fauzi
VFX: Gherardo Varani, Antoni Coll
Sound Design: Cally Spence, Ben Harrison
During our initial meeting, when we had our storyboard ready, we set some targets for what we should at least achieve by the end of the project. Based on the storyboard, there are 3 acts (Act 1, Act 2 and Act 3) with 35 shots in the story. Technically, the project is quite heavy, so we targeted completing everything for Act 1 first, in terms of modelling, animation, lighting, visual effects, sound and music. If we were able to complete Act 1 quickly and had more time, we would proceed with the remaining 2 acts. We acknowledged that everyone on the team was also busy with their respective course assignments.
At the end of the project, we managed to complete everything we targeted for Act 1, while the sound department managed to produce complete sound and music for all 3 acts. It’s a bit sad that we were not able to proceed with the remaining acts, but I think everyone was proud of what we accomplished. The end product might not be brilliant, but the collaboration and commitment shown by each team member were amazing.
If I can describe everyone on the team from my perspective: Emma is the group leader, but she was very open to any suggestion. She always tries to finish her work quickly and is very quick to learn new things. As you can see in my previous posts, she was active in our group’s conversations, giving responses and feedback. She is also very good at explaining things, and I wish I could be like her.
Antoni is a very hardworking person. He did a lot of research and experiments on his own to get all the visual effects and lighting needed for each shot. He was also active in posting his work-in-progress updates in our Discord channel. Ghera is the same as Antoni, and he was even willing to do the animation for the first shot at the beginning of the story. The visual look they set up for all the shots was very good. I personally love the atmosphere in the final render.
Ben is a very creative composer, and he produced 3 amazing tracks that fit extremely well with the story. I especially like the track he did for Act 2. Cally, on the other hand, is a visionary foley artist with a unique vision for the sound effects. When I received the audio effects for editing, some of the sounds felt quite weird to me at first, and I couldn’t imagine how they should be implemented. But when Cally did the audio mix on my video edit, the effects matched the scenes very well.
With all that said, I also want to thank our supervisors, Luke, Christos and Ingrid, who have been very helpful in providing guidance and assistance throughout the course of this project.
What I Learned in Technical Terms
I have learned several new techniques and tools in Maya. I can still be considered new to the software, and there are a lot of things I need to learn and understand.
At the beginning of the project, I learned how to properly set up and use the referencing system in Maya. I had worked with reference or proxy pipelines in 3ds Max and Blender before, so I understand how important and useful this method is. When we started building assets in our collaborative folder, I quickly began testing and experimenting with the feature to see how it works in Maya. I watched several video tutorials while exploring this method, in case I had missed anything. I found that the reference system in Maya is quite easy to understand and powerful to use in a project. Despite that, there are small things that irritate me about how Maya’s project, working folder and referencing systems work compared to other software. As far as I discovered during this project, the feature does not work as seamlessly as in 3ds Max, which always finds files we throw inside the working folder without the need to ‘repath’. But maybe there are still some things I don’t completely understand about how to use the feature in Maya. I will explore this more during the Easter break.
During this term, I also learned more about rigging in Maya. In this project, I rigged the space buggy so I could animate the vehicle with moving wheels and suspension for one of the scenes. I also did a very simple rig for the spaceship model so that all 3 of the spaceship’s doors could be animated.
Where I really explored rigging in Maya more deeply was when Crystal asked for my help to rig one of her group’s characters for their game demo project. I’m not fully involved in her group; I only helped her with the rig. I have done various character rigs in other software before, so I understand at least the very basics. But I had no experience doing a full character rig in Maya, so I did some quick research and watched several tutorial videos to understand the tools and process. I found that character rigging in Maya is not very hard to understand. In this regard, I’m very happy that I learned so much about Maya rigging, from placing bones and controllers to weight painting. The only thing was that I was quite slow while doing the rig, as I’m not yet familiar with the tools in Maya.
Throughout the process, I was actually very concerned that the rig would break when imported into a game engine. I have faced and learned from many character rig problems, since I worked in game production for almost 3 years. I knew that a lot of things must be set up properly, like the root, bone directions, and parenting, for the rig to work correctly in game engines. I know what to do and what to avoid in 3ds Max and Blender, but I felt clueless when rigging the character in Maya. The rig I did works properly in Maya, but that is no guarantee it will work in a game engine. I didn’t have much time to test it, as I also had other tasks in my own group project. Unfortunately, what I was afraid of turned out to be true when Crystal mentioned that the character’s legs broke when imported into the Unity game engine. She said they went on to do another rig themselves, as she didn’t want to interrupt me again. Even though my rig was not used in their final product, I’m grateful that I learned so much from this. I may want to revisit it during the Easter break so I can understand more about rigging in Maya and how to prevent the problem.
There are a lot of other technical aspects of Maya that I learned during this project. To list a few: I explored more about modelling, properly separating and combining meshes, grouping, modifying Arnold materials, lighting, using layers, understanding the Outliner better (something I was quite confused by last term), and creating and using the ‘Quick Select Set’ feature, which is very useful during the animation process. Most importantly, I learned how bad Maya 2020 is! So I’m going to stay with Maya 2018, as suggested by Luke.
Reflection
I have learned a lot throughout this project. For one, I’m improving one of my behaviours, which I think can be pretty bad sometimes. I’m a bit too much of a perfectionist; I’m always concerned that everything must be done precisely and organised accurately, which can delay my work. As one example, you can try saving any image or video in this blog and see that I even renamed everything properly, which nobody will actually care about or see. In the past, my obsession was much worse. I would measure every 3D object and align vertices precisely so there would be no random decimals in the coordinates when modelling, even when that kind of precision was not needed.
But as I worked more and more with other people, in my previous jobs and in this collaborative project, I learned to control this behaviour and apply it only in appropriate situations. I can see that perfectionism can be useful, but being too picky can cause problems for the project. This is something I continued to improve while working on this project; I managed to accept and weigh which things are more important. For example, I didn’t become too fussy when I saw minor inconsistencies in render quality between different shots. In the past, I would have rejected them and asked for re-renders again and again. I know that everyone is learning as much as I am. I felt calmer when I managed to do that, and I’m grateful this collaborative project happened. I still have this obsessive behaviour, but it is much, much better now.
I also realised that I communicated more with my friends, both in text in our Discord channel and verbally in our weekly video meetings. They were all very nice, and there were times we became pretty chatty in the channel and made jokes. I joined the conversation whenever possible, since I really want to improve my written and spoken English. Sometimes I had difficulty when I wanted to explain something quickly. I admit that I have a lot more to learn, but this collaborative project has helped me somewhat in this regard.
I talked about my thoughts on my future path in my first post during the first week of this term, https://kamil.myblog.arts.ac.uk/2021/02/01/self-reflection-and-future-path/ . My heart was torn in two: I’m very interested in both cinematography and character rigging (and animation). To be honest, after doing this collaborative project, not only can I still not decide which to focus on, it has become even harder to choose. I did character rigging tasks and animations, and also attempted to give my animation shots cinematic elements. Although what I did in these areas in the project was probably not much, I genuinely love doing both. I will have to think more about this during the Easter break, since I have to decide and blog about it in the first week of next term. Maybe I will also need to ask Luke for his opinions and suggestions.