

I will also include some writing for context, after-the-fact thoughts, and reflections on the affordances of each technique.

First and foremost, I want to mention that I harbored a few biases going into this assignment. As someone who is, so far, working exclusively in iClone 7, I decided to document my trial and error with a few of its facial animation techniques, whether manual features or motion capture plug-ins available as part of the iClone pipeline. Working within an extensive self-teaching model, I have watched numerous tutorials on all the features mentioned below, and by and large, the LiveFace plug-in (a mocap feature that uses Apple's TrueDepth Camera) came most recommended. That was my bias upon exploring these different techniques; however, I tried to remain neutral in my critiques of each feature.

To that end, I took a different approach to my documentation methodology for this post, opting to live stream and record my reactions to each technique in real time rather than merely reflecting on the process after the fact. I had already watched tutorials on how to set up and use each feature, but the videos below demonstrate a sort of "stream of consciousness" reaction to each one.
