Difference: MocapResequence (10 vs. 11)

Revision 11, 2006-03-08 - KenRose

Parent topic: CPSC526ComputerAnimation

Animation using Motion Resequencing and Blending

 (Motion Graphs) An interesting idea. It would be interesting to compare their method for detecting candidate transitions with the Sederberg algorithm (Time Warping) that I presented on Monday. In the "Path Synthesis" section, it was not clear to me how (if at all) they handle rotating the motion, i.e., when two identical motions need to be concatenated to follow a specific path, which requires rotating the original motion. If I understood correctly, they do not handle that and will only use sub-portions of existing data in order to follow the path. In that case, wouldn't it be relevant to support orientation changes of an existing motion? -- Hagit Schechter
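To illustrate the orientation change in question: concatenating a clip along a new heading amounts to yawing its root trajectory about the vertical axis before appending it. A minimal sketch; the `rotate_and_append` helper and the `(x, z)` ground-plane representation are my own invention for illustration, not something from the paper:

```python
import math

def rotate_and_append(path, clip, theta):
    """Hypothetical helper: append a motion clip's root trajectory to an
    existing path, yawing the clip by theta about the vertical (y) axis so
    it continues along a new heading. Points are (x, z) ground-plane
    positions; the clip's positions are relative to its own start."""
    c, s = math.cos(theta), math.sin(theta)
    ox, oz = path[-1]  # continue from the path's last point
    for x, z in clip:
        # Rotate the clip-local offset, then translate to the path's end.
        path.append((ox + c * x - s * z, oz + s * x + c * z))
    return path
```

For example, a straight-ahead clip appended with a quarter-turn yaw continues the path at a right angle to the clip's original direction.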
This paper presents a reasonably cool idea and explains some interesting issues. I like their explanation of why not to use a simple vector norm as a difference metric between frames. However, they deal with the problem of affine invariance by considering only rotation about one axis (the y axis), which greatly restricts the allowable transformations (though it makes the system actually solvable, since it permits a closed-form solution). The graph pruning description is also interesting. Future work could look at more interesting ways of creating transitions (e.g., throw an IK solver into it so that a run could transition to a backflip and the character would know to bend his knees... similar to motion doodles). Section 4.2 was a bit humorous; 10 paragraphs to explain a fancy way of doing exponential exploration. -- KenRose
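For reference, restricting the alignment to a rotation about the y axis (plus a ground-plane translation) is what admits a closed-form solution: in the ground plane it reduces to 2D Procrustes alignment, which has an arctangent solution. A minimal sketch of that reduced problem (my own simplification; the paper's metric additionally weights the points, which this omits):

```python
import math

def align_frames_2d(P, Q):
    """Closed-form optimal ground-plane rotation + translation aligning
    point cloud P onto Q (unweighted 2D Procrustes; minimizes the summed
    squared distance). P and Q are equal-length lists of (x, z) point
    positions sampled from the two frames being compared.
    Returns (rotation angle, residual distance after alignment)."""
    n = len(P)
    # Centroids of each cloud; translation is absorbed by centering.
    px = sum(x for x, _ in P) / n; pz = sum(z for _, z in P) / n
    qx = sum(x for x, _ in Q) / n; qz = sum(z for _, z in Q) / n
    # Closed-form rotation: maximize sum of dot products of centered
    # Q points with rotated centered P points -> theta = atan2(B, A).
    A = B = 0.0
    for (x, z), (xq, zq) in zip(P, Q):
        x, z, xq, zq = x - px, z - pz, xq - qx, zq - qz
        A += x * xq + z * zq
        B += x * zq - z * xq
    theta = math.atan2(B, A)
    # Residual after applying the optimal rotation: the frame distance.
    c, s = math.cos(theta), math.sin(theta)
    d2 = 0.0
    for (x, z), (xq, zq) in zip(P, Q):
        x, z, xq, zq = x - px, z - pz, xq - qx, zq - qz
        d2 += (c * x - s * z - xq) ** 2 + (s * x + c * z - zq) ** 2
    return theta, math.sqrt(d2)
```

A full 3D rotation would require an iterative or SVD-based solve, which is presumably why the authors restrict themselves to the vertical axis.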
 

Precomputing Avatar Behaviour From Human Motion Data

 (Pre-Computing Avatar Behavior) I find the merger of machine learning and computer graphics quite interesting. I see the potential of actually using the paper's suggested technique for video games, but in my view the paper lacks a thorough discussion of the usability issue. Another question that comes to mind is whether the reinforcement learning technique used in the paper also works for other scenarios where two concepts need to be learned at the same time, for example a two-person dance. -- Hagit Schechter
This paper is organized much like any other contender for "real time" performance: we precomputed everything we could and stuck it in a LUT. :) I'm a little confused by their update rule for value iteration. It resembles the Bellman update, but I don't understand why there is a gamma^t term (the Bellman update uses a single gamma factor; it is the repeated application of the update that yields the exponentiation). The automatic annotation of motion data is cool: it is a way of programmatically describing certain types of motion (kind of like a "motion language"). Are there issues with false positives or negatives? The O(MN) requirement limits the scalability of this approach to many behaviors (humans have many more behaviours). Still, even two behaviors can produce interesting results (the 30-boxers animation looks hilarious). An application for this type of system could be MMORPGs, where you may need a lot of computerized characters that do various things. It looks similar to the virtual train station demo that Terzopoulos showed last semester as part of his artificial life talk. -- KenRose
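For comparison, this is the textbook Bellman backup that standard value iteration repeats until convergence; the discount gamma appears once per backup, and the exponentiation only emerges from iterating. The toy deterministic MDP below (states, actions, rewards) is invented purely for illustration and has nothing to do with the paper's motion data:

```python
GAMMA = 0.9  # discount factor

# transitions[state][action] = (next_state, reward) -- a made-up
# two-state deterministic MDP just to exercise the update rule.
transitions = {
    0: {"stay": (0, 0.0), "go": (1, 1.0)},
    1: {"stay": (1, 0.5), "go": (0, 0.0)},
}

def value_iteration(transitions, gamma=GAMMA, iters=200):
    """Standard value iteration: repeatedly apply the Bellman optimality
    backup V(s) <- max_a [ r(s,a) + gamma * V(s') ] until (approximate)
    convergence."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        V = {s: max(r + gamma * V[s2] for (s2, r) in acts.values())
             for s, acts in transitions.items()}
    return V
```

On this toy MDP the fixed point is V(0) = 5.5 and V(1) = 5.0, and the error contracts by a factor of gamma per sweep, which is where a gamma^t quantity would naturally show up in a convergence argument.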
 