In essence, they learn a number of different models on a motion database, then test new (perhaps altered) data against those models and measure the results. The entire system seems extremely dependent on the database: how large it is, and how good a sampling of "natural" motion it provides. Also, it seems to me that they aren't looking for "natural" motion so much as motion that is similar to what is found in the database. It doesn't make sense to compare the results to human surveys, since asking a human whether a motion looks natural is very different from determining whether a motion resembles the contents of some database. - Roey Flor
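The distinction drawn above, "natural" versus merely "similar to the database", can be made concrete with a minimal sketch. This is a hypothetical illustration, not the paper's actual method: it fits a single Gaussian to stand-in pose feature vectors (the "database") and flags any new pose whose log-likelihood falls below a percentile threshold as "unnatural" relative to that database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "database": 500 pose feature vectors (e.g. joint angles).
# A real system would use mocap features; these are synthetic.
database = rng.normal(loc=0.0, scale=1.0, size=(500, 6))

mean = database.mean(axis=0)
cov = np.cov(database, rowvar=False) + 1e-6 * np.eye(6)  # regularized
cov_inv = np.linalg.inv(cov)
_, logdet = np.linalg.slogdet(cov)

def log_likelihood(x):
    """Gaussian log-density of a pose vector x under the database model."""
    d = x - mean
    return -0.5 * (d @ cov_inv @ d + logdet + len(x) * np.log(2 * np.pi))

# Threshold at the 5th percentile of the database's own likelihoods:
# anything below is "unnatural" *relative to the database*, which is
# exactly the objection raised above -- the model only measures
# similarity to its training data, not naturalness per se.
threshold = np.percentile([log_likelihood(x) for x in database], 5)

outlier = np.full(6, 8.0)  # far outside the database's support
print(log_likelihood(outlier) < threshold)  # True: flagged "unnatural"
```

Note how a pose outside the database's support is rejected even though a human might judge it perfectly natural; the threshold encodes the database, not human perception.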
Using machine learning techniques to learn models from mocap data seems an interesting way to use the tools available; however, I wonder whether the contribution of this paper is merely to show that this can be done, or whether a higher goal is achieved. I doubt the notion of 'naturalness' the authors claim to test with these models was their original intention, since a motion these models consider 'unnatural' (for example, a combination of two natural motions) may well be judged natural by humans. - Disha Al Baqui