Adrian Freed: computing velocity properly

Adrian Freed <adrian@cnmat.berkeley.edu>

To: Vangelis L <vl_artcode@yahoo.com>, John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <xinwei.sha@concordia.ca>

Date: 2013-07-28, at 9:50 PM


I remember you mentioned finding some documentation suggesting that computing the velocity from successive Kinect frames actually computes the velocity at the time between the frames, and that they propose to compute velocity by skipping a frame, so that the midpoint velocity corresponds to the time of an actual frame in the isochronous sequence.

Those ideas represent very coarse approximations to the correct way of doing this for rigid-body motion, the basis of which is described in the attached paper.


On Jul 29, 2013, at 1:48 AM, Vangelis L <vl_artcode@yahoo.com> wrote:

Almost :) That information was coming from the biomechanics side of things and was agnostic of the frame types. Since in biomechanics they are concerned with matching specific values to exact visual or other frame representations of movement, they propose computing velocity every third frame (1st to 3rd), since the velocity value actually calculated represents the 2nd frame and should be attached as information there.
Do they propose a parabolic interpolation? In which space?
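The 1st-to-3rd-frame scheme described above is just a central difference over position samples. A minimal sketch, assuming uniformly spaced frames with spacing dt (the function name and array layout are mine, not from the thread):

```python
import numpy as np

def central_difference_velocity(positions, dt):
    """Velocity at frame i estimated from frames i-1 and i+1.

    positions: (N, 3) array of per-frame positions at uniform spacing dt.
    Returns an (N-2, 3) array: the value for each interior frame, which is
    why the biomechanics convention attaches it to the middle (2nd) frame.
    """
    positions = np.asarray(positions, dtype=float)
    return (positions[2:] - positions[:-2]) / (2.0 * dt)

# Example: a body moving at a constant 1 unit/s along x, sampled at 30 fps.
dt = 1.0 / 30.0
pos = np.stack([np.arange(5) * dt, np.zeros(5), np.zeros(5)], axis=1)
vel = central_difference_velocity(pos, dt)
```

For smooth motion this estimate is second-order accurate at the middle frame, whereas a simple two-frame difference is only first-order accurate there, which is the timing argument above.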


What I think you are trying to avoid is a pre-defined space where the velocity calculation takes place. And that is related to what I have got about tensors so far... Tensors are bundles of vectors representing various desirable data at a point whose position can be described relative to a given origin in a Cartesian space. Each tensor carries the bundle, but it does not care about its own Cartesian representation or about that of the one it is compared to. Calculations between tensors can be performed regardless of spatial representation, and we can translate all the data to any given Cartesian coordinate system on demand... is that anywhere close??

In any case, I think that the way a formula is applied to accurately measure, for instance, velocity is related to how a device gathers the rotation data in the first place, so that we connect to its most accurate representation. The biquaternion formula in the paper begins the example using the pitch, yaw and roll information (latitude, longitude, roll about the z axis) and the maximum length of the working space, which is confusing.
Confusing indeed.
How does this formula translate in practice? And if we apply this method, do we need quaternions of two frames (or three frames, according to the biomechanics scientists) in order to calculate the biquaternion and the velocity vector, or the 4x4 rotation matrix? How are these calculated in Kinect? Does it matter?
Yes, it matters, but we are also trying to get the machinery in place to go beyond Kinect and also solve these POV issues.
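On the two-frame question: two orientation quaternions are enough for a first-order angular-velocity estimate. A hypothetical sketch, and to be clear this is plain quaternion finite differencing, not the paper's biquaternion formula; the function names and (w, x, y, z) convention are my own assumptions:

```python
import numpy as np

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    # Conjugate == inverse for unit quaternions.
    return np.array([q[0], -q[1], -q[2], -q[3]])

def angular_velocity(q1, q2, dt):
    """Angular velocity (rad/s, axis * rate) taking orientation q1 to q2
    over an interval dt, assuming unit quaternions."""
    dq = quat_mul(q2, quat_conj(q1))      # relative rotation between frames
    w = np.clip(dq[0], -1.0, 1.0)
    v = dq[1:]
    s = np.linalg.norm(v)
    if s < 1e-12:                          # no measurable rotation
        return np.zeros(3)
    angle = 2.0 * np.arctan2(s, w)         # rotation angle of dq
    return (v / s) * (angle / dt)          # axis scaled by rate
```

The same midpoint argument as for positions applies: this estimate is best attached to the time halfway between the two frames, which is one reason for the three-frame biomechanics convention.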
I would like, for example, to produce a moving image that has a trace of a ribbon trailing behind multiple positions of the body, oriented according to the surface normal.
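One way such a ribbon could be sketched, purely illustrative since the thread specifies no representation; the function name, triangle-strip layout, and width parameter are assumptions, is to pair each recent body position with a copy offset along its surface normal:

```python
import numpy as np

def ribbon_strip(positions, normals, width=0.05):
    """Build triangle-strip vertices for a ribbon trailing a body.

    positions: (N, 3) recent body positions, oldest first.
    normals:   (N, 3) unit surface normals at those positions.
    Returns a (2N, 3) array: each frame contributes the point itself and the
    point offset along its normal, so each ribbon segment is oriented by the
    local surface normal rather than by a fixed world axis.
    """
    p = np.asarray(positions, dtype=float)
    n = np.asarray(normals, dtype=float)
    verts = np.empty((2 * len(p), 3))
    verts[0::2] = p              # one edge of the ribbon
    verts[1::2] = p + width * n  # opposite edge, offset along the normal
    return verts
```

The resulting vertex array is in the interleaved order a GPU triangle strip expects, so consecutive positions become quads that twist with the body's orientation.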