recycle movement blog for rhythm?

Here’s the blog for movement-based research that we can recycle for a fresh take on rhythm:
http://movement.posthaven.com/
Anyone can post under the current settings, I think, but should we make it private as a scratch space?
Pavan, could the Laplacian fields work for aperiodic rhythmic patterns? Presumably the formalism works for dimensions > 1.

Connor is investigating building a journaling tool that syncs multiple sensor data streams (at OSC resolution) with audio and video tracks.
Does anyone have such a tool already that we can use?

introduction to tensors

Tensors are totally useful. However, these particular references seem difficult and rather specialized. There must be some elementary reference or cheat sheet on tensors for the classical theory of rigid body motion. The classic works on mechanics are by Lagrange and Hamilton.

Here's a text that may be a good systematic starting point for someone already trained to read math:
Especially chapters 1, 2, and 10 (Elasticity).


The top-level clarification I'd like to make is the following:

A vector space V of dimension n is a set equipped with two operations, vector addition (+) and scalar multiplication, that is isomorphic to Euclidean n-dimensional space. (I'll assume you can look that up.)

A tensor T is a map from a product of vector spaces to the scalar field, say the real numbers R.
(For simplicity, let's say the vector spaces are all the same V.):

T:  V x V x V x … x V ---> R
(k times)

The degree of the tensor is the number of arguments, i.e. the number of vector spaces in the product domain.
(In our case, degree T = k because there are k copies of V.) Each argument is a vector, hence each argument has a number of components equal to the dimension of the vector space V.

T(v1, v2, v3, …, vk) is a scalar number.

So far this is not yet a tensor but a general map.  A tensor has two basic features:

(1) It is linear in each argument:
In the i-th argument, for each i,
T(… , u + v, …) = T(… , u, …) + T(… , v, …)   for all vectors u, v in V,
and
T(… , s * v, …) = s * T(… , v, …)
for all scalars s in R and all vectors v in V.

(2) The value of T does not change when you change the coordinate basis of V according to which you represent the vectors in V.

The way this is written out in classical books on tensors is in terms of the orthogonal matrices that transform a vector's coordinates w/r to one basis into that same geometric "arrow"'s coordinates w/r to another basis. It's messy, but systematic.

Elegant modern mathematics writes tensors in "coordinate-free" notation, but that hides some of the gritty intuition and permits some basic confusions, such as confusing vectors with tensors.

Tensors "eat" vectors and spit out scalar numbers. And they can't be fooled by camouflaging the vectors, i.e. changing their numeric coordinates by changing the basis with respect to which they are represented.
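As a minimal numeric sketch of both defining features, take the Euclidean dot product as a degree-2 tensor (the vectors and rotation angle here are illustrative choices, not from any source):

```python
import numpy as np

# The Euclidean dot product as a degree-2 tensor: T(u, v) = u . v.
def T(u, v):
    return float(np.dot(u, v))

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
w = np.array([0.5, 4.0])

# (1) Linearity in each argument, e.g. the first:
assert np.isclose(T(u + w, v), T(u, v) + T(w, v))
assert np.isclose(T(3.0 * u, v), 3.0 * T(u, v))

# (2) Invariance under change of basis: rotate the frame by 30 degrees.
# The components of u and v change, but the value of T does not.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u_new, v_new = R @ u, R @ v    # same "arrows", new coordinates
assert np.isclose(T(u, v), T(u_new, v_new))
```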


PS. By definition there are maps V x V x … x V --> R called "forms" that eat vectors multilinearly and produce scalars, but whose values do change when the basis is changed. Spaces of linear forms on a vector space V have as their bases the duals of the basis vectors of V. If the basis of V is, say, an orthonormal set of vectors v1, v2, …, vn, the dual forms dx1, dx2, …, dxn are defined by:
dxi ( vj ) = δij
where δij is the Kronecker delta function.
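A quick numeric check of this defining property, using the standard basis of R^3 as the orthonormal set (an illustrative sketch):

```python
import numpy as np

# Orthonormal basis of V = R^3 (here, the standard basis).
basis = [np.eye(3)[i] for i in range(3)]

# The dual form dx_i eats a vector and returns its i-th coordinate
# with respect to the basis.
def dx(i):
    return lambda vec: float(np.dot(basis[i], vec))

# The defining property: dx_i(v_j) = delta_ij (Kronecker delta).
for i in range(3):
    for j in range(3):
        assert dx(i)(basis[j]) == (1.0 if i == j else 0.0)
```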

Adrian Freed: computing velocity properly

Adrian Freed <adrian@cnmat.berkeley.edu>

To: Vangelis L <vl_artcode@yahoo.com>, John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <xinwei.sha@concordia.ca>

Date: 2013-07-28, at 9:50 PM


I remember you mentioned finding some documentation suggesting that computing the velocity from successive Kinect frames actually yields the velocity at the time between the frames, and that they propose to compute velocity by skipping a frame so that the midpoint velocity corresponds to a time on the isochronous frame grid.

Those ideas represent very coarse approximations to the correct way of doing this for rigid body motion, the basis of which is described in the attached paper.
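The frame-skipping scheme being discussed can be sketched as a central difference; the 30 fps frame period and the toy trajectory below are assumptions for illustration:

```python
import numpy as np

dt = 1.0 / 30.0                       # assumed frame period (30 fps)
t = np.arange(0.0, 1.0, dt)
x = np.sin(2 * np.pi * t)             # toy 1-D joint trajectory

# Forward difference: estimates velocity at the midpoint BETWEEN
# frames n and n+1, not at a frame time.
v_mid = (x[1:] - x[:-1]) / dt

# Frame-skipping (central) difference over frames n-1 and n+1:
# estimates velocity AT frame n, aligned with the frame times.
v_frame = (x[2:] - x[:-2]) / (2 * dt)
```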


On Jul 29, 2013, at 1:48 AM, Vangelis L <vl_artcode@yahoo.com> wrote:

Almost :) That information was coming from the biomechanical side of things and was agnostic of the frame types. Since in biomechanics they are concerned with matching specific values to exact visual or other frame representations of movement, they propose to compute velocity from every third frame (1st to 3rd), since the velocity value actually calculated represents the 2nd frame and should be attached as information there.
Do they propose a parabolic interpolation? In which space?


What I think you are trying to avoid is a pre-defined space where the velocity calculation takes place. And that is related to what I got about tensors so far... Tensors are bundles of vectors representing various desirable data from a point whose position can be described by a given zero 0 in a Cartesian space. Each tensor carries the bundle, but it does not care about its Cartesian representation or the one to which it is compared. Calculations between tensors can be performed regardless of spatial representation, and we can translate all data to any given Cartesian coordinate system on demand... is that any close?? In any case I think that the way a formula is applied to accurately measure, for instance, velocity is related to how a device gathers the rotation data in the first place, so that we connect to its most accurate representation. The biquaternion formula in the paper begins the example using the pitch, yaw and roll information (latitude, longitude, roll over the z axis) and the maximum length of the working space, which is confusing.
Confusing indeed.
How does this formula translate in practice, and if we apply this method do we need quaternions of two frames (or three frames, according to the bio-mechanical scientists) in order to calculate the bi-quat and the velocity vector or the 4x4 rotation matrix? How are these calculated in Kinect? Does it matter?
Yes it matters but we are also trying to get the machinery in place to go beyond Kinect and also solve these POV issues.
I would like for example to produce a moving image that has a trace of a ribbon behind multiple positions of the body oriented according to the surface normal.

Adrian Freed: more computations with movement salience: from Teoma workshop, CNMAT July 2013

From: Adrian Freed <adrian@cnmat.berkeley.edu>

To: Vangelis L <vl_artcode@yahoo.com>, John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, Andy W.Schmeder <andy@cnmat.berkeley.edu>

Now we have the dot product, cross product, velocity and acceleration formulae in "o."

It is time to consider more carefully how to leverage dancerly practice and biomechanics without building in too many damning assumptions.

Working on Saturday with Teoma, I got confirmation of my concerns about the notion of your "self" coordinate system referenced to a single point of the body, i.e. the mid-hips. She asked me to map hip movement so she could stand in one place for part of our piece. I used shoulder/hip distances for this, which might not be what you would expect until you realize that the hips and shoulders have to work in counter-motion in order for someone not to fall over.
One way of thinking about this is that Teoma can move several interesting points of origin dynamically. Another, perhaps more mature, way is screw theory, which models multiple connected bodies.

The quick hack, without the full screw theory model, is to acknowledge the hierarchy of connected limbs with tapering masses. The angular velocities we have will therefore be useful here, but I think swept areas are probably better. You can weight the vector lengths with masses to get kinetic energy. Now the exercise is to decide which triangles to use for the swept areas. The formula for swept area is half the norm of the cross product, but there is a problem here in interpreting this as "effort": we have an asymmetry due to gravity. It is a lot more work to raise a leg than to sweep one. The problem is that we have been focussing on kinetic energy without incorporating changes in potential energy. I suspect you can project things to form a triangle with a point on the ground to reflect a swept area that estimates the potential energy change. Even better is to bite the bullet and move everything into tensors. Then you have enough traction to do things like continuum mechanics

Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, but research in the area continues today.

Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Modeling objects in this way ignores the fact that matter is made of atoms, and so is not continuous; however, on length scales much greater than that of inter-atomic distances, such models are highly accurate. Fundamental physical laws such as the conservation of mass, the conservation of momentum, and the conservation of energy may be applied to such models to derive differential equations describing the behavior of such objects, and some information about the particular material studied is added through a constitutive relation.

Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are then represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience.

and exterior algebra which we need to do vector fields for orientation correlation.
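The swept-area and mass-weighting idea above can be sketched numerically; the limb coordinates and the mass value are illustrative assumptions, not taken from any anthropometric table:

```python
import numpy as np

# Limb vector (e.g. hip -> knee) at two successive frames;
# coordinates are made up for illustration.
r0 = np.array([0.00, -0.40, 0.10])
r1 = np.array([0.05, -0.39, 0.12])

# Area swept by the limb vector between frames: half the norm
# of the cross product of the two positions.
area = 0.5 * np.linalg.norm(np.cross(r0, r1))

# A crude "effort" proxy: weight the swept area by a limb mass.
# The mass value is assumed.
m_thigh = 7.0  # kg
effort = m_thigh * area
```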






dualquats for music/sound representation not just motion in space

The attached paper has an interesting idea at the end: a dualquat harmonic oscillator, which can describe helical motion.

You may recall that Shepard's original paper on pitch circularity discusses helical representations for
pitch perception.  http://psycnet.apa.org/journals/rev/89/4/305/

This strongly suggests that dualquats could also be used to operationalize motion in musical contexts.
The pitch one is obvious because the mapping is clear and we have lots of well-known ways to think of translation and rotation in this context (e.g. arpeggiated cadences, vamping), but I think there may be some interesting cases where we can use this for fancier rhythmic things too. For example, in your work on polytempo you can define certain requirements, such as events lining up in time, as configurations of points on different rigid bodies in motion (one for each meter) that have to be collinear. My intuition is that interpolation with dualquats will give you better results than the current scheme in tempocurver. It may at the very least give you hints as to how to represent what you are composing in a 3D GUI. I often feel that phase unwrapping would be better understood if people could see it in 3D. With care, the information we need can be obtained by casting shadows, as I did with Amar for the 3D SDIF editor.

There is of course the question of how to integrate time into the model. You could start by parameterizing the screw parameters as a function of n*deltaT, in discrete-time signal processing fashion, and then drive the system with my relaxation functions (shifted and time-scaled sawtooths).
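That parameterization can be sketched in discrete time; the sawtooth period, radius, and pitch are assumed values, and the "relaxation function" here is reduced to a plain periodic ramp for illustration:

```python
import numpy as np

dT = 0.01
n = np.arange(0, 500)
t = n * dT

# A shifted, time-scaled sawtooth as the driving function
# (exact form assumed; period 1 s, ramping 0 -> 1).
phase = t % 1.0

# Screw (helical) motion: rotate about the z axis while
# translating steadily along it.
radius, pitch = 1.0, 0.2
theta = 2 * np.pi * phase
x = radius * np.cos(theta)
y = radius * np.sin(theta)
z = pitch * t                      # advance along the screw axis
traj = np.stack([x, y, z], axis=1)
```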

I hope this is good enough. I am slightly afraid that we might want to go all out and do tensors, because I have seen time introduced in another paper by forming linear sums of two tensors to represent a reference frame from which the dualquats emerge. I am hoping that dualquats do most of the heavy lifting and that we will be happy using o.'s existing vector functions with lambda() to endow vectors with the constraints and operations needed to become tensors.
The code will be less obvious because we don't have operator overloading but most of the papers on tensors end up looking like APL which is not very clear either.


Having developed critiques of periodicity in one essay and being a fan of the point- and line-free geometries of Whitehead, Spencer Brown and others, I am rather embarrassed to be suggesting this strong push into dual-quats, the power tools for leveraging the Greek idolatry of the circle and line. Maybe the way to deal with ghosts is to dance with them?


more computations with movement salience

OK, my last word (today) on this:

So, it is not obvious how to do the Euler-Lagrange equations with dual quaternions, which we would need for the velocity, acceleration, energy, etc.

No worries though: it is tackled in the attached paper.

I added another paper to show off the range of interesting problems tackled with dual quaternions. We can also do physics engines, collision detection etc. rather easily.


Re: more computations with movement salience

Begin forwarded message:

From: Adrian Freed <adrian@cnmat.berkeley.edu>

Subject: Re: more computations with movement salience

Date: 25 July, 2013 4:01:52 PM PDT

To: John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, "Andy W.Schmeder" <andy@cnmat.berkeley.edu>, Vangelis L <vl_artcode@yahoo.com>


John,


As I suspected, with the right algebra built into "o.", we can do all these movement things we are trying to fudge with elementary stuff more compactly and richly.

I believe the best choice is the dual quaternion: http://en.wikipedia.org/wiki/Dual_quaternion
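A minimal sketch of dual quaternion arithmetic, assuming the usual convention dq = qr + ε·qd with ε² = 0, so the product is ar·br + ε(ar·bd + ad·br); the translation example is illustrative:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dqmul(a, b):
    """Product of dual quaternions a = (ar, ad), b = (br, bd):
    real part ar*br, dual part ar*bd + ad*br (since eps^2 = 0)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_from_translation(tx, ty, tz):
    """Pure translation: real part = identity, dual part = (0, t/2)."""
    return (np.array([1.0, 0.0, 0.0, 0.0]),
            np.array([0.0, tx / 2, ty / 2, tz / 2]))

# Composing two translations adds them.
a = dq_from_translation(1.0, 0.0, 0.0)
b = dq_from_translation(0.0, 2.0, 0.0)
c = dqmul(a, b)
# dual part encodes half the combined translation: (0, 0.5, 1.0, 0)
```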

clustering on gauss maps for dance etc.

From: Adrian Freed <adrian@cnmat.berkeley.edu>

Subject: clustering on gauss maps for dance etc.

Date: 1 July, 2013 9:55:10 AM PDT

To: John MacCallum <john@cnmat.berkeley.edu>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, Andrew Schmeder <andy@enchroma.com>, Rama Gottfried <rama.gottfried@gmail.com>


I have an intuition fueled by a half understanding developed over a drink with Xin Wei that could pan out into something…


With that disclaimer, the idea is to compute correlations between orientations of body parts of dancers (within a dancer and between dancers). The first step is to project the orientation vectors onto a sphere (i.e. a Gauss map). Then we do cluster analyses on this.

I am not sure how this is done adapting classical k-means things - I guess you can just replace the usual coordinate discretization with a graph. I wonder if one should go straight for a spherical harmonic representation so as to be able to characterize shapes? Of particular interest would be to compute the autocorrelation to identify periodicities in orientation change. I am not good enough to be sure, but I suspect the Wiener–Khinchin theorem holds, so these periodicities can be estimated efficiently with FFTs. While I am wildly speculating here, it would be nice to compute the linking number of pairs of paths traced by the body, closed in some ingenious way based on the aforementioned periodicities. This would capture braid structures that occur if you imagine hands and feet with ribbons tied to them, or braid structures in patterned dance (e.g. Ceili Dance).
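The Gauss-map-then-cluster step might look like this spherical k-means sketch, with cosine similarity in place of Euclidean distance; the orientation data here are synthetic, and this is one possible adaptation rather than an established recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_map(vectors):
    """Project orientation vectors onto the unit sphere."""
    v = np.asarray(vectors, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def spherical_kmeans(points, k, iters=50):
    """Toy spherical k-means: assign by maximum cosine similarity,
    then re-normalize each cluster mean back onto the sphere."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(points @ centers.T, axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centers[j] = m / np.linalg.norm(m)
    return labels, centers

# Two synthetic clusters of orientation vectors.
a = rng.normal([1.0, 0.0, 0.0], 0.1, size=(50, 3))
b = rng.normal([0.0, 0.0, 1.0], 0.1, size=(50, 3))
pts = gauss_map(np.vstack([a, b]))
labels, centers = spherical_kmeans(pts, k=2)
```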


I will stop these speculations here as it is time to ignore me and plunge into the literature. I have attached something, John, that addresses your interest in using kurtosis of bodily motion to drive migrators. This paper came up when I searched for "gauss map cluster", so we should be able to use it as a seed to harvest related papers. What is interesting about this paper is that they don't construct a surface description, so we can perhaps use it on the depth map data from the Kinect.


I am trying to drag Andy into this partly to help us avoid some silly false leads, but also because I suspect there is something interesting here for aerial dance, which gives more orientation freedom than landed dance.



p.s. There is something deeply silly about what I am suggesting (and philosophically disturbing from Xin Wei's point of view) if we use Kinect skeletonization data. With enough crunch you can do better than the Kinect by deriving the kinematics from the imager data using local invariance structure (as hinted in the attached paper). For testing on our slow laptops and getting our feet wet, the Kinect is a handy starting point though.


p.p.s. My puzzling over the right arithmetic to use for those quality measures that the Kinect software gives us (0, 0.5, 1.0) led me to indicator functions, which are usually 0 or 1 so as to bridge set membership or other predicates to numbers. They are, however, generalized to be continuous in fuzzy set theory, which presumably has a lot more to say about how we should operate with them.

I am particularly interested now in how to leverage the dynamics of such functions.
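A sketch of operating on those quality values with Zadeh's standard fuzzy-set operators; the min/max/complement choice is one common convention among several, and the joint names are illustrative:

```python
# Kinect joint-quality values treated as generalized (fuzzy)
# indicator functions: membership is continuous in [0, 1]
# rather than just 0 or 1.
quality = {"hand": 1.0, "elbow": 0.5, "head": 0.0}

def fuzzy_and(a, b):
    return min(a, b)     # Zadeh's standard conjunction (t-norm)

def fuzzy_or(a, b):
    return max(a, b)     # Zadeh's standard disjunction (t-conorm)

def fuzzy_not(a):
    return 1.0 - a       # standard complement

# "Both the hand and the elbow are reliably tracked":
arm_ok = fuzzy_and(quality["hand"], quality["elbow"])   # 0.5
```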