tag:movement.posthaven.com,2013:/posts movement 2021-04-27T18:54:16Z Xin Wei Sha tag:movement.posthaven.com,2013:Post/1684161 2021-04-27T18:54:16Z 2021-04-27T18:54:16Z Jessica Rajko + John MacCallum + Teoma Naccarato: Provocation Discussion - May 3rd, 1:00-1:30pm EST
Subject: Provocation Discussion - May 3rd, 1:00-1:30pm EST
Date: April 27, 2021 at 1:03:45 PM GMT-4

Greetings!



First, thank you again for carving time out of your schedule to join us in conversation. The following information details how we’ll move through our discussion and how you can prepare. 



Discussion Details and Structure: The plan is to record and share a series of 30-minute discussions between ‘provocateurs’---those who submitted provocations back in 2018 in response to the question: “What escapes computation in interactive performance?” Meetings will be hosted and recorded on Zoom (link provided in this meeting request). All three of the project facilitators (Teoma, John, and Jessica) will be present during the conversation to host and hold space. Our aim is to keep the discussion low-key, conversational, and open-ended. We are not trying to reach some sort of summary conclusion or solution within 30 minutes. Rather, we see these discussions as another way in which we continue the rich dialogue put forth in the provocations and currently buzzing on our SloMoCo Discord channel. To give you a sense of how we’ll structure the 30 minutes, here is a flexible outline:



Provocation Discussion


  • Brief introduction by facilitators and sharing of provocations by provocateurs - 10 min (2-3 min per provocation)


  • Provocateurs ask each other questions and discuss - 10 minutes 


  • Facilitators join in conversation - 10 minutes



For sharing your provocation, you can read or summarise, and comment on your current thinking about it. A link to your provocation will be posted alongside the video of the discussion.



Please feel free to join us 5 - 10 minutes early if you wish. We won’t start recording until everyone is settled in, but we also want to be respectful of your time and keep the entire session as close to 30 minutes as possible. 



Preparation: All we need you to do in advance is i) revisit your own provocation, and those of the other two provocateurs; and ii) bring one question for each other person, based on their provocation. The purpose of this discussion is to collectively read your three provocations through one another, exploring connections and generative tensions in perspectives.


 


Here are links to the three provocations for your session:


•              Fran & Javier: https://provocations.online/whatescapescomputation/jaimovitch-morand/


•              Fred: https://provocations.online/whatescapescomputation/bevilacqua/


•              Xin Wei: https://provocations.online/whatescapescomputation/xin-wei/

]]>
tag:movement.posthaven.com,2013:Post/1551422 2020-05-29T19:43:19Z 2020-05-29T19:43:19Z TUI L Voice and Infrared Sensor Shirt - ADACHI Tomomi, Pamela Z, Michel Waisvisz, Leslie-Ann Coles w David Rokeby

Tangible User Interfaces in Vocal Performance: The Sounding Body as Digital Assemblage

Gretchen Jude

http://www.critical-stages.org/16/tangible-user-interfaces-in-vocal-performance-the-sounding-body-as-digital-assemblage/

Includes a performance with sensate shirt by ADACHI Tomomi (2009)
]]>
tag:movement.posthaven.com,2013:Post/1537146 2020-04-28T23:17:53Z 2020-04-28T23:17:53Z Updated Gesture Follower (IRCAM, Goldsmiths)
The Gesture Follower allows for real-time comparison between a gesture performed live and a set of prerecorded examples. The implementation can be seen as a hybrid between DTW (Dynamic Time Warping) and HMM (Hidden Markov Models).
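The DTW half of that hybrid can be sketched in a few lines. Below is a minimal dynamic-time-warping distance between a live gesture and one prerecorded template, using 1-D features for brevity; the Gesture Follower itself works on multidimensional sensor streams and adds HMM-style probabilistic time progression, so this is an illustration of the principle, not the IRCAM implementation:

```python
import numpy as np

def dtw_distance(query, template):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(query), len(template)
    # cost[i, j] = best alignment cost of query[:i] against template[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - template[j - 1])
            # allow a match, an insertion, or a deletion step
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

gesture   = [0.0, 0.2, 0.9, 1.0, 0.3]
slower    = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0, 0.4, 0.3]  # same shape, different timing
unrelated = [1.0, 0.0, 1.0, 0.0, 1.0]

# DTW recognises the slower rendition as closer than the unrelated one
print(dtw_distance(gesture, slower) < dtw_distance(gesture, unrelated))  # True
```

The warping is what lets a template match a gesture performed faster or slower than the recording, which is exactly the "following" behaviour the patch exposes.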

http://rapidmix.goldsmithsdigital.com/features/gesture-follower/

F. Bevilacqua, N. Schnell, N. Rasamimanana, B. Zamborlin, and F. Guédy, “Online Gesture Analysis and Control of Audio Processing,” in Musical Robots and Interactive Multimodal Systems: Springer Tracts in Advanced Robotics Vol 74, J. Solis and K. C. Ng, Eds., Springer Verlag, 2011, pp. 127-142. [Download PDF]

• F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guédy, and N. Rasamimanana, “Continuous realtime gesture following and recognition,” in Gesture in Embodied Communication and Human-Computer Interaction: Lecture Notes in Computer Science (LNCS) volume 5934, Springer Verlag, 2010, pp. 73-84.  [Download PDF]
]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/1512857 2020-02-24T07:31:30Z 2020-02-24T07:31:31Z data_av.maxpat and data_av_playback.maxpat
Let me draw attention to a pair of what should be among the more useful patchers we have in the SC kit for serious empirical research with movement.  See:

Max\ 8/Packages/SC/examples/utility/
data_av.maxpat
data_av_playback.maxpat
data_vcr.maxpat

Working from the IMUs that Seth Thorn (AME) uses, Connor Rawls wrote these very useful utilities to pack sensor data as tracks in uncompressed AIFF files, one track per sensor stream.

Note data_av_playback uses groove~ to sync video with the data.
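The patchers themselves are Max objects, but the trick they rely on — one sensor stream per channel of an uncompressed audio file, so ordinary audio machinery handles storage, seeking, and sync — is easy to sketch in any language. Here is a hypothetical Python version using the standard-library `wave` module (WAV rather than AIFF, 16-bit PCM for brevity); the file name and sensor names are made up for the example:

```python
import wave
import numpy as np

def write_sensor_tracks(path, streams, rate):
    """Pack several equal-length sensor streams (floats in [-1, 1])
    as channels of one uncompressed 16-bit PCM audio file."""
    data = np.clip(np.column_stack(streams), -1.0, 1.0)
    ints = (data * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(ints.shape[1])
        w.setsampwidth(2)               # 16-bit samples
        w.setframerate(rate)
        w.writeframes(ints.tobytes())   # rows are frames -> channels interleave

def read_sensor_tracks(path):
    """Recover the streams: one column per sensor stream."""
    with wave.open(path, "rb") as w:
        raw = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        return raw.reshape(-1, w.getnchannels()) / 32767.0

# three hypothetical IMU-like streams sampled at 200 Hz
t = np.linspace(0, 1, 200)
accel_x = np.sin(2 * np.pi * 3 * t)
accel_y = np.cos(2 * np.pi * 3 * t)
gyro_z  = 0.5 * t
write_sensor_tracks("imu.wav", [accel_x, accel_y, gyro_z], rate=200)

tracks = read_sensor_tracks("imu.wav")
print(tracks.shape)  # (200, 3)
```

Because the streams live in an audio file, any sample-accurate audio transport (like groove~ above) can scrub and loop the sensor data in lockstep with sound and video.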

If you’ve got meaningful rich time series, please check out MUBU and try Connor’s sc.hurst external …

Xin Wei

___________________________________________
Sha Xin Wei | skype: shaxinwei | mobile: +1-650-815-9962
________________________________________________
]]>
tag:movement.posthaven.com,2013:Post/1486886 2019-12-09T04:38:37Z 2019-12-09T04:38:37Z Ying Gao (Montreal): gaze- and light-sensitive clothing

flowing water, standing time: robotic clothing reacting to the chromatic spectrum, by ying gao

https://www.itsnicethat.com/articles/ying-gao-flowing-water-standing-time-digital-fashion-021219

Ying Gao (Université du Québec à Montréal / UQaM)  is one of the most talented artists from our Hexagram research-creation active textiles and wearable computing axis. 


Gaze and light-sensitive clothing…




]]>
tag:movement.posthaven.com,2013:Post/1403115 2019-04-29T01:54:04Z 2019-04-29T01:54:05Z aerial work
Thanks Luke, It’s now in Synthesis/equipment/Space 3.0

In Great Yarmouth, txOom 2002
(a daughter of my tgarden, tg2001, and grandaunt to sc :)

trapeze-artist-aides strapped visitors of all ages and shapes into harnesses.
they were slung into the air…some laughing like crazy

while we beamed the data from accelerometers sewn into their translucent tails 
to max+nato/jitter+supercollider 



circa 1990 I saw Project Bandaloop do wonderful aerial work back near Capp Street Theater in the Mission in SF 
during the Street Performance festivals

there were lots of aerial dance works since there were so many climbers in SF who were also dancers when they weren’t up in the Sierras .
but one of my friends was part of the support team at the tragedy with Sankai Juku  in Seattle a few years before…

+  Einstein’s Dream: overhead cam looking down 23 feet onto the sand
with  visitors wading as if underwater through the projected ripples to the umbrella:
]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/1355680 2018-12-21T09:11:52Z 2018-12-21T09:11:52Z professional bassoonist playing a straw like a double reed instrument
Peter Bastian is a multi-instrumentalist from Copenhagen, Denmark.
Educated in classical bassoon, he’s seen
playing a straw like a double reed instrument

- As Adrian Freed likes to point out, the virtuosity is not in the instrument 
- Adapting Simondon, the technicity is distributed across the knowledge incorporated into (literally!) the bodies of practitioners;
the articulated, institutionalized socially-mediated discipline of playing wind instruments; as well as the genealogical refinement of the instruments.
- It is important that Peter Bastian was trained in and on classical bassoon.  But the fact that he can subsequently transpose his practiced skill shows that the body and the instrument seep deeply into one another (“gearing”, to adapt from Merleau-Ponty); there is no hard line between body and instrument.

www.rareandstrangeinstruments.com

]]>
tag:movement.posthaven.com,2013:Post/1346108 2018-11-21T13:22:25Z 2018-11-21T13:22:25Z movement research notes
Hi Suren, Seth,

Good talk re our pilot exploratory movement research with 

Seth’s musicianship and music technology,
Suren’s movement amplification via optical flow as starting points.

We discussed initial experimental scenarios in the experience of improvisation

— the difference between performer’s sense of rate of passage of time vs a spectator’s sense of rate of passage of time.

— two or more performers coming to a cadence (together!) without a score or prior plan.

— a performer beginning to lose focus OR shifting intention, 
versus spectator’s awareness of that shift.

The key to a more rigorous, 21c approach to any of these experiential researches is to NOT measure merely one body but to develop relational measures. Simple example: instead of measuring trajectory of one dancer’s wrist, measure the distance between one dancer’s wrist and another dancer’s wrist.
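That relational measure can be sketched numerically. Assuming two wrist trajectories already sampled at the same timestamps (the trajectories below are synthetic stand-ins, not real capture data):

```python
import numpy as np

# hypothetical wrist trajectories: (frames, 3) positions in metres,
# two dancers circling with a constant phase offset of 0.5 rad
frames = 100
t = np.linspace(0, 2 * np.pi, frames)
wrist_a = np.column_stack([np.cos(t), np.sin(t), np.zeros(frames)])
wrist_b = np.column_stack([np.cos(t + 0.5), np.sin(t + 0.5), np.zeros(frames)])

# the relational measure: inter-wrist distance over time,
# rather than either wrist's trajectory taken alone
distance = np.linalg.norm(wrist_a - wrist_b, axis=1)

# for two points on a unit circle 0.5 rad apart this is the constant
# chord length 2*sin(0.25), whatever either dancer's absolute path does
print(float(distance.std()) < 1e-9)  # True
```

The point of the example: each individual trajectory is a full circle, yet the relational signal is flat — the relation, not the body, is what the measure tracks.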

Practical procedure: let’s

1) video record the performer,
close enough to see facial expression and manual activity, with timestamps

2) Simultaneously record data streams with timestamps

3) Talk-aloud: Have participants narrate what they are doing / thinking during playback of video, using Quicktime Player 7 to record an extra audio track 

4) Look for salient parallels between data streams and self-reports.

5) Revise feature extraction and iterate.


I cc Julian, Connor to see where is the code to record sensor / OSC data synchronized with video.

cc Todd, Lauren as fellow musicians/ movement + computational researchers.   Note that Lauren’s done extensive and creative work with haptics / low frequency sound / touch.



]]>
tag:movement.posthaven.com,2013:Post/1319300 2018-09-07T12:42:01Z 2018-09-07T12:42:01Z recycle movement blog for rhythm?
Here’s the blog for movement-based research that we can recycle for a fresh take on rhythm
http://movement.posthaven.com/
Anyone can post in current settings, I think —  but should we make it private as a scratch space?
Pavan, could the Laplacian fields work for aperiodic rhythmic patterns?  Presumably the formalism works for dimensions > 1.

Connor is investigating creating a journaling tool sync’ing multiple sensor data (OSC resolution) + audio + video tracks.
Does anyone have such a tool already that we can use?
]]>
tag:movement.posthaven.com,2013:Post/591417 2013-07-30T15:36:38Z 2013-10-08T17:27:50Z introduction to tensors

Tensors are totally useful.  However, these particular references seem difficult and rather specialized.  There must be some elementary reference or cheat sheet on tensors for the classical theory of rigid body motion.  The classic works on mechanics are by Lagrange and Hamilton.

Here's a text that may be a good systematic starting point for someone already trained to read math:
http://tocs.ulb.tu-darmstadt.de/46123709.pdf
Especially chapters 1,2, and 10 (Elasticity).


The top-level clarification I'd like to make is the following:

A vector space V of dimension n is a set with the following operations: vector addition + and scalar multiplication; it is isomorphic to Euclidean n-dimensional space.  (I'll assume you can look that up.)

A tensor T is a linear functional mapping from a product of vector spaces to the scalar field, say real numbers R.
(For simplicity let's say the vector spaces are all the same V.):

T:  V x V x V x … x V ---> R
(k times)

The degree of the tensor is the number of arguments -- the number of vector spaces in the product domain.
(In our case, degree T = k because there are k copies of V.)  Each argument is a vector (hence each argument has a number of components = the dimension of the vector space V).

T(v1, v2, v3, …, vk) is a scalar number.

So far this is not yet a tensor but a general map.  A tensor has two basic features:

(1)  It is linear in each argument:
For each argument, say in the i-th argument for each i:
T(… , u + v, …)  = T(… , u , …)   + T(… ,  v, …)   for all vectors u, v in V
and in the i-th argument for each i
T(… , s * v , …)  = s * T(… , v , …) 
for all scalars s in R, and all vectors v in V

(2) The value of T does not change when you change the coordinate basis of V according to which you represent the vectors in V.

The way this is written out in classical books on tensors is in terms of the orthogonal matrices for transforming a vector written w/r to one basis to that same geometric "arrow"'s coordinates w/r to another basis.   It's messy, but systematic.
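Written out concretely: if A is the orthogonal change-of-basis matrix, with new basis vectors $e'_j = \sum_i A_{ij}\, e_i$, then the components of a degree-k tensor transform as

```latex
T'_{j_1 \cdots j_k}
  \;=\; \sum_{i_1, \ldots, i_k} A_{i_1 j_1} \cdots A_{i_k j_k}\, T_{i_1 \cdots i_k},
\qquad \text{where } T_{i_1 \cdots i_k} := T(e_{i_1}, \ldots, e_{i_k}).
```

Property (2) is then the statement that feeding the transformed components the transformed coordinates yields the same scalar as the original components with the original coordinates.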

Elegant modern maths writes tensors in "co-ordinate-free" notations, but that hides some of the gritty intuition, and permits some basic confusions, such as confusing vectors with tensors.

Tensors "eat" vectors and spit out scalar numbers.  And they can't be fooled when the vectors camouflage their numeric coordinates by a change of the basis w/r to which they are represented.
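Property (2) is easy to check numerically for a degree-2 tensor: represent T by its component matrix, change basis with an orthogonal matrix, transform components and coordinates accordingly, and confirm the scalar value is unchanged. A sketch (random data, standing in for any concrete tensor):

```python
import numpy as np

rng = np.random.default_rng(0)

# a degree-2 tensor on R^3, given by its components w/r to the standard basis:
# T(u, v) = sum_ij u_i T_ij v_j
T = rng.standard_normal((3, 3))
u, v = rng.standard_normal(3), rng.standard_normal(3)

# an orthogonal change of basis: the columns of A are the new basis vectors
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# coordinates of the SAME geometric vectors w/r to the new basis
u_new, v_new = A.T @ u, A.T @ v
# components of the SAME tensor w/r to the new basis: T'_ij = e'_i . T e'_j
T_new = A.T @ T @ A

value_old = u @ T @ v
value_new = u_new @ T_new @ v_new
print(np.isclose(value_old, value_new))  # True
```

The vectors' coordinates and the tensor's components both change, but the number T(u, v) does not — that is the whole content of (2).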


PS. By definition there are maps: V x V x V … x V --> R  called "forms" that eat vectors multi-linearly and produce scalars.  But they do change value when the basis is changed.  Spaces of linear forms on a vector space V have as their bases the duals to the basis vectors of V.  If the basis of V is, say, an orthonormal set of vectors v1, v2, …, vn, the dual forms dx1, dx2, …, dxn are defined by:
dxi ( vj ) = δij
where δij is the Kronecker delta function.

]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591411 2013-07-30T15:22:46Z 2016-07-08T09:00:37Z Adrian Freed: computing velocity properly

Adrian Freed <adrian@cnmat.berkeley.edu>

To: Vangelis L <vl_artcode@yahoo.com>, John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <xinwei.sha@concordia.ca>

Date: 2013-07-28, at 9:50 PM


I remember you mentioned finding some documentation suggesting that computing the velocity from successive kinect frames actually computes the velocity at the time between the frames, and that they propose to compute velocity by skipping a frame so that the midpoint velocity corresponds to a time on the isochronous frame grid.

Those ideas represent very coarse approximations to the correct way of doing this for rigid body motion, the basis of which is described in the attached paper.


On Jul 29, 2013, at 1:48 AM, Vangelis L <vl_artcode@yahoo.com> wrote:

Almost :) that information was coming from the biomechanical side of things and was agnostic of the frame types. Since in biomechanics they are concerned with matching specific values to exact visual or other frame representations of movement, they propose to compute velocity every third frame (1st to 3rd) since the actual value of velocity that is calculated represents the 2nd frame and should be attached as information there. 
Do they propose a parabolic interpolation? In which space?
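The biomechanists' three-frame rule is just the central difference: the slope from frames k-1 and k+1 is a second-order-accurate estimate of the velocity at frame k, whereas the two-frame difference is only first-order accurate and really estimates the velocity at the midpoint between frames. A quick numerical check on a hypothetical 1-D position track:

```python
import numpy as np

dt = 1.0 / 30.0                          # hypothetical 30 fps frame interval
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * t)                # position track
v_true = 2 * np.pi * np.cos(2 * np.pi * t)

# two-frame (forward) difference: really the velocity at the midpoint,
# naively attributed here to the later of the two frames
v_fwd = (x[1:] - x[:-1]) / dt

# three-frame (central) difference: velocity attributed to the middle frame
v_ctr = (x[2:] - x[:-2]) / (2 * dt)

# compare both estimators at the interior frames
err_fwd = np.abs(v_fwd[:-1] - v_true[1:-1]).max()
err_ctr = np.abs(v_ctr - v_true[1:-1]).max()
print(err_ctr < err_fwd)  # True
```

This is only the scalar version; for rigid body motion the same attribution question reappears for angular velocity, which is where the paper's machinery comes in.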


What I think you are trying to avoid is a pre-defined space where the velocity calculation takes place. And that is related to what I got about Tensors so far... Tensors are bundles of vectors representing various desirable data from a point whose position can be described by a given zero 0 in a Cartesian space. Each Tensor carries the bundle but it does not care about its Cartesian representation or the one to which it is compared. Calculations between Tensors can be performed regardless of spatial representation and we can translate all data to any given Cartesian coordinate system on demand... is that any close?? In any case I think that the way a formula is applied to accurately measure f.i. velocity is related to how a device gathers the rotation data in the first place, so that we connect to its most accurate representation. The biquaternion formula in the paper begins the example using the pitch, yaw and roll information (latitude, longitude, roll over the z axis) and the maximum length of the working space, which is confusing. 
Confusing indeed.
How does this formula translate in practice, and if we apply this method do we need quaternions of two frames (or three frames according to the bio-mechanical scientists) in order to calculate the bi-quat and the velocity vector or the 4x4 rotation matrix? How are these calculated in Kinect? Does it matter?
Yes it matters but we are also trying to get the machinery in place to go beyond Kinect and also solve these POV issues.
I would like for example to produce a moving image that has a trace of a ribbon behind multiple positions of the body oriented according to the surface normal. ]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591410 2013-07-30T15:19:10Z 2013-10-08T17:27:50Z Adrian Freed : more computations with movement salience: from Teoma workshop, CNMAT July 2013

From: Adrian Freed <adrian@cnmat.berkeley.edu>

To: Vangelis L <vl_artcode@yahoo.com>, John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, Andy W.Schmeder <andy@cnmat.berkeley.edu>

Now we have the dot product, cross product, velocity and acceleration formulae in "o."

it is time to consider more carefully how to leverage dancerly practice and biomechanics without
building in too many damning assumptions.

Working on Saturday with Teoma I got confirmation of my concerns about the notion of your "self" coordinate system referenced to a single point of the body, i.e. in the mid-hips. She asked me to map hip movement so she could stand in one place for part of our piece. I used shoulder/hip distances for this, which might not be what you would expect until you realize that the hips and shoulders have to work in counter motion in order for someone not to fall over.
One way of thinking about this is that Teoma can move the several interesting points of origin dynamically. Another, more mature way perhaps is screw theory, which models multiple connected bodies.

The quick hack without the full screw theory model is to acknowledge the hierarchy of connected limbs with tapering masses. Angular velocities as we have will therefore be useful here, but I think swept areas are probably better. You can weight the vector lengths with masses to get kinetic energy. Now the exercise is to decide which triangles to use for the swept areas. The formula for swept area is half the norm of the cross product, but there is a problem here interpreting this as "effort": we have an asymmetry due to gravity. It is a lot more work to raise a leg than to sweep one. The problem is we have been focussing on kinetic energy without incorporating changes in potential energy. I suspect you can project things to form a triangle with a point on the ground to reflect an area swept that estimates the potential energy change. Even better is to bite the bullet and move everything into tensors. Then you have enough traction to do things like continuum mechanics 

Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, but research in the area continues today.

Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Modeling objects in this way ignores the fact that matter is made of atoms, and so is not continuous; however, on length scales much greater than that of inter-atomic distances, such models are highly accurate. Fundamental physical laws such as the conservation of mass, the conservation of momentum, and the conservation of energy may be applied to such models to derive differential equations describing the behavior of such objects, and some information about the particular material studied is added through a constitutive relation.

Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are then represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience.

and exterior algebra which we need to do vector fields for orientation correlation.
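The swept-area measure above, plus the gravity asymmetry it misses, can be sketched in a few lines. The joint positions, limb mass, and frame interval below are hypothetical values for illustration:

```python
import numpy as np

def swept_area(pivot, tip_prev, tip_next):
    """Area of the triangle a limb sweeps between two frames:
    half the norm of the cross product of the two limb vectors."""
    a = tip_prev - pivot
    b = tip_next - pivot
    return 0.5 * np.linalg.norm(np.cross(a, b))

# hypothetical frames: elbow fixed, wrist sweeping a quarter turn
# in the horizontal plane at radius 0.3 m
elbow    = np.array([0.0, 0.0, 1.2])
wrist_t0 = elbow + np.array([0.3, 0.0, 0.0])
wrist_t1 = elbow + np.array([0.0, 0.3, 0.0])

area = swept_area(elbow, wrist_t0, wrist_t1)
print(round(area, 3))  # 0.045  (= 0.5 * 0.3 * 0.3 for perpendicular limb vectors)

# the gravity asymmetry: raising the wrist by the same arc sweeps the same
# area, but additionally changes potential energy by m*g*dz, which the
# cross-product measure alone cannot see
mass, g = 3.5, 9.81                       # hypothetical limb mass, kg
wrist_up = elbow + np.array([0.0, 0.0, 0.3])
delta_pe = mass * g * (wrist_up[2] - wrist_t0[2])
print(delta_pe > 0)  # True
```

Weighting each segment's swept-area rate by its mass gives the kinetic-energy-flavoured "effort" estimate; the potential-energy term would have to be added separately, as the email argues.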






]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591409 2013-07-30T15:16:32Z 2016-07-08T05:58:57Z Hamilton, Rodrigues, and the Quaternion Scandal

a good intro. to quaternions which along the way explains Hamilton's mistake and why we will be working with Euler/Rodrigues formulations.

"Hamilton, Rodrigues, and the Quaternion Scandal"

]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591408 2013-07-30T15:15:13Z 2016-07-08T08:24:37Z dualquats for music/sound representation not just motion in space

The attached paper has an interesting idea at the end: a dualquat harmonic oscillator which can describe helical motion. 

You may recall that Shepard's original paper on pitch circularity discusses helical representations for
pitch perception.  http://psycnet.apa.org/journals/rev/89/4/305/

This strongly suggests that dualquats could also be used to operationalize motion in musical contexts.
The pitch one is obvious because the mapping is clear and we have lots of well known ways to think of translation and rotation in this context (e.g. arpeggiated cadences, vamping), but I think there may be some interesting cases where we can use this for fancier rhythmic things too. For example, in your work on polytempo you can define certain requirements, such as events lining up in time, as configurations of points on different rigid bodies in motion (one for each meter) that have to be colinear. My intuition is that interpolation with dualquats will give you better results than the current scheme in tempocurver. It may at the very least give you hints as to how to represent what you are composing in a 3d GUI. I often feel that phase unwrapping would be better understood if people could see it in 3D. With care the information we need can be obtained by casting shadows, as I did with Amar for the 3D SDIF editor.

There is of course the question of how to integrate time into the model. You could start by parameterizing the screw parameters as a function of n.deltaT in discrete time signal processing fashion and then drive the system with my relaxation functions (shifted and time scaled sawtooths).
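As a minimal illustration of the algebra itself (not the oscillator): under the common convention, a dual quaternion packs a rotation quaternion r and a translation t as r + ε(½ t r), and dual-quaternion multiplication composes the rigid motions — which is what makes them attractive for interpolating screw motion. A self-contained sketch with hypothetical example transforms:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(r, v):
    """Rotate a 3-vector by the unit quaternion r."""
    return qmul(qmul(r, np.array([0.0, *v])), qconj(r))[1:]

def dq_from_rt(r, t):
    """Dual quaternion (real part, dual part) for rotation r and translation t."""
    return r, 0.5 * qmul(np.array([0.0, *t]), r)

def dq_mul(a, b):
    """(r1 + eps d1)(r2 + eps d2) = r1 r2 + eps (r1 d2 + d1 r2)."""
    return qmul(a[0], b[0]), qmul(a[0], b[1]) + qmul(a[1], b[0])

def dq_translation(dq):
    """Recover the translation: t = 2 d conj(r)."""
    return 2.0 * qmul(dq[1], qconj(dq[0]))[1:]

# two hypothetical rigid motions: 90 deg about z plus a shift, then a pure shift
s = np.sin(np.pi / 4)
r1, t1 = np.array([np.cos(np.pi / 4), 0.0, 0.0, s]), np.array([1.0, 0.0, 0.0])
r2, t2 = np.array([1.0, 0.0, 0.0, 0.0]),             np.array([0.0, 2.0, 0.0])

composed = dq_mul(dq_from_rt(r1, t1), dq_from_rt(r2, t2))
# multiplying the dual quaternions composes the transforms: t = R1(t2) + t1
print(np.allclose(dq_translation(composed), rotate(r1, t2) + t1))  # True
```

Eight numbers per pose, and composition is one multiplication — the same economy that makes dualquat interpolation attractive for the tempo-curve use above.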

I hope this is good enough. I am slightly afraid that we might want to go all out and do tensors because I have seen time introduced in another paper by forming linear sums of two tensors to represent a reference frame from which the dualquats emerge. I am hoping that dualquats do most of the heavy lifting and that we will be happy with using o.'s existing vector functions with lambda() to define the operations to endow vectors with the appropriate constraints and operations to become tensors.
The code will be less obvious because we don't have operator overloading but most of the papers on tensors end up looking like APL which is not very clear either.


Having developed critiques of periodicity in one essay and being a fan of the point- and line-free geometries of Whitehead, Spencer Brown and others, I am rather embarrassed to be suggesting this strong push into dual-quats, the power tools for leveraging the Greek idolatry of the circle and line. Maybe the way to deal with ghosts is to dance with them?


]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591407 2013-07-30T15:12:32Z 2016-07-08T06:37:00Z more computations with movement salience

 OK, my last word (today) on this:

So, it is not obvious how to do the Euler-Lagrange equations with dual quaternions, which we would need for the velocity, acceleration, energy, etc.

No worries though: it is tackled in the attached paper.

I added another paper to show off the range of interesting problems tackled with dual quaternions. We can also do physics engines, collision detection etc. rather easily.


]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591406 2013-07-30T15:10:25Z 2016-07-08T11:18:59Z clifford and biquaternions

From: Adrian Freed <adrian@cnmat.berkeley.edu>

Date: 25 July, 2013 6:36:14 PM PDT

To: John MacCallum <john.m@ccallum.com>, Vangelis L <vl_artcode@yahoo.com>

Cc: Sha Xin Wei <shaxinwei@gmail.com>


 Clifford's original paper is quite readable



]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591404 2013-07-30T15:07:54Z 2016-07-08T01:38:43Z Re: more computations with movement salience

Begin forwarded message:

From: Adrian Freed <adrian@cnmat.berkeley.edu>

Subject: Re: more computations with movement salience

Date: 25 July, 2013 4:01:52 PM PDT

To: John MacCallum <john.m@ccallum.com>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, "Andy W.Schmeder" <andy@cnmat.berkeley.edu>, Vangelis L <vl_artcode@yahoo.com>


John,


As I suspected, with the right algebra built into "o.", we can do all these movement things 

we are trying to fudge with elementary stuff more compactly and richly.

I believe the best choice is the dual quaternion: http://en.wikipedia.org/wiki/Dual_quaternion

]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/591402 2013-07-30T15:01:06Z 2016-07-08T13:34:13Z clustering on gauss maps for dance etc.

From: Adrian Freed <adrian@cnmat.berkeley.edu>

Subject: clustering on gauss maps for dance etc.

Date: 1 July, 2013 9:55:10 AM PDT

To: John MacCallum <john@cnmat.berkeley.edu>

Cc: Sha Xin Wei <shaxinwei@gmail.com>, Andrew Schmeder <andy@enchroma.com>, Rama Gottfried <rama.gottfried@gmail.com>


I have an intuition, fueled by a half understanding developed over a drink with Xin Wei, that could pan out into something…


With that disclaimer, the idea is to compute correlations between orientations of body parts of dancers (within a dancer and between dancers). The first step is to project the orientation vectors onto a sphere (i.e. a gauss map).  Then we do cluster analyses on this. I am not sure how this is done adapting classical k-means things - I guess you can just replace the usual coordinate discretization with a graph. I wonder if one should go straight for a spherical harmonic representation so as to be able to characterize shapes? Of particular interest would be to compute the autocorrelation to identify periodicities in orientation change. I am not good enough to be sure, but I suspect the Wiener–Khinchin theorem holds, so these periodicities can be estimated efficiently with FFTs. While I am wildly speculating here, it would be nice to compute the linking number of pairs of paths traced by the body, closed in some ingenious way based on the aforementioned periodicities. This would capture braid structures that occur if you imagine hands and feet with ribbons tied to them, or braid structures in patterned dance (e.g. Ceili Dance).
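The Wiener–Khinchin route does hold for wide-sense-stationary signals: the autocorrelation is the inverse FFT of the power spectrum. A check on a hypothetical orientation-change track with a known 8-sample periodicity (zero-padding avoids the circular wrap-around of the raw FFT):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
# hypothetical orientation-change signal: 8-sample periodicity plus noise
x = np.sin(2 * np.pi * t / 8) + 0.2 * rng.standard_normal(n)
x = x - x.mean()

# Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum
# (zero-pad to 2n so circular correlation matches linear correlation)
spectrum = np.fft.rfft(x, 2 * n)
acf = np.fft.irfft(np.abs(spectrum) ** 2)[:n]
acf /= acf[0]                      # normalise so acf[0] == 1

# the strongest peak after lag 0 recovers the period
peak = 1 + int(np.argmax(acf[1:n // 2]))
print(peak)  # 8
```

The FFT route costs O(n log n) instead of the O(n^2) direct sum, which matters once the "signal" is a long multichannel orientation recording rather than a toy sinusoid.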


I will stop these speculations here as it is time to ignore me and plunge into the literature. I have attached something, John, that addresses your interest in using kurtosis of bodily motion to drive migrators. This paper came up when I searched for "gauss map cluster", so we should be able to use this as a seed to harvest related papers. What is interesting about this paper is that they don't construct a surface description, so we can use it perhaps on the depth map data from Kinect.


I am trying to drag Andy into this partly to help us avoid some silly false leads but also because I suspect there is something interesting here for aerial dance which gives more orientation freedom than landed dance.



p.s. There is something deeply silly about what I am suggesting (and philosophically disturbing from Xin Wei's point of view) if we use Kinect skeletonization data. With enough crunch you can do better than kinect by deriving the kinematics from the imager data using local invariance structure (as hinted in the attached paper).  For testing on our slow laptops and getting our feet wet the Kinect is a handy starting point though.


p.p.s. My puzzling over the right arithmetic to use for those quality measures that the kinect software gives us (0, 0.5, 1.0) led me to indicator functions, which are usually 0 or 1 so as to bridge set membership or other predicates to number. They are however generalized to be continuous in fuzzy set theory, which presumably has a lot more to say about how we should operate with them. I am particularly interested now in how to leverage the dynamics of such functions.


]]>
Xin Wei Sha
tag:movement.posthaven.com,2013:Post/342816 2012-12-27T10:02:00Z 2013-10-08T16:34:56Z Movement-Based Research

Principles for Movement-Based Research

Sha Xin Wei

Canada Research Chair, Critical studies of media arts and sciences / Director Topological Media Lab
Concordia Montreal

Basic Question:  How do we make sense of our surroundings and each other via corporeal movement: walking, gesturing, playing games and sports, dancing?

Approach: The key is that we don't just sit and talk about it, or watch videos of someone else doing the action.  Participants don't just think about movement, we think in movement.  This requires fresh ways to articulate time-based media.

Principles:

(1)    Live, in-person, first-person experience, not about spectators far away from or beyond arms-reach of the action.

(2)    Actors are spectators, spectators are actors.

(3)    Collective as well as solo movement.

(4)    How making sense of our surroundings and each other depends on media in movement: varying fields of lighting, sound, sensate and active materials / textiles, and objects: cloth, toys, furniture, and so forth.  Thus we are assembling our large-scale, motion/gesture-modulated, rich media environment as a ‘kinetoscope’ focussing on the body in spontaneous, creative, and thoughtful movement. 

(5)    Primarily about everyday people in everyday movement,
but using techniques and insights of expert "movers" from movement arts, sport, martial art, meditation, music.

(6)    Deeply interdisciplinary, but at expert level:  we bring together teams of people who are good at what they do, but working beyond their home disciplines.

(7)    Methodology:

(7.1)    Abduction (David Morris, philosophy):
Our aim is to let the thinking body educate us into how to think about it, rather than trying to catch it in the mesh of traditional theories. Methodologically, our concern is with “abduction” (vs. deduction and induction): the process of formulating hypotheses in the first place. Instead of testing the body according to hypotheses derived by abstract reflection, we will study freely moving bodies to “abduct” new hypotheses and concepts.


(7.2)        We are our own subjects.  We as experimentalists will do the moving ourselves.  NOT: theorists or scientists sitting and watching "test subjects" like dancers move about.  Although there will be expert movers among us, such as choreographer Michael Montanaro and his students, the MBR is not simply about producing new instances of dance or music per se.  Rather, we are after understanding ordinary or extraordinary phenomena in everyday experience. This requires a careful training analogous to what phenomenology did with thinking-in-language about experience.  In this case we are adjoining to the medium of language other modes of articulation, such as gesture, voice, mathematical notation, diagram, but all with the same degree of care and attention that motivated the original “phenomenological reduction” -- suspending the natural attitude, suspending psychologism, and attempting variation of experience.  We believe that the arts have powerful means for achieving the last.
 
(7.3)        We as makers inhabit our own environments.

(7.1) - (7.3) have scientific and ethical implications. 

Ethical implication:   We are not experimenting "on" other people.   We are creating experiences with ourselves.   When we invite guests (not "users") into our events, we invite them in the spirit of fellow creators of the event.  

Scientific implication: Al Bregman, one of the pioneers of psychoacoustic research, in his keynote speech at McGill's CIRMMT in 2008:

"scientific psychology has harboured a deep suspicion of the experience of the researcher as an acceptable tool in research. … In my many years of research on how and when a mixture of sounds will blend or be heard as separate sounds, my own personal experience and those of my students has played a central role in deciding what to study and how to study it. When I encouraged students to spend a lot of time listening to the stimuli and trying out different patterns of sound to see which ones would show the effect we were interested in, far into the academic year, and nearing the time that they should have been carrying out their experiments, they would get nervous and ask when they would start doing the “real research”. I told them that what they were doing now was the real research, and the formal experiment with subjects and statistics was just to convince other people."

We share Dr. Bregman's method and extend it to corporeal movement.  In order to understand a movement phenomenon, the experimentalist must herself or himself move in a reproducible situation.

(8)    The environment is part of the experimental apparatus.
Retaining the concerns of phenomenology, art theory, psychology, and cognitive science, while extending or complementing their methods, we look for non-anthropocentric ways to understand and articulate ethico-aesthetic expression and improvisation.  This motivates our work with environments and with gestural, textural computational media -- acoustics, soundfields, lighting, projected video, structured light, and kinetic materials.

A reproducible situation means we can reproduce the environmental conditions -- lighting, sound, haptics.   “Topological media choreography” methods give experimentalists the means to insert potential responses into lighting, acoustics, or physical materials, the way physical materials “respond” to any gesture.  Why computational means, why not just use analog materials?   In fact, applying minimax design -- minimum tech for maximum experiential impact -- we will often use “unplugged” analog techniques, which may be no less expensive.  But computational media permit the experimentalist to vary the environment away from its customary physics, in precise and reproducible ways.
(Reproducible does not mean repeating the exact same event, if only because there have been prior events.   Repetition and memory are in fact objects of philosophical investigation, rather than data.)
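The idea of a parametrized, reproducible environment can be sketched in code. The following is a minimal illustration only, not part of the proposal; every name here (EnvironmentPreset, gravity_scale, and so on) is hypothetical, standing in for whatever parameters a real media system would expose:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EnvironmentPreset:
    """Hypothetical snapshot of the environmental conditions of one session."""
    lighting_lux: float    # overall illumination level
    light_hue: float       # hue angle in degrees, 0-360
    sound_db: float        # soundfield level
    reverb_seconds: float  # acoustic response time
    gravity_scale: float   # 1.0 = customary physics; vary to depart from it

    def save(self, path):
        """Record the conditions so a later session can restore them."""
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def load(cls, path):
        """Restore a previously recorded set of conditions."""
        with open(path) as f:
            return cls(**json.load(f))

    def varied(self, **changes):
        """Return a copy with precise, recorded departures from this preset."""
        params = asdict(self)
        params.update(changes)
        return EnvironmentPreset(**params)

# A baseline condition, then a precise variation away from customary physics.
baseline = EnvironmentPreset(300.0, 40.0, 55.0, 1.2, 1.0)
slow_fall = baseline.varied(gravity_scale=0.3)
```

The point of the sketch is only that computational media make the conditions explicit data: a session's parameters can be saved, restored, and varied one at a time, which is what "reproducible" means here.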


(9)    Long-duration experiment-workshops (vs. 5 minute demo)

This facility is NOT for producing events for audiences to come and watch, like a rehearsed theatrical performance or, for that matter, a 5-minute engineering demo.  Instead, we will build the human and equipment infrastructure to run durable living events in which invited guests can experience and improvise over time.  Movement is about change over time, and since we are working with living people, all our media and event logics are about change: unrehearsed change, improvisation in everyday situations.

(10)    Performance is dissemination. We will disseminate our findings in movement (workshops, events), and in time-based media (sound, video, gesture), as well as in “static” media such as text.

One of the key insights that choreographer Michael Montanaro provided is that a performance work is completed only at the moment that it’s encountered by an audience.  This resonates with Peter Brook: “True form only arrives at the last moment, sometimes even later. ... True form is not like the construction of a building where each action is the logical step forward from the previous one. On the contrary, the true process of construction involves at the same time a sort of demolition.”  

But then we can say that a presentation of the performance is itself also the dissemination of that work.

As scholars publishing in art research, engineering research, and philosophy, we expect to continue this important work of academic exchange, with graduate students from philosophy, cultural studies, anthropology, geography, science and technology studies, and other disciplines.

But the innovation is that, in addition, we adapt dissemination techniques exemplified by the Grotowski WorkCenter in Pontedera, Italy, Brook's Theater of Cruelty ensemble, the Living Theater, and the Bread and Puppet company.  That is, we invite other practitioners from around the world to come inhabit long-running experiments and workshops, because the best way to disseminate insights in movement is via live, in-person, rigorous practice in a stable, well-prepared environment.
Xin Wei Sha