Jessica Rajko + John MacCallum + Teoma Naccarato: Provocation Discussion - May 3rd, 1:00-1:30pm EST

Subject: Provocation Discussion - May 3rd, 1:00-1:30pm EST
Date: April 27, 2021 at 1:03:45 PM GMT-4

Greetings!



First, thank you again for carving time out of your schedule to join us in conversation. The following information details how we’ll move through our discussion and how you can prepare. 



Discussion Details and Structure: The plan is to record and share a series of 30-minute discussions between ‘provocateurs’---those who submitted provocations back in 2018 in response to the question: “What escapes computation in interactive performance?” Meetings will be hosted and recorded on Zoom (link provided in this meeting request). All three of the project facilitators (Teoma, John, and Jessica) will be present during the conversation to host and hold space. Our aim is to keep the discussion low-key, conversational, and open-ended. We are not trying to reach some sort of summary conclusion or solution within 30 minutes. Rather, we see these discussions as another way in which we continue the rich dialogue put forth in the provocations and currently buzzing on our SloMoCo Discord channel. To give you a sense of how we’ll structure the 30 minutes, here is a flexible outline:



Provocation Discussion


  • Brief introduction by facilitators and sharing of provocations by provocateurs - 10 min (2-3 min per provocation)


  • Provocateurs ask each other questions and discuss - 10 minutes 


  • Facilitators join in conversation - 10 minutes



For sharing your provocation, you can read it or summarise it, and comment on your current thinking about it. A link to your provocation will be posted alongside the video of the discussion.



Please feel free to join us 5 - 10 minutes early if you wish. We won’t start recording until everyone is settled in, but we also want to be respectful of your time and keep the entire session as close to 30 minutes as possible. 



Preparation: All we need you to do in advance is i) revisit your own provocation, and those of the other two provocateurs; and ii) bring one question for each other person, based on their provocation. The purpose of this discussion is to collectively read your three provocations through one another, exploring connections and generative tensions in perspectives.


 


Here are links to the three provocations for your session:


• Fran & Javier: https://provocations.online/whatescapescomputation/jaimovitch-morand/

• Fred: https://provocations.online/whatescapescomputation/bevilacqua/

• Xin Wei: https://provocations.online/whatescapescomputation/xin-wei/

Updated Gesture Follower (IRCAM, Goldsmiths)

The Gesture Follower allows for real-time comparison of a gesture performed live with a set of prerecorded examples. The implementation can be seen as a hybrid between DTW (Dynamic Time Warping) and HMMs (Hidden Markov Models).

http://rapidmix.goldsmithsdigital.com/features/gesture-follower/

F. Bevilacqua, N. Schnell, N. Rasamimanana, B. Zamborlin, and F. Guédy, “Online Gesture Analysis and Control of Audio Processing,” in Musical Robots and Interactive Multimodal Systems: Springer Tracts in Advanced Robotics Vol. 74, J. Solis and K. C. Ng, Eds., Springer Verlag, 2011, pp. 127-142.

F. Bevilacqua, B. Zamborlin, A. Sypniewski, N. Schnell, F. Guédy, and N. Rasamimanana, “Continuous Realtime Gesture Following and Recognition,” in Gesture in Embodied Communication and Human-Computer Interaction: Lecture Notes in Computer Science (LNCS) Vol. 5934, Springer Verlag, 2010, pp. 73-84.
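To make the DTW/HMM hybrid concrete, here is a minimal sketch in Python (my own illustration, not IRCAM's implementation; the class name, the left-to-right transition probabilities, and the Gaussian observation model are all assumptions): each frame of a prerecorded template becomes a state, and a forward pass over the incoming frames yields an estimated position within the gesture plus a likelihood.

```python
import numpy as np

class GestureFollowerSketch:
    """Left-to-right HMM over template frames -- a DTW/HMM hybrid sketch."""

    def __init__(self, template, sigma=0.1, p_stay=0.3, p_next=0.5, p_skip=0.2):
        self.template = np.asarray(template, dtype=float)  # (T, d) prerecorded frames
        self.sigma = sigma                       # assumed observation noise scale
        self.trans = (p_stay, p_next, p_skip)    # assumed transition probabilities
        self.alpha = None                        # forward probabilities over states

    def reset(self):
        self.alpha = np.zeros(len(self.template))
        self.alpha[0] = 1.0                      # start at the beginning of the gesture

    def step(self, frame):
        """Consume one live frame; return (position in [0, 1], likelihood)."""
        if self.alpha is None:
            self.reset()
        p_stay, p_next, p_skip = self.trans
        pred = p_stay * self.alpha               # stay on the same template frame
        pred[1:] += p_next * self.alpha[:-1]     # advance one frame
        pred[2:] += p_skip * self.alpha[:-2]     # skip a frame (time warping)
        d2 = np.sum((self.template - frame) ** 2, axis=1)
        obs = np.exp(-0.5 * d2 / self.sigma ** 2)  # Gaussian observation model
        self.alpha = pred * obs
        likelihood = self.alpha.sum()
        if likelihood > 0:
            self.alpha /= likelihood
        pos = np.argmax(self.alpha) / max(len(self.template) - 1, 1)
        return pos, likelihood

# Usage: follow a noisy replay of a 1-D template gesture.
template = np.sin(np.linspace(0, np.pi, 50))[:, None]
follower = GestureFollowerSketch(template)
follower.reset()
for frame in template + 0.05 * np.random.randn(50, 1):
    pos, lik = follower.step(frame)
```

In the actual Gesture Follower the estimated time progression is what drives the real-time audio processing described in the papers above.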

data_av.maxpat and data_av_playback.maxpat

Let me draw attention to a pair of what should be among the more useful patchers we have in the SC kit for serious empirical research with movement.  See:

Max\ 8/Packages/SC/examples/utility/
data_av.maxpat
data_av_playback.maxpat
data_vcr.maxpat

Based on the IMUs of Seth Thorn (AME), Connor Rawls wrote these very useful utilities to pack sensor data as tracks in uncompressed AIFF files, one track per sensor stream.

Note data_av_playback uses groove~ to sync video with the data.
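A minimal sketch of that packing scheme (my illustration using the soundfile package, not Connor's actual patchers): write each sensor stream as one channel of an uncompressed AIFF file, so the streams stay sample-locked and can be scrubbed like audio.

```python
import numpy as np
import soundfile as sf

def pack_sensors_as_aiff(streams, rate, path):
    """Pack equal-length 1-D sensor streams as channels of one AIFF file."""
    # Values assumed pre-scaled to [-1, 1] so the default PCM encoding fits.
    data = np.stack([np.asarray(s, dtype="float32") for s in streams], axis=1)
    sf.write(path, data, rate, format="AIFF")

# Usage: three synthetic 100 Hz accelerometer axes for 10 seconds.
t = np.linspace(0, 10, 1000)
pack_sensors_as_aiff([np.sin(t), np.cos(t), 0.5 * np.sin(2 * t)], 100, "imu.aiff")
```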

If you’ve got meaningful rich time series, please check out MUBU and try Connor’s sc.hurst external …

Xin Wei

___________________________________________
Sha Xin Wei | skype: shaxinwei | mobile: +1-650-815-9962
________________________________________________

aerial work

Thanks, Luke. It’s now in Synthesis/equipment/Space 3.0

(a daughter of my tgarden, tg2001, and grandaunt to sc :)

trapeze-artist-aides strapped visitors of all ages and shapes into harnesses.
they were slung into the air… some laughing like crazy

while we beamed the data from accelerometers sewn into their translucent tails 
to max+nato/jitter+supercollider 



circa 1990 I saw Project Bandaloop do wonderful aerial work back near Capp Street Theater in the Mission in SF
during the Street Performance festivals

there were lots of aerial dance works, since there were so many climbers in SF who were also dancers when they weren’t up in the Sierras.
but one of my friends was part of the support team at the tragedy with Sankai Juku  in Seattle a few years before…

+ Einstein’s Dream: overhead cam looking down 23 feet onto the sand
with  visitors wading as if underwater through the projected ripples to the umbrella:

professional bassoonist playing a straw like a double-reed instrument

Peter Bastian is a multi-instrumentalist from Copenhagen, Denmark.
Educated in classical bassoon, he’s seen

- As Adrian Freed likes to point out, the virtuosity is not in the instrument
- Adapting Simondon, the technicity is distributed across the knowledge incorporated into (literally!) the bodies of practitioners;
the articulated, institutionalized, socially-mediated discipline of playing wind instruments; as well as the genealogical refinement of the instruments.
- It is important that Peter Bastian was trained in and on classical bassoon. But the fact that he can subsequently transpose his practiced skill shows that the body and the instrument seep deeply into one another (“gearing”, to adapt from Merleau-Ponty); there is no hard line between body and instrument.


movement research notes

Hi Suren, Seth,

Good talk re our pilot exploratory movement research with 

Seth’s musicianship and music technology,
Suren’s movement amplification via optical flow as starting points.

We discussed initial experimental scenarios in the experience of improvisation

— the difference between performer’s sense of rate of passage of time vs a spectator’s sense of rate of passage of time.

— two or more performers coming to a cadence (together!) without a score or prior plan.

— a performer beginning to lose focus OR shifting intention, 
versus spectator’s awareness of that shift.

The key to a more rigorous, 21st-century approach to any of these experiential researches is NOT to measure merely one body but to develop relational measures. Simple example: instead of measuring the trajectory of one dancer’s wrist, measure the distance between one dancer’s wrist and another dancer’s wrist.
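As a minimal sketch of such a relational measure (the data layout is my assumption: two wrist trajectories sampled at the same rate, as (T, 3) arrays):

```python
import numpy as np

def inter_wrist_distance(wrist_a, wrist_b):
    """Euclidean distance between two wrist trajectories, frame by frame."""
    return np.linalg.norm(np.asarray(wrist_a) - np.asarray(wrist_b), axis=1)

# Usage: 100 synthetic frames of (x, y, z) positions for two dancers.
a = np.random.rand(100, 3)
b = np.random.rand(100, 3)
d = inter_wrist_distance(a, b)  # shape (100,): one relational scalar per frame
```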

As a practical procedure, let’s:

1) Video record the performer,
close enough to see facial expression and manual activity, with timestamps.

2) Simultaneously record data streams with timestamps (see the sketch after this list).

3) Talk-aloud: have participants narrate what they are doing / thinking during playback of the video, using QuickTime Player 7 to record an extra audio track.

4) Look for salient parallels between data streams and self-reports.

5) Revise feature extraction and iterate.
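Here is a minimal sketch of step 2 (assuming the python-osc package and an arbitrary port 9000; an illustration, not Julian or Connor's existing code): log every incoming OSC message with a wall-clock timestamp so the stream can later be aligned against the timestamped video.

```python
import csv
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# One CSV row per incoming OSC message: wall-clock timestamp, address, payload.
log_file = open("sensor_log.csv", "w", newline="")
log = csv.writer(log_file)
log.writerow(["t_unix", "address", "values"])

def handler(address, *args):
    log.writerow([time.time(), address, list(args)])

dispatcher = Dispatcher()
dispatcher.set_default_handler(handler)   # catch every OSC address
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()                    # Ctrl-C to stop logging
```

Pointing the sensor bridge at port 9000 then yields a CSV whose t_unix column can be matched against the video’s timestamps.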


I’m cc’ing Julian and Connor to see where the code is to record sensor / OSC data synchronized with video.

cc Todd and Lauren as fellow musicians / movement + computational researchers. Note that Lauren’s done extensive and creative work with haptics / low-frequency sound / touch.



recycle movement blog for rhythm?

Here’s the blog for movement-based research that we can recycle for a fresh take on rhythm
http://movement.posthaven.com/
Anyone can post in current settings, I think —  but should we make it private as a scratch space?
Pavan, could the Laplacian fields work for aperiodic rhythmic patterns?  Presumably the formalism works for dimensions > 1.

Connor is investigating a journaling tool that syncs multiple sensor data streams (at OSC resolution) plus audio and video tracks.
Does anyone have such a tool already that we can use?

introduction to tensors

Tensors are totally useful. However, these particular references seem difficult and rather specialized. There must be some elementary reference or cheat sheet on tensors for the classical theory of rigid body motion. The classic works on mechanics are by Lagrange and Hamilton.

Here's a text that may be a good systematic starting point for someone already trained to read math:
Especially chapters 1, 2, and 10 (Elasticity).


The top-level clarification I'd like to make is the following:

A vector space V of dimension n is a set with the following operations: vector addition + and scalar multiplication, which make it isomorphic to Euclidean n-dimensional space. (I'll assume you can look that up.)

A tensor T is a map from a product of vector spaces to the scalar field, say the real numbers R.
(For simplicity let's say the vector spaces are all the same V.)

T:  V x V x V x … x V ---> R
(k times)

The degree of the tensor is the number of arguments -- the number of vector spaces in the product domain.
(In our case, degree T = k because there are k copies of V.) Each argument is a vector (hence each argument has a number of components equal to the dimension of the vector space V).

T(v1, v2, v3, …, vk) is a scalar number.

So far this is not yet a tensor but a general map.  A tensor has two basic features:

(1) It is linear in each argument. In the i-th argument, for each i:
T(… , u + v, …) = T(… , u, …) + T(… , v, …)   for all vectors u, v in V,
and
T(… , s * v, …) = s * T(… , v, …)
for all scalars s in R and all vectors v in V.

(2) The value of T does not change when you change the coordinate basis of V according to which you represent the vectors in V.

The way this is written out in classical books on tensors is in terms of the orthogonal matrices for transforming a vector written w/r to one basis to that same geometric "arrow"'s coordinates w/r to another basis.   It's messy, but systematic.

Elegant modern maths writes tensors in "coordinate-free" notation, but that hides some of the gritty intuition and permits some basic confusions, such as confusing vectors with tensors.

Tensors "eat" vectors and spit out scalar numbers.  And they can't be fooled by camouflaging the vectors changing their numeric coordinates by changing the basis w/r to which they are represented.


PS. By definition there are maps V x V x … x V ---> R called "forms" that eat vectors multi-linearly and produce scalars, but they do change value when the basis is changed. Spaces of linear forms on a vector space V have as their bases the duals to the basis vectors of V. If the basis of V is, say, an orthonormal set of vectors v1, v2, …, vn, the dual forms dx1, dx2, …, dxn are defined by:
dxi ( vj ) = δij
where δij is the Kronecker delta.