Week 4 – Learning Max/MSP & Outline of the Essay

Part 1

Max/MSP Tutorial 1:

What I’ve learnt from this beginner tutorial video is that in Max/MSP the composing process amounts to writing a program, or even inventing a brand-new instrument, and that its unfamiliar “parameters” have nothing to do with traditional musical figures or the player’s fingers. Before such software was invented, a composer could not design melody and timbre simultaneously (the timbre of an acoustic instrument is essentially fixed); now the composer can realize whatever he or she wants within the work. Or, put another way, in computer music only the timbre is left open to further specification. On the other hand, as the Max/MSP workspace shows, the software diminishes some of the original beauty of composition by replacing the linear composing method with a mechanical arrangement of “clicks”.
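To make concrete what “composing with parameters rather than notes” means, here is my own illustrative sketch in Python (not Max/MSP code; the parameter names and the `render_tone` helper are my assumptions, not anything from the tutorial). A sound is described entirely by numbers such as frequency, amplitude and decay, and the samples are computed from them:

```python
import math

SAMPLE_RATE = 44100  # samples per second


def render_tone(freq_hz, amplitude, duration_s, decay):
    """Render a decaying sine tone from raw parameters,
    not from a note written on a staff."""
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        env = amplitude * math.exp(-decay * t)  # exponential envelope
        samples.append(env * math.sin(2 * math.pi * freq_hz * t))
    return samples


# The "score" is just a list of parameter sets: timbre-like
# qualities (decay, amplitude) sit right beside pitch (frequency).
score = [
    {"freq_hz": 440.0, "amplitude": 0.8, "duration_s": 0.5, "decay": 3.0},
    {"freq_hz": 660.0, "amplitude": 0.5, "duration_s": 0.5, "decay": 6.0},
]

audio = []
for event in score:
    audio.extend(render_tone(**event))
```

Changing `decay` reshapes the timbre of the same pitch, which is exactly the kind of control a traditional score never notates.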

Part 2

I adjusted the structure of my essay around two networks related to computer music.

a) The network within the computer composing process.

b) The system between the composition and the audience (material – language – discourse). The human perspective on sound is based on human culture rather than on the sound itself (note the lack of a visible source or any visual focus on stage). In the era of traditional music, compositions always fitted the audience’s expectations perfectly and gave impressions familiar from daily life. Listeners of computer music, by contrast, perceive a physical distance from the source.

Study mission for next week:

a) “Music and Technology (Contemporary History and Aesthetics)” – MIT Lecture

b) Machine Learning to Identify Neural Correlates of Music and Emotions (by Ian Daly, Etienne B. Roesch, James Weaver and Slawomir J. Nasuto)

Week 3 (Jan.25) – Proposal Draft

Reading material for today:

Truax, B. (1986). Computer music language design and the composing process. In The language of electroacoustic music (pp. 155-173). Palgrave Macmillan UK.

Proposal (draft)

  • History/Background – From instrumental music to computer music
  • Changing Roles of Music: There is no definite or final version of a computer music piece, because the program’s output is not limited by instruments and the user can never fully predict how the system will realize the user’s intentions. Computer music is not only audible but also visible in graphic form, largely because in such software the user composes with parameters rather than notes. What’s more, in the era of instrumental music, low playing skill produced worse music, but in computer music even simple input can generate well-performed music through smart, well-controlled systems. (Question: Can we still call computer music “music”, or should we just call it composed sound?)
  • Composing Process: It is not a simple representation of sounds; most of the time, the user is making a new kind of music. “the important ingredient is that the program offers data manipulation tools which correspond to the perceptual and musical concepts of the user”
  • Human Actions: What the user does is closer to machine learning, or let us say a software experience, than to arranging flows of notes. And in the composing process, the user is cooperating with many things, not merely a computer.
  • Question: What can musical software not do?