Monday, December 5, 2011


A short excerpt from my Telemusic final performance...


This is part of our main project this year, Kinecraft. 
Here is a link to the project abstract:

Here is a link that includes full performance audio: 

Friday, November 25, 2011


    Rehearsal for a collaborative project in Telemusic class this past semester. This video was taken from the Syneme Lab at the University of Calgary. A laptop orchestra from McMaster University in Ontario was accompanying this performance.


    This class consisted of a couple of students with art & music backgrounds as well as computer science. The synthesis of interdisciplinary knowledge was key in the creation of our final project. Those who weren't as capable of writing code made up for it by sampling sounds, writing music, working in Ableton Live, etc. On the other hand, the few without much of a background in the arts handled the necessary programming tasks/problems with exceptional accuracy.

    I enjoyed learning about the process that goes on behind a live networked performance. I can only see this type of networked collaboration increasing in popularity. Our instructor Ken Fields (the Canada Research Chair in Telemedia Arts) taught us all about the Syneme Lab at the University of Calgary, where most of his research takes place. A plethora of microphones, cameras, projectors, and acoustic and digital mixers are available in the lab. You must learn how to route these devices properly between computers and instruments based on the artist's needs. Eventually, through a program called JackTrip, we learned how to link up with other universities & research centers around the globe. Ken trusted us enough to head to Beijing for MUSICACOUSTICA 2011, leaving us in charge of the lab. Our skills were put to the test as we routed different participants to one another over the high-speed IPv6 network.

"There is an essential gap in terms of arts and science collaborations," says Fields. "The strength of this new program lies in invigorating the current research infrastructure by fostering interdisciplinary creation."

Monday, April 18, 2011


I developed this color wheel idea over the semester to see what kind of connections I could make between sounds and different shades of color. Working in Max/MSP, with the help of one of my instructors, we built a prototype for my idea. You'll notice a list of numbers in the right-hand corner. Those numbers are RGB data from where my cursor is clicking on the color wheel. They're converted into MIDI data and sent to a program called Absynth. The two programs communicate back and forth this way, and the audio is routed out of Absynth. Right now it's working kind of like an instrument. There's still a lot of tinkering to do within the patch to manipulate Absynth with more control. I'll eventually apply some of my research from this term towards specific sound-color relationships.
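The RGB-to-MIDI conversion at the heart of the patch can be sketched outside Max/MSP. This is only a guess at one plausible mapping (the channel assignments and helper name are my own, not taken from the actual patch): scale each 0-255 color component down to the 0-127 MIDI range, then let red drive pitch, green drive velocity, and blue drive a controller such as filter cutoff.

```python
def rgb_to_midi(r, g, b):
    """Scale 0-255 RGB components into 0-127 MIDI values.

    Hypothetical mapping: red -> note number, green -> velocity,
    blue -> a continuous controller (e.g. filter cutoff).
    """
    scale = lambda c: min(127, c // 2)  # 0-255 -> 0-127
    return {
        "note": scale(r),        # pitch from red
        "velocity": scale(g),    # loudness from green
        "cc_cutoff": scale(b),   # timbre from blue
    }

# a saturated orange click on the wheel
print(rgb_to_midi(255, 128, 0))  # {'note': 127, 'velocity': 64, 'cc_cutoff': 0}
```

In Max/MSP the same idea would be a `scale 0 255 0 127` object feeding `noteout` and `ctlout`; the synth on the other end (Absynth here) just sees ordinary MIDI.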


Monday, April 11, 2011

Portable Music

As mp3 players became popular, more people started casually listening to music. Certainly, music was popular before this, but the ease and portability of the mp3 player increased the overall amount of music being consumed and subsequently altered our listening habits. Portable Walkmans and CD players quickly became archaic once we could store a variety of albums in a sleek, pocket-sized device, undoubtedly more appealing than carrying a bag full of CDs around in case you wanted to hear something other than the album you left home with. Not to mention the stylishness of these devices, given how fetishized they've become. Some people aren't even listening; the devices function entirely as fashion accessories or status symbols. They're often used as deterrents as well: if someone is plugged in, they're occupying one of their senses and thereby excluding themselves from the reality they're otherwise totally a part of. All of a sudden we have a new morning soundtrack to rouse us out of the mundane routine, something to wear on the bus to avoid conversation, something to listen to while studying because the second hand on the wall clock is infuriatingly precise, and so on. This new technology has conditioned us to listen passively. It's gotten people using their music devices for purposes other than listening.

Michael Schmidt, professor of philosophy of music & media, has some interesting thoughts on the changes occurring in music and how we interact with it...

Wednesday, March 30, 2011


Recently I've been thinking of ways in which portable media devices have changed our perception of music. 
I read something about Jonathan Berger, a music professor at Stanford, and a little test he puts his incoming students through at the start of every year. He gives them a variety of music to listen to and then asks them to rate the songs from highest to lowest quality. What he's found is that the mp3-formatted songs are steadily climbing toward being favored over the songs with superior audio quality. So this is kind of strange... Even music students would prefer the compressed, low-bitrate mp3 sound over a song with much more dynamic range! Uncompressed audio on a CD has a bit rate roughly eleven times that of an average mp3! It seems as if the "mp3 player" (and its stylishness) has had an inadvertent effect on our generation of music listeners. Perhaps we've become so accustomed to the lack of quality that our ears have tuned out the imperfections. I wonder if mp3s will become a fetish of some sort in the future, like record players today?
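The bit-rate comparison is simple arithmetic: CD audio is 44,100 samples per second, 16 bits per sample, over two channels, while a typical mp3 is encoded at 128 kbps.

```python
# CD audio: 44.1 kHz sample rate, 16 bits per sample, 2 channels
cd_bps = 44_100 * 16 * 2   # 1,411,200 bits per second (~1411 kbps)
mp3_bps = 128_000          # a typical mp3 encoding

print(cd_bps)              # 1411200
print(cd_bps / mp3_bps)    # 11.025 -> roughly 11x
```

So the CD carries about eleven times the data of the mp3; higher-bitrate mp3s (192 or 256 kbps) narrow the gap but never close it.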

Monday, March 7, 2011

Aural Architecture

    A general knowledge of physical acoustic properties is definitely an asset to anyone involved in sound-related work. An interesting place to start could be acoustic concert halls and the science behind the aural-architectural relationship.

    Wallace Sabine served as the acoustical consultant for Boston Symphony Hall, which opened in 1900 as the first concert hall built with acoustic engineering principles applied to its architecture. His calculations on ideal reverberation time laid the foundations for architectural acoustics. Today, additional factors outside the realm of sound dispersion/physics are considered, such as the subjective preferences of listeners and performers.
    Measures like the Initial Time Delay Gap and the Clarity Index are taken into account and used to compare direct and reflected sound. Direct sound travels straight from the source to the listener without any reflections. An orchestra depends heavily on direct sound for clarity to the listeners; however, reflected sound can add richness to musical tones. Reflected sound can also create distracting "muddy" tones depending on varying physical parameters. Every orchestral arrangement will have an ideal clarity index depending on the instruments used during the performance, vocal ranges, musical repertoire, etc. Different materials are used to absorb or reflect different sounds. Sabine determined that a single surface will respond differently to various frequencies: a high note played on a violin, for instance, will be absorbed more than a low note played by a double bass. Porous absorbers in the concert hall absorb high-frequency sounds more than resonant absorbers do, which instead respond to low-frequency sounds. Early sound reflection is important for performers to be able to hear what they are playing, while there is an equally important need for extended reverberation so the room supports the musicians' efforts to fill the space.
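Sabine's reverberation-time formula is compact enough to sketch directly: RT60 = 0.161 V / A, where V is the room volume in cubic metres and A is the total absorption, the sum of each surface area times its absorption coefficient. The hall dimensions and coefficients below are rough, made-up illustrations, not measurements of any real venue.

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine reverberation time (seconds): RT60 = 0.161 * V / A.

    `surfaces` is a list of (area_m2, absorption_coefficient) pairs;
    A is the sum of area * coefficient over all surfaces.
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A rough, invented shoebox hall of 18,000 m^3; coefficients are
# illustrative mid-frequency values.
hall = [
    (3_000, 0.05),   # plaster walls and ceiling
    (800, 0.10),     # wood floor
    (1_500, 0.85),   # occupied audience seating
]
print(round(rt60_sabine(18_000, hall), 2))  # 1.93
```

Note how the audience dominates the absorption total, which is exactly the point about clothing and occupancy made below: the same hall empty would ring noticeably longer.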
    All venues vary in terms of dryness and reverberation. A listener in a shoebox-shaped hall will experience immediate sound reflection due to sound bouncing off the parallel, narrow walls. This creates strong acoustic intimacy, warmth and listener envelopment. A fan-shaped venue will have a longer reverberation time due to the distance between the walls and ceiling. This can add depth to a performance, but it may also create echoes, which can become an annoyance to performers on stage. Singers and solo musicians must perform more precisely in dry rooms, since the dry acoustic reproduces their sound with unforgiving accuracy. The most practical auditoriums are those which can be modified to suit the needs of the musical arrangement, accommodating speech as well as vocal and instrumental performances. Interchangeable sound baffles, removable panels and sealable chambers can be altered accordingly. Some larger fan-shaped venues hang panels from the ceiling (called clouds) to address potential reverberation issues. (These become visually stimulating as well.) Materials as uncontrollable as the audience's clothing will ultimately affect the sound absorption of a music hall, so there are indeed many elements to consider when dealing with architectural acoustics. I've hardly scratched the surface.

    Sabine argued that before such critical analysis of acoustics and architecture, there was an inverse relationship between the construction of musical pieces and buildings. Instead of building to accommodate different musical styles, music was probably written to suit the popular architectural forms of the era. The accessible materials and architectural styles of a specific area may have influenced the musical development in those areas, unbeknownst to anyone at the time. "The cavernous, highly resonant stone buildings of the Romanesque period allowed vocal tones to linger, supporting the exploration of rich vocal harmonies, characteristic of choral music of that era... The development of ornate contrapuntal Baroque music with its complex interplay of melodies may be due to the horizontal clarity of the classical outdoor amphitheater, which evolved into the horseshoe shaped concert building" (Michael Forsyth, Buildings for Music, 1985).

    The next thing to take note of is how amplifiers and loudspeakers are used in arena rock concerts nowadays... How would a band adjust from playing a small venue to an arena built to house a hockey game? How would a band accustomed to playing large arena venues adjust to playing in one shaped like a saddle...

Tuesday, February 22, 2011


I built a color/sound wheel using Cocoa for artists. Basically, I'm telling the cursor to log the RGB data it's hovering over and then trigger samples I've loaded into the resources folder based on whatever color data is logged.
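The hover-to-trigger logic can be sketched in a few lines (in Python rather than Cocoa, for brevity). This is only one plausible scheme of my own devising: quantize the hovered RGB value into a small set of color "bins", each naming a sound file to trigger. All the file names are placeholders.

```python
# Placeholder sample table: each key is a (red, green, blue) bin,
# where 1 means the channel is above the threshold and 0 below.
SAMPLES = {
    (0, 0, 0): "low_drone.wav",    # dark colors -> dark sounds
    (1, 0, 0): "warm_pad.wav",
    (0, 1, 0): "bright_bell.wav",
    (0, 0, 1): "cool_wash.wav",
    (1, 1, 1): "white_noise.wav",  # near-white -> noise
}

def sample_for_rgb(r, g, b, threshold=128):
    """Pick a sample by thresholding each 0-255 channel."""
    key = (int(r >= threshold), int(g >= threshold), int(b >= threshold))
    # fall back to the dark drone for unmapped combinations
    return SAMPLES.get(key, "low_drone.wav")

print(sample_for_rgb(200, 40, 30))  # warm_pad.wav (mostly red)
```

A finer version would interpolate between neighboring samples instead of snapping to one bin, so sweeping the cursor across the wheel crossfades rather than jumps.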

The ultimate goal is to allow the user to record their own sounds and place them where they see fit, but for now I'll be playing with my own soundfiles.

Wednesday, February 9, 2011

in relation to

In a current project of mine I'm trying to develop an interactive color wheel that produces sound based on RGB color data. I'm also building a sound library of different tones and effects that I hope to incorporate into some of the wheels. I think that due to the subjectivity of the sound-emotion relationship, I may try to enable the viewer to upload their own sounds into each wheel to either prove or disprove a pattern in the relationship. I'll put up some samples soon...

Monday, February 7, 2011

Brainwave Entrainment

Today I've been reading about binaural beats.

There is an exhausting number of binaural beats, vibrations, harmonics and frequencies out there, created by numerous people & companies claiming that if you buy their CD it will help alter your mood, calm you down, put you to sleep, evoke lucid dreams, enhance memory & brain function, cure headaches, give you more energy, help you stop smoking, make you feel drunk, cause hallucinations, awaken intuition and increase telepathic communication, repair DNA (?), etc, etc. I've been listening to these on YouTube all afternoon, and one thing I've noticed is that overused ethnic/tribal horns can become quite distracting when trying to repair DNA. There are some nice "Solfeggio frequencies" that I don't mind. Apparently these are based on ancient Gregorian chants, which were believed to bestow spiritual blessings when sung in harmony. There are six different frequencies, each with a different purpose.
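Whatever the marketing claims, the construction of a binaural beat itself is straightforward: play a pure tone in one ear and a slightly detuned tone in the other, and the brain perceives a pulsation at the difference frequency. A minimal sketch (the carrier and beat values are just illustrative choices):

```python
import math

def binaural_pair(carrier_hz, beat_hz, sample_rate=44_100, n=8):
    """First n samples of a left/right sine pair.

    The left ear gets the carrier, the right ear gets carrier + beat_hz;
    over headphones, the listener perceives a beat at the difference
    frequency rather than hearing two separate tones.
    """
    left = [math.sin(2 * math.pi * carrier_hz * i / sample_rate)
            for i in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * i / sample_rate)
             for i in range(n)]
    return left, right

# a 200 Hz carrier with a 10 Hz beat (the "alpha" range the CDs advertise)
left, right = binaural_pair(200, 10)
```

The key requirement is headphones: mixed through speakers, the two tones interfere acoustically and you get an ordinary amplitude beat instead of the binaural illusion.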

Perhaps if I were to get more involved in one specific practice, I could read all the research that's been put into their product, where and with whom it was done, and really give it a justified attempt, but for now I'll remain skeptical. Some argue that there is simply no way to record or measure mind-consciousness. An EEG will record brainwave activity, and we can use drugs to alter our state of consciousness, but there doesn't seem to be a way to measure consciousness itself. I'm still waiting on Damasio's book; hopefully I'll gain some insight from it. Interesting stuff nonetheless!

Friday, January 28, 2011

I'm interested in learning what happens to people while they experience music & sound; what specific role sound & music play in human consciousness, and what happens to musicians as they create music, during an improvised jazz solo for example. I'm waiting on a few music psychology books that will hopefully guide my research this semester, as well as a book suggested to me called "The Feeling of What Happens" by Antonio Damasio.

An interesting site with some visual-music info:

Here are some interesting theories I've stumbled upon...

Shepard Tone: 
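A Shepard tone is built from octave-spaced sine partials under a bell-shaped loudness envelope; because the lowest and highest partials stay quiet, shifting the whole set upward sounds like a pitch that rises forever. A minimal sketch of that construction (the base frequency, octave count, and Gaussian envelope parameters below are illustrative choices, not a canonical recipe):

```python
import math

def shepard_partials(base_hz=440.0, n_octaves=6, center=3.0, width=1.5):
    """Octave-spaced partials and loudness weights for one Shepard tone.

    A Gaussian envelope over the octave index keeps the extreme
    partials quiet; sliding the whole set up and wrapping around
    produces the endless-rise illusion.
    """
    partials = []
    for k in range(n_octaves):
        freq = base_hz / (2 ** center) * (2 ** k)
        weight = math.exp(-((k - center) ** 2) / (2 * width ** 2))
        partials.append((freq, weight))
    return partials

for freq, weight in shepard_partials():
    print(round(freq, 1), round(weight, 3))
```

Summing sines at these frequencies, scaled by their weights, gives one frame of the tone; animating the set upward while fading the top partial out and a new bottom partial in closes the loop.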

Damasio quotes T.S. Eliot on consciousness, saying that it is "music heard so deeply, that it is not heard at all."