
Archived Projects

Music Informatics


To view the entire list of projects the Music Informatics Group has worked on, visit musicinformatics.gatech.edu.

 

Acoustics

 

Accelerometer Guitar Pickup

Acoustic guitar pickups today generally face one of two issues: either they do not accurately convey the tone of the guitar, or they are prone to feedback. Even at the top tiers of music, professional musicians who want to play acoustic guitar in a loud, live setting will often use poor-sounding pickups in order to avoid feedback. Although there have been attempts to find a happy medium by combining multiple pickup systems, no system is yet widely considered optimal for higher-volume performances.

The MEMS accelerometer pickup system attempts to solve this dilemma. This research explores alternative transducer technologies, MEMS devices in particular, that would both accurately represent the sound of the acoustic guitar and reject feedback. MEMS sensors are designed to be mounted directly on electronic circuit boards and are used widely in mobile devices. Their small size makes them well suited for pickup transducers, since they can be attached to the instrument with little or no effect on its acoustics.

MEMS accelerometers in particular show great potential. An accelerometer measures the motion of the guitar top, which correlates directly with the sound of the guitar, and it does not pick up movement of the air directly, which helps reject feedback.
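
The signal path this implies can be sketched simply: sample the acceleration of the guitar top, remove the constant gravity/DC component, and scale the result to audio range. The sketch below, in plain NumPy with invented parameter values, is only an illustration of that idea, not the pickup's actual electronics or DSP.

```python
import numpy as np

# Minimal illustration: treat accelerometer samples of the guitar top as an
# audio signal by removing the DC/gravity offset with a one-pole high-pass
# filter and normalizing to the [-1, 1] audio range. Cutoff and sample rate
# are invented values, not the research prototype's settings.
def accel_to_audio(accel, sample_rate=44100, cutoff_hz=20.0):
    alpha = 1.0 / (1.0 + 2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.zeros(len(accel))
    prev_in, prev_out = accel[0], 0.0
    for i, x in enumerate(accel):
        prev_out = alpha * (prev_out + x - prev_in)  # y[n] = a*(y[n-1] + x[n] - x[n-1])
        prev_in = x
        out[i] = prev_out
    peak = max(np.max(np.abs(out)), 1e-12)
    return out / peak

# Example: a 440 Hz top vibration riding on a constant gravity offset.
t = np.arange(0, 0.1, 1 / 44100)
accel = 9.81 + 0.02 * np.sin(2 * np.pi * 440 * t)
audio = accel_to_audio(accel)
```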

 

Computational Music for All

 

DataToMusic

The DataToMusic (DTM) API is a JavaScript library for developing data-agnostic sonification programs and a real-time environment for experimenting with models of musical structure. It enables musicians and researchers to rapidly map data onto musical structures, to explore the boundaries between sonification and musification, and to live code with data.
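
As a rough, library-agnostic illustration of the kind of mapping DTM automates, the sketch below rescales an arbitrary numeric series onto a pitch scale. It is written in Python and does not use the DTM JavaScript API; the function name and scale choice are invented.

```python
# Hypothetical data-to-pitch mapping: rescale any numeric series onto a
# musical scale. This is not the DTM API, only an illustration of the idea.
MAJOR_PENTATONIC = [0, 2, 4, 7, 9]

def sonify(values, base_midi=60, octaves=2, scale=MAJOR_PENTATONIC):
    """Map each data point to a MIDI note number spanning `octaves` octaves."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    degrees = [d + 12 * o for o in range(octaves) for d in scale]
    return [base_midi + degrees[int((v - lo) / span * (len(degrees) - 1))]
            for v in values]

print(sonify([3.2, 7.7, 1.0, 9.9, 5.5]))  # [64, 74, 60, 81, 69]
```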


Galatea

Galatea is a wireless electromyography smart suit and accompanying software package that allow for compositional influence over sound and light through a dancer's movement. The suit was made for dancer/choreographer Samantha Tankersley. Eight of Tankersley's muscles were located precisely with help from Jayma Lathlin, a University of Georgia Ph.D. candidate in Kinesiology. This medically informed process allowed reliable data to be captured from her biceps, deltoids, quadriceps, and triceps surae. Once the muscle locations were confirmed, silver conductive fabric was sewn into the suit to serve as permanent electrodes in the electromyography circuit. Four accelerometers were also installed, one at each ankle and wrist, providing additional information about each limb.

Data is collected using a series of Arduino LilyPads, a line of open-source smart-textile microcontrollers created by Leah Buechley at the MIT Media Lab. All of this information is packaged and sent via wireless radio modules to Cycling '74's Max/MSP. Once in Max, the data is parsed and repackaged into OSC (Open Sound Control), a cross-platform computer music protocol. This OSC data is sent to three custom objects in Max for Live, the programming environment jointly developed by Cycling '74 and Ableton. These Max for Live objects offer the dancer various levels of compositional control. One object allows Tankersley's muscles to control note pitch, envelope, or duration. Another allows each accelerometer to control the spatial location of a musical motif within a quadraphonic sound space. A final Max for Live object gives the suit control over the DMX 512 lighting control protocol.
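
In the piece itself the parsing and OSC repackaging happen inside Max/MSP; the sketch below only illustrates that repackaging step in Python, using the python-osc package. The OSC addresses, port, and channel layout are hypothetical.

```python
# Illustrative only: forward one frame of suit data as OSC messages.
# Addresses, port, and channel layout are invented for this sketch.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # host/port of the OSC receiver

def forward_frame(emg, accel):
    """Send one frame of suit data.

    emg   -- eight EMG amplitudes (biceps, deltoids, quadriceps, triceps surae)
    accel -- four (x, y, z) tuples, one per wrist/ankle accelerometer
    """
    for channel, value in enumerate(emg):
        client.send_message(f"/galatea/emg/{channel}", float(value))
    for limb, (x, y, z) in enumerate(accel):
        client.send_message(f"/galatea/accel/{limb}", [float(x), float(y), float(z)])

forward_frame(emg=[0.12] * 8, accel=[(0.0, 0.2, 9.8)] * 4)
```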

For more information visit here.

 

UrbanRemix

UrbanRemix is a collaborative and locative sound project. Our goal in developing UrbanRemix was to design a platform and series of public workshops that would enable participants to develop and express the acoustic identity of their communities and enable users of the website to explore and experience the soundscapes of the city in a novel fashion.

The UrbanRemix platform consists of a mobile phone system and web interface for recording, browsing, and mixing audio. It allows users to document and explore the obvious, neglected, private or public, even secret sounds of the urban environment. Participants in the UrbanRemix workshops become active creators of shared soundscapes as they search the city for interesting sound cues. The collected sounds, voices, and noises provide the original tracks for musical remixes that reflect the specific nature and acoustic identity of the community.
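
The "locative" aspect can be illustrated with a small sketch: each field recording is stored with its GPS coordinates, and a remix session pulls the sounds captured near a chosen spot. The data layout, coordinates, and search radius below are invented; this is not the UrbanRemix implementation.

```python
import math

# Hypothetical geotagged recording store and proximity query.
recordings = [
    {"file": "market.wav",    "lat": 33.7550, "lon": -84.3900},
    {"file": "streetcar.wav", "lat": 33.7555, "lon": -84.3885},
    {"file": "fountain.wav",  "lat": 33.7700, "lon": -84.3960},
]

def nearby(lat, lon, radius_m=250):
    """Return recordings within roughly `radius_m` meters of (lat, lon)."""
    results = []
    for rec in recordings:
        dx = (rec["lon"] - lon) * 111_320 * math.cos(math.radians(lat))
        dy = (rec["lat"] - lat) * 111_320
        if math.hypot(dx, dy) <= radius_m:
            results.append(rec["file"])
    return results

print(nearby(33.7552, -84.3895))  # ['market.wav', 'streetcar.wav']
```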

For more information visit here.

Publications

  • Freeman, J. (2015). “Listening, Movement, Creativity, and Technology,” in S. Mass (ed.), Thresholds of Listening. New York, New York: Fordham University Press.
  • Freeman, J., DiSalvo, C., Nitsche, M., and Garrett, S. (2012). “Rediscovering the City with UrbanRemix,” in Leonardo, MIT Press, 45:5, pp. 478-479.
  • Freeman, J., DiSalvo, C., Nitsche, M., and Garrett, S. (2011). “Soundscape Composition and Field Recording as a Platform for Collaborative Creativity” in Organised Sound, Cambridge University Press, 16:3.


Piano Etudes

Inspired by the tradition of open-form musical scores, these four piano etudes are a collection of short musical fragments with links to connect them. In performance, the pianist must use those links to jump from fragment to fragment, creating her own unique version of the composition.

The pianist, though, should not have all the fun. So we developed a website, where you can create your own version of each etude, download it as an audio file or a printable score, and share it with others. In concert, pianists may make up their own version of each etude, or they may select a version created by a web visitor.
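
As a rough illustration of the open form, the fragments and their printed links can be thought of as a directed graph, and each performance or web-generated version as a walk along its edges. The sketch below is hypothetical: the fragment names, link table, and stopping rule are invented, not taken from the etudes.

```python
import random

# Invented fragment graph: nodes are fragments, edges are the printed links.
LINKS = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["A", "D"],
    "D": [],          # a fragment with no outgoing links ends the piece
}

def realize(start="A", max_fragments=12, seed=None):
    """Generate one version of the etude as a walk over the fragment graph."""
    rng = random.Random(seed)
    path, current = [start], start
    while LINKS[current] and len(path) < max_fragments:
        current = rng.choice(LINKS[current])
        path.append(current)
    return path

print(" -> ".join(realize(seed=7)))
```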

Piano Etudes was created by Jason Freeman for composer Jenny Lin; this collaboration was supported, in part, with a Special Award from the Yvar Mikhashoff Pianist/Composer Commissioning Project. Special thanks to Turbulence for hosting this website and including it in their spotlight series and to the American Composers Forum’s Encore Program for supporting several live performances of this work. The website was developed in collaboration with Akito Van Troyer.

For more information visit here.

Publications

  • J. Freeman. "Compose Your Own, Part 2" The New York Times Online, May 24, 2010.
  • J. Freeman. "Compose Your Own" The New York Times Online, April 23, 2010.
  • J. Freeman. "DIY Scores" Symphony: The Magazine of the League of American Orchestras, September/October 2010.


Flock

In Flock, a full-evening performance work commissioned by the Adrienne Arsht Center for the Performing Arts in Miami, music notation, electronic sound, and video animation are all generated in real time based on the locations of musicians, dancers, and audience members as they move and interact with each other.
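
The real-time generation can be pictured as a mapping from tracked positions to notation parameters. The sketch below is a hypothetical illustration of that idea, not Flock's actual mapping; the stage dimensions, pitch range, and dynamic scale are invented.

```python
# Hypothetical position-to-notation mapping: x drives pitch, y drives dynamics.
DYNAMICS = ["pp", "p", "mp", "mf", "f", "ff"]

def position_to_notation(x, y, stage_width=10.0, stage_depth=6.0,
                         low_midi=48, high_midi=84):
    """Map a performer's (x, y) stage position to a pitch and a dynamic marking."""
    x = min(max(x, 0.0), stage_width)
    y = min(max(y, 0.0), stage_depth)
    pitch = round(low_midi + (x / stage_width) * (high_midi - low_midi))
    dynamic = DYNAMICS[int((y / stage_depth) * (len(DYNAMICS) - 1))]
    return pitch, dynamic

print(position_to_notation(2.5, 5.0))  # (57, 'f')
```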

Flock was commissioned by the Adrienne Arsht Center for the Performing Arts in Miami, with additional support from the Funding Arts Network, the Georgia Tech Foundation, and Georgia Tech’s GVU Center. It was premiered in five performances in Miami in December 2007, during Art | Basel | Miami Beach, which was produced by iSAW. It was subsequently presented in four performances at 01SJ in San Jose, California in June 2008, with the Rova Saxophone Quartet.

For more information visit here.

Publications

  • J. Freeman and M. Godfrey. “Creative Collaboration Between Audiences and Musicians in Flock” Digital Creativity, Vol. 20, No. 4, 2010.
  • J. Freeman. “Technology, Real-time Notation, and Audience Participation in Flock” Proceedings of the International Computer Music Conference (Belfast), 2008.
  • Freeman, J. “Extreme Sight-Reading, Mediated Expression, and Audience Participation: Real-Time Music Notation in Live Performance” Computer Music Journal Vol. 32, No. 3, 2008.


massMobile

A client-server system for mass audience participation in live performances using smartphones.

For more information visit here.

To download massMobile and use it in your own projects, please visit here.

Publications

  • Freeman, J., Xie, S., Tsuchiya, T., Shen, W., Chen, Y., Weitzner, N. (2015). “Using massMobile, a Flexible, Scalable, Rapid Prototyping Audience Participation Framework, in Large-Scale Live Musical Performances,” in Digital Creativity, Taylor and Francis, 26:3-4, 228-244.
  • Lee, S., and Freeman, J. (2013). “Echobo: A Mobile Music Instrument Designed for Audience to Play” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2013), Seoul, Korea.
  • Weitzner, N., Freeman, J., Chen, Y., and Garrett, S. (2013). “massMobile: Towards a Flexible Framework for Large-Scale Participatory Collaborations in Live Performances” in Organised Sound, Cambridge University Press, 18:1.


LOLC

In LOLC, the musicians in the laptop orchestra use a live-coding language, developed specifically for this piece, to create and share rhythmic motives based on a collection of recorded sounds. The language encourages musicians to share their code with each other, developing an improvisational conversation over time as material is looped, borrowed, and transformed. LOLC is supported by a grant from the National Science Foundation as part of a larger research project on musical improvisation in performance and education.
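
LOLC's live-coding language is its own; the sketch below is only a Python analogy of its core idea, in which motives are published, borrowed, and transformed over time. The motive encoding ('x' = hit, '.' = rest) and the transformation names are invented.

```python
# Hypothetical analogy of sharing and transforming rhythmic motives.
def loop(motive, times):
    """Repeat a motive end to end."""
    return motive * times

def reverse(motive):
    """Play a motive backwards."""
    return motive[::-1]

def thin(motive):
    """Drop every other hit, keeping the rests in place."""
    out, keep = [], True
    for step in motive:
        if step == "x":
            out.append("x" if keep else ".")
            keep = not keep
        else:
            out.append(step)
    return "".join(out)

shared = "x.x.xx.."                     # a motive one player publishes
borrowed = loop(reverse(shared), 2)     # another player borrows and loops it
print(shared, "->", borrowed)           # x.x.xx.. -> ..xx.x.x..xx.x.x
print(shared, "->", thin(shared))       # x.x.xx.. -> x...x...
```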

To see additional videos about LOLC and to download the software, visit here.

Publications

  • Lee, S., Freeman, J., Colella, A., Yao, S., and Van Troyer, A. (2012). “Evaluating Collaborative Laptop Improvisation With LOLC” in Proceedings of the Symposium on Laptop Ensembles and Orchestras (SLEO 2012), Baton Rouge, Louisiana.
  • Subramanian, S., Freeman, J., and McCoid, S. (2012). “LOLbot: Machine Musicianship in Laptop Ensembles” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.


Drawn Together

Drawn Together is a collaboration between the Center for Music Technology, the College of Design, and The Open Ended Group, a group of digital artists including Marc Downie, Shelley Eshkar, and Paul Kaiser. Drawn Together is an artwork in which participants interact with artificial intelligence agents to create unforeseen and original drawings and musical responses, ones whose form is in equal parts physical and virtual. Wearing 3D glasses and headphones, you use charcoal or pencil to start making a real drawing on a real piece of paper. When you pause, the table answers by projecting 3D lines that seem to draw themselves, building upon your lines by echoing, extending, or complementing them below and above the table. The amplified sounds of your drawing form the basis for electroacoustic responses matching those of the 3D projections.

Publications

  • Bretan, M., Weinberg, G., and Freeman, J. (2012). "Sonification for the Art Installation Drawn Together" in Proceedings of the International Conference on Auditory Display, Atlanta, Georgia.


Robotic Musicianship


Haile

Haile is a robotic percussionist that can listen to live players, analyze their music in real time, and use the product of this analysis to play back in an improvisational manner. It is designed to combine the benefits of computational power and algorithmic music with the richness, visual interactivity, and expression of acoustic playing. We believe that when collaborating with live players, Haile can facilitate a musical experience that is not possible by any other means, inspiring players to interact with it in novel, expressive ways that lead to new musical outcomes. Two pieces were composed for Haile: “Pow,” for a robotic and a human percussionist playing a Native American pow wow drum, and “Jam’aa,” for a Middle Eastern drum circle and a robotic percussionist.
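
The listen-analyze-respond loop can be sketched at a very high level: estimate the inter-onset intervals of what the human plays, then answer with a perturbed, denser rhythm. This is an illustrative simplification, not Haile's actual perception or improvisation algorithms.

```python
import random

# Simplified listen/analyze/respond sketch for a call-and-response exchange.
def analyze(onsets):
    """Return the inter-onset intervals (seconds) of a played phrase."""
    return [b - a for a, b in zip(onsets, onsets[1:])]

def improvise(iois, density=1.5, seed=None):
    """Answer with a varied rhythm: perturb each interval, occasionally add a hit."""
    rng = random.Random(seed)
    response = []
    for ioi in iois:
        response.append(ioi * rng.uniform(0.8, 1.2))  # humanized variation
        if rng.random() < (density - 1.0):            # occasional extra hit
            response.append(ioi / 2)
    return response

played = [0.0, 0.5, 1.0, 1.75, 2.0]  # onset times from the live player
print([round(x, 2) for x in improvise(analyze(played), seed=3)])
```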

View a video of the project here.

Publications

  • Sun, S., Mallikarjuna, T., Weinberg, G. (2012), “Effect of Visual Cues in Synchronization of Rhythmic Patterns” accepted to the 2012 International Conference on Music Perception and Cognition (ICMPC 12), Thessaloniki, Greece.
  • Weinberg, G., Blosser, B., Mallikarjuna, T., Ramen (2009) “Human-Robot Interactive Music in the Context of a Live Jam Session” in the Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 09), Pittsburgh, PA, pp. 70-73.
  • Weinberg, G., Blosser, B. (2009) “A Leader-Follower Turn-taking Model Incorporating Beat Detection in Musical Human-Robot Interaction” in the Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI 2009), San Diego, CA.


Accessible Aquarium

The goal of the Georgia Tech Accessible Aquarium Project is to make dynamic exhibits such as those at museums, science centers, zoos and aquaria more engaging and accessible for visitors with vision impairments by providing real-time interpretations of the exhibits using innovative tracking, music, narrations, and adaptive sonification. It is an interdisciplinary collaboration between Georgia Tech researchers Bruce Walker, Tucker Balch, Gil Weinberg, Carrie Bruce, Jon Sanford, and Aaron Bobick, bringing together the fields of psychology, computing, music, and assistive technology. In this project, we are developing cutting edge bio-tracking and behavior analysis techniques that can provide input for sophisticated, informative, and compelling multimedia auditory displays, music, and sonification. As part of the project, we study how musicians (such as Laurie Anderson) interpret dynamic displays such as fish aquaria and develop an algorithm to use tracking information to generate a musical response that represents the movements of fish in the aquarium. The principles and techniques we develop will be immediately applicable to zoos, museums, and other informal learning environments with dynamic exhibits, leading to a dramatic increase in the opportunities for people with vision impairments to experience these types of exhibits.
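
One way to picture the tracking-to-sonification step is a per-fish mapping from position and velocity to pan, pitch, and loudness. The sketch below is a hypothetical illustration; the tank dimensions and parameter ranges are invented, and it is not the project's actual algorithm.

```python
import math

# Hypothetical per-fish mapping: pan follows horizontal position, pitch
# follows depth, loudness follows speed.
def fish_voice(x, y, vx, vy, tank_w=2.0, tank_h=1.0):
    """Map one tracked fish (position in meters, velocity in m/s) to sound parameters."""
    pan = (x / tank_w) * 2.0 - 1.0            # -1 (left) .. +1 (right)
    pitch = 48 + (1.0 - y / tank_h) * 36      # deeper fish sound lower
    speed = math.hypot(vx, vy)
    loudness = min(1.0, speed / 0.5)          # faster fish sound louder
    return {"pan": round(pan, 2), "midi": round(pitch), "gain": round(loudness, 2)}

print(fish_voice(x=0.4, y=0.8, vx=0.1, vy=-0.05))
# {'pan': -0.6, 'midi': 55, 'gain': 0.22}
```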

View a video of the project here.

Publications

  • Nikolaidis, R., Weinberg G. (2011), “Generative Musical Tension Modeling and its Application in Dynamic Sonification”, Computer Music Journal, Vol. 36:1.


Brain Wave

With new developments in biological research, scholars are gaining more accurate information about complex systems such as the brain and are looking for effective mechanisms to represent that information meaningfully. Since the human brain is adept at perceiving and making sense of auditory patterns, using sound and music to sonify complex systems is a promising approach. In collaboration with the Potter Group at the Laboratory for Neuroengineering at Georgia Tech, we sonified signals from cultured neurons recorded by a multielectrode array. The project addressed two main research goals: 1) to produce a generic and meaningful representation of the data that helps in understanding the activity of the culture, and 2) to create a musical product with aesthetic value. Based on computational pattern-recognition techniques, the neural activity in the culture was mapped to eight speakers in an effort to represent the spatial propagation of spikes in the culture. We also developed eight percussive controllers that allow players to simulate spike propagation from different locations in the culture. The project was presented as a musical piece titled BrainWaves. The piece starts with automated spatial sound generation based on recorded neural activity and develops into an interactive section in which players use our newly developed controllers to simulate spikes in the culture at a variety of locations around the concert venue.
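
The spatial mapping described above, in which spikes recorded at different electrodes are played from different speakers, can be sketched as an assignment from electrode grid positions to a ring of eight speakers. The grid size and speaker layout below are assumptions, not the project's actual configuration.

```python
import math

# Simplified sketch: assign each electrode in an (assumed) 8x8 array to one
# of eight speakers on a ring, so a spike is heard from that direction.
N_SPEAKERS = 8
GRID = 8  # assumed electrode grid size

def electrode_to_speaker(row, col):
    """Map an electrode's grid position to a speaker index on the ring."""
    angle = math.atan2(row - (GRID - 1) / 2, col - (GRID - 1) / 2)
    return int(((angle + math.pi) / (2 * math.pi)) * N_SPEAKERS) % N_SPEAKERS

# A recorded spike train as (time_in_seconds, row, col) events:
spikes = [(0.001, 0, 0), (0.004, 1, 2), (0.009, 5, 6)]
for t, r, c in spikes:
    print(f"t={t:.3f}s -> speaker {electrode_to_speaker(r, c)}")
```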

View a video of the project here.

Publications

  • Weinberg, G., Thatcher, T. “Interactive Sonification of Neural Activity” Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2006), Paris, France.
  • Weinberg, G., Thatcher, T. “Interactive Sonification: Aesthetics, Functionality, and Performance” Leonardo Music Journal 16, MIT Press.


Mobile Music

Can mobile devices facilitate expressive and creative musical experiences that may revolutionize the manner in which we create, perform, practice and consume music?

The Mobile Music Group develops new instruments and applications that empower music fans as well as musicians to create, perform, and consume music in novel ways using mobile devices. The goal of our research is to allow any music lover to unlock their musical creativity, anytime and anywhere, and to share their personal creations through novel, expressive, and intuitive means.

Publications

  • Freeman, J., Lerch, A., Paradis, M. (Eds.) Proceedings of the 2nd Web Audio Conference (WAC-2016), Atlanta, 2016. ISBN: 978-0-692-61973-5
  • Lee, S., Srinivasamurthy, A., Tronel, G., Shen, W. (2012). “Tok!: A Collaborative Acoustic Instrument using Mobile Phones” in Proceedings of the New Interfaces for Musical Expression Conference (NIME 2012), Ann Arbor, Michigan.
  • Nikolaidis, R., and Weinberg, G. (2010), “Playing with the Masters: A Model for Interaction between Robots and Music Novices” The 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy.


ZOOZbeat

ZOOZbeat is a gesture-based music studio, simple enough for non-musicians to become musically expressive right away yet rich enough for experienced musicians to push the envelope of mobile music creation. Start playing with just a click, or select among background beats in a variety of styles. Use shake and tilt movements, tap the screen, or press the keypads to create and modify rhythmic and melodic lines. Based on years of research, our musical engine turns your actions into music that fits your style.
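
A minimal sketch of this kind of gesture-to-music mapping might look like the following, with tilt selecting the pitch, shake intensity setting the velocity, and a tap triggering the note. The thresholds, ranges, and pitch set are invented; this is not ZOOZbeat's actual engine.

```python
# Hypothetical gesture-to-note mapping.
PENTATONIC = [60, 62, 64, 67, 69, 72]

def gesture_to_note(tilt, shake, tapped):
    """tilt in [-1, 1], shake in [0, 1]; returns (midi_note, velocity) or None."""
    if not tapped:
        return None
    idx = int((tilt + 1.0) / 2.0 * (len(PENTATONIC) - 1))
    velocity = 40 + int(shake * 87)   # MIDI velocity in 40..127
    return PENTATONIC[idx], velocity

print(gesture_to_note(tilt=0.3, shake=0.6, tapped=True))  # (67, 92)
```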

View a video of the project here.

Download ZOOZbeat here.

Publications

  • Weinberg, G., Godfrey, M., Beck, A. (2010) “ZOOZbeat – Mobile Music Recreation” in Extended Abstracts of the International ACM Computer Human Interaction Conference (CHI 10), Atlanta, GA.
  • Nikolaidis, R., and Weinberg, G. (2010), “Playing with the Masters: A Model for Interaction between Robots and Music Novices” The 19th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy.
  • Weinberg, G., Nikolaidis, R., and Mallikarjuna, T. (2010), “A Survey of Recent Interactive Compositions for Shimon – The Perceptual and Improvisational Robotic Marimba Player” The International Conference on Intelligent Robots and Systems (IROS 2010), Taipei, Taiwan.


iltur

“iltur” is a series of musical compositions featuring a novel method of interaction between acoustic and electronic instruments and new musical controllers called Beatbugs. Beatbug players can record live input from acoustic and MIDI instruments and respond by transforming the recorded material in real time, creating motif-and-variation call-and-response routines on the fly. A central computer receives MIDI and audio data from both the acoustic and electronic instruments, runs the improvisation algorithms, and facilitates the interaction among players. Each Beatbug uses a piezoelectric sensor to capture and trigger musical phrases, two bend-sensor antennae for manipulation, a speaker through which the variations are played, and white and colored LEDs that convey the interaction to players and the audience.
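
The motif-and-variation routine can be sketched as: capture a phrase, then answer with a transformed copy. The transformations below are illustrative choices, not iltur's actual algorithms, and the captured phrase is invented.

```python
import random

# Simplified call-and-response sketch: answer a captured phrase (MIDI pitches)
# with a real-time variation chosen from a few stock transformations.
def vary(motif, seed=None):
    rng = random.Random(seed)
    choice = rng.choice(["transpose", "invert", "shuffle"])
    if choice == "transpose":
        step = rng.choice([-5, -2, 2, 5])
        return [p + step for p in motif]
    if choice == "invert":
        axis = motif[0]
        return [axis - (p - axis) for p in motif]
    shuffled = motif[:]
    rng.shuffle(shuffled)
    return shuffled

captured = [60, 63, 65, 63, 67]   # phrase recorded from a live player
print(vary(captured, seed=1))     # the Beatbug's answering variation
```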

View a video of the project here.

Publications

  • Weinberg, G. (2008) “The Beatbug – Evolution of a Musical Controller”, Digital Creativity, Taylor and Francis Press.
