Spring 2019 Seminars
The Georgia Tech Center for Music Technology Spring Seminar Series features both invited speakers and second-year student project presentations. The seminars are held on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule of invited speakers and student presentations for Spring 2019:
January 7 - Gil Weinberg, Professor and Director, Center for Music Technology
January 14 - Grace Leslie, Assistant Professor, School of Music
January 21 - MLK Day (No Seminar)
January 28 - Joe Plazak
Interdisciplinary research on music encompasses a diverse array of domains and applications, both within academia and industry. Despite commonalities and many shared objectives, the questions, methods, and results of academic and industry research are often starkly different. This talk anecdotally highlights some of the quirks of these two worlds, while also positing a number of research skills likely to be in demand in the years ahead. The first half of the talk will focus on the speaker's past academic research related to affective audio communication; the second half will focus on industry research related to teaching computers how to read and write music notation.
Joe Plazak is a Senior Software Engineer and Designer at Avid Technology, where he spends his days drinking coffee and teaching computers how to read, write, and perform music via the world's leading music notation software, Sibelius. He co-designs Sibelius with a team of world-class composers, arrangers, performers, and super-smart techies. He earned a Ph.D. in Music Perception and Cognition from the Ohio State University while researching musical affect perception and computational music analysis, and thereafter taught music theory and conducted interdisciplinary research at a small liberal arts college. After years at the front of the classroom, he returned to the back row (where he belongs) and retrained within the Gina Cody School of Engineering and Computer Science at Concordia University, researching audio-based human-computer interaction and dabbling in computational art. He is a card-carrying academia refugee, an expat, a commercial pilot and flight instructor, Superwoman's husband, and a sleep-deprived dad (which he considers to be the best job of all).
February 4 - Maribeth Gandy (Cancelled)
Our relationship with technology and its interfaces is changing dramatically as we enter an era of wearable and ubiquitous computing. In previous eras of (personal and then mobile) computing, the interface designer could rely on the visual channel to convey the majority of information, while relying on typing, pointing, and touching/swiping for input. However, as computing devices become more intimately connected to our bodies and our lives, a completely new approach to user interfaces and experiences is needed. The auditory channel in particular remained relatively unexplored as a primary modality during those earlier eras, but it is important now that we learn how best to leverage additional modalities in these wearable/ubicomp systems, which must support and anticipate our needs, providing services via interfaces that do not distract us from our primary tasks. In this talk I will discuss how sophisticated auditory interfaces could be utilized in future user experiences, provide examples developed by students in the Principles of Computer Audio course, highlight the challenges inherent to audio-centric interfaces, and outline the research and development needed to face those challenges.
Dr. Maribeth Gandy is the Director of the Wearable Computing Center and of the Interactive Media Technology Center within the Institute for People and Technology, and a Principal Research Scientist at Georgia Tech. She received a B.S. in Computer Engineering as well as a M.S. and Ph.D. in Computer Science from Georgia Tech. In her nineteen years as a research faculty member, her work has been focused on the intersection of technology for mobile/wearable computing, augmented reality, multi-modal human computer interaction, assistive technology, and gaming. Her interest is in achieving translational impact through her groups’ research and development via substantive collaborations with industry, helping wearable technologies to flourish outside the academic setting.
February 11 - Taka Tsuchiya
How can we explore and understand non-musical data with sound? Can we compose music that tells a story about data? This study compares methodologies for data exploration between traditional data-science approaches and unconventional auditory approaches (i.e., sonification), with considerations such as learnability, the properties of sound, and the aesthetic organization of sound for storytelling. The interactive demonstration utilizes CODAP (Common Online Data Analysis Platform), a web-based platform for data-science education and experiments, extended with sonification plugins.
Takahiko Tsuchiya (Taka) is a Ph.D. student working with Dr. Jason Freeman. His research includes the development of sonification frameworks and a live-coding environment. From July to December 2018, he joined the Concord Consortium, a non-profit science-education company in Emeryville, CA, as part of the NSF internship program. There he developed sonification plugins for the consortium's data-science platform (CODAP) while also contributing to general R&D, including improvements to the formula engine and data visualization.
February 18 - Shachar Oren
Oren will share the story of Neurotic Media, the Atlanta-based music distribution company he founded over a decade ago, which has successfully navigated distribution paradigm shifts from music downloads to ringtones, lockers, and on-demand streaming. In 2018, Neurotic Media was acquired by Peloton Interactive, where the platform now serves as the ‘source of truth’ for all things music, powering a growing music-related feature set for Peloton’s fast-growing Member community of one million worldwide.

Advancements in music technology are blurring the lines between the creative process and the distribution business. While music was once packaged into physical products, boxed and sold off of shelves, today it is syndicated digitally to a growing number of smart end-points that administer data of their own. This has opened the door to a new range of creative and business possibilities. Just recently, DJ Marshmello drew over 10M live views for a concert performed inside the popular video game Fortnite, a groundbreaking event by any measure. What if, tomorrow, the DJ is the AI system behind Georgia Tech’s robot Shimon? Endel is an example of an app that helps people focus and relax with AI-manufactured sounds that react in real time to data points from your phone about your environment. What could your phone also add about your state of mind? Seventy percent of smart speakers in the US today feature Amazon’s Alexa, which clearly knows a lot about your personal preferences and your home front in general. What new creative directions are possible when we weave together AI, deep data analytics, and the human mind?
Shachar Oren is VP of Music at Peloton, a global technology company reinventing fitness, and the CEO of its B2B music-centric subsidiary Neurotic Media. Neurotic Media successfully navigated several distribution paradigm shifts, from music downloads to ringtones, lockers, and on-demand streaming, and was acquired by Peloton in 2018. Peloton instructors lead daily live-streamed fitness classes with dynamic music playlists across cycling, running, bootcamp, strength, stretching, yoga, and meditation, delivering live and on-demand programming via the company's connected fitness products, the Peloton Bike and Peloton Tread, as well as the Peloton Digital app. The Neurotic Media platform serves as the ‘source of truth’ for all things music within Peloton’s services. Since 2017, Shachar has served as President of Georgia Music Partners (GMP), the non-profit responsible for the Georgia music tax incentive, which enables music companies with qualified expenditures in Georgia to save 15%-30% of their costs. Shachar also serves on the Executive Advisory Board of the Georgia Tech College of Design.
February 25 - Ofir Klemperer
In his lecture and performance, Ofir will discuss how our perception of music has changed with the digital age and the impact of computing on intuitive musical performance. He will then demonstrate his way of bringing instrumental performative practice back into electronic music using his monophonic synthesizer, the Korg MS-20.
Ofir Klemperer (born in Israel, 1982) is a composer, improviser, singer/songwriter, and producer. He received his bachelor's and master's degrees in music composition from the Royal Conservatory of The Hague, the Netherlands. Leaning heavily on the Korg MS-20 analog synthesizer, Ofir's music is melodic at its core; by orchestrating classical instruments alongside punk-rock and electronics, he brings an experimental, noise-inflected approach to his melodies.
Ofir’s music has been performed internationally in cities including Tel Aviv, Amsterdam, Belgrade, Gent, Antwerp, Brussels, and São Paulo. His work has been featured at the Bolzano Jazz Festival in Italy and the MATA Festival in New York City.
Ensembles he has written for include the Israel Contemporary Players, Civic Orchestra of Chicago, Talea Ensemble, Asko|Schoenberg Ensemble, Pow Ensemble, Rosa Ensemble, Modelo62, Ensemble Klang, and Orkest de Ereprijs. You can find Ofir's music at ofirklemerer.bandcamp.com and ofirklemperer.wordpress.com.
In 2014 Ofir moved to the United States and lived in Cincinnati, OH until 2017. He is currently based in Atlanta, Georgia.