Past Seminars

Spring 2019 Seminars

The Georgia Tech Center for Music Technology Spring Seminar Series features both invited speakers and second-year student project presentations. The seminars are on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule for invited speakers and student presentations for Spring 2019:

January 7 - Gil Weinberg, Professor and Director, Center for Music Technology

January 14 - Grace Leslie, Assistant Professor, School of Music

January 21 - MLK Day  (No Seminar)

January 28 - Joe Plazak

View Abstract


Interdisciplinary research on music encompasses a diverse array of domains and applications, both within academia and industry. Despite commonalities and many shared objectives, the questions, methods, and results of academic and industry research are often starkly different. This talk anecdotally highlights some of the quirks within these two worlds, while also positing a number of research skills likely to be in demand in the coming years. The first half of the talk will focus on the speaker's past academic research related to affective audio communication; the second half will focus on industry research related to teaching computers how to read and write music notation.

View Bio


Joe Plazak is a Senior Software Engineer and Designer at Avid Technology, where he spends his days drinking coffee and teaching computers how to read, write and perform music via the world's leading music notation software: Sibelius. He co-designs Sibelius along with a team of world-class composers, arrangers, performers, and super-smart techies. He earned a Ph.D. in Music Perception and Cognition from the Ohio State University while researching musical affect perception and computational music analysis, and thereafter taught music theory and conducted interdisciplinary research at a small liberal arts college. After years at the front of the classroom, he returned to the back row (where he belongs) and retrained within the Gina Cody School of Engineering and Computer Science at Concordia University while researching audio-based human computer interaction and also dabbling in computational art. He is a card-carrying academia refugee, an expat, a commercial pilot and flight instructor, Superwoman's husband, and a sleep-deprived dad (which he considers to be the best job of all).

February 4 - Maribeth Gandy (Cancelled)

View Abstract


Our relationship with technology and its interfaces is changing dramatically as we enter an era of wearable and ubiquitous computing. In previous eras of (personal and then mobile) computing, the interface designer could rely on the visual channel for conveying the majority of information, while relying on typing, pointing, and touching/swiping for input. However, as computing devices become more intimately connected to our bodies and our lives, a completely new approach to user interfaces and experiences is needed. In particular, the auditory channel was relatively unexplored as a primary modality during these previous eras, but it is important now that we learn how to best leverage additional modalities in these wearable/ubicomp systems, which must support and anticipate our needs, providing services via interfaces that do not distract us from our primary tasks. In this talk I will discuss how sophisticated auditory interfaces could be utilized in future user experiences, provide examples developed by students in the Principles of Computer Audio course, and highlight the challenges inherent to audio-centric interfaces as well as the research and development needed to face those challenges.

View Bio


Dr. Maribeth Gandy is the Director of the Wearable Computing Center and of the Interactive Media Technology Center within the Institute for People and Technology, and a Principal Research Scientist at Georgia Tech. She received a B.S. in Computer Engineering as well as a M.S. and Ph.D. in Computer Science from Georgia Tech. In her nineteen years as a research faculty member, her work has been focused on the intersection of technology for mobile/wearable computing, augmented reality, multi-modal human computer interaction, assistive technology, and gaming. Her interest is in achieving translational impact through her groups’ research and development via substantive collaborations with industry, helping wearable technologies to flourish outside the academic setting.

February 11 - Taka Tsuchiya

View Abstract


How can we explore and understand non-musical data with sound? Can we compose music that tells a story about data? This study compares methodologies for data exploration between traditional data-science approaches and unconventional auditory approaches (i.e., sonification), with considerations such as learnability, the properties of sound, and the aesthetic organization of sound for storytelling. The interactive demonstration utilizes CODAP (Common Online Data Analysis Platform), a web-based platform for data-science education and experiments, extended with sonification plugins.
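For readers unfamiliar with sonification, the sketch below illustrates the simplest form of parameter mapping: a column of numbers is mapped to pitch and rendered as a short tone sequence. It is only a minimal Python illustration (all names and parameter values here are invented for the example), not the CODAP plugins described above.

```python
# Minimal parameter-mapping sonification sketch (illustrative only; not the
# CODAP plugin described in the talk). Each data value is mapped to a pitch
# and rendered as a short sine tone so a column of numbers can be "heard."
import math
import wave
import struct

SAMPLE_RATE = 44100

def sonify(values, low_hz=220.0, high_hz=880.0, note_sec=0.25):
    """Map each value linearly into [low_hz, high_hz] and synthesize a tone."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    samples = []
    for v in values:
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        n = int(SAMPLE_RATE * note_sec)
        for i in range(n):
            # simple linear fade-out to avoid clicks between notes
            env = 1.0 - i / n
            samples.append(env * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    return samples

def write_wav(path, samples):
    with wave.open(path, "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)            # 16-bit PCM
        f.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        f.writeframes(frames)

if __name__ == "__main__":
    data = [3, 5, 8, 13, 21, 13, 8, 5, 3]   # any numeric column would do
    write_wav("sonified.wav", sonify(data))
```

A full sonification tool would add interactive control and aesthetic decisions about scaling, timbre, and tempo, which is exactly the design space the talk compares against conventional visual analysis.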

View Bio


Takahiko Tsuchiya (Taka) is a Ph.D. student working with Dr. Jason Freeman. His research includes the development of sonification frameworks and a live-coding environment. From July to December 2018, he joined the Concord Consortium, a non-profit science-education company in Emeryville, CA, as part of the NSF internship program. He developed sonification plugins for their data-science platform (CODAP) while also contributing to general R&D such as improvements to the formula engine and data visualization.

February 18 - Shachar Oren

View Abstract


Oren will share the story of Neurotic Media, the Atlanta-based music distribution company he founded over a decade ago, which has successfully navigated distribution paradigm shifts from music downloads to ringtones, lockers, and on-demand streaming. In 2018, Neurotic Media was acquired by Peloton Interactive, where the platform now serves as the ‘source of truth’ for all things music, powering a growing music-related feature set for Peloton's fast-growing Member community of one million worldwide. Advancements in music technology are blurring the lines between the creative process and the distribution business. While, in the past, music was packaged into physical products, boxed, and sold off of shelves, music today is syndicated digitally to a growing number of smart end-points which administer data of their own. This has opened the door to a new range of creative and business possibilities. Just recently, the DJ Marshmello drew over 10M live views for a concert he performed inside the popular video game Fortnite, a groundbreaking event by any measure. What if, tomorrow, the DJ were the AI system behind Georgia Tech's robot Shimon? Endel is an example of an app that helps people focus and relax with AI-manufactured sounds, which react in real time to data points from your phone about your environment. What could your phone also add about your state of mind? Seventy percent of smart speakers in the US today feature Amazon's Alexa, which clearly knows a lot about your personal preferences and home front in general. What new creative directions are possible when we weave together AI, deep data analytics, and the human mind?

View Bio


Shachar Oren is VP of Music at Peloton, a global technology company reinventing fitness, and the CEO of its B2B music-centric subsidiary Neurotic Media. Neurotic Media successfully navigated several distribution paradigm shifts, from music downloads to ringtones, lockers, and on-demand streaming. In 2018, Neurotic Media was acquired by Peloton. Peloton instructors lead daily live-streamed fitness classes with dynamic music playlists across cycling, running, bootcamp, strength, stretching, yoga, and meditation, and the company delivers its live and on-demand programming via its connected fitness products, the Peloton Bike and Peloton Tread, as well as the Peloton Digital app. The Neurotic Media platform serves as the ‘source of truth’ for all things music within Peloton's services. Since 2017, Shachar has served as President of Georgia Music Partners (GMP), the non-profit responsible for the Georgia music tax incentive, which enables music companies with qualified expenditures in Georgia to save 15%-30% of their costs. Shachar also serves on the Executive Advisory Board of the Georgia Tech College of Design.

February 25 - Ofir Klemperer

View Abstract

In his lecture and performance, Ofir will talk about the way our perception of music has changed with the digital age and about the impact of computing on intuitive musical performance. Ofir will then demonstrate his way of bringing instrumental performative practice back into electronic music, using his monophonic synthesizer, the Korg MS-20.

View Bio


Ofir Klemperer (born in Israel, 1982) is a composer, improviser, singer/songwriter, and producer. He received his Bachelor's and Master's degrees in music composition at the Royal Conservatory of The Hague, the Netherlands. Leaning heavily on the Korg MS-20 analog synthesizer, Ofir's music is melodic at its core; by orchestrating classical instruments alongside punk-rock and electronics, he brings an experimental, noise-driven approach to his melodies.

Ofir’s music has been performed internationally; selected cities include Tel Aviv, Amsterdam, Belgrade, Gent, Antwerp, Brussels, and São Paulo. His work has been featured at the Bolzano Jazz Festival in Italy and the MATA Festival in New York City.

Some of the ensembles he has written for are the Israel Contemporary Players, Civic Orchestra of Chicago, Talea Ensemble, Asko|Schoenberg Ensemble, Pow Ensemble, Rosa Ensemble, Modelo62, Ensemble Klang, and Orkest de Ereprijs. You can find Ofir’s music on ofirklemerer.bandcamp.com and ofirklemperer.wordpress.com

In 2014 Ofir moved to the United States and lived in Cincinnati, OH until 2017. He is currently located in Atlanta, Georgia.

March 4 - Colby Leider

View Abstract

Augmenting Reality with Music and Audio Engineering


When I was in graduate school, I always wanted to hear what guest speakers were passionate about, how they landed where they are in life, and---if there was any resonance with these---a few tips that I could file away. So I'll follow that pattern here. :) In this talk, I'll briefly discuss my research projects in academia, recent transition to industry, and the challenges facing music and audio design in the augmented reality space.

View Bio


Composer-engineer Colby Leider works at Magic Leap, an augmented-reality startup that blends technology, physiology, and creativity to reveal worlds within our world and add magic to the everyday. He previously served as associate professor and program director of the Music Engineering Technology Program (MuE) at the University of Miami Frost School of Music for many years, where he hosted ICMA, SEAMUS, and ISMIR conferences. Colby holds degrees from Princeton (Ph.D., MFA), Dartmouth (AM), and the University of Texas (BSEE). His research interests include AR/MR/VR systems, digital audio signal processing, sound synthesis, tuning systems, and acoustic ecology, and he has received grants from the National Science Foundation, NVIDIA, the Coulter Foundation, and several corporations.

March 11 - Martin Norgaard

View Abstract

Cognitive Processes Underpinning Musical Improvisation


Music improvisation involves the ability to adapt and integrate sounds and motor movements in real time, concatenating previously stored motor sequences in order to flexibly produce a desired result, in this case a particular auditory experience. The output of improvisation must then be evaluated by the musician in real time based on internal goals and the external environment, which may lead the improviser to modify subsequent motor acts. I explore qualitative accounts of this process by expert and developing jazz musicians as well as improvisers from different cultural traditions. I then compare these descriptions with results from our related investigations using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Finally, I argue that developing improvisational skill causes positive far-transfer effects as measured by changes in executive function.

March 18 - Spring Break

March 25 - Tejas, Avneesh

April 1 - Ryan, Yi

April 8 - Benjie, Yongliang

April 15 - Madhukesh, Jeremy

April 22 - Yuqi, Keshav, Richard Yang

April 29 - Jyothi



Fall 2018 Seminars

The Georgia Tech Center for Music Technology Fall Seminar Series features both invited speakers and second-year student project presentations. The seminars are on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule for invited speakers and student presentations for Fall 2018:

August 20 - Jason Freeman, Professor and Chair of School of Music

August 27 - Alexander Lerch, Assistant Professor, School of Music

September 3 - Labor Day (No Seminar)

September 10 - Gil Weinberg, Professor and Director, Center for Music Technology

View Abstract


The Robotic Musicianship Group at Georgia Tech aims to facilitate meaningful musical interactions between humans and machines, leading to novel musical experiences and outcomes. In our research, we combine computational modeling approaches for music perception, interaction, and improvisation with novel approaches for generating acoustic responses in a physical, social, and embodied manner. The motivation for this work is based on the hypothesis that real-time collaboration between human and robotic players can capitalize on the combination of their unique strengths to produce new and compelling music. Our goal is to combine human qualities such as musical expression and emotion with robotic traits such as powerful processing, mechanical virtuosity, the ability to perform sophisticated algorithmic transformations, and the capacity for embodied musical cognition, where the robotic body shapes its musical cognition. The talk will feature a number of approaches we have explored for perceptual modeling, improvisation, path planning, and gestural interaction with robotic platforms such as Haile, Shimon, Shimi, the Skywalker hand, and the robotic drumming prosthesis.

September 17 - Grace Leslie, Assistant Professor, School of Music

View Abstract


The Georgia Tech Brain Music Lab is a community gathered around a unique facility combining EEG (brainwave data) and other physiological measurement techniques with new music technologies. Our mission is to engage in research and creative practice that promotes health and well-being. This talk will present an overview of our activities at the Brain Music Lab, including sonification of physiological signals, acoustic design for health and well-being, therapeutic applications of musical stimulation, and brain-body music performance.

September 24 - Claire Arthur, Visiting Assistant Professor, School of Music

View Abstract


The computational and cognitive musicology group conducts empirical research to address questions about musical structure and/or the human perception and cognition of music, with the aim of advancing our knowledge in these domains while also providing accessible technology and digital resources for music research, education, and creation. This talk will present an overview of the types of questions asked in the fields of computational and cognitive musicology, as well as specific examples of recent research, such as statistical modeling of melody and harmony, voice-leading theory versus practice, measuring strong emotional responses to music, and the qualitative and quantitative differences of melodic tones in varying harmonic contexts.

October 1 - Nat Condit-Schultz, Lecturer, School of Music (Cancelled)

October 8 - Fall Recess (No Seminar)

October 15 - Nat Condit-Schultz, Lecturer, School of Music

October 22 - Siddharth, Ashish

October 29 - Richard, Mike

November 5 - Tejas, Benjie

November 12 - Jeremy, Yi

November 19 - Ryan, Yongliang

November 26 - Avneesh, Yuqi

December 3 - Madhukesh, Keshav



Spring 2018 Seminars

The Georgia Tech Center for Music Technology Spring Seminar Series features both invited speakers and second-year student project presentations. The seminars are on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule for invited speakers and student presentations for Spring 2018:

January 8 - Matt Craney

View Abstract

Humanoids, Robotics, and Design for Disability
Robotics is going through a Cambrian explosion as barriers to development are reduced, power density and computational capacity increase, and controls advance. Applying these advancements, we are simultaneously making incremental and explosive steps forward in the development of powered prostheses. This talk will give an overview of some of the work happening at the MIT Media Lab Biomechatronics group, including optogenetics, new amputation paradigms, computational socket design, and, of course, robotic legs. I will present some concepts from my core research project that I intend to apply to my collaboration with Gil Weinberg and the Robotic Musicianship program; I will talk through the development techniques I use for a multi-degree-of-freedom robotic prosthetic leg for above-knee amputees. All of this work will be framed by some of my previous work in robotic assembly of discrete cellular lattices (digital fabrication), humanoid robotics, product design, and advanced SolidWorks modeling techniques.

January 15 - MLK day  (No Seminar)

January 22 - Frank Hammond, Center for Robotics and Intelligent Machines, Georgia Tech

View Abstract

The field of human augmentation has become an increasingly popular research area as capabilities in human-machine interfacing and robot manufacturing evolve. Novel technologies in wearable sensors and 3D printing have enabled the development of more sophisticated augmentation devices, including teleoperated robotic surgery platforms and powered prostheses, with greater speed and economy. Despite these advances, the efficacy and adoption of human augmentation devices has been limited due to several factors including (1) lack of continuous control and dexterity in robotic end-effectors, (2) poor motion and force coordination and adaptation between robotic devices and humans, and (3) the absence of rich sensory feedback from the robotic devices to the human user. My research leverages techniques in soft machine fabrication, robotic manipulation, and mechanism design to arrive at human augmentation solutions which address these issues from a methodological perspective. This talk will highlight aspects of our design methodology including the experimental characterization of human manipulation capabilities, the design of mechanisms and devising of control strategies for improved human-robot cooperation, and new efforts to enable virtual proprioception in robotic devices – a capability which could allow humans to perceive and control robotic augmentation devices as if they were parts of their own bodies.

January 29 - Astrid Bin

View Abstract

Much has been written about digital musical instruments (DMIs) from the performer's perspective, but there has been comparatively little study of the audience's perspective. My PhD research investigated the audience experience of error in DMI performance - a playing tradition that is radically experimental and rule-breaking, leading some to suggest that errors aren't even possible. In this research I studied live audiences using a combined methodology of post-hoc and live data, the latter collected via a system I designed specifically for this purpose called Metrix. In this seminar I present this methodology, as well as some of the insights that resulted from this research on how audiences experience DMI performance and how they perceive error in this context.

February 5 - Deantoni Parks 

View Abstract

Deantoni Parks is one of the finest drummers working today, displaying a sleek, intuitive balance between raw rhythmic physicality and machine-like precision. His abilities have led him to collaborations with the likes of John Cale, Sade, the Mars Volta, and Flying Lotus, as well as a teaching stint at the Berklee College of Music.
In this workshop, Deantoni Parks will explore how musicians can augment their natural talents with technology, adopting its benefits to fuel their own vision. According to Parks, "The relationship between music and technology is always evolving, but true music cannot exist without a soul." From this philosophical starting point, Parks will engage with attendees to seek out where the equilibrium between human and machine expression lies.

February 12 - Tanner Legget (Mandala) and Daniel Kuntz (Crescendo)

February 19 - Minoru "Shino" Shinohara - Human Neuromuscular Physiology Lab, Georgia Tech

February 26 - Michael Nitsche, School of Literature, Media, and Communication, Georgia Tech

March 5 - Guthman Preparation

March 12 - Guthman Lessons Learned and Zach

March 19 - Spring Break (No Seminar)

March 26 - Somesh, Hanoi

April 2 - Hongzhao, Agneya

April 9 - Vinod, Takumi

April 16 - Rupak, Hanyu

April 23 - Jyoti, Henry, Joe

 

Fall 2017 Seminars

The Georgia Tech Center for Music Technology Fall Seminar Series features both invited speakers and second-year student project proposal presentations. The seminars are on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule for invited speakers and student presentations for Fall 2017:

August 21 - First day of classes; no seminar. View the eclipse instead.

August 28 - Valorie Salimpoor, Baycrest Institute

View Abstract

Music is merely a sequence of sounds, each of which contains no independent reward value, but when arranged into a sequence it is perceived as intensely pleasurable in the brain. How does a transient and fleeting sequence of sounds bring us to tears or strong physiological reactions like chills? Although music has no clear survival value, it has been a fundamental part of humanity, existing as far back as history dates (prehistoric era) and has developed spontaneously in every recorded culture. In this talk I present brain imaging research to show how sophisticated cognitive functions integrate to give rise to musical pleasure and why you cannot find two people in the world with the exact same taste in music.

September 4 - Labor Day (no seminar)

September 11 - Robert Hatcher, soundcollide

View Abstract

Soundcollide’s mission is to disrupt the music creation process by breaking down barriers to collaboration, such as the need for physical proximity. Our music technology platform will predict and enhance the performance of its users by streamlining the process of music creation and production. It accomplishes this task by increasing the amount of music that artists can create through an immersive collaborative experience where multiple users work concurrently.

As the application is used, soundcollide's machine learning tools will analyze and compile a history of artists' procedural preferences, enabling artists to discover the most compatible connections for future recording and production collaborations. Our intent is to host the largest number of concurrent users, thus increasing the accuracy and reach of our machine-learning technology.

September 18 - Clint Zeagler, Wearable Computing Center, Georgia Tech

View Abstract
 

Working on a wearable technology interdisciplinary project team can be challenging because of a lack of shared understanding between different fields and a lack of ability in cross-disciplinary communication. We describe an interdisciplinary collaborative design process used for creating a wearable musical instrument with a musician. Our diverse team used drawing and example artifacts/toolkits to overcome communication barriers and gaps in knowledge. We view this process in the frame of Susan Leigh Star's description of a boundary object, and against a similar process used in another musical/computer science collaboration with the group Duran Duran.

September 25 - Brian Magerko, School of Literature, Media, and Communication, Georgia Tech

View Abstract

In a future that increasingly sees intelligent agents involved in our education, workforce, and homes, how human productivity fits into that unfolding landscape is unclear. An ideal outcome would be one that draws both on human ingenuity, creativity, and problem-solving (and problem-definition) capabilities and on the affordances of computational systems. This talk will explore the notion of “co-creative” relationships between humans and AI as one place where such a path might be found.

October 2 - Carrie Bruce, School of Interactive Computing, Georgia Tech

View Abstract

When designing for variation in human ability, accessibility is frequently the measure of success. Although accessible design can make it easier for an individual to gain access to spaces, products, and information, the emphasis is often solely on impairment and its mitigation as the basis for design. Thus, we end up with “handicapped parking”, assistive technology, and other specialized designs that can be exclusionary and do not necessarily address an individual’s participation needs – or their engagement that sustains personal identity, supports context-related motivations, and promotes inclusion. While interactive technologies have the potential to enable this type of participation, there is a lack of evidence-based design that demonstrates how this can be accomplished. My work focuses on operationalizing and designing for participation through building an evidence base and encouraging research-driven practice. I will discuss a few projects related to participation, including the Accessible Aquarium Project. I will show how we can perform user-centered, research-driven practice in designing interactive spaces, products, and information based on access and participation needs.

October 9 - Fall Break

October 16 - Frank Hammond, Center for Robotics and Intelligent Machines, Georgia Tech

View Abstract

The field of human augmentation has become an increasingly popular research area as capabilities in human-machine interfacing and robot manufacturing evolve. Novel technologies in wearable sensing and 3D printing have enabled the development of more sophisticated augmentation devices, including teleoperated robotic surgery platforms and powered prostheses. Despite these advances, the efficacy and adoption of human augmentation devices has been limited due to several factors including (1) lack of continuous control and dexterity in robotic end-effectors, (2) poor motion and force coordination and adaptation between robotic devices and humans, and (3) the absence of rich sensory feedback from the robotic devices to the human user. My research leverages techniques in soft robot fabrication, wearable sensing systems, and non-anthropomorphic design strategies to arrive at human augmentation solutions which address the issues of device form and function from a methodological perspective. In this talk, I will highlight aspects of our powered prosthesis design methodology, including (1) the experimental characterization of human manipulation capabilities, (2) the design of mechanisms and control strategies for improved human-robot cooperation, and (3) new efforts to enable the neurointegration of robotic manipulation devices – a capability which could allow humans to perceive and control powered prostheses and extra-limb robots as if they were parts of their own bodies.

October 23 - Kinuko Masaki, SmartEar 

View Abstract

With the proliferation of voice assistants (e.g. Apple's Siri, Google's Now, Amazon's Alexa, Microsoft's Cortana) and “smart” speakers (e.g. Amazon's Echo, Google's Home, Apple's HomePod), people are realizing that "voice is the next frontier of computing". Voice allows for efficient and hands-free communication. However, for a voice-first device to truly replace smartphones, a couple of technological advancements have to be made. In particular, we need to 1) be able to pick up the user's voice commands even in loud environments and in the presence of many interfering speakers, 2) understand the user's request even when spoken in a natural conversational way, and 3) respond back to the user in a very natural and human way. This talk will articulate how advancements in artificial intelligence/deep learning, digital signal processing, and acoustics are addressing these issues and helping to make voice computing a reality.

October 30 - Hantrakul Lamtharn, Zach Kondak

November 6 - Ganesh Somesh, Hongzhao Guan, Agneya Kerure

November 13 - Masataka Goto, AIST, Japan

View Abstract

Music technologies will open the future up to new ways of enjoying music, both in terms of music creation and music appreciation. In this seminar talk, I will introduce the frontiers of music technologies by showing some practical research examples, which have already been made into commercial products or made open to the public, to demonstrate how end users can benefit from singing synthesis technologies, music understanding technologies, and music interfaces.

From the viewpoint of music creation, I will demonstrate a singing synthesis system, VocaListener (https://staff.aist.go.jp/t.nakano/VocaListener/), and a robot singer system, VocaWatcher (https://staff.aist.go.jp/t.nakano/VocaWatcher/). I will also introduce the world's first culture in which people actively enjoy songs with synthesized singing voices as the main vocals, a culture that has been emerging in Japan since singing synthesis software such as Hatsune Miku, based on VOCALOID, began attracting attention in 2007. Singing synthesis thus breaks down the long-cherished view that listening to a non-human singing voice is worthless. This is a feat that could not have been imagined before. In the future, other long-cherished views could also be broken down.

As for music appreciation, I will demonstrate a web service for active music listening, "Songle" (http://songle.jp), that has analyzed more than 1,100,000 songs on music- or video-sharing services and facilitates deeper understanding of music. Songle is used to provide a web-based multimedia development framework, "Songle Widget" (http://widget.songle.jp), that makes it easy to develop web-based applications with rigid music synchronization by leveraging music-understanding technologies. Songle Widget enables users to control computer-graphic animation and physical devices such as lighting devices and robot dancers in synchronization with music available on the web. I will then demonstrate a web service for large-scale music browsing, "Songrium" (http://songrium.jp), that allows users to explore music while seeing and utilizing various relations among more than 780,000 music video clips on video-sharing services. Songrium has a three-dimensional visualization function that shows music-synchronized animation, which has already been used as a background movie in a live concert of Hatsune Miku.

November 20 - Takumi Ogatan, Vinod Subramanian, Liu Hanyu

November 27 - Rupak Vignesh, Zichen Wang, Zhao Yan

 

Spring 2017 Seminars

January 9: Mike Winters, Center for Music Technology Ph.D. student

View Abstract

Working with human participants is an important part of evaluating your work. However, it is not always easy to know what is and is not ethical, as several factors must be considered. In this talk, I will discuss ethical issues of using human participants for research, from the Belmont Report to submitting an IRB protocol. I will also consider the ethical issues in the projects I have worked on in the past year, including a system for image accessibility.

January 23: Mark Riedl, an associate professor in the School of Interactive Computing and director of the Entertainment Intelligence Lab

View Abstract

Computational creativity is the art, science, philosophy, and engineering of computational systems that exhibit behaviors that unbiased observers would deem to be creative. We have recently seen growth in the use of machine learning to generate visual art and music. In this talk, I will overview my research on generating playable computer games. Unlike art and music, games are dynamical systems in which the user chooses how to engage with the content in a virtual world, posing new challenges and opportunities. The presentation will cover machine learning for game level generation and story generation as well as broader questions of defining creativity.

January 30: Elizabeth Margulis, professor and director of the Music Cognition Lab at the University of Arkansas

View Abstract

This talk introduces a number of behavioral methodologies for understanding the kinds of experiences people have while listening to music. It explores the ways these methodologies can illuminate experiences that are otherwise difficult to talk about. Finally, it assesses the potential and the limitations of using science to understand complex cultural phenomena.

February 6: Martin Norgaard, assistant professor of music education at Georgia State University

View Abstract

In our recent pilot study, middle school concert band students who received instruction in musical improvisation showed far-transfer enhancements in some areas of executive function related to inhibitory control and cognitive flexibility compared to other students in the same ensemble. Why does improvisation training enhance executive function over and above standard music experience? Music improvisation involves the ability to adapt and integrate sounds and motor movements in real-time, concatenating previously stored motor sequences in order to flexibly produce the desired result, in this case, a particular auditory experience. The output of improvisation must then be evaluated by the musician in real time based on internal goals and the external environment, which may lead to the improviser modifying subsequent motor acts. I explore how developing these processes could cause the observed far-transfer effects by reviewing our previous qualitative and quantitative research as well as significant theoretical frameworks related to musical improvisation.

February 13: Chris Howe, Project Engineer at Moog Music

View Abstract

Chris Howe is a project engineer at Moog Music where he helps create new musical tools to inspire creativity. He will be discussing his role as embedded systems designer on the Global Modular project, a collaboration with artist Yuri Suzuki which explores globalization through crowd-sourced sampling, convolution reverb, and spectral morphing.

February 20: Michael Casey, Professor of Music and Computer Science at Dartmouth

View Abstract

Our goal is to build brain-computer interfaces that can capture the sound in the mind's ear and render it for others to hear. While this type of mind reading sounds like science fiction, recent work by computer scientists and neuroscientists (Nishimoto et al., 2011; Haxby et al., 2014) has shown that visual features corresponding to subjects' perception of images and movies can be predicted from brain imaging data alone (fMRI). We present our research on learning stimulus encoding models of music audio from human brain imaging, for both perception and imagination of the stimuli (Casey et al., 2012; Hanke et al., 2015; Casey 2017). To encourage further development of such neural decoding methods, the code, stimuli, and high-resolution 7T fMRI data from one of our experiments have been publicly released via the OpenfMRI initiative.

Prof. Casey and Neukom Fellow Dr. Gus Xia will also discuss the Neukom Institute's 2017 Turing Test in Human-Computer Music Interaction, comprising several performance tasks in instrumental music and dance. Competitors are asked to create artificial performers capable of performing “duets” with human performers, possibly in real time.

Gus Xia, Neukom Postdoctoral Fellow at Dartmouth

View Abstract

Expressive Human-Computer Music Interaction

In this talk, Gus will present various techniques for endowing automatic accompaniment systems with musical expression, including nuanced timing and dynamics deviations, humanoid robotic facial and gestural expression, and basic improvisation techniques. He will also promote the 2017 "Turing Test for Creative Art," which was initiated at Dartmouth College and this year includes a new track on human-computer music performance. For more information, please visit http://bregman.dartmouth.edu/turingtests/.

February 27: Roxanne Moore, Research Engineer II at Georgia Tech

View Abstract

There are a lot of ideas out there about how to "fix" education in the United States, particularly in K-12. However, new innovations are constantly met with the age-old question: Does it work? Different stakeholders have different definitions of what it means to "work" and each of those definitions has unique measurement and assessment challenges. In this talk, we'll look at different ways of answering the "Does it work?" question in the context of different education innovations that I've personally worked on. We'll look at the innovations themselves and the research methods used to assess whether or not those innovations "work." We'll also take a complex systems view of schools, including some systems dynamics models of school settings, to better understand the challenges and opportunities in K-12 education.

March 6: Klimchak, Artist

View Abstract

Klimchak will discuss his musical compositional methods, which involve the intersection of home-built instruments, low- and high-tech sound manipulation, and live performance. He will perform two pieces: WaterWorks (2004), for a large bowl of amplified water, and Sticks and Tones (2016), for frame drum, melodica, and laptop.

March 13: Annie Zhan, Software Engineer at Pandora

View Abstract

Music technology has been playing an increasingly important role in academic and industrial research and development. At Pandora, we conduct lots of research around intelligent systems, machine listening, and recommendation systems. How is music information retrieval used in industry? What are the key successes and challenges? This talk will cover several of my graduate research projects in MIR (music mood detection, the Shimi band), as well as the audio fingerprinting duplicate-detection system and the music recommendation systems developed at Pandora.
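As background for the fingerprinting portion of the talk, the sketch below shows a generic spectral-peak fingerprinting scheme for flagging near-duplicate recordings. It is a simplified illustration only, with invented function names and parameters, and does not represent Pandora's production system.

```python
# Illustrative sketch of fingerprint-based duplicate detection (a generic
# spectral-peak approach, not Pandora's production system). Two recordings
# that share many hashed peak pairs are flagged as likely duplicates.
import numpy as np

def fingerprints(signal, frame=2048, hop=1024, fan_out=5):
    """Return a set of (band1, band2, frame_gap) hashes for one recording."""
    window = np.hanning(frame)
    peaks = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        peaks.append(int(np.argmax(spectrum)))      # strongest bin per frame
    hashes = set()
    for i, p1 in enumerate(peaks):
        for j in range(1, fan_out + 1):             # pair with nearby peaks
            if i + j < len(peaks):
                hashes.add((p1, peaks[i + j], j))
    return hashes

def similarity(sig_a, sig_b):
    """Fraction of shared hashes, relative to the smaller fingerprint set."""
    fa, fb = fingerprints(sig_a), fingerprints(sig_b)
    return len(fa & fb) / max(1, min(len(fa), len(fb)))

if __name__ == "__main__":
    sr = 22050
    t = np.linspace(0, 3, 3 * sr, endpoint=False)
    song = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
    noisy_copy = song + 0.01 * np.random.randn(len(song))
    print("similarity:", round(similarity(song, noisy_copy), 3))
```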

March 27: Avrosh Kumar, Nikhil Bhanu

View Abstracts

Avrosh's Abstract: The focus of this project is to develop a DAW (digital audio workstation) interface to aid audio mixing in virtual reality. The application loads an Ableton Live session and creates a representation of it in virtual reality, taking advantage of depth and a wider field of vision. This provides a way for audio engineers to look at the mix, visualize panning and frequency spectra from a new perspective, and interact with the DAW controls using gestures.

Nikhil's Abstract: Astral Plane is an object-based spatial audio system for live performances and improvisation. It employs Higher-Order Ambisonics and is built using Max/MSP with Ableton Live users in mind. The core idea is to create and apply metadata to sound objects (audio tracks in Live) in real time, at signal rate. This metadata includes object origin, position, trajectory, speed of motion, mappings, etc. The novel features include interactive touch and gesture control via an iPad interface, continuous/one-shot geometric trajectories and patterns, sync with Live via Ableton Link, and automatic spatialization driven by audio features extracted in real time. The motivations are to explore the capability of stationary and moving sounds in 2D space and to assess the perceptibility of various trajectories and interaction paradigms in terms of musicality. The aim is to enable artists and DJs to engage in soundscape composition, tension building and release, and storytelling. Project source and additional information is available on GitHub.
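For readers unfamiliar with Ambisonics, the sketch below shows first-order (B-format) encoding and a basic quad decode for a single sound object whose azimuth changes over time. It is a deliberately simplified stand-in, written in Python rather than Max/MSP, for the Higher-Order Ambisonics processing Astral Plane performs; all names and speaker angles are illustrative.

```python
# Minimal first-order Ambisonics (B-format) encoding sketch in the horizontal
# plane. A simplified stand-in for the Higher-Order Ambisonics processing that
# Astral Plane performs in Max/MSP, shown only to illustrate how an object's
# azimuth metadata can drive spatialization at signal rate.
import numpy as np

def encode_first_order(mono, azimuth_rad):
    """Encode a mono signal at a given azimuth into W/X/Y channels (2D FOA)."""
    w = mono / np.sqrt(2.0)              # omnidirectional component
    x = mono * np.cos(azimuth_rad)       # front-back component
    y = mono * np.sin(azimuth_rad)       # left-right component
    return np.stack([w, x, y])

def decode_to_quad(bformat):
    """Simple sampling decode of W/X/Y to speakers at 45/135/225/315 degrees."""
    speakers = np.deg2rad([45, 135, 225, 315])
    w, x, y = bformat
    gains = [(w / np.sqrt(2.0) + x * np.cos(a) + y * np.sin(a)) * 0.5
             for a in speakers]
    return np.stack(gains)

if __name__ == "__main__":
    sr = 48000
    t = np.linspace(0, 1, sr, endpoint=False)
    tone = np.sin(2 * np.pi * 330 * t)
    azimuth = np.pi * t                  # object sweeps 180 degrees in 1 second
    feeds = decode_to_quad(encode_first_order(tone, azimuth))
    print(feeds.shape)                   # (4, 48000): one feed per speaker
```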

April 3: Shi Cheng, Hua Xiao

April 10: Milap Rane, Sirish Satyavolu

April 17: Jonathan Wang, Shijie Wang

View Abstracts

Jonathan's Abstract: The focus of this project in vocal acoustics is vocal health and studying the effects of vocal disorders on the acoustic output of the human voice. For many professionals, growths on the vocal folds alter their oscillatory motion and ultimately affect the sound of their voice as well as their health. However, most people with voice disorders do not seek medical attention or treatment. My project aims to create a preliminary diagnosis tool by comparing the recording of a patient’s voice with other voice recordings.

April 24: Brandon Westergaard, Amruta Vidwans

View Abstracts

Amruta’s Abstract: Dereverberation is an important pre-processing step for audio signal processing and a critical step for speech recognition and music information retrieval (MIR) tasks. It has been a well-researched topic for speech signals, but those methods cannot be directly applied to music signals. In the previous semester, existing speech-based dereverberation algorithms were evaluated on music signals. This semester, the focus is on using machine learning to perform music dereverberation. This project will be useful for MIR tasks and for audio engineers who want to obtain dry recordings.
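To make the machine-learning framing concrete, the sketch below fits a simple linear mapping from reverberant magnitude-spectrogram frames to their dry counterparts on synthetic paired data. This is a generic supervised baseline written for illustration, with invented parameters; it is not the specific method pursued in the project.

```python
# A generic supervised-learning sketch for dereverberation (illustrative only;
# not the specific approach used in this project). A linear model is fit to map
# reverberant magnitude-spectrogram frames back to their dry counterparts.
import numpy as np

def stft_mag(x, frame=512, hop=256):
    """Magnitude spectrogram as an (n_frames, n_bins) array."""
    win = np.hanning(frame)
    frames = [np.abs(np.fft.rfft(x[i:i + frame] * win))
              for i in range(0, len(x) - frame, hop)]
    return np.array(frames)

def simulate_reverb(dry, ir_len=2000, sr=16000):
    """Synthetic exponentially decaying impulse response, for demo data only."""
    rng = np.random.default_rng(0)
    ir = rng.standard_normal(ir_len) * np.exp(-np.arange(ir_len) / (0.05 * sr))
    wet = np.convolve(dry, ir)[:len(dry)]
    return wet / np.max(np.abs(wet))

if __name__ == "__main__":
    sr = 16000
    t = np.linspace(0, 8, 8 * sr, endpoint=False)
    dry = np.sin(2 * np.pi * 220 * t) * np.sign(np.sin(2 * np.pi * 2 * t))
    wet = simulate_reverb(dry, sr=sr)

    X, Y = stft_mag(wet), stft_mag(dry)          # paired wet/dry training frames
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # linear "dereverb" mapping
    estimate = X @ W                             # predicted dry magnitudes
    err = np.mean((estimate - Y) ** 2) / np.mean(Y ** 2)
    print(f"relative reconstruction error: {err:.3f}")
```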

May 1: Tyler White, Lea Ikkache

View Abstracts

Tyler's Abstract: My project is the Robotic Drumming Third Arm. The main goals and motivations are to explore how a shared control paradigm between a human drummer and wearable robotics can influence and potentially enhance a drummer's performances and capabilities. A wearable system allows us to examine interaction beyond the visual and auditory interaction explored in non-wearable robotic systems such as Shimon or systems that attach actuators directly to the drums. My contributions to this project include a sensor fusion system, data filtering and smoothing methods, a custom PCB that I designed and fabricated, custom firmware and hardware for communication from the Arduino to Max/MSP, advanced stabilization techniques for two moving bodies, and high-level musical interactivity programs for performance.
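As an illustration of what a basic sensor-fusion stage can look like, the sketch below implements a standard complementary filter that blends gyroscope and accelerometer readings into a smoothed angle estimate. It is a generic textbook example with made-up signals, not the project's actual firmware or filtering code.

```python
# Generic complementary-filter sketch for fusing gyroscope and accelerometer
# readings into a joint-angle estimate. An illustration of the kind of sensor
# fusion and smoothing a wearable arm needs, not the project's actual code.
import math

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """gyro_rates: angular velocity (rad/s); accel_angles: gravity-based angle (rad)."""
    angle = accel_angles[0]
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # integrate the gyro for short-term accuracy, lean on the
        # accelerometer to correct long-term drift
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

if __name__ == "__main__":
    dt = 0.01
    true_angle = [0.5 * math.sin(0.5 * i * dt) for i in range(1000)]
    gyro = [(true_angle[i] - true_angle[i - 1]) / dt + 0.02   # drifting gyro
            for i in range(1, 1000)]
    accel = [a + 0.05 * math.sin(40 * i * dt)                 # noisy accelerometer
             for i, a in enumerate(true_angle[1:])]
    fused = complementary_filter(gyro, accel, dt)
    print(f"final error: {abs(fused[-1] - true_angle[-1]):.3f} rad")
```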

Lea's Abstract: This project revolves around a sound exhibition called Memory Palace. The application uses indoor localization systems to create a 3D sound "library" in the exhibition space. Using their smartphones, users can record sounds (musical or spoken) and place them in space. When a phone hovers near a sound someone has placed, it will play that sound. This application, which is based on Web Audio and whose development started at IRCAM, aims to make users reflect on the subject of memory and play with sounds and space.