Past Seminars

Spring 2018 Seminars

The Georgia Tech Center for Music Technology Spring Seminar Series features both invited speakers and second-year student project presentations. The seminars are held on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule of invited speakers and student presentations for spring 2018:

January 8 - Matt Craney

View Abstract

Humanoids, Robotics, and Design for Disability
Robotics is going through a Cambrian explosion as barriers to development are reduced, power density and computation capacity increase, and controls advance. Applying these advancements, we are simultaneously making incremental and explosive steps forward in the development of powered prostheses. This talk will give an overview of some of the work happening at the MIT Media Lab Biomechatronics group, including optogenetics, new amputation paradigms, computational socket design, and, of course, robotic legs. I will present some concepts from my core research project that I intend to apply to my collaboration with Gil Weinberg and the Robotic Musicianship program; I will talk through the development techniques I use for a multi-degree-of-freedom robotic prosthetic leg for above-knee amputees. All of this work will be framed by some of my previous work in robotic assembly of discrete cellular lattices (digital fabrication), humanoid robotics, product design, and advanced Solidworks modeling techniques.

January 15 - MLK Day (No Seminar)

January 22 - Frank Hammond, Center for Robotics and Intelligent Machines, Georgia Tech

View Abstract

The field of human augmentation has become an increasingly popular research area as capabilities in human-machine interfacing and robot manufacturing evolve. Novel technologies in wearable sensors and 3D printing have enabled the development of more sophisticated augmentation devices, including teleoperated robotic surgery platforms and powered prostheses, with greater speed and economy. Despite these advances, the efficacy and adoption of human augmentation devices have been limited due to several factors, including (1) lack of continuous control and dexterity in robotic end-effectors, (2) poor motion and force coordination and adaptation between robotic devices and humans, and (3) the absence of rich sensory feedback from the robotic devices to the human user. My research leverages techniques in soft machine fabrication, robotic manipulation, and mechanism design to arrive at human augmentation solutions which address these issues from a methodological perspective. This talk will highlight aspects of our design methodology, including the experimental characterization of human manipulation capabilities, the design of mechanisms and control strategies for improved human-robot cooperation, and new efforts to enable virtual proprioception in robotic devices – a capability which could allow humans to perceive and control robotic augmentation devices as if they were parts of their own bodies.

January 29 - Astrid Bin

View Abstract

Much has been written about digital musical instruments (DMIs) from the performer's perspective, but there has been comparatively little study of the audience's perspective. My PhD research investigated the audience experience of error in DMI performance - a playing tradition that is radically experimental and rule-breaking, leading some to suggest that errors aren't even possible. In this research I studied live audiences using a combined methodology of post-hoc and live data, the latter collected via a system I designed specifically for this purpose called Metrix. In this seminar I present this methodology, as well as some of the insights that resulted from this research on how audiences experience DMI performance and how they perceive error in this context.

February 5 - Deantoni Parks 

View Abstract

Deantoni Parks is one of the finest drummers working today, displaying a sleek, intuitive balance between raw rhythmic physicality and machine-like precision. His abilities have led him to collaborations with the likes of John Cale, Sade, the Mars Volta, and Flying Lotus, as well as a teaching stint at the Berklee College of Music.
In this workshop, Deantoni Parks will explore how musicians can augment their natural talents with technology, adopting its benefits to fuel their own vision. According to Parks, "The relationship between music and technology is always evolving, but true music cannot exist without a soul." From this philosophical starting point, Parks will engage with attendees to seek out where an equilibrium between human and machine expression lies.

February 12 - Tanner Legget (Mandala) and Daniel Kuntz (Crescendo)

February 19 - Minoru "Shino" Shinohara - Human Neuromuscular Physiology Lab, Georgia Tech

February 26 - Michael Nitsche, School of Literature, Media, and Communication, Georgia Tech

March 5 - Guthman Preparation

March 12 - Guthman Lessons Learned and Zach

March 19 - Spring Break (No Seminar)

March 26 - Somesh, Hanoi

April 2 - Hongzhao, Agneya

April 9 - Vinod, Takumi

April 16 - Rupak, Hanyu

April 23 - Jyoti, Henry, Joe

 

Fall 2017 Seminars

The Georgia Tech Center for Music Technology Fall Seminar Series features both invited speakers and second-year student project proposal presentations. The seminars are held on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule of invited speakers and student presentations for fall 2017:

August 21 - First day of classes; no seminar. View the eclipse instead.

August 28 - Valorie Salimanpoor, Baycrest Institute

View Abstract

Music is merely a sequence of sounds, each of which contains no independent reward value, yet when arranged into a sequence it is perceived as intensely pleasurable in the brain. How does a transient and fleeting sequence of sounds bring us to tears or trigger strong physiological reactions like chills? Although music has no clear survival value, it has been a fundamental part of humanity, existing as far back as history dates (the prehistoric era) and developing spontaneously in every recorded culture. In this talk I present brain imaging research to show how sophisticated cognitive functions integrate to give rise to musical pleasure and why you cannot find two people in the world with the exact same taste in music.

September 4 - Labor Day (no seminar)

September 11 - Robert Hatcher, soundcollide

View Abstract

Soundcollide’s mission is to disrupt the music creation process by breaking down barriers to collaboration, such as physical proximity. Our music technology platform will predict and enhance the performance of its users by streamlining the process of music creation and production. It accomplishes this task by increasing the amount of music that artists can create through an immersive collaborative experience where multiple users work concurrently.

While the application is in use, soundcollide's machine learning tools will analyze and compile a history of artists' procedural preferences, enabling artists to discover the most compatible connections for future recording and production collaborations. Our intent is to host the largest number of concurrent users, thus increasing the accuracy and reach of our machine-learning technology.

September 18 - Clint Zeagler, Wearable Computing Center, Georgia Tech

View Abstract

Working on a wearable technology interdisciplinary project team can be challenging because of a lack of shared understanding between different fields and a lack of ability in cross-disciplinary communication. We describe an interdisciplinary collaborative design process used for creating a wearable musical instrument with a musician. Our diverse team used drawing and example artifacts/toolkits to overcome communication barriers and gaps in knowledge. We view this process in the frame of Susan Leigh Star's description of a boundary object, and against a similar process used in another musical/computer science collaboration with the group Duran Duran. More information available here.

September 25 - Brian Magerko, School of Literature, Media, and Communication, Georgia Tech

View Abstract

In a future that increasingly sees intelligent agents involved in our education, workforce, and homes, it is unclear how human productivity fits into that unfolding landscape. An ideal outcome would be one that draws both on human ingenuity, creativity, and problem-solving (and problem-definition) capabilities and on the affordances of computational systems. This talk will explore the notion of "co-creative" relationships between humans and AI as one place where such a path might be found.

October 2 - Carrie Bruce, School of Interactive Computing, Georgia Tech

View Abstract

When designing for variation in human ability, accessibility is frequently the measure of success. Although accessible design can make it easier for an individual to gain access to spaces, products, and information, the emphasis is often solely on impairment and its mitigation as the basis for design. Thus, we end up with “handicapped parking”, assistive technology, and other specialized designs that can be exclusionary and do not necessarily address an individual’s participation needs – or their engagement that sustains personal identity, supports context-related motivations, and promotes inclusion. While interactive technologies have the potential to enable this type of participation, there is a lack of evidence-based design that demonstrates how this can be accomplished. My work focuses on operationalizing and designing for participation through building an evidence base and encouraging research-driven practice. I will discuss a few projects related to participation, including the Accessible Aquarium Project. I will show how we can perform user-centered, research-driven practice in designing interactive spaces, products, and information based on access and participation needs.

October 9 - Fall Break

October 16 - Frank Hammond, Center for Robotics and Intelligent Machines, Georgia Tech

View Abstract

The field of human augmentation has become an increasingly popular research area as capabilities in human-machine interfacing and robot manufacturing evolve. Novel technologies in wearable sensing and 3D printing have enabled the development of more sophisticated augmentation devices, including teleoperated robotic surgery platforms and powered prostheses. Despite these advances, the efficacy and adoption of human augmentation devices have been limited due to several factors, including (1) lack of continuous control and dexterity in robotic end-effectors, (2) poor motion and force coordination and adaptation between robotic devices and humans, and (3) the absence of rich sensory feedback from the robotic devices to the human user. My research leverages techniques in soft robot fabrication, wearable sensing systems, and non-anthropomorphic design strategies to arrive at human augmentation solutions which address the issues of device form and function from a methodological perspective. In this talk, I will highlight aspects of our powered prosthesis design methodology, including (1) the experimental characterization of human manipulation capabilities, (2) the design of mechanisms and control strategies for improved human-robot cooperation, and (3) new efforts to enable the neurointegration of robotic manipulation devices – a capability which could allow humans to perceive and control powered prostheses and extra-limb robots as if they were parts of their own bodies.

October 23 - Kinuko Masaki, SmartEar 

View Abstract

With the proliferation of voice assistants (e.g. Apple's Siri, Google's Now, Amazon's Alexa, Microsoft's Cortana) and "smart" speakers (e.g. Amazon's Echo, Google's Home, Apple's HomePod), people are realizing that "voice is the next frontier of computing". Voice allows for efficient and hands-free communication. However, for a voice-first device to truly replace smartphones, a couple of technological advancements have to be made. In particular, we need to (1) be able to pick up the user's voice commands even in loud environments and in the presence of many interfering speakers, (2) understand the user's request even when spoken in a natural, conversational way, and (3) respond to the user in a very natural and human way. This talk will articulate how advancements in artificial intelligence/deep learning, digital signal processing, and acoustics are addressing these issues and helping to make voice computing a reality.
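
As a rough illustration of challenge (1), the sketch below shows a textbook delay-and-sum beamformer in Python (not SmartEar's actual approach; the function name and parameters are illustrative): each microphone channel is delayed so that sound arriving from a chosen direction adds coherently while off-axis interference partially cancels.

```python
import numpy as np

def delay_and_sum(mic_signals, sr, mic_spacing_m, steer_angle_deg, speed_of_sound=343.0):
    """Steer a uniform linear microphone array toward steer_angle_deg by
    delaying each channel so the target direction adds coherently.
    mic_signals has shape (n_mics, n_samples)."""
    n_mics, n_samples = mic_signals.shape
    angle = np.deg2rad(steer_angle_deg)
    output = np.zeros(n_samples)
    for m in range(n_mics):
        # Arrival-time difference of mic m relative to mic 0 for a far-field source.
        delay_s = m * mic_spacing_m * np.sin(angle) / speed_of_sound
        shift = int(round(delay_s * sr))
        # np.roll wraps at the edges, which is acceptable for a short sketch.
        output += np.roll(mic_signals[m], -shift)
    return output / n_mics
```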

October 30 -  Hantrakul Lamtharn, Zach Kondak

November 6 - Ganesh Somesh, Hongzhao Guan, Agneya Kerure

November 13 - Masataka Goto, AIST, Japan

View Abstract

Music technologies will open the future up to new ways of enjoying music, both in terms of music creation and music appreciation. In this seminar talk, I will introduce the frontiers of music technologies by showing some practical research examples, which have already been made into commercial products or released to the public, to demonstrate how end users can benefit from singing synthesis technologies, music understanding technologies, and music interfaces.

From the viewpoint of music creation, I will demonstrate a singing synthesis system, VocaListener (https://staff.aist.go.jp/t.nakano/VocaListener/), and a robot singer system, VocaWatcher (https://staff.aist.go.jp/t.nakano/VocaWatcher/). I will also introduce the world's first culture in which people actively enjoy songs with synthesized singing voices as the main vocals, which has been emerging in Japan since singing synthesis software such as Hatsune Miku, based on VOCALOID, began attracting attention in 2007. Singing synthesis thus breaks down the long-held view that listening to a non-human singing voice is worthless. This is a feat that could not have been imagined before. In the future, other long-held views could also be broken down.

As for music appreciation, I will demonstrate a web service for active music listening, "Songle" (http://songle.jp), that has analyzed more than 1,100,000 songs on music- or video-sharing services and facilitates deeper understanding of music. Songle is used to provide a web-based multimedia development framework, "Songle Widget" (http://widget.songle.jp), that makes it easy to develop web-based applications with rigid music synchronization by leveraging music-understanding technologies. Songle Widget enables users to control computer-graphic animation and physical devices such as lighting devices and robot dancers in synchronization with music available on the web. I will then demonstrate a web service for large-scale music browsing, "Songrium" (http://songrium.jp), that allows users to explore music while seeing and utilizing various relations among more than 780,000 music video clips on video-sharing services. Songrium has a three-dimensional visualization function that shows music-synchronized animation, which has already been used as a background movie in a live concert of Hatsune Miku.

November 20 - Takumi Ogatan, Vinod Subramanian, Liu Hanyu

November 27 - Rupak Vignesh, Zichen Wang, Zhao Yan

 

Spring 2017 Seminars

January 9: Mike Winters, Center for Music Technology Ph.D. student

View Abstract

Working with human participants is an important part of evaluating your work. However, it is not always easy to know what is ethical and what is not, as several factors must be considered. In this talk, I will discuss the ethical issues of using human participants in research, from the Belmont Report to submitting an IRB protocol. I will also consider the ethical issues in the projects I have worked on in the past year, including a system for Image Accessibility.

January 23: Mark Riedl, an associate professor in the School of Interactive Computing and director of the Entertainment Intelligence Lab

View Abstract

Computational creativity is the art, science, philosophy, and engineering of computational systems that exhibit behaviors that unbiased observers would deem to be creative. We have recently seen growth in the use of machine learning to generate visual art and music. In this talk, I will give an overview of my research on generating playable computer games. Unlike art and music, games are dynamical systems in which the user chooses how to engage with the content in a virtual world, posing new challenges and opportunities. The presentation will cover machine learning for game level generation and story generation as well as broader questions of defining creativity.

January 30: Lisa Margulis, professor and director of the Music Cognition Lab at the University of Arkansas

View Abstract

This talk introduces a number of behavioral methodologies for understanding the kinds of experiences people have while listening to music. It explores the ways these methodologies can illuminate experiences that are otherwise difficult to talk about. Finally, it assesses the potential and the limitations of using science to understand complex cultural phenomena.

February 6: Martin Norgaard, assistant professor of music education at Georgia State University

View Abstract

In our recent pilot study, middle school concert band students who received instruction in musical improvisation showed far-transfer enhancements in some areas of executive function related to inhibitory control and cognitive flexibility compared to other students in the same ensemble. Why does improvisation training enhance executive function over and above standard music experience? Music improvisation involves the ability to adapt and integrate sounds and motor movements in real-time, concatenating previously stored motor sequences in order to flexibly produce the desired result, in this case, a particular auditory experience. The output of improvisation must then be evaluated by the musician in real time based on internal goals and the external environment, which may lead to the improviser modifying subsequent motor acts. I explore how developing these processes could cause the observed far-transfer effects by reviewing our previous qualitative and quantitative research as well as significant theoretical frameworks related to musical improvisation.

February 13: Chris Howe, Project Engineer at Moog Music

View Abstract

Chris Howe is a project engineer at Moog Music where he helps create new musical tools to inspire creativity. He will be discussing his role as embedded systems designer on the Global Modular project, a collaboration with artist Yuri Suzuki which explores globalization through crowd-sourced sampling, convolution reverb, and spectral morphing.

February 20: Michael Casey, Professor of Music and Computer Science at Dartmouth

View Abstract

Our goal is to build brain-computer interfaces that can capture the sound in the mind's ear and render it for others to hear. While this type of mind reading sounds like science fiction, recent work by computer scientists and neuroscientists (Nishimoto et al., 2011; Haxby et al., 2014) has shown that visual features corresponding to subjects' perception of images and movies can be predicted from brain imaging data alone (fMRI). We present our research on learning stimulus encoding models of music audio from human brain imaging, for both perception and imagination of the stimuli (Casey et al., 2012; Hanke et al., 2015; Casey 2017). To encourage further development of such neural decoding methods, the code, stimuli, and high-resolution 7T fMRI data from one of our experiments have been publicly released via the OpenfMRI initiative.

Prof. Casey and Neukom Fellow Dr. Gus Xia will also discuss the Neukom Institute's 2017 Turing Test in Human-Computer Music Interaction, comprising several performance tasks in instrumental music and dance. Competitors are asked to create artificial performers capable of performing “duets” with human performers, possibly in real time.
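
As a toy illustration of the encoding-model recipe described above (and not the specific pipeline of Casey et al.), the sketch below fits a regularized linear regression from stimulus features to voxel responses and scores it by per-voxel correlation; all data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-ins for audio features per fMRI volume and per-voxel responses.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))                                   # (n_volumes, n_audio_features)
Y = X @ rng.standard_normal((40, 500)) + 0.5 * rng.standard_normal((200, 500))  # (n_volumes, n_voxels)

train, test = slice(0, 150), slice(150, 200)
encoder = Ridge(alpha=10.0).fit(X[train], Y[train])                  # one linear model per voxel
pred = encoder.predict(X[test])

# Score each voxel by the correlation between predicted and held-out responses.
scores = [np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(Y.shape[1])]
print(f"median held-out voxel correlation: {np.median(scores):.2f}")
```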

Gus Xia, Neukom Postdoctoral Fellow at Dartmouth

View Abstract

Expressive Human-Computer Music Interaction: In this talk, Gus will present a variety of techniques for incorporating musical expression into automatic accompaniment systems, including nuanced timing and dynamics deviations, humanoid robotic facial and gestural expression, and basic improvisation techniques. He will also promote the 2017 "Turing Test for Creative Art," which was initiated at Dartmouth College and this year contains a new track on human-computer music performance. For more information, please visit http://bregman.dartmouth.edu/turingtests/.

February 27: Roxanne Moore, Research Engineer II at Georgia Tech

View Abstract

There are a lot of ideas out there about how to "fix" education in the United States, particularly in K-12. However, new innovations are constantly met with the age-old question: Does it work? Different stakeholders have different definitions of what it means to "work" and each of those definitions has unique measurement and assessment challenges. In this talk, we'll look at different ways of answering the "Does it work?" question in the context of different education innovations that I've personally worked on. We'll look at the innovations themselves and the research methods used to assess whether or not those innovations "work." We'll also take a complex systems view of schools, including some systems dynamics models of school settings, to better understand the challenges and opportunities in K-12 education.

March 6: Klimchak, Artist

View Abstract

Klimchak will discuss his musical compositional methods, which involve the intersection of home-built instruments, low- and high-tech sound manipulation, and live performance. He will perform two pieces: WaterWorks (2004), for a large bowl of amplified water, and Sticks and Tones (2016), for frame drum, melodica, and laptop.

March 13—Annie Zhan, Software Engineer at Pandora

View Abstract

Music technology has been playing an increasingly important role in academic and industrial research and development. At Pandora, we conduct a great deal of research on intelligent systems, machine listening, and recommendation systems. How is music information retrieval used in industry? What are the key successes and challenges? This talk will cover several of my graduate research projects in MIR (music mood detection, the Shimi band), as well as the audio fingerprinting duplicate-detection system and music recommendation systems developed at Pandora.
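
As a hedged sketch of how a constellation-style fingerprint can flag duplicate tracks (the textbook Shazam-like approach, not Pandora's actual system; names and thresholds below are illustrative):

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import maximum_filter

def fingerprint(audio, sr, neighborhood=20, fan_out=5):
    """Turn a mono signal into a set of hashes built from pairs of
    nearby spectrogram peaks (a 'constellation map')."""
    _, _, spec = spectrogram(audio, fs=sr, nperseg=2048, noverlap=1024)
    log_spec = 10 * np.log10(spec + 1e-10)
    peaks_mask = (maximum_filter(log_spec, size=neighborhood) == log_spec) & \
                 (log_spec > log_spec.mean())
    peaks = np.argwhere(peaks_mask)          # rows of (freq_bin, frame)
    peaks = peaks[np.argsort(peaks[:, 1])]   # sort by time frame
    hashes = set()
    for i, (f1, t1) in enumerate(peaks):
        for f2, t2 in peaks[i + 1:i + 1 + fan_out]:
            hashes.add((int(f1), int(f2), int(t2 - t1)))
    return hashes

def likely_duplicates(hashes_a, hashes_b, threshold=0.2):
    """Call two tracks duplicates if they share a large fraction of hashes."""
    overlap = len(hashes_a & hashes_b)
    return overlap / max(min(len(hashes_a), len(hashes_b)), 1) > threshold
```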

March 27—Avrosh Kumar, Nikhil Bhanu

View Abstracts

Avrosh's Abstract: The focus of this project is to develop a DAW (digital audio workstation) interface to aid audio mixing in virtual reality. The application loads an Ableton Live session and creates a representation of it in virtual reality, taking advantage of depth and a wider field of view. This gives audio engineers a way to look at the mix, visualize panning and frequency spectra from a new perspective, and interact with the DAW controls using gestures.

Nikhil's Abstract: Astral Plane is an object-based spatial audio system for live performances and improvisation. It employs Higher-Order Ambisonics and is built using Max/MSP with Ableton Live users in mind. The core idea is to create and apply metadata to sound objects (audio tracks in Live) in real time, at signal rate. This metadata includes object origin, position, trajectory, speed of motion, mappings, etc. The novel features include interactive touch and gesture control via an iPad interface, continuous and one-shot geometric trajectories and patterns, sync with Live via Ableton Link, and automatic spatialization driven by audio features extracted in real time. The motivations are to explore the capability of stationary and moving sounds in 2D space and to assess the perceptibility and musicality of various trajectories and interaction paradigms. The aim is to enable artists and DJs to engage in soundscape composition, tension building and release, and storytelling. Project source and additional information are available on GitHub.
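
Astral Plane itself is built in Max/MSP; purely as a hedged illustration of the underlying math, the Python sketch below encodes a mono source into traditional first-order B-format (W, X, Y, Z) for a given direction. Higher-Order Ambisonics extends the same idea with additional spherical-harmonic channels.

```python
import numpy as np

def encode_first_order(mono, azimuth_rad, elevation_rad):
    """Encode a mono signal into traditional B-format (W, X, Y, Z)
    for a static source at the given azimuth/elevation."""
    w = mono / np.sqrt(2.0)                                  # omnidirectional component
    x = mono * np.cos(azimuth_rad) * np.cos(elevation_rad)   # front-back
    y = mono * np.sin(azimuth_rad) * np.cos(elevation_rad)   # left-right
    z = mono * np.sin(elevation_rad)                         # up-down
    return np.stack([w, x, y, z])

# Example: a 1 kHz tone placed 45 degrees to the left, on the horizontal plane.
sr = 44100
t = np.arange(sr) / sr
bformat = encode_first_order(0.5 * np.sin(2 * np.pi * 1000 * t), np.deg2rad(45), 0.0)
```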

April 3—Shi Cheng, Hua Xiao

April 10—Milap Rane, Sirish Satyavolu

April 17—Jonathan Wang, Shijie Wang

View Abstracts

Jonathan's Abstract: The focus of this project in vocal acoustics is vocal health, specifically the effects of vocal disorders on the acoustic output of the human voice. For many professionals, growths on the vocal folds alter the folds' oscillatory motion and ultimately affect the sound of their voice as well as their health. However, most people with voice disorders do not seek medical attention or treatment. My project aims to create a preliminary diagnosis tool by comparing a recording of a patient's voice with other voice recordings.
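
One hedged way to sketch such a comparison (not necessarily the project's actual pipeline; a clinical tool would typically add measures such as jitter and shimmer) is to summarize each recording as MFCC statistics and rank reference recordings by distance:

```python
import numpy as np
import librosa

def voice_profile(path, sr=16000):
    """Summarize a recording as the mean and standard deviation of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def rank_references(patient_path, reference_paths):
    """Rank reference recordings by Euclidean distance to the patient's profile."""
    patient = voice_profile(patient_path)
    return sorted((float(np.linalg.norm(patient - voice_profile(p))), p)
                  for p in reference_paths)
```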

April 24—Brandon Westergaard, Amruta Vidwans

View Abstracts

Amruta's Abstract: Dereverberation is an important pre-processing step for audio signal processing and a critical step for speech recognition and music information retrieval (MIR) tasks. It has been a well-researched topic for speech signals, but those methods cannot be directly applied to music signals. Last semester, an evaluation of existing speech-based dereverberation algorithms on music signals was carried out. This semester, the focus is on using machine learning to perform music dereverberation. This project will be useful for MIR tasks and for audio engineers who want to obtain dry recordings.
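
For context, here is a hedged sketch of one classical, non-machine-learning baseline from the speech literature: spectral subtraction of an exponentially decaying late-reverberation estimate. It is illustrative only and not the method this project proposes.

```python
import numpy as np
import librosa

def suppress_late_reverb(y, sr, t60=0.6, delay_s=0.05, gain_floor=0.1):
    """Lebart-style single-channel dereverberation: model the late reverb
    magnitude as a delayed, exponentially decayed copy of the reverberant
    spectrogram and subtract it with a spectral gain."""
    n_fft, hop = 1024, 256
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    mag, phase = np.abs(stft), np.angle(stft)
    delay_frames = max(1, int(delay_s * sr / hop))
    decay = np.exp(-6.9 * delay_s / t60)      # amplitude decay over the delay (60 dB per T60)
    late = np.zeros_like(mag)
    late[:, delay_frames:] = decay * mag[:, :-delay_frames]
    gain = np.maximum(1.0 - late / np.maximum(mag, 1e-8), gain_floor)
    return librosa.istft(gain * mag * np.exp(1j * phase), hop_length=hop)
```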

May 1—Tyler White, Lea Ikkache

View Abstracts

Tyler's Abstract: My project is the Robotic Drumming Third Arm. The main goal is to explore how a shared-control paradigm between a human drummer and wearable robotics can influence and potentially enhance a drummer's performances and capabilities. A wearable system allows us to examine interaction beyond the visual and auditory interaction explored in non-wearable robotic systems such as Shimon or in systems that attach actuators directly to the drums. My contributions to this project include a sensor fusion system, data filtering and smoothing methods, a custom PCB that I designed and fabricated, custom firmware and hardware for communication from the Arduino to Max/MSP, advanced stabilization techniques for two moving bodies, and high-level musical interactivity programs for performance. Watch a short video of the project here.
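
As a hedged illustration of the kind of sensor fusion and smoothing such a wearable needs (not the project's actual filter; the parameters are illustrative), a complementary filter blends a fast-but-drifting gyroscope with a noisy-but-drift-free accelerometer angle:

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope angular rates (rad/s) with accelerometer-derived angles
    (rad) into one smoothed orientation estimate per sample."""
    angle = accel_angles[0]
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro in the short term, the accelerometer in the long term.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * accel_angle
        estimates.append(angle)
    return np.array(estimates)
```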

Lea's Abstract: This project revolves around a sound exhibition called Memory Palace. The application uses indoor localization to create a 3D sound "library" in the exhibition space. Using their smartphones, users can record sounds (musical or spoken) and place them in space. When a phone hovers near a sound someone has placed, it plays that sound. This application, which is based on Web Audio and whose development started at IRCAM, aims to make users reflect on the subject of memory and play with sounds and space.
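
As a hedged sketch of the proximity logic such an app might use (hypothetical names and radius; the real system runs on Web Audio with an indoor localization service), the check below returns the placed sounds within earshot of the listener's estimated position:

```python
import numpy as np

def sounds_in_range(listener_xyz, placed_sounds, radius_m=1.5):
    """Return the ids of placed sounds within radius_m of the listener,
    i.e. the ones that should start playing."""
    listener = np.asarray(listener_xyz, dtype=float)
    return [sound_id for sound_id, position in placed_sounds.items()
            if np.linalg.norm(listener - np.asarray(position, dtype=float)) <= radius_m]

# Example: two sounds pinned in the room; the listener stands near the first.
placed = {"whisper": (0.0, 1.0, 1.5), "melody": (4.0, 0.0, 1.5)}
print(sounds_in_range((0.3, 1.2, 1.5), placed))   # -> ['whisper']
```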