Seminars

CMT Weekly Seminars

Fall 2017 Seminars

The Georgia Tech Center for Music Technology Fall Seminar Series features both invited speakers and second-year students' project proposal presentations. The seminars are held on Mondays from 1:55-2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public. Below is the schedule of invited speakers and student presentations for Fall 2017:

August 21 - First day of classes; no seminar. View the eclipse instead.

August 28 - Valorie Salimpoor, Baycrest Institute

Music is merely a sequence of sounds, none of which carries independent reward value, yet the sequence as a whole is perceived by the brain as intensely pleasurable. How does a transient, fleeting sequence of sounds bring us to tears or trigger strong physiological reactions like chills? Although music has no clear survival value, it has been a fundamental part of humanity for as far back as history records, into the prehistoric era, and has developed spontaneously in every recorded culture. In this talk I present brain imaging research to show how sophisticated cognitive functions integrate to give rise to musical pleasure and why you cannot find two people in the world with exactly the same taste in music.

September 4 - Labor Day (no seminar)

September 11 - Robert Hatcher, soundcollide

Soundcollide's mission is to disrupt the music creation process by breaking down barriers to collaboration, such as the need for physical proximity. Our music technology platform will predict and enhance the performance of its users by streamlining music creation and production. It accomplishes this by increasing the amount of music artists can create through an immersive collaborative experience in which multiple users work concurrently.

As the application is used, soundcollide's machine learning tools will analyze and compile a history of artists' procedural preferences, enabling artists to discover the most compatible connections for future recording and production collaborations. Our intent is to host the largest number of concurrent users, thus increasing the accuracy and reach of our machine-learning technology.

September 18 - Clint Zeagler, Wearable Computing Center, Georgia Tech

Working on an interdisciplinary wearable technology project team can be challenging because of a lack of shared understanding between fields and limited ability to communicate across disciplines. We describe an interdisciplinary collaborative design process used to create a wearable musical instrument with a musician. Our diverse team used drawing and example artifacts/toolkits to overcome gaps in communication and knowledge. We view this process through the frame of Susan Leigh Star's description of a boundary object, and against a similar process used in another musical/computer science collaboration with the group Duran Duran.

September 25 - Brian Magerko, School of Literature, Media, and Communication, Georgia Tech

In a future that increasingly sees intelligent agents involved in our education, workforce, and homes, it is unclear how human productivity fits into that unfolding landscape. An ideal outcome would draw both on human ingenuity, creativity, and problem-solving (and problem-definition) capabilities and on the affordances of computational systems. This talk will explore the notion of "co-creative" relationships between humans and AI as one place where such a path might be found.

October 2 - Carrie Bruce, School of Interactive Computing, Georgia Tech

When designing for variation in human ability, accessibility is frequently the measure of success. Although accessible design can make it easier for an individual to gain access to spaces, products, and information, the emphasis is often solely on impairment and its mitigation as the basis for design. Thus, we end up with “handicapped parking”, assistive technology, and other specialized designs that can be exclusionary and do not necessarily address an individual’s participation needs – or their engagement that sustains personal identity, supports context-related motivations, and promotes inclusion. While interactive technologies have the potential to enable this type of participation, there is a lack of evidence-based design that demonstrates how this can be accomplished. My work focuses on operationalizing and designing for participation through building an evidence base and encouraging research-driven practice. I will discuss a few projects related to participation, including the Accessible Aquarium Project. I will show how we can perform user-centered, research-driven practice in designing interactive spaces, products, and information based on access and participation needs.

October 9 - Fall Break

October 16 - Frank Hammond, Center for Robotics and Intelligent Machines, Georgia Tech

The field of human augmentation has become an increasingly popular research area as capabilities in human-machine interfacing and robot manufacturing evolve. Novel technologies in wearable sensing and 3D printing have enabled the development of more sophisticated augmentation devices, including teleoperated robotic surgery platforms and powered prostheses. Despite these advances, the efficacy and adoption of human augmentation devices have been limited by several factors, including (1) a lack of continuous control and dexterity in robotic end-effectors, (2) poor motion and force coordination and adaptation between robotic devices and humans, and (3) the absence of rich sensory feedback from the robotic devices to the human user. My research leverages techniques in soft robot fabrication, wearable sensing systems, and non-anthropomorphic design strategies to arrive at human augmentation solutions that address the issues of device form and function from a methodological perspective. In this talk, I will highlight aspects of our powered prosthesis design methodology, including (1) the experimental characterization of human manipulation capabilities, (2) the design of mechanisms and control strategies for improved human-robot cooperation, and (3) new efforts to enable the neurointegration of robotic manipulation devices – a capability that could allow humans to perceive and control powered prostheses and extra-limb robots as if they were parts of their own bodies.

October 23 - Kinuko Masaki, SmartEar 

With the proliferation of voice assistants (e.g., Apple's Siri, Google Now, Amazon's Alexa, Microsoft's Cortana) and "smart" speakers (e.g., Amazon's Echo, Google Home, Apple's HomePod), people are realizing that "voice is the next frontier of computing." Voice allows for efficient, hands-free communication. However, for a voice-first device to truly replace smartphones, a few technological advances have to be made. In particular, we need to 1) pick up the user's voice commands even in loud environments and in the presence of competing speech, 2) understand the user's request even when it is spoken in a natural, conversational way, and 3) respond to the user in a natural, human way. This talk will articulate how advances in artificial intelligence/deep learning, digital signal processing, and acoustics are addressing these issues and helping to make voice computing a reality.

October 30 -  Hantrakul Lamtharn, Zach Kondak

November 6 - Ganesh Somesh, Hongzhao Guan, Agneya Kerure

November 13 - Masataka Goto, AIST, Japan

Music technologies will open up new ways of enjoying music, both in music creation and in music appreciation. In this seminar talk, I will introduce the frontiers of music technologies by showing practical research examples, already made into commercial products or released to the public, to demonstrate how end users can benefit from singing synthesis technologies, music understanding technologies, and music interfaces.

From the viewpoint of music creation, I will demonstrate a singing synthesis system, VocaListener (https://staff.aist.go.jp/t.nakano/VocaListener/), and a robot singer system, VocaWatcher (https://staff.aist.go.jp/t.nakano/VocaWatcher/). I will also introduce the world's first culture in which people actively enjoy songs with synthesized singing voices as the main vocals, a culture that has emerged in Japan since singing synthesis software based on VOCALOID, such as Hatsune Miku, began attracting attention in 2007. Singing synthesis thus breaks down the long-held view that listening to a non-human singing voice is worthless. This is a feat that could not have been imagined before. In the future, other long-held views could also be broken down.

As for music appreciation, I will demonstrate a web service for active music listening, "Songle" (http://songle.jp), that has analyzed more than 1,100,000 songs on music- or video-sharing services and facilitates deeper understanding of music. Songle is used to provide a web-based multimedia development framework, "Songle Widget" (http://widget.songle.jp), that makes it easy to develop web-based applications with rigid music synchronization by leveraging music-understanding technologies. Songle Widget enables users to control computer-graphic animation and physical devices such as lighting devices and robot dancers in synchronization with music available on the web. I will then demonstrate a web service for large-scale music browsing, "Songrium" (http://songrium.jp), that allows users to explore music while seeing and utilizing various relations among more than 780,000 music video clips on video-sharing services. Songrium has a three-dimensional visualization function that shows music-synchronized animation, which has already been used as a background movie in a live concert of Hatsune Miku.

November 20 - Takumi Ogata, Vinod Subramanian, Liu Hanyu

November 27 - Rupak Vignesh, Zichen Wang, Zhao Yan

Past Seminars