The Georgia Tech Center for Music Technology Seminar Series features both invited speakers and student project presentations. The seminars are held on Mondays from 1:55 - 2:45 p.m. in the West Village Dining Commons, Room 175, on Georgia Tech's campus and are open to the public.
Fall 2020 Seminars
Title: Neutrality and Fairness in Music Recommendation: A Matter of Digital Humanism
Abstract: Music recommenders have become a commodity for music listeners. In practice, music recommendation is a multi-faceted task serving multiple stakeholders: besides the music listener and the publishers of the music, the service itself as well as other branches of the music industry are affected. In this talk, I will discuss the multiple aspects and stakeholders present in the process of music recommendation. I will further discuss possible impacts on academic research in this area, foremost regarding questions of fairness, neutrality, and potential biases in datasets, and illustrate aspects that should be taken into consideration when developing music recommender systems. Finally, I will link these discussions to the ongoing initiative of digital humanism, which deals with the complex relationship between humans and machines.
Bio: Peter Knees is an Assistant Professor at the Faculty of Informatics, TU Wien, Austria. He holds a Master's degree in Computer Science from TU Wien and a PhD in the same field from Johannes Kepler University Linz, Austria. For over 15 years, he has been an active member of the Music Information Retrieval research community, reaching out to the related fields of multimedia and text information retrieval, recommender systems, and the digital arts. His research activities center on music search engines and interfaces as well as music recommender systems and, more recently, on smart(er) tools for music creation. He is one of the proponents of the Digital Humanism initiative of the Faculty of Informatics at TU Wien.
14-Sep: Psyche Loui
Title: Music as a Window into Emotion and Creativity
Abstract: I will describe recent efforts in my lab in which we identify the brain networks that enable strong emotional responses to music, and observe the effects of training in musical improvisation on brain and cognitive structure and function.
Bio: The neuroscience of music cognition, musical perception, pitch problems, singing, tone-deafness, music disorders, and the emotional impact of music and the voice comprise much of Psyche Loui's research. What happens in the brain when we create music? What gives some people a chill when they are moved by music? Can music be used to help with psychiatric and neurological disorders? These are questions that Loui tackles in the lab. Director of the MIND Lab (Music, Imaging and Neural Dynamics) at Northeastern University, Loui has published in the journals Current Biology, Journal of Neuroscience, Journal of Cognitive Neuroscience, NeuroImage, Frontiers in Psychology, Current Neurology and Neuroscience Reports, Music Perception, Annals of the New York Academy of Sciences, and others. For her research on music and the brain, Loui has been interviewed by the Associated Press, CNN, WNYC, the Boston Globe, BBC Radio 4, NBC News, CBS Radio, and The Scientist magazine. Loui received her PhD in Psychology (Specialization: Cognition, Brain and Behavior) from the University of California, Berkeley, and attended Duke University as an undergraduate, graduating with degrees in Psychology and Music and a certificate in Neuroscience. She has since held faculty positions in Psychology, Neuroscience, and Integrative Sciences at Wesleyan University, and in Neurology at the Beth Israel Deaconess Medical Center and Harvard Medical School.
21-Sep: Amy Belfi
Title: Investigating the timecourse of aesthetic judgments of music
Abstract: When listening to a piece of music, we typically make an aesthetic judgment of it – for example, within a few seconds of hearing a new piece, you may determine whether you like or dislike it. In the present talk, I will discuss several lines of work focusing on when and how listeners make aesthetic judgments of music, and which musical and contextual factors contribute to these judgments. First, I will discuss work indicating that listeners can make accurate and stable aesthetic judgments in as little as several hundred milliseconds. Next, I will discuss work suggesting that the emotional valence of a piece of music contributes strongly to its aesthetic appeal. Finally, I will focus on ongoing work investigating differences in listener judgments of live versus recorded music.
Bio: Amy Belfi is an Assistant Professor in the Department of Psychological Science at Missouri S&T. She received her B.A. in Psychology from St. Olaf College, her Ph.D. in Neuroscience from the University of Iowa, and completed postdoctoral training at New York University. Her work covers a broad range of topics relating to music perception and cognition, including music and autobiographical memory, aesthetic judgments of music, and musical anhedonia.
28-Sep: David Sears
Title: Expectations for tonal harmony: Does order matter? Mingling corpus methods with behavioral experiments
Abstract: An extensive body of research has repeatedly demonstrated that a tonal context (e.g., I-IV-V in C major) primes listeners to expect a tonally related target chord (I in C major). Tillmann and Bigand (2001) have shown, however, that scrambling the order of chords in the context does not reduce the speed or accuracy of processing. Given recent claims emerging from corpus studies of tonal harmony that temporal order is a fundamental organizing principle in many musical styles, this talk will address whether listeners exploit this principle to generate predictions about what might happen next. To that end, I will present the results of behavioral studies that replicate Tillmann and Bigand's experimental design but train a probabilistic model on a large corpus of chord annotations to select the scrambled conditions. Our findings contradict those from Tillmann and Bigand's study, suggesting listeners may internalize the temporal dependencies between chords in the tonal system.
Bio: David Sears is Assistant Professor in Interdisciplinary Arts at Texas Tech University. He directs the Performing Arts Research Lab (PeARL) with Dr. Peter Martens. His research interests include music perception and cognition, computational approaches to music theory and analysis, emotion and psychophysiology, and sensorimotor synchronization.
05-Oct: ISMIR paper presentations
12-Oct: Chris White
Abstract: Meter is a phenomenon of patterns. In general, music theorists imagine meter as arising from a series of consistently paced accents, as involving a listener who expects that pacing to continue into the future, and as grouping adjacent pulses to form a hierarchy of stronger and weaker pulses. Relying on these patterns, the computational approach of autocorrelation has been used by researchers and audio engineers to identify the meter of musical passages. This technique finds periodicities with which similar events tend to occur. For instance, the approach would consider a piece in which similar events tend to recur at intervals of the whole note, half note, and quarter note as being in 4/4 meter. My talk will outline how to implement this computational task on symbolic musical data, and then discuss certain parameters that can be adjusted depending on the engineering goals (e.g., tracking patterns of loudness versus patterns of harmonic change). I end by noting that this approach also requires an a priori definition of "accent" in order to discern relatively strong pulses from relatively weak ones, something that has provocative connections to how musical learners understand and internalize musical meter.
Bio: Chris White is Assistant Professor of Music Theory at the University of Massachusetts Amherst, having received degrees from Yale, Queens College–CUNY, and Oberlin College Conservatory of Music. His articles have appeared in many venues including Music Perception, Music Theory Online, and Music Theory Spectrum. His research investigates algorithmic and linguistic theories of music by presenting computational models of musical style, function, meter, and communication. Chris' work has also focused on geometrically modeling early 20th-century musics, especially the music of Alexander Scriabin. Additionally, Chris is an avid organist, having studied with Haskell Thompson and James David Christie. As a member of the Three Penny Chorus and Orchestra, he has appeared on NBC's Today Show and as a quarterfinalist on America's Got Talent.
19-Oct: Student presentations - Pranav, Lisa
26-Oct: Student presentations - Virgil, Tejas
02-Nov: Student presentations - Tianxue, Yiting
09-Nov: Student presentations - Sandeep, Lauren, Yihao
16-Nov: Student presentations - Yilin, Daniel, Rishi
23-Nov: Student presentations - Sophia, Laney