EarSketch is free and browser-based. It is used widely in computing and music technology classrooms, from elementary school through college, in all 50 states and in over 100 countries.
To learn more about EarSketch and use it yourself, click here.
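In EarSketch, students write Python (or JavaScript) scripts that place audio clips on a multitrack timeline. The minimal sketch below shows the general shape of such a script; the sound constant name is a placeholder for illustration, not an actual entry in the EarSketch sound library.

```python
# A minimal EarSketch-style script (Python mode).
# The sound constant below is a placeholder; real scripts use
# constants from the EarSketch sound library.
from earsketch import *

init()          # start a new project
setTempo(120)   # set the project tempo in beats per minute

# Place a clip on track 1 from measure 1 to measure 5.
fitMedia(HOUSE_MAIN_BEAT_001, 1, 1, 5)  # placeholder sound constant

finish()        # render the project
```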
- Magerko, B., Freeman, J., McKlin, T., Reilly, M., Livingston, E., McCoid, S., and Crews-Brown, A. (2016). “EarSketch: Thick Authenticity in a STEAM-based Approach for Underrepresented Populations in High School Computer Science Education,” in ACM Transactions on Computing Education (accepted and in press).
- Freeman, J. and Magerko, B. (2016). “Iterative Composition, Coding, and Pedagogy: A Case Study in Live Coding With EarSketch,” in Journal of Music, Technology and Education, Intellect, 9:1, 57-74.
- Im, T., Freeman, J., Magerko, B., and Siva, S. (2016). “Using Music to Enhance Learning Outcomes for Non-Majors in an Introductory Programming Course,” in Proceedings of Envisioning the Future of Undergraduate STEM Education (EnFUSE 2016), Washington, DC.
- Freeman, J., Magerko, B., Edwards, D., Miller, M., Moore, R., and Xambó, A. (2016). “Using EarSketch to Broaden Participation in Computing and Music,” in Proceedings of Sound and Music Computing (SMC 2016), Hamburg, Germany.
- Moore, R., Edwards, D., Freeman, J., Magerko, B., McKlin, T., and Xambó, A. (2016). “EarSketch: An Authentic, STEAM-based Approach to Computing Education,” in Proceedings of the 2016 American Society for Engineering Education Annual Conference & Expo, New Orleans, Louisiana.
- Helms, M., Moore, R., Edwards, D., and Freeman, J. (2016). “STEAM-Based Interventions: Why Student Engagement is Only Part of the Story,” in IEEE Research on Equity and Sustained Participation in Engineering, Computing, and Technology (RESPECT 2016), Atlanta, Georgia.
- Xambó, A., Freeman, J., Magerko, B., and Shah, P. (2016). “Challenges and New Directions for Collaborative Live Coding in the Classroom,” in Proceedings of the 2016 International Conference on Live Interfaces (ICLI 2016), Sussex, England.
- Mahadevan, A., Freeman, J., and Magerko, B. (2016). “An interactive, graphical coding environment for EarSketch online using Blockly and Web Audio API,” in Proceedings of the 2016 Web Audio Conference, Atlanta, Georgia.
In Shadows, the pianist reads an open-form score from a laptop screen, choosing his own path through a series of connected musical fragments. At the same time, the laptop listens to the pianist, tracks the decisions he makes about what to play, and constantly updates the score in response. This dialogue between pianist and computer, actuated through a dynamic score, serves to amplify the expressive decisions made by the pianist, to subtly push him in new musical directions, and to create large-scale structural arcs in the music.
Shadows consists of four movements, each of which explores the pianist-computer-score interaction from a different perspective:
I. Traces. The score consists of 12 chords followed by their echoes. The speed at which the pianist moves from chord to chord affects how much of the score is displayed and how much is hidden.
II. Chorale. The pianist plays from a selection of five chords and three embellishment notes. Each time a chord or note is played, its harmonic density and complexity change.
III. Perpetual Quiet. The pianist builds arpeggios from a constantly changing set of pitches.
IV. Perpetual Melody. The pianist chooses from a combination of rhythmically driven, short melodic motives and chords. Connections between fragments are added and removed based on the amount each fragment is being played.
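The piece itself is implemented with Antescofo and INScore, and that implementation is not reproduced here, but the mechanism of the fourth movement can be sketched in a few lines: each fragment keeps a play count, and connections to frequently played fragments are removed while connections to neglected fragments are restored. The fragment names and thresholds below are illustrative assumptions.

```python
# Illustrative sketch (not the piece's Antescofo/INScore implementation):
# an open-form score as a graph whose connections change with play counts.
from collections import defaultdict

class FragmentGraph:
    def __init__(self, edges, prune_after=4, restore_below=2):
        # edges: dict mapping each fragment to the fragments it may lead to
        self.edges = {frag: set(nexts) for frag, nexts in edges.items()}
        self.all_edges = {frag: set(nexts) for frag, nexts in edges.items()}
        self.play_counts = defaultdict(int)
        self.prune_after = prune_after      # hide overplayed fragments
        self.restore_below = restore_below  # re-expose neglected fragments

    def register_play(self, fragment):
        """Called when the score follower hears a fragment being played."""
        self.play_counts[fragment] += 1
        for frag, possible in self.all_edges.items():
            for target in possible:
                if self.play_counts[target] >= self.prune_after:
                    self.edges[frag].discard(target)   # remove connection
                elif self.play_counts[target] <= self.restore_below:
                    self.edges[frag].add(target)       # restore connection

    def choices(self, fragment):
        """Fragments currently offered to the pianist after `fragment`."""
        return sorted(self.edges[fragment])

# Example: three short motives, all initially connected.
graph = FragmentGraph({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}})
graph.register_play("B")
print(graph.choices("A"))  # the options shift as "B" accumulates plays
```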
Jason Freeman wrote Shadows for pianist Melvin Chen, during an artistic research residency at IRCAM in Paris. Many thanks to Arshia Cont and Jean-Louis Giavitto from IRCAM and to Dominique Fober from GRAME for collaborating with Jason to extend their Antescofo and INScore software, respectively, for use in this piece.
- Tsuchiya, T., Freeman, J., and Lerner, L. (2016). “Data-Driven Live Coding with DataToMusic API,” in Proceedings of the 2016 Web Audio Conference, Atlanta, Georgia.
- Winters, M., Tsuchiya, T., Lerner, L., and Freeman, J. (2016). “Multi-Modal Web-Based Dashboards for Geo-Located Real-Time Monitoring,” in Proceedings of the 2016 Web Audio Conference, Atlanta, Georgia.
- Tsuchiya, T., Freeman, J., and Lerner, L. (2015). “Data-to-Music API: Real-time Data-Agnostic Sonification with Musical Structure Models,” in Proceedings of the International Conference on Auditory Display, Graz, Austria.
TuneTable is a responsive tabletop application with a tangible user interface, designed to teach basic computer programming concepts to middle school and high school students (ages 9-16) using physical blocks that act as snippets of code. TuneTable introduces computational concepts such as functions, parameters, and nested loops. Users compose short songs by building chains of blocks that represent code. Each block carries a unique pattern on its underside; when a block is placed on the table's acrylic surface, cameras mounted beneath the surface identify it. Once the arrangement of blocks is recognized, the application responds with musical and visual feedback.
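TuneTable's actual block vocabulary and recognition pipeline are not documented here, but the core idea of turning a recognized chain of blocks into music can be sketched as follows. The block types, note names, and repetition rule are illustrative assumptions, not the project's real implementation.

```python
# Illustrative sketch of interpreting a chain of tangible blocks as music.
# Block types and note values are hypothetical, not TuneTable's vocabulary.

def interpret(chain):
    """Expand a chain of (block_type, value) tuples into a list of notes."""
    notes = []
    i = 0
    while i < len(chain):
        kind, value = chain[i]
        if kind == "note":
            notes.append(value)               # a single pitch block
            i += 1
        elif kind == "loop":
            # A loop block repeats the block that follows it `value` times,
            # one way repetition concepts could be made tangible.
            body = interpret([chain[i + 1]])
            notes.extend(body * value)
            i += 2
        else:
            i += 1                            # ignore unrecognized blocks
    return notes

# A chain recognized on the table: play C4, repeat E4 three times, then G4.
chain = [("note", "C4"), ("loop", 3), ("note", "E4"), ("note", "G4")]
print(interpret(chain))  # ['C4', 'E4', 'E4', 'E4', 'G4']
```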
To view a video of the project, click here.