EN / Day 5 / 17:15 / Track 1

Even though we rarely think about it, sound is a very rich source of information. By leveraging the properties of sound with machine learning, we can gain insights about an environment or the activity being performed in it. Acoustic activity recognition has the potential to create new interactions and better smart systems, and in this talk, we explore how to experiment with this technology in JavaScript, using the Web Audio API and TensorFlow.js.
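To give a flavor of the kind of preprocessing such a system involves, here is a minimal, hypothetical sketch of frame-level energy detection on raw audio samples — the sort of simple feature one might compute on data captured via the Web Audio API (e.g. from an `AnalyserNode`) before feeding a TensorFlow.js classifier. The function names and threshold are illustrative assumptions, not taken from the talk.

```javascript
// Compute the RMS energy of fixed-size frames of an audio buffer.
// `samples` is a Float32Array of PCM samples in [-1, 1], as the
// Web Audio API would provide; `frameSize` is the frame length in samples.
function frameEnergies(samples, frameSize) {
  const energies = [];
  for (let start = 0; start + frameSize <= samples.length; start += frameSize) {
    let sum = 0;
    for (let i = start; i < start + frameSize; i++) {
      sum += samples[i] * samples[i];
    }
    energies.push(Math.sqrt(sum / frameSize)); // RMS of this frame
  }
  return energies;
}

// Mark a frame as "active" when its RMS energy exceeds a threshold —
// a crude stand-in for the learned classifier a real recognizer would use.
function activeFrames(samples, frameSize, threshold) {
  return frameEnergies(samples, frameSize).map((e) => e > threshold);
}
```

In practice, an acoustic activity recognizer would replace the fixed threshold with a trained TensorFlow.js model operating on richer features such as spectrograms, but the capture-then-classify pipeline has this same shape.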
Charlie Gerard

Charlie is a senior frontend developer, Google Developer Expert and Mozilla Tech Speaker. She's passionate about human-computer interaction and spends her personal time building interactive side projects using creative coding, machine learning and hardware. She also loves giving back to the community by making all her prototypes open source, mentoring, blogging and speaking at conferences.

Invited experts

Illya Klymov

15 years of JS everywhere: from microcontrollers to rendering video in the cloud. More than 6 years of educational experience (at two universities and in Illya's own courses), Ph.D. in Computer Science (fields of interest: System Analysis and the Theory of Optimal Decisions). Now works as a front-end developer at GitLab.