EN / Day 1 / 12:00 / Hall 1

Even though we don’t think about it, sound is a very rich source of information. By leveraging the properties of sound with machine learning, we can gain insights about an environment or an activity being performed. Acoustic activity recognition has the potential to create new interactions and better smart systems, and in this talk, we explore how to experiment with this technology in JavaScript, using the Web Audio API and TensorFlow.js.
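
As a rough illustration of the approach the abstract describes, the sketch below captures microphone audio with the Web Audio API, extracts frequency-domain features with an AnalyserNode, and passes them to a TensorFlow.js classifier. The model URL, input shape and label list are hypothetical placeholders, not material from the talk itself.

    import * as tf from '@tensorflow/tfjs';

    // Hypothetical labels and model location, used here for illustration only.
    const LABELS = ['silence', 'speech', 'typing'];
    const MODEL_URL = 'https://example.com/model.json';

    async function listen() {
      const model = await tf.loadLayersModel(MODEL_URL);

      // Capture microphone audio with the Web Audio API.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(stream);

      // Extract frequency-domain features with an AnalyserNode.
      const analyser = audioCtx.createAnalyser();
      analyser.fftSize = 1024;
      source.connect(analyser);

      const bins = new Float32Array(analyser.frequencyBinCount); // 512 values

      setInterval(() => {
        analyser.getFloatFrequencyData(bins);

        // Classify the current frame; assumes the model expects a [1, 512] input.
        const predicted = tf.tidy(() => {
          const input = tf.tensor2d([Array.from(bins)]);
          return model.predict(input).argMax(-1).dataSync()[0];
        });

        console.log('Detected activity:', LABELS[predicted]);
      }, 500);
    }

    listen();

In practice the feature extraction (spectrogram windows, normalisation) and the model architecture would depend on the activities being recognised; this only shows how the two APIs fit together.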

Charlie Gerard

Charlie is a senior frontend developer, Google Developer Expert and Mozilla Tech Speaker. She’s passionate about human-computer interaction and spends her personal time building interactive side projects using creative coding, machine learning and hardware. She also loves giving back to the community by making all her prototypes open-source, mentoring, blogging and speaking at conferences.