

Berenice Baker: Are you finding a lot more customer engagement during lockdown, now that people are on Zoom all the time?

Simon Lau: The pandemic has forced everybody to adopt technology much quicker, so it's very timely that we provided our integration with Zoom. Whether it's for deaf or hard-of-hearing students, accessibility for any participants who have accommodation needs, students who want to take notes, or meetings and interviews where you don't have to worry about taking a full transcription of the entire meeting, Otter is right there. If you hear anything interesting, you can just click the highlight button to highlight that last sentence or two, so you can come back to it more easily, review it, and play back just those important bits of your conversation.

The engine behind Otter is called Ambient Voice Intelligence. We built the speech-to-text technology that's at the very core, turning English spoken audio or video into written English. So whether you call it speech-to-text or ASR, that's the core functionality of our speech engine. That's one component of our Ambient Voice Intelligence, but on top of that we also have speaker diarisation and speaker identification. Speaker diarisation is nothing more than separating speakers into speaker one and speaker two. So, without knowing Berenice's and Simon's voices, for example, the transcript would just say speaker one, speaker two. Currently, we are able to do that after the fact, meaning that in real time you don't see the labels yet, at this stage.
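
Purely as an illustration of the idea Simon describes, and not Otter's actual pipeline, the sketch below shows what after-the-fact speaker diarisation looks like in principle: each transcript segment gets a fixed-length voice embedding, the embeddings are clustered, and the cluster index becomes an anonymous "Speaker 1" / "Speaker 2" label. The segments and embeddings here are synthetic and purely hypothetical; a real system would derive embeddings from the audio with a neural speaker encoder.

```python
# Toy offline speaker diarisation: cluster per-segment voice embeddings,
# then label segments with the resulting anonymous speaker index.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical transcript segments with synthetic 8-dim voice embeddings.
# Segments 0 and 2 simulate one voice, segments 1 and 3 another.
segments = [
    ("Are you finding more customer engagement during lockdown?", rng.normal(0.0, 0.1, 8)),
    ("The pandemic has forced everybody to adopt technology quicker.", rng.normal(1.0, 0.1, 8)),
    ("And is the Zoom integration part of that?", rng.normal(0.0, 0.1, 8)),
    ("Yes, it was very timely that we provided it.", rng.normal(1.0, 0.1, 8)),
]

embeddings = np.stack([emb for _, emb in segments])

# Cluster the embeddings into two anonymous speakers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Without knowing whose voice is whose, the transcript can only say
# "Speaker 1" / "Speaker 2" -- the after-the-fact labelling described
# above. Speaker identification would then map these labels to names.
for (text, _), label in zip(segments, labels):
    print(f"Speaker {label + 1}: {text}")
```

Because the clustering runs over the whole conversation at once, the labels only appear once the recording is processed, which matches the "after the fact, not yet in real time" behaviour described in the interview.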
