Yobe Revolutionizes Voice Tracking / Noise Management Technology

Issue 16 - Voice and Acoustics Revolution

The content and distribution of Azafran’s INSIGHTS newsletter is focused on our LP, incubator, research, investment, and partner ecosystem. As we look to build a two-way dialogue benefiting our collective efforts, each month we highlight important news and our approach to the emerging intersection of deep technology with end-to-end solutions and platforms driven by voice, acoustics/sensory data, and imagery.

This article is one section of an entire issue of INSIGHTS. Please sign up to receive access to past and new issues as they are published.

Subscribe to our newsletter

One common denominator and challenge for all voice applications and technologies on the market is that conversations and voice commands happen in the real world, not in a sealed chamber. From background noise to changing environments, this dynamic is affectionately referred to as the Cocktail Party Problem. As demonstrated in the chart to the right, Yobe solves this problem via a four-step process: 1) Noise Management; 2) Surveillance; 3) Command Recognition; and 4) Biometric Authentication. Dive deeper on the following pages to learn how the team at Yobe is changing how we think about voice and acoustics while providing groundbreaking technology to help speed the revolution underway.
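
For readers who think in code, the four-step process above can be sketched as a simple pipeline. Every function name and body below is a hypothetical placeholder for illustration only; Yobe's actual algorithms are proprietary and not shown here.

```python
# Hypothetical sketch of the four-step process described above.
# Stage names follow the article; the implementations are placeholders.

def noise_management(signal):
    """Step 1: suppress background noise (placeholder pass-through)."""
    return signal

def surveillance(signal):
    """Step 2: monitor the auditory scene for a voice of interest (placeholder)."""
    return signal

def command_recognition(signal):
    """Step 3: map the cleaned signal to a voice command (placeholder)."""
    return "unknown"

def biometric_authentication(signal, enrolled_profile):
    """Step 4: verify the speaker against an enrolled voiceprint (placeholder)."""
    return False

def process(signal, enrolled_profile):
    """Run the four stages in order and return (command, authorized)."""
    cleaned = noise_management(signal)
    tracked = surveillance(cleaned)
    command = command_recognition(tracked)
    authorized = biometric_authentication(tracked, enrolled_profile)
    return command, authorized
```

The point of the sketch is the ordering: noise is managed before the scene is monitored, and recognition and authentication both operate on the tracked voice rather than the raw input.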

Yobe has developed cognitive software that mirrors the human ability to identify, extract, and track a voice of interest

Voice technologies continue to be plagued by issues stemming from background noise, including other voices (i.e., errors in voice commands) and access/security problems with voice ID. Yobe’s innovative, patented approach to the ‘Cocktail Party Problem’ is dramatically more effective than comparable technologies and has been recognized by government agencies and commercial partners, with contracts pending. Yobe uses AI to detect unique voice biometrics within a noisy intake signal, then identifies and tracks individual voices.
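
As a generic illustration of the identification step, speaker-ID systems commonly compare a voice embedding against enrolled voiceprints by cosine similarity. The embeddings below are toy vectors and the `identify` helper is hypothetical; how Yobe derives its biometrics from audio is proprietary and not shown.

```python
# Generic speaker identification by cosine similarity over voice embeddings.
# Toy vectors stand in for real embeddings extracted from audio.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(embedding, enrolled, threshold=0.8):
    """Return the best-matching enrolled speaker, or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, ref in enrolled.items():
        score = cosine_similarity(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

A real system would run this comparison continuously on the tracked voice, which is what allows both identification and ongoing tracking of a specific speaker in a noisy scene.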

In addition, the signal of interest can be enhanced for a better listening experience or speaker identification, or used as a front-end processor to significantly improve automatic speech recognition (ASR) performance. To date, no other real-time technology manages all of these capabilities in a single on-the-edge (on-device, non-internet-dependent) software solution that is agnostic to microphone inputs and form factor. Yobe is in a unique position to create market disruption as a catalyst for voice technology convergence across multiple industries.
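
To make the "front-end processor" idea concrete, here is a minimal spectral-gating sketch in NumPy. Spectral subtraction is a standard textbook enhancement technique, shown only to illustrate what a front end does before ASR; it is not Yobe's patented method, and the `oversubtract` parameter and 10% spectral floor are illustrative choices.

```python
# Minimal spectral-gating front end: attenuate frequency bins dominated
# by a noise estimate before passing audio to a downstream ASR engine.
# This is a generic textbook technique, not Yobe's proprietary approach.
import numpy as np

def spectral_gate(frame, noise_mag, oversubtract=1.5):
    """Subtract a (scaled) noise magnitude estimate from one audio frame.

    frame     -- 1-D array of time-domain samples
    noise_mag -- magnitude spectrum of the noise, estimated from a silent frame
    """
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    # Oversubtract the noise estimate, but keep a 10% spectral floor
    # to avoid the "musical noise" artifacts of hard zeroing.
    cleaned = np.maximum(mag - oversubtract * noise_mag, 0.1 * mag)
    return np.fft.irfft(cleaned * np.exp(1j * phase), n=len(frame))
```

In practice the noise spectrum would be estimated during pauses in speech and the gate applied frame by frame, so the ASR engine only ever sees the enhanced signal.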

How Yobe’s Solution is Different:

  • operates 100% on the edge,
  • operates with no a priori information about the auditory scene,
  • manages sound sources in the near field,
  • does not need hundreds of thousands of hours of speech; no modeling data necessary,
  • does not need to attenuate media playback to be effective (ACE),
  • AI adapts to dynamically changing auditory environments, including target-voice and device movement,
  • separates and tracks the voice of interest from other voices without introducing unnatural processing artifacts.

© Copyright 2017 - 2020, Azafran Capital Partners, Inc.