SemIsrael Blogs
Moshe Sheier

Director of Strategic Marketing, CEVA

In the past few years, automatic speech recognition (ASR) has become common practice, with billions of voice-enabled products and services. A wide variety of ASR technologies exists, each suited to different use cases. Undeniably though, the holy grail of ASR is natural language processing (NLP), which lets users speak freely, as if they were talking to another person. A simple example: you can say "Set a reminder for 9 AM the day after tomorrow" to any of the leading virtual assistants, such as Alexa, Google Assistant, Siri, or Cortana, and they will understand the intent. There is no specific order or magic word that you have to say. You could also say "remind me on Wednesday at 9 in the morning" or "set a reminder on May 16th at 9 AM" and get the same result. The bottom line in NLP is extracting the meaning, regardless of the phrasing.
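To make the idea concrete, here is a toy sketch of intent extraction: several different phrasings map to one structured intent with a time slot. This is only an illustration of the concept; real assistants use trained statistical models, not hand-written rules, and the function and field names here are invented for the example.

```python
import re

def parse_intent(utterance: str) -> dict:
    """Map free-form reminder requests to a structured intent.

    Different surface phrasings ("set a reminder...", "remind me...")
    all resolve to the same intent, which is the essence of NLP-style
    understanding described above.
    """
    text = utterance.lower()
    if re.search(r"\b(set a reminder|remind me)\b", text):
        # Capture whatever time expression follows a preposition, if any.
        m = re.search(r"\b(?:for|on|at)\b (.+)", text)
        return {"intent": "set_reminder",
                "when": m.group(1) if m else None}
    return {"intent": "unknown", "when": None}

# Both phrasings resolve to the same intent:
parse_intent("Set a reminder for 9 AM the day after tomorrow")
parse_intent("Remind me on Wednesday at 9 in the morning")
```

A production engine would normalize the captured time expression ("the day after tomorrow") into an absolute timestamp, but the key point is the same: meaning is extracted regardless of phrasing.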

Read the full article on Embedded Computing Design.

Just a few years ago, speech recognition was a huge challenge, and one of the most coveted hands-free user interfaces. I can easily recall how the first versions of Siri misheard what I said, or repeatedly responded "Sorry, I didn't quite get that". Today, significant advances in machine learning, spurred by cheaper and more efficient processing power, have made speech recognition so ubiquitous that it's practically taken for granted. In a recent keynote, Google Senior Fellow Jeff Dean claimed that neural networks reduced word errors by 30% when applied to speech recognition. Alongside direct improvements in speech recognition, noise cancellation and speech enhancement have also benefited significantly from neural networks. An excellent example of this is Cypher's technology, which isolates voice using deep neural networks. ASR-targeted noise cancellation improves the raw data fed to the speech recognition engine, making the task more likely to succeed. These factors have led to the current state, in which speech recognition is a reliable, useful interface on many devices.

Read the full blog here

Here is a situation that I found myself in, and I'm sure other smart-home users can relate to it. A couple of weeks ago, while on a business trip in China, my smart doorbell (such as Ring) rang. I picked up my smartphone to see who was at the door. To my surprise, it was my daughter standing outside our house. Although it was daytime in my location, she was standing in the dark, back at home. I turned on the intercom to ask her why she wasn't inside, and she explained that she had misplaced her key. Now, I have a smart doorbell, a smart lock, and a smart lighting system. But instead of simply giving a command to unlock the door and turn on the light, I had to exit the doorbell app and open the smart lock app, and only then could my daughter safely enter the house. Instead of having full control of my smart home at my fingertips, I have an assortment of different apps with different interfaces that I need to manage separately. I don't think that this is the best we can get out of the Internet of Things (IoT).

Read the full blog here

Today CEVA announced its new DSP-based IoT development platform, enabling developers to prototype IoT designs in which the device needs to be both smart AND connected. The new platform combines a host of sensing, processing, and connectivity technologies that mobile, wearable, and smart home SoC designs require.

Read the full blog here

Sensor fusion allows smartphones, tablets, and Internet of Things (IoT) devices to combine sensor signals from various sources and provide accurate, complete, and dependable context-aware data for applications like indoor navigation, activity monitoring, and user apps in general. Context-aware data can be anything from the user's physical state (walking, sitting, climbing) to environmental context, such as being indoors or outdoors, watching TV, or at a party.
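One classic way to fuse signals from two sensors is a complementary filter, which blends a gyroscope's drift-prone but responsive angle estimate with an accelerometer's noisy but drift-free tilt reading. The sketch below is a minimal illustration of that idea; the constants and sensor values are made up for the example, and real fusion stacks use more sophisticated filters (e.g. Kalman filters).

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend gyroscope and accelerometer readings into one tilt estimate.

    angle      -- previous tilt estimate, in degrees
    gyro_rate  -- angular rate from the gyroscope, degrees/second
    accel_x/z  -- accelerometer components used to derive tilt from gravity
    dt         -- time step in seconds
    alpha      -- blend weight: mostly trust the integrated gyro short-term,
                  let the accelerometer correct long-term drift
    """
    gyro_angle = angle + gyro_rate * dt                        # integrate rate
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))   # gravity tilt
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# Example: device held near 10 degrees of tilt while the gyro drifts slightly.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_x=0.17, accel_z=0.98, dt=0.01)
```

Over the loop, the estimate converges toward the accelerometer's tilt reading rather than accumulating the gyroscope's drift, which is the essential benefit of combining the two sources.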

Smartphone and tablet manufacturers have mostly been implementing data fusion by creating or licensing sensor fusion software and running it on the device’s main ARM-based application processor.

Read the full blog post here