SemIsrael Blogs
Yair Siegel

Director of Product Marketing, Imaging & Vision, CEVA.

As demand for artificial intelligence (AI) increases, chip makers strive to create more powerful and more efficient processors. The goal is to accommodate the requirements of neural networks with better and cheaper solutions, while staying flexible enough to handle evolving algorithms. At Hot Chips 2017, many new deep learning and AI technologies were unveiled, showing the different approaches of leading tech firms as well as budding startups. Check out this EETimes survey of Hot Chips for a good summary of the event, focused on chips for AI data centers.

Read the full blog here

With more than 1,200 attendees and over 90 presenters, the 2017 Embedded Vision Summit made one thing clear: I am not the only one who thinks that energy-efficient processors and simple-to-use software toolkits to exploit the available horsepower are critical for embedded vision. Jeff Bier made a compelling argument that the cost and power consumption of vision computing will decrease by about 1,000x over the next three years. He pointed to techniques like reducing data-type precision and using software tools and frameworks to achieve significant improvements in resource usage and efficiency.
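
To make the data-type point concrete, here is a minimal sketch of symmetric 8-bit quantization (my own illustration, not something presented at the Summit). Storing weights as int8 instead of float32 cuts memory traffic by 4x, and integer multiply-accumulates are far cheaper in silicon than floating-point ones:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric linear quantization of float32 values to int8.

    Returns the quantized tensor plus the scale needed to recover
    approximate float values (dequantization).
    """
    scale = np.max(np.abs(w)) / 127.0              # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# int8 storage is 4x smaller than float32 -- the kind of resource
# saving the keynote alluded to, at the cost of a small rounding error.
w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
print("max abs rounding error:", np.max(np.abs(w - dequantize(q, s))))
```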

Immediately following the announcement of the iPhone 7 Plus, Apple's first dual-camera phone, commentators (like this post from Wired and this post from EETimes) declared that dual cameras are the new norm for smartphone photography. That's how much sway Apple has. As I speculated in my August 2015 post, this feature had been anticipated ever since Apple acquired a company specializing in multi-camera technology. It's not that Apple was first (HTC, LG and Huawei, for example, have had dual-camera phones out for some time now), but Apple is the bellwether in this domain, and the one that turns nice-to-have features into mainstream standards.

Read the full blog here

A few months ago, I wrote a post about Augmented Reality (AR) and Virtual Reality (VR) devices showcased at MWC in Barcelona. While there was a lot of hype around AR and VR at many booths, I felt that something was still missing before this technology could really take off. The main drawbacks I saw were dependence on external processing power and short battery life, both of which result in tethered devices. I was curious to see what breakthrough would bring this technology to the mass market and open up its many potential uses to the general public. Over the past few weeks, I suddenly found myself doing a lot of walking and driving around with my kids. You guessed it: I am referring to Pokémon Go. This successful game, which allows users to capture, train, and battle virtual creatures in real-life surroundings, underscores the points above.

Read the full blog here

The tenth annual Google I/O conference was all about artificial intelligence (AI). Advances in voice recognition, image recognition, translation, and contextual conversational assistance were among the main highlights of the keynote. Google CEO Sundar Pichai stated that “thanks to profound advances in machine learning and AI… we are poised to take a big leap forward in the next ten years”. Judging solely by the sheer number of times he used the phrase “machine learning”, you could tell exactly where his focus lies.

Read the full blog here

One of the must-have, audience-grabbing attractions at many MWC booths this year was AR/VR devices. A slew of AR/VR products were announced and showcased in Barcelona: the Samsung Gear VR, HTC Vive, LG 360 VR, Epson Moverio BT-300, Vuzix and others. A clear indicator that this is not just hype around the current “cool factor”, and that there are real use cases, is McDonald’s newly issued Happy Meal, whose box folds into a VR viewer.

Read the full blog here

Today is an important milestone for CEVA’s Imaging & Vision product line, as we are announcing a unique software framework for deep learning called CDNN (which stands for CEVA Deep Neural Network). The main idea behind this software framework is to enable easy migration of pre-trained deep learning networks into real-time embedded devices, so they can run efficiently and at low power on the CEVA-XM4 Vision DSP. These technologies enable a variety of object recognition and scene recognition algorithms that could be used in the future for applications such as automotive advanced driver assistance systems (ADAS), artificial intelligence (AI), video analytics, augmented reality (AR) and virtual reality (VR).
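
CDNN's actual toolchain and API are not shown in this announcement. Purely as an illustration of what running one layer of an offline-converted, pre-trained network on an embedded target involves, here is a hypothetical sketch (the names and flow are mine, not CEVA's): the heavy lifting is done with integer multiply-accumulates, the style of arithmetic a vision DSP executes efficiently.

```python
import numpy as np

def int8_fully_connected(x_q, w_q, x_scale, w_scale):
    """Hypothetical embedded inference step: a fully connected layer
    computed with int8 inputs/weights and 32-bit integer accumulation."""
    acc = w_q.astype(np.int32) @ x_q.astype(np.int32)    # integer MACs
    return acc.astype(np.float32) * (x_scale * w_scale)  # rescale to real units

# Toy stand-ins for one layer of a pre-trained, offline-quantized network.
rng = np.random.default_rng(0)
w_q = rng.integers(-127, 128, size=(10, 64), dtype=np.int8)
x_q = rng.integers(-127, 128, size=64, dtype=np.int8)
y = int8_fully_connected(x_q, w_q, x_scale=0.05, w_scale=0.02)
print(y.shape)  # (10,) float outputs
```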

Read the full blog here

The trade media is abuzz with speculation that the iPhone 7 will incorporate 3D vision using dual rear-facing cameras, adding depth-sensing capability for mapping out 3D environments and tracking body movements and facial expressions. The basis of this speculation is the multiple 3D technology acquisitions Apple has made over the past couple of years.

In April 2015, Apple snapped up multi-sensor camera technology firm LinX Imaging for an estimated $20 million. LinX employed small cameras with multiple sensors that capture several images with the same push of the button and blend them into a single image. Because the sensors capture the same scene from slightly different angles, depth information can be generated. The end result for the user is the ability to refocus the image on different areas or objects in the picture after the shot is taken. Given enough processing power, this technology could even work well on video.
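
The geometry behind multi-sensor depth is standard stereo triangulation: two sensors a known baseline apart see the same point at slightly different image positions, and that shift (the disparity) converts directly into depth. A minimal sketch with illustrative numbers, not LinX's actual optics:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Stereo triangulation: Z = f * B / d.

    Two sensors a baseline B apart see the same point shifted by the
    disparity d in the image; larger disparity means a closer point.
    """
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: 1000 px focal length, 10 mm baseline.
for d in (5.0, 20.0, 80.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(1000.0, 0.01, d):.2f} m")
```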

Read the full blog here