SemIsrael Blogs

Please take a break to read SemIsrael blogs.

With the global chip shortage continuing to make headlines, it’s more imperative than ever to address the talent gap in the semiconductor industry. The COVID-19 pandemic has accelerated our digital migration, moving more of our activities online. Ajit Manocha, president and CEO of SEMI, has discussed how critical closing the talent gap will be to the industry’s success and growth.

MCUs come in a broad range of flavors, meaning you can pick the best one for the application with the right performance, feature set, peripherals, memory, and software programmability. So why do many systems also use FPGAs next to the MCUs? Usually, it’s because there’s no “perfect” MCU for their application. MCUs by definition are built to be generic for a wide variety of applications, or, in the case of application-specific standard parts (ASSPs), targeted at particular market segments. Customization is done with software. FPGAs come into play where embedded CPUs can’t execute the required workload efficiently and some level of hard logic is needed to process proprietary algorithms, support unique interface requirements, or enable future system upgradability.
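This division of labor is easiest to see at the register boundary between the two devices. Below is a minimal, hypothetical C sketch of the common pattern: the MCU firmware stays generic and simply drives a memory-mapped control block, while the proprietary algorithm runs in FPGA logic behind those registers. The base address, register offsets, and bit definitions are illustrative assumptions, not taken from any particular part.

```c
#include <stdint.h>

/* Hypothetical register block exposed by the FPGA fabric. The base
 * address and layout are illustrative only; a real design would take
 * them from the board's memory map and the FPGA project. */
#define FPGA_ACCEL_BASE   0x43C00000u
#define REG_CTRL          (*(volatile uint32_t *)(FPGA_ACCEL_BASE + 0x00))
#define REG_STATUS        (*(volatile uint32_t *)(FPGA_ACCEL_BASE + 0x04))
#define REG_DATA_IN       (*(volatile uint32_t *)(FPGA_ACCEL_BASE + 0x08))
#define REG_DATA_OUT      (*(volatile uint32_t *)(FPGA_ACCEL_BASE + 0x0C))

#define CTRL_START        0x1u
#define STATUS_DONE       0x1u

/* Offload one word of a proprietary transform to the FPGA and wait for
 * the result. The MCU code stays generic; the algorithm itself lives in
 * hard logic behind the registers. */
static uint32_t fpga_transform(uint32_t sample)
{
    REG_DATA_IN = sample;        /* feed the input operand           */
    REG_CTRL    = CTRL_START;    /* kick off the custom logic block  */

    while ((REG_STATUS & STATUS_DONE) == 0) {
        /* busy-wait; a production system would typically use an
         * interrupt or DMA completion instead */
    }
    return REG_DATA_OUT;         /* collect the processed result     */
}
```

The same register-style handshake also covers the other two motivations in the paragraph above: a unique interface can be wrapped behind FPGA logic that presents ordinary registers to the MCU, and the logic behind those registers can be reprogrammed later to upgrade the system in the field.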

By Priyank Shukla and John Swanson, Staff Product Marketing Managers, Synopsys Solutions Group, and Anika Malhotra, Senior Product Marketing Manager, Synopsys Verification Group

It’s all about bandwidth these days – fueling hyperscale data centers that support high-performance and cloud computing applications. It’s what enables you to stream a movie on your smart TV while your roommate plays an online game with friends located in different parts of the country. It’s what makes big data analytics run swiftly and allows artificial intelligence (AI) algorithms to perform their magic and provide valuable insights for everyday gadgets and beyond.
As the data connectivity backbone for the internet, the Ethernet protocol is answering the call for increased bandwidth demands by supporting speeds of 200G, 400G, and now 800G. Before long, 1.6T will not be out of the question. Going hand in hand with higher bandwidth is the need for efficient data connectivity over longer distances.

By Kenneth Larsen, Product Marketing Director, Synopsys Digital Design Group

The adoption of 3DIC architectures, while not new, is enjoying a surge in popularity as product developers look to their inherent advantages in performance, cost, and the ability to combine heterogeneous technologies and nodes into a single package. As designers struggle to scale past the complexity and density limitations of traditional flat IC architectures, 3D integration offers an opportunity to continue functional diversity and performance improvements while meeting form-factor and cost constraints.

By Jan Gilg and Tony Hemmelgarn

Silos between engineering and business have existed in enterprises for decades. As manufacturers design and deliver smarter products and assets, access to real-time business information across networks is critical to bringing new and improved innovations to market faster.

By Neel Desai, Sr. Product Marketing Manager, Digital Design Group, and Michael Posner, Sr. Marketing Director, IP

Chiplets are fast becoming the answer to cost-effectively deliver the high transistor counts at smaller geometries demanded by burgeoning applications like artificial intelligence (AI), cloud and edge computing, high-performance computing (HPC), and 5G infrastructure. By combining multiple individual silicon dies in one package, chiplets provide another way to extend Moore’s Law while enabling product modularity and optimization of process node selection based on function. However, meeting power, performance, and area (PPA) targets for chiplets as well as larger, faster, and more complex SoCs continues to be a race as designers strive to achieve increasingly stringent time-to-market goals.

By Manuel Mota, Sr. Staff Product Marketing Manager, Die-to-Die IP

High-performance computing (HPC) is a hot topic these days, and for good reason. Consider the can containing your favorite soda – countless hours of simulation and engineering work using HPC systems have gone into designing streamlined cans that minimize aluminum waste. Indeed, the benefits of HPC are far-reaching, from its use in mining cryptocurrencies to drug testing, genome sequencing, creating lighter planes, researching space, running artificial intelligence (AI) workloads, and modeling climate change.

By Stelios Diamantidis, Sr. Director, Synopsys AI Solutions, Office of the COO

Artificial intelligence (AI) is touching so many aspects of our everyday lives, from consumer devices to broader applications like drug discovery, climate change modeling, and self-driving cars. One of the advantages that AI brings is its ability to derive actionable insights quickly from massive amounts of data. And that advantage is also providing a productivity boost for chip design.

Escalating chip complexity and scale is an ever-present theme in any integrated circuit (IC) design discussion. The challenges are becoming especially acute as we move to the era of hyper-convergent ICs, which are not only larger and more complex, but also introduce new requirements to perform multi-dimensional verification of these highly integrated system-level chips.

Simply throwing more horsepower at the problem is no longer sufficient (although still needed). A more productivity-oriented workflow is essential to deal with the variety of verification challenges in a modern system-on-chip (SoC). Multiple verification methods need to work in concert, and overall productivity is just as important as individual tool performance. It calls for a revamped verification strategy. Call it a continuum. This post, originally published in the “From Silicon to Software” blog, shares what you need to know.

By Raja Tabet, Sr. VP of Engineering, and Anand Thiruvengadam, Product Marketing Director, Custom Design and Physical Verification Group

In our data-driven world, applications like high-performance computing (HPC) and artificial intelligence (AI) are taking center stage, delivering intelligence and insights that are transforming our lives. However, the growing complexities of HPC and AI designs are driving the need for much more complex semiconductor devices. Increasingly, multiple components and technologies are coming together in hyper-convergent designs to meet demands for bandwidth, performance, and power for these compute-intensive applications. To achieve power, performance, and area (PPA) targets, such complex chips need to be analyzed as a single system—an approach that’s difficult to support via traditionally disparate tools. In this post, originally published in the “From Silicon to Software” blog, we’ll examine the trend of IC hyperconvergence and explain why the traditional, disaggregated approach to circuit simulation is no longer sufficient.
