SemIsrael Blogs
Chipworks Technology Insider

In the last 24 hours our lab guys have torn down the iPad 2 and come to the conclusion that the main innovation is the A5 chip. Flash memory is flash memory, the DRAM in the A5 package is 512 MB instead of 256 MB, and the touchscreen control uses the same trio of chips as the iPad 1 – not even the single-chip solution we've seen in the later iPhones. And the 3G version uses the same chipset as the Verizon iPhone launched a few weeks ago.

Who? Tear down a Microsoft Kinect game sensor, and you'll find that PrimeSense provides the core technology behind its motion sensing and related processing. We posted a teardown of the Kinect just before Christmas and found the PrimeSense PS1080 inside.

The iPhone 4 is the first portable consumer device to feature full nine-degree-of-freedom (9-DoF) inertial sensing. Apple have done this by integrating a three-axis accelerometer and a three-axis gyroscope from STMicroelectronics (ST) with an electronic compass from AKM. The iPhone product line has moved steadily towards this goal with each successive model. Steve Nasiri and his InvenSense colleagues discussed the benefits of 9-DoF sensing in some detail in a recent white paper.
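
As a back-of-the-envelope illustration of why these sensors complement one another, consider the classic complementary filter: the gyro integrates smoothly but drifts, while the accelerometer gives a drift-free but noisy tilt reference. The sketch below is ours, not Apple's implementation – the function name, axis conventions, and 0.98 blend factor are all illustrative.

```c
#include <math.h>

/* A minimal complementary filter for pitch, fusing a gyro rate with an
 * accelerometer tilt estimate. All names and constants are illustrative. */
float fuse_pitch(float pitch,                   /* previous estimate, degrees   */
                 float gyro_rate,               /* rate about pitch axis, deg/s */
                 float ax, float ay, float az,  /* accelerometer, in g          */
                 float dt)                      /* sample period, seconds       */
{
    /* Gyro integration: smooth and responsive, but drifts over time. */
    float gyro_pitch = pitch + gyro_rate * dt;

    /* Accelerometer tilt: drift-free, but noisy and corrupted by
     * linear acceleration. */
    float accel_pitch = atan2f(-ax, sqrtf(ay * ay + az * az)) * 57.29578f;

    /* Blend: trust the gyro short-term, the accelerometer long-term. */
    return 0.98f * gyro_pitch + 0.02f * accel_pitch;
}
```

Rotation about the gravity vector (yaw) is invisible to the accelerometer, which is exactly the gap the AKM compass fills – hence the value of the full nine axes.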

We recently got hold of TI's Sitara AM3715, a 45 nm application processor with a 1 GHz ARM Cortex-A8 core and a POWERVR SGX™ graphics accelerator, among other capabilities – although, of course, so does the OMAP3630 (which is not coincidentally similar to Apple's A4 chip from the iPad and iPhone 4).

Past public teardowns of Apple mobile devices from Chipworks and others have tended to focus on the lack of state-of-the-art silicon, pointing to Apple's success as the result of good systems integration and a holistic experience. While this made for headline-worthy analysis, it downplayed the importance of some truly amazing semiconductor innovation.

Metal gates are back. First Intel used them in its microprocessors and now Winbond in its latest DRAM.

To the broader market, the move to metal gates is understood as being based on pure electrical performance (i.e., speed), but in the case of the DRAM device, this new technology also suggests a significantly lower manufacturing cost. By way of a history lesson, metal gates haven’t been used in production for several decades because the lack of self-aligned source/drains added unnecessary process complexity. Ironically, in this product, they actually simplify the process.

I mentioned in a previous blog that image sensor companies would be deploying BSI technology when their targeted applications demanded it. Admittedly, I was mostly thinking about mobile phone applications, where form factor and ever-shrinking pixel pitch seem to be the primary drivers for BSI.

As noted in several news articles, nicely consolidated in Image Sensors World blog postings, BSI is also making headway in digital still camera (DSC) and video camera applications.  The latest Sony BSI design win that we’ve seen is from Casio’s EX-FH100 EXILIM DSC.  In fact we documented a number of interesting and innovative devices in a product teardown on this camera.

This product is positioned as a high speed EXILIM camera with a wide-angle 24 mm, 10x optical zoom lens.  It offers a maximum burst rate of 40 fps for still images (maximum image size of 9.0 Mp, maximum capacity of 30 frames), and also a 1,000 fps high-speed movie mode for slow motion movie functionality. 

The image signal processor (ISP) is a Sony CXD4122GG 2nd generation camera system chip designed for use with Sony Exmor R CMOS image sensors (CIS).  The CXD4122 is capable of high-speed imaging of up to 10 Mp images at 50 fps, and high-speed video at 240 – 1,000 fps.

Sony CXD4122GG image signal processor

Before getting into the CIS silicon, it is worth noting that Sony's IMX050 BSI sensor displays quite a lot of innovation at the packaging level. We've previously seen the Sony BSI die elsewhere, but this is the first time we've seen it packaged with embedded passive components in the chip carrier. Sony claims the new miniaturized package is 30% smaller than their existing solution. A detailed process report studying the packaging is underway.

Planar and side X-ray of Sony IMX050 CIS – embedded passives

The IMX050 is a 1/2.3” optical format, 1.65 µm pixel pitch CIS featuring Sony's 2nd generation BSI process technology and a column-parallel A/D conversion design. Sony's Exmor R backgrounder describes the inadequacies of CCDs for this application as the driver for their high-speed CIS development. The implementation of BSI also resulted in an approximate 2x increase in sensitivity compared to an equivalent front-illuminated pixel.

Sony presented their high-speed BSI technology in February at the 2010 International Solid-State Circuits Conference (ISSCC). Our preliminary findings indicate the CIS from the ISSCC presentation is a match to the Casio sensor. Essentially, the new Sony technology hits three industry/consumer sweet spots: high-speed readout, high resolution, and high signal-to-noise ratio (SNR). A full imager process review (IPR) report will cover the high-speed BSI pixels and the 0.14 µm copper fabrication process.

Sony IMX050 CIS – 2010 ISSCC Paper 22.9 (left), Chipworks back-of-die photo

In summary, Sony has a very definite strategy to service high-performance, high-value consumer CIS applications. As low-resolution, small-pixel mobile sensors have become a commodity, Sony has chosen to play at the other end of the spectrum, where technical innovation is likely to command higher margins. As this teardown shows, they are also able to grow their business by winning designs in products from their consumer electronics competitors. One final observation is the continued displacement of CCDs in high-end DSCs, as we've already seen in some Canon and Sony point-and-shoot cameras.

Computing is getting closer to the consoles we see on old episodes of Star Trek: TNG, with everything seeming to go touch screen these days. It is actually uncanny how today's touch screens resemble the sleek, shiny black panels that Captain Picard used to use. The latest hot gadget to leverage this technology is, of course, the Apple iPad.

Example of a Star Trek touch screen (http://www.startrek4u.com/special/multimedia/lcars/curry/enterprise-e-engineering-ne.png)

Enabling technology is an important precursor to great systems and consumer acceptance of new and nifty products. For example, from a semiconductor standpoint, you could argue that the driving force behind the success of devices like portable music players was the availability of huge amounts of small and inexpensive memory.


Now with touch screens, the enabling technology is powering a whole new wave of application development for systems and software design. The combination of smart phone ubiquity and emerging tablet computing opens billions of new sockets, where the secret sauce for the next killer app is once again the semiconductor technology that lies beneath.


With the market forecast to grow to $9 billion by 2015 (from $3.6 billion in 2008), there has been a flurry of activity in this sector, with some analysts reporting over 170 suppliers in the supply chain today (source: DisplaySearch).

To illustrate the innovation we will use Apple – a current (and deserved) media darling who has done an exemplary job integrating touch screen technology. 

What makes a touch screen controller work?

The most popular technology used in touch screens is the resistive 4-wire and 5-wire approach, thanks to its low cost and simple interface electronics. There has also been a notable recent increase in shipments of products based on the Projected Capacitive Touch (PCT) approach. This latter technology is used by Apple and others, including in the Samsung Pixon12 and Sony Ericsson Satio.
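
For readers unfamiliar with how the 4-wire resistive approach works electrically, here is a minimal sketch: one plate pair is energized and the voltage divider formed at the touch point is sampled on the other pair. The HAL function names and pin assignments are hypothetical – any MCU with GPIO and an ADC would do.

```c
#include <stdint.h>

/* Hypothetical MCU HAL hooks -- names and pins are illustrative. */
extern void gpio_drive(int pin, int level);   /* push-pull output      */
extern void gpio_hiz(int pin);                /* set high-impedance    */
extern uint16_t adc_read(int pin);            /* sample the ADC on pin */

enum { XP = 1, XM = 2, YP = 3, YM = 4 };      /* assumed plate wiring  */

/* Read the X coordinate: bias the X plate end-to-end, float the Y
 * plate, and sample it -- the touch point taps the X-plate divider. */
uint16_t touch_read_x(void)
{
    gpio_drive(XP, 1);      /* X+ to VCC */
    gpio_drive(XM, 0);      /* X- to GND */
    gpio_hiz(YM);           /* Y plate floats         */
    return adc_read(YP);    /* code tracks X position */
}

/* Read the Y coordinate: same idea with the roles swapped. */
uint16_t touch_read_y(void)
{
    gpio_drive(YP, 1);
    gpio_drive(YM, 0);
    gpio_hiz(XM);
    return adc_read(XP);    /* code tracks Y position */
}
```

PCT controllers such as Apple's work on a different principle – sensing changes in capacitance across an electrode grid – which is part of why they demand far more signal processing on-chip.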


Controlling the screen is managed through a variety of chipset solutions. Staying with our example, Apple used a 5-chip solution in the original 2G iPhone, evolved to a 3-chip solution, and its latest devices now use a single Texas Instruments chip – the 343S0487. This latest TI chip has documented design wins in the iPod Touch, iPhone, and Magic Mouse – but notably not the iPad.


Getting the required touch functionality onto the single-chip 343S0487 was facilitated by fabricating at 90 nm, which gave the density required to fit all this functionality into a small footprint at low cost. The device has a contacted gate pitch of ~341 nm and a metal 1 half pitch of ~155 nm, and features five metal layers plus an aluminum RDL. The CMP fill pattern is characteristic of Texas Instruments, so we can conclude that the device is likely manufactured in one of TI's own fabs.

In fact, TI has also been gaining some other big socket wins in this industry, notably the Motorola DROID's resistive touch screen (using TI's TSC2046 4-wire touch screen controller with low-voltage digital I/O). The difference is that the device made for Apple is over 50% digital logic and memory, whereas the chip found in the DROID is a simple analog part that looks very similar to the Burr-Brown version made ten years ago (TI acquired Burr-Brown). Delivering all of Apple's advanced features clearly requires some relatively heavy processing power.

To see an annotated die photo of the TI 343S0487, you’ll need to visit our page promoting die photos of several of the touch screen controllers we have in inventory (scroll to bottom of page).


Although the focus of this article has been on Texas Instruments, a number of other innovative suppliers are in this market, including Cypress, Analog Devices, Broadcom, Synaptics, eGalax_eMPIA, and others. In the end there will, of course, be winners and losers. However, as the market matures, there is plenty of opportunity for all of these players to succeed with their own unique spin on the technology.

Chipworks recently extracted and analyzed the circuits from Alpha and Omega's AOZ9007 battery protection IC. This chip is used in lithium-ion rechargeable battery packs and competes with products from Sony, Texas Instruments, Fairchild Semiconductor, ON Semiconductor, Analog Devices, and Maxim.

When it first landed on my desk, I figured that it was “just another IC with comparator circuits” – ho hum.

However, as the analysis progressed, I found some surprising things.

The AOZ9007 battery protection IC consists of two stacked dice: a power control integrated circuit stacked on top of an integrated dual common-drain MOSFET. With this configuration, not only does the battery protection IC perform better, it is also smaller, effectively reducing the footprint of the device and making it suitable for power controllers where size is an issue.

Figure 1: Alpha and Omega AOZ9007 battery protection IC x-ray and die photos

Also, this is not just “another IC with comparator circuits.” These comparators act as detectors that protect single-cell lithium-ion rechargeable battery packs from overcharge, over-discharge, and over-current conditions. Hence, they are sensors that will put more life into your battery.

Figure 2: Detector circuits (circled) for the battery protection IC

The power control IC contains a number of constant-current logic gates, such as inverters. This is in contrast to “normal” inverters, whose PMOS and NMOS sources connect directly to VDD and GND. These constant-current inverters are used not only in the oscillator but everywhere that a constant current is required.

Figure 3: Logic circuit showing the constant-current gates

We extracted the full device, including the oscillator, counter, logic circuits 1 and 2, short-circuit detector, etc. Clients in this market will find the Alpha and Omega AOZ9007 battery protection IC to be a very compelling industry benchmark.

Reverse Engineering Software and Systems

As part of our patent support business, Chipworks applies software reverse engineering to generate evidence of patent infringement, which is used by IP groups and outside counsel in patent licensing negotiations and litigation. This evidence takes the form of a claim chart that maps relevant patent claim elements to the infringing product. We apply software reverse engineering to analyze and document infringement in a wide variety of products, from consumer electronics to communication devices and automotive systems.


We recently went inside an automotive electronic control unit (ECU). ECUs span a wide range of units such as the powertrain control module (PCM), body control module (BCM), electric power steering (EPS), airbag control unit (ACU), and electronic brake control module (EBCM). In light of the recent media attention targeting the automotive industry, we thought it would be interesting to share some of our findings on the perceived quality of the systems software relative to other semiconductor-based systems we have analyzed.


Since our overall findings are a bit mixed, we will spare the blushes of the leading automotive companies by not publishing the make or type of module we analyzed.


Reverse Engineering an Automotive Module


First we disassembled the module, reverse engineered the PCB, scoped its signals while operational, and reverse engineered its software. 


Two types of software analysis were done. We extracted the raw binary code from the module and analyzed it statically (known as “dead” code analysis). We also analyzed the code while it was running on the target device (known as “dynamic” or “live” code analysis).


We then rated several aspects of the unit (e.g., physical protection, code complexity) for quality. The results were surprising to us.


Physical Protection: Good


The module has a heat sink which also plays the role of a mechanical cover. The CPU detects the presence of this cover and stops running the code if it is removed. Below is an example of one of the holes used to screw the module to the PCB.

Note the resistor connecting the screw pad to the CPU.


Code and Data Space Protection: Good


The code space is checked for corruption and tampering by calculating and checking a CRC over the complete code space. The twist on this particular implementation is that there are several routines doing similar CRC checks, making tampering with the code more difficult.
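
For the curious, a code-space CRC check of this kind might look something like the minimal sketch below. The flash addresses, stored-CRC location, and choice of CRC-32 are our assumptions for illustration – we are not disclosing the module's actual implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical flash layout -- the addresses and stored-CRC location
 * are assumptions for illustration, not the module's actual map. */
#define CODE_START   ((const uint8_t *)0x00008000u)
#define CODE_LENGTH  ((size_t)0x00038000u)
#define STORED_CRC   (*(const uint32_t *)0x0003FFFCu)

/* Plain bitwise CRC-32 (reflected, polynomial 0xEDB88320). */
static uint32_t crc32_region(const uint8_t *p, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *p++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Several independent routines can run the same check at different
 * times, so a patch that defeats one check still trips the others. */
int code_space_ok(void)
{
    return crc32_region(CODE_START, CODE_LENGTH) == STORED_CRC;
}
```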


The content of the RAM data space cannot, of course, be checked for correctness, since it changes all the time. Instead, memory integrity is checked by writing and reading back standard 0xAA – 0x55 patterns in the background. As is obligatory for this kind of test, the write and read-back sequence must be atomic, and we observed that this was enforced here.
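
A minimal sketch of such a non-destructive background RAM test is shown below, assuming hypothetical irq_save()/irq_restore() hooks for the critical section; a real port would use the MCU's own interrupt intrinsics.

```c
#include <stdint.h>

/* Hypothetical critical-section hooks -- a real port would use the
 * MCU's own interrupt-disable intrinsics. */
extern uint32_t irq_save(void);
extern void irq_restore(uint32_t state);

/* Non-destructively test one RAM word with the complementary
 * 0xAA / 0x55 patterns. The whole sequence must be atomic so the
 * application never observes a test pattern in its own variables. */
int ram_word_ok(volatile uint32_t *word)
{
    uint32_t state = irq_save();    /* enter critical section */
    uint32_t saved = *word;         /* preserve live data     */
    int ok = 1;

    *word = 0xAAAAAAAAu;
    if (*word != 0xAAAAAAAAu) ok = 0;
    *word = 0x55555555u;
    if (*word != 0x55555555u) ok = 0;

    *word = saved;                  /* restore live data      */
    irq_restore(state);             /* leave critical section */
    return ok;
}
```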


Compared with other consumer and communication devices that come across our desks, both the code and data space protection were a notch higher.


Code Complexity: Bad


We analyzed the inner workings of the code and completely dissected a number of software routines. The surprise here was just how much code had been created for what was supposed to be the simple processing of a few sensor values. A lot of the code improved sensor resolution and produced more precise readings; correction, normalization, and adjustment routines were abundant.


This left us with a question – why was it necessary to go to such great pains to provide 0.003% precision when 0.3% would have sufficed for the purpose?


This observation was further confirmed by the sheer number of routines in the code – over 700 in total. Around 200 of them were involved in the comparatively simple task of processing two sensor inputs and outputting two variables. This certainly looks like overkill.


The calling tree, showing which routines call which others, looks rather intimidating.

The overwhelming impression was that precision was put far ahead of code simplicity.  Occam’s razor was rather dull in the process of this code development.


Debugability: Bad


A good coding practice is to sprinkle the code with debug logs. Granted, they do increase the code size, but they can also save your bacon when you need to debug a problem in the field on a live system. Usually, the execution of the debug logs is skipped and turned on only when needed. 
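
For illustration, runtime-gated logging can be as simple as the sketch below – the names (g_log_level, log_printf) are ours, not the module's, and a real ECU would log to flash or a CAN channel rather than stderr.

```c
#include <stdarg.h>
#include <stdio.h>

/* Illustrative runtime-gated logging -- logging is off by default
 * and costs just one compare per call when disabled. */
typedef enum { LOG_OFF, LOG_ERROR, LOG_INFO, LOG_TRACE } log_level_t;

static volatile log_level_t g_log_level = LOG_OFF;   /* off in the field */

static void log_printf(log_level_t lvl, const char *fmt, ...)
{
    if (lvl > g_log_level)
        return;                     /* skipped unless enabled */
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);      /* an ECU would log to flash or CAN */
    va_end(ap);
}

/* Usage: cheap when off, but can be switched on in the field to
 * debug a problem on a live system. */
void process_sensor(int raw)
{
    log_printf(LOG_TRACE, "sensor raw=%d\n", raw);
    /* ... normal processing ... */
}
```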


We did not see evidence of debug logs in this code.


The CPU in question did not have a JTAG or any other debug port, so the only way to debug it effectively is to use an in-circuit emulator (ICE). The problem with the ICE is that the CPU needs to be completely removed and replaced with a bulky ICE module, and such a contraption cannot physically fit in the narrow space provided for the module. Couple this with the above-mentioned physical protection, and the result is a setup that is very difficult to debug in the field.


Error Handling: Ugly


Another good coding practice for mission-critical applications is to check for software errors and log any that are found. A balance needs to be struck between too much error checking and too little: too much leads to code bloat (more code, more bugs), and too little lets bugs lurk undetected.


The voluminous code we inspected is certainly not a result of too many error checks. In many of the routines inspected, there were no checks for software errors.


We did, however, find checks verifying the raw sensor values and the final output values, but not much more. Even these errors were not logged at the moment they were detected, potentially making debugging more difficult.


However, the most worrisome error checks were those that tested whether values fell outside the maximum allowed range. One would think that alarm bells would go off upon detecting any such error, and that it would be logged and appropriately handled. Instead, all that was done was to cap the value at the maximum (or minimum) and forward it on as if nothing had happened!
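
To make the contrast concrete, here is a hypothetical before-and-after sketch – the names, types, and fault code are invented for illustration. The first routine mirrors the pattern we observed; the second shows what one would expect in its place.

```c
#include <stdint.h>

/* Hypothetical fault sink -- a real ECU would persist the event. */
static void fault_log(int code, int32_t value)
{
    (void)code;
    (void)value;
}
#define FAULT_RANGE 0x21    /* invented fault code */

/* The pattern we observed: an out-of-range value is silently capped
 * and forwarded as if nothing had happened. */
int16_t clamp_silent(int32_t value)
{
    if (value > INT16_MAX) return INT16_MAX;   /* no log, no alarm */
    if (value < INT16_MIN) return INT16_MIN;
    return (int16_t)value;
}

/* What one would expect instead: record the fault before capping,
 * so the out-of-range event can be diagnosed later. */
int16_t clamp_logged(int32_t value)
{
    if (value > INT16_MAX || value < INT16_MIN)
        fault_log(FAULT_RANGE, value);         /* raise the alarm */
    if (value > INT16_MAX) return INT16_MAX;
    if (value < INT16_MIN) return INT16_MIN;
    return (int16_t)value;
}
```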
