DISPLAY CONSULTING

feb15

OLED vs. LCD not like LCD vs. CRT… The CRT was the dominant display technology for nearly 100 years. There were, of course, other displays such as Vacuum Fluorescent, Electroluminescent, Projection, and Plasma. But the CRT was the “king of the hill”. As CRT technology evolved, it became the only technology that could produce video images in full color, and it created and dominated the television market for many decades.

Then, in the mid-sixties, along came Liquid Crystals – a really poor display technology by any measure. Displays made with Liquid Crystals were temperature sensitive, slow to respond, barely readable with poor contrast, and difficult to address with all their rows and columns. It appeared that at best LC technology might find a home in simple and small displays such as those for wristwatches and calculators. But gradually, one by one, the problems that were limiting LC performance began to be solved. New LC materials were developed with less temperature sensitivity and faster response. New ways of controlling LC molecules were discovered. Color filters were added and better ways of backlighting were developed. However, the major driving force that propelled the intense effort to develop LC technology was the introduction of the laptop computer in the early 90s. There was no other way to make a laptop than with an LC display. Clearly, CRTs were too bulky, plasma displays could not be made that small, and EL was not able to produce full color. As LC displays captured the markets for all small, and then medium-size, displays, this provided the base on which to build an infrastructure that encouraged further development.

While all this was going on, manufacturers of CRTs did not feel threatened, because they were confident that LC displays could not compete on image quality and that they would be confined to sizes no larger than 20 inches. The conclusion was that in larger sizes LCDs would always be too expensive because they had to rely on semiconductor-like manufacturing processes. Of course, we now know that this conclusion did not hold up. In the first decade of the 21st century, LCDs grew to dominate all sizes of displays, and the CRT was doomed to rapid obsolescence.

It is my opinion that the CRT manufacturers underestimated how important flatness and lower weight would be in driving consumer behavior. The prevailing opinion was that since the CRT still produced a better image with faster response, this would be the primary determinant in consumers’ purchasing decisions. However, at display sizes above 30-inch diagonals, CRT televisions were seriously bulky and weighed over 100 pounds. Furthermore, CRTs were limited to sizes no larger than 36 to 40 inches, because that was the largest TV that would fit through a residential door. And at those sizes the weight approached 200 pounds. The elegance of flatness and portability overcame any small remaining defects in image quality. Even plasma technology could not keep up. As LC displays became the technology of choice for virtually all sizes, the manufacturing capability improved, with better images and lower cost. By now, LC displays can be considered a mature technology. Virtually all consumers have purchased a flat-panel TV, and the quality of the images is excellent – able to match and even exceed the best that was available from a CRT. Given the tremendous success that LC displays have enjoyed over the last several years, the question then becomes: what’s next?
Is there another display revolution on the horizon that would match the “great conversion” from CRTs to LCDs? Some in the display industry had the dream that the introduction of 3D technology would create another major conversion. To anyone who took a careful look, that was doomed to failure from the very start. The inherent defects of stereoscopic 3D viewing and the need for polarizing glasses did not bode well for this becoming the next great technology era. With 3D not happening, some are now proposing that OLED technology will become the next major improvement to drive flat-panel sales. While OLED technology may indeed provide some noticeable benefits in image quality, it will not become a driving force like the one behind the transition from CRTs to LCDs. There is no great impetus for converting to an OLED display such as the laptop provided for LC displays or the bulk of the CRT provided for LCD television. OLED displays provide a modest improvement. They have done best where battery power is a major concern, such as in cell phones. OLED displays will have to become cost-competitive with LCDs in larger sizes to be considered a displacing technology. Right now that appears to be a major challenge, and it is not yet clear how it will be resolved.

The next generation of television displays will be driven by the move to higher resolutions such as 4K Ultra-High-Definition TV. It is most likely that for the next several years the needs for higher resolution will be met by the already mature LC technology. OLED technology will have to work hard to exceed what LCDs can do. There should be no expectation that OLED will be the driving force behind another major conversion such as we saw when CRTs were abandoned for LCD flat panels. Should you wish to offer your thoughts on this topic or others, you may reach me directly from this site, by email at silzars@attglobal.net, or by telephone at 425-898-9117.

march15

Semi-False Prophets… At one time or another, we have all wished that we could see into the future. On a personal level this may be just intellectual curiosity, but for many businesses it’s a matter of survival. Just ask Kodak and Radio Shack if you don’t believe me. However, even on a personal level it can be important in making career choices and perhaps in planning ahead to retirement years. Given this level of interest, there are many prognosticators who claim that they can answer these questions for us – if only we will buy their books or otherwise pay for their wisdom. And they promise to “prove” to us how prescient they have been with their past predictions. They will tell you that their track record is 80% or better. However, on closer examination we will see that the claimed successes are selectively counted and mostly the result of extrapolating what already exists. It did not take any special prognosticating skill, 10 or more years ago, to have predicted that computing power would continue to increase and what the consequences of that might be. However, how many predictions did you see telling you that there would soon be major new companies such as Google, Facebook, and Amazon?

If anyone could really see into the technology future, would it not make sense for companies such as Kodak, IBM, and Microsoft to take advantage of this knowledge and get into these new businesses before new challengers arrive on the scene? Would it not make sense for a company such as Microsoft to be the first to market with a successful search engine and to create a social network? Why let a new start-up come out of seemingly “nowhere” to take a major chunk of business away from an existing large company such as IBM? The answer, of course, is that the “prophets of technology future” have no special skill in foreseeing what new surprises lie ahead. They can take what is already known and try to extrapolate it, with some limited success. But they will invariably fail when it comes to uncovering the innovative new developments that catch us all by surprise. Could any of us have imagined how quickly social media would take over? Could we have imagined that in a span of just a few years we would all be addicted to our cell phones, to the point where we cannot wait for the plane to land before we turn them on, nor carry on a dinner conversation without checking for messages? Did anyone predict the rapidity with which the concept of “applications” would take hold – or even that there would be such a phenomenon?

The easy part of predicting is using known technology to extrapolate where it might lead us. I personally have had good success in predicting how display technology will evolve based on what we know about materials and how long it takes from basic discovery to a marketable product – it’s about 20 years. However, even in the display industry most of us could not foresee that LC technology would take over virtually everything. We all thought that plasma panels would have a place and that CRTs would not go away as quickly as they did. Looking ahead turns out to be far more complicated than some would have us believe. There is a complex relationship between innovation, consumer behavior, and government regulation. Quite often an initial effort may fail, only to later become a major success with just a few seemingly minor changes. Consider the evolution of the tablet computer, which Microsoft tried and failed at but, not too long after, Apple made into a major success. We live in a dynamic and mostly unpredictable world.
That creates wonderful advantages for those with new and innovative ideas, but it similarly creates difficult challenges for existing major companies looking to sustain their successes. The life of every new product or technology follows an S-curve from initial birth, through rapid growth and acceptance, to mature stability followed by an eventual decline (for the formula-minded, there is a short note on this curve at the end of this column). The challenge is to know what this curve will look like and to have something new in the wings to bring to market when maturity is approaching. Some companies, such as Kodak, never did figure it out. Others, such as Microsoft, seem to be doing better. In the display industry, we are doing well with the introduction of ever-larger displays, more resolution, and continuing improvements that allow displays to become even more ubiquitous. Should you have some surprises up your sleeve that you wish to tell me about, or if you wish to respond to these comments, you may reach me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.
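A note on the S-curve mentioned above. The classic way to put that shape into a formula is the logistic function – a standard textbook form, not anything specific to the display industry, with parameter names chosen here purely for illustration:

\[ A(t) = \frac{K}{1 + e^{-r\,(t - t_0)}} \]

Here A(t) is cumulative adoption at time t, K is the saturation level of the market, r sets the steepness of the rapid-growth phase, and t_0 marks the midpoint where growth is fastest. Early on the curve looks deceptively flat, near t_0 it looks like unstoppable exponential growth, and as it approaches K it flattens into maturity – which is exactly why extrapolating from any one segment of the curve so often leads forecasters astray.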

april15

The Miracle Liquid… In the mid-60s some interesting work was done at Sarnoff Labs in Princeton that demonstrated the capability of a liquid crystal material to modulate light. It seemed that there might be some limited use for such a material – perhaps in simple numerical displays, possibly for watches or instruments. Inherently, displays made with this new liquid were temperature sensitive, slow to respond, had poor contrast, and had a limited viewing angle. The need to polarize the light in order to see the image caused significant light loss. A color display would require the addition of filters and even more light loss, with less than 10% of the light getting through. Nevertheless, researchers persisted, and by the mid-90s laptop computers began to show up with displays that were pixelated into rows and columns. This was a major advance from the segment-addressed displays that were suitable for watches and instruments. However, these displays were only usable for word processing and other applications that did not require fast response. And even with design refinements, such as dual-scan addressing, the contrast was poor and the viewing angles severely limited. So after 30 years of development, from the mid-60s to the mid-90s, we were only able to make displays with liquid crystals for limited applications. And even in these applications the image quality was marginal.

At the major display conferences, there were technical presentations describing new designs that put an active storage element, such as a transistor and a capacitor, at each picture element. However, the consensus among display experts was that this approach would be a very expensive proposition. Semiconductor technology would have to be used on a scale much larger than any silicon wafer. And not only that, the yield would have to be much higher than in a typical silicon process, where defective parts are simply discarded. It would not be possible to do this with a display of useful size. The most optimistic projections for active-matrix displays were that they would eventually reach sizes no larger than about 20 inches in diagonal measure.

And then miracles happened! LC materials were developed that were less temperature sensitive. The speed of response improved to be even faster than needed for video applications. The inherently poor contrast and limited viewing angles were overcome with various new structures such as multi-domain cells. Manufacturing costs for active-matrix backplanes were reduced to be competitive even with the inherently simpler structures of CRTs. Backlight efficiency was improved, and large displays could be made that were energy efficient. Every limitation was overcome, to provide displays that ended up better than any and every other technology. CRTs were not suitable for laptop computers or other portable products and were limited on the upper end by weight and bulk. EL could not do full color. FEDs failed miserably for a variety of reasons. Plasma displays tried to be competitive at the larger sizes but in the end could not keep up. Conference-room projectors still have some market but are being replaced with large LCD panels. How was it that a thin layer of soapy liquid became not only the dominant display technology but, for all practical purposes, the only one? There is no inherent law of nature that I can think of that would have predestined this outcome.
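To appreciate just how unpromising the starting point was, it is worth running the optical arithmetic behind that “less than 10%” figure. The numbers below are round, illustrative values of the kind typically quoted for transmissive color LCDs, not measurements of any particular panel:

\[ T \approx T_{pol} \times T_{CF} \times A \approx 0.40 \times 0.30 \times 0.60 \approx 0.07 \]

Here T_pol ≈ 0.40 is the bright-state transmission of the polarizer pair (an ideal entrance polarizer alone throws away half of the unpolarized backlight), T_CF ≈ 0.30 accounts for each subpixel’s color filter passing only about a third of the white spectrum, and A ≈ 0.60 is the aperture ratio – the fraction of each pixel not blocked by the transistor, storage capacitor, and row and column lines. The product is about 7%: more than nine-tenths of the light generated behind the panel never reaches the viewer, which is why improvements in backlight efficiency mattered so much.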
Why all the pieces fell into place, and why all the inherent limitations could be overcome, will be something that we can ponder and appreciate as we view our 65-inch and larger Ultra-High-Definition televisions and communicate on our cell phones with their 500-line-per-inch ultra-sharp displays. Should you have some thoughts on this topic or others, you may reach me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.

may15

With the Wave of a Hand… Almost since the beginnings of television, we have dreamed of “hang-on-the-wall” displays. The CRT was bulky, and although it produced good images there was always the future dream of displays that were like the artworks hanging on the walls in our homes. Many innovative approaches were tried to compress the bulk of the CRT into something thinner. Most failed. Finally, along came plasma panels and liquid crystals. Those two technologies allowed us to achieve our long-held dreams of flat-panel displays. In a similar vein, many years ago we had the Dick Tracy comic strip with the watch that could communicate and even show video. Then we had the Star Trek TV series with the flip-phone-like communicators. Once again we could dream and hope to see these dreams someday become reality. And indeed they have. Today’s cell phones do even more than these fantasy communicators of yesteryear.

So where do we go from here? Are there other unfulfilled dreams that we can work on? Currently, we are seeing a growing emphasis on developing gestures and voice commands as ways to interact with our gadgets. And certainly being able to have a clear and unambiguous conversation with an electronic device would be a good thing to have. However, we don’t seem to be there yet. Voice recognition and understanding context require more than our current computer capabilities are able to deliver. Perhaps over the next decade or two these challenges will be overcome. Then, how about gesture recognition? Do we really want or need gestures as a way to interact with our electronic devices? To me this seems like an even more challenging problem than voice recognition. Many of us use gestures when we are carrying on a conversation – whether in person or even by phone. These gestures do not have any specific meaning. They are simply a way for us to emphasize what we are saying. Even when I am sitting at my computer, I may take a moment to stretch and wave my hands around. This does not mean that I am conveying any kind of information or request. I am simply stretching. So if I have a supposedly intelligent gadget that is observing me, how will it know when my gestures mean something and when they do not?

Gesture recognition is not something that we have dreamed about for years and years, as we did with flat-panel displays. In fact, I don’t remember ever having even the slightest desire to wave my hands, shake my head, or blink my eyes as a way to interact with an electronic device. That then leads us to the obvious question: what problem are we trying to solve, or what long-felt need are we attempting to satisfy? It seems to me that until we can define such an objective, gesture-related technologies will remain in the novelty category. There is, of course, the recently rediscovered realm of virtual reality. With all the recent efforts on head-mounted virtual-reality displays, there is the natural extension that for interactive games we may want to include gestures as a way to interact with virtual objects. In that context, the ability to recognize and respond to arm and hand movements could be an interesting application. Perhaps in that limited context gesture recognition might prove useful. However, as far as general use for such mundane activities as turning lights on and off or controlling our televisions, I can’t see a practical way that a “wave of the hand” will ever become a mainstream application.
Should you wish to wave your hand and tell me otherwise, you may reach me directly from this site, by email at silzars@attglobal.net, or by telephone at 425-898-9117.

june15

A Beautiful Display… Be it a cell phone, a tablet, a laptop computer, or a large-screen TV, the display is the centerpiece of how we interact with these devices. Of course other features are important as well, but if we were to prioritize what is important, I think most of us would place the display at or near the top of our list. Over the last decade, we have come to expect superb displays from each and every one of the products mentioned above, and from every other product that uses a display. In fact, we have come to view displays as commodity items that are excellent but pretty much the same. And to a certain degree that is a correct perception. In doing product teardowns, it is common to find well-known manufacturers using displays not only from their own factories but from other manufacturers as well. Therefore, should we assume that the display itself is no longer much of a competitive advantage when designing a new product? Wouldn’t this be especially true for those companies that do not have their own display-manufacturing capabilities?

Currently, I believe there is one outstanding example of how competitive advantage can be gained by having a unique display technology that provides superior performance. And that is the OLED displays that Samsung has developed for its cell phones. By having sole access to this technology, and by being able to create unique products that allow not only for superb performance but for other features such as curved edges, Samsung can create products that have significant market appeal. By comparison, Apple does not have full access to such a capability and must work with other manufacturers who may not have the same technical capability for making displays that will meet Apple’s expectations. It takes years of persistent, focused technology development to end up at the forefront of new capability. Samsung saw this opportunity some years ago and is now exploiting it in its products. Because they are in the lead, they can expect to maintain that position with continued intensive development. This will make it most difficult for others to catch up.

Some 60 years ago, in the small town of Beaverton, Oregon, there was a company called Tektronix that had developed an excellent product called an oscilloscope. As the company grew, it found that it could not get the kind of displays it wanted for its new products. The existing CRT vendors were neither able nor willing to provide the superb displays that Tektronix felt it needed. So Tektronix undertook a three-year effort to make its own. It was not an easy path, and many times they seemed to be staring failure in the face. But persist they did, and they ended up with CRT displays that no one else could match. These specially made CRTs became the centerpiece of their products and contributed significantly to the future success of the company. History seems to be repeating itself, except on a larger scale. Having superb display capability gives a company something of great value. In today’s world it takes extraordinary dedication to achieve such a position. But when achieved, it can pay back substantial and sustainable returns. Should you wish to offer your thoughts on this topic or others, you may reach me directly from this site or by phone at 425-898-9117.

july15

An Inevitable Outcome… A June 3rd article in the Seattle Times carried the headline “Instagram photo feed opens wide to ads”. Who would have guessed? All those wonderful features touted by the social media sites are undergoing a change that was quite predictable and is now taking hold at an ever-accelerating pace. The pressure on the social media providers to continue to grow and to show increasing revenue and higher profits is now in full force. So these “free” services are finding it necessary to extract an indirect price from all users by pushing more and more advertising our way. The new wrinkle is that this advertising is being tailored to what the social media sites think we want to see and to what we are most likely to respond to. And how do they know how to do this? Basically, by snooping on our every conversation and every posting to analyze our behaviors. This is happening so gradually that most of us are not paying much attention. And that is exactly how these providers want us to behave.

A few weeks ago, I ordered some new toner for a photographic-quality color printer. Soon my every Internet search showed up with a sidebar ad for related items, such as photographic paper from the same manufacturer as my printer. And I almost responded, because I had been thinking about trying out some papers with different surface textures. Because this was something that really was of interest to me, I did not think of it as an obnoxious intrusion. So here we go down the slippery slope of lost privacy, toward a society where we become the objects of cleverly researched electronic sales pitches. What we are undergoing is not unlike becoming hooked on an addictive drug. First, it is offered to us as virtually free, with seemingly no downside consequences. But as we become regular users the situation begins to change. Our behaviors are monitored and analyzed, and we become the targets of sales pitches based on the results of having our every communication automatically assessed. In itself this may not be all that terrible. However, as the available space on our computer screens becomes ever more cluttered with these targeted sales pitches, the original benefits begin to fade and what we see instead are blatant attempts to influence us to look and buy.

It seems to me that from the very beginnings of the Internet we have been way too trusting that the seamier sides of human behavior would not be a serious problem. But here we are today with never-ending viruses, thefts of our identities, phishing attacks, and hundreds of spam messages each day – many of them malicious. Should we expect that the social media providers will always behave with total honesty? That is unfortunately hard to believe when we cannot even count on it from our own government agencies. Are there experiences that you would like to share on this topic or others? You may contact me directly from this site or by phone at 425-898-9117.

august15

An Interesting Convergence… When away from my office on travel, I occasionally like to take a look on my computer to catch up on the local news in Seattle. I will typically access either the Seattle Times newspaper site or the King5 television station site. Both provide good coverage of local news, sports, and weather for the Seattle area. As I did this on a recent trip, I began to realize that on the Internet there is basically no difference between a newspaper site and a television station site. They have similar appearances and provide essentially the same content. If I didn’t know which was the “newspaper” and which was the “television station”, I would not be able to tell them apart. What does this tell us as we look ahead to the future? The interesting conclusion is that on the Internet every content provider ends up looking the same. Whether the original source is a newspaper publisher, a television station, a magazine publisher, a search engine, or an original content creator – all end up indistinguishable on our computer screens.

We are losing the distinctions that different media used to provide. The feel of a newspaper and how we read it is not the same as how we watch news on a television screen. A magazine has yet a different feel, and we may read it on an airplane or in bed at night. Books in print provide yet another distinctive feel, and a permanence that a computer screen does not have. All these distinctive experiences are now converging and blending into one – the electronic display. Of course, we are already witnessing this whenever we venture to a shopping mall, a restaurant, or an airport. It is no longer possible to enjoy any of these activities without noticing everyone intently staring at their cell phones, tablets, or laptop computers. So it’s likely that we are already adapting to this new world of all-electronic display media and accepting that our access to any and all information sources will be by looking at electronic displays of various sizes. Given this scenario, what does it tell us about current enterprises such as newspapers, television stations, magazine publishers, and perhaps many others?

There is, however, a downside that is currently by and large being ignored. This downside has been noted, and concern expressed, by one of the originators of the Internet, Vinton Cerf. He is concerned that we are creating a world with no permanence – a “digital dark age”. What will happen to the family photographs stored “in the cloud” if the cloud suffers some kind of damage? All of our other information is similarly being centralized and stored in digital media. This is a fundamental change from our centuries of history, where knowledge in the form of books, texts, and pictures was dispersed so that no one event could damage or destroy it all. The same was true for family histories in the form of photographs and letters. Many family members had copies that later could be retrieved and duplicated by others. Furthermore, a photograph or a text does not require any special device to render it readable. That is of course not the case with digital media. We need compatible electronic devices in order to be able to retrieve the information. And once the digital media are damaged, there is typically no way to do such retrieval. We are living in a time of change, and a time when we are potentially being careless with our future.
If all of our records, memorabilia, and knowledge are stored at some unknown remote site “in a cloud”, what will happen if one day this “cloud” is no longer there? Have we become too naively dependent and too willing to accept this untested way of keeping our important records? Could this become the century of lost knowledge that will have to be painstakingly recreated by future generations? Have you committed to storing all your important records “in a cloud” somewhere? I would be interested to hear your thoughts on this topic or others. You may contact me directly from this site, or by telephone at 425-898-9117.

september15

And Suddenly It’s 1993 – All Over Again… In October of 1993, I wrote a column titled “Is Virtual Reality About to Become Real Reality?” In that column, I described a scenario of a home entertainment center that was set up to provide an immersive virtual reality experience. I went on to say, “What better leading indicator could we have that there is high user interest and enough technology happenings for someone to publish a magazine about it?” My conclusion in October of 1993 was that, “The next several years will be very interesting because we can expect to see many new products introduced, most of which will not be commercially successful, followed by an evolution of one or a few standard platforms similar to what has happened in desktop and laptop computers.”

So here we are, some 22 years later, and we seem to be hearing the same story all over again. What went wrong? What did not happen over the last 22 years to make virtual reality a “real” reality? Is there something fundamentally different today – other than the multi-billion-dollar investments that are being made? Perhaps the biggest difference is how much more computing capability we have today to try to create the virtual reality experience. We have also made considerable progress in how capably we can model the interface between what a person is doing and how the simulated environment needs to respond. The improvements in display performance have not been as dramatic, but they are of crucial importance in how the virtual reality experience is presented. For most applications, 3D has come and gone, but it could find a happy home in virtual reality. But are we there yet? Can we finally expect to cross the boundary between clearly fake reality and something that begins to feel quite real? This final step may turn out to be harder than we think – and 22 years from now, I may be writing another column about it.

Even with the computing and hardware improvements that have occurred over the last 22 years, there may yet be significant obstacles. The problem is that as we get more accurate in simulating a virtual environment, our brains are busily analyzing the situation and telling us that something is still not quite right. We may want to “believe”, but our senses are too smart to fall for the deception – and fight us all the way. Sometimes this is called the “uncanny valley”, a term typically applied to robots that are intended to look and act like real humans. The closer we make them look to human, the more disturbing they become. Thus, it becomes extremely difficult to jump the boundary that separates “real reality” from “virtual reality”. How will this affect the products that are currently under development? What will the future hold for the many start-up companies and their enthusiastic investors? Are we seeing a true breakthrough, or are we reliving a 22-year-old dream that we forgot about? With this historical perspective, my tendency is to be somewhat cautious and skeptical about the more enthusiastic predictions. Virtual reality may need to begin its journey to success with some limited applications in specialized areas. Where can we use these capabilities to solve specialized problems? Or perhaps the entertainment market will find some early adopters for games and other viewing experiences. It will be a real surprise – but not a bad one – if someone does come up with a virtual reality product that makes a major splash in the marketplace. In the meantime, I would be interested in hearing your thoughts on this topic or others.
You may reach me directly from this site or by phone at 425-898-9117.

october15

A Future We Could Have Had by Now… Over the last few months, I have had the opportunity, and the challenge, of putting into operation several new computers and then making them work with new and existing printers, cameras, scanners, and various displays – including large-screen televisions. And what a frustrating and time-consuming experience it has been! Perhaps frustrating is even an understatement. Why should it take so much time, effort, and emotional stress to get a computer to recognize a printer and vice versa? Why do I need to hunt up a “Network Key” and enter it manually? Why do I need to struggle with a “format unrecognized” problem on a new large-screen “smart” TV? Shouldn’t there be an easier way? Just about every new electronics product now claims to be “smart”. The “Internet of Things” is supposed to be the wave of the future. So why should it take so much effort and frustration to get commonly used devices to talk to each other?

It shouldn’t be that way. And here is why. If we were simply to give each device its own universally recognizable “phone number”, printed on the identifier tag that shows the model and serial number, any computer could simply “call up” the product and the connection would be established immediately. If the device needs to get onto my network, I should be able to simply enter the “phone number” of my network, and the new printer would again be instantly “installed”. (The short sketch at the end of this column shows how simple such a setup could be.) Of course, this may not be as much fun for the marketing departments of the new “smart” products, because it may limit how much promotional stuff they can push our way. As I am sure you have noticed, the current install disks are just chock-full of attempts to get us to buy into various data-mining and sales efforts, and to sneak a toolbar onto our computer screens that ties us permanently to these promotional efforts.

So why aren’t we already living in this simple and elegantly interconnected future – and are we likely to have it anytime soon? The realistic answer is that this will probably not happen any time soon. Even though the technology is no more complicated than each of us having an e-mail address or a phone number, there seems to be no organization or government entity that is interested in making life easier for consumers. At one time in its past, Microsoft could have promoted such an approach and was sufficiently dominant that it could have made it happen. But no longer. There are now other major competing forces and other large players that make such a cooperative effort unlikely. Also, it seems that we consumers have become desensitized and have been lulled into accepting the unnecessary complications that “smart” devices are imposing on us. Have the “smart” devices figured out how to train us to do the interconnection work for them? What an interesting outcome! Our lives were supposed to be made easier by all these new capabilities, but instead we have ended up being servants to the devices. I would be interested to hear of your experiences in inter-device communications. Or perhaps you are a “smart” device and would like to provide a contrary explanation. In either case, you may reach me directly from this site, or by phone at 425-898-9117.
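Here, as promised, is that sketch. To make the “phone number” idea concrete, this is a toy illustration of what device setup could look like if such a scheme existed. Everything in it is hypothetical – the registry, the device IDs, and the connect function are inventions for illustration only, not any real product or API.

```python
# A toy sketch of "dial-a-device" setup. Hypothetical throughout:
# no global device registry or protocol like this actually exists.

# The "phone book": a universal registry mapping the ID printed on
# each device's label to whatever is needed to reach it.
DEVICE_REGISTRY = {
    "PRN-4452-981002": {"kind": "printer", "address": "192.168.1.42"},
    "TV-7520-003311": {"kind": "smart TV", "address": "192.168.1.50"},
}

def connect(device_id: str) -> str:
    """Look up a device by the ID on its label and 'dial' it.

    In the imagined scheme, this single lookup replaces driver hunts,
    network keys, and install disks.
    """
    entry = DEVICE_REGISTRY.get(device_id)
    if entry is None:
        raise LookupError(f"Unknown device ID: {device_id}")
    # A real implementation would open a network connection here;
    # the toy version just reports success.
    return f"Connected to {entry['kind']} at {entry['address']}"

# Setup becomes: read the number off the label, type it in, done.
print(connect("PRN-4452-981002"))
```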

nov15

An Incredible Journey… This past week, I had the opportunity to disassemble and examine a 75-inch Ultra-High-Definition TV. At the 75-inch size, unpacking and handling this very large TV presented quite the wrestling match. It probably should have been a two-person effort, but over time I have learned how to handle even these large TVs by myself. Once the back cover was off, what was most impressive was how little there was in the way of circuitry. There were two adjacent power supply boards, a signal input board, and one other small board for converting the input signals into the format needed to drive the addressing circuitry – which is hidden in the LC panel assembly. And that’s all. The rest is empty space: the backside of the metal frame that holds the LC panel and the backlights.

What incredible progress we have witnessed in the span of just a few years. It was only 20 years ago that ten- or twelve-inch LC displays were of the passive type and were only used in the new laptop computers that were beginning to impact our lives. These displays were slow, had poor contrast and limited color capability, and were limited to no more than 256 rows. At that time, those of us in the display industry could not imagine that LC technology would ever be used for anything more than small or medium-size displays. Even in the beginning years of this century, the future for rear-projection TV and plasma panels seemed secure for large-screen viewing. How could we have foreseen the incredible improvements that would be made in Liquid Crystal Display performance and manufacturing cost? The prevailing wisdom, that TFT technology would always remain expensive because it was a “semiconductor-like” process, certainly did not hold up. Perhaps we should have realized that the 10-micron features in a TFT are much easier to manage on large substrates than the nanometer-scale features of a modern microprocessor (the arithmetic at the end of this column shows just how roomy a TV pixel really is).

However, even as an active participant in the evolution of these displays, I have to stop and marvel at just how much has been accomplished. Take apart an LC panel. All you will find is a barely visible, soapy-feeling wetness between the two glass sheets. If you look under a microscope, on one sheet you may find a pattern of red, green, and blue rectangles, or perhaps trapezoidal shapes. On the other side you may find the TFT tucked away in a corner of each pixel. It seems impossible that such a structure, with a tiny amount of soapy film in between, can somehow create beautiful full-color video images. During the last twenty years, we really have been on an incredible journey. Not only has LC technology exceeded our viewing expectations, it has done so with manufacturing improvements that allow it to be affordable for nearly all consumers. Wasn’t it just a few years ago that modest-size LCD TVs were priced well above one or even two thousand dollars? Today these same-size sets sell for hundreds of dollars. Where will this incredible journey end? With the advent of 4K television and ever-larger screens, we apparently still have travel opportunities ahead of us. It will be exciting to see what is around the next bend. Should you wish to discuss this topic further, you may contact me directly from this site or by telephone at 425-898-9117. As I am sure you will agree, there is much for us to be thankful for during this year’s Thanksgiving Season.
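A postscript for the arithmetically inclined: a little back-of-the-envelope figuring shows why 10-micron TFT features are so forgiving on a television panel. The short sketch below assumes a standard 16:9 UHD (3840 × 2160) format at the 75-inch size; everything else follows from simple geometry.

```python
import math

# Back-of-the-envelope geometry for a 75-inch 16:9 UHD LC panel.
diag_in = 75.0
cols, rows = 3840, 2160            # UHD pixel format
aspect_w, aspect_h = 16, 9

# Panel width from the diagonal and the aspect ratio.
diag_units = math.hypot(aspect_w, aspect_h)
width_mm = diag_in * (aspect_w / diag_units) * 25.4

pixel_pitch_um = width_mm / cols * 1000   # one full RGB pixel
subpixel_um = pixel_pitch_um / 3          # one colored stripe
tft_count = cols * rows * 3               # one TFT per subpixel

print(f"Panel width: {width_mm:.0f} mm")
print(f"Pixel pitch: {pixel_pitch_um:.0f} microns")
print(f"Subpixel width: {subpixel_um:.0f} microns")
print(f"TFTs required: {tft_count:,}")
```

The panel comes out about 1660 mm wide, giving a pixel pitch of roughly 430 microns and a subpixel more than 140 microns across – an enormous canvas for a transistor built from 10-micron features. The catch is the count: nearly 25 million TFTs, essentially all of which must work, which is why yield, rather than feature size, was always the real manufacturing battle.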
