
Aris Silzars

march04

It must be hard to explain… It was only a few years ago that display companies did marketing studies that concluded that for flat panels to have significant penetration into the CRT-dominated display industry the costs would need to be no more than about 25% higher than those of comparable CRT products. A few “strange” voices were also heard to suggest that flat panels could be sold into office environments based only on their smaller footprints. The proposition was that this would be useful to increase worker density even further. But even the space planners promoting these ideas would typically include other cost savings features such as lower energy utilization. So what happened? In some magical way (perhaps we should call it skillful sales promotion?) our price point thresholds were “re-adjusted” so that today when we see an advertisement for a plasma panel for just under $4,000 we think it is a bargain that we shouldn’t pass up. And similarly, an LCD monitor that is more than twice the price of a comparable CRT monitor is considered a great purchase because it is the “new digital technology”. Perhaps some of what is going on can be explained by an article that I recently read in USA Today. (I should mention that in general my impressions about this newspaper are favorable and I even get it by home subscription. Therefore, my use of the following example is not for the purpose of making them look bad. It just is what it is.) The title of the article was, “Fans make a run on big TVs before big game” and it was written by Lorrie Grant. (USA Today, January 30, 2004) Here is a sampling of what she wrote. “January is traditionally the slowest shopping month of the year, but the Super Bowl – and new digital technology – has turned the month into a retail event… About 1.5 million TVs were expected to be sold for the Super Bowl, says a National Retail Federation survey. The big screen (exceeding 40 inches) is important for the big game. Hottest among them is the middle-market LCD flat-panel TV. It has a sharper picture than the standard projection television but costs much less than plasma TVs, which have the highest picture quality. A 50-inch plasma TV can cost $9,000 depending on the brand, while a 52-inch LCD is in the $3,500 range and a 52-inch projection TV is around $1,300. “This is the trend we’ve seen all year. Everyone wants plasma, but the LCD is in the sweet spot of what they can and want to spend,” says Bill Cimino of Circuit City.” Can you figure out what she is talking about? I can’t, but I sure get the impression from reading these excerpts, and the rest of her article, that I had better hurry up and join in the big-screen purchasing stampede, and buy whatever technology I can afford. From this article and many others, we are led to the inescapable conclusion that it is important to have these “great new digital technologies” because they are “so much better”. Now exactly what that means is seldom made clear. I found one recent exception to this in the March 2004 issue of Consumer Reports. Here’s a sample of their way of writing about big-screen TVs. “The hoopla about flat panels and big screens notwithstanding, you’ll still get the best combination of picture quality, viewing angle, and price from familiar direct-view TVs – the tube-based sets you’ve been watching for decades.” Perhaps this kind of bland commentary does not make for exciting reading in the popular press, but it’s something that makes more sense to those of us in the technical community.
Much of this March issue of Consumer Reports is devoted to unscrambling the misinformation about display technologies and the various choices for TV and HDTV. In general, I found the commentary to be about as accurate as I would know how to make it. For that reason alone, it may be worthwhile to take a quick look at it. Let me know if you disagree. For me, it was a refreshing change from most of what we get to see and read. It appears that when the marketing studies were done a few years ago — that predicted flat-panel prices would need to be close to those of CRTs — we were being way too logical and behaving too much like engineers and not enough like real live consumers. We were surveying the world as it was then, with buyers who were thinking about the prices they were seeing for TVs and computer monitors at that time. Therefore, a $350 price for a TV seemed about right and even less for the typical computer monitor. When the first advertisements for $5,000+ plasma panels came out, the prices looked like a misprint or maybe even a bad joke. But after a few months, they didn’t look completely ridiculous – only very expensive. Then a few more months later, we were favored with an occasional ad promoting special sales of these panels for “only” $4,995. Wow! Just as we suspected, the prices were coming down! Now, the slightly-under $5,000 price was not only not ridiculous, it was getting to be a downright good buy, while the recently introduced $10,000 50-inch panels were the ones that now seemed too expensive. How quickly we forgot and how quickly we re-adjusted. And, as we can now see, this set the price acceptance point for all the other new display technologies at numbers much higher than we thought were possible for large volume sales. However, the consumer behavior that is perhaps most difficult for us engineers to accept is that the most important performance criterion – the quality of the display itself – has become secondary to the form factor and the futuristic appearance of the new panels. I suppose the secret of business success is knowing how to predict such behavior and then to have the confidence in


april04

Presumptuous Assumptions… I’ve never been much of an enthusiast for the computer-controlled house or the internet-driven kitchen. Nevertheless, in spite of all the good wisdom that I have tried to offer over the years, the topic still seems to intrigue those looking for how to extend the use of personal computers, or for new uses for the latest high-tech products. What is especially interesting is that many of these efforts are based on the presumption that houses and kitchens will continue to be designed and lived in much as they are today, and just need to be more automated and/or more “interconnected.” I found the latest examples of this in the March 2004 issue of Popular Science magazine. The article is titled “The Souped-up Kitchen – The next decade will take you on many journeys, but one place you won’t go is hungry”. Now, I fully believe that most of us with kitchens and homes won’t go hungry. It’s just that I am wondering if this is the way that we will do it. Here is a sampling of what is supposed to come into our lives.

• Voice commands that will allow a ceiling projector to show a recipe once we have selected it.
• An oven that will keep food cool until we give it the command to start cooking over a cell phone as we are driving home from work.
• Other appliances will be interconnected to automatically coordinate yet other parts of the meal preparation.
• The refrigerator will have a built-in camera and RFID tag sensor to remotely tell us what it contains.
• The trash compactor will have an RFID sensor to keep track of what is thrown away so that the next shopping list can be created beginning with these items.
• A robotic wet/dry vacuum cleaner will be docked in its own cabinet to come out and clean up spills and even the counter top – the article doesn’t say who will clean up the vacuum cleaner.
• The kitchen will sense who is present and modify the environment and items to be presented for consumption.

Well, I think you get the idea. More computer control, more automation and more gadgets that help you cook, shop, and keep track of all this stuff. And of course displays everywhere to show the recipes, to tell us the status of the inventory, and to provide access to the computer intelligence that will run all of these interconnected devices. However, there is one very basic assumption behind all of this. (There is also the secondary assumption that all of these interconnected gadgets will work as designed – a giant assumption that has no connection to current reality). However, in this column let’s focus on the primary assumption. Is it really a given that we will continue to have kitchens that are like the ones we have today? Let’s consider a few societal behaviors that may bring that assumption into question. The very concept of family relationships is changing. More households depend on two incomes to maintain their lifestyles. The adults earning those incomes are less likely to keep the “normal” working hours that were the norm a few decades ago. If there are children in the family, they are likely to have their own busy schedules. The concept of regular mealtimes and the traditional roles of the husband as wage earner and the wife as the person who cooks and maintains the home is close to vanishing. All this creates a changing environment where the function of the kitchen no longer fits the assumptions on which the additional “features” listed above are based.
Instead of making the conventional kitchen more efficient, perhaps the futurists should be thinking about how to deal with today’s reality where families are eating out more, where food is brought home at the last minute from a take-out facility, and where mostly-prepared salads or dinners are brought home from the grocery store for a quick final preparation. And where hurried meals may be taken as the busy schedules allow. None of this requires additional automation. In fact, it requires less from the kitchen than ever before. There is another very basic and I believe fundamental flaw in the functioning of the futuristic kitchen as proposed by the technology-driven designers. And that is the concept of inventory control. The premise that somehow we can create shopping lists by tracking what goes in and out of the refrigerator and other storage locations is the highest level of wishful thinking that I can imagine. That turns out to be difficult even in the most disciplined of environments, such as in managing a factory assembly line. Retail establishments try to maintain control by point-of-sale tracking of each item that leaves the store and by carefully recording all incoming inventory — and they still need to do periodic physical counts. In what kind of fantasy world can we expect a family with no desire and no inventory-control training to track the use of consumables – especially when most of those consumables are left in partly used quantities, and not even necessarily returned to their intended locations? How does one account for a partly eaten chocolate cake, or one-third of a head of lettuce? And how do we deal with the variations in what every family member might like to try next? Our desire for variety in our nourishment is not at all like the predictable environment of a product assembly line. When we consider the changes that have happened in our work lives, along with the changes in how most families are responding to the activities of their growing children, we have to conclude that for most families the concept of the traditional mealtime has become a rare event. There is less predictability and less time available for preparation and for consumption. And, in addition, we must not forget to consider our demands for variety and novelty in our food choices. Given this reality, the


may04

Welcome to the Future… It happened so imperceptibly that we hardly noticed. We began our flat panel journey roughly 40 years ago with such modest beginnings that not one of us could see the exciting future that lay ahead. One part of our journey started with LC displays for watches and calculators. We then moved on to somewhat more complicated but still mostly segment-addressed LC displays for test and measurement instruments, and to the first low-resolution row-and-column-addressed ones for portable applications. We achieved a major milepost when the first laptop computers with monochrome LC screens became commercial products. However, the low contrast and poor viewing angle of these early passive matrix displays made them just barely adequate — even for the rudimentary word processing and spreadsheet programs of that time. The developers and promoters of competing display technologies were of course more than happy to point out these severe deficiencies, and yet others, such as limited temperature range and poor speed of response. But ever so gradually, with creativity and persistence, LCD developers were able to overcome each and every limitation. The introduction of TFTs was a big step, but so were the various techniques that allowed for wider viewing angles, improved contrast, and better color gamut. As each year slipped by, the LC displays got better and bigger. Nevertheless, we still continued to look to the CRT as the standard for comparison. Overall it could still claim to have the best display quality. Then on a day not so long ago, perhaps even at last year’s SID Symposium, we looked at an LCD screen and somewhat to our astonishment had to admit that we really could no longer find much wrong with it. It was bright, with excellent color and contrast, and had good viewing angle. And the speed of response looked quite adequate even for video images. A threshold had been crossed. These LC displays were now “good enough” even to our critical display-engineer eyes. While we knew that further refinements and improvements in display quality would surely still come, the only significant remaining issues we could at that moment suggest were manufacturing cost and the scale-up of worldwide manufacturing capacity. In an almost parallel universe, plasma panel technology began its journey from similarly modest beginnings and at about the same time. How many of you remember “Nixie tubes” in the early digital multi-meters? The first “real” plasma panels were like the early LCDs — also monochrome, but with the added feature of having a warm orange glow emanating from each active neon gas pixel. When color did come along, achieving clean saturated colors proved to be a difficult challenge for plasma technology, even after the three-electrode structure was invented and down-converting phosphors became the accepted method for creating light emission. It was not until the early ’90s, with the introduction of the barrier rib structure, that we finally could provide adequate isolation between pixels and could create panels with colors that were bright and crisp. With that improvement, plasma technology was able to move at a faster pace toward product commercialization. Remaining issues such as the efficiency of the emission process, the complexity of the driving circuitry and long-life operation have continued to be challenges, but the spectacular images produced on these large screens have already captured the imagination and enthusiasm of the consuming public everywhere. This technology too has now arrived.
All of those past predictions of a futuristic world where we were supposed to find flat panel displays everywhere – well, it’s here! We have met virtually all of the expectations of the futurists. We are in the midst of a worldwide scale up of flat panel manufacturing capacity unprecedented even by semiconductor industry standards. It will not be too many more years before the sales of displays exceed those of the entire semiconductor industry. The LC and plasma display technologies will dominate our display world at least for the next decade. The CRT will continue to be important but its future is limited mostly to specialty applications and to television products in many regions of the world. An unfortunate result of the high enthusiasm for flat panel technologies could be the premature abandonment of CRT development and manufacturing. That should not be allowed to happen. Will we end up with only one dominant display technology? I believe it is way too early to make that assessment. In fact, it seems far more likely that other display technologies will also enjoy a robust future. The upcoming SID Symposium (May 24-28, Seattle, WA) is likely to provide a number of key “leading indicators” to what the future may hold. One place to look will be in the Sessions describing the latest developments in display materials. Since it typically takes a decade or more from the time a new material is discovered to the time when commercial products begin to appear, understanding the status of such materials developments will provide valuable insights into future opportunities. The most visible and perhaps most important new display technology that is currently in the midst of a transition from basic materials research to early commercialization is OLEDs. The future promise of this technology for efficient, bright, full color displays is considerable, but the challenges are also significant. But are they any greater than those first encountered by the developers of LC and Plasma technologies? It will take a few more years to find out. And what about other still evolving display technologies such as inorganic EL, FEDs, LEDs, and new light engines for projection applications? In some cases the answers are becoming clearer, but for others more research will be needed before the future path can be determined. We are experiencing unprecedented growth as a result of worldwide consumer enthusiasm for the flat panel technologies we have been creating over the last 40 or more years. And while understandably the excitement is mostly for these new technologies, we shouldn’t forget that the venerable CRT


june04

It Won’t be Long Now… Last week, I went to pick up a few plumbing repair items at our local Home Depot store. As I approached the checkout counter area, I noticed that there was only one cashier open at this relatively slow time of day. Nevertheless, there were at least eight people waiting in line ahead of me to pay. Right next to this line of increasingly impatient customers was the new self-checkout area. One person was valiantly trying to use this new “faster and more convenient” method of completing the transaction. However, the items would not register properly and the automated system told him that he could not remove his bag until some additional step was completed. This person looked uncomfortable and embarrassed by the time a clerk finally came over to help. Unfortunately, while standing in line and watching all of this take place I did not have the satisfaction of even a twinge of smugness or comfort – for I know that my day is also coming, and very soon. We are all about to suffer through training experiences such as the one endured by the frustrated gentleman I was observing, on our way to becoming checkout clerks — whether we like it or not. The trend is clear. It all started some years ago with self-service gas pumps. (Should you wish to remember what those “good old days” were like you can still have this quaint experience in the state of Oregon where the law does not allow you to pump your own gas.) I’m not sure exactly what came next, but it may have been ATM machines followed more recently by airlines with computer-generated boarding passes. Each new application has been more complicated than the last. There are not many interesting choices when buying gas. ATM machines can take care of some banking needs, but not all. The airlines needed the help of the Internet to make the ticket buying and subsequent check-in processes work with reasonable convenience. In earlier and less sophisticated times, when real people were still available to help us with our transactions, the benefit at the gas pump was that the person filling the tank also cleaned your windows as a complimentary service. And in times even longer ago, they also checked the oil. Can you remember back that far? However, after those services disappeared there was not much benefit to having someone else fill the gas tank for us. We can do that just about as well and probably faster since we don’t have to wait for someone else to complete their transaction. With ATM machines there is the advantage that these machines are accessible 24 hours each and every day. They are there and waiting to fulfill our every wish. Just punch the right buttons and money comes spilling out – as if it really did grow on trees. But what are the advantages of an automated checkout at a Home Depot or a grocery store? Clearly the store should be gaining something by reducing labor costs. But what’s in it for us? Will this necessarily lead to lower prices? Of course if the store already does not have enough clerks and the lines are too long, then it’s an obvious benefit to try to complete the transaction as quickly as possible. But should that be the accepted way to run a business? Reducing service to an unacceptably low level and then introducing an automated system to bring it back up to where it used to be does not seem like a genuine improvement — especially since in this process the labor costs have been shifted onto the customer.
Perhaps this is right in line with stores putting stickers on every fruit and vegetable because some clerks can no longer distinguish between apples, oranges, and apricots. Given this situation with an unclear benefit, I have decided to adopt a strategy of resisting these changes for as long as possible. I will continue to try to use ticket agents at airline counters and checkout personnel at stores until I have no other option. But, why am I being so obstinate and unwilling to join the latest technology that the 21st century has to offer? I’m glad you asked. It is because I see this as a very clumsy and transitory approach to something that is going to be much better and that is literally just around the corner. I believe that the automated checkout counters based on the current scanner technology will in a few short years be replaced by a much more convenient method based on RFID tag technology. We are almost there. The security tags used in many products can easily be changed to incorporate additional information. The price stickers placed on other products can just as easily be made “active”. The only minor problem will be with items such as fruits and vegetables. The days of picking and choosing the nicest looking ones may be coming to an end. To make the RFID tag approach work effectively, we will need to change the way some of these products are packaged. The large discount stores such as Costco have already made this conversion by doing bulk packaging and eliminating the pick-your-own approach. So, are you ready for the shopping experience of a few years from now? It will be so elegantly simple. Just select the items you want. Take them to the checkout area. Insert your credit card and push “total”. That is all there will be to it. Every item in your shopping cart will have been recognized and instantly tabulated. The list with a detailed description of every item will be on the flat panel screen in front of you and printed out for your records. The total time for the transaction, regardless of the number of items purchased, will have taken no more than one minute. You will even be able to put the items into “paper or plastic” as you shop. It is also likely that
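As a side note, the one-pass RFID tally imagined above is simple enough to sketch in a few lines of code. This is only an illustration of the idea, not a description of any real checkout system; the tag IDs, product names, and prices below are invented, and it assumes a reader that reports every tag it sees in the cart in a single pass.

```python
# A toy sketch of a one-pass RFID checkout: every tagged item in the cart
# is read at once and the bill is tabulated instantly. The tag IDs, product
# names, and prices are invented for illustration only.

PRODUCT_CATALOG = {
    "tag-0001": ("PVC elbow, 1/2 in.", 0.89),
    "tag-0002": ("Pipe wrench, 10 in.", 12.49),
    "tag-0003": ("Thread-seal tape", 1.19),
}

def read_cart(tag_ids):
    """Simulate a single reader pass that returns every recognized item."""
    return [PRODUCT_CATALOG[tag] for tag in tag_ids if tag in PRODUCT_CATALOG]

def tabulate(items):
    """Print an itemized receipt and return the total."""
    total = 0.0
    for name, price in items:
        print(f"{name:<22} ${price:>6.2f}")
        total += price
    print(f"{'TOTAL':<22} ${total:>6.2f}")
    return total

if __name__ == "__main__":
    detected_tags = ["tag-0002", "tag-0001", "tag-0003"]  # tags seen in the cart
    tabulate(read_cart(detected_tags))
```

The hard part, of course, is not the tabulation but making sure every item actually carries a readable tag, which is exactly why the packaging changes mentioned above would be needed.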


july04

Segueing into the Future… Sometimes technology progress seems to resemble a random walk — with many bumps into unseen walls — rather than anything that can be planned or predicted. That is not a comforting thought for those of us who would like to be able to identify and develop new business opportunities based on new technology innovations. What is especially fascinating, however, is that these initial product introductions often stimulate further attempts at solving the same problem. These competing efforts provide choices that, when combined with consumer feedback, eventually lead to products that are truly useful and beneficial. A recent, and highly publicized, example is the Segway personal transporter. You may remember the initial news stories alluding to a mysterious new concept so revolutionary that it would change the way we lead our lives and would have immediate and dramatic impact on all society. These pronouncements were shrouded in secrecy and supported by famous personages who were allowed preview glimpses of this revolutionary innovation. These carefully orchestrated news stories created an immense amount of curiosity about this revolutionary “thing” – whatever it might turn out to be. Some months later, we common people were finally let in on the secret. What we were shown was a gyroscopically controlled two-wheeled scooter that is self-stabilizing and is controlled by a shifting of body position. The assertions were made that this would result in a completely new way for people to get around. We would all need one and would use it as our major mode of transport. Legislation was introduced in all parts of the country to allow these transporters to be used on sidewalks and anywhere else that people currently get around by walking. The carefully constructed publicity campaign about this “completely new mode of transportation” prompted many states and cities to rush through new laws approving the use of these devices with few restrictions. Could this really become the people mover of the 21st century? The high-tech aspect of gyroscopic control and body-lean control has certain appeal. But stripped of its technology glamour, what is this thing — really? It seems that what we actually have is an electrically powered scooter with side-by-side wheels. How is that better than a conventional scooter with the wheels in-line? The self-balancing part is good, but the extra space needed for a side-by-side wheel configuration will limit how many of these can navigate on a crowded sidewalk. Suppose you gave one to everyone on a busy day in New York or Tokyo? I think we would end up with quite an impressive pile-up of high technology gear. Then something else happened that I think is more subtle than the “minor” practical problems just mentioned. Most who tried it were not comfortable with the way the movement of this transporter was controlled. Not too long ago, there was a promotion at our local park where people could try one out. The person demonstrating was, of course, a very proficient operator and the machine seemed to respond to his subtle touch in an almost magical way. However, when asked, most people seemed uncomfortable to participate and the few who did had difficulty making the transporter do their bidding. There was something about it that was not only not intuitive but also quite disconcerting. 
Could it be that we have learned from our earliest childhood days that you put “one foot in front of the other” when you learn to walk, while this machine demands that you keep your feet side-by-side while leaning like you are about to fall over? That is not a comfortable way for me to try to maintain my balance. Even when we ride a bicycle we tend to have our feet move in the direction of motion. Evidently, some of these perceptions and concerns got others to thinking about what the real essence of this “transporter” is and what other versions could be introduced that would be better and/or cheaper. So what do we see happening today? There is currently a rapidly developing market for all kinds of motorized scooters. Some are electric and some are gas-engine powered – and a few even have no motor at all. A recent model that is described in the June issue of Popular Science magazine has four wheels and is styled to look very much like the Segway. The two additional smaller wheels are in the rear for non-gyroscopic balance. The selling price is less than one-fourth of the price of the Segway. Even with the power off, this machine will not fall over. And the controls are about as easy and familiar as the ones on my lawnmower. That to me seems like an elegant and cost effective solution. So, what about the Segway? The recent news reports are that the company is a long way from meeting its sales targets. Will it succeed or eventually fade into oblivion? That may depend on how willing or unwilling the founders are to consider what is really important to consumers. The early technology hype was great for stimulating thinking and in awakening consumer desire for personal transporters. Once that demand was created, however, others realized that there were simpler solutions. Those may be the ones that will eventually prevail. Consumers are smarter than we technologists sometimes give them credit for. They have a way of waiting until the time is right before they invest their money in new technologies or unproven ideas. For us in the display industry, these same principles apply. Recently we have seen a high level of enthusiasm for LCD and Plasma technologies. But it seems to me that the value proposition of quality versus cost for the traditional CRT is still viable. The recent development of a shallower CRT by Philips may be just what is needed to keep this technology from disappearing. Since even the flat panels need some mechanical support and are not as light and thin as some of us would like to think, a CRT that is not


aug04

A Convergence Anomaly… While waiting for my connecting flight in the Denver airport a few weeks ago, I was observing the interaction between what I presumed was a grandmother and her approximately eight-year-old granddaughter. The grandmother was taking photos of the child with her cell phone camera and then showing them to the granddaughter – mostly I think to pass the time. They were both having great fun with this activity, and it seemed that the grandmother was especially enthralled with this feature on her new cell phone. From a logical engineering perspective, this made little sense to me. Why was it so much fun to click off images on a cell phone? Wouldn’t it have been easier and more enjoyable to use a small digital camera instead? But for reasons that I can’t completely explain, the cell phone did seem to create a more playful mood. In this case, I don’t think either the grandmother or the young girl was especially interested in sending these images to their friends or other family members. They were simply having fun capturing playful images of the young girl on the cell phone display. My first experience with seeing the combination of a cell phone with a digital camera was on a trip to Japan a few years ago. At that time, I thought this was a pretty silly idea and that the “fad” would soon pass. Well, so much for thinking like an engineer. This unlikely combination, it seems, is not only here to stay but is growing in popularity. In past columns, we have discussed some of the fundamental principles that can guide us in assessing whether certain technologies can be combined with others — or not. The most basic of these fundamental principles is that the technology convergence candidates must have compatible life cycles. For example, it does not make sense to make a PC an integral part of a home’s heating and lighting systems. Those systems typically have more than fifty-year life cycles whereas the PC will become obsolete in just a few years. The second fundamental is that the convergent technologies must have similar reliability expectations. I don’t want a computer controlling my car if it is going to “crash” every few days at unpredictable times. A third fundamental is the environmental compatibility of how we use the technologies being considered for convergence. For example, if we typically watch television from across the room and with other family members present, it’s not likely that we will also want to do our e-mails on this same screen – no matter what the major software developers do to try to convince us otherwise. Companies in search of new opportunities often try to promote new products that violate these fundamental principles. The typical market response is that there is an initial enthusiasm in the technical and popular press, and a few buyers with ample discretionary income are enticed to buy these products, but soon the interest fades and there is no mainstream penetration. So what about the combination of a cell phone with a digital camera? Somewhat surprisingly, this combination does not seem to violate any of the three fundamental principles I have put forth. The life cycles of digital cameras and cell phones are roughly the same, and so is their reliability. In fact, in this case the two are really independent functions and the failure of the camera would not necessarily make the cell phone useless. The usage environment is also compatible.
Therefore, even as a practical engineer, perhaps I should have been less critical of this combination when I first saw it being appreciated by a group of young Japanese schoolgirls. I have to admit that there really is something fascinating about being able to call someone and then send them a photo of where you are and what you are doing. Perhaps this fits the traditional saying of “a picture is worth a thousand words.” Having achieved success with this technology combination, it is a given that innovative product engineers are already exploring the next possible “feature” extensions. Currently, there seem to be two new opportunities. One is to take the still-picture and/or limited-motion video transmission capability to the next obvious step of full video. The other is to try to transmit some modified version of television programming over the cell phone network. In both cases, if this works it will be a revival of past efforts that did not pan out. About forty years ago, Bell telephone introduced the world to the picture phone. The “picture phone” was also prominently featured in the movie 2001- A Space Odyssey. However, even when the technology became reasonably affordable there was no great consumer demand for adding video to our telephone conversations. In fact, it was the opposite. People didn’t want others to see them when they were perhaps in their overly casual at-home dress. Will the difference now be that the cell phone is used more when we are in public and our appearance is more acceptable? Or has society become more accepting of casual dress in general? The answer to this question is just as puzzling as the logic behind the combination of a cell phone and a digital camera. Personally, I see no benefit to transmitting my picture to the recipient of my phone call. There may even be times when I don’t want that person to know that I am reading my e-mails while participating in a conversation that has a frustratingly low rate of information exchange. But is there enough value to others for sending video while they are travelling on business or on vacation? There may just be sufficient interest to make this technology work in such an outside-the-home environment, whereas it did not work for typical telephone usage in times past. For the travelling businessperson calling home to talk to the family, conversations with the spouse and younger children could have more meaning if there is video to accompany the voice transmission.
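The three convergence fundamentals discussed above (compatible life cycles, similar reliability expectations, and a compatible usage environment) amount to a short checklist, and the camera-phone case can be run through it almost mechanically. Here is a minimal sketch of that checklist in code; the attribute values assigned to the cell phone and the digital camera are rough guesses for illustration, and the factor-of-three life-cycle test is an arbitrary cutoff, not a rule from the column.

```python
# A back-of-the-envelope checklist for the three convergence fundamentals
# named in this column. The attribute values below are illustrative guesses,
# and the factor-of-three life-cycle threshold is an arbitrary assumption.

def convergence_check(a, b):
    """Return (verdict, checks) for combining technologies a and b."""
    life_ratio = max(a["life_years"], b["life_years"]) / min(a["life_years"], b["life_years"])
    checks = [
        ("compatible life cycles", life_ratio <= 3.0),
        ("similar reliability expectations", a["reliability"] == b["reliability"]),
        ("compatible usage environment", a["environment"] == b["environment"]),
    ]
    return all(ok for _, ok in checks), checks

cell_phone = {"life_years": 3, "reliability": "consumer", "environment": "mobile, personal"}
digital_camera = {"life_years": 4, "reliability": "consumer", "environment": "mobile, personal"}

verdict, checks = convergence_check(cell_phone, digital_camera)
for name, passed in checks:
    print(f"{name:<35} {'pass' if passed else 'fail'}")
print("plausible convergence:", verdict)
```

By contrast, plugging in a PC (obsolete in a few years) against a home heating system (a multi-decade life cycle) fails the first test immediately, which is the point the column makes in prose.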


sept04

Please Don’t Stomp on My Flower… Like new Crocuses poking their heads up through the cold ground of early spring, new ideas are very fragile. Just a touch of skepticism or an initially negative response from a person of authority can often be enough to stop further consideration. And even when the idea, and the person behind it, is strong enough to withstand these first assaults, subsequent critical reviews may succeed in squashing any further attempts to explore the potential opportunities that such a new idea may bring. The conclusion will be – “Well, we just couldn’t make a good business case, given all the risks and uncertainties.” Of course you couldn’t. That’s the fundamental challenge of new ideas — not all the questions can be answered until further work is done. Many business managers today insist that they need to have essentially all the answers before they will take the first step. How big is the market for the products that will come from this? Who will be the customers? How will the competition respond? Have all the technology risks been addressed? Will we be able to produce the products at a competitive price? What will the manufacturing investment have to be? And so on. Compare this to what you have sitting in front of you on your lab bench: A blob of glowing “stuff” that looks nothing like what anyone has seen before. All you would like to do is to try to take this interesting “glowing goo” to the next level of making a rudimentary low-resolution display. How do you make a business case when even you don’t know what could come from all this? But without some new funding, the work will soon have to be abandoned – even as a “backroom” project. I recently read an article in Smithsonian magazine (August 2004) about Chester Carlson and the beginnings of Xerography. He made the first image in 1938, but the first successful product was not introduced until 1959. It took that many years of persistent and perhaps some would say irrational effort to get from the first demonstration of the technical concept to the first successful product. Along the way Chester Carlson spent six years unsuccessfully trying to interest companies in developing and manufacturing the copier he had envisioned. None of the executives and business decision-makers in these major corporations could see the potential value and business opportunity in this new way of creating images on paper. Now, I am not suggesting that every new idea is a great one. However, many that sometimes seem a bit crazy in the beginning turn out to be not so crazy after all. For example, suppose that I told you that I wished to introduce a new display product for consumer use that requires a glass bottle that will be evacuated and can easily be broken sending glass shards in all directions with explosive force. Not only that it will require voltages above 35,000 volts and contain toxic materials such as barium and strontium. And, by the way, since the high voltage electron streams are aimed right at the viewer, I need to put lots of lead into the glass to provide for adequate shielding from x-rays. Then there are a few other “minor details” such as how to get the electrons to only hit those areas of the display screen that I have selected. Well, I think you get the idea. The venerable CRT would most likely not make it “out of committee” if the idea were to be introduced today and if a business case had to be made for it. 
This same conclusion could be made for other display technologies as well – including the new highly successful flat panels such as LCDs and PDPs. In the beginning, LCDs had poor contrast, were way too slow for video, the temperature range was inadequate for many applications, and the viewing angle was limited. Larger screens and resolutions beyond about 500 lines were practical impossibilities because of the limitations of the available passive addressing schemes. Active matrix addressing was considered too expensive (except for specialized commercial and military applications) and could only be made in small screen sizes. The first Plasma panels were a really nice color – if you like orange. Several decades passed before a color display could be produced. And even then, the efficiency, brightness, and cost of manufacturing scale-up were considered barriers to this technology becoming accepted for large-volume consumer products. None of us, no matter how knowledgeable, could predict that every one of these problems would yield to a commercially viable solution. The ability to scale TFTs to the sizes we have today, and with reasonable manufacturing yields, was simply not envisioned by anyone. I suppose this is the Moore’s Law equivalent for display technology. The developments that allowed for wider viewing angles and that led to LC materials that can operate at full video speeds also could not be extrapolated from what was known in the earlier days of LCD development. For plasma panels, a look at history leads to a similar conclusion. After many years of trying, the first truly successful color plasma display was introduced around 1995. It depended on a complex multi-layer screen-printed substrate. This challenging back-plane manufacturing process seemed to indicate that larger sizes and higher resolutions would be difficult to achieve and that costs would always be too high for a consumer product. However, with additional work, new ways of making the back plane structure were developed which were easier to produce in large volumes. Once development of color plasma panel products began in earnest, the other challenges of low brightness and poor efficiency also yielded to creative solutions. I suppose the perversity of all this is that most of the time we simply cannot predict which obstacles will yield to a solution and which will become the “show stoppers” and/or “fatal flaws” that lead to eventual failure. It can be terribly frustrating to the scientists


oct04

Do We Need a Number?… The digital camera makers have it easy! All they have to do is specify one number — the number of pixels — and everyone instantly “knows” how good the camera is going to be at taking pictures. One megapixel used to be considered a good number — but no more. Now the number has to be closer to ten megapixels before anyone considers it to be a serious product. Even what we call point-and-shoot cameras are now typically in the three megapixel range. But what about all the other important features, like the lens, the storage capability, the waiting time between shots, the time between when the button is pressed and when the picture is actually taken, battery life, and yet many other important parameters? If you really want to know, you can most likely find them somewhere in the product brochures, but how many buyers do this careful analysis? Surely a five-megapixel camera must be better than one rated at only four! So, why bother with all those other details, especially since there are likely to be more “features” than the typical user will ever comprehend, let alone put to good use. So isn’t it nice to have just one number that seems to say it all? Should we, therefore, have a number that could similarly be conveniently applied to express the “goodness” of displays? Could it be resolution, contrast ratio, brightness, or size? Well, we do seem to arrange televisions by size and we do promote computer monitors and laptop computers based on the size and resolution of their displays. But how much does that say about the quality of the images that we will see? Resolution does give us a reasonable starting point for the selection of a computer monitor, but for television, the various standards ranging from NTSC up to 1080-line interlaced HDTV seem to create more confusion than help. What good is a display that is capable of 720 lines of progressive scan when used with an NTSC signal, or even with a 1080-line interlaced signal? One would expect it to be better than a 480-line VGA display, but by how much? Over the last couple of years, with the introduction of the new plasma panels and larger LCD TVs, there was a surge of promotional information touting contrast ratio as a highly significant specification. The numbers started out at reasonable levels but then over time escalated into the thousands. It is not unusual to see advertisements that claim 3,000:1 contrast. Given that a typical printed page is only about 10:1, and it looks quite good to most of us, one has to wonder how these incredibly high numbers are achieved. Was the black level measured in a dark room with the power switch turned off? I hope not. But lately the contrast ratio race seems to have run its course. I don’t see nearly as many advertisements touting such numbers. But what do we have instead? In a recent visit to a Circuit City store, I tried to see if there was some way to evaluate displays, especially for the various types of TVs, other than by just looking at them. I began my survey by studying the specification tags placed by the products on the showroom floor. Here are a few of the interesting features that I found listed: Three-line comb filter, velocity scan modulation, Invar shadow mask. I think those of us who have worked with these products might know what these are, but how about the average shopper? And these are not examples that I specifically selected for their uniqueness. The other items listed didn’t provide information that I would consider any more helpful. 
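The suspicion about those advertised contrast numbers is easy to make concrete with a little arithmetic. In the sketch below, the luminance values (in nits) are purely illustrative and are not measurements of any particular product; the point is only that the headline ratio is dominated by how the black level is measured.

```python
# Contrast ratio is simply white luminance divided by black luminance.
# The values below (in nits) are illustrative, not measured data; note how
# sensitive the ratio is to the assumed black level.

def contrast_ratio(white_nits, black_nits):
    return white_nits / black_nits

print(round(contrast_ratio(80.0, 8.0)))    # a printed page: roughly 10:1
print(round(contrast_ratio(450.0, 0.5)))   # a display with a 0.5-nit black level: 900:1
print(round(contrast_ratio(450.0, 0.15)))  # same panel, near-dark-room black level: 3000:1
```

In other words, the same panel can honestly be advertised at very different ratios depending on the measurement conditions, which is why the number by itself says so little about what the viewer will actually see.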
How does it help to know that a Sony TV has velocity scan modulation while a Panasonic set has an Invar shadow mask? In the rear-projection TV area, the descriptions did not help any more than the examples just listed. The most prominent promotion was for rear-projection sets that were based on LC technology – they didn’t say which one. However, in observing these products, I was surprised and a bit puzzled to note that they were noticeably dimmer than either the CRT-based ones or those with a DLP logo on them. Even as a reasonably knowledgeable buyer, I quickly became frustrated with this confusing and disparate set of claimed features that have at best an indirect connection to the observed image quality of the display. On the other hand, perhaps having no number at all is better than having a number that does not tell the full story. We have already noted the limitation of megapixel counts in digital cameras. Similarly, in the audio section of the store, it seems that in today’s world the only number that is used for comparison is the watts of output power. Is it as simple as that? Some years ago we worried about specifications such as harmonic and intermodulation distortion, and quiescent noise. Then we created incredibly inefficient speakers to get more bass notes out of small volumes and along the way seemed to have forgotten such “minor” details as the quality of the sound. I guess when you are driving down the street making loud “boom-boom” sounds, low distortion is the furthest thought from your pounding head. In the computer monitor and front projector part of the store, the product descriptions became more logical and more useful. Most of the computer monitors were clearly specified with screen resolution and brightness in nits. That was a pleasant surprise. I could actually walk down the aisle and compare screen-to-screen performance. The visual appearance of the displays seemed to match what one would expect from the numbers listed. The front projectors were also easier to evaluate based on resolution, and in this case, the ANSI lumens for brightness. Since the projectors were not operating, I could not see if the lumen numbers would actually match what would be observed. I really don’t know what would happen


nov04

One Change Leads to Another?… At the recent Society for Information Display Executive Committee meeting, Andy Lakatos, the Editor of the SID Journal, reported that the Journal is in the final stages of a successful transition to an all electronic publication. The Journal will no longer be published quarterly as a printed document but will now become a monthly publication available in electronic format to members and subscribers through the SID web site. At the end of each year there will, however, still be a supplemental CD sent to all subscribers. Electronic publishing will provide a number of significant benefits with more frequent issues and a greatly reduced time from paper submission to publication. There will also be links to provide immediate reference search capability. The Society will benefit by eliminating the costs of printing and mailing. So, is this the perfect answer or could there be a few negatives? Personally, I will miss the visual reminder that a hard copy has provided me — sitting in the pile of other recent but yet unread publications on my desk. I will also miss the ease of taking the Journal with me on a business trip — to read along with all the other technical magazines that accumulated during those busy times in my office. I like the ease of reading whenever and wherever I happen to be. I also like the ability to retain only those articles that interest me. Lightening one’s briefcase by leaving behind a stack of just-read magazines at the end of a long airplane ride, for me, provides a tangible and satisfying measure of accomplishment. Without the reminder of a hard copy, will anyone remember to look on-line each month to review the latest articles? With all the other demands placed on our time, when will we remember to do it? Will the published articles simply disappear into a black hole in information space? To insure that such an unfortunate possibility does not come to pass, the Society decided to publish a short summary of each month’s papers in Information Display magazine, the Society’s other major publication. This synopsis of the Journal will, in the future, provide the visual reminder that the full papers can be accessed online. This seems like a very nice balance between the benefits provided by electronic publishing while still retaining an important convenience of the print-on-paper medium. But, as often happens, the end of this story is not really the end of the story. The very next item for the committee’s consideration was the “opportunity” to also put Information Display magazine on line. This discussion did not get very far at this particular meeting because there was immediate and serious concern expressed about how our advertisers would respond to such a major change. Nevertheless, this left me wondering if we are simply postponing the inevitable. Personally, I like reading magazines. Sometimes I read them at perhaps peculiar times and in all kinds of locations. I take them with me when I travel. I read them while waiting for a meeting to start. I read them at night when I want to relax. I even take them along on vacations. They are my way to fill those other moments when not much else is happening that needs my attention. What would it take for me to be able to do this electronically? Let’s consider a few items that I think will have to become available before the conversion to all electronic publishing will feel as convenient for me as the “old fashioned” paper publications. 
I think the item of highest priority on my list would have to be wireless connectivity — wherever I go and without any extra costs or difficult procedures for access. Then I suppose along with that would come the need to have a display that is large enough, has enough resolution, and is readable in all kinds of environments. And yes, battery life should be adequate to handle at least eight hours of use without having to worry about where to find a power plug, as I often find myself doing while waiting for a delayed flight in an airport. If these basic requirements could be satisfied, then there could be some new and interesting opportunities for all-electronic publications. It should be possible to highlight and save (or send to a printer) only those articles, and/or paragraphs from articles, of special interest. It should be possible to instantly access references or advertisers’ sites for more specific information. It should likewise be possible to excerpt and forward specific bits of information to others. And readability should be convenient if the articles are organized so that browsing from page to page is trivially easy. With these capabilities in place, I think I could begin to like the idea of magazines in electronic format. In fact, I may even begin to embrace the concept. But what about my current need for visual reminders? Which magazine should I access and how will I remember to do it? I suppose a monthly e-mail summarizing the contents could be one way to accomplish this. However, that doesn’t feel like quite enough of a reminder to me. Maybe in this case I’m the problem? Do I need to sign up for behavior modification therapy? Or are we going to end up missing an opportunity by eliminating the “push” benefits of the printed page? Perhaps we won’t know until we have the capabilities in place that I have described above. I think the transition from print to all-electronic publications will begin in earnest when wireless connectivity becomes available at many more locations and no longer requires special subscriptions and difficult procedures for access. It seems to me that that should occur in the next five to seven years. After that, the transition to electronic magazine publishing will likely occur quite rapidly. Once this transition begins in earnest, there could be other interesting consequences. One side effect could be on the future development of electronic


dec04

A Few More Things to Understand… Outside my office window, hanging from the branch of a small tree, I keep a hummingbird feeder — well supplied with sugar water. Even though hummingbirds are supposed to be migratory, I seem to have acquired several that have decided to stay through most of the winter. I don’t think my feeder has anything to do with it, but just in case, I make sure that there is never a time when the food runs low. The more that I have watched these little creatures the more fascinated I have become with them. One thing that I have observed is that just like us humans, they don’t like to share very well. Of the three that are currently regular visitors, one is clearly the dominant “bully.” He will sometimes tolerate one of the other two to drink at the same time, but not the third. The shy one has learned to make quick visits when the other two are not around. However, with one trying to dominate the feeder, inevitable squabbles break out. These are temporarily resolved by displays of flying skill that would make any fighter pilot envious. The speed, agility, and maneuverability of these little flying machines is something that can only be appreciated by observation. Otherwise, it would be beyond description and most of us would not believe that such capabilities can exist. Thinking about this as a scientist and engineer, I began to try to imagine what it would take to “design” and “build” something like a hummingbird. Surely, given the volume, the information processing capabilities of a hummingbird’s brain cannot be all that great. But clearly the real-time image processing capability is in some ways better than what we humans have and the I/O capability is something that is still a long ways beyond what we know how to do. Basically, they never run into anything even while flying at full speed (I think around 30-40 miles per hour) and within inches of tree branches and other objects. They have the ability to catch insects on the fly. And of course they can hover and/or fly backwards on a whim. They are so sure of their flying skills and quick reaction capability that I have had them land and drink from the feeder while I was holding it in my hand. And the final difficult-to-imagine feat is that they are known to migrate over distances of over 1000 miles. I have to admit that I cannot even begin to grasp all the technology pieces that would have to come together to design even a poorly performing hummingbird. We may have the raw compute power available but we don’t have the image-capture and processing capability and we certainly don’t have the techniques to put this capability into a package the size of a hummingbird. And perhaps most important, we don’t have the energy sources developed to power such a “mini-bot.” Have I picked an unfair example? Well, then how about a common housefly? It’s taken us until recently just to figure out how they can land upside down on a ceiling! So would you like to try to design one? In this case, the compute power must really be modest. But how is it that a fly can evade a clearly superior human for hours on end? The real-time image processing capabilities of a housefly are obviously also quite impressive. And for nature, it only takes a miniscule amount of waste products to make one. Lately we have read and heard great predictions regarding MEMS technology and how we will have “nano-robots” flying everywhere doing surveillance activities and perhaps repair work within our bodies. 
We may have some of the rudimentary pieces of this technology in hand or under development, but how about power sources that are sufficiently compact and long lasting? After years of development, we have only solved the power source problem for a few portable items like watches and calculators. But for almost everything else, we are way behind in providing the “battery” life that we would like to have. Wouldn’t it be great if your laptop computer ran on sugar water like a hummingbird? What a plot line for a science fiction comedy — your laptop runs out of power and you pour the rest of your latte into it and you’re back in operation. Feeling properly humbled by my inability to come up with even a rudimentary design for a hummingbird (or a fly), I decided to move on and contemplate even deeper subjects. Perhaps the approaching Christmas Season has something to do with this. What other important things might there be that we don’t know? What could there be that we don’t even know that we don’t know? For example, if you did not have a television receiver or a radio, how would you prove to someone that the space around you is filled with useful information (although sometimes the useful aspect may come into question)? Without a detector “box” of some kind, we have no way of knowing that weak RF signals are present all around us. We can not see them, feel them, or hear them. For all practical purposes, they do not exist. And what do you think someone would think of you if you tried to convince them that indeed they do exist, and that some of these signals even originated thousands of miles away? So the big important question is – is there something else out there for which our known detector “boxes” don’t produce a result? Hmmm… That could be a very interesting question to ponder as we approach the Christmas Holidays. In our day-to-day activities, it is easy to become immersed in the details of what we are doing. And because our technology progress has made life quite comfortable for many of us, it is easy to become enamoured with the material benefits that technology has provided us. But perhaps we still have a long road

