
Aris Silzars

jan10

What’s Next?… Sometimes the future is easier to predict than at other times. A decade ago, while I was serving a two-year term as President of the Society for Information Display, I gave a number of display industry overview talks at SID conferences. It was with considerable confidence that I predicted the growth of flat panel displays based on LC and Plasma technologies and the gradual demise of CRTs. My predictions were not just educated guesses. They were based on an analysis of the solid progress being made in the materials used for these displays and on the plans that the large Pacific Rim corporations were beginning to implement for the manufacturing scale-up of larger substrates. In addition, rapid progress was being made in refining and improving the image quality of the flat panels that could be produced with both LC and Plasma technologies. Thus, it was not all that difficult to predict where the next decade would take us.

That future is now here! LC displays are everywhere – in sizes small and large. Plasma displays are not quite as ubiquitous and thus do not have as extensive an infrastructure as LC displays. Nevertheless, at least in my opinion, they produce large-screen video images that are slightly better than LCDs. But both technologies produce HDTV images that are deemed by consumers to be “good enough”. The screen size, “thinness”, styling, and price of these displays seem to be more important purchasing criteria than the quality of the displayed images. Perhaps how bright they appear in the store and how “vivid” the colors are influences some consumers, but otherwise size and price appear to be the major driving factors.

So it should not surprise us that all of these displays are now available at purchase prices that would have been difficult to imagine a decade ago. In the year 2000, a hoped-for goal for the selling price of a 40” display was $4,000 – and there was no assurance that manufacturers would be able to achieve even that target price. Did any of us expect that we would be able to beat that goal by more than a factor of five in ten years?

Both LC and Plasma technologies are now reaching maturity, the images they produce are excellent, and they are meeting the needs of the marketplace. These displays are everywhere – on cell phones, cameras, computers, clocks, in appliances, cars, video players and televisions, and even on large digital signs in stores and airports.

Where can the display industry possibly go next? What is the next great opportunity that can compare to flat panel televisions or laptop and desktop computers? Trying to answer these questions will make the next decade more intriguing and perhaps less certain than the last. Where do we go next? Further improvements in flat panels? Flexible displays? Stereoscopic “3-D” displays? Brighter sunlight-readable displays? Hmmm… The future doesn’t look nearly as predictable as it did a decade ago.

We can be quite certain that further improvements will be made in the existing flat panel technologies. A major change that we can already begin to see in LC technology is the conversion of backlighting from CCFLs to LEDs. That will result in more accurate color reproduction and eventually in improved overall efficiency. As OLED technology continues to improve, we may see more products introduced that will have CRT-like image quality and be even thinner and lighter than either LCD or Plasma panels.
Flexible displays should continue to progress and begin to meet certain specialty needs. But there is no compelling reason to have flexible televisions or computer displays. Perhaps the first opportunities will be in “wearable” displays that enhance cell phones or other such portable products.

But let’s not forget 3-D displays. According to some prognosticators this is the great new opportunity and will become the way we watch television at home. Therefore, shouldn’t this be the major new thrust in the next decade? Unlike many, I’m skeptical. Right now we are seeing a high level of enthusiasm for some of the movies that have recently been released in 3-D. But those are all in the science fiction or computer animation categories. The artificiality of two-image stereo is quite compatible with movies that rely on computer-generated images. For that same reason, computer games are also a great application for stereo displays. But for general viewing? I think we will find that the average person does not want to be tied to polarizing glasses and that scenes with real people in real settings will not benefit from two-image stereo. It will be interesting to see, but I can’t visualize us all watching television in the year 2020 in 3-D while wearing polarizing glasses. (I suppose I should note that we know of no other way to do this reasonably well than with some kind of image selection mechanism attached to our eyes.)

Can we make displays brighter and more readable in outdoor environments? Or make them more like paper? Yes, we can. Those will be among the predictable developments over the next ten years. We can improve on both emissive and non-emissive displays. That can and will happen. These improvements will not necessarily open up any large new markets, but many of the existing products such as cell phones and electronic books will benefit. I think it is safe to predict that portable electronic devices of all kinds – with displays – will proliferate in the next decade.

In looking ahead, if we are to be treated to a few serious surprises, they will have to happen first at the materials level. As we enter the 2010s, I’m not seeing anything that would lead us to experience revolutionary changes in the display industry in the coming decade. Remember, it took 40 years for both LCDs and Plasma Displays to get really good. We’ve now been working on OLED technology for over 20 years and it’s still not a major factor in the marketplace.


feb10

Lost in “Feature-land”… About a month ago, I brought home a new advanced-level digital camera. But, before proceeding, I should explain that photography has been in my blood since I was in grade school. My first camera, when I was about 9 years old, was a folding bellows type that had a simple viewfinder, one shutter speed, and a two-position aperture. In later years this modest beginning blossomed into an arsenal of 35 mm and 2¼ format cameras along with various wide-angle and telephoto lenses. A darkroom with color developing and printing capability followed. But acquiring equipment was never my primary goal. The capabilities that were added were always in the quest for better image quality and the ability to capture photographs that could compare to those done by photographers even more dedicated to this art.

The technical side of photography never seemed that complicated to me. Basically, there are only three parameters that one can vary when taking a photograph – the lens aperture, shutter speed, and focus. For at least the last thirty or forty years, upper-end film cameras have included through-the-lens light meters that were in some cases as basic as an analog meter with a pointer, or more recently, a bit more sophisticated with a row of tiny LEDs. The film exposure was selected by centering the needle, or the glowing lights, using a combination of shutter speed and aperture. The choices were not that difficult to make. If there were moving objects of interest, then fast shutter speeds were good. If there was minimal light, then larger apertures and slower shutter speeds were the suitable choice. Setting the focus was equally straightforward – decide on the subject of interest and then either look through the viewfinder and turn the focus ring on the lens, or simply estimate the distance and set the focus ring to that value, in feet or meters.

Now, three steps is something that I can remember – choose the shutter speed, choose the aperture, and don’t forget to focus. I got it. And a month or a year later – I still got it. And no matter which camera I pick up, these steps are equally obvious and easy to locate. The aperture and focus rings are always on the lens barrel and the shutter speed dial is on the body. These simple steps very quickly become automatic and all the real attention can then be devoted to taking the photograph.
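As an aside, the exposure trade-off described above is simple enough to put into a few lines of code. The sketch below is my own illustration, not something from the column: the formula is the standard exposure-value relationship, and the particular settings are illustrative, showing how a fast shutter with a wide aperture and a slow shutter with a small aperture admit the same amount of light:

```python
import math

def exposure_value(aperture_f: float, shutter_s: float) -> float:
    """Exposure value: EV = log2(N^2 / t), N = f-number, t = shutter time in seconds."""
    return math.log2(aperture_f ** 2 / shutter_s)

# A fast shutter to freeze motion vs. a small aperture for depth of field:
print(f"f/2.8 @ 1/500 s -> EV {exposure_value(2.8, 1/500):.1f}")   # ~11.9
print(f"f/8   @ 1/60  s -> EV {exposure_value(8.0, 1/60):.1f}")    # ~11.9, same light
```

Any aperture/shutter pair that lands on the same EV gives the same exposure; the photographer’s real decision is motion blur versus depth of field, which is exactly the three-parameter reasoning described above.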
But now we have entered the digital age – the age where computer intelligence is supposed to replace our own limited capabilities. My new digital camera has automated everything. It has auto-focus, automatic light metering, choices for light balancing, and yet other “features” that would require a much longer list to enumerate. The User’s Manual for this camera consists of 278 pages of detailed instructions. There are at least 22 buttons on the camera body in addition to menu selection and command dials. The icons in some cases are understandable, but in other cases are quite mysterious as to what they may signify. Many of the knobs and dials have multiple functions that can be changed depending on the selection from multi-level menus. Just the auto-focus function by itself has 7 major choices and 23 sub-choices to cover apparently every conceivable situation.

However, here’s where the fun begins. For every one of these automated choices there is an explanation in the User’s Manual for when it may not produce usable results. Is there too little contrast in the subject? The auto-focus may not work. Is the light too dim? Is there a strong pattern in the background, such as windows on a building? Does the subject of interest move to the side? Caution, it may not work. So in order to take a photograph using all this computer intelligence, first I have to decide which of these 23 choices best matches my current shooting situation, and then I have to set the “automated” focus feature, first using the major choices and then the sub-menu choices, before I can take the picture. There are at least 10 pages in the User’s Manual devoted to how to manage all these choices and under what circumstances each one may – or may not – produce acceptable results.

On the last page of this long chapter explaining focusing features there is a short two-sentence paragraph devoted to “manual focus”. It says that for manual focusing one should turn the focusing ring on the lens while looking through the viewfinder. Ah ha… Finally, something that I can understand and remember. I’m sure you can guess where my camera is now set and where it is going to remain set for as long as I own it.

The aperture and shutter speed settings are equally rich in “features”. And once again, there are various options for compensating for special situations. There is also a long list of automated choices for picture-taking situations that one is supposedly likely to encounter. However, as with automated focusing, every one of these “normal” situations has an accompanying description of when the camera may not produce good results and how to “compensate” when the desired outcome is not being achieved. And once again, near the end of a long chapter there is a short paragraph that says that the camera can be operated in a “manual exposure” mode using the built-in light meter (just like the ones in film cameras) to set the aperture and shutter speed. What a relief! I can use this camera after all – using my own brain and my own understanding of how I want the image to turn out. And several months from now I will still remember where the two dials are that control these functions – shutter speed and aperture.

Is it realistic to expect that the typical, or even technically savvy, person will be able to remember the multiple and multi-level functions of over 20 buttons and control dials?


march10

Remember Virtual Reality?… In October of 1993, I wrote a column titled “Is Virtual Reality About to Become Real Reality?”. The column was written to assess the state of this developing technology that was predicted to become the new way we would interact with our computers – especially for playing video games. The concept was that we would all be wearing video headsets with motion sensors that would immerse us in the video game or any other viewing experience. The headsets would give us full stereo (3D) vision, and the head motion sensors would allow for the presentation of images corresponding to where we were looking, so that we could fully immerse ourselves in the viewing experience. As my column then observed, there was even a recently introduced magazine exclusively dedicated to this new medium.

So, what happened? Here we are, seventeen years later, and there is nothing resembling the products being brought to market in 1993 – or anything resembling evolved versions of these technologies. It seemed like such a good idea. Even some of the large consumer electronics companies were beginning to jump onto the bandwagon. Unfortunately, for all this enthusiasm, a few “minor” problems began to emerge. One of these “minor details” was that the viewing experience was giving some users motion sickness. Since the artificial visual immersion environment was disconnected from the actual physical environment of the user, a sensory conflict was created, and for many users this led to discomfort similar to that of a bad carnival ride. Not all that much was written about these problems, but over the next year or two “virtual reality” products faded into oblivion.

Today, in the midst of unbridled enthusiasm about 3D movies and 3D TV for the home, it feels like swimming against a strong current to make a few cautionary comments about what may happen as we continue to push this new technology into more general use. But let’s be brave and give it a try anyway.

First, it’s useful to recognize that all of the excitement over 3D has been created with movies that are computer-generated animations and/or in the realm of science fiction. For these movies, an artificial environment is perfectly acceptable and special effects are what we enjoy. Reality is suspended and the visual experience can be whatever we want it to be. And while so engrossed, in the movie theater setting, we don’t have an actual physical environment around us to observe for comparison.

But even in a movie-theater setting, how will we respond if we try to duplicate the 3D experience with a movie that is not an animation or computer generated? That may prove to be much more challenging. Real environments familiar to us will not look “quite right”. The 3D stereo effect that is incomplete to our visual system will become a distraction rather than an addition to the viewing experience. It will be interesting to see if a conventional movie, such as The Blind Side for example, will be made in the next few years – and be successful – in 3D. In the meantime, we can certainly continue to enjoy the fantasy experiences of computer-generated science-fiction movies in 3D.

Now comes the hard part. Can we bring 3D TV into the home and be successful in doing it? Can it become a normal viewing experience? Or will we repeat the “virtual reality” experience all over again? There are several challenges that may be hard to overcome.
The first is the requirement to wear polarizing or active shutter glasses for any currently known 3D technology. The auto-stereoscopic methods are simply not usable yet, and no one knows how to make them good enough for general use. But the real obstacle may be the smaller screen and the ambient surroundings that create a visual system conflict. This will be much more pronounced for TV than during a theater viewing experience. TV viewing in 3D will appear to be like looking into a diorama or through a window. So rather than becoming a part of the scene, it will appear that we are in a disconnected location looking from the outside in. A “window into the world” is perhaps not so bad with stationary scenes, but when the camera is moving this may become quite disconcerting. As with all viewing experiences, some people will adapt better than others, but the visual conflicts introduced may cause many to simply quit using 3D technology.

Over the years, 3D stereoscopic photography has had its moments of enthusiasm. When I was about eleven years old, I remember collecting Viewmaster disks of various vacation spots. My favorite was one from Carlsbad Caverns. But those times are gone. Will they come back? Indeed they may – but I think only briefly. Once the current unbridled enthusiasm dies down, we may find that 3D TV has a number of significant challenges to overcome. The video game enthusiasts and the sports fans may not be enough to carry 3D TV through to a sustainable product implementation. Nevertheless, the attempts of companies to bring products to market, and their promotion of them, will be great fun to watch.

It is always difficult to take positions that go against popular wisdom. The tide of enthusiasm wants to sweep up everything in its path. Yet enthusiasm for a bad idea will not lead to a sustainable result. Can 3D TV in the home be a sustainable technology? As you can see, I have serious doubts. Are you ready to jump on the bandwagon and purchase the first 3D TV product that comes to market? Or are you willing to wait and see how all this plays out? Let me know your thoughts. You can contact me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.


april10

Do You Know What You’re Watching?… It’s been some time now since – willingly or not – we all went through the conversion to digital TV. This changeover coincided nicely with the rapid growth of flat-panel televisions and the correspondingly rapid demise of the CRT. So the end result should be that now, when we turn on our new large-screen flat-panel televisions, we should be presented with gorgeous digital high-definition images.

We should be! And we certainly wouldn’t be satisfied with anything less. Right? Oh, what optimists we engineers turn out to be! We see the world through our technologically capable eyes – not business reality and the reality of what consumers are willing to accept as “good enough”. Just because broadcast and cable signals are now “digital” and presumably high-definition – at least somewhere along their paths to our televisions – apparently doesn’t mean that these signals also have to end up that way for our viewing pleasure.

When it comes to watching television, I may not be the perfect example of a typical consumer, but bear with me anyway. In our house, we have two flat-panel TVs and one venerable CRT that still produces excellent video images with its built-in line-doubler. We have cable outlets in almost every room and Comcast is our designated provider. When the conversion was being made to digital transmission, an installer was sent to our home and he added a “converter box” to each of the three televisions. These converter boxes were apparently needed whether it was the old-style analog CRT TV or the newer flat-panel digital sets. Soon after, I noticed that the two flat-panel sets had less impressive (smaller) converter boxes and did not seem to be displaying HDTV images even when the programs were supposedly being sent out in HDTV. A call to Comcast confirmed that indeed this was the case. In order to have HDTV capability on all three televisions, I would have to pay an extra $9.95 per month for each converter box capable of HDTV conversion.

Really? How could it cost more to directly send a signal to a TV in its native format than to convert it to an older analog format? Did I ask Comcast for a technical explanation? I think you can guess the answer. Not wishing to pay $20.00 per month more for high-definition capability on televisions that get very limited use, I told Comcast what I thought of their policy and proceeded with “Plan B”. By adding an indoor “rabbit ears” antenna and a simple mechanical switch, these less-used televisions can now receive high-definition signals off the air as well as the lesser-quality signals from the Comcast cable converter boxes.

However, the limits imposed by cable providers are not the only ones that keep the typical consumer from getting high-definition signals for all their viewing experiences. Not all programs are broadcast or produced in high-definition to begin with. Or the cable providers simply decide which programs to send out at full bandwidth and which ones to compress down to lower quality. The end result is that we are experiencing a range of image qualities that someone else has decided are “good enough” for our viewing enjoyment.

How can this be? How can cable companies get away with providing an inferior-quality product? The simple answer is that for most of our viewing, we don’t care. Unless it’s a special sports event or some other program that has unusual significance to us, we simply don’t care if it’s high-definition or not. The old analog television signals were apparently already “good enough”. An image with roughly 400–500 lines of resolution and reasonably accurate color rendition is “just fine”. If the format is wrong, and the people all look like they have gained weight – no big deal; we’re too lazy, or don’t know how, to change the picture.
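To put rough numbers on the quality gap being shrugged off here, consider a quick pixel-count comparison. The figures below are round illustrative values of my own, not numbers from the column:

```python
# Rough pixel-count comparison of an analog-era picture with full HD.
# Round illustrative figures (my assumption), not numbers from the column.
sd_pixels = 640 * 480          # ~NTSC-class effective raster
hd_pixels = 1920 * 1080        # 1080-line HD panel

print(f"SD: {sd_pixels:,} pixels   HD: {hd_pixels:,} pixels")
print(f"HD carries roughly {hd_pixels / sd_pixels:.1f}x the detail")
```

By this crude count, an HD panel can carry nearly seven times the picture detail of the analog-era image that most viewers evidently found “just fine”.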
What we do seem to appreciate is the larger sizes of the new flat-panel TVs and the fact that they are thinner and not as difficult to move around as the old CRT sets. But all that extra engineering effort that has gone into higher resolution, faster LC response for reduced motion blur, and now LED backlighting for improved color rendition has, in my opinion, by and large been unappreciated and wasted. It may matter in the showroom, when the purchase decision is being made, but once installed in a typical home, all that wonderful image quality is frequently unavailable from the television signals that come off the air, from a cable connection, or even from the typical DVD player. For most of our television viewing, we as consumers are quite satisfied with images that are “good enough” – just “good enough” for us to become engrossed in an interesting story line, sporting event, or news program.

And now along comes 3D TV. It’s a great novelty, and for many consumers over the next year or two it will be the “must have” feature when making the next flat-panel purchase. But how long will it be before the polarizing or shutter glasses are relegated to the pile of other junk items on the family room coffee table – as one more example of a gadget-driven culture that does not appreciate the capabilities that already exist?

Have we become like spoiled children with too many toys? We must have the latest one, but we quickly tire of it and need to move on to yet another one. Flat panel displays with 3D capability may be the next big sellers – for at least a year or two – but it may be for reasons other than an enhanced viewing experience. Will those viewing glasses be found in that pile of other junk on the coffee table, or will they become a part of our daily television viewing experience? I think you can see that I have some serious doubts about the latter. I would be interested to hear about your experiences in the conversion from analog to digital television.


may10

Hidden Agendas… Wouldn’t it be great if we could foretell the future – especially if we’re working at a company trying to develop the next great display technology? Being able to know well ahead of time what products consumers will fall in love with and “simply won’t be able to do without” would most likely provide us with an insurmountable competitive advantage. But if we were indeed able to know what’s in our future, wouldn’t that mean that we can’t do anything to change it? That would leave us as helpless observers of predestined events. And that, to me, doesn’t sound like an interesting way to go through life.

Nevertheless, we are all curious about what lies ahead. And since for commercial enterprises that curiosity can involve financial rewards, there are a number of folks who have made a business out of trying to predict what new opportunities may lie beyond the visible horizon and what new technology developments may soon become “disruptive”.

I too, on occasion, have tried to look ahead and offer my opinions of where I thought the display industry would be ten or more years into the future. Having done this for more than the past 30 years, I can say that most of my prognostications were reasonably accurate. My explanation for my modest successes is that I have always tried to base my projections on the rate at which I thought display technology could evolve and the rate at which I thought worldwide manufacturing capacity could be implemented. Unfortunately, using this approach, I was never able to come up with a truly “disruptive” technology prediction. New technologies evolve gradually, and it typically takes twenty years to get from a research laboratory demonstration to real products. That allows everyone to become comfortable with the new developments as they are evolving. And when the new technology finally has a major business impact (such as flat panel displays on CRTs), the evolutionary process does not appear to have been at all surprising or chaotic.

However, in spite of my excellent prediction track record, I have not become a famous or well-publicized futurist or technology prognosticator. And why is that, you may ask? I believe the explanation is really quite simple. My predictions are unspectacular and to the popular press “boring”. And when the future develops pretty much as I predicted, the response is – well, of course it happened that way; it’s an obvious outcome. Once the future becomes the present, intelligent foresight is no longer appreciated.

But if accuracy and a good track record aren’t sufficient to achieve “fame and fortune”, then what does it take to become a well-publicized futurist whose opinions are regularly sought by the popular press? Well, it appears that the more audacious the predictions the better. As examples, consider the following. Immortality is just around the corner! We will all have nanobots running throughout our bodies fixing whatever ails us! Solar energy will replace all of our energy needs in less than twelve years! This list can go on and on, but I think you get the idea. Audacious predictions create publicity, the publicity makes the author known, and his books and reports become salable items. And once he is on this track, the next round of predictions has to be even more audacious to keep up the popular interest. A prediction that medicine will give us immortality in the next twenty years – that’s good to sell a book or two.
A prediction that we will be served by intelligent robots smarter than us and have lots of free time – that’s good for at least a few popular press articles and another book or two.

Yet another path to financial rewards for those predicting the success of a new technology can be the organization of consortia or new businesses around a sure-to-happen new “growth opportunity”. Such promotional activities can be perpetuated for a number of years – until reality finally sets in – because there are nearly endless explanations for why the new technology is on the verge of success but just needs a little more time and effort to get there. And just like other money-raising organizations that eventually run out of steam, once a cause is exhausted, a new one always seems to be waiting in the wings.

And that is why, when it comes to listening to or reading about futuristic predictions, one has to be on the lookout for those darn “hidden agendas” – the motivations that may underlie the predictions and cause a “time warp” or a “wisdom warp” of sorts. What will it take to attract the attention of the public, and thus the sources of money that pay for the futurist’s “wisdom”? A relatively mundane and conservative prediction will most likely not do the trick. And who in ten years will look back and remember what was said? The popular media seem to have a non-existent memory when it comes to reviewing these self-serving pronouncements.

Surely we don’t have any of these problems within our display industry? Do we? Well, we don’t have too many, but on occasion something pops up that does have that “hidden agenda” feel about it. The good thing about us engineers is that by nature we are a skeptical lot. That tends to keep us from getting too far afield from a sound technical path. Therefore, if we keep the technical issues firmly in mind, and think about those “minor” details that can easily turn into “fatal flaws”, we will be able to see past any of those over-enthusiastic and self-serving predictions.

I would be interested to hear your thoughts about how you think display technology will progress in the next decade. You can contact me from this site, directly by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.


june10

The Green Paradox… We live on a planet of finite size and predictably limited resources. If we run out, there is realistically nowhere else we can go to get “more stuff”. We have known this for many years. Thus, in order to accommodate our growing population, we have to be careful how we use our available resources, and we also have to be careful that we don’t destroy or poison the very space that we will need for our survival. Of course, not everyone agrees on exactly how we should go about taking care of our environment, and sometimes really bad things have to happen before we learn our lessons – major oil spills, thick layers of smog, ground water contamination, etc. In spite of, or perhaps because of, these bitter lessons, humanity is overall becoming more aware of the need to protect earth’s environment – the only home that we know. For the worldwide display and electronics industries this has led to a number of initiatives and government-mandated regulations to reduce power consumption and to eliminate materials that can potentially poison the environment. Controlled disposal and recycling or reusing the constituent materials has been another useful approach.

While all this sounds like progress, there are two countervailing forces that are perhaps even stronger than the push to protect the environment. The strongest of these is the desire of companies to grow and prosper by selling more and more products, and the complementary desire of consumers to have the very latest gadgets. Product life cycles are now measured in months. Where we used to have one computer for each household, we now have several for each family member. Another force that is working against making any progress in “being green” is the approach that manufacturers have taken in designing products that can only be repaired by making major module replacements, or products that are not repairable at all.

In the days of vacuum tube electronics, when the television or radio failed it was typical to replace a faulty tube (or sometimes a capacitor) and the product was back in service. The expectation was that a television set would be in use for at least ten years and most likely considerably longer. The same was true for radios. Today, the expectation is that most electronic products will have a life cycle of no more than one to three years. For example, most users consider their cell phone obsolete after just one year. A typical desktop or laptop computer has a life cycle of no more than three years – although this column is being written on one that is considerably older than that and the software is “Office ’97”.

Recently, I encountered a problem with a 30” high-end monitor that had lost its capability to synchronize the vertical signal. To me this seemed like a readily repairable problem – especially since this particular monitor sells for well over $500. The obvious approach was to trace the vertical sync signal to locate where it was failing. To facilitate the task, I found a service manual on-line, and sure enough the path for the sync signals was identified. It didn’t take more than a few minutes to verify that the video board was not the problem and that it was sending the proper sync signal to the display driver board. OK, problem solved! Replace the driver board and we’re back in business. At least, that’s what I naively assumed before I read a few more pages of the service manual.
As I soon learned, this board is permanently attached to the display panel itself, and there is no replacement circuit board available. What the service manual says is that if such a problem is encountered, then the indicated “repair” is replacement of the entire “display module”. How can they say this with a straight face? Other than the “display module” (with the driver board permanently attached to it), this monitor consists of a $10 power supply board, a $10 video input board, and a black plastic box. Thus, instead of being able to plug in a new $10 driver board, this $500+ monitor is now a piece of worthless junk. Perhaps that is good for the manufacturer, but it sure doesn’t do anything for protecting the environment or the pocketbook of the consumer.

This is, of course, not an unusual experience. It’s no different than having a numerical LED display lose one segment of a digit on a kitchen appliance such as a microwave oven and being told that the only available replacement part is the entire electronics module – typically priced at around $200. So instead of replacing a $0.99 LED, we are forced to junk the entire module and add more electronics waste to our landfills. However, even that wasteful repair can be done for only a limited time. After a few years, as new models replace older ones, manufacturers quit stocking the model-specific replacement modules. Repair thus becomes impossible. Repair at the component level is also no longer an option. The key components are typically custom IC chips that are surface-mounted with 50 or more pins that are too small to align and solder by hand, even if a replacement part could be found.

So we have traded repairability for “features” and obsolescence. Whatever the gadget, if it doesn’t work, we toss it in the junk pile and go buy a new one. Thus, in spite of all the wonderful words we hear about energy and resource conservation, the practical reality is that it’s mostly just that – wonderful words. A paradox! We may make some modest efforts to reduce power consumption or to eliminate the really dangerous materials, but mostly the desire of consumers to have new gadgets with the latest “features” and the desire of manufacturers to sell more products overcome any real effort to conserve our available resources. Perhaps meaningful conservation will happen only when we actually begin to run out of resources.


july10

The Picture Phone… The Picture Phone – it’s finally here! It took fifty years, but the future has at long last arrived. My goodness, what took us so long? Bell Telephone had the idea way back in 1956 and built the first laboratory test system that year. By 1964 a more complete experimental system was far enough along to do public demonstrations. The expectation was that soon the Picture Phone would become a widely used enhancement to regular voice calls. In the 1968 classic movie 2001: A Space Odyssey, the Picture Phone is prominently featured in a call home to earth from an orbiting space station. The expectation was that businesses would adopt this technology first and then it would spread to more general consumer use. But in spite of the monopolistic strength of the Bell System, the Picture Phone did not find even limited acceptance. Why were the predictions for this technology so wrong?

Cost was one issue. The bandwidth needed for uncompressed video was much greater than for voice and required two additional sets of wires for sending and receiving the video signals. However, the real obstacle to acceptance turned out to be much more fundamental than technology complexity. During the 1960’s and 1970’s, and even a decade or two after, the telephone was a fixed-location device. I had one on my desk and you had one on yours. And when I wanted to talk to you, I would call your telephone, and if you happened to be at your desk, we would get to talk. If you were somewhere else, leaving a message was maybe a viable option. Adding video to this arrangement does not do much to enhance the communications process. In most situations, I already know what you look like, and seeing you in your office or in a room in your home does not do much to facilitate our conversation. Furthermore, most telephone users did not want to have uncontrolled video access in case they were not properly dressed or otherwise unprepared for video viewing. Given this user resistance, and the lack of a clear benefit, the concept was doomed – even at no additional cost. The 1970’s passed, as did the 1980’s, and the Picture Phone became a forgotten product concept relegated to the technology junkyard of “great but impractical ideas”.
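A back-of-the-envelope calculation suggests the scale of that bandwidth problem. The figures below are my own illustrative assumptions (a small monochrome raster and a standard 64 kbit/s digital voice channel), not specifications from the column:

```python
# Back-of-the-envelope: uncompressed video vs. a digital voice channel.
# All figures are illustrative assumptions, not from the column.
voice_bps = 64_000                   # 64 kbit/s PCM voice channel

width, height = 250, 250             # small monochrome Picturephone-class raster
frames_per_s = 30
bits_per_pixel = 8
video_bps = width * height * frames_per_s * bits_per_pixel

print(f"video: {video_bps / 1e6:.0f} Mbit/s   voice: {voice_bps / 1e3:.0f} kbit/s")
print(f"uncompressed video needs ~{video_bps // voice_bps}x the bandwidth")
```

Even with a deliberately small picture, raw video demands a couple of hundred times the capacity of a voice circuit, which helps explain why the service needed extra wire pairs while an ordinary call did not.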
But then in the 1990’s something began to happen that no one had imagined when the Picture Phone was first conceived. A new technology, which started as the “car phone” in the 1980’s, began to transition into the truly portable “cell phone”. Soon everyone had to have one, and instead of calling a telephone at a fixed location, people began to call each other’s phones, which now accompanied them wherever they went. We quickly transitioned from a location-based communications system to one that was location-independent. We could talk to each other at any time, no matter where we were or what we happened to be doing.

Then an additional feature was added to a few cell phone models that initially made no sense whatsoever – at least not from a purely technical viewpoint. Someone decided to combine a cell phone with a digital camera. Now, why would anyone want to combine a cell phone with an inferior camera? Why not just bring along a small camera that produces good-quality photos instead? If you want to take a picture, use a camera – if you want to make a call, use your phone! How wrong that turned out to be.

Users quickly figured out that having a camera in their cell phones allowed them not only to capture images but to instantly send those images to let their friends see the new locations they were visiting or the special events they were experiencing. Now, instead of just seeing you sitting at your desk, as was envisioned by the Picture Phone developers, we are all able to participate in each other’s life experiences. What a dramatic difference – and all of this a result of the conventional telephone having become a location-independent device.

What would the pioneers who envisioned the original Picture Phone think of this new direction? Just one important change in how we make “phone calls” allowed a technology that was relegated to a junkyard to be reborn in a new and exciting incarnation. Well, it only took fifty years, but the telephone, image capture, and image transmission are now inextricably linked. Welcome to the future world of the Picture Phone.

Should you wish to offer your thoughts about the evolution of the Picture Phone technology, or any other display-related topic, you may contact me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.


august10

The Hammer and the Nail… I’m sure that you have heard the folk wisdom that “to a hammer everything looks like a nail”. That saying is typically applied to those whom we perceive to be of a narrow-minded nature or stuck in their traditional ways of doing things. But could such an unintended narrow viewpoint also affect the development of interesting new products?

I think we have a recent and very dramatic example to illustrate just such an occurrence. Consider the recent success of the iPad. Once the iPhone became an unqualified success, it was quite easy to imagine that a larger version with even more versatility could be a desirable device. The easy-to-use touch screen, facilitating our ability to search for information and to communicate with others from any location and under all circumstances, is something that all of us can readily appreciate and enjoy. But is the iPad really a new idea? What about the tablet computers that were touted a number of years ago but never became successful products? Certainly the concept of a portable flat display with touch interactivity has been known for many years.

Not so many years ago, Microsoft made a major effort to develop what they decided to call the “tablet computer”. As I remember the events, this was a favorite product that Bill Gates personally pushed hard to try to get into general use. I remember going to presentations by Microsoft managers where PowerPoint slide presentations were made using these handheld tablets. I never could quite understand why one would want to prance about the stage holding a tablet computer cradled in one arm while poking at it with the fingers of the other hand just to change slides on a screen. Furthermore, this uncomfortable arrangement made the use of a laser pointer almost impossibly difficult.

And here is where I think the wisdom of the “hammer and the nail” comes to the fore. When Microsoft envisioned the tablet computer, all they could see was a modified – and presumably advanced – version of a Windows-based laptop computer. That of course meant that these new tablet computers would likewise be used for word processing, spreadsheets, PowerPoint presentations, and maybe some e-mail activity. Using a touch screen or a stylus on a tablet display for such activities has marginal benefits at best. The big “hammer” in Redmond had the idea for a new “nail” but simply could not envision the tablet computer as anything other than a reconfigured device for doing the activities that were generating all those great software sales for the company. No matter how hard Microsoft (and its hardware partners) tried, consumers could not see the benefits of giving up the keyboards on their laptop computers for a tablet with a stylus and/or touch screen. A modified version of a laptop computer without a keyboard just wasn’t going to be even a marginally successful product.

Of course it wasn’t only Microsoft that couldn’t envision an iPad-like product. At the time, no one else – including Apple – saw it either. What apparently had to happen to open up our creative vision was to observe the evolution of cell phones as not only useful for voice communications but as location-independent information acquisition devices and as our ever-present communication companions. Once our cell phone usage broadened from voice communications to encompass acquiring all sorts of useful information, the concept of a display with a touch screen made eminently more sense.
Just point your fingers at what you want to know and there it is on the display screen – no need to boot up a computer or find a place to sit down to use one. Thus, the cell phone led the way to broadening our vision of what a portable communications device could really do. And going back even further, that evolution was most likely first stimulated by the early efforts to include low-resolution digital cameras in cell phones.

Being able to accurately predict the future would be of great value – both intellectually and commercially. But I’m afraid that such talents are simply beyond our capabilities. Over and over we see famous – and sometimes not-so-famous – personages claim to be able to predict where technology is going to take us. Usually the next five or ten years are not so hard to envision if the predictions are based on extrapolations of developing technology and the scale-up of worldwide production capabilities. That, for example, was the case for LCD growth, which could quite accurately be predicted during the first decade of this century. Unfortunately, we cannot say the same for products such as electronic book readers or 3D television. We really don’t know where those are going to end up – because technology growth is not the only determinant of their success.

And as for yet other new products that we should try to envision? Well, we’ll just have to try lots of new innovations and see which ones become accepted as mainstream. The natural selection process only allows a limited number of successes. That means lots of “hammering” on new kinds of “nails” before we find the right new combinations. The good news is that there are still lots of opportunities out there for new kinds of displays and new ways to use them.

Should you wish to comment on this topic or others, you can reach me directly from this site, by e-mail at silzars@attglobal.net, or by conventional voice telephone at 425-898-9117.


sept10

No Time to Think… A few days ago, I was out for a run on a scenic tree-lined trail that meanders along the shore of Lake Sammamish. It’s a wonderful place to enjoy nature’s solitude, with views of the lake, native vegetation, and Mt. Rainier off in the distance – and, yes, intermixed there are also houses and driveways. However, for a densely populated suburban area this is about as good as it can get. As one might expect, others also take advantage of this idyllic setting.

On this particular day, the first person I encountered was a young woman with a very small baby in a sling that was across her chest and around the back of her neck. The baby appeared to be quite comfortable in this tiny customized hammock. In addition to the baby, in one hand she was holding a leash attached to a dog – a friendly and energetic golden retriever. The baby and the dog seemed like about all one should try to handle while out for a walk. But no – in her other hand she was holding a cell phone to her ear while busily carrying on a conversation. To me this seemed like multi-tasking taken to the extreme – walking for exercise, while tending a baby, while also walking the dog, while talking on the phone. Could anything else have been added? So much for “stopping to smell the flowers”.

Let’s consider another setting. I’m sitting on an airplane at the end of a full day of business activity followed by the flight back to Seattle. As the airplane touches down and exits the runway, there is an instant reaction from many of my fellow passengers. Without even a moment’s hesitation they have already activated their communication devices and are frantically reviewing the latest text and voice messages. Clearly, the world will not survive for many more minutes if these messages don’t receive an immediate reply. And once the airplane is parked at the gate, it matters not that the objective should be to – as efficiently as possible – retrieve bags and go home. Apparently, this must be done while holding a cell phone to an ear with one shoulder, struggling to retrieve an oversized suitcase from the overhead bin, and continuing an evidently un-interruptible conversation.

In September 1997, I wrote a column about tiny projection displays that were being designed into eyewear. The expectation at that time was that perhaps we would all be wearing these eyeglass-like head-mounted displays as a way to interact with electronically accessed information. The title of my column was “Bump… Oh, Excuse Me, Thump… Whoops… Crash!…” I’m sure you can understand from this title the concept I was trying to convey. When walking down a busy street or crossing at an intersection, it may not be the wisest choice to become immersed in reading a text message or searching the web. Well, the eyeglass mini-projectors did not come about quite as expected, but the iPhone and other handheld communication devices did. Recently, I had to grab a colleague by his sleeve as he was about to wander into fast-moving traffic on a busy downtown street. He was, of course, trying to send a text message while we were walking back to his office from lunch.

Finally, I will mention a recent study done with high-school students that concluded that there is an addictive behavior created by the need for instant and continuous communications. There is an expectation among this generation that a delayed response of even a few hours to a text message is a sign of lack of interest and is even likely to be considered rude.
So the pressure is perpetuated for a never-ending stream of electronic messages. Real human interaction has been replaced with multiple electronic versions.

Have we unleashed behaviors over which we have lost control and that may not be all that good for us? What about having some time to just think and contemplate? Is it possible to be creative with all this electronic noise around us? How many people do you see out for a walk, or in any other setting, without some kind of electronic device connected to their ears or between their thumbs? Are we no longer able to let our minds simply contemplate our place in this world where we happen to find ourselves?

Next time you are heading out for a walk, a run, or a bike ride, try leaving the electronics at home and see what happens. At first, it may be a bit scary, but who knows what great new ideas you may come up with. At the very least you will have some time to appreciate nature’s blessings.

Should you wish to discuss the great new display technology you conceived on your recent electronics-free walk, you may reach me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.


october10

From Another Planet… It was a late fall day in Washington DC. I was taking a taxi back to my hotel. It had been a productive but tiring day. It was definitely time for some quiet time and peaceful contemplation. Dusk was beginning to settle in as we slowly bumped our way along in the rush-hour traffic. I began to observe the rows upon rows of windows through which I could now see the typical desks and cubicles where people spent their working days. Most were empty by now. The overhead fluorescent lights illuminating these abandoned workspaces, still piled high with various documents, created a stark and lonely scene.

I felt disconnected from this scene – like a visitor from another planet. What if I really were a visitor from another planet, trying to understand how these developing human beings live and why they have chosen certain ways to interact with each other? I think I would be puzzled for sure.

Why do these strange creatures spend so much time transporting themselves from where they sleep to where they do some kind of activity that consists mostly of sitting by themselves in a small cloth-padded area for most of the day? And then near the end of the day they reverse this process by transporting themselves back to their sleeping locations – where they also appear to have at least one of two meals each day. They typically sleep in a location with what appears to be a family unit based on biological offspring. But why don’t they just do their daytime activities in that same location? Their interaction with other humans is so infrequent that these activities could easily be carried out with only occasional personal contact. So why do they follow this peculiar routine? Don’t they realize that with the communication tools and computer knowledge already in their possession, location independence is theirs for the taking?

Could it be that on this planet earth, this strange place called Washington DC is an anomaly? There seem to be many other peculiar behaviors in this town – some that don’t fit any known concepts of logical reasoning. Perhaps such a peculiar situation is only found in this center of government activity. But alas, a visit to every other major city ends up confirming this same time-consuming behavior – whether Tokyo, London, or New York. Humans appear to rush around in the early part of the day to find their daytime roosting locations and then rush back to their nighttime roosts near the end of the day to eat and sleep. And then they do it over and over and over again – day after day after day. How terribly inefficient. Isn’t it obvious to all these people that much more of what is apparently considered useful activity could be accomplished without this change of location?

Indeed, if our alien visitor were to take a more careful look, she would find that there is already a growing trend toward this more productive behavior. While the larger and more tradition-bound organizations continue to do their activities as they have for many decades, a new breed of entrepreneurs is evolving and making use of recently developed communication and computer tools that eliminate the need to have separate locations for daytime and nighttime activities. For these more advanced beings, constant physical proximity to other humans is no longer a necessity. It is only for those activities that utilize specialized equipment or facilities (e.g. product manufacturing) or that cannot be accomplished without the actual presence of the human (e.g. getting a haircut) that a change of physical location is still a requirement.
This new behavioral trend is in its infancy but will accelerate as the tools for remote interactions continue to improve. Real-time video with multiple displays at each work location will make two-way interactions even more efficient. Group meetings can and will take place from multiple locations, with readily accessible high-quality video becoming routine and essentially free to every user. The days of single displays at each workstation are numbered. We will need and use multiple displays to facilitate these remote interactions. Constant e-mail communication between workers in different offices (even on the same floor of the same building) has already become the accepted norm. There is little reason not to extend this to locations a bit further apart – perhaps several time zones apart.

We are in a period of interesting and rapid change. Constant connectivity is being “built in” to the next generation of young humans currently in grade school and high school. As these students mature and enter the work force, they will see no logical reason to have to change locations just to “go to work”. The technology is evolving rapidly to make this transition nearly effortless. Thus, we are in for some exciting work-life changes over the next few decades.

If you would like to offer your thoughts on this topic or others – from wherever your remote location may be – you can reach me directly from this site, by e-mail at silzars@attglobal.net, or by telephone at 425-898-9117.

