Visual demo showing that people can't see blue very well

Of the three primary colors, people (Homo sapiens) see green best, followed by red, but we really suck at seeing blue.  This page:

is a nice demonstration of just how much we suck at seeing blue, especially compared to green.  Most image and video compression takes advantage of this fact, devoting far fewer bits to the blue portion of an image.
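This is roughly how codecs exploit it in practice. The sketch below is my own toy illustration (not from the page above): codecs typically convert RGB into a luma/chroma space (here the BT.601 YCbCr transform) and store the chroma planes at reduced resolution ("4:2:0" subsampling), so most bits go to brightness, where green dominates.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) float RGB array in [0, 1] to Y, Cb, Cr planes (BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b        # note green's large weight, blue's small one
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b    # blue-difference chroma
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    return y, cb, cr

def subsample_420(plane):
    """Average each 2x2 block, quartering the chroma resolution (4:2:0)."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.rand(4, 4, 3)
y, cb, cr = rgb_to_ycbcr(rgb)
cb_small, cr_small = subsample_420(cb), subsample_420(cr)

# Full-resolution luma plus quarter-resolution chroma: 16 + 4 + 4 samples
# instead of 48 -- half the data, with little visible loss.
print(y.shape, cb_small.shape)
```

Because our eyes barely notice the missing chroma detail, this halves the raw sample count essentially for free, before any further compression is applied.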

Current status of CableCard and "open cable" in general

Cable Card Goes Mainstream on July 1

The article above summarizes the current situation with CableCard — it's just barely starting to become available.

Some background on CableCard … 

CableCard is a small standardized electronic card that you plug into a cable set-top box (or DVR) of your choice.  The idea was that consumers would be able to buy their own set-top box or DVR (e.g. at BestBuy), and the cable company would send the consumer a CableCard which contains the security/access-control hardware to prevent cable theft.

CableCard is a hardware solution to the cable companies' security/access-control concerns — you have to get a physical CableCard from your cable company.  The article also mentions a software solution under development, known as Downloadable Conditional Access System (DCAS).  With a software solution, you would not need to get a physical piece of hardware (like a CableCard) to plug into the cable box you just purchased from BestBuy.  Rather, the cable company would electronically send "keys" to the software in your cable box.

After years of little activity on the CableCard front, much of the recent activity is due to a March 2005 FCC ruling (pdf) requiring cable companies to stop shipping "integrated" cable boxes (including DVRs) by July 2007 (in 2 months).  To avoid fines, the only solution available to cable companies today is to ship boxes that support CableCard.

Practical eye-tracking for GUIs

The Gaze-enhanced User Interface Design (GUIDe) project at Stanford has a nice video demonstration of their EyePoint system for "pointing" in GUIs (see below).

Eye-tracking technology has been around for a while, so that's not the interesting part.  The interesting part is how they deal with the "Midas problem".  The advantage of a mouse is that you can let go of the mouse when you don't want to move the pointer, but with eye-tracking your eyes are always moving.  It's called the "Midas problem" in reference to the legend of King Midas, who wished for the power to turn things into gold at his touch;  this backfired because everything he touched immediately turned into gold, including his food, his drink, and even his daughter.

EyePoint deals with this problem by still requiring some other action to indicate that pointing is desired, while retaining most of the speed advantage of eye-tracking.

Navigating a set of photos in 3D

Suppose you have a lot of photos all with the same object in them, or photos from the same area.  Navigating the photos as thumbnails doesn't work very well when you have more than a handful of photos.

A Univ of Washington team has a demonstration of a system for navigating the photos in 3D.  The system computes the relative camera positions across all the photos, then allows you to "zoom in" on the object by finding another photo that is closer.  Or it allows you to "circle around" an object by finding photos to the left and right.

This system does not attempt to stitch photos into a seamless panorama, nor does it attempt to build a complete virtual 3D world.  But the effect is almost that good.

They've produced a 5-minute video demonstrating their system (Flash).  Scroll down on the page for a description of the system.  You can also get all the details on the team's main project page.   Microsoft Research has some beta software based on the UW research (I haven't tried it out yet).

DRM is the same as perpetual motion

There's lots of discussion about whether DRM (Digital Restrictions Management) is a good thing or a bad thing.  But most of the debate assumes that the idea of DRM is actually possible.  In this Slashdot posting, Eustace Tilly summarized the fundamental fallacy of DRM (my emphasis added):

[DRM relies on cryptography, and] cryptography is designed so that a message from A can be read by B but not by C. With DRM, B and C are the same person.  The message from A (the publisher) must be readable by B (the consumer) but not by C (the consumer).

I hope you understand now why DRM is a concept flawed in its fundamentals.

DRM would be useful. So would a perpetual motion machine. It is wishful thinking to believe that the sheer utility of a function means it is capable of being produced.

Should you spend more for higher HDTV resolution?

If you're shopping for an HDTV, is it worth it to pay more money for a set that supports higher resolution (at the same screen size)?

The latest digital TVs support different maximum resolutions: 480p (853×480), 720p (1280×720), & 1080p (1920×1080). The first resolution (480p) is typically referred to as "enhanced definition", while the last two (720p & 1080p) are referred to as "high definition" (HDTV).

Digital TVs will down-convert higher-resolution signals to the resolution the set supports. So a 480p set can still display a 1080p signal by down-converting it to display fewer pixels.

For the same screen size, higher resolution sets cost more money. Most people would assume that for a given screen size (say 40 inches), a 1080p set would be better than a 720p set. But depending on how far away you are sitting from the screen, you may not be able to see the difference in quality because the pixels will be too small. In that case you're wasting your money. So at what point are you wasting your money on pixels you won't be able to see?

Carlton Bale blogged on this question, and created a chart (see below). For a 40 inch TV, and sitting 8 feet away, most people will see the benefit of 720p over 480p. But most people will not see any additional benefit going to 1080p. So if you're buying a 40 inch TV and watching from 8 feet, it's a waste of money to pay more for a 1080p set.
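You can check the chart's conclusion with a little back-of-the-envelope arithmetic. The sketch below is my own (not from Carlton Bale's post); it assumes a 16:9 screen and the commonly cited ~1 arcminute resolving limit of 20/20 vision, and computes how big one pixel looks from the couch.

```python
import math

def pixel_arcminutes(diagonal_in, horiz_pixels, distance_ft, aspect=(16, 9)):
    """Angular width of one pixel, in arcminutes, as seen by the viewer."""
    aw, ah = aspect
    width_in = diagonal_in * aw / math.hypot(aw, ah)   # screen width from the diagonal
    pixel_in = width_in / horiz_pixels                 # width of a single pixel
    angle_rad = math.atan2(pixel_in, distance_ft * 12) # viewing distance in inches
    return math.degrees(angle_rad) * 60

# 40-inch set viewed from 8 feet:
for name, px in [("480p", 853), ("720p", 1280), ("1080p", 1920)]:
    print(f"{name}: {pixel_arcminutes(40, px, 8):.2f} arcmin per pixel")
```

On these assumptions, a 480p pixel spans about 1.5 arcminutes (comfortably visible, so stepping up to 720p helps), a 720p pixel sits right at the ~1 arcminute limit, and a 1080p pixel is well under it — consistent with the chart's conclusion that 1080p buys you nothing at that size and distance.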

Generating electricity from waste heat

Apple is talking to Eneco about using Eneco's solid-state cooling technology.

One interesting side-effect is that Eneco's cooling technology generates electricity. This could be especially appealing in laptops.  Two problems laptops must solve are dissipating waste heat (cooling) and extending battery life.  Laptops could use Eneco's technology to cool the laptop, and then use the electricity generated by that cooling to extend the laptop's battery life.

Update: Slashdot picked up on Eneco's technology.  Two readers point out a similar technology back in 2002 from Cool Chips.  Unfortunately, Cool Chips seems to be stalled.   Hopefully Eneco will make better progress.

Update: ITWeek reports on a meeting where Eneco presented to potential investors.  Scroll two-thirds of the way down to read about some concerns the audience had regarding the commercialization of the technology. The ITWeek article concludes:

The lack of clarity on such fundamental design issues suggests it is likely to be some time before Eneco powered devices emerge. But if these issues can be overcome – and anyone with any experience of energy conversion technologies will tell you it remains a big if – the company does appear to have a truly disruptive technology that could deliver clean, cheap and efficient power to a raft of different industries.

Better faces in computer animation

This Seattle Times article talks about the problems of rendering realistic human faces in computer animation.   The biggest problem facing computer animators is known as the "uncanny valley".  As animation techniques get better, the tiny remaining errors are magnified.  Paradoxically, this makes the face seem less realistic.  I've always believed that the answer to the uncanny valley is to digitize the performance of real actors, then use that digitized performance in computer animation.  (Some directors hope that computer animation will eliminate the need for prima donna actors, but I suspect that the most cost-effective technique will always involve actors in some form.)

Production companies have been doing full-body motion-capture for several years by attaching a few dozen reflective dots to an actor, then using cameras and computers to track those dots.  These dots are then used to build the digital motion model.  But to get past the uncanny valley for digital faces, thousands of points would need to be tracked.  The current mainstream techniques aren't practical at these scales.

The Seattle Times article reports on the technology of Mova, a Silicon Valley startup.  Their solution begins by sponging green fluorescent paint onto a performer's face.  They record the actor's performance with a set of cameras, then use a program that uses imperfections in the sponged-on paint to build up a detailed digital model of the face for later use in computer animation.  Mova claims their system has sub-millimeter accuracy, which will be necessary to get past the uncanny valley.   Their web site has a few movies and Flash presentations, but only a tiny bit of the final product.  That little bit looks very good, so if Mova can deliver large quantities of this type of animation they have an excellent future in front of them.

Edwards AFB Air Show, 2005

This site has a bunch of good pics and nice descriptions of the 2005 air show at Edwards Air Force Base.

One of the premier airshows in the world is held at Edwards Air Force Base in the Mojave desert about an hour north-east of Los Angeles. It's a show like no other, held at America's most historic aviation test facility, adjacent to Rogers dry lake bed, which is used as an emergency landing area during flight tests, and for occasional visits by the Space Shuttle returning from orbit. This is the only show anywhere in the world where you'll hear military aircraft break the sound barrier (twice in one day at the 2005 show, once by an F-16 fighter and later by a B-1 bomber).