April 6, 2006

The Uncanny Valley of CPUs and Moore's Law

In the Computer Graphics industry, there's a concept called the "uncanny valley". The idea is that there's a major visual plateau you hit when things get VERY realistic-looking. For a while, graphics become more and more convincing, and more and more pleasing, as they get more photo-realistic, until they get so realistic that every little thing that's just off jumps out at you.

And because it's otherwise so realistic (and most people register these defects only subconsciously), this can create disbelief, and even revulsion. The gap in belief actually WIDENS in this "uncanny valley"
as you approach photorealism, at least until one can iron out these previously unimportant kinks.

I think that's more or less where we are with compute power on the desktop.

The trivial summation of Moore's Law is: "Computers get twice as fast every 18 months." There's more subtlety there (the original is an empirical observation about the doubling of transistor counts on integrated circuits), but it's a fair summation, I think.
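Just to make the compounding concrete, here's a toy sketch of what that "doubling every 18 months" restatement implies over time (the function name and the 18-month figure as a parameter are mine, for illustration only):

```python
# Moore's-Law compounding: performance doubles every "doubling_months" months.
# This is the popular restatement, not Moore's original transistor-count claim.

def moores_law_speedup(years: float, doubling_months: float = 18.0) -> float:
    """Return the cumulative speedup after `years` of steady doubling."""
    return 2.0 ** (years * 12.0 / doubling_months)

for years in (1.5, 3, 10):
    print(f"{years:>4} years -> ~{moores_law_speedup(years):.0f}x")
```

By this arithmetic, a decade of steady doubling yields roughly a hundredfold speedup — which is what makes the lack of hundredfold-better experiences so striking.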

Unfortunately, we're not seeing the product and consumer experiences that really benefit from Moore's Law anymore (games aside). We're in the "uncanny valley" of application experiences, at least from a desktop compute power perspective. (OK, so it's more of a plateau than a valley, but you get the idea...)

What's interesting is this: it's not clear to me whether this current experience gap, and our (industry "our") ability to clear it, is a failure of compute power to grow fast enough to enable these new experiences, or a failure of imagination.

I suspect the latter.


Anonymous said...

Interesting. But from the original Masahiro Mori paper, it seems the concept of the "uncanny valley" was originally applied to robotics (Mori was a roboticist) -- to the human-likeness of a robot/android. I guess combining this with passing the Turing Test will be the ultimate yardstick for getting to the point of Asimov's robots (or Lt. Cmdr. Data, for that matter :P )

Anonymous said...

p.s. has anyone ever tried creating an AIMBot to take on the Turing Test? a quick Google search didn't return much .... hmmmm... I think I have a project for my summer intern ;-)

Sree Kotay said...

:) Yeah, it's originally a robotics concept, but we're PRETTY far away from the uncanny valley of artificial humans. When I first heard the term, it was in CG, where we're actually running into it regularly...

Anonymous said...

"we're PRETTY far away from the uncanny valley of artificial humans"

Of course! Hence the use of positronic brain to get us to the other side of the valley :P

But seriously, Moore himself stated last year that the law will probably not hold for much longer as we reach the limits of transistor miniaturization. And IMHO the trouble is that some of the quantum leaps that can be imagined in consumer experiences, like perfect speech recognition and NLP, are hard problems -- some of them NP-complete!

Anonymous said...

I tend to think the slowing of the experience leap has more to do with issues beyond gates per square inch or CPU speed. Things like the copyright framework and rights owners' hesitation to take the dive, "final" bandwidth into the house, devices that are taking consumers' time away from PCs, etc. are the new challenges. Moore's Law applied only when speed was the primary issue. More innovation is happening on cellphones than on PCs, right? Hard-disk prices are still in the middle of a Moore's-Law-like spiral, right? Form factor matters, ubiquity matters, battery life matters, new social behavior patterns matter, and an animated buddy icon isn't the cutting edge anymore...

Anonymous said...

Well, the Turing Test might not be a good yardstick after all. Have you heard of the ELIZA effect?

I don't need an AI that passes the Turing Test, or any other test of "intelligence", I just want DWIM... and THAT'S something for which there has been almost no progress in the last 25 years.

Anonymous said...

That's exactly what I meant! :)

jvaleski said...

Two things come to mind. One: the latest release of the movie King Kong. Never before have I seen such photo-realism in CG; simply off the charts. Two: XUL. I recall us (Netscape) pitching an Aqua look-alike skin/widget-set iteration of Mozilla/Gecko to Apple (pre-Safari). For a slew of reasons, Apple said don't bother; it ain't Aqua. When building a graphics lib for an application (buttons, text input, scrolling, bars, arrows, blah blah blah), there are subtle, hard-to-put-your-finger-on challenges around the usability of those components in an actual application. wxWindows had similar challenges. Focus/selection just "feels" different from the predominant widget set, and it leads to a subconscious irritation on the user's part. This valley can manifest itself in user feedback comments like "it feels slow" (when technically, and provably, it isn't), or "something's not right."

Ahh, the joys of human perception.

Sree Kotay said...

Yeah, there's an interesting problem in that as EVERYTHING gets faster/better at Moore's-Law rates, our ability to appreciate those improvements seems to be diminishing.

I'm just not sure if this is a local "valley/plateau" phenomenon (I think it is), or a more fundamental diminishing-returns issue.

Anonymous said...

"Yeah, there's an interesting problem in that as EVERYTHING gets faster/better at Moore's-Law rates, our ability to appreciate those improvements seems to be diminishing."

But I would argue that everything is _not_ getting better at Moore's-Law rates. Faster? Sure. More convenient? No doubt. But better? IMO, software has only gotten marginally better in the past 20 years. We lose our appreciation for the latest and greatest because, at the core of it, we've seen it all before.

Name one piece of software functionality that we have now that couldn't fundamentally be done in, say, 1996, when P90s were pretty much top-of-the-line and everyone had modems?

Sure, the graphics are Moore(10) better (heck, 256 colors was still standard 10 years ago). I can do raytracing renders orders of magnitude faster, as well as convert audio, video, and graphics files... although, ironically, it still takes my machine about the same amount of time to boot, and applications seem to take about the same amount of time to launch.

E-commerce was in its infancy, and its eventual scale was possibly still unforeseen at the time. But it was already happening... and despite the Web having been transformed from a document-display platform into a true graphical application development platform (in other words, where computers in general were circa 1979), and gaining a ubiquity to the point that I honestly can't figure out how I got on without it, everything we do today could have been done, fundamentally, back then.

Certainly, getting past 57.6 kbps modems has given us a tremendous amount of capability... and storage has increased by orders of magnitude and has only barely slowed down. But seriously, what can we do now that we couldn't do at all (not just faster, easier, more colorful) in 1996?

I honestly can't think of anything.

And that's where we are in 2006. Despite the incredible level of interconnectedness we have achieved, we really aren't doing anything _new_. Just like we all said when 2000 arrived:

"Where's my flying car?"

The idea that we would have flying cars by 2000 was a common one in the '50s (or earlier), but we don't have flying cars. I would argue there are good reasons we don't, but consider:

Where's voice recognition?

Where's handwriting recognition?

Where's natural language processing?

Where is an operating system that can actually respond to what I am trying to do? I don't mean anything approaching AI, but when I make the same &$*%$& adjustment to Windows Explorer, or some other piece of common software, 158 times, shouldn't the OS eventually recognize that I'm likely to want it done the next time?

The closest we have to that is something like Microsoft's AutoRun, a feature that drives me crazy not because it tries to predict what I want to do when I plug in an external storage device, but because I have never wanted it, never will, and have been unable for years to figure out how to get it to shut off and stay off!

Another example (and I'm not picking on MS, it's just what I'm most familiar with): I'm playing a game and pounding on the shift key. Windows interrupts what I'm doing and asks me if I need some accessibility feature turned on. Now, this is clearly meant to be helpful, and maybe it is for someone who actually needs it, but I get tired of having to turn the darn thing off time after time. (At least this one stays off, but I usually reinstall Windows on my main machine about twice a year, so I have to go through all these little annoyances over and over.)

Or look at Clippy, a feature no one asked for, almost no one used, and _no one_ liked.

This is the best we can do? MS has been pounding on Vista for 5+ years, and it will be a little faster (or not), more secure, prettier (but not if XP is any indication), and more usable, but what will it give me that I can't do today? Nothing. Mac OS X is innovative, pretty, and does a lot of things better than anyone else, but what does it let me do that is unique to OS X? Nothing.

That's why I think Moore's Law, whether it continues, or stalls, or just ends (well... that won't happen) is largely irrelevant today for 95% of computer users.

In essence, our computers are _devolving_ not evolving. They are becoming more and more just glorified media delivery vehicles, ultra-fancy telephones, and massive data storage devices, and less and less machines that can actually perform sophisticated tasks to make our lives easier, more interesting, or just plain fun.

What was the original topic again? ;-)

Sree Kotay said...

Rick, completely agree :)

That's exactly what I hope I was saying: we're in the "uncanny valley" of application experiences DESPITE improvements in compute power, bandwidth, etc.

Things have gotten "better" and "easier" - but we haven't enabled fundamentally new applications of general compute power.

As you point out, if you had missed the last 10 years, you would find nothing strange or uncomfortable about a 2006 computing experience.

But that will change. I don't know where it'll come from, but my bet is that untethered broadband and broader content applications/virtualization technologies will drive it (just a guess).

Any thoughts?
