
April 6, 2006

The Uncanny Valley of CPUs and Moore's Law

In the Computer Graphics industry, there's a concept called the "uncanny valley". The idea is that there's a major visual plateau you hit when things get VERY realistic-looking. For a while, things are more and more convincing as you get more photo-realistic, and more and more pleasing, until the graphics get so realistic that every little thing that's just off jumps out at you.

And because it's otherwise so realistic (and most people register these defects only subconsciously), this can create disbelief, and even revulsion. The gap in belief actually WIDENS in this "uncanny valley" as you approach photorealism, at least until one can iron out these previously unimportant kinks.

I think that's more or less where we are with compute power on the desktop.


The trivial summation of Moore's Law is: "Computers get twice as fast every 18 months". There's more subtlety there (the actual observation was empirical, about transistor counts on integrated circuits doubling at regular intervals), but it's a fair summation, I think.
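
(Just to put rough numbers on that doubling rate: a quick back-of-the-envelope sketch in Python, with a throwaway growth_factor helper -- purely illustrative, assuming the popular 18-month figure.)

# Back-of-the-envelope Moore's-law growth (assumes an 18-month doubling period).
def growth_factor(years, doubling_months=18):
    """Relative compute growth after `years`, given a doubling period in months."""
    return 2 ** (years * 12 / doubling_months)

# Ten years at that rate works out to roughly 100x the compute.
print(round(growth_factor(10)))  # ~102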

Unfortunately, we're not seeing the product and consumer experiences that really benefit from Moore's Law anymore (games aside). We're in the "uncanny valley" of application experiences, at least from a desktop compute power perspective. (OK, so it's more of a plateau than a valley, but you get the idea...)

What's interesting is this: it's not clear to me whether this current experience gap, and our (industry "our") ability to clear it, comes from compute power failing to grow fast enough to enable these new experiences, or from a failure of imagination.

I suspect the latter.

11 comments:

Anonymous said...

Interesting. But from the original Masahiro Mori paper, it seems the "uncanny valley" concept was originally applied to robotics (Mori was a roboticist) -- to the human-likeness of a robot or android. I guess combining this with passing the Turing Test will be the ultimate yardstick for getting to the point of Asimov's robots (or Lt Cmdr Data, for that matter :P )

Anonymous said...

p.s. Has anyone ever tried creating an AIMBot to take on the Turing Test? A quick Google search didn't return much... hmmm... I think I have a project for my summer intern ;-)

Sree Kotay said...

:) Yeah, it's originally a robotics concept, but we're PRETTY far away from the uncanny valley of artificial humans. When I first heard the term, it was in CG, where we're actually running into it regularly...

Anonymous said...

"we're PRETTY far away from the uncanny valley of artificial humans"

Of course! Hence the use of positronic brain to get us to the other side of the valley :P

But seriously, Moore himself stated last year that the law will probably not hold for much longer as we reach the limits of transistor miniaturization. And IMHO the trouble is that some of the quantum leaps one can imagine in consumer experiences, like perfect speech recognition and natural language processing, are hard problems -- some of them NP-complete!

Anonymous said...

I tend to think the slowing of the experience leap has more to do with issues beyond gates per square inch or CPU speed. Things like the copyright framework and rights owners' hesitation to take the dive, 'final' bandwidth into the house, and devices that are taking consumers' time away from PCs are the new challenges. Moore's Law applies only when speed is the primary issue. More innovation is happening on cellphones than on PCs, right? Hard-disk prices are still on a Moore's-law-like spiral, right? Form factor matters, ubiquity matters, battery life matters, new social behavior patterns matter; an animated buddy icon isn't the cutting edge anymore...

Anonymous said...

Well, the Turing Test might not be a good yardstick after all. Have you heard of the ELIZA effect?

I don't need an AI that passes the Turing Test, or any other test of "intelligence"; I just want DWIM ("do what I mean")... and THAT'S something for which there has been almost no progress in the last 25 years.

Anonymous said...

That's exactly what I meant! :)

jvaleski said...

Two things come to mind: one, the latest release of the movie King Kong -- never before have I seen such photo-realism in CG; simply off the charts. Two, XUL. I recall us (Netscape) pitching an Aqua look-alike skin/widget-set iteration of Mozilla/Gecko to Apple (pre-Safari). For a slew of reasons, Apple said don't bother; it ain't Aqua. When building a graphics lib for an application (buttons, text-input, scrolling, bars, arrows, blah blah blah) there are subtle, hard-to-put-your-finger-on challenges around the usability of those components in an actual application. wxWindows had similar challenges. Focus/selection just "feels" different from the predominant widget set, and that leads to a subconscious irritation on the user's part. This valley can manifest itself in user feedback like "it feels slow" (when technically, and provably, it isn't) or "something's not right."

Ahh, the joys of human perception.

Sree Kotay said...

Yeah, there's an interesting problem in that as EVERYTHING gets faster/better at Moore's-law rates, our ability to appreciate those improvements seems to be diminishing.

I'm just not sure if this is a local "valley/plateau" phenomenon (I think it is), or a more fundamental diminishing-returns issue.

Anonymous said...

"Yeah, there's an interesting problem in that as EVERYTHING gets faster/better at Moore's laws rates, our ability to appreciate those improvements seems to be diminishing."

But I would argue that everything is _not_ getting better at Moore's-law rates. Faster? Sure. More convenient? No doubt. But better? IMO, software has gotten only marginally better in the past 20 years. We lose our appreciation for the latest and greatest because, at the core of it, we've seen it all before.

Name one piece of software functionality we have now that couldn't fundamentally be done in, say, 1996, when P90s were pretty much top-of-the-line and everyone had modems?

Sure, the graphics are Moore(10) better (heck, 256 colors was still standard 10 years ago). I can do raytraced renders orders of magnitude faster, and convert audio, video, and graphics files likewise... although, ironically, it still takes my machine about the same amount of time to boot, and applications seem to take about the same amount of time to launch.

E-commerce was in its infancy, its eventual scale possibly still unforeseen at the time. But it was already happening... and despite the Web having been transformed from a document-display platform into a true graphical application-development platform (in other words, where computers in general were circa 1979), and gaining a ubiquity to the point that I honestly can't figure out how I got on without it, everything we do today could have been done, fundamentally, back then.

Certainly getting past 57.6kbps modems has given us a tremendous amount of capability... and storage has increased by orders of magnitude and has only barely slowed down. But seriously, what can we do now that we couldn't do at all (not just faster, easier, more colorful) in 1996?

I honestly can't think of anything.

And that's where we are in 2006. Despite the incredible level of interconnectedness we have achieved, we really aren't doing anything _new_. Just like we all said when 2000 arrived:

"Where's my flying car?"

The idea that we would have flying cars by 2000 was common in the '50s (or earlier), but we don't have flying cars. I would argue there are good reasons we don't, but consider:

Where's voice recognition?

Where's handwriting recognition?

Where's natural language processing?

Where is an operating system that can actually respond to what I am trying to do? I don't mean anything approaching AI, but when I make the same &$*%$& adjustment to Windows Explorer, or some other piece of common software, 158 times, the OS should eventually recognize that I'm likely to want it done the next time.

The closest we have to that is something like Microsoft's autorun, a feature that drives me crazy not because it tries to predict what I want to do when I plug in an external storage device, but because I have never wanted it, never will, and have been unable for years to figure out how to get it to shut off and stay off! Another example (and I'm not picking on MS, it's just what I'm most familiar with): I'm playing a game and pounding on the shift key, and Windows interrupts what I'm doing to ask if I need some accessibility feature turned on. Now this is clearly meant to be helpful, and maybe it is for someone who actually needs it, but I get tired of having to turn the darn thing off time after time. (At least this one stays off, but I usually reinstall Windows on my main machine about twice a year, so I have to go through all these little annoyances over and over.) Or look at Clippy, a feature no one asked for, almost no one used, and _no one_ liked.

This is the best we can do? MS has been pounding away at Vista for 5+ years, and it will be a little faster (or not), more secure, prettier (but not if XP is any indication), and more usable -- but what will it give me that I can't do today? Nothing. Mac OS X is innovative, pretty, and does a lot of things better than anyone else, but what does it let me do that is unique to OS X? Nothing.

That's why I think Moore's Law, whether it continues, stalls, or just ends (well... that won't happen), is largely irrelevant today for 95% of computer users.

In essence, our computers are _devolving_, not evolving. They are becoming more and more just glorified media-delivery vehicles, ultra-fancy telephones, and massive data-storage devices, and less and less machines that actually perform sophisticated tasks to make our lives easier, more interesting, or just plain fun.

What was the original topic again? ;-)

Sree Kotay said...

Rick, completely agree :)

That's exactly what I hope I was saying: we're in the "uncanny valley" of application experiences DESPITE improvements in compute power, bandwidth, etc.

Things have gotten "better" and "easier" - but we haven't enabled fundamentally new applications of general compute power.

As you point out, if you had missed the last 10 years, you would find nothing strange or uncomfortable about a 2006 computing experience.

But that will change. I don't know where it'll come from, but my bet is that untethered broadband and broader content-application/virtualization technologies will drive it (just a guess).

Any thoughts?
