December 20, 2007

Best Deal of 2007: Microsoft-Facebook?

Much has been written on the terms of the deal between Microsoft and Facebook. The gist of it is this: Microsoft paid Facebook $240M for 1.6% of the company, and the exclusive rights to sell online advertising for the site.

This being year-end, all the pundit types have their best and worst lists coming out, and this deal seems omnipresent on the "worst deals of the year" lists.

It's not hard to see why this argument is made: "worst" is really a proxy for "stupidly lopsided value creation", and is Facebook remotely worth that much (remember that MySpace went for $500M and YouTube for $1.6B)? It creates an implied valuation on their user base (figure 50M users or so) of, like, $300 a head... that's some big math... hard to see how Microsoft ever really recoups its investment.

But of course, that's to focus on the value with regard to public markets and market cap - the *wrong* metric here. That implied valuation of $15B is pretty much (forgive my language) bull$#!t, because this deal was, in reality, a barter deal.

Let's look at the deal another way, very simply, in terms of cash:

1) Facebook gets a $240M cash infusion while giving up very little control or equity,
2) Microsoft gets a significant destination outside its network in which to build the value of its recent, very large acquisition of aQuantive (the $240M is 4% of the $6B that's already "sunk"),
3) Microsoft has to generate incremental ARPU of only $5 a user *in total* to break even ($240M spread across roughly 50M users),
4) Facebook is valued at $15B, which means...
5) Facebook is either (depending on where you think this ends): (a) off the market for some time at that price (so no Google, Yahoo, et al spoilers), or (b) tied to Microsoft and hardening/creating value in their online ad platform

Win, win, win, win, win - at least for Microsoft and Facebook: you know, the parties actually doing the deal?

Isn't that the definition of "best"? Lopsided value creation for *both* sides?

Any "investment" dollar$ back from the deal is pure upside for Microsoft. That means that they, more or less, let Facebook fill in the denominator: $240M of $XX - Microsoft doesn''t/didn't *really* care what that number was...

I don't know about you, but it leaves me with a funny taste in my mouth... is this the "revenue exchange" program of the new bubble - a variation of the old "I'll buy from you if you buy from me and both our revenues go up, but we're not spending any money" trick?

Dunno - but it also smells suspiciously similar to another "equity for exclusivity" deal...

December 13, 2007

My new favorite word

w00t?

Not so much.... I like "backronym...." :)

(ok, maybe *favorite* is a strong word - but it's funny... looks funny, sounds funny, is funny - try saying it aloud)

December 5, 2007

Urgh.

Ok - so the only flight worse than a red-eye is a red-eye with connections. Urgh... to quote the inimitable (all evidence to the contrary) Danny Glover: "I'm getting too old for this [expletive]..." :)

November 29, 2007

Touch UI and the Art of Intent

Observation: One of the great things about the mouse as an input device is the idea of "intent" - that your cursor indicates your locus of attention when used for interaction (Roll-over states and tool-tips being trivial expressions of this).

You have a simpler model of this semantic with the Blackberry trackball (and the Blackberry jog wheel before that), and a crappier version with things like using your remote control with a TV EPG (guide) or the arrow keys on your cell phone to navigate menus.

Touch interaction systems, like the iPhone, lack that model completely - just like most older (read: HW only) Consumer Electronics UIs (think VCR or DVD player).

In some cases, that really doesn't matter much... and in other cases, the directness of interaction provides a far better paradigm... but it suggests the question: is "intent" a semantic that will disappear for Touch UI? Or is it a temporarily "lost" item, like tactile feedback - just a gap to be crossed?

November 25, 2007

Review: Beowulf (in 3D!)

First, the short version: it was a fast-paced man-movie of an adventure (especially in 3D!). Not quite as testosterone-ly, epically big-screen-worthy as, say, 300, but still a movie well worth seeing in a theater (in 3D!). Really fun.

The longer version: It was good, but won't hold up - even for re-watching in the near term, and certainly not in the longer term as a "film".

Obviously, there's the whole 3D(!) motion-capture thing. Definitely a huge step forward here, especially as compared to Zemeckis' previous outing with this technology, The Polar Express. In particular, I thought they really nailed the close shots (especially the eyes), but in attempting to service the "real", they left a lot of the motion looking very stiff. Ironically, it was the big-motion action sequences and distance shots that looked the most fake - a great effort, but still short of the "reality bar" and likely to look Dr. Who bad in a decade or so. Still - there are moments when you really, completely get drawn in (and then, *whack*, get snapped out :P...)

Story-wise, the conceit of the movie (i.e. why the filmmakers are "re-visiting" the well-known story) is this: Imagine if everything in the original were literally true, but embellished by the narrator (Beowulf himself for much of the tale) and edited for "mature content" by the transcribers (likely cleric/priest scribes) of the epic Olde Englishe poeme. It definitely adds a layer of pathos to the story - filling in the missing "back story" - but also afflicts the story with that soap-opera interconnectedness that's drowned action/adventure storytelling (especially sci-fi and fantasy) in the wake of The Empire Strikes Back.

Sometimes a dragon is just a big evil dragon, you know?
(apologies in advance for the politics of the previous link - but, whattya gonna do?)

Still - it was fun to watch, and fun to note where (and why) it diverged from its source. And it was co-written by Neil Gaiman, who, even when just off, is miles better than most...

November 21, 2007

The Great SetWorkingSetSize() Scam...

I saw this post at ZDNet about Firefox 3 memory usage. Setting aside for a second whether Firefox 3 is better than IE 7 or Firefox 2, this reminds me of one of the great cheats of "small applications" developers everywhere:

SetProcessWorkingSetSize(GetCurrentProcess(), -1, -1)

This Windows API call makes your application look *very* efficient without actually doing anything, and has been employed by MANY an application popularly considered "lightweight" (and some, um, less light) - because the "Memory Size" column in Task Manager on Windows doesn't reflect memory usage.

"Hunh?!?!", you say?

That column actually reflects the working set of memory for your app - which is the amount of memory currently "realized" (in active use) by your process. Let's look at some use cases to illustrate what that actually means:

1) Allocate a bunch of memory and free it. Your app isn't reserving the memory space, but the working set may still be high - Windows will lazily reclaim it if it's needed by another application.
2) Minimize all your application windows. This does the equivalent of the Windows API call I listed above, and the memory working set for that application gets *totally* paged out. Then Windows will load back the memory pages as they're accessed - it's the equivalent of clearing a cache.

This last is confusing (and illustrates the issue): after SetProcessWorkingSetSize(GetCurrentProcess(), -1, -1), "Memory Size" in Task Manager doesn't reflect what's been "reserved" (allocated) by an application, just what blocks of memory are being/have been actively "touched" since the working set was "cleared".
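
If you want to see the scam in action, here's a minimal sketch (Win32 C; the buffer size is arbitrary and error handling is omitted):

/* Allocate and touch ~100MB, then "empty" the working set. Watch
   "Memory Size" collapse in Task Manager while "Virtual Memory Size"
   doesn't budge - nothing has actually been freed. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t bytes = 100 * 1024 * 1024;
    char *buf = malloc(bytes);
    memset(buf, 0xAB, bytes);   /* touch every page: working set balloons */

    printf("Look at Task Manager, then press Enter...\n");
    getchar();

    /* The one-liner: pages the entire working set out. The allocation
       (and the "Virtual Memory Size" column) is unchanged. */
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);

    printf("Working set \"emptied\" - look at Task Manager again.\n");
    getchar();

    free(buf);
    return 0;
}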

If all that's confusing, fortunately for you, it's easy to boil down to a simple action: use the "Virtual Memory Size" column in Task Manager instead to look at application memory usage. You can find it under the "View... Select Columns..." menu. It reflects what the application has requested from the OS, but not yet released - i.e. the real memory consumed by the application!

More info here.

November 15, 2007

Android: Sound and Fury signifying... ?

So... I've been looking at Android a little bit (what's actually available thus far is really the SDK, not the platform itself).

A few comments:
- Custom Java(-ish - more on this shortly) VM/bytecode engine
- Integrated Webkit for HTML/JavaScript authoring
- Fast, robust 2D graphics (native and software-only) with a custom presentation engine
- OpenGL ES, with possibility of HW acceleration
- Rich storage semantics through SQLite
- Still a little fuzzy on the licensing terms: Google says (emphasis mine)"Over time, more of the code that makes up Android will be released..." - not sure what that means...

The SDK supports Java applications only at this stage (and for the foreseeable future?) - though, in theory, platform source code being available under "non-restrictive" terms creates the opportunity for other types of enhancements.

So - still early, but on the good side: LOTS of (welcome/needed) attention to the graphics and presentation layers.

On the bad: fracturing Java (which only recently has started to make progress on getting past the "build once/debug everywhere" problems). If Sun doesn't address this with Google, it'll be hard for them to maintain any credibility or cohesion with the Java Community Process for managing Java's roadmap.

Interestingly, this highlights a truism of mine: for software, a single implementation trumps a single specification - more on this in a future post.

How this will work with open source and Android is unclear because, as Jonathan Schwartz (Sun's CEO) points out: "Companies compete, communities simply fracture".

In particular, it'll be interesting to see how Google rationalizes its rumoured "non-fragmentation" clause with the idea of "non-restrictive" licensing...

November 12, 2007

Watch this Space, pt 1
(AKA the Android cometh)

Updated: Up and available around noon EST (someone sent me the link already! :)). Will look at it tonight... amusingly, the win and mac tarballs are downloading now, but the linux one is consistently crapping out...

So, despite all the "news" last week (*cough* tease) - and all the subsequent pontificating - today (in theory) is when the Android SDK - the guts of the long rumoured Google "gPhone" - is actually released, and we get some real meat.

I have to hand it to the Symbian guys, though, who take "quote of the week" for John Forsyth's likening of a Linux mobile initiative to the common cold: "It keeps coming round and then we go back to business."

Heh.

So... watch this space (I'll share some thoughts once I get an "open" look...)

November 7, 2007

Toddlers bond with Robot

Pretty cool story in the New Scientist on Monday: Robot becomes one of the kids. Basically, researchers found that, with the correct behaviour emulations, an advanced robot was able to integrate into a toddler group as a peer (as opposed to "as a toy", or even "as a pet", based on touching cues and other interactions).

Video below (link):


This has implications for group behaviour theory and social development (it hints at a lot about how we develop thinking about "us"). But mostly it speaks to the evolution of robotics assistance in the classroom, especially for young children.

There is clear evidence that the "uncanny valley" gets substantially farther and wider as we age - the question that this study ponders, but only partially answers, is: Why would we ever use/need *humanoid* robots?

Still: cool :)

November 1, 2007

ES4: The Javascript 2.0 Blaze

Updated: Now on Slashdot.

A bit of a flame war going on in the ECMAScript working group (which spices up an otherwise reasonably boring mailing list).

This blog post (from Mozilla Foundation CTO and Javascript creator Brendan Eich) is really only the tip of the iceberg - you should follow some of the links from his post, or check out the mailing list archive.

Javascript 2.0, or more formally ECMAScript Edition 4 (or simply ES4), has been in the works for a good, oh.... 8 years now. With the ES4-in-motion work from Adobe (nee Macromedia) in the form of ActionScript 3 (AS3), and the rise of Firefox, Safari, et al., it's been getting a serious push to completion over the last year and a half, especially in the face of Microsoft's C#, Silverlight, and (though no one said it directly), I think, even Adobe's Flex and AIR.

The battlelines are pretty clearly drawn, with Microsoft and Yahoo on one side, and the Mozilla team, Opera, Adobe (interesting, eh? "Enemy of my Enemy" anyone?), and, oh, pretty much everybody else on the other side. Or, as you might first opine from that cast, Evil v. Good.

MS and Yahoo think the language is changing too much, whilst the others think that it needs to in order to be competitive for the larger scale programming projects the web is increasingly requiring.

My opinion? As is usually the case, they're both right - you only have to look at the Flash community's response to ActionScript 3.

In short: they like it - a lot - but it's very different from AS2 development.

And, oh, from an implementor's perspective, AVM1 and AVM2 (which, roughly, correspond to ES3/AS2 and ES4/AS3) are two completely different VMs. Is it possible to make one that does both? Sure... but there's no denying ES4 requires *substantially* more effort and complexity to implement (note I'm *not* making an argument about code size here....)

And so, in a deliciously Shakespearean turn, Doug Crockford of Yahoo asserts (correctly, I think) that the issue is fundamentally one of nomenclature.

Of marketing.

If it's not called Javascript, would anyone use it? And if it is, how similar should it be?

Quite frankly, Brendan's probably right in that, whatever justifications the opposition might even believe, there is a bias to keep Javascript "ghetto-ized" to a degree - because of existing investments and strategy considerations.

That doesn't, however, make it wrong to push in that direction.

Personally, I do wish there were less emphasis on the "compiler/VM" split that Java brought into vogue - it seems to be at the heart of a lot of the design decisions that make AS3 and ES4 feel less "Javascript-y" to me.... but that's both good and bad - ECMAScript 3 is forgiving of errors well past the point of stupidity.

And so, lightweight stuff really is harder, but it's also a lot easier to write stuff well...

October 29, 2007

Hi Hulu Hulu Nuku Nuku Wah Ha Hah...

Updated: Hunh - they fixed the scaling :) Beta-in-motion... cool!

Hulu (the YouTube/iTunes clone from "Big Media") is now in beta (closed, but still)... see the embedded player below (link).

It looks pretty nice - nothing mind-bogglingly interesting or anything, but somewhat well executed (annoyances already: you can't invoke the menu without sitting through the ad, and it scales REALLY poorly - note that I'm not using the default "520 x 295" size... and permalink, wherefore art thou?). You can follow a "related content chain" to other assets by clicking on links when you invoke the "menu" - roll over the clip above to see what I mean.

I obtained this clip from the Hulu blog (written by the CEO) - which has all of two entries since August.

CEO blogging is nice and Web 2.0h-ey - but only if you follow through, so minus style points for that. On the flipside, it's an episode of the Office (full episode!) they use as a first example - which is frikkin' hilarious, so there's that.

I have to admit, as content providers increasingly push out the middle man, and offer the content directly... well - I'm not sure how many "pure" aggregators will be left standing.

Maybe this will be one. Or maybe not.

October 24, 2007

Steady Search Improvements

I'm not sure when it started happening, but both Google and Live now display a simple "table of contents" (link) for the first entry (Yahoo doesn't currently, and Ask has its own variant)....

It's really quite handy.

Back when, I had proposed a similar feature for AOL Explorer, based on the idea that an auto-discoverable RSS feed (a simple feature all browsers now have) is more than just an "update feed" - it's really also an alternative, editorially managed table of contents for the site. The idea was that you'd visit the site (like ESPN or CNN), and perhaps a small toaster would appear in the lower right presenting the feed as a mini TOC for quick navigation.

In any case, I have no idea if that's how the search guys are doing it - but the fact that RSS feeds are actively programmed, and so present an alternative, intelligent view into the site, clearly must feed relevancy (which is really another word for "recommended", if you think about it...).

And little features like this are going to be increasingly important - more content means more specialization (witness: search engine fatigue) - so nicely done.

October 17, 2007

Software the Ultimate

If you're a language geek of any sort, you should really be regularly reading Lambda the Ultimate (er - "language geek" of the digital variety, that is - you freak-ish polyglots can go elsewhere). Of course, if you are one, you're likely already there, so the reference here is redundant.

That said, there was an excellent post from a few weeks ago that any futurist should embrace as a foundational principle (uh - IMHO :)). That principle, embodied in an argument about the base nature of programmable circuitry, is that "Everything is software - the rest is just wiring". At some point, this will seem like an obvious thing - we'll wonder how anyone could have ever imagined it differently (and I'll bet some already do :)).

The most dramatic public demonstration of this is the iPhone, of course, where the "hardware" interface (input keys, etc.) is configured on-demand, programmatically (i.e. in "software"). But, the rise of the Programmable Processing Unit (the "PPU", whether called the CPU, GPU, Embedded processor, or whatever) has been underway for a long time - implicitly masked in the rise of "Edge Processing Capacity": smart devices of all sorts (phones, fridges, routers, blah blah ...), presaged by the Personal Computer itself.

Not that there haven't been some promising mis-steps (Transmeta's Crusoe comes to mind) - but it is the path. Most industries today, including Video, are full of single-purpose, limited-function ASICs, and that will change.

A few trends contribute directly to the idea, and value, of "Software-as-Hardware":
  • Specialization of function delivering a higher quality experience: think IMDb v. Yahoo Finance v. Google. And if you don't believe even Google believes this, ask yourselves why they have CodeSearch, Google Finance, and the like.
  • Aggregation of access points: think PDA/laptop/cell phone, or TV/Internet and Game console convergence - I don't mean "connectivity" here, but your physical access point to digital services.
    This is driven by what I think of as "the Lazy IT" principle required for mass commoditization: You just don't want to manage - that is, administer, install, update, and (in the case of portable access points) carry - all these access points.
    Sidenote: Mobility of access as a proxy for personalization will be an interesting trend to watch here.
  • The rise of what I call "Content Engineering": data-driven design systems (think HTML, Flash, and to a lesser extent Java, .NET, etc.) enabling richer and more dynamically flexible relationships between content and services providers and their end users. The essence of this practice, on-demand delivery, is at the heart of what drives the move to programmability.
Another future wrinkle will be as our devices (access points in this context) become even more configurable. We see a *tiny* bit of this with some fun phone form factors (say THAT 5 times fast), but I think piezo-electric stuff (and/or some karmically related technologies like digital ink, or the like) will drive some dramatic application innovations that create significant behaviour shifts in the next 8 years.

Imagine what it'll be like if your apps or content can change not just the surface, but the shape of your terminal.

Power consumption impedances (in the "laws of physics" sense), I think, are the only unknown blocker to greater programmability. Though perhaps there are creative ways to solve even that...

Of course, it's not entirely impossible I'd feel differently if my title were "Chief Hardware Architect" :)

October 11, 2007

Flash rulez

Courtesy of Corey.
Fairly impressive set of announcements from Adobe's MAX conference this year.

Most notably (for me):
  • Aforementioned Flash player support for H.264/MPEG-4 should be released in the next few weeks (though media streaming is still tied to their Media Server, which kinda sucks),
  • 2D Shading language for Flash code-named Hydra; you can check out a HW accelerated only version here,
  • C/C++ compiler for Actionscript; not sure if this will be productized, but the demo of Quake I software rendering compiled to the AS3 VM is pretty cool (end of the second video, here),
  • Substantially expanded text control: flow, wrap-around, tables, etc.,
  • 2.5D rendering, i.e. a perspective display system.
Get (slightly) more detailed info at Adobe Labs.

With some significant focus on the developer productivity/debugging chain, Adobe could make things very interesting for the current generation of incumbents (Sun, Microsoft, etc.).

Certainly it turns up the heat intensely for the Silverlight team... and even more so for the future of Java on the desktop for RIA.

October 10, 2007

Secrets of the Cable Universe #2: Bandwidth, pt 2

Continuing from Part 1.

To cut to the chase, your coaxial cable (and hence your cable company) is capable of delivering (roughly somewhat less than) 5Gbps into your house.

The math for that number is pretty simple. There's about 750MHz (or so) of RF spectrum available, divided into 6MHz "channels". Why 6MHz? Because that's about how much spectrum you need to deliver an uncompressed analog NTSC AV signal - and in the digital world, one such channel translates into (about) 40Mbps. So 40Mbps * 125 channels = 5000Mbps, or (roughly) 5Gbps.

That sounds like a big number, and it is, but there are a few mitigating factors, as discussed last time.

First, it is a "shared" connection. There are a certain number of households grouped into a "service group", usually between 200 and 2000 (very broadly), which connect to some physical networking gear at the cable plant (I use the term "plant" very loosely here). Within that service group, DOCSIS (the cable networking data interface protocol) basically works like a form of encrypted ethernet: everybody in the service group sees all the packets, but can only decode their own.

So right off the bat, your effective sustained speed (more on this concept in a bit), is 5Gbps divided by [number of homes in your service group].
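
As a back-of-the-envelope sketch of that math (the 500-home service group is just an assumed, illustrative number):

/* Rough plant math from above: ~125 6MHz channels at ~40Mbps each,
   shared across an assumed service group. */
#include <stdio.h>

int main(void)
{
    double spectrum_mhz     = 750.0;  /* usable RF spectrum, roughly */
    double channel_mhz      = 6.0;    /* one NTSC-sized channel      */
    double mbps_per_channel = 40.0;   /* digital payload per channel */
    int    homes            = 500;    /* assumed service group size  */

    double channels   = spectrum_mhz / channel_mhz;   /* ~125  */
    double total_mbps = channels * mbps_per_channel;  /* ~5000 */

    printf("Total plant capacity: ~%.1f Gbps\n", total_mbps / 1000.0);
    printf("Effective sustained share: ~%.0f Mbps/home (%d homes)\n",
           total_mbps / homes, homes);
    return 0;
}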

Additionally, some portion of that spectrum - those channels - is consumed by television. In fact, almost all of it. Today, only a single 6MHz channel is allocated to your High Speed Data (HSD) connection. That's a big part of what DOCSIS 3.0 promises - the ability to "bond" channels, effectively multiplying the available bandwidth by N 6MHz channels.

Assuming there's channel spectrum to allocate...

In digital form, some 10 to 16 or so Standard-Def (SD) channels can fit in a single 6MHz channel (multiplexed into a single MPEG container over that channel, for those curious - this is also important because it has implications for compression, specifically regarding CBR v. VBR). Two to maybe three or four High-Def (HD) channels can fit in the same 40Mbps. The range, incidentally, is largely a function of compression quality per digital channel; this is probably worth a future post.

Short version: per 6MHz channel, you get either 1 analog channel, 10 to 16 (or so) SD digital channels, or 2 to 4 (or so) HD digital channels.

The rub is this: most of the U.S. is still analog. Or at least, enough of it that most cable operators carry around 80 or so analog channels (out of a possible 110 or so). Then they consume another 6 to 8 or so "double-carrying" the same channels in digital form, and the rest is allocated to HD channels (triple-carrying many channels) and VOD (Video-On-Demand - more on how this works in a future post also).

Which doesn't leave a lot of room for your HSD connection.

Interestingly, the core technology and bandwidth available compare reasonably favorably even with newer technologies like optical fiber (remember that we're talking about the connection to the home from the "edge" of the network - "inside" the cable network is often optical fiber already; the challenge here is the "last mile"). Mostly the advantage of networks like FiOS is smaller service group sizes (owing to larger capital investments and other "late mover" advantages), and fewer legacy encumbrances.

October 2, 2007

Vista: 1 year later

Ok, it hasn't really been a year - the Vista launch was "officially" January 30 for consumers, with the actual release tepidly launching 60 days earlier, to businesses and developers, on November 30.

But it's interesting to note the contrast betwixt Vista's reception and Halo 3's (which just launched September 25).

I don't think there's anything wrong, per se, with Vista (other than the ridiculously massive gap between it and the last major OS release from Microsoft) - and there's much to recommend it - but it reaffirms for me that OS'es increasingly won't matter. Note that although Vista projections are down from MS, XP projections are up - the message, seemingly, being that one's just as good as the other.

I don't mean that snidely - as computing has moved from novelty to utility, consumer interest will be driven by experience, not capability. That is, "What have you done for me lately?", not "What could you do for me lately?" - which explains Halo's, um, halo.

Natural enough, but it probably has some significant implications in how Microsoft will/should think about the future of its platform... generating infrastructure that creates platform lock-in will be increasingly difficult.

September 24, 2007

Star Wars Redux

I can't believe I'm saying this, but I watched the "Family Guy" season premiere last night, and quite enjoyed it.

It was an hour long, but if you take out commercials and about 10 minutes of "Family Guy" nonsense (most "Family Guy" humour seems to be of the "it's-funny-if-it-goes-on-uncomfortably-long" variety anyway) - it was pretty much a 35-minute shot-for-shot recreation of the first Star Wars film (Episode IV, to be clear).

It may even be my preferred edition of Star Wars.

The CG was good, the action scenes were crisp, the pacing was tight, and Han shot first. Sure, the (voice-over) acting was a little wooden, but no worse than the original :)

September 12, 2007

Data problems

Not a new observation, or anything, but there's been an explosion in storage capacity, with revolutions of change still coming... but our ability to organize and manage that data has only incrementally improved over the last two decades. For example, at the high end, although every medium-to-large business drowns in an excess of data it has trouble managing, core relational DB tech has only evolved incrementally (at least commercially) - yet remains the tool of choice.

Cutting edge is considered selecting MySQL instead of Oracle - yeah, that's bold :P

Companies like Google, whose mission statement is about data organization, of course, don't just fall back on tried and true patterns - they created tools (genuine internal infrastructure) like BigTable to help address these issues.

In a similar vein (courtesy of Slashdot), you should check out the Database Column - a "multi-author blog on database technology and innovation" ("Column", get it? :D). Clearly, there's some interesting thinking going on in the space that will change how data management happens - column-based dB tech vs. row-based is really only the tip of the iceberg, but provides a nice visual metaphor for how sideways things will get.
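
If the row vs. column idea is new to you, here's a toy sketch of the underlying layout difference, in plain C terms (this is just the memory-layout analogy - array-of-structs vs. struct-of-arrays - with a made-up schema, not any actual dB engine):

/* Row store vs. column store, as a memory-layout analogy. Scanning one
   "column" (price) touches every byte of every record in the row layout,
   but only one dense array in the columnar layout - which is why
   analytic scans (and compression) favor columns. */
#define N_ROWS 100000

/* Row-oriented: each record is contiguous. */
struct Row {
    int  id;
    int  price;
    char name[56];
};
static struct Row rows[N_ROWS];

/* Column-oriented: each column is contiguous. */
static struct {
    int  id[N_ROWS];
    int  price[N_ROWS];
    char name[N_ROWS][56];
} columns;

long sum_prices_rowwise(void)
{
    long s = 0;
    for (int i = 0; i < N_ROWS; i++) s += rows[i].price;    /* strided */
    return s;
}

long sum_prices_columnwise(void)
{
    long s = 0;
    for (int i = 0; i < N_ROWS; i++) s += columns.price[i]; /* dense */
    return s;
}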

Interestingly enough, the "middleware" thing I referred to previously was in this space. If I manage to get off my keister with that project some weekend, I'll post a sample application... but don't hold your breath :)

September 4, 2007

Seam Carving Images

Boy - there were a number of interesting things at this year's SIGGRAPH, but so-called "Seam Carving" really seems to have caught people's attention. It seems to have hit upon exactly the right combination of: (1) Easy to demo, (2) Fake AI/magic (seeming) computer smarts, and (most importantly) (3) Relatively trivial to code. Still, even having read the preprints, I'm surprised at all the attention it seems to be receiving.
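
Point (3) is no exaggeration - here's a sketch of the dynamic-programming core, assuming an 8-bit grayscale image in row-major order (the energy function and edge handling are simplified from the paper):

/* Finds the minimal-energy vertical seam; energy is a simple
   absolute-gradient magnitude. */
#include <stdlib.h>

static int energy(const unsigned char *img, int w, int h, int x, int y)
{
    int dx = img[y*w + (x+1 < w ? x+1 : x)] - img[y*w + (x > 0 ? x-1 : x)];
    int dy = img[(y+1 < h ? y+1 : y)*w + x] - img[(y > 0 ? y-1 : y)*w + x];
    return abs(dx) + abs(dy);
}

/* Fills seam[0..h-1] with the seam's x-coordinate per row; removing the
   seam is then just shifting each row left by one at seam[y]. */
void find_vertical_seam(const unsigned char *img, int w, int h, int *seam)
{
    int *cost = malloc((size_t)w * h * sizeof *cost);
    for (int x = 0; x < w; x++)
        cost[x] = energy(img, w, h, x, 0);

    /* Each pixel's cost: its energy plus the cheapest of the three
       pixels above it (up-left, up, up-right). */
    for (int y = 1; y < h; y++)
        for (int x = 0; x < w; x++) {
            int best = cost[(y-1)*w + x];
            if (x > 0   && cost[(y-1)*w + x-1] < best) best = cost[(y-1)*w + x-1];
            if (x+1 < w && cost[(y-1)*w + x+1] < best) best = cost[(y-1)*w + x+1];
            cost[y*w + x] = energy(img, w, h, x, y) + best;
        }

    /* Trace back from the cheapest bottom-row pixel. */
    int x = 0;
    for (int i = 1; i < w; i++)
        if (cost[(h-1)*w + i] < cost[(h-1)*w + x]) x = i;
    seam[h-1] = x;
    for (int y = h - 2; y >= 0; y--) {
        int best = x;
        if (x > 0   && cost[y*w + x-1] < cost[y*w + best]) best = x - 1;
        if (x+1 < w && cost[y*w + x+1] < cost[y*w + best]) best = x + 1;
        seam[y] = x = best;
    }
    free(cost);
}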

Video, which explains what I'm talking about (it is fun to watch in action):



And demos, for Windows, in Java, and even Flash, if you just want to try it online (the Windows version is the coolest).

Pretty clever. (And yes, thank you, I've seen it :P)

August 22, 2007

Secrets of the Cable Universe #2: Bandwidth, pt 1

As with previous installments (ok, the one :P), this post also isn't so much a "secret" as it is a clearer explanation, with some implications, of widely available information. There's been an increasing fervor building around bandwidth, bandwidth management, and its implications as consumers now finally begin to consume rich web content at scale (if "Monkeys kissing women", "Man in underwear", or crappily digitized stolen content can be considered "rich" :P).

So how much bandwidth does the Cable infrastructure provide?
In past lives, I'd always heard that Cable was a "shared" pipe, while technologies like DSL were not - so let's explore what that means.

To go back to first principles, the coaxial cable coming in from the street to your house delivers about 750MHz (or so that's usable) of information spectrum (I'm not going to get into RF modulation and how it works here). In the old days, that 750MHz was split into 6MHz channels, which turned out to be about what you needed to deliver a single uncompressed "standard definition" (NTSC) audio/video signal - a channel, basically.

And that's why you basically had about 100 or so channels on Cable, and no more, really - there's a limited capacity to what the "cable" from the Cable head-end into your home could deliver. Fundamentally, it was constructed as a multicast technology - broadcast from the Cable company's head-ends, down your street, and split into homes in your neighborhood. Each node from the head end could pump a signal of sufficient strength to service from 200 to 2000 or so homes (really, really, roughly).

And everybody got basically the same content (though so-called "Conditional Access" would encrypt at the head-end and decrypt either at the entry to your house, or on your settop box - again, beyond the scope of this discussion).

Then came digital signals, and things got interesting. Turns out that over these 6MHz bands, you could send, oh, about 40Mbps of information (still multicast, of course - meaning everyone gets the same information). And if you MPEG-2 compressed your video, that worked out to - for Standard-Def (SD) video - about 10 or so video streams per channel (figuring 3.5 to 4Mbps per video). Or, to put it another way, you could pack 10 or more digital "channels" into the bandwidth occupied by a single analog channel.

In part 2, I'll cover how this maps to Internet Connectivity and your Cable modem, and how much bandwidth Cable really delivers to the home (it's not the number you think - do the math implied above).

August 15, 2007

Facebook, baby

(First: Sorry for the post dearth - it's August, what can I say?)

It started a bit slowly, but since Facebook opened up its doors to all comers, it's become quite the deluge from my social circles (I'm way behind on friend approvals still) - it took LinkedIn many years to achieve any critical mass for me.

Zero to hero very quickly... obviously curiosity and, quite frankly, a well-thought-out product with a positive developer eco-system have been rewarded (remember this idea?). In fact, it's no coincidence, I think, that the developer APIs coincide with Facebook's recent rapid rise beyond the college crowd... this is how you go from narrow to general: by letting your application become a platform.

That is, you succeed best by letting others' success feed you.

So, I'd been meaning to blog about this upswing for a few weeks now... and then I ran into this today:
Ick, old married guys on Facebook

It speaks for itself: Perspective is everything :)

August 2, 2007

The Future's A. Kay

In discussing past lives with a new colleague, I found that he worked (back in the day) at Atari, with, of all folks, Alan Kay (who knew Alan was at Atari?!?).

In any case, it reminded me of one of my all-time favorite quotes, uttered by the aforementioned Mr. Kay:

"The best way to predict the future is to invent it."

There's a boldness there that rarely fails to get my blood pumping.

But it's occurred to me: Alan's expression is also a rather clever, oh-so-polite and positive (if slightly passive-aggressive) way for a technologist to say: "When I want your opinion, I'll give it to you" (think about it) - no wonder I like it so much :)

How about you guys in the peanut gallery? Any favorite sayings that spin on multiple axes of rotation?

July 30, 2007

Filling the Gap (AKA Ideas are cheap)

So betwixt my last work environment and my current one, I had begun going down the path of (again) building my own software. I had (have) a number of ideas - but that's really the easy part. And I'd say "good ideas", but, in my humble opinion, the difference between a "good idea" and a "bad idea" is in the execution.

Which is to say, it's value you accrue in arrears, not a judgement you can really make a priori.

I suppose - best case - you hope it's a good idea before you start doing it.

In any case, I had 3 things I did actual work on to a real degree (in terms of execution): some middleware, a web app-y thing, and a tool. They had some overlapping technology components, but with reasonably distinct market segment targets. Given where I am now, I'm thinking I'll release them each in some form over the next few months - no sense letting them rot on my hard drive.

So - check back; more info soon.

July 16, 2007

GPL v3: A mess of your own making...

The GPL v3 was ratified and released a few weeks ago by the Free Software Foundation - which owns, and upgraded to GPL v3, the copyrights to an important swath of Open Source Software (OSS).

The intent of the GPL (GNU General Public License) all along has been to empower people who were using software released under the GPL ("copyleft" software) to make modifications to said software, and to use and redistribute those modifications and improvements to others, for collective benefit. The primary legal intent of this latest revision, as I had mentioned previously, was to close the server and computing devices "holes" in previous versions - whereby companies were:

(a) using GPL software on their servers, but NOT releasing changes or improvements to anyone, per se - because they were not distributing the software (which was the previous trigger obligating a company to share its improvements to any GPL software), but rather letting users consume software services remotely, and/or

(b) preventing (effectively) modification and re-distribution of the software because their device was in some way preventing any modified software being loaded or running on the device (so called "TiVo-ization").

There are other OSS licenses which were always intended to be less radical: BSD, MIT, ZLIB, MPL, etc. These licenses have always been of the "do-what-you-want-but-it's-not-our-fault" variety, and have arguably been as successful with their projects as the GPL - and the GPL has often been described as "viral" in that any GPL code may not be linked with non-GPL code.

But that's not the point.

The point is that they revised the license to close the loophole. And from there, much hilarity ensued:

Microsoft statement about GPLv3
Groklaw on Microsoft and GPLv3
GPL: fear is the key

Linux creator calls GPLv3 authors "Hypocrites"
Apple and the GPL

So, to summarize: the group that OWNS the copyright to some software closes a loophole in how they intended that software to be consumed - and everybody's pissed (note that this is NOT retroactive - nor could it be; that's part of the intent of the philosophy of the license!). The big problem, of course, is all the folks who (a) mistook the GPLv2 as giving them rights they weren't supposed to have (e.g. using it to build closed-source server software) and/or (b) took the license as a contract for future commitments.

Oops. Sorry.

(btw, Linus Torvalds, of Linux fame, expressly does NOT seem to be confused. He just doesn't like it - and as copyright holder to the Linux kernel, he's free to do as he chooses - and I suppose to make overwrought analogies.)

July 9, 2007

Presence is not an Identity feature

I realized in my "Unique Visitors" post that I skipped some background - I didn't "show my work", as every math teacher I ever had would say.

When Google released GTalk, I remember feeling like they had missed an opportunity to correct the Presence, Identity, Messaging and Accounts model - that they should have made "location" more than just metadata - but I don't think that's right, actually.

On the PC side of the Instant Messaging (IM) universe, Presence is exerted by an Identity, and (nowadays) often qualified by Location (for Location Based Services) and/or Network of Origin (for interoperating messaging systems). But I think Presence, especially for Comcast, is really a device feature, and Identities are transiently bound to those devices, generally with an (essentially) eternal TTL (time-to-live) - though in the case of PCs, that TTL amounts to the IM session length.

Identity is a kind of user virtualization construct (usually reasonably synonymous with "Credentials": username and password - at least for our purposes), while Presence advertises availability for messaging, and Messaging is generally a point-to-point session negotiation and communication pipe, though it may be one-to-many (broadcast), or many-to-many (chat). Note that Messaging and Presence need not necessarily be interdependent. Finally, Accounts are a billing relationship construct.

A typical use case in a household is likely to involve multiple STBs (set top boxes or other consumer electronics devices on the Comcast networks), multiple .net IDs (e-mail addresses), and multiple PCs in the house, where Presence, Identity, Messaging and Accounts each overlap. And of course wireless, school/office, and multiple residences complicate the picture.

Usually, Presence scalability is limited by subscription/notification events - by that I mean the "m x n" implied by the "Buddy List" construct, where any user may be subscribed to presence notifications from any n users (effectively O(n^2)). Messaging is bound by active simultaneous communication sessions (and their type - multimedia, for example) and is usually O(n).
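
A toy sketch of why that fanout dominates (all the types and names here are purely illustrative, not any real IM server):

/* One status change notifies every subscriber, so with n users each
   watching up to n buddies, system-wide cost approaches O(n^2) -
   versus messaging's O(active sessions). */
#define MAX_BUDDIES 256

typedef struct User {
    int          id;
    int          online;
    struct User *subscribers[MAX_BUDDIES]; /* who watches me */
    int          n_subscribers;
} User;

static void notify(User *watcher, const User *who)
{
    (void)watcher; (void)who;  /* stub: enqueue a presence event */
}

void set_presence(User *u, int online)
{
    u->online = online;
    for (int i = 0; i < u->n_subscribers; i++)  /* O(subscribers) per event */
        notify(u->subscribers[i], u);
}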

In any case, I'm just rambling now - all of that's a roundabout way of saying that messaging is a service feature, not a product :)

July 2, 2007

Forever Haldeman

Between moving, commuting, traveling, and the like, I've been spending a lot of time in transit - which more or less sucks. On the "less" sucking side, it's given me the opportunity to do some reading, and I've recently "discovered" the works of Joe Haldeman.

Haldeman is probably best known for his Hugo and Nebula award-winning novel "The Forever War" (published in 1974), and it had been on my I-should-read-that-but-probably-won't reading list for a while. To make a long story short(-ish), I finally got around to reading it.

It's good. Quite good.

So much so that I've been devouring my way through all of his works. His writing is definitely of the "hard" sci-fi genre, but closer to Asimov, in spirit, than, say, a Stephen Baxter. That is to say, the science is grounded in very real extrapolations of existing technology (and theories), but the technology is a backdrop to explore the human social condition - particularly warfare. For example, "The Coming" is not about an impending alien visitation, whatever the book jacket says, which I think might have led to some confusion/disappointment with readers and reviewers. Rather than wallowing in the ideas of science (which, normally, I love doing :)), Haldeman writes, for lack of a better term, "chick sci-fi": it's about the relationships.

The only real criticism I have - the only thing I'd argue keeps Haldeman from being more commercially renowned - is the way he closes his novels. The endings are satisfying, but oddly abrupt. I've read more than a few of his novels at this point ("Old Twentieth", "The Coming", "Forever Peace", "Camouflage", etc.), and although none of the endings are bad (at all - they're logical, exciting, and compelling), none of them are what I'd call great either. They do the job, but spartanly.

In any case - highly recommended. Go enjoy.

June 27, 2007

Secrets of the Cable Universe #1: VOD ffwd/rwd

I've now been working at Comcast for about 6 weeks-ish, and so far, I'm having a blast. Both culturally and technically it's a fairly dramatically different environment (along a different axis than the small-to-big transition that AOL was... more on that another time) - which is always fun.

Although I'm finding my skills, talents, and experience are useful (thankfully), the whole Cable/Telecom universe is completely new to me, so the learning curve is both vast and interesting.

In that vein, I thought I might share some of the random but interesting technical tidbits that manifest themselves in odd ways, whether operationally or in terms of the consumer experience. Nothing I'll share is (obviously) actually a secret - it's either public information and/or subject to trivial induction from public information.

For example, one of the significant features all the Cable operators (MSOs) have introduced over the last few years is Video-on-Demand (VOD). Unlike the "your-movie-starts-every-15-minutes-on-4-different-channels" model that the satellite providers started with, the new VOD systems actually dynamically allocate a unique "physical" channel from your local cable head-end when you select a movie. The video asset is then played over that channel, which your set top box (STB) is automatically tuned to, as if it were any other channel in your channel lineup.

So the interesting "secret" is that in order to enable fast-forwarding (and rewinding) of the assets, each media file actually has additional "trick files": copies of the asset at +/- 2X (or whatever the speed multiplier is). When you press the ffwd or rwd button on your remote control while watching a VOD asset, it's actually dynamically switching to another asset at the right time code, and playing from there.

And that is why you (currently) only have one speed for fast-forwarding or rewinding: more speeds would have required many more media assets (one at each speed) in the VOD storage systems.
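
A hypothetical sketch of the switch itself (the names are illustrative - no real VOD server API implied): since the 2X trick file plays the same content in half the time, the mapped time code is just the current position divided by the speed multiplier.

/* Each speed is a separate pre-rendered asset, so "ffwd" means
   "tune to the 2X copy at the mapped time code". */
typedef struct {
    double position;   /* seconds into the 1X asset          */
    double speed;      /* 1.0, 2.0 (ffwd), -2.0 (rwd), ...   */
} PlaySession;

/* The trick file covers the same content in 1/|speed| of the time,
   so the time code within it is position / |speed|. */
double trick_file_position(const PlaySession *s)
{
    double mult = s->speed < 0 ? -s->speed : s->speed;
    return s->position / mult;
}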

Clever, but strange...

June 21, 2007

Review: Surf's Up

We took the kids to see Surf's Up over the weekend. I've been getting pretty tired of the anthropomorphic kids' movies that rehash teen comedy plots from the 80's (especially if they include bears, deer, or penguins), but, unlike at other recent outings to kid flicks, neither my wife nor I fell asleep!

The "mockumentary" conceit of the narrative was engagingly entertaining, with the characters distinct and interesting, even if a bit archetypal. And the camerawork, lighting, and animation were superb - possibly the best I've seen in any animated film to date.

In short, go see it! (if you've got kids - it's no Shrek :P)

June 11, 2007

Safari for Windows

Get it here.

(Though be warned: at a 28MB download, it's WAAAAAAAAAAY larger than Firefox, IE 7, or Opera)

June 5, 2007

Unique Visitors are not users

Being at a Cable/Media giant now, as you might imagine, we discuss advertising a fair amount, and in particular, exploring the strengths, weaknesses, and, really, differences in how the advertisers, content creators, and distributors think about the eyeball value chain on the web vs broadcast media.

One of the obvious but interesting observations for web metrics is that the commonly used measure for audience is not actually people. That is to say, the "visitors" referred to by "unique visitors" aren't people at all, but devices. And even that's a bit of a misnomer, because for PC users it's really a per-computer, per-OS-user-account metric. Whether it's a browser cookie, a Flash local shared object, or a Google Gears data store (the latter two don't get cleared when you delete browser history, btw) - they are all at the same level of "user" granularity. I'm going to suggest that the OS user account is really a poor man's device and data virtualization technology, much in the same way that Multifinder was a poor man's multitasking technology back in the days of the original MacOS - and thus, we're talking about a device metric.

Unique visitors (UVs) are really a direct measure of how many devices connect to a given site. That's correlated, of course, but not identical to the number of actual users visiting that property. Some sites you may use only at home or at work (one UV per user), some may be used at work and at home (two UVs per user), and in cases where many users share the device (home computers, or set top boxes, for example) it may be one UV for many actual users.

Magazines will often refer to the "pass-along" index of a magazine: that is, how many people might actually read it but may not have purchased it (House & Garden has a pass-along readership of 14 or 15 people per sold copy, whilst National Geographic is around 5 or 6).

In this area, the Internet is surprisingly immature, given the promise (and increasing reality) of behavioural, demographic, and metric oriented targeting of the world's many-to-many publishing medium. This kind of thing becomes important not just for CPM advertising (that is, impression and brand based advertising), but even more so when considering the efficacy of CPA advertising (so called "Cost-Per-Action" advertising).

I mentioned magazines rather specifically because it appears that Internet advertising growth is coming most directly from print and publishing, and not at all at the expense of broadcast (it's shrunk nationally, but more than compensated in other channels). Perhaps we need to extend UVs to be UV/Us (Unique Visitors/Users), much as Nielsen does for TV ratings/share, to extend more actionable transparency to advertisers and targeting technologies?

May 30, 2007

Not Steve Jobs

I've been reading the "Fake Steve Jobs" blog for a while... but today's entry (concerning Microsoft's Surface computing device - yes, I'll be buying one) was frikkin' priceless...

An excerpt:
"And what is up with all these stories like this one where the writer gushes about how you can just squeeze photos to make them smaller or stretch them to make them bigger. Golly, can you believe it? Well, yeah, I can, since I introduced this several months ago and I'm going to be shipping a real product that employs this technique in only a few weeks.

This Surface thing is such classic Gates. He copies our idea, but in a frigtarded, impractical way..."

Funny.

May 29, 2007

The Bandwidth Shell Game

Whilst getting my slashdot groove on yesterday, I encountered this: Will ISPs Spoil Online Video?

The main thrust of the article is that no ISP can actually deliver the "promised" sustained bandwidth for all users on its network (or even a large percentage of its users) at any one time.

The article is basically true, in the facts, and I've touched on the topic of video bandwidth and the 'net in the past, but it's (somewhat) unfair to narrow this to an ISP issue. (I say "somewhat" considering my previous gig was at an ISP, and my current employer offers Broadband ISP services, so perhaps I'm not the most objective here...)

For example, every website plans against peak load, not total possible usage - same problem: you can't access promised services (paid or free) as advertised/committed. And, more on point, Google's ever-increasing Gmail mailbox size is also bogus in the same sense - they can offer that much storage because not everyone uses 2+GB for mail (very few do, in fact).

Really, all businesses do capacity planning (online AND offline) to determine pricing (and therefore marketing claims), and bandwidth is no different in this regard.

I can't even make a call for the first 30 minutes after American Idol ends - wireless capacity planning never foresaw the Seacrest effect. And when I went to the Buffalo Wing Factory in Va ("Home of the Flatliner") for some spicy buffalo wings one night after a goodbye party for a departing colleague, they were, in fact, out of Flatliners. Grrrr....

What makes it thorny for most connected users is that the usage profile of the service, of the Internet, continues to evolve very rapidly, making terms of service seem quickly antiquated. What people should bear in mind though, is that the terms of service are simply a reflection of the economic and topological constraints of the network itself - usually in place to guarantee some core QoS (Quality of Service) for as many customers as possible.

Nobody's trying to trick anybody, or game the system - but you can't plan for what you don't know, and the increasing interconnectedness of things make prediction a dicey thing. That is to say, the dumber the network, the less visibility available.

Consider, for example, that P2P applications are good (i.e. lower cost) for the endpoints (origin and destination), but usually mean MORE traffic (i.e. cost) for the network itself.

May 21, 2007

Microsoft's good at this...

I'd love to see Microsoft's "best practices" guide on the platform effect - they're good at it (generally). Much like the developer entanglement Adobe's attempting with their eco-system, it's hard to argue that 4 gigs of Silverlight storage and streaming is bad for developers... just check out the community response (from a self-proclaimed Adobe "Flex Machine", no less).

Though, these days, this seems more like a Google tactic than a Microsoft one... what does that tell you?

Copyright Law Farce

This won't last long, I'd guess. Courtesy of Slashdot. Watch it while you can. The, um, chosen "medium" makes it a little tough to watch if you're not attentive, but serious points for creativity and chutzpah - it even ends with full disclosure of the creators, and enumerates each clip "borrowed" under fair use.

May 15, 2007

IT and the Edge of the Network

My new work situation brought up an old debate with a good friend (perhaps a good debate with an old friend? Works either way, I suppose... but I digress): the future topology of data and computing models on the network.

Or to put it another way: where do the leaf nodes connect to the edge of the network? Locally, in the home, as a gateway for experience (CPE, in my new lingua franca), or remotely - that is, "directly" to remote applications and data stores?

This was/is partially a "client side computing" debate - where and how are performance, security, and storage best optimized.

But the observation at the end of it was this: "The world only needs 6 servers" arguments are currently in vogue (with consumers, who speak with their time) because, well, IT management sucks. To wit, allow me to posit: It is easier (i.e. better) for most users to use remote applications with remote data, because it pushes the information management pain to professionals.

In order of "pain in the ass to maintain": Windows, Mac, Cell phone... not un-coincendentally, also a measure of how closed the software and hardware eco-systems are, in practice. Game consoles are particularly interesting in this regard (I'm rating them as easier than cel even), as everything but the VERY top layers of the stack are single sourced - sounds suspiciously like the RIA platform arguments, no?

(And all the User Account Controls in Vista, and the installation hurdles for Apollo, only argue against the edge being at the desktop for most applications...)

May 9, 2007

Review: Spiderman 3

Kinda blew chunks.

Sure - there were some snippets that rocked, but they were mere wisps of symphony in a sea of rhythmless plot meanderings and tone-deaf dialog. Spiderman 3 seemed like it was trying to create some sense of depth, direction, and development - as previous installments did more successfully - but everything felt far too forced to be much fun. Too much motion (emotional as well as physical) into too little movie...

My advice?

Wait for it on DVD, where there's a fast forward button, or if you must go see a movie on the big screen, try to find the infinitely smarter, funnier, and edge-of-your-seat exciting Grindhouse (best movie I saw this year), or the guy-flick adrenaline orgy that is 300 (Persian [dramatically]: "Our arrows raining down upon you shall be so numerous that they blot out the sun!" Spartan [laconically]: "Then we will fight in the shade...").

May 4, 2007

New Gig: Comcast Chief Software Architect

Starting Monday, I'll be working again, joining Comcast as SVP and Chief Software Architect.

The first thing I said when talking to people at Comcast (early on) was "Comcast? Software? Isn't that an oxymoron?" Suffice it to say, today they're a Cable Company... but tomorrow?

Well... more information in future posts... but mostly I was motivated, at a personal level, by three (professional) considerations:

(a) At this point in my career, I wanted to either go do my own thing again (my usual default stance), or take a significant role at a significant organization; Comcast did $26B in revenue last year and, though still facing far more demand than supply in the marketplace, is aggressively pursuing a deep business transformation for the new millennium. They're in a great place, and they're running very fast.

(b) I'm gratified that my particular skills and experience will be applicable and valuable, but also wanted to play in areas of complexity where I had little experience (Cable/Telecom) - nothing beats tackling new problems. And Comcast has no shortage of that, though they have a strong market position.

(c) I was quite impressed by the caliber, intellect, and alignment of the executive team. That last, in particular, I've found to be uncommon in large organizations (not just from my AOL experiences, but the half dozen or so "usual suspects" I've spoken to over the last few months).

Updated my LinkedIn profile last week - and am currently drowning in paperwork (employee stuff, moving, buying, schooling, re-locating, eeeerkkk...) - but it's all good :) In particular, my wife grew up in Philly (where Comcast Corp is headquartered), and it was while working near there that we met, so geography is actually a big part of the appeal, too - we have lots of friends and family in the area.

May 1, 2007

MIX '07 and Ray Ozzie

Ray Ozzie, Chief Software Architect of Microsoft (along with Scott Guthrie, General Manager of the tools group), presented the keynote at MIX '07, Microsoft's 2nd annual Web UI and development conference.

The core tenet of Ray's argument is that client-side computing is vital to delivering rich experiences. I think he made that case well (and I agree with the rationale), but Ray didn't really address how or why Silverlight specifically, and WPF generally, is better than browser + DHTML/SVG/Flash/Java, or whatever, in terms of ANY richer function and/or end user benefit.

Specifically, the undertone of the arguments, from both Microsoft and Adobe, is that a single-sourced runtime is better for the developer - more consistency across a wider variety of platforms (browsers, OS'es, devices, etc.). And, as a practical matter, it's hard to disagree with that - and there's enough that's "open sourced" by the vendors to reduce impedance in the development chain.

Speaking of which (and not to be overlooked), the development chain that Microsoft is putting together is nothing short of phenomenal. If there's a "secret sauce" in Windows' continued dominance in the Enterprise (and thus, everywhere else), it's the tools, class libraries, etc.; they continue to define the cutting edge of developer productivity. Perhaps somewhat counter-intuitively, the chain seems particularly well designed for small teams, and becomes an easy way to develop prototypes - and, in turn, to go from prototype to production.

Platform success, which I'll define as self-sustaining propagation and increasing barrier to exit, is definitely AADD (All About the Developers, Dummy).

There was also a Michael Arrington interview with Ray and Scott, which might as well have been conducted by a Microsoft employee - at least that way there would have been less mumbling. What few interesting questions were asked went basically unanswered.

April 30, 2007

MIX '07 (and Silverlight)

I'm at MIX '07 in Las Vegas. Come find me if you're there (here).

Also, I should note (based on surprisingly copious amounts of e-mail I received) that I didn't comment on Microsoft's "announcement" of Silverlight at NAB because, well, there's no there there. Silverlight is the new name for what used to be called WPF/e - and the news at NAB was just that: a name change.

The actual technology available was the same February Community Technology Preview that had been available for months. I expect that will change this week, but I was impressed with the vortex of press MS was able to create for their naming ceremony :)

As an aside, BubbleMark presents a very simple, very basic, and very artificial performance test (that is, I think the results bear NO correlation to any real-world application), but it's fun to look at the code for each version.

Updated: Yep - here's the new "1.1" Silverlight build, which includes the cross-platform mini CLR (essentially .NET lite for Windows and OS X). The distribution size went from 1.1MB to 4.2MB (just a developer build, so no judgment yet). It will be interesting to see what the final size is once Silverlight approaches functional usefulness (it's not there yet).

April 26, 2007

Adobe Flex Open Sourced

Adobe announced today that they're open sourcing the Flex SDK (which includes the compiler, debugger, and Flex ActionScript libraries). It shouldn't come as too big a surprise, considering their recent MPL release of Tamarin, their ActionScript 3 VM, to the Mozilla Foundation.

The model, as with PostScript and, quite frankly, Windows, is the "Platform Effect" - monetizing both the runtime (PostScript/Windows/Flash) and the rich(er) enterprise-level tooling around it (authoring tools, servers, etc.). Releasing the specifications and "core" tools creates the illusion of freedom in the tool chain while actually delivering vendor lock-in - which isn't necessarily a bad thing for developers if (a) there's runtime ubiquity, and (b) the developer's not on the hook for distribution costs.

And getting developer buy-in (lock-in?) creates a "virtuous cycle" of scale for the platform provider... ultimately why API and specification ownership is so critical in the technology business cycle.

Although I think Adobe's Apollo (which is Flex driven) is still slightly off in its execution of product distribution, overall the company is doing a good job driving a giant truck over the ongoing bungling that is WPF (I mean... Silverlight).

And (naturally) this will impact (squeeze out) smaller players like Laszlo, Haxe, and mtasc more than it will the big players...

April 18, 2007

You know you're a geek when...

So... I got a package in the mail yesterday, but rather than getting the scissors from the kitchen, or (god forbid!) just using my hands to open the package, I just flipped open my laptop, and went to Amazon to see what I had ordered....

April 10, 2007

Shift Registers and De Bruijn Sequences

I tell ya' - every time you think you've got a novel idea, it turns out somebody's been there already... doesn't even seem to matter how small it might be.

For example: I have this minor function I wrote ages ago, which I had recently rewritten/had to re-derive. Its purpose was fairly mundane - I wanted to compute the (integer) log base 2 for a power of 2 integer value. Pretty simple and esoteric problem, but it pops up fairly often in graphics/sound programming (especially 3d stuff).

There are lots of solutions, and it's not really a significant performance bottleneck or anything, but I always thought my solution was rather novel. The code looks like this: (link)


The idea is pretty simple, but clever (er... if I do say so myself, I suppose :)).

Very (very) briefly: multiplying by a power of 2 is the same as left-shifting by that power - and determining that power is exactly the log we're looking for. So we can construct a 32-bit constant in which every 5-bit window is unique, covering the values 0-31 exactly once. Multiplying that constant by a power-of-2 number left-shifts it, putting one of those unique 5-bit sequences into the top 5 bits of the result; we then shift that down and use a small table to "reverse" it into our answer. We need the table because, of course, the encoding isn't linear with respect to the domain.

There are a number of constants that fulfill these criteria, and I always thought of it as a compression/Huffman-tree encoding problem. The constant I computed is 0x04ad19df.

Turns out not only is this a well-known idea, but, further, it's an instance of a classic combinatorial construction (closely tied, amongst other things, to shift registers) known as De Bruijn sequences. See the great "Bit Twiddling Hacks" page by Sean Eron Anderson for more information. The constant they computed, which of course also uses a different table, is 0x077CB531.

Results are the same.
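For the curious, here's a minimal sketch of the technique using the published Bit Twiddling Hacks constant and table (my own version, with its different constant and table, is behind the link above; the function and table names here are just mine for illustration):

    #include <stdint.h>

    /* Integer log2 of a power-of-2 value via a De Bruijn sequence.
       Constant and table are from Sean Eron Anderson's "Bit Twiddling
       Hacks" page; v must have exactly one bit set. */
    static const int DeBruijnBitPosition[32] =
    {
         0,  1, 28,  2, 29, 14, 24,  3, 30, 22, 20, 15, 25, 17,  4,  8,
        31, 27, 13, 23, 21, 19, 16,  7, 26, 12, 18,  6, 11,  5, 10,  9
    };

    int log2_pow2(uint32_t v)
    {
        /* Multiplying by 2^n left-shifts the constant by n, leaving a
           unique 5-bit pattern in the top 5 bits; the table maps that
           pattern back to n. */
        return DeBruijnBitPosition[(v * 0x077CB531U) >> 27];
    }

Quick sanity checks: log2_pow2(1) == 0, log2_pow2(8) == 3, log2_pow2(0x80000000) == 31.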

Ah, well. Time to move on, I think..... :)

April 3, 2007

Hah.

Guess I wasn't the only one who found the Microsoft ads borderline scam-alicious..... except, minus the "borderline": Microsoft Sued Over Vista Marketing

March 26, 2007

Scam I am: Microsoft Vista Advertising

My favorite all-time scam: Guy promises to get you into the college of your choice for $5000 - Money back guarantee... if you don't get it, you don't pay.

The scam? The guy does nothing. People who aren't on the edge of making it anyway won't sign up for this... and some percentage of those who do, will get in - on their own. Those that don't, get their money back... and the rest are happy not knowing any details.

Which brings me to today's topic :)

Now... I like Vista. As I've mentioned, it's got its quirks, but it's absolutely a great upgrade from XP... But the ads for Vista (and the "Wow starts now" campaign) are just incredibly wrongheaded, if not a borderline scam.

Take, for instance, this ad from Microsoft:


It's not that, subjectively, Windows Vista is nicer but not a "wow" upgrade... (true, but, you know, it's an ad, not the truth :)). No, what, um, boggles me is that, as far as I can tell, *all* of the ads centrally showcase a feature (the "3d flip") that the average user will not only NEVER encounter using the product, but will never even figure out how to activate should they want to - it's a feature that tested poorly enough in usability that it was, effectively, relegated to a hidden key-combo (Win+Tab) for demo-ware-only usage....

The "wow, really?" starts now.

March 22, 2007

Adobe Apollo

Adobe posted the first public preview of their Flash-based content runtime, codenamed Apollo, on Monday. It's pretty good - were I Microsoft, I'd be concerned.

I've discussed the ideas at some length before, and Adobe's offering is clearly the strongest one out of the gate... Microsoft's WPF(/e) strategy is very confused (at best), and XULRunner, from the Mozilla Foundation, is potentially promising, but in practice looks to be similarly unsure of what its real goals are (for example, I think the ECMAScript edition 4 spec at its core does a poor job of maturing a powerful dynamic language).

But the Adobe guys seem to get the real problems the browser itself solves (from a developer perspective): a unified cross-platform development model (not for cross-platform apps, per se, but to enable the broadest developer knowledge) and distribution.

It's an alpha, so there's quite a bit of goofiness, and it suffers from many of the foibles and issues that Flash does, but all in all... it's very credible as a development platform. I think the distribution and navigation aspects skew too heavily toward the desktop-application paradigm, and that's a big mistake - but it's one strictly of UI, not technology, so hopefully it can be addressed.

One nice bit of icing is the inclusion of a full web-browsing component, enabling easy consumption of existing web content and infrastructure in your new "desktop" application. It's also the first broadly available instance of KHTML/WebCore (the same browsing engine used by Safari on the Mac and Konqueror on KDE/Linux) on Windows. So if you want to see how your site might look on the Mac, you can check it out with one of the Apollo samples ("Scout") on Windows...


(and a minor item for team Adobe: if you haven't already exorcised icu and iconv from WebKit on Windows, you can save a few MB of distribution size by doing so...)

March 12, 2007

Oh the humanity...

This post will probably read like spam, but....

GREAT website... click here to learn more!

In all seriousness, the website is a nice resource - it posts instructions for reaching a live, actual person at the call centers of a variety of different companies. That's especially useful when you know your problem just isn't one automation can help with....

...or you get cut off (either talking or on hold)...
...or don't find the right person to help you...
...or don't have the time to wade through their voice menus...
... or... well, you get the idea...

Anyway check it out:
http://www.gethuman.com/us/

Automation systems are fine, in principle, and voice is great for high throughput (especially the kind embodied by abstract thought, or requiring dialog), but generally poor for latency (compared to, say, text or the like) - so it's not so great for user interfaces.

Futurists have been extolling the coming input revolution for a long time - but clearly speech recognition is/will be only a tiny, tiny part of solving the problem.

March 5, 2007

CPU Evolution

Hardware.info has a nice article outlining the evolution of computing hardware from the CPU/motherboard/bus model to the co-processor/streaming model. It's probably long overdue, and will require a substantial reworking of software to fully leverage, but it only makes sense: many computational tasks are ideally massively parallel, yet we've been increasingly (and, arguably, at long last) gated by the linear HW/SW programming model.

This isn't just the next pin-configuration standard or motherboard communication protocol (a la PCI Express 2.0) - it's a paradigm shift, though in practice the results will be phased and somewhat gradual; there's significant infrastructure in place that will all have to change before the benefits are broadly seen.

That said, there are two other "industry" factors I think the article ignores...

First, the increasing proliferation of general-purpose computing interfaces and access points (phones, set-top boxes, consoles, PDAs, iPods, and the like) - i.e., NOT the desktop - and second, the concurrent and related emergence of the web software model; the network is going to become (is?) the bottleneck for most in the real world, and the PC will be only one of many entry points.

Additionally, although AMD (and therefore ATI, which they recently acquired) and Intel have a shared vision of the "new PC", I'd imagine NVidia imagines it shaping up differently...