Vinod's Blog
Random musings from a libertarian, tech geek...
I tend to stay away from most mass-market business books, mostly because, well, they suck (The Tipping Point being an example of a quasi-biz book). We've all seen those joke emails laced with consultant-speak, false war & sports analogies, and biological euphemisms from business pundits. And, quite frankly, the jokes tend to be spot-on and only barely a caricature of most of the paperback biz books out there. I fear for the future business leaders of the country when I hear about the kind of circulation some of these books get.

Clayton Christensen's Innovator's Dilemma, however, is easily one of the best business books I've come across in years. I first read it in '98/'99, when it was a favorite amongst the Microsoft executive staff. Paul Maritz, for example, had purchased an entire case of the book and was distributing copies quite liberally to many of the top thinkers at MS. I, alas, was relegated to purchasing my own copy.

It's also interesting how much of the lexicon pioneered by the book in '97 has permeated Silicon-Valley-speak. We casually speak of "disruptive technologies", "self-cannibalization", and the like -- so much so that some folks who read the book today might be tempted to simply say "yeah? so what?" without recognizing how recently these terms emerged. The intellectual framework behind these buzzwords, laid down (or at least popularized) by Christensen, is still pointed and specific. Christensen presents the innovator's dilemma as [xii]:
The key tool for determining when to listen and when not to is whether a market is being faced with a disruptive or a sustaining technology [xv]
He argues that, in a very conventional ROI sense, disruptive technologies present themselves as irrational investments for established firms within a given market --
A central concept in Christensen's framework is the notion of a Value Network: the network of relationships and competencies, both within the firm and across its suppliers and customers, within a given market. The value network maps to an industry-wide architectural model for a given product or service. Network element boundaries generally map to the areas where firms focus on sustaining innovations [p 30]:
People who've been through tech v1.0 product cycles are particularly aware of the degree to which initial product architecture shapes downstream org charts. I would add that in some cases the reverse is also true -- one of the classic reasons it's so hard for industry consortia to generate truly useful innovations is that the org charts of the participants adversely shape the proposed architecture. (The parallels to the EU's and UN's governance models are *SO* painful here... but that's the subject for a different blog article.)

Christensen argues that value networks within existing firms artificially orient the firm towards soliciting product development feedback from existing customers. An adverse selection occurs: the top tier of the firm's customers -- and thus the ones most satisfied by its execution within the existing value network -- are the ones polled to influence future product design. This, of course, unnaturally biases the firm's development direction towards reinforcing its existing network. The value network, at every little step of the way, makes leveraged reinvestment within the existing network a greater ROI proposition than looking away to alternate networks.

A core inflection point is the interplay between demand and supply from different technology sources [p 54]:
Put simply, there's what consumers want to do with your tech, what your tech enables them to do, and what alternative tech enables them to do. The sophistication of what consumers want to do can grow at a slower rate than the power of incumbent firms' technologies (do consumers *really* need a 64-bit CPU in their PC in the near term?). When dominant tech patterns "overshoot" like this, they are prone to being skimmed from the bottom by an alternate value network (e.g. Dell shipping PCs with AMD chips). Christensen uses the term "performance oversupply" for this situation: the product you're supplying does far more on a given axis than your customers really need.
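To make the trajectory argument concrete, here's a minimal sketch -- my own toy model, not anything from the book -- that compounds two curves, what mainstream customers can absorb versus what the incumbent's technology delivers, and reports the year the incumbent overshoots. The function and every number in it are purely illustrative assumptions.

```python
# A toy model (my own construction, not Christensen's) of the two trajectories:
# what mainstream customers can absorb vs. what the incumbent's technology
# delivers. All starting points and growth rates are made-up, illustrative numbers.

def first_overshoot_year(demand_start, demand_growth,
                         supply_start, supply_growth, horizon=20):
    """Return the first year in which delivered performance exceeds what
    mainstream customers can actually use, or None within the horizon."""
    demand, supply = demand_start, supply_start
    for year in range(horizon + 1):
        if supply > demand:
            return year
        demand *= 1 + demand_growth   # customers' needs grow slowly
        supply *= 1 + supply_growth   # the technology improves quickly
    return None

# Hypothetical rates: demand absorbs 10%/yr more, the technology improves 35%/yr.
print(first_overshoot_year(demand_start=100, demand_growth=0.10,
                           supply_start=40, supply_growth=0.35))
# -> 5: after ~5 years the product does more than its customers need,
#       i.e. "performance oversupply", and the low end is exposed.
```

With those hypothetical rates, the incumbent overshoots mainstream demand in about five years -- exactly the kind of window in which a "good enough" entrant can start skimming from the bottom.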
So how should a firm faced with an imminent disruptive technology react? Christensen lays out a few ideas [p 99]:

These ideas are all variations on "corporate intrapreneurship" -- entrepreneurial activity within the socio-economic shelter of an established firm. Here in the valley, a critical component of such an activity is a competitive compensation package for the employees / managers of such a group.

There are, however, at least a couple of problems with the theoretical completeness of the story that Christensen lays out.

FIRST is the problem of false positives. Especially after the Internet bubble, we are painfully aware of just how often some joker pops up and says "I'm the disruptive technology for market XXXX. The market is worth $500B and I'm going to start by grabbing 0.1% from its bottom." Christensen's framework was constructed entirely in hindsight, starting with the victors, and thus only accounts for technologies that were successfully disruptive. He does NOT provide much of a theoretical framework for figuring out when candidate disruptive technologies will NOT take hold. Dominant firms would be utterly bankrupt if they had tried to follow each of these threads. The demise of the PC, for example, has been oft predicted (and is even a forecast Christensen makes back in '97!) but hasn't happened yet, nor does it look likely for the next 4-5 years. $1000 PCs killed off the $600 network computers. Now, if you start projecting out almost 10 years forward, then just about anything has a shot at being true. For one of these predictions to be interesting, it needs to be prescriptive within a reasonable timeframe.

Still, if one takes an abstract enough view of Christensen's framework, there is some explanation -- the PC market was successful in innovating new use cases to ensure that consumer demand mapped adequately to the architecture's "supply". The growth of the Internet and digital media created new use cases for PCs, which staved off the inevitable cheapening of the architecture. In the case of CPUs, the perennial risk is that consumers start responding to better price/performance curves by choosing lower cost at a constant level of performance.

SECOND, and somewhat related to the first, is the problem of figuring out when there is true displacement vs. additive usage. New technologies can arise which appear from the outset to be disruptive to an existing market but end up carving out / creating their own, equivalently sized niche. Are blogs going to be a disruptive technology for mainstream media or an adjunct to it? (My bet is simply additive.) Is blogging software disruptive to mainstream content management software or simply additive? (My bet here is disruptive.) There are structural reasons why in some cases we see additivity and in other cases displacement.

One favorite example: even though DVD and CD players share perhaps 75% of the same componentry (and almost 100% of DVD players can play music CDs), they nevertheless remain different categories at electronics stores. My theory is that content-level semantics keep these two markets "forked" from each other. For example, the "shuffle" button is a high-demand consumer feature in a CD player but makes no sense in a DVD player, due to the nature of typical DVD content. Can it really be that a $0.75 product feature has driven a wedge between these two markets? There is something far more profound at work here.