The New Software Industry: Craig Mundie

Craig Mundie, Chief Research and Strategy Officer at Microsoft, gave today’s lunch address; this post is out of order because I was not about to whip out my laptop and displace the best conference lunch that I have ever had — grilled Kobe beef over salad greens in a lemon fennel vinaigrette with baby pear tomatoes, spiced candied walnuts, Point Reyes blue cheese and a puff pastry triangle with balsamic reduction, followed by a marquise au chocolat of dark truffle chocolate, garnished with whole hazelnuts, a chocolate leaf and fresh raspberries — so I took notes in an actual paper notebook. Since Microsoft hosted us in their well-appointed conference facilities and provided the aforementioned lunch, they deserve the chance to chat us up over it.

Mundie, who interpreted all the morning’s discussions about services through the narrowly focussed lens of SaaS, discussed the opportunities to complement internal enterprise applications with services in the cloud.

He spent quite a bit of time discussing processor speed increases, based on the premise that we’ve reached a fundamental limit to clock speeds at around 3GHz (past increases have been achieved without spontaneous combustion by lowering voltages, which just can’t be lowered any further), and that multi-core processors will drive the next wave of processor speed increases. The result, however, is that machines that are already operating far below capacity will have even more idle cycles. He discussed the idea of “fully productive computing” to absorb those idle cycles with speculative execution activities, such as anticipating and preloading the next applications that the user is most likely to run — a discussion that turned into a brief ad for Windows Vista.
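
To make the preloading idea concrete, here’s a toy sketch (entirely my own construction, not anything Mundie described in detail) of how a system might guess which application to preload: track launch-to-launch transitions and bet on the most frequent successor of the current application.

```python
from collections import Counter, defaultdict

class AppPredictor:
    """Toy next-application predictor: track launch transitions and
    suggest what to preload into otherwise-idle cycles."""

    def __init__(self):
        self.transitions = defaultdict(Counter)  # app -> Counter of successors
        self.last_app = None

    def record_launch(self, app):
        if self.last_app is not None:
            self.transitions[self.last_app][app] += 1
        self.last_app = app

    def next_likely(self):
        """Most frequent successor of the current app, or None."""
        counts = self.transitions.get(self.last_app)
        return counts.most_common(1)[0][0] if counts else None

predictor = AppPredictor()
for app in ["mail", "browser", "mail", "browser", "mail", "spreadsheet", "mail"]:
    predictor.record_launch(app)
print(predictor.next_likely())  # "browser": the best candidate to preload
```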

In response to a question about the local platform as a “solution” for privacy concerns, he spoke about how providing notice of information gathering, and choice as to how that information is used, alleviates most of the concerns about privacy in a hosted environment.

That’s it for my coverage of the New Software Industry conference. All the presentations will be available online in about a week, and within a few weeks all of the video recorded during the sessions will be on Google Video.

I’ve already attended the opening reception for TUCON, and I’ll be full-on with blogging about that tomorrow, except when I’m presenting late in the day.

The New Software Industry: Jim Morris and Bob Glushko

Jim Morris of CMU West and Bob Glushko of UC Berkeley summarized the day in a final session, and although it’s coming up on 6pm and I’m eager to get back on the 101 up to San Francisco for the TUCON reception, I’ve been fascinated by today’s conference and am not about to leave early. As Morris pointed out, this was originally a two-day conference crammed into one day.

Glushko gave us the phrases that stuck with him from the sessions today:

  • No-man’s land as a zone on a graph of business models
  • Sweet and sour spots for business models
  • Impact and complexity of the product-service mix
  • Service systems, and how they’re embedded in social and economic systems
  • The “nifty nine”, being the nine SaaS public companies that have achieved (collectively) $1.4B in revenues
  • Data lock-in as the dirty secret of open source
  • Open source as a lever for putting pressure on your competitor’s business model
  • Emerging architecture, which he considered to be the oxymoron of the day
  • The tension between front stage and back stage design
  • Collective action in the software industry

Morris chimed in with his favourite, the one that I liked as well, where in World of Warcraft you can tell if someone has a Master’s in Dragon Slaying, and how good they are at it, whereas the software industry in general, and the open source community in particular, has no equivalent (but should).

Morris pointed out that Google and Amazon are gathering a huge amount of information about us, and we’re giving it to them for free; at some point in the future, they’re going to leverage this information and make a huge amount of money from it — not by violating the privacy of an individual’s data, but through the aggregate analysis of that data.

At the end of it all, it’s clear to me that this conference is pretty focussed on the new software industry in the valley, or at most, the new software industry in the U.S. It’s true, there’s been a disproportionate amount of software innovation done within 50 miles of where I’m sitting right now, but I think that’s changing, and future “new software industry” conferences will need to be more inclusive of the global software industry, rather than see it as an external factor.

The New Software Industry: David Messerschmitt

David Messerschmitt, a prof at UC Berkeley and the Helsinki University of Technology, finished the formal presentations for the day with a talk on how inter-firm cooperation can be improved in the software industry. This is an interesting wrap-up, since we’ve been hearing about technology, applications and business opportunities all day, and this takes a look at how all these new software industry companies can cooperate to the benefit of all parties.

He started out by proposing a mission statement: focus the software industry’s attention and resources on providing greater value to the user and consumer. This has two aspects: do less harm, and provide more direct value to the customer rather than the computational equivalent of administrivia.

In general, the software industry has a fairly low customer satisfaction rate of around 75%, whereas specific software sectors such as internet travel and brokerage rank significantly higher. Services provided by people have a lower satisfaction rate (likely due to the variability of service levels), and satisfaction rates are decreasing each year. Complaints are focussed on gratuitous change (change due to platform changes rather than anything that enhances user value) and security, and to some extent on having to change business processes to match an application’s process rather than having the system adapt to the business process. Certainly, there are lessons here for BPM implementations.

Messerschmitt raised the issue of declining enrolment of women in computer science, which he thinks is in part due to the perception that computer science is more about heads-down programming rather than about dealing with users’ requirements. He sees this as a bit of a canary in a coal mine, indicating some sort of upcoming problem for the computing industry in general if it is driving away those who want to deal with the user-facing side of software development. Related to that, he recommends the book Democratizing Innovation by Eric von Hippel, for its study of how customers are providing innovation that feeds back into product design and development, not just in software but in many areas of products.

He ended by discussing various ways to improve inter-firm cooperation, such as the Global Environment for Networking Innovations (GENI) initiative and ways to accomplish seamless operation of enterprise systems, and referred to a paper that he recently wrote, “Rethinking Components: From Hardware and Software to Systems”, to be published in the July Proceedings of the IEEE. He then listed elements of collective action that can be pursued by industry players, academia and professional organizations to help achieve this end:

  • Systematically look at knowledge gaps and ensure that the research is addressing those gaps
  • Create/educate the human resources that are needed by the industry
  • Understand and encourage complementarities, like broadband and certain types of software
  • Structures and processes: capture end-user innovations for incorporation into a product, and achieve a more orderly evolution of technology with the goal of leaving behind many fewer legacies in the future

He’s definitely of the “a rising tide lifts all boats” mindset.

The New Software Industry: Investment Opportunities Panel

Jason Maynard of Credit Suisse moderated a panel on investment opportunities in the new software industry, which included Bill Burnham of Inductive Capital, Scott Russell (who was with two different venture capital firms but doesn’t appear to be with one at this time, although his title is listed as “venture capitalist”), and Ann Winblad of Hummer Winblad Venture Partners.

This was more of an open Q&A between the moderator and the panel with no presentation by each of them, so again, difficult to blog about since the conversation wandered around and there were no visual aids.

Winblad made a comment early on about how content management and predictive analytics are all part of the collaboration infrastructure; I think that her point is that there’s growth potential in both of those areas as Web 2.0 and Enterprise 2.0 applications mature.

There was a lengthy discussion about open source, how it generates revenue and whether it’s worth investing in; Burnham and Russell are against investing in open source, while Winblad is quite bullish on it but believes that you can’t just lump all open source opportunities together. Like any other market sector, there are going to be winners and losers here. They all seem to agree, however, that many startups are benefiting from open source components even though they are not offering an open source solution themselves, and that there are great advantages to be had by bootstrapping startup development using open source. So although they might not invest in open source, they’d certainly invest in a startup that used open source to accelerate its development process and reduce development costs.

Russell feels that there are a number of great opportunities in companies where the value of the company is based on content or knowledge rather than the value of their software.

SaaS startups create a whole new wrinkle for venture investors: working capital management is much trickier due to the delay in revenue recognition, since payments trickle in over time rather than being paid up front, even though the SaaS company needs to invest in infrastructure from day one. Of course, I’m seeing some SaaS companies that use hosted infrastructure rather than buying their own; Winblad discussed these sorts of rented environments, and other ways to reduce startup costs, such as using virtualization to create different testing environments. There are still a lot of the same old problems, however, such as sales models. She advises keeping low to the ground: get something out to a customer in less than a year, and get a partner to help bring the product to market in less than two years. As she put it, frugality counts; the days of spending megabucks on unnecessary expenses went away in 2000 when the first bubble burst, and VCs are understandably nervous about investing in startups that exhibit that same sort of profligate spending.
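
To see why that payment timing matters, here’s a back-of-envelope comparison with invented numbers (the $120K licence and $5K/month subscription are purely illustrative):

```python
# Hypothetical numbers: a $120K perpetual licence paid up front versus
# a $5K/month subscription from the same customer.
upfront_licence = 120_000
monthly_subscription = 5_000

for month in (6, 12, 24, 36):
    saas_cash = monthly_subscription * month
    print(f"month {month:2d}: licence ${upfront_licence:,} vs SaaS ${saas_cash:,}")

# The subscription doesn't pull even until month 24, and that's the gap the
# SaaS startup's working capital has to cover while it builds infrastructure.
```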

Maynard challenged them each to name one public company to invest in for the next five years, and why:

  • Russell: China and other emerging markets require banking and other financial data, which companies like Reuters and Bloomberg (more favoured) will be able to serve. He later made comments about how there are plenty of opportunities in niche markets for companies that own and provide data/information rather than software.
  • Burnham: mapping/GPS companies like Tele Atlas, which have both valuable data and good software. He would not invest in the existing middleware market, and specifically suggested shorting TIBCO and BEA (unless they are bought by HP) — the two companies whose user conferences I’m attending this week and next.
  • Winblad: although she focusses on private rather than public investments, she thinks Amazon is a good bet since they are expanding their range of services to serve bigger markets, and have a huge amount of data about their customers. She believes that Bezos has a good vision of where to take the company. She recommends shorting companies like CA, because they’re in the old data, infrastructure and services business.

Audience questions following that discussion focussed a lot on asking the VCs’ opinions of various public companies, such as Yahoo. Burnham feels that Yahoo is now in the entertainment industry, not the software industry, so it is not a real competitor to Google. He feels that Google versus Microsoft is the most interesting battle to come. Russell thinks that Yahoo is a keeper, nonetheless.

Questions about investments in mobile produced a pretty fuzzy answer: at some point, someone will get the interface right, and it will be a huge success; it’s very hard for startups to get involved, since it requires long negotiations with the big providers.

Burnham had some interesting comments about investing in the consumer versus the business space, and how the metrics are completely different because marketing, distribution and other factors differ so much. Winblad added that it’s very difficult to build a consumer destination site now, like MySpace or YouTube. Not only are they getting into a crowded market, but many of the startups in this area have no idea how to answer basic questions about the details of an advertising revenue model, for example.

Burnham had a great comment about what type of Web 2.0 companies not to invest in: triple-A’s, that is, AdSense, AJAX and arrogance.

Winblad feels that there’s still a lot of the virtualization story to unfold, since it is seriously changing the value chain in data centres. Although VMware has become the big success story in this market, there are a number of other niches that have plenty of room for new players. She also thinks that companies providing specialized analytics — her example was basically about improving financial services sales by analyzing what worked in the past — can provide a great deal of revenue enhancement for their customers. As a final point on that theme, Maynard suggested checking out Swivel, which provides some cool data mashups.

The New Software Industry: Bob Glushko and Shelley Evenson

Bob Glushko, a prof at UC Berkeley, and Shelley Evenson, a prof at CMU, discussed different views on bridging the front stage and back stage in service system design. As a side note, I have to say that it’s fun to be back (temporarily) in an academic environment: many of these presentations are much more like grad school lectures than standard conference presentations. And like university lectures, they cover way too much material in a very short time by speaking at light speed and flipping slides so fast that there’s no time to even read what’s on the slide, much less absorb or document it. If I had a nickel for every time that a presenter today said “I don’t have time to go into this but it’s an important concept” while flipping past an interesting-looking slide, I could probably buy myself the drink that I need to calm myself after the information overload. 🙂

Glushko posits that greater predictability produces a better experience, even if the average level of service is lower, using the example of a self-service hotel check-in versus the variability of dealing with a reception clerk. Although he doesn’t mention it, this is exactly the point of Six Sigma: reducing variability, not necessarily improving service quality.
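
As a quick illustration of the predictability point, here’s a simulation with invented numbers: a kiosk that’s slower on average but nearly constant produces far fewer terrible experiences than a clerk who is faster on average but highly variable.

```python
import random

random.seed(42)

def clerk_checkin():
    """Human clerk: faster on average, but highly variable (minutes)."""
    return max(0.5, random.gauss(3.0, 2.5))

def kiosk_checkin():
    """Self-service kiosk: slower on average, but nearly constant."""
    return random.gauss(4.0, 0.3)

TRIALS, BAD = 10_000, 6.0  # a check-in over 6 minutes feels like bad service
for name, checkin in (("clerk", clerk_checkin), ("kiosk", kiosk_checkin)):
    times = [checkin() for _ in range(TRIALS)]
    bad_rate = sum(t > BAD for t in times) / TRIALS
    print(f"{name}: mean {sum(times) / TRIALS:.1f} min, over {BAD} min {bad_rate:.1%} of the time")

# Typically: the clerk averages ~3.2 minutes but runs past 6 minutes roughly
# 11% of the time; the kiosk averages 4.0 minutes and essentially never does.
```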

He goes on to discuss the front stage of services, which is the interaction of the customer (or other services) with the services, and the back stage, which is the execution of the underlying services themselves. I love his examples: he uses an analogy of a restaurant, with the front stage being the dining room and the back stage being the kitchen. Front stage designers focus on usability and other user interface factors, whereas back stage designers focus on efficiency, standardization, data models and the like. This tends to create a tension between the two design perspectives, and raises the question of whether that tension is intrinsic or avoidable.

From a design standpoint, he feels that it’s essential to create information flow and process models that span both the back and front stages. The focus of back stage design is to design modular and configurable services that enable flexibility and customization in the front stage, and to determine which back stage services you will perform and which you will outsource/reuse from other service providers. Front stage design, on the other hand, is focussed on designing the level of service intensity (the intensity of information exchange between the customer and the service, whether the service is human or automated), and on implementing model-based user interfaces, using those models to generate/configure/specify the APIs or user interfaces for the services. Exposing back stage information in front stage design can improve both the immediate experience for a specific customer and subsequent experiences, while data mining and business intelligence can also improve service for future customers.
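
Here’s a minimal sketch of that model-based idea (the service model and field names are my own invention, not Glushko’s): the back stage publishes a declarative description of a service, and the front stage generates its interface from the model rather than hard-coding one.

```python
# A toy back-stage service model; a real one would also carry validation
# rules, bindings to back-stage operations, and so on.
CHECKIN_SERVICE_MODEL = {
    "name": "hotel_checkin",
    "fields": [
        {"id": "confirmation_number", "type": "string", "required": True},
        {"id": "room_preference", "type": "choice",
         "options": ["high floor", "low floor", "no preference"]},
    ],
}

def render_form(model):
    """Front stage: derive a (text) interface directly from the model."""
    print(f"== {model['name']} ==")
    for field in model["fields"]:
        req = " (required)" if field.get("required") else ""
        opts = f" [{' / '.join(field['options'])}]" if field["type"] == "choice" else ""
        print(f"- {field['id']}{req}{opts}")

render_form(CHECKIN_SERVICE_MODEL)
```

Because the front stage reads the model at run time, a reconfigured back-stage service shows up in the interface without rewriting the front stage, which is the kind of flexibility he’s after.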

Evenson, who specializes in interaction design, has a very different perspective than Glushko, who focusses on the back stage design, but rather than being opposing views, they’re just different perspectives on the same issues of designing service systems.

She started out with a hilarious re-rendering of Glushko’s restaurant example, applying colour to make the division of co-production between front and back stage more visible.

Her slides really went by so fast that I was only able to capture a few snippets:

  • Sensors will improve the degree of interaction and usefulness of web-based services
  • Technology influences our sense of self
  • Services are activities or events that form a product through interaction with a customer
  • Services are performances: choreographed interactions manufactured at the point of delivery
  • Services are the visible front end of a process that co-produces value

A service system is a framework that connects service touchpoints so that they can sense, respond and reinforce one another. The system must be dynamic enough to efficiently reflect the expectations people bring to the experience at any given moment. Service systems enable people to have experiences and achieve goals.

She discussed the difficulties of designing a service system, such as the difficulty of prototyping and the difficulty of representing the experience, and pointed out that it requires combining aspects of business, technology and experience. She feels that it’s helpful to create an integrated service design language: systems of elements with meanings (that designers use to communicate and users “read”) plus sets of organizing principles.

The New Software Industry: Martin Griss and Adam Blum

Martin Griss of CMU West and Adam Blum of Mobio Networks had a fairly interactive discussion about integrating traditional software engineering practices into modern service oriented development.

Griss is a big proponent of agile development, and believes that the traditional software development process is too ponderous; Blum concedes that smaller teams and lightweight processes deliver faster, but believes that some of the artifacts of traditional development methods add value to the process.

Griss’ problems with traditional development are:

  • Too many large documents
  • It’s too hard to keep the documents in synch with each other and the development
  • People spend too much time in document reviews
  • Use cases are too complex
  • Can’t react well to changes in requirements
  • Schedule and features become paramount, rather than actual user requirements

In response, Blum had his list of problems with agile development:

  • Some things really do need upfront analysis/architecture to create requirements and specification, particularly the lower layers in the stack
  • Team management needs to be more complex on larger projects
  • Many agile artifacts are simply “old wine in new bottles”, and it’s just a matter of determining the right level of detail
  • If you have a team that’s currently delivering well, the introduction of agile processes can disrupt the team and impact productivity — if it’s not broke, don’t fix it
  • Some of the time-boxing of agile development (e.g., Scrum monthly sprints, daily 10-minute meetings) creates artificial schedule constraints
  • Agile development theory is mostly pseudo-science without many facts to back it up
  • Modern tools can make older artifacts lighter-weight and more usable

Writing requirements and specifications is something that I’ve spent probably thousands of hours doing over the years, and many of my customers still require this methodology, so I’m sympathetic to Blum’s viewpoint: sometimes it’s not appropriate, or not possible, to go agile.

An interesting point emerged from the back-and-forth discussion: it may not be possible to build the development platforms and frameworks themselves (such as what Mobio builds) in an agile fashion, but the applications built on those high-level platforms lend themselves well to agile development. Features to be added to the platform are effectively prototyped in an agile way in applications built on the platform, then are handed off to the more traditional, structured development cycle of the platform itself.

Griss, who was partly just looking to stir up discussion earlier, pointed out that it’s necessary to take the best parts of both ends of the software development methodology spectrum. In the end, they appear to agree that there are methodologies and artifacts that are important; it’s just a matter of the degree of ceremony to use on any given part of the software development process.

The New Software Industry: Open Source panel

First up after lunch is a panel on the role of open source in service management, moderated by Martin Griss of CMU West, and including Kim Polese of SpikeSource, and Jim Herbsleb and Tony Wasserman of CMU West.

Polese is included in the panel because her company is focussed on creating new business models for packaging and supporting open source software, whereas the other two are profs involved in open source research and projects.

The focus of the session is on how open source is increasingly being used to quickly and inexpensively create applications, both by established companies and startups: think of the number of web-based applications based on Apache and MySQL, for example. In many of these cases, a dilemma is created by the lack of traditional support models for open source components — that’s certainly an issue with the acceptance of open source for internal use within many organizations — so new models are emerging for development, distribution and support of open source.

Open source is helping to facilitate unbundling and modularization of software components: it’s very common to see open source components from multiple projects integrated with both commercial software components and custom components to create a complete application.

A question from the audience asked if there is a sense of misguided optimism about the usefulness of open source; Polese pointed out in response that open source projects that aren’t useful end up dying on the vine, so there’s a degree of self-selection through market acceptance that promotes successful open source components and suppresses the less successful ones.

As I mentioned during the Brainstorm BPM conference a few weeks back, it’s very difficult to blog about a panel — there’s much less structure than in a regular presentation, so the post tends to be even more disjointed than usual. With luck, you’ll still get some of the flavour of the panel.

The New Software Industry: Timothy Chou

The morning finished with Timothy Chou, author of The End of Software and the former president of Oracle’s online services group, discussing the radical changes in the software industry due to software-as-a-service. Anyone who entitles his talk “To Infinity and Beyond” and has a picture of Buzz Lightyear on the title slide is okay with me. 🙂

He looks at the economics of why the transformation is occurring, and encourages becoming a student of those economics in order to understand the shift. Considering a sort of Moore’s law for software, traditional software (e.g., SAP) costs around $100/user/month to license, install and support in various configurations; SaaS (e.g., Salesforce.com) costs around $10/user/month; and internet applications (e.g., Google) are more like $1/user/month.
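
Scaled up, each tier is an order-of-magnitude difference in the cost of serving the same population. A quick calculation using Chou’s per-user figures and an assumed 1,000-user company:

```python
# Chou's rough cost tiers, in $/user/month; the 1,000-user company is my
# own assumption for illustration.
tiers = {
    "traditional (e.g., SAP)": 100,
    "SaaS (e.g., Salesforce.com)": 10,
    "internet app (e.g., Google)": 1,
}
users, months = 1_000, 12
for name, per_user in tiers.items():
    print(f"{name}: ${per_user * users * months:,}/year")
# traditional: $1,200,000/year; SaaS: $120,000/year; internet app: $12,000/year
```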

He makes the point that the SaaS revolution is already occurring by listing nine SaaS companies that have gone public (including WebEx and Salesforce.com); these nine went from just over $200M in revenues in 2002 to $1.4B in 2006.
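
That ramp implies a ferocious growth rate, roughly 63% compounded annually, as a quick check shows:

```python
# Collective revenue of the nine public SaaS companies, per Chou's figures.
start, end, years = 200e6, 1.4e9, 4  # just over $200M in 2002 -> $1.4B in 2006
cagr = (end / start) ** (1 / years) - 1
print(f"implied compound annual growth: {cagr:.0%}")  # ~63%
```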

Chou gives us three lessons for the future:

  • Specialization matters. Think Google, which was originally an insanely simple interface for a single task: searching. Or eBay, which just does auctioning. This isn’t just a product functionality or distribution issue, however; the software development process has fundamentally changed. It’s now easier to become a software developer because of the tools, and this drives the development of niche applications. In a world where Citibank has more developers than Oracle, we’re not just buying software from the “professionals” any more; we’re creating it ourselves or buying it from much smaller players.
  • Games matter. Chou uses World of Warcraft as a collaboration example, and it’s a great one. People from all over the world, with different languages and ethnicities, come together for a common goal, then disperse when that goal is achieved. WoW makes specialized skills and skill levels transparent, so that you immediately know if another player’s skills are complementary to your own, and how good they are at those skills. In general, you can’t do that now in business collaboration environments, but it would be great if you could. Also of interest is the world of currency within these games, and how that currency is valued in the real world.
  • Service matters. The service economy is not just about human labour; service is information. Consider the information that Amazon has about books, from finding them to user reviews to recommendations. The information is there, but some of it is hard to find or analyze. The “surface web” of approximately 100TB is what you can find on Google, but there’s a much deeper web of more than a million TB, mostly inside corporate firewalls. How much better could service be if we had access to more of the information in that deep web?

The New Software Industry: John Zysman

John Zysman, a professor of Political Science at UC Berkeley, immediately followed Maglio with a related discussion on Services Transformation. The expectation was that Maglio and Zysman held diametrically opposed views and that their joint question period would degenerate into fisticuffs — or at least a lively debate — but it turns out that they’re pretty closely aligned on many issues.

A generation ago, services (within a software product company) were seen as a sinkhole of productivity, but they are now considered to be sources of productivity. It’s not that the service sector has grown, or has shifted from agriculture to IT; it’s that the sector has been reorganized in significant ways. In order to navigate this, we need to understand three things: strategy and organization; tools; and rules and roles (social-political dynamics).

An example of this sort of transformation is what Zysman referred to as the “American Comeback”, driven by the new consumer electronics, with a shift from electro-mechanical to digital (think Walkman to iPod) as well as modularization and commoditization within the supply chain. He listed stages of service transformations, although I can’t do justice to an explanation of these:

  • Outsourcing
  • Changes in consumption patterns
  • Outsourcing household work
  • The algorithmic transformation: from revolution to delusion

Most of this transformation is based on a change in how services are performed and the application of technology to allow services to be performed in different ways and locations. I heard an interesting example of this last night while having dinner with some of the TIBCO people who I’ll be seeing at TUCON later this week: two of them were from the U.K., one of those two now living in the U.S., and we had a discussion about healthcare in the U.K., U.S. and Canada. One of them made the point that in the U.K., patients sit in the waiting room until the doctor comes out and calls them in, whereas in both Canada and the U.S., multiple patients are taken simultaneously to separate examination rooms and prepped by medical assistants, then the doctor just goes from one room to another to do the more specialized part of the work. What’s really interesting is that the U.K. and Canada both have socialized medicine, which would tend to favour the less efficient but full-service U.K. model, except that Canada has a shortage of doctors and so has moved to the more efficient U.S. service model.

A couple of random ideas from his talk that I want to capture here for later thought:

  • Should we conceive of a services stack?
  • Automating the codifiable parts of a process is the first step in the transformation.
  • By commoditizing a service, you may be “moving the whiteboards of innovation”, i.e., disabling the ability to have innovation in a service.

In discussing rules and roles, Zysman talked about how services are embedded social processes, and how we need to change the way that processes work. How did we end up talking about business process reengineering? I thought that I was taking a break from process today, but as it turns out, there is no escape.

The New Software Industry: Paul Maglio

Paul Maglio, a senior manager of service systems research at IBM’s Almaden Research Center, spoke to us on the science of service systems, looking at the services sector of the economy, including everything from high-end professional services to McJobs in the hospitality industry. The focus of much of his research is on high-value services that simply can’t be automated.

Harking back to Cusumano’s talk, he showed that services generate 53% of IBM’s gross revenue but only 35% of its pretax net income; because of that, they’re focussing on service innovation in order to squeeze a bigger margin out of the services portion.
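
A quick back-of-envelope on those two percentages shows just how lopsided the margins are, relative to IBM’s company-wide average margin (whatever that is):

```python
# Services: 53% of revenue, 35% of pretax income; the rest: 47% and 65%.
services_vs_average = 0.35 / 0.53  # services margin vs. company average
rest_vs_average = 0.65 / 0.47      # everything else vs. company average
print(f"services: {services_vs_average:.2f}x the average margin")  # ~0.66x
print(f"non-services: {rest_vs_average:.2f}x the average margin")  # ~1.38x
# A dollar of services revenue earns roughly half what a dollar of
# non-services revenue does, hence the push for service innovation.
```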

He showed a model of services as a system of relationships between a service provider, a service client and a service target (the reality to be transformed or operated on by the service provider for the sake of the service client). Service systems depend on value co-creation between the provider and the client: if the client wins to the detriment of the provider, it’s a loss leader; in the reverse situation, it’s coercion. If they both win, it’s co-creation.
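
That framing reduces to a simple two-by-two over who wins; here’s a toy encoding of my own (the lose-lose label is mine, not Maglio’s):

```python
def classify(provider_wins: bool, client_wins: bool) -> str:
    """Maglio's win/win framing of a service relationship, as a 2x2."""
    if provider_wins and client_wins:
        return "value co-creation"
    if client_wins:
        return "loss leader"   # client gains at the provider's expense
    if provider_wins:
        return "coercion"      # provider gains at the client's expense
    return "lose-lose"         # no sustainable service relationship

print(classify(True, True))   # value co-creation
print(classify(False, True))  # loss leader
```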

Although there’s no equivalent to Moore’s Law for services, telling us where the efficiencies will be created in the future, there are some known factors that can be applied to make services more effective, both related to people (location, education) and technology.

In mapping profits against revenues, the steepest curve (biggest return) is information, then technology, then SaaS, then labour. However, most services are a combination of all of these things, so it’s considerably more complex to model.