Robert Shapiro on BPMN 2.0

Robert Shapiro spoke on a webinar today about BPMN 2.0, covering some of the history of how BPMN got to this point; the changes and new features since the previous version, and the challenges that those may create; the need for portability and conformance; and an update on XPDL 2.2. The webinar was hosted by the Workflow Management Coalition, where Shapiro chairs the conformance working group.

He started with how WPDL emerged as an interchange format in the mid-90s, then became XPDL 1.0 around 2001, at about the time that the BPMN 1.0 standard was being kicked off. For those of you not up on your standards, XPDL is an interchange format (i.e., the file format) and BPMN prior to version 2.0 is purely a notation (i.e., the visual representation); since BPMN didn't include an interchange format, XPDL was updated to provide serialization of all BPMN elements.
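To make the notation-versus-serialization distinction concrete, here's a rough sketch of what an XPDL-style serialization of a trivial two-step process might look like, with a few lines of Python to walk it. This is abridged and partly hypothetical: real XPDL files carry namespaces, package headers and graphics information, and the process and activity names here are invented.

    import xml.etree.ElementTree as ET

    # Abridged, XPDL-style serialization of a two-step process; element
    # names follow the XPDL 2.x schema, but namespaces, package headers
    # and graphics details are omitted for clarity.
    XPDL = """
    <Package Id="demo">
      <WorkflowProcesses>
        <WorkflowProcess Id="claim" Name="Claim Handling">
          <Activities>
            <Activity Id="a1" Name="Receive Claim"/>
            <Activity Id="a2" Name="Assess Claim"/>
          </Activities>
          <Transitions>
            <Transition Id="t1" From="a1" To="a2"/>
          </Transitions>
        </WorkflowProcess>
      </WorkflowProcesses>
    </Package>
    """

    root = ET.fromstring(XPDL)
    for proc in root.iter("WorkflowProcess"):
        print(proc.get("Name"))
        for act in proc.iter("Activity"):
            print("  activity:", act.get("Name"))
        for tr in proc.iter("Transition"):
            print("  flow:", tr.get("From"), "->", tr.get("To"))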

With BPMN 2.0, serialization is being added to the BPMN standard itself, along with many other new components, including formalization of the execution semantics and the definition of a choreography model. In particular, there are significant changes to conformance, swimlanes and pools, data objects, subprocesses, and events; Shapiro walked through each of these in detail. I like some of the changes to events, such as the distinction between boundary and regular intermediate events, as well as the concept of interrupting and non-interrupting events. This makes for a more complex set of event types, but one that is much more representative of real-world processes.
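As an example of the interrupting/non-interrupting distinction, here's a sketch of how a BPMN 2.0-style XML serialization expresses it: a boundary event attached to a task carries a cancelActivity attribute, and setting it to false makes the event non-interrupting, so the task keeps running when the event fires. Element and attribute names are based on the draft serialization and may differ in the final spec; the task and event names are invented.

    import xml.etree.ElementTree as ET

    # A reminder timer attached to a user task: cancelActivity="false"
    # means the timer fires without terminating the task it's attached to.
    FRAGMENT = """
    <process id="review">
      <userTask id="approve" name="Approve Request"/>
      <boundaryEvent id="reminder" attachedToRef="approve" cancelActivity="false">
        <timerEventDefinition/>
      </boundaryEvent>
    </process>
    """

    evt = ET.fromstring(FRAGMENT).find("boundaryEvent")
    interrupting = evt.get("cancelActivity", "true") == "true"  # default: interrupting
    print("interrupting" if interrupting else "non-interrupting")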

Bruce Silver, who has been involved in the development of BPMN 2.0, wrote recently on what he thinks is missing from BPMN 2.0; definitely worth a read for some of what might be coming up in future versions (if Bruce has his way).

One key thing that is emerging, both as part of the standard and in practice, is portability conformance: one of the main reasons for these standards is to be able to move process models from one modeling tool to another without loss of information. This led to a discussion about BPEL, and how BPMN is not just for BPEL, or even just for executable processes. BPEL doesn’t fully support BPMN: there are things that you can model in BPMN that will be lost if you serialize to BPEL, since BPEL is intended as a web service orchestration language. For business analysts modeling processes – especially non-executable processes – a more complete serialization is critical.

In case you're wondering about BPDM, which was originally intended to be the serialization format for BPMN, it appears to have become too much of an academic exercise and not enough about solving the practical serialization problem at hand. Even as serialization is built into BPMN 2.0 and beyond, XPDL will likely remain a key interchange format because of the existing base of XPDL support among BPM and BPA vendors. Nonetheless, XPDL will need to work at remaining relevant to the BPM market in the world of BPEL and BPMN, although it is likely to remain a supported standard for years to come even if the BPMN 2.0 serialization is picked up by a majority of vendors.

The webinar had about 60 attendees, including the imaginatively named "asdf" (check the left side of your keyboard) and several acquaintances from the BPM standards and vendor communities. The registration page for the webinar is here, and I imagine that it will eventually link to a replay of the webinar. The slides will also be available on the WfMC site.

If you want to read more about BPMN 2.0, don't go searching on the OMG site: for some reason, they don't want to share draft versions of the specification except with paid OMG members. Here's a direct link to the 0.9 draft version from November 2008, and I also recommend tracking Bruce Silver's blog for insightful commentary on BPMN.

Oracle BEA Strategy Briefing

Not only did Oracle schedule this briefing on Canada Day, the biggest holiday in Canada, but they forced me to download the Real Player plug-in in order to participate. The good part, however, is that it was full streaming audio and video alongside the slides.

Charles Phillips, Oracle President, kicked off with a welcome and some background on Oracle, including their focus on database, middleware and applications, and how middleware is the fastest-growing of these three product pillars. He described how Oracle Fusion middleware is used both by their own applications as well as ISVs and customers implementing their own SOA initiatives.

He outlined their rationale for acquiring BEA: complementary products and architecture, internal expertise, strategic markets such as Asia, and the partner and channel ecosystem. He stated that they will continue to support BEA products under the existing support lifetimes, with no forced migration policies to move off of BEA platforms. They now consider themselves #1 in the middleware market in terms of both size and technology leadership, and Phillips gave a gentle slam to IBM for over-inflating their middleware market size by including everything but the kitchen sink in what they consider to be middleware.

The BEA developer and architect online communities will be merged into the Oracle Technology Network: Dev2Dev will be merged into the Oracle Java Developer community, and Arch2Arch will be broadened to the Oracle community.

Retaining all the BEA development centers, they now have 4,500 middleware developers; most BEA sales, consulting and support staff were also retained and integrated into the Fusion middleware teams.

Next up was Thomas Kurian, SVP of Product Development, on Fusion Middleware and BEA product directions, with a more detailed view of the Oracle middleware products and strategy. Their basic philosophy for middleware is that it's a unified suite rather than a collection of disjoint products, it's modular from a purchasing and deployment standpoint, and it's standards-based and open. He went on to talk about applications enabled by their products, unifying SOA, process management, business intelligence, content management and Enterprise 2.0.

They've categorized middleware products into three categories on their product roadmap (which I have reproduced here directly from Kurian's slide):

  • Strategic products
    • BEA products being adopted immediately with limited re-design into Oracle Fusion middleware
    • No corresponding Oracle products exist in the majority of cases
    • Corresponding Oracle products converge with BEA products with rapid integration over 12-18 months
  • Continue and converge products
    • BEA products being incrementally re-designed to integrate with Oracle Fusion middleware
    • Gradual integration with existing Oracle Fusion middleware technology to broaden features with automated upgrades
    • Continue development and maintenance for at least 9 years
  • Maintenance products
    • Products that BEA had end-of-lifed due to limited adoption prior to the Oracle acquisition
    • Continued maintenance with appropriate fixes for 5 years

The "continue and converge" category is, of course, a bit different from "no forced migration", but this is to be expected. My issue is with the overlap between the "strategic" category, which can include a convergence of an Oracle and a BEA product, and the "continue and converge" category, which includes products that will be converged into another product. When is a converged product considered "strategic" rather than "continue and converge"? Or is this just the spin they're putting on things so as to not freak out BEA customers who have invested heavily in a BEA product that is going to be converged into an existing Oracle product?

He went on to discuss how each individual Oracle and BEA product would be handled under this categorization. I’ve skipped the parts on development tools, transaction processing, identity management, systems management and service delivery, and gone right to their plans for the Service-Oriented Architecture products:

Oracle SOA product strategy

  • Strategic:
    • Oracle Data Integrator for data integration and batch ETL
    • Oracle Service Bus, which unifies AquaLogic Service Bus and Oracle Enterprise Service Bus
    • Oracle BPEL Process Manager for service orchestration and composite application infrastructure
    • Oracle Complex Event Processor for in-memory event computation, integrated with WebLogic Event Server
    • Oracle Business Activity Monitoring for dashboards to monitor business events and business process KPIs
  • Continue and converge:
    • BEA WL-Integration will be converged with the Oracle BPEL Process Manager
  • Maintenance:
    • BEA Cyclone
    • BEA RFID Server

Note that the Oracle Service Bus is in the “strategic” category, but is a convergence of AL-SB and Oracle ESB, which means that customers of one of those two products (or maybe both) are not going to be happy.

Kurian stated that Oracle sees four types of business processes — system-centric, human-centric, document-centric and decision-centric (which match the Forrester divisions) — but believes that a single product/engine that can handle all of these is the way to go, since few processes fall purely into one of these categories. They support BPEL for service orchestration and BPMN for modeling, and their plan is to converge on a single platform that supports both BPEL and BPMN (I assume that he means both service orchestration and human-facing workflow). Given that, here's their strategy for Business Process Management products:

Oracle BPM product strategy

  • Strategic:
    • Oracle BPA Designer for process modeling and simulation
    • BEA AL-BPM Designer for iterative process modeling
    • Oracle BPM, which will be the convergence of BEA AquaLogic BPM and Oracle BPEL Process Manager in a single runtime engine
    • Oracle Document Capture & Imaging for document capture, imaging and document workflow with ERP integration [emphasis mine]
    • Oracle Business Rules as a declarative rules engine
    • Oracle Business Activity Monitoring [same as in SOA section]
    • Oracle WebCenter as a process portal interface to visualize composite processes

Similar to the ESB categorization, I find the classification of the converged Oracle BPM product (BEA AL-BPM plus Oracle BPEL PM) as "strategic" to be at odds with his original definition: it should be in the "continue and converge" category, since the products are being converged. This convergence is not, however, unexpected: having two separate BPM platforms would just be asking for trouble. In fact, I would say that having two process modelers is also a recipe for trouble: they should look at how to converge the Oracle BPA Designer and the BEA AL-BPM Designer as well.

In the portals and Enterprise 2.0 product area, Kurian was a bit more up-front about how WebLogic Portal and AquaLogic UI are going to be merged into the corresponding Oracle products:

Oracle portal and Enterprise 2.0 product strategy

  • Strategic:
    • Oracle Universal Content Management for content management repository, security, publishing, imaging, records and archival
    • Oracle WebCenter Framework for portal development and Enterprise 2.0 services
    • Oracle WebCenter Spaces & Suite as a packaged self-service portal environment with social computing services
    • BEA Ensemble for lightweight REST-based portal assembly
    • BEA Pathways for social interaction analytics
  • Continue and converge:
    • BEA WebLogic Portal will be integrated into the WebCenter framework
    • BEA AquaLogic User Interaction (AL-UI) will be integrated into WebCenter Spaces & Suite
  • Maintenance:
    • BEA Commerce Services
    • BEA Collabra

In SOA governance:

  • Strategic:
    • BEA AquaLogic Enterprise Repository to capture, share and manage the change of SOA artifacts throughout their lifecycle
    • Oracle Service Registry for UDDI
    • Oracle Web Services Manager for security and QOS policy management on services
    • EM Service Level Management Pack as a management console for service level response time and availability
    • EM SOA Management Pack as a management console for monitoring, tracing and change management of SOA
  • Maintenance:
    • BEA AquaLogic Services Manager

Kurian discussed the implications of this product strategy on Oracle Applications customers: much of this will be transparent to Oracle Applications, since many of these products form the framework on which the applications are built, but are isolated so that customizations don’t touch them. For those changes that will impact the applications, they’ll be introduced gradually. Of course, some Oracle Apps are already certified with BEA products that are now designated as strategic Oracle products.

Oracle has also simplified their middleware pricing and packaging, with products structured into 12 suites:

Oracle Middleware Suites

He summed up with their key messages:

  • They have a clear, well-defined, integrated product strategy
  • They are protecting and enhancing existing customer investments
  • They are broadening Oracle and BEA investment in middleware
  • There is a broad range of choice for customers

The entire briefing will be available soon for replay on Oracle’s website if you’re interested in seeing the full hour and 45 minutes. There’s more information about the middleware products here, and you can sign up to attend an Oracle BEA welcome event in your city.

BPEL for Java Developers Webinar

Active Endpoints is hosting a webinar this Thursday on BPEL Basics for Java Developers, featuring Ron Romano, their principal consulting architect. From their information:

A high-level overview of BPEL and its importance in a web-services environment is presented, along with a brief discussion of the basic BPEL activities and how they relate to Java concepts. The following topics will be covered:

  • Parsing the Language of SOA with Java as a guide
  • Breaking out of the VM: evolving from RPC to Web Services
  • BPEL Activities – Receive, Reply, Invoke
  • BPEL Facilities – Fault Handling and Compensation (“Undo”)
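To make that receive/invoke/reply triad concrete, here's a minimal WS-BPEL 2.0-style process skeleton (a sketch of my own, not from the webinar material), wrapped in a few lines of Python that list the steps. All of the service, operation and message names are hypothetical, and a real process would also need WSDL imports and partnerLink declarations.

    import xml.etree.ElementTree as ET

    # Minimal BPEL sketch: receive a request, invoke a service, reply.
    BPEL = """
    <process name="QuoteProcess" targetNamespace="urn:example:quote"
             xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <variables>
        <variable name="request" messageType="tns:quoteRequest"/>
        <variable name="quote" messageType="tns:quoteResponse"/>
      </variables>
      <sequence>
        <receive partnerLink="client" operation="getQuote"
                 variable="request" createInstance="yes"/>
        <invoke partnerLink="pricing" operation="price"
                inputVariable="request" outputVariable="quote"/>
        <reply partnerLink="client" operation="getQuote" variable="quote"/>
      </sequence>
    </process>
    """

    NS = "{http://docs.oasis-open.org/wsbpel/2.0/process/executable}"
    for step in ET.fromstring(BPEL).find(NS + "sequence"):
        print(step.tag.removeprefix(NS), "->", step.get("operation"))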

The VP of Marketing assures me that he was allowed only two slides at the end of the presentation, and that otherwise this is focused on the technical goodies.

You need to register in advance at the link above.

BPM Think Tank Day 2: BPEL Roundtable

The second roundtable that I attended during last Tuesday's sessions was on BPEL, headed up by Ismael Ghalimi. It was great to finally meet Ismael in person: we've been corresponding by email and blog comments for quite a while, and have even done a webinar together, but this is the first time that we've been in the same place at the same time.

We started with a discussion of BPEL4People, and how it’s changed from the original specification (which proposed implementing human-facing tasks as web services rather than changing BPEL) to the current specification (which proposes extensions to BPEL for human-facing tasks).

The title of the roundtable was “is BPEL relevant”, and we covered several aspects of that. First, a few people around the table (which included a few vendors with a vested interest in BPEL) stated that BPEL is relevant in the same way that SQL is relevant: as a standardized language that allows a separation of the design/development environment from the execution environment. Based on the lively discussion, some of these guys have spent a lot of time thinking about the BPEL-SQL analogy. My argument (I have no vested interest, so could have easily argued the opposite way) was that maybe it *should* be relevant in that way, but really isn’t in the consolidated model-design-execute environments that we see in BPM today. The real question may be, at what level is BPEL relevant: model, design, code or execution? Everyone agreed that it’s not relevant to business users or analysts, but it’s not clear where the line of relevance lies.

We also discussed how native BPEL execution provides code monitoring during execution, such that any code faults will have more semantic information included without having to build a monitoring stack on top. What remains to be seen is whether BPEL4People will provide some level of business-relevant monitoring, or if that still has to be built on top of the execution layer.

What we're seeing is that, for the most part, it's the larger vendors that are adopting BPEL — possibly as a common language to glue together all the BPM pieces that they're acquiring — whereas the smaller vendors provide a consolidated (and therefore closed) suite environment where the execution language doesn't matter and, in fact, their engine may be a competitive differentiator.

Webinar Q&A

I gave a webinar last week, sponsored by TIBCO, on business process modeling; you'll be able to find a replay of the webinar, complete with the slides, here. Here are the questions that we received during the webinar that I didn't have time to answer on the air:

Q: Any special considerations for “long-running” processes – tasks that take weeks or months to complete?

A: For modeling long-running processes, there are a few considerations. First, you need to be sure that you're capturing sufficient information in the process model to allow the processes to be monitored adequately, since these processes may represent risk or revenue that must be accounted for in some way. Second, you need to ensure that you're building in the right triggers to release the processes from any hold state, and that there's some sort of manual override if a process needs to be released from the hold state early due to unforeseen events. Third, you need to consider what happens when your process model changes while processes are in flight, and whether those processes need to be updated to the new process model or continue on their existing path; this may require some decisions within the process that are based on a process version, for example.
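As a sketch of that last point (my own illustration, not a product feature): stamp each instance with the model version that it started under, and branch on that version at a decision point so that in-flight work can finish on its original path.

    from dataclasses import dataclass

    @dataclass
    class ProcessInstance:
        instance_id: str
        model_version: int  # version of the process model it started under

    CURRENT_VERSION = 3

    def route(instance: ProcessInstance) -> str:
        # In-flight instances started under an older model finish on the
        # path they started on; new instances take the current path.
        if instance.model_version < CURRENT_VERSION:
            return "legacy_path"
        return "current_path"

    print(route(ProcessInstance("PO-1041", model_version=2)))  # legacy_path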

Q: Do you have a recommendation for a requirements framework that guides analysts on these considerations, e.g. PRINCE2?

A: I find most of the existing requirements frameworks, such as use cases, to be not oriented enough towards processes to be of much use with business process modeling. PRINCE2 is a project management methodology, not a requirements framework.

Q: The main value proposition of SOA is widely believed to be service reuse. Some of the early adopters of SOA, though, have stated that they are only reusing a small number of services. Does this impact the value of the investment?

A: There's been a lot written about the "myth" of service reuse, and it has proven to be more elusive than many people thought. There are a few different philosophies of service design that likely impact the level of reuse: some people believe in building all the services first, in isolation from any calling applications, whereas others believe in building only those services that are required to meet a specific application's needs. If you do the former, then there's a chance that you will build services that no one actually needs — unlike Field of Dreams, if you build it, they may not come. If you do the latter, then your chance of service reuse is greatly reduced, since you're effectively building single-purpose services that will be useful to another application only by chance.

The best method is more of a hybrid approach: start with a general understanding of the services required by your key applications, and apply some good old-fashioned architectural/design common sense to map out a set of services that will maximize reusability without placing an undue burden on the calling applications. By considering the requirements of more than one application during this exercise, you will at least be forcing yourself to consider some level of reusability. There are a lot of arguments about how granular is too granular for services; again, that's mostly a matter that can be resolved with some design/development experience and some common sense. It's not, for that matter, fundamentally different from developing libraries of functions like we used to do in code (okay, like I used to do in code) — it's only the calling mechanism that's different; the principles of reusability and granularity have not changed. If you designed and built reusable function libraries in the past, then you probably have a lot of the knowledge that you need to design — at least at a conceptual level — reusable services. If you haven't built reusable function libraries or services in the past, then find yourself a computer science major or computer engineer who has.

Once you have your base library of services, things start getting more interesting, since you need to make sure that you’re not rewriting services that already exist for each new application. That means that the services must be properly documented so that application designers and analysts are aware of their existence and functionality; they must provide backwards compatibility so that if new functionality is added into a service, it still works for existing applications that call it (without modifying or recompiling those applications); and most important of all, the team responsible for maintaining and creating new services must be agile enough to be able to respond to the requirements of application architects/designers who need new or modified services.

As I mentioned on the webinar, SOA is a great idea but it’s hard to justify the cost unless you have a “killer application” like BPM that makes use of the services.

Q: Can the service discovery part be completely automated… meaning no human interaction? Not just discovery, but service usage as well?

A: If services are registered in a directory (e.g., UDDI), then theoretically it's possible to discover and use them in an automated fashion, although the difficulty lies in determining which service parameters map to which internal parameters in the calling application. It may be possible to make some of these connections based on name and parameter type, but every BPMS that I've seen requires that you manually hook up services to the process data fields at the point where the service is called.
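As an illustration of that name-and-type matching (a hypothetical sketch, with invented field and parameter names): bind each service parameter to a process data field with the same name and type, and flag whatever is left for manual hookup.

    # Process data fields and service parameters, each mapped to a type.
    process_fields = {"customer_id": str, "order_total": float, "region": str}
    service_params = {"customer_id": str, "order_total": float, "priority": int}

    auto_mapped, unmapped = {}, []
    for name, ptype in service_params.items():
        if process_fields.get(name) is ptype:
            auto_mapped[name] = name  # same name and type: bind automatically
        else:
            unmapped.append(name)     # no match: needs a human to map it

    print(auto_mapped)  # {'customer_id': 'customer_id', 'order_total': 'order_total'}
    print(unmapped)     # ['priority']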

Q: I’d be interested to know if you’re aware of a solid intro or training in the use and application of BPMN. I’ve only found general intros that tend to use the examples in the standard.

A: Bruce Silver offers a comprehensive course in BPMN, which I believe is available as either an online or classroom course.

Q: Does Data Object mean adding external documentation like a Word document into the BPM flow?

A: The origin of the data object is, in part, to serve the requirements of document-centric BPM, where the data object may represent a document (electronic, scanned paper, or a physical paper document) that travels with the workflow. Data objects can be associated with a sequence flow object — the arrows that indicate the flow in a process map — to show that the data artifact moves along that path, or can be shown as inputs and outputs to a process to show that the process acts on that data object. In general, the data object would not be documentation about the process, but would be specific to each instance of the process.

Q: Where is the BPMN standard found?

A: BPMN is now maintained by OMG, although they still link through to the original BPMN website.

Q: What is the output of a BPMN process definition? Any standard file types?

A: BPMN does not specify a file type, and as I mentioned in the webinar, there are three main file formats that may be used. The most commonly used by BPA and BPM vendors, including TIBCO, is XPDL (XML Process Definition Language) from the Workflow Management Coalition. BPEL (Business Process Execution Language) from OASIS has gained popularity in the past year or so, but since it was originally designed as a web service orchestration language, it doesn't support all of the BPMN constructs, so there may be some loss of information when mapping from BPMN into BPEL. BPDM (Business Process Definition Metamodel), a soon-to-be-released standard from OMG, promises to do everything that XPDL does and more, although it will be a while before its level of adoption nears that of XPDL.

Q: What’s the proper perspective BPM implementers should have on BPMN, XPDL, BPEL, BPEL4People, and BPDM?

A: To sum up from the previous answer: BPMN is the only real contender as a process notation standard, and should be used whenever possible; XPDL is the current de facto standard for interchange of BPMN models between tools; BPDM is an emerging standard to watch that may eventually replace XPDL; BPEL is a web service orchestration language (rarely actually used as an execution language in spite of its name); and BPEL4People is a proposed extension to BPEL that’s trying to add in the ability to handle human-facing tasks, and the only standard that universally causes laughter when I name it aloud. This is, of course, my opinion; people from the integration camp will disagree — likely quite vociferously — with my characterization of BPEL, and those behind the BPDM standard will encourage us all to cast out our XPDL and convert immediately. Realistically, however, XPDL is here to stay for a while as an interchange format, and if you’re modeling with BPMN, then your tools should support XPDL if you plan to exchange process models between tools.

I’m headed for the BPM Think Tank next week, where all of these standards will be discussed, so stay tuned for more information.

Q: How would one link the business processes to the data elements or would this be a different artifact altogether?

A: The BPMN standard allows for the modeler to define custom properties, or data elements, with the scope depending on where the properties are defined: when defined at the process level, the properties are available to the tasks, objects and subprocesses within that process; when defined at the activity level, they’re local to that activity.
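For example, here's a sketch of what a process-level property might look like in an XPDL-style serialization (abridged, with invented names); anything declared in the process-level DataFields section is visible to the activities within that process, whereas a property defined within a single activity would be local to it.

    import xml.etree.ElementTree as ET

    # Abridged XPDL-style fragment: a process-scoped data field.
    XPDL = """
    <WorkflowProcess Id="loan">
      <DataFields>
        <DataField Id="CustomerName">
          <DataType><BasicType Type="STRING"/></DataType>
        </DataField>
      </DataFields>
      <Activities>
        <Activity Id="score" Name="Score Application"/>
      </Activities>
    </WorkflowProcess>
    """

    for field in ET.fromstring(XPDL).iter("DataField"):
        print("process-level property:", field.get("Id"))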

Q: I’ve seen some swim lane diagrams that confuse more than illuminate – lacking specific BPMN rules, do you have any personal usage recommendations?

A: Hard to say, unless you state what in particular you find confusing. Sometimes there is a tendency to try to put everything in one process map instead of using subprocesses to simplify things — an overly cluttered map is bound to be confusing. I'd recommend a high-level process map with a relatively small number of steps and few explicit data objects to show the overall process flow, where each of those steps might drill down into a subprocess for more detail.

Q: We’ve had problems in the past trying to model business processes at a level that’s too granular. We ended up making a distinction between workflow and screen flow. How would you determine the appropriate level of modeling in BPM?

A: This is likely asking a similar question to the previous one, that is, how to keep process maps from becoming too confusing, which is usually a result of too much detail in a single map. I have a lot of trouble with the concept of “screen flow” as it pertains to process modeling, since you should be modeling tasks, not system screens: including the screens in your process model implies that there’s not another way to do this, when in fact there may be a way to automate some steps that will completely eliminate the use of some screens. In general, I would model human tasks at a level where a task is done by a single person and represents some sort of atomic function that can’t be split between multiple people; a task may require that several screens be visited on a legacy system.

For example, in mutual funds transaction processing (a particular favorite of mine), there is usually a task “process purchase transaction” that indicates that a person enters the mutual fund purchase information to their transaction processing system. In one case, that might mean that they visit three different green screens on their legacy system. Or, if someone wrote a nice front-end to the legacy system, it might mean that they use a single graphical screen to enter all the data, which pushes it to the legacy system in the background. In both cases, the business process is the same, and should be modeled as such. The specific screens that they visit at that task in order to complete the task — i.e., the “screen flow” — shouldn’t be modeled as explicit separate steps, but would exist as documentation for how to execute that particular step.

Q: The military loves to be able to do self-service, can you elaborate on what is possible with that?

A: Military self-service, as in “the military just helped themselves to Poland?” 🙂 Seriously, BPM can enable self-service because it allows anyone to participate in part of a process while monitoring what’s happening at any given step. That allows you to create steps that flow out to anyone in the organization or even, with appropriate network security, to external contractors or other participants. I spoke in the webinar about creating process improvement by disintermediation; this is exactly what I was referring to, since you can remove the middle-man by allowing someone to participate directly in the process.

Q: In the real world, how reliable are business process simulations in predicting actual cycle times and throughput?

A: (From Emily) It really depends on the accuracy of your information about your average cycle times. If they are relatively accurate, then simulation can be useful. Additionally, simulation can help you to identify potential problems, e.g., the breakpoints of volume that cause significant bottlenecks given your average cycle times.

I would add that one of the most difficult things to estimate is the arrival time of new process instances, since rarely do they follow those nice even distributions that you see when vendors demonstrate simulation. If you can use actual historical data for arrivals in the simulation, it will improve the accuracy considerably.
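Here's a toy single-server queue, with made-up numbers, that shows why this matters: the same average arrival rate produces very different waiting times when arrivals are bursty rather than evenly spaced.

    import random

    SERVICE_TIME = 9.0  # minutes per instance, fixed for simplicity

    def avg_wait(interarrival_times):
        clock = server_free_at = total_wait = 0.0
        for gap in interarrival_times:
            clock += gap                        # next instance arrives
            start = max(clock, server_free_at)  # waits if the server is busy
            total_wait += start - clock
            server_free_at = start + SERVICE_TIME
        return total_wait / len(interarrival_times)

    random.seed(1)
    even   = [10.0] * 1000                                        # smooth arrivals
    bursty = [random.expovariate(1 / 10.0) for _ in range(1000)]  # same 10-minute mean

    print(f"even arrivals:   {avg_wait(even):.1f} min average wait")    # ~0
    print(f"bursty arrivals: {avg_wait(bursty):.1f} min average wait")  # tens of minutes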

Q: Would you have multiple lanes for one system? I.e., a legacy system that has many applications in it, and therefore many lanes in the legacy pool?

A: It depends on how granular you want to be in modeling your systems, and whether the multiple systems are relevant to the process analysis efforts. If you're looking to replace some of those systems as part of the improvement efforts, or if you need to model the interactions between the systems, then definitely model them separately. If the applications are treated as a single monolithic system for the purposes of the analysis, then you may not need to break them out.

Q: Do you initially model the current process as-is in the modeling tool?

A: I would recommend that you at least do some high-level modeling of your existing process. First of all, you need to establish the metrics on which your ROI will be based, and often these aren't evident until you map out your process. Secondly, you may want to run simulations in the modeling tool on the existing process to verify your assumptions about the bottlenecks and costs of the process, and to establish a baseline against which to compare the future-state process.

Q: Business Managers : concerns – failure to achieve ROI ?

A: I'm not exactly sure what this question means, but I assume that it relates to the slide near the end of the webinar that discusses role changes caused by BPM. Management and executives are most concerned with risk around a project, and they may have concerns that the ROI is too ambitious (either because the new technology fails or because too many "soft" ROI factors were used in the calculation) and that the BPM project will fail to meet the promises that they've likely made to the layers of management above them. The right choice of ROI metrics can go a long way towards calming their fears, as can educating them on the significant benefits of process governance that will result from the implementation of BPM. Management will now have an unprecedented view of the current state and performance of the end-to-end process. They'll also have more comprehensive departmental performance statistics without manual logging or cutting and pasting from several team reports.

Q: I am a manager in a MNC and I wanted to know how this can help me in my management. How can I use it in my daily management? One example please?

A: By "MNC" I assume that you mean "multi-national corporation". The answer is no different than for any other type of organization, except that you're likely to be collaborating with other parts of your organization in other countries, and hence have the potential to see even greater benefits. One key area for improvement that can be identified with business process modeling, then implemented in a BPMS, is all of the functional redundancy that typically occurs in multi-nationals, particularly those that grow by acquisition. Many functional areas, both administrative/support and line-of-business, will be repeated in multiple locations, for no better reason than that it wasn't possible to combine them before technology was brought to bear on the problem. Process modeling will allow you to identify areas that have the potential to be combined across different geographies, and BPM technology allows processes to flow seamlessly from one location to another.

Q: How much detail is allowed in a process diagram (such as the name of the supplier used in a purchase order process or if the manager should be notified via email or SMS to approve a loan)? Is process visibility preferred compared to good classic technical design, in the BPM world?

A: A placeholder for the name of a supplier would certainly be modeled using a property of the process, as would any other custom data elements. As for the channel used for notifying the manager, that might be something that the manager can select himself (optimally) rather than having that fixed by the process; I would consider that to be more of an implementation detail although it could be included in the process model.

I find your second question interesting, because it implies that there's somehow a conflict between good design and process visibility. Good design starts with the high-level process functional design, which is the job of the analyst who's doing the process modeling; this person needs to have analytical and design skills even though it's unlikely that they do technical design or write code. Process visibility usually refers to the ability of people to see what's happening within executing processes, which would definitely be the result of a good design, rather than something that has to be traded off against good design. I might be missing the point of your question; feel free to add a comment to clarify.

Q: Are there any frameworks to develop a BPM solution?

A: Typically, the use of a BPMS implies (or imposes) a framework of sorts on your BPM implementation. For example, you’re using their modeling tool to draw out your process map, which creates all the underpinnings of the executable process without you writing any code to do so. Similarly, you typically use a graphical mapping functionality to map the process parameters onto web services parameters, which in turn creates the technical linkages. Since you’re working in a near-zero-code environment, there’s no real technical framework involved beyond the BPMS itself. I have seen cases where misguided systems integrators create large “frameworks” — actually custom solutions that always require a great deal of additional customization — on top of a BPMS that tends to demote the BPMS to a simple queuing system. Not recommended.

There were also a few questions specifically about TIBCO, for which Emily Burns (TIBCO’s marketing manager, who moderated the webinar) provided answers:

Q: Is TIBCO Studio compatible with Windows Vista?

A: No, Vista is not yet supported.

Q: Are there some examples of ROI from the industry verticals

A: On TIBCO’s web site, there are a variety of case studies that discuss ROI here: http://www.tibco.com/solutions/bpm/customers.jsp. Additionally, these are broken down into some of the major verticals here: http://www.tibco.com/solutions/bpm/bpm_your_industry.jsp

Q: Is there any kind of repository or library of “typical” process? I’m particularly interested in clinical trials.

A: TIBCO’s modeling product ships with a large variety of sample processes aggregated by industry.

And lastly, my own personal favorite question and answer, answered by Emily:

Q: What’s the TLA for BPM+SOA?

A: RAD 🙂

TUCON: Tom Laffey and Matt Quinn

Last in the morning's general session was Tom Laffey, TIBCO's EVP of products and technologies, and Matt Quinn, VP of product management and strategy. Like Ranadivé's talk earlier, theirs was about enterprise virtualization: positioning messaging, for example, as virtualizing the network layer, and BPM as enterprise process virtualization. I'm not completely clear whether virtualization is just the current analyst-created buzzword in this context.

Laffey and Quinn tag-teamed quite a bit during the talk, so I won't attribute specific comments to either. TIBCO's products cover a much broader spectrum than I do, so I'll focus just on the comments about BPM and SOA.

TIBCO's been doing messaging and ESB for a long time, and some amount of the SOA talk was about incremental feature improvements such as easier use of adapters. Apparently, Quinn made a prediction some months ago that SOA would grow so fast that it would swallow up BPM, so that BPM would just be a subset of SOA. Now, he believes (and most of us from the BPM side agree 🙂 ) that BPM and SOA are separate but extremely synergistic practices/technologies, and both need to be developed to a position of strength. To quote Ismael Ghalimi, BPM is SOA's killer application, and SOA is BPM's enabling infrastructure — a phrase that I've included in my presentation later today. Like Ismael, I see BPM as a key consumer of what's produced via SOA, but they're not the same thing.

They touched on the new release of Business Studio, with its support for BPMN, XPDL and BPEL, as well as UML for some types of data modelling. There are some new intelligent workforce management features, and some advanced user interface creation functionality using intelligent forms, which I think ties in with their General Interface AJAX toolkit.

Laffey just defined “mashup” as a browser-based event bus, which is an interesting viewpoint, and likely one that resonates better with this audience than the trendier descriptions.

They discussed other functionality, including business rules management, dynamic virtual information spaces (the ability to tap into a real-time event message stream and extract just what you want), and the analytics that will be added with the acquisition of Spotfire. By the way, we now appear to be calling analytics “business insight”, which lets us keep the old BI acronym without the stigma of the business intelligence latency legacy. 🙂

They finished up with a 2-year roadmap of product releases, which I won’t reproduce here because I’d hate to have to embarrass them later, and some discussion of changes to their engineering and product development processes.

Intro to BPEL

I just listened in on To BPEL or Not To BPEL, the title of which I believe resolves the pronunciation issue once and for all: although the presenter (Danny van der Rijn, principal architect at TIBCO) said “BEH-pull”, clearly it must be “BEE-pull” to make the title work. 🙂

In a presentation intended for those with a technical interest in BPEL, van der Rijn went through the history of BPEL from its origins as a melding of IBM's WSFL and Microsoft's XLANG, through the BPEL4WS 1.0 specification in 2002 and 1.1 in 2003, to the soon-to-be-approved WS-BPEL 2.0. More importantly, he looked at why BPEL emerged: basically, the web services stack didn't do enough to allow the orchestration of processes.

He then talked about what you're not going to do with BPEL — it's not a process modelling notation, and it's not for service creation — and stated that it's not for portability either: he mentioned XPDL as a solution in that area (with no mention of BPDM). What I'm seeing, however, is that although BPEL may not have been intended as an interchange format, that's exactly what it's being used for in many cases. For many BPM engines, the "E" in BPEL is apocryphal: BPEL is a format that's used to import process models from other applications, but it's then converted to an internal (proprietary) format for the actual execution.

He covered all the changes in 2.0 — data, scoping model, message handling, activities and more — and walked through the basic BPEL components in some detail. Overall, a good technical introduction to BPEL.

Unfortunately, about 40 minutes into the presentation, I received an "Invalid Flash Player Version" error stating that I needed Flash Player version 8 to view the current content, and I lost all audio and video of the presentation. Flash? I was supposed to be using the Windows Media Player version of the presentation! On24.com really needs to get their act together: changing system requirements mid-presentation is not cool. Even after I installed the new Flash version and did a successful test, I wasn't able to get back in. Guess that I'll have to see the last bit in reruns.

XPDL and BPEL

An interesting bit on the WfMC site comparing XPDL and BPEL that was highlighted in a WfMC mailing this week:

BPEL and XPDL are entirely different yet complementary standards. BPEL is an "execution language" designed to provide a definition of web services orchestration, specifically the underlying sequence of interactions, the flow of data from point-to-point. For this reason, it is best suited for straight-through processing or data-flows vis-a-vis application integration. The goal of XPDL is to store and exchange the process diagram, to allow one tool to model a process diagram, another to read the diagram and edit it, another to "run" the process model on an XPDL-compliant BPM engine, and so on. For this reason, XPDL is not an executable programming language like BPEL, but specifically a process design format that literally represents the "drawing" of the process definition. To wit, it has "XY" or vector coordinates, including lines and points that define process flows. This allows an XPDL to store a one-to-one representation of a BPMN process diagram. For this reason, XPDL is effectively the file format or "serialization" of BPMN, as well as of any non-BPMN design method or process model that uses the XPDL meta-model in its underlying definition (there are presently about 50 tools which use XPDL for storing process models).

A good distinction between the best uses of BPEL and XPDL, except for one point: very few vendors are using BPEL as an execution language; they’re using it as an interchange format, which is causing a lot of confusion about what format to use (XPDL or BPEL) to move process maps between a modelling and execution environment. As the above paragraph points out, XPDL maintains the graphical drawing information as well as the execution-specific information; it also supports everything that can be modelled in BPMN (which BPEL currently can’t).
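To see what that "drawing" information looks like, here's an abridged sketch of an XPDL 2.x-style activity carrying its diagram coordinates (tool and activity names invented; real files add namespaces, page references and more):

    import xml.etree.ElementTree as ET

    # An activity with its diagram position: this is the XY information
    # that lets XPDL round-trip the BPMN drawing itself.
    ACTIVITY = """
    <Activity Id="assess" Name="Assess Claim">
      <NodeGraphicsInfos>
        <NodeGraphicsInfo ToolId="SomeModeler" Width="90" Height="60">
          <Coordinates XCoordinate="240" YCoordinate="120"/>
        </NodeGraphicsInfo>
      </NodeGraphicsInfos>
    </Activity>
    """

    coords = ET.fromstring(ACTIVITY).find(".//Coordinates")
    print(coords.get("XCoordinate"), coords.get("YCoordinate"))  # 240 120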

There's also an article by Jon Pyke of WfMC in Computer Business Review Online where he smacks them down for calling XPDL a failure in a previous article, and states that XPDL is "often incorrectly perceived to be competitive with the business process execution language, BPEL, standard". XPDL and BPEL aren't competing in the sense that someone would elect to use one over the other, but they are competitive in that they're both used as interchange formats, just for different types of processes or in different tools. Unless your BPM engine actually uses BPEL as an execution language (which few do), you're not going to go from BPMN to XPDL to BPEL and then on to your BPM engine's proprietary execution language, because there's no value added by an additional data transformation. Instead, you'd do BPMN=>BPEL=>[BPM engine execution language] (obviously skipping the last transformation if the native execution language is BPEL) for web services orchestration-type processes that can be described completely using BPEL, or you'd use BPMN=>XPDL=>[BPM engine execution language] (where the latter may or may not be BPEL) for the larger set of functions supported by XPDL, like human-facing steps. In many cases, the choice of XPDL or BPEL is dictated by what's supported by the tools that you use for process modelling: tools intended to model web services orchestrations are more likely to support BPEL, whereas those targeted at the "BPM suites" market are more likely to use XPDL.

Assorted thoughts on BPEL

There’s been a few interesting posts about BPEL lately.

First, SOA World Magazine (which appears on the WebSphere Journal site; I'm not sure if it actually exists elsewhere, since there was no back-link) has a post on BPEL's Growing Up, covering a brief history, the current status and the view forward, including BPEL4People:

Going forward, we’re already seeing the next generation of standards around BPEL being discussed. For example, the “BPEL4People” effort was first announced in late 2005 and is intended to standardize an approach similar to the one described above for incorporating human workflow tasks in BPEL processes. Besides being one of our favorite standards acronyms, BPEL4People is an important area of work since most business processes span both systems and humans.

They neglect to mention that BPEL4People is not really much more than a white paper, although a lot of people talk about it as if it's a standard just about to hit the big time. I recently linked to an Oracle Contractors blog post where one of the comments on the post (#5) pointed out that "so far, there is no BPEL4PEOPLE". Or as I put it in my commentary on the link, the emperor is looking around for his boxers.

SOA World Magazine goes on to say:

While BPEL vendors provide easy-to-use graphical tools for creating and editing BPEL processes, the very fact that BPEL processes are so detailed as to be executable makes these tools too complex for most business users. Instead, business users need to be able to specify higher-level process blueprints that can then be filled in by developers to make them executable.

Business Process Modeling Notation (BPMN) is a standard from OAG [sic] to address the above requirement.

Um, not necessarily. Now, the article was written by two guys who work for Oracle, so I can see why they have this view, but I’m not sure that everyone would share the view that developers are required to fill in the details in order to make models created by business analysts usable.

Second, there were the comments about Microsoft supporting BPEL. As David Chappell put it:

Like BizTalk Server today, WF [Windows Workflow Foundation] treats BPEL as a way to move process logic between different workflow engines, not as an executable format (and certainly not as a development language).

He goes on to nail the real reason for Microsoft’s adoption of BPEL:

Adding the ability to export and import BPEL workflows to WF — and thus to Windows itself — will help WF in situations where support for BPEL is a political necessity.

BPEL has become more of an RFP check item than a real requirement, since most end-customer organizations don’t really understand what it is or what it might do for them. And if you believe a recent Burton Group report, BPEL is just a placeholder for WS-CDL until that choreography standard is ready for prime time.