BPM2012: Stephen White Keynote on BPMN

It’s the last day at BPM 2012, and the morning keynote is by Steve White of IBM, a.k.a. “the father of BPMN”, discussing the Business Process Model and Notation (BPMN) standard and its future. He went through a quick history of the development of the standard from its beginnings in BPMI (now part of OMG) in 2001, through the release of the 1.0 specification in 2004, the official adoption as an OMG standard in 2006, the 1.1 and 1.2 revisions in 2008 and 2009, then BPMN 2.0 in 2011. Although there’s no official plan for BPMN 3.0, he said that he imagined there would be one in the future.

The original drivers for BPMN were to be usable by the business community for process modeling, and to be able to generate executable processes, but these turned out to be somewhat conflicting requirements, since the full syntax required to support execution makes BPMN, considered in its entirety, too complex for non-technical modelers. To complicate things further, the business modelers want a simple notation, yet complain when certain behaviors can’t be modeled, meaning that there’s some conflict even within the first of the two requirements. The approach was to use familiar flowchart structures and shapes, have a small set of core elements for simple modeling, then provide variations of the core elements to support the complexity required for execution.

BPMN, as White states, is not equivalent to BPM: it’s a language to define process behavior, but a number of other languages and technologies are also required to implement BPM, such as data, rules, resources and user interfaces. Hence, it’s one tool in the BPM toolbox, to be used at design time or runtime as required. The Case Management Model and Notation (CMMN) standard is under development, and there are currently mechanisms for a CMMN model to invoke BPMN. Personally, I think that it might make sense to combine the two modeling standards, since I believe that a majority of business processes contain elements of each.

He walked through the diagram types, elements, and the innovations that we’ve seen in modeling through BPMN such as boundary intermediate events, pools/lanes and message flows, and the separation of process and data flows. He also described the conformance levels – descriptive, analytic, common executable, and full – and their role in modeling tools.

He laid out a bit of the vision for BPMN’s future, which is to extend further into uncontrolled and descriptive processes (case management), but also further into controlled and prescriptive processes (service level modeling). He also mentioned the potential to support element substitution at different levels in order to better support shared models between IT and business – I find this especially interesting, since it would allow different views of the same process model to have some elements hidden or exposed, or even changed to different element types suitable to the viewer.

When BPMN 1.0 was defined, ad hoc processes (really, processes in which the activities can occur in any order or frequency) were included but not really well developed, since the BPM systems at the time mostly supported only prescriptive model execution. In considering case management modeling in general, a case may be fairly prescriptive with some runtime variations, or may be completely free form and descriptive; BPMN is known for prescriptive process modeling, but does support descriptive processes via ad hoc subprocesses. Additional process types and behaviors are required to fully support case management, such as milestones, new event types and the ability to modify a process at runtime, and he showed some suggestions for what these might look like in an extension to BPMN.
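
To make “any order or frequency” concrete, here’s a toy interpretation of an ad hoc subprocess as a set of activities with no sequence flow plus a completion condition; this is my own sketch of the idea, not the BPMN execution semantics, and the activities are invented:

```python
# Toy interpretation of a BPMN ad hoc subprocess: activities may be performed
# in any order and any number of times; the subprocess completes when its
# completion condition holds. A sketch of the idea, not the spec's semantics.

class AdHocSubprocess:
    def __init__(self, activities, completion_condition):
        self.activities = set(activities)
        self.completion_condition = completion_condition
        self.history = []

    def perform(self, activity):
        assert activity in self.activities, f"{activity} is not part of this subprocess"
        self.history.append(activity)  # no ordering constraint, repetition allowed

    def completed(self):
        return self.completion_condition(self.history)

review = AdHocSubprocess(
    activities={"read_file", "consult_expert", "write_summary"},
    completion_condition=lambda h: "write_summary" in h,  # done once a summary exists
)
review.perform("consult_expert")
review.perform("read_file")
review.perform("consult_expert")  # repetition is fine
review.perform("write_summary")
print(review.completed())  # True
```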

Service level modeling, on the other hand, is even more prescriptive than what we see in BPMN today: it’s lower level, more like a screen flow that happens entirely within a single BPMN task: no lanes, since it’s all within that one task, with gateways allowed but no parallel paths. Think of it as visual pseudo-code, probably not exposed to a business viewer but modeled by IT to effect the actual implementation. I’m seeing these sorts of screen flow models in BPMS products already, such as TIBCO’s AMX BPM, as well as similar functionality from Active Endpoints as an add-in to Salesforce, so this isn’t a complete surprise. I saw a paper on client-side service composition at CASCON that could have an impact on this sort of service level modeling, and it will be interesting to see how this functionality evolves in BPMN and its impact on BPMS products.

This is my last post from BPM 2012: although I would like to attend a few of the other morning sessions, I’ll probably spend the time doing some last minute reviews of the three-hour tutorial on social BPM that I’ll be giving this afternoon.

BPM2012: Papers on Process Mining

I had a bit of blog fatigue earlier, but Keith Swenson blogged the session on process cloud concepts for case management that I attended but didn’t write about, and I’m back at it for the last set of papers for the day at BPM 2012, all with a focus on process mining.

Repairing Process Models to Reflect Reality

[link]

Dirk Fahland of Eindhoven University presented a paper on process repair, as opposed to process mining, with a focus on adjusting the original process model to maximize fitness, where fitness is measured by the ability to replay traces in the event log: if a model can replay all of the traces of actual process execution, then it is perfectly fit. Their methods compare the process model to the event log using a conformance checker in order to align the event log and the model, which can be accomplished with the methods of Adriansyah et al.’s cost-based replayer to find the diagnostic information.

The result includes activities that are skipped, and activities that must be added. The activities to be added can be fed to an existing process discovery algorithm to create subprocesses that must be added to the existing process, and the activities that were skipped are either made optional or removed from the original process model.
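
As a rough illustration of the replay idea (the paper works on Petri nets with cost-based alignments, not this toy representation), here’s a minimal sketch where a model is just a map from each activity to its allowed successors:

```python
# Toy sketch of replay-based fitness. The paper uses Petri nets and cost-based
# alignments (Adriansyah et al.); here a model is simply a mapping from each
# activity to the set of activities allowed to follow it.

def can_replay(model, start, end, trace):
    """True if the trace is a valid path through the model."""
    if not trace or trace[0] != start or trace[-1] != end:
        return False
    return all(b in model.get(a, set()) for a, b in zip(trace, trace[1:]))

def fitness(model, start, end, log):
    """Fraction of traces in the event log that the model replays perfectly."""
    return sum(can_replay(model, start, end, t) for t in log) / len(log)

# Model: register -> check -> (approve | reject) -> archive
model = {
    "register": {"check"},
    "check": {"approve", "reject"},
    "approve": {"archive"},
    "reject": {"archive"},
}
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "approve", "archive"],   # 'check' was skipped in reality
]
print(fitness(model, "register", "archive", log))  # 0.67: not perfectly fit
```

A repair driven by the third trace would make “check” optional, per the skipped-activity handling described above.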

Obviously, this is relevant in situations where the process model isn’t automated, that is, the event logs are from other systems, not directly executed from the process model; this is common when processes are implemented in ERP and other systems rather than in a BPMS, and process models are created manually in order to document the business processes and discover opportunities for optimization. However, as we implement more semi-structured and dynamic processes automated by a BPMS, the event logs of the BPMS itself will include many events that are not part of the original process model; this could be a useful technique for improving understanding of ad hoc processes. By understanding and modeling ad hoc processes that occur frequently, there is the potential to identify emergent subprocesses and add those to the original model in order to reduce time spent by workers creating the same common ad hoc processes over and over again.

There are other measurements of model quality besides fitness, including precision, generalization and simplicity; future research will be looking at these as well as improving the quality of alignment and repair.

Where Did I Misbehave? Diagnostic Information in Compliance Checking

[link to pdf paper]

Elham Ramezani of Eindhoven University presented a paper on compliance checking. Compliance checking covers the full BPM lifecycle: compliance verification during modeling, design and implementation; compliance monitoring during execution; and compliance auditing during evaluation. The challenge is that compliance requirements have to be decomposed and used to create compliance rules that can be formalized into a machine-understandable form, then compared to the event logs using a conformance checker. This is somewhat the opposite of the previous paper, which used conformance checking to find ways to modify the process model to fit reality; this looks at using conformance checking to ensure that compliance rules, represented by a particular process model, are being followed during execution.

Again, this is valuable for processes that are not automated using a BPMS or BRMS (since rules can be strictly enforced in that environment), but rather executed in other systems or manually: event logs from those systems are compared, using a conformance checker, to the process models that represent the compliance rules, and the alignment is calculated to identify non-compliant instances. There were some case studies with data from a medical clinic, detecting non-compliant actions such as performing an MRI and CT scan of the same organ, or registering a patient twice on one visit.
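
Purely to illustrate the kind of violation being detected, here’s a naive sketch that checks two clinic rules directly against traces; the alignment-based approach in the paper does this against Petri-net rule models and also reports where in the trace the deviation occurred. The event structure and attribute names are my invention:

```python
# Hand-coded sketch of two clinic compliance rules checked against traces.
# Each trace is a list of (activity, attributes) pairs; attributes invented.

def no_duplicate_registration(trace):
    """A patient may be registered at most once per visit."""
    return sum(1 for activity, _ in trace if activity == "register") <= 1

def no_redundant_imaging(trace):
    """Don't perform both an MRI and a CT scan of the same organ."""
    mri = {attrs["organ"] for activity, attrs in trace if activity == "mri"}
    ct = {attrs["organ"] for activity, attrs in trace if activity == "ct_scan"}
    return not (mri & ct)

def violations(trace, rules):
    return [rule.__name__ for rule in rules if not rule(trace)]

cases = {
    "case-1": [("register", {}), ("mri", {"organ": "knee"}), ("ct_scan", {"organ": "knee"})],
    "case-2": [("register", {}), ("register", {}), ("ct_scan", {"organ": "head"})],
}
for case_id, trace in cases.items():
    print(case_id, violations(trace, [no_duplicate_registration, no_redundant_imaging]))
# case-1 ['no_redundant_imaging']
# case-2 ['no_duplicate_registration']
```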

There was an audience question that was in my mind as well, which is why the compliance rules are expressed in Petri nets rather than in a declarative form; she pointed out that the best conformance-checking methods available for aligning with event logs use operational models such as Petri nets, although they may consider adding declarative rules to this method in the future, in addition to other planned extensions to the research. She also mentioned that they were exploring applicability to monitoring service level agreement compliance, which has huge potential for business applications where SLA measurements are not built into the operational systems but must be detected from the event logs.

FNet: An Index for Advanced Business Process Querying

[link to pdf paper]

Zhiqiang Yan, also of Eindhoven University (are you seeing a theme here in process mining?), presented on querying within a large collection of process models based on certain criteria; much of the previous research has been on defining expressive query languages (such as BPMN-Q) that can be very slow to execute, but here they have focused on developing efficient techniques for executing the queries. They identify basic features, or small fragments, of process models, and advanced elements such as transitive or negative edges that form advanced features.

To perform a query, both the query and the target process models are decomposed into features, where the features are small and representative: specific sequences, joins, splits and loops. Keywords for the nodes in the graphs are used in addition to the topology of the basic features. [There was a great deal of graph theory in the paper concerned with constructing directed graphs based on these features, but I think that I forgot all of my graph theory shortly after graduation.]
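
I didn’t capture the indexing details, but the general shape of a feature-based index is easy to sketch: decompose each stored model into small fragments, index models by the fragments they contain, and intersect candidate sets at query time before any expensive matching. This simplification, with models reduced to edge lists, is mine rather than the FNet design:

```python
# Sketch of a feature-based index over a model collection, with models
# simplified to directed edge lists. The real work indexes richer basic and
# advanced features (splits, joins, loops, transitive/negative edges); only
# direct-sequence features are indexed here, to show why queries avoid
# scanning every model.

from collections import defaultdict

def sequence_features(model_edges):
    """Basic features: each directed edge (a, b), i.e. 'a directly precedes b'."""
    return set(model_edges)

def build_index(models):
    index = defaultdict(set)  # feature -> ids of models containing it
    for model_id, edges in models.items():
        for feature in sequence_features(edges):
            index[feature].add(model_id)
    return index

def query(index, query_edges):
    """Models that contain every feature of the query fragment."""
    candidate_sets = [index.get(f, set()) for f in sequence_features(query_edges)]
    return set.intersection(*candidate_sets) if candidate_sets else set()

models = {
    "claims-v1": [("receive", "assess"), ("assess", "pay"), ("assess", "reject")],
    "claims-v2": [("receive", "triage"), ("triage", "assess"), ("assess", "pay")],
    "hiring": [("post", "screen"), ("screen", "interview")],
}
index = build_index(models)
print(query(index, [("assess", "pay")]))      # {'claims-v1', 'claims-v2'}
print(query(index, [("receive", "assess")]))  # {'claims-v1'}
```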

The results seem impressive: a speedup of two orders of magnitude over BPMN-Q. As organizations continue to develop large repositories of process models and hope to get some degree of reuse, process querying will become more important in practical applications.

Using MapReduce to Scale Events Correlation Discovery for Business Processes Mining

[link]

The last paper of this session, and of the day, was presented by Hicham Reguieg of Blaise Pascal University in Clermont-Ferrand. One of the challenges in process mining and discovery is big data: the systems that are under consideration generate incredible amounts of log data, and it’s not something that you’re going to just open up in a spreadsheet and analyze manually. This paper looks at using MapReduce, a programming model for processing large data sets (usually by distributing processing across clusters of computers), applied to the specific step of event correlation discovery, which analyzes the event logs in order to find relationships between events that belong to the same business process.
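
As a toy illustration of how correlation discovery splits into map and reduce steps, here’s a single-machine simulation: the map phase emits (candidate correlation attribute, value) keys per event, and the reduce phase groups events sharing a key into candidate process instances. The event attributes are invented:

```python
# Single-machine simulation of MapReduce for event correlation discovery.
# In a real deployment, map and reduce would run distributed (e.g., on a
# cluster); event attributes here are invented for illustration.

from collections import defaultdict

events = [
    {"event_id": 1, "activity": "order_received", "order_id": "A7", "customer": "c1"},
    {"event_id": 2, "activity": "payment", "order_id": "A7", "customer": "c1"},
    {"event_id": 3, "activity": "order_received", "order_id": "B2", "customer": "c2"},
    {"event_id": 4, "activity": "shipment", "order_id": "B2", "customer": "c2"},
]

def map_phase(event, candidate_attributes):
    """Emit ((attribute, value), event) for each candidate correlation attribute."""
    for attr in candidate_attributes:
        if attr in event:
            yield (attr, event[attr]), event

def reduce_phase(pairs):
    """Group events sharing a key value: each group is a candidate process instance."""
    groups = defaultdict(list)
    for key, event in pairs:
        groups[key].append(event)
    return groups

pairs = [p for e in events for p in map_phase(e, ["order_id", "customer"])]
for (attr, value), group in reduce_phase(pairs).items():
    print(attr, value, "->", [e["activity"] for e in group])
```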

Although he didn’t mention the specific MapReduce framework that they are using for their experiments, I know that there’s a Hadoop one – inevitable that we would start seeing some applicability for Hadoop in some of the big data process problems.

BPM2012: Papers on Process Model Analysis

More from day 2 of BPM 2012.

The Difficulty of Replacing an Inclusive OR-Join

[link]

Cédric Favre of IBM Research presented the first paper of the session on some of the difficulties in translating between different forms of process models. One specific problem is replacing an inclusive OR join when moving from a language that supports it, such as BPMN, to one that does not, such as Petri nets, while maintaining the same behavior in the workflow graph.

In the paper, they identify which IOR joins can be replaced locally using XOR and AND logic, and present a non-local replacement technique. They also identify processes where an IOR join in a synchronization role cannot be replaced by XOR and AND logic.
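
For readers who haven’t hit this problem: an IOR join must synchronize exactly those incoming branches that were activated upstream, which is non-local information, while an AND join waits for all branches and an XOR join passes each arrival through. A toy comparison of the three semantics (my illustration, not the paper’s formalism):

```python
# Toy illustration of the inclusive OR (IOR) join problem: it must synchronize
# exactly the branches activated upstream, which is non-local information.
# Not the paper's formalism, just the join semantics.

TOTAL_BRANCHES = 3                # branches a, b, c flow into the join

def and_join_fires(arrived):
    return len(arrived) == TOTAL_BRANCHES

def xor_join_fires(arrived):
    return len(arrived) >= 1      # fires on each arrival

def ior_join_fires(arrived, activated):
    return arrived == activated   # needs to know the upstream split decision

activated = {"a", "c"}            # upstream IOR split enabled only branches a and c
arrived = {"a", "c"}              # both active branches have delivered their token

print(ior_join_fires(arrived, activated))  # True: correct synchronization
print(and_join_fires(arrived))             # False: would deadlock waiting for b
print(xor_join_fires(arrived))             # True: but it already fired at the first
                                           # arrival too, duplicating downstream work
```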

This research is useful for automated translation between different modeling languages, although questions from the audience pointed out some of the limitations of the approach, noting that acyclic models (the only kind considered in this research) can be easily translated from BPMN to BPEL, and that many BPEL-to-Petri-net translators already exist.

Automatic Information Flow Analysis of Business Process Models

[link to pdf paper]

Andreas Lehmann of University of Rostock presented a paper on detecting where data and information leaks can occur due to structural flaws in processes; they define a data leak as direct (but illegal) access to a data object, while an information leak is when secret information can be inferred by someone who should not have access to that information. This research specifically looks at predefined structured processes within an organization; the issues in collaborative processes with ad hoc participants are obviously a bit more complex.

In a process where some tasks are confidential and others are observable (public within a certain domain, such as within a company), confidential tasks may be prerequisites for observable tasks, meaning that someone who knows that the observable task is happening also knows that the confidential task must have occurred. Similarly, if the confidential and observable tasks are mutually exclusive, then someone who knows that the observable task has not occurred knows that the confidential task has occurred instead. These are both referred to as “interferences”, and they have developed an approach to detect these sorts of interferences, then create extended Petri nets for the flow that can be used for reachability analysis, which determines whether an information leak can occur. Their work has included optimizing the algorithms to accomplish this information leak detection, and you can find out more about this at the service-technology website.
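
The structural essence of the two interference patterns is simple to sketch, with abstract task names; the real analysis in the paper is the reachability check on extended Petri nets, which this doesn’t attempt:

```python
# Sketch of the two interference patterns: an observer who can see only
# "observable" tasks may still infer whether a confidential task occurred.
# Abstract task names; the paper's reachability analysis is not shown.

confidential = {"C"}
observable = {"O1", "O2"}

prerequisites = {("C", "O1")}     # task C must occur before O1 can occur
exclusive = [{"C", "O2"}]         # exactly one of C and O2 occurs

def interferences():
    leaks = []
    for secret, public in prerequisites:
        if secret in confidential and public in observable:
            leaks.append(f"observing '{public}' implies '{secret}' occurred")
    for pair in exclusive:
        for secret in pair & confidential:
            for public in pair & observable:
                leaks.append(f"absence of '{public}' implies '{secret}' occurred")
    return leaks

for leak in interferences():
    print(leak)
```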

Definitely some interesting ideas here that can be applicable in a number of processes: their example was an insurance claim where an internal fraud investigation would be initiated based on some conditions, but the people participating in the process shouldn’t know that the investigation had begun since they were the ones being investigated. Note that their research is only concerned with detecting the information flows, but does not provide methods for removing information leaks from the processes.

BPM2012: Wil van der Aalst BPM Research Retrospective Keynote

Day 2 of the conference tracks at BPM 2012 started with a keynote from Wil van der Aalst of Eindhoven University, describing ten years of BPM research on this 10th occasion of the International Conference on BPM. The conference started in Eindhoven in 2003, then moved to Potsdam in 2004, Nancy in 2005, Vienna in 2006, Brisbane in 2007, Milan in 2008 (my first time at the conference), Ulm in 2009, Hoboken in 2010, Clermont-Ferrand in 2011, then on to Tallinn this year. He showed a word cloud for each of the conference proceedings in the past, which was an interesting look into the hot topics at the time. The 2013 conference will be in Beijing – not sure if I’ll be attending since it’s a long trip – and I expect that we’ll hear where the 2014 conference will be before we leave Tallinn this week.

In his paper, he looked at the four main activities in BPM – modeling, analysis, enactment and management – and pointed out that much of the research focused on the first two, and we need more on the latter two. He also discussed a history of what we now know as BPM, from office automation to workflow to BPM, with contributions from many other areas from data modeling to operations management; having implemented workflow systems since the early 1990s, I’m familiar with this progression. He went through 20 BPM use cases that cover the entire BPM lifecycle, and mapped 289 research papers in the proceedings from the entire history of the BPM conferences against them:

  1. Design model
  2. Discover model from event data
  3. Select model from collection
  4. Merge models
  5. Compose model
  6. Design configurable model
  7. Merge models into configurable model
  8. Configure configurable model
  9. Refine model
  10. Enact model
  11. Log event data
  12. Monitor
  13. Adapt while running
  14. Analyze performance based on model
  15. Verify model
  16. Check conformance using event data
  17. Analyze performance using event data
  18. Repair model
  19. Extend model
  20. Improve model

He described each of these use cases briefly, and presented a notation to represent their characteristics; he also showed how the use cases can be chained into composites. The results of mapping the papers against the use cases were interesting: most papers were tagged with one or two of these use cases, although some addressed several.

He noted three spikes in use cases: design model, enact model, and verify model; he found the first two completely expected, but that verifying models was a surprising focus. He also pointed out that having few papers addressing use case 20, improve model, is a definite weakness in the research areas.

He also analyzed the research papers according to six key concerns, a less granular measure than the use cases:

  1. Process modeling languages
  2. Process enactment infrastructures
  3. Process model analysis
  4. Process mining
  5. Process flexibility
  6. Process reuse

With these, he mapped the interest in these key concerns over the years, showing how interest in the different areas has waxed and waned: a hype cycle for academic BPM topics.

He spent a bit of time on three specific challenges that should gain more research focus: process flexibility, process mining and process configuration; for example, considering the various types of process flexibility based on whether flexibility is applied at design time or runtime, and whether it is achieved by specification, by deviation, by underspecification or by change.

One clear goal of his talk was to help make BPM research more relevant as it matures, in part through more evidence-based BPM research, so as to encourage vendors and practitioners to adopt the new ideas put forward in BPM. He made some recommendations for future research papers:

  • Avoid introducing new languages without a clear purpose
  • Artifacts (software and data) need to be made available
  • Evaluate results based on predefined criteria and compare with other approaches
  • Build on shared platforms rather than developing prototypes from scratch
  • Make the contribution of the paper clear, potentially by tagging papers with one of the 20 use cases listed above

As BPM research matures, it makes sense that the standards are higher for research topics in general and definitely for having a paper accepted for publication and presentation at a conference. Instead of just having a theory and prototype, there’s a need for more empirical evidence backing up the research. I expect that we’ll see an improvement in the overall quality and utility of BPM research in the coming years as the competition becomes more intense.

BPM2012: Papers on BPM Applications

We had a session of three papers this afternoon at BPM 2012 on how BPM is applied in different environments.

Event-Driven Manufacturing Process Management Approach

[link]

The first paper, presented by Antonio Estruch from Universitat Jaume I in Castellón, Spain, is on automated manufacturing, where several different information systems (SCADA/PLCs, MES, ERP systems) are all involved in the manufacturing process but possibly not integrated. These systems generate events, and the paper focuses on detecting and analyzing the complex events from these various systems, providing knowledge on how to handle those events while complementing the existing systems.

BPMN 2.0 is proposed for modeling the processes, including the events; all those events that we love (or hate) in BPMN 2.0 are perfect in a scenario such as this, where multiple non-integrated systems are generating events, and processes need to be triggered in response to those events. This can be used for quality control, where the complex events can detect the potential presence of poorly manufactured items that may have escaped detection by the lower-level instrumentation. This also allows the modeling and measurement of key performance indicators in the manufacturing processes.
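
As an invented illustration of the kind of complex event that no single system would flag: an item that passes MES inspection shortly after a SCADA temperature excursion on the same line. The event shapes and the time window are my assumptions; in practice a CEP engine would feed such detections into BPMN message or signal events:

```python
# Toy complex event detection across non-integrated manufacturing systems:
# flag items that passed MES inspection shortly after a SCADA temperature
# excursion on the same line. Event shapes and the 60s window are invented.

WINDOW_SECONDS = 60

events = [  # merged, time-ordered stream from SCADA and MES
    {"t": 100, "source": "scada", "type": "temp_excursion", "line": "L1"},
    {"t": 130, "source": "mes", "type": "inspection_passed", "line": "L1", "item": "item-42"},
    {"t": 500, "source": "mes", "type": "inspection_passed", "line": "L1", "item": "item-43"},
]

def suspect_items(stream):
    """Items whose passed inspection followed a recent excursion on the same line."""
    last_excursion = {}  # line -> time of most recent excursion
    for e in sorted(stream, key=lambda e: e["t"]):
        if e["type"] == "temp_excursion":
            last_excursion[e["line"]] = e["t"]
        elif e["type"] == "inspection_passed":
            t0 = last_excursion.get(e["line"])
            if t0 is not None and e["t"] - t0 <= WINDOW_SECONDS:
                yield e["item"]  # e.g., trigger a BPMN re-inspection process

print(list(suspect_items(events)))  # ['item-42']; item-43 is outside the window
```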

There are some existing standards for integrating with manufacturing solutions, but more is required to make a usable BPM solution for manufacturing: easily configurable user interfaces for manufacturing-specific event visualization, event management capabilities, and data processing and analytics to assist with the complex event processing over time.

They have been testing out this approach using manufacturing simulation software and an open source BPMS, but want to expand it in the future with more CEP patterns and more complete prototypes for real-world scenarios.

Process-Based Design and Integration of Wireless Sensor Network Applications

[link]

Stefano Tranquillini of University of Trento presented a paper on using BPM to assist in programming wireless sensor network (WSN) applications, which are currently stand-alone systems coded by specialized developers. These networks of sensors and actuators, such as those that control HVAC systems in meeting rooms using sensors for temperature, CO2, presence and other factors, have been the subject of other research, where the sensors are exposed as web services and orchestrated within processes, with an extension to the process language specifically for sensors. The idea of the research described in this paper is to develop a modeling notation that allows integrated development of the business process and the WSN.

They created an extension to BPMN, BPMN4WSN, and created a modeling environment for designing systems that include both business processes and WSN logic. This presents WSN interactions as a specific task type; since these can represent complex interactions of sensors and actuators, they’re not as simple as a web service call, although the task type allows them to be abstracted as such within a BPMN model. The resulting model is both a deployable business process that deals with the business logic (e.g., billing for power consumption) and a source for code generation for the WSN logic, plus the endpoints and communication proxy that connect the business process to the WSN.

Future work in this area is to make the code more efficient and reusable, create a unified modeling notation, create control flow for WSN nodes rather than the simpler sensors and actuators, and find a solution for multi-process deployment on a WSN.

You can find out more about the research project at the makeSense website.

Modeling Rewards and Incentive Mechanisms for Social BPM

[link to pdf paper]

Ognjen Scekic of the Vienna University of Technology presented a paper on incentives in social BPM, where he defines social business processes as those executed by an ad hoc assembled team of workers, where the team extends beyond the originator’s organizational scope. In addition to requiring environments to allow for rich collaboration, social processes in business require management and assignment of more complex tasks as well as incentive practices.

In general, incentives/rewards are used to align the interests of employees and organizations. A single incentive targets a specific behavior but may also have unwanted results (e.g., incenting only on call time in a call center without considering customer satisfaction), so typically multiple incentives are combined to produce the desired results. He makes a distinction between an incentive, which is offered before the task is completed, and a reward, which is offered after task completion. Consumer social computing uses simple incentive mechanisms, but that is insufficient for complex business processes; there are no general models and systems for modeling, executing, monitoring and adapting reward/incentive mechanisms.

The research identified seven major incentive mechanisms (e.g., pay-for-performance), each with its own evaluation methods. These can interface with the crowd management systems being used for social computing. Eventually, the idea is to have higher level tools so that an HR manager can assemble the incentives themselves, which will then be deployed to work with the social computing platform. Their Rewarding Model (RMod) provides a language for composing and executing these different incentive mechanisms based on time, state and organizational structure. Essentially, this is a rules-based system that evaluates conditions either at specific time intervals or based on inbound events, and triggers actions in response.
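
From that description, the engine behaves like an event/condition/action rule system evaluated over worker performance data. Here’s a minimal sketch of that pattern; the rule content and data shapes are mine, not RMod’s actual language:

```python
# Minimal event/condition/action sketch of a rewarding engine, in the spirit
# of what was described: rules evaluated on a timer or on inbound events,
# triggering reward actions. Rule content and data shapes are invented.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # evaluated against a worker's state
    action: Callable[[dict], None]

@dataclass
class RewardEngine:
    rules: list = field(default_factory=list)

    def on_event(self, worker):
        """Called on inbound events or at fixed time intervals."""
        for rule in self.rules:
            if rule.condition(worker):
                rule.action(worker)

def grant_bonus(worker):
    worker["bonus"] += 100
    print(f"bonus for {worker['name']}")

engine = RewardEngine(rules=[
    Rule("pay_for_performance",
         condition=lambda w: w["incidents_resolved"] >= 10 and w["satisfaction"] >= 4.0,
         action=grant_bonus),
])
worker = {"name": "alice", "incidents_resolved": 12, "satisfaction": 4.3, "bonus": 0}
engine.on_event(worker)  # -> bonus for alice
```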

He described a scenario that used information from IBM India’s IT incident management system, generating automated team reorganization and monetary rewards in response to the work performed, as well as a scenario for a rotating presidency with a limit on the number of consecutive terms in the position.

Although most of this work does not appear to be specific to social BPM, any of the incentives/rewards that result in automated team structure reorganization are likely only applicable in self-organized collaborative teams. Otherwise, these same methods could be applied to manage incentives in more structured teams and processes, although those likely already have incentive/rewards schemes in place as part of their structure.

BPM2012: Papers on Process Quality

It’s the first day of the 2012 conference on BPM research (yesterday we had the pre-conference workshops), and the first set of papers is on process quality.

Tying Process Model Quality to the Modeling Process: The Impact of Structuring, Movement, and Speed

[link to pdf paper]

The first paper, presented by Jan Claes of Ghent University with several co-authors, looked at the possible links between process model quality and the modeling process itself, which has ramifications for teaching process modeling and related tools. Their initial research defined an understandability metric, then measured the correlation between different modeling practices and understandability. They found that structured modeling was positively correlated with understandability: if the model was created using a structured approach, that is, focusing on developing each block then assembling into the larger model, it was more understandable. Time spent modeling was negatively correlated: the longer that it took to create the model, the less understandable it was, which is similar to a finding that they referenced about how faster programmers tend to deliver code with fewer defects. A third factor, the number of times that the model objects were moved during modeling, showed only a slight correlation (personally, I find that people who are a bit obsessive tend to move model components more often, but that doesn’t necessarily lead to less understandable models).

This is fairly early in this research, and a number of areas need to be explored further. First, the understandability metric may need to be refined further; they have defined a measure of perspicuity that is about clarity of understanding, not necessarily structural correctness. Other factors need to be considered, such as the demographics and prior knowledge of the subjects.

Capabilities and Levels of Maturity in IT-Based Case Management

[link to pdf paper]

Jana Koehler of Lucerne University presented on how well case management systems support case managers in social work, healthcare and complex insurance claims. She set out key characteristics for a case management system: complex assessment instruments, setting objectives jointly with the client, and complex coordination, controlling and monitoring. Then, she discussed key capabilities: information handling (visualization, access and assessment), case history (insights, from simple descriptive artifacts to diagnostic and predictive capabilities), decisions (individual decisions through to best practices), and collaboration and administration.

The result is the C3M maturity model for IT-based case management (that is, supported by some sort of system): similar to other maturity models, this includes the stages of individualistic, supported, managed, standardized and transformative. The paper included a chart of the maturity levels, showing the main capability, benefit and risk at each level. A maturity model such as this can be helpful in evaluating case management systems by identifying capabilities, and providing potential roadmaps for vendors.

Business Process Architecture: Use and Consistency

[link to download for Springer subscribers]

The last paper in the process quality section was on business process architecture, presented by Remco Dijkman of Eindhoven University. He started with a definition of a business process architecture as a representation of the business processes in an organization and the relationships between processes; their evaluation shows that “explicitly representing and analyzing relations between process models can help improving the correctness and consistency of the business process architecture as a whole”. They listed the different types of relations between processes (triggering, flow, composition and specialization) as well as the events that define the relationship between processes. This process architecture is not an executable process, even though it may have the look of a process model, but rather a high-level abstract view.

The goal of all of this is not just to define process architecture, but to create a framework for assessing the quality of a particular architecture based on patterns and anti-patterns within the relations between the processes; several pages of the paper cover a detailed description of the patterns and anti-patterns. They did a case study of constructing a process architecture for a subset of the SAP reference model, producing a count of each type of pattern and anti-pattern encountered. Looking at the anti-patterns specifically highlights areas in the reference model that may be problematic; although it doesn’t find many types of problems, it is a good first-stage analysis tool.

Their future plans in this research include formalization of the process architecture, visualization, and design of the architecture based on a complex organization.

Overall, a good set of papers looking at the issues of improving quality in processes.

ACM Workshop at BPM2012: BPMN Smackdown by @swensonkeith

In the last portion of the ACM workshop at BPM 2012, we had a couple of short non-research papers, the first of which was by Keith Swenson, in which he posits that BPMN is incompatible with ACM. He starts by saying that it’s not a critique of BPMN in particular, but of any two-dimensional flow diagram notation. He also makes a distinction between production case management and adaptive case management – a distinction that I find to be a bit artificial since I don’t think that there’s a hard line between them – where PCM systems have developers creating systems for people to use, whereas ACM has people doing the work themselves. The distinction between PCM and ACM has created a thin, rarefied slice of what remains defined as ACM: doctors and lawyers are favorite examples, and it is self-evident that you’re not going to get either doctors or lawyers to draw event-driven BPMN models with the full set of 100+ elements for their processes, or to follow rigidly defined processes in order to accomplish their daily tasks. Instead, their “processes” should be represented as checklists, so that users can completely understand all of the tasks, and can easily modify the process as required.

He states that drawing a diagram (such as BPMN) requires a level of abstract thinking that is common with developers but not with end users, hence BPMN is really a programming language. Taking all of that together, you can see where he’s coming from, even if you disagree: if a system uses BPMN to model processes, most people will not understand how BPMN models work [if they are drawn in full complexity by developers, I would add], therefore won’t modify them; if all users can’t modify the process, then it’s not ACM. Furthermore, creating a flow model with temporal dependencies where no such dependencies exist in reality hinders adaptability, since people will be forced to follow the flow even if there is another way to accomplish their goals that might be more appropriate in a particular context.

Therefore,

BPMN ⇒ ¬ACM

My problem with this is that BPMN has been used by developers to create complex flow models because both the language and their organization allows them to, but that’s not the only way to use it. You can use a limited subset of BPMN to create flow models – in cases where flow models are appropriate, such as when there are clear temporal dependencies – that are understandable by anyone involved in those processes. You can create a BPMN diagram that is a collection of ad hoc tasks that don’t have temporal dependencies, which is semantically identical to a checklist. You can create alternative views, so that a model may be viewed in different forms by different audiences. In other words, just like Jessica Rabbit, BPMN isn’t bad, it’s just drawn that way.

ACM Workshop at BPM2012: ACM in Practice

The first part of the afternoon at the ACM workshop at BPM 2012 moved away from theory and research, and into actual implementations of ACM plus the emerging CMMN standard.

Helle Frisak Sem of Computas presented a paper that she co-authored with her colleagues Steinar Carlsen and Gunnar John Coll, describing an ACM system that is in production at the Norwegian Food Safety Authority (NFSA) for food safety inspections, information and investigations. This implementation was the recipient of a 2012 ACM award. At its core, the control activity module of the system has the concept of a case that is a rich folder of information about a person or business. The case manager performs tasks (such as scheduling and documenting food inspections) in the context of a case. Each task type has a complete task template that contains all of the possible steps relevant to this task type; at runtime, the user sees a derived list of steps based on conditions in order to complete the task (similar to Ilia Bider’s respondent systems theory), which includes concepts of step dependencies and optional versus mandatory steps. Steps may appear and disappear based on changing conditions, and the user can complete the steps in any order unless there are specific dependencies. Each step, as completed, contributes to the case folder so that a complete record of every task exists. In addition to regular inspections, the system has an emergency response module for managing incidents such as livestock disease outbreak: unlike the more structured inspection tasks, this is used more for logging the incidents, proposing actions and logging decisions, as well as logging media requests and responses.
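
The derived-checklist mechanic is worth making concrete: the template holds every possible step with applicability conditions, dependencies and mandatory/optional flags, and the visible list is recomputed from the current case state. This is my own rendering with invented steps, not the Computas implementation:

```python
# Sketch of deriving a runtime step list from a task template: the template
# holds all possible steps with applicability conditions, dependencies and
# mandatory/optional flags; the visible list is recomputed as conditions
# change. Step and field names are invented.

template = [
    {"step": "schedule_visit", "mandatory": True, "applies": lambda c: True, "depends_on": []},
    {"step": "collect_samples", "mandatory": True, "applies": lambda c: c["type"] == "inspection",
     "depends_on": ["schedule_visit"]},
    {"step": "issue_warning", "mandatory": False, "applies": lambda c: c["violations"] > 0,
     "depends_on": ["collect_samples"]},
]

def derived_steps(template, case, done):
    """Steps currently visible to the case manager, given the case state."""
    visible = []
    for s in template:
        if not s["applies"](case):
            continue  # step disappears when its condition no longer holds
        ready = all(d in done for d in s["depends_on"])
        visible.append((s["step"], "mandatory" if s["mandatory"] else "optional",
                        "ready" if ready else "blocked"))
    return visible

case = {"type": "inspection", "violations": 2}
print(derived_steps(template, case, done={"schedule_visit"}))
# [('schedule_visit', 'mandatory', 'ready'), ('collect_samples', 'mandatory', 'ready'),
#  ('issue_warning', 'optional', 'blocked')]
```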

The control activity module is much more structured and pre-defined, and is hence domain-specific; the emergency control module is domain independent, since it does not contain much, if any, specific domain knowledge. A couple of questions emerged: first, whether domain-neutral systems are really ACM systems, or whether domain specificity is one of the characteristics of ACM. Secondly, the degree of adaptability that is required to be considered ACM, given a spectrum from structured to unstructured: process-driven server integration, human process management, production case management, adaptive case management. As you can imagine, I really like this spectrum because it’s very close to being a relabelling of the “spectrum” diagram that I created last year in which I stated that it’s not about BPM versus ACM, rather a spectrum of process functionality – it’s much more productive in the real world to think about the majority of the processes that fit somewhere in the middle of the spectrum, not at either extreme.

The next paper was presented by Rüdiger Pryss of Ulm University, describing a mobile task management system for medical ward rounds (i.e., doctors with iPads). Conveniently, the university also has a hospital, and they started out looking at ways to integrate workflow into ward rounds, but found that it didn’t work with the way that doctors worked when they were doing rounds, which traditionally uses pen and paper to create a to-do list as they walk around and see patients. Moving to an iPad-based system for managing their tasks on the rounds required the doctors to change their methods, although a lot of work on user experience was done in order to replicate their preferred way of working while maintaining input speed through templates and voice input. It was also able to add significant value by integrating patient information as well as predefined workflows for specific tasks to be performed by others, such as x-rays. Interestingly, although the technical aspects of task management improved, the patient communication degraded since the doctors were documenting on the iPad while they were with the patient instead of waiting until after seeing the patient to document on paper, as they did previously; overall it took less time, but the experience needs to be reworked, possibly with two doctors using linked iPads to interview and document simultaneously.

Last up in this section was a paper authored by Mike Marin, Richard Hull and Roman Vaculin of IBM, presented by one of their colleagues from the Haifa research lab, on the emerging OMG Case Management Model and Notation (CMMN) standard. Unfortunately, OMG does not release any information about proposed or in-progress standards, only published ones, so many of us have never seen this before. At the heart of CMMN is a case folder object, based on CMIS, which includes folders, documents and properties for both. The top-level behavioral model includes tasks (where work is performed, both manual and automated), stages (hierarchical clustering of work) and milestones (business-related operational objectives); progression through stages is controlled by worker requests and by sentries (rules), and dependencies can be indicated although there is not strictly a flow model. Stages in case instances can have scope lists, which indicate discretionary tasks. OMG manages the BPMN standard, and there is definitely a lot of BPMN-ness about CMMN. I think that a key question will be whether the two standards can be merged into a single standard.

ACM Workshop at BPM2012: Supporting Collaborative Work

We heard two more papers in the morning, the first presented by Nicolas Mundbrod of Ulm University on system support for collaborative knowledge work (paper co-authored by Jens Kolb and Manfred Reichert). This is the first of the papers today that is starting to show some of the crossover with social software: they studied the characteristics of collaborative knowledge work – uncertainty, goal orientation, emergence of work, and growing knowledge base – in order to determine what functionality is required to support it. From this, they defined nine dimensions by which to measure collaborative knowledge work: knowledge action types (e.g., acquisition, application, dissemination), methodology (e.g., explicit, tacit), interdisciplinarity (range from domain-specific to interdisciplinary), organizational frame (e.g., project, case, spontaneous), spatial proximity (range from direct to remote), involved knowledge workers (range from two to countless), temporal constraints (e.g., fixed, relative), information interdependency (range from no focus to main focus on interdependencies), and number of repetitions (range from unique to frequent). Based on the dimensions and characteristics, they developed a collaborative knowledge work lifecycle based on the BPM lifecycle and knowledge work lifecycle: orientation leading into template design, collaboration runtime, and records evaluation. Records evaluation is not just after-the-fact analysis of cases, but acts as an information source during the collaborative runtime. They feel that there are a number of tools that target specific aspects of the collaborative lifecycle, but that more research is required on systems to support this type of knowledge work, especially for the crossover between knowledge work and more structured workflow. There were some interesting discussions following, including about other related research such as modifying the knowledge work environment (including which steps are required) based on the experience of the individual worker, so that novice workers can be guided without annoying experienced workers.

Staying with the theme of systems for supporting work, Irina Rychkova of University Paris 1 Pantheon-Sorbonne presented on automated support for case management processes with declarative configurable specifications. She maintains that process models are important for a shared understanding of work, but that a traditional BPMS cannot properly manage case management processes because of the unpredictability, variability and emergent nature of process instances. In her research, she attempted to model a flexible process (mortgage application) with BPMN, but found a number of challenges: the tradeoffs between flexibility and complexity when creating the model, especially when dealing with optional and required information; and runtime adaptability, requiring a lot of human expertise and decision-making and reducing reusability. Instead, she proposes configuration mechanisms to allow processes to be configured (data objects and rules) during runtime, which allows for a more adaptable process as well as collecting information to improve models in the future. She maintains that the problem with BPMN is not the language itself, but about modeling style: for case management, instead of using an imperative style that defines tasks in a specific order, we need to use a declarative style where tasks can be defined without explicit ordering, but with rules that allow tasks to be dynamically enabled and disabled based on conditions. BPMN works well for an imperative style of process models, but some new notation – or extension to BPMN – is required to represent configurable data objects, optional data objects, complex/composite structure of data objects, and conditionally obligatory/optional/alternative data objects based on rules. Similarly, there is a need to model the rules that drive these configurations during runtime. There is quite a bit of other research being done on declarative/goal-based process models, and a number of products emerging in this area. There are also a lot of differing opinions on whether BPMN is suitable for modeling case management processes. It’s not clear that BPMN will emerge as the standard for this sort of modeling, but it’s worth considering whether it can be extended to suit, given its already widespread (albeit shallow) adoption.
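
Her imperative-versus-declarative point is easy to show in miniature: instead of wiring tasks into a sequence, each task gets an enablement rule over the case data, and the set of currently enabled tasks is recomputed as data arrives. A sketch of the style, using invented mortgage tasks, not her notation:

```python
# Miniature contrast of declarative vs imperative style for the mortgage
# example: no explicit ordering; each task is enabled or disabled by a rule
# over the current case data. Task names and rules are invented.

rules = {
    "request_appraisal": lambda d: "property_address" in d,
    "verify_income": lambda d: "income_docs" in d,
    "make_offer": lambda d: d.get("appraisal_ok") and d.get("income_ok"),
    "reject": lambda d: d.get("income_ok") is False,
}

def enabled_tasks(case_data):
    return [task for task, rule in rules.items() if rule(case_data)]

case = {"property_address": "12 Main St"}
print(enabled_tasks(case))              # ['request_appraisal']

case["income_docs"] = ["T4-2011.pdf"]
print(enabled_tasks(case))              # ['request_appraisal', 'verify_income']

case.update(appraisal_ok=True, income_ok=True)
print(enabled_tasks(case))              # [..., 'make_offer'] but never 'reject'
```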

ACM Workshop at BPM2012: Systems Theory and Activity Modalities

It’s the first day of the annual research/academic conference on BPM, this year held in Tallinn, Estonia, and I’m attending the ACM workshop organized by Irina Rychkova of University Paris 1 Pantheon-Sorbonne, Ilia Bider of Stockholm University and IbisSoft, and Keith Swenson of Fujitsu. This is my fifth year at this conference, and I always find it a great opportunity to see what’s going on in the academic research – some of which will eventually make its way into commercial products – as well as meet people from both the research and industry sides.

The workshop days are organized as a series of papers presented on a common theme, kicked off with a keynote from one of the session chairs. In this case, Ilia Bider gave the keynote on a non-workflow theory of business processes. He has a really interesting approach, since he models enterprises as complex multilevel adaptable systems using systems theory (a topic of particular interest, given that my degree is in systems design engineering, which included some amount of systems theory), where process instances (cases) are temporal/situational respondent systems created from system assets as required to address a situation system. This means that an instance is based, in some part, on a process template (the assets from which it is created) as well as sensors that inform it about the situation systems. He describes process instances as moving through state space, and the process model as a set of formal rules describing the valid paths of the possible trajectories. A process model, then, is a state space with a goal defined as a surface within the space, and the set of trajectories defined prescriptively (e.g., using a flow diagram), by constraint-based rules or non-prescriptive methods. Moving from theory to the systems that support adaptive processes/cases, the system must provide a shared map of the multi-dimensional state space so that everyone can see the goal and current position in order to plan the next moves; support for collaboration coordination for complex movements in multiple dimensions simultaneously; and guidance for moving along the trajectories. Unlike structured BPM systems, the latter is not the main focus, although may include concepts of obligation (must), prohibition (can’t), recommendations (should) and discouragements (should not). If the landscape of the state space changes during the execution of an instance, it’s more important to support the visibility of the space and the goals, and allow people to move within the space as required to achieve the goals. Interesting stuff. You can read more of his publications here.
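
Bider’s framing translates naturally into code, which may help readers who think better in that medium: a case is a position in state space, the goal is a predicate (a surface) over that space, and the model is a set of rules constraining valid moves rather than a fixed flow. The dimensions and rules here are invented:

```python
# Toy rendering of the state-space view: a case instance is a position in a
# multi-dimensional state space, the goal is a surface (predicate) in that
# space, and the model constrains valid moves. Dimensions and rules invented.

goal = lambda s: s["claim_assessed"] and s["payment_made"]

constraints = [
    # prohibition: can't pay an unassessed claim
    lambda old, new: not (new["payment_made"] and not new["claim_assessed"]),
]

def try_move(state, **changes):
    proposed = {**state, **changes}
    if all(rule(state, proposed) for rule in constraints):
        return proposed
    print(f"move {changes} blocked by a constraint")
    return state

state = {"claim_assessed": False, "payment_made": False}
state = try_move(state, payment_made=True)    # blocked: assess first
state = try_move(state, claim_assessed=True)  # allowed
state = try_move(state, payment_made=True)    # allowed now
print("goal reached:", goal(state))           # True
```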

The next paper, on ACM from an activity modality perspective, was presented by Lars Taxén of Linköping University in Sweden. He proposes that there are six activity modalities to consider when shifting from a process-centric view to an information-centric view – objectivation, contextualization, spatialization, temporalization, stabilization, and transition – where these represent the innate predispositions that we use for decision-making and taking action. BPMN focuses almost purely on temporalization by modeling the flow; as well as being one-dimensional, it is prescriptive, which doesn’t support adaptive cases/processes very well. Process isn’t unimportant in ACM, but it is there to serve data artifacts in some way; he doesn’t suggest completely discarding process models, but finding more declarative representations than BPMN. Business processes are, of course, multi-dimensional, and it has been somewhat of a disservice to focus purely on prescriptive flow models as the sole embodiment of business processes. I definitely have not done his paper justice, since I did not read it in advance and was only summarizing on the fly from his short presentation. However, in the discussion following, there was an interesting proposal that ACM is BPM plus these other dimensions. Slowly, we move towards a grand unification theory for ACM and BPM. [link to pdf paper]

I didn’t intend to publish a post for each paper, but as we break for morning coffee, I’ll publish this and resume after the break.