CamundaCon 2023 Day 2: Process and Decision Standards

Falko Menge and Marco Lopes from Camunda gave a presentation on Camunda’s involvement in the development of OMG’s core process and decision standards, BPMN and DMN. Camunda (and Falko in particular) has been involved in OMG standards for a long time, and embraces these two standards in its products. Sadly, at least to me, they gave up support for the case management standard, CMMN, due to lackluster market adoption; other vendors such as Flowable support all three of the standards in their products and have viable use cases for CMMN.

Falko and Marco gave a shout out to universities and the BPM academic research conference that I attended recently as promoters of both the concepts of standards and future research into the standards. Camunda has not only participated in the standards efforts, but the co-founders wrote a book on Real-Life BPMN as they discovered the ways that it can best be used.

They gave a good history of the development of the BPMN standard and also of Camunda’s implementation of it, from the early days of the Eclipse-based BPMN modeler to the modern web-based modelers. Camunda became involved in the BPMN Model Interchange Working Group (MIWG) to be able to exchange models between different modeling platforms, because they recognized that a lot of organizations do much broader business modeling in tools aimed at business analysts, then want to transfer the models to a process execution platform like Camunda. Different vendors choose to participate in the BPMN MIWG tests, and the results are published so that the level of interoperability is understood.

DMN is another critical standard, allowing modelers to create standardized decision models; it also includes the Friendly-Enough Expression Language (FEEL) for scripting within the models. The DMN Technology Compatibility Kit (TCK) is a set of decision models and expected results that provides test results similar to those of the BPMN MIWG tests: information about each vendor’s product test coverage is published so that its implementation of DMN can be assessed by potential customers.

Although standards are sometimes decried as being too difficult for business people to understand and use (they’re really not), they create an environment where common executable models of processes and decisions can be created and exchanged across many different vendor platforms. Although there are many other parts of a technology stack that can create vendor lock-in, process and decision models don’t need to be part of that. Also, someone working at a company that uses BPMN and DMN modeling tools can easily move to a different organization that uses different tools without having to relearn a proprietary modeling language. From a model richness standpoint, many vendors and researchers working together towards a common goal can create a much better and more extensive standard (as long as they’re not squabbling over details).

They went on to discuss some of the upcoming new standards: BPM+ Harmonization Model and Notation (BHMN), Shared Data Model and Notation (SDMN), and Knowledge Package Model and Notation (KPMN), all of which are in some way involved in integrating BPMN and DMN due to the high degree of overlap between these standards in many organizations. Since these standards aren’t close to release, they’re not planned for a near-future version of Camunda, but they’ll be added to the product management roadmap as the standards evolve and the customer requirements for them become clear.

Happy birthday to BPMN 2.0

Bruce Silver points out that it’s been 10 years since the finalization of BPMN 2.0, the standard notation that we use for modeling (and sometimes executing) business processes. The standard wasn’t published until some time after that, and there have been revisions over the years, but BPMN 2.0 was the start of a wave of standardization in the BPMS market since it included a serialization file format that made model interchange between products possible. There’s always been some amount of controversy swirling around BPMN: some consider it too difficult for non-technical people to understand, some consider it too restrictive for technical implementations, and more. I believe that a subset of BPMN is a good tool for process modeling by business people, and that it’s a powerful process implementation tool for developers, but I agree that it’s not the only tool you should have in your toolbox.

Bruce’s post takes us back to some basic definitions of a process, and why BPMN is a better fit for modeling processes. He also covers some of the things that are not great about the standard:

The ability to understand the process behavior in detail simply by inspecting the diagram, unfortunately, was not top of mind in the BPMN 2.0 task force in 2010. They were solely focused on making the diagrams executable on an automation engine. But to most BPMN users today, precise description based on the diagram alone is much more important.

To help with the adoption of the standard, Bruce developed conventions for the use of the standard, publishing his BPMN Method & Style books and training. Some modeling vendors have even incorporated this into their product, so that you can validate your models against his conventions.

Regardless of whether you love or hate BPMN, it’s impossible to deny the impact that it has had on the BPMS market.

My writing on the Trisotech blog: better analysis and design of processes

I’ve been writing some guest posts over on the Trisotech blog, but haven’t mentioned them here for a while. Here’s a recap of what I’ve posted over there the past few months:

In May, I wrote about designing loosely-coupled processes to reduce fragility. I had previously written about Conway’s Law and the problems of functional silos within an organization, but then the pandemic disruption hit and I wanted to highlight how we can avoid the sorts of cascading supply chain process failures that we saw early on. A big part of this is not having tightly-coupled end-to-end processes, but instead separating out different parts of the process so that they can be changed and scaled independently of each other while still forming part of an overall process.

In July, I helped to organize the DecisionCAMP conference, and wrote about the BPMN-CMMN-DMN “triple crown”: not just the mechanics of how the three standards work together, but why you would choose one over the other in a specific design situation. There are particular challenges with the skill sets of business analysts who are expected to model organizations using these standards, since they will tend to use the one that they’re most familiar with regardless of its suitability to the task at hand. There are also challenges with the understandability of multi-model representations, which require a business operations reader of the models to see how this BPMN diagram, that CMMN model and this other DMN definition all fit together.

In August, I focused on better process analysis using analytical techniques, namely process mining, and gave a quick intro to process mining for those who haven’t seen it in action. For several months now, we haven’t been able to do a lot of business “as is” analysis through job shadowing and interviews, and I put forward the idea that this is the time for business analysts to start learning about process mining as another tool in their kit of analysis techniques.

In early September, I wrote about another problem that can arise due to the current trend towards distributed (work from home) processes: business email compromise fraud, and how to foil it with better processes. I don’t usually write about cybersecurity topics, but I have my own in-home specialist, and this topic overlapped nicely with my process focus and the need for different types of compliance checks to be built in.

Then, at the end of September, I finished up the latest run of posts with one about the process mining research that I had seen at the (virtual) academic BPM 2020 conference: mining processes out of unstructured emails, and queue mining to see the impact of queue congestion on processes.

Recently, I gave a keynote on aligning intelligent automation with incentives and business outcomes at the Bizagi Catalyst virtual conference, and I’ve been putting together some more detailed thoughts on that topic for this month’s post. Stay tuned.

Disclosure: Trisotech is my customer, and I am compensated for writing posts for publication on their site. However, they have no editorial control or input into the topics that I wrote about, and no input into what I write here on my own blog.

DecisionCAMP 2019: Industry use cases in airport gate allocation, financial risk monitoring, and composite material design

The Decision Model for Gate Allocation. Silvie Spreeuwenberg, LibRT

Day 3 of DecisionCAMP 2019 started with three use cases from industry. First, Silvie Spreeuwenberg presented on decision models for allocating airport gates, specifically at Schiphol airport in Amsterdam. Although gate plans are made a day in advance based on flight schedules, they change constantly due to early arrivals, late departures and other unexpected disruptions to the schedule. On any given day, there are 50-100 gate changes in the hour before an aircraft’s arrival; although this is seen as a disruption, it could also be considered an opportunity for optimization.

Day-ahead decision model for assigning gates. From Silvie Spreeuwenberg’s presentation.

There were a lot of rules used for the planning and reassignment that had more to do with preferences than actual optimization; they really wanted to drive towards the objective of optimizing asset usage and therefore airport capacity. There are a lot of factors involved, such as having sufficient gate area capacity to handle the number of passengers for a flight, or having buses available to offload flights that can’t be assigned a gate. They have created a policy for aircraft stand allocation which includes some identifiable decision tables, although these are just at the strategy documentation phase.

Definitely a complex problem that has applicability at every major airport around the world.

A hybrid implementation of multi-channel, multi-modal, high volume financial risk monitoring. Martijn Tromm and Marten Schokking, Oracle

Marten Schokking and Martijn Tromm presented a use case from Rabobank using decision management and machine learning for customer risk assessment in terms of KYC (know your client) and AML (anti-money laundering). This is used during client onboarding, but also during periodic reviews as well as reviews triggered by specific events. There are scoring rules that use data input from a variety of sources, including client information from a CRM, interview responses and policies.

Risk model for customer risk assessment. From Marten Schokking and Martijn Tromm’s presentation.

There are government regulations requiring that this be done for all clients at certain times. A triggering event, such as a change in the customer’s circumstances, will cause a customer interview and other data analysis to recalculate the risk; this may result in a more detailed manual review of the risk. At this point, there is still a lot of employee work which is creating a challenge in completing the customer risk assessments within the regulatory deadlines; they are looking at how to automate the basic assessment using machine learning in order to reduce the manual work required.

The risk model has been built using Oracle Policy Automation rules engine integrated with the Siebel CRM. They are reusing rules across channels where possible, and the use of natural language in the rules definition helps with traceability to the policies. They are continuing to innovate with rules, such as having context-driven rules based on user behavior on specific channels, and having a fast two-day turnaround for rule changes related to certain types of policy changes. The ability to predict the impact of policy changes based on actual data allows for operational planning to accommodate those changes.

The Role of DMN and BPMN in the Design of Composite Materials. Dario Campagna, ESTECO

Dario Campagna presented on how the COMPOSELECTOR project is integrating material modeling and business process management in a decision support system for composite material design; this type of design can have complex requirements, business decisions and simulation workflows. Using application cases from Dow, Airbus and Goodyear, they modeled the business flow using BPMN and DMN. ESTECO, which creates software tools for engineering design, is a contributor to the COMPOSELECTOR project.

Subprocess in Dow flow showing decision invocation. From Dario Campagna’s presentation.

While BPMN is used to model the flow at the business level, DMN decision tables are used to make decisions on the class of materials and manufacturing process, then on the simulation workflows to use based on business and engineering KPIs. DMN provides the link from the business layer to the engineering layer, then to the simulation layer. Using DMN provides a higher level of consistency in decision-making, which leads to better design and lower costs.

Decision table used to select simulation workflow. From Dario Campagna’s presentation.

We saw a brief video of a demo of the system in use: a business-level manager selects high-level parameters and KPIs for the proposed design; this selects one or more simulation models for the material design, which is then confirmed or decided by an engineer; the results of the simulation are passed back to the manager for final decision-making. This has the effect of integrating the business and technical sides of the design process, and of including modeling and simulation results in the business-level (human) decisions in a standardized way.

CamundaCon 2019 breakout: DMN and BPMN for reusable survey forms at Indiana Farm Bureau

Sowmya Raghunathan and Corinna Cohn presented on a claims intake implementation that uses BPMN and DMN in an interesting way: driving the intake forms used by a claims administrator when gathering first notice of loss (FNOL) information from a claimant. The idea is that the claims admin doesn’t need to have any information about the claim type, and the claimant doesn’t get asked any irrelevant questions, because the form always presents the next best question based on previous responses: a wizard-like model, but driven by BPMN and DMN.

As the application and technical architects at Indiana Farm Bureau Insurance, they were able to give us a good view of how they use the tools for this: BPMN for orchestrating the DMN and UI communication as well as storing the responses, DMN for defining the questions and question/response mapping, and a UI component for implementing the survey forms. They consider this a headless application, but of course, it does surface via the form UI; from a Camunda process standpoint, however, it is a decoupled piece of the architecture that interfaces with the claims system.

Technical architecture of the survey DMN/BPMN system

We saw a demo of one of the claim forms at work, where the previous questions and responses can be seen, and changes to the previous responses may cause changes to subsequent questions based on the DMN decision tables behind the scenes. They use a couple of DMN tables just as configuration tables for the UI for the questions and options (e.g., radio buttons versus free-form responses), then a Next Question decision table to determine the next question based on the previous response: this table is based on a directed acyclic graph that links questions (nodes) via answers (links), which allows for easy re-navigation of the graph if an earlier response is changed.

DMN decision table for FNOL questions related to auto claim
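To illustrate the concept (not their actual implementation), the next-question lookup over such a directed acyclic graph can be sketched as a simple keyed map; the question ids and responses below are made up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Toy sketch of a "next question" lookup over a question/answer graph.
// Question ids and responses are hypothetical, not the actual FNOL tables.
public class NextQuestionTable {

    // Edge in the directed acyclic graph: (current question, response) -> next question
    private final Map<String, String> edges = new HashMap<>();

    public void addRule(String questionId, String response, String nextQuestionId) {
        edges.put(questionId + "|" + response, nextQuestionId);
    }

    // Empty result means no further questions apply, i.e. the survey can be exited
    public Optional<String> nextQuestion(String questionId, String response) {
        return Optional.ofNullable(edges.get(questionId + "|" + response));
    }

    public static void main(String[] args) {
        NextQuestionTable table = new NextQuestionTable();
        table.addRule("claimType", "auto", "vehicleDrivable");
        table.addRule("claimType", "property", "propertyDamageType");
        table.addRule("vehicleDrivable", "no", "towRequired");

        // Changing an earlier response simply re-enters the graph at that node,
        // so downstream questions are re-derived rather than patched in place.
        System.out.println(table.nextQuestion("claimType", "auto")); // Optional[vehicleDrivable]
    }
}
```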

BPMN is used in a dynamic question subprocess to navigate to and determine the next question, and to decide whether the survey can be exited; once sufficient information has been collected, the FNOL is initiated in the claims systems.

Dynamic Questions BPMN subprocess

The use of DMN means that the questions can be changed very easily since they’re not embedded in the code; this means that they can be created and modified by business analysts rather than requiring developers to code these into the UI directly.

FNOL BPMN process

The entire framework is reusable, and could be quickly reconfigured to be used for any type of survey. That’s great, because a few years ago, I saw a very similar use case for this in a clinical situation for a stroke assessment questionnaire: in a hospital setting, when someone arrives at an emergency department and is suspected of having had a stroke, there are standard questions to ask in order to evaluate the patient’s condition. At the time, I thought that this would be a perfect use case for DMN and BPMN, although that was beyond the scope of the project at that time.

A match made in BPMN/DMN heaven: @bpmswatch joining @Trisotech

Trisotech recently announced that Bruce Silver – who writes and teaches the gold standard Method & Style books and courses on BPMN and DMN, and who has forgotten more about BPMN than most people ever learned – is joining Trisotech as a principal consultant. Congrats all around, although Bruce may regret this when he’s needed at Trisotech Montreal headquarters in January when it’s -30C. ;)

Bruce even has his first post on the Trisotech blog, about practical DMN basics. Essential reading for getting started with DMN.

Disclosure: Trisotech is a consulting client of mine. I’m not being paid for writing this post, I just like these guys because they’re smart and do great work. You can read about my relationship with vendors here.

bpmNEXT 2019 demos: automation services with @Trisotech and @bpmswatch

The day started with my keynote on rolling your own digital automation platform using BPM and microservices, which set the stage for the two demos and the round table discussion that followed.

Business Automation as a Service, with Denis Gagne of Trisotech

Denis demoed a new product release from Trisotech, their business automation as a service platform: competing with services such as Zapier and IFTTT but with better process and decision management, and more complex service types available for integration. He showed creating a service built on a Twitter trigger, using BPMN to model the orchestration and FEEL as the scripting language in script activities, and incorporating a machine learning sentiment score and a decision service for categorizing the results, with the result displayed in the color of a flashing smart light bulb. Every service created exposes an Open API and REST API by default, and is deployed as a self-contained microservice. He showed a more complex example of marketing automation that extracts data from an input form, uses a geo-locator to find the customer location, uses a DMN decision model to assign to a sales team based on geography and other form parameters, then creates a lead in Microsoft Dynamics CRM. He finished up with an RPA task example that included the funniest execution of an “I am not a robot” CAPTCHA ever. Key point here is that Trisotech has moved from a pure modeling vendor into the execution space, integrated with any Open API service, and deployable across a number of different cloud platforms using standard protocols. Looking forward to playing around with this.

Business-Composable Services for the Mortgage Industry, with Bruce Silver of Method and Style

Bruce showed the business automation services that he’s created using Trisotech’s platform for the mortgage industry. Although he started looking at decision services around how to determine if someone should be approved for a mortgage (or how large of a mortgage), process was also required to do things like handle mapping and validation of data. Everything is driven by a standard application form and a standard set of underwriting rules used in the US mortgage industry, although this could be modified to suit other markets with different rules. The DMN rules are written in business-readable language, allowing them to be changed by non-developers. The BPMN process does the data validation and mapping before invoking the underwriting decision service. The entire process can be published as a service to be called from any environment, such as a web app used by underwriters inside a financial company or by an online prequalification review done directly by the consumer. The plan is to make these models and services available to see what the adoption is like, to help highlight the value and drive the usage of BPMN and DMN in practice.

Industry Round Table: The Coming Impact of Decision Services and Machine Learning on Business Automation

We finished the morning of day 2 with a discussion that included three of the earlier demo presenters: Denis Gagne, Bruce Silver and Scott Menter. They each gave a short talk on how decision services and machine learning are changing the automation landscape. Some ideas discussed:

  • It’s still up in the air whether DMN will “cross the chasm” and become generally used (to the same degree as, for example, BPMN); this means that vendors need to fully support it, potentially as an execution as well as requirements language.
  • Having machine learning algorithms expressed as DMN can improve transparency of decisions, which is essential in some jurisdictions (e.g., GDPR). There is a need for “explainable AI”.
  • The population using DMN is smaller than that using BPMN, and the required skill level is higher, although still well within the capabilities of data-focused business people who are comfortable with formulas and expression languages.
  • There’s a distinction between symbolic (rules-based) and sub-symbolic (neural network) AI algorithms in terms of what they can do and how they perform; however, sub-symbolic AI is much more of a black box in terms of decision transparency.
  • If we here at bpmNEXT aren’t thinking about the ethics of automation, who will? Consider the labor disruption of automation, or decisions that make a choice involving the value of life (the AI “trolley problem”), or old norms used as training data to create biased machine learning.
  • We’re still in a culture of having people at a certain skill level (e.g., surgeons, pilots) make their own decisions, although they might be advised by AI. How soon before we accept automated decisions at that level?
  • Individually-targeted decisions are happening now by what is presented to specific people through platforms like Google Search and Amazon. How is our behavior being controlled by the limited set of options presented to us?
  • The closer that a technology gets to the end effect, the more responsibility that the creator of the technology needs to take in how it is used.
  • Machine learning may be the best way to discover the best transparent decision logic from human action (unfortunately that will also include the human biases), allowing for people to understand how and why specific decisions are made.
  • When AI is a black box, it needs to be understood as being a black box, so that adequate constructs can be created around it for testing and usage.

Great discussion and audience participation, and a good follow-on from the two demos that showed decision services in action.

Camunda BPM 7.5: CMMN, BPMN element templates, and more

I attended an analyst briefing earlier today with Jakob Freund, CEO of Camunda, on the latest release of their product, Camunda BPM 7.5. This includes both the open source version available for free download, and the commercial version with the same code base plus a few additional features and professional support. Camunda won the “Best in Show” award at the recent bpmNEXT conference, where they demonstrated combining DMN with BPMN and CMMN; the addition of the DMN decision modeling standard to BPMN and CMMN modeling environments is starting to catch on, and Camunda has been at the front edge of the wave to push that beyond modeling into implementation.

They are managing to keep their semi-annual release schedule since they forked from Activiti in 2013: version 7.0 (September 2013) reworked the engine for scalability, redid the REST API and integrated their Cockpit administration module; 7.1 (March 2014) focused on completing the stack with performance improvements and more BPMN 2.0 support; 7.2 (November 2014) added CMMN execution support and a new tasklist; 7.3 (May 2015) added process instance modification, improved authorization models and tasklist plugins; and 7.4 (November 2015) debuted the free downloadable Camunda Modeler based on the bpmn.io web application, and added DMN modeling and execution. Pretty impressive progression of features for a small company with only 18 core developers, and they maintain their focus on providing developer-friendly BPM rather than a user-oriented low-code environment. They support their enterprise editions for 18 months; of their 85-90 enterprise customers, 25-30% are on 7.4 with most of the rest on 7.3. I suspect that a customer base of mostly developers means that customers are accustomed to the cycle of regression testing and upgrades, and far fewer lag behind on old versions than would be common with products aimed at a less technical market.

Today’s 7.5 release marches forward with improvements in CMMN and BPMN modeling, migration of process instances, performance and user interface.

Oddly (to some), Camunda has included Case Management Model & Notation (CMMN) execution in their engine since version 7.2, but has only just added CMMN modeling in 7.5: previously, you would have used another CMMN-compliant modeler such as Trisotech’s, then imported the model into Camunda. Of course, that’s how modeling standards are supposed to work, but it’s a bit awkward. Their implementation of the CMMN modeler isn’t fully complete; they are still missing certain connector types and some of the properties required to link to the execution engine, so you might want to wait for the next version if you’re modeling executable CMMN. They’re seeing a pretty modest adoption rate for CMMN amongst their customers; the messaging from BPMS vendors in general is causing a lot of confusion, since some claim that CMMN isn’t necessary (“just use ad hoc tasks in BPMN”), others claim it’s required but have incomplete implementations, and some think that CMMN and BPMN should just be merged.

Camunda 7.5 element templates

On the BPMN modeling side, Camunda BPM 7.5 includes “element templates”, which are configurable BPMN elements to create additional functionality such as a “send email” activity. Although it looks like Camunda will only create a few of these as samples, this is really a framework for their customers who want to enable low-code development or encapsulate certain process-based functionality: a more technical developer creates a JSON file that acts as an extension to the modeler, plus a Java class to be invoked when it is executed; a less technical developer/analyst can then add an element of that type in the modeler and configure it using the properties without writing code. The examples I saw were for sending email and tweets from an activity; the JSON hooked the modeler so that if a standard BPMN Send Task was created, there was an option to make it an Email Task or Tweet Task, then specify the payload in the additional properties that appeared in the modeler (including passed variables). Although many vendors provide similar functionality by offering things such as Send Email tasks directly in their modeling palettes, this appears to be a more standards-based approach that also allows developers to create their own extensions to standard activities. A telco customer is using Camunda BPM to create their own customized environments for in-house citizen developers, and element templates can significantly add to that functionality.
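As a rough sketch of the runtime half of such a template, assuming the template’s JSON binds “to”, “subject” and “body” properties to process variables (the class and variable names are hypothetical, not Camunda’s samples):

```java
import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// Hypothetical Java class behind a "send email" element template.
// The template's JSON maps the properties a modeler fills in onto these variables.
public class SendEmailDelegate implements JavaDelegate {

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        String to = (String) execution.getVariable("to");
        String subject = (String) execution.getVariable("subject");
        String body = (String) execution.getVariable("body");

        // Placeholder for the real mail integration
        System.out.printf("Sending '%s' to %s: %s%n", subject, to, body);
    }
}
```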

Camunda 7.5 instance migration – manual mapping of unrecognized steps between models

The process instance migration feature, which is a plugin to the enterprise Cockpit administration module but also available to open source customers via the underlying REST and Java APIs, helps to solve the problem of what to do with long-running processes when the process model changes. A number of vendors have solutions for this, some semi-automated and some completely manual. Camunda’s take on it is to compare the existing process model (with live instances) against the new model, attempt to match current steps to new steps automatically, then allow any step to be manually remapped onto a different step in the new model. Once that migration plan is defined, all running instances can be migrated at once, or only a filtered (or hand-selected) subset. Assuming that the models are fairly similar, such as the addition or deletion of a few steps, this should work well; if there are a lot of topology changes, it might be more of a challenge since instance property values might need to be rolled back if instances are migrated to an earlier step in the process.
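For open source users driving this through the Java API rather than Cockpit, a migration plan can be assembled and executed roughly as follows; this is a hedged sketch, with placeholder process definition and activity ids:

```java
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.engine.migration.MigrationPlan;

public class MigrationExample {

    // sourceDefinitionId / targetDefinitionId are the old and new process definition ids;
    // "reviewOrder" / "approveOrder" stand in for a step that couldn't be matched automatically.
    public static void migrate(RuntimeService runtimeService,
                               String sourceDefinitionId, String targetDefinitionId) {

        MigrationPlan plan = runtimeService
            .createMigrationPlan(sourceDefinitionId, targetDefinitionId)
            .mapEqualActivities()                         // auto-match steps with identical ids
            .mapActivities("reviewOrder", "approveOrder") // manual remap for a changed step
            .build();

        runtimeService.newMigration(plan)
            .processInstanceQuery(runtimeService.createProcessInstanceQuery()
                .processDefinitionId(sourceDefinitionId)) // all running instances of the old model
            .executeAsync();                              // or execute() to run synchronously
    }
}
```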

They have also improved multi-tenancy capabilities for customers who use Camunda BPM as a component within a SaaS platform, primarily by adding tenant identifier fields to their database tables. If those customers’ customers – the SaaS users – log in to Cockpit or a similar admin UI, they will only see their own instances and related objects, without the developers having to create a custom restricted view of the database.
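As a minimal sketch of how the tenant identifier shows up in the Java API (the tenant name and resource are made up), deployments and queries can be scoped per tenant:

```java
import java.util.List;
import org.camunda.bpm.engine.RepositoryService;
import org.camunda.bpm.engine.TaskService;
import org.camunda.bpm.engine.task.Task;

public class TenantExample {

    // "customer-a" and order.bpmn are placeholder names for one SaaS tenant
    public static void deployAndQuery(RepositoryService repositoryService, TaskService taskService) {

        // The deployment is tagged with the tenant identifier
        repositoryService.createDeployment()
            .tenantId("customer-a")
            .addClasspathResource("order.bpmn")
            .deploy();

        // Queries can then be restricted to that tenant's data
        List<Task> tasks = taskService.createTaskQuery()
            .tenantIdIn("customer-a")
            .list();

        System.out.println("Open tasks for customer-a: " + tasks.size());
    }
}
```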

Camunda 7.5 process duration report

They’ve released a simple process instance duration report that provides a visual interface as well as downloadable data. There’s not a lot here, but I assume this means that they are starting to build out a more robust and accessible reporting platform to play catch-up with other vendors.

Camunda 7.5 external task pattern

Lastly, I saw their implementation of external task handling, another improvement based on customer requests. You can see more of the technical details here and here: instead of a system task calling an external task asynchronously and then waiting for a response, this creates a queue that an external service can poll for work. There are some advantages to this method of external task handling, including easier support for different environments: for example, it’s easier to call a Camunda REST API from a .NET client than to put a REST API on top of .NET, or to call a cloud-based Camunda server from behind a firewall than to let Camunda call through your firewall. It also provides isolation from any scaling issues of the external task handlers, and avoids service call timeouts.
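A minimal sketch of the polling pattern via the engine’s Java ExternalTaskService (a remote .NET or other client would make the equivalent REST calls); the topic name and worker id are illustrative:

```java
import java.util.List;
import org.camunda.bpm.engine.ExternalTaskService;
import org.camunda.bpm.engine.externaltask.LockedExternalTask;

public class ExternalWorkerExample {

    public static void pollOnce(ExternalTaskService externalTaskService) {

        List<LockedExternalTask> tasks = externalTaskService
            .fetchAndLock(10, "worker-1")    // fetch up to 10 items for this worker
            .topic("send-invoice", 60_000L)  // lock this topic's tasks for 60 seconds
            .execute();

        for (LockedExternalTask task : tasks) {
            // ... do the actual work in the external system here ...
            externalTaskService.complete(task.getId(), "worker-1");
        }
    }
}
```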

Camunda BPM 7.5

There’s a public webinar tomorrow (June 1) covering this release; you can register for the English one here (11am Eastern time) and the German one here (10am Central European time).

Bruce Silver Now Stylish With DMN As Well As BPMN

I thought that Bruce Silver’s blog had been quiet for a while: turns out that he moved to a new, more representative domain name, and my feed reader wasn’t updating from there. He’s rebranding his business, including his blog, under Method & Style, mirroring the title of his popular book and training BPMN Method and Style, and now his new book and training options for DMN: DMN Method and Style: The Practitioner’s Guide to Decision Modeling with Business Rules.

His blog has a ton of new content on DMN, starting with a great piece that compares the path of the DMN standard with that of BPMN, which is considerably more mature. He discusses the five key elements of DMN, then goes into each of those in detail in the next five posts: Decision Requirements Diagrams, Decision Tables, FEEL (a new expression language developed for DMN), Boxed Expressions and the Metamodel and Schema. It’s really interesting to read his analysis comparing the evolution of the two standards: there was a time when everyone thought that BPMN was just about the visual notation, but to make it really useful, the interchange format and execution semantics have to come along at some point. Still, it’s useful to get started in DMN now with DRDs and decision tables, since that at least makes the decision models explicit instead of being buried in text requirements.

Once you’ve brushed up on his posts covering the five key elements, you can also read about the conformance levels that vendors can choose to implement, and what didn’t make it into DMN 1.1, which is the first real version of the standard.

He doesn’t pull any punches in his discussion, and is not very complimentary about some aspects of the standard and how some vendors choose to implement it. Just as he is with BPMN. :)

bpmNEXT 2014 Thursday Session 1: Intelligence And A Bit More BPMN

Harsh Jegadeesan of SAP set the dress code bar high by kicking off the Thursday demos in a suit jacket, although I did see Thomas Volmering and Patrick Schmidt straightening his collar before the start. He also set a high bar for the day’s demos by showing how to illuminate business operations with intelligent process intelligence. He discussed a scenario of a logistics hub (such as Amazon’s), and the specific challenges of the hub operations manager who has to deal with inbound and outbound flights, and sorting all of the shipments between them throughout the day. Better visibility into the operations across multiple systems allows problems to be detected and resolved while they are still developing by reallocating the workforce. Harsh showed a HANA-based hub operations dashboard, where the milestones for shipments demarcate the phases of the value chain: from arrival to ground handling to warehouse to outbound buffer to loading and takeoff. Real-time information is pulled from each of the systems involved, and KPIs are displayed; drill-downs can show lower-level aggregate or even individual instance data to determine what is causing missed KPIs – in the demo, shipments from certain other hubs are not being unloaded quickly enough. But more than just a dashboard, this allows the hub operations manager to add a task directly in the context of the problem and assign it (via an @mention) to someone else, for example, to direct more trucks to unload the shipments. The dashboard can also make recommendations, such as changing the flights for specific shipments to improve the overall flow and KPIs. He showed a flight map view of all inbound and outbound flights, where the hub operations manager can click on a specific flight and see the related data. He showed the design environment for creating the intelligent business operations process by assembling SAP and non-SAP systems using BPMN, mapping events from those systems onto the value chain phases (using BPAF where available), thereby providing visibility into those systems from the dashboard; this builds a semantic data mart inside HANA for the different scenarios to support the dashboard but also for more in-depth analytics and optimization. They’ve also created a specification for Process Façade, an interface for unifying process platforms by integrating using BPMN, BPAF and other standards, plus their own process-based systems; at some point, expect this to open up for broader vendor use. Some nice case studies of process visualization in large-scale enterprises.

Dominic Greenwood of Whitestein on intelligent process execution, starting by defining an intelligent process: it has experiences (acquired data), knowledge (actionable information, or analytical interpretation of acquired data), goals (adoptable intentions, or operationally-relevant behavioral directives), plans (ways to achieve goals through reusable action sequences, such as BPMN processes) and actions (result of executing plans). He sees intelligent process execution as an imperative because of the complexity of real-world processes; processes need to dynamically adapt, and process governance needs to actively apply constraints in this shifting environment. An intelligent process controller, or reflective agent, passes through a continuous cycle of observe, comprehend, deliberate, decide, act and learn; it can also collaborate with other intelligent process controllers. He discussed a case study in transportation logistics – a massively complex version of the travelling salesman problem – where a network of multi-modal vehicles has to be optimized for delivery of goods that are moved through multiple legs to reach their destinations. This involves knowledge of the goods and their specific requirements, vehicle sensors of various types, fleet management, hub/port systems, traffic and weather, and personnel assignments. DHL in Europe is using this to manage 60,000 orders per day, allocated between 17,500 vehicles that are constantly in motion, managed by 300 dispatchers across 24 countries with every order changing at least once while en route. The intelligent process controllers are automating many of the dispatching decisions, providing a 25-30% operational efficiency boost and a 12% reduction in transportation costs. A too-short demo that just walked through their process model to show how some of these things are assigned, but an interesting look into intelligent processes, and a nice tie-in to Harsh’s demonstration immediately preceding.

Next up was Jakob Freund of camunda on BPMN everywhere; camunda provides an open-source BPM framework intended to be used by Java developers to incorporate process automation into their applications, but he’s here today to talk about bpmn.io: an open-source toolkit in Javascript that provides a framework for developers and a BPMN web modeler, all published on GitHub. The first iteration is kicking off next week, and the web modeler will be available later this year. Unlike yesterday’s demonstrators who firmly expressed the value of no-code BPM implementations, Jakob jumped straight into code to show how to use the Javascript classes to render BPMN XML as a graphical diagram and add annotations around the display of elements. He showed how these concepts are being used in their cockpit process monitoring product; it could also be used to demonstrate or teach BPMN, making use of functions such as process animation. He demonstrated uploading a BPMN diagram (as XML) to their camunda community site; the site uses the Javascript libraries to render the diagram, and allows selecting specific elements in the diagram and adding comments, which are then seen via a numeric indicator (indicating the number of comments) attached to the elements with comments. He demonstrated some of the starting functionality of the web modeler, but there’s a lot of work to do there still; once it’s released, any developer can download the code and embed that web modeler into their own applications.

We finished the first morning session with Keith Swenson of Fujitsu on letting go of control: we’re back on the topic of agents, which Keith initially defined as autonomous, goal-directed software that does something for you, before pointing out that that describes a lot of software today. He expanded that definition to mean something more…human-like: a personal assistant that can coordinate your communications with those of other domains. These types of agents do a lot of communication amongst themselves in a rules-based dynamic fashion, simplifying and reducing the communication that the people need to do in order to achieve their goals. The key to determining what the personal assistants should be doing is to observe emergent behavior through analytics. Keith demonstrated a healthcare scenario using Cognoscenti, an open-source adaptive case management project; a patient and several different clinicians could set goals, be assigned tasks, review documents and perform other activities centered around the patient’s care. It also allows the definition of personal assistants to do specific rules-based actions, such as cloning cases and synchronizing documents between federated environments (since both local and cloud environments may be used by different participants in the same case), accepting tasks, and more; copying between environments is essential so that each participant can have their information within their own domain of control, but with the ability to synchronize content and tasks. The personal assistants are pretty simple at this point, but the concept is that they are helping to coordinate communications, and the communications and documents are all distributed via encrypted channels, so they are safer than email. A lot of similarities with Dominic’s intelligent process controllers, but on a more human scale. As many thousands of these personal assistant interactions occur, patterns will begin to emerge of the process flows between the people involved, which can then be used to build more intelligence into the agents and the flows.