BPM2023 Day 2 Keynote: AI in Processes

The second day of the main conference kicked off with a keynote by Marta Kwiatkowska, Professor of Computer Science at Oxford, on AI and machine learning in BPM. She started with some background on AI and deep learning, and linked this to automated process model discovery (process mining), simulation, what-if analysis, predictions and automated decisions. She posed the question of whether we should be worried about the safety of AI decisions, or whether we should at least advance the formal methods for provable guarantees in machine learning, along with the more challenging topic of formal verification for neural networks.

She has done significant research on robustness for neural networks and the development of provable guarantees, and offered some recent directions for applying this work in BPM. She showed the basics of calculating and applying robustness guarantees for image and video classification, and also for text classification/replacement. In the BPM world, she discussed using language-type prediction models for event logs, evaluating the robustness of decision functions to causal interventions, and the concept of reinforcement learning for teaching agents how to choose an action.
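To make the robustness idea a bit more concrete, here is a minimal sketch of an empirical robustness probe: sample random perturbations inside a small ball around an input and check whether the predicted class ever changes. The toy linear classifier and all of the names here are my own illustration, not her methods; the provable-guarantee techniques she described reason over the entire perturbation region rather than sampling from it.

```python
# Naive *empirical* robustness probe: sample random perturbations inside an
# L-infinity ball of radius epsilon around an input and check whether the
# predicted class changes. This only illustrates the notion of local robustness;
# provable guarantees require reasoning over the whole ball (e.g. bound
# propagation or constraint solving), not sampling.
import numpy as np

rng = np.random.default_rng(42)

# Toy linear "classifier" standing in for a neural network (hypothetical weights).
W = rng.normal(size=(3, 4))   # 3 classes, 4 input features
b = rng.normal(size=3)

def predict(x: np.ndarray) -> int:
    return int(np.argmax(W @ x + b))

def empirical_robustness(x: np.ndarray, epsilon: float, samples: int = 1000) -> bool:
    """Return True if no sampled perturbation within the epsilon ball flips the label."""
    base_label = predict(x)
    for _ in range(samples):
        delta = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + delta) != base_label:
            return False   # found a counterexample: not robust at this epsilon
    return True            # no counterexample found (still not a proof of robustness)

x0 = np.array([0.5, -1.2, 0.3, 0.8])
print(empirical_robustness(x0, epsilon=0.05))
```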

As expected, much of the application of AI to process execution is to the decisions within processes – automating decisions, or providing “next best action” recommendations to human actors at a particular process activity. Safety assurances and accountability/explainability are particularly important in these scenarios.

Given the popularity of AI in general, this was a very timely look at how it can be applied to BPM in ways that maintain robustness and correctness.

BPM2023 Day 1: RPA Forum

In the afternoon breakouts, I attended the RPA (robotic process automation) forum for three presentations.

The first presentation was “What Are You Gazing At? An Approach to Use Eye-tracking for Robotic Process Automation”, presented by Antonio Martínez-Rojas. RPA typically includes a training agent that captures what a human operator is typing, and where, based on UI logs, and uses that to create the script of actions to be executed when the task is automated by the RPA “bot” without the person being involved – a type of process mining, but based on UI event logs. In this presentation, we heard about using eye tracking — what the person is looking at and focusing on — during the training phase to understand where they are looking for information. This is especially interesting in less structured environments such as reading a letter or email, where the information may be buried in non-relevant text and it’s difficult to filter out the relevant information. Unlike the UI event log methods, this can find what the user is focusing on while they are working, which may not be the same things on the screen that they are clicking on – an important distinction.
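As a rough illustration of how gaze data could complement a UI event log (a sketch using hypothetical data structures, not the authors’ actual approach or tooling): for each recorded UI action, look up which screen regions the user fixated on in the seconds beforehand.

```python
# Combine a UI event log with eye-tracking fixations: for each UI action, find
# the screen regions (areas of interest) the user looked at shortly before,
# so training captures where the person was *looking*, not just where they clicked.
from dataclasses import dataclass

@dataclass
class UIEvent:
    timestamp: float      # seconds since start of recording
    action: str           # e.g. "click", "type"
    element: str          # UI element identifier

@dataclass
class Fixation:
    start: float
    end: float
    region: str           # screen region / area-of-interest label

def fixations_before(event: UIEvent, fixations: list[Fixation], window: float = 3.0) -> list[str]:
    """Regions the user fixated on within `window` seconds before the UI action."""
    return [f.region for f in fixations
            if f.end <= event.timestamp and f.end >= event.timestamp - window]

# Hypothetical example: the user reads the "email_body" region before typing into a form field.
ui_log = [UIEvent(12.4, "type", "form.invoice_number")]
gaze_log = [Fixation(9.8, 11.9, "email_body"), Fixation(2.0, 3.0, "toolbar")]
for ev in ui_log:
    print(ev.element, "<-", fixations_before(ev, gaze_log))
```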

The second presentation was “Accelerating The Support of Conversational Interfaces For RPAs Through APIs”, presented by Yara Rizk. She presented the problem that many business people could be better supported through easier access to all types of APIs, including unattended RPA bots, and proposed a chatbot interface to APIs. The training data for the chatbot can be extracted by automatically interrogating the OpenAPI specifications, with some optional addition of phrases from people, to create natural language sentences: the intent of each action is derived from the API endpoint name and description, plus sample sentences provided by people. The sentences are then analyzed and filtered, typically with some involvement from human experts, and used to train the intent recognition models required to drive a chatbot interface.
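To illustrate the general idea (my own simplified sketch, not the authors’ pipeline): walk an OpenAPI specification, turn each operation’s id, summary and description into candidate sentences, and fold in optional example phrases from people to seed an intent-recognition training set.

```python
# Derive candidate intent-training sentences from an OpenAPI spec. This is a
# deliberately simplified walk: it assumes each path item only contains HTTP
# methods, and it skips the analysis/filtering step done with human experts.
from typing import Dict, List, Optional

def intents_from_openapi(spec: Dict,
                         human_phrases: Optional[Dict[str, List[str]]] = None) -> Dict[str, List[str]]:
    human_phrases = human_phrases or {}
    intents: Dict[str, List[str]] = {}
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            intent = op.get("operationId", f"{method}_{path}")
            sentences = []
            if op.get("summary"):
                sentences.append(op["summary"])
            if op.get("description"):
                sentences.append(op["description"])
            # Fold in optional example phrases provided by people.
            sentences.extend(human_phrases.get(intent, []))
            intents[intent] = sentences
    return intents

# Hypothetical spec fragment for illustration.
spec = {"paths": {"/invoices/{id}": {"get": {
    "operationId": "getInvoice",
    "summary": "Retrieve an invoice by its identifier",
}}}}
print(intents_from_openapi(spec, {"getInvoice": ["show me invoice 123", "look up an invoice"]}))
```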

The last presentation in this session was “Migrating from RPA to Backend Automation: An Exploratory Study”, presented by Andre Strothmann. He discussed how RPA robots need to be designed and prioritized so that they can be easily replaced, with the goal of moving to back-end automation as soon as it is available. I’ve written and presented many times about how RPA is a bridging technology, and most of it will go away over a 5-10 year horizon, so I’m pretty happy to see this presented in a more rigorous way than my usual hand-waving. He discussed the analysis of their interview data, which resulted in some fundamental design requirements for RPA bots, design guidelines for the processes that orchestrate those bots, and migration considerations when moving from RPA bots to APIs. If you’re developing RPA bots now and understand that they are only a stopgap solution, you should be following this research.

BPM2023 Day 1: Design Patterns and Modeling

We’ve started the breakout paper presentations and I’m in the session on design patterns and modeling. For these breakouts, I’ll mostly just offer a few notes since it’s difficult to get an in-depth sense in such a short time. I’ll provide the paper and author names in case you want to investigate further. Note that some of the images that I include are screenshots from the conference proceedings: although the same information was shown in the presentations, the screenshots are much more legible than the photos that I took during the presentations.

The first paper is “Not Here, But There: Human Resource Allocation Patterns” (Kanika Goel, Tobias Fehrer, Maximilian Röglinger, and Moe Thandar Wynn), presented by Tobias Fehrer. Patterns help to document BPM best practices, and they are creating a set of patterns specifically for human resource allocation within processes. They did a broad literature review and analysis to distill out 15 patterns, then evaluated and refined these through interviews with process improvement specialists to determine usefulness and pervasiveness. The resulting patterns fall into five categories: capability (expertise, role, preference), utilization (workload, execution constraints), reorganization (empower individual workers to make decisions to avoid bottlenecks), productivity (efficiency/quality based on historical data), and collaboration (based on historical interactions within teams or with external resources). This is a really important topic in human tasks within processes: just giving the same work to the same person/role all the time isn’t necessarily the best way to go about it. Their paper summarizes the patterns and their usefulness and pervasiveness measures, and also considers human factors such as the well-being and “happiness” of the process participants, and identifying opportunities for upskilling. Although he said explicitly that this is intended for a priori process design, there’s likely knowledge that can also be applied to dynamic runtime resource allocation.

The second presentation was “POWL: Partially Ordered Workflow Language” (Humam Kourani and Sebastiaan van Zelst), presented by Humam Kourani. He introduced their new modeling language, POWL, that allows for better discovery and representation of partial orders, that is, where some activities have a strict order while others may happen in any order. This is fairly typical in semi-structured case management, where there can be a combination of sets of tasks that can be performed in any order plus some predefined process segments.
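For readers unfamiliar with partial orders, here is a tiny generic sketch (not POWL itself, and with my own activity names) of the core idea: only some pairs of activities carry an ordering constraint, and a trace is valid if it respects every one of them.

```python
# A trace is valid under a partial order if it respects every ordering
# constraint, i.e. it is a linear extension of the partial order; unconstrained
# activities can appear in any order.
def respects_partial_order(trace: list[str], order: set[tuple[str, str]]) -> bool:
    position = {activity: i for i, activity in enumerate(trace)}
    return all(position[a] < position[b] for a, b in order if a in position and b in position)

# "Register" must precede both "Assess" and "Notify"; "Assess" and "Notify" are unordered.
constraints = {("Register", "Assess"), ("Register", "Notify")}
print(respects_partial_order(["Register", "Notify", "Assess"], constraints))  # True
print(respects_partial_order(["Assess", "Register", "Notify"], constraints))  # False
```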

The third presentation was “Benevolent Business Processes – Design Guidelines Beyond Transactional Value” (Michael Rosemann, Wasana Bandara, Nadine Ostern, and Marleen Voss), presented by Michael Rosemann. Benevolent processes consider the needs of the customer as being as important as (or even more important than) the needs of the “provider”, that is, the organization that owns the process. BPM has historically been about improving efficiency, but many are looking at other metrics such as customer satisfaction. In my own writing and presentations, I make an explicit link between customer satisfaction and high-level revenue/cost metrics, and the concept of benevolent processes fits well with that. Benevolence goes beyond customer-centric process design to provide an immediate, unexpected and optional benefit to the recipient. A thought-provoking view on designing processes that will create fiercely loyal customers.

The final presentation in this session was “Action-Evolution Petri Nets: a Framework for Modeling and Solving Dynamic Task Assignment Problems” (Riccardo Lo Bianco, Remco Dijkman, Wim Nuijten, and Willem Van Jaarsveld), presented by Riccardo Lo Bianco. Petri nets have no mechanisms for calculating assignment decisions, so their work looks at how to model task assignment in a way that attempts to optimize that assignment. For example, if there are two types of tasks and two resources, where one resource can only perform one type of task and the other resource can perform either type, how is the work best assigned? A standard approach would just randomly assign tasks to resources, filtered by resource capability, but that may produce poor outcomes depending on the composition of the tasks waiting in the queue. They have developed and shared a framework for modeling and solving dynamic task assignment problems.
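To see why the naive policy can go wrong, here is a toy sketch of the two-task, two-resource example using my own greedy heuristic (an illustration of the dilemma, not the Action-Evolution Petri Net framework): randomly assigning among capable resources can hand the flexible resource a task that the specialist could have handled, leaving the other task stranded.

```python
# "specialist" can only do task type A; "generalist" can do A or B. A random
# capability-filtered policy may give the A-task to the generalist and leave the
# B-task unassignable this round; preferring the least flexible capable resource avoids that.
import random

CAPABILITIES = {"specialist": {"A"}, "generalist": {"A", "B"}}

def assign(queue: list[str], policy: str) -> dict[str, str]:
    free = set(CAPABILITIES)
    assigned = {}
    for task in queue:
        # Capable, still-free resources (iterated in dict order for determinism).
        capable = [r for r in CAPABILITIES if r in free and task in CAPABILITIES[r]]
        if not capable:
            continue
        if policy == "random":
            chosen = random.choice(capable)
        else:  # "least_flexible": prefer the resource with the fewest capabilities
            chosen = min(capable, key=lambda r: len(CAPABILITIES[r]))
        assigned[task] = chosen
        free.remove(chosen)
    return assigned

random.seed(1)
print(assign(["A", "B"], "random"))          # may give "A" to the generalist, leaving "B" unassigned
print(assign(["A", "B"], "least_flexible"))  # {'A': 'specialist', 'B': 'generalist'}
```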

Good start to the breakout sessions, and excellent insights on some difficult process modeling research problems.

BPM2023 Day 1 Keynote: Pfizer Vaccine Development Process Optimization

Following yesterday’s workshops, the main conference kicked off today with an introduction from the organizers, which included a diagram of the paper acceptance process that was mined from the actual activities in the paper submission and review platform — did anyone else find this funny? We’ve also moved from the beautiful, historic and non-air-conditioned university buildings to the artistic and somewhat cooler TivoliVredenburg concert hall.

We then had the opening keynote by Marc Kaptein, Medical Director at Pfizer. He was involved in vaccine development, among other activities at Pfizer, and provided some insights into how process optimization allowed them to manufacture billions of vaccine doses in record time. He is a medical doctor but also a Six Sigma black belt, which means he has a strong sense of how good processes matter.

He walked us through the COVID crisis from the initial thoughts that it could be isolated geographically (spoiler: it couldn’t), to the research that led to the development of mRNA vaccines. He mentioned a number of the international researchers in different organizations who contributed along the way, and gave a great explanation of how mRNA vaccines work. He credited the CEO of Pfizer with going all in with whatever funding was required to develop a vaccine — “If we don’t do it, who will?” — and for refusing US government funding since the concomitant oversight/interference would have slowed them down. (Moderna took the funding and ended up releasing later than Pfizer.)

Once they had done their initial trials and proved the efficacy of the vaccine, they were then faced with the manufacturing process problem: how to get from one to one billion in the shortest period of time? Even before they had trialed the vaccine, they were readying the production facilities to ramp up for this volume. They also invested in ensuring the supply of the lipids and mRNA, including ways of decreasing the time required to create mRNA. They also optimized the more mechanical aspects of production, such as speeding up the vial filling process, moving to 100% automated inspection, and improving the capping and labeling processes. They reduced their production maintenance cycle from a month to five days. They expanded their freezer farm facilities, and ended up building their own dry ice manufacturing facility since their suppliers couldn’t keep up with their demands.

All of these improvements led to almost 50% reduction in production cycle time from 110 days down to 60 days, which allowed them to produce an amazing 1.2 billion doses in 2021 and 2 billion in 2022. Kaptein pointed out that this won’t be the last pandemic, and these improvements to the process and production speed will serve everyone in the future.

Disclaimer: I’ve had five vaccinations including AstraZeneca, Pfizer and Moderna, so I’m completely non-partisan about big pharma. 🙂

BPM2023 Utrecht Workshop on Business Process Optimization

11 years ago, Marlon Dumas from Tartu University chaired the BPM2012 conference in Tallinn, Estonia, which I was able to attend. We had met at a previous conference (maybe Milan?), and since then I’ve worked with his company Apromore to create a white paper on process mining. Today, he gave a keynote at the workshop here at BPM2023 on the status and perspectives of business process optimization.

He started with the origins of process optimization from 20+ years ago: start with discovery to create the as-is model, then development of the to-be model, testing of the new model, and eventually deployment. Adding simulation to the loop allows some testing and predicted performance to feed back into the design prior to deployment. This type of process analysis suffered from some fundamentally flawed assumptions about the correctness of the process model and simulation parameters, the skills and behaviour of the process participants, and general resource management. Marlon (and many others) have endorsed these imperfect methods in the past, and he invited us to tear up his earlier book on BPM. 😆

Since then, he has been working on a better sort of simulation model based on discovery from event logs: think of it as using process mining as an automated generator for more complex simulation parameters rather than just the base process model. They have shared their work for other researchers to review and extend.
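As a very simplified illustration of what “discovering simulation parameters from an event log” can mean (a sketch with a hypothetical log layout, nowhere near the full parameter set their approach discovers): estimate case inter-arrival times and per-activity processing times that could then seed a simulation model.

```python
# Estimate two basic simulation parameters from a (hypothetical) event log:
# mean processing time per activity and mean case inter-arrival time.
# Real parameter discovery also covers resource calendars, queueing behaviour,
# branching probabilities, and more.
import pandas as pd

log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c2", "c2", "c3"],
    "activity": ["Register", "Approve", "Register", "Approve", "Register"],
    "start":    pd.to_datetime(["2023-09-11 09:00", "2023-09-11 09:30",
                                "2023-09-11 09:10", "2023-09-11 10:00",
                                "2023-09-11 09:40"]),
    "end":      pd.to_datetime(["2023-09-11 09:20", "2023-09-11 09:45",
                                "2023-09-11 09:35", "2023-09-11 10:20",
                                "2023-09-11 09:55"]),
})

# Mean processing time per activity (minutes).
durations = (log["end"] - log["start"]).dt.total_seconds() / 60
print(durations.groupby(log["activity"]).mean())

# Mean case inter-arrival time (minutes), based on each case's first event.
arrivals = log.groupby("case_id")["start"].min().sort_values()
print(arrivals.diff().dropna().dt.total_seconds().mean() / 60)
```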

This has opened the door to more automated process optimization techniques: search-based, which adds domain knowledge to the simulation model discovery to generate a set of possible process changes that can then be simulated and tested to determine the best improvement opportunities. Optimization, as he pointed out, is a multi-dimensional problem since we are always working towards the improvement of more than one performance indicator. Dimensions of improvement may include optimization of decision rules, flow, tasks and/or resources. They’ve done some additional work on an optimization engine that’s also shared on GitHub.
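Here is a minimal sketch of the search-based, multi-dimensional idea under my own heavy simplifications (the candidate changes, numbers and stub simulator are all made up): enumerate candidate process changes, simulate each one, and keep only the Pareto-optimal options over two competing KPIs.

```python
# Evaluate candidate process changes with a stub "simulation" and keep the
# Pareto front over two competing KPIs (cycle time vs. cost), since optimization
# here is multi-dimensional rather than a single score.
from typing import NamedTuple

class Candidate(NamedTuple):
    change: str
    cycle_time: float   # simulated average cycle time (hours)
    cost: float         # simulated cost per case

def simulate(change: str) -> Candidate:
    # Stub standing in for a real simulation run of the modified process model.
    results = {
        "as-is":            (48.0, 100.0),
        "add one reviewer": (36.0, 130.0),
        "automate check":   (30.0, 120.0),
        "batch approvals":  (40.0, 90.0),
    }
    ct, cost = results[change]
    return Candidate(change, ct, cost)

def pareto_front(candidates: list[Candidate]) -> list[Candidate]:
    def dominated(c: Candidate) -> bool:
        return any(o.cycle_time <= c.cycle_time and o.cost <= c.cost and o != c
                   for o in candidates)
    return [c for c in candidates if not dominated(c)]

candidates = [simulate(ch) for ch in ["as-is", "add one reviewer", "automate check", "batch approvals"]]
for c in pareto_front(candidates):
    print(c)   # keeps only the options that aren't beaten on both KPIs
```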

He moved on to talking about conversational process optimization, which makes search-based optimization just a step in a broader approach that puts a human expert in the loop to guide the exploration of the optimization space. In this approach, a conversational UI has an interactive discussion with a human expert, then combines that with the search-based optimization techniques, then presents that back to the expert for review and further conversation and optimization.

As the presentation finished and we were moving to questions, security kicked us out of our room for overcrowding, so we adjourned to the outdoor square. Lots of great discussion, with Marlon mentioning that the field of Operations Research is okay except that it’s the domain of a bunch of mathematicians, and urging us to cast off the shackles of process models. Also a good bit about the optimization of resource workload and allocation to maximize efficiency: people work best when they are “happy” (a proxy for “unstressed and productive”), which means having neither too much nor too little work, and the right mix of work. 

Marlon published his slide deck on Slideshare, which allowed me to steal a few screenshots rather than trying to photograph the live presentation.

BPM2023 Utrecht Workshop on BPM and Social Software

It’s been a minute! Last time that I attended the BPM academic research conference was in the “before times”, back in 2019 in Vienna. This week, I’m in Utrecht for this year’s version, and it’s lovely here – beautiful historic buildings and stroopwafels in the conference bag!

I’m starting with the workshop on BPM and social software. This was the first workshop that I attended on my first trip to the BPM conference in 2008 in Milan, also chaired by Rainer Schmidt.

All three of these presentations looked at different aspects of how traditional structured process modeling and orchestration fails at addressing end-to-end process management, and how social (including social media) constructs can help. In the first presentation, processes are too unstructured for (easy) automation; in the second, there’s a need for better ways to provide feedback from process participants to the design; and in the third, organizations can’t even figure out how to get started with BPM.

The first presentation in the workshop was by Joklan Imelda Camelia Goni, accompanied by her supervisor Amy van Looy, on “Towards a Measurement Instrument for Assessing Capabilities when Innovating Less-Structured Business Processes”. Some of the background research for this work was a Delphi study that I participated in during 2021-2022, so it was interesting to see how her research is advancing. There is a focus on capabilities within organizations: how capable are certain people or departments at determining the need for innovation and creating the innovation in (often manual) processes.

Next was Mehran Majidian Eidgahi on “Integrating Social Media and Business Process Management: Exploring the Role of AI Agents and the Benefits for Agility” (other paper contributors are Anne-Marie Barthe-Delanoë, Dominik Bork, Sina Namaki Araghi, Guillaume Mace-Ramete and Frédérick Bénaben). This looks at the problem of structured business process models that have been orchestrated/automated, but that need some degree of agility for process changes. He characterizes BPM agility as having three stages: discovering, deciding and implementing, and sees that much of the work has focused on discovery (process mining) and implementing, but not as much on deciding (that is, analysis or design). Socializing BPM with the participants can bring their ideas and feedback into the process design, and they propose a social BPM platform for providing reactions, feedback and suggestions on processes. I’ve seen structures similar to this in some commercial BPM products, but one of the main issues is that the actual executing model is not how the participants envision it: it may be much more event-driven than a traditional flow model. He presented some of their other research on bringing AI to the platform and framework, which provides a good overview of the different areas in which AI may be applied.

The last presentation in the workshop was by Sebastian Dunzer on “Design Principles for Using Business Process Management Systems” (other paper contributors Willi Tang, Nico Höchstädter, Sandra Zilker and Martin Matzner). He looks at the “pre-BPM” problem of how to help organizations understand how they could use BPM to improve their operations: in his words, “practice knows of BPM, but it remains unclear how to get started”. This resonates with me, since much of my consulting over the years has included some aspect of explaining that link between operational business problems and the available technologies. They did an issue-tracking project with a medium-sized company that allowed them to use practical applications and simultaneously provide research insights. Their research outcome was to generate design principles that link IT artifacts and users through functional relationships.

Many thanks to the conference chair, Hajo Reijers, for extending an invitation to me for the conference. I’ll be at more workshops later today, and the rest of the conference throughout the week.

150 episodes of the Process Pioneers podcast

The Process Pioneers podcast recently published their 150th episode, which is a significant milestone considering that most podcasts wither into inactivity pretty quickly. It’s hosted by Daniel Rayner, managing director of APAC for GBTEC, so I suppose that technically they sponsor it, but it’s really just a free-ranging discussion that doesn’t talk about their products (or at least didn’t when I was on it).

You can see my interview with Daniel on the podcast from last year; it was a lot of fun and we delved into some interesting points.

21st International BPM Conference in Utrecht – opportunities to attend, present or sponsor

Way back in 2008, I took a chance and attended the international academic research conference on business process management in Milan. I was hooked, as you might have gathered from the 1000s of words that I blogged that week. Since then, I’ve attended a few more: Ulm, Germany in 2009; Hoboken, US in 2010; Clermont-Ferrand, France in 2011 (where I had the honour of delivering a keynote); Tallinn, Estonia in 2012; and Vienna, Austria in 2019 (where I gave a talk at a workshop). They are always hosted by a university that has a BPM research program, and often the sessions are held in the university lecture rooms which gives it a more relaxed atmosphere than your usual industry conference.

I’m fascinated by the research topics, and one common theme of my blogging from these conferences is that software vendors need to send their product owners/developers here, both to hear about and present ideas on research in BPM and related fields. There are so many good ideas and smart people that you can’t help but come away having learned something of value. In 2010, the conference started to include an industry track to be more inclusive of people who were not in academia or research environments. At some point, they also started offering companies the opportunity to sponsor the conference: I believe that some vendors sponsored coffee breaks and meals, or had booths at parts of the event. It’s a good way to raise their profile with the attendees, which include not only academics but a lot of people from industry as well. And, as I’ve pointed out, it’s a great place for companies to meet promising young researchers who might be looking for a job in the future.

This year, the conference is in Utrecht, The Netherlands, on September 11-15, 2023. I’m hoping to attend after a three-year hiatus due to that pesky little virus; I did attend some of the sessions virtually in the past couple of years, but it’s just not the same as being there. If you want to submit a paper or give a presentation, you can see the important dates here – note that the abstracts for research papers are due next week, with other deadlines coming up shortly. If you just want to attend, they have an early bird registration price until July 18. If your company wants to sponsor the event in any way, there’s some information here along with contact information.

I’m really looking forward to getting back to this, and to other conferences this year, after dipping my toe back in the in-person conference pool with speaking slots last September (Hyland CommunityLIVE) and October (CamundaCon). I’ll also still be participating in virtual conferences, which allows me to attend more than I would normally have time or budget for, including speaking on a Voices in Tech panel next week. There is no question that the way we attend conferences has changed in the past three years. Some conferences are staying completely virtual, some are making a hard shift back to in-person only, while some are going the hybrid route. Meanwhile, companies that slashed their conference budget for attendees and sponsorships are reconsidering their spending in the light of increased attendance at in-person conferences. It’s going to take another year or two to see whether people will flock back to in-person conferences, or prefer to stick with the virtual style.

Voices in Tech panel

Edit with correction to above graphic: the panel is at 10am Eastern which is 3pm Central European time because we’re in that hellish period where North American clocks have moved forward but European ones haven’t.

I posted earlier this week on Mastodon that I’ve been taking a bit of a break but am now getting back to things, and one of the events on my upcoming agenda is presenting on a panel, Voices in Tech: Building Effective Automation Teams, hosted by Camunda and also sponsored by Infosys. This will take place online on March 15th, but you can head over to the link now and sign up. I will have the pleasure of reconnecting with co-panelists Uzma Khan of the Ontario Teachers’ Pension Plan, who I have known for many years, and Smriti Gupta of Infosys, who I shared the stage with at CamundaCon in Berlin last October. We will be joined by Ola Inozemtceva, a senior product marketing manager at Camunda, and the moderator will be Lana Ginns, product marketing manager at Camunda.

This is the 2023 version of the International Women’s Day panel that Camunda has been organizing for a few years now, and I really like that the focus is not on the fact that all of the panelists are women, but that we are “brilliant trailblazers in the tech world, who inspire people every day to redefine technology and how it can transform the world”. We’ll be discussing challenges and best practices with building high-performing orchestration teams, which ties in nicely with the series of video blogs that I’ve been doing lately for Trisotech on best practices in business automation.

I hope to see you there (virtually).