
Semantics of BPMN

Recently I have been involved with the standardization of BPMN. I find BPMN fascinating because it lies on the precise boundary between the IT world and the business world. Boundaries tend to be uncomfortable places to be, and BPMN is no exception.

My focus has been primarily on the semantics of BPMN, and there have been many issues concerning the semantics of its features. From a strictly programming-language point of view, BPMN seems to throw away most of the hard-learned lessons of the last 40 years of programming: it is unstructured, full of gotos, and has no objects in sight.

But that is to forget the true purpose of BPMN: it is not a programming language. It is not even a scripting language; it is a language to help humans manage their own activities. Even when BPMN is supported by automation, it is still primarily humans doing the work.

Anyway, the traditional approach to the semantics of languages like BPMN is Petri nets. The merit of Petri nets is that they are quite simple to understand and quite simple to implement.

However, BPMN has a number of features that are quite difficult to capture in Petri nets without getting quite twisted. These are mostly due to the rich variety of merge behaviors that BPMN supports.

A classic one is the pass-through merge, where two streams are merged into one and all the tokens are forwarded. Simple, but combined with parallelism it can get you into trouble very quickly:


[Figure: And-Deadlock-1]

This diagram shows an and-split forking into three parallel branches, two of which are joined by a pass-through merge, with the resulting pair joined back by an and-merge.

Interpreted as a Petri net, three tokens originate from the split, two of them are merged onto the same 'wire', and the and-merge collects tokens from its two wires. This results in a deadlock after the first successful firing of the and-merge, because there is still a token left hanging about that the and-merge will never collect.
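To make the deadlock concrete, here is a minimal token-game sketch of that interpretation in Python. The wire names and the `and_split` / `fire_pass_through` / `fire_and_merge` helpers are my own illustrative labels for this diagram, not anything taken from the BPMN or Petri net literature:

```python
from collections import Counter

# Wires: 'a' and 'b' feed the pass-through merge, 'c' goes straight to the
# and-merge, and 'm' carries the pass-through's output into the and-merge.
marking = Counter()

def and_split():
    # The and-split puts one token on each of its three outgoing wires.
    marking.update(["a", "b", "c"])

def fire_pass_through():
    # Pass-through merge: forward every token on either input wire onto 'm'.
    while marking["a"] > 0 or marking["b"] > 0:
        wire = "a" if marking["a"] > 0 else "b"
        marking[wire] -= 1
        marking["m"] += 1

def fire_and_merge():
    # And-merge: fires only while there is a token on *both* 'm' and 'c'.
    while marking["m"] > 0 and marking["c"] > 0:
        marking["m"] -= 1
        marking["c"] -= 1
        marking["done"] += 1

and_split()
fire_pass_through()
fire_and_merge()

# Prints one 'done' result plus one stranded token on 'm' that nothing
# can ever consume: the deadlock described above.
print(+marking)
```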

The instinctive reaction of an IT person would be to say "don't do that"; but we are expecting business analysts to use BPMN, not programmers. And this kind of situation can show up just as easily in a large 400+ node diagram as in a 3-node one.

One 'fix' is to allow annotations on the wires that go into an and-merge: specifically, how many tokens each wire is supposed to accept. In my view, that is even worse: there may be no way of setting that count reliably, and furthermore it is low-level hacking of the worst kind: programming by numbers.

Here is another approach: instead of using Petri nets, use a graph-rewriting semantics. In a graph-rewriting approach, nodes and wires come in three states: dormant, active, and expired. The BPMN diagram becomes the initial graph, which is rewritten until the entire graph has expired (or until only the end-event nodes remain active). This is how the pass-through is evaluated using graph rewriting:


[Figure: And-Graph-2]

The six stages show the evolution of the graph from a single active and-split to a single active and-merge. Green signifies active, and gray/purple signifies expired. (Normally you would remove expired entities, but I have kept them here for illustration.)

Essentially, the pass-through node becomes a kind of 'cloner', and the deadlock problem disappears! Neat!
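As a rough illustration of the idea, here is how one might sketch that rewriting in Python. All of the class names, the dormant/active/expired encoding, and the simplified join condition ("all incoming wires active") are my own assumptions for the sketch, not the definition being proposed for the standard:

```python
from dataclasses import dataclass, field

@dataclass
class Wire:
    state: str = "dormant"

@dataclass
class Node:
    kind: str                         # "and_split", "pass_through", "and_merge"
    state: str = "dormant"
    incoming: list = field(default_factory=list)
    outgoing: list = field(default_factory=list)

def rewrite(node):
    # One rewriting step: an active node activates its outgoing wires,
    # expires its incoming wires, and then expires itself.
    if node.state != "active":
        return
    for wire in node.outgoing:
        wire.state = "active"
    for wire in node.incoming:
        wire.state = "expired"
    node.state = "expired"

def wake(node):
    # Activate a dormant node once its input wires allow it.
    if node.state != "dormant":
        return
    if node.kind == "and_merge":
        # Join: wait until every incoming wire is active.
        if all(w.state == "active" for w in node.incoming):
            node.state = "active"
    else:
        # The pass-through (the 'cloner') wakes on any active input; when it
        # rewrites, it consumes all of its inputs at once, so no token is
        # ever left stranded.
        if any(w.state == "active" for w in node.incoming):
            node.state = "active"

# The three-branch example: and-split -> {w1, w2 into the pass-through,
# w3 straight to the and-merge}; pass-through -> w4 -> and-merge.
w1, w2, w3, w4 = Wire(), Wire(), Wire(), Wire()
merge = Node("and_merge", incoming=[w4, w3])
passthru = Node("pass_through", incoming=[w1, w2], outgoing=[w4])
split = Node("and_split", state="active", outgoing=[w1, w2, w3])

rewrite(split)      # split expires; w1, w2, w3 become active
wake(passthru)      # pass-through wakes on its active inputs
rewrite(passthru)   # ...expires them and activates w4
wake(merge)         # both of the merge's wires (w4, w3) are now active
print(split.state, passthru.state, merge.state)   # expired expired active
```

Under this reading, the second activation flowing into the pass-through is simply absorbed when the node rewrites, which is one way of seeing why the stranded-token deadlock cannot arise.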

I think that this may well end up being the way that BPMN's semantics is defined. Given the experience of the G-machine (used in combinator approaches to implementing functional programming languages), it is also not a bad way to implement BPMN automation.

Unless, of course, some problem shows up to make it unsuitable after all.
