Planet BPM

February 03, 2017

Sandy Kemsley: AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and...

[Content summary only, click through for full article and links]

by sandy at February 03, 2017 01:15 PM

February 02, 2017

Drools & JBPM: AI Engineer - Entando are Hiring

Entando are looking to hire an AI Engineer, in Italy, to work closely with the Drools team building a next-generation platform for integrated and hybrid AI. Together we'll be looking at how we can build systems that leverage and integrate different AI paradigms for the contextual awareness domain, such as enhancing our complex event processing, building fuzzy/probability rules extensions, or looking at Case Based Learning/Reasoning to help with predictive behavioural automation.

The application link can be found here.

by Mark Proctor at February 02, 2017 08:41 PM

Drools & JBPM: Drools & jBPM are Hiring

The Drools and jBPM team are looking to hire. The role requires a generalist able to work with both front-end and back-end code. We need a flexible and dynamic person who is able to handle whatever is thrown at them and relishes the challenge of learning new things on the fly. Ideally, although not a requirement, you'll be able to show some contributions to open source projects. You'll work closely with some key customers, implementing their requirements in our open source products.

This is a remote role, and we can potentially hire in any country where there is a Red Hat office, although you may be expected to do very occasional travel to visit clients.

The application link for the role can be found here:


by Mark Proctor at February 02, 2017 08:22 PM

January 26, 2017

Keith Swenson: How do you want that Standard?

You know the old adage: if you want something real bad, you get it real bad. If you want something worse, you get it worse. This comes to mind when I think about the proposed DMN standard. Why? There is something about the process that technical standards go through…

I am writing again about the Decision Model and Notation (DMN) specification which is a promising approach to modeling decision logic.  It is designed by a committee.


A group of people comes together to accomplish a goal. We call it a committee. It most certainly is not an efficient machine for producing high-quality design ideas. Actually, it resembles a troupe of clowns doing slapstick more than a machine. I have participated in that; I know. There are many different agendas:

The True Believer: This is someone who really, really wants to make an excellent contribution to the industry. They work very, very hard. Unfortunately, they follow a course set by myths and legends of the last system they designed. They fear going down the wrong blind alley. They tend to zealously follow a new, innovative, and possibly untested, direction. The true believer spends a lot of time on Reddit.

The Gold Digger: This is a consultant who knows that complicated, complex documentation of any kind needs a host of experts who can help explain it to people. Like everyone, they fear ambiguity in the spec, but they also fear incompleteness and simplicity. Justified by an attempt to be complete, they tend to drive the spec to be endlessly long and complex and to include as many side topics as possible. The gold digger sticks to LinkedIn.

The Vendor Defender: The defender knows that the principal risk is that someone else will implement this before they do. Therefore they contribute copiously when the spec appears to be going in a way contrary to their existing technology investments, but sit back and relax when it appears that the committee is going nowhere. Their fear is that the spec will be finished before they have resources to implement it. They tend to quickly bring up all the (obscure) problems with the spec (particularly ones that conflict with their existing approach) but are slothful when it comes to finding solutions that they don’t already have. The defender watches MSNBC and CNN.

The Parade Master: This is a person who is primarily interested in the marketing value of having a well branded name, a logo, and the ability to claim support by many different products.  Their fear is that nobody will pay attention to the effort.  They tend to push the spec to be very easy to implement in superficial ways in order to claim support and to include all the proper buzz terms in all the right places.  You can find them on Twitter.

The Professor: This is a person from academia who is probably quite knowledgeable about all the existing approaches, even some from ancient history more than 5 years ago.  The professor typically proposes well thought out, consistent approaches without regard to pragmatic aspects of whether the average user can understand it or not.  Their fear is that this effort will needlessly duplicate an earlier one, or fail to leverage an earlier good work.  The professor, beyond blogging, has hacked Siri and Google Analytics together to bring them feeds from The Onion.

Levels Of Conformance

Different people with different agendas work together to make a document that leads the industry in a new direction. Some want something super complete and super detailed, some want everything that works and only things that work, and others want a minimal set just barely enough to glorify the claim to have implemented it. The solution is to allow for levels of compliance, and DMN is no exception. There are three levels of conformance:

  • Level 3 – implementations must conform to the visual notation guidelines of the spec, both for the overall picture (DRG) as well as for the parts that compose the overall graph (DRD).   There are requirements on the metadata of these parts.  And the decision models must be expressed in the FEEL expression language.
  • Level 2 – like above, but the actual expressions can be in a simplified language
  • Level 1 – like above, except that there is no requirement on how the conditions that you base the decision on are expressed.  The expressions do not need to be executable, and could in fact be arbitrary pseudo code that looks like a conditional expression but that “are not meant to be interpreted automatically.”

Level 1 compliance is essentially useless for designing a decision model that actually makes decisions for you.  Since the expressions can be literally anything, there is no possibility to design a model once and use it for anything other than printing it out and looking at it.  Clearly, vendors are making decision tables that work, but they each work differently, with completely different kinds of expressions and different interpretations.
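
To make the contrast concrete, here is a minimal sketch (plain Python, not DMN or FEEL syntax; all names are hypothetical) of a decision table whose conditions are executable. Because each condition is real code, any two evaluators must agree on the outcome, which is exactly what Level 1’s free-form expressions cannot guarantee.

```python
# Hypothetical sketch: a decision table as (condition, outcome) pairs,
# where each condition is executable code rather than free-form text.

def evaluate_decision_table(rules, context):
    """Return the outcome of the first rule whose condition matches."""
    for condition, outcome in rules:
        if condition(context):
            return outcome
    return None

# Example: a discount decision with a catch-all default rule.
discount_rules = [
    (lambda c: c["status"] == "gold" and c["order_total"] >= 1000, 0.15),
    (lambda c: c["status"] == "gold", 0.10),
    (lambda c: c["order_total"] >= 1000, 0.05),
    (lambda c: True, 0.0),  # default rule
]

print(evaluate_decision_table(discount_rules,
                              {"status": "gold", "order_total": 1200}))    # 0.15
print(evaluate_decision_table(discount_rules,
                              {"status": "silver", "order_total": 500}))   # 0.0
```

With free-form Level 1 expressions, two tools could render the same table yet disagree on which rule fires; with executable conditions, the semantics travel with the model.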

Even within the areas that are supposedly enforced, there are many optional aspects of the model.  There are diagrams that are listed with the caveat that it is only an example and many other examples would be possible — without stating how those might be constrained.  There are places which actually state that the design is implementation dependent.

This is quite convenient for vendors.  You can take almost any implementation of decision tables and claim Level 1 conformance, as long as you make the graphics conform to some fairly basic layout requirements.

What Does the Customer Want?

The purpose of a specification lies in the goals of the standard.  DMN lists these goals:

  • The primary goal of DMN is to provide a common notation that is readily understandable by all business users, ….   DMN creates a standardized bridge for the gap between the business decision design and decision implementation. DMN notation is designed to be useable alongside the standard BPMN business process notation.
  • Another goal is to ensure that decision models are interchangeable across organizations via an XML representation.

You want to be able to make decision models that can be created by one person and understood by another.  The decision logic written by one person must be unambiguous, it must be clear, and it must not be mistaken for meaning something else.  Level 1 conformance simply does not meet either goal to any degree.  The decision expressions can use any syntax, any vocabulary, and any semantics.  By way of analogy, it is a little bit like saying that a message from the designers can use any language (French, German, or Creole) just as long as it uses the Roman alphabet.  The fundamental thing about a decision is how you write the basic conditions.

Clearly, allowing any expression language — even ones that are not formalized — helps the vendors.  They all have different languages, and the spec does not require that they do anything about that.

It is similarly clear that if you take a model from a Level 1 tool and bring it to another tool, there is no guarantee that the second tool can read and display it.  Most of the tools require that the expressions be in their own expression language, so a model that is not in that language will most likely fail to be read.

What Do You Need?

If you are considering DMN as a user, consider what you need.  You are going to invest a lot of hours into learning the details of DMN.


by kswenson at January 26, 2017 06:09 AM

January 23, 2017

Sandy Kemsley: BPM skills in 2017–ask the experts!

Zbigniew Misiak over at BPM Tips decided to herd the cats, and asked a number of BPM experts on the skills that are required – and not relevant any more – as we move into 2017 and beyond. I was happy...

[Content summary only, click through for full article and links]

by sandy at January 23, 2017 01:20 PM

January 19, 2017

Sandy Kemsley: AIIM Toronto seminar: @jasonbero on Microsoft’s ECM

I’ve recently rejoined AIIM — I was a member years ago when I did a lot of document capture and workflow implementation projects, but drifted away as I became more focused on process...

[Content summary only, click through for full article and links]

by sandy at January 19, 2017 03:42 PM

January 18, 2017

Keith Swenson: DMN Technology Compatibility Kit (TCK)

A few months ago I wrote about the Decision Model and Notation standard effort. Since getting involved at that time, I am happy to report a lot of progress, but at the same time there is much further to go.

What is DMN?

Decision Model and Notation promises to be a standard way for business users to define complex decision logic so that other business users (that is non-programmers) can view and understand the logic, while at the same time the logic can be evaluated and used in process automation and other applications.

A decision table is an example of a way of expressing such logic that is both visually representable and executable. DMN takes decision tables to the next level. It allows you to build a graph (called a DRG) of elements, where each element can be a decision table or one of a number of other kinds of basic decision expression blocks. That very high-level, simplified view of DMN should be sufficient for this discussion.
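
The graph idea can be sketched in a few lines. This is a hypothetical illustration only (the names and structure are mine, not the DMN metamodel): each decision depends on input data and/or on the results of other decisions, and evaluating the graph means resolving those dependencies.

```python
# Hypothetical sketch of a decision requirements graph (DRG):
# each decision maps to (dependencies, function of a context dict).

def evaluate_drg(decisions, inputs):
    """Evaluate every decision, memoizing results in a shared context."""
    cache = dict(inputs)  # input data is available from the start

    def evaluate(name):
        if name not in cache:
            deps, fn = decisions[name]
            for dep in deps:          # resolve upstream decisions first
                evaluate(dep)
            cache[name] = fn(cache)
        return cache[name]

    for name in decisions:
        evaluate(name)
    return cache

decisions = {
    "Eligibility": (["Age"], lambda c: c["Age"] >= 18),
    "Offer": (["Eligibility"], lambda c: "standard" if c["Eligibility"] else "none"),
}
print(evaluate_drg(decisions, {"Age": 30})["Offer"])  # standard
```

In real DMN each node would be a decision table or another boxed expression, but the dependency-resolution shape is the same.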

Pipe Dream?

I have seen a lot of standards specs in my time. Most standards are documents that are drawn up by a group of technologists who have high hopes of solving an important problem. Most standards documents are not worth the paper they are printed on. The ones that don’t make it are quickly forgotten. The difference between the proposed standards that disappear (the pipe dreams) and those that survive has to do with adoption. Anyone can write a spec and propose a standard but only adopted standards matter.

I became convinced early last year that the time was right for something beyond decision tables, and DMN seemed to be drawing the right comments from the right people. However, I was shocked to find that nobody had actually implemented it. A couple of vendors claimed to implement it, but when I pressed further, I found that what they claimed to implement was a tiny fraction, and often that fraction had been done in an incompatible way. In other words, the vendor had something similar to DMN, and they were calling it DMN in order to get a free ride on the bandwagon.

Running Code

The problem with a specification that does not have running code is that the English-language text is subject to interpretation. Until implemented, the precise meaning of phrases in the spec cannot be known. I say: the code is 10 times more detailed than the spec can ever be; until you have the code you cannot be sure of the intent of the spec. Once code is written and running, you can compare implementations and sort out the differences.

What is a TCK?

What we need is running code. In the Java community since the 1990s, there have been groups that get together to build a Technology Compatibility Kit (TCK): parts of the implementation, or running technological pieces that help in making the implementations a reality. It is more than a spec. The TCK might include code that is part of the final implementation. Or it might be test cases that could be run. Or anything else beyond the spec text that helps implementers create a successful implementation.

At the 2016 bpmNEXT conference we decided to form a TCK for DMN. The goal is simple: DMN offers to be a standard way of expressing conditional logic, and we need to assure that that logic runs the same on every implementation. What we need then is simply a set of test cases: a sample DMN diagram, with some context data, and the expected results.

Let’s Collect some DMN Models

The DMN specification defines an XML based file format for a DMN diagram. Using this, you can write out a DMN diagram to a file, and read it back in again. All of the tags necessary are defined, along with specific name spaces. Each tag is clearly associated with an element of the DMN diagram. This part of the spec is quite clear. It really should be just a matter of contacting vendors with existing implementations, and asking them to send some example files.
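The round-trip the spec enables can be sketched as follows. This is a heavily simplified, hypothetical illustration: the namespace and tag names below are placeholders of mine, not the official DMN schema, but the write-then-read pattern is the same.

```python
import xml.etree.ElementTree as ET

# Illustrative placeholder namespace, NOT the official DMN namespace.
NS = "http://example.org/dmn-sketch"

def write_model(path, decision_name):
    """Serialize a trivial one-decision model to a namespaced XML file."""
    root = ET.Element(f"{{{NS}}}definitions")
    decision = ET.SubElement(root, f"{{{NS}}}decision",
                             {"id": "decision_1", "name": decision_name})
    ET.SubElement(decision, f"{{{NS}}}decisionTable")
    ET.ElementTree(root).write(path)

def read_decision_name(path):
    """Read the file back and recover the decision's name attribute."""
    root = ET.parse(path).getroot()
    return root.find(f"{{{NS}}}decision").get("name")

write_model("model.dmn", "Determine Discount")
print(read_decision_name("model.dmn"))  # Determine Discount
```

Because the real spec pins down every tag and namespace this precisely, interchange between tools should be mechanical; the surprise described below is that it was not.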

I was surprised to find that of the 16 vendors who claim DMN compatibility, essentially none of them could read and write the standard format.  Without the ability to transfer a model from one tool to another, there is no easy way to assure that separate implementations actually function the same way.  Reading and writing the standard file format is relegated in the spec to a Level 3 conformance requirement.  The committee does not provide DMN file examples aimed at assuring that import/export works consistently across implementations.

Trisotech was building a modeling tool that imported and exported the format, but they hope to leverage other implementations to evaluate the rules.  Bruce Silver, in his research for his book on DMN, had implemented his own evaluation engine to read and execute the format.  There was a challenge in May to encourage support of the format.  Open Rules, Camunda, and One Decision demonstrated or released converters.  Red Hat was committed to creating a DMN evaluation engine based directly on the standard file format and execution semantics.  It is all hampered because Level 1 compliance allows vendors to claim compatibility with virtually no assurance that the user’s efforts will be transferable elsewhere.

There is, however, a deep commitment in the DMN community to make the standard work.  From Bruce’s and Red Hat’s implementations we were able to assemble a set of test decision models with good coverage of the full DMN standard.

The Rest of the Test Case

The other thing we need is technically outside of the standard: a way to define a set of input data and expected results. The DMN standard defines the names and types of the data values that must be supplied to the model, but it is expected that each execution environment will get those values either from a running process or from another kind of application data store; where the data comes from is outside the scope of the DMN standard.

We decided to define a simple, transparent XML file structure. The file contains a set of tests. Each test contains a set of input data, named according to the requirements of the DMN model being tested. Each test also has a set of data values which are the expected outputs of the execution. We even defined how to compare floating point numbers, and to what precision a value must match.

Testing whether your implementation is correct becomes a very simple task. Regardless of the technology used to implement the DMN standard, one needs code that can read the test values from the input file, give them to the DMN model execution, take the results, and compare them to the expected results. If they match, you pass the test. It does not matter whether your implementation is in Java, C++, XSLT, C#, Microsoft BASIC, Perl, or whatever. Any language can read the test input values and compare the output to the expected results.
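
As a rough sketch of that loop, here is a toy runner. To be clear about the assumptions: the tags below (testCase/inputNode/resultNode/expected) are simplified stand-ins for the actual TCK schema, and `evaluate_model` is a stub standing in for whatever engine is under test; only the shape of the loop is the point.

```python
import math
import xml.etree.ElementTree as ET

# Simplified stand-in for a TCK test file (not the real schema).
TEST_FILE = """
<testCases>
  <testCase id="001">
    <inputNode name="Monthly Salary">10000</inputNode>
    <resultNode name="Yearly Salary"><expected>120000</expected></resultNode>
  </testCase>
</testCases>
"""

def evaluate_model(inputs):
    # Stub engine: a real runner would hand `inputs` to a DMN evaluation.
    return {"Yearly Salary": inputs["Monthly Salary"] * 12}

def run_tests(xml_text, precision=1e-9):
    """Read inputs, evaluate, and compare results within a float tolerance."""
    results = []
    for case in ET.fromstring(xml_text).iter("testCase"):
        inputs = {n.get("name"): float(n.text) for n in case.iter("inputNode")}
        actual = evaluate_model(inputs)
        ok = all(
            math.isclose(actual[r.get("name")],
                         float(r.find("expected").text),
                         rel_tol=precision)
            for r in case.iter("resultNode")
        )
        results.append((case.get("id"), ok))
    return results

print(run_tests(TEST_FILE))  # [('001', True)]
```

Swap the stub for a call into any engine, in any language, and the rest of the runner stays the same; that is what makes the test suite technology-neutral.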


A “runner” is needed to load the test into the engine and to evaluate the results.  Most vendors will need to implement their own runner according to their own technical needs.  The TCK only defines the file formats to be read for the test.  The TCK does make a Java-based runner available, also open source, but it is not necessary that any given implementation use that runner.


The status is today that we have:

  • A set of tests ready today
  • All available as open source
  • Tests touch upon a broad range of DMN requirements.
  • Each test is defined according to a specific capability mentioned in the DMN document.
  • Each test has a DMN model expressed in the file format defined by the standard.
  • Test input and expected values are in a file format that is simple to read.
  • Every test has been executed in two completely independent implementations of DMN: one written in Java, and the other written in XSLT.
  • The entire test suite is completely transparent: each file can be examined and reviewed by any member of the public by accessing them at the DMN-TCK GitHub site.

Over time we will improve these tests, and develop many more tests, to increase the coverage of DMN capability.  We hope to get contributions from more vendors who want to see DMN succeed.  Yet we already have a good, useful test set today.

If you are a consumer of decision logic, and you are thinking of purchasing an implementation of DMN, and you don’t want to be locked into a vendor-specific not-quite-standard implementation, you should ask your vendor whether they can run these tests. Or better yet, you can try running them yourself. You simply can’t have a serious implementation of DMN without demonstrating that it can run these fairly straightforward DMN TCK tests.  If the tests don’t run, ask your vendor why.  Do you feel comfortable with the answer?


The success of DMN depends upon getting implementations that run the same way. Talking about DMN will never assure they run the same way. Advertisements and brochures do not assure that your investment in a DMN model will be usable anywhere else. The only way to assure this is to have a common core of tests that can quickly and easily demonstrate that implementations work and get the same results. That is what you want in any decision logic: the same results for the same inputs, every time. Ask your vendor if they can run the DMN-TCK tests.


No effort like this succeeds without a lot of dedication and long hours by key team members. Eight people have contributed to this TCK, but I want to highlight two in particular. Edson Tirelli, technical lead for the Red Hat DMN project, was tireless in his thorough examination of the specification and its implementation in Java. Bruce Silver has also been a monumental motivation for the TCK, and made a separate implementation in XSLT. Working through all the differences between these two implementations, and coming to a common understanding of all the points of the spec, gives us all confidence that the existing tests are robust and accurate.

by kswenson at January 18, 2017 07:44 PM

January 17, 2017

Thomas Allweyer: A study worth reading on process management and digital transformation

What role does process management play in the digital transformation of companies? Researchers at the Zürcher Hochschule für Angewandte Wissenschaften (ZHAW) examined this question in this year's edition of their study on business process management. Although it is repeatedly stressed that successful digitalization projects cannot work without business processes optimally aligned with them, most process management activities still seem to be concerned with the efficiency of internal processes rather than, say, the customer experience. And indeed, a very large share of the study's respondents name efficiency as a goal of process management. The most important motivation, however, is achieving a high degree of transparency. Customer satisfaction is also gaining importance as a process management goal; it is now prioritized about as highly as efficiency.

The authors of the study find that the transparency gained through process management is indeed used to identify digitalization potential for customer interactions and weakly structured processes, but often not systematically. For example, process models are usually not linked to the customer journeys used in digitalization initiatives. There is therefore a risk that newly developed front-end solutions will not be integrated with the back-end systems in the form of end-to-end processes, creating new silos. Many companies are also still hesitant about making their processes more flexible: the topic of "adaptive case management" continues to lead a shadow existence. The picture is similar for the use of customer data: only rarely is this data used to optimize and design processes in a customer-oriented way.

In addition to the online survey, five case studies were examined in a workshop and are presented in detail in the study report. They come from different industries such as vehicle leasing, insurance, public administration, and telecommunications. Some of them genuinely involve the end-to-end digitalization of customer interactions and changes to business models. One example is a vehicle leasing provider whose sales had so far gone mainly through car dealers; in the future, the entire transaction can also be completed online, with the lessee identifying themselves via video. A project of the Canton of Zurich allows citizens to handle all the administrative interactions associated with a move completely electronically. Other projects deal with more conventional process automation, for example for service management. In many cases, internal process improvements are needed first as a prerequisite for digitalizing the customer-facing processes.

In their conclusion, the authors of the study argue that process management must engage more with the methods and tools of other management disciplines, such as innovation management, enterprise architecture management, knowledge management, and customer experience management. Then the opportunities and limits of process digitalization could be explored more effectively.

Download the study

by Thomas Allweyer at January 17, 2017 10:18 AM

Bernd Rücker: Orchestration of microservices, and the JAX

In my view, 2016 was the year the idea of microservices finally broke through. The topic is extremely present and will not disappear by being ignored. We ourselves published an article on "BPM + Microservices" in Java Magazin back in 2015: how can order be brought to a pile of (micro)services? These thoughts were also worked up in the whitepaper BPM & Microservices. Quite some time has passed since then, and there have been many discussions about it. We have learned a lot; personally, for example, I would now speak of "orchestration" rather than "BPM". I will present this, and many other current experiences from practice, in my …

by Bernd Rücker at January 17, 2017 08:22 AM

January 13, 2017

Sandy Kemsley: BPM books for your reading list

I noticed that Zbigniew’s reading list of BPM books for 2017 included both of the books where I have author credit on Amazon: Social BPM, and Best Practices for Knowledge Workers. You can find the...

[Content summary only, click through for full article and links]

by sandy at January 13, 2017 04:20 PM

January 08, 2017

Drools & JBPM: DMN runtime example with Drools

As announced last year, Drools 7.0 will have full runtime support for DMN models at compliance level 3.

The runtime implementation is, at the time of this blog post, feature complete, and the team is now working on nice-to-have improvements, bug fixes, and user friendliness.

Unfortunately, we will not have full authoring capabilities in time for the 7.0 release, but we are working on it for the future. The great thing about standards, though, is that there is no vendor lock-in. Any tool that supports the standard can be used to produce the models that can be executed using the Drools runtime engine. One company that has a nice DMN modeller is Trisotech, and their tools work perfectly with the Drools runtime.

Another great resource about DMN is Bruce Silver's website Method & Style. In particular I highly recommend his book for anyone that wishes to learn more about DMN.

Anyway, I would like to give users a little taste of what is coming and show one example of a DMN model and how it can be executed using Drools.

The Decision Management Community website periodically publishes challenges for anyone interested in trying to provide a solution for simple decision problems. This example is my solution to their challenge from October/2016.

Here are the links to the relevant files:

* Solution explanation and documentation
* DMN source file
* Example code to execute the example

I am also reproducing a few of the diagrams below, but take a look at the PDF for the complete solution and the documentation.

Happy Drooling!

by Edson Tirelli at January 08, 2017 10:25 PM

January 06, 2017

Thomas Allweyer: How do you develop process maps?

Process maps are a frequently used tool for structuring the activities of a company. But how do you develop a process map? And what makes a good process map? These questions are addressed in the article "Prozesslandkarten entwickeln – Vorgehen, Qualitätskriterien und Nutzen" (Developing process maps: approach, quality criteria, and benefits) by Appelfeller, Boentert, and Laumann, published in the journal Führung und Organisation (zfo), issue 6/2016.

The literature contains a total of five idealized approaches to developing a process map:

  1. Deriving the processes from the goals of the organization (goal-based approach)
  2. Assembling the processes from individual activities (activity-based approach)
  3. Deriving the processes from the objects handled in the organization (object-based approach)
  4. Deriving the process map from the map of another existing or idealized organization (reference-model-based approach)
  5. Decomposing the company's functions into sub-functions and assembling them into processes (function-based approach)

In practice, several of these approaches are usually combined. The authors illustrate one possible procedure using the process map of the Fachhochschule Münster as an example.

Finally, a number of quality criteria are discussed. A process map should convey the cross-departmental process mindset and support the strategic orientation of the organization. Further criteria concern, among other things, appropriate naming of the processes, a systematic structure, and suitability for the various user groups.


by Thomas Allweyer at January 06, 2017 09:21 AM

January 02, 2017

Jakob Freund: Camunda in 2016 and 2017

Camunda has had an outstanding 2016:

Tremendous Growth

More than 120 customers are now using Camunda BPM Enterprise which allowed us to grow our annual revenue by an incredible 82%.

Our revenue stream is subscription based, and more than 98% of our customers decided to renew their subscription (some of them entering their fourth year of subscription). Since we are not talking about SaaS here, but rather the enterprise subscription for an open source software product, this number speaks to the actual value of our enterprise services such as support and maintenance.

Spreading world-wide

Our customers as well as our 50 system integration partners are …

by Jakob Freund at January 02, 2017 08:07 AM

December 28, 2016

Thomas Allweyer: Flexible case management systems still see little use

With their model-based approach, business process management systems (BPMS) are, in the Gartner Group's opinion, well suited as a basis for case management frameworks (CMF) supporting weakly structured, knowledge-intensive processes. The analysts accordingly devote one of their "Magic Quadrant" reports specifically to BPM-platform-based CMFs. With their case management modules, the BPMS vendors compete, among others, with vendors of enterprise content management (ECM) and customer relationship management (CRM) systems, who have likewise enriched their products with case management functionality.

In addition, established standard software exists for many concrete application areas. The advantage of BPMS-based CMFs is above all their considerably higher flexibility and adaptability. Prefabricated templates for particular industries and use cases are also often offered for these platforms.

The adoption of case management frameworks is still comparatively low. The study's authors estimate that so far fewer than 20% of the companies that could use this technology actually do so, and this will change only slowly over the next few years.

In recent years there has been much discussion of adaptive case management (ACM), in which workers develop and change the flow of work while handling a case. According to Gartner, this is so far more hype than reality. With most vendors, the options for adaptation at runtime are still largely limited to choices fixed at design time. Demand for comprehensive adaptability has also remained limited so far.

Download the report from Appian (registration required)

by Thomas Allweyer at December 28, 2016 02:42 PM

December 22, 2016

Sandy Kemsley: RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and...

[Content summary only, click through for full article and links]

by sandy at December 22, 2016 09:23 PM

December 17, 2016

Drools & JBPM: Introducing Drools Fiddle

Drools Fiddle is the fiddle for Drools. Like many other fiddle tools, Drools Fiddle allows both technical and business users to play around with Drools and aims at making Drools accessible to everyone. 

The entry point to Drools Fiddle is the DRL editor (top left panel), which allows you to define and implement both fact models and business rules using the Drools Rule Language. Once the rules are defined, they can be compiled into a KieBase by clicking the Build button.

If the KieBase is successfully built, the visualization panel on the right will visualize the fact types as well as the rules as graph nodes. For instance, this DRL will be displayed as follows:

declare MyFactType
    value : int
end

rule "MyRule"
when
    f : MyFactType(value == 42)
then
    modify( f ) { setValue( 41 ) }
end


All the actions that are performed on the working memory will be represented by arrows in this graph. The purpose of the User icon is to identify all the actions performed directly by the user. 

For example, let's see how we can dynamically insert fact instances into the working memory. After the KieBase compilation, the Drools Facts tab is displayed on the left:

This form allows you to create instances of the fact types previously declared in the DRL. For each instance inserted into the working memory, a blue node is displayed in the Visualization tab. The arrow coming from the User icon shows that this action was performed manually by the user.

Once your working memory is ready, you can trigger the fireAllRules method by clicking on the Fire button. As a result, all the events occurring in the engine (rule matching; fact insertion, update and deletion) are displayed in the Visualization tab.
In the above example, we can see that the fact inserted by the user in step 1 triggered the rule "MyRule", which in turn modified the value of the fact from 42 to 41.

Some additional features have been implemented in order to enhance the user experience: 
  • Step by step debugging of the engine events.
  • Persistence: the Save button associates a unique URI to a DRL snippet in order to share it with the community, e.g.:
So far, only a minimal set of features has been implemented to showcase the Drools Fiddle concept, but there are still a lot of exciting features in the pipeline:

  • Multi tabbed DRL editor
  • Decision table support
  • Sequence diagram representation of rule engine events
  • Fact history visualization
  • Improvement of log events visualization
  • KieSession persistence to resume stateful sessions
  • Integration within Drools Workbench
The source code of Drools Fiddle is available on GitHub under the Apache v2 License, and you can access the application online. Should you wish to contribute, pull requests are welcome ;)

We would love to have the feedback of the Drools community in order to improve the fiddle and make it evolve in the right direction.

by Julien Vipret & Matteo Casalino

by Julien VIPRET ( at December 17, 2016 01:35 PM

December 12, 2016

Thomas Allweyer: BPMN practice handbook extended with CMMN and DMN

The Praxishandbuch BPMN by camunda founders Jakob Freund and Bernd Rücker has recently appeared in its fifth edition. The main additions are compact descriptions of the two newer standards from the BPMN family: "Case Management Model and Notation" (CMMN) for describing weakly structured, flexible case handling, and "Decision Model and Notation" (DMN) for modeling and specifying decision logic. The book describes not only the standards and their notation elements, but also how the three notations can sensibly interact: highly structured BPMN processes and flexible case work described in CMMN can trigger each other, and wherever more complex decisions arise, both BPMN and CMMN models can usefully reference DMN decision diagrams and tables.

The chapter on automation has also been fundamentally reworked, now covering case management and the execution of decision logic as well, along with newer practical tips drawn from the experience of numerous process automation projects. In return, the extensive XML examples of earlier editions have been removed, since they were hardly ever read. Otherwise the structure of the book is unchanged: the reader still gets a comprehensive introduction to BPMN and the camunda method framework.

Freund, J.; Rücker, B.:
Praxishandbuch BPMN – Mit Einführung in CMMN und DMN
5. Auflage, Hanser 2016.
The book at amazon.

by Thomas Allweyer at December 12, 2016 09:13 AM

December 07, 2016

Sandy Kemsley: TechnicityTO 2016: Challenges, Opportunities and Change Agents

The day at Technicity 2016 finished up with two panels: the first on challenges and opportunities, and the second on digital change agents. The challenges and opportunities panel, moderated...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 08:35 PM

Sandy Kemsley: TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of Open...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 07:04 PM

Sandy Kemsley: TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 05:32 PM

Sandy Kemsley: Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 02:36 PM

December 05, 2016

December 01, 2016

Sandy Kemsley: What’s on your agenda for 2017? Some BPM conferences to consider

I just saw a call for papers for a conference for next October, and went through to do a quick update of my BPM Event Calendar. I certainly don’t attend all of these events, but like to keep track of...

[Content summary only, click through for full article and links]

by sandy at December 01, 2016 05:31 PM

November 24, 2016

Camunda BPM 7.6 Roadshow

Berlin 16.01. | Hamburg 17.01. | Düsseldorf 18.01. | Stuttgart 19.01. | München 20.01. | Zürich 24.01. | Wien 25.01.

The most important news about Camunda BPM 7.6

The Camunda BPM 7.6 roadshow will stop in a total of 7 cities in January 2017. Each event runs from 9 a.m. to 12 noon and is free of charge.

At this event you will learn all about Camunda BPM, the new features in version 7.6, and much more.

Camunda co-founder Bernd Rücker and other Camunda contacts will be on site, and we look forward to seeing you again or getting to know you in person.

Please note: participation is free, but places are limited. See you soon!

Dates, agenda and registration:
  • 16.01., 9–12, Berlin: register for free now
  • 17.01., 9–12, Hamburg: register for free now
  • 18.01., 9–12, Düsseldorf: register for free now
  • 19.01. …

by Jakob Freund at November 24, 2016 09:45 AM

November 18, 2016

Sandy Kemsley: Intelligent Capture enables Digital Transformation at #ABBYYSummit16

I’ve been in beautiful San Diego for the past couple of days at the ABBYY Technology Summit, where I gave the keynote yesterday on why intelligent capture (including recognition technologies and...

[Content summary only, click through for full article and links]

by sandy at November 18, 2016 03:04 PM

November 11, 2016

Drools & JBPM: Red Hat BRMS and BPMS Roadmap Presentation (Nov 22nd, London)

Original Link :

Featuring Drools, jBPM, OptaPlanner, DashBuilder, UberFire and Errai
For our second JBUG this November we're delighted to welcome back Red Hat Platform Architect Mark Proctor, who will be part of a panel of speakers presenting roadmap talks on each component technology.
We're fortunate to have so many project leads in one room at the same time, and it's a fantastic opportunity to come along and ask questions about the future plans for BRMS and BPMS.
The talk will look at how the 7 series is shifting gears, presenting a vision for low-code application development in the cloud, with a much stronger focus on quality and maturity than in previous releases.
Key topics will include:
  • The new Rich Client Platform
  • The new BPMN2 Designer
  • New Case Management and Modelling
  • Improved Advanced Decision Tables and new Decision Model and Notation (DMN)
  • Improved Forms and Page building
  • Fully integrated DashBuilder reporting
  • New OptaPlanner features & performance improvements
There will be opportunities for questions and the chance to network with the team over a beer and slice of pizza.
Attendees must register at the Skills Matter website prior to the meet-up. Please – only register if you intend to come along. Follow this link to register:
18:30 – 18:45     Meet up at Skills Matter with a beer at the bar
18:45 – 19:45     Part One
19:45 – 20:00     Refreshment break
20:00 – 20:30     Part Two
20:30                    Pizza, beer and networking
Mark Proctor
Mark is a Red Hat Platform Architect and co-creator of the Drools project - the leading Java Open Source rules system. In 2005 Mark joined JBoss as lead of the Drools project. In 2006, when Red Hat acquired JBoss, Mark’s role evolved into his current position as platform architect for the Red Hat JBoss BRMS (Business Rules Management System) and BPMS (Business Process Management System) platforms - which incorporate the Drools and jBPM projects.
Kris Verlaenen
Kris is the JBoss BPM project lead, and is interested in pretty much everything related to business process management. He is particularly fascinated by healthcare - an area that has already demonstrated the need for flexible business processes.
Geoffrey De Smet
Geoffrey is the founder and project lead of OptaPlanner, the leading open source constraint satisfaction solver in Java. He started coding Java in 1999, regularly participates in academic competitions, and enjoys assisting developers in optimizing challenging planning problems of real-world enterprises. He is also a contributor to a variety of other open source projects.
Mauricio Salatino
Mauricio Salatino is a Drools/jBPM Senior Software Engineer in Red Hat, and author of the jBPM5 and jBPM Developer Guide, and the Drools 6 Developer Guide. His main task right now is to develop the next generation cloud capability for the BRMS and BPMS platforms - which includes the Drools and jBPM technologies.
Max Barkley
Max is a Software Engineer at Red Hat and the Errai project lead. Joining Red Hat as an intern in 2013, he took on his current role after graduating H.B.Sc. Mathematics from the University of Toronto in 2015.

by Mark Proctor ( at November 11, 2016 02:17 AM

November 10, 2016

Thomas Allweyer: Paper for download: Now the robots are coming to automate the processes

In the past, the topic of process automation was inseparably linked with workflow or business process management systems (BPMS). More recently, however, a new approach has been attracting attention: Robotic Process Automation (RPA). Instead of extensive automation projects, software robots are installed that simply use the existing user interfaces and therefore require no deeper integration. Case studies report immense savings, even compared with equivalent integration projects based on conventional BPM systems. Reason enough to take a closer look at RPA.

My paper "Robotic Process Automation – Neue Perspektiven für die Prozessautomatisierung" examines the RPA approach. It explains the typical characteristics of RPA systems, identifies possible areas of application, works out the potential benefits, and distinguishes RPA from other kinds of systems. One important point is the expected impact on employees and jobs. Finally, it offers a summary assessment and discusses possible further developments.

Download: Robotic Process Automation – Neue Perspektiven für die Prozessautomatisierung.

by Thomas Allweyer at November 10, 2016 11:43 AM

October 31, 2016

Drools & JBPM: Drools 7 to support DMN (Decision Model and Notation)

The Decision Model and Notation (DMN) specification is a relatively new standard from the OMG (Object Management Group) that aims to do for business rules and business decisions what BPMN (its sibling specification) did for business processes: standardize the notation and execution semantics to enable both use by business users and the interchange of models between tools from different vendors.

The Drools team has been actively following the specification and the direction it is taking. The team believes that, in keeping with its long-standing commitment to open standards, it is now time to support the specification and provide a compliant implementation for the benefit of its users.

The specification defines among other things:

  1. an expression language called FEEL used to express constraints and decisions
  2. a graphical language to model decision requirements
  3. a metamodel and runtime semantics for decision models
  4. an XML-based interchange format for decision models
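To give a flavor of item 1 above, here is a hedged illustration of a FEEL decision expression, based on the DMN specification rather than on the Drools PoC; the identifiers `applicant age` and `annual income` are made up for the example, and note that FEEL identifiers may legally contain spaces:

```
if applicant age >= 18 and annual income > 30000
then "approved"
else "referred"
```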

As part of the investigation, the Drools team implemented a PoC that is now public and available here. The PoC already covers:

  • a complete, compliance level 3, FEEL language implementation
  • complete support for the XML-based interchange format for marshalling and unmarshalling
  • a partial implementation of the metamodel and runtime semantics

We expect to have a complete runtime implementation released with Drools 7.0 (expected for Q1/2017).

On a related note, this is also a great opportunity for community involvement. This being a standard implementation, and relatively isolated from other existing components, it is the perfect chance for any community member that wishes to get involved with Drools and open source development to get his/her hands dirty and help bring this specification to life. Contact me on the Drools mailing list or on IRC if you would like to help.

We will publish several blog posts on this subject over the next few weeks, with both general explanations of the specification and details of our plans and our implementation. Below you can find a quick Q&A. Feel free to ask any additional questions you might have on the mailing list.

Happy Drooling!

Questions & Answers

1. What DMN version and what compliance level will Drools support?

Drools is implementing DMN version 1.1 support at compliance level 3.

2. Is DMN support integrated with the Drools platform?

Yes, the DMN implementation leverages the whole Drools platform (including, among other things, the deployment model, infrastructure and tooling). DMN models are a first class citizen in the platform and an additional asset that can be included in kjars. DMN models will be supported in the kie-server and decision services exposed via the usual kie-server interfaces.

3. Is Drools DMN integrated with jBPM BPMN?

At the time of this announcement the integration is not implemented yet, but we expect it to be fully functional by the time Drools and jBPM 7.0 are released (Q1 2017).

4. Will FEEL be a supported dialect for DRL rules? 

At the moment this is not clear and requires additional research. While FEEL works well as part of the XML-based interchange format, its syntax (which allows spaces and special characters in identifiers) is ambiguous and cannot be easily embedded into another language like DRL. We will discuss this topic further in the upcoming months.

by Edson Tirelli ( at October 31, 2016 07:55 PM

BPM Day at WJAX

Next week it's that time again: WJAX in Munich opens its doors. On Wednesday, 09.11., there will again be a BPM Day with an exciting program. First Kai Jamella will talk about BPM and microservices. Then I will give an introduction to workflow with BPMN and to case management with CMMN, and devote a separate interactive talk to DMN. These are followed by experience reports from Wolfgang Strunk (Sixt Leasing) as well as Ringo Roidl and David Ibl (both Lebensversicherung von 1871 a. G. München (LV 1871)). See you there!

by Bernd Rücker at October 31, 2016 07:53 AM

October 28, 2016

Sandy Kemsley: Keynoting at @ABBYY_USA Technology Summit

I’ve been in the BPM field since long before it was called BPM, starting with imaging and workflow projects back in the early 1990s. Although my main focus is on process now (hence the name of my...

[Content summary only, click through for full article and links]

by sandy at October 28, 2016 05:12 PM

Sandy Kemsley: Strategy to execution – and back: it’s all about alignment

I recently wrote a paper sponsored by Software AG called Strategy To Execution – And Back, which you can find here (registration required). From the introduction: When planning for business success,...

[Content summary only, click through for full article and links]

by sandy at October 28, 2016 12:00 PM

October 25, 2016

Drools & JBPM: Drools 6.5.0.Final is available

The latest and greatest Drools 6.5.0.Final release is now available for download.

This is an incremental release on top of our previous build that focuses on a few key improvements to round out the 6.x series.

You can find more details, downloads and documentation here:

Read below some of the highlights of the release.

You can also check the new releases for:
Happy drooling.

What's new?

Core Engine

Configurable ThreadFactory 

Some runtime environments (such as Google App Engine) don't allow you to create new Threads directly. For this reason it is now possible to plug in your own ThreadFactory implementation by setting the system property drools.threadFactory to its class name.
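As a minimal sketch of what such a plug-in might look like: the system property drools.threadFactory is from the release notes above, but this particular factory class is a made-up example, not part of Drools, and it presumably needs a public no-arg constructor so the engine can instantiate it reflectively from the class name:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical example of a ThreadFactory that could be plugged in via
// -Ddrools.threadFactory=NamedThreadFactory (the class name is made up).
public class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    public Thread newThread(Runnable r) {
        // Give engine threads a recognizable name, and make them daemons
        // so they don't keep the JVM alive on shutdown.
        Thread t = new Thread(r, "drools-worker-" + counter.incrementAndGet());
        t.setDaemon(true);
        return t;
    }
}
```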

Use of any expressions as input for a query 

It is now possible to use the field of a fact as an input argument to a query, as in:

query contains(String $s, String $c)
    $s := String( this.contains( $c ) )
end

rule PersonNamesWithA when
    $p : Person()
    contains( $p.name, "a"; )
then
end

Update with modified properties 

Property reactivity was introduced to avoid unwanted and useless (re)evaluations, allowing the engine to react only to modifications of the properties actually constrained or bound inside a given pattern. However, this feature is automatically available only for modifications performed inside the consequence of a rule. Conversely, a programmatic update is unaware of which of the object's properties have changed, so it cannot take advantage of this feature.

To overcome this limitation it is now possible to optionally specify in an update statement the names of the properties that have been changed in the modified object as in the following example:

Person me = new Person("me", 40);
FactHandle meHandle = ksession.insert( me );

me.setAddress("California Avenue");
ksession.update( meHandle, me, "age", "address" ); 

Monitoring framework improvements 

A new type of MBean has been introduced to provide monitoring of KieContainers, and the hierarchical structure of the JMX MBeans has been revisited to reflect the relationship with the related MBeans of the KieBases. The JMX object naming has been normalized to reflect the terminology used in the Kie API. In addition, a new type of MBean provides monitoring for stateless KieSessions, which was not available in previous releases.

Drools Workbench

Guided Rule Editor: support formulae in composite field constraints

Composite field constraints now support the use of formulae. When adding constraints to a Pattern, the "Multiple Field Constraint" selection ("All of (and)" and "Any of (or)") supports formulae in addition to expressions.

Authoring - Project Editor - Reimport button 

The "Reimport" button invalidates all cached dependencies, in order to handle scenarios where a specific dependency was updated without its version being changed.

by Edson Tirelli ( at October 25, 2016 09:33 PM

October 20, 2016

Sandy Kemsley: Another rift in the open source BPM market: @FlowableBPM forks from @Alfresco Activiti

In early 2013, Camunda – at the time, a value-added Activiti consulting partner as well as a significant contributor to the open source project – created a fork from Activiti to form what is now the...

[Content summary only, click through for full article and links]

by sandy at October 20, 2016 02:56 PM

October 19, 2016

5 Reasons to switch from Activiti to Camunda

Recently the former key engineers of Alfresco Activiti announced their resignation from Alfresco. They have forked Activiti and started their own project. Though the success of that project remains to be seen (and personally, I wish them all the best), one thing is clear: the talent that has exited the company has left a gaping hole and calls the future direction of the platform into question. Though there are still many developers at Alfresco, we suspect that in two years the Activiti project won't bear much resemblance to the project as it is today. This has to be a significant concern to …

by Jakob Freund at October 19, 2016 09:42 AM

October 14, 2016

Sandy Kemsley: Bridging the bimodal IT divide

I wrote a paper a few months back on bimodal IT: a somewhat controversial subject, since many feel that IT should not be bimodal. My position is that it already is – with a division between “heavy”...

[Content summary only, click through for full article and links]

by sandy at October 14, 2016 12:15 PM

October 13, 2016

Sandy Kemsley: AIIM Toronto seminar: FNF Canada’s data capture success

Following John Mancini’s keynote, we heard from two of the sponsors, SourceHOV and ABBYY. Pam Davis of SourceHOV spoke about EIM/ECM market trends, based primarily on...

[Content summary only, click through for full article and links]

by sandy at October 13, 2016 04:15 PM

Sandy Kemsley: AIIM Toronto keynote with @jmancini77

I’m at the AIIM Toronto seminar today — I pretty much attend anything that is in my backyard and looks interesting — and John Mancini of AIIM is opening the day with a talk on...

[Content summary only, click through for full article and links]

by sandy at October 13, 2016 02:26 PM

Sandy Kemsley: Case management in insurance

I recently wrote a paper on how case management technology can be used in insurance claims processing, sponsored by DST (but not about their products specifically). From the paper overview: Claims...

[Content summary only, click through for full article and links]

by sandy at October 13, 2016 11:56 AM

October 12, 2016

Sandy Kemsley: Camunda Community Day: @CamundaBPM technical sessions

I’m a few weeks late completing my report on the Camunda Community Day. The first part was on the community contributions and sessions, while the second half documented here is about Camunda showing...

[Content summary only, click through for full article and links]

by sandy at October 12, 2016 04:39 PM

October 11, 2016

Thomas Allweyer: BPM lab calls for participation in the study "Status Quo Agile"

For the third time, the BPM lab at Koblenz University of Applied Sciences is conducting the study "Status Quo Agile" on the adoption and benefits of agile methods. This time, alongside the Deutsche Gesellschaft für Projektmanagement, is taking part as a partner, the organization founded by Ken Schwaber, one of the inventors of Scrum. Topics include hybrid methods that combine different approaches, as well as questions about scaling agile practices and the challenges of introducing them. Anyone who deals with project management and agile methods professionally can take part. The survey is open until November 7.

by Thomas Allweyer at October 11, 2016 07:41 PM

October 07, 2016

Thomas Allweyer: Market study on systems for organization departments at financial service providers

Banks are required to document their organizational guidelines in writing and to make them accessible to every employee. Instead of a conventional organization manual, this documentation is nowadays often provided in the form of process models enriched with the additional information needed. The market overview produced by Procedera looks at process modeling tools specifically from the perspective of financial service providers.

A total of 13 process modeling tools are discussed. Some of them offer only limited capabilities for managing textual content, so they may need to be complemented by content management systems; the study therefore also gives an overview of 22 such complementary organization-manual systems. Finally, four selected project management systems are presented. Further information on the study, which is available free of charge to banks and savings banks, can be found in an article on the accompanying website, which also contains much more information on process management and organization at financial service providers.

by Thomas Allweyer at October 07, 2016 06:12 AM

September 29, 2016

Drools & JBPM: Google Summer of Code 2016: Drools & Minecraft

Another successful Google Summer of Code program took place this year. We worked together with Samuel Richardson from the USA on the first integration between the Drools engine and the popular game engine Minecraft. The scope of the project was to experiment with how Drools can be used to declaratively define a game's logic. I initially thought about modelling point & click games such as Escape The Room, Monkey Island, Maniac Mansion, etc., but after looking at how to work with Minecraft I opened the concept up to wider game definitions.

We worked with Sam to create a generic game engine that takes the rule definitions and drives the game (the Minecraft mods). Sam created a couple of Minecraft mods that provide a scenario for the game and that interact with and delegate to Drools for the game's logic.
You can find the work for the Drools Game Engine here:

These two games are using the rules described here:
and here:

We spent a lot of time trying to get the separation right, so now you can consume the game server itself independently of the UI. This opens the door to using the engine without Minecraft. For that reason we have also created a set of services that expose the Game Engine via REST, in case you want to interact with it remotely.

You can take a look at the main GameSession interface, which is in charge of defining how to create new game sessions and enables the UI to register callbacks, so that actions can be executed when the game logic says so.

Because of this separation you will see that each game has its own test suite where both the rules and the GameSession API are tested, to make sure that new games can be created and the rules behave as expected.

There are still a lot of things to improve, both in the game engine and in the mods, so feel free to get in touch with us if you want to participate in the project. Hopefully we can build enough features to include it in the Drools project.

by salaboy ( at September 29, 2016 11:49 AM

September 24, 2016

Vishal Saxena: Quiet - as a duck

What can I blog about after reading this phenomenal post on LinkedIn? When your software is compared with Scratch or CoderDojo for the enterprise, I would say "raise a toast" and "thank you, Dave". Maybe I will get to meet you and talk to you in person. Your words mean the world to us.

by Vishal Saxena ( at September 24, 2016 01:20 AM

September 23, 2016

Sandy Kemsley: Camunda Community Day: community contributions

Two years ago, I attended Camunda’s open source community day, and gave the opening keynote at their enterprise user conference the following day. I really enjoyed my experience at the open source...

[Content summary only, click through for full article and links]

by sandy at September 23, 2016 11:44 AM

September 20, 2016

Sandy Kemsley: Ten years of social BPM

Ten years ago today, I gave my first public presentation on social BPM, “Web 2.0 and BPM”, at the now-defunct BPMG Process 2006 conference in London: Web 2.0 and BPM from Sandy Kemsley...

[Content summary only, click through for full article and links]

by sandy at September 20, 2016 11:18 AM

Thomas Allweyer: Update of the BPM tool market monitor for process modeling and analysis

This year again there is an update of the study published by BPM&O on tools for the design and analysis of business processes. On the one hand, the vendors' information about their tools has been updated; on the other, the software "PYX4" has been added as a new tool. In addition, a new cost scenario has been included that corresponds to the company-wide rollout of a BPM tool.
The free study can be downloaded here (registration required)

by Thomas Allweyer at September 20, 2016 09:13 AM

September 14, 2016

Thomas Allweyer: Process management is important, but so far not implemented very successfully

The majority of the more than 400 participants in the international study "BPM Compass" regard process management as an important topic for their company, and they expect its importance to grow further. However, they are less satisfied with its success so far. For most, increasing quality and transparency are at the top of the list of BPM goals, yet fewer than 50% achieve these goals satisfactorily. More than 40% think that their company does not have process management under control.
The study is a joint project of professors Komus (Koblenz University of Applied Sciences), Gadatsch (Bonn-Rhein-Sieg University of Applied Sciences) and Mendling (Vienna University of Economics and Business). The results report can be requested here.

by Thomas Allweyer at September 14, 2016 07:38 AM

September 12, 2016

Thomas Allweyer: Process automation in engineering and simulation

The Italian company Esteco focuses primarily on simulation-based approaches to product development. In this context it is currently also building a platform for process modeling and execution. To support product development, collaborative features are planned, as well as the handling of very large files and the execution of tasks in a protected sandbox environment, so that problems that occur do not affect the rest of the process. Interfaces to scientific software are also planned; for example, the Functional Mockup Interface (FMI), a standard for exchanging dynamic simulation models, is to be supported.

One possible field of application could be processes for configuring and running complex optimization computations. The approach is described in the scientific paper "Exploiting Web Technologies to Connect Business Process Management and Engineering". A beta version of the process management platform "BeePMN" is available for testing. So far it can be used to create BPMN models; the execution and engineering-support features mentioned above are to be added step by step.

by Thomas Allweyer at September 12, 2016 04:12 PM

September 06, 2016

Thomas Allweyer: Internet of Things und Citizen Developers dieses Jahr wichtig für Gartner

Kürzlich hat Gartner den neuesten „Magic Quadrant“ über iBPMS (Intelligent Business Process Management Suites) herausgebracht. Die Bewertungskriterien haben sich nur wenig verändert. So spielen die Fähigkeiten zur Unterstützung von Szenarien des Internet of Things (IoT) eine größere Rolle. Neben den Möglichkeiten zur Integration mit entsprechenden Plattformen und die Fähigkeit zur Verarbeitung der anfallenden Datenströmen werden kontextbezogene Erkenntnisse von den Systemen gefordert. Z. B. sollen Analysen des kritischen Pfads oder der Workload-Verteilung eine dynamische Optimierung der Prozesse zur Laufzeit ermöglichen.

In den Fokus der Analysten sind außerdem die „Citizen Developers“ getreten. Dabei handelt es sich um fachlich orientierte Anwender, die keinen Programmierhintegrund haben. Sie sollen in die Lage versetzt werden, selbst Prozessanwendungen zu erstellen, mit nur geringer Unterstützung durch IT-Spezialisten. Mehrere Anbieter propagieren ihre Lösungen ja bereits unter dem Schlagwort „Low Code“ oder „No Code“. Neu aufgenommen wurden in diesem Jahr die Systeme von Axon Ivy und Bizagi. Unter den als „Leaders“ klassifizierten Anbietern finden sich die üblichen Verdächtigen Pegasystems, Appian und IBM.

Link to download the study from Pegasystems (registration required)

by Thomas Allweyer at September 06, 2016 07:20 AM

September 01, 2016

Thomas Allweyer: BPMN: What do you do without an event-based gateway?

A process with an event-based gateway (click to enlarge)

If you want to react to different events in a BPMN process, the event-based gateway proves useful. At a split with an ordinary exclusive gateway, a sequence flow is selected based on data; it is therefore also called a data-based gateway. For example, a procurement request can be routed to a manager for approval if the order total exceeds a certain limit. At an event-based gateway, by contrast, the outgoing sequence flow is selected based on an event that has occurred. In each outgoing sequence flow, the gateway must be followed by a catching event (or a receive task). When the gateway is activated, the process waits for the following events to occur. The event that occurs first wins, and the corresponding sequence flow is selected.

In the figure above, a warehouse order is first entered and sent to the warehouse. The process then waits at the event-based gateway. If a confirmation arrives before one day has passed, it is checked and the process ends. If, on the other hand, a day passes first without a confirmation having arrived, the warehouse is asked about the order before the process waits at the event-based gateway again. In a real process, a branch after the timer event would also be needed so that the process can be ended once a certain number of days has passed; otherwise an endless loop could result.

Unfortunately, not all BPM systems for process execution provide the event-based gateway. For example, for teaching I use the free community edition of the system "Bonita", whose modelling palette does not contain an event-based gateway. One therefore has to find another way to achieve the same behaviour. Finding such "workarounds" is a useful exercise for deepening one's own understanding of BPMN flow logic.

The following figure shows one possible solution. Here, waiting for the events has been moved into a sub-process. After the sub-process starts, the sequence flow is first split into two parallel paths at a parallel gateway, since the process is to wait for both events at the same time. In the main process, a boolean variable "confirmation in time flag" has been defined which indicates whether the confirmation arrived in time. Its default value is "true". If the confirmation is the first thing to arrive, the sub-process is ended by the terminate event. In the parent process, the lower path is selected at the data-based gateway, since the flag still has the unchanged value "true".

Sub-process for waiting for events (click to enlarge)

If, on the other hand, a day passes before a confirmation has arrived, the flag's value is set to "false", and the upper path is selected in the parent process. Before the process waits again, the flag's value must be reset to the default value "true". This was not modelled as a separate task; it can, for example, be carried out as part of the inquiry task.

The terminate events are necessary because otherwise, after one of the events has occurred, the process would still wait for the other event and the sub-process would not end.

The next figure shows another possible solution. Here, the waiting was modelled as a task with attached events. It is a kind of dummy task that does nothing except wait. It is not ended in the normal way, but only via the attached interrupting events. When the first of these two events occurs, the task is aborted and the exception flow leaving that event is followed.

A waiting task with interrupting events (click to enlarge)

When implementing this as an executable process, the task's unlimited waiting was achieved by assigning it to a dummy user, i.e. a user who does not belong to any real person. The task therefore waits in a task list that nobody ever gets to see.

Since the process begins with an explicit start event, one could demand that every task must have a regular outgoing sequence flow. Of course, a sequence flow could be led out of the waiting task to an end event. However, this sequence flow would never be used.

More about BPMN in the third edition of "BPMN 2.0":

The example described here is taken from the BPMS book:

by Thomas Allweyer at September 01, 2016 08:42 AM

Thomas Allweyer: BPMN: What to do without event-based gateways

A process with an event-based gateway (click to enlarge)

If you want to model the reactions to different events in a BPMN process, the event-based gateway is very useful. At a normal exclusive splitting gateway, a sequence flow is selected based on data; it is therefore called a data-based gateway. For example, a procurement request may be routed to a manager for approval if the total amount exceeds a certain limit. At an event-based gateway, on the other hand, a sequence flow is selected based on the occurrence of an event. There must be a catching event (or a receive task) in each outgoing sequence flow. When the gateway is activated, the process waits until one of the following events occurs. The first event to occur wins, and the respective sequence flow is selected.

In the above figure, a warehouse order is entered into a software system and sent to the warehouse. The process then waits at the splitting event-based gateway until a confirmation is received or until one day is over. If the first event is the reception of the confirmation, it is checked for correctness, and the process is finished. If, however, one day is over before the confirmation is received, the upper path is selected, and the status of the order is inquired from the warehouse, before the process waits again for a confirmation to arrive. In real life, it would be necessary to add a data-based gateway after the timer event so that the process can be finished after several unsuccessful inquiries, thus preventing an endless loop.
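For readers who think in code, the gateway's "first event wins" semantics can be sketched in plain Java. This is only an illustrative analogy (not jBPM or Bonita API): each future stands in for a catching event, and `CompletableFuture.anyOf` plays the role of the event-based gateway.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only, not a BPMN engine API: the event-based
// gateway's semantics modelled with CompletableFuture.anyOf.
public class EventBasedGatewaySketch {

    // The "gateway": waits until the first of the given events occurs
    // and returns its payload; the later events are simply ignored.
    static Object firstEvent(CompletableFuture<?>... events) throws Exception {
        return CompletableFuture.anyOf(events).get();
    }

    // A helper producing an "event" that occurs after the given delay.
    static <T> CompletableFuture<T> eventAfter(long ms, T payload) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                TimeUnit.MILLISECONDS.sleep(ms);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return payload;
        });
    }

    public static void main(String[] args) throws Exception {
        // Path 1: a confirmation message arriving after 100 ms.
        // Path 2: a timer event standing in for "one day is over" (500 ms).
        Object winner = firstEvent(
                eventAfter(100, "Confirmation received"),
                eventAfter(500, "One day is over"));
        System.out.println(winner); // prints "Confirmation received"
    }
}
```

Swapping the two delays selects the timer path instead, mirroring the branch taken in the diagram when no confirmation arrives within a day.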

Unfortunately, not all BPM systems for process execution have the event-based gateway in their palette of modeling symbols. For example, I am using the free community edition of Bonita in my lectures which unfortunately does not have the event-based gateway. It is therefore necessary to find another way for achieving the same behaviour as described above. Finding such a workaround can be a good exercise for improving the understanding of the BPMN flow logic.

One possible solution is shown in the following figure. Here, a sub-process has been used for modeling the wait. When the sub-process is activated, the sequence flow is split by a parallel gateway into two parallel paths, since the process waits for both events at the same time. In the parent process, a boolean variable has been defined to indicate whether the confirmation has been received in time. By default, this flag has the value true. If the event "Confirmation received" is the first one to occur, the sub-process is finished, and the data-based gateway in the parent process selects the lower path, since the flag still has the value "true". The terminate end events in the sub-process are required to finish the sub-process when one of the events is triggered. Otherwise, the sub-process would continue to wait for the other event.

Waiting for events in a sub-process (click to enlarge)

If one day is over before the confirmation is received, the flag's value is set to "false", and the upper path is selected at the data-based gateway in the parent process. Before the sub-process is activated again, the flag needs to be reset to its default value "true". This is not modeled as a separate task, but is included in the inquiry task.
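The flag-based sub-process workaround can likewise be sketched in plain Java. Again this is an illustrative assumption, not Bonita's implementation: two parallel waits stand in for the parallel paths, a latch plays the terminate event, and the parent "process" branches on the flag.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only, not Bonita's API: the sub-process
// workaround. The flag defaults to true; if the timer "wins", it
// flips the flag to false before triggering the terminate signal,
// and the parent process branches on the flag afterwards.
public class SubProcessWorkaroundSketch {

    static String orderFromWarehouse(long confirmationDelayMs,
                                     long timeoutMs) throws Exception {
        AtomicBoolean confirmedInTime = new AtomicBoolean(true); // default true
        CountDownLatch terminate = new CountDownLatch(1);        // terminate event
        ExecutorService pool = Executors.newFixedThreadPool(2);  // parallel gateway

        // Parallel path 1: wait for the confirmation message.
        pool.submit(() -> {
            try { Thread.sleep(confirmationDelayMs); } catch (InterruptedException e) { return; }
            terminate.countDown();
        });

        // Parallel path 2: the timer event; flips the flag if it fires first.
        pool.submit(() -> {
            try { Thread.sleep(timeoutMs); } catch (InterruptedException e) { return; }
            confirmedInTime.set(false);
            terminate.countDown();
        });

        terminate.await();   // the first terminate event ends the sub-process
        pool.shutdownNow();  // abort the path that is still waiting

        // Parent process: data-based gateway on the flag.
        return confirmedInTime.get() ? "Check confirmation" : "Inquire at warehouse";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(orderFromWarehouse(100, 800)); // confirmation first
        System.out.println(orderFromWarehouse(800, 100)); // timer first
    }
}
```

Note how `shutdownNow()` mirrors the terminate end events: once one path wins, the other waiting path is aborted rather than left running.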

The next figure shows another possibility for achieving the same behaviour. The task „Waiting for Confirmation“ is more or less a dummy task, because it doesn’t do anything – other than waiting. This task does not finish in a regular way, but only via the attached interrupting events. When the first interrupting event occurs, the waiting task is aborted, and the event’s outgoing exception flow is activated.

A waiting task with interrupting events (click to enlarge)

When implementing this process in the BPMS, the endless waiting of the task has been achieved by assigning the task to a dummy user, i. e. a user that does not belong to a real person. The task therefore waits in a task list that nobody ever gets to see – before it is aborted by an attached event.

Since the process starts with an explicit start event, it may be considered incorrect to have an activity without a regular outgoing sequence flow. It would not be any problem to add such an outgoing sequence flow leading to an end event. However, this sequence flow would never be used.
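The dummy-task variant also maps naturally onto plain Java (illustrative only, not a BPMS API): a task that blocks forever and is only ever ended by cancellation, which plays the role of the attached interrupting event and its exception flow.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch only: a "dummy" waiting task that never
// completes on its own and is aborted by an attached event.
public class WaitingTaskSketch {

    // Aborts the forever-waiting task after the given delay and
    // reports whether it was indeed cancelled rather than completed.
    static boolean abortAfter(long delayMs) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        try {
            // The waiting task: like a task assigned to a dummy user,
            // it sits in a "task list" forever and never finishes.
            Future<?> waitingTask = pool.submit(() -> {
                new CountDownLatch(1).await(); // blocks forever
                return null;
            });

            Thread.sleep(delayMs);      // ...until the attached event occurs
            waitingTask.cancel(true);   // the interrupting event aborts the task

            return waitingTask.isCancelled();
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("aborted by event: " + abortAfter(200));
    }
}
```

Just as the modelled task has no regular outgoing sequence flow, the submitted callable has no normal completion path; cancellation is the only exit.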

More about BPMN in the second edition of "BPMN 2.0 – Introduction to the Standard for Business Process Modeling":

by Thomas Allweyer at September 01, 2016 08:14 AM

August 17, 2016

Drools & JBPM: Red Hat BPMS and BRMS 7.0 Roadmap Document - With a Focus on UI Usability

BPMS and BRMS 6.x laid a lot of foundations, but the UI aspects fell short in a number of areas with regards to maturity and usability.

In the last 4 years Red Hat has made considerable investment into the BPMS and BRMS space. Our engineering numbers have tripled, and so have our QE numbers. We also now have a number of User Experience and Design (UXD) people to improve our UI designs and usability.

The result, we hope, is that the 7.x series will take our product to a whole new level, with a much stronger focus on maturity and usability, now that we have the talent and bandwidth to deliver.

We had an internal review where we had to demonstrate how we were going to go about delivering a kick ass product in 7.0. I thought I would share, in this blog, what we produced, which is a roadmap document with a focus on UI Usability. The live version can be found at google docs, here - feel free to leave comments.

Enjoy :)

BPMS and BRMS Platform Architect.

Other Links:
Drools 7.0 Happenings  (Includes videos)
Page and Form Builder Improvements (Video blog)
Security Management (Detailed blog on 7.0 improvements)
User and Group Management (Detailed blog on 7.0 improvements)

About This Document

This document presents the 7.0 roadmap with an eye on usability, in terms of where, how and who for. It is an aggressive and optimistic plan for 7.0 and it is fully expected that some items or a percentage of some items will eventually be pushed to 7.1, to ensure we can deliver close to time. Longer term 7.1 and onward items are not discussed or presented in this document, although it does touch on some of the items which would be raised as a result of reading this document - such as the “What’s not being improved” (for 7.0) section.

Wider field feedback remains limited, with a scarcity of specifics. This creates challenges in undertaking a more evidence based approach to planning, which can stand up strongly to scrutiny on all sides. However, engineering and UXD have been working with the field, primarily through Jim Tyrrell and Justin Holmes over the last year on this topic and this document represents the culmination of many discussions over the last year. As such it represents a good heuristic, based on the information and resources available to us at the time.

Understanding Feedback from the Field

Broadly speaking, we have two types of customers:
  1. Those who want developers to use our product, oftentimes embedded in their apps
  2. Those who want a cross-functional team to use our product

Generally speaking, we do quite well with customer 1, but we have a huge challenge with customer 2. The market has set a pretty clear expectation on features and quality for targeted audiences, with IBM ODM and Pega's BPM/Case Management. Almost every type 2 customer either has a significant deployment of these two competitors in place, or the decision maker has done significant work with these products in the past. Moreover, customer 2 is interested in larger, department- or organization-wide deployments, while customer 1 is usually interested in project-level deployments.

Customer 2 is primarily upset with our authoring experience, both in Eclipse and in Business Central. It is uncommon that customer 1 or 2 is upset with missing features or functions from our runtime (especially now that 6.3 has been released with a solid execution server and management function), and when she is, our current process to resolve these gaps works well. Therefore, the field feedback in this document (and our current process) is focused on the authoring experience. This isn't to say other elements of the product are perfect, but simply an acknowledgement that we have limited time and energy and that the authoring experience is the most important barrier to success with customer 2.

The key issues that we have authoring side are fundamental (customer stories available here at request - some are a bit off color). Generally, these issues fall into 3 areas which are further enumerated in “Product Analysis and Planned Changes.”
  1. Lack of support for a team centric workflow - Functional
    1. See Asset Manager (we need to add detail here)
  2. Knowledge Asset Editors
    1. See BPMN2 designer (functional / reliable), decision table editor (usable) and data modeller (usable), forms (usable)
  3. Navigation between functions and layout of those functions in the design perspective
    1. See Design (Authoring perspective)
    2. Deepak - (usable/reliable)
    3. Aimee - Functional

Introduction - The Product Maturity Model and what is Usability

Version 6.x has done well getting BRMS and BPMS to where it is today, with a strong revenue stream. The product maturity model (see image below) is a useful tool for discussing product improvements. It demonstrates that we are low on the model and need to mature and move up if we are to continue to improve sales. Too many aspects of the system, within the UI, may be considered neither functional (F), nor reliable (R), nor usable (U). The purpose of this document is to articulate a plan to address these issues, and in particular highlight the type of users the tool is being designed for and what they’ll be doing with it. The goal for 7.0 is to get as close to the “chasm” described in the model, with an aim to go beyond it as 7.x matures.

When discussing usability it’s very important we understand whether we are talking about lack of features (F), too many or too serious defects (R) or poor UI design (U).

Quite often people report an issue as usability simply because they want to go from A to D, but get stuck at B or C, either because the functionality is not there to complete the task, or because it's too buggy and they cannot progress. So while good UI design is important, we must balance our efforts across F, R and U to become usable; a focus on UI design alone will not help usability if the underlying product is neither reliable nor functional. Commonly this is called Human Centered Design. By leveraging this common vocabulary, we can foster a more effective and inclusive dialogue with the wider team. So going forward, we are asking our stakeholders to employ the usability model presented here, and in particular the Functional, Reliable, Usable and Convenient terms.

High Level Goal

A minimal viable product for case management is the main goal for 7.0. Case management provides a well defined end-to-end use case for product management, engineering and UXD. This is more than just adding another feature: when a user creates an end-to-end case management solution, they will need to use most aspects of our system. Case management also has a clear set of target audiences (personas) for the design UI and the case worker UI. This allows us to identify where, how and for whom our "fit and finish" efforts are spent, ensuring a strongly directed focus on what we do, making it easier to communicate, and hopefully creating a more realistic understanding of expectations from others within the organisation.

High Level Plan

When considering the plan as a whole, the initial target user (persona) for the design UI in 7.0 is a casual or low-skilled developer, who typically favours tooled (low-code) environments where possible. See Deepak in Personas. Where possible and where it makes sense, designs will be optimized for the less technical citizen developers of the Aimee and Cameron personas, with either optional advanced functionality for Deepak or common-denominator designs suitable for all personas. While citizen developers are not the primary focus for 7.0, they will become increasingly important and should ideally be targeted from 7.1 onwards, so it's important that as much as practically possible is done in this direction in 7.0. See "The advent of the citizen developer".

7.0 will primarily be focusing on all the components and parts that a Business Central user will come into contact with while building a case management solution. For each of those areas we will try to sustain effort over a long period of time to ensure depth and maturity, with UXD fully involved.

The aim for case management, the targeted components it uses and the Deepak persona is to achieve an acceptable level of functional, reliable and usable. For 7.1 we hope to look more holistically across the system and cross the chasm to become convenient. To become convenient we will need a strong effort in examining the end-to-end user interaction with the system, streamlining all the steps users go through and making it easier and faster for them to achieve their goals.

Detailed plans here
Detailed resource allocation, here.

Product Changes Done (6.3)

  • The whole business central was updated to PatternFly for v6.3. (See screenshots at end).
  • The execution server UI has been fully redesigned with UXD involvement and great field feedback. (See screenshots at end).
    • “I want to congratulate you on the great work on the new kie server management features and UI. It's surprisingly intuitive and does just what it needs to do. Keep up the good work!” (Justin Holmes, Business Automation Practice Lead).
  • The process runtime views have been augmented with the redesigned and newly integrated DashBuilder. They look great and have already had good feedback.  (See screenshots at end).

Product Analysis and Planned Changes

The 7.0 development cycle only started in early/mid May, so we do not yet have UXD input (wireframes/css/html) for every area. This UXD input will take time and will be produced incrementally across the product throughout the 7.0 life cycle. What we do have is included below, along with where those efforts will be focused.
  • Design (Authoring perspective)
    • Problem:
      • The authoring perspective is designed for power users, and fails to work for less technical personas.
      • The project configuration works just like normal editors, which is confusing.
      • The project explorer mixes switching org/repo/project and navigation, which crowds the area. It’s also repository oriented.
      • Versioning and branching are too hard, and commits do not squash, creating long unreadable logs for every small save.
    • Solution:
      • See UXD wire diagrams for most of what is described here, although there is still more to do.
      • Create new views for navigating projects that are content- and information-oriented and more suitable for the casual coder, moving towards the citizen developer. Make things project oriented.
      • Centralise project settings, and improve their reliability and usability.
  • Support for Collaborative Team Based Workflow
    • Problem:
      • Most customers using Business Central want it to support a team, which generally reflects the Deepak, Aimee, Paula and Cameron from our personas.
      • We have no clear workflow for changes to be approved and promoted in the team.
        • The asset manager (versioning and branch management) needs an overhaul. It is extremely confusing to the point of not being functional even for technical users. The current feature does weird branching/merging with git in a single repo, so it’s too technical for Aimee but confusing for Deepak as it doesn’t follow conventions.
        • The screens are way too small to be usable and the actual workflow can be quite confusing
        • The feature hasn’t been QE’d
      • The single git repository model can make integrating Business Central into a CI/CD flow complicated. It's doable now that we have git hooks, but it is far from convenient. Given our strength in the CI/CD space, this needs to get to convenient.
    • Solution
      • Underlying changes going on for the cloud work (every user gets their own fork) will put in place the backend that will make this easier to progress. Exactly how we will improve the UXD here, to hide and simplify Git, has yet to be investigated. We have a hiring slot open for someone to focus on this area.
      • Will move to a repository per user. This will support a pull request type workflow in the tool between users.
      • Repo per user will simplify CI/CD
      • To be clear, 7.0 will work to improve around the scope of what we have in 6.x now, as we have limited time left for 7.0 on this. The aim is to be minimally viable for Deepak; it's not clear how easy we can make this for Aimee too. Likewise, a wider collaborative workflow really needs to be considered future work, to avoid expectation problems.
  • BPMN Designer
    • Problem
      • The BPMN designer is the most important area in the product and also the area that gets the most complaints. These complaints are primarily about reliability: Oryx was inherited from an old community project (for time to market) and came with too much technical debt. There are lots of small details which can detract from the overall experience.
      • Oryx is not testable and regressions happen with almost every fix, making it very hard and costly to stabilise.
    • Solution
      • Work with the Lienzo (a modern canvas library) community to build a new Visio-like tool that can support BPMN2 and provide a commercial-quality experience.
      • Have a strong focus on enabling testability.
      • Real time drawing of shapes and lines during drag. Including real time alignment and distribution guidelines and snap.
      • Proper orthogonal lines, with multipoint support, and heuristics to provide minimal number of turns for each line.
      • Reduced and more attractive property panels (designed by UXD) for each of the node types, focusing on hiding technical details and (also) targeting less technical users.
      • Change palette from accordion to vertical bar with fly-outs. Support standard and compact palettes.
    • Eclipse
      • To unify authoring experience across web and Eclipse, we are investigating using web-based modelling components inside of Eclipse, without the need for business-central or any other server. However this is a research topic and we are unable to promise anything. We plan to investigate decision tables first, as they are simpler, as they require a single view (and also use lienzo), which may make 7.0. If that goes well, we will look into the designer - but this is not planned for 7.0.
      • Until we have a supported Lienzo-based BPMN2 designer for Eclipse, we will continue to support and maintain the existing Eclipse plug-in. The existing items, such as project wizards, will remain and be supported.
  • Administration/Settings
    • Problem:
      • Administration and settings are spread out in different locations and are neither consistent nor intuitive. In some cases, such as imports, they have been buggy.
    • Solution:
      • Centralise administrations and settings and ensure they are consistent and intuitive.
      • Ensure all administration and settings are reliable.
      • Work with UXD on improving designs.
        • Designs TBD.
  • Case Management
    • This does not exist yet, but UXD are involved. They have produced visionary documents, which go beyond what we can implement now, and are working with us to produce more incremental and simpler steps that we can achieve for 7.0.
  • Decision Tables
    • Problem
      • There are not a lot of complaints about decision tables, other than that they could be more attractive. The main issue is that they lack functionality compared to our competitors.
    • Solution
      • Focus the two Drools UI developers solely on decision tables and moving towards Decision Model and Notation (DMN), an OMG standard for decision tables that complements BPMN2.
      • Must support tabular chaining (part of the DMN spec), design-time verification and validation, and Excel import/export.
      • Work with UXD to improve the aesthetics.
  • Reporting (DashBuilder)
    • Problem
      • Dashbuilder is already a mature and well-featured product, with few complaints. However, it came from Polymita and uses a different technology stack, which produces a design mismatch, as it's not PatternFly. Nor can its charts be easily integrated into other pages, which is necessary for process views and case management.
    • Solution
      • An effort has been going on for some time to port Dashbuilder to the same technology as the rest of the platform and adopt PatternFly. The results can already be seen in the improved jBPM process views for 6.3, and we should have full migration for 7.0.
  • Forms
    • Problem
      • This is an inherited Polymita item which was written in a different technology stack; it never integrated well, nor is it PatternFly-based, creating an impedance mismatch.
      • It has some powerful parts, but its layout capabilities are too limited: users are restricted to adding new items in rows only, with no row spanning or grid-like views.
    • Solution
      • A new effort has been going on for some time that ports the forms to the same technology stack as the rest of the platform and adopts PatternFly.
      • We are focusing around a bootstrap grid layout system, to ensure we have intuitive and powerful layout capabilities. We have invested in a dynamic-grid system for bootstrap grids, to avoid the issue of having to design your layout first, as it’s hard to change after.
    • Working with UXD to redesign each of the editors for the form components.
  • Data Modeller
    • Problem
      • There are fewer complaints about this item than others, probably due to its simpler nature, but UXD have a number of requests to improve the overall experience anyway.
    • Solution
      • Support simple business types (e.g. number, string, currency), optionally and in addition to Java types; we won't lose the ability to use the Java types when required.
      • Layout changes and CSS improvements.
      • Longer term we need a visual ERD/UML-style modeller, but that will not happen for 7.0.
  • Data Services/Management
    • This does not exist yet, but is necessary for case management to work end-to-end. It entails the system allowing data sources to be used, tables to be viewed and their data to be edited. More importantly it allows design time data driven components for forms.
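To make the decision-table direction above concrete, here is a minimal plain-Java illustration. This is explicitly not Drools DRL, the guided decision table editor, or a DMN runtime; it only shows what a decision table encodes: ordered rows of condition/outcome pairs where the first matching row wins.

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative sketch only, not Drools or DMN API: a decision table
// as ordered rows of (condition, outcome); first matching row wins.
public class DecisionTableSketch {

    record Row(Predicate<Double> condition, String outcome) {}

    static String decide(List<Row> table, double orderTotal) {
        for (Row row : table) {
            if (row.condition().test(orderTotal)) {
                return row.outcome();
            }
        }
        return "no decision"; // no row matched
    }

    public static void main(String[] args) {
        // A hypothetical approval table for procurement requests.
        List<Row> approvalTable = List.of(
                new Row(t -> t < 1_000, "auto-approve"),
                new Row(t -> t < 10_000, "manager approval"),
                new Row(t -> true, "director approval"));

        System.out.println(decide(approvalTable, 500));    // auto-approve
        System.out.println(decide(approvalTable, 5_000));  // manager approval
        System.out.println(decide(approvalTable, 50_000)); // director approval
    }
}
```

Real decision tables add hit policies, multiple condition columns and chaining between tables, but the row-matching core is the same idea.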

What's Not Being Improved for 7.0

  • 7.1 will need to have a stronger focus on trying to become more convenient and pleasurable. This will require stronger focus on streamlining how the user uses the tool as a whole, making it easier and faster for them to get things done. Wizards and task oriented flows will be essential here, and general improved interaction design.
  • General
    • Refactoring
  • BRMS
    • Guided Editor
    • Scenario/Simulation
      • We hope to pick this up for 7.1 in 2017.
    • DSLs
  • BPMS
    • Redesign of the navigation
    • Major redesign of process instance list or task list (though adding features to support case management)
      • More focus on building custom case applications that can be tailored specifically to what the customer needs
  • Product Installer
    • It is unclear if the product team will be improving the usability of the installer and patching.
  • Product Portal and Download
    • It is unclear if the product team will be improving how product and patches are found.

Other Notable Roadmap Work

  • Drools
    • Drools is currently focusing on enabling multi-core scalability for CEP use cases and also high availability for CEP use cases. There is also ongoing longer-term research into pojo-rules and a DRL replacement (most likely a superset of Java).
  • jBPM
    • Horizontal scaling for the cloud is the main focus for jBPM and represents a number of challenges for jBPM, related to how processes running on different services work with each other, as well as how signals and messages are routed and information collected and aggregated.
  • OptaPlanner
    • Horizontal scaling through Partitioned Search is the main focus for OptaPlanner.

Organisational Changes Done and Ongoing

  • The group is now focusing engineers for longer periods of time on specific parts of the product. This will bring depth and maturity to the areas the engineers work on.
    • 6.x focus was on rapid breadth expansion of features. This gave time to market, which allowed the revenue growth we have, but comes with the pains we have now. The shift to depth will help address this.
  • Migrating to PatternFly
    • Allows engineering and UXD to be more fully engaged. Ensures our product is consistent with all other Red Hat products. Allows Business Central to leverage ongoing research from the PatternFly team.
  • UXD team has increased from 1 person to 2.5. With one person dedicated to providing HTML and CSS to developers.
  • Usability testing of primary workflows and new features with participants representing target Personas for the given workflows/features.
  • The field has become and continues to become more engaged, via the BPM and BRMS Community of Practice initiative, and in particular Justin Holmes and Jim Tyrrell's involvement.
    • They have attended multiple team meetings now, and provide constant feedback and guidance. This has been invaluable.
    • The field engages with UXD in a twice-monthly meeting, which led the effort in developing Personas. These design tools provide a structure for discussions about who our users are and what we need to build in order to make them happy. Today, these personas are all focused on the design/authoring experience, as this is currently the field's biggest perceived gap in features and we want to focus our effort as much as possible.
    • Jim Tyrrell is proposing to lead a regular field UXD review, to review any changes going on in community, as they happen.  This effort should be scheduled to be done every 3 weeks or so.
    • We also should think about bringing in System Integrator Consulting Partners to help with designing our offering.
    • Engineering releases of the product are being consumed by SA’s and Consultants in order to do exploratory testing before GA.
  • More continuous sustaining effort: organisational and planning changes to support a continuous effort to improve the quality of the platform across the board. Rather than continuously switching developers' focus or postponing bug fixing to the end of the cycle, there should be a continuous effort to fix known issues (large and small) to improve the overall quality and experience. This is currently set at 20% on average across the team (where some developers are much more focused on sustaining than others).
  • The documentation team have agreed to move to the same tooling (asciidoc) and content source (git repo) as engineering. This should make it easier for them to stay in sync and add value.
    • For 6.x and prior, the documentation team had been siloed, using a completely different tool chain and document source. They were unable to effectively track community docs, meaning that product docs lagged behind, lacked content and were often wrong. This meant the product docs devalued the product compared to community. We would typically hear field people say they wished they could just show community docs to customers rather than product docs - this is a situation that cannot be allowed to continue.
  • A subcontractor has been hired to assist with user guide and getting started documentation in a tutorial format, as well as installation and setup - to improve the onboarding experience. This work is currently focused on 6x, but it will be updated to 7.0 towards the end of the project life cycle.
  • QE are now working far more closely with engineering, adding tests upstream into community and ensuring they run earlier so regressions are found faster. We have also been working to embed the QE team within engineering, so that there is greater communication, and thus understanding and collaboration, between engineering and QE (which did not happen on 6.x or earlier).
  • We have greatly improved our PR process, with gatekeepers and an insistence that all code, backend and frontend, is now reviewed for tests. 6.x had no community-provided UI tests; this is no longer the case for 7.x.
  • We have also improved our CI/CD situation.

6.3 Improvement Images

Execution Server

Data Modeller (Before and After)

jBPM Runtime Views (Before and After)

by Mark Proctor ( at August 17, 2016 12:27 AM

August 05, 2016

Drools & JBPM: Page and Form builder for Bootstrap responsive grid views - a progress update

Eder has made great progress on the page and form builder,  which are built on top of Bootstrap responsive grid views.

We love the responsive aspects of Bootstrap grid views, but felt existing tools (such as Layoutit) exposed the construction of the grid too much to users. Furthermore, changing the structure of a page after it has been built and populated is not easy. We wanted something that built the grid automatically and invisibly, based on the dragging and positioning of components.

The latest results can be seen in this youtube video (best to watch full screen and select HD):

We also have other videos from earlier revisions of the tool, as well as videos of related peripheral tools:
Page Builder
Form Builder
Page/App Deployment
Page Permissions
User and Groups

by Mark Proctor ( at August 05, 2016 11:54 PM

July 29, 2016

Drools & JBPM: Security management in jBPM & Drools workbenches

The jBPM 7 release will include an integrated security management solution to allow administrator users to manage the application’s users, groups and permissions using an intuitive and friendly user interface. Once released, users will be able to configure who can access the different resources and features available in the workbench.

In that regard, a first implementation of the user & group management features was announced about 3 months ago (see the announcement here). This is the second article in this series, and it describes what permissions are and how they extend the user and group management features in order to deliver a full security management solution. So before going further, let's introduce some concepts:

Basic concepts

Roles vs Groups

Users can be assigned more than one role and/or group. It is always mandatory to assign at least one role to the user; otherwise he/she won't be able to log in.

Roles are defined at the application server level as <security-role> entries in the application's web.xml descriptor. Groups, on the other hand, are a more flexible concept, since they can be defined at runtime. Both can be used together without any trouble. Groups are recommended, as they are more flexible than roles.
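For reference, roles such as "admin" or "user" are declared in web.xml using the standard servlet deployment descriptor syntax, for example:

```xml
<!-- Excerpt from the application's web.xml deployment descriptor -->
<security-role>
  <role-name>admin</role-name>
</security-role>
<security-role>
  <role-name>user</role-name>
</security-role>
```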


Permissions

A permission is basically something the user can do within the application, usually an action related to a specific resource. For instance:

  • View a perspective 
  • Save a project 
  • View a repository 
  • Delete a dashboard 

A permission can be granted or denied and it can be global or resource specific. For instance:

  • Global: “Create new perspectives” 
  • Specific: “View the home perspective” 

As you can see, a permission is a resource + action pair. In the concrete case of a perspective, the available actions are read, update, delete and create. That means there are four possible permissions that could be granted for perspectives.

Permissions do not necessarily need to be tied to a resource. Sometimes it is also necessary to protect access to specific features, for instance "generate a sales report". That means permissions can be used not only to protect access to resources but also to custom features within the application.

Authorization policy

The set of permissions assigned to every role and/or group is called the authorization (or security) policy. Every application contains a single security policy which is used every time the system checks a permission.

The authorization policy file is initialized from a file called WEB-INF/classes/ under the application’s WAR structure.

NOTE: If no policy is defined then the authorization management features are disabled and the application behaves as if all the resources & features were granted by default. 

Here is an example of a security policy file:

# Role "admin"

# Role "user"

Every entry defines a single permission assigned to a role/group. On application start-up, the policy file is loaded and stored in memory.
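The individual entries are not reproduced above, but as an illustrative sketch (the exact property naming is an assumption, not taken from this article), each line pairs a role with a resource + action and a granted/denied value:

```properties
# Role "admin"
role.admin.permission.perspective.read=true
role.admin.permission.perspective.create=true

# Role "user"
role.user.permission.perspective.read=true
role.user.permission.perspective.create=false
```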


The Security Management perspective is available under the Home section in the workbench's top menu bar.

The next screenshot shows how this new perspective looks:          

               Security Management Perspective             

Compared to the previous version, this new perspective integrates the management of roles, groups & users into a single UI, as well as the editing of the permissions assigned to both roles & groups. In particular, it allows users to:
  • List all the roles, groups and users available 
  • Create & delete users and groups 
  • Edit users, assign roles or groups, and change user properties
  • Edit both roles & groups security settings, which include: 
    • The home perspective a user will be directed to after login 
    • The permissions granted or denied to the different workbench resources and features available 

All of the above together provides a complete user and group management subsystem, as well as a permission configuration UI for protecting access to some of the workbench's resources and features.

Role management

By selecting the Roles tab on the left sidebar, the application shows all the application roles:

Unlike users and groups, roles cannot be created or deleted, as they come from the application's web.xml descriptor.

NOTE: User & group management features were described in detail in this previous article

After clicking on a role in the left sidebar, the role editor opens on the right side of the screen; it is exactly the same editor used for groups.

Security settings editor

Security Settings 

The above editor is used to set several security settings regarding both roles and groups.

Home perspective

This is the perspective where the user is directed after login. This makes it possible to have different home pages for different users, since users can be assigned to different roles or groups.


Priority

It is used to determine which settings (home perspective, permissions, …​) take precedence for users with more than one role or group assigned.

Without this setting, it wouldn't be possible to determine which role/group should take precedence. For instance, an administrative role has a higher priority than a non-administrative one. For users granted both administrative and non-administrative roles, administrative privileges will always win, provided the administrative role's priority is greater than the other's.


Currently, the workbench supports the following permission categories.

  • Workbench: General workbench permissions, not tied to any specific resource type. 
  • Perspectives: If access to a perspective is denied, it will not be shown in any of the application's menus. Update, Delete and Create permissions change the behaviour of the perspective management plugin editor. 
  • Organizational Units: Sets who can Create, Update or Delete organizational units from the Organizational Unit section at the Administration perspective. Sets also what organizational units are visible in the Project Explorer at the Project Authoring perspective. 
  • Repositories: Sets who can Create, Update or Delete repositories from the Repositories section at the Administration perspective. Sets also what repositories are visible in the Project Explorer at the Project Authoring perspective. 
  • Projects: In the Project Authoring perspective, sets who can Create, Update, Delete or Build projects from the Project Editor screen as well as what projects are visible in the Project Explorer. 

For perspectives, organizational units, repositories and projects it is possible to define global permissions and then add single-instance exceptions. For instance, read access can be granted to all perspectives while being denied for just one individual perspective. This is called the grant all, deny a few strategy.

The opposite strategy, deny all, grant a few, is also supported:

NOTE: In the example above, the Update and Delete permissions are disabled, as it does not make sense to define such permissions if the user is not even able to read perspectives.
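As a hedged sketch (the property naming and the SecurityPerspective identifier are illustrative, not taken from this article), a grant all, deny a few configuration for perspectives could look like:

```properties
# Grant read access to all perspectives for role "user"...
role.user.permission.perspective.read=true
# ...but deny it for one specific perspective instance
role.user.permission.perspective.read.SecurityPerspective=false
```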

Security Policy Storage

The security policy is stored under the workbench's VFS; more specifically, in a GIT repo called “security”. The ACL table is stored in a file called “” under the “authz” directory. The following is an example of the entries this file contains:


Every time the ACL is modified from the security settings UI, the changes are stored in the GIT repo. Initially, when the application is deployed for the first time, there is no security policy stored in GIT. However, the application might need to set up a default policy with the different access profiles for each of the application roles.

In order to support default policies, the system allows a security policy to be declared as part of the webapp's content. This can be done just by placing a file under the webapp's resource classpath (the WEB-INF/classes directory inside the WAR archive is a valid one). On app start-up the following steps are executed:

  • Check if an active policy is already stored in GIT 
  • If not, then check if a policy has been defined under the webapp’s classpath 
  • If found, such policy is stored under GIT 

The above is an auto-deploy mechanism used by the workbench to set up its default security policy.

One slight variation of the deployment process is the ability to split the “” file into smaller pieces, so that it is possible, for example, to define one file per role. The split files must start with the “security-module-” prefix, for instance: “”. The deployment mechanism will read and deploy both the "" and all the optional “security-module-?.properties” files found on the classpath.

Note that, even when using the split approach, the “” file must always be present, as it is used as a marker file by the security subsystem to locate the other policy files. This split mechanism allows for a better organization of the whole security policy.
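Putting the pieces together, a split policy on the webapp's classpath might be laid out as follows (the file names are illustrative, since the actual names are not shown above):

```
WEB-INF/classes/<policy-marker-file>.properties       (marker file; may contain shared entries)
WEB-INF/classes/security-module-admin.properties      (entries for the "admin" role)
WEB-INF/classes/security-module-user.properties       (entries for the "user" role)
```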

Authorization API

Uberfire provides a complete API around permissions. The AuthorizationManager is the main interface for checking if permissions are granted to users.

    AuthorizationManager authzManager;

    Perspective perspective1;
    User user;
    boolean result = authzManager.authorize(perspective1, user);

Using the fluent API, the same check can be expressed as:

    authzManager.check(perspective1, user)
        .granted(() -> ...)
        .denied(() -> ...);

The security check calls always use the permissions defined in the security policy.

For those interested in these APIs, an entire chapter can be found in Uberfire's documentation.
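The grant all, deny a few resolution described earlier can also be sketched in plain Java. This is a simplified, self-contained illustration, not the real Uberfire API; the class and method names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of an authorization policy: specific entries override generic ones.
public class PolicyCheck {

    // permission key (e.g. "perspective.read" or "perspective.read.Home") -> granted?
    private final Map<String, Boolean> entries = new HashMap<>();

    public PolicyCheck grant(String permission) { entries.put(permission, true);  return this; }
    public PolicyCheck deny(String permission)  { entries.put(permission, false); return this; }

    // Walks from the most specific key up to the generic one; unknown permissions are denied.
    public boolean authorize(String permission) {
        if (entries.containsKey(permission)) {
            return entries.get(permission);
        }
        int dot = permission.lastIndexOf('.');
        return dot > 0 && authorize(permission.substring(0, dot));
    }

    public static void main(String[] args) {
        // "Grant all, deny a few": read all perspectives except one
        PolicyCheck userRole = new PolicyCheck()
                .grant("perspective.read")
                .deny("perspective.read.SecurityPerspective");
        System.out.println(userRole.authorize("perspective.read.Home"));                // true
        System.out.println(userRole.authorize("perspective.read.SecurityPerspective")); // false
    }
}
```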


The features described above will bring even more flexibility to the workbench. Users and groups can be created right from the workbench, new assets like perspectives or projects can be authored and, finally, specific permissions can be granted or denied for those assets.
In the future, as the authoring capabilities improve, more permission types will be added. The ultimate goal is to deliver zero/low-code, very flexible and customizable tooling that allows users to develop, build and deploy business applications in the cloud.

by David Gutiérrez ( at July 29, 2016 06:58 AM

July 28, 2016

Thomas Allweyer: Book with a guide to building process applications with Bonita

This English-language book introduces the development of process applications with the BPM system "Bonita", whose free Community Edition I use myself in teaching and in my BPMS book. The ability to build complete process applications was introduced last year, together with several other new features, in Bonita version 7.

In many BPMS installations, a task list forms the central user interface. It contains all the tasks a user has to perform, which may originate from different processes. A process application, in contrast, has an individually tailored interface; it is not even immediately apparent to users that they are working with a BPMS. As an example, the book develops a travel expense application. The entry point is a web page with an overview of one's own travel requests and their approval status. From here, new travel requests can be submitted and existing ones changed or cancelled, each of which starts a process. Managers additionally see the requests submitted by their staff on the start page and can approve or reject them.

The book walks the reader through the individual steps of developing this process application. First, the web page serving as the start page is created and tested with manually entered sample data. In the following steps, the BPMN model of the travel expense request is built up and successively extended. This is followed by the assignment of tasks to users, the data model, the user dialogs for the individual steps, the formulation of conditions at gateways, and interfaces to external systems, for example for sending e-mails automatically. For the case that a manager forgets to process a request, various escalation steps are built in. In the final stage of the application, several processes work together: the process for cancelling a request, for instance, ensures that the associated request process, which may still be running, is aborted.

The individual steps are described in great detail, so they are easy to follow on the system. If you do not want to implement the example with the Bonita system yourself, however, the book is less worthwhile, since general explanations make up only a small part compared to the step-by-step instructions. In most places it is explained quite well why things are done. Unfortunately, this is not always the case for some parameters and settings of the user interface. Sometimes a piece of code simply has to be typed in without being explained in detail, which makes it harder to transfer the content to your own developments later.

At one point, an extension provided on the book's website, a so-called "REST API Extension", also has to be imported. Unfortunately, it did not work as described in my sample process, possibly because I am using a newer Bonita version than the one used in the book. Such REST API extensions can also only be created in the paid Bonita edition, so it could not be examined more closely. Likewise, an integration with the Google calendar presented in the book can only be used with a paid Google Apps service account.

Despite these limitations, the book is useful for anyone who wants to seriously familiarize themselves with the system, as it complements the various tutorials and documentation provided on the Bonita website with a series of instructive examples for solving various problems.

Christine McKinty, Antoine Mottier:
Designing Efficient BPM Applications: A Process-Based Guide for Beginners
O’Reilly, 2016
The book on amazon

by Thomas Allweyer at July 28, 2016 01:07 PM

July 20, 2016

Thomas Allweyer: Textbook on process management in purchasing and logistics

This book provides a well-founded, process-oriented account of purchasing and logistics. Processes in these areas have many special characteristics, and accordingly there are numerous methods and concepts dedicated specifically to analyzing and designing supply chains. The book presents them in the context of an end-to-end process management approach.

The work consists of six chapters. The introductory chapter covers the fundamental concepts of process management on the one hand and of purchasing and logistics on the other, and closes with a look at how current megatrends such as globalization and resource scarcity affect supply chains. Chapter two presents various process modeling methods. Besides general notations such as BPMN, it focuses on methods with a specific connection to logistics, such as the material flow matrix or value stream mapping. Chapter three deals with process analysis. Here, too, a generally applicable procedure is described, with a particular emphasis on analyzing service quality in logistics and purchasing.

The fourth chapter is about redesigning and improving processes. Lean management methods, the outsourcing of logistics services, and Industry 4.0 are discussed as selected process improvement concepts. For the broad and still rather young topic of Industry 4.0, selected practical examples are described instead of concrete recommendations.

Consequences for the organizational structure are addressed in chapter 5, in particular the design of a process-oriented procurement organization and the development of flexible and resilient supply chains, in which risk management plays a central role. The concluding chapter 6 deals with supply chain controlling, discussing, among other things, the various qualitatively and quantitatively assessable aspects at the operational and strategic controlling levels.

Liebetruth, Th.:
Prozessmanagement in Einkauf und Logistik
Springer 2016
The book on amazon.

by Thomas Allweyer at July 20, 2016 10:16 AM

July 06, 2016

Sandy Kemsley: 10 years on WordPress, 11+ blogging

This popped up in the WordPress Android app the other day: This blog started in March 2005 (and my online journalling goes back to 2000 or so), but I passed through a Moveable Type phase before...

[Content summary only, click through for full article and links]

by sandy at July 06, 2016 02:19 PM

July 05, 2016

Sandy Kemsley: Take Mike Marin’s CMMN survey: learn something and help CMMN research

Mike Marin, who had a hand in creating FileNet’s ECM platform and continued the work at IBM as chief architect on their Case Manager product, is taking a bit of time away from IBM to complete...

[Content summary only, click through for full article and links]

by sandy at July 05, 2016 01:10 PM

Thomas Allweyer: More than just control flow: an integrated method portfolio for executable processes

Process modeling notations such as BPMN are a very good tool for capturing the control flow of business processes. For process automation, however, a number of other aspects matter that are not so easy to model. Examples are the specification of user dialogs, or more complex cases of assigning activities to actors: with BPMN, for instance, you cannot model that a certain task may only be performed by the user who previously executed another task in the same process.

The "Hagenberg Process Modelling Method" comprises methods for capturing such aspects and integrates them with BPMN models. The name goes back to the Austrian town of Hagenberg, where the underlying research was carried out at the local Software Competence Center.

The English-language book is a scientific publication that requires some prior knowledge. It is therefore aimed mainly at researchers and at vendors of modeling tools and BPM systems.

The following methods and method extensions are described:

  • Extension of BPMN tasks with "deontic operators". Colors and additions to the task labels indicate whether tasks are, for example, obligatory, permitted or forbidden, possibly depending on the results of preceding activities. This allows BPMN diagrams to be represented more compactly, since numerous gateways can be omitted.
  • Modeling of actors. Unlike conventional BPMN diagrams, where actors are usually assigned via pools and lanes, the possible actors are noted at the activities. Among other things, this makes it possible to distinguish whether several roles act jointly or alternatively. The roles used are modeled in a separate role diagram. Finally, rules can be formulated to express, for example, that two activities must be performed by different people.
  • Modeling of user interactions. A further diagram type, the workflow chart, is used to specify the user dialogs. It models the forms displayed in the user interface together with the subsequent server actions. Two kinds of server actions are distinguished: immediate actions are executed directly after a form is submitted, while deferred actions are entered into user task lists and are only executed once a user starts them.
    There is some overlap with BPMN diagrams: since deferred actions are at the same time tasks in their own right, they appear both in the workflow chart and in the process diagram.
  • Extended communication via events. Although the BPMN standard includes a great many event types, there are further relevant aspects, such as the lifetime of a trigger or the requirement that users can decide which events the process should react to. For this purpose, additional event properties are defined, and "event pools" are introduced for events that are not assigned to any specific process.

The concepts listed are formally described in the book using Abstract State Machines and illustrated with application examples. Finally, the book describes how these methods can be integrated and used in the development of executable processes. A software platform is required to execute models created with the presented method portfolio; the authors describe the architecture of such an "Enhanced Process Platform" in detail.

Anyone involved in the development of BPM tools and methods should benefit from the book. It discusses many relevant questions that conventional methods cover little or not at all. One might ask whether there are not other, equally important aspects that the Hagenberg method does not cover either, such as the integration of business rules that do not relate to actor assignment, or the definition of measuring points for determining key figures. A comparison of the presented approach with the concepts of CMMN (Case Management Model and Notation) would also be interesting.

For a successful transfer into practice, it would certainly help to make the graphical representations of the methods more intuitive. The diagrams presented are not yet very user-friendly, especially if more business-oriented modelers are to be addressed as well.

Felix Kossak et al.:
Hagenberg Business Process Modelling Method
Springer 2016
The book on amazon.

by Thomas Allweyer at July 05, 2016 07:50 AM

June 29, 2016

Thomas Allweyer: Food for thought for the "Process Revolution"

This English-language e-book by the Australian consultant Craig Reid provides numerous ideas and food for thought for the changes that companies and their processes need today. The main part consists of more than 50 mini-chapters. Each of these two- to three-page chapters picks out one aspect, illustrates it with a practical example, and gives tips on how to tackle the topic in your own company. Some of them cover well-known principles of process orientation, such as reducing redundant checking activities or breaking up functional silos. Above all, however, the focus is on customers and their experience with the company, and so the author warns against structuring and standardizing processes too rigidly if that harms the customer experience.

Reid preaches constant change and an agile approach. Several chapters are also critical of overly detailed process documentation and heavyweight methods; more useful, he argues, are simple documentation means that employees can understand, such as simple flowcharts on wrapping paper.

Ultimately, every process initiative is about creating value for the company. And while your own company is still analyzing detailed process models, the competition may long since have implemented new innovations.

Conclusion: this book does not go into detail, but it is fun to read. It conveys a process-oriented and agile mindset and motivates the reader to tackle one topic or another directly in their own environment.

Craig Reid:
The Process Revolution
The Process Improvement Group 2016
Sign up for the newsletter and download the e-book

by Thomas Allweyer at June 29, 2016 09:00 AM