Planet BPM

October 20, 2014

Thomas Allweyer: Which project management practices really lead to success?

Although numerous project management approaches and methods exist, there has been little systematic investigation into which factors actually determine project success. The BPM laboratory at Hochschule Koblenz is therefore conducting the study "Erfolgsfaktoren im Projektmanagement" (success factors in project management) together with the Deutsche Gesellschaft für Projektmanagement. Everyone with practical project experience is invited to take part in the online survey. Structured questions capture the project context, project type, and practices used, for one successful and one less successful project per respondent. In addition to the study report, participants receive special evaluations and key findings for their own industry, and can win participation in a workshop on agile project management.

The survey is open for participation until November 26.

by Thomas Allweyer at October 20, 2014 08:30 AM

October 16, 2014

Tom Debevoise: New Book: The Microguide to Process and Decision Modeling in BPMN/DMN


The Microguide to Process and Decision Modeling in BPMN/DMN is now available on Amazon.  A little bit about the book: the landscape of process modeling has evolved, as have the best practices. The smartest companies are using decision modeling in combination with process modeling. The principal reason is that decisions and processes are discovered and managed in separate yet interrelated ways.

Decision Model and Notation (DMN) is an evolution of Business Process Model and Notation (BPMN) 2.0 into an even more powerful and capable tool set and the Microguide book covers both specifications. It also focuses on the best practices in decision and process modeling. A number of these best practices have emerged, creating robust, agile, and traceable solutions.  Decision management and decision modeling are critical, allowing for simpler, smarter, and more agile processes. 

A simple decision and gateway controlling an execution path in response to a purchasing decision.

As the figure above shows, the proper use of decision modeling uncovers critical issues that the process must address to comply with the decision. Decision-driven processes act on the directives of decision logic: decision outputs affect the sequence of things that happen, the paths taken, and who should perform the work. Processes provide critical input into decisions, including data for validation and identification of events or process-relevant conditions. The combination of process and decision modeling is a powerful one.

In most business processes, an operational decision is the controlling factor driving the process. This is powerful, as many governments and enterprises focus on minimizing event-response lag because there is often a financial benefit to faster responses. Straight-through processing and automated decision making, not just automated processes, also emphasize the importance of decisions in processes. Developing a decision model in DMN provides a detailed, standardized approach that precisely directs the process and creates a new level of traceability.

Decision modeling can therefore be considered an organizing principle for designing many business processes. Most process modeling in BPMN is accomplished by matching a use case, written or otherwise, with workflow patterns. Process modeling is critical to the creation of a robust and sustainable solution. Without decision modeling, however, such an approach can result in decision logic becoming a sequence of gateways and conditions such that the decision remains hidden and scattered among the process steps.

Without decision modeling, critical decisions, such as how to source a requisition when financial or counter-party risk is unacceptable, or what to offer a customer, are lost to the details of the process. When the time comes to change or improve a decision, a process model in BPMN alone might not meet the need. Providing a notation for modeling decisions separately from processes is the objective of DMN.
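To make the contrast concrete, here is a minimal Python sketch (all names, thresholds, and rules are invented for illustration, not taken from the book): the same sourcing decision first buried in gateway-style conditionals, then extracted into a single decision table.

```python
# Hypothetical example: routing a requisition. Thresholds and outcomes are
# made up; the point is where the decision logic lives, not what it says.

# Embedded in the flow, the decision is scattered across gateway conditions:
def route_requisition_embedded(req):
    if req["amount"] > 100_000:                  # gateway 1
        if req["counterparty_risk"] == "high":   # gateway 2
            return "manual-review"
        return "preferred-supplier"
    return "auto-approve"

# Extracted as a decision table, the same logic sits in one editable place:
SOURCING_RULES = [
    # (max_amount,   risk,    outcome) -- first matching row wins
    (100_000,       "any",   "auto-approve"),
    (float("inf"),  "high",  "manual-review"),
    (float("inf"),  "any",   "preferred-supplier"),
]

def route_requisition(req):
    for max_amount, risk, outcome in SOURCING_RULES:
        if req["amount"] <= max_amount and risk in ("any", req["counterparty_risk"]):
            return outcome
```

Changing the decision now means editing a table row, not re-plumbing gateways, which is essentially the traceability argument made above.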

by Tom Debevoise at October 16, 2014 09:19 PM

October 15, 2014

Sandy Kemsley: AIIM Information Chaos Rescue Mission – Toronto Edition

AIIM is holding a series of ECM-related seminars across North America, and since today’s is practically in my back yard, I decided to check it out. It’s a free seminar so heavily...

[Content summary only, click through for full article and links]

by sandy at October 15, 2014 05:34 PM

Bruce Silver: BPMN Explained – Part 2

Yesterday I tried to explain BPMN to those who don’t know what it is.  OK, they are probably saying, if BPMN is so great, why do I hear these complaints about it?  Yes, that’s a good question.

First, you need to understand exactly who is complaining.  If it’s a legacy tool vendor wedded to their proprietary (“much better!”) notation, well that speaks for itself.  Ditto if it’s a gray-haired process improvement consultant whose idea of a modern tool is a whiteboard that prints.  Which is most of them.  But even if you cross those guys off the list, there are normal end users who complain about it.

One complaint is there are too many shapes and symbols.  Actually, there are only three primary shapes, called flow nodes: activity, the rounded rectangle, denoting an action step in the process; gateway, the diamond, denoting conditional branching and merging in the flow; and event, the circle, denoting either the start or end of a process or subprocess, or possibly the process’s reaction to a signal that something happened.  Just three, far fewer than in a legacy flowcharting notation.  In BPMN, the solid arrow, called sequence flow, must connect at both head and tail to one of these three shape types.
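As a rough illustration of that connection rule, here is a small Python sketch (a made-up data model, not a real BPMN library or the spec’s metamodel) that checks every sequence flow connects two flow nodes:

```python
# Invented toy model: nodes map an id to a shape type, flows are (source, target).
FLOW_NODE_TYPES = {"activity", "gateway", "event"}

def validate_sequence_flows(nodes, flows):
    """Return a list of violations; an empty list means the rule holds."""
    errors = []
    for src, tgt in flows:
        for end, label in ((src, "source"), (tgt, "target")):
            if nodes.get(end) not in FLOW_NODE_TYPES:
                errors.append(f"sequence flow {src}->{tgt}: {label} '{end}' is not a flow node")
    return errors

diagram = {
    "start": "event", "review": "activity",
    "approved?": "gateway", "end": "event",
    "note": "annotation",          # an artifact, not a flow node
}
flows = [("start", "review"), ("review", "approved?"),
         ("approved?", "end"), ("approved?", "note")]   # the last one is invalid

errors = validate_sequence_flows(diagram, flows)
```

Here the flow into the annotation would be flagged: in BPMN, an annotation is attached with an association (dotted line), not a sequence flow.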

The problem is that the detailed behavior of the flow nodes is actually determined by their icons, markers, and border styles.  There are way too many of those, I will readily admit.  Only a small fraction of them are widely used and important to know; the rest you can simply ignore.  When I started my BPMN training many years ago, I identified a basic working set of shapes and symbols called the Level 1 palette, mostly carried over from traditional flowcharting.  The purpose was to eliminate the need to learn useless BPMN vocabulary that would never be used.  When BPMN 2.0 came out 4 years ago, they did a similar thing, officially this time, but for a different purpose.  The so-called Descriptive process modeling conformance class is essentially the Level 1 working set.  Its purpose, from OMG’s standpoint, was to limit the set of shapes and symbols a tool vendor must support in order to claim BPMN support.  So… if you are new to BPMN, just stick to the Level 1/Descriptive working set.  It will handle most everything you are trying to show, and good BPMN tools in fact let you restrict the palette to just those elements.

I sometimes hear the opposite complaint, that BPMN does not have a standard way to visualize important information, like systems, organizational units, task completion times, or resource costs, available in their current process modeling tool.  Actually, many BPMN tools do have ways to include these things, but each in their own tool-specific way.  BPMN just describes the process logic, that is, how the process starts and ends and the order of the steps.  It doesn’t describe the internal details of a task, like its data or user interface, or decision logic, or systems involved, or important simulation parameters.  Its scope is quite limited.  There are some emerging standards for those other things that will eventually link up with BPMN, but they are not yet widely adopted.  Anyway, it’s important to distinguish the information a BPMN tool can support from information that is part of BPMN itself.

Finally, some people don’t like the fact that BPMN has rules.  A tool validating models against those rules might determine, for instance, that the way you’ve been modeling something for years is invalid in BPMN.  You can ignore that, of course, but remember the goal of BPMN is clear communication of the process logic.  A diagram that violates the rules of the specification probably does not do that very well.  Like any new language, BPMN asks that you take a little time to learn it.  It’s actually not that hard.

The post BPMN Explained – Part 2 appeared first on Business Process Watch.

by bruce at October 15, 2014 05:22 PM

October 14, 2014

Bruce Silver: BPMN Explained

On Twitter someone posted to me: “Have you ever seen a short overview of BPMN that makes sense to people who have never heard of it?”  Hmmm… Probably not.  So here is my attempt.

Business Process Modeling Notation, or BPMN, is a process diagramming language.  It describes, in a picture, the steps in a business process from start to end, an essential starting point whether you are simply documenting the process, analyzing it for possible improvement, or defining business requirements for an IT solution to a process problem. Dozens of process diagramming languages have existed since the 1980s at least, so what’s so special about BPMN?

First, BPMN is an open industry standard, under the auspices of the Object Management Group.  It is not owned by a particular tool or consulting company.  A wide variety of tools support it, and the meaning of the business process diagram is independent of the tool used to create it. With BPMN you don’t need to standardize on a single tool for everyone in the organization, since they all share a common process modeling language.

Second, unlike flowcharts created in a tool like Visio or Powerpoint, the meaning of each BPMN shape and symbol is quite precise – it’s defined in a specification – and in principle independent of the personal interpretation of the person who drew it.  I say “in principle” because it is possible to violate the rules of the BPMN specification, just like it is possible to write an English sentence that violates accepted rules of grammar or spelling.  Nothing drastic happens in that case, but the diagram’s effectiveness at communication is decreased.

Third, BPMN is a language shared by business and IT, the first process modeling language able to make that claim.  When BPMN was first developed about 10 years ago, the only available process modeling standards at that time – UML activity diagrams and IDEF, among others – were rejected as “IT standards” that would not be accepted by business users.  To business users, a process diagram looked like a swimlane flowchart, widely used by BPM practitioners but lacking precise definition in a specification.  BPMN adopted the basic look and feel of a swimlane flowchart, and added to it the precision and expressiveness required by IT.  In fact, that precision and expressiveness is sufficient to drive a process automation engine in a BPM Suite (BPMS).  The fact that the visual language used by the business to describe a proposed To-Be process is the same as the language used by developers to build that process in a BPMS has opened up a new era of business-empowered process solutions in which business and IT collaborate closely throughout a faster and more agile process improvement cycle.

Even if you have no intention to create an automated process solution in a BPMS, BPMN diagrams can reveal information critical to process documentation and analysis that is missing in traditional swimlane flowcharts: exactly how the process starts and ends, what each instance of the process represents, how various exceptions are handled, and the interactions between the process and the customer, external service providers, and other processes.  The rules of the BPMN specification do not require these elements, but use of best-practice modeling conventions in conjunction with a structured methodology can ensure they are included.  My book BPMN Method and Style and my BPMessentials training of the same name are based on such an approach.

So yes, there is a cost to adopting BPMN, whether you are moving from casual tooling like Powerpoint or Visio flowcharts or from a powerful but proprietary language like ARIS EPC.  There is a new diagram vocabulary to learn, diagramming rules, as well as the aforementioned conventions and methodology such as Method and Style.  But the benefits of speaking a common process language are tremendous.  The investment in process discovery and analysis is far more than the cost of a tool or the time required to draw the diagrams.  It involves hundreds of man-hours of meetings, information gathering from stakeholders, workshops, and presentations to management.  The process diagram is a distillation of all that time and effort.  If it cannot be shared across the whole project team – business and IT – or to other project teams across the enterprise, now or in the future, you are throwing away much of that investment.  BPMN provides a way to share it, without requiring everyone to standardize on a single tool.

The post BPMN Explained appeared first on Business Process Watch.

by bruce at October 14, 2014 05:24 PM

Thomas Allweyer: Social BPM capabilities of process management tools

The term “Social BPM” is not easy to pin down. Given the spread of social networks and social software in the enterprise, it stands to reason that they could also be very useful for business process management, especially since defining and executing processes almost always requires several participants to collaborate successfully. But what concrete uses are there for newsfeeds, contacts, comment functions, wikis, etc. in process management, and what benefit do they bring?

The authors of this study first worked out the existing potential in the different phases of the process management cycle. In the process identification and modeling phase, Social BPM offers the advantage that many participants can contribute actively: several people can work on a model together, be notified of changes, and leave comments. In process implementation and execution, targeted information on role-based process portals helps, for example; workflow tasks can be fed into internal social networks, and workflows can be started by social media events. In process monitoring and continuous improvement, emerging problems can be communicated quickly to everyone affected, and virtual communities can serve to evolve processes further.

The ten BPM software vendors that took part in this focused study are mostly makers of modeling tools. Although almost all state that they support executing at least approval workflows and the like, the clear emphasis for most lies in modeling and analysis. Accordingly, the Social BPM functions on offer relate mainly to the process identification and modeling phase. While the majority of vendors only began building dedicated social software functionality into their products around 2010, many have long offered collaboration features, such as central repositories for distributed modeling. The term “Collaborative BPM” is often used in this context.

Practically all of the products examined provide process portals, comment and rating functions, newsfeeds and subscriptions, and task management for the activities of the model lifecycle. A few offer simplified modeling, e.g. via tabular representations, which lets employees who have had no modeling training capture their own processes. Integration of wikis and blogs is rather rare, although several vendors have announced future integration of wikis into their modeling platforms.

Beyond these predefined categories, the vendors could name further social functions they provide. The answers range from voting functions through knowledge and idea management to career planning and case management. This wide span shows how diverse the field of Social BPM is.

In most cases the social functions are fully integrated into the process management tool. Several vendors instead, or additionally, offer integration with other platforms, above all Microsoft SharePoint.

One thing that is somewhat surprising when reading the study: the use of social functions during process execution hardly appears at all. That is certainly due in large part to the field of participants, which contains hardly any BPMS vendors focused on execution. On the other hand, the preceding overview study does include a number of BPMS providers among its 28 vendors. One can speculate why almost none of them took part in the Social BPM study: either they have little to offer in this area, or the topic of Social BPM is seen almost exclusively through the lens of collaborative modeling. Yet the potential in process execution is considerably higher than in process modeling, since far more employees execute processes than model them.

Possibly, though, a strict separation is still being made between highly structured processes and collaborative tasks. When a BPMS is used for highly structured processes, social functions are left out, and for collaborative tasks internal social networks may be used, but independently of the BPMS. Since highly structured processes often require collaboration too, and certain coordination tasks in collaborative work could be automated, tighter integration would certainly make sense. The much-discussed concepts of adaptive case management offer useful approaches here as well, though in that area, too, practical adoption still lags behind the discussion.

Jens Drawehn, Oliver Höß:
Business Process Management Tools 2014 – Social BPM.
Fraunhofer Verlag 2014.
Further information and ordering from the IAO

by Thomas Allweyer at October 14, 2014 09:50 AM

Drools & JBPM: Decision Camp - 2014 - What you are missing out on

Here is the Decision Camp 2014 agenda, so you can see what you are missing out on, if you aren't there :)


Day 1 – General Sessions

We will host Best Practices sessions all day, presented by fellow practitioners and technology providers. General sessions will have breakout tracks for rule writers and software architects.

  • 11 am - 12 pm: CTO Panel with Mark Proctor (Red Hat), Dr. Jacob Feldman (OpenRules), and Carlos Serrano-Morales (Sparkling Logic), moderated by James Taylor
  • 1 - 2 pm: "An Intelligence Led Approach to Decision Management in Tax Administration" (Dr. Marcia Gottgtroy, Inland Revenue New Zealand); "Decision Tables as a Programming Tool" (Howard Rogers, RapidGen Software); "Are Business Rules Obsolete?" (Kenny Shi, UBER Technologies)
  • 2 - 3 pm: "Customer Support Process Automation" (Erwin De Ley, iSencia Belgium); "Building Domain-Specific Decision Models" (Dr. Jacob Feldman, OpenRules)
  • 4 - 5 pm: "Explanation-based E-Learning for Business Decision Making and Education" (Benjamin Grosof & Janine Bloomfield, Coherent Knowledge Systems)

Day 2 – Vertical Day (Healthcare and Financial Services)

Davide Sottara is our chair for the Healthcare day.

  • 10 - 11 am: Dr. Davide Sottara, PhD
  • 12 - 1 pm: Lunch & Networking
  • 1 - 2 pm: "Cloud-based CEP in Healthcare" (Mariano Nicolas De Maio, PlugTree); "Analytics for Payment Fraud" (Carlos Serrano-Morales, Sparkling Logic)
  • 4 - 5 pm: Speaker Panel with all speakers

by Mark Proctor at October 14, 2014 08:08 AM

Drools & JBPM: Classic Games Development with Drools

I realised I didn't upload my slides from Decision Camp 2013, where I talked about using games to learn rule-based programming. Sorry about that; here they are, better late than never:
Learning Rule Based Programming Using Games

The talk provides a gentle introduction into rule engines and then covers a number of different games, all of which are available to run from the drools examples project.
  • Number Guess
  • Adventure Game
  • Space Invaders
  • Pong
  • Wumpus World
While there is no video of the presentation I gave, I have made videos for some of these games in the past. Be aware, though, that some of them may be a little out of date compared to the versions in our current examples project.

by Mark Proctor at October 14, 2014 02:33 AM

October 10, 2014

Drools & JBPM: 3 Days until Decision Camp 2014, San Jose (13-15 Oct)

Only 3 days to go until Decision Camp 2014 arrives, a free conference in the San Jose area for business rules and decision management practitioners. The conference runs three concurrent tracks. The full Decision Camp agenda can be found here.

Like last year, RuleML will be participating and presenting, including Dr. Benjamin Grosof. It is a great opportunity to catch up on the latest happenings in the rules standards industry.

Last year I did a games talk, this year I'm doing something a little more technical, to reflect my current research. Here is my title and abstract.
Demystifying Truth Maintenance and Belief Systems
Basic truth maintenance is a common feature, available in many production rule systems, but it is one that is not generally well understood. This talk will start by introducing the mechanics of rule engines and how they are extended for the common TMS implementation. It will discuss the limitations of these systems and introduce justification-based truth maintenance (JTMS) as a way to add contradictions that trigger retractions. This will lead on to defeasible logic, which, while sounding complex, facilitates resolving conflicting rules, premises, and contradictions in a way that follows typical argumentation theory. Finally we will demonstrate how the core of this can be abstracted to allow pluggable beliefs, so that JTMS and defeasible reasoning can be swapped in and out, alongside other systems such as Bayesian belief systems.
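The support-withdrawal core that the abstract refers to can be sketched in a few lines of Python. This is a toy illustration, not the Drools implementation: a fact remains believed only while at least one justification supports it, and withdrawing the last justification retracts it automatically.

```python
# Toy JTMS-style support tracking (invented API, for illustration only).
from collections import defaultdict

class SimpleJTMS:
    def __init__(self):
        self.justifications = defaultdict(set)  # fact -> ids of supporting rules

    def justify(self, fact, rule_id):
        self.justifications[fact].add(rule_id)

    def withdraw(self, fact, rule_id):
        self.justifications[fact].discard(rule_id)
        if not self.justifications[fact]:
            del self.justifications[fact]   # last support gone: fact is retracted

    def believed(self, fact):
        return fact in self.justifications

tms = SimpleJTMS()
tms.justify("flies(tweety)", "rule-birds-fly")
tms.justify("flies(tweety)", "rule-has-wings")
tms.withdraw("flies(tweety)", "rule-birds-fly")
assert tms.believed("flies(tweety)")          # one justification still remains
tms.withdraw("flies(tweety)", "rule-has-wings")
assert not tms.believed("flies(tweety)")      # retracted automatically
```

What JTMS adds on top of this counting, per the abstract, is explicit contradiction handling; defeasible logic then arbitrates between conflicting justifications rather than merely counting them.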

by Mark Proctor at October 10, 2014 05:11 PM

Thomas Allweyer: BizDevs – the new way for business and IT to work together?

The English edition of Volker Stiehl’s book on process-driven applications has recently become available. The author’s remarks in the SAP Community Network, where he describes the background and approach of the book, are worth reading. He separates an application into a business process layer and an implementation layer, both described in BPMN. The processes of both layers are executed and interact with one another, and business and IT share responsibility for the executable model of the business process layer in equal measure.

In his blog post, Stiehl coins the term “BizDevs” for the close cooperation between business and developers that building process-driven applications requires, echoing the now-common term “DevOps” for the close integration of software development and operations. Such closer collaboration would certainly be desirable in any case, and the architecture approach presented in the book could point the way for many BPMS-based applications.

Stiehl, Volker:
Process-Driven Applications with BPMN
Springer 2014
The book at Amazon

Review of the German edition

by Thomas Allweyer at October 10, 2014 07:52 AM

October 06, 2014

Thomas Allweyer: More flexible BPM through graph databases

Guest post by Helmut Heptner. We are experiencing a transition from traditional business process management systems based on relational database systems, which even after conversion and reprogramming would not be up to the demands of the “big data” era, toward systems based on graph databases. The pioneers of this development are popular social network providers such as Facebook, Google+ and Twitter, to name just a few. What they all have in common are large user numbers and a vast number of relationships between users, which can nevertheless be combined into a wide variety of analyses within seconds when needed.

What is a graph database, and how does it differ from classical databases?

A relational database is, simply put, a collection of tables (the relations) whose rows store records. Because of growing data volumes and the ever-increasing number of existing and possible relationships between the data, this model is not ideally suited for many areas, above all as the basis for business process management systems (BPMS). Computations take longer the larger the data volume and the more complex the relationships between the data.

Today’s requirements are better met by graph databases. So-called NoSQL technologies are rapidly gaining popularity. The best-known and largest vendor of this technology is probably Neo Technology with Neo4j, an open-source graph database implemented in Java. The developers themselves describe Neo4j on their website as a transactional database engine that stores data in graphs instead of tables. The information and videos available there are a helpful starting point for a practical introduction to the theory and practice of graph databases.

The best-known example of applying graph databases is Facebook’s “social graph”. This graph exploits relationships between people. The nodes typical of a graph represent people, with each node assigned the person’s name. The edges (the second element of graph databases) represent relationships, characterized among other things by a type such as “likes”, “is friends with”, or “dislikes”. Simple examples of such graphs are family trees with relatives as nodes and parent-child relationships as edges, public transit route maps, IT network structures, or indeed process flows in BPM.
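The node-and-typed-edge model described above can be illustrated in a few lines of plain Python (no database involved; the names and relationships are invented):

```python
# Nodes carry a name; edges carry a relationship type.
people = {1: "Alice", 2: "Bob", 3: "Carol"}
edges = [
    (1, "is friends with", 2),
    (2, "likes", 3),
    (1, "likes", 3),
]

def relationships(person_id, rel_type):
    """Follow all outgoing edges of a given type from one node."""
    return [people[tgt] for src, rel, tgt in edges
            if src == person_id and rel == rel_type]

print(relationships(1, "likes"))   # -> ['Carol']
```

A graph database stores essentially this structure natively and indexes it for traversal, rather than reconstructing it from table joins.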

Emil Eifrem, CEO of Neo Technology, puts it this way: “The world’s most innovative companies, including Google, Facebook, Twitter, Adobe and American Express, have already switched to graph technologies to tackle the challenges of complex data at their core.”

When it comes to large (“big data”), distributed, and unstructured data volumes, as in business process management, graph database systems are usually far superior to traditional database systems.

Advantages: why, and in which scenarios, graph databases trump traditional databases

There are currently three trends in data processing:

  • With the growth in users, systems, and data to be captured, data volumes are rising exponentially. One expression of this development is the buzzword “big data”, established since around 2013.
  • Data no longer sits on a single central system but is often distributed, to ensure redundancy, optimize performance, and manage load. Well-known examples of this development are Amazon, Google, and other cloud providers.
  • Data structures are becoming more complex and more interconnected through the internet, social networks, and open interfaces to data from a wide range of systems.

These trends can no longer be mastered with established database systems. Graph database systems are not just an answer to these challenges; the challenges actually play to their strengths:

  • In contrast to designing a relational database schema, data modeling for a graph is considerably simpler. Essentially it suffices to record business process steps as elements, connect them with arrows, and finally define conditions and properties. A data model created this way can usually be adopted into the database unchanged. You no longer need to be a programmer or database specialist: all participants can understand the model and adapt it to changing requirements without touching the integrity of the graph and its infrastructure.
  • For business processes, the flexible data model of graph databases is considerably more agile than other systems. That lies in the nature of the thing: business processes are modeled as graphs. Decisions based on evolving business-critical data can likewise be represented with dependencies and rules. Modeling business processes as graphs supports agility because process changes can be responded to quickly and repeatably.
  • Graph databases are more performant than competing technologies because they do not recompute relationships at query time but follow them. The reason: relationships are created in the graph database at insert time and are available immediately thereafter. Queries begin at a start node and follow the relationships between nodes. That enables, for example, real-time queries and thus immediate, exact, and useful interactions. Graph database systems are on the rise above all because they simply use existing relations between data instead of computing them at run time.
  • Further strengths of graph databases lie in their design (reliability) and in mathematical and epistemological foundations matured over centuries (data insight).
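The traversal point, following stored relationships instead of recomputing joins, can be sketched in Python (the adjacency structure and process steps are invented for illustration):

```python
# Each node's outgoing edges are materialized up front, as a graph store would
# keep them at insert time; a query then simply walks them (BFS).
from collections import deque

adjacency = {
    "order-received": ["check-credit"],
    "check-credit": ["approve", "reject"],
    "approve": ["ship"],
    "reject": [],
    "ship": [],
}

def reachable(start):
    """All process steps reachable from `start` by following edges."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adjacency[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

In a relational schema, the same question would typically require a recursive join over an edge table; here the cost of each step is just a lookup of already-stored neighbours.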

The advantages for managing business processes

That graph databases offer advantages above all in large enterprises with many complex processes is due chiefly to the natural way business processes map onto graphs and to the flexible design of graphs. Experience shows that business processes are subject to continual change and must be adapted on the fly. With relational and other database models this is not readily possible; the BPMS may have to be laboriously updated by specialists and may not remain available without interruption. With a BPMS based on graph database technology, such as Comindware Tracker, such changes can by contrast be made without interruption in live operation.

A further advantage lies in the adaptability of graph databases to developments within the company. Business processes mature over time: ever more employees are involved, and ever new conditions and dependencies must be taken into account. In a graph database, you simply define new nodes, add transitions with conditions, and define properties, and the underlying business processes are correctly represented.

The main advantage of graph databases is their ability not only to manage the data and make it available for analysis, but also to store the business rules. This relieves the employees involved in the processes of routine work, and the potential of knowledge workers remains available throughout the processes.


While relational database applications can often fall back on familiar ground, developers of graph database applications are breaking new ground. Comindware, vendor of the BPM solution Comindware Tracker, was founded in 2008 and is a young, modern company that addresses the requirements of modern BPM systems and has quickly grown to 70 employees. The founders recognized that most business processes are unstructured or change over time, and that a modern BPM solution must be up to these requirements. Comindware used the graph database Neo4j and, building on it, developed its patented ElasticData technology for its own solutions.


The author, former Acronis managing director Helmut Heptner, has been managing director of Comindware GmbH since March 2012 and is responsible for operations in Central Europe. Comindware is among the pioneers of adaptive business process management and currently employs over 70 people worldwide. Comindware Tracker is used by companies such as Gazprom Avia and a major German carmaker.

by Thomas Allweyer at October 06, 2014 08:38 AM

October 03, 2014

Drools & JBPM: 10 Days until Decision Camp 2014, San Jose (13-15 Oct)

Only 10 days to go until Decision Camp 2014 arrives, a free conference in the San Jose area for business rules and decision management practitioners. The conference is multi-track, with three concurrent tracks. The full Decision Camp agenda can be found here.

Like last year, RuleML will be participating and presenting, including Dr. Benjamin Grosof. This is a great opportunity to catch up on the latest happenings in the rules standards industry.

Last year I did a games talk; this year I'm doing something a little more technical, to reflect my current research. Here are my title and abstract.
Demystifying Truth Maintenance and Belief Systems
Basic truth maintenance is a common feature, available in many production rule systems, but one that is not generally well understood. This talk will start by introducing the mechanics of rule engines and how they are extended for the common TMS implementation. It will discuss the limitations of these systems and introduce justification-based truth maintenance (JTMS) as a way to add contradictions that trigger retractions. This leads on to defeasible logic, which, while it sounds complex, facilitates the resolution of conflicting rules, premises and contradictions in a way that follows typical argumentation theory. Finally we will demonstrate how the core of this can be abstracted to allow pluggable beliefs, so that JTMS and defeasible logic can be swapped in and out, along with other systems such as Bayesian belief systems.
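The core idea behind justification-based truth maintenance can be sketched in a few lines (an illustrative Python sketch, not Drools' actual internals): a logically asserted fact stays believed only while at least one justification still supports it.

```python
# Illustrative JTMS sketch: logical facts are retained only while
# supported by at least one justification.
class BeliefSystem:
    def __init__(self):
        self.support = {}  # fact -> set of justification ids

    def insert_logical(self, fact, justification):
        self.support.setdefault(fact, set()).add(justification)

    def retract_justification(self, fact, justification):
        just = self.support.get(fact, set())
        just.discard(justification)
        if not just:
            # No remaining support: the belief is automatically withdrawn.
            self.support.pop(fact, None)

    def holds(self, fact):
        return fact in self.support

tms = BeliefSystem()
tms.insert_logical("discount", "rule-over-100")
tms.insert_logical("discount", "loyalty-rule")
tms.retract_justification("discount", "rule-over-100")
print(tms.holds("discount"))   # True: still supported by the loyalty rule
tms.retract_justification("discount", "loyalty-rule")
print(tms.holds("discount"))   # False: support gone, belief withdrawn
```

A JTMS extends this counting scheme with explicit contradictions; defeasible logic then adds a way to rank conflicting justifications instead of merely counting them.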

by Mark Proctor at October 03, 2014 04:12 PM

October 02, 2014

Drools & JBPM: Trace output with Drools

Drools 6 includes a trace output that can help you get an idea of what is going on in your system: how often things are executed, and with how much data.

It can also help you understand that Drools 6 now uses a goal-based algorithm, with a linking mechanism to link in rules for evaluation. More details on that here:

The first thing to do is set your slf4j logger to trace mode:
<configuration>
  <appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <!-- %l lowers performance -->
      <!--<pattern>%d [%t] %-5p %l%n %m%n</pattern>-->
      <pattern>%d [%t] %-5p %m%n</pattern>
    </encoder>
  </appender>

  <logger name="org.drools" level="trace"/>

  <root level="info"><!-- TODO We probably want to set default level to warn instead -->
    <appender-ref ref="consoleAppender" />
  </root>
</configuration>
Let's take the shopping example; you can find the Java and DRL files for it here:

Running the example will output a very detailed and long execution log. Initially you'll see objects being inserted, which causes linking. Linking of nodes and rules is explained in the Drools 6 algorithm link. In summary, 1..n nodes link in a segment when objects are inserted.
2014-10-02 02:35:09,009 [main] TRACE Insert [fact$Customer@56bc3fac]
2014-10-02 02:35:09,020 [main] TRACE LinkNode notify=false nmask=1 smask=1 spos=0 rules=

Then 1..n segments link in a rule. When a rule is linked in, it is scheduled on the agenda for evaluation.
2014-10-02 02:35:09,043 [main] TRACE  LinkRule name=Discount removed notification
2014-10-02 02:35:09,043 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,043 [main] TRACE Queue Added 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
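The smask/rmask values in these lines are bitmasks: each segment contributes one bit, and a rule is only queued for evaluation once every segment bit in its mask is set. A simplified sketch of that idea (the class and field names are illustrative, not Drools' actual data structures):

```python
# Simplified sketch of segment-to-rule linking via bitmasks.
class RulePathMemory:
    def __init__(self, name, all_segments_mask):
        self.name = name
        self.all_segments_mask = all_segments_mask  # e.g. 0b11 for 2 segments
        self.linked_mask = 0                        # the "rmask" in the log

    def link_segment(self, segment_bit):
        self.linked_mask |= segment_bit
        # The rule is scheduled on the agenda only when fully linked.
        return self.linked_mask == self.all_segments_mask

rule = RulePathMemory("Purchase notification", 0b11)
print(rule.link_segment(0b01))  # False: rmask=1, not yet fully linked
print(rule.link_segment(0b10))  # True: rmask=3, rule linked, queue it
```

This is why the log shows several LinkSegment lines with growing rmask values before a LinkRule line and the corresponding Queue entry appear.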

When it eventually evaluates a rule, the log indents as the engine visits each node, evaluating from root to tip. Each node will attempt to tell you how much data is being inserted, updated or deleted at that point.
2014-10-02 02:35:09,046 [main] TRACE Rule[name=Apply 10% discount if total purchases is over 100] segments=2 TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE 1 [ AccumulateNode(12) ] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE Segment 1
2014-10-02 02:35:09,047 [main] TRACE 1 [ AccumulateNode(12) ] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE rightTuples TupleSets[insertSize=2, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,056 [main] TRACE 2 [RuleTerminalNode(13): rule=Apply 10% discount if total purchases is over 100] TupleSets[insertSize=1, deleteSize=0, updateSize=0]

You can use this information to see how often rules evaluate, how much linking and unlinking happens, how much data propagates and, more importantly, how much wasted work is done. Here is the full log:
2014-10-02 02:35:08,889 [main] DEBUG Starting Engine in PHREAK mode
2014-10-02 02:35:08,927 [main] TRACE Adding Rule Purchase notification
2014-10-02 02:35:08,929 [main] TRACE Adding Rule Discount removed notification
2014-10-02 02:35:08,931 [main] TRACE Adding Rule Discount awarded notification
2014-10-02 02:35:08,933 [main] TRACE Adding Rule Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,009 [main] TRACE Insert [fact$Customer@56bc3fac]
2014-10-02 02:35:09,020 [main] TRACE LinkNode notify=false nmask=1 smask=1 spos=0 rules=
2014-10-02 02:35:09,020 [main] TRACE LinkSegment smask=2 rmask=2 name=Discount removed notification
2014-10-02 02:35:09,025 [main] TRACE LinkSegment smask=2 rmask=2 name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,028 [main] TRACE LinkNode notify=true nmask=1 smask=1 spos=0 rules=[RuleMem Purchase notification], [RuleMem Discount removed notification], [RuleMem Discount awarded notification], [RuleMem Apply 10% discount if total purchases is over 100]
2014-10-02 02:35:09,028 [main] TRACE LinkSegment smask=1 rmask=1 name=Purchase notification
2014-10-02 02:35:09,028 [main] TRACE LinkSegment smask=1 rmask=3 name=Discount removed notification
2014-10-02 02:35:09,043 [main] TRACE LinkRule name=Discount removed notification
2014-10-02 02:35:09,043 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,043 [main] TRACE Queue Added 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,043 [main] TRACE LinkSegment smask=1 rmask=1 name=Discount awarded notification
2014-10-02 02:35:09,043 [main] TRACE LinkSegment smask=1 rmask=3 name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,043 [main] TRACE LinkRule name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,043 [main] TRACE Queue RuleAgendaItem [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,043 [main] TRACE Queue Added 2 [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,043 [main] TRACE Added Apply 10% discount if total purchases is over 100 to eager evaluation list.
2014-10-02 02:35:09,044 [main] TRACE Insert [fact$Product@df4b72]
2014-10-02 02:35:09,044 [main] TRACE Insert [fact$Product@2ba45490]
2014-10-02 02:35:09,044 [main] TRACE Insert [fact$Purchase@37ff4054]
2014-10-02 02:35:09,045 [main] TRACE BetaNode insert=1 stagedInsertWasEmpty=true
2014-10-02 02:35:09,045 [main] TRACE LinkNode notify=true nmask=1 smask=1 spos=1 rules=[RuleMem Purchase notification]
2014-10-02 02:35:09,045 [main] TRACE LinkSegment smask=2 rmask=3 name=Purchase notification
2014-10-02 02:35:09,045 [main] TRACE LinkRule name=Purchase notification
2014-10-02 02:35:09,046 [main] TRACE Queue RuleAgendaItem [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,046 [main] TRACE Queue Added 1 [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,046 [main] TRACE BetaNode insert=1 stagedInsertWasEmpty=true
2014-10-02 02:35:09,046 [main] TRACE LinkNode notify=true nmask=1 smask=1 spos=1 rules=[RuleMem Apply 10% discount if total purchases is over 100]
2014-10-02 02:35:09,046 [main] TRACE LinkSegment smask=2 rmask=3 name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,046 [main] TRACE LinkRule name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,046 [main] TRACE Added Apply 10% discount if total purchases is over 100 to eager evaluation list.
2014-10-02 02:35:09,046 [main] TRACE Insert [fact$Purchase@894858]
2014-10-02 02:35:09,046 [main] TRACE BetaNode insert=2 stagedInsertWasEmpty=false
2014-10-02 02:35:09,046 [main] TRACE BetaNode insert=2 stagedInsertWasEmpty=false
2014-10-02 02:35:09,046 [main] TRACE Rule[name=Apply 10% discount if total purchases is over 100] segments=2 TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE 1 [ AccumulateNode(12) ] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE Segment 1
2014-10-02 02:35:09,047 [main] TRACE 1 [ AccumulateNode(12) ] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,047 [main] TRACE rightTuples TupleSets[insertSize=2, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,056 [main] TRACE 2 [RuleTerminalNode(13): rule=Apply 10% discount if total purchases is over 100] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE Segment 1
2014-10-02 02:35:09,057 [main] TRACE 2 [RuleTerminalNode(13): rule=Apply 10% discount if total purchases is over 100] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE Rule[name=Apply 10% discount if total purchases is over 100] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE 3 [ AccumulateNode(12) ] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE Rule[name=Purchase notification] segments=2 TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE 4 [JoinNode(5) - [ClassObjectType$Purchase]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,057 [main] TRACE Segment 1
2014-10-02 02:35:09,057 [main] TRACE 4 [JoinNode(5) - [ClassObjectType$Purchase]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,058 [main] TRACE rightTuples TupleSets[insertSize=2, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,058 [main] TRACE 5 [RuleTerminalNode(6): rule=Purchase notification] TupleSets[insertSize=2, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,058 [main] TRACE Segment 1
2014-10-02 02:35:09,058 [main] TRACE 5 [RuleTerminalNode(6): rule=Purchase notification] TupleSets[insertSize=2, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,058 [main] TRACE Fire "Purchase notification"
[[ Purchase notification active=false ] [ [fact$Purchase@37ff4054]
[fact$Customer@56bc3fac] ] ]
Customer mark just purchased shoes
2014-10-02 02:35:09,060 [main] TRACE Fire "Purchase notification"
[[ Purchase notification active=false ] [ [fact$Purchase@894858]
[fact$Customer@56bc3fac] ] ]
Customer mark just purchased hat
2014-10-02 02:35:09,061 [main] TRACE Removing RuleAgendaItem [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,061 [main] TRACE Queue Removed 1 [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,061 [main] TRACE Rule[name=Discount removed notification] segments=2 TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE 6 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE Segment 1
2014-10-02 02:35:09,061 [main] TRACE 6 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE rightTuples TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE 7 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE Segment 1
2014-10-02 02:35:09,061 [main] TRACE 7 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,061 [main] TRACE Fire "Discount removed notification"
[[ Discount removed notification active=false ] [ null
[fact$Customer@56bc3fac] ] ]
Customer mark now has a discount of 0
2014-10-02 02:35:09,063 [main] TRACE Removing RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,063 [main] TRACE Queue Removed 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,063 [main] TRACE Fire "Apply 10% discount if total purchases is over 100"
[[ Apply 10% discount if total purchases is over 100 active=false ] [ [fact 0:6:2063009760:1079902208:6:null:NON_TRAIT:120.0]
[fact$Customer@56bc3fac] ] ]
2014-10-02 02:35:09,071 [main] TRACE Insert [fact$Discount@341a8659]
2014-10-02 02:35:09,071 [main] TRACE LinkSegment smask=2 rmask=3 name=Discount removed notification
2014-10-02 02:35:09,071 [main] TRACE LinkRule name=Discount removed notification
2014-10-02 02:35:09,071 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,071 [main] TRACE Queue Added 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,071 [main] TRACE BetaNode insert=1 stagedInsertWasEmpty=true
2014-10-02 02:35:09,071 [main] TRACE LinkNode notify=true nmask=1 smask=1 spos=1 rules=[RuleMem Discount awarded notification]
2014-10-02 02:35:09,071 [main] TRACE LinkSegment smask=2 rmask=3 name=Discount awarded notification
2014-10-02 02:35:09,071 [main] TRACE LinkRule name=Discount awarded notification
2014-10-02 02:35:09,071 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,071 [main] TRACE Queue Added 3 [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
Customer mark now has a shopping total of 120.0
2014-10-02 02:35:09,071 [main] TRACE Removing RuleAgendaItem [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,071 [main] TRACE Queue Removed 2 [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,071 [main] TRACE Rule[name=Discount removed notification] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,072 [main] TRACE 8 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,072 [main] TRACE Segment 1
2014-10-02 02:35:09,072 [main] TRACE 8 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,072 [main] TRACE rightTuples TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,073 [main] TRACE 9 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,073 [main] TRACE Segment 1
2014-10-02 02:35:09,073 [main] TRACE 9 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,073 [main] TRACE Removing RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,073 [main] TRACE Queue Removed 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,073 [main] TRACE Rule[name=Discount awarded notification] segments=2 TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,073 [main] TRACE 10 [JoinNode(10) - [ClassObjectType$Discount]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,073 [main] TRACE Segment 1
2014-10-02 02:35:09,073 [main] TRACE 10 [JoinNode(10) - [ClassObjectType$Discount]] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,074 [main] TRACE rightTuples TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,074 [main] TRACE 11 [RuleTerminalNode(11): rule=Discount awarded notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,074 [main] TRACE Segment 1
2014-10-02 02:35:09,074 [main] TRACE 11 [RuleTerminalNode(11): rule=Discount awarded notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,074 [main] TRACE Fire "Discount awarded notification"
[[ Discount awarded notification active=false ] [ [fact$Discount@341a8659]
[fact$Customer@56bc3fac] ] ]
Customer mark now has a discount of 10
2014-10-02 02:35:09,074 [main] TRACE Removing RuleAgendaItem [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,074 [main] TRACE Queue Removed 1 [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,074 [main] TRACE Delete [fact$Purchase@894858]
2014-10-02 02:35:09,074 [main] TRACE LinkSegment smask=2 rmask=3 name=Purchase notification
2014-10-02 02:35:09,074 [main] TRACE LinkRule name=Purchase notification
2014-10-02 02:35:09,074 [main] TRACE Queue RuleAgendaItem [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,074 [main] TRACE Queue Added 1 [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,075 [main] TRACE LinkSegment smask=2 rmask=3 name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,075 [main] TRACE LinkRule name=Apply 10% discount if total purchases is over 100
2014-10-02 02:35:09,075 [main] TRACE Queue RuleAgendaItem [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,075 [main] TRACE Queue Added 2 [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,075 [main] TRACE Added Apply 10% discount if total purchases is over 100 to eager evaluation list.
Customer mark has returned the hat
2014-10-02 02:35:09,075 [main] TRACE Rule[name=Apply 10% discount if total purchases is over 100] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE 12 [ AccumulateNode(12) ] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE Segment 1
2014-10-02 02:35:09,075 [main] TRACE 12 [ AccumulateNode(12) ] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE rightTuples TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE 13 [RuleTerminalNode(13): rule=Apply 10% discount if total purchases is over 100] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE Segment 1
2014-10-02 02:35:09,075 [main] TRACE 13 [RuleTerminalNode(13): rule=Apply 10% discount if total purchases is over 100] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,075 [main] TRACE Delete [fact$Discount@341a8659]
2014-10-02 02:35:09,075 [main] TRACE LinkSegment smask=2 rmask=3 name=Discount removed notification
2014-10-02 02:35:09,075 [main] TRACE LinkRule name=Discount removed notification
2014-10-02 02:35:09,075 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,075 [main] TRACE Queue Added 3 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,075 [main] TRACE UnlinkNode notify=true nmask=1 smask=0 spos=1 rules=[RuleMem Discount awarded notification]
2014-10-02 02:35:09,076 [main] TRACE UnlinkSegment smask=2 rmask=1 name=[RuleMem Discount awarded notification]
2014-10-02 02:35:09,076 [main] TRACE UnlinkRule name=Discount awarded notification
2014-10-02 02:35:09,076 [main] TRACE Queue RuleAgendaItem [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,076 [main] TRACE Queue Added 2 [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,076 [main] TRACE Rule[name=Purchase notification] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE 14 [JoinNode(5) - [ClassObjectType$Purchase]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE Segment 1
2014-10-02 02:35:09,076 [main] TRACE 14 [JoinNode(5) - [ClassObjectType$Purchase]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE rightTuples TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE 15 [RuleTerminalNode(6): rule=Purchase notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE Segment 1
2014-10-02 02:35:09,076 [main] TRACE 15 [RuleTerminalNode(6): rule=Purchase notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE Removing RuleAgendaItem [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,076 [main] TRACE Queue Removed 1 [Activation rule=Purchase notification, act#=2, salience=10, tuple=null]
2014-10-02 02:35:09,076 [main] TRACE Rule[name=Discount removed notification] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE 16 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE Segment 1
2014-10-02 02:35:09,076 [main] TRACE 16 [NotNode(8) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,076 [main] TRACE rightTuples TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE 17 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE Segment 1
2014-10-02 02:35:09,077 [main] TRACE 17 [RuleTerminalNode(9): rule=Discount removed notification] TupleSets[insertSize=1, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE Fire "Discount removed notification"
[[ Discount removed notification active=false ] [ null
[fact$Customer@56bc3fac] ] ]
Customer mark now has a discount of 0
2014-10-02 02:35:09,077 [main] TRACE Removing RuleAgendaItem [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,077 [main] TRACE Queue Removed 1 [Activation rule=Discount removed notification, act#=0, salience=0, tuple=null]
2014-10-02 02:35:09,077 [main] TRACE Rule[name=Discount awarded notification] segments=2 TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE 18 [JoinNode(10) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE Segment 1
2014-10-02 02:35:09,077 [main] TRACE 18 [JoinNode(10) - [ClassObjectType$Discount]] TupleSets[insertSize=0, deleteSize=0, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE rightTuples TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE 19 [RuleTerminalNode(11): rule=Discount awarded notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE Segment 1
2014-10-02 02:35:09,077 [main] TRACE 19 [RuleTerminalNode(11): rule=Discount awarded notification] TupleSets[insertSize=0, deleteSize=1, updateSize=0]
2014-10-02 02:35:09,077 [main] TRACE Removing RuleAgendaItem [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,077 [main] TRACE Queue Removed 1 [Activation rule=Discount awarded notification, act#=7, salience=0, tuple=null]
2014-10-02 02:35:09,077 [main] TRACE Removing RuleAgendaItem [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
2014-10-02 02:35:09,077 [main] TRACE Queue Removed 1 [Activation rule=Apply 10% discount if total purchases is over 100, act#=1, salience=0, tuple=null]
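A quick way to summarize a trace like the one above is to count events per rule. A small Python sketch (the regular expression matches the Fire lines in the log format shown above):

```python
import re
from collections import Counter

# Count how often each rule fires in a Drools trace log.
fire_pattern = re.compile(r'TRACE Fire "(?P<rule>[^"]+)"')

def count_firings(log_lines):
    return Counter(m.group("rule") for line in log_lines
                   if (m := fire_pattern.search(line)))

sample = [
    '2014-10-02 02:35:09,058 [main] TRACE Fire "Purchase notification"',
    '2014-10-02 02:35:09,060 [main] TRACE Fire "Purchase notification"',
    '2014-10-02 02:35:09,061 [main] TRACE Fire "Discount removed notification"',
]
counts = count_firings(sample)
print(counts["Purchase notification"])  # 2
```

The same approach works for the LinkRule, UnlinkRule and Queue lines, which makes it easy to spot rules that are repeatedly linked and unlinked without ever firing, i.e. wasted work.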

by Mark Proctor at October 02, 2014 01:56 AM

October 01, 2014

Keith Swenson: Process Mining MOOC on Coursera

Whether you call it process mining or automated process discovery, nobody can deny that this field, which combines big data analytics with business processes, is at the center of an important transformation in the workplace.  Process mining is useful to kickstart the implementation of predefined BPM diagrams, and it is also useful in unpredictable case management to see what has been done and whether it is compliant with all the rules.  What would you give to attend a complete, college level course on process mining?  What if it was free?

What if it was free, and it was being taught by Wil van der Aalst, arguably the foremost expert on workflow and process mining? What if it started next month, and you could attend from anyplace in the world?  Would you sign up?  I would.  And I have.

Prof. van der Aalst from the Technical University of Eindhoven is teaching the course “Process Mining: Data science in Action” on Coursera starting Nov 12.   It is available to everyone everywhere.  It will last six weeks and require about 4-6 hours of work per week.  It is not just an important part of data science; it is data science in action:

Data science is the profession of the future, because organizations that are unable to use (big) data in a smart way will not survive. It is not sufficient to focus on data storage and data analysis. The data scientist also needs to relate data to process analysis. Process mining bridges the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. Process mining seeks the confrontation between event data (i.e., observed behavior) and process models (hand-made or discovered automatically). This technology has become available only recently, but it can be applied to any type of operational processes (organizations and systems). Example applications include: analyzing treatment processes in hospitals, improving customer service processes in a multinational, understanding the browsing behavior of customers using a booking site, analyzing failures of a baggage handling system, and improving the user interface of an X-ray machine. All of these applications have in common that dynamic behavior needs to be related to process models. Hence, we refer to this as “data science in action”.

Many of you have seen one of my many talks on process mining, so you know that I believe this is an important, emerging field, one which Fujitsu has been a part of.  This course will be a chance to get below the surface of what we normally present in a 45-minute webinar, and more than you can get from reading the Process Mining Manifesto.  It is the first major MOOC on process mining.  There are two reasons why this is notable:

  • First of all, BPM is becoming more evidence-based and the MOOC “Process Mining: Data science in Action” provides a concrete starting point for more fact-driven process management. It fits nicely with the development of data science as a new profession. There is a need for “process scientists”, now and in the future.
  • Second, it is interesting to reflect on MOOCs as a new medium to train BPM professionals and to make end-users aware of new BPM technologies. Such online courses allow for much more specialized BPM courses offered to thousands of participants.

I for one am looking forward to it.  Here is a short video to explain:

by kswenson at October 01, 2014 10:48 AM

September 30, 2014

Keith Swenson: BPM Poster

Here it is, the poster on the definition of BPM, with all the terms defined and explained!

This is based on the effort to gain consensus around a single common definition for BPM.  The definition by itself cannot convey the meaning if the terms are not explained.  You have seen this before in my post “One Common Definition for BPM.”  What we have done is put all the information together into a single poster.

Click here to access the PDF of the poster

It looks best printed 36 inches by 24 inches (90cm by 60cm).  Most of us don’t have printers that big.  You can print it in Acrobat across multiple pieces of paper, and tape them together, but that can be a lot of work.  I am looking for a way to allow you to simply order the poster and have it sent to you in a tube.  Once I have found one, I will update the post here.

Or come by the Fujitsu booth and ask for one.

by kswenson at September 30, 2014 11:45 AM

September 29, 2014

Keith Swenson: 3 Innovative Approaches to Process Modeling

In a post titled “Business Etiquette Modeling” I made a plea for modeling business processes such that they naturally deform themselves as needed to accommodate changes.  If we model a fixed process diagram, it is too fragile and can be costly to maintain manually.  While I was at the EDOC conference and the BPM conference, I saw three papers that introduce innovations; they are not completely defined solutions, but they represent solid research on steps in the right direction.  Here is a quick summary of each.

(1) Implementation Framework for Production Case Management: Modeling and Execution

(Andreas Meyer, Nico Herzberg, Mathias Weske of the Hasso Plattner Institute and Frank Puhlmann of Bosch, EDOC 2014 pages 190-199)

This approach is aimed specifically at production case management, which means it supports a knowledge worker who has to decide in real time what to do, although the kinds of things such a worker might do are well known in advance.  The example used is that of a travel agent: we can identify all the various things a travel agent might be able to do, but they might combine these actions in an unlimited variety of ways.  If we draw a fixed diagram, we end up restricting the travel agent unnecessarily.  Think about it: a travel agent might book one hotel one day, book flights the next, book another hotel, then change the flights, then cancel one of the hotel bookings.  It is simply not possible to say that there is a single, simple process that a travel agent will always follow.

Instead of drawing a single diagram, the approach suggested is to draw separate little process snippets for all the things that a travel agent might do.  Here is the interesting part: the same activity might appear in multiple snippets.  At run time the system combines the snippets dynamically based on conditions.  Each task in each snippet is linked to the things that are required before that task can be triggered, so based on the current case instance information, a particular task might or might not appear as needed.  Dynamic instance data determines how the current process is constructed.  Activities have required inputs and produce outputs, which is part of the conditions on whether they are included in a particular instance.

Above are some examples of the process snippets that might be used for a travel agent.   Note that “Create Offer” and “Validate Offer” appear in two different snippets with slightly different conditions.  The ultimate process would be assembled at run time in a way that depends upon the details of the case.  I would have to refer you to the paper for the full details of how this works, but I was impressed by Andreas’ presentation.  I am not sure this is exactly the right approach, but I am sure that we need this kind of research in this direction.
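The data-driven selection of snippets can be sketched roughly as follows (a Python sketch; the activity names and data conditions are illustrative, not the paper's formalism): each activity declares required inputs and produced outputs, and only activities whose preconditions are satisfied by the current case data are offered.

```python
# Rough sketch of data-driven activity enablement: an activity is
# offered only when all of its required data objects exist in the case.
activities = {
    "Book Hotel":     {"requires": {"itinerary"},
                       "produces": {"hotel booking"}},
    "Book Flight":    {"requires": {"itinerary"},
                       "produces": {"flight booking"}},
    "Create Offer":   {"requires": {"hotel booking", "flight booking"},
                       "produces": {"offer"}},
    "Validate Offer": {"requires": {"offer"},
                       "produces": {"validated offer"}},
}

def enabled(case_data):
    return sorted(name for name, a in activities.items()
                  if a["requires"] <= case_data)

case = {"itinerary"}
print(enabled(case))  # ['Book Flight', 'Book Hotel']
case |= {"hotel booking", "flight booking"}
print(enabled(case))  # ['Book Flight', 'Book Hotel', 'Create Offer']
```

No global diagram is ever drawn: the set of offered activities simply changes as the case data grows, which is the gist of assembling the process at run time from snippets.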

(2) Informal Process Essentials

(C. Timurhan Sungur, Tobias Binz, Uwe Breitenbücher, Frank Leymann, University of Stuttgart, EDOC 2014, pages 200-209)

They describe the need to support “informal processes,” which is not exactly what I am looking for.  Informal means “having a relaxed, friendly, or unofficial style, manner, or nature; a style of writing or conversational speech characterized by simple grammatical structures.”  What I am looking for are processes that are well crafted, official, meaningful, accurate, and at the same time responsive to external changes.   Formal/informal is not the same distinction as fixed/adaptive.  However, they do cover some interesting ideas that are relevant.  They specify four properties:

  1. Implicit Business Logic – the logic is not explicit until run time
  2. Different Relationships Among Resources – interrelated sets of individuals are used to accomplish more complex goals
  3. Resource Participation in Multiple Processes – people are not dedicated to a single project.
  4. Changing Resources – dynamic teams assembled as needed.

These properties look a lot like the innovative knowledge worker pattern, so this research is likely to be relevant.  They identify the following requirements to meet the need:

  1. Enactable Informal Process Representation
  2. Resource Relationships Definition
  3. Resource Visibility Definition
  4. Support for Dynamically Changing Resources

It seems that these approaches need to focus more on resources, roles, and relationships, and less on the specific sequences of activities.  Then from that, one should be able to generate the actual process needed for a particular instance.

The tricky part is how to find an expert who can model this.  One of the reasons for drawing a BP diagram is that drawing a diagram simplifies the job of creating the process automation.   Getting to the underlying relationships might be more accurate and adaptive, but it is not simpler.

(3) oBPM – An Opportunistic Approach to Business Process Modeling and Execution

(David Grünert, Elke Brucker-Kley and Thomas Keller, Institute for Business Information Management, Winterthur, Switzerland, BPMS2 Workshop at BPM 2014)

This paper comes the closest to Business Etiquette Modeling, because it specifically addresses the problem of creating a business process with a strict sequence of user tasks.  This top-down approach tends to be over-constrained.  Since this is the BPM and Social Software Workshop, the paper tries to find a way to be more connected to social technology, and to take a more bottom-up approach.  They call it “opportunistic” BPM because the idea is that the actual process flow can be generated after the details of the situation are known.  Such a process can take advantage of opportunities automatically, without needing a process designer to tweak the process every time.

The research has centered on modeling roles, the activities that those roles typically do, and the artifacts that are either generated or consumed.  They leverage an extension of the UML use case modeling notation, and it might look a little like this:

The artifacts (documents, etc.) have a state themselves.  When a particular document enters a particular state, it enables a particular activity for a particular role.  To me this shows a lot of promise.  Upon examination, there are weaknesses to this approach: modeling the state diagram for a document would seem to be a challenge, because the states that a document can be in are intricately tied to the process you want to perform.  Our preconception of the process might overly restrict the state chart, which in turn limits what processes could be generated.   Also, there is a data model that Grünert admitted would have to be designed by a data modeling expert, but perhaps there are a limited number of data models, and maybe they don’t change that often.  Somehow, all of this would have to be discoverable automatically from the working of the knowledge workers in order to eliminate the huge up-front cost of modeling it all explicitly.  Again, I refer you to the actual paper for the details.
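To illustrate the core mechanism, here is a minimal sketch of my own (not the oBPM implementation): a rule table maps an artifact's current state to the activity that becomes available for a given role once that state is reached. All names in this sketch are invented.

```java
import java.util.*;

public class ArtifactDrivenTasks {

    // One enablement rule: when the named artifact is in the named state,
    // the named activity becomes available to the named role.
    record Enablement(String artifact, String state, String role, String activity) {}

    // Activities currently available to a role, given each artifact's state.
    static List<String> availableActivities(List<Enablement> rules,
                                            Map<String, String> artifactStates,
                                            String role) {
        List<String> result = new ArrayList<>();
        for (Enablement e : rules) {
            if (e.role().equals(role) && e.state().equals(artifactStates.get(e.artifact()))) {
                result.add(e.activity());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Enablement> rules = List.of(
            new Enablement("offer", "drafted", "reviewer", "Validate Offer"),
            new Enablement("offer", "validated", "agent", "Send Offer")
        );
        // The offer document has just been drafted, so only the reviewer's
        // "Validate Offer" activity is enabled.
        System.out.println(availableActivities(rules, Map.of("offer", "drafted"), "reviewer"));
    }
}
```

The point is that no activity sequence is modeled anywhere; the ordering emerges from how the artifact states change as activities are performed.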


What this shows is that there is research being done to take process to the next level.  Perhaps a combination of these approaches might leave us with the ultimate solution: a system that can generate process maps on demand that are appropriate for a specific situation.  This would be exactly like your GPS unit, which can generate a route from point A to point B given an underlying map of what is possible.  What we are looking for is a way to map what the underlying role interactions could possibly be, along with a set of rules about what might be appropriate when.  Just as in a GPS, where adding a new highway changes the generated routes, you might add a new rule, and all the existing business processes would automatically change if that new rule applies to the case.  We are not there yet, but this research shows promise.

by kswenson at September 29, 2014 04:48 PM

September 23, 2014

Thomas Allweyer: From the Pyramid to the House – New Edition of the Praxishandbuch BPMN

The fourth edition of the widely used Praxishandbuch BPMN 2.0 by Jakob Freund and Bernd Rücker has recently been published. The main difference from the third edition: the camunda method framework, previously depicted as a pyramid, has been changed and is now visualized as a house. The change was prompted by several misunderstandings that occasionally arose in connection with the pyramid. In it, the level of the technical, i.e. executable, process model was placed below the level of the operational process model. This led many readers to believe that the technical level must necessarily be a refinement of the operational level. They associated this with the expectation that the technical process models always had to be created after the operational models, and that responsibility for these levels was neatly divided between business departments and IT.

These views, however, do not correspond to the authors’ intentions. In the new depiction as a house, the roof still contains the level of the strategic process model. The house itself, however, consists of only one floor, the operational process model. It is divided into a “human process flow” and a “technical process flow”, both of which sit on the same level. The human process flow is carried out by the process participants. The technical process flow is executed by a software system, typically a process engine. In most cases there are close interactions between the human and the technical flow. In agile process development, both flows are developed together, with business and IT experts working closely side by side.

Otherwise, only minor changes have been made to the book. Since the XML-based BPEL standard for executable processes has lost much of its importance, it is now covered only briefly. Finally, a short overview of the open-source platform “camunda BPM”, developed under the authors’ leadership, has been added.

Freund, J.; Rücker, B.:
Praxishandbuch BPMN 2.0. 4th edition.
Hanser 2014
The book at amazon.

by Thomas Allweyer at September 23, 2014 06:36 AM

September 22, 2014

Thanks for an awesome BPMCon 2014

Awesome location, awesome talks and most of all: awesome attendees. This year’s BPMCon was indeed the “schönste BPM-Konferenz” (most beautiful BPM conference) I’ve ever seen. Thank you so much to all who made it happen, including Guido Fischermanns for the moderation, Sandy Kemsley for her keynote about the Zero-Code BPM Myth, all those BPM practitioners who presented their lessons [...]

by Jakob Freund at September 22, 2014 05:46 PM

September 19, 2014

Drools & JBPM: The Birth of Drools Pojo Rules

A few weeks back I blogged about our plans for a clean low-level executable model; you can read about that here.

We now have our first rules working, and you can find the project with unit tests here. None of this requires drools-compiler any more, and allows people to write DSLs without ever going through DRL and heavy compilation stages.

It's far from our eventual plans for the executable model, but it's a good start that fits our existing problem domain. Here is a code snippet from the example in the project above; it uses the classic Fire Alarm example from the documentation.

We plan to build Scala and Clojure DSLs in the near future too, using the same technique as below.

public static class WhenThereIsAFireTurnOnTheSprinkler {
    Variable<Fire> fire = any(Fire.class);
    Variable<Sprinkler> sprinkler = any(Sprinkler.class);

    Object when = when(
        expr(sprinkler, s -> !s.isOn()),
        expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
    );

    public void then(Drools drools, Sprinkler sprinkler) {
        System.out.println("Turn on the sprinkler for room " + sprinkler.getRoom().getName());
    }
}

public static class WhenTheFireIsGoneTurnOffTheSprinkler {
    Variable<Fire> fire = any(Fire.class);
    Variable<Sprinkler> sprinkler = any(Sprinkler.class);

    Object when = when(
        expr(sprinkler, Sprinkler::isOn),
        not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
    );

    public void then(Drools drools, Sprinkler sprinkler) {
        System.out.println("Turn off the sprinkler for room " + sprinkler.getRoom().getName());
    }
}

by Mark Proctor at September 19, 2014 06:03 PM

September 18, 2014

Sandy Kemsley: What’s Next In camunda – Wrapping Up Community Day

We finished the camunda community day with an update from camunda on features coming in 7.2 next month, and the future roadmap. camunda releases the community edition in advance of the commercial...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 04:12 PM

Sandy Kemsley: camunda Community Day technical presentations

The second customer speaker at camunda’s community day was Peter Hachenberger from 1&1 Internet, describing how they use Signavio and camunda BPM to create their Process Platform, which is...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 02:59 PM

Sandy Kemsley: Australia Post at camunda Community Day

I am giving the keynote at camunda’s BPMcon conference tomorrow, and since I arrived in Berlin a couple of days early, camunda invited me to attend their community day today, which is the open...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 11:53 AM

September 17, 2014

Drools & JBPM: Decision Camp is just 1 Month away (SJC 13 Oct)

Decision Camp, San Jose (CA), October 2014, is only one month away, and is free for all attendees who register. Follow the link here for more details on the agenda and registration.

by Mark Proctor at September 17, 2014 02:50 AM

September 16, 2014

Drools & JBPM: Workbench Multi Module Project Structure Support

The upcoming Drools and jBPM community 6.2 release will be adding support for Maven multi-module projects. Walter has prepared a video showing the work in progress. While not shown in this video, multi-module projects will have managed support to assist with automating version updates and releases, and will have full support for multiple version streams across Git branches.

There is no audio, but it's fairly self-explanatory. The video starts by creating a single project, and then shows how the wizard can convert it to a multi-module project. It then proceeds to add and edit modules, also demonstrating how the parent pom information is configured. The video also shows how this works across different repositories without a problem, each with its own project structure page. Repositories can also be unmanaged, which allows for user-created single projects, much as we have now with 6.0 and 6.1, meaning previous repositories will continue to work as they did before.

Don't forget to switch the video to 720p, and watch it full screen. Youtube does not always select that by default, and the video is fuzzy without it.

by Mark Proctor at September 16, 2014 10:25 PM

September 15, 2014

Sandy Kemsley: Survey on Mobile BPM and DM

James Taylor of Decision Management Solutions and I are doing some research into the use and integration of BPM (business process management) and DM (decision management) technology into mobile...

[Content summary only, click through for full article and links]

by sandy at September 15, 2014 04:52 PM

Drools & JBPM: Setting up the Kie Server (6.2.Beta version)

Roger Parkinson did a nice blog on how to set up the Kie Server 6.2.Beta version to play with.

This is still under development (hence Beta) and we are working on improving both setup and features before the final release, but following his blog steps you can easily set up your environment to play with it.

Only one clarification: while the workbench can connect to and manage/provision multiple remote kie-servers, they are designed to work independently, and one can use the REST services exclusively to manage/provision the kie-server. In that case, it is not necessary to use the workbench at all.

Here are a few test cases showing off how to use the client API (a helper wrapper around the REST calls) in case you wanna try:

Thanks Roger!

by Edson Tirelli at September 15, 2014 03:59 PM

Thomas Allweyer: Should one still teach the “classic” BPMS concept?

So far I have received very positive reactions to my new BPMS book. Among other things, however, the question came up whether the classic, process-model-driven BPMS concept, which I explain in the book with many example processes, is still up to date at all. Given the ever-growing share of knowledge workers, shouldn’t one rather look into newer and more flexible approaches, such as Adaptive Case Management (ACM)?

The classic BPMS philosophy must certainly be questioned critically with regard to its suitability for different areas of application. For most weakly structured and knowledge-intensive processes it is indeed not sensible, and usually not even possible, to define the complete flow in advance in the form of a BPMN model. Adaptive Case Management is better suited for such processes. But that does not mean that the conventional BPM approach is completely obsolete. The book is meant to offer a solid introduction to the field. There are a number of reasons why I restricted it to process-model-based BPMS:

  • The vast majority of BPMS available on the market today use the process-model-based approach. There certainly are pure ACM systems, but at least for now they are in the minority. Case management is frequently offered as additional functionality on classic BPM platforms.
  • The classic BPM concept is quite well developed in theory and practice. The corresponding systems have reached a high level of maturity. It is thus an established approach that forms a foundation of this field.
  • ACM, on the other hand, is a rather new approach that is still very much in development. It is therefore difficult to identify fundamentals that will not already be outdated in a few years.
  • Knowing the classic BPM fundamentals helps in understanding ACM and other new approaches. Concepts such as definitions and instances of processes reappear in ACM in the form of case templates and cases. Likewise, one should understand, for example, what the correlation of messages is about; whether a message is assigned to a process instance or to a case does not make much of a difference. Some advantages of the ACM approach only really become apparent when compared with the classic concept, where, for example, workers cannot simply add entirely new working steps during process execution.
  • Even in case handling there are often parts that run as structured processes. The classic BPM concept will therefore probably not be replaced completely. Instead, ACM and BPMS functionality will complement each other.
  • The number of structured and standardized processes is unlikely to decrease in the future. On the one hand, there are more and more completely automated processes, which are necessarily highly structured – at least until highly intelligent and autonomous software agents have prevailed across the board. On the other hand, more and more processes must be scalable in order to be handled efficiently over the internet, and for this they must be highly structured and standardized. When someone orders something from a large internet retailer, nobody first thinks individually about how to fulfill this customer’s wishes; instead, a completely standardized process is executed. It may be that processes with strong employee involvement will increasingly be supported by ACM, while classic process engines will more often be found controlling completely automated processes. But the number of application areas will not shrink.

So anyone who starts out by studying the fundamentals of classic BPMS is definitely on the right track. And the best way to understand them is to try them out yourself. That is why there are numerous example processes accompanying the book, which can be downloaded and executed with the open-source system “Bonita”.

by Thomas Allweyer at September 15, 2014 10:54 AM

September 13, 2014

Keith Swenson: BPM2014 Keynote: Keith Swenson

I was honored to give the keynote on the second day of the BPM2014 conference, and promised to answer questions, so here are the slides and summary.

Slides are at slideshare:

(Slideshare no longer has the ability to put audio together with the slides, so I apologize that the slides alone probably don’t make a lot of sense.  I hope to get invited to present the same talk at another event where they record video.)

Twitter Responses






Nice of you to notice!  The talk went on schedule and as far as I know there was nothing that I forgot to say.




It is a little of both.  There is a tendency for managers of all types, especially less experienced managers, to want to over-constrain the processes.  At the same time, programmers tend to implement restrictions very literally and without any wiggle room.  I don’t think we can let either one off the hook.




This was one of my key points:  if our goal is to make ‘business’ successful, maybe there is more to it than just increasing raw efficiency in terms of reducing expenses.  Maybe an excellent business needs to keep their knowledge workers experienced, and possibly our IT systems should be helping to exercise the knowledge workers.


This tweet got the most favorites and retweets.  I had not realized that this was not clear before, so let me state it here.  I included in the presentation the definition of BPM that was gathered earlier this year.  I mentioned that this was not exactly the definition that I had formerly thought, but the discussion included a broad spectrum of BPM experts, and so I am willing to go along with this definition.

Under this new definition, ANYTHING and EVERYTHING that makes your business processes better is included.  Some of you thought this all along.  Previously, I had subscribed to a different (and wrong) definition of BPM, which was a bit more restrictive, and that is why in the past I have stressed the distinction between BPM and ACM.  However, this new, agreed-upon definition allows a BPM method to have or not have models, to have or not have execution, etc.  So BPM clearly includes ACM, because ACM is also a way of supporting business and processes.  This is the definition that so many have pledged to support, and I can support it as well.

I am still teaching myself to say “Workflow-style BPM” or “traditional-BPM” instead of simply ‘BPM’, and I have not completely mastered that change.




There is no doubt: knowledge work is more satisfying to do.   I spoke to some attendees afterwards who felt I was being ‘unfair’ to the routine workers: they are doing their jobs too, why pick on them just because their job is routine?   I am not sure how to respond to that.  I think most people find routine work dull and boring.  Sure, it is a job, but most people would like to be doing more interesting things, and that generally is knowledge work that depends upon acquired expertise.  In general, automating routine work will allow a typical business to employ more knowledge workers, particularly if its competitors are doing so.  It is somewhat unlikely that all routine workers will switch and become knowledge workers, but some will, and for the most part the transition will occur by hiring exclusively knowledge workers and losing routine workers by attrition.


by kswenson at September 13, 2014 08:00 AM

September 11, 2014

Tom Baeyens: 5 Types Of Cloud Workflow

Last Wednesday, Box Workflow was announced. It was an expected move for them to go higher up the stack as the cost of storage “races very quickly toward zero”.  It made me realize there are actually five different types of workflow solutions available in the cloud.

Box, Salesforce, Netsuite and many others have bolted workflow on top of their products.  In this case workflow is offered as a feature of a product with a different focus.  The advantage is that it is well integrated with the product and available when you already have the product.  The downside can be that the scope is mostly limited to the product.

Another type is BPM as a service (aka cloud-enabled BPM).  BPM as a service is an online service for which you can register an account and use the product online without setting up or maintaining any IT infrastructure.  The cloud poses a different set of challenges and opportunities for BPM.  We at Effektif provide a product that is independent, focused on BPM, and born and raised in the cloud.  In our case, you could say that our on-premise version is actually the afterthought.  Usually it’s the other way round: most cloud-enabled BPM products were created for on-premise use first and have since been tweaked to run in the cloud.  My opinion ‘might’ be a bit biased, but I believe that today’s hybrid enterprise environments are very different from the on-premise-only days.   Ensuring that a BPM solution integrates seamlessly with other cloud services is non-trivial, especially when it needs to integrate just as well with existing on-premise products.

BPM platform as a service (bpmPaaS) is an extension of virtualization.  These are prepackaged images of BPM solutions that can be deployed at a hosting provider.  So you rent a virtual machine with a hosting provider, and you then have a ready-to-go image that you can deploy on that machine to run your BPM engine.  As an example, have a look at Red Hat’s bpmPaaS cartridge.

Amazon Simple Workflow Service is in many ways unique, and a category of its own in my opinion.  It is a developer service that in essence stores the process instance data and takes care of distributed locking of activity instances.  All the rest is up to the user to code.  The business logic in the activities has to be coded.  But what makes Amazon’s workflow really unique is that you can (well... have to) code the logic between the activities yourself as well.  There’s no diagram involved.  So when an activity is completed, your code has to calculate which activities have to be done next.  I think it provides a lot of freedom, but it’s also courageous of them to fight the uphill battle against users’ expectations of a visual workflow diagram builder.
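A "decider" along those lines might look like the following purely conceptual sketch. This uses no real SWF API calls, and the travel-booking workflow is invented: the only point is that the routing logic between activities lives in ordinary code rather than in a diagram.

```java
import java.util.List;
import java.util.Set;

public class Decider {

    // Given the set of activities already completed, compute what to schedule
    // next. The sequencing below (flight, then hotel, then itinerary) is the
    // whole "process model", expressed as plain code.
    static List<String> decide(Set<String> completed) {
        if (!completed.contains("ReserveFlight")) return List.of("ReserveFlight");
        if (!completed.contains("ReserveHotel")) return List.of("ReserveHotel");
        if (!completed.contains("SendItinerary")) return List.of("SendItinerary");
        return List.of(); // nothing left: the workflow is finished
    }

    public static void main(String[] args) {
        System.out.println(decide(Set.of()));                // [ReserveFlight]
        System.out.println(decide(Set.of("ReserveFlight"))); // [ReserveHotel]
    }
}
```

Changing the flow means changing this function, which is exactly the freedom, and the burden, described above.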

Then there are IFTTT and Zapier.  These are in my opinion iconic online services, because they define a new product category.  At the core, they provide an integration service.  Integration has traditionally been one of the most low-level, technical aspects of software automation.  Yet they managed to provide it as an online service, enabling everyone to accomplish significant integrations without IT or developer involvement.  I refer to those services a lot because they have transformed something that was complex into something simple.  That, I believe, is a significant accomplishment.  We at Effektif are on a similar mission.  BPM has been quite technical and complex.  Our mission is also to remove the need for technical expertise so that you can build your own processes.

by Tom Baeyens at September 11, 2014 09:57 AM

September 10, 2014

BPM meets Digital Age – Win the new book „Management by Internet“ by Willms Buhse

Together with eight fearless BPM experts from four different organizations, we went on an exciting journey to bring together the Digital Age and BPM. Supported by Dr. Willms Buhse and his experts from doubleYUU, we have developed a number of possibilities to combine web 2.0 and social media features, as well as digital leadership aspects, with business process management.

Today, we are going to introduce the results of this workshop series in more detail. And please don’t miss the chance to win a copy of the inspiring book “Management by Internet” by Willms Buhse at the end of this article. The book covers a lot of the aspects we combined with BPM and provides practical examples of how to benefit from the Digital Age as a manager.

The overall goal of the workshop series was to increase the acceptance and the benefit of BPM through the implementation of Digital Age elements. Within the first workshop session, we developed more than 70 ideas, which we clustered into six areas of interest for further evaluation: ‘Participation’, ‘Training and Communication’, ‘Feedback and Exchange’, ‘Search Engine’, ‘Process Transparency’, and ‘Mobile Access’.

Based on an evaluation of these six areas by BPM experts from the participating organizations, during the second workshop we developed prototypes for the eleven highest-ranked ideas in an overnight delivery session. Afterwards, these prototypes went through a second evaluation cycle by employees of the participating organizations.

The biggest winners of the evaluation by the employees were the ideas related to the ‘Search Engine’. Obviously, employees expect the search engine of a BPM system to be as fast and precise as Google. But, as we have learned from Willms and his team, it is not at all fair to compare Google with the search engine of a BPM system: Google processes far more search requests that can be analyzed, and it invests an immense amount of money in optimizing its algorithms. Still, employees expect a Google-like search. Thus, we discussed ideas like tagging, result ranking, and previews to push the BPM search engine towards these expectations.

The biggest loser of the evaluation was the “Like” button, which was represented by a “heart” in our prototypes. Taking a closer look at the results, we realized that it probably doesn’t make sense to “like” a process. The result of our discussion was to redesign the button as a “Helpful” button, which users can click to indicate that a process description was helpful for them.

Now we are going to wrap up all the learnings for a more detailed presentation of the results at our conference in November, as well as prepare the prototypes for further evaluation. In addition, we will present detailed insights into the current implementation status of Digital Age BPM at one of the participating organizations. So if you are interested in more details, please meet us at the conference. :-)

To provide even more insights into the Digital Age elements we discussed during the workshops, we are going to raffle a copy of the new “Management by Internet” book by Willms Buhse. So don’t wait and enter the lottery here…

Best regards,

by Mirko Kloppenburg at September 10, 2014 12:48 PM

September 09, 2014

Keith Swenson: Business Etiquette Modeling: a new paradigm for process

The AdaptiveCM 2014 workshop this past Monday provided a very interesting discussion of the state of the art in adaptive case management and other non-workflow-oriented ways of supporting knowledge work. While there I presented, and we discussed, an intriguing new way to think about processes, which I call “Business Etiquette Modeling”.

Processes Emerge from Individual Interaction

The key to this approach is to treat a business process as an epiphenomenon: a secondary effect that results from business interactions, but is not primary to them.  The primary thing that is happening is interactions between people.  If those interactions are tuned properly, business results.

I have found the following video helpful in giving a concrete idea of emergent behavior that we can discuss.  Watch the video, particularly between 0:30 and 1:30.  The behavior of the flock of birds, called murmuration, is the way that the group of birds appears to bunch, expand, and swirl.  The birds themselves have no idea they are doing this.  Take a look (click this link to access the video – strange problem with WordPress at the moment):

The behavior of the flock is analogous to the business that an organization is engaged in.  With regular top-down or outside-in processes, you start with the emergent business behavior that you want to support, and model that directly.  To use the analogy, you would draw descriptions of the bunching, flowing, and swirling of the flock, and from that come up with specific flight paths that individual birds would need to follow to get that overall behavior.  However, that is not how the birds actually do it!

You can simulate this murmuration behavior by endowing individual birds with a few simple rules: match speed with nearby birds, try to stay near the group, and leave enough space to avoid hitting other birds.  Computer simulation using these rules produces flock behavior very similar to the starlings shown in the video.
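Those three rules are essentially the classic "boids" rules, and a single bird's steering step can be sketched in a few lines. This is my own illustrative sketch: the weights and distances are arbitrary choices, not values from any real starling simulation.

```java
import java.util.List;

public class Boids {

    record Vec(double x, double y) {
        Vec add(Vec o) { return new Vec(x + o.x(), y + o.y()); }
        Vec sub(Vec o) { return new Vec(x - o.x(), y - o.y()); }
        Vec scale(double s) { return new Vec(x * s, y * s); }
    }

    record Bird(Vec pos, Vec vel) {}

    // New velocity for one bird, derived only from its local neighbours.
    static Vec steer(Bird self, List<Bird> neighbours) {
        if (neighbours.isEmpty()) return self.vel();
        Vec align = new Vec(0, 0), centre = new Vec(0, 0), avoid = new Vec(0, 0);
        for (Bird b : neighbours) {
            align = align.add(b.vel());            // rule 1: match speed with neighbours
            centre = centre.add(b.pos());          // rule 2: stay near the group
            Vec away = self.pos().sub(b.pos());
            double d2 = away.x() * away.x() + away.y() * away.y();
            if (d2 < 1.0) avoid = avoid.add(away); // rule 3: keep clear of close birds
        }
        double n = neighbours.size();
        align = align.scale(1.0 / n).sub(self.vel()).scale(0.05);
        centre = centre.scale(1.0 / n).sub(self.pos()).scale(0.01);
        return self.vel().add(align).add(centre).add(avoid.scale(0.1));
    }

    public static void main(String[] args) {
        Bird self = new Bird(new Vec(0, 0), new Vec(0, 0));
        List<Bird> flock = List.of(new Bird(new Vec(10, 0), new Vec(1, 0)));
        // The lone neighbour is to the right and moving right, so the
        // resulting steering velocity points right as well.
        System.out.println(steer(self, flock));
    }
}
```

Nothing in this code describes the flock's overall shape; applying `steer` to every bird each tick is what makes the swooping behavior emerge.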


On the left you see the emergent flock behavior, and on the right the rules that produce that, but there is no known way to derive the rules from the flock behavior.  (These rules were found by trial & error experimentation in the simulator.)

The behavior of the birds in a flock emerges from the behaviors of the individual bird interactions — there is no guidance at the flock level.  This is very much like business:  an organization has many individual people interacting, and the business emerges as a result.  Obviously the interaction of people is far more complex than the birds, and business equally more complex than flock behavior, but the analogy holds: business can be modified indirectly by changing the rules of behavior of individuals.

Top-Down Design Runs Into Trouble

Consider the bird flock again, and try to reproduce its behavior the way we would with a typical BPM approach.  In BPM we would define the overall process that is desired, and then determine the steps everyone takes along the way to make that happen.  For the bird flock, that would be like outlining the shape of the flock, stating that the goal is a particular shape and a particular swooping, and then calculating the flight paths of each of the birds in order to get the desired output.  That might seem a daunting task for so many birds, but it is doable.  The result is that you will have a precisely defined flock flying pattern.

This pattern would be very fragile.  If the flock tried to fly where a tree was in the way, some of the pre-calculated bird trajectories would hit the tree.  If there were a hawk in the region, some of the birds would quite likely be captured, because the paths are fixed.  To fix this, you would have to go back to the overall flock design, come up with a shape that avoids the specific tree, or makes a hole for the predator, and then calculate all the bird trajectories again.  The bird flock behavior has become fragile, because any small perturbation in the context requires manually revisiting, and modifying, the overall plan.

With the bottom-up approach, these situations are cleanly handled by adding a couple more rules: avoid trees and other stationary things, and always keep a certain distance from a predator.  By adding those rules, the behavior of the flock becomes stable in the face of those perturbations.  If we design the rules properly, the birds are able to determine their own flight paths.  They do so as they fly, and automatically take into account any need to change the overall flock structure.  Flocks automatically avoid trees, and they automatically make a hole where a predator flies.  Note of course that we can not be 100% sure of what the flock will exactly look like when it is flying, but we do know that it will have the swooping behavior, as well as avoiding trees and predators.
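The rules in question are essentially the classic "boids" rules: separation, alignment, cohesion.  A minimal sketch, with illustrative weights and data layout, not taken from any particular simulator:

```python
# A minimal "boids"-style step: each bird adjusts its velocity using
# only local rules; no flock-level shape is specified anywhere.

def flock_step(birds, sep_w=1.0, align_w=0.1, coh_w=0.01, min_dist=2.0):
    """One tick. Each bird is a dict with 'pos' and 'vel' (x, y) tuples."""
    updated = []
    for b in birds:
        others = [o for o in birds if o is not b]
        if not others:
            updated.append(b)
            continue
        # Rule 1: separation -- steer away from birds that are too close.
        sx = sy = 0.0
        for o in others:
            dx = b['pos'][0] - o['pos'][0]
            dy = b['pos'][1] - o['pos'][1]
            if (dx * dx + dy * dy) ** 0.5 < min_dist:
                sx += dx
                sy += dy
        # Rule 2: alignment -- drift toward the neighbours' average heading.
        ax = sum(o['vel'][0] for o in others) / len(others) - b['vel'][0]
        ay = sum(o['vel'][1] for o in others) / len(others) - b['vel'][1]
        # Rule 3: cohesion -- drift toward the neighbours' centre of mass.
        cx = sum(o['pos'][0] for o in others) / len(others) - b['pos'][0]
        cy = sum(o['pos'][1] for o in others) / len(others) - b['pos'][1]
        vx = b['vel'][0] + sep_w * sx + align_w * ax + coh_w * cx
        vy = b['vel'][1] + sep_w * sy + align_w * ay + coh_w * cy
        updated.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy),
                        'vel': (vx, vy)})
    return updated
```

Adding tree avoidance or predator avoidance would just be one more local term in the same loop; the flock-level behavior is never stated anywhere.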

The problem with modeling the high level epiphenomenon directly is that once you specify the exact flight paths of the birds, the result is very fragile.  Yes, you get a precise overall behavior, but you get only that exact behavior.  When the conditions change, you are stuck, and it is hard to change.  If however you model the micro-level rules, the resulting macro level behavior automatically adapts without any additional work to the new, unanticipated situation.

What is an Etiquette Model?

Etiquette is a term that refers to the rules of interactions between individuals.  Each individual follows their own rules, and if these rules are defined well enough, proper business behavior will emerge.  We can’t call this “Business Rule Modeling” because that already exists, and means something quite different. The term ‘etiquette’ implies that the rules are specifically for guiding the behavior of individuals at the interpersonal level.

The etiquette model defines explicitly how individuals in particular roles interact with others.  There would be a set of tasks that might be performed, as well as conditions for when to perform each task, structured as a kind of heuristic that can be used as needed.  Selection criteria might include specific goals that an individual might have (such as “John is responsible for customer X”) as well as global utilities (such as “try to minimize costs” or “assure that the customer goes away satisfied”).  The set of heuristics is over-constrained, meaning that the individual does not simply follow all the rules, but has to weigh the options and choose the best guess for the specific situation.


For example, a role like “Purchasing Agent” would be fully defined by all the actions that a purchasing agent might take, and the conditions that would be necessary for such a role player to take action.   They might purchase something only when the requesting party presents a properly formed “purchase request” document that carries the proper number of approvals from the right people in the organization.  Defined this way, any number of different business processes might have a “purchase by purchaser” within it, and the rules for purchasing would be consistent across all of them.  If there is a need to make a change to the behavior of the purchaser, those ‘etiquette’ rules could be changed, and as a result all of the processes that involve purchasing would be automatically modified in a consistent way.
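One way to picture such an etiquette is as a list of heuristics, each with a precondition and a utility, from which the role player picks the best applicable one.  Everything here (the class names, the approval threshold) is a hypothetical illustration, not an existing system:

```python
# Hypothetical encoding of an "etiquette" for a role: a list of
# heuristics, each with a precondition and a utility score. The role
# player does not follow every rule -- it picks the best applicable one.

class Heuristic:
    def __init__(self, name, applies, utility):
        self.name = name          # e.g. "purchase"
        self.applies = applies    # situation -> bool (precondition)
        self.utility = utility    # situation -> float (how good here)

def choose_action(heuristics, situation):
    """Return the name of the highest-utility applicable heuristic,
    or None if no precondition holds."""
    candidates = [h for h in heuristics if h.applies(situation)]
    if not candidates:
        return None
    return max(candidates, key=lambda h: h.utility(situation)).name

# Etiquette for a purchasing agent, as in the text: only purchase on a
# properly formed, fully approved purchase request.
purchasing_agent = [
    Heuristic("purchase",
              applies=lambda s: s.get("request_well_formed") and
                                s.get("approvals", 0) >= s.get("required_approvals", 2),
              utility=lambda s: 1.0),
    Heuristic("return_for_approval",
              applies=lambda s: s.get("request_well_formed"),
              utility=lambda s: 0.5),
]
```

Any process that involves purchasing would consult the same `purchasing_agent` etiquette, so changing one heuristic changes every such process consistently.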

Isn’t this the Functional Orientation that BPM avoids?

The answer is yes and no.   Yes, it is modeling very fine grained behavior with a set of heuristics that tell what one individual will do to the exclusion of all others.  There is a real danger that the rules for one role might be defined in such a way as to throw a tremendous burden on everyone else in the organization.  This could decrease the overall efficiency of the organization.  We can not optimize one role’s etiquette rules to the exclusion of all other roles; we need to consider how the resulting end-to-end business process appears.

Given the heuristics and guidelines for all the individuals that will be involved in a process, it is possible to simulate what the resulting business processes will be.  Using predictive analytics, we can estimate the efficiency of that process, and particular waste points can be identified.  This can be used to modify the etiquette of the individual participants so that overloaded individuals do slightly fewer things, and underloaded individuals do a bit more, and so that the overall end-to-end process is optimized.

The result is that you achieve the goals of BPM: you are engaged in a practice of continually improving your business processes.  But you do so without directly dictating the form of the process!  You dictate how individuals interact, and the business process naturally emerges from that.

Is this Worth the Trouble?

The amazing result of this approach is that the resulting business process is anti-fragile!   When a perturbation appears in the organization, the business processes can automatically, and instantly, change to adapt to the situation.  A simple example is a heuristic for one role to pick up some tasks from another role, if that other role is overloaded.  Normally it is more efficient for Role X to do that task, but if because of an accident, several of the people who normally play Role X end up in the hospital for a few weeks, the business process automatically, and instantly, adjusts to the new configuration, without any involvement of a process designer or anyone.
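The rebalancing heuristic described above can be sketched in a few lines; the role names and the overload threshold are invented for illustration:

```python
# Sketch of the overload heuristic: a task normally done by Role X is
# picked up by Role Y only when X's queue has grown past a threshold.
# Role names and the threshold are illustrative.

def assign_task(task, queues, preferred="X", backup="Y", overload=5):
    """Route a task to the preferred role unless its queue has reached
    the overload threshold, in which case the backup role takes it."""
    role = preferred if len(queues[preferred]) < overload else backup
    queues[role].append(task)
    return role
```

If several Role X players suddenly drop out, their queue grows, and routing shifts to Role Y automatically, with no process designer involved.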

Consider a sales example.  There can be multiple heuristics for closing a deal: one that explores all possible product configurations to identify the ideal match with the customer and maximizes revenue for the company, and another heuristic that gets things approximately right but closes very quickly.  As you get closer to the end of the month, the priority to close business in the month might shift from the more accurate heuristic, to the quick-and-dirty heuristic in order to get business into that month’s accounting results.  These kinds of adaptations are incredibly hard to model using the standard workflow diagram type approach.

The Amazon Example

Wil van der Aalst in his keynote at EDOC 2014 reminded me of a situation that happened to me recently with some orders from Amazon.  On one day I ordered two books and one window sticker from Amazon.  On the next day, I remembered about another book, and ordered that.  The result was that a few days later I received all three books in a single shipment, and the window sticker came a week after that separately.  The first order was broken into two parts for shipping, and then the second order was combined together with part of the first order.

This is actually very hard to model using BPMN.  You can make a BPMN process of a particular item, such as a book, which starts by being ordered and is ultimately shipped, but the treatment of the order, by splitting when necessary and combining when necessary, will not appear in the BPMN diagram.  It is hard (or impossible) to include the idea to “optimize shipping costs” in a process that represents the behavior of only a single item of the purchase.

When you model the Business Etiquette of the particular roles, it is very easy to include a heuristic to split an order into parts when the parts are coming from different vendors.  Not every order is split up.  There are guidelines for when to use this heuristic that dictate when it should and should not be done.   Same for the shipper, who might have a heuristic to combine shipments if they are close enough together, and then shipping costs can be reduced.
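These two heuristics, splitting by vendor and merging nearby shipments, can be sketched as follows; the field names and the merge window are invented, and Amazon's real logic is certainly more involved:

```python
# Illustrative shipper heuristics from the Amazon example: orders are
# split per vendor, then pending shipments to the same address within
# a time window are merged to save shipping cost.
from collections import defaultdict

def split_by_vendor(order):
    """Split one order's items into per-vendor part-shipments."""
    parts = defaultdict(list)
    for item in order["items"]:
        parts[item["vendor"]].append(item["name"])
    return [{"vendor": v, "items": i, "address": order["address"],
             "day": order["day"]} for v, i in parts.items()]

def combine_shipments(parts, window=3):
    """Merge part-shipments with the same vendor and address whose
    order days are within `window` days of each other."""
    merged = []
    for p in sorted(parts, key=lambda p: p["day"]):
        for m in merged:
            if (m["vendor"] == p["vendor"] and m["address"] == p["address"]
                    and p["day"] - m["day"] <= window):
                m["items"].extend(p["items"])
                break
        else:
            merged.append(dict(p, items=list(p["items"])))
    return merged
```

Run on two orders placed a day apart (two books plus a sticker, then one more book), this reproduces the story above: one merged shipment of three books, one separate shipment for the sticker.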

This approach allows for supporting things like the Kanban method which constrains the number of instances that can be in a particular step at a time.  BPMN has no way to express these kinds of constraints that cross multiple processes.
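A Kanban-style work-in-progress limit is a constraint across all instances, which is why it does not fit into a single-instance BPMN diagram.  A minimal sketch of such a shared constraint:

```python
# Sketch of a Kanban-style constraint: a step accepts a new instance
# only while its work-in-progress count, across ALL process instances,
# is under the limit.

class KanbanStep:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.in_progress = set()

    def try_pull(self, instance_id):
        """Pull an instance into this step if WIP allows; return success."""
        if len(self.in_progress) >= self.wip_limit:
            return False
        self.in_progress.add(instance_id)
        return True

    def complete(self, instance_id):
        """Finish an instance, freeing a WIP slot for the next pull."""
        self.in_progress.discard(instance_id)
```

The limit lives in the step, shared by every instance that passes through it, rather than in any one instance's flow.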


Let’s discuss this approach.  My rather cursory search did not turn up any research on this approach of representing business processes by representing the interactions between individual roles in the organization, although on Monday at the BPM conference I saw a good paper called “Opportunistic Business Process Modeling” which was a start in this direction.  I will make links to research projects if I find some.

This approach also works well for adaptive case management.  The heuristics and guidelines can be used within a case to guide the behavior of the case manager and other participants.  If this is done, then even though you can not predict the course of a single instance, you can use predictive analytics to approximate the handling of future cases.  This technique might be a new tool in the BPM toolkit.

by kswenson at September 09, 2014 04:45 AM

September 05, 2014

Keith Swenson: Final Keynote EDOC 2014: Barbara Weber

Barbara Weber is a professor at University of Innsbruck in Austria.  Next year she will be hosting the BPM 2015 conference at that location.  She gave a talk on how they are studying the difficulties of process modeling.   My notes follow:

Most process model research focuses on the end product of process modeling. Studies have shown that a surprisingly large number, from 10% to 50% of existing models, have errors.  Generally process models are created and then the quality of the final model is measured, in terms of model complexity, model notation, and secondary notation, measuring accuracy, speed, and mental effort.   Other studies take collections of industrial models, measure size, control-flow complexity and other metrics, and look for errors like deadlocks and livelocks.

The standard process modeling lifecycle is (1) elicitation, and then (2) formalization. Good communication skills are needed in the first part; the second part requires skills in a particular notation. She calls this PPM (the process of process modeling). Understanding this better would help both practice and teaching. It can be captured from a couple of different perspectives.

1) logging of modeling interactions
2) tracking of eye movement
3) video and audio
4) biofeedback collecting heart rate etc.

The Nautilus Project focused on logging the modeling environment. The Cheetah Experimental Platform (CEP) guides modelers through sessions, and it also records the entire session and plays it back later.  The resulting events can be imported into a process mining tool to analyze the process of process modeling.  She showed some details of the log file that is captured.

Logging at the fine grained level was not going anywhere, because the result was looking like a spaghetti diagram.  They broke the formalization stage into five phases:  

  • Problem understanding: what the problem is, what has to be modeled, what notation to use
  • Method finding: how to map the things into the modeling notation
  • Modeling: actually doing the drawing on the canvas
  • Reconciliation: improving the understandability of the model, through factoring, layout, and typographic clues, all of which make maintenance easier
  • Validation: searching for quality issues, comparing external and internal representations; syntactic, semantic, and pragmatic quality issues

They validated this with users doing “think aloud” work.  They could then map the different kinds of events to these phases.  For example, creating elements belongs to the modeling phase, while moving and editing existing elements is more often the reconciliation phase.  She showed two charts from two users: one spent a lot of time in problem understanding and then built quickly; the other proceeded quite a bit more slowly, adding and removing things over time.
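The event-to-phase mapping can be pictured as a simple classifier over the modeling log; the event names below are invented, not Cheetah's actual schema:

```python
# Illustrative classifier mapping modeling-tool events to the five
# PPM phases described above. The event names are made up; the real
# Cheetah log schema will differ.

PHASE_OF_EVENT = {
    "read_description": "problem understanding",
    "select_notation":  "method finding",
    "create_node":      "modeling",
    "create_edge":      "modeling",
    "move_node":        "reconciliation",
    "rename_node":      "reconciliation",
    "relayout":         "reconciliation",
    "check_syntax":     "validation",
    "compare_to_text":  "validation",
}

def phase_profile(events):
    """Count how many logged events fall into each phase."""
    profile = {}
    for e in events:
        phase = PHASE_OF_EVENT.get(e, "unknown")
        profile[phase] = profile.get(phase, 0) + 1
    return profile
```

Comparing such profiles across users is, roughly, what the per-user charts showed: one heavy in problem understanding, the other spread across modeling and reconciliation.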

Looking at different users, they found (unsurprisingly) that less experienced users take a lot more time in the ‘problem understanding’ phase.  In ‘method finding’ they found that people with a lot of domain knowledge were significantly more effective.  At the end there are long understanding phases that occur around the clean up.  They did not look at ‘working memory capacity’ as a factor, even though it is well known that this is a factor in most kinds of modeling.  

The second project, “Modeling Mind”, took a look at eye movements and other biofeedback while modeling.  These additional events in the log will add more dimensions of analysis.  With eye tracking you measure the number of fixations and the mean fixation duration, and then define areas of interest (modeling canvas, text description, etc.).  They found that eye trace patterns matched well to the phases of modeling.  During initial understanding, people spend a lot of time on the text description with quick glances elsewhere.  During the building of the model, naturally they look at the canvas and the tool bar.  During reconciliation there is a lot of looking from model to text and back.

What they would then like is to get a continuous measure of mental effort.  That would give an indication of when people are working hard, and when that changes.  These might give some important clues.  Nothing available at the moment to make this easy, but they are trying to capture this.  For example, maybe measuring the size of the pupil.  Heart rate variability is another way to approximate this.

Conclusion: it is not sufficient to look only at the results of process modeling — the process maps that result — but we really need to look at the process of process modeling: what people are actually doing at the time, and how they accomplish the outcome.  This is the important thing you need to know in order to build better modeling environments, better notations and tools, and ultimately increase the quality of process models.  This might also produce a way to detect errors that are being made during the modeling, and possibly ways to avoid those errors.

Note that today there was no discussion of the elicitation phase (process discovery), but that is an area of study they are pursuing as well.

The tool they use (Cheetah) is open source, so there are opportunities for others to become involved.


Can the modeling tool simulate a complete modeling environment?  Some of the advanced tools check at run time and don’t allow certain syntactic errors.  Can you simulate this? –  The editor models BPMN, and there is significant ability to configure the way it interacts with the user.

Sometimes it is unclear what is the model, and what is the description of the model.  Is this kept clearly separated in your studies?  Do we need more effort to distinguish these more in modelers?  – We consider that modeling consists of everything including understanding what you have to do, sense making, and then the drawing of the model.

This is similar to cognitive modeling.  Have you considered using brain imaging techniques?  – we will probably explore that.  There is a student now starting to look at these things. We need to think carefully whether the subject is important enough for such a large investment.

Have you considered making small variations in the description, for example a tricky keyword, and seeing how this affects the task?  – We did do one study where we had the same, slightly modified requirements to model.  These can have a large effect.

Starting from a greenfield scenario, right?  What about using these for studying process improvement on existing models? – There has been a little study of this.  The same approach should work well.  It would definitely be interesting to do more work on this.


by kswenson at September 05, 2014 08:15 AM

Thomas Allweyer: BPM in Practice discusses ACM, the Internet of Things, and more

“Enterprise BPM 2.0, Adaptive Case Management and the Internet of Things – how does it all fit together?” asks Dirk Slama, author of the recommendable book “Enterprise BPM“, in his keynote at the workshop “BPM in Practice” on October 9 in Hamburg. Adaptive Case Management and its practical application are then picked up and explored in depth by several speakers in the parallel tracks. Further topics include the validation of process models in scenarios, process mining, cross-tool and cross-organization collaboration, decision management, and the practical path from model to automation in 45 minutes.

The full program and a registration form can be found here.

by Thomas Allweyer at September 05, 2014 08:08 AM

September 04, 2014

Jakob Freund: New edition: Praxishandbuch BPMN 2.0

The newest edition is available in stores now – for example at Amazon. Unfortunately, as always with Amazon, all the reviews of the previous edition are lost, meaning we are starting again from zero. So if anyone has the time and inclination to share their opinion of the book there (again), we would be more [...]

by Jakob Freund at September 04, 2014 08:13 AM

September 03, 2014

Keith Swenson: Opening Keynote EDOC 2014: Wil van der Aalst

Wil van der Aalst, the foremost expert in workflow and process mining, spoke this morning on the overlap between data science and business process, and showed how process mining is the super glue between them.  What follows are the notes I made at the event.

Data science is a rapidly growing field.  As evidence he mentioned that Philips currently has 80 openings for data scientists, and plans to hire 50 more every year over the next few years.  That is probably a lot more than computer scientists.   Four main questions for data science:

  • what happened?
  • why did it happen?
  • what will happen in the future?
  • what is the best that could happen?

These are the fundamental questions of data science, and they are incredibly important. A good data scientist is not just a computer scientist, not just a statistician, not just a database expert, but a combination of nine or ten different subjects.

People talk about Big Data, and usually move on to MapReduce, Hadoop, etc.  But this is not the key: he calls that “Big Blah Blah”.  Process is the important subject.  The reason for mining data is to improve the organization or the service it provides.  For example, improve the functioning of a hospital by examining data, or improve the use of X-ray machines.

Process mining breaks out into four fields: process model analysis; data mining, which focuses on the data without consideration of the process; performance questions about how well the process is running; and compliance: how many times the process is being done correctly or incorrectly.

He showed an example of a mined process.  It seems that ProM will output SVG animations that can be played back later showing the flow of tokens through the process.  He talked about the slider in ProM that increases or decreases the complexity of the displayed diagram, by selecting or unselecting differing amounts of the unusual traces.  They also show particular instances of a process using red dashed lines placed on top of the normal process in blue solid lines.  He reminded everyone that the diagrams were not modeled, but mined directly from the data without human input.

Process mining is quite a bit more appealing to business people than pure process modeling because it has real performance measures in it.  IT people are also interested because the analytic output relates to real-world situations.  Process mining works at design time, but it also works at run time.  You can mine the processes from event streams as they are being created.

There will be more and more data in the future to mine.  Internet of Things: your shaving device will be connected to the internet.  Even the baby's teething ring will be connected, so that parents will know when the baby is getting teeth.

He showed an ER diagram of key process mining concepts.  Mentioned specifically the XES event format.

Can you mine SAP?  Yes, but a typical SAP installation has tens of thousands of tables.  You need to understand the data model, and you need to scope and select the data for mining.  This is a challenge.  You need to flatten the event data into a nice log table, with case id (instance id), event id, timestamp, activity name, and other attributes.  This produces a flat model without complicated relationships.  Very seldom do people look at more complicated models with many-to-many relationships, and this remains one of the key challenges.

He gave an example of booking tickets for a concert venue.  It is easy to extract the events that occurred.  The hard part is to understand what questions you want to ask about the events.  The first choice is to decide what the process instance id is among all the things going on.  If the process is the lifecycle of a ticket, that would be one choice.  For the lifecycle of the seat you get a different process model, and for the lifecycle of a booking yet another process is generated.  If we focus on the lifecycle of a ticket, then process mining is complicated by the fact that multiple tickets may share the same booking, and the same set of payments.  What if a band cancels a concert?  That would affect many tickets and many bookings.
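The effect of choosing a case notion can be shown with a toy event log: grouping the same events by ticket or by booking yields different traces, and hence a different mined process (the field names are invented):

```python
# Sketch of how the chosen case notion determines the mined process:
# the same raw events grouped by different ids give different traces.
# Field names are invented for illustration.

def traces_by(events, case_key):
    """Group events into traces (activity sequences) by the chosen
    case id, ordering events by timestamp."""
    traces = {}
    for e in sorted(events, key=lambda e: e["timestamp"]):
        traces.setdefault(e[case_key], []).append(e["activity"])
    return traces
```

With two tickets sharing one booking, grouping by `"ticket"` yields two short traces, while grouping by `"booking"` yields one longer trace: same events, different process.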

Another classical example is Amazon, where you might look at order lines, orders, and/or deliveries.  I can order 2 books today, 3 more tomorrow, and they may come in 4 different shipments spread over the next few weeks.  Try to draw a process model of this using BPMN?  Very difficult.  You need to think clearly about this before you start drawing pictures.

Data quality problems.  There may be missing data, incorrect data, imprecise data, and additional irrelevant data.  He gave examples of these for process instances (cases), events, and many other attributes.  So, in summary, there are three main challenges: finding the data, flattening the data, and data quality problems.

He gave 12 guidelines for logging (G4L) so that systems are designed to capture high quality information in the first place, so that big data might be able to make use of these later.

Process mining and conformance checking are trying to say something about the real process, but all you can see is “examples” of existing processes.  There is a difference between examples and the real process.  We can not know what the real process is when we have not seen all possible examples.  If you look at hospital data, there may be one patient who was 80 years old, drunk, and had a problem.  This example may or may not say something about how other people are handled.

  • True Positives: traces possible in the model, and also possible in the real process
  • True Negatives: not possible in the model, and not found in real life
  • False Positives: traces that are possible in the model, but can not (or did not) happen in reality
  • False Negatives: traces not possible in the model, but happen in real life.

Showed a Venn diagram of this.  Try to apply precision metrics to process mining, but you can’t do much.  Your process log only contains a fraction of what is really possible.  From this sample, you can look at what matches the model or not, and that gives you some measure of the log file, but not necessarily reality.  An event log will never say “this can not happen.”  You only see positive examples.  If you look at a sample of university students, MOST students will follow a unique path.  If you look at hospital patients, most will follow a unique path.  Hard then to talk about the fraction that fits a particular process.  Consider a silicon wafer test machine: you have one trace with 50,000 events.  No two traces will match exactly with this number of events.
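A naive replay-based fitness measure illustrates the "positive examples only" point: it can only tell you how many observed traces fit the model, never what can not happen.  This sketch represents the model simply as a set of allowed traces, which real conformance checking does not do:

```python
# Naive replay-based fitness sketch: the fraction of logged traces
# that the model (here: just a set of allowed traces) can reproduce.
# Real conformance checking aligns traces step by step; this only
# illustrates the "positive examples only" limitation from the text.

def trace_fitness(log, model_traces):
    """Share of observed traces that are possible in the model
    (true positives over all observed traces)."""
    if not log:
        return 0.0
    fitting = sum(1 for t in log if tuple(t) in model_traces)
    return fitting / len(log)
```

Note the asymmetry: a trace missing from the log tells you nothing, since an event log never says "this can not happen."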

You are never interested in making a model that fits 100% of the event log.  If you had a model that contained all possible traces, it would not be very useful.  He used an analogy of the four forces on an airplane: lift, drag, gravity, thrust.  Lift = fitness (ability to explain the observed behavior), gravity = simplicity (Occam’s Razor), thrust = generalization (avoid over-fitting), and drag = precision (avoid under-fitting).  Different situations require differing amounts of each.

Everything so far has been about one process.  How then do you go to multiple processes?  If we look at the study behavior of Dutch students and international students, we might find that the behavior of Dutch students is usually different from that of international students.  Comparative process mining allows you to mine parts of the process, and show the two processes side by side.  We are interested in differences in performance, and differences in conformance.  This leads to the notion of a process cube, with dimensions of time, department, location, amount, gender, level, priority, etc.  You can query the database, extract with a particular filter, and generate the process, but this is tedious.  The solution is to put everything in a process cube, and then apply process mining on slices of the cube.  For example, a car rental agency looking at three different offices, three different time periods, and three different types of customers.  He gave a real example of building permits in different Dutch municipalities.
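Slicing a process cube can be sketched as filtering the event set on dimension values before mining the slice; the dimension names here are invented for illustration:

```python
# Sketch of slicing a "process cube": select the events matching some
# dimension values, then mine (or just inspect) the slice. Dimension
# names are invented; a real cube would also support roll-up and dice.

def slice_cube(events, **dims):
    """Return only the events whose attributes match every given
    dimension value (one slice of the cube)."""
    return [e for e in events
            if all(e.get(k) == v for k, v in dims.items())]
```

In the car rental example, `slice_cube(events, office="A", customer="gold")` would select one cell for side-by-side comparison with other offices or customer types.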

He records all his lectures, and the students can watch the lectures off-line.  There is a lot of interesting data, because they know which parts of the lectures students watch multiple times.  Students can control the speed of playback, so they can look at which parts students typically play faster.  They are correlating this with grades at the end of the course.  They can compare students from different origins and see how they compare.  Standard OLAP techniques do not generally work here because we are dealing with events.  He showed a model of students who passed versus students who failed.  For students who passed, the most likely first event is “watch lecture 1”.  For the students that failed, the most likely first event is “take an exam” (only after failing do they go back and watch the lectures).

In conclusion: many of these things are mature enough to use in an industrial situation.  But there are many challenges mentioned.  There is a MOOC on Coursera on Process Mining this fall.  There are 3000 registered students, and it will start in October.


In many years at SAP I have not seen a lot of reflection on past decisions.  Is this really going to be used?  – SAP is not designed well to capture events.  If you go to a hospital, things are much easier to mine, even if the systems are built ad-hoc.  Also, there is a lack of maturity on process mining.  You really need to be trained, and you need to see it work.

Philosophically, does the nature of the process really matter?  – It is crucial that you isolate your notion of a process instance.  Once you have identified the process you have in mind, the process mining will work well.  But there is a broad spectrum of process types.  There are spaghetti processes, and lasagna processes.  A lasagna process is fairly structured, and process mining of the overall process is not interesting, because people already know it.  Instead you want to look at bottlenecks.  For spaghetti processes every trace is unique, and the value comes from an aggregate overview of the process and the exceptions.

Is the case management metaphor more valuable than a process management metaphor?  This is an illustration that the classical workflow metaphor is too narrow.  The problem is that there are in reality many-to-many relationships, but when we go to the model we have to simplify.  It is quite important for this community to bridge this gap.  This is probably the main reason that process modeling formats have not become standard. It is too simple.  For example, using the course data, there is a model of the student, and a completely different model of the course, coming from the exact same data.

About real-time event detection: how do you construct a sliding window of events to mine?  How does mining relate to complex event processing?  – Event correlation: how to translate lower level things into higher level things.  Generating a model is extremely fast, so this can be done nearly in real time.  Map-reduce could be used to distribute some of the processing.  On the other hand, conformance checking is extremely expensive.  The complexity of that problem remains an issue.  We are developing online variants of process mining, which no longer require storing the entire event log.

What about end users?  Model driven engineering … it is possible to incorporate end users into engineering.  How far are we away from involving end users into process mining?  There will probably be different types of end users.  First type will be data scientists to do the analysis of the data and get the competitive advantage.  Once educated, data scientists will have no problem leveraging process mining.  There are other kinds of users, that can be involved in varying degrees.  For example, use the map of Germany as a metaphor.  Some people are very interested in a map, but most people casually look and don’t worry about it.  But, if you project data on the map, then a lot more people are interested.  The same with process maps: put information that is relevant to people on it, and people will become more interested and more involved.


by kswenson at September 03, 2014 08:54 AM

September 02, 2014

Drools & JBPM: Activity Insight coming in Drools & jBPM 6.2

The next Drools and jBPM 6.2 release will include new Activity pages that provide insight into projects. Early versions of both features should be ready to test drive in the upcoming Beta2 release, at the end of next week.

The first Activity page captures events and publishes them as timelines, as a sort of social activities system - which was previously blogged about in detail here.  Notice it also now does user profiles. This allows events such as "new repository" or "file edited" to be captured, indexed and filtered to be displayed in custom user dashboards. It will come with a number of out-of-the-box filters, but should be user extensible over time.


We have a video here, using an old CSS and layout. The aim is to allow for user configurable dashboards, for different activity types.

We have also added GIT repository charting for contributors, using the DashBuilder project. There is a short video showing this in action here.


by Mark Proctor at September 02, 2014 08:38 PM

September 01, 2014

Thomas Allweyer: Book review: Process quality depends on the interplay with IT systems

The title may be somewhat misleading: this English-language book is not about the mutual alignment of business processes and information systems in general, but specifically about the effects on process quality. In the first part, the author develops a reference model for comprehensively describing all the different aspects that make up the quality of business processes. The Business Process Quality Reference Model (BPQRM) is based on a standard for the quality of software products. The definitions of the quality attributes used there were terminologically adapted to the domain of business processes. One can certainly question whether business processes have entirely different quality attributes than software products. However, a case study from a university hospital shows that the model can be applied well in practice. Based on this quality model, a questionnaire for assessing a process was developed; it was used successfully for weak-point analysis and met with high acceptance among the process experts involved.

If one wants to integrate comprehensive quality information into a business process model, one faces the problem that there are very many different quality attributes, which cannot all be displayed at the same time. The book therefore presents an extension for BPMN and other process modeling notations in which each activity receives icons for the categories of quality attributes used. If, for example, attributes from the categories "maturity" and "availability" are defined for an activity, two icons are displayed for it. Clicking on an icon shows the associated quality attributes and their values in a properties window.

In the second part of the book, a simulation study examines the influence that the interplay of business processes and IT systems has on overall process quality. Looking at processes and IT systems in isolation is less meaningful, since the quality of an otherwise optimized process can be impaired, for instance, by the poor availability of an IT system. As an example, the study examines the quality characteristic "performance", which lends itself particularly well to simulation. A case study shows that the author's method for the integrated simulation of processes and IT systems allows a better prediction of performance than conventional approaches; in particular, it predicts strong load fluctuations better.

The book grew out of a dissertation, so the writing style is quite academic, and some longer passages that engage with related work are unlikely to interest practice-oriented readers. Yet the methods developed by the author are of considerable practical relevance and provide valuable impulses for the further development of process quality and simulation.

Robert Heinrich:
Aligning Business Processes and Information Systems
New Approaches to Continuous Quality Engineering
Springer 2014.
The book at Amazon.

by Thomas Allweyer at September 01, 2014 02:31 PM

August 27, 2014

Thomas Allweyer: Survey "Status Quo Prozessmanagement" launched

At regular intervals, the study "Status Quo Prozessmanagement" examines how the various facets of process management are implemented in practice and which trends can be identified. This time the survey is being conducted jointly by BPM&O and BearingPoint. All participants in the current study will receive the detailed results at the end of the year.
To the survey: Status Quo Prozessmanagement.

by Thomas Allweyer at August 27, 2014 08:08 AM

August 26, 2014

Drools & JBPM: Pluggable Knowledge with Custom Assemblers, Weavers and Runtimes

As part of the Bayesian work I've refactored much of KIE to have clean extension points. I wanted to make sure that all the working parts for a Bayesian system could be built without adding any code to the existing core.

So now each knowledge type can have its own package, assembler, weaver and runtime. Knowledge is no longer added directly into KiePackage, but instead into an encapsulated knowledge package for that domain, which is then added to KiePackage. The assembler stage is used when parsing and assembling the knowledge definitions, the weaving stage weaves those definitions into an existing KieBase, and the runtime encapsulates and provides the runtime for the knowledge.

drools-beliefs contains the Bayesian integration and is a good starting point for seeing how this works:

For this to work you add a META-INF/kie.conf file, and it will be discovered and made available:

The file uses the MVEL syntax and specifies one or more services:
'assemblers' : [ new org.drools.beliefs.bayes.assembler.BayesAssemblerService() ],
'weavers' : [ new org.drools.beliefs.bayes.weaver.BayesWeaverService() ],
'runtimes' : [ new org.drools.beliefs.bayes.runtime.BayesRuntimeService() ]
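The registration-and-lookup pattern behind this can be sketched in plain Java. This is a toy illustration only; the names here (ServiceRegistry, AssemblerService, and so on) are invented for the sketch and are not the actual Drools interfaces:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pluggable-service idea (not actual Drools API):
// each knowledge domain registers an assembler, a weaver and a runtime,
// and the engine looks them up by service interface.
public class PluggableServicesSketch {

    interface AssemblerService { String assemble(String resource); }
    interface WeaverService { String weave(String pkg); }
    interface RuntimeService { String name(); }

    // A registry keyed by service interface, populated at discovery time
    // (e.g. from a conf file on the classpath).
    static class ServiceRegistry {
        private final Map<Class<?>, Object> services = new HashMap<>();

        <T> void register(Class<T> type, T impl) { services.put(type, impl); }

        <T> T lookup(Class<T> type) { return type.cast(services.get(type)); }
    }

    // A toy "Bayes" plugin contributing all three stages.
    static class BayesAssembler implements AssemblerService {
        public String assemble(String resource) { return "assembled:" + resource; }
    }
    static class BayesWeaver implements WeaverService {
        public String weave(String pkg) { return "woven:" + pkg; }
    }
    static class BayesRuntime implements RuntimeService {
        public String name() { return "bayes-runtime"; }
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register(AssemblerService.class, new BayesAssembler());
        registry.register(WeaverService.class, new BayesWeaver());
        registry.register(RuntimeService.class, new BayesRuntime());

        // The engine core never references the Bayes classes directly.
        System.out.println(registry.lookup(AssemblerService.class).assemble("Garden.xmlbif"));
        System.out.println(registry.lookup(RuntimeService.class).name());
    }
}
```

The point of keying the registry by service interface is that the core engine can discover and use a domain's services without compiling against any Bayes-specific classes.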

Github links to the package and service implementations:
Bayes Package
Assembler Service
Weaver Service
Runtime Service

Here is a quick unit test showing things working end to end; notice how the runtime can be looked up and accessed. The test uses the old API, but it will work fine with the declarative kmodule.xml approach too. The only bit that is still hard-coded is ResourceType.BAYES, as ResourceTypes is an enum; we will probably refactor that to be a standard class instead, so that it's not hard-coded.

The code to lookup the runtime:
StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();
BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);

The unit test:
KnowledgeBuilder kbuilder = new KnowledgeBuilderImpl();
kbuilder.add( ResourceFactory.newClassPathResource("Garden.xmlbif", AssemblerTest.class), ResourceType.BAYES );

KnowledgeBase kbase = getKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );

StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();

BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);
BayesInstance instance = bayesRuntime.getInstance( Garden.class );
assertNotNull( instance );

jBPM is already refactored out from core and compiler, although it uses its own interfaces for this. We plan to port the existing jBPM approach to this one, and eventually all the Drools functionality will be done this way too. This will create a clean KIE core and compiler, where rules, processes, Bayes, or any other user knowledge type are all added as plugins.

A community member is also already working on a new type declaration system that will utilise these extensions. Here is an example of what this new type system will look like:

by Mark Proctor at August 26, 2014 11:55 PM

August 25, 2014

Drools & JBPM: Drools - Bayesian Belief Network Integration Part 4

This follows my earlier Part 3 posting in May.

I have integrated the Bayesian System into the Truth Maintenance System, with a first end to end test. It's still very raw, but it demonstrates how the TMS can be used to provide evidence via logical insertions. 

The BBN variables are mapped to fields on the Garden class. Evidence is applied as a logical insert, using a property reference, indicating it is evidence for the variable mapped to that property. If there is conflicting evidence for the same field, the fact becomes undecided.
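As a toy illustration of that last rule (my own sketch, not the actual Drools belief-system code), each piece of evidence can be modeled as a weight per state; two pieces of evidence conflict when no state is supported by both, and the fact then becomes undecided:

```java
// Toy sketch (not Drools code) of how conflicting evidence can make a
// Bayesian fact "undecided": each piece of evidence assigns a weight to
// the states {true, false}; evidence conflicts when no state has
// non-zero support in both pieces.
public class EvidenceConflictSketch {

    // Returns true if the two evidence vectors contradict each other,
    // i.e. no state is jointly supported.
    static boolean conflicts(double[] a, double[] b) {
        for (int state = 0; state < a.length; state++) {
            if (a[state] > 0.0 && b[state] > 0.0) {
                return false; // at least one state is jointly supported
            }
        }
        return true;
    }

    public static void main(String[] args) {
        double[] sprinklerOn  = {1.0, 0.0}; // evidence for the first state only
        double[] alsoOn       = {1.0, 0.0}; // same direction: no conflict
        double[] sprinklerOff = {0.0, 1.0}; // opposite direction: conflict

        System.out.println(conflicts(sprinklerOn, alsoOn));       // false
        System.out.println(conflicts(sprinklerOn, sprinklerOff)); // true
    }
}
```

In the test below, rule2 and rule3 both insert {1.0, 0.0} for 'sprinkler' (no conflict), while rule4 inserts {0.0, 1.0}, which conflicts and leaves the fact undecided.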

The rules are added via a String, while the BBN is added from a file. This code uses the new pluggable knowledge types, which allow pluggable parsers, builders and runtimes. This is how the Bayesian support is added cleanly, without touching the core, but I'll blog about that another time.

String drlString = "package org.drools.bayes; " +
"import " + Garden.class.getCanonicalName() + "; \n" +
"import " + PropertyReference.class.getCanonicalName() + "; \n" +
"global " + BayesBeliefFactory.class.getCanonicalName() + " bsFactory; \n" +
"dialect 'mvel'; \n" +
" " +
"rule rule1 when " +
" String( this == 'rule1') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule 1\"); \n" +
" insertLogical( new PropertyReference(g, 'cloudy'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +

"rule rule2 when " +
" String( this == 'rule2') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule2\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +

"rule rule3 when " +
" String( this == 'rule3') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule3\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +

"rule rule4 when " +
" String( this == 'rule4') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule4\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {0.0,1.0} ) ); \n " +
"end " +

KnowledgeBuilder kBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kBuilder.add( ResourceFactory.newByteArrayResource(drlString.getBytes()),
ResourceType.DRL );
kBuilder.add( ResourceFactory.newClassPathResource("Garden.xmlbif", AssemblerTest.class), ResourceType.BAYES );

KnowledgeBase kBase = KnowledgeBaseFactory.newKnowledgeBase();
kBase.addKnowledgePackages( kBuilder.getKnowledgePackages() );

StatefulKnowledgeSession ksession = kBase.newStatefulKnowledgeSession();

NamedEntryPoint ep = (NamedEntryPoint) ksession.getEntryPoint(EntryPointId.DEFAULT.getEntryPointId());

BayesBeliefSystem bayesBeliefSystem = new BayesBeliefSystem( ep, ep.getTruthMaintenanceSystem());

BayesBeliefFactoryImpl bayesBeliefValueFactory = new BayesBeliefFactoryImpl(bayesBeliefSystem);

ksession.setGlobal( "bsFactory", bayesBeliefValueFactory);

BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);
BayesInstance<Garden> instance = bayesRuntime.createInstance(Garden.class);
assertNotNull( instance );

Garden garden = instance.marginalize();
assertTrue( garden.isWetGrass() );

FactHandle fh = ksession.insert( garden );
FactHandle fh1 = ksession.insert( "rule1" );
instance.globalUpdate(); // rule1 has added evidence, update the bayes network
garden = instance.marginalize();
assertTrue(garden.isWetGrass()); // grass was wet before rule1 and continues to be wet

FactHandle fh2 = ksession.insert( "rule2" ); // applies 2 logical insertions
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // new evidence means grass is no longer wet

FactHandle fh3 = ksession.insert( "rule3" ); // adds an additional support for the sprinkler, belief set of 2
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // nothing has changed

FactHandle fh4 = ksession.insert( "rule4" ); // rule4 introduces a conflict, and the BayesFact becomes undecided

try {
instance.globalUpdate(); // (reconstructed) attempt to update the network while evidence is in conflict
fail( "The BayesFact is undecided, it should throw an exception, as it cannot be updated." );
} catch ( Exception e ) {
// expected: the update fails while the evidence is conflicting
}

ksession.delete( fh4 ); // the conflict is resolved, so it should be decided again
garden = instance.marginalize();
assertFalse(garden.isWetGrass() );// back to grass is not wet

ksession.delete( fh2 ); // takes the sprinkler belief set back to 1
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // still grass is not wet

ksession.delete( fh3 ); // no sprinkler support now
garden = instance.marginalize();
assertTrue(garden.isWetGrass()); // grass is wet again

by Mark Proctor at August 25, 2014 04:04 AM

August 24, 2014

Keith Swenson: Collective Adaptive Systems (CAS)

The BPM 2014 conference, Sept 7-12, has been moved from Israel to Eindhoven, Holland (because of unrest in the Middle East), and I will be giving a keynote on Wednesday, Sept 10.  There will be an interesting workshop on Business Processes in Collective Adaptive Systems (BPCAS’14) on Monday, associated with a group called FoCAS (Fundamentals of Collective Adaptive Systems).

What is a Collective Adaptive System?

Also sometimes called “Adaptive Collective Systems,” they are described as “heterogeneous collections of autonomous task-oriented systems that cooperate on common goals forming a collective system.”  While this is wide open to interpretation, a key point is that the units are assumed to be (potentially) autonomous.  I think this is a more natural way of looking at human organizations, which form automatically from humans who are themselves quite complex and autonomous.

FoCAS describes its purpose:  “The socio-technical fabric of our society more and more depends on systems that are constructed as a collective of heterogeneous components and that are tightly entangled with humans and social structures. Their components increasingly need to be able to evolve, collaborate and function as a part of an artificial society.”

Nature – A strong orientation toward the way biological systems work.  Natural systems are referenced frequently as the researchers try to tease out the essential capabilities behind the workings of ecosystems, cellular systems, herd dynamics, etc.  I particularly like the non-machine, non-Taylorist approach.

Automated or Facilitated? – There are a mix of approaches.  Some of the research seems oriented toward facilitating humans in an organization, and some is toward replacing humans with automated, yet flexible, systems.

Non-Uniform – Another thing I like about this approach is they do not assume that there is a single uniform process system.  So much of BPM research assumes that all actors will interact with a single process.  This approach assumes from the beginning that there will be many diverse components interacting in complex ways.  Diversity is the important ingredient for stability in the face of unexpected changes.

FoCAS offers a free book that gives an overview of the field:  “Adaptive Collective Systems: Herding Black Sheep”, 75 pages that cover the need and the various approaches they are trying.

Research Projects

These projects (all in Europe) are associated with FoCAS:

  • Allow Ensembles – human-oriented pervasive business processes.  Processes are defined as flows, but it is expected that in real life the process will need to be changed.  There is no single system, but rather a large number of separate systems, with different goals at different levels: individual and collective.  Non-functional requirements are called “utilities” (e.g. reduce smog, increase efficiency).  Processes are defined in cells, and cells collaborate.  The example given is a travel scenario for two people that has to be adapted; clearly the person involved is able to modify the route, although it is not clear whether they want to make this ‘automatic’ or not.  Supply chain is another example.
  • Assisi | bf – A project to interact with collections of animals (or presumably humans) in order to influence behavior.  Examples were bees and fish.  Ultimately this is aimed at influencing human “swarm intelligence.”  They compare their work to Google, Wikipedia, Facebook, and Twitter.
  • CASSTING – Stands for Collective Adaptive Systems (CAS) and Synthesis With Non-Zero-Sum Games (STING).  They use a game-theory approach to evolve the correct independent units.
  • DIVERSIFY – The goal is to learn how biodiversity emerges in ecosystems. These systems are plastic and able to adapt to many kinds of changes.  This is quite different from the software we use today, which is usually picked from one of a small number of variants, and is therefore fragile.  If systems can be made more diverse, they might be more robust.  One challenge is the required overlap between math/statistics, computer science, and biology.  What is the nature of software that supports diversity?  Simply scrambling code will not work.
  • QUANTICOL – Quantitative modeling of collective adaptive systems, made of components which have state and communicate with other components.  They are looking at the smart grid. Edinburgh has a bus system which reports positions every 30 seconds; they are looking at how to adapt to emerging roadwork or traffic patterns, and whether traffic lights can be tweaked to optimize the system.  This looks to me like ‘automatic’ adaptation without explicit ways for people to manipulate the system.
  • Smart Society – The key to making something robust is diversity.  The project addresses ethics, trust, and reputation, and the large semantic gap between human systems and computer systems.
  • Swarm organ – Machines and technology are quite fragile, while biology can do amazing things, like self-healing.  The project studies morphogenesis: how cells form organs, the multiple strategies by which this might happen, and why one strategy would be used rather than another.  The idea is that you might make self-organizing systems that form themselves into the systems that we use.


Robust systems will need to be designed in this way, with many collaborating yet diverse systems, each advocating different goals.  People need to be part of these systems and must interact fluidly with them.  The collective adaptive systems approach is a distinctly non-Taylorist one that is worth watching.

by kswenson at August 24, 2014 10:10 AM

August 23, 2014

Sandy Kemsley: Moving Hosts

I’m moving hosts for this blog this weekend; if you can’t reach the site, try clearing your cache or just waiting a while for the new DNS to propagate. Update: all done. If you see anything weird,...

[Content summary only, click through for full article and links]

by sandy at August 23, 2014 04:04 PM

August 21, 2014

Thomas Allweyer: Fraunhofer IAO: First of several BPM tool studies provides a market overview

The Stuttgart-based Fraunhofer Institute for Industrial Engineering (IAO) has announced no fewer than four market studies on BPM tools. The first, a general market overview, is already available; it is to be complemented over the course of the year by studies on social BPM, compliance in business processes, and the monitoring of business processes. The market overview covers a total of 28 vendors with 27 tools, and the field of participants is quite heterogeneous: it ranges from simple modeling tools to comprehensive BPM suites including process execution and monitoring. A pure process mining tool is also represented.

The information collected from the vendors via an online questionnaire largely concerns the vendors themselves and their terms and conditions. Only a few questions were asked about actual functionality; here the reader is referred to the forthcoming studies on specific topics. We do learn that almost all of the tools examined have an integrated repository, and that BPMN is by far the most widely used notation. In general, the study's authors see a trend toward more comprehensive tools that support all phases of the process life cycle.

The study first gives an introduction to process management and current developments, then summarizes the main results of the market overview. More details on the products and vendors can be found in the individual profiles: for each vendor, the answers to the online questionnaire are printed, along with four pages of self-presentation.

The market overview can be downloaded here.

by Thomas Allweyer at August 21, 2014 10:25 AM

August 19, 2014

Drools & JBPM: Drools Mailing List migration to Google Groups

Drools community member,

The Drools team are moving the rules-users and rules-dev lists to Google Groups. This will allow users to have combined email and web access to the group.
New Forum Information : (click link to view)

The rules-users mailing list has become high volume, and it seems natural to split the group into those asking for help with setup, configuration, installation, and administration, and those asking for help with authoring and executing rules. For this reason rules-users will be split into two groups: drools-setup and drools-usage.

Drools Setup -!forum/drools-setup (click link to subscribe)
Drools Usage -!forum/drools-usage (click link to subscribe)

The rules-dev mailing list will move to drools-development. 

Drools Development -!forum/drools-development (click link to subscribe)

Google Groups limits the number of invitations, so we were unable to send invitations. For this reason you will need to manually subscribe. 

The Drools Team

by Mark Proctor at August 19, 2014 03:38 PM

August 15, 2014

Drools & JBPM: Drools Execution Server demo (6.2.0.Beta1)

As some of you know already, we are introducing a new Drools Execution Server in version 6.2.0.

I prepared a quick video demo showing what we have done so far (version 6.2.0.Beta1). Make sure you select "Settings -> 720p" and watch it in full screen.

by Edson Tirelli at August 15, 2014 12:53 AM

August 12, 2014

Thomas Allweyer: Open innovation – process improvements in radiology sought

Via the open innovation platform of "Medical Valley", the medical technology cluster based in the Nuremberg region, experts are sought who can contribute to solving concrete problems in medical technology. A current call for proposals on process optimization in radiology shows that these need not always be purely technical solutions.

Inefficient information exchange between patients, referring physicians, and radiology centers often leads to long lead times and high costs. Proposals are therefore sought for improving the entire process of patient examination with imaging procedures; they should, in particular, also consider suitable IT support. Submissions are possible until September 29, 2014 via the Medical Valley platform.

by Thomas Allweyer at August 12, 2014 08:30 AM

August 11, 2014

Drools & JBPM: JUDCon 2014 Brazil: Call for Papers

The International JBoss Users and Developer Conference, the premier JBoss developer event "By Developers, For Developers," is pleased to announce that the call for papers for JUDCon: 2014 Brazil, which will be held in São Paulo on September 26th, is now open! Got something to say? Say it at JUDCon: 2014 Brazil! The call for papers closes at 5 PM on August 22nd, 2014, São Paulo time, and selected speakers will be notified by August 29th, so don't delay!

by Edson Tirelli at August 11, 2014 01:17 PM

August 06, 2014

Thomas Allweyer: Congress on process management in the financial industry

Process managers from banks and insurance companies will meet in Wiesbaden from October 27 to 29 for the "PEX Process Excellence Finance". How do you achieve process excellence and agility in an ever more heavily regulated environment? This question is likely to occupy many of the participants. Numerous practice talks by speakers from renowned financial institutions will provide ample material for discussion.

For example, one talk presents how a bank's process management helped it successfully change its business model from a transaction bank to a provider of securities settlement services. Many banks are still busy industrializing the delivery of their services, so the role of shared service centers, the integration of service partners, and improving customer orientation in back-office processes are on the program. The increasing digitalization of banking is also on the Wiesbaden agenda, as are success factors for process and change management.

Together with Sven Schnägelberger, I will give a workshop presenting an overview of current developments in BPM tools and technologies, with guidance on selecting the right solution.

Further information is available on the PEX Finance 2014 website.


by Thomas Allweyer at August 06, 2014 11:25 AM

August 04, 2014

Keith Swenson: Organize for Complexity Book

Niels Pflaeging’s amazing little book, Organize for Complexity, gives good advice on how to create self-managing organizations that are resilient and stable.

There is a lot to like about the book.  It is short: only 114 pages.  Lots of hand drawn diagrams illustrate the concepts.  Instead of bogging down in lengthy descriptions, it keeps statements clear and to the point.

Alpha and Beta

Alpha is a Taylorist way of running an organization.  It is the embodiment of command & control, Theory X, hierarchical, structured, machine-like, bureaucratic traditional organizations.  The reason that alpha-style organizations have worked is an accident of history.  Marketplaces, and subsequently manufacturing environments, were long ago quite complex, but the dawn of the industrial age brought a century or so in which markets were sluggish and complexity was much diminished.  During this period of diminished complexity, alpha-style organizations were able to thrive.  However, this came to an end in the 1970s or 1980s, and the world has become more complex again.

Beta is the style of organizing that is effective at dealing with complexity, with a focus on Theory Y, decentralization, agility, and self-organization.  He suggests we should form people into teams with a clear boundary; keep everything completely transparent within the team so everyone knows what is going on; give challenges to the entire team (or better, let them self-identify the tasks); and recognize accomplishments of the team, not of individuals.  Done correctly, the members of the teams will work out the details, taking on the tasks best suited to themselves, without regard to roles, titles, job positions, status symbols, etc.

The book spends a good deal of time motivating why this works.  One subject I have covered a lot on this blog: a machine-like approach cannot work against complexity.  Analytic decomposition of a complex situation, and addressing parts of a complex system, can actually do more harm than good.  The one ‘silver bullet’ is that human beings have the ability to work in the face of complexity, so you must set up the organization to leverage native human intelligence. (Reminds me of human 1.0.)

Networked Organizations

The goal is to make an organization networked along informal lines, and also along value-creating lines.  Instead of a centralized command center pushing ideas out, the network is formed with a periphery which deals directly with the market, and a center which supports the periphery.  The network is driven by the periphery, very much like a pull organization.  I agree, and have argued that such an organization is indeed more robust and better able to handle complexity (see ‘“Pull” Systems are Antifragile‘).  The networked organization decentralizes decision making, putting it closer to the customer, resulting in faster and better decisions.


Since teams are self-organizing, leadership works a little differently.  Leadership needs to focus on improving the system, and not so much on the tasks and activities.  Radical transparency, connectedness, and team culture are all important; you might even call it collaborative planning.  He even spends some time discussing the steps you might take to transform an organization from an ‘alpha’ to a ‘beta’ working mode.


I really love the book.  It should be quite accessible to managers and leaders in any organization.  Like most inspirational books, it makes things sound easier than they are.  Ideally, each team, and each team member, would get paid in proportion to the value the team or member provides in each time period, as if the organization were a form of idealized market.  But some forms of value are nebulous and defy measurement.  Also, people band into organizations in order to gain the stability that comes from a fixed structure, so that they don’t have to worry about how their own bills will be paid at the end of the month.  There will always be someone taking the risk, and as a result having a commanding influence.  One can’t be a purist; it is pragmatic to expect that a mixture of alpha and beta will always be in force.  Still, the book gives an excellent overview of the principles of a networked organization to strive for, along with a reasonable explanation of why they work, as the title suggests, in the face of complexity.


by kswenson at August 04, 2014 02:35 PM

August 01, 2014

Keith Swenson: The third era of process support: Empathy

Rita Gunther McGrath’s post this week on the HBR Blog called Management’s Three Eras: A Brief History has a lesson for those of us designing business process technology.  The parallel between management and process technology might be stronger than we normally admit.

According to McGrath, management didn’t really exist before the industrial revolution, at which time it came into being to coordinate the newly enlarged organizations.  The organization was conceptualized as a machine to produce products.  The epitome of this thinking is captured by F.W. Taylor and others who preached scientific management.

Early process technology was similarly oriented around viewing the organization as a machine.  Workflow, and later business process management (BPM), was all about finding the one best process, and constructing machinery that help to enforce those best processes.

The second phase of management emerged in the decades after WWII, when organizations started to focus on expertise and to provide services.  Peter Drucker coined the term “knowledge work,” and Douglas McGregor described a management style he called Theory Y, distinguished from the earlier Theory X.  Command and control does not work, and a new contract with workers is needed to retain their talent and expertise.

There is a second phase in process technology as well, with the dramatic rise in interest in Case Management technologies recently to support knowledge workers, to allow them to leverage their expertise, and to enable far more agile organizations necessary to provide services.

McGrath proposes that we are at the dawn of a third era in management.  The first era was machine-like, to produce products; the second collaborative, to provide advanced services; the third will be to create “complete and meaningful experiences.”  She says this is a new era of empathy.  A pull organization would be empathetic in the sense that customer desires rather directly drive the working of the organization.  This might be the management style that Margaret Wheatley, Myron Kellner-Rogers, Fritjof Capra, and other new-path writers are hinting at.

We should brace ourselves for a similar emergence of technology that will enhance and improve our ability to work together in this more empathetic style.  A hyper-social organization might be the organizing principle.  What will that new process technology look like?  I don’t know, but we have some time to sort that out.

Management I emerged from the 1800s to 1950, while the corresponding early process technology appeared in the 1980s and 1990s.  Management II emerged in the 1950s and 1960s, and its process technology started appearing in a real way around 2010.  If Management III is appearing now, perhaps we have until 2020 before the technology to support it needs to be worked out.  That would leave us plenty of time to sort out the details.

Or maybe not.  What if Management III is emerging concomitantly with the social and enterprise 2.0 technology we see starting to be used today?  What if Management I was inherently tied to the rise of steam and electric power, while Management II came with the technology of telephones and faxes?  If Management III is tied directly to new social technologies, it might be that by the time it fully emerges, the technology base will already be set.  We see the technology support for Management I and II as separate because the information technology came later, but that is not the case for Management III.  It might be happening now.

Surely in the future, when we look back on these times, we will recognize the early attempts at systems that support an empathetic style of management as starting here and now.  We need only look for it, and recognize it for what it is.


by kswenson at August 01, 2014 02:34 PM

Thomas Allweyer: The winners of the BPMS book have been decided

Many thanks to everyone who took part in the prize draw for the BPMS book.

One copy each was won by:

  • Dr. Wiebke Dresp, Rösrath
  • Tim Pidun, Dresden
  • Dr. Tobias Walter, Offenbach

Congratulations! The books are on their way to you.

More information about the book at

by Thomas Allweyer at August 01, 2014 10:49 AM

July 30, 2014

Sandy Kemsley: BP3 Brazos Portal For IBM BPM: One Ring To Rule Them All?

Last week BP3 announced the latest addition to their Brazos line of UI tooling for IBM BPM: Brazos Portal. Scott Francis gave me a briefing a few days before the announcement, and he had Ivan...

[Content summary only, click through for full article and links]

by sandy at July 30, 2014 12:38 PM

July 28, 2014

Leading BPM – Agenda of 2014 Conference revealed!

In a growing number of organizations, the focus of BPM is moving toward leadership-oriented topics to increase the acceptance and benefit of process management systems. Basics like process modeling and compliance management are already quite mature and widely discussed. We are therefore going to put the motto "Leading BPM" into practice and pay attention to upcoming areas such as "real" BPM training (not system training) for employees and management, change management aspects, and activities to strengthen the acceptance of BPM systems.

To address these topics, we joined forces with BPM experts from business areas such as engineering, finance, aerospace, the social sector, and the chemical industry and ran a number of workshops to identify best practices. Combined with the latest insights from the scientific world and practical examples, the results of the workshops will be presented at the 2014 Process Management Conference on Nov 24/25 at the Lufthansa Training & Conference Center Seeheim in the Frankfurt area, Germany.

In addition, we will focus on future-oriented BPM topics and present detailed results of our “Digital Age BPM” workshop series, which we performed in cooperation with digital leadership expert Willms Buhse and his doubleYUU team. In a group of five organizations from various sectors, we experimented with bringing together BPM and digital-age aspects such as social media, web 2.0, agile management, and mobile devices. The results are quite fascinating and will be presented on day two of the conference.

For the first time ever, we will present the BPM2thePeople Award to an organization from the social or education sector for its achievements in applying BPM methods. Read more about the award on its website:

Of course, the conference will also offer plenty of space for knowledge exchange with other BPM experts. To facilitate networking, we will offer several speed-dating sessions during the breaks.

Finally, Samsung will support us with the latest mobile devices, allowing us to continue our paperless conference approach and to enable live polls and digital networking. Many thanks to Samsung! :-)

So don’t miss this year’s Conference and register now!



PS: Currently, we are offering an early bird discount of 10 percent!

Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international community as well. Please feel free to contact the team.

by Mirko Kloppenburg at July 28, 2014 12:05 PM

Keith Swenson: Wirearchy – a pattern for an adaptive organization?

What is a Wirearchy?  How does it work?  When should it be considered?  When should it be avoided?  What are the advantages?  This post covers the basic elements of a Wirearchy.

What is a Wirearchy?

Jon Husband has a blog “” which, as you can tell from the name, is dedicated to the subject.

It is an organizing principle.  Instead of the top-down, command-and-control hierarchy that we are used to, a wirearchy organizes around champions and channels.  It is an organization designed for a networked world.  He says:

The working definition of Wirearchy is “a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology”.

The description reads a little like the Communist Manifesto, with the employee being liberated from the oppression of bureaucracy, where “rapid flows of information are like electronic grains of sand, eroding the pillars of rigid traditional hierarchies.”  There is no doubt that information technology is having a profound effect on how we organize, and a Wirearchy is an honest attempt to distill the trends that are already happening around us.


Husband feels that Taylorism, or Scientific Management, is coded into the traditional hierarchy.  Scientific management can be seen as the application of Enlightenment (reductionist) principles to work processes: breaking highly complicated manufacturing into a sequence of discrete, well-defined steps, so that work can be passed from person to person in a factory-like setting.  It is surprising that he draws a parallel between hierarchies and scientific management, because the latter is between 100 and 200 years old, while hierarchies have been used since ancient times and don’t seem to be related to the industrial revolution at all.  Hierarchies worked for the Egyptians.

“first we shape our structures, then our structures shape us” -Churchill

Is it Technology?

Husband claims that the concept of wirearchy has nothing to do with technology.  I think I know what he means: it is an organization of human interactions, not something designed into a piece of software.  A wirearchy would then be what we used to call “the grapevine” – an informal network of communications.  In this sense wirearchies have always existed.

But to say that it has nothing to do with technology is not really honest.  It is the expansion of telecommunications technology that allows so many more people to be connected than before.  It is information technology that allows a wirearchy to be more than just a gossip network.

Indeed, Husband seems to contradict himself.  Consider his advice to a manager: “become knowledgeable about online work systems and how the need for collaboration is changing the nature of work.”  A wirearchy is not instigated by a specific technology system, but there is no doubt that a wirearchy results from the new modes of communication that social technology in general provides.

Not a Revolution

Husband does not expect traditional hierarchies to be replaced by wirearchies.  Hierarchies remain, but wirearchies explain some of the changes we are seeing in the interconnected world.

I really want to compare this to Francois Gossieaux’s “Human 1.0″, the idea that social technologies allow us to work together in a much more natural way.  People have always built their own networks, but during the industrial revolution there was a strong incentive to organize into much more rigid organizational structures.  Call those rigid structures from industrialization and scientific management “human 2.0″.  Social networks then allow us to be just as productive, but get back to relating to each other in the way that people always have.

The Big Shift: Push vs. Pull

Hagel et al. talk about social technology bringing about a shift from push-oriented organizations to pull organizations.  The point of a wirearchy is that initiatives do not start at the top and get pushed to the workers.  Instead, initiatives can start from anyplace and be carried out by ad-hoc teams that know each other and share common goals.  That sounds very much like a pull organization: the edges of the organization in direct contact with the customer make key decisions about what will be offered, and are then supported by the rest of the organization to deliver the results.  The hierarchy does not go away; instead, the focus is on how it is used, and where the initiatives come from.


One of the central themes is responsiveness to change.  He says people should “be aware of, and identify, the changes and prepare for more change on an ongoing basis.”  In other words, prepare to be Agile.  Don’t forget that Alvin Toffler, in his 1970 book “Future Shock”, said exactly the same thing: in the future, success will depend less on perfecting a particular mode of work and more on learning how to rapidly and continually adopt new patterns of work.  The idea that we need to adapt quickly is not new.

But Still … Highly Relevant

Reading the above, I may seem critical of the originality of wirearchy, but let me clarify.  Wirearchy is a way of seeing and talking about what is happening.  Many others are seeing the same thing, and that is why it is so important.  Here are some highlights of posts he has written:

Harold Jarche has written a number of posts on wirearchy:


Organizations that do not adapt to the changes that social technology brings to the market and to the office will be left behind by those that do.  There is no question that such pressures exist.  It is useful to talk about a wirearchy as a view of how organizations are changing, and as a guiding principle to help determine the best future course of action available to organizations.



by kswenson at July 28, 2014 10:39 AM

July 25, 2014

Thomas Allweyer: Agile Methods Continue to Advance

For the second time since 2012, the BPM lab of Koblenz University of Applied Sciences, led by Ayelt Komus, has surveyed the adoption of agile methods. The study's authors were pleased to count more than 600 participants from 30 countries. “Two years later, agile methods such as Scrum and IT Kanban are even more established and increasingly used in daily practice outside software development as well,” the authors summarize.

Almost two thirds of the participants started working agile only within the last four years. Agile methods are mostly not applied in their pure form, but combined with elements of other, often classical approaches. Scrum is still the most widely used method, but Kanban and Design Thinking show significantly higher growth rates than the other methods. Overall, agile methods were once again rated considerably more positively and more successful than classical project management methods.

The final report of the study is available at

by Thomas Allweyer at July 25, 2014 10:21 AM

July 21, 2014

Drools & JBPM: Drools Executable Model (Rules in pure Java)

The Executable Model is a re-design of the lowest-level Drools model handled by the engine. In the current series (up to 6.x) the executable model has grown organically over the last 8 years and was never really intended to be targeted by end users. Those wishing to write rules programmatically were advised to do it via code generation targeting drl, which was not ideal. There was never any drive to make this more accessible to end users, because the extensive use of anonymous classes in Java would have been unwieldy. With Java 8 and lambdas this changes, and the opportunity arises to build a more compelling model that is accessible to end users.
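As a plain-Java illustration of that point (this is not the Drools API; the helper below is invented purely for comparison), here is the same rule condition expressed first as a pre-Java-8 anonymous class and then as a lambda:

```java
import java.util.List;
import java.util.function.Predicate;

public class PredicateStyles {

    // Pre-Java-8 style: every condition needs a verbose anonymous class,
    // which made a user-facing programmatic rule model unwieldy.
    public static final Predicate<String> IS_MARK_ANONYMOUS = new Predicate<String>() {
        @Override
        public boolean test(String name) {
            return name.equals("Mark");
        }
    };

    // Java 8 style: the same condition as a one-line lambda.
    public static final Predicate<String> IS_MARK_LAMBDA = name -> name.equals("Mark");

    // A stand-in for "match objects against a condition".
    public static long countMatches(List<String> names, Predicate<String> condition) {
        return names.stream().filter(condition).count();
    }
}
```

Both predicates behave identically; the lambda form is what makes a fluent, end-user-facing executable model practical.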

This new model is generated during the compilation process of higher-level languages, but it can also be used on its own. The goal is for this Executable Model to be self-contained and to avoid the need for any further byte code munging (analysis, transformation or generation). From this model's perspective, everything is provided either by the code or by higher-level language layers. For example, indexes must be provided as arguments, which the higher-level language generates through analysis when it targets the Executable Model.
The model is designed to map well to fluent builders, leveraging Java 8's lambdas. This will make it more appealing to Java developers and language developers. It will also allow low-level engine features to be designed and tested independently of any language, which means we can innovate at the engine level without having to worry about the language layer.
The Executable Model should be generic enough to map into multiple domains. It will be a low-level dataflow model in which you can address functional reactive programming models, but which is still usable for building a rule-based system too.

The following example provides a first view of the fluent DSL used to build the executable model:
DataSource persons = sourceOf(new Person("Mark", 37),
                              new Person("Edson", 35),
                              new Person("Mario", 40));

Variable<Person> markV = bind(typeOf(Person.class));

Rule rule = rule("Print age of persons named Mark")
                .view(
                    input(markV, () -> persons),
                    expr(markV, person -> person.getName().equals("Mark"))
                )
                .then(
                    on(markV).execute(mark -> System.out.println(mark.getAge()))
                );

The previous code defines a DataSource containing a few Person instances and declares the Variable markV of type Person. The rule itself contains the usual two parts: the LHS is defined by the set of inputs and expressions passed to the view() method, while the RHS is the action defined by the lambda expression passed to the then() method.

Analyzing the LHS in more detail, the statement
input(markV, () -> persons)
binds the objects coming from the persons DataSource to the markV variable, pattern matching by the object class. In this sense the DataSource can be thought of as the equivalent of a Drools entry-point.

Conversely the expression
expr(markV, person -> person.getName().equals("Mark"))
uses a Predicate to define a condition that the object bound to the markV Variable has to satisfy in order to be successfully matched by the engine. Note that, as anticipated, the evaluation of the pattern matching is not performed by a constraint generated as the result of any sort of analysis or compilation process; it is merely executed by applying the lambda expression implementing the predicate (in this case, person -> person.getName().equals("Mark")) to the object to be matched. In other words, this DSL produces the executable model of a rule that is equivalent to the one resulting from the parsing of the following drl:
rule "Print age of persons named Mark"
when
    markV : Person( name == "Mark" ) from entry-point "persons"
then
    System.out.println(markV.getAge());
end
A rete builder that can be fed with the rules defined in this DSL is also under development. In particular it is possible to add these rules to a CanonicalKieBase and then create KieSessions from it as for any other normal KieBase.
CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(rule);

KieSession ksession = kieBase.newKieSession();
Of course the DSL also allows defining more complex conditions, like joins:
Variable<Person> markV = bind(typeOf(Person.class));
Variable<Person> olderV = bind(typeOf(Person.class));

Rule rule = rule("Find persons older than Mark")
                .view(
                    input(markV, () -> persons),
                    input(olderV, () -> persons),
                    expr(markV, mark -> mark.getName().equals("Mark")),
                    expr(olderV, markV, (older, mark) -> older.getAge() > mark.getAge())
                )
                .then(
                    on(olderV, markV)
                        .execute((p1, p2) -> System.out.println(p1.getName() + " is older than " + p2.getName()))
                );
or existential patterns:
Variable<Person> oldestV = bind(typeOf(Person.class));
Variable<Person> otherV = bind(typeOf(Person.class));

Rule rule = rule("Find oldest person")
                .view(
                    input(oldestV, () -> persons),
                    input(otherV, () -> persons),
                    not(otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge())
                )
                .then(
                    on(oldestV)
                        .execute(p -> System.out.println("Oldest person is " + p.getName()))
                );
Here the not() stands for the negation of any expression, so the form used above is actually only a shortcut for
not( expr( otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge() ) )
Accumulate is also already supported, in the following form:
Variable<Person> person = bind(typeOf(Person.class));
Variable<Integer> resultSum = bind(typeOf(Integer.class));
Variable<Double> resultAvg = bind(typeOf(Double.class));

Rule rule = rule("Calculate sum and avg of all persons having a name starting with M")
                .view(
                    input(person, () -> persons),
                    accumulate(expr(person, p -> p.getName().startsWith("M")),
                               sum(Person::getAge).as(resultSum),
                               avg(Person::getAge).as(resultAvg))
                )
                .then(
                    on(resultSum, resultAvg)
                        .execute((sum, avg) -> result.value = "total = " + sum + "; average = " + avg)
                );
To provide one last, more complete use case, the executable model of the classical fire and alarm example can be defined with this DSL as follows.
Variable<Room> room = any(Room.class);
Variable<Fire> fire = any(Fire.class);
Variable<Sprinkler> sprinkler = any(Sprinkler.class);
Variable<Alarm> alarm = any(Alarm.class);

Rule r1 = rule("When there is a fire turn on the sprinkler")
              .view(
                  expr(sprinkler, s -> !s.isOn()),
                  expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
              )
              .then(
                  on(sprinkler)
                      .execute(s -> {
                          System.out.println("Turn on the sprinkler for room " + s.getRoom().getName());
                          s.setOn(true);
                      })
                      .update(sprinkler, "on")
              );

Rule r2 = rule("When the fire is gone turn off the sprinkler")
              .view(
                  expr(sprinkler, Sprinkler::isOn),
                  not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
              )
              .then(
                  on(sprinkler)
                      .execute(s -> {
                          System.out.println("Turn off the sprinkler for room " + s.getRoom().getName());
                          s.setOn(false);
                      })
                      .update(sprinkler, "on")
              );

Rule r3 = rule("Raise the alarm when we have one or more fires")
              .view(
                  exists(fire)
              )
              .then(
                  execute(() -> System.out.println("Raise the alarm"))
                      .insert(() -> new Alarm())
              );

Rule r4 = rule("Lower the alarm when all the fires have gone")
              .view(
                  not(fire),
                  input(alarm)
              )
              .then(
                  execute(() -> System.out.println("Lower the alarm"))
                      .delete(alarm)
              );

Rule r5 = rule("Status output when things are ok")
              .view(
                  not(alarm),
                  not(sprinkler, Sprinkler::isOn)
              )
              .then(
                  execute(() -> System.out.println("Everything is ok"))
              );

CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(r1, r2, r3, r4, r5);

KieSession ksession = kieBase.newKieSession();

// phase 1
Room room1 = new Room("Room 1");
FactHandle fireFact1 = ksession.insert(new Fire(room1));
ksession.fireAllRules();

// phase 2
Sprinkler sprinkler1 = new Sprinkler(room1);
ksession.insert(sprinkler1);
ksession.fireAllRules();

// phase 3
ksession.delete(fireFact1);
ksession.fireAllRules();
In this example it's possible to note a few more things:

  • Some repetitions are necessary to bind the parameters of an expression to the formal parameters of the lambda expression evaluating it. Hopefully it will be possible to overcome this issue by using the -parameters compilation argument once this JDK bug is resolved.
  • any(Room.class) is a shortcut for bind(typeOf(Room.class))
  • The inputs don't declare a DataSource. This is a shortcut to state that those objects come from a default empty DataSource (corresponding to the Drools default entry-point). In fact in this example the facts are programmatically inserted into the KieSession.
  • Using an input without providing any expression for that input is actually a shortcut for input(alarm), expr(alarm, a -> true)
  • In the same way an existential pattern without any condition like not(fire) is another shortcut for not( expr( fire, f -> true ) )
  • Java 8 syntax also allows defining a predicate as a method reference accessing a boolean property of a fact, as in expr(sprinkler, Sprinkler::isOn)
  • The RHS, together with the block of code to be executed, also provides a fluent interface to define the working memory actions (inserts/updates/deletes) that have to be performed when the rule is fired. In particular, the update also takes a varargs of Strings reporting the names of the properties changed in the updated fact, as in update(sprinkler, "on"). Once again this information has to be provided explicitly because the executable model has to be created without the need for any code analysis.

by Mario Fusco at July 21, 2014 04:48 PM

July 20, 2014

Drools & JBPM: jBPM6 Developer Guide coming out soon!

Hello everyone. This post is just to let you know that jBPM6 Developer Guide is about to be published, and you can pre-order it from here and get a 20% to 37% discount on your order! With this book, you can learn how to:
  • Model and implement different business processes using the BPMN2 standard notation
  • Understand how and when to use the different tools provided by the JBoss Business Process Management (BPM) platform
  • Learn how to model complex business scenarios and environments through a step-by-step approach
Here is a list of what you will find in each chapter:

Chapter 1, Why Do We Need Business Process Management?, introduces the BPM discipline. This chapter lays the groundwork for the rest of the book by explaining why and how the jBPM6 project has been designed, and the path its evolution will follow.
Chapter 2, BPM Systems Structure, goes in depth into understanding what the main pieces and components inside a Business Process Management System (BPMS) are. This chapter introduces the concept of BPMS as the natural follow up of an understanding of the BPM discipline. The reader will find a deep and technical explanation about how a BPM system core can be built from scratch and how it will interact with the rest of the components in the BPMS infrastructure. This chapter also describes the intimate relationship between the Drools and jBPM projects, which is one of the key advantages of jBPM6 in comparison with all the other BPMSs, as well as existing methodologies where a BPMS connects with other systems.
Chapter 3, Using BPMN 2.0 to Model Business Scenarios, covers the main constructs used to model our business processes, guiding the reader through an example that illustrates the most useful modeling patterns. The BPMN 2.0 specification has become the de facto standard for modeling executable business processes since it was released in early 2011, and is recommended to any BPM implementation, even outside the scope of jBPM6.  
Chapter 4, Understanding the Knowledge Is Everything Workbench, takes a look into the tooling provided by the jBPM6 project, which will enable the reader to both define new processes and configure a runtime to execute those processes. The overall architecture of the tooling provided will be covered as well in this chapter.
Chapter 5, Creating a Process Project in the KIE Workbench, dives into the required steps to create a process definition with the existing tooling, as well as to test it and run it. The BPMN 2.0 specification will be put into practice as the reader creates an executable process and a compiled project where the runtime specifications will be defined.
Chapter 6, Human Interactions, covers in depth the Human Task component inside jBPM6. A big feature of BPMS is the capability to coordinate human and system interactions. It also describes how the existing tooling builds a user interface using the concepts of task lists and task forms, exposing the end users involved in the execution of multiple process definitions’ tasks to a common interface.
Chapter 7, Defining Your Environment with the Runtime Manager, covers the different strategies provided to configure an environment to run our processes. The reader will see the configurations for connecting external systems, human task components, persistence strategies and the relation a specific process execution will have with an environment, as well as methods to define their own custom runtime configuration.
Chapter 8, Implementing Persistence and Transactions, covers the shared mechanisms between the Drools and jBPM projects used to store information and define transaction boundaries. When we want to support processes that coordinate systems and people over long periods of time, we need to understand how the process information can be persisted.  
Chapter 9, Integration with other Knowledge Definitions, gives a brief introduction to the Drools Rule Engine, which is used to mix business processes with business rules to define advanced and complex scenarios. We also cover Drools Fusion, an added feature of the Drools Rule Engine that adds the ability of temporal reasoning, allowing business processes to be monitored, improved and covered by business scenarios that require temporal inferences.
Chapter 10, KIE Workbench Integration with External Systems, describes the ways in which the provided tooling can be extended with extra features, along with a description of all the different extension points provided by the API and exposed by the tooling. A set of good practices is described in order to give the reader a comprehensive way to deal with different scenarios a BPMS will likely face.
Appendix A, The UberFire Framework, goes into detail about the base utility framework used by the KIE Workbench to define its user interface. The reader will learn the structure and use of the framework, along with a demonstration that will enable the extension of any component in the workbench distribution you choose. Hope you like it! Cheers,

by Marian Buenosayres at July 20, 2014 09:10 PM

July 18, 2014

Drools & JBPM: Kie Uberfire Social Activities

The Uberfire Framework has a new extension: Kie Uberfire Social Activities. In this initial version, this Uberfire extension provides an extensible architecture to capture, handle, and present (in a timeline style) configurable types of social events.

  • Basic Architecture
An event is any type of CDI event and is handled by its respective adapter. The adapter is a CDI managed bean which implements the SocialAdapter interface. The main responsibility of the adapter is to translate from a CDI event to a Social Event. This social event is captured and persisted by Kie Uberfire Social Activities in the respective timelines (basically the user and type timelines).

This is the basic architecture and workflow:

Basic Architecture

  • Timelines

There are many ways to interact with and display a timeline. This section briefly describes each of them.

a) Atom URL

Social Activities provides a custom URL for each event type, accessible at http://project/social/TYPE_NAME.

The user timeline works the same way and is accessible at http://project/social-user/USER_NAME.

Another cool feature is that an adapter can provide its own pluggable URL filters. By implementing the method getTimelineFilters of the SocialAdapter interface, it can do anything it wants with its timeline. These filters are accessible via query parameters, e.g. http://project/social/TYPE_NAME?max-results=1.

b) Basic Widgets

Social Activities also includes some basic (extensible) widgets. There are two types of timeline widgets: simple and regular widgets.

Simple Widget

Regular Widget

The ">" symbol in the 'Simple Widget' is a pagination component. You can configure it through an easy API: with a SocialPaged( 2 ) object you create a pagination with a page size of 2 items. This object helps you customize your widgets, using the methods canIGoBackward() and canIGoForward() to decide which icons to display, and forward() and backward() to set the navigation direction.
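As a rough sketch of how such a pagination cursor might behave (only the four method names come from the post; the rest of this class is invented for illustration and is not the real SocialPaged implementation):

```java
// Hypothetical re-implementation of a SocialPaged-style cursor, for illustration only.
public class PagedCursor {
    private final int pageSize;
    private final int totalItems;
    private int offset = 0;

    public PagedCursor(int pageSize, int totalItems) {
        this.pageSize = pageSize;
        this.totalItems = totalItems;
    }

    // Used by the widget to decide whether to display the navigation icons.
    public boolean canIGoForward()  { return offset + pageSize < totalItems; }
    public boolean canIGoBackward() { return offset > 0; }

    // Set the navigation direction by moving the window one page at a time.
    public void forward()  { if (canIGoForward())  offset += pageSize; }
    public void backward() { if (canIGoBackward()) offset -= pageSize; }

    public int currentPage() { return offset / pageSize; }
}
```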
The Social Activities component has initial support for avatars. If you provide a user e-mail to the API, the gravatar image will be displayed in these widgets.

c) Drools Query API

Another way to interact with a timeline is through the Social Timeline Drools Query API. This API executes one or more DRLs on a timeline over all cached events. It's a great way to merge different types of timelines.

  • Followers/Following Social Users

A user can follow another social user. When a user generates a social event, this event is replicated into the timelines of all his followers. Social Activities also provides basic widgets to follow another user, show all social users, and display a user's following list.

It is important to mention that the current implementation lists social users through a "small hack": we search the Uberfire default git repository for branch names (each Uberfire user has his own branch) and extract the list of social users from them.

This hack is needed because we don't have direct access to the user base (due to the container-based auth).

  • Persistence Architecture

The persistence architecture of Social Activities is built on two concepts: local cache and file persistence. The local cache is an in-memory cache that holds all recent social events. Events are kept only in this cache until the max events threshold is reached; the size of this threshold is configured by a system property (default value 100).
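The cache-then-flush behaviour just described can be sketched like this (all names here are invented for illustration; the real flush target is the system.git repository's social branch, modelled below as a plain list):

```java
import java.util.ArrayList;
import java.util.List;

public class EventCache {
    private final int threshold;                               // default 100, set via system property
    private final List<String> cache = new ArrayList<>();      // in-memory cache of recent events
    private final List<String> persisted = new ArrayList<>();  // stands in for the file system

    public EventCache(int threshold) {
        this.threshold = threshold;
    }

    public void add(String event) {
        cache.add(event);
        if (cache.size() >= threshold) { // max events threshold reached:
            persisted.addAll(cache);     // persist the whole cache...
            cache.clear();               // ...and start again in memory
        }
    }

    public int cachedCount()    { return cache.size(); }
    public int persistedCount() { return persisted.size(); }
}
```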

When the threshold is reached, Social Activities persists the current cache to the file system (the system.git repository, social branch). Inside this branch there is a social-files directory with this structure:

  • userNames: a file that contains all social user names
  • each user has his own file (named after him) containing a JSON with the user data
  • a directory for each social event type
  • a directory "USER_TIMELINE" that contains the specific user timelines

Each directory keeps a file "LAST_FILE_INDEX" that points to the most recent timeline file.

Inside each file, there is a persisted list of Social Events in JSON format:

({"timestamp":"Jul16,2014,5:04:13PM","socialUser":{"name":"stress1","followersName":[],"followingName":[]},"type":"FOLLOW_USER","adicionalInfo":["follow stress2"]})

Each JSON record is followed by its size in bytes, written in hex, which separates the records. The file is read by Social Activities in reverse order.
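The idea of size-suffixed records read newest-first can be sketched as follows (the actual on-disk encoding in Social Activities differs; this sketch uses a '|' separator and assumes it never appears inside a record):

```java
import java.util.ArrayList;
import java.util.List;

public class ReverseLog {
    // Append one record: payload, separator, payload size in hex, newline.
    public static String append(String log, String json) {
        return log + json + "|" + Integer.toHexString(json.length()) + "\n";
    }

    // Walk the log backwards, using each hex size suffix to find the record start,
    // so records come out newest-first without scanning the whole file forwards.
    public static List<String> readNewestFirst(String log) {
        List<String> out = new ArrayList<>();
        int end = log.length();
        while (end > 0) {
            int nl = end - 1;                       // trailing '\n' of the record
            int sep = log.lastIndexOf('|', nl - 1); // separator before the hex size
            int size = Integer.parseInt(log.substring(sep + 1, nl), 16);
            out.add(log.substring(sep - size, sep));
            end = sep - size;
        }
        return out;
    }
}
```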

The METADATA file currently holds only the number of social events in that file (used for pagination support).

It is important to mention that this whole structure is transparent to the widgets and to pagination. The file structure and the cache are merged to compose a timeline.

  • Clustering
If your application uses Uberfire in a cluster environment, Kie Social Activities also supports distributed persistence. Its cluster sync is built on top of the UberfireCluster support (Apache Zookeeper and Apache Helix).

Each node broadcasts social events to the cluster via a cluster message SocialClusterMessage.NEW_EVENT containing the social event data. With this message, all nodes receive the event and can store it in their own local cache; at that point all node caches are consistent.
When the cache of a node reaches the threshold, it locks the filesystem to persist its cache there. Then the node sends a SOCIAL_FILE_SYSTEM_PERSISTENCE message to the cluster, notifying all nodes that the cache has been persisted to the filesystem.
If any node receives a new event during this persistence process, the stale event is merged during the sync.

  • Stress Test and Performance

In my github account there is an example stress test class used to test the performance of this project. This class isn't imported into our official repository.

The results of that test show that Social Activities can write ~1000 events per second on my personal laptop (MacBook Pro, Intel Core i5 2.4 GHz, 8 GB 1600 MHz DDR3, SSD). In a single-instance environment, it wrote 10k events in 7s, 100k in 48s, and 500k events in 512s.
  • Demo
A sample project of this feature can be found in my GitHub account, or you can just download and install the war of this demo. Please note that this repository has moved from my account to our official uberfire extensions repository.

  • Roadmap
This is an early version of Kie Uberfire Social Activities. In the next versions we plan to provide:

  • A "Notification Center" tool, inspired by the OS X notification tool (far term)
  • Integration of this project with Dashbuilder KPIs (far term)
  • A purge tool, able to move old events from the filesystem to another persistence store (short term)
  • A way to use customized templates in the widgets; in this version we only provide basic widgets (near term)
  • A dashboard to group multiple social widgets (near term)

If you want to start contributing to Open Source, this is a nice opportunity. Feel free to contact me!

by ederign at July 18, 2014 07:40 PM

Thomas Allweyer: My New Book: A Practice-Oriented Introduction to Business Process Management Systems

The new book is about Business Process Management Systems (BPMS), i.e. systems for process execution. What is the best way to learn how such a system works? By trying it out yourself. Just as one writes and runs many example programs when learning a programming language, a newcomer to BPMS should model and execute as many executable processes as possible. For this reason the book contains more than 50 example processes, which can be downloaded from the book's website and tried out.

These include not only the simple standard processes used in typical beginner tutorials, but also implementations of more complex tasks, such as multiple participants, exception handling, collaboration of several processes in different systems, and many more.

Process modeling with BPMN plays a central role. An executable process, however, consists not only of a process model but also of numerous further elements, such as data, user dialogs, user roles and organizational structures, business rules, application functionality, etc. These aspects are also explained in detail and applied in practice with many further examples. The reader learns how to create and use complex data objects, define message flows, specify user dialogs and screen flows, write scripts, integrate web services, select users dynamically, use decision tables, and much more.

The handling of the individual steps in the process portal and the administration of a BPMS are also covered, as are process monitoring and controlling. The book deliberately focuses on the classical BPMS concept. Newer developments such as Adaptive Case Management or Social BPM are mentioned but not treated in depth, since much is still in flux in these areas. The classical BPMS concept will continue to play an essential role in the future, above all for standardized processes, and a solid knowledge of the established BPMS approach is an important prerequisite for understanding newer developments.

So that every reader can try out the example processes and develop them further, they were created with the freely available, no-cost Community Edition of Bonita BPM. The fundamentals taught in the book are general, however, and can be transferred to other BPM systems. Since every system has its peculiarities, some passages illustrate by example how a particular aspect was implemented in Bonita. The underlying principle should be found in any typical BPM system, although the concrete implementation may differ. The book contains no details on operating Bonita; the information needed to run the processes with Bonita is available on the book's website.

The book is therefore also useful for users of other BPM systems. Bonita can easily be installed as an additional learning environment on ordinary PCs. There is an extra learning effect in reimplementing individual example processes in another system. I am very interested in such experiences and will gladly publish processes ported to other systems on the website.

Since the feature set of the Bonita Community Edition used here is not as extensive as that of some commercial systems, creative solutions and workarounds had to be developed in several places. For example, this system provides neither complex nor event-based gateways. From a didactic point of view, such limitations are often no bad thing, since it is particularly instructive to work out how the desired behavior can be achieved by other means.
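As a rough illustration of this kind of workaround (a sketch of a common BPMN pattern, not an example taken from the book): the behavior of an event-based gateway that waits for either an incoming message or a timeout can often be emulated with a receive task plus an interrupting timer boundary event. All ids, names, and the message reference below are hypothetical, and namespace declarations are omitted for brevity.

```xml
<!-- Sketch: emulating an event-based gateway (message vs. timeout)
     with a receive task and an interrupting timer boundary event.
     All ids, names and the message reference are hypothetical. -->
<process id="orderProcess" isExecutable="true">
  <!-- The happy path: wait for the incoming reply message. -->
  <receiveTask id="waitForReply" name="Wait for reply"
               messageRef="replyMessage"/>
  <!-- If no message arrives within 48 hours, the timer fires,
       cancels the receive task and routes to the escalation path. -->
  <boundaryEvent id="replyTimeout" attachedToRef="waitForReply"
                 cancelActivity="true">
    <timerEventDefinition>
      <timeDuration>PT48H</timeDuration>
    </timerEventDefinition>
  </boundaryEvent>
  <sequenceFlow id="replyReceived" sourceRef="waitForReply"
                targetRef="processReply"/>
  <sequenceFlow id="timedOut" sourceRef="replyTimeout"
                targetRef="escalate"/>
</process>
```

The net effect is the same as an event-based gateway with a message branch and a timer branch: exactly one of the two paths is taken, depending on which event occurs first.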

The book is aimed at all newcomers to Business Process Management Systems who want not only to understand the concepts theoretically but also to apply them in practice. The target audience thus includes students of computer science, business informatics, and related programs, as well as practicing developers and process modelers who want to work their way into the subject. Even in the run-up to a system selection it is useful to have engaged intensively with the concrete problems of BPMS-based development, in order to discuss with vendors on an equal footing and ask specific questions.

And here is a small giveaway: anyone who would like to receive the book free of charge can, by July 31, 2014, send an email with the subject “Verlosung BPMS-Buch” to. Three copies of the book will be raffled among all entrants. By participating, you agree that in the event of a win your name and town will be published. Any recourse to legal action is excluded.

Website for the book – with the processes available for download
Order the book on Amazon.

by Thomas Allweyer at July 18, 2014 09:13 AM