Planet BPM

July 21, 2014

Drools & JBPM: Drools Executable Model (Rules in pure Java)

The Executable Model is a re-design of the lowest-level Drools model handled by the engine. In the current series (up to 6.x) the executable model has grown organically over the last 8 years and was never really intended to be targeted by end users. Those wishing to write rules programmatically were advised to do it via code generation targeting DRL, which was not ideal. There was never any drive to make this more accessible to end users, because the extensive use of anonymous classes in Java was unwieldy. With Java 8 and lambdas this changes, and the opportunity arises to make a more compelling model that is accessible to end users.

This new model is generated during the compilation process of higher-level languages, but can also be used on its own. The goal is for this Executable Model to be self-contained and avoid the need for any further byte code munging (analysis, transformation or generation); from this model's perspective, everything is provided either by the code or by higher-level language layers. For example, indexes must be provided as arguments, which the higher-level language generates through analysis when it targets the Executable Model.
It is designed to map well to fluent-level builders, leveraging Java 8's lambdas. This will make it more appealing to Java developers and language developers. It will also allow low-level engine features to be designed and tested independently of any language, which means we can innovate at the engine level without having to worry about the language layer.
The Executable Model should be generic enough to map into multiple domains. It will be a low-level dataflow model in which you can address functional reactive programming models, but which is still usable for building a rule-based system on top of it.
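As a plain-JDK illustration of why lambdas make such a model practical (this uses only java.util.function, not the Drools API), compare an anonymous-class constraint with its Java 8 equivalent:

```java
import java.util.function.Predicate;

public class LambdaVsAnonymous {

    // Before Java 8: even a trivial constraint needed a verbose anonymous class
    static final Predicate<String> ANON = new Predicate<String>() {
        @Override
        public boolean test(String name) {
            return name.equals("Mark");
        }
    };

    // With Java 8: the same constraint as a one-line lambda
    static final Predicate<String> LAMBDA = name -> name.equals("Mark");

    public static void main(String[] args) {
        System.out.println(ANON.test("Mark") && LAMBDA.test("Mark"));
    }
}
```

Both values behave identically; the lambda form is what makes a model built out of inline predicates tolerable to write by hand.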

The following example provides a first view of the fluent DSL used to build the executable model:
DataSource persons = sourceOf(new Person("Mark", 37),
                              new Person("Edson", 35),
                              new Person("Mario", 40));

Variable<Person> markV = bind(typeOf(Person.class));

Rule rule = rule("Print age of persons named Mark")
        .view(
            input(markV, () -> persons),
            expr(markV, person -> person.getName().equals("Mark"))
        )
        .then(
            on(markV).execute(mark -> System.out.println(mark.getAge()))
        );

The previous code defines a DataSource containing a few person instances and declares the Variable markV of type Person. The rule itself contains the usual two parts: the LHS is defined by the set of inputs and expressions passed to the view() method, while the RHS is the action defined by the lambda expression passed to the then() method.

Analyzing the LHS in more detail, the statement
input(markV, () -> persons)
binds the objects from the persons DataSource to the markV variable, pattern-matching by the object class. In this sense the DataSource can be thought of as the equivalent of a Drools entry-point.

Conversely the expression
expr(markV, person -> person.getName().equals("Mark"))
uses a Predicate to define a condition that the object bound to the markV Variable has to satisfy in order to be successfully matched by the engine. Note that, as anticipated, the evaluation of the pattern matching is not performed by a constraint generated as the result of any sort of analysis or compilation process; it is merely executed by applying the lambda expression implementing the predicate (in this case, person -> person.getName().equals("Mark")) to the object to be matched. In other words, the preceding DSL produces the executable model of a rule that is equivalent to the one resulting from parsing the following DRL.
rule "Print age of persons named Mark"
when
    markV : Person( name == "Mark" ) from entry-point "persons"
then
    System.out.println(markV.getAge());
end
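To see that matching here is nothing more than predicate application, the same condition can be exercised with plain JDK types (illustrative only, not the Drools API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMatching {

    // The same condition the rule uses, as a plain JDK predicate
    static final Predicate<String> IS_MARK = name -> name.equals("Mark");

    // Matching simply applies the predicate to each candidate object
    static List<String> matches(List<String> names) {
        return names.stream().filter(IS_MARK).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(matches(Arrays.asList("Mark", "Edson", "Mario")));
    }
}
```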
A rete builder that can be fed with the rules defined through this DSL is also under development. In particular, it is possible to add these rules to a CanonicalKieBase and then create KieSessions from it, as with any other normal KieBase.
CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(rule);

KieSession ksession = kieBase.newKieSession();
Of course the DSL also allows the definition of more complex conditions, like joins:
Variable<Person> markV = bind(typeOf(Person.class));
Variable<Person> olderV = bind(typeOf(Person.class));

Rule rule = rule("Find persons older than Mark")
        .view(
            input(markV, () -> persons),
            input(olderV, () -> persons),
            expr(markV, mark -> mark.getName().equals("Mark")),
            expr(olderV, markV, (older, mark) -> older.getAge() > mark.getAge())
        )
        .then(
            on(olderV, markV)
                .execute((p1, p2) -> System.out.println(p1.getName() + " is older than " + p2.getName()))
        );
or existential patterns:
Variable<Person> oldestV = bind(typeOf(Person.class));
Variable<Person> otherV = bind(typeOf(Person.class));

Rule rule = rule("Find oldest person")
        .view(
            input(oldestV, () -> persons),
            input(otherV, () -> persons),
            not(otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge())
        )
        .then(
            on(oldestV).execute(p -> System.out.println("Oldest person is " + p.getName()))
        );
Here the not() stands for the negation of any expression, so the form used above is actually only a shortcut for
not( expr( otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge() ) )
Accumulate is also already supported, in the following form:
Variable<Person> person = bind(typeOf(Person.class));
Variable<Integer> resultSum = bind(typeOf(Integer.class));
Variable<Double> resultAvg = bind(typeOf(Double.class));

Rule rule = rule("Calculate sum and avg of all persons having a name starting with M")
        .view(
            input(person, () -> persons),
            accumulate(expr(person, p -> p.getName().startsWith("M")),
                       sum(Person::getAge).as(resultSum),   // accumulate functions binding the
                       avg(Person::getAge).as(resultAvg))   // result variables declared above
        )
        .then(
            on(resultSum, resultAvg)
                .execute((sum, avg) -> result.value = "total = " + sum + "; average = " + avg)
        );
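For comparison, the semantics of this sum/average accumulation can be expressed with plain JDK streams; the following is only an illustration with JDK types, not the Drools API:

```java
import java.util.Arrays;
import java.util.IntSummaryStatistics;
import java.util.List;

public class AccumulateSketch {

    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Accumulates sum and average of the ages of persons whose name starts with "M"
    static IntSummaryStatistics statsFor(List<Person> persons) {
        return persons.stream()
                      .filter(p -> p.name.startsWith("M"))
                      .mapToInt(p -> p.age)
                      .summaryStatistics();
    }

    public static void main(String[] args) {
        IntSummaryStatistics stats = statsFor(Arrays.asList(
                new Person("Mark", 37), new Person("Edson", 35), new Person("Mario", 40)));
        System.out.println("total = " + stats.getSum() + "; average = " + stats.getAverage());
    }
}
```

With the three persons used throughout this post, only Mark and Mario match, giving a total of 77 and an average of 38.5.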
To provide one last, more complete use case, the executable model of the classical fire and alarm example can be defined with this DSL as follows.
Variable<Room> room = any(Room.class);
Variable<Fire> fire = any(Fire.class);
Variable<Sprinkler> sprinkler = any(Sprinkler.class);
Variable<Alarm> alarm = any(Alarm.class);

Rule r1 = rule("When there is a fire turn on the sprinkler")
        .view(
            expr(sprinkler, s -> !s.isOn()),
            expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
        )
        .then(
            on(sprinkler)
                .execute(s -> {
                    System.out.println("Turn on the sprinkler for room " + s.getRoom().getName());
                    s.setOn(true);
                })
                .update(sprinkler, "on")
        );

Rule r2 = rule("When the fire is gone turn off the sprinkler")
        .view(
            expr(sprinkler, Sprinkler::isOn),
            not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
        )
        .then(
            on(sprinkler)
                .execute(s -> {
                    System.out.println("Turn off the sprinkler for room " + s.getRoom().getName());
                    s.setOn(false);
                })
                .update(sprinkler, "on")
        );

Rule r3 = rule("Raise the alarm when we have one or more fires")
        .view(
            exists(fire)
        )
        .then(
            execute(() -> System.out.println("Raise the alarm"))
                .insert(() -> new Alarm())
        );

Rule r4 = rule("Lower the alarm when all the fires have gone")
        .view(
            not(fire)
        )
        .then(
            execute(() -> System.out.println("Lower the alarm"))
                .delete(alarm)
        );

Rule r5 = rule("Status output when things are ok")
        .view(
            not(alarm),
            not(sprinkler, Sprinkler::isOn)
        )
        .then(
            execute(() -> System.out.println("Everything is ok"))
        );

CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(r1, r2, r3, r4, r5);

KieSession ksession = kieBase.newKieSession();

// phase 1: a fire starts in room 1
Room room1 = new Room("Room 1");
FactHandle fireFact1 = ksession.insert(new Fire(room1));
ksession.fireAllRules();

// phase 2: a sprinkler for room 1 becomes available and is turned on
Sprinkler sprinkler1 = new Sprinkler(room1);
FactHandle sprinklerFact1 = ksession.insert(sprinkler1);
ksession.fireAllRules();

// phase 3: the fire is gone, so the sprinkler is turned off
ksession.delete(fireFact1);
ksession.fireAllRules();
In this example it's possible to note a few more things:

  • Some repetitions are necessary to bind the parameters of an expression to the formal parameters of the lambda expression evaluating it. Hopefully it will be possible to overcome this issue by using the -parameters compilation argument once this JDK bug is resolved.
  • any(Room.class) is a shortcut for bind(typeOf(Room.class))
  • The inputs don't declare a DataSource. This is a shortcut to state that those objects come from a default empty DataSource (corresponding to the Drools default entry-point). In fact in this example the facts are programmatically inserted into the KieSession.
  • Using an input without providing any expression for that input is actually a shortcut for input(alarm), expr(alarm, a -> true)
  • In the same way an existential pattern without any condition like not(fire) is another shortcut for not( expr( fire, f -> true ) )
  • Java 8 syntax also allows defining a predicate as a method reference accessing a boolean property of a fact, as in expr(sprinkler, Sprinkler::isOn)
  • The RHS, together with the block of code to be executed, also provides a fluent interface to define the working memory actions (inserts/updates/deletes) to be performed when the rule fires. In particular, update also takes a varargs of Strings naming the properties changed in the updated fact, as in update(sprinkler, "on"). Once again this information has to be provided explicitly, because the executable model has to be created without the need for any code analysis.

by Mario Fusco ( at July 21, 2014 04:48 PM

July 20, 2014

Drools & JBPM: jBPM6 Developer Guide coming out soon!

Hello everyone. This post is just to let you know that the jBPM6 Developer Guide is about to be published, and you can pre-order it from here and get a 20% to 37% discount on your order! With this book, you can learn how to:
  • Model and implement different business processes using the BPMN2 standard notation
  • Understand how and when to use the different tools provided by the JBoss Business Process Management (BPM) platform
  • Learn how to model complex business scenarios and environments through a step-by-step approach
Here you can find a list of what you will find in each chapter:  

Chapter 1, Why Do We Need Business Process Management?, introduces the BPM discipline. This chapter will provide the basis for the rest of the book, by providing an understanding of why and how the jBPM6 project has been designed, and the path its evolution will follow.  
Chapter 2, BPM Systems Structure, goes in depth into understanding what the main pieces and components inside a Business Process Management System (BPMS) are. This chapter introduces the concept of BPMS as the natural follow up of an understanding of the BPM discipline. The reader will find a deep and technical explanation about how a BPM system core can be built from scratch and how it will interact with the rest of the components in the BPMS infrastructure. This chapter also describes the intimate relationship between the Drools and jBPM projects, which is one of the key advantages of jBPM6 in comparison with all the other BPMSs, as well as existing methodologies where a BPMS connects with other systems.
Chapter 3, Using BPMN 2.0 to Model Business Scenarios, covers the main constructs used to model our business processes, guiding the reader through an example that illustrates the most useful modeling patterns. The BPMN 2.0 specification has become the de facto standard for modeling executable business processes since it was released in early 2011, and is recommended to any BPM implementation, even outside the scope of jBPM6.  
Chapter 4, Understanding the Knowledge Is Everything Workbench, takes a look into the tooling provided by the jBPM6 project, which will enable the reader to both define new processes and configure a runtime to execute those processes. The overall architecture of the tooling provided will be covered as well in this chapter.
Chapter 5, Creating a Process Project in the KIE Workbench, dives into the required steps to create a process definition with the existing tooling, as well as to test it and run it. The BPMN 2.0 specification will be put into practice as the reader creates an executable process and a compiled project where the runtime specifications will be defined.
Chapter 6, Human Interactions, covers in depth the Human Task component inside jBPM6. A big feature of BPMS is the capability to coordinate human and system interactions. It also describes how the existing tooling builds a user interface using the concepts of task lists and task forms, exposing the end users involved in the execution of multiple process definitions’ tasks to a common interface.
Chapter 7, Defining Your Environment with the Runtime Manager, covers the different strategies provided to configure an environment to run our processes. The reader will see the configurations for connecting external systems, human task components, persistence strategies and the relation a specific process execution will have with an environment, as well as methods to define their own custom runtime configuration.
Chapter 8, Implementing Persistence and Transactions, covers the shared mechanisms between the Drools and jBPM projects used to store information and define transaction boundaries. When we want to support processes that coordinate systems and people over long periods of time, we need to understand how the process information can be persisted.  
Chapter 9, Integration with other Knowledge Definitions, gives a brief introduction to the Drools Rule Engine. It is used to mix business processes with business rules in order to define advanced and complex scenarios. We also cover Drools Fusion, an added feature of the Drools Rule Engine that provides temporal reasoning, allowing business processes to be monitored, improved, and covered by business scenarios that require temporal inferences.
Chapter 10, KIE Workbench Integration with External Systems, describes the ways in which the provided tooling can be extended with extra features, along with a description of all the different extension points provided by the API and exposed by the tooling. A set of good practices is described in order to give the reader a comprehensive way to deal with different scenarios a BPMS will likely face.
Appendix A, The UberFire Framework, goes into detail about the base utility framework used by the KIE Workbench to define its user interface. The reader will learn the structure and use of the framework, along with a demonstration that will enable the extension of any component in the workbench distribution you choose.
Hope you like it! Cheers,

by Marian Buenosayres ( at July 20, 2014 09:10 PM

July 18, 2014

Drools & JBPM: Kie Uberfire Social Activities

The Uberfire Framework has a new extension: Kie Uberfire Social Activities. In this initial version, this Uberfire extension provides an extensible architecture to capture, handle, and present (in a timeline style) configurable types of social events.

  • Basic Architecture
An event is any type of "CDI Event", and each one is handled by its respective adapter. The adapter is a CDI Managed Bean which implements the SocialAdapter interface. The main responsibility of the adapter is to translate from a CDI event to a Social Event. This social event is captured and persisted by Kie Uberfire Social Activities in its respective timelines (basically the user and type timelines).
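As a rough sketch of the adapter idea (the types and names below are illustrative stand-ins, not the actual SocialAdapter API), the translation step amounts to mapping an application event onto a social event:

```java
public class AdapterSketch {

    // A hypothetical CDI event payload raised by the application
    static class DocumentSavedEvent {
        final String user;
        final String docName;
        DocumentSavedEvent(String user, String docName) {
            this.user = user;
            this.docName = docName;
        }
    }

    // A simplified stand-in for the persisted social event
    static class SocialEvent {
        final String user;
        final String type;
        final String description;
        SocialEvent(String user, String type, String description) {
            this.user = user;
            this.type = type;
            this.description = description;
        }
    }

    // The adapter's core responsibility: translate one event type into the other
    static SocialEvent translate(DocumentSavedEvent cdiEvent) {
        return new SocialEvent(cdiEvent.user, "DOCUMENT_SAVED",
                cdiEvent.user + " saved " + cdiEvent.docName);
    }

    public static void main(String[] args) {
        SocialEvent e = translate(new DocumentSavedEvent("mark", "readme.md"));
        System.out.println(e.type + ": " + e.description);
    }
}
```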

This is the basic architecture and workflow:

Basic Architecture

  • Timelines

There are several ways to interact with and display a timeline. This section briefly describes each of them.

a-) Atom URL

Social Activities provides a custom URL for each event type. This URL is accessible at http://project/social/TYPE_NAME.

A user's timeline works the same way, being accessible at http://project/social-user/USER_NAME.

Another nice feature is that an adapter can provide its own pluggable URL filters. By implementing the getTimelineFilters method of the SocialAdapter interface, it can do anything it wants with its timeline. These filters are accessible via query parameters, e.g. http://project/social/TYPE_NAME?max-results=1.

b-) Basic Widgets

Social Activities also includes some basic (extensible) widgets. There are two types of timeline widgets: simple and regular.

Simple Widget

Regular Widget

The ">" symbol in the 'Simple Widget' is a pagination component. You can configure it through a simple API: a SocialPaged( 2 ) object creates a pagination with a page size of 2. This object helps you customize your widgets, providing the methods canIGoBackward() and canIGoForward() to decide when to display navigation icons, and forward() and backward() to set the navigation direction.
The Social Activities component has initial support for avatars. If you provide a user e-mail to the API, the corresponding Gravatar image will be displayed in these widgets.
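A minimal sketch of how such a paging cursor could work (hypothetical code; only the four navigation method names come from the text, everything else is assumed):

```java
public class PagedCursor {

    private final int pageSize;
    private final int totalItems;
    private int offset = 0;

    public PagedCursor(int pageSize, int totalItems) {
        this.pageSize = pageSize;
        this.totalItems = totalItems;
    }

    // Used by the widget to decide whether to display navigation icons
    public boolean canIGoForward()  { return offset + pageSize < totalItems; }
    public boolean canIGoBackward() { return offset > 0; }

    // Set the navigation direction by moving the window one page at a time
    public void forward()  { if (canIGoForward())  offset += pageSize; }
    public void backward() { if (canIGoBackward()) offset -= pageSize; }

    public int offset() { return offset; }
}
```

With a page size of 2 over 5 items, the cursor starts at offset 0, can move forward twice (to offsets 2 and 4), and then only backward.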

c-) Drools Query API

Another way to interact with a timeline is through the Social Timeline Drools Query API. This API executes one or more DRLs on a timeline over all cached events. It's a great way to merge different types of timelines.

  • Followers/Following Social Users

A user can follow another social user. When a user generates a social event, the event is replicated into the timelines of all his followers. Social Activities also provides a basic widget to follow another user, show all social users, and display a user's following list.

It is important to mention that the current implementation lists social users through a "small hack": we search the default Uberfire Git repository for branch names (each Uberfire user has his own branch) and extract the list of social users.

This hack is needed because we don't have direct access to the user base (due to the container-based authentication).

  • Persistence Architecture

The persistence architecture of Social Activities is built on two concepts: a local cache and file persistence. The local cache is an in-memory cache that holds all recent social events. These events are kept only in this cache until the max-events threshold is reached. The size of this threshold is configured by a system property (default value 100).

When the threshold is reached, Social Activities persists the current cache to the file system (the social branch of the system.git repository). Inside this branch there is a social-files directory with this structure:

  • userNames: a file that contains the names of all social users
  • a file for each user (named after him) that contains his user data as JSON
  • a directory for each social event type
  • a directory USER_TIMELINE that contains the user-specific timelines

Each directory keeps a LAST_FILE_INDEX file that points to the most recent timeline file.

Inside each file, there is a persisted list of Social Events in JSON format:

({"timestamp":"Jul16,2014,5:04:13PM","socialUser":{"name":"stress1","followersName":[],"followingName":[]},"type":"FOLLOW_USER","adicionalInfo":["follow stress2"]})

Between the JSON entries, the size in bytes of each JSON is stored as a hex number. The file is read by Social Activities in reverse order.
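The idea behind the size records can be sketched with plain strings (a hypothetical layout; the real file format differs in detail): each JSON entry is followed by its byte size, so the newest entries can be located from the end of the file without parsing it from the beginning.

```java
import java.nio.charset.StandardCharsets;

public class ReverseLog {

    // Append a JSON entry, followed by its size in hex on its own line
    static String append(String log, String json) {
        int size = json.getBytes(StandardCharsets.UTF_8).length;
        return log + json + "\n" + Integer.toHexString(size) + "\n";
    }

    // Read the most recent entry starting from the end of the log
    static String lastEntry(String log) {
        String[] lines = log.split("\n");
        int size = Integer.parseInt(lines[lines.length - 1], 16);  // trailing size record
        String json = lines[lines.length - 2];
        if (json.getBytes(StandardCharsets.UTF_8).length != size) {
            throw new IllegalStateException("corrupt entry");
        }
        return json;
    }

    public static void main(String[] args) {
        String log = append(append("", "{\"type\":\"FOLLOW_USER\"}"), "{\"type\":\"POST\"}");
        System.out.println(lastEntry(log));
    }
}
```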

The METADATA file currently holds only the number of social events in that file (used for pagination support).

It is important to mention that this whole structure is transparent to the widgets and pagination. All the file structure and respective cache are MERGED to compose a timeline.

  • Clustering
In case your application uses Uberfire in a clustered environment, Kie Social Activities also supports distributed persistence. Its cluster synchronization is built on top of the UberfireCluster support (Apache Zookeeper and Apache Helix).

Each node broadcasts social events to the cluster via a cluster message SocialClusterMessage.NEW_EVENT containing the Social Event data. With this message, all the nodes receive the event and can store it in their own local cache. At that point all node caches are consistent.
When the cache of a node reaches the threshold, it locks the file system and persists its cache. The node then sends a SOCIAL_FILE_SYSTEM_PERSISTENCE message to the cluster, notifying all the nodes that the cache has been persisted on the file system.
If any node receives a new event during this persistence process, that stale event is merged during the synchronization.

  • Stress Test and Performance

In my GitHub account there is an example stress-test class used to test the performance of this project. This class isn't included in our official repository.

The results of that test show that Social Activities can write ~1000 events per second on my personal laptop (MacBook Pro, Intel Core i5 2.4 GHz, 8 GB 1600 MHz DDR3, SSD). In a single-instance environment it writes 10k events in 7s, 100k in 48s, and 500k in 512s.
  • Demo
A sample project of this feature can be found in my GitHub account, or you can just download and install the war of this demo. Please note that this repository has moved from my account to our official uberfire extensions repository.

  • Roadmap
This is an early version of Kie Uberfire Social Activities. In the next versions we plan to provide:

  • A "Notification Center" tool, inspired by OSX notification tool; (far term)
  • Integrate this project with dashbuilder KPI's;(far term)
  • A purge tool, able to move old events from filesystem to another persistence store; (short term)
  • Support for customized widget templates; in this version we only provide the basic widgets (near term)
  • A dashboard to group multiple social widgets.(near term)

If you want to start contributing to open source, this is a nice opportunity. Feel free to contact me!

by ederign ( at July 18, 2014 07:40 PM

Thomas Allweyer: My New Book: A Practice-Oriented Introduction to Business Process Management Systems

The new book is about Business Process Management Systems (BPMS), i.e. systems for executing processes. What is the best way to learn how such a system works? By trying it out yourself. Just as one writes and runs many example programs when learning a programming language, a newcomer to BPMS should model and execute as many processes as possible. For this reason the book contains over 50 example processes, which can be downloaded from the book's website and tried out.

Among them are not only the simple standard processes used in typical beginner tutorials, but also implementations of more complex tasks, such as multiple participants, exception handling, collaboration between several processes in different systems, and many more.

Process modeling with BPMN plays a central role throughout. An executable process, however, consists not only of a process model, but also of numerous further elements, such as data, user dialogs, user roles and organizational structures, business rules, application functionality, and so on. These aspects, too, are explained in detail and applied in practice by means of many further examples. The reader thus learns how to create and use complex data objects, define message flows, specify user dialogs and screen flows, write scripts, integrate web services, select users dynamically, employ decision tables, and much more.

The handling of individual steps in the process portal and the administration of a BPMS are also covered, as are process monitoring and controlling. The book deliberately focuses on the classical BPMS concept. More recent developments such as Adaptive Case Management or Social BPM are mentioned but not treated in depth, since much in these areas is still in flux. The classical BPMS concept will continue to play an essential role in the future, above all for standardized processes, and sound knowledge of the established BPMS approach is an important prerequisite for understanding newer developments.

So that every reader can try out the example processes and develop them further, they were created with the freely available, free-of-charge Community Edition of Bonita BPM. The fundamentals taught in the book are nevertheless general and can be transferred to other BPM systems. Since every system has its peculiarities, some passages explain by way of example how a particular aspect was implemented in Bonita. The underlying principle should be found in every typical BPM system, although the concrete implementation may differ. The book contains no details on operating Bonita; the information needed to execute the processes with Bonita can be found on the book's website.

The book is therefore also useful for users of other BPMSs. Bonita can easily be installed as an additional learning environment on ordinary PCs. An additional learning effect arises from implementing individual example processes in another system. I am very interested in such experiences and am happy to publish processes ported to other systems on the website.

Since the feature set of the Bonita Community Edition is not as extensive as that of some commercial systems, creative solutions and workarounds had to be developed in several places. For example, the system provides neither complex nor event-based gateways. From a didactic point of view such limitations are often not bad at all, since working out how to achieve the desired behavior by other means is particularly instructive.

The book is aimed at all newcomers to Business Process Management Systems who want not only to understand the concepts in theory but also to apply them in practice. The target audience thus includes students of computer science, business informatics and related fields on the one hand, and developers and process modelers from industry who want to work their way into the subject on the other. Ahead of a system selection it is also useful to have already engaged intensively with the concrete problems of BPMS-based development, in order to discuss with vendors at eye level and ask concrete questions.

And here a small raffle: anyone who would like to receive the book free of charge can send a mail with the subject "Verlosung BPMS-Buch" by July 31, 2014. Three copies of the book will be raffled among all entrants. Participants agree that, in the event of a win, their name and town will be published. The judges' decision is final.

Website for the book – with the processes available for download
Order the book at amazon.

by Thomas Allweyer at July 18, 2014 09:13 AM

July 11, 2014

Keith Swenson: bpmNEXT talk on Personal Assistants

Here is a video from my presentation at bpmNEXT in March 2014, presenting the idea that in the future we might see a kind of agent, which I call a personal assistant, cloning and synchronizing projects such that large-scale processes actually emerge from the interactions of these agents.


The presentation stands on its own (you can access the slides at slideshare), so I won’t repeat any of that here, but rather give you some of the context.

bpmNEXT is a meeting of the elite in the process technology world, and it is always a great thrill to meet and debate with everyone all together in one place.  Asilomar is such a nice location to hang out, and the hosts always make sure there is plenty of wine to lubricate the conversation.  About 6 months earlier Jim Sinur released a new book talking about agents, and I think a lot of people are rather misinformed about agents.  In a certain sense, a BPM suite is actually just an agent, because it is programmable.  If programmability and autonomy are all there is to an agent, then what is the big deal?  So I kept asking every person attending the conference: “what is an agent?”  Is this really something new, or just the same old thing with inflated terminology?

I think there is a real use for an agent to help work out the interface between different domains of control.  That is a really difficult problem.  The SOA people ignored it, and simply said that we would have WSDL interfaces in UDDI repositories.  WSDL does not work because it does not define the meaning behind the data values.  Data values are defined only by name and type, which really tells you nothing.  Different organizations typically use different names for the same thing, so a WSDL interface falls down when the names don’t match.

What if an autonomous agent could work out those details for us?  Within my organization it is pretty easy to come to agreement on terms and processes, but when bridging to another organization, there is a whole negotiation that needs to go on.  You can easily imagine an interchange something like this:

  • Agent A:  Hey there!  I have some work to be done, could you do it?
  • Agent B:  Well, yes, I do consulting from time to time, what do you need done?
  • Agent A: I can’t really tell you until you sign the non-disclosure.
  • Agent B: Well, what kind of work would it be, and I can tell you if I might do it.
  • Agent A: It is in the area of helping with a patient.  Do you help with skeletal problems of the back?
  • Agent B: Yes, I help a lot of people with back problems, it sounds like the sort of thing I might be able to help with.  What time frame are we talking about?
  • Agent A: Patient is in mild discomfort, so I would expect a consultation in the next two weeks would be acceptable.
  • Agent B: Great I have several openings next week.  What kind of non-disclosure agreement should be set up?
  • Agent A: The normal.  Here (passing document) is the standard form.  I see we have used this same form in the past.
  • Agent B: OK, I have noted that this agreement is in force with this patient.  Can I have the name of the patient?
  • Agent A: It is ‘Alex Demo’ and here is the task that is assigned: “investigate back problem”.   Would you like to take this assignment?
  • Agent B: Yes, I automatically accept tasks with that description.  Can you give me the pointer to the case folder?
  • Agent A: OK, the task has been marked as accepted, and you have been given rights as an ‘attending subspecialist’.  Here (passing URL) is the link.
  • Agent B: OK, I am downloading the associated files, and I will take it from here.  I will update you when I have some results.
  • (Agent B notifies Charles about the new case, and at the same time sends a request to Alex for preferred appointment times.)

The dialog is described using the first person pronoun ‘I’ but understand that the agents are speaking on behalf of their owners.  The owners have ‘programmed’ in some sense of the word, the agents to take these actions on their behalf.  That is why I use the term “personal assistant”.

The point about this exchange is that we programmers always want to simplify this into a single exchange:  (1a) send the job request, and (1b) receive the result back.   This exchange instead makes use of progressive disclosure on both sides.  The delegating side does not want to disclose information about the patient until it is clarified that the receiving party is willing and able to help.  Similarly, the receiving side may not want to disclose the full laundry list of services that can be performed, especially when different parties describe those tasks using different terms.  I have probably grossly oversimplified the exchange over the work to be done, which very well might include identifiers of specific work drawn from standard tables of services.  Also, keep in mind that the requester does not really know what actual treatment is needed:  part of Charles’ job is to determine that.  So the exchange is not really about performing a particular treatment, but rather about taking ownership of the case for a particular aspect of solving the problem.

Agent B might have all sorts of rules that need to be tested or satisfied before accepting the job.  Agent A might have rules as well, such as probing for background information on previous patients.  It is possible that information is being gathered so that the humans can then make the decision to offer/accept the task before proceeding.  The high-level takeaway is that there is not simply a WSDL definition on one side, and a call to the service on the other.

In light of all this, I am demonstrating a framework and a protocol that can accomplish this kind of negotiation.  Yes, it has to get a lot more elaborate, but we have to start someplace, and that place is in basic referral, replication, and synchronization of case data.

What really drives me is the way that this will cause processes that emerge directly from the rules.  Over time, pathways will emerge, from medical centers to supporting specialists, to pharmacies and other service providers.  Just like it is in the business world, each party decides the kinds of jobs it will offer and/or accept depending upon the specialization of the person.   The processes themselves can form out of those rules without being specified in elaborate detail in advance.  The processes that emerge will be resilient and will automatically adapt to environmental changes.  It is a whole new world.

by kswenson at July 11, 2014 10:00 PM

July 10, 2014

Keith Swenson: AdaptiveCM Workshop in Germany September 1

Things are shaping up for a really great workshop to spend a day talking about the latest research findings and possibilities for Adaptive Case Management.  It will be September 1 in Ulm, Germany.  I am hoping to see all of those Europeans who have a hard time getting the travel budget to come to America.  Register now.


8:00-9:00 – Registration
Session 1: Opening (Ilia Bider)
9:00-09:15 – Presentation of participants
9:15-10:30 – Keynote: “There is Nothing Routine about Innovation”. Keith Swenson
10:30-11:00 – Coffee Break
Session 2. Research. Session (Keith Swenson)
11:00-11:30 “Research Challenges in Adaptive Case Management: A Literature Review”. Matheus Hauder, Simon Pigat and Florian Matthes
11:30-12:00 “Examining Case Management Demand using Event Log Complexity Metrics”. Marian Benner-Wickner, Matthias Book, Tobias Brückmann and Volker Gruhn
12:00-12:30 – “Process-Aware Task Management Support for Knowledge-Intensive Business Processes: Findings, Challenges, Requirements”. Nicolas Mundbrod and Manfred Reichert
12:30-14:00 Lunch
Session 3. Practice
14:00-14:30 “A Case for Declarative Process Modelling: Agile Development of a Grant Application System”. Søren Debois, Thomas Hildebrandt, Morten Marquard and Tijs Slaats
14:30-15:00 “Towards a pattern recognition approach for transferring knowledge in ACM”. Thanh Tran Thi Kim, Christoph Ruhsam, Max J. Pucher, Maximilian Kobler and Jan Mendling
15:00-15:30 “How can the blackboard metaphor enrich collaborative ACM systems?”. Helle Frisak Sem, Steinar Carlsen and Gunnar John Coll
15:30-16:00 – Coffee Break
Session 4. Ideas
16:00-16:30 “Towards Aspect Oriented Adaptive Case Management”. Amin Jalali and Ilia Bider
16:30-17:30 – Brainstorming
17:30-17:45 – Closing


Separately, I will also be demonstrating the Cognoscenti system as an open source platform for use in research around adaptive case management.

Hope to see you there

by kswenson at July 10, 2014 09:03 PM

July 02, 2014

John Evdemon: Blog moved

I'm finally starting to blog again, but I've decided to move to a different platform. My new blog is at and has two formats: a Noteblog and a traditional long-form blog. Most of my Twitter posts are available on my Link Blog. ... (read more)

by John_Evdemon at July 02, 2014 09:45 PM

June 27, 2014

Drools & JBPM: Compiling GWT applications on Windows

If you're a developer using Microsoft Windows and you've ever developed a GWT application of any size, you've probably encountered the command-line length limitation.

The gwt-maven-plugin constructs a command line statement to invoke the GWT compiler containing what can be a very extensive classpath declaration. The length of the command line statement can easily exceed the maximum supported by Microsoft Windows, leaving the developer unable to compile their GWT application without resorting to tricks such as mapping a drive to their local Maven repository, thus shortening the classpath entries.
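As a concrete illustration of the drive-mapping trick mentioned above (the drive letter and paths are illustrative choices, not anything prescribed by the plugin): on Windows, `subst R: %USERPROFILE%\.m2\repository` maps a short drive letter onto the local Maven repository, and `settings.xml` can then point Maven at it, shrinking every classpath entry the plugin generates:

```xml
<!-- ~/.m2/settings.xml (sketch): after running
       subst R: %USERPROFILE%\.m2\repository
     the repository is reachable under a much shorter path, so every
     classpath entry handed to the GWT compiler becomes shorter. -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <localRepository>R:\</localRepository>
</settings>
```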

Hopefully this will soon become a thing of the past!

I've submitted a Pull Request to the gwt-maven-plugin project to provide a more concrete solution. With this patch the gwt-maven-plugin is able to compile GWT applications of any size on Microsoft Windows without developers needing to devise tricks.

Until the pull request is accepted and merged you can compile kie-drools-wb or kie-wb by fetching my fork of the gwt-maven-plugin and building it locally. No further changes are then required to compile kie-wb.

Happy hunting!

by Michael Anstis at June 27, 2014 04:24 PM

Thomas Allweyer: Modellierung, Simulation und Ausführung in der Cloud mit IYOPRO

The product name IYOPRO is an abbreviation of “Improve Your Processes”. The cloud-based solution indeed offers a great deal that can be very useful for process improvement, ranging from process modeling through simulation and activity-based process costing to process execution.

Particularly noteworthy is the seamless integration of all these capabilities. With many other products you need several separate components, sometimes even from different vendors, to cover what is fully integrated in IYOPRO. For example, no separate deployment to a server is required to execute a process, since it resides in the integrated repository from the start. The model editor and the process portal for process execution are operated through the same unified browser interface as the simulation and reporting features.

Even the feature set of the free basic version for process modeling is remarkable and in many respects goes beyond what one is used to from free modeling tools. Being able to create hierarchical process maps and BPMN collaboration diagrams is not that unusual. But IYOPRO also offers multi-language support, permissions management, collaborative team modeling, generation of process documentation in Word format, and animation of the sequence flow. Elsewhere, this is otherwise found only in paid offerings.

Modeling in the browser is very fluid and intuitive. Many activities, such as aligning symbols, selecting the next element, or fitting the entire diagram into the modeling window, can be carried out quite elegantly. And anyone who wants to edit the vertically displayed label of a horizontal pool does not have to tilt their head: the model rotates 90 degrees for text entry, and afterwards rotates back to its original position. Such details help determine how pleasant the work is for the modeler. The built-in conformance check points out violations of the BPMN syntax or, for example, elements that have not been labeled.

Anyone who wants to simulate processes or execute them with the integrated process engine must opt for one of the paid IYOPRO editions. Dedicated model types are available for the additions to the BPMN models needed for process execution. For example, organizational charts can be modeled as the basis for role definitions, as can data models for generating database schemas. There is also a form editor, integration of web services, and further tools of the kind a capable BPM system requires.

A particular strength of IYOPRO is its sophisticated component for the dynamic simulation of processes. It allows a very precise specification of the process logic, with a wide variety of statistical distributions, resource requirements, shift calendars, and so on. The simulation is also used in particular as a tool for activity-based process costing: for shared resources, the simulation can determine what share of their usage falls to each individual process, so that the corresponding costs can be allocated more accurately to their causes. A simulation does initially require considerable effort for data collection and validation, but the savings achieved through uncovered optimization potential and better decision-making can pay that back quite quickly.

Certainly there are modeling suites on the market with a larger repertoire of methods, and there are process execution systems with a broader range of functions. In return, IYOPRO scores with its high degree of continuity across all components, from business-oriented modeling through analysis to execution. As a cloud solution it requires no installation, and the only costs incurred are the ongoing software license fees. This combination should be very attractive, particularly for many mid-sized companies.

by Thomas Allweyer at June 27, 2014 06:50 AM

June 25, 2014

Let’s go US

We are excited to announce the official incorporation of camunda Inc., registered in San Francisco, California. Camunda Inc. will market our product camunda BPM in North America. Besides FINRA and Sony, there are already several US based enterprise edition customers, and with BP3 and Trisotech, there are also strong partners available for consulting services around [...]

by Jakob Freund at June 25, 2014 09:31 PM

June 24, 2014

Keith Swenson: Late-Structured Processes

The term “unstructured” has always bothered me, because without structure you have randomness.  When knowledge workers get things done, it is not random in any way.  They accomplish things in a very structured way, it is just not possible to know ahead of time how it will be structured.

Last week at the BPM & Case Management Summit I presented my talk on how different technology should be brought to bear based on how predictable the work being supported is.  There is work on the left of the spectrum that is very predictable, and on the right very unpredictable.

Examples of highly predictable work are the work being done at an automobile factory or a fast food restaurant.  This work is predictable mainly because the environment is carefully controlled.  The factory is designed to supply the right things at the right time, and while there may be some (anticipated) variability in the mix of models being produced, one can clearly predict that each car will need four tires, mounted on four rims, attached to the wheels, etc.  A fast food restaurant takes an order, and fulfills it in a few minutes in a very repeatable way.

As you move to the right across the spectrum, we consider shorter predictability horizons.  Integration with other IT systems (the second pillar) means you have to be prepared on a monthly/yearly scale for systems to change.  Human processes (the third pillar) need to cope with people going on vacations, getting sick, learning new skills, and changing positions, with a weekly/monthly predictability horizon.  The fourth pillar is production case management, where the operations that one might do are well known, but when to do them is decided on a daily basis.  With adaptive case management (fifth pillar) you also have an hourly/daily predictability horizon, but the operations themselves cannot always be known in advance, and the knowledge worker plays a bigger role in planning the course of events.

Now compare the predictability horizon with the length of the process.  In the case of the fast food, I can predict a month in advance how a particular type of food will be prepared (after the order is received), and it only takes a couple of minutes to do the preparation.  We call this predictable because the process is much shorter than the predictability horizon.  The other extreme might be patient care, which can take months or years, while our ability to predict is quite a bit shorter than that.  New procedures, new treatments, and new drugs are continually entering the market, while a given patient episode might last months or even years.  While treating the patient, decisions are made, and the course of treatment can be predicted for certain durations; it is just that those durations are shorter than the overall process.  When this situation occurs, we call it unpredictable, because we cannot say, when the process begins, how the process will unfold.

Patient care is not random and it is not unstructured.  Unstructured implies that there is no thinking being done, that there is no planning necessary, and that there is no control.  The truth is exactly the opposite: there is quite a bit of thinking and planning being done, and there is quite a bit of control of what happens.  The work is not unstructured, it is simply structured while the work is going on.  The planning and the working happen at the same time, and not as discrete phases in the lifecycle of the process.

For this reason I propose the term “late-structured” to explain what knowledge workers do in case management.   They actively plan and structure the work, it is just that they don’t do it as a separate phase.  There are other implications of this:  since you can not separate the planning from the working, clearly both the planning and the working need to be done by the same person.  Knowledge workers must plan, to some extent, their own work.   Also, there is little point in creating elaborate models of the work, since further planning will change that, and it is likely that each instance of the process will be unique.

There is no loss of control.  Late-structured processes can still be analyzed after the fact the same way that any process can, and so one can assess how efficiently the work was done, as well as whether it complies with all the laws and customs.

When using the term “unstructured,” it is easy to get confused about the nature of the work, thinking instead that things unfold randomly in an uncontrolled way.  If you think about it as late-structured work, where the length of the process is longer than the ability to predict what will happen, but prediction and planning still proceed, you gain a better understanding of what is really going on.

by kswenson at June 24, 2014 06:05 PM

Thomas Allweyer: Version 3.0 BPM Common Body of Knowledge jetzt auf Deutsch erschienen

After the English edition of the BPM Common Body of Knowledge, version 3.0, had already been on the market for some time, it has now been published in German as well. That it took a while is due to the fact that the English text was not merely translated: a total of ten authors adapted it to the circumstances in the German-speaking countries.

I have already written a blog post about the English edition.

Guido Fischermanns offers some remarks on the German edition in his blog.

European Association of Business Process Management EABPM (ed.):
BPM CBOK® – Business Process Management BPM Common Body of Knowledge, Version 3.0, Leitfaden für das Prozessmanagement
Verlag Dr. Götz Schmidt, Wettenberg 2014.
The book at amazon.

by Thomas Allweyer at June 24, 2014 10:13 AM

June 23, 2014

Sandy Kemsley: BPM In Healthcare: Exploring The Uses

I recently wrote a paper on BPM in healthcare for Siemens Medical Systems: it was interesting to see the uses of both structured processes and case management in this context. You can download it...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 23, 2014 02:11 PM

June 21, 2014

Keith Swenson: BPM and Case Management Summit 2014

Here are some notes from this year’s BPM & Case Management Summit in Washington DC.

Wow, what a conference!  This is the first major summit that includes case management.  The location was excellent, and so was the venue: The Ritz.  A number of new vendors were there, particularly in the case management space: Frame Solutions, AINS eCase, Emerge Adapt Case Blocks.  It was great to see so many old friends, and to make some new ones as well.  It was nice to see Connie Moore, who was awarded the Marvin L. Manheim Award for Significant Contributions in the Field of Workflow.


Panorama of the meeting room, thanks to Chuck Webster

Jim Sinur

The first keynote was given by Jim Sinur, who said that Adaptive Case Management is the on-ramp for intelligent business processes.  It was a good overview of the current situation in process management: old-style automation is doing well, but the current challenges are newer, more flexible, less structured, and more knowledge worker oriented processes.

He presented the spectrum of process types, as well as his process IQ five-axis spider chart.  He challenged us to ask what process will be like when we have the equivalent of 1000 Watsons available in the cloud to research answers to questions for us.  He reinforced that we will have ‘personal assistants’ to help us run our processes.


It was quite an honor to see two people from the Norwegian Food Safety Authority (NFSA).  I have written about this use case before.  It is such an important use for the kind of flexibility that case management affords.  The most interesting comment came at the end, in response to a question: even though extensive use cases were created to explore and understand what the users needed to be able to do, no modeling was done in BPMN or CMMN.  Instead, the text of the use case was taken directly to the ‘Task Template’, which is a simple list of tasks that drives a particular scenario.

Setrag Khoshafian

Talked about the “internet of things” (IoT). The market is estimated in the trillions of dollars. Big data today is nothing compared to what we will have when all these things start chatting with each other. “The largest and most durable wearable computer will be the car.” The process of everything.

Used the acronym Social Mobile Analytic Cloud Things: SMACT

Where is the knowledge? You might have policy and procedure manuals, however you still need access to experts. Sometimes it is all written down, but only certain people know how to understand and interpret what is written. Applications are developed, but then changed, and the design artifacts no longer match. Knowledge is sometimes represented in the code, and also in the patterns of interactions. You can extract this (process mining), and the results are often surprising.

He presented a spectrum of work along these lines:

  1. system, very structured work – flow charts, very popular, useful
  2. clerical worker
  3. knowledge assisted worker. This is the majority of white collar workers. They get assistance from various types of intelligence in the BPM environment.
  4. knowledge worker: unstructured, dynamic. Knowledge workers do not like to be told what to do.

One problem with self-driving cars is that they can get hacked. Can we really assume that this will be taken care of?

Device-directed warranty scenario: Imagine there is a sensor that determines that the CO2 level in a car is too high. It sends a message to the manufacturer, which brings this together with product info, customer info, and warranty info into a CASE. Then it is determined that service is required, and the right people are notified. Then a sub-case for a service order, and a sub-case for a warranty claim.  This is an idea of the kind of thing that might be possible today with the IoT.


Presentation of the Living Systems Process Suite, where goals drive everything. A governance goal describes how something should be achieved in order to be optimized. Layered process scoping: strategic goals over multiple instances, tactical goals for a particular item or case, then process activities. When you get down to the process level they use BPMN. These layered goals give them the ACM capability.

They call them “agents” because they act as independent process evaluators: the current situation is compared against the conditions you set, to bring the system in line with the goals.  If the current state is later found to be wrong, the agent can kill that process and start another. Agents are intelligent enough to start, stop, and modify running processes, and can insert ad-hoc tasks (issuing a request, performing a query, acting on results).

A question was asked: what about conflicting goals? Goals are in a hierarchy, and that helps prioritize the agents, but you need to take care when designing the goals to avoid a deadlock situation.
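The agent loop described above (evaluate the current state against a goal's condition, kill a process found to be off track, start a replacement) might be sketched roughly as follows. All class and method names here are my own illustrative assumptions, not the Living Systems Process Suite API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a goal-driven agent: each evaluation cycle compares
// the current case state against the goal's condition; a process that no
// longer fits is killed and replaced. Names are illustrative only.
class GoalAgent {
    interface Goal { boolean satisfiedBy(Map<String, Object> state); }
    interface Process { void kill(); }

    private final Goal goal;
    private Process running;
    private final List<String> log = new ArrayList<>();

    GoalAgent(Goal goal, Process initial) {
        this.goal = goal;
        this.running = initial;
    }

    /** One evaluation cycle: restart the process if the goal is violated. */
    void evaluate(Map<String, Object> state, Process replacement) {
        if (!goal.satisfiedBy(state)) {
            running.kill();          // stop the process that no longer fits
            running = replacement;   // and start an alternative
            log.add("restarted");
        } else {
            log.add("ok");
        }
    }

    List<String> log() { return log; }
}
```

A goal hierarchy, as mentioned in the answer above, would then prioritize which agent's `evaluate` wins when two goals disagree.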

Clay Richardson

First keynote on the second day, excellent as well, about “design thinking.”  He sees BPM systems moving from holistic to specific, from linkages to context, from logic to empathy, and from deductive logic to abductive logic.

One of the keys is empathy.  Not empathy with the system, but empathy with the customer.  We might see a transition from process models to journey maps, from capability maps to personas, and from target operating model (TOM) to storytelling (of how the customer engages).  He feels there are two camps: transaction BPM and engagement BPM.

He cited an example of a Domino’s Pizza app: it shows where the pizza is in the process: tossing it, in the oven, on the way, or the delivery person knocking on the door.  This is more than just the minimum needed to buy a pizza; it really represents the customer’s desire to know what is happening now.

Instead of focusing on cost efficiency, we should focus on revenue growth.  Reconnect to the customer journey and the customer experience.

Roger Baker, Chief Strategy Officer, Agilex

Gave an excellent talk on agile methodology and why it is needed.  The agile method is defined as 2-week sprints, small teams, requirements discovery, constant prioritization, continuous testing, frequent small releases, and communications, communications, communications.  About 1/3 of what is in a requirements document are things the writers wish they had but will never use.  He said these are like the “froth on the beer” — you want to see it but it is otherwise not useful.  Agile development is a full-contact approach, from execs to workers, with strict adherence to schedule.  The hardest part is “truth telling” — people don’t want to tell you they are having a problem, but if they stay quiet, problems can explode.  Raise a problem when you see it, and get help.  If you have a problem and stay quiet, then we will find someone else to do the job.

He shifted the VA to an agile approach, and they were delivering, so Congress passed a new law in January 2011 which changed all the rules.  The VA delivered on 83% of milestones.  You have to plan on some failures, and when they happen, fail fast.

Waterfall assumes:

  • detailed requirements are clear from the beginning of the project
  • requirements don’t change
  • progress can be measured by documents produced
  • mega-programs are manageable by normal humans
  • IT systems are IT’s responsibility

Agile assumes

  • Detailed requirements are NOT clear; users will know it when they see it
  • Requirements and priorities will change
  • produced software is the only measure
  • users and management need constant reassurance
  • everyone must be involved

Only the business knows the process.  Business must take ownership of the process.


Steinar Carlsen

Talking about organizations, and value formation. People do tasks. They don’t necessarily do processes. They have to relate to customers, authorities, partners, and in a constant flux of change.

How is coordination of value production achieved? Email? Hearsay? SharePoint? Proposition: there should be an integrated task management system. When a task spins off another task, you have an emergent task management system.

Step details: mandatory, repeatable, pre-condition, include-condition, post-condition.

To design tasks, they use a “knowledge editor”. It is not a graphical tool, but text based, and saved in XML.


Rudy Montoya – CIO, Texas Attorney General

Keynote speaker on the third day.  He was involved in creating case management systems for things like crime victims’ compensation and legal case management.

He gave the example of the explosion in West, Texas.  When it happened, they had to respond at a time when they had no idea whether this was a crime or terrorism.  The old system required that all information be gathered before they created the case: they needed to verify that a crime had occurred before starting the case, and there is a lot of work necessary to get to that point.  Case management starts with the data that exists, and builds forward to the classification of the case and its particulars.

They solved this in about 12 months, implemented in three phases:

1) eliminated the legacy document management system
2) replaced the mainframe
3) implemented a web portal

Euan McCreath

Very interesting presentation on how Emerge Adapt have implemented a real adaptive case management system.  Great slide on the difference between an adaptive approach and a traditional approach:


Key elements defined were data structures. Then buckets. The process was very simple. Could create new buckets on the fly. New tasks could be created. Buckets are related to work queues. Could move from any state to any other state, but after a while certain moves were locked out by constraints in the process model.

My Talk

I presented the following slides:

And, as evidence, Charles Webster took this photo of me:


Sorry to everyone who gave talks that I was not able to see.  There were simply too many to see them all!

Other blog posts:

by kswenson at June 21, 2014 10:36 AM

June 19, 2014

Thomas Allweyer: Award für BPM-Initiativen im Bildungs- und Sozialbereich

Anyone working on process management in the education and social sectors can apply for the newly created “BPM2thePeople” award. The submitted initiative should serve as a role model, and thus also cover aspects of interest to other organizations. The second criterion assessed is the degree of innovation. And finally, it is about the efficient use of resources.

The award is offered by the Process Management Alliance, which originally emerged from an initiative of Lufthansa Technik and which organizes the annual .

The prize is endowed with 2,500 euros; the award ceremony will take place at this year’s on November 24 and 25 in Seeheim near Frankfurt.

You can apply here.

by Thomas Allweyer at June 19, 2014 12:26 PM

June 17, 2014

Webinar: BPMN with camunda BPM

I will give a webinar on July 17 about best practices around BPMN, especially in terms of business-IT alignment. Will this be a camunda BPM pitch as well? Of course! But hey, that’s how it goes: 1) Collect 4+ years of intensive consulting experience around BPMN, write a book etc. etc. 2) Discover that the [...]

by Jakob Freund at June 17, 2014 11:46 PM

BPM2thePeople Award – Spread the word and win a conference ticket!

This week, we are starting a new project to foster process management awareness in the education and social sectors. The BPM2thePeople Award is our prize for best-practice examples that increase the quality of processes in organizations from the education or social sector.

The winner of the award serves as an ideal for other organizations and supports the future development towards full establishment of BPM in these sectors.

Nowadays, many organizations from the education or social sector are still afraid of such topics and feel insecure about implementing BPM projects to improve the management of their processes. It is time to change that thinking, now!

All organizations from these two sectors (e.g., schools, kindergartens, universities, homes for the elderly, workshops for the handicapped) that invest in BPM could be the winner of the BPM2thePeople Award. The award will be handed over during our Process Management Conference in November 2014, and the winner will receive a prize of 2,500 euros.

The final decision will be made by a jury of BPM experts from business and research, based on the three criteria “role model function”, “innovation”, and “efficiency” of the projects. But even if an organization does not see its project in all of those dimensions, it should not hesitate to apply before the end of August!

Because this blog is primarily read by BPM professionals, we ask you to spread the word and invite people from the education or social sector to apply for the award. Please share this post or simply go to the website of the award and invite others:

As a THANK YOU, we will raffle off a ticket for this year’s Process Management Conference among all supporters.

Best regards,

by Mirko Kloppenburg at June 17, 2014 06:56 PM

June 16, 2014

Sandy Kemsley: Webinar On Collaborative Business Process Analysis In The Cloud

I’m giving a webinar on Wednesday, June 18 (11am Eastern) on social cloud-based BPA, sponsored by Software AG – you can register here to watch it live. I’ve written a white paper going into this...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 16, 2014 11:51 AM

Keith Swenson: Open Source Adaptive Case Management

Interested in trying out Adaptive Case Management without a huge investment?  Cognoscenti might be the option for you.  This post contains most of the contents of a paper I will be presenting in Germany in September on the Cognoscenti open source system, which I have used in demos at the last two BPMNext conferences.  For anyone wanting to experiment with ACM capabilities, a free solution might be worth trying.

The EDOC conference in Germany is mainly for researchers, and so most of this post focuses on ways to experiment with the capabilities, and less on simply using them out of the box.

Demo: Cognoscenti
Open Source Software for Experimentation on
Adaptive Case Management Approaches

Abstract: Cognoscenti is an experimental system for exploring different approaches to supporting complex, unpredictable work patterns. The tendency with such work environments is to build increasingly sophisticated interaction patterns, which ultimately overwhelm the user with options. The challenge is to keep the necessary cognitive concepts very simple, allowing the knowledge worker a lot of freedom, but at the same time offering structural support where necessary for security and access control. Cognoscenti is freely available as an open source platform with a basic set of capabilities for tracking documents, notes, goals, and roles, which might be used for further exploration into knowledge worker support patterns.


Fujitsu has leadership in the business process space going back to 1991. In 2008, the Advanced Software Design Team started a prototype project from scratch to explore innovative directions in enterprise team work support. Cognoscenti became the test bed for experimental collaboration features to demonstrate properties of an adaptive case management system for supporting knowledge workers. Features that proved to work well were subsequently implemented in the other products. In 2013 internal company changes left the project without any specific strategic value. Since some people were using it as a productivity tool for managing their work, the decision was made to make it available as an open source project for anyone to use and possibly to help maintain.

One experiment was to implement preliminary versions of the “Project Exchange Protocol”, which allows case management systems and business process management (BPM) systems to exchange notes, documents, and goals using only representational state transfer (REST) oriented web service calls. Cognoscenti is available as a free reference implementation of these protocols for testing of protocol implementations. This paper seeks to demonstrate the open source system, its capabilities, and how research projects might use the software for their own research.


Cognoscenti stores information in XML files in the file system. This was done for two reasons:

1) to avoid complication in installing the system. Requiring and initializing a database restricts the environments that it can be deployed to. XML offers a flexible schema that can be evolved efficiently – a task that can be quite complicated in a database. This allows prototype projects built on Cognoscenti to experiment easily with capabilities.

2) to allow direct manipulation of the files by users. The documents appear as files in the file system which can be opened and edited directly – even when the Cognoscenti server is not running. Changes are detected by file date and size.
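The change-detection rule described above (a file counts as modified when its date or size no longer matches the last-known snapshot) can be sketched like this; the class and method names are my own illustrative choices, not Cognoscenti's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

// Hypothetical sketch of Cognoscenti-style change detection: snapshot a
// file's size and last-modified time, then report a change whenever
// either attribute differs from the snapshot.
class ChangeDetector {
    private final long knownSize;
    private final FileTime knownTime;

    /** Take a snapshot of the file's current size and timestamp. */
    ChangeDetector(Path file) throws IOException {
        this.knownSize = Files.size(file);
        this.knownTime = Files.getLastModifiedTime(file);
    }

    /** True when the file's size or timestamp no longer matches the snapshot. */
    boolean hasChanged(Path file) throws IOException {
        return Files.size(file) != knownSize
            || !Files.getLastModifiedTime(file).equals(knownTime);
    }
}
```

In Cognoscenti the server would take such a snapshot when a document version is committed, and offer the commit option again once `hasChanged` reports true, even for edits made while the server was not running.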

Conceptual Object Model

The root of everything is an index which is initialized by scanning the file system. From this you can retrieve “Site” objects, “Project” objects, and “UserProfile” objects.

The Site object represents both a space on disk and an address space on the web. A Site has a set of owners and executives, all of whom are allowed to create projects in the site. A Site has a visual style that applies to all projects contained by that site. The site is mapped to a particular folder in the file system, and all of the contained projects are folders within that one.

The Project object is the space where most of the work takes place. A project has a collection of notes (small message-like documents with wiki-style formatting), attached documents, goals, roles, history, and email messages. All of the artifacts for a project are stored in the project folder on disk. There is a special sub-folder named “.cog” where all the housekeeping information about the project is kept, such as old versions of documents. When the server detects that a file has changed, it displays an option to the user to commit those changes, which causes a copy of that file to be saved as a version inside the housekeeping folder.

While Sites and Projects are represented in one directory tree, user information is kept in a folder that is disassociated from the sites and projects. The UserProfile object contains personal information for a particular user: OpenID addresses, email addresses, and settings. Because the user preferences are disassociated from the sites and projects, a user may play any role in any site or project without restriction. A user logs in once, and can access any number of projects and sites that they have access to.

Implementation Details

Cognoscenti is written in Java and runs in any servlet container, such as Apache Tomcat. The user interface is based on the Spring framework, with some browser-side capability from the Yahoo User Interface library and Google Web Toolkit; however, grafting a new user interface for specialized-purpose projects is easily supported.

The entire code base is licensed under the Apache license, freely available to anyone who wants it.

Innovative Concepts

Security and Access Control

Cognoscenti is first and foremost a collaborative case management system designed for many people to work safely with sensitive information, such as health care information, social worker information, and legal case information. Access control needs to be a primary consideration. It is easy, even trivial, to make a system that restricts access to particular artifacts to particular named users. But there is a problem with that: managing the many-to-many relationship between all the artifacts and users directly can be tedious and overwhelming. This leads either to users leaving access too open, so that too many people have access, or leaving it too restricted, so that people cannot get the information they need to do the job.

An indication that users are frustrated with the access control mechanism is seen when they take a document out of the document repository in order to email it to the people they want to give it to. This subversion of the access control mechanism is dangerous, because email itself is an unsafe medium for sensitive documents.

The developers of Cognoscenti view security as a usability problem: it must be easy enough to use that people get the security right, so that only the people who need access get it. These principles must be followed:

1) It must be easy for a normal, non-technical business user to express the correct security constraint to meet their needs.

2) Such an expression must meet the natural requirements of a social situation, and not merely the technical requirements of the system.

3) As teams change and evolve, the security constraints are constructed in such a way that they track the changing requirements, without needing tedious maintenance by the users.

4) No surprises: the meaning of the access control settings must be clear to non-technical users.

These requirements are considerably higher than most current systems. For example, the Windows file system requires the user to do a kind of set algebra in order to determine whether a particular user can see a particular document or not.

Affordances for Change

If the project is entirely static in terms of membership, it is not difficult to get any such system set up correctly so that the fixed set of members has proper access. However, projects are not static. Imagine a police detective working to solve a crime who needs the help of an expert. That expert will need access to the case folder. Imagine if the detective had to invite the expert and then go to every document to grant access. The preferred expert might not be available, and the job might be done by the expert’s assistant. Imagine if the detective then had to change the access control of all the documents. And once the immediate goal is done, it might be appropriate to remove their access. In a real project we expect new people to join and leave every day. It does not take much change before managing the access rights overwhelms the detective (and he resorts to email).

One experiment built into Cognoscenti is the idea that if a person is assigned to an active goal, they automatically get access to the documents. Goals also allow the person assigned to delegate the assignment to another person, in effect automatically giving that person access to the project folder without further trouble. This has an additional interesting aspect: when the goal is completed, the person doing the goal, if they have no other access, automatically loses access, which is appropriate in certain situations.
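The goal-driven access rule just described can be sketched as follows. This is a simplified illustration under assumed semantics (the class, method, and state names are hypothetical, not Cognoscenti's actual API): assignment to an active goal grants access, delegation transfers it, and completion withdraws it.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Goal-driven access control sketch: a user can read project documents if
// they are an explicit member, or are assigned to any currently active goal.
public class GoalAccess {
    enum State { ACTIVE, COMPLETED }

    private final Map<String, String> assignee = new HashMap<>(); // goal id -> user
    private final Map<String, State> state = new HashMap<>();     // goal id -> state
    private final Set<String> explicitMembers = new HashSet<>();

    public void addMember(String user) { explicitMembers.add(user); }

    public void assignGoal(String goalId, String user) {
        assignee.put(goalId, user);
        state.put(goalId, State.ACTIVE);
    }

    // Delegation reassigns the goal, implicitly transferring access with it.
    public void delegate(String goalId, String newUser) {
        assignee.put(goalId, newUser);
    }

    public void completeGoal(String goalId) {
        state.put(goalId, State.COMPLETED);
    }

    public boolean canAccess(String user) {
        if (explicitMembers.contains(user)) return true;
        for (Map.Entry<String, String> e : assignee.entrySet()) {
            if (e.getValue().equals(user)
                    && state.get(e.getKey()) == State.ACTIVE) {
                return true;
            }
        }
        return false;
    }
}
```

In the detective scenario, delegating the goal to the assistant grants the assistant access and (absent other roles) removes the expert's, with no per-document maintenance.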


It became clear that part of the solution would involve creating intermediate constructs, called roles, which represent groups of people who are treated equivalently. Roles, by themselves, are not very innovative, but in a standard implementation the maintenance of roles can be tedious and time consuming. Cognoscenti explores the usability problems around roles and their use.

Roles are highly contextual, so some experimentation was done to associate roles automatically with certain actions, or to have roles modified as the result of actions in a natural way that does not require extensive maintenance by the users. For example, adding a user to an email message might, optionally, also add that user to an associated role.

Roles were unified with the concept of a view. That is, a role is a group of people in a particular context, but it also contains elements that control how those people see the project. The reason for this is to reduce the number of different conceptual objects that the user must deal with.

Role names are also used as a form of tagging of the content. A document can be associated with particular roles as it is added into the folder, as a way of categorizing documents. Goals can be associated with roles, so that when a person is added to a role, they are automatically assigned the goals and gain access to the documents. The use of roles gives a lot of flexibility, but the challenge remains to make this easy enough that the case manager does not need to spend a lot of time creating roles ahead of time; instead roles are created easily, in a natural way, whenever the emerging case needs them.

Representation of Goals

Central to any work management system is the idea of tasks, activities, or goals. The challenge here was to explore the usability problems that prevent most users from keeping an accurate task list. Effort focused on how to make it really easy to create goals and assign them to others. Much attention was given to making goal lists as easy as a checklist. The challenge is to make creating a new goal, assigning a person to it, and notifying that person easier than sending an email asking someone to do something. If it is easier than email, people will use it. It also needs to be easy for the person receiving the request to access the case, even when they had no prior knowledge of that particular system.

An adaptive system needs to build up reusable templates over time, for use when similar situations are recognized in the future. It would be easy to provide a programming language of some sort to allow automation of future cases; however, this approach is not suitable because the intended knowledge workers are not themselves programmers. Effort was spent on making templates emerge from normal use of the system, without requiring programming-like activities.

The second challenge with templates was deciding what is and is not significant in a previous case. In some cases a previous use of a role should create a role with the same users in it, and in other cases the role should be empty.

A third challenge is deferred template use. Many template systems assume that the template will be known and invoked at the time of case creation. The problem is that users do not always know which template is appropriate at creation time. Knowledge workers will be handed a case to work on without knowing anything about it. The job of the knowledge worker is to discover the details and handle whatever work needs to be done, figuring it out on the fly. A knowledge worker needs a place to work, to start collecting those details, and later to determine which template to bring in.

Restructuring Over Time

Another challenge is that knowledge workers don’t necessarily know which parts will be significant at the time they start working. What might initially look like a simple goal might turn into a major project by itself. And sometimes what is expected to be a large project turns out to be trivial.

An experimental feature in Cognoscenti is the ability to create a simple goal and then, when it starts to look more complicated, put subgoals under it. If it continues to gain complexity, the original goal can be converted into a complete project of its own. Projects can be linked to goals in other projects, as if they were that goal. Status reports can be compiled from goals across multiple projects to make them look consolidated in one project. Many experiments were done on making it easy for users to convert back and forth between goals and projects.

Document Repository Support

Knowledge workers are often required to use organizational document repositories, and the philosophy behind Cognoscenti is that such repositories are good for organizations in general. The designers of Cognoscenti, however, built features to help knowledge workers who are required to use multiple repositories, often different document storage places for different aspects of their lives. For example, a doctor may keep patient data in the clinic system, but at the same time be part of a local university research organization that keeps its documentation in a different location, while the community outreach program they volunteer for has yet another.

One of the challenges with secure document repositories is letting coworkers who are involved in a project access the same information. For example, a doctor accepts a job to verify the results of a research paper located in a secure repository, but would like a recent intern to make the first pass. There are two standard ways to do this: download the file and email it to the intern, or print it out and hand over the hard copy. Both are unacceptable because if the document is updated in the original repository, the intern has no access to the updated version. It is equally unacceptable for the doctor to give the intern the username and password to access the repository directly.

Cognoscenti resolves this by using a synchronized copy. The doctor accesses the repository through Cognoscenti, which places a copy of the document into the project. Now the doctor can give the intern access to the copy. But the copy is synchronized with the original (optionally in both directions) so that changes in one can easily be refreshed into the other.
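The refresh decision for such a synchronized copy can be sketched as a timestamp comparison. This is a hypothetical simplification of the idea, not the actual Cognoscenti logic; a real implementation would also need conflict handling and repository authentication.

```java
// Decides which direction a synchronized attachment should be refreshed,
// by comparing modification timestamps of the upstream original and the
// local project copy. Sketch only; names are hypothetical.
public class SyncDecision {
    public enum Direction { UP_TO_DATE, PULL_FROM_UPSTREAM, PUSH_TO_UPSTREAM }

    public static Direction decide(long upstreamModified, long copyModified,
                                   boolean twoWay) {
        if (upstreamModified > copyModified) {
            return Direction.PULL_FROM_UPSTREAM;   // original is newer
        }
        if (twoWay && copyModified > upstreamModified) {
            return Direction.PUSH_TO_UPSTREAM;     // local edits flow back
        }
        return Direction.UP_TO_DATE;
    }
}
```

With one-way synchronization only the first branch ever fires, which matches the case where the intern reads but the repository remains authoritative.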

As you might imagine, this is technically quite easy to do, but making it usable, specifically making it easier than emailing a copy of the document, requires some careful thinking about the user interface.

Federated Case Support

Just as knowledge workers are required to use more than one document repository, Cognoscenti will not be the only case management system used by the pool of people who need to contribute to a case. Therefore, Cognoscenti is designed to live in a world where it presents views of a case to others, and other case systems have synchronized copies of those views. There is an explicit upstream/downstream relationship between cases, which can be either one-way or two-way. Again, this is not technically difficult, but the real research is in making what ends up being a complicated collection of capabilities understandable enough, and easy enough, that users will actually use them.

Project Exchange Protocol

In order to implement federated case support across different vendors or different types of case systems, the protocol for exchanging information needs to be defined independently of any single implementation. The Workflow Management Coalition (WfMC) has been working on interoperability of collaborative systems for more than 20 years, and this effort is related to that work. Cognoscenti represents a reference implementation of a standard protocol.


The open source project's source code, executables, and available documentation can be accessed from the following URL:

An online video demo using Cognoscenti from the BPMNext conference is available at .

Plans and Directions

The goal in presenting this demo at EDOC 2014 is not to show numerous accomplishments, but rather to introduce a platform that may be useful for other experimentation in usability. The system is freely available to anyone, and runs in a non-proprietary open environment.

It is the desire of the author that Cognoscenti can be helpful in resolving some of the stickier issues around usability of knowledge work environments, by making a full collaborative adaptive case management system available for free for use in clinical trials involving real knowledge workers.


Many thanks to Fujitsu for supporting this work on the open source project.
Significant contributions to the development of Cognoscenti
came from Shamim Quader, Sameer Pradhan, Kumar Raja, Jim Farris,
Sandia Yang, CY Chen, Rajiv Onat, Neal Wang, Dennis Tam, Shikha Srivastava,
Anamika Chaudhari, Ajay Kakkar, Rajeev Rastogi, and many more
people at Fujitsu around the world.


by kswenson at June 16, 2014 10:06 AM

June 12, 2014

Thomas Allweyer: All about the Enterprise Architecture Notation ArchiMate

ArchiMate is a standardized notation for modeling enterprise architectures that has been gaining traction recently. A whole range of modeling-tool vendors have already added ArchiMate to their method portfolios. The official specification can be downloaded from the website of the Open Group, a consortium for IT standards. But as with most standards, the official specification document is not necessarily the best basis for learning the notation.

The English-language book “Mastering ArchiMate” provides a solid introduction to the various notation elements and the underlying metamodel of ArchiMate. Beyond that, it also describes in considerable detail how the individual constructs can be applied in practice to model various aspects and architectures commonly found in the field.

ArchiMate often offers many different ways to represent a given situation, which is why some experience and consistent modeling conventions are needed to develop usable models. In addition, audience-specific model views can be used that expose only subsets of the full range of models. Here the experience of the author, Gerben Wierda, pays off: as Lead Enterprise Architect at a financial services provider he has built very extensive ArchiMate models.

The various patterns presented are very instructive. They include modeling desktop applications, two- and three-tier architectures, software-as-a-service scenarios, high-availability database clusters, and many more. Even if one wonders whether, and for what purpose, one would actually model the respective system at the presented level of detail in practice, one learns a great deal about ArchiMate. Some discussions may seem a bit academic. After all, one can argue at length about whether an Excel file with macros is a data object or an application, and whether Excel should then be regarded as an application or as part of the infrastructure. On the other hand, such considerations force you to think carefully about how your IT is structured.

In many places Wierda also discusses fundamental questions that arise in architecture modeling, such as the difference between business processes and business functions. A separate chapter is devoted to the relationship between business process modeling with BPMN and EA modeling with ArchiMate. Since an enterprise architecture also encompasses business processes, functions, roles, and so on, it is natural to link these elements to the corresponding constructs in BPMN models.

Finally, the advantages and disadvantages of ArchiMate are discussed and suggestions for improvement are developed. Despite some weaknesses, the author judges ArchiMate to be very usable in practice. Occasionally, however, he also feels compelled to interpret some of ArchiMate's rules somewhat loosely in order to produce an easily understandable diagram.

The introductory chapter offers an accessible entry into the basics of ArchiMate. Much of the book, however, is not light fare and is better suited to ArchiMate experts. The author is well aware of this, which is why he offers a discounted short edition on the book's website, covering roughly the first half of the book. An excerpt containing the introductory chapter can even be requested free of charge.

For beginners in particular, many of the example models may seem daunting, at least at first glance, because of their high level of detail. Wierda mentions that he works with model landscapes comprising several tens of thousands of elements. These models also serve the purposes of a configuration management database in which all IT-related elements of the company are managed. Whether it is really always sensible to manage all these details in graphical models is open to doubt, especially since the various model views Wierda presents evidently require maintaining partly redundant model information.

For practical use, it is probably more sensible to create less detailed graphical models and to manage the individual IT assets in an ordinary configuration management database. This in no way diminishes the great achievement Wierda has delivered with his thorough and comprehensive analysis and explanation of the ArchiMate standard. It is just that much of this material is better suited to experts.

Gerben Wierda:
Mastering ArchiMate
Edition II
The book at Amazon.
Website for the book

by Thomas Allweyer at June 12, 2014 02:15 PM

New Whitepaper: The Zero-Code BPM Myth

Yay! We had 400+ registrations for our webinar with Sandy Kemsley, covering the “Zero-Code BPM Myth” and comparing that to a developer-friendly BPM approach like camunda BPM delivers. In case you missed it, there is a recording: And there is also a whitepaper! Sandy wrote it and I think it is a very fine [...]

by Jakob Freund at June 12, 2014 12:52 AM

June 11, 2014

Sandy Kemsley: Developer-Friendly BPM

I gave a webinar today sponsored by camunda on developer-friendly BPM, discussing the myth of zero-code BPM. I covered the different paradigms of BPM development, that is, fully model-driven versus...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 11, 2014 09:37 PM

June 10, 2014

Sandy Kemsley: Becoming A Digital Enterprise: McKinsey At PegaWORLD

The day 2 keynotes at PegaWORLD 2014 wrapped up with Vik Sohoni of McKinsey, who talked about becoming a digital enterprise, and the seven habits that they observe in successful digital enterprises:...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 10, 2014 03:08 PM

Sandy Kemsley: PegaWORLD: Service Excellence At BNY Mellon

Jeffrey Kuhn, EVP of client service delivery at BNY Mellon, spoke in the morning keynote at PegaWORLD about the journey over the 230-year history of the bank towards improved customer focus....

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 10, 2014 01:38 PM

June 09, 2014

Sandy Kemsley: PegaWORLD Breakout: The Process Of Everything

Setrag Khoshafian and Bruce Williams of Pega led a breakout session discussing the crossover between the internet of things (IoT) — also known as the internet of everything (IoE) or the...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 04:25 PM

Sandy Kemsley: A Vision Of Business Transformation At PegaWORLD

The second half of today’s keynote started with a customer panel of C-level executives: Bruce Mitchell, CTO at Lloyds Banking Group, Jessica Kral, CIO for Medicare & Retirement at...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 03:07 PM

Sandy Kemsley: PegaWORLD Gets Big

My attendance at PegaWORLD has been spotty the past few years because of conflicts with other conferences during June, so it was a bit of a surprise to show up in DC this week to a crowd of more than...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 01:25 PM

Keith Swenson: XWand Cloud for Financial Data Exchange

Fujitsu is announcing today XWand Cloud, a new online server for financial information exchange. What is it? Why is it important?

XBRL Format

The offering is centered around the eXtensible Business Reporting Language (XBRL). This is a comprehensive format for exchanging financial information between parties. Each XBRL document is a financial report of some type. Think of it as a spreadsheet full of values. Normally the problem with a spreadsheet is that while a number (e.g. $2,534,210) is completely clear, what exactly that number represents is not. Specifying which kinds of values are included in the number, what time period it covers, the geography, or which parts of the company it applies to has to come separately from the number itself.

That is where the taxonomy comes in. Part of an XBRL “filing” is a set of associated documents that define the terms both descriptively and mathematically. The root taxonomy is produced by the regulatory agency, so all companies have to comply with those meanings, but industries and individual companies can extend the taxonomies in order to report things that are specific to their business, not just the things common to every business.

Furthermore, each value reported is associated clearly with a point in time, or a time period, and potentially a specific region. The result is a complete definition that can be automatically read and understood by the receiver.
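A simplified instance fragment illustrates how a bare number is bound to a concept, an entity, a period, and a currency. The `ex:` taxonomy namespace, the concept name `Revenue`, and the identifier value here are illustrative, not taken from a real filing:

```xml
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:iso4217="http://www.xbrl.org/2003/iso4217"
      xmlns:ex="http://example.com/taxonomy">
  <!-- The context ties every reported value to an entity and a period -->
  <context id="FY2013">
    <entity>
      <identifier scheme="http://www.sec.gov/CIK">0000123456</identifier>
    </entity>
    <period>
      <startDate>2013-01-01</startDate>
      <endDate>2013-12-31</endDate>
    </period>
  </context>
  <unit id="usd">
    <measure>iso4217:USD</measure>
  </unit>
  <!-- The bare number, now unambiguous: concept, entity, period, currency -->
  <ex:Revenue contextRef="FY2013" unitRef="usd" decimals="0">2534210</ex:Revenue>
</xbrl>
```

The receiver resolves `ex:Revenue` against the taxonomy to learn what the concept means and how it relates mathematically to other concepts in the report.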

The XBRL format has revolutionized the US Securities and Exchange Commission, which adopted XBRL a few years ago; today all 15,000 publicly traded companies must report their figures to the SEC in XBRL format. The SEC automatically receives this information and, because the semantic definition of the figures is available, extracts the figures it needs and can compare companies to each other on an apples-to-apples basis. At the same time, this has opened up a large potential for analytics across industry sectors, because these reports are freely available from the SEC to analysts, who can easily consume the reports and use the values. XBRL greatly improves the efficiency of monitoring public companies.

XBRL is being widely adopted in Europe as well: the European Union’s European Banking Authority (EBA) requires reports from the national central banks to be delivered in XBRL format, and EIOPA, the European insurance industry regulator, has stated that insurance companies, under the Solvency II regulation, will have to submit their first interim reports in XBRL, most probably from the beginning of 2016.

XWand and Fujitsu Cloud

Fujitsu has been a leader in the XBRL space, playing a key role in the creation of the standard. Fujitsu’s product, Interstage XWand, is recognized as a leading product in the marketplace.

The other key ingredient is Fujitsu’s Trusted Public S5 cloud, a high-availability, high-security cloud hosting environment that is able to handle this kind of application.

XWand Cloud

XWand Cloud brings these together in a free offering that spotlights both products. When submitting a financial report to the SEC, the one thing that filers want to know is whether the entire report is valid according to a set of standard validation rules. XWand can do this.


Marked as a “beta,” the offering is currently modest. Users can register for a free account in order to easily and quickly upload their reports to XWand Cloud and get a thorough validation check. The resulting report pinpoints where any problems in the document are. If the filing is proper and complete, the report will show a clean bill of health.

Fujitsu is not saying anything about where this is ultimately headed, but the cloud based platform opens possibilities for collaboration around the reports, possibly reviews and approvals, as well as selective distribution of the information to third parties. There is a growing community of XBRL suppliers, and XWand Cloud could be a meeting place where such parties can offer specialized services.

Cross Organization Integration

I have been saying for a while that the correct way to integrate ICT systems from different companies is to exchange documents that are fully self-describing, in the way that XBRL is. Because an XBRL document comes with precise semantics described by the taxonomy, the standard service-oriented-architecture problem of “API versions” is avoided. The two parties must agree on the standard taxonomy (set by the regulatory agency or possibly an industry authority), but they then map these values into their own independent systems. What we are seeing is the beginning of true integration of financial systems across organizations.

I believe that the use of XBRL will expand beyond the financial field. The same technology could be used to describe just about anything: product information for an e-commerce store, product materials definitions for outsourcing, descriptions of services that might be provided and exchanged, and so on. This approach will allow looser, yet more complete, integration across many fields.

Who knows where it will go? XWand Cloud is a small step in this direction, bringing Fujitsu’s Interstage XWand capability together with Fujitsu’s cloud offering.

by kswenson at June 09, 2014 12:25 PM

Sandy Kemsley: Webinar On Developer-Friendly BPM And The Zero-Code Myth

I’m giving a webinar on Wednesday this week (June 11) on developer-friendly BPM and the myth of zero-code BPM when it comes to many complex integrated core business processes. It’s sponsored by...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 11:12 AM

June 06, 2014

Keith Swenson: SSL Browser Nonsense

Thank you, WordPress! WordPress has turned on HTTPS for all blogs, and my blog is hosted at WordPress. They deserve recognition for being proactive in the fight for privacy. But we need more from the browsers.

Let me ask you a question.  Did you access this blog at HTTP://, or did you use HTTPS://?  The second one, HTTPS, is more secure, more private.  Click on this link and try it.

You will probably get a threatening warning.  Oh no!  This might not be the site you were looking for.  But with HTTP you are equally unsure about the site; it still might not be the site.  Did you get a warning with HTTP?  No, you didn’t.

The reason you get this is that I am too cheap to buy a certificate for this site.  My blog is available for free, and I don’t make any business from it, so it is pretty hard for me to justify spending to get a certificate for this purpose.  At the same time, privacy experts are suggesting that all internet traffic should be HTTPS.  The warning is unnecessary, especially given that with HTTP you don’t get a warning either.  Since HTTPS without a certificate is no less secure than plain HTTP, there is no reason for the warning in this situation.  Here is what those warnings look like today.

On Firefox, the warning looks like this:



This scary warning still has a “Get me out of here!” button.  To get by this, you first have to open the heading that says “I Understand the Risks,” and only at that point is the button to add an exception exposed.  Click that, and Mozilla remembers it!  On future visits to this site you will not get the scary warning.  Kudos to Mozilla, as this is a significant advance in usability.  At least you get the scary warning only once.  After clicking through, the address link looks like this:


It looks mostly like a normal HTTP site, and the warning symbol is suitable.  If you are accessing a branded site, you will not see the site icon, which is reasonable since you don’t have assurance that the site is genuine, though one might argue that you didn’t have that assurance with HTTP either, so why show it there?  If it had been fully signed, it would look like this:


On Chrome it looks like this:


This is direct and to the point.  This is better than Mozilla because the button to proceed is immediately available.  After pressing this, like Mozilla, Chrome remembers the fact, and you are not bugged next time you come here.  After clicking through, you get a display on the address bar like one of these:


I feel this is pretty suitable.  You should not have any assurance that this is an authentic site, and it should look mostly like a regular HTTP site with some visual indication.  Regular HTTP should also be shown with a red line through it, since in that case too you have no assurance that the site is authentic.  As I said earlier, it is inconsistent to make a big deal out of an uncertified HTTPS site when HTTP is equally uncertified.  Here is what Chrome looks like for a fully signed site:


On Internet Explorer it looks like this:


The scary recommendation is to “close and do not continue.”  As I have pointed out elsewhere, there is actually no greater chance that this is a rogue site than if you were using HTTP, which has no certificate at all.  Therefore this recommendation is unwarranted.  With IE you will get this scary warning every time you visit the site.  It does not remember that you clicked through and approved it once.  What is perhaps even more concerning is the address bar:


This looks completely like a regular HTTP site, and that is good.  When you access a fully trusted site, it looks very similar, only the color is green!  It does show a lock icon, and you can access more information about the certificate.  The only problem is the warning page coming up every single time.


What should the behavior be?

Quite simple: there should not be any warning at all when using an uncertified connection. It should look and act essentially the same as a regular HTTP link, although some visual indicator in the address bar is acceptable.

The lock symbol, or the special site specific display, should be displayed only when a correct, signed certificate is presented and the browser can then indicate that the site is authentic.

If the browser wants to go the extra mile in keeping people safe, it should remember whether a site used a certificate last time.  If so, any link to the site using HTTP should be automatically converted to HTTPS if you click on it.  Then, if the certificate for a site that you know should have a certificate fails to provide a correct one, then, and only then, display the scary warning.  It should say:

This site normally has a signed certificate, but this time something is wrong with the certificate, and this might be an impostor site.  Are you sure you want to proceed?

That is it … display the warning ONLY if you have reason to believe that the site intended to have a proper certificate in the first place.
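The policy proposed above is simple enough to sketch in code. The following toy Java class is purely illustrative (the names `CertMemory`, `recordValidCertificate` and so on are my own invention, not any browser's API): it remembers hosts that have presented a valid certificate, upgrades plain-HTTP links to those hosts, and warns only when such a host later fails validation.

```java
import java.util.HashSet;
import java.util.Set;

class CertMemory {
    private final Set<String> knownHttpsHosts = new HashSet<>();

    // Called whenever a site presents a valid, signed certificate.
    void recordValidCertificate(String host) { knownHttpsHosts.add(host); }

    // Upgrade plain-HTTP links to hosts known to serve valid HTTPS.
    String rewrite(String url) {
        if (url.startsWith("http://")) {
            String host = url.substring("http://".length()).split("/")[0];
            if (knownHttpsHosts.contains(host)) {
                return "https://" + url.substring("http://".length());
            }
        }
        return url;
    }

    // Show the scary warning only when a known-good host fails validation.
    boolean shouldWarn(String host, boolean certificateValid) {
        return knownHttpsHosts.contains(host) && !certificateValid;
    }
}
```

The point of the sketch is the last method: the warning fires only for sites that previously demonstrated they intended to have a proper certificate in the first place.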


by kswenson at June 06, 2014 12:47 PM

BPM+ (Martin Wieschollek): Toolmarktmonitor 2014

BPM&O has published a market study on BPM tools. The study presents and compares 22 tools from the D-A-CH region. The software products covered focus on the design and analysis of business processes. Anyone who wants an up-to-date overview, or who is thinking about introducing a new tool, will find useful information here.

by Martin Wieschollek at June 06, 2014 08:35 AM

June 04, 2014

BPM+ (Martin Wieschollek): The Process Profile

Not only when building a process landscape, but whenever people talk about processes, it is very helpful to describe what “the process” actually is. One approach is the SIPOC diagram from Six Sigma. A SIPOC diagram describes all the essential components of a process: S – Supplier, I – Inputs, P – [...]

by Martin Wieschollek at June 04, 2014 12:35 PM

May 28, 2014

Sandy Kemsley: June BPM Conferences

After a month at home, I’m hitting the road for a few vendor events. First, a couple where I’m attending, but not presenting: IBM Content 2014 in Toronto (so technically not hitting much of the road...

[Content summary only, click through for full article and links]

by Sandy Kemsley at May 28, 2014 11:24 AM

May 26, 2014

First results in pushing ‘Digital Age BPM’ ahead!

Good news: We found fearless BPM experts!

Two weeks ago, we met for the first session of our workshop series. After getting to know each other and understanding the process management systems in the different organizations, we started to familiarize ourselves with the possibilities of web 2.0 and social media in the context of BPM.

After dreaming and dreading what the future might hold for us, we began to design useful approaches for embedding social media in process management environments. The initial skepticism faded once we had generated many applicable ideas – but it was not washed away entirely. Our main goal crystallized into one word: benefit. Whatever we design, the premise is the advantage users gain by applying it.

Through clustering, we defined six core focuses: ‘Participation’, ‘Training and Communication’, ‘Feedback and Exchange’, ‘Search Engine’, ‘Process Transparency’, and ‘Mobile Access’.

To meet our premise of ‘Benefit’ we combined the concrete ideas of ‘Digital Age’ applications within the core focuses with personas. Personas are fictional characters that stand for different stakeholder groups by representing their average character traits and capabilities.

Hence, the question is: “Which ‘Digital Age’ application is beneficial to whom?”

The task for our next workshop session is to find out which web 2.0 applications and social media functions might contribute to a real advantage in the context of BPM. We all agreed on doing some ‘homework’ until the next workshop – by dint of these evaluations we hope to get a detailed understanding of the perceived benefit of our concrete ‘Digital Age’ applications.

We are curious what the results of our evaluation will reveal, we are excitedly preparing the next workshop, and we are looking forward to pushing “Digital Age BPM” further ahead!

Best regards,


by Mirko Kloppenburg at May 26, 2014 08:57 PM

Webinar: Developer-Friendly BPM

“Buy now! BPM without programming!” This is how many BPM vendors lure their customers into the ‘zero-code BPM trap’. But as soon as you try to create a solution that goes beyond the vacation workflow from the vendor presentation, the suffering begins. In this free live webinar, the independent BPM industry expert Sandy Kemsley challenges [...]

by Jakob Freund at May 26, 2014 01:08 AM

May 23, 2014

Bruce Silver: Visualizing Responsive Processes

Merging BPMN and CMMN standards in OMG is, for the moment at least, a dead issue.  The question remains how best to visually represent logic formerly known as case management, which I will henceforth refer to as responsive processes.  Responsive processes are driven forward by events (including ad-hoc user action) and conditions, rather than by sequence flow.  In a responsive process, an activity is enabled to start when its preconditions are satisfied.

I believe that a BPMN 2.0 process engine that can execute event subprocesses, including those with Escalation and Conditional triggers, can implement many if not most features of a responsive process, as IBM’s BPM 8.5.5 amply demonstrates.  To be more precise, it should be able to implement a responsive process in which all activities, including those that CMMN calls discretionary, are specified at design time.  CMMN goes beyond this, however, in allowing the design, or “plan,” to be modified arbitrarily at runtime on an instance by instance basis.  We cannot assume that a BPMN 2.0 engine can handle this, but at this point I am not sure how critical this feature is.  It may turn out to be critical, but for now let’s call it responsive-plus.

Whether or not you agree with me that BPMN as a “language” can handle responsive processes, you probably agree that as a notation it fails to visually communicate the responsive process logic.  IBM’s Case Designer is a little better  at that, and CMMN is a little better still. But I think all of these fall well short of the mark.  So I have been thinking about what kind of notation would achieve that goal.

As I have said previously, responsive process/case logic is inherently much more difficult to represent in a printed diagram than flowchart-based logic.  Scoped event logic tends toward some kind of state diagram, but I think it’s safe to say that business users (and most business analysts) would have a hard time with state diagrams and find them unacceptable.  There is one form of diagram that could possibly fit the bill if sufficiently enhanced, and that is a Gantt chart, such as you might find in Microsoft Project.  In a Gantt chart, activities are enabled by preconditions called dependencies.  A Gantt chart has a very primitive notion of dependency, which is limited to completion (or possibly start) of another activity.  It has no notion of end state, for example – an activity completing successfully versus unsuccessfully.

A Gantt chart takes the form of a table – an indented list of activities, each row specifying the activity’s start and end (both anticipated and actual).  The indents (usually reinforced by a hierarchical numbering scheme) provide aggregation of activities for summary reporting, but if we give these summary activities their own entry and exit conditions they become subprocesses, or what CMMN calls stages.  Anticipated start – actually, enablement – is based on the dependency logic, and actual start is based on when work actually starts.  (CMMN has this distinction in the runtime state model, but BPMN unfortunately does not.)  Each row in the Gantt chart also contributes a bar in a calendar view of the table.  A vertical line slicing through the calendar view separates past from future.  Things to the left of the line are actuals, things to the right are anticipated.
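The dependency logic described above reduces to a simple rule: an activity is enabled once its preconditions are satisfied. A minimal Java sketch of that rule (my own illustration, not code from any tool) using only the completion-type dependencies a plain Gantt chart supports:

```java
import java.util.ArrayList;
import java.util.List;

class Activity {
    final String name;
    final List<Activity> dependencies = new ArrayList<>();
    boolean completed = false;

    Activity(String name) { this.name = name; }

    // Enablement: every dependency (precondition) has completed.
    boolean isEnabled() {
        for (Activity d : dependencies) {
            if (!d.completed) return false;
        }
        return true;
    }
}
```

The richer preconditions discussed in this post (activity end states, event triggers, data conditions) would generalize the `isEnabled()` check beyond mere completion of other activities.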

Gantt charts provide something that most BPM users instinctively desire – an estimate of when the process will complete, based on its current state.   (BPM Suites are happy to provide this in the runtime if you purchase the predictive analytics module, but Gantt makes it part of the model itself.)  BPMN has no standard property to record the mean activity duration, although many modeling tools provide this to support simulation.  Gantt charts require that property.

Gantt charts also have the responsive-plus feature of being modifiable at runtime, including addition of new activities and dependencies.  That sounds great!  But they cheat, because a normal Gantt chart describes only a single instance of the process.  It does not pretend to describe the general case, including alternative paths to the end state.  In fact, the whole idea of exception end states – for the process as a whole or for individual activities – is absent.

Economy and expressiveness are key to visually communicating responsive process logic.  We want to pack the most semantic value into the simplest diagram possible.  The fewer distinct shapes and icons the better. Connectors are extremely valuable in communicating the dependency logic.  Not all Gantt charts have them, but MS Project uses them quite effectively.  An arrow into the left end of a bar indicates a precondition; an arrow into the right end of a bar indicates a completion condition.  In MS Project, the precondition is always either completion (arrow out of the right end) or actual start (arrow out of the left end) of an activity.  We’d like to extend this to event triggers and data conditions as well.  CMMN supports 4 basic event types: state change in an activity (such as completion), state change in an information item, timer (relative to some other selected trigger), and ad-hoc user action.  I think that’s about right, but we probably need to add BPMN Message, Error, and possibly Signal, and maybe distinguish interrupting and non-interrupting.  As with sequence flows in BPMN, the label of a connector can be used to suggest the data condition.  (The full data condition could be stuffed into the spreadsheet part of the Gantt, to the left of the chart.)  For example, we should use line styles on the connectors and border styles on events to denote different triggering semantics.  If done right, we could eliminate CMMN’s diamond sentry shapes, which add graphical complexity but little incremental semantic value.

Like CMMN, our responsive process model needs an information model that can be referenced in both data conditions and in state change events.  BPMN 2.0 doesn’t really have this, and without it, Conditional events are kind of useless because the only data visible to them are process variables and properties.  The information model should include both data and documents, so changes in content value, metadata, and lifecycle state can all be recognized as events.  CMMN already has this, but it does not reveal the logic clearly in the printed diagram.

In a followup post, I will put up some examples of what this could look like.



The post Visualizing Responsive Processes appeared first on Business Process Watch.

by bruce at May 23, 2014 09:01 PM

camunda BPM Online Training available

Get a camunda BPM training when and where you want. The new self-paced online course is now available. The course provides participants with the head-start needed for creating powerful process applications. It includes more than 6 hours of easy-to-follow training videos, hands-on exercises and lab tutorials as well as weekly 2-hour live sessions (Monday 8am [...]

by Jakob Freund at May 23, 2014 07:29 PM

Drools & JBPM: Running drools-wb with GWT's SuperDevMode

Like most, I like surprises!

Some surprises aren't always welcome though; and one such surprise bit me yesterday.

As a good citizen I upgraded my installation of Google Chrome when advised a new version was available. With hindsight I don't know why I so gleefully went along with the upgrade (after all, I'd recently removed the latest version from my mobile telephone as it didn't "feel" as good... anyway I digress).

The surprise was that Chrome 35 stops supporting GWT's "DevMode" (something I'd long been used to with FireFox) and as from GWT 2.6.0 support for "DevMode" is to come to an end ("GWT Development Mode will no longer be available for Chrome sometime in 2014, so we improved alternate ways of debugging. There are improvements to Super Dev Mode, asserts, console logging, and error messages.")

The options were to find an installation of Chrome 34, or to switch to SuperDevMode (which seems inevitable anyway). Electing for the latter, I present my findings on how to configure your webapp and IDE, and run (or debug) it in "SuperDevMode".

These instructions are for IDEA (NetBeans will probably follow a similar route).

(1) Create a regular GWT Launcher:

(2) Create a new GWT Launcher for SuperDevMode:

(3) Add the following to your webapp's gwt.xml (module) file:

  <!-- SuperDevMode -->
  <add-linker name="xsiframe"/>
  <set-configuration-property name="devModeRedirectEnabled" value="true"/>
  <set-property name="compiler.useSourceMaps" value="true"/>
  <set-configuration-property name='xsiframe.failIfScriptTag' value='false'/>

(4) Launch your regular webapp (the "classic" GWT Launcher):

... <tick>, <tock>, <tick>, <tock> while it compiles and launches...

(5) Launch the SuperDevMode code server (the "SuperDevMode" GWT Launcher):

... <tick>, <tock>, <tick>, <tock> while it compiles and launches...

(6) Drag the "Dev Mode On" and "Dev Mode Off" buttons to your bookmark bar (as advised) - but we don't normally read those sorts of things, right? ;)

(7) Go back to the webapp's browser tab

(8) Click on the "Dev Mode On" bookmark you created in step (6)

(9) Click on "compile"

(10) Delete the "codesvr" part of the URL and press enter (dismiss the popups that appear; which ones depends on what browser your GWT module targets; e.g. I had to dismiss a popup about using Chrome because the GWT module targets FireFox).

(11) Done!

(12) What's that? You want to debug your application?!?

This isn't too bad. Just launch both your "classic" GWT Launcher in debug mode and the "SuperDevMode" GWT Launcher in normal mode.

Server-side code needs break-points in IDEA, and client-side break-points need to be added using Chrome's Developer Tools (you'll need to make sure "sourceMaps" are enabled, but this appears to be the default in Chrome 35).

Accessing Chrome's debugger:



It takes a bit of getting used to debugging client-side stuff in Chrome, and server-side stuff in IDEA, but it's not terrible (although don't expect to be able to introspect everything in Chrome like you used to in IDEA).

I hope this helps (probably more so as others find "DevMode" stops working for them... and since we'll be moving to GWT 2.6.1 for IE10 support, that is coming sooner rather than later).

Have fun!


by Michael Anstis at May 23, 2014 09:47 AM

May 22, 2014

Drools & JBPM: London (May 26th) Drools & jBPM community contributor meeting

London, Chiswick, May 26th to May 30th

During next week a large percentage of the Drools team, some of the jBPM team and some community members will be meeting in London (Chiswick). There won’t be any presentations; we’ll just be in a room hacking, designing, exchanging ideas and planning. This is open to community members who wish to contribute towards Drools or jBPM and want help with those contributions. This also includes people working on open source or academic projects that utilise Drools or jBPM. Email me if you want to attend; our locations may vary (but within Chiswick) each day.

We will not be able to make the daytime available to people looking for general Drools or jBPM guidance (unless you want to buy us all lunch). But we will be organising evening events (like bowling) and could make Wednesday or Thursday evening open to people wanting general chats and advice. Email me if you’re interested, and after discussing with the team, I’ll let you know.

Those currently attending:
Mark Proctor (mon-fri) Group architect
Edson Tirelli (mon-fri) Drools backend, and project lead
Mario Fusco (mon-fri) Drools backend
Davide Sottara (wed-fri) Drools backend
Alex Porcelli (mon-fri) Drools UI
Michael Anstis (thu-fri) Drools UI
Kris Verlaenen (wed-thu) jBPM backend, and project lead
Mauricio Salatino (mon-fri) jBPM tasks and general UI

by Mark Proctor at May 22, 2014 11:28 AM

May 20, 2014

Bruce Silver: Method and Style Wizard Generates BPMN Automatically

itp commerce has just released a new BPMN Method and Style wizard that automatically creates well-structured BPMN from a simple interview.  In my BPMN training, the “Method” is the hardest part because it asks students to describe the process top-down and abstractly, as opposed to the bottom-up “what came next?” format of the SME fact-finding.  It’s especially hard when you’re first learning the shapes and symbols, and have all those label-matching style rules to keep in mind, as well.  Process Modeler for Visio now lets a wizard do all the work.  Modelers just need to answer questions about activities, their end states, and what comes after what.  The wizard generates hierarchical BPMN automatically.  Great job, guys!

I made a 13-minute video that explains the issue and demonstrates the tool in action.  Check it out here.

If the “good BPMN” idea is something you’re interested in, there’s still room in my next BPMN Method and Style class, June 3-5, which includes the bpmnPRO gamified eLearning app and post-class certification.  More info on that here.

The post Method and Style Wizard Generates BPMN Automatically appeared first on Business Process Watch.

by bruce at May 20, 2014 11:22 PM

Thomas Allweyer: Free Tool Enables Modeling in the Browser – Online and Offline

[Screenshot: BIC Design Free Web Edition]

More and more modeling tool vendors offer free entry-level editions that make it easy to try out the basic functionality in small scenarios. The Bochum-based company GBTEC has now released a Free Web Edition of its modeling tool BIC Design, which supports not only process modeling in BPMN or EPC, but also value chains, organizational charts and IT landscapes. There are also universal diagrams, which give you complete freedom to connect arbitrary elements. Modeling takes place entirely in the browser. However, unlike with other tools, the models are not stored in the cloud but locally on the user's computer. You can keep modeling even when the computer is offline.

No installation is required. When the Free Web Edition is first opened via the vendor's link, the tool is loaded. The models reside exclusively in the browser's local storage, so the next time the link is opened your own models are automatically available again. This also works when you are not connected to the internet; only a few special functions, such as export or printing, require you to be online. One consequence of this concept, however, is that the models are deleted when the browser cache is cleared. You should therefore back up your work regularly using the export function.

When working with the HTML5-based web interface, you no longer notice that you are working in a browser. All interface elements behave as you would expect from local applications. New models, for example, do not open in separate browser tabs but within the integrated modeling workspace. Most functions can be reached conveniently via context menus on the individual model objects.

The range of functions is remarkable for a free tool. Model hierarchies can of course be built, and models of different types can be included; for example, you can attach an organizational chart to a pool in a BPMN model. The model hierarchy, together with the objects it contains and their various attributes, can be evaluated in the form of process handbooks and Excel reports, for example. The tool recognizes objects with the same name and, when a name is changed, asks whether all objects with that name should be changed as well. However, this only works within a single model, not across models.

A wide range of formatting options helps in developing visually appealing models. It may seem almost self-evident that models can be enriched with free-form text and arbitrary graphical elements, yet this capability is missing from quite a few other tools, especially pure BPMN tools. The ability to tilt a complete diagram, or parts of it, by an arbitrary angle is even less common. You may not often need diagonal models like the one in the figure above, but it is quite handy to be able to quickly rotate a model by 90 degrees so that it fits better on a page.

Modeling itself works quite intuitively. Only when you want to change the routing of edges do you have to experiment a little before you get the knack of where to grab an edge to achieve the desired result. BPMN modelers will wonder why they have to select the desired edge type (sequence flow, message flow, ...) for every edge, since in most cases only one edge type is allowed anyway. Nor are any further syntax checks performed: nothing stops you, for example, from drawing a message flow within a single pool, even though the BPMN specification forbids this.

Anyone familiar with the full version of BIC Design, or with other repository-based modeling platforms, will miss the ability to integrate and navigate across views. For example, it is not directly possible to find all processes in which a particular organizational unit is involved. These features are reserved for the full version. Models created with the Free Web Edition can be transferred to the full version.

Link to BIC Design Free Web Edition

by Thomas Allweyer at May 20, 2014 08:21 AM

May 19, 2014

BPMCon 2014 – Agenda Complete

What do LVM Versicherung, DVAG, Provinzial NordWest and Wüstenrot have in common? They will all be speaking on September 19 at the most beautiful BPM conference of the year! This year's BPMCon takes place in a spectacular building on the banks of the Spree in Berlin. Alongside solid reports from practice, the Canadian BPM luminary Sandy Kemsley will demystify the “myth of zero-coding BPM”, and ghostbuster Bernd Rücker will tame [...]

by Jakob Freund at May 19, 2014 11:06 PM

May 15, 2014

Thomas Allweyer: Modeling Tools – Large Differences in Total Cost of Ownership

How high are the costs of introducing and using process modeling tools? According to the newly published BPM&O Toolmarktmonitor, typical projects with up to ten people incur an average of €8,500 per single-user license over five years, i.e. €1,700 per year. This includes the effort for installation, configuration, licenses, maintenance and training. There are large differences between the 22 tool vendors examined: the range runs from €2,000 to €20,000 for the five-year period considered. For larger projects and enterprise licenses these costs can drop considerably.

The study covered tools available in the German-speaking market that are specifically aimed at the design and analysis of processes. Process automation was explicitly out of scope. Instead, the focus was on functionality in the areas of modeling, model management, reporting, process controlling, process portals and simulation. A total of 155 individual criteria were surveyed. The study summarizes the information provided by the vendors; for a concrete tool selection you will therefore still need to verify how particular functions are actually implemented in each tool.

The tool vendors represented in the German-speaking market are mostly smaller companies with up to 50 employees, most of which have been active in the market for many years. Coverage of the surveyed functionality is quite high in most areas, especially reporting and portals. The largest differences were found in the areas of controlling/monitoring and simulation.

The study gives a reasonably good overview of how the tools on the market cover the various functional areas in principle. However, for each category examined, only a selection of functions is reported along with the number of tools that provide them, so you do not learn which tools these are. If needed, you will have to ask the individual vendors yourself as part of your own tool selection. To help with this, the study presents a procedure for tool selection and introduction. It also contains a short profile of each tool vendor.

Download the study from BPM&O (registration required)

by Thomas Allweyer at May 15, 2014 08:43 AM

May 14, 2014

Bruce Silver: BPMN and CMMN Compared

IBM’s presentation at bpmNEXT of their implementation of case management inside of BPMN (and their subsequent launch of same at Impact) inspired Paul Harmon to start a lively thread on BPTrends on whether BPMN and CMMN should be merged.  To me the answer is an obvious “yes,” but I doubt it will happen anytime soon.  Most of the sentiment on BPTrends is either against or (more often) completely beside the point.  Fred Cummins, a honcho on the OMG committee that oversees both standards, was sneeringly dismissive of the idea.  BPMN, you see, is procedural while CMMN is declarative. There’s no comparison.  Yeah, right.

OK, so let’s look at the CMMN spec.  Here is the one example of a case model in the spec, which I will explain.


The outer container, with the tab that looks like a manila folder, is the case file.  All activities in the case are contained within it.  Isn’t that like a pool in BPMN?  No, nothing at all like it!

The octagons, called stages, are fragments of case logic.  You can nest stages inside other stages.  Isn’t that sort of like a subprocess in BPMN?  NO!  Stop saying that.

The rounded rectangles are tasks, and the icon in the upper left signifies the task type.  I know that sounds like BPMN tasks, but I assure you, NOTHING LIKE THEM!

The rounded rectangles with the dashed border are discretionary, meaning things in the diagram that may not be executed in every instance.  Oh, BPMN has nothing like that!

The # markers mean retriggerable tasks.  In BPMN all non-interrupting events are implicitly retriggerable.  So there’s a big difference right there.

The dashed connectors (I think they are supposed to be dotted) represent dependencies.  The white diamond on a shape means an entry condition, and the connector into that diamond means that completion of the task at the other end of the connector is part of the entry condition.  In BPMN, instead of a diamond at the end of a connector, we have the diamond at the start of the connector, which is a solid arrow… so NOTHING AT ALL LIKE THIS!  Well, actually there is a difference, since there could be other parts of the entry condition, such as “a user just decided to do it.”  And you’re right, BPMN sequence flow can’t do that!  But a BPMN Escalation event subprocess can do that.

The double rings that look like BPMN intermediate events are CMMN event listeners.  The two shown here mean “a user just decided to do it.”  Kind of like an Escalation event sub in BPMN.  The black diamonds are exit conditions.  So this diagram means a user could decide to set the milestone Claims processed and close the case, or just close the case.

Here is the same case logic in BPMN.  What???!!


The operational semantics are essentially identical. They both include activities initiated ad-hoc by a user and possibly other conditions, sometimes constrained by the current state of the case/process.  Neither one really communicates the event dependency logic clearly in the diagram, although CMMN does a better job:  A BPMN Escalation event could represent ad hoc user action or an explicit throw, and Parallel-Multiple event could represent any event plus condition; CMMN at least tries to suggest the dependency with a connector.  But honestly, representing this type of logic clearly in a printed diagram is really hard!

Actually there is a lot in the CMMN spec to like, and it would be good if BPMN were expanded to include it.  Timer events, for example, are much more usable.  In BPMN, the start of the timer is the start of the activity or process level the event is attached to, and the deadline is a literal value.  In CMMN, the start is some selected event and the deadline is an expression.  Is that something that only “knowledge workers” need, as opposed to the mindless droids that use BPM?  I doubt it.  State changes in any case information – not just “documents” as some would have you believe, but data as well – can trigger case activities, and BPMN should have that also.

Here is the simple truth: There is a mix of procedural and declarative logic in most business processes.   CMMN expresses the declarative logic a bit better than BPMN, but only “hints” at the simplest procedural logic, as you see in the claims example.  As anyone who has been through my BPMN Method and Style book or training knows, the key to communicating process logic in a diagram is labeling, and CMMN fails totally there.  The thing most in need of labeling – the dependency connector – doesn’t even exist in the semantic model!  An entry condition merely has a sourceRef pointer to a task or other precursor object.  No connector element means no name attribute to hold a label.  I looked through the schema; maybe I just missed it…  Also, CMMN for some unexplained reason has NO graphical model at all!  After a false start, BPMN 2.0 eventually came up with a nice solution for that, completely separable from the semantic model, but CMMN didn’t use it (or substitute something else).  I guess model interchange between tools wasn’t a priority there.

The bottom line is that both BPMN and CMMN would benefit by unification.  The separation is purely vendor-driven and counterproductive.


The post BPMN and CMMN Compared appeared first on Business Process Watch.

by bruce at May 14, 2014 11:04 PM

May 07, 2014

Drools & JBPM: Drools - Bayesian Belief Network Integration Part 3

This follows on from my earlier Part 2 posting in April.

Things now work end to end, and I have a clean separation between the creation of the JunctionTree and initialisation of all state, and the state that changes after evidence insertion. This separation ensures that multiple instances of the same Bayesian network can be created cheaply.
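That separation can be pictured roughly as follows. This is only an illustrative sketch (the class names `CompiledNetwork` and `NetworkInstance` are hypothetical, not the actual Drools types): an immutable compiled structure is shared by reference, while each instance carries only its own mutable evidence state, so creating another instance is cheap.

```java
import java.util.HashMap;
import java.util.Map;

// Built once per network; immutable, so safe to share between instances.
final class CompiledNetwork {
    final String[] variables;
    CompiledNetwork(String... variables) { this.variables = variables.clone(); }
}

// Per-instance mutable state only; shares the compiled structure.
final class NetworkInstance {
    final CompiledNetwork structure;           // shared, never copied
    final Map<String, String> evidence = new HashMap<>();
    NetworkInstance(CompiledNetwork structure) { this.structure = structure; }
}
```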

I'm now working on integrating this into the belief system. One open question is whether to automatically update the Bayesian network as soon as the evidence changes. Updating the network is expensive: if you insert three pieces of evidence, you want the network to update once, not three times. So for now I will add a dirty check and let users call update explicitly. As best practice I will recommend that people separate reasoning over the results of the Bayesian network from entering new evidence, so that it becomes clearer when it is efficient to call update.
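The dirty-check idea might look roughly like this. Again, an illustrative sketch rather than the real Drools API: entering evidence is cheap and merely marks the session dirty, while the expensive propagation runs at most once per explicit update() call.

```java
import java.util.HashMap;
import java.util.Map;

class LazyBayesSession {
    private final Map<String, String> evidence = new HashMap<>();
    private boolean dirty = false;
    private int propagations = 0;   // counts the expensive updates

    // Entering evidence only marks the session dirty; nothing propagates yet.
    void setEvidence(String variable, String value) {
        evidence.put(variable, value);
        dirty = true;
    }

    // The expensive junction-tree propagation runs at most once per batch.
    void update() {
        if (!dirty) return;
        propagations++;             // stand-in for the real belief propagation
        dirty = false;
    }

    int propagationCount() { return propagations; }
}
```

With this shape, inserting three pieces of evidence and then calling update() triggers a single propagation rather than three.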

For now I'm only dealing with hard evidence. We will be using superiority rules to resolve conflicting evidence for a variable. Any unresolved conflict will leave the variable marked as "Undecided". Handling of soft or virtual evidence would be nice, as it would add a way to resolve conflicting evidence statistically; but for now this is out of scope. There is a paper here on how to do it, if anyone wants to help me :)

I'll be committing this to github in a few days; for now, if anyone is interested, here is the jar in a zip form from dropbox.

The XMLBIF parser provided by Horacio Antar is now integrated and tested. I'm just working on refactoring Drools for pluggable knowledge types, to fully integrate Bayesian as a new type of knowledge.

Graph<BayesVariable> graph = new BayesNetwork();

GraphNode<BayesVariable> burglaryNode = graph.addNode();
GraphNode<BayesVariable> earthquakeNode = graph.addNode();
GraphNode<BayesVariable> alarmNode = graph.addNode();
GraphNode<BayesVariable> johnCallsNode = graph.addNode();
GraphNode<BayesVariable> maryCallsNode = graph.addNode();

BayesVariable burglary = new BayesVariable<String>("Burglary", burglaryNode.getId(), new String[]{"true", "false"}, new double[][]{{0.001, 0.999}});
BayesVariable earthquake = new BayesVariable<String>("Earthquake", earthquakeNode.getId(), new String[]{"true", "false"}, new double[][]{{0.002, 0.998}});
BayesVariable alarm = new BayesVariable<String>("Alarm", alarmNode.getId(), new String[]{"true", "false"}, new double[][]{{0.95, 0.05}, {0.94, 0.06}, {0.29, 0.71}, {0.001, 0.999}});
BayesVariable johnCalls = new BayesVariable<String>("JohnCalls", johnCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.90, 0.1}, {0.05, 0.95}});
BayesVariable maryCalls = new BayesVariable<String>("MaryCalls", maryCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.7, 0.3}, {0.01, 0.99}});

BayesVariableState burglaryState;
BayesVariableState earthquakeState;
BayesVariableState alarmState;
BayesVariableState johnCallsState;
BayesVariableState maryCallsState;

JunctionTreeNode jtNode1;
JunctionTreeNode jtNode2;
JunctionTreeNode jtNode3;

JunctionTree jTree;

BayesEngine engine;

public void setUp() {
connectParentToChildren(burglaryNode, alarmNode);
connectParentToChildren(earthquakeNode, alarmNode);
connectParentToChildren(alarmNode, johnCallsNode, maryCallsNode);


JunctionTreeBuilder jtBuilder = new JunctionTreeBuilder(graph);
jTree = jtBuilder.build();

jtNode1 = jTree.getRoot();
jtNode2 = jtNode1.getChildren().get(0).getChild();
jtNode3 = jtNode1.getChildren().get(1).getChild();

engine = new BayesEngine(jTree);

burglaryState = engine.getVarStates()[burglary.getId()];
earthquakeState = engine.getVarStates()[earthquake.getId()];
alarmState = engine.getVarStates()[alarm.getId()];
johnCallsState = engine.getVarStates()[johnCalls.getId()];
maryCallsState = engine.getVarStates()[maryCalls.getId()];
}

public void testInitialize() {
// johnCalls
assertArray(new double[]{0.90, 0.1, 0.05, 0.95}, scaleDouble( 3, jtNode1.getPotentials() ));

// maryCalls
assertArray( new double[]{ 0.7, 0.3, 0.01, 0.99 }, scaleDouble( 3, jtNode2.getPotentials() ));

// burglary, earthquake, alarm
assertArray( new double[]{0.0000019, 0.0000001, 0.0009381, 0.0000599, 0.0005794, 0.0014186, 0.0009970, 0.9960050 },
scaleDouble( 7, jtNode3.getPotentials() ));
}

public void testNoEvidence() {

assertArray( new double[]{0.052139, 0.947861}, scaleDouble(6, engine.marginalize("JohnCalls").getDistribution()) );

assertArray( new double[]{0.011736, 0.988264 }, scaleDouble( 6, engine.marginalize("MaryCalls").getDistribution() ) );

assertArray( new double[]{0.001, 0.999}, scaleDouble(3, engine.marginalize("Burglary").getDistribution()) );

assertArray( new double[]{ 0.002, 0.998}, scaleDouble( 3, engine.marginalize("Earthquake").getDistribution() ) );

assertArray( new double[]{0.002516, 0.997484}, scaleDouble(6, engine.marginalize("Alarm").getDistribution()) );
}

public void testAlarmEvidence() {
BayesEngine nue = new BayesEngine(jTree);

nue.setLikelyhood( new BayesLikelyhood( graph, jtNode3, alarmNode, new double[] { 1.0, 0.0 }) );


assertArray( new double[]{0.9, 0.1}, scaleDouble( 6, nue.marginalize("JohnCalls").getDistribution() ) );

assertArray( new double[]{0.7, 0.3 }, scaleDouble( 6, nue.marginalize("MaryCalls").getDistribution() ) );

assertArray( new double[]{0.374, 0.626}, scaleDouble( 3, nue.marginalize("Burglary").getDistribution() ) );

assertArray( new double[]{ 0.231, 0.769}, scaleDouble( 3, nue.marginalize("Earthquake").getDistribution() ) );

assertArray( new double[]{1.0, 0.0}, scaleDouble( 6, nue.marginalize("Alarm").getDistribution() ) );
}

public void testEathQuakeEvidence() {
BayesEngine nue = new BayesEngine(jTree);

nue.setLikelyhood(new BayesLikelyhood(graph, jtNode3, earthquakeNode, new double[]{1.0, 0.0}));

assertArray( new double[]{0.297, 0.703}, scaleDouble( 6, nue.marginalize("JohnCalls").getDistribution() ) );

assertArray( new double[]{0.211, 0.789 }, scaleDouble( 6, nue.marginalize("MaryCalls").getDistribution() ) );

assertArray( new double[]{0.001, 0.999}, scaleDouble( 3, nue.marginalize("Burglary").getDistribution() ) );

assertArray( new double[]{1.0, 0.0}, scaleDouble( 3, nue.marginalize("Earthquake").getDistribution() ) );

assertArray( new double[]{0.291, 0.709}, scaleDouble( 6, nue.marginalize("Alarm").getDistribution() ) );
}

public void testJoinCallsEvidence() {
BayesEngine nue = new BayesEngine(jTree);

nue.setLikelyhood( new BayesLikelyhood( graph, jtNode1, johnCallsNode, new double[] { 1.0, 0.0 }) );

assertArray( new double[]{1.0, 0.0}, scaleDouble( 6, nue.marginalize("JohnCalls").getDistribution() ) );

assertArray( new double[]{0.04, 0.96 }, scaleDouble( 6, nue.marginalize("MaryCalls").getDistribution() ) );

assertArray( new double[]{0.016, 0.984}, scaleDouble( 3, nue.marginalize("Burglary").getDistribution() ) );

assertArray( new double[]{0.011, 0.989}, scaleDouble( 3, nue.marginalize("Earthquake").getDistribution() ) );

assertArray( new double[]{0.043, 0.957}, scaleDouble( 6, nue.marginalize("Alarm").getDistribution() ) );
}

public void testEarthquakeAndJohnCallsEvidence() {
BayesEngine nue = new BayesEngine(jTree);
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode1, johnCallsNode, new double[] { 1.0, 0.0 }) );

nue.setLikelyhood( new BayesLikelyhood( graph, jtNode3, earthquakeNode, new double[] { 1.0, 0.0 }) );

assertArray( new double[]{1.0, 0.0}, scaleDouble( 6, nue.marginalize("JohnCalls").getDistribution() ) );

assertArray( new double[]{0.618, 0.382 }, scaleDouble( 6, nue.marginalize("MaryCalls").getDistribution() ) );

assertArray( new double[]{0.003, 0.997}, scaleDouble( 3, nue.marginalize("Burglary").getDistribution() ) );

assertArray( new double[]{ 1.0, 0.0}, scaleDouble( 3, nue.marginalize("Earthquake").getDistribution() ) );

assertArray( new double[]{0.881, 0.119}, scaleDouble( 6, nue.marginalize("Alarm").getDistribution() ) );
}
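The assertArray and scaleDouble helpers used throughout these tests are not shown in the snippet. A plausible implementation (only the names and call shapes come from the code above; the bodies are my guess):

```java
import java.util.Arrays;

// Plausible implementations of the test helpers used above; the bodies
// are guesses, only the names and call shapes come from the snippet.
public class BayesTestHelpers {
    // Rounds each entry to the given number of decimal places, so
    // floating-point potentials can be compared against hand-computed values.
    public static double[] scaleDouble(int precision, double[] values) {
        double factor = Math.pow(10, precision);
        double[] scaled = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            scaled[i] = Math.round(values[i] * factor) / factor;
        }
        return scaled;
    }

    // Fails loudly if the two arrays are not exactly equal after scaling.
    public static void assertArray(double[] expected, double[] actual) {
        if (!Arrays.equals(expected, actual)) {
            throw new AssertionError(
                "expected " + Arrays.toString(expected) +
                " but was " + Arrays.toString(actual));
        }
    }
}
```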

by Mark Proctor ( at May 07, 2014 03:41 AM

May 06, 2014

Bruce Silver: Details on BPMN Master Class

Details of my BPMN Master Class on June 2 and 9 have now been finalized.  If you know BPMN Method and Style and you want to take the next step, this class is for you!

The class is split into two 5-hour sessions one week apart, so students will have time to complete problem sets assigned at the end of the first session and mail them in before the second session, when selected solutions will be presented.  Here is the outline of the class:

Day 1

  1. Overview and Objectives
  2. Method and Style Review
    • Instance alignment
    • Hierarchical modeling and gateway end state test
    • Avoiding deadlocks, multimerge, and unsafe models
    • Big 3 event types – Message, Timer, Error
    • Loop vs MI activity
  3. Batching and Multi-Pool Models
  4. Signal, Conditional, Escalation Events
  5. Event Subprocesses
  6. Problem Set Assignment

Day 2

  1. Problem Set Presentations and Discussion
  2. Enterprise Process Map
  3. Case Management and Declarative BPMN
    • CMMN vs BPMN
    • Ad hoc activities in BPMN
    • Event-condition-action pattern
    • Declarative BPMN
  4. Your Scenarios and Patterns
  5. Master Class Certification Exercise

The Master Class is open to students who are already Method and Style certified, but it begins with a quick review of some of its more technical concepts: alignment of the activity and process instance; using gateways to test child-level end state, merging parallel and conditionally parallel flows; basic patterns for Message, Timer, and Error events; and the difference between Loop and Multi-Instance activities.  We then go into mostly new material, beginning with how to deal with batching in end-to-end business processes, using multiple BPMN processes coordinated via messages and shared data.  We’ll spend some time on the “lesser” Level 2 event types – Signal, Conditional, and Escalation – why each is a little strange, and the most important use cases for each one.  We finish Day 1 with event subprocesses, which will prove extremely valuable when we get to case management and “declarative BPMN” on Day 2.

At the end of the session, four homework exercises will be assigned based on the Day 1 material. Students will mail in their solutions prior to Day 2, at which time selected solutions will be presented to the class and discussed.  Students are also invited to send in their own questions and scenarios, which we will discuss on Day 2 as well.  That thorny problem you have been struggling with in your own process models?  Send it in, and we’ll discuss various ways to model it on Day 2. In addition, on Day 2 we will discuss how BPMN models relate to enterprise BPM architecture models, a topic rarely given adequate treatment.  We’ll also explore how BPMN can do what it’s not supposed to be able to do: case management.  We’ll look at how escalation event subprocesses, parallel-multiple events, and other BPMN 2.0 constructs can be used to describe ad hoc behavior and “declarative” process models.

At the end of Day 2, we explain the certification exercise.  As in the BPMN Method and Style class, students have 60 days to complete the certification.  I’ll be using itp commerce Process Modeler for Visio in my slides, but students have the option of using Signavio instead.  Sixty-day use of either tool is provided as part of the training.

Sound interesting?  The class runs June 2 and 9, live-online, from 11am to 4pm ET each day (that’s 5pm to 10pm CET in Europe).  We will use internet audio, and students are encouraged to use a headset and microphone to facilitate 2-way voice discussion.  Click here to register by credit card, or contact me by email to sign up by PO.

The post Details on BPMN Master Class appeared first on Business Process Watch.

by bruce at May 06, 2014 06:26 PM

Sandy Kemsley: The Case For Smarter Process At IBMImpact 2014

At analyst events, I tend to not blog every presentation; rather, I listen, absorb and take some time to reflect on the themes. Since I spent the first part of last week at the analyst event at IBM...

[Content summary only, click through for full article and links]

by Sandy Kemsley at May 06, 2014 04:59 PM

May 05, 2014

Try out now: camunda BPM Enterprise

Our brand-new Enterprise Edition offers you an extended monitoring of your process instances: You can now find and inspect completed process instances as well as inspect the things that already happened to instances still running. You can have a look at the visual audit trail in the BPMN diagram, inspect the lifecycle history of completed [...]

by Jakob Freund at May 05, 2014 10:31 PM

Thomas Allweyer: Forrester Compares Dynamic Case Management Systems

In its just-published report, the analyst firm Forrester has compared a total of 13 vendors of Dynamic Case Management systems. Dynamic or adaptive case management is about supporting and executing knowledge-intensive activities that cannot be precisely determined in advance and for which conventional flow-oriented BPM systems are not well suited. Forrester distinguishes two fundamental categories: design-time case management and run-time case management.

In design-time case management, most activities and large parts of the flow can be determined in advance. There are, however, points of variation that allow workers to respond to the particulars and the evolution of a case during execution. The case management system provides workers with the contextual information about the case that is needed for such individual adjustments. Design-time case management is particularly suited to heavily regulated activities, where compliance with specific rules must be ensured and demonstrated.

Run-time case management, by contrast, is suited above all to highly complex tasks with practically no predictable flow. The actual course of work only emerges during execution. Here the system does not prescribe work steps, but rather goals to be achieved along with conditions and rules to be observed. The targeted delivery of all case information to the worker is even more important here, as is the ability to communicate and to flexibly involve other workers.

The analysts first evaluated the suitability of the 13 systems separately for the design-time and run-time categories, and additionally produced an overall assessment. Even if the division into these two categories offers some useful guidance, in practice the transition between the two types of case management is likely to be fluid. It would therefore also be interesting to know whether a given system offers a unified concept that supports both approaches; one could then provide the appropriate degree of control on the one hand and flexibility on the other, depending on the task at hand.

A closer look at the transition to the "classical" BPM approach, with its highly structured, precisely predefined processes, would be equally helpful. Not for nothing does the current buzzword of the "intelligent" BPMS (iBPMS), coined by Forrester competitor Gartner, cover systems that, among other capabilities, support classical workflow control and dynamic case management alike.

At any rate, among the Dynamic Case Management vendors rated best by Forrester are several whose software platforms are also established in the BPM market. In the overall assessment, Pegasystems, IBM, Be Informed and Kana were rated as "Leaders", although the latter two are still relatively little known in this country. In run-time case management, the Austria-based vendor ISIS Papyrus also received a very high rating.

The Forrester Wave: Dynamic Case Management, Q1 2014
Download at Pegasystems (registration required)

by Thomas Allweyer at May 05, 2014 08:26 AM

May 02, 2014

Bruce Silver: Sudden Impact: IBM Merges Case into BPM (but forgets to announce it)

In the most significant enhancement to its BPMS since the Lombardi acquisition, IBM revealed at Impact this week that case management functionality will be a native feature of BPM 8.5.5, the June 2014 release.  I hesitate to say IBM “announced” it, because it was barely mentioned at Impact.  In fact, far more attention was paid to IBM Case Manager, aka Filenet P8, even though nothing new was announced for that product, which has had integration with BPM since the version 7 BPEL offering!  This is clearly an area where the internal politics has proved a higher mountain to climb than the technical obstacles, and it seems the effects of that still linger.

But I don’t want to dwell on that, because I really like the way IBM has implemented the feature.  It reinforces – I would actually say it proves – the notion that there is no essential difference between BPM and case management.  Processes come in all flavors, from straight-through to structured workflow to completely ad hoc activities, and in fact real business processes probably include bits of all of these.  You should not need separate middleware platforms to handle each bit.  That is so obvious, only the vendors don’t see it!

In BPM 8.5.5, case activities don’t require a separate process engine.  They are instantiated and monitored using the good old BPMN-based BPM runtime.  There is a new Case Designer tool, oriented nominally to “knowledge workers” – although, honestly, the biggest difference to me from the regular Process Designer is that it runs in a browser instead of Eclipse.


Figure 1. BPM Case Designer (Source: IBM)

Case activities “float” in the case definition.  You can put them in the Process Designer BPMN as well, where they have no sequence flows in or out.  They can be defined as required or optional in the case, the latter indicated by a dashed border.  They can be instantiated at runtime either manually by a user or by a few defined preconditions – adding a document to the case, change in a variable or case property, or a data expression becomes true.  The “implementation” of a case activity can be a normal User task, subprocess, or called process.  It took me a minute to understand the difference between a case activity and its User task implementation;  basically, the user who launches the case activity can assign the task to someone else, and that makes sense.


Figure 2. Case activities in Process Designer (Source: IBM)

Case management does require a new process portal, one that provides shared access to the whole case folder, instead of just a task list.  As you see below, it provides the case data, documents, tasks assigned to that user, and, on the right, case activities that can be launched by the user (as well as those completed or in progress).  It appears to be completely integrated with the rest of the BPM end user experience, not a separate thing off to the side.


Figure 3. Case Details in Process Portal (Source: IBM)

In other words, BPM 8.5.5 seamlessly blends structured and unstructured processes, and all combinations thereof, in a single product.  For years you could blend them by integrating separate platforms, but honestly, who on earth wants to do that?  Even Filenet customers don’t want to do that.  BPM has its own native content store, and can integrate with external ECM (including Filenet) via CMIS.  For some reason, IBM insists on calling the new capability “basic” case management, as if you still need Filenet to do “real” case management.  It’s simply baffling to me.  You still need Filenet to do advanced content management, but the process part, I think not.  If IBM puts any marketing behind the new case functionality – and it’s not clear at this point whether it will or not – I predict it will be overwhelmingly adopted by IBM BPM users.

I also think it’s a game-changer for the case management world in general.  Have you heard of CMMN – case management modeling notation – a new draft standard for case modeling in OMG?  It was started because BPMN  supposedly could not possibly handle the demands of case management.  I would say – and have said it already – that the execution semantics of BPMN already handle about 90% of it; the problem is the notation.  IBM has essentially filled in that last 10% on the notation side, maybe bending a rule or two slightly on the semantic side.  I hope that Oracle, SAP, and others will get together with IBM to push through a BPMN 2.1 (it’s not even enough change for a 3.0, in my opinion) that incorporates case activities.

Well, what about "adaptive", you ask? Isn't case management supposed to be adaptive?  Actually, very little ACM is really adaptive today, other than letting the knowledge worker decide what to do next.  But what's more interesting is the Whitestein-style adaptive, meaning goal-directed.  Independently triggered activities are a prerequisite for that, but you also need the goal-seeking logic.  At Impact, IBM introduced a new product on the ODM side that could do it, although it's not aimed at this use case today.  Called Decision Server Insights, it combines events, rules, and predictive analytics to trigger business actions.  As you can see from the marketing diagram below, the initial approach emphasizes extreme scale – millions of events, thousands of rules, etc.  I fear this is pointing it in the same needle-in-a-haystack direction that has kept CEP a small niche for so long.  But why not use that technology to provide the goal-seeking adaptation needed for next-generation BPM/ACM?  Now that would be truly game-changing!


Figure 4.  Decision Server Insights (Source: IBM)


The post Sudden Impact: IBM Merges Case into BPM (but forgets to announce it) appeared first on Business Process Watch.

by bruce at May 02, 2014 05:33 PM

April 30, 2014

Drools & JBPM: Decision Camp 2014 : Call for Speakers : Oct 13-15, San Jose

Decision Camp is on again for 2014, registration is now open.


Who Should Attend

Practitioners are Business Analysts or Business Experts, Developers or Architects who use, or are considering using, Decision Management technologies such as Business Rules, Predictive Analytics, Business Intelligence and Decision Optimization.

Join practitioners like you, as well as renowned experts from industry, consulting companies and technology vendors.

Why Attend

Decision CAMP is the first event for Decision Management practitioners. It is filled with hands-on activities and insightful experience-sharing sessions. If you are new to Business Rules or Predictive Analytics, join us to speed up your learning curve and get more out of those technologies.

The call for speakers is now also open:

Who should submit an abstract

We are looking for keynotes, case studies, general sessions, and technical workshops. We are particularly looking for case studies. If you are a Business Analyst, Rules Writer, Rules Analyst or Rules Architect, and your job function includes harvesting, eliciting or capturing business rules, or, more generally, decision logic, then you are the perfect speaker for the event!

If you are an Enterprise Architect, head of Software Development, or Software Architect, and your job function includes the integration of business rules / decision management technologies in your systems, then you are a wonderful speaker too!

We are looking for practitioners from both sides.

Submissions will not be selected if they appear to directly promote any products or services, or are of a commercial nature.

by Mark Proctor ( at April 30, 2014 02:26 AM

April 29, 2014

Bruce Silver: BPMN Class/Certification June 3-5

There are still seats available for my next BPMN Method and Style class.  The live-online class will be held June 3-5 from 11am to 4pm ET each day (5pm-10pm CET). You'll learn how to create process models that are not only correct per the standard but that reveal the process logic clearly and completely from the printed diagram. Students in this class will also receive access to bpmnPRO, my gamified eLearning app for BPMN Method and Style. You also get a 60-day license to Process Modeler for Visio and, as always, post-class BPMessentials certification is included at no additional cost. The price for the class is $1145 per student (1-4), $995 (5-9), or $895 (10+). Click here for more details, or click here to register. Act now while space remains.

The post BPMN Class/Certification June 3-5 appeared first on Business Process Watch.

by bruce at April 29, 2014 10:21 PM

April 26, 2014

Get a free camunda BPM video training

We will publish a new online training for camunda BPM very soon. We've been asked for it again and again, which is why I am more than happy that it's finally available. There is a little sneak preview available already: you can enjoy the first module (basically a video tutorial) for free! Get it here: [...]

by Jakob Freund at April 26, 2014 11:32 PM

Thomas Allweyer: Little Progress in the Adoption of BPM

The survey "The State of BPM 2014" is already the fifth of its kind. Since it has been carried out regularly every two years since 2005, with a largely unchanged questionnaire, it provides a good overview of the changes that have taken place in the practice of process management. And the balance is rather muted: overall, the practical adoption of BPM appears to have made no huge progress in recent years. Most companies sit at the second level of the CMM maturity model. They have defined important processes, but have not built an end-to-end, enterprise-wide process management practice in which process metrics are measured regularly and processes are continuously improved.

Cost reduction through more efficient processes is named as the most important goal. The use of sophisticated process modelling tools and of BPMS for process automation is also not yet everyday practice for the majority of respondents. For most, the most useful tool is a purely graphical one such as MS Visio. Still, most respondents regard process management as an integrative management approach, and most projects now have a cross-functional focus, i.e. they are no longer limited to looking at individual processes in isolation.

Interestingly, the previous study from 2011 painted a considerably more positive picture. The average process management maturity level, for example, as well as the rate of BPMS adoption, had risen markedly. By contrast, the figures from the current 2013 survey have fallen back to the level of 2007 and 2009. The study's authors attribute the more positive 2011 figures to a somewhat different mix of participants, whose number was also unusually high that year. They see a long-term trend of flat, or only slightly rising, interest in BPM.

Paul Harmon, Celia Wolf:
The State of Business Process Management – 2014
Download at BPMTrends

by Thomas Allweyer at April 26, 2014 11:44 AM

April 23, 2014

Drools & JBPM: jBPM accepted on Google Summer of Code 2014!

I'm thrilled to share with the whole Drools & jBPM community that once again a student has been accepted into the Google Summer of Code program to work on the projects over the summer. This year Google will be funding Nicolas Gomes (from Argentina) to work on the jBPM project. Nicolas' task will be to work towards the integration of the KIE Workbench with a Content Management System such as Magnolia.

The integration will involve the backend services and front end screens to work with documents from end to end.

Here you can find all the accepted projects this year (2014):

Nicolas has also started a blog where he will be sharing the integration progress. You can also follow him on Twitter: @nicolasegomez

As I will be mentoring the work, I will also be sharing some updates and videos about how the work is being done. So stay tuned and feel free to leave comments on Nicolas' blog regarding his proposals for the work that needs to be done. If you are looking to do something similar, please get in touch with Nicolas or with me so we can coordinate the work.

by salaboy ( at April 23, 2014 09:37 AM

April 22, 2014

Drools & JBPM: New feature overview : PMML

Today, I'll introduce a new 6.1 experimental feature, just being released from our incubator: Drools-PMML.

I'll spend the next few days describing this new module on the pages of this blog. Some of you, early adopters of Drools-Scorecards, will probably have heard the name. Thanks to the great work done by Vinod Kiran, PMML was already living inside that module. However, the Predictive Model Markup Language is much more than that. It is a standard that can be used to encode and interchange classification and regression models such as neural networks or decision trees. These quantitative models complement nicely the qualitative nature of business rules.

So, how does it work? Exactly like all other KIE assets such as processes, rules or decision tables. First, you have to create the model: there is plenty of statistical and data mining software that can generate or consume PMML. In the future, even Drools might be able to do that!
Once you have your model, just deploy it in your KieModule and let it become part of your KieBase. Assuming you are now familiar with the kmodule.xml configuration, let me show an example using the programmatic APIs:

String pmmlSource = // ... path to your PMML file

KieServices ks = KieServices.Factory.get();
KieFileSystem kfs = ks.newKieFileSystem();
kfs.write( ResourceFactory.newClassPathResource( pmmlSource )
           .setResourceType( ResourceType.PMML ) );
ks.newKieBuilder( kfs ).buildAll().getResults();
KieSession kSession = ks.newKieContainer(
        ks.getRepository().getDefaultReleaseId() ).newKieSession();

Let's imagine that you have a predictive model called "MockColdPredictor". It is used to predict the probability of catching a cold given the environmental conditions. It has one input called "temperature" and one output "coldProbability". The most basic way to invoke the model is to insert the input value(s) using a generated entry-point:

kSession.getEntryPoint( "in_Temperature" ) // "in_" + input name (capitalized)
        .insert( 32.0 );                   // the actual value

The result will be generated and can be extracted, e.g. using a Drools query:

QueryResults qrs = kSession.getQueryResults(
        "ColdProbability",    // the name of the output (capitalized)
        "MockColdPredictor",  // the name of the model
        Variable.v );         // the Variable to retrieve the result
Double probability = (Double) qrs.iterator().next().get( "$result" );

In future posts, we'll discuss in detail how PMML defines models and input and output fields. Based on that, we'll see how to integrate models and which options are available.

At the moment, these model types are supported:
  • Clustering
  • Simple Regression
  • Naive Bayes
  • Neural Networks
  • Scorecards
  • Decision Trees
  • Support Vector Machines
The supported version of the standard is 4.1. Support for these models is being consolidated, and more will be added soon.

Behind the scenes, the models are interpreted: that is, they are converted to an appropriate combination of rules and facts that emulate the calculations. A compiled version, where the models are evaluated directly, is possible and will be added in the future. So, for now, evaluation is slower than in a "native" engine. The goal is not to prove that production rules can outperform matrix operations at... doing matrix math :) Rather, the idea is to provide a uniform abstraction of a hybrid knowledge base, making it easier to integrate rule and non-rule based reasoning.
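To make "emulate the calculations" concrete: a PMML simple regression ultimately reduces to the arithmetic y = intercept + Σ coefficient_i · input_i, which the generated rules and facts reproduce. A plain-Java rendering of that evaluation (the model and coefficients are invented for illustration; this is not Drools-PMML code):

```java
// Invented illustration of what an interpreted regression model computes;
// the generated rules and facts emulate exactly this arithmetic.
public class RegressionEmulation {
    // y = intercept + sum(coefficient_i * input_i)
    public static double evaluate(double intercept, double[] coefficients, double[] inputs) {
        double y = intercept;
        for (int i = 0; i < coefficients.length; i++) {
            y += coefficients[i] * inputs[i];
        }
        return y;
    }
}
```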

So, my next post will describe the structure of a PMML model.
Stay tuned!
-- Davide

Disclaimer: as a new feature, it is likely to suffer from bugs and issues. Feedback is welcome.

Acknowledgments :
Thanks to the University of Bologna (Bologna, Italy), the KMR2 Project and Arizona State University (Scottsdale, AZ) which, over time, have supported this project.

Publications (more to follow) :
- D. Sottara, P. Mello, C. Sartori, and E. Fry. 2011. Enhancing a Production Rule Engine with Predictive Models using PMML. In Proceedings of the 2011 workshop on Predictive markup language modeling (PMML '11). ACM, New York, NY, USA, 39-47. DOI=10.1145/2023598.2023604

by Sotty ( at April 22, 2014 01:23 AM

April 17, 2014

Drools & JBPM: Number One Vendor in Japan!!!!

Red Hat JBoss BRMS is the number one BRMS vendor in Japan (reported by ITR, 2014).

by Mark Proctor ( at April 17, 2014 06:53 PM

April 15, 2014

Keith Swenson: Zero-code BPM Systems

The concept of "zero code" is the wish-fulfillment myth of the BPM industry.  Many advertisements claim that processes can be designed without any coding.  This is complete hog-wash.  There is, however, a possibility for a zero-code system, but let's imagine how that would have to work.

People desire systems that automatically do things for them. They tried it, and found that they had to tell the computer what to do in what situation — this is coding.

Marketing the Illusion

Then a clever trick was played: the idea of typing a bunch of text in an unfamiliar syntax was associated with the idea of coding. People thought that the hard part about getting the computer to do what you want is the typing part. If we can eliminate typing, we can eliminate coding. That is, if we can make a system where you tell the computer what to do without typing, only by using a mouse to drag and drop shapes, then you won’t have to do any coding, right? But you still have to learn what a bunch of obscure symbols mean, and what their interactions might be. It turns out that the “typing” is not the hard part of coding.

Dragging and dropping symbols on a map to tell the system exactly what to do is still coding, and it is just as hard as typing the code in text, because the hard part is figuring out what should be done in what situation.  You have not eliminated coding, you just disguised it.

Zero-Code Systems Today

There are some examples of no-coding systems if we look for them; two stand out:

1) Google Page Rank: When Yahoo was at the top of the search world, it employed a bunch of people to code up an index to the web. That is, they looked at a page and decided where it should fit. Google invented Page Rank, which gleaned associations just from the links between existing pages. No person had to decide that “giraffe” was associated with “african animals”; instead, that emerged from the data set without human involvement, that is, without coding. The algorithm can be tricked by purposefully constructing misleading links, but this is not programming in any traditional sense.

2) Google Translate: Every phrase translated gives you the option to see both the original and the translated version, and then you can improve the translation. Nobody has to sit down and program the translator by defining rules for it to follow in certain situations. Instead, it works out the rules by itself from the examples.

Actually, both of these are a little more complicated than I describe here, bootstrapped by some initial coding; however, it is fair to say that they improve without explicitly being told what to do.  Two properties are common to both: (a) they learn by example, and not by being given explicit rules, and (b) nobody is in control nor has to take responsibility for the results.
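The first example's core idea — rankings emerging purely from link structure, with no human categorizing anything — can be sketched as a small power iteration. This is a toy illustration of the general algorithm, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iters):
        # every page gets a small baseline share...
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # ...plus an equal slice of each page that links to it
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] = new.get(q, 0.0) + share
            else:
                # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

Running this on a three-page web where two pages link to "a" makes "a" the top-ranked page — the ordering falls out of the links alone, which is exactly the "no coding" point above.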

Zero Code BPM Systems

There is then a possibility for a BPM system that learns by example.  Here is what it would look like:

  • When installed, there would be no rules around what a person does.  No process diagram.
  • There would be a basic set of capabilities around recording information and communicating this to others.
  • People would identify people they work with, and make lists of colleagues very much like “friends” in Facebook, or “links” in LinkedIn.
  • Workers would simply start writing and sending documents to others.
  • After enough cases exist, the system would mine the history and show workers what the work patterns look like.
  • The system would use these patterns to suggest a course of action by recognizing similarity with earlier cases.  Similarity would consider instance data values, as well as timing and previous actions.
  • The emergent patterns of work could be used for predictive simulation of how a particular instance might go, letting people know when it might reach a certain stage or be finished.
  • The patterns could be used to distinguish different classes of people, essentially discerning their role without having to explicitly state it.
  • The pattern from one person (or set of people) could be compared with another set to try and determine which one has better outcomes, or achieves these more efficiently.
  • Simulations could be run using the pattern from one set of people, with the data from another set of people, to see if the other pattern might have done a better job.  Such results are never conclusive, but still might be thought-provoking.
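The "suggest a course of action by recognizing similarity with earlier cases" step above can be sketched very simply. This toy (the action names and the frequency-based similarity measure are my own invented simplification of what a real mining engine would do) suggests the action that most often followed the current case's last action in past cases:

```python
# Toy sketch of learning-by-example process guidance (illustrative only).
from collections import Counter

def suggest_next(history, current):
    """history: list of completed action sequences; current: partial sequence.
    Returns the action that most frequently followed current's last action."""
    if not current:
        return None
    last = current[-1]
    followers = Counter()
    for case in history:
        for i, action in enumerate(case[:-1]):
            if action == last:
                followers[case[i + 1]] += 1
    return followers.most_common(1)[0][0] if followers else None
```

No process diagram is ever drawn: the "process" is whatever pattern the completed cases happen to exhibit, which is the essence of the zero-code scenario.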

At no point in this scenario did we say that anyone drew up a process diagram.  Nobody specified what work was to be done, nor how to do it. Nobody was in control of assuring the efficiency of the organization, except in the normal way by training people and giving them performance reviews. This is a zero coding system.  The system itself is adaptive and learns, not because people can program it really fast and easily, but because no programming is needed.  Instead, it learns by example, and nobody is in control.

Back to the Real World

I am not trying to say that a zero-code system is better than a custom-coded system. Instead, the purpose of this exercise is to show a clear example of what a zero-code BPM system would look like, so that when someone claims that their system requires no coding, you will know what to compare it to.  Or to be more concise: a process diagram is simply another form of coding — don’t be fooled.

My example does not include integration with other existing information systems.  In my experience coding is required for any level of system integration.  Anyone claiming to do integration without coding is hoping the listener is very dim indeed.

Zero-code systems are all around us: email does not need programming before use; an office suite does not need to be programmed before use.  But these systems do not claim to support business processes in any significant way.

Nobody in the industry is yet ready for a BPM system that cannot be controlled. There is a deep-set management belief that workers must be controlled, and that if you let workers do whatever they want, they will behave chaotically. There is a belief in the separation of brains and brawn: one set of exclusive people will figure out what to do and code it into the system, so that another set can do it without thinking.  This market believes that command and control is needed to coordinate action.

Which is one reason why I showed a flock of murmurating starlings in my BPMNext talk. Clearly, organization emerges without needing any single individual in control, and without anyone telling others exactly what to do.  But, of course, that is just a bunch of birds flying — the idea doesn’t apply to a human organization with thousands of individuals working and interacting with other individuals.   Or does it?

In conclusion, the industry is clearly not ready for a BPM system which exerts no control over the participants in the process. Exerting control is what programming is all about, and therefore you will never see a zero-code BPM system sold as a BPM system.

by kswenson at April 15, 2014 05:51 PM

April 14, 2014

Bruce Silver: Last Call for April BPMN Class

We still have room in our BPMessentials BPMN Method and Style class next week.  It runs April 22-24 from 11am-4pm ET each day (that’s 5pm-10pm CET).  As a bonus, students in this class will receive free access to bpmnPRO, my gamified eLearning app for Method and Style, great preparation for both the Method and Style certification and my new BPMN Master Class on June 2 and 9.  Only those with Method and Style certification will be admitted to the Master Class.  Click here for more details on the class, or just click here to register.

The post Last Call for April BPMN Class appeared first on Business Process Watch.

by bruce at April 14, 2014 06:24 PM