Planet BPM

April 17, 2014

Drools & JBPM: Number One Vendor in Japan!!!!

Red Hat JBoss BRMS is the number one BRMS vendor in Japan (reported by ITR, 2014)

by Mark Proctor at April 17, 2014 06:53 PM

April 15, 2014

Keith Swenson: Zero-code BPM Systems

The concept of “zero code” is the wish-fulfillment myth of the BPM industry.  Many advertisements claim that processes can be designed without any coding.  This is complete hogwash.  There is, however, a possibility for a zero-code system, but let’s imagine how that would have to work.

People desire systems that automatically do things for them. They tried it, and found that they had to tell the computer what to do in what situation — this is coding.

Marketing the Illusion

Then a clever trick was played: the idea of typing a bunch of text in an unfamiliar syntax was associated with the idea of coding. People thought that the hard part about getting the computer to do what you want is the typing part. If we can eliminate typing, we can eliminate coding. That is, if we can make a system where you tell the computer what to do without typing, and only by using a mouse to drag and drop shapes, then you won’t have to do any coding, right? But you still have to learn what a bunch of obscure symbols mean, and what their interaction might be. It turns out that the “typing” is not the hard part about the coding.

Dragging and dropping symbols on a map to tell the system exactly what to do is still coding, and it is just as hard as typing the code in text, because the hard part is figuring out what should be done in what situation.  You have not eliminated coding, you just disguised it.

Zero-Code Systems Today

There are some examples of no-coding systems if we look for them; two stand out:

1) Google Page Rank: When Yahoo was at the top of the search world, it employed a bunch of people to code up an index to the web. That is, they looked at a page and decided where it should fit. Google invented Page Rank, which gleaned associations just from the links between existing pages. No person had to decide that “giraffe” was associated with “african animals”; instead, that emerged from the data set without human involvement, that is, without coding. The algorithm can be tricked by purposefully constructing misleading links, but this is not programming in any traditional sense.

2) Google Translate: Every phrase translated gives you the option to see both the original and the translated version, and then you can improve the translation. Nobody has to sit down and program the translator by defining rules for it to follow under certain situations. Instead from the examples, it works out the rules by itself.

Actually, these are both a little more complicated than I describe here, bootstrapped by some initial coding; however, it is fair to say that they improve without explicitly being told what to do.  Two properties are common to these: (a) they learn by example, and not by being given explicit rules, and (b) nobody is in control nor has to take responsibility for the results.
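The core of Page Rank can be illustrated in a few lines. This is a toy sketch of the published power-iteration idea, not Google's actual implementation; the three-page link graph and the damping factor usage are made up purely for illustration:

```java
import java.util.Arrays;

public class TinyPageRank {
    // LINKS[i] = pages that page i links to (every page here has at least one outlink).
    static final int[][] LINKS = { {1, 2}, {2}, {0} };
    static final double DAMPING = 0.85;

    /** Power iteration: rank flows along links until it settles to a fixed point. */
    static double[] ranks(int iterations) {
        int n = LINKS.length;
        double[] rank = new double[n];
        Arrays.fill(rank, 1.0 / n);                      // start with uniform rank
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1.0 - DAMPING) / n);      // "random jump" share
            for (int page = 0; page < n; page++) {
                double share = DAMPING * rank[page] / LINKS[page].length;
                for (int target : LINKS[page]) {
                    next[target] += share;               // each outlink passes on an equal share
                }
            }
            rank = next;
        }
        return rank;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(ranks(50)));
    }
}
```

Nobody tells the algorithm which page matters; importance emerges from the link structure alone, which is exactly the "no coding of the index" point above.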

Zero Code BPM Systems

There is then a possibility for a BPM system that learns by example.  Here is what it would look like:

  • When installed, there would be no rules around what a person does.  No process diagram.
  • There would be a basic set of capabilities around recording information and communicating this to others.
  • People would identify people they work with, and make lists of colleagues very much like “friends” in Facebook, or “links” in LinkedIn.
  • Workers would simply start writing and sending documents to others.
  • After enough cases exist, the system would mine the history and show workers what the work patterns look like.
  • The system would use these patterns to suggest a course of action by recognizing similarity with earlier cases.  Similarity would consider instance data values, as well as timing and previous actions.
  • The emergent patterns of work could be used for predictive simulation of how a particular instance might go, letting people know when it might reach a certain stage or be finished.
  • The patterns could be used to distinguish different classes of people, essentially discerning their role without having to explicitly state it.
  • The pattern from one person (or set of people) could be compared with another set to try and determine which one has better outcomes, or achieves these more efficiently.
  • Simulations could be run using the pattern from one set of people, with the data from another set of people, to see if the other pattern might have done a better job.  Such results are never conclusive, but still might be idea-provoking.

At no point in this scenario did we say that anyone drew up a process diagram.  Nobody specified what work was to be done, nor how to do it. Nobody was in control of assuring the efficiency of the organization, except in the normal way by training people and giving them performance reviews. This is a zero coding system.  The system itself is adaptive and learns, not because people can program it really fast and easily, but because no programming is needed.  Instead, it learns by example, and nobody is in control.
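The "suggest a course of action by recognizing similarity with earlier cases" step could, in its simplest form, look something like the following. This is a toy sketch with made-up case data and a deliberately naive similarity measure (last action only); a real system would also weigh instance data values and timing, as described above:

```java
import java.util.*;

public class NextActionMiner {
    // Completed historical cases, each a sequence of action names (hypothetical data).
    static final List<List<String>> HISTORY = List.of(
        List.of("receive", "review", "approve", "archive"),
        List.of("receive", "review", "reject"),
        List.of("receive", "review", "approve", "archive")
    );

    /** Suggest the action that most often followed the current case's last action. */
    static Optional<String> suggestNext(List<String> partialCase) {
        if (partialCase.isEmpty()) return Optional.empty();
        String last = partialCase.get(partialCase.size() - 1);
        Map<String, Integer> counts = new HashMap<>();
        for (List<String> past : HISTORY) {
            for (int i = 0; i + 1 < past.size(); i++) {
                if (past.get(i).equals(last)) {
                    counts.merge(past.get(i + 1), 1, Integer::sum);
                }
            }
        }
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey);
    }

    public static void main(String[] args) {
        // "review" was followed by "approve" twice and "reject" once in HISTORY.
        System.out.println(suggestNext(List.of("receive", "review")));
    }
}
```

Note that nothing here was drawn as a process diagram or programmed as a rule; the suggestion is mined entirely from examples.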

Back to the Real World

I am not trying to say that a zero-code system is better than a custom coded system. Instead, the purpose of this exercise is to show a clear example of what a zero-code BPM system would look like, so that when someone claims that their system requires no coding, you will know what to compare them to.  Or to be more concise: a process diagram is simply another form of coding — don’t be fooled.

My example does not include integration with other existing information systems.  In my experience coding is required for any level of system integration.  Anyone claiming to do integration without coding is hoping the listener is very dim indeed.

Zero-code systems are all around us: email does not need programming before use; an office suite does not need to be programmed before use.  But these systems do not claim to support business processes in any significant way.

Nobody in the industry is yet ready for a BPM system that cannot be controlled. There is a deep-seated management belief that workers must be controlled, and that if you let workers do whatever they want, they will behave chaotically. There is a belief in the separation of brains and brawn: one set of exclusive people will figure out what to do and code it into the system, so that another set can do it without thinking.  This market believes that command and control is needed to coordinate action.

Which is one reason why I showed a flock of murmurating starlings in my BPMNext talk. Clearly organization emerges without needing any single individual in control, and without anyone telling others exactly what to do.  But, of course, that is just a bunch of birds flying — the idea doesn’t apply to a human organization with thousands of individuals working and interacting with other individuals.  Or does it?

In conclusion, the industry is clearly not ready for a BPM system which exerts no control over the participants in the process. Exerting control is what programming is all about, and therefore you will never see a zero-code BPM system sold as a BPM system.

by kswenson at April 15, 2014 05:51 PM

April 14, 2014

Bruce Silver: Last Call for April BPMN Class

We still have room in our BPMessentials BPMN Method and Style class next week.  It runs April 22-24 from 11am-4pm ET each day (that’s 5pm-10pm CET).  As a bonus, students in this class will receive free access to bpmnPRO, my gamified eLearning app for Method and Style, great preparation for both the Method and Style certification and my new BPMN Master Class on June 2 and 9.  Only those with Method and Style certification will be admitted to the Master Class.  Click here for more details on the class, or just click here to register.

The post Last Call for April BPMN Class appeared first on Business Process Watch.

by bruce at April 14, 2014 06:24 PM

Thomas Allweyer: Survey on the Use of Agile Methods Launched

For the second time, the BPM lab of Hochschule Koblenz, led by Ayelt Komus, is conducting a study on the use of agile methods in practice. The field of application of methods such as Scrum and Kanban is no longer limited to software development itself. The development of other products, process management, and many other areas also benefit from these approaches.

Participation in the survey is possible until May 19 on its website. There you can also request the results of the first Status Quo Agile study from 2012.

by Thomas Allweyer at April 14, 2014 09:13 AM

April 13, 2014

Drools & JBPM: Come meet us at Red Hat Summit in SFO

This week, Red Hat Summit is taking place in San Francisco, and a lot of us (engineers, product managers, solution architects, etc.) will be there at the Moscone Center.  If you are attending Summit or DevNation (the developer-oriented conference co-located with Summit), feel free to come and see Mark Proctor's presentations on Drools / JBoss BRMS and/or my presentation on jBPM / JBoss BPM Suite.
There will be plenty of opportunities to meet us as well, such as the DevNation hack night on Wednesday; if you want to meet up but can't find us, try reaching out to us on Twitter, @markproctor or @KrisVerlaenen.
There will also be plenty of opportunity to watch one of the demos at the JBoss booth, and the Usability Team has set up a booth as well where you can check out JBoss BPM Suite 6 and provide feedback, so definitely go take a look.
Hope to see you all there; hopping on my flight now!

by Kris Verlaenen at April 13, 2014 09:24 AM

April 11, 2014

Drools & JBPM: Deploying kie-drools-wb on Tomcat

There have been a few emails recently to the Drools user mailing list reporting problems deploying KIE Drools Workbench to Tomcat. We had a run of them shortly after the initial release of 6.0.1 too.

Suspecting there might be an issue, I thought there would be no better way to spend a Friday afternoon than to take a look and give it a try. In short, I was able to deploy both 6.0.1 and 6.1.0-SNAPSHOT to Tomcat 7 with little problem.

Most of what I describe below is already included in the Tomcat WAR's README.txt. This is tucked away inside the WAR and hence not blatantly obvious to some.


For 6.0.1, starting with a clean install of Tomcat 7:

1. Copy "kie-tomcat-integration" JAR into TOMCAT_HOME/lib (org.kie:kie-tomcat-integration)
2. Copy "JACC" JAR into TOMCAT_HOME/lib ( in JBoss Maven Repository)
3. Copy "slf4j-api" JAR into TOMCAT_HOME/lib (org.slf4j:artifactId=slf4j-api in JBoss Maven Repository)
4. Add valve configuration into TOMCAT_HOME/conf/server.xml inside <Host> element as last valve definition:

   <Valve className="org.kie.integration.tomcat.JACCValve" />

5. Edit TOMCAT_HOME/conf/tomcat-users.xml to include roles and users; make sure the 'analyst' or 'admin' role is defined, as one of these is required to be authorized to use kie-drools-wb
6. Delete inside WEB-INF/classes/META-INF/services
7. Rename to inside WEB-INF/classes/META-INF/services
8. Increase Java's PermGen space by adding file TOMCAT_HOME/bin/ containing export JAVA_OPTS="-Xmx1024m -XX:MaxPermSize=256m"
9. Start Tomcat with TOMCAT_HOME/bin/
10. Go to Management Console, http://localhost:8080/management
11. Deploy modified WAR

If you do not complete these steps the WAR still works "out of the box", but you'll need to define users in WEB-INF/classes/
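For step 5, a minimal tomcat-users.xml might look like the following. The username and password are placeholders; only the role names ('admin', 'analyst') come from the instructions above:

```xml
<tomcat-users>
  <role rolename="admin"/>
  <role rolename="analyst"/>
  <user username="workbench" password="changeme" roles="admin,analyst"/>
</tomcat-users>
```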


For 6.1.0-SNAPSHOT, starting with a clean install of Tomcat 7:

1. Copy "kie-tomcat-integration" JAR into TOMCAT_HOME/lib (org.kie:kie-tomcat-integration)
2. Copy "JACC" JAR into TOMCAT_HOME/lib ( in JBoss Maven Repository)
3. Copy "slf4j-api" JAR into TOMCAT_HOME/lib (org.slf4j:artifactId=slf4j-api in JBoss Maven Repository)
4. Add valve configuration into TOMCAT_HOME/conf/server.xml inside Host element as last valve definition:

   <Valve className="org.kie.integration.tomcat.JACCValve" />

5. Edit TOMCAT_HOME/conf/tomcat-users.xml to include roles and users; make sure the 'analyst' or 'admin' role is defined, as one of these is required to be authorized to use kie-drools-wb
6. Start Tomcat with TOMCAT_HOME/bin/
7. Go to Management Console, http://localhost:8080/management
8. Deploy modified WAR

The differences between 6.0.1 and 6.1.0 are due to clean-up of code committed to the release branch between releases.

by Michael Anstis at April 11, 2014 03:20 PM

April 09, 2014

Drools & JBPM: Webinar (April 10th): Business Process Simulation

Get More Value out of BPM with BPSim Simulation


Thursday, April 10th, 11:00AM Eastern

Business process design with BPMN2 can be complex, with many feasible options for implementing a business strategy. How can process designers evaluate alternative approaches before committing to a costly rollout of an untried process design?
Recently, there have been great strides in new process simulation tools, intended to answer this question and help analysts optimize their process designs before any real data is available to test them against. The BPSim standard is now emerging to provide a common framework for defining and exchanging simulation data in conjunction with BPMN2 models.
Join Nathaniel Palmer, with guest Kris Verlaenen, to learn how BPSim-compatible tools can help you understand how business process designs will perform in practice, and where to look to improve and optimize them.
As part of the presentation we will see a live demonstration of Red Hat's new process simulation tool, included with JBoss BPM Suite. Attendees will also receive access to the developer version of BPM Suite, and the exclusive demonstration models used in the presentation.
Why You Should Attend:
  • Learn How to Use Business Process Simulation for Data-Driven Process Excellence
  • Gain Both Actionable Ideas and a Complete Working BPM Suite with Process Simulation Capabilities
  • Jumpstart Your BPM Programs With Pre-built and Ready-to-Run Simulation Models
Who Should Attend:
  • Business Architects and BPM Practitioners Looking to Get Started with Process Simulation
  • Business Analysts Seeking to Leverage BPM and Simulation for Data-Driven Process Optimization
  • Anyone Looking to Get Started with BPM, BPMN2 Process Modeling, or Process Simulation
Register Now

Nathaniel Palmer
CTO & VP, Business Process Management, Inc.
A best-selling author, practitioner, and rated as the #1 most influential thought leader in BPM by independent research, Nathaniel is co-author of a dozen books on innovation and knowledge work, including “Intelligent BPM” (FSI 2013), “How Knowledge Workers Get Things Done” (FSI 2012), “Social BPM” (Future Strategies), “Mastering the Unpredictable” (MK Press, 2008), “Excellence in Practice” (FSI 2007), as well as the “Encyclopedia of Database Systems” (Springer Reference, 2007) and “The X-Economy” (Texere, 2001). Nathaniel has been the Chief Architect for projects involving investments of $200 Million or more, and frequently tops the lists of the most recognized names in his field. He was the first individual named as Laureate in Workflow.

Kris Verlaenen
jBPM Project Lead, Red Hat
Kris is the JBoss jBPM project lead and the lead technical architect behind Red Hat JBoss BPM Suite 6. After finishing his Ph.D. in computer science in 2008, he joined JBoss. In 2010, he became the jBPM project lead. He also has a keen interest in the healthcare domain, one of the areas that has shown a great need for flexible processes and advanced rule and event-processing integration.

by Kris Verlaenen at April 09, 2014 02:21 PM

Keith Swenson: Overautomation – the Value of Returning to Manual Work

I regularly post about the advantages of using natural (as opposed to artificial) intelligence in the workplace.  I also carefully say that there are two kinds of work: routine work that should be automated, and unpredictable work that should not be automated, and it should be fairly easy to distinguish the two.  But is it?  Toyota is taking some surprising actions putting intelligent workers back in positions formerly thought to be routine.  It turns out those positions weren’t as routine as they originally thought.

Automation normally replaces intelligent workers with machinery.  Let’s face it, for much of history most people have held jobs that required little or no thinking.  Automating routine work is a good thing, because it frees people to participate in the more challenging knowledge work.

I often use the automobile factory floor as an example of a place to automate as much as possible.  The factory is a controlled environment, where external unpredictable influences can be mostly eliminated.  Inside that environment, one should automate as much as possible.

In the surprising article “Toyota is becoming more efficient by replacing robots with humans” and from Bloomberg “‘Gods’ Make Comeback at Toyota as Humans Steal Jobs From Robots” we learn that Toyota is actually reversing this trend.  That is, they are putting people back in to routine jobs that could be done by robots.  Or could they?

The reason for doing this is very sound: only by actually doing the job can you understand the job and suggest improvements to it.  It is not good enough to simply watch the robots and try to find areas of waste.  This suggests that there is more to the job than what can be seen.  That tacit knowledge comes from the manual work: as you focus your action toward an explicit goal, the mind is always working on other, tacit objectives.  For these unconscious processes to work, you need time, and you need engagement.

The article cites some dramatic improvements.  This does not by any means indicate the end of automation.  Suggestions from these workers will be used to improve the automation elsewhere.  The automation remains great for doing the exact same thing over and over.  The full job is not merely doing the same thing over and over.  The world changes, and the factory can not be completely isolated from it, and it needs to change as well.  The machines that automate the factory lines do not suggest ways to improve that assembly line.  Instead, you need some number of intelligent people working the same job.

Perhaps this will start a new trend and a new job role.  For every business process analyst, there will be a “master worker” — someone who actually does the entire job manually. The master worker not only understands the process in detail, but actually manually performs it on a small number of cases. Instead of calling a web service, the master worker actually carries the work from place to place.  Sound crazy?  I don’t know.  Maybe.

However, one thing we should not be surprised about: it is not always easy to distinguish routine work from knowledge work.  Sometimes there is a knowledge aspect of a job that is not obvious.  Reductionists tend to assume the job is simpler than it really is, and will tend to automate some things that should not be.  So as automation proceeds, it is natural to experience “overautomation” when you go too far.  We should be looking for overautomation, and then de-automating some things.  It is natural that through this process of automating and de-automating, we will explore the boundary of what can and cannot be automated, and eventually find the optimal amount of automation.


by kswenson at April 09, 2014 10:26 AM

April 08, 2014

Thomas Allweyer: Developing Process Applications with SAP BPM

This book offers a comprehensible and comprehensive introduction to developing and executing processes with SAP Netweaver BPM. Using an example process for handling spare-part orders, the authors describe the complete path from the business-level process description to the executable process model with all required details. Detailed step-by-step instructions with numerous screenshots help the reader retrace this development on the system. In addition, the process model and all files used are available for download. The configuration of the system, the execution and administration of running processes, and the monitoring and reporting of process activity are also described in detail.

Within SAP's extensive technology stack, the BPMS is only one component, and its full benefit unfolds especially in interplay with other components. It is often not easy for users to see how the various SAP components interact and which roles they each play. In some cases, comparable functions are offered by several systems at once. For process analysis, for example, one can use not only the corresponding functionality of the BPM system itself, but also the separate system “SAP Operational Process Intelligence”. The book brings clarity to the various SAP products offered around the BPMS. The capabilities of the Operational Process Intelligence system go well beyond the analysis functions built into the BPMS, e.g. by including other systems used for execution and supporting real-time analyses. The interplay with Business Rules Management (BRM) is also described, as is the use of SAP Process Integration (PI) to connect other systems and business partners.

The book consists of four parts. The first part introduces BPM and process modeling with BPMN, covering the particularities of BPMN modeling for the SAP BPM system. Not all elements of the BPMN standard are supported, and in places special modeling rules must be observed.

The subject of the most extensive second part is the complete development of an executable process model. First, the creation of the BPMN model for the case study is described in detail. This model is then extended with the further required content: services to be integrated, the data flow, the user interface, and business rules. For the services and the data flow, standards from the web-services world are used, such as WSDL, XSD, and XPath. For developing the user interface, SAP provides several options, which can be used either online or offline. In the offline variant, a form is sent by e-mail, filled in by the recipient, and returned by e-mail.

For analysis, the BPM system can access various data generated during process execution, such as the events that occurred or the activities performed. It is also possible to analyze process-specific data: in an order-handling process, for example, one could evaluate the ordered items and quantities. For this, one models a reporting activity and specifies which attributes should be evaluated. With asynchronous communication between different systems, the modeler must also take care of correlating the exchanged messages, i.e. assigning them to the correct process instances. Finally, the book explains how other programs can call the BPM system via the BPM Public API.

The third part covers the deployment and execution of processes. The tasks to be performed can be made available to users in the central worklist of the SAP Netweaver Portal, which bundles tasks from different systems into a unified view. SAP BPM also offers its own inbox. Administration and monitoring of processes is done via the SAP NetWeaver Administration component. Numerous configuration options allow the BPM system to be adapted to the respective usage scenario and the existing system landscape.

Part four of the book is devoted to the aforementioned Operational Process Intelligence system for more advanced analyses, as well as the interplay with further SAP components as part of SAP Netweaver Process Orchestration.

With its detailed instructions, the book is aimed primarily at process modelers and administrators who want to get started with SAP Netweaver BPM. But it should also be interesting for readers who first simply want to learn and understand the capabilities of this BPM system. In any case, it is helpful to have access to an installation of the system and to try out the examples oneself.

Birgit Heilig, Martin Möller:
Business Process Management mit SAP Netweaver BPM
Galileo Press 2014
The book at amazon.

by Thomas Allweyer at April 08, 2014 06:46 AM

BPMN Guru Wanted

Do you enjoy writing down best practices, distilling methods from project engagements, leaving slides behind for posterity, or writing articles and essays? Do you also like defending your ideas in a shark tank of other consultants? Do you find it exciting to implement new training modules around BPM(N) or to develop modern eLearning concepts? But of course you still work [...]

by Bernd Rücker at April 08, 2014 05:48 AM

Drools & JBPM: Drools - Bayesian Belief Network Integration P2

A while back I mentioned I was working on Bayesian Belief Network integration, and I outlined the work I was doing around junction tree building and ensuring we had good unit testing.

Today I finally got everything working end to end, including the addition of hard evidence. The next stage is to integrate this into our Pluggable Belief System. One of the things we hope to do is use Defeasible-style superiority rules as a way of resolving conflicting evidence.

For those interested, here are the fruits of my labours, showing end-to-end unit testing of the Earthquake example, as covered here.

Graph<BayesVariable> graph = new BayesNetwork();

GraphNode<BayesVariable> burglaryNode = graph.addNode();
GraphNode<BayesVariable> earthquakeNode = graph.addNode();
GraphNode<BayesVariable> alarmNode = graph.addNode();
GraphNode<BayesVariable> johnCallsNode = graph.addNode();
GraphNode<BayesVariable> maryCallsNode = graph.addNode();

BayesVariable burglary = new BayesVariable<String>("Burglary", burglaryNode.getId(), new String[]{"true", "false"}, new double[][]{{0.001, 0.999}});
BayesVariable earthquake = new BayesVariable<String>("Earthquake", earthquakeNode.getId(), new String[]{"true", "false"}, new double[][]{{0.002, 0.998}});
BayesVariable alarm = new BayesVariable<String>("Alarm", alarmNode.getId(), new String[]{"true", "false"}, new double[][]{{0.95, 0.05}, {0.94, 0.06}, {0.29, 0.71}, {0.001, 0.999}});
BayesVariable johnCalls = new BayesVariable<String>("JohnCalls", johnCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.90, 0.1}, {0.05, 0.95}});
BayesVariable maryCalls = new BayesVariable<String>("MaryCalls", maryCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.7, 0.3}, {0.01, 0.99}});

JunctionTree jTree;

public void setUp() {
connectParentToChildren( burglaryNode, alarmNode);
connectParentToChildren( earthquakeNode, alarmNode);
connectParentToChildren( alarmNode, johnCallsNode, maryCallsNode);

alarmNode.setContent( alarm );
johnCallsNode.setContent( johnCalls );
maryCallsNode.setContent( maryCalls );

JunctionTreeBuilder jtBuilder = new JunctionTreeBuilder( graph );
jTree =;
}

public void testInitialize() {
JunctionTreeNode jtNode = jTree.getRoot();

// johnCalls
assertArray(new double[]{0.90, 0.1, 0.05, 0.95}, scaleDouble( 3, jtNode.getPotentials() ));

// burglary, earthquake, alarm
jtNode = jTree.getRoot().getChildren().get(0).getChild();
assertArray( new double[]{0.0000019, 0.0000001, 0.0009381, 0.0000599, 0.0005794, 0.0014186, 0.0009970, 0.9960050 },
scaleDouble( 7, jtNode.getPotentials() ));

// maryCalls
jtNode = jTree.getRoot().getChildren().get(1).getChild();
assertArray( new double[]{ 0.7, 0.3, 0.01, 0.99 }, scaleDouble( 3, jtNode.getPotentials() ));
}

public void testNoEvidence() {
NetworkUpdateEngine nue = new NetworkUpdateEngine(graph, jTree);

JunctionTreeNode jtNode = jTree.getRoot();
marginalize(johnCalls, jtNode);
assertArray( new double[]{0.052139, 0.947861}, scaleDouble( 6, johnCalls.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(0).getChild();
marginalize(burglary, jtNode);
assertArray( new double[]{0.001, 0.999}, scaleDouble( 3, burglary.getDistribution() ) );

marginalize(earthquake, jtNode);
assertArray( new double[]{ 0.002, 0.998}, scaleDouble( 3, earthquake.getDistribution() ) );

marginalize(alarm, jtNode);
assertArray( new double[]{0.002516, 0.997484}, scaleDouble( 6, alarm.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(1).getChild();
marginalize(maryCalls, jtNode);
assertArray( new double[]{0.011736, 0.988264 }, scaleDouble( 6, maryCalls.getDistribution() ) );
}

public void testAlarmEvidence() {
NetworkUpdateEngine nue = new NetworkUpdateEngine(graph, jTree);

JunctionTreeNode jtNode = jTree.getJunctionTreeNodes( )[alarm.getFamily()];
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode, alarmNode, new double[] { 1.0, 0.0 }) );


jtNode = jTree.getRoot();
marginalize(johnCalls, jtNode);
assertArray( new double[]{0.9, 0.1}, scaleDouble( 6, johnCalls.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(0).getChild();
marginalize(burglary, jtNode);
assertArray( new double[]{.374, 0.626}, scaleDouble( 3, burglary.getDistribution() ) );

marginalize(earthquake, jtNode);
assertArray( new double[]{ 0.231, 0.769}, scaleDouble( 3, earthquake.getDistribution() ) );

marginalize(alarm, jtNode);
assertArray( new double[]{1.0, 0.0}, scaleDouble( 6, alarm.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(1).getChild();
marginalize(maryCalls, jtNode);
assertArray( new double[]{0.7, 0.3 }, scaleDouble( 6, maryCalls.getDistribution() ) );
}

public void testEathQuakeEvidence() {
NetworkUpdateEngine nue = new NetworkUpdateEngine(graph, jTree);

JunctionTreeNode jtNode = jTree.getJunctionTreeNodes( )[earthquake.getFamily()];
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode, earthquakeNode, new double[] { 1.0, 0.0 }) );

jtNode = jTree.getRoot();
marginalize(johnCalls, jtNode);
assertArray( new double[]{0.297, 0.703}, scaleDouble( 3, johnCalls.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(0).getChild();
marginalize(burglary, jtNode);
assertArray( new double[]{.001, 0.999}, scaleDouble( 3, burglary.getDistribution() ) );

marginalize(earthquake, jtNode);
assertArray( new double[]{ 1.0, 0.0}, scaleDouble( 3, earthquake.getDistribution() ) );

marginalize(alarm, jtNode);
assertArray( new double[]{0.291, 0.709}, scaleDouble( 3, alarm.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(1).getChild();
marginalize(maryCalls, jtNode);
assertArray( new double[]{0.211, 0.789 }, scaleDouble( 3, maryCalls.getDistribution() ) );
}

public void testJoinCallsEvidence() {
NetworkUpdateEngine nue = new NetworkUpdateEngine(graph, jTree);

JunctionTreeNode jtNode = jTree.getJunctionTreeNodes( )[johnCalls.getFamily()];
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode, johnCallsNode, new double[] { 1.0, 0.0 }) );

jtNode = jTree.getRoot();
marginalize(johnCalls, jtNode);
assertArray( new double[]{1.0, 0.0}, scaleDouble( 2, johnCalls.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(0).getChild();
marginalize(burglary, jtNode);
assertArray( new double[]{0.016, 0.984}, scaleDouble( 3, burglary.getDistribution() ) );

marginalize(earthquake, jtNode);
assertArray( new double[]{ 0.011, 0.989}, scaleDouble( 3, earthquake.getDistribution() ) );

marginalize(alarm, jtNode);
assertArray( new double[]{0.043, 0.957}, scaleDouble( 3, alarm.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(1).getChild();
marginalize(maryCalls, jtNode);
assertArray( new double[]{0.04, 0.96 }, scaleDouble( 3, maryCalls.getDistribution() ) );
}

public void testEathquakeAndJohnCallsEvidence() {
JunctionTreeBuilder jtBuilder = new JunctionTreeBuilder( graph );
JunctionTree jTree =;

NetworkUpdateEngine nue = new NetworkUpdateEngine(graph, jTree);

JunctionTreeNode jtNode = jTree.getJunctionTreeNodes( )[johnCalls.getFamily()];
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode, johnCallsNode, new double[] { 1.0, 0.0 }) );

jtNode = jTree.getJunctionTreeNodes( )[earthquake.getFamily()];
nue.setLikelyhood( new BayesLikelyhood( graph, jtNode, earthquakeNode, new double[] { 1.0, 0.0 }) );

jtNode = jTree.getRoot();
marginalize(johnCalls, jtNode);
assertArray( new double[]{1.0, 0.0}, scaleDouble( 2, johnCalls.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(0).getChild();
marginalize(burglary, jtNode);
assertArray( new double[]{0.003, 0.997}, scaleDouble( 3, burglary.getDistribution() ) );

marginalize(earthquake, jtNode);
assertArray( new double[]{ 1.0, 0.0}, scaleDouble( 3, earthquake.getDistribution() ) );

marginalize(alarm, jtNode);
assertArray( new double[]{0.881, 0.119}, scaleDouble( 3, alarm.getDistribution() ) );

jtNode = jTree.getRoot().getChildren().get(1).getChild();
marginalize(maryCalls, jtNode);
assertArray( new double[]{0.618, 0.382 }, scaleDouble( 3, maryCalls.getDistribution() ) );
}
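The tests above repeatedly marginalize a clique's joint distribution onto a single variable. As a rough, self-contained illustration of what that operation does (this is not the Drools API; the class and method names here are invented for the sketch), marginalizing a two-variable joint table means summing out the other variable:

```java
import java.util.Locale;

public class MarginalizeSketch {
    // Sum out variable B from a joint table P(A, B), leaving the marginal P(A).
    static double[] marginalizeOntoA(double[][] joint) {
        double[] marginal = new double[joint.length];
        for (int a = 0; a < joint.length; a++) {
            for (int b = 0; b < joint[a].length; b++) {
                marginal[a] += joint[a][b];
            }
        }
        return marginal;
    }

    public static void main(String[] args) {
        // Illustrative joint table over two binary variables (rows: A, columns: B)
        double[][] joint = { { 0.0045, 0.0005 }, { 0.0498, 0.9452 } };
        double[] pA = marginalizeOntoA(joint);
        // Rounded to three decimals, in the spirit of scaleDouble(3, ...) above
        System.out.printf(Locale.US, "%.3f %.3f%n", pA[0], pA[1]);
    }
}
```

In the junction-tree tests, the same idea is applied to clique potentials after evidence has been propagated, with the result rounded via `scaleDouble` before the assertion.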

by Mark Proctor ( at April 08, 2014 12:29 AM

April 06, 2014

Drools & JBPM: Exercise 1: Public Training San Francisco 2014

This time we’ll walk through one of the exercises from the Drools and jBPM Public Training. It involves rule execution for a particular case:
We’re going to represent the scenario of a cat trapped on a limb, and all the things needed to resolve the situation. For that, we will need:
  • Pet: with a name, a type and a position
  • Person: will have a pet assigned to him, and can call the pet down
  • Firefighter: Will be able to get the cat down from the tree as a last resort
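A minimal Java model along these lines might look like the following sketch. This is not the training's actual source; the field and accessor names (e.g. `petCallCount`, `callPet`) are assumptions inferred from the rule patterns shown later in the post:

```java
public class CatRescueModel {
    public enum PetType { CAT, DOG }

    public static class Pet {
        private final String name;
        private String position;
        private final PetType type;

        public Pet(String name, String position, PetType type) {
            this.name = name;
            this.position = position;
            this.type = type;
        }
        public String getName() { return name; }
        public String getPosition() { return position; }
        public void setPosition(String position) { this.position = position; }
        public PetType getType() { return type; }
    }

    public static class Person {
        private final String name;
        private Pet pet;
        private int petCallCount; // how many times the owner has called the pet

        public Person(String name) { this.name = name; }
        public String getName() { return name; }
        public Pet getPet() { return pet; }
        public void setPet(Pet pet) { this.pet = pet; }
        public int getPetCallCount() { return petCallCount; }
        public void callPet() { petCallCount++; } // calling the pet down
    }

    public static class Firefighter {
        private final String name;
        public Firefighter(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        Person john = new Person("John");
        john.setPet(new Pet("mittens", "on a limb", PetType.CAT));
        john.callPet();
        System.out.println(john.getPet().getName() + " called " + john.getPetCallCount() + " time(s)");
    }
}
```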
Once we have a representation, we need to start defining rules to detect different types of situations and act accordingly:

Rule “Call Cat when it is in a tree”
  When my Cat is on a limb in a tree
  Then I will call my Cat

Rule “Call the Fire Department”
  When my Cat is on a limb and it doesn’t come down when I call
  Then call the Fire Department

Rule “Firefighter gets the cat down”
  When the Firefighter can reach the Cat
  Then the Firefighter follows steps to retrieve the Cat
Each of these rules will have a specific DRL representation, based on the model we defined:

rule "Call Cat when it is in a tree"
when
    $p: Person($pet: pet, petCallCount == 0)
    $cat: Pet(this == $pet,
         position == "on a limb",
         type == PetType.CAT)
then
    //$cat.getName() + " come down!"
    update($p);
end

rule "Call the fire department"
when
    $p: Person($pet: pet, petCallCount > 0)
    $cat: Pet(this == $pet,
        position == "on a limb",
        type == PetType.CAT)
then
    Firefighter firefighter = new Firefighter("Fred");
end

rule "Firefighter gets the cat down"
when
    $f: Firefighter()
    $p: Person($pet: pet, petCallCount > 0)
    $cat: Pet(this == $pet, position == "on a limb",
        type == PetType.CAT)
then
    $cat.setPosition("on the street");
end

And then, some Java code to end up firing said rules:

KieServices kservices = KieServices.Factory.get();
KieSession ksession = kservices.getKieClasspathContainer().newKieSession();
Person person = new Person("John!");
Pet pet = new Pet("mittens", "on a limb", PetType.CAT);
person.setPet(pet); // assuming a setter that assigns the pet to its owner
ksession.insert(person);
ksession.insert(pet);
ksession.fireAllRules();

When we have these components defined, we will take advantage of the course to start modifying it to see how rules interact with each other, by:
  • Creating rules that insert dogs
  • Creating rules that make dogs chase cats that are in the same place as they are
  • Handling a firefighter that doesn’t show up
  • Anything you can think of!
Stay tuned for more info!

by Marian Buenosayres ( at April 06, 2014 06:04 AM

April 04, 2014

Bruce Silver: BPMN as an Execution Language

One of the reasons that BPMN so quickly displaced BPEL in the BPM space is that it had a graphical notation that exactly mirrored the semantic elements. What you see is what you get.  So when BPMN 2.0 changed the acronym to Business Process Model and Notation, I stubbornly refused to acknowledge the “and”.  For me it was all about the notation.  The whole basis of Method and Style was making the process logic crystal clear from the diagram, so if some behavior was not captured in the notation, it didn’t count.  But now I am changing my tune.

At bpmNEXT last week, one presentation after the next kept reinforcing a thought that has been much on my mind lately: that many of BPMN’s supposed limitations – it can’t describe case management, for instance, or goal-directed behavior – are actually limitations of the notation, not of the semantic model.  And why is that important?  Because it means that a BPMS that can execute BPMN can execute those other behaviors as well.  You don’t need a special engine to do it.

At bpmNEXT, Scott Menter of BPLogix described using a Gantt chart to drive process automation – activity enablement by prerequisites rather than by explicit control flow.  His main argument was that the simplicity of the Gantt chart made the process easier for business users to understand, and to prove it he contrasted that with a nasty rat’s nest of a traditional flowchart.  Hmmmm.  Yes, the Gantt is cleaner to look at, but that’s because it hides the process logic. Not so, says Scott. You can select an activity in the Gantt and its predecessors are highlighted in the tool.  Sure, you can do that in the live tool, but not in a paper or pdf printout of the model.

And here was lesson number one:  There is an inherent tradeoff between simplicity and transparency.  BPMN notation is messy because it reveals the process logic clearly.  If you don’t need to visualize the flow logic, you could make a much simpler diagram.  That lesson was reinforced again in the Q&A following my own presentation, together with Stephan Fischli and Antonio Palumbo representing BPMessentials, on a structured interview wizard that automatically generates a properly structured BPMN model following Method and Style principles.  Someone commented that the text representation of the process in the wizard might be a better way to describe the process logic to a business person.  That was never our intention.  For us, the wizard was just a means to an end – the BPMN – but it reinforces the point: a flowchart is not always the most intuitive description of all process types for all users.

John Reynolds of IBM provided lesson number two. He gave a sneak peek at how IBM BPM will expand to include case management.  He presented it by introducing a new BPMN element – I think it’s called an ad-hoc task or case task – that can be instantiated at runtime either by a user or by an event plus a condition.  The notation has the dashed border of a BPMN event subprocess, plus a couple new markers.  Even if such a new extension element were needed, IBM’s implementation probably puts a bullet in the head of CMMN, but I actually think BPMN 2.0 has this already!  The non-interrupting Parallel Multiple event subprocess – yeah, it’s there, look it up, p. 245 – already has that behavior.  IBM’s new task type could be considered a “visual shortcut,” similar to a few others in BPMN 2.0, an alternative notation for a more complicated standard serialization.  The parallel events in this case are either Message, Signal, or Escalation (representing manual instantiation) for the trigger plus Conditional for the condition.  IBM’s notation is cleaner, but I believe standard BPMN 2.0 engines can already execute it.

However, when your process or case is represented by a bunch of free-floating ad-hoc tasks or event subprocesses, the logic that connects them is invisible in the diagram.  Like the Gantt chart, you need to either trace hyperlinks in a live tool, or somehow parse the BPMN XML, to understand it. But what if we could find a new graphical representation of the BPMN that reveals it more intuitively?  Keep BPMN as the semantic language, but provide an alternative, non-flowchart visual representation for certain process types.  This is really an intriguing idea.

The third lesson came from a presentation by Dominic Greenwood of Whitestein.  He mostly focused on intelligent agents, another recurring theme at bpmNEXT, but the purpose of those agents appears to be mostly the same as in Whitestein’s presentation last year: goal-driven process execution.  This is really important and exciting, and Whitestein is clearly the leader in this area.  They have a notation that represents goals and subgoals linked to standard BPMN process fragments, but what is missing from those diagrams is the logic behind that linkage.  For me, the most interesting part of Whitestein is not intelligent agents – that seems like just an implementation style – but rather: what is the language for modeling goal-directed processes?  How do you design them?

There is a common thread linking BPLogix, IBM, and Whitestein.  It’s the notion of independent BPMN process fragments assembled dynamically at runtime based on a combination of events and conditions, ad-hoc user action, and some goal-seeking logic.  The control flow paradigm of conventional BPMN is great at revealing the process logic in the diagram, but it can’t describe these new behaviors.  The BPMN semantics still work, but the diagram does not.  We need new ways to visualize it; maybe Gantt is a good starting point.  And beyond that, how you go from goals to the specific events and conditions that enable the BPMN fragments is another vital area.  How do you model it?  How can you visualize it in a diagram?

It’s a lot to think about.

The post BPMN as an Execution Language appeared first on Business Process Watch.

by bruce at April 04, 2014 11:31 PM

Bruce Silver: More on Indirect Call Activity

In a recent post I discussed the utility of allowing the calledElement attribute of a BPMN Call Activity to be an expression that evaluates to a QName (in BPMN’s special usage, a prefixed id) rather than a literal QName value.  Tom Debevoise asked what that might look like in the XML and I offered something off the top of my head.  But on further reflection, I don’t think that would work.  Here is something that has a better chance.

The BPMN xsd defines the tFormalExpression datatype as a mixed complex type, so it cannot be used for an attribute; it must be a child element, and, until the BPMN 2.0 schema changes, it would have to be an extension element in a separate namespace.  Fortunately, the calledElement attribute is optional in the schema, so we can omit it.  Let’s call the extension element e:calledElement, where the prefix e: means the extension namespace.

Assume we want to call some variant of the process Handle Compliance identified by a data object in the calling process named code.  We’ll assume all the variants are defined in the same target namespace as the calling process. Prior to the call, the calling process needs to populate the data object with the code value.  The simplest way to do it is to make the code value the id of the called process variant.  In that case, serialization of the indirect call activity would then look something like this:

<callActivity id="myIndirectCall" name="Handle Compliance">
  <extensionElements xmlns:e="">
    <e:calledElement language="">
      <!-- expression evaluating to the QName of the called process variant,
           e.g. built from getDataObject('code') -->
    </e:calledElement>
  </extensionElements>
</callActivity>
where tns designates the targetNamespace.  getDataObject is a special XPath function, discussed in the BPMN spec, that returns the value of a data object referenced by name.

This should be schema-valid, but of course the BPMN engine would have to understand the intended behavior.
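To make the intended engine behavior concrete, here is a rough sketch, in plain Java, of how an engine could resolve such an indirect call at runtime: evaluate the `e:calledElement` expression against the instance's data objects, then dispatch to the deployed process whose id matches the result. Everything here (the map-based registry, the data-object lookup) is hypothetical illustration, not any vendor's API:

```java
import java.util.HashMap;
import java.util.Map;

public class IndirectCallSketch {
    public static void main(String[] args) {
        // Deployed variants of Handle Compliance, keyed by process id
        // (all assumed to live in the same target namespace)
        Map<String, Runnable> deployedProcesses = new HashMap<>();
        deployedProcesses.put("handleComplianceEU", () -> System.out.println("running EU variant"));
        deployedProcesses.put("handleComplianceUS", () -> System.out.println("running US variant"));

        // Data objects of the calling process instance; the calling process
        // has populated 'code' with the id of the desired variant
        Map<String, String> dataObjects = new HashMap<>();
        dataObjects.put("code", "handleComplianceEU");

        // Stand-in for evaluating the e:calledElement expression:
        // getDataObject('code') yields the id of the process to call
        String calledElement = dataObjects.get("code");
        deployedProcesses.get(calledElement).run();
    }
}
```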

The post More on Indirect Call Activity appeared first on Business Process Watch.

by bruce at April 04, 2014 07:27 PM

Thomas Allweyer: Use cases are still frequently used

Use cases have been employed in software development for 25 years as a means of describing functional requirements. Not least as part of the Unified Modeling Language (UML), they have become quite widespread. Particularly in connection with agile methods, use cases have drawn some criticism in recent years, since the technique often works out fairly detailed flows in advance. Agile developers therefore prefer user stories, which focus on the user’s goals rather than fixing interaction details early on. The study at hand examines the use of use cases in practice. It does not provide a reliable picture of the method’s actual prevalence, since a large share of the 83 participants were recruited from the circle of a GI working group focused on use cases. The respondents are therefore likely to be people who are interested in the topic anyway, so it is no surprise that over 90% already use use cases or plan to do so. Still, the fact that these participants come from quite different industries suggests that use cases remain part of the standard method repertoire of software development in many places.

As expected, use cases are primarily used to specify functional requirements at the user level. They play an important role in communication with clients and within the development team. A key advantage of the method cited by respondents is the clean structuring of requirements. Use cases also help establish a shared understanding of the requirements. But there are challenges, too: practitioners find it difficult to choose the right granularity and to ensure completeness. The statement that use cases make an important contribution to overall project success is “somewhat” or “fully” endorsed by 76% of respondents.

Since about half of the respondents develop software with agile methods, it would also be interesting to examine whether and how use cases are employed in agile development, or whether their use is limited to traditional approaches.

Download the study from HKBS

by Thomas Allweyer at April 04, 2014 10:41 AM

April 03, 2014

Drools & JBPM: Drools Presentation at Square (SFO) : 11th of April

Friday the 11th of April at 11am, I'll be presenting internally at Square. So if you're there and want to know what we are up to with Drools, or are interested in rules and event processing, then come along.


by Mark Proctor ( at April 03, 2014 11:06 PM


This year, Red Hat is organizing DevNation for the first time (April 13-17, San Francisco), a new open source, polyglot conference for application developers and maintainers.  It combines, for example, the old JUDCon and CamelOne conferences, but offers top-notch keynotes, sessions, labs, hackfests, and panels geared for those who build (with) open source.  It is _the_ place for a developer to get excellent technical information from the experts directly, and/or hang out with pizza and beer!

Co-located is Red Hat Summit (April 14-17, San Francisco),  meant for anyone looking to exponentially increase their understanding of open source technology and identify powerful solutions for their business needs (although typically at a slightly higher level compared to DevNation). From community enthusiasts and system administrators to enterprise architects and CxOs, there are sessions and tracks for each level of interest and need.

This year, I'll be doing a "BRMS 6" presentation (available to both DevNation and Red Hat Summit attendees), giving a quick overview of the BRMS 6.0 features, but also sharing a lot of technical information on some of the most important new features, such as the new convention-and-configuration approach to building and deploying.

But this is just one tiny part of the huge amount of interesting keynotes, presentations, workshops, etc. you'll be able to attend.  Looking forward to speaking to some of you, or maybe even touching some code during the hackfest (bring your laptop and we'll get you started)!

Drop in Clinic
There will also be another all-day drop-in clinic, where Dr. Kris Verlaenen (jBPM project lead) and I will be hanging out and coding all day. So you can drop in and hang out with us, either to code, to ask questions, or just to admire Kris' big brain.

Building and Deploying with Red Hat JBoss BRMS 6
Mark Proctor — BRMS and BPMS Platform Architect

Tue April 15th, 3:40pm - 4:40pm
Red Hat JBoss BRMS 6 introduces a large number of new features and changes, with a strong focus on methodologies around building and deploying. In this session, Mark Proctor will explain:
  • Convention- and configuration-based approaches to authoring rule projects.
  • Building and deployment that is now aligned with Maven best practices.
  • A powerful, new, flexible, and extensible workbench that delivers an integrated web-based system for authoring and management.

You’ll learn everything that’s new in Red Hat JBoss BRMS 6 and how it can make delivering your projects easier than ever.

by Mark Proctor ( at April 03, 2014 06:19 PM

March 31, 2014

Keith Swenson: Not an Agent, but a Personal Assistant

It seemed that most of the talks at BPMNext mentioned agents at some level — but the meanings varied.  After many discussions it seems that we might focus on a special type of agent we can call a “Personal Assistant” which has the potential to dramatically change large scale BPM, and here is why.

Agent Definition

I asked dozens of attendees the question:  What is the definition of an agent and how is it different from other software?  The answers were surprisingly consistent around three requirements.  The software agent must:

  • do something for someone
  • work autonomously
  • be guided by goals

Then I pointed out that this definition is common to almost all software out there.  All software is designed to do something for someone.  Working autonomously is very common, usually what we call “running in the background.”  Finally, all software has some way to tell when it is done working.  Examples of this include your word processor when it formats your document in the background, your email server receiving and forwarding email, a google alert telling you when a web page has been created, your anti-spam filter, and your virus protection software, etc.  Agents are plentiful and widely used today.

Even so, there is a feeling that what we mean by agent should be so much more than this.  Some said it should behave like a human. It should go beyond what it was programmed to do. It should have some learning capability. It should interact with other real people. It should be more like the operating system character in the movie “Her.” Nathaniel Palmer reminded me of the Apple Knowledge Navigator vision where your agent appeared on screen with a human head, and you talk to it, while it assists you in taking care of a lot of the tedious aspects of communications. It would arrange appointments, coordinate calls, find stuff, and in general make it easier to communicate. If you see business as a conversation between people, then this kind of intelligent agent might really help.

Let’s call it a Personal Assistant so we can distinguish this kind of agent.

Visualizing the Personal Assistant

It is personal; it represents a real person (you) to others.  Not that anyone else would confuse the assistant for a real person, but that it is an intermediary between people to facilitate their communications.  Sort of like this:


It is useful not only for reaching out to others, but also in responding to others trying to reach you.

Meanwhile, it is reasonable to expect that the personal assistants will be communicating a lot more with each other than the people.  They might be checking on status, cloning projects, and updating status when things change.




Most of the presentations at BPMNext talked about agents being used within a closed system.  That is the old model of delivering an application or solution that is self-contained, and that all of the users visit for a specific purpose.  For any given person, there can only be a small number of such systems that they use, and those systems will not normally be under their control.  A user might have the option to configure an agent in the system, but probably only within that one system.

The Personal Assistant, instead of being an agent within a particular closed system, is an agent outside of such a system that can reach in to play a role for you.  That is, my personal assistant lives in my sandbox, and is completely under my control, but when doing its job, it reaches out and interacts with another sandbox to explore assignments there, and potentially to complete them.  This opens exciting possibilities.

The personal assistant is one type of agent, playing a very specific role, which I hope to explore in future posts.

by kswenson at March 31, 2014 09:52 AM

March 28, 2014

Drools & JBPM: Drools & jBPM Public Training @San Francisco

If you're going to the Red Hat Summit in April, take advantage of this opportunity:

Plugtree is organizing a public training on Drools and jBPM the week after Red Hat Summit in the San Francisco area, from April 21st to the 25th, in three different modalities:

  • Drools: April 21st to 23rd
  • jBPM: April 21st, 24th and 25th
  • Full (Drools + jBPM): April 21st to the 25th

This workshop introduces Business Process and Rules Management, preparing you to be immediately effective in using both Drools and jBPM to improve your applications. In the training, we will cover:
  • All the different syntaxes for defining rules
  • Drools runtime configuration tricks
  • Writing BPMN2 files and projects from scratch, to the point of having runnable modules.
  • jBPM configuration to gain full control of your process-based applications.
  • Kie Workbench user guides, including tips for integration with other systems.
  • Integration tips for architectural design of rule-based and process-based applications.
If you're interested in this training, you can download the full agenda, or click here to register. You can contact us at if you have any questions. Hope to see you there!
We offer options for Drools only (days 1 to 3), jBPM only (days 1, 4 and 5), and full training (days 1 to 5). Anyone can attend these trainings, regardless of whether they attend the Red Hat Summit.

by Marian Buenosayres ( at March 28, 2014 01:31 PM

March 27, 2014

Sandy Kemsley: bpmNEXT 2014 Wrapup And Best In Show

I couldn’t force myself to write about the last two sessions of bpmNEXT: the first was a completely incomprehensible (to me) demo, and the second spent half of the time on slides and half on a demo...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 27, 2014 10:05 PM

Sandy Kemsley: bpmNEXT 2014 Thursday Session 2: Decisions And Flexibility

In the second half of the morning, we started with James Taylor of Decision Management Solutions showing how to use decision modeling for simpler, smarter, more agile processes. He showed what a...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 27, 2014 07:26 PM

Sandy Kemsley: bpmNEXT 2014 Thursday Session 1: Intelligence And A Bit More BPMN

Harsh Jegadeesan of SAP set the dress code bar high by kicking off the Thursday demos in a suit jacket, although I did see Thomas Volmering and Patrick Schmidt straightening his collar before the...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 27, 2014 05:38 PM

Sandy Kemsley: bpmNEXT 2014 Wednesday Afternoon 2: Unstructured Processes

We’re in the Wednesday home stretch; this session didn’t have a specific theme but it seemed to mostly deal with unstructured processes and event-driven systems. The session started with John...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 27, 2014 12:45 AM

March 26, 2014

Sandy Kemsley: bpmNEXT 2014 Wednesday Afternoon 1: Mo’ Models

Denis Gagne of Trisotech was back after lunch at bpmNEXT demonstrating socializing process change with their BPMN web modeler. He showed their process animation feature, which allows you to follow...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 26, 2014 10:32 PM

Sandy Kemsley: bpmNEXT 2014: BPMN MIWG Demo

The BPMN Model Interchange Working Group is all about (as you might guess from the name) interchanging BPMN models between different vendors’ products: something that OMG promised with the BPMN...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 26, 2014 07:03 PM

Sandy Kemsley: bpmNEXT 2014 Wednesday Morning: Cloud, Synthetic APIs and Models

I’m not going to say anything about last night, but it’s a bit of a subdued crowd here this morning at bpmNEXT. We started the day with Tom Baeyens of Effektif talking about cloud workflow...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 26, 2014 05:53 PM

Sandy Kemsley: bpmNEXT 2014: Work Management And Smart Processes

Bruce Silver always makes me break the rules, and tonight I’m breaking the “everything is off the record after the bar opens” rule since he scheduled sessions after dinner and with an open bar in the...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 26, 2014 03:29 AM

Sandy Kemsley: bpmNEXT 2014 Tuesday Session: It’s All About Mobile

I’ll blog this year the same as last year’s bpmNEXT demos, with each session of multiple demos in a single post. The posts are a bit long, but they are usually grouped into themes so it works better...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 26, 2014 12:17 AM

March 25, 2014

Sandy Kemsley: bpmNEXT 2014 Begins!

We’re at the lovely oceanside Asilomar conference grounds a couple of hours drive south of San Francisco for this year’s bpmNEXT conference. Last year’s inaugural conference was a great experience –...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 25, 2014 10:31 PM

Thomas Allweyer: Perspectives on change management

In many companies, management’s competence in handling change is in poor shape. In one study, 70 percent of respondents did not trust their management to convey an attractive vision of the future. That is alarming, because in the digital age companies depend more than ever on adapting quickly to changing challenges and on implementing new business models and processes. There is no universal recipe for successful change initiatives, but there is a wealth of experience and field-tested approaches that have proven themselves in concrete company situations. This book contains thirteen contributions that examine various topics and concepts of change management. The authors are all practice-seasoned consultants organized in the “Q-Pool 100”, the “Official quality community of international business trainers and consultants”.

The spectrum covered by the contributions is very broad. It ranges from managing the growth of an internet startup to company succession, from innovation management to team development, from developing leadership competence to shaping the change process itself.

Two contributions may serve as examples. Wolfgang Müller explains the “go-i principle” of fair change communication. “Go-i” denotes the striving for consensus rooted in Japanese thinking. When the various parties involved have different interests and goals, a compromise is often struck. A compromise, however, means that nobody gets what they actually want. The go-i principle instead tries to find a genuine consensus in which all parties achieve their goals. As a prerequisite, one must first become truly clear about one’s own goals. The second step is to understand the goals of the other party. In the third step a consensus solution can be developed, which in the fourth step is bindingly agreed and implemented. Using the relocation of a plant as an example, Müller illustrates what such consensus solutions can look like for employees who reject a move. For one committed employee, a way was found for her to start her own business with a few colleagues at the old site and to work for the company as a supplier in the future.

In her contribution, Ursula Vranken describes the typical phases a startup passes through from founding to growth and expansion into other countries. She illustrates the different challenges of each phase using the development of the domain trading exchange Sedo. Finally, advice and tips for personnel management are given for each phase.

In general, most contributions stand out for their numerous practical examples and concrete recommendations. There are also more fundamental reflections on corporate culture and even on spirituality (understood, admittedly, in a broader sense), but it is precisely the “soft” topics, such as leadership competence or communication, that receive many solid recommendations. In places, marketing messages of the authors, who all work as consultants, shine through, but the useful content mostly prevails. Among the many facets of change management that the book presents, even experienced change managers should find new ideas.

Dieter Hohl (Ed.):
Change-Prozesse erfolgreich gestalten.
Menschen bewegen – Unternehmen verändern
Haufe 2012
Order from the publisher

by Thomas Allweyer at March 25, 2014 01:15 PM

March 24, 2014

Keith Swenson: Assistants Transform Data, Synchronize as Well

In a previous post I introduced a scenario for cooperation between doctors, and showed that a personal assistant is a good way to connect them in real time.  Here are some additional details that we should consider more carefully.

(Update: this post has had the terms changed to align with: Not an Agent, but a Personal Assistant.)

Not as Easy as it Looks


This diagram makes the personal assistant’s job look like it is simply invoking a subprocess.  Or rather: the primary care physician has a process, which calls a process from Charles, the back specialist.  However, this is only one case out of many possibilities.

The Fan-Out Problem

I picked one scenario, and gave the players names so we can talk about them, but in reality there are many primary care physicians, and many specialists who might be referred to, for many different specializations.

This means that Charles, our back specialist in the middle, needs to be prepared to receive referrals from any number of other doctors.  Similarly as the scenario continues, he needs to be able to refer the patient to any number of other specialists, the case in point being a physical therapist.  Even in this simple scenario in a small community there are thousands of possible routes.


If you know ahead of time that a process will call a particular subprocess, then it is easy to arrange for the processes to use the same schema and represent the same information in the same way.  What we need to remember is that this is about two different people, designing schemas at different times, possibly for slightly different purposes, and getting them to work together automatically.

Dying for Standards

Why don’t we get together and come up with some standards that would allow all processes to be hooked together all the time?  For example, when a primary care physician refers a patient to a back care specialist, they should always do it in the same way.  We might then invent a controlled vocabulary that defines all the possible terms unambiguously, and require everyone to use them properly.   This is precisely how the problem is solved in a closed system: in a single development project.  It is not possible to use this approach here, because medical knowledge is always expanding.  New treatments, new techniques, and new drugs are being invented every day.  It all simply moves too fast to maintain a single dictionary with all terms well defined.

A data format standard, HL7, is a laudable attempt to create a common structure.  Anything to do with medicine should certainly use it as a framework for storing patient data.  But HL7 is not done: the basic framework is there, but the details are not specified for all situations.  The group has strategic objectives going out to the year 2020, but we can’t wait until then to design the system.  A realistic approach will have to accept that these standards are being developed alongside the treatment of patients, and the systems will have to muddle along with imperfect information representations.

Semantic Mapping

A model that works is the one used today by the Securities and Exchange Commission (SEC).  Information passed around in instance documents refers to published taxonomies / ontologies.  Different parties can publish taxonomies that extend other taxonomies.  As long as everyone uses the same basic taxonomy, published by the SEC, all the documents can be exchanged.  At the same time, subdivisions of the market can use their extended taxonomies to transfer more highly specialized information.  15,000 publicly traded companies file their financial reports with the SEC using this method today, and it works.
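To make the taxonomy idea concrete, here is a minimal sketch of how two parties that share a base taxonomy, but use their own extension terms, could still exchange records. All field names, the tiny data model, and the two extension mappings are invented for illustration; real taxonomy languages like XBRL are far richer than a flat dictionary.

```python
# Hypothetical sketch: two systems share a base taxonomy and extend it.
# A record tagged with either party's local terms can be reduced to base
# concepts that both sides understand.

BASE = {"patient_name", "date_of_birth", "diagnosis"}   # shared base taxonomy

# Each party maps its local field names onto base concepts.
BETTY_EXT = {"pt_full_name": "patient_name", "dob": "date_of_birth",
             "working_dx": "diagnosis"}
CHARLES_EXT = {"name": "patient_name", "birth_date": "date_of_birth",
               "referral_dx": "diagnosis"}

def to_base(record, extension):
    """Translate a local record into shared base-taxonomy terms."""
    return {extension[k]: v for k, v in record.items() if k in extension}

def from_base(base_record, extension):
    """Translate a base-taxonomy record into a party's local terms."""
    reverse = {base: local for local, base in extension.items()}
    return {reverse[k]: v for k, v in base_record.items() if k in reverse}

# A referral expressed in Betty's schema arrives in Charles' schema:
referral = {"pt_full_name": "Alex", "working_dx": "lumbar strain"}
shared = to_base(referral, BETTY_EXT)
local = from_base(shared, CHARLES_EXT)
# local is {"name": "Alex", "referral_dx": "lumbar strain"}
```

As long as both extensions anchor to the same base concepts, neither party needs to know anything about the other’s local field names, which is exactly the property the SEC model relies on.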

A key job for the personal assistant, then, is to access these taxonomies/ontologies and to translate between them.  The relationship might look a bit like this:


Synchronizing in Both Directions

Much of this discussion has focused on Task Introduction: those things that must be done when a task is first offered and you are expected to either pick it up or not.  The personal assistant also has a role to play while the task is being performed.  If the task takes a while, there may be intermediate results.  There may be tests that support some conclusions, which should be communicated to others to keep everyone coordinated.

A good example: a doctor may be given the goal of treating a particular problem, and that treatment may take many months.  Treating this interaction like a subroutine call (information passed in at the beginning, all the results returned when the treatment is finished) does not support the real exchange of information that is needed.  While treatment is proceeding, the patient may go back to the primary doctor because of a completely different problem.  That problem might or might not be a side effect of the treatment.  The only way to know is for the doctor to be informed about the treatment and its progress.

The general model should not be like a subroutine call.  Instead, the general model should be one where both the calling doctor, and the called doctor, exchange information to keep each other in sync while the treatment is proceeding.  This is another task that can be taken up by the personal assistant, to regularly push updates back to the caller so they can be informed about progress.
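The keep-each-other-in-sync idea can be sketched very simply: each party holds a local copy of the case, and a periodic sync pass pushes any document that is newer on one side to the other side, in both directions. The revision-numbered data model here is an invented illustration, not any particular product’s API.

```python
# Illustrative sketch: each doctor holds a local copy of the case as
# {doc_id: (revision, content)}.  A sync pass pushes any document whose
# revision is newer than the other side's copy, in both directions, so
# neither party has to wait for the "subroutine" to return.

def sync(case_a, case_b):
    """Bring two case copies to the same, newest state."""
    for src, dst in ((case_a, case_b), (case_b, case_a)):
        for doc_id, (rev, content) in list(src.items()):
            if doc_id not in dst or dst[doc_id][0] < rev:
                dst[doc_id] = (rev, content)

betty = {"mri": (1, "scan data"), "notes": (2, "updated notes")}
charles = {"mri": (1, "scan data"), "therapy_plan": (1, "weekly PT")}

sync(betty, charles)
# Both sides now hold the MRI, the latest notes, and the therapy plan.
```

A real implementation would also have to deal with conflicting edits and access rights, but even this simple push-both-ways loop captures how the callers stay informed while treatment proceeds.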

Assistant is Personal

The job of a personal assistant is to really act on your behalf.  It does all of these:

  • Receiving and screening notifications – filter out the spam, keeping the relevant notifications.
  • Task Introduction – find offered tasks, gather additional information about each, and evaluate, using a set of rules, whether the task is interesting.
  • Task Acceptance – send a notice back to the sender that the offer is interesting and is going to be considered by a human.
  • Clone Project – based again on rules, automatically retrieve all the accessible information in the project, and put it safely in a local place for access.
  • Determine the Right Template – again based on rules, and start the process if necessary.
  • Transform – access the taxonomies that give the semantic meaning of the data, use them to transform the data into a form that you are used to, and transform it back again when responding.
  • Synchronize – in both directions: pull down new documents and information that appear at the original site, and push modified information, or new documents, back to the originating doctor’s site, in anticipation of need.
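The first few duties above can be sketched as a small rule-driven pipeline. Everything here is hypothetical (the rule predicates, the task fields, the template names); it only illustrates the shape of an assistant that screens, acknowledges, clones, and picks a template before a human gets involved.

```python
# A minimal, invented sketch of the assistant pipeline: rules decide
# whether an offered task is interesting; accepted tasks are
# acknowledged, cloned locally, and matched to a process template.

def screen(task, rules):
    """Apply simple predicate rules to filter interesting tasks."""
    return all(rule(task) for rule in rules)

def acknowledge(task):
    # Task Acceptance: tell the sender a human will consider this.
    print(f"ack: will consider task {task['id']}")

def handle_offer(task, rules, templates):
    if not screen(task, rules):
        return None                      # irrelevant: drop silently
    acknowledge(task)
    local = dict(task)                   # Clone Project: local working copy
    template = templates.get(task["kind"], "default-intake")
    return {"case": local, "template": template}

rules = [lambda t: t["kind"] == "referral",      # only referrals
         lambda t: t["specialty"] == "back"]     # only our specialty
templates = {"referral": "back-referral-intake"}

offer = {"id": 42, "kind": "referral", "specialty": "back", "from": "Betty"}
result = handle_offer(offer, rules, templates)
# result["template"] == "back-referral-intake"
```

Each step the human would otherwise do manually becomes a rule evaluation, which is what lets the assistant act before Charles ever looks at the task.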

Spelled out this way, personal assistants seem quite a bit less magical than the marketing rhetoric builds them up to be.  At the same time, this outlines a clear and important mode of use for personal assistants among cooperating knowledge workers.


by kswenson at March 24, 2014 03:16 PM

March 21, 2014

Keith Swenson: Personal Assistants can connect a Doctor to a Specialist

In the previous post, I introduced a scenario for discussing personal assistants.  In this post, I explore how a personal assistant is useful as a tool for connecting the primary care physician to the back care specialist.

(Update: this post has had the terms changed to align with: Not an Agent, but a Personal Assistant.)

The Primary Care Process

What does a primary care physician, Betty, do?  Many things, and for this scenario we focus on how she diagnoses a problem that the patient has.  There are many problems that she can detect and address immediately.  There are also regular tests (e.g. measuring weight and height) which are done every time in order to see the trend over time, and particular tests that should be done at specific intervals to check for particular problems.  But this scenario is about diagnosing a problem to the degree that one can distinguish between the many possible follow-on steps.  You might visualize the process as something like this:


There is a step at the beginning, “Confer”, where Betty talks with the patient to gather information on how the patient feels and what problems are presenting.   Then there is a step that involves running some further tests, which might be done by the doctor or by other people who specialize in tests, like an MRI, CT scan, or blood test.  After that there are a large number of treatments, maybe 50 or 100 of them.  Betty will weigh the evidence and determine that a particular treatment should be tried. She will refer the patient to the specialist by assigning the treatment task to the referred doctor.  The system will take over from there.

Referral and Introduction

Charles is the back care specialist to whom the patient, Alex, is being referred.  Before Charles can do anything, he must be introduced to Alex, and he must be introduced to the task.  Let’s call what is happening here “Task Introduction“.

Many simplistic treatments of business process consider task introduction to be outside the scope of the business process.  Tasks are assigned with the most minimal of explanations: “Approve Expense Report” assumes that the performer of this task knows what needs to be done, and the additional details of what must and must not be approved are learned through a completely different channel.

Task Introduction then includes everything from when Charles first learns that there is something to be done, until he understands what he is being asked to do.  Basic notification can be accomplished with an email message.  Charles might then follow a link to a web UI that contains additional detail about the patient, the doctor, and the test results that have already been produced.

Remote Participation

As pictured above, Charles might interact directly with the case system of the primary care doctor. This would certainly be the case if the doctors worked for the same company, but in this scenario they don’t. Instead, Charles clones the case instance.


Case Instance Cloning

Case cloning is when a local case is made that matches the remote case, and contains a copy of the accessible contents of the remote case.  Why would Charles want to do this?  Because Charles does not have rights on the remote case to make changes there.  If he has his own goals he wants to pursue, such as “research” and “recommend” as shown, then he needs a place to manage those goals.

The clone contains a copy of the data, like the CT scans or MRI scans, but not because Charles needs access: Charles can access the originals.  Rather, if Charles has others in the office, he needs a copy so that he can allow his assistants to access and process them.  Goals can be set in this cloned case, and more documents and information can be stored and managed there.  It is even possible that updates made locally could later be synchronized back to the original.
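A rough sketch of the cloning step: the clone copies only the contents of the remote case that the cloner is allowed to see, and adds a place for local goals, since Charles has no rights to change the original. The data model, field names, and access set here are all invented for illustration.

```python
# Hypothetical sketch of case instance cloning.

def clone_case(remote_case, my_access):
    """Make a local case copying only the documents the cloner may see."""
    return {
        "origin": remote_case["id"],     # remember where this came from
        "documents": {name: doc
                      for name, doc in remote_case["documents"].items()
                      if name in my_access},
        "goals": [],                     # local goals, e.g. research, recommend
    }

remote = {"id": "case-alex-001",
          "documents": {"ct_scan": "...", "mri": "...", "billing": "..."}}

clone = clone_case(remote, my_access={"ct_scan", "mri"})
clone["goals"] += ["research", "recommend"]
# clone holds ct_scan and mri, but not billing, plus Charles' own goals.
```

The filtering on `my_access` matters: cloning must not become a way to exfiltrate documents that the remote system would not have shown Charles directly.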

Where does the Personal Assistant come in?

The personal assistant can automatically accomplish some of this.  Remember that Betty assigns a task to Charles as a way of referring this work to him.  Instead of picking this up manually, it might be picked up by a personal assistant, which then uses rules to do a lot for Charles before he has to get involved.  Here is what the personal assistant software can accomplish:

  • Receiving and screening notifications – if Charles gets a lot of requests to do tasks, those emails can go to the personal assistant, which can filter for interesting tasks.
  • Task Introduction – pick up additional information about the task and evaluate, using a set of rules, whether this task is interesting.
  • Task Acceptance – send a notice back to the sender that the offer is interesting and is going to be considered by a human.
  • Clone Project – based again on rules, automatically retrieve all the accessible information in the project, and put it safely in a local place for access.
  • Determine the Right Template – again based on rules, and start the process if necessary.
  • (& Transform – I will talk about this in the next post)
  • (& Synchronize – when necessary)

Is that all?  Why give this the trumped-up name of personal assistant?  The reason is that this piece of software is “acting on Charles’ behalf”, which is what personal assistants do.  The task was assigned to Charles, but the personal assistant picked it up.  The personal assistant might actually do some negotiation, perhaps clarifying the terms.  The personal assistant is bridging from Charles’ environment, working for Charles, and reaching over to act on Betty’s environment.  It might react by directly taking action on the remote system based on rules.  Or it might, as pictured, do some groundwork for Charles by setting up an environment where Charles can complete the work in his own way.  This is precisely where you need personal assistants to take action.


Isn’t this just a Subprocess?

It would just be a subprocess if it was designed from the beginning to fit together by a single designer.  In the next post I will talk about the problems that arise because the system that Betty uses is designed by one company, and the system that Charles uses is designed by someone else.  The personal assistant has a critical role in getting these to work together.

by kswenson at March 21, 2014 11:28 AM

March 20, 2014

Bruce Silver: BPMN: Seeking Indirection

A frequent complaint about BPMN is that it cannot adequately describe many common business process scenarios, particularly when all possible flow paths are not known in advance.  Actually, it can handle a good number of those, but many fall into a “gray area” – patterns that may or may not be technically allowed, depending on your interpretation of the spec.  One of those scenarios concerns variant forms of an activity.  If activity A has only two or three variants, the modeling is straightforward:  You just have a gateway that branches to variant X, variant Y, and variant Z, each shown explicitly as a separate activity in the diagram.  But what if there are dozens of variants?  For example, consider a nationwide insurance carrier that must conform to differences in each state.  You could have a gateway with 50 branches, but I don’t think many people would consider that a satisfactory solution. I sure wouldn’t.

Instead, what you’d like is to say that the next step is some variant of activity A determined by a variable (data object).  I propose that a Call Activity can do this, as long as all of the variants have the same set of inputs and outputs.  Some of you may say, of course, who would think otherwise?  But technically it is a gray area.  It is not explicitly discussed in the spec, but I believe it is allowed by the BPMN 2.0 metamodel and narrative text, although there is a small issue with the schema.

The spec says the Call Activity invokes a particular callable element, either a process or global task.  A definition of the callable element is external to the calling process definition.  The only requirement of the metamodel and spec text is that the Call Activity’s ioSpecification element must match the data inputs and outputs of the callable element.  So as long as all the variants of activity A are defined with the same set of inputs and outputs (even if some are not used), the metamodel would seem to allow a Call Activity to invoke one of the variants determined at runtime in the calling process instance.

The schema is a slight problem, since the Call Activity’s calledElementRef attribute technically is an id prefixed with the namespace of the callable element.  In other words, the schema implies it is fixed at design time, not settable at runtime.  You could resolve this by saying that the calledElementRef is an expression that resolves to a prefixed id.  There are other examples where an attribute dynamically defined as an expression of runtime data would add flexibility over the static design-time value technically required by the schema, such as a duration or dateTime value of a Timer event.  In non-executable models it is quite common to use a Timer event label that implies a dynamic value, and I believe that many BPM Suites can handle such dynamic values in execution.
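To make the contrast concrete, here is a sketch of the two forms. The first is the static reference the BPMN 2.0 schema requires today; the second shows the kind of expression-valued reference proposed here, which is *not* valid BPMN 2.0 serialization — the element names follow the spec, but the ids, the namespace prefix, and the expression syntax are invented for illustration.

```xml
<!-- What the BPMN 2.0 schema allows today: a QName fixed at design time -->
<callActivity id="doStateVariant" name="State-Specific Review"
              calledElementRef="tns:reviewVariant_CA"/>

<!-- The proposed dynamic form (NOT valid BPMN 2.0; illustration only):
     the reference is an expression resolving to a prefixed id at runtime -->
<callActivity id="doStateVariant" name="State-Specific Review"
              calledElementRef="#{'tns:reviewVariant_' + application.state}"/>
```

With fifty state variants sharing one ioSpecification, the second form replaces a fifty-branch gateway with a single Call Activity.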

I think this is a case where the schema is imposing a constraint that is not explicitly stated in the spec text or metamodel.  For the Timer event example, it’s not an issue for models conforming to the Descriptive or Analytic subclass, since only the label (@name) is serialized in the XML.  But it is a problem for the dynamically called activity example.  It would be better if BPMN 2.1 changed the data type of the calledElementRef from QName to a new named type that is essentially an expression resolving to a prefixed id.  [Note: using QName in the specific way that BPMN 2.0 does is not consistent with the definition of the QName datatype anyway, so this would be killing two birds with one stone.]

There are numerous other ways in which dynamic and ad-hoc behavior can be modeled in BPMN 2.0, without resorting to CMMN or other initiatives that may or may not ever get traction.  If you are interested in this topic, check out my BPMN Master Class in June, where this will be a focus of discussion.

The post BPMN: Seeking Indirection appeared first on Business Process Watch.

by bruce at March 20, 2014 08:03 PM

 BPMN Interchange Demo at OMG Technical Meeting and bpmNEXT

camunda goes San Fran as two of us will be at bpmNEXT 2014 in California next week. bpmNEXT will showcase what’s next in the area of business process management from 25-27 March 2014. Jakob already attended the event last year and was invited to give another talk at this year’s conference. He already mentioned this in [...]

by nastasja.johnston at March 20, 2014 02:42 PM

March 19, 2014

Sandy Kemsley: AWD Advance14: The New Face Of Work

I’m spending the last session of the last day at DST’s AWD Advance conference with Arti Deshpande and Karla Floyd as they talk about how their more flexible user experience came to be. They looked at...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 19, 2014 09:01 PM

Sandy Kemsley: AWD Advance14: Product Strategy

I presented earlier today so I haven’t been doing any blogging, but I didn’t want to miss the repeat of the product strategy session with Roy Brackett, Mike Lovell and John Vaughn. They’re hitting...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 19, 2014 07:51 PM

Bruce Silver: Early Bird Discount for April BPMN Class

I’ve set up a special Early Bird discount for my BPMessentials BPMN Method and Style Live-Online class April 22-24.  The price is $730 – same as the web/on-demand class – and represents a 36% discount from the regular $1145 price.  But you need to act fast: the special price expires on March 28.  Click here to register.

As always, the class includes a 60-day license to the BPMN tool and post-class certification.  For this class, I am also offering the bpmnPRO eLearning game for free as well, a $49.95 value.  So it’s a great deal.

The post Early Bird Discount for April BPMN Class appeared first on Business Process Watch.

by bruce at March 19, 2014 07:30 PM

Keith Swenson: A Scenario for Discussing Personal Assistants

There is an important role for a type of intelligent agent we might call a personal assistant.  What are personal assistants?  What new capabilities do they bring to the mix?  What effect will they have?  This post explores the boundaries, and introduces a scenario that can be used to discuss the effect of agents.

(Update: this post has had the terms changed to align with: Not an Agent, but a Personal Assistant.)

What is a Personal Assistant?

The term is based on the idea of an agent, for which a dictionary provides definitions such as:

  • a person who acts on behalf of another
  • a person or company that provides a particular service, organizing transactions between two other parties.
  • a person or thing that takes an active role or produces a specified effect.

Sound like a program?  Or maybe a process?  For example: I run the process and it automatically updates the DB for me.  That seems so disappointing.  Clearly, if we view agents simply as another form of programming, then they don’t add anything new to the mix, and they won’t solve anything new either.  A personal assistant can mean a variety of things, but generally we emphasize these aspects:

  • asynchrony – personal assistants are specialized to do their work for you at a different time.  A switch that operates a remote device should not be considered a personal assistant.  A BPM process can do things at different times, so it is not just this.
  • responsive – a personal assistant is programmed to receive events and respond to them.  BPM processes respond to external events as well, so it is not just this either.
  • autonomy – the ability to behave and to act ‘on its own’ in some sense of the phrase.   Responding to, and acting in response to, events can be considered autonomy, and a BPM process can do this.
  • rules – necessary, but again, a BPM process can have rules.
  • negotiation – nothing is ever presented in exactly the form that a personal assistant can consume, and a good personal assistant will engage in a form of protocol to clarify what is needed and what can be consumed, and possibly clarify what can be provided in response.
  • semantic matching – we can’t expect all information to be structured in a single universal way, so there has to be a way to map from an external format to an internal one, and some sort of semantic mapping is probably necessary.

These are all the result of programming, and you could do all of this in a standard BPM process; taken all together, however, they provide specialized capabilities beyond what we normally consider simple process programming.

A Scenario

To demonstrate all these capabilities we can use a medical care scenario.  The story starts with a patient, Alex.  Alex has an unexpected pain in his back.  Alex starts by conferring with his primary care physician, Betty, a general practitioner who can identify the most common problems and advise about next steps.  Before making a preliminary diagnosis, Betty will order some routine tests and measurements.  Based on those, and on what Alex said about the symptoms, she determines that Alex probably has a back problem.  Alex resists the urge to say “that is what I told you” while Betty makes a referral to a back specialist, Charles.


Charles works in a completely different company, so integration with Betty’s system is minimal, if it exists at all.  During Alex’s appointment, Charles is going to want to see the earlier tests, and may do some probing himself.  While Charles would have loved to perform surgery, he determines that this problem can probably be addressed by a good round of physical therapy, and refers Alex to Dennis.

Alex sets up a schedule to meet with Dennis weekly and work through a set of stretches and exercises.  While this is going on, status is reported back to Charles and Betty.  This scenario has a happy ending: after 4 months Alex is feeling completely cured, decides to give up donuts, and to work out more, and fills out a pile of paperwork so that the doctors get their fair remuneration from the insurance company.


Why is this a good scenario for discussing personal assistants?  There are four reasons:

  • Health care is an important and rapidly expanding field.  You will never find one doctor who knows everything, so you will always need to consult experts outside of the immediate organization.
  • Neither Betty, Charles, nor Dennis own the entire process.  They all work for different organizations and we can not assume that there is one IT department setting up a single system.  We have to assume that these requests transfer across systems; that those systems were not designed by the same people; that each system has some characteristics unique to that organization.
  • Still, they have to work together to provide coordinated and consistent care for Alex.   Somehow, the differences between the systems must be bridged.
  • Finally, medical information can be very sensitive.  The information must be carefully guarded, and shared on a strictly need-to-see basis.

The information flows both ways in this scenario: it is to Alex’s benefit that the early tests are available to the others, but it is also important to communicate Alex’s improving status back to the earlier doctors.  The circle will be closed the next time Alex visits Betty for another check-up and Betty wants to know how the treatment concluded.

The next couple of posts (Personal Assistants can connect a Doctor to a Specialist and Assistants Transform Data, Synchronize as Well) explore exactly how personal assistants play into this scenario: what they can and cannot do to coordinate the work of these doctors.

by kswenson at March 19, 2014 10:27 AM

March 18, 2014

Sandy Kemsley: AWD Advance14: Case Management And Unpredictability

I finished off the first day at DST’s AWD Advance conference with Judith Morley’s presentation on case management, which dealt with knowledge work and the unpredictable processes that they deal with...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 18, 2014 09:56 PM

Sandy Kemsley: AWD Advance14: From Workflow To Process Flow

You can tell that a lot of DST’s customers are dragging their feet moving to new technology when there has to be a session on moving from the old-style table-driven workflows to the newer portal and...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 18, 2014 07:41 PM

Sandy Kemsley: AWD Advance 2014: A Morning Of Strategy, Architecture And Customer Experience

I still think that DST is BPM’s best kept secret outside of their own customer base and the mutual fund industry in which they specialize: if I mention DST to most people, even other BPMS vendors,...

[Content summary only, click through for full article and links]

by Sandy Kemsley at March 18, 2014 06:06 PM

 Becomes the camunda BPM network

On March 14 we merged the online community, with more than 10,000 members, into the new camunda BPM network. I personally created the first version of this community in 2004, and together with Robert Emsbach grew it in the German-speaking area. I learned a lot about community building during that time, especially [...]

by Jakob Freund at March 18, 2014 08:27 AM

March 13, 2014

 We need your support to push “Digital Age BPM” ahead!

We are looking for fearless BPM experts who are interested in joining a workshop series to dive into the young field of “Digital Age BPM”!

The target of this workshop series is to explore the potential of Web 2.0, social media, and digital leadership elements in the context of BPM.

By implementing these elements, we expect a good chance of improving the acceptance of BPM within an organization. The workshop will therefore aim to increase the maturity of this very young topic, to develop and evaluate prototypes, and to raise awareness of it.

Potential elements include a function to “like” processes or to “tweet” comments on them, blogs for communication with process participants, or the use of tablets for modelling and training. The possibilities are unlimited!

Together with the digital leadership expert Dr. Willms Buhse, we are going to offer three workshops for prototype development and evaluation. Finally, we are planning to present the results at the Process Management Conference in November 2014.

As a participant, you not only receive the workshop results but also the opportunity to adopt the concepts and developed prototypes in your own organization afterward.

Please visit the workshop site for detailed information and make sure to register as soon as possible. Places are limited to ten organizations and will be granted on a first-come, first-served basis.

Read more…



We look forward to a unique and fantastic event! Don’t miss it! :-)

Best regards,

PS: The workshop will be held in German only. A summary of the results will also be published on our site in English.

by Mirko Kloppenburg at March 13, 2014 09:34 PM

Drools & JBPM: DevNation and Red Hat Summit (April 13-17, San Francisco)

This year, Red Hat is organizing DevNation for the first time (April 13-17, San Francisco), a new open source, polyglot conference for application developers and maintainers.  It combines, for example, the old JUDCon and CamelOne conferences, but offers top-notch keynotes, sessions, labs, hackfests, and panels geared toward those who build (with) open source.  It is _the_ place for a developer to get excellent technical information from the experts directly, and/or to hang out with pizza and beer!
Co-located is Red Hat Summit (April 14-17, San Francisco), meant for anyone looking to exponentially increase their understanding of open source technology and identify powerful solutions for their business needs (although typically at a slightly higher level compared to DevNation). From community enthusiasts and system administrators to enterprise architects and CxOs, there are sessions and tracks for each level of interest and need.
This year, I'll be doing a "deep dive into jBPM6" presentation (available for both DevNation and Red Hat Summit attendees), giving a quick overview of the jBPM 6.0 features, but also sharing a lot of technical information on some of the most important new features, like the new jBPM execution server with new remote APIs.  This version is also supported as part of the JBoss BPM Suite 6.0 release.
But this is just one tiny part of the huge amount of interesting keynotes, presentations, workshops, etc. you'll be able to attend.  Looking forward to speaking to some of you, or maybe even touching some code during the hackfest (bring your laptop and we'll get you started)!
Deep dive into jBPM6
Kris Verlaenen — jBPM project lead, Red Hat

Businesses must clearly define their business processes, and quickly respond to new challenges. To do so, business analysts, developers, and end users need the tools to create, understand, analyze, and execute business processes.

In this session, Kris Verlaenen will demonstrate the capabilities of jBPM 6 and dive deeper into some of its core capabilities. You’ll learn how to:

  • Model business processes interacting with remote services.
  • Combine business processes with data, forms, and business rules.
  • Build and deploy business processes using Git and Maven.
  • Interact remotely with the jBPM execution server (REST/Java).

by Kris Verlaenen ( at March 13, 2014 04:07 PM

March 11, 2014

John Evdemon: Quick braindump on apps, services and components

Also posted to my new "blog-in-progress" here. Someone asked me for a quick email on apps, services and components. Feedback and flames welcome. An app is a logical grouping of components and services to perform a business objective.

  • Logical, because the components may not all be owned by or located within the organization that built the app
  • An app is built to change, by swapping and versioning the services and components that make it up
  • See below for suggested definitions of component...(read more)

by John_Evdemon at March 11, 2014 11:09 PM

Drools & JBPM: Looking for student contributions: GSoC 2014

Students can participate in the Google Summer of Code (GSoC) annual program, where they can work on their favorite free and open-source project during the summer and where Google awards stipends (US$5,500) to all students who successfully complete a requested and approved project.
JBoss is participating again this year, so make sure to submit your proposal in time (by March 21st) to be able to participate in this unique opportunity!
There's a large list of possible topics you can choose from, but you can always submit your own ideas as well.
An up-to-date list of project ideas related to jBPM is maintained on this page, and includes the following ideas you could pick from if you're interested.

jBPM on android

The jBPM core engine itself is so lightweight that it could actually run on Android as well.  Based on an existing prototype, this could be extended so jBPM could be used to develop and execute simple applications on Android.  This could, for example, include creating custom nodes for common Android functions (like opening a web page or getting the current location), configuring persistence to use the persistence mechanism offered by Android, and simple client interfaces for inspecting human task lists, managing process instances, etc.
The blog entry describing a first prototype can be found here.

Integrating jBPM with your own preferred project(s)

jBPM allows you to integrate with external services by creating your own domain-specific nodes, which are added to the process palette and can be used inside your business processes to model specific services.  While some of these services might be very specific to your problem domain, a lot of generic and reusable integrations could be implemented, like integration with email, RSS feeds, Google Calendar, REST services, or known web services to retrieve, for example, stock data or weather information.  These could then be added to a repository or library of domain-specific nodes so that a process author could select which of them to use as part of a process.
We would like to extend the set of integrations that we support out-of-the-box by adding new integrations with existing services and projects.  This is an ideal opportunity to integrate jBPM with the some of the projects you love!

jBPM performance on steroids

Using a business process engine always adds a certain amount of overhead to your application.  However minimal this overhead might be in some cases (depending on the features you have configured), optimization can usually speed up execution significantly.  In this case, we would like to investigate whether processes could be translated to Java code so they can be executed more efficiently.  Based on a simple prototype that already demonstrates this is possible, we would like to extend this approach to more constructs and use cases (for example, translating parts of your process to Java on the fly to speed up execution).
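
To make the idea concrete, here is a toy illustration (not the prototype mentioned above): a trivial two-step sequential process hand-translated into straight-line Java, removing the engine's per-node interpretation overhead.

```java
public class CompiledProcessSketch {
    // Imagine a BPMN process: Start -> "Log order" -> "Normalize id" -> End.
    // Instead of the engine walking the node graph at runtime, the same
    // control flow is emitted as plain sequential Java:
    static String executeOrderProcess(String orderId) {
        String logged = "received " + orderId;  // node 1: Log order
        return logged.toUpperCase();            // node 2: Normalize id
    }

    public static void main(String[] args) {
        System.out.println(executeOrderProcess("order-42"));
        // -> RECEIVED ORDER-42
    }
}
```

The hard part, of course, is doing this translation automatically for the full BPMN node set (gateways, events, subprocesses) while preserving persistence and audit behavior.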

Document management system

jBPM allows you to invoke basically any external service by adding custom nodes to the palette to interact with these services, so they can be used directly inside your processes.  One common service that shows up on a lot of wish lists is a document management system.  This would allow you to create, retrieve, and update documents as part of the business process, while using an existing document management system to keep track of these documents.  This could also include extensions to the current task forms to allow viewing, uploading, and/or updating documents, etc.

Mobile client(s) for jBPM

BPM becomes more and more effective if it integrates well with the everyday tasks and tools of the business users who are responsible for executing and monitoring these processes.  While jBPM provides a lot of services out-of-the-box, integrating these into a mobile device like a phone or a handheld would make it easier for business users and end users to adopt them.  This could include running our web-based process designer on a handheld device, or mobile client applications to start processes, manage task lists, or monitor execution.

From BPEL to BPMN2

We would like to investigate whether it would be possible to translate business processes using the BPEL language into the new BPMN 2.0 specification, as supported by jBPM5.  While a transformation from BPMN2 to BPEL is currently available for a large subset of the BPMN2 specification, the transformation in the other direction has mostly been neglected.  This would however enable you to migrate your existing BPEL processes to the new BPMN2 format and execute them on jBPM5.

Social BPM using jBPM

Social BPM is all about integrating new social features like collaboration, tagging, mashups, linking, and other Web 2.0 features into business process modeling, execution, and management.  This could include collaboration features between different authors working on the same process (using, for example, RSS feeds or social media to notify about changes), or the use of tagging on business processes so this information could be used for searching, auditing, etc.

Process mining for jBPM

Process mining is almost a complete research area in its own right alongside business process management.  We would like to investigate how existing process mining techniques (for detecting and analysing business processes from history logs) and tools could be applied and integrated into the jBPM space.

jBPM and Drools for access control

While jBPM is a generic business process engine and Drools is a generic business rules engine, they can easily be applied in different application domains.  One of these domains is security and access control, where both technologies can be used for managing and enforcing access control: business rules could describe authorization rules, business processes could describe the different approval processes necessary to grant privileges, the jBPM and Drools engines could be extended with additional authentication and authorization features, etc.

jBPM and Drools for clinical decision support

The advanced capabilities of jBPM for modeling adaptive and flexible processes make jBPM an excellent candidate for describing and executing clinical processes, like for example to describe the treatment of patients.  Business rules can be used to augment these care plans with additional logic to handle exceptional situations, handle data-driven decisions, etc.  The goal of this project is to define a reference architecture that could be used to describe and execute a few specific use cases in this area and implement representative examples as part of a prototype.

by Kris Verlaenen ( at March 11, 2014 07:58 PM

Keith Swenson: Encryption Role in Data Security

Edward Snowden spoke yesterday at the SXSW conference on the importance of using encryption to keep the data that runs our businesses (and personal lives) safe.   I renew the call to eliminate the scary warning that browsers give when using a self-signed encryption key.  It does not make anyone safer, and it stands in the way of regular usage of HTTPS.

The solution is to use cryptography to keep data secure.  Snowden talked about full disk encryption as being critical.  He cited Google Mail switching everyone to HTTPS as a prime example of a simple action that makes everyone more secure.  Simply encrypting HTTP traffic will prevent many potential problems, but there is a design flaw in modern browsers that makes this difficult: the scary self-signed warning.


In October 2011 I made a post called “The Anti-SSL Conspiracy” where I outlined this particular problem, which is still common today.  For review, there are essentially three levels of secure HTTP:

  • Completely unencrypted – all text is readable on every computer the data is routed through, and you cannot guarantee the identity of the server;
  • Self Signed – data is encrypted and guaranteed private, but the certificate is not signed by an authority so you can’t guarantee the identity of the server;
  • Signed by a Signature Authority – the certificate was purchased from one of the well known “trusted” companies who make some assurance about the identity of the server.

These provide increasing value in the order they are listed.  What is surprising is that Mozilla, Chrome, and Internet Explorer (IE) all present a scary warning for the middle option.  Before the page is displayed, the browser shows a large red warning that the signature of the site is not valid, and gives you the options “Go ahead (not recommended)” and “Get me out of here”.

The irony is that neither of the first two cases guarantees the identity of the server!  For unencrypted traffic, the browser delivers the results without a warning.  Even though self-signed is more secure, the browser displays a warning scaring people away from it.

To get people to use a self-signed server, you have to include special instructions to “ignore” the scary warning and go ahead and do what the browser clearly does not recommend.

It gets worse: the Java libraries throw an exception when attempting to access such a site.  To allow access with Java, you have to hack around the library.  I document that in a different post: “Working Around Java’s SSL Limitations.”
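
The usual shape of that workaround (not necessarily the exact code from the referenced post) is to install an `X509TrustManager` that accepts every certificate chain.  Note that this disables certificate validation entirely, so it is only defensible against servers you control, such as your own self-signed test server:

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class TrustSelfSigned {
    static SSLContext trustAllContext() throws Exception {
        // A TrustManager that accepts any chain, including self-signed ones.
        TrustManager[] trustAll = { new X509TrustManager() {
            public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            public void checkClientTrusted(X509Certificate[] chain, String authType) {}
            public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        }};
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        // Install it process-wide for HttpsURLConnection.
        HttpsURLConnection.setDefaultSSLSocketFactory(trustAllContext().getSocketFactory());
        System.out.println("trust-all socket factory installed");
    }
}
```

A more careful variant would load the one known self-signed certificate into a trust store instead of accepting everything, but even that requires code the standard library does not make convenient.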

Why Go Self-Signed?

Setting up a server with a self-signed key is quite easy.  You need a public and a private key, and it is easy to generate this pair on demand.  The keys do not cost anything.  In a few minutes you can have a secure server up and running, and access to that server is guaranteed to be private.
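
“Easy to generate on demand” is literal: the key pair itself is a few lines of standard Java, and wrapping it in a self-signed X.509 certificate is a single JDK `keytool` command (shown in the comment; the alias and hostname there are placeholders):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class KeyPairDemo {
    public static void main(String[] args) throws Exception {
        // Generate a fresh 2048-bit RSA key pair; this costs nothing
        // and takes well under a second on any modern machine.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();
        System.out.println("generated " + pair.getPublic().getAlgorithm() + " key pair");

        // Producing a ready-to-use self-signed certificate in a keystore
        // is one command with the JDK's own tooling:
        //   keytool -genkeypair -alias myserver -keyalg RSA -keysize 2048 \
        //           -validity 365 -dname "CN=myserver.local" -keystore keystore.jks
    }
}
```

Point a Tomcat HTTPS connector at that keystore and you have an encrypted server, with no authority, no payment, and no fixed IP address involved.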

To get a proper certificate, you need a couple of things:

  • A certificate is tied to a proper domain name, so you have to order and set up a domain name, which takes time, and only works on a fixed IP address.
  • On a mobile computer (laptop, tablet, phone) where the IP address is constantly reassigned, you simply can not have a DNS name that resolves to that address.  You are out of luck.
  • You have to order and pay for a certificate from a signing authority.
  • The signing authority only wants to give a certificate to a proper legal entity, so you have to have a company with a public address and such.  The signing authority is supposed to check that the site is an official site of a particular company, and guarantee that.

The proper certificate is important if you are setting up a permanent web site that represents a company.  But if you are just setting up a utility server to support a group of individuals who simply want privacy, or a peer-to-peer network, the certificate is unnecessary cost and overhead, and impossible on a mobile platform.  Self-signed is quick, cheap, convenient, and it safeguards the privacy of the connection.

Chris Soghoian was quoted in the talk saying: “We need to make services secure out of the box.”  You can only get a proper certificate from an authority after you set up the server and assign a fixed IP address, but self-signing could be automatic in things like Tomcat and Apache, and those work without needing a fixed address.

Self-Signed might Even Be More Secure

For the more paranoid readers: there is evidence that the NSA has access to the signing authorities.  A certificate authority can end up holding your private key as well as your public key (for example, when it generates the key pair for you), and might have to deliver it to the NSA on demand.  The private key allows access to the entire stream for eavesdropping.  Whether or not this bothers you depends on how nefarious you believe the NSA to be.

If your private key is stored at the certificate authority, it might be stolen in a breach.  Those keys are carefully guarded, to be sure, but a theft of the private keys held by a given signing authority would leave ALL of the banks in the country open to exploitation.  Servers could be set up that mimic real servers, and they would even have the icon indicating that the site is legitimate.

When you make a self-signed server, the private key exists only on that server.

For the more cynical readers: perhaps the reason that the browsers put up the scary warning about self-signed servers is because they are too secure for the NSA to readily hack, and well-placed development moles have worked to make this option uncomfortable.

My Take

I don’t believe there is a government conspiracy, but rather a tendency among engineers toward perfectionism:  if you want to be safe, go all the way;  don’t stop halfway to security.  That is the real reason for the warning, but I have demonstrated there are clear reasons for using the self-signed approach, particularly on mobile platforms.

Especially now, we need to take steps to safeguard data against all eavesdroppers.  Many servers are still unencrypted because it is a costly bother (or impossible) to get the domain name and the certificate.  Browsers should be changed to treat self-signed better than unencrypted access.  No, the browser should not display the little lock symbol; that symbol should be reserved for fully signed certificates.  But self-signed should not produce the scary warning; it should instead act mostly like a regular HTTP connection.  There might be little reason to tell the user that the connection is secure, but there is no reason to scare them away.

Call to Action

If you know someone working on the code for Mozilla, Chrome, IE, Apache, or even the Java SSL libraries, ask them why the scary warning screen is necessary.   Self-signed SSL traffic is more secure than open HTTP.  Ask them why they make it hard for servers to use the self-signed option, and why they make it uncomfortable for users.  It makes no sense, and with the dramatic increase in cyber crime we are experiencing, we need to take clear steps to secure all data from eavesdroppers.

by kswenson at March 11, 2014 07:13 PM

March 10, 2014

Bruce Silver: More on BPMN Master Class

I’ve worked out the details on the new BPMN Master Class, and here they are:

The class will take place live-online on two successive Mondays, June 2 and June 9, from 11am to 4pm ET (5pm to 10pm CET).  The first day will present the material, with some in-class exercises, and discuss the homework assignment.  Yes, homework!  It must be emailed to me prior to the class on June 9, at which time selected solutions will be presented by students and discussed by the class.  We will also discuss the post-class certification requirement, a mail-in exercise that must be completed within 60 days of June 2.

Only students who have received BPMessentials Method and Style certification in 2013-2014 are eligible to take the class without further preparation.  Those who received Method and Style certification prior to 2013 must complete the bpmnPRO eLearning game through Level 10 in advance of the Master class.  Those who have not received Method and Style certification may take the Master Class if they complete the Method and Style training and certification prior to June 2.

The price of the BPMN Master Class is $795, which includes a 60-day license to Process Modeler for Visio from itp commerce, as well as the post-class certification.  If purchased together with the Master Class, bpmnPRO is offered at the special price of $49.95.  Alternatively, if purchased together with the Master Class, the Method and Style live-online training and certification April 22-24 is offered at the special price of $595, a savings of $550!  Click here to register.

The post More on BPMN Master Class appeared first on Business Process Watch.

by bruce at March 10, 2014 11:14 PM

Tom Baeyens: Personal Workflow

How much government is ideal?  How much should be organized by the community?  Each country answers that differently.  In some countries a lot is organized by the community.  In other countries, more freedom is left to the citizens and fewer aspects are managed centrally.  I’d say that Business Process Management (BPM) doesn't have any such balance yet.  At the moment, BPM is limited to top-down initiatives.  This would be similar to only having government initiatives and no freedom or initiatives from citizens.  

Corporate executives start by analyzing how work gets done in an organization.  This analysis is often challenging, as people doing the work optimize their own piece of the puzzle.  Getting a complete understanding of how people actually collaborate is not easy.  It’s even hard for the employees being interviewed to explain all the knowledge that goes into tackling a given task.  Therefore, the procedures that result from such BPM initiatives are often incomplete.  That uncertainty creates risk for the people driving a BPM initiative.  They have the power to change things, but they don't have all the detailed knowledge that goes into the tasks.  And this approach doesn't scale very well, as there is usually just a single top-down BPM improvement initiative at a time.

Still, these centrally led initiatives can lead to the biggest gains in efficiency, as top-down initiatives can create the necessary momentum and executive buy-in to change things.  And the efficiency improvements are multiplied by the number of times these procedures have to be performed.  Imagine you can bring the average time spent handling a damage claim in an insurance company down from 3 hours to 2h30.  For an insurance company dealing with thousands of damage claims per day, those savings add up. 

In countries with less government, self-interest is an important driver and motivation to take initiative.  That’s an angle totally missing at the moment in BPM, and a very interesting one once you start thinking about it.  

What if employees could start automating their own repetitive and tedious work patterns without having to think globally?  As an example, think of one of Jack's tasks: for every invoice email he gets from Supplier XYZ, he extracts the attachment, uploads it to Google Drive, then passes a link to the document on to Jane in procurement.  What if Jack could build a workflow for this by himself?  He could start improving his own work without any change having to be discussed between colleagues.  Since people keep working as they worked before, it's really easy and fast to start automating these process snippets.  That greatly reduces the risk and makes it a much faster approach.  All the fine details of how work is done, what's important and what isn't, don't have to be talked through.  Instead, employees can just build workflow snippets directly themselves.  
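
Jack's snippet could be sketched as a tiny trigger-plus-steps pipeline.  Everything here (the supplier domain, the helper names, the link format) is a hypothetical stand-in; a real version would call the email, Google Drive, and messaging APIs:

```java
public class InvoiceSnippet {
    // Hypothetical stand-in for an incoming email.
    static class Email {
        final String from;
        final byte[] attachment;
        Email(String from, byte[] attachment) { this.from = from; this.attachment = attachment; }
    }

    static String uploadToDrive(byte[] doc) {          // would call the Drive API
        return "drive://invoices/doc-" + doc.length;
    }

    static void notifyJane(String link) {              // would send a chat message or email
        System.out.println("To Jane: please process " + link);
    }

    // Jack's whole personal workflow: one trigger, two steps, no global analysis.
    static void onNewEmail(Email mail) {
        if (!mail.from.endsWith("")) return;  // only Supplier XYZ invoices
        notifyJane(uploadToDrive(mail.attachment));
    }

    public static void main(String[] args) {
        onNewEmail(new Email("", new byte[] {1, 2, 3}));
    }
}
```

The point of the sketch is its scope: it touches only Jack's own inbox and his own hand-off to Jane, so nothing about the wider process has to be negotiated before it can run.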

Personal workflow adds an interesting approach next to top-down BPM initiatives.  Picking the low-hanging fruit like that is easy and scalable.  Imagine all employees creating their own workflows.  This doesn't require meetings and decisions that take months; instead, it takes 5 minutes to get going.  And all employees can start doing it simultaneously.  Just like societies need a good mix of central government and self-interested initiative, I think personal workflow should complement top-down BPM initiatives by harvesting that low-hanging fruit.

by Tom Baeyens ( at March 10, 2014 08:26 AM

March 04, 2014

Bruce Silver: Announcing New BPMN Master Class

Over the years I have gotten requests for a class that goes beyond the basics of my BPMessentials BPMN Method and Style training.  OK, we’re going to do it!  It will be in June, probably replacing my regular live-online class.  It will only be available to those who have received the BPMN Method and Style certification, or who have completed Level 10 of my bpmnPRO gamified eLearning app,  within the last year.  (If you’re interested and have not taken the Method and Style training, I will be offering a discounted combination of the April Method and Style class and the June BPMN Master class.  Contact me for details.)

The Master class will have homework assignments in between the class days, and will have its own post-class certification based on one or more mail-in assignments.

Here’s some of what I plan to cover…

1.  End-to-end processes composed of multiple BPMN processes.  Because the instance of each activity in a BPMN process must have 1:1 correspondence with the process instance, end-to-end processes in the real world often must be modeled as multiple pools (top-level processes) interacting via messages and shared data.  For example, an activity performed once a week to adjust prices cannot be part of a process where the instance is a single order.  We discuss this a bit in the Method and Style class in the context of a hiring process, but the Master class will go into much more depth and cover more variations.

2.  Event-triggered in-flight process change.  Based on some monitored data condition, either on the work in aggregate – queues too long – or on the particular instance – late, or special priority – an in-flight instance switches to an expedited mode.

3. Unstructured processes driven by events and user action.  Pssst.  Don’t tell the case management fanatics, but much (I would say “most”) of what can be modeled in OMG’s new Case Management Model and Notation (CMMN) can be done already in BPMN!  We’ll discuss the use of event subprocesses, in particular those with Escalation or Conditional triggers – or Multiple, to fully model Event-Condition-Action behavior – to describe unstructured processes.

4. Goal-directed processes.  BPMN is normally used to describe classic orchestration, in which completion of one activity effectively starts the next one.  An alternative might be to establish goals and prerequisites for the process as a whole and each of its component activities, and let those guide the flow, in the same way that Google Maps tells you the best route to your destination based on your current location and traffic conditions.  Systems that actually do this typically employ intelligent agent technology, but BPMN doesn’t have to figure out the best path.  It just needs to be able to describe a process when the “best next step” is dynamically determined.

That’s a pretty interesting list of topics, I think.  Of course, we’ll start with an in-depth review of the Big 3 event types – Message, Timer, and Error – and their use in event subprocesses.  We’ll do some in-class exercises on those, but the four topics above will require more deliberation than we have time for in class, so the exercises on those will emphasize homework, which we will discuss at length in class.

I am still developing the content for this class, so if you are interested in some other BPMN topic I haven’t mentioned, please comment on this post or email me directly.  Look for more details in the coming days.

UPDATE: More info on the class here.

The post Announcing New BPMN Master Class appeared first on Business Process Watch.

by bruce at March 04, 2014 07:23 PM

Drools & JBPM: Webinar (March 12): JBoss BPM Suite 6.0 (based on jBPM6)

Automate workflows now with a leading open source BPM platform


Looking to build powerful workflow automation solutions? Red Hat JBoss BPM Suite 6.0, now generally available, brings Business Activity Monitoring and Business Process Management capabilities from the jBPM community project together into a single, integrated product.

Join us in this webinar to learn:
  • How to get started quickly with the fully integrated User Interface, Process Simulation and Business Activity Monitoring (BAM) tools.
  • The best use cases for running process execution as a standalone server vs. embedded mode.
  • How to seamlessly manage decision logic with business rules optimization.
  • What's coming next...

Prakash Aradhya, Product Management Director, JBoss BPM and BRMS Platforms, Red Hat
Prakash Aradhya is responsible for driving the product strategy and roadmap for JBoss Enterprise BRMS and BPM products. He has over 15 years of experience in product development and product management in the middleware software industry.

Dr Kris Verlaenen, Principal Software Engineer, Lead BPM Architect, Red Hat
Kris Verlaenen leads the jBPM Project effort and is also one of the core developers of the Drools project, to which he started contributing in 2006. After finishing his PhD in Computer Science in 2008, he joined JBoss full-time and became the Drools Flow lead. He has a keen interest in the healthcare domain, one of the areas that have already shown to have a great need for a unified process, rule and event processing framework. 

Join the live event:
  • Wednesday, March 12, 2014 | 15:00 UTC | 11 a.m. (New York) / 4 p.m. (Paris) / 8:30 p.m. (Mumbai)

by Kris Verlaenen ( at March 04, 2014 03:23 PM

Thomas Allweyer: Integrating IT Systems

If you want to achieve seamless IT support for business processes, you will often have no choice but to connect the various systems that already exist in the company. The options range from individually programmed point-to-point connections to service orchestration using process engines. This English-language book offers a well-grounded, accessible introduction to all the important integration techniques and concepts. The concrete implementation of the approaches discussed is illustrated with Microsoft's BizTalk Server, a well-known integration platform widely used in practice. The fundamental explanations, however, are independent of any particular implementation, and implementation options in the Java world are covered throughout. The book is therefore suitable for anyone interested in the topic, regardless of the specific technology to be used.

The reader is first introduced to basic messaging concepts and their realization in the Java Message Service (JMS) and Microsoft Message Queuing (MSMQ). The function of a message broker is then discussed. Enterprise systems can be connected at the data level or the application level. Data access is always possible as a last resort by scraping the data shown on the user interface. Usually, however, there are better options, such as exchanging suitably structured files or accessing databases through interfaces like ODBC or JDBC. Transforming extracted data into XML and integrating a database adapter into an orchestration are also described.

At the application level, simple remote procedure calls (RPC) can be used. More powerful are the Common Object Request Broker Architecture (CORBA) and, dominant today, web services. The latter are based on vendor- and implementation-independent standards. Web services integrate well into orchestrations; conversely, the interface of an orchestration can itself be published as a web service.

Orchestrations are covered in several chapters. They are a special case of executable business processes, in which the interplay of various automated services is controlled without human involvement. For example, several systems can be invoked in sequence, with the resulting data passed along between the calls.

The author first introduces services and service-oriented architecture (SOA) before explaining the various aspects of an orchestration, such as control flow, loops, and subprocesses, but also exception handling, transactions, and similar advanced concepts. He then shows how the described orchestration elements are implemented in BPEL (Business Process Execution Language) and modeled graphically with BPMN.

Beyond the process-oriented integration of systems within a single company, cross-company business-to-business integration is becoming increasingly important. This requires security concepts such as encryption and digital signatures, and the partners involved must agree on common message formats. For modeling the interplay of different partners and the messages they exchange, BPMN provides several diagram types.

The book stands out for presenting the concepts and their properties in a very concrete and vivid way. Code examples show how a practical implementation might look. Nevertheless, the author does not lose himself in the details of individual implementation technologies but always keeps the main thread in focus. The book is therefore suitable both for readers purely interested in the concepts and for those looking for a starting point to apply particular integration technologies themselves. Occasionally, repetitions become noticeable, because a concept may be described once in connection with BizTalk Server, once as a fundamental concept, and a third time in the context of a particular standard. On the other hand, this has the advantage that specific content can be looked up directly without too much jumping back and forth.

Diogo Ferreira:
Enterprise Systems Integration: A Process-Oriented Approach.
Springer 2013.
The book on amazon.

by Thomas Allweyer at March 04, 2014 11:38 AM

Keith Swenson: How Relevant is the ‘Boundary Worker’ Idea?

IBM has suggested this new term, the Boundary Worker, as a middle point between a service worker and a knowledge worker.  Is this really something new, or just the natural progression of all workers in today's hyper-connected world?

The Boundary Worker


The boundary is between knowledge workers and (presumably un-knowledgeable) service workers.  The example is the IT-enhanced service person who can look things up for you and get you an answer quickly.  Imagine a person wandering the store floor, carrying a tablet or wearing glasses, ready to answer questions and respond to your queries.

The idea is that these people are “not really knowledge workers” — but they don’t have to be.  They just need to be good at looking stuff up.  The idea is that this new technology can take a routine job and make it more knowledge-like.  It can also take an unskilled person and allow them to act somewhat like a knowledge worker.

Is That Really Knowledge Work?

The misguided stereotype is that knowledge workers are people with a tremendous pile of knowledge, like a university professor or a librarian.  Knowledge work is not about being knowledgeable.  Knowledge workers may not have extensive knowledge at all; instead, they have expertise in the particular thing they do.  So it is a mistake to think that access to knowledge is the essence of what makes a knowledge worker.

Knowledge work has always been about tacit knowledge, also known as skill or expertise.  These are things that are not made explicit.  For example, knowing the best judge to submit a particular kind of legal case to is not the kind of thing that anyone would write down, even if you could be sure of exactly how to formulate the statement.  How to capture the best emotion in a line of text is something that a good writer or editor might be able to do, but you could never look it up on the web.  Knowledge workers internalize their experience, and use it to make decisions that an inexperienced person cannot.

Another way to think about this is the difference between “book learning” and experience.  Someone with book learning may have the knowledge, but has not internalized its meaning.  The best a boundary worker could hope for is rapid book learning.  Connectivity to information might make you informed, but it will not make you any more like a knowledge worker.

Attractive Idea

You can see the attraction:  you don’t need to get someone who is actually knowledgeable about your products.  Just hire anyone who is friendly, give them a tablet to walk around with, and you have an inexpensive replacement.

I completely agree that providing connectivity to workers allows them to do more with less training.  My contention is with the inflated concept that this somehow transforms the job into a new category, and thinking that communications can somehow make you into a kind of knowledge worker.  This confuses the idea of a “knowledgeable worker” with a knowledge worker.

My Take

Service people on the floor of a store, ready to help with whatever might come up, are, and have always been, knowledge workers.  Being constantly connected will make them better at what they do.  The transformation is something that all knowledge workers are undergoing.  In a meeting and need to know the market size for a product?  Someone looks it up.   Need to know where a particular person worked in the past?  Look it up.  Today, I was not sure how to spell a particular associate’s name, so I looked him up on Google.  We are all becoming more connected.

It is only natural that professionals will want to leverage the latest information and communications technology (ICT) to expand their reach.  Knowledge workers everywhere are becoming more attached to the web:  for looking up explicit knowledge, but also for communicating with other experts they maintain relationships with.

The same is true of routine workers.  Even the most routine jobs (for example, factory floor workers) are being enhanced with ICT to monitor and respond to a greater variety of inputs.  Being more knowledgeable does not make you a knowledge worker.

The boundary worker is not a new category, but just a reflection of the trend to actually use connectivity on the job.  It is a natural progression in all fields of work.   It is internalized expertise that qualifies you as a knowledge worker, and wearing Google Glass will not in any way change that.


by kswenson at March 04, 2014 10:21 AM

March 03, 2014

Drools & JBPM: OptaPlanner blog moved + Can MapReduce solve planning problems?

We've moved the OptaPlanner blog into the website:
It's now fully integrated into the website.
To add the new blog to your favorite newsreader, just add the Atom news feed.

If you want to contribute an article, add a blog article to this directory and send it in as a pull request.

To test-drive the new blog, I've posted an in-depth article called:
  Can MapReduce solve planning problems?
Take a look :)

by Geoffrey De Smet ( at March 03, 2014 03:59 PM

February 28, 2014 camunda BPM 7.1 Live Webinar

On March 21 we will present to you the brand new version 7.1 of camunda BPM – the open source platform for process automation with Java and BPMN 2.0. camunda BPM is spreading rapidly and is already being used by well-known organizations such as Lufthansa Technik, Sony DADC and Zalando. See for yourself what our heroic [...]

by Jakob Freund at February 28, 2014 10:50 AM

February 26, 2014

Thomas Allweyer: BPM-Quintessenz evaluates 35 studies

New studies on the various aspects of business process management appear regularly. A team at the Hochschule Koblenz, led by Ayelt Komus, has now systematically evaluated a total of 35 of these studies from recent years and compared their individual findings with one another. Of course, the statements summarized in the Quintessenz must be treated with caution, since the individual studies differ considerably in their methodology, and often not all details of the underlying methodology have been published. Nevertheless, an interesting overall picture emerges, with at least plausible statements about trends. Hardly any reader is likely to have the time and leisure to analyze a large number of studies in detail and compare them with other studies. The “BPM-Quintessenz” thus provides a uniquely comprehensive overview of key findings on the current state of process management.

The results of the individual studies were classified into different topic areas and compared with respect to their individual statements. Among the topics considered were the goals pursued with BPM, the successes achieved, key success factors, roles in process management, and change management. For the topics of social media and agile methods in BPM, which were also considered, only one study each could be found, so no synthesis was possible there. Further research is needed in these areas.

When comparing the studies, it is striking that there seem to be few contradictory results. In most cases the results point at least broadly in the same direction, which suggests that the statements in question are probably actually true and not merely due to the random composition of the participants in a single study. The studies differ more in which topics and individual criteria were surveyed.

Without going into the individual results, which can be read in the study itself, the quintessence that emerges is: business process management pays off and delivers results, but in some cases suffers from acceptance problems. Many companies are still far from where they would like to be.

The study can be requested free of charge here.

by Thomas Allweyer at February 26, 2014 08:53 AM

February 25, 2014

Bruce Silver: The BPI Blueprint

My BPMessentials BPMN Method and Style class shows you how to translate process logic from text-based information into BPMN diagrams that are clear, complete, and consistently structured. But how do you get that text-based information in the first place? And what is your purpose for doing so? In most cases, the intent is some form of business process improvement project. And just like the BPMN modeling, that project also needs a methodology. That methodology determines the right members of your project team, the questions they should be asking, the analytical techniques they should employ to pinpoint problems in the As-Is process, and redesign principles that will move them toward their process improvement targets.

If you wish someone would put that all in a book, you’re in luck! Shelley Sweet of i4 Process has just published The BPI Blueprint: A Step-By-Step Guide to Make Your Business Process Improvement Projects Simple, Structured, and Successful.  It’s now available on Amazon at a great discount, and I highly recommend it.  Although the book is aimed at business users, not techies, it avoids the hidebound tool phobia that characterizes most books in the process improvement space and embraces standards like BPMN and modern digital tools.  Yes, there are still colored stickies on butcher paper involved in the information-gathering phase, but the book shows how to capture, maintain, and share that information using IBM Blueworks Live, the leading process improvement tool today that is aimed at business users.

Check it out!

The post The BPI Blueprint appeared first on Business Process Watch.

by bruce at February 25, 2014 06:28 PM

February 24, 2014 Study Results: Change Management becomes more and more important in BPM

The results of the Study 2013 show an impressive increase in the current and future importance of “Change Management in the context of BPM”. This topic jumped from a midfield position in 2011 right into the top cluster of our latest BPM survey, and is therefore nominated as a promising candidate for the upcoming Best Practices in Process Management Workshops 2014.

Alongside this, five other topics that were already discussed in the 2012/2013 best practice workshops again made it into the top cluster (Cluster 1) of the survey.

These five topics are:

  • Process management roles
  • Process improvement methodology
  • Process standardization
  • Integration of legislative and normative requirements
  • Communication of process changes

Identified best practices of these topics can be found in the “Best Practices” section of our website.

In addition, the “Acceptance of process management within the organization” made it into cluster 1 for the second time, but had never been considered for one of our best practice workshops before. To change this, it will be one of the topics of the 2014 workshops.

Overall, the evaluations of both the current and the future importance of all topics increased compared to the results of the 2011 survey. Interestingly, for two of the three topics with the highest maturity, the increase in importance was not as strong. As a consequence, the topics “Process modeling” and “Organization-wide defined process model” were not able to stay in the top cluster and are now members of cluster 2. Obviously, both topics are well developed and established in all organizations, and the participants of the survey have moved their interest to other BPM areas.

Taking this into consideration, it is remarkable that the “Integration of legislative and normative requirements” – the topic with the second-highest maturity – is still in cluster 1. For best practices in this area, please have a look at our already published workshop results here.

The biggest “loser” is the “Editorial process of management system content”. This topic is already very mature, and its future importance rating decreased slightly while its current importance gained only a little. Thus, this topic was not able to keep up with the other topics from 2011’s cluster 2 and is now located in cluster 3, together with “Application of maturity models” and “Activity-based costing”.

As new topics in 2013, we added the “Application of social software for the creation of the content of management systems” as well as the “Application of social software for the further development of the content of management systems” to the survey. Both topics received very low ratings in maturity as well as in current and future importance. But because we expect a high potential to influence the acceptance of BPM within an organization through the implementation of web 2.0, social media, and digital leadership elements, we are going to offer a very special workshop series in 2014. This workshop series will aim to increase the maturity of this very young topic, to develop and evaluate prototypes, and to raise awareness of it. Further information concerning this workshop series will be published in a few days.

Finally, we would like to say THANK YOU to all 109 experts who completed the survey. The winner of the voucher for the flight with the historic Ju 52 of the Deutsche Lufthansa Berlin-Stiftung is Daniel Bickel from EnBW Systeme Infrastruktur Support GmbH. Congratulations Daniel, enjoy the flight! :-)

If you are interested in a flight with the historic Ju 52, too, please have a look at the website of the DLBS:

Best regards,

PS: All workshop results will be presented at the Process Management Conference 2014. “Blind Date” tickets are still available, but we will publish the agenda soon and prices will be updated, too.


by Mirko Kloppenburg at February 24, 2014 08:56 PM

February 21, 2014

Keith Swenson: AdaptiveCM Events for 2014

2014 will be a bustling year for events around Adaptive Case Management, so get these on your calendar:

  • WfMC Case Management Awards – call for papers in January, abstracts due March 20, final paper a month later.  Go ahead and submit an abstract now to get some early advice and guidance on the final submission.  This is the fourth year for the series, and awards will be announced at the BPM/ACM event in DC in June and published in a book.  Last year’s book is available now: Empowering Knowledge Workers
  • BPM Next, March 25-27, will cover a broad range of future directions for process technology.  Don’t miss this technologist-oriented meeting in Asilomar, one of the most scenic stretches of California coast.
  • BPM and Case Management Global Summit, June 16-18 at The Ritz-Carlton, Pentagon City, Washington DC, will actually be the first major trade show significantly about case management, and also the location of the Case Management Awards announcements.
  • AdaptiveCM 2014 Workshop - Sept 1 or 2, Ulm, Germany, co-located with the EDOC 2014 conference, is the “3rd International Workshop on Adaptive Case Management and other non-workflow approaches to BPM”.   Submission deadline: April 8.
  • BPM 2014, Sept 7-11 in Haifa, Israel, continues to be a steady source of research in new directions for process technology, and this year they invited me to give a keynote speech on Adaptive Case Management.
  • iBPMS, Sept 29-Oct 2 in Chicago, will this year have a special day dedicated to banking- and finance-oriented BPM.
  • BBC2014, November 2-6 in Ft Lauderdale, is a Business Architecture oriented event which includes discussion of alternate forms of process technology, and WfMC will present a workshop.

Mark these down now.  There is plenty of opportunity to meet and discuss Adaptive Case Management with other experts in this coming year.   Please let me know (in a comment) of any and all other significant events that I did not include.

by kswenson at February 21, 2014 05:22 PM