Planet BPM

September 01, 2015

Keith Swenson: AdaptiveCM 2015 Workshop Summary

So much planning, so much anticipation, and now the 4th International Workshop on Adaptive Case Management and other non-workflow approaches to BPM is over after one marvelous day.  We had reserved some time at the end for a round table discussion, with some time in the morning to select topics.  The subject of ‘The Purpose and Value of Modeling for Knowledge Worker Support’ quickly emerged as the dominant concern, and ended up being the main discussion point.  Before we get to that, let me present a summary of the papers (see the program).


Case Management: An Evaluation of Existing Approaches for Knowledge-Intensive Processes

Matthias Hauder gave a presentation on their study of the field of case management. The problem is that the various papers all define case management differently. They surveyed the literature and pulled out the common characteristics as well as common requirements, and they propose a good definition for case management. They then looked at the requirements, identified the ones required for modeling, and checked whether CMMN would fit the bill. I found the logic a bit circular, because they assume that the environment will have some characteristics of a CMMN-based environment: specifically, that design modeling capabilities are a separate feature from regular usage, and that only some of the users would do modeling. Because they assume an environment similar to CMMN, it is not surprising that they find CMMN a suitable fit to the requirements, at least for the modeling portion. What they did not do is validate that the modeling-based approach actually works in any situation.  That might be the next step for Matthias.

Declarative Process Modelling from the Organizational Perspective

Stefan Schönig gave a presentation on his analysis of five different declarative modeling languages: Declare, DCR-Graphs, CMMN, DPIL and EM-BrA2CE.  The first three of these are graphical in nature, while the last two are text-based languages.  He looked particularly at how well the languages represent a couple of specific organizationally relevant patterns involving roles.  The analysis was well done.  The somewhat surprising result was that all three of the graphical notations showed significant deficiencies in representing these kinds of task assignment.  While the text-based languages might do a better job of representing this requirement, they are widely recognized as being harder to use.  DPIL is his preferred choice and he is actively involved in its development.  He spoke about how to put a graphical representation in front that would automatically translate to DPIL.  Two comments from me:  (1) he needs a better definition of ‘role’, since the one used tended to blur the line between ‘role’ and ‘group’, a rather different concept.  (2) If you have a graphical notation that faithfully represents what DPIL can do, then why bother with DPIL?  It could be completely hidden and nobody would need to know about it.

A Case Modelling Language for Process Variant Management in Case-based Reasoning.

Andreas Martin compared the expressiveness of various modeling techniques measured against a knowledge-work-intensive use case: BPMN, CMMN, Declare, and BPFM.  The use case was good: qualifying candidates for admission to their school, which involves many different rules, with international candidates presenting widely varying supporting evidence.   BPFM is like a decision tree, and seems to be a better fit for case management, which is more about identifying goals and less about the process to get there.

Embracing process compliance and flexibility through behavioral consistency checking in ACM, A Repair Service Management Case

Christoph Ruhsam from ISIS Papyrus presented some approaches that might be used to allow building of process diagrams at run time.  One problem with letting knowledge workers change process models, or with composing process models from a set of pieces, is that the resulting model might contain internal consistency problems.  This approach would allow automatic checking of those consistency rules, warning users immediately about problems.

Modeling crisis management process from goals to scenarios.

Elena Kushnareva presented the idea of using state charts to model the process for an emergency response organization.  For example, as a flood rises to different levels, there are associated necessary responses, such as closing bridges and roads or evacuating certain regions.  It is a good use case because emergency response is very unpredictable, and it is important.  State charts are particularly strong when you have nested state, but I remain unconvinced that the scenario requires a lot of nested state.  It seems that they use it in such a way that you have a lot of independent states that just happen to be in a containing box, “emergency occurs.” The following box, “recover from emergency,” would need to go to some great length to retain the internal state of the previous box, but that following box was not elaborated in the presentation.  Once again, my main question is “why do modeling?”  In this case I see a clear need for an elaborate model to run simulations and understand what needs to be prepared (a model done by people who specialize in modeling such things), but the actual emergency response workers show no need to do modeling, or even to change the aforementioned model while an emergency is unfolding.
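To caricature my objection: if the responses really are a flat mapping from water level to actions, then a simple lookup table captures them without any nested state at all. A minimal sketch, with all level names and actions hypothetical (not taken from the presented model):

```python
# Toy flat "state machine": each flood level maps directly to its
# required responses. No nesting is needed for this shape of problem.
FLOOD_RESPONSES = {
    "level_1": ["monitor gauges"],
    "level_2": ["close low bridges"],
    "level_3": ["close low bridges", "close riverside roads"],
    "level_4": ["close low bridges", "close riverside roads",
                "evacuate flood zone"],
}

def required_actions(level: str) -> list[str]:
    """Return the responses associated with a given flood level."""
    return FLOOD_RESPONSES.get(level, [])
```

Whether the real scenario stays this flat, or genuinely needs state-chart nesting, is exactly the question I would want answered.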

Supporting Adaptive Case Management Through Semantic Web Technologies

Wilhelm Koop presented an idea to use semantic representations (OWL, RDF) to help guide knowledge workers in choosing what is and is not an acceptable enhancement to a process model.  This is also important as a way to determine whether the extensions made by two different knowledge workers are the same or not.  Some form of semantic analysis of case history seems obviously critical in order to eliminate the arbitrary differences caused simply by the choice of words the workers use.

Supporting Knowledge Work by Speech-Act Based Templates for Micro Processes.

Johannes Tenschert proposed that instead of basing models on a deconstruction of the human activities involved, we should base them on how people communicate, particularly how they communicate to get things done.  This is the realm of speech acts.  They have a couple of basic patterns: a promise, a commitment, a question, a declaration of completion, etc.  I personally think there is a lot of promise in this approach, because organizations are ultimately social entities, and what matters is what you say was done, not necessarily how you did it.  Their research on this is just starting, and what they need is to elaborate a practical example and show that this approach would result in an improved ability to support the work.
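To make the idea concrete, here is a minimal sketch of tracking work as speech acts rather than as task decompositions. All class and field names are my own invention for illustration, not from their paper:

```python
# Sketch: work is recorded as communicative acts; "open work" is simply
# a promise not yet followed by a declaration of completion.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    kind: str      # e.g. "promise", "question", "declare_done"
    speaker: str
    about: str     # the piece of work the act refers to

def open_promises(acts: list[SpeechAct]) -> set[str]:
    """Promises not yet followed by a declaration of completion."""
    promised = {a.about for a in acts if a.kind == "promise"}
    done = {a.about for a in acts if a.kind == "declare_done"}
    return promised - done
```

Note how nothing here says *how* the work is done, only what was said about it, which is the appeal of the approach.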

Towards Structural Consistency Checking in Adaptive Case Management.

Christoph Czepa gave the second paper on how to automatically detect consistency problems in a language like CMMN.  For example, the exit criteria of one node can directly contradict the entry criteria of the following node, and produce a graph that can never be traversed.  Model checking is used to extend this detection ability to logical contradictions that would not be immediately obvious.  This type of logical check would seem an excellent feature on any process modeling tool.
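The simplest form of the check is easy to picture. In this toy version (my own simplification, not their method), conditions are just sets of signed literals standing in for real CMMN sentries; a transition is dead if one step's exit condition asserts a fact whose negation the next step's entry condition requires:

```python
# Toy consistency check: a transition can never fire if the exit
# condition of one node contradicts the entry condition of the next.
def contradicts(exit_facts: set[str], entry_facts: set[str]) -> bool:
    """True if any literal in exit_facts appears negated in entry_facts."""
    negated = {f[1:] if f.startswith("!") else "!" + f for f in exit_facts}
    return bool(negated & entry_facts)
```

Model checking generalizes this from direct contradictions to ones that only emerge over whole paths through the graph.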

Towards Process Improvement for Case Management. An Outline Based on Viable System Model and an Example of Organizing Scientific Events.

Ilia Bider put forth that if you want a system that modifies itself, or at least an organization that modifies the system it uses, then the Viable System Model (VSM) from cybernetics is worth investigating.  He gave a VSM overview, then presented the use case of running a scientific meeting, much like the workshop we were all attending.  He gave a list of tasks and pointed out that each is likely to be needed in any event, so the goal is never to ‘optimize the process’.   He mapped the various parties involved in a workshop to the five different ‘systems’ defined by the VSM.  He stopped short of demonstrating that this is an effective way to structure an ACM system, leaving that next step to future research.


This all led up to a large discussion on the merits of modeling:  what good is it? what should it be used for? what are the goals? what should it not be used for? how do we measure the effectiveness? and do we need to model at all?   The first day of the BPM conference is starting here in Innsbruck in a few minutes, so you will have to wait until tomorrow for my summary of this discussion on the merits of modeling for knowledge workers.


by kswenson at September 01, 2015 06:44 AM

August 24, 2015

Keith Swenson: Podcast about Robust BPM

Peter Schoof interviewed me last week on the subject of robust BPM. (Thanks Peter!)  This had been the basis of a talk I gave in Montreal at the Workshop on Methodologies for Robustness Injection into Business Processes.  It is a quick 15-minute summary:

Robust BPM: Keith Swenson Explains How to Build Processes That Last

The main point is that the standard mechanism for reliability in software engineering is the database transaction.  Systems can be made to always be consistent through proper use of transactions, but BPM often spans many systems.  Large distributed transactions, while theoretically possible, are not practical.  Therefore, you will always run into consistency problems, which must be dealt with.  The answer to making a reliable BPM solution is not to sweep the problems under the rug, but rather to make sure that any such problem is quickly and reliably reported.  When a problem occurs, stop processing right away, and just record the issue.  Instead of designing processes as an opaque black box for the user, allow dashboard-like visibility into some of the key, separate parts of the process, and give status lights to indicate whether the remote process ran correctly or not.  This means that your process system must be instrumented to report on status, particularly error status.  (It is not OK to fail and then just go dark.  The system has to be able to report about failures.)  Instead of trying to prevent all possible failures, the system needs to be designed with the idea that failures will happen, and to be able to record and communicate about them.

Most important:

  • You want your BPM diagrams to be a clean, pure representation of the business logic, without being muddied by the realities of the hosting environment.  Such a process would run in an idealized perfect environment, but we don’t actually have such an environment.
  • Do not confuse this BPM diagram with the system architecture!  You need a system architect to take the business logic and translate it to those realities.  Parts of the system are reliable and can have a faithful translation.  Other parts might get out of sync, and special mechanisms must be included to notify people of problems (failures) and to give them controls to restart things when necessary.

It is ironic that to make a robust, reliable system, you do so not by hiding problems, but by exposing all the problems as they happen.
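The instrumentation can be as simple as a wrapper around each external step that records its outcome on a status board instead of letting errors disappear. A minimal sketch, with all names illustrative rather than from any product:

```python
# "Report, don't hide": run each step, record success or failure,
# and stop processing on failure rather than going dark.
status_board: dict[str, str] = {}   # step name -> "ok" / "failed: ..."

def run_step(name: str, action) -> bool:
    """Run one process step; record the outcome and signal failure."""
    try:
        action()
        status_board[name] = "ok"
        return True
    except Exception as exc:
        # Record the problem immediately; do not swallow it silently.
        status_board[name] = f"failed: {exc}"
        return False   # the caller stops processing here
```

A dashboard then only has to render the status board to give users the per-step status lights described above.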

by kswenson at August 24, 2015 04:18 PM

August 23, 2015

Jakob Freund: Why DMN is the next big thing and you will be excited

Every 6 months we publish a so-called “minor release” of the Camunda BPM platform. The upcoming release 7.4 is scheduled for 30 November, and it will support the new OMG-standard for decision management, DMN.

DMN is currently our priority 1 topic, for a simple reason: We believe that DMN will become as important for automating decisions as BPMN has become for automating processes.

I will briefly explain why we think that, and then describe how Camunda will embrace DMN, which will make you very excited.

Why DMN is the next big thing – Part 1: The problem

I have been …

by Jakob Freund at August 23, 2015 03:49 PM

July 20, 2015

Bernd Rücker: Decision Model and Notation (DMN) – the new Business Rules Standard. An introduction by example.

DMN is a brand new OMG standard; it stands for Decision Model and Notation and is clearly related to BPMN and CMMN. DMN defines an XML format and is executable on decision/business rules engines. It is currently on the home stretch of standardization, and camunda will release camunda BPM 7.4, including DMN, in November. Over the last months we have discussed a lot of business rules use cases with clients and sketched solutions in DMN. So it is high time to give an introduction to DMN and present some of the learnings we have had so far.

The example: Task Assignment/Routing of new claims

The …

by Bernd Rücker at July 20, 2015 02:36 PM

July 17, 2015

Sandy Kemsley: Knowledge Work Incentives at EACBPM

June was a bit of a crazy month, with three conferences in a row (Orlando-London-DC) including two presentations at IRM’s BPM conference in London: a half-day workshop on the Future of Work, and a...

[Content summary only, click through for full article and links]

by sandy at July 17, 2015 02:20 PM

June 23, 2015

Sandy Kemsley: HP Consulting’s Standards-Driven Requirements Method at BPMCM15

Tim Price from HP’s enterprise transformation consulting group presented in the last slot of day 2 of the BPM and case management summit (and what will be my last session, since I’m not...

[Content summary only, click through for full article and links]

by sandy at June 23, 2015 08:17 PM

Sandy Kemsley: The Enterprise Digital Genome with Quantiply at BPMCM15

“An operating system for a self-aware quantifiable predictive enterprise” definitely gets the prize for the most intriguing presentation subtitle, for an afternoon session that I went to...

[Content summary only, click through for full article and links]

by sandy at June 23, 2015 06:19 PM

Sandy Kemsley: The Digital Enterprise Graph with @denisgagne at BPMCM15

Yesterday, Denis Gagné demonstrated the modeling tools in the Trisotech Digital Enterprise Suite, and today he showed us the Digital Enterprise Graph, the semantic layer that underlies the modeling...

[Content summary only, click through for full article and links]

by sandy at June 23, 2015 03:47 PM

Sandy Kemsley: Wearable Workflow by @wareFLO at BPMCM15

Charles Webster gave a breakout session on wearable workflow, looking at some practical examples of combining wearables — smart glasses, watches and even socks — with enterprise...

[Content summary only, click through for full article and links]

by sandy at June 23, 2015 02:47 PM

Sandy Kemsley: Day 2 Keynote at BPMCM15

Second day at the BPM and Case Management summit in DC, and our morning keynote started with Jim Sinur — former Gartner BPM analyst — discussing opportunities in BPM and case management....

[Content summary only, click through for full article and links]

by sandy at June 23, 2015 01:59 PM

June 22, 2015

Sandy Kemsley: BPMN, CMMN and DMN with @denisgagne at BPMCM15

Last session of day 1 of the BPM and Case Management Summit 2015 in DC, and Denis Gagne of Trisotech is up to talk about the three big standards: the Business Process Model and Notation (BPMN), the...

[Content summary only, click through for full article and links]

by sandy at June 22, 2015 08:07 PM

Sandy Kemsley: Fannie Mae Case Study on Effective Process Modeling at BPMCM15

Amit Mayabhate from Fannie Mae (a US government-sponsored mortgage lender that buys mortgages from the banks and packages them for sale as securities) gave a session at the BPM and Case Management...

[Content summary only, click through for full article and links]

by sandy at June 22, 2015 06:17 PM

Sandy Kemsley: PCM Requirements Linking Capability Taxonomy and Process Hierarchy at BPMCM15

I’m in Washington DC for a couple of days at the BPM and Case Management Summit; I missed this last year because I was at the IRM BPM conference in London, and in fact I was home from IRM less...

[Content summary only, click through for full article and links]

by sandy at June 22, 2015 03:44 PM

June 19, 2015

Keith Swenson: Sociocracy

I was approached a few months ago by a group wondering what kinds of collaborative software might exist to support something called Sociocracy.  That was the impetus of my latest journey into the world of organizing on democratic principles.


This will be a lot more than one post, so I need to start with some general background on Sociocracy.  It was a movement started in the 1970s as a way of running a business based on the principles of sociology.  It is based on ideas from cybernetics (see Norbert Wiener and Stafford Beer).  It was promoted in the 1970s primarily by Gerard Endenburg from Holland.

While democracy is rule by the mass of people, sociocracy is about rule by people who have a social relationship with each other.  The central idea is governing by consensus.  People are organized in circles, and circles meet to make policy decisions.  Large organizations are represented as a hierarchy of circles, with two representatives (double-linking) always bridging from one circle to another.  Part of the method involves avoiding voting:  Instead of calling for a vote and picking winners, a slightly more elaborate mechanism produces a candidate, which then goes through another pass to make sure that nobody has any objections.  Thus decisions are made by consent — something everyone can live with — not necessarily by consensus.  In this way it reminds me of IETF meetings which I participated in years ago that also eschewed voting in favor of what they called “rough consensus.”

There currently seems to be a resurgence of interest in running groups in non-traditional ways.  In an earlier post I covered the idea of self-management (Absolutely Self-Managed Workers), and in another post Wirearchy (Wirearchy – a pattern for an adaptive organization?).  John Hagel talks about push and pull organizations (The Power of Pull: Just Win, Baby).  Tony Hsieh of Zappos has put Holacracy in the news recently, and the distinction between this and Sociocracy is not clear to me.  These all seem to be appearing as alternatives to the more traditional scientific management (It is All Taylor’s Fault).


A cadre of organizational heavy hitters (Brynjolfsson et al.) has called, in the “Open Letter on the Digital Economy,” for a set of changes in public policy and research on how the economy is structured.  Steve Denning covered this in his article “An Open Letter From Silicon Valley Calls For Bold Organizational Reform,” where he mentions Sociocracy as one of three dozen initiatives promoting innovative new organizational structures.

The Sociocracy movement in North America seems to be concentrated around the Sociocracy Consulting Group, which includes John Buck and seven others offering training in the method, and a loose confederation of other consultants (notably Sharon Villines), all somewhat associated with The Sociocracy Group from Holland.

Collaboration Software for Sociocracy

John Buck reached out to Fujitsu to see what capability we might have for flexibly supporting the working patterns of sociocracy.  This is not really a BPM problem.  The people who participate in a sociocratic circle are knowledge workers.  Thus you need something like case management.

He introduced me to a team of people looking to figure out exactly what would be needed.  Like all knowledge workers, the people running a sociocracy want to focus on their day job, and not on the software they are using.  The idea is to come up with something that fits the working patterns of a sociocracy without needing a lot of customization.  The idea intrigued me.

I hope to cover some of the progress in this direction in future posts.  For now, I hope only that this post has made you aware of a new and innovative way to organize people.

by kswenson at June 19, 2015 10:00 AM

June 18, 2015

Drools & JBPM: Drools & jBPM get Dockerized

Docker is becoming a reference for building, shipping and running container-based applications. It provides a standard, easy and automated way to deploy your applications.

Since the latest 6.2.0.Final community release, you can use Docker to deploy and run your Drools & jBPM applications in an easy and friendly way. Do not worry about operating system, environment and/or application server provisioning and deployments ... just use the applications!

The images are already available at Docker Hub:

Please refer to the "Drools & jBPM community Docker images" section below for more information about what's contained in each image.

Why are these images helpful for me and my company?

To understand the advantages of using these Docker images, let's do a quick comparison with the deployment process for a manual installation of a Drools Workbench application.

If you do it by yourself:
  1. Install and prepare a Java runtime environment
  2. Download the workbench war (and other resources if necessary), from the official home page or from JBoss Nexus
  3. Download and prepare a JBoss WildFly server instance
  4. Configure the WildFly instance, including for example configuring the security subsystem etc.
  5. Deploy Drools into the WildFly instance
  6. Start the application server and run your Drools application
As you can see, manual installation already takes quite a few steps.  While this process can be automated to some degree (as the jbpm-installer does, for example), some questions arise at this point ... What if I need a more complex environment? Are other colleagues using the same software versions and configuration? Can I replicate the exact same environment? Could someone else easily run my local example during a customer demo? What if I need to deploy several identical runtime environments? What about removing my local installation from my computer? ...

Software containers and Docker are a possible solution, and help provide an answer to some of these questions.

Both Drools & jBPM community Docker images include:
  • The OpenJDK JRE 1.7 environment 
  • A JBoss WildFly 8.1.0.Final application server
  • Our web-based applications (Drools Workbench, KIE server and/or jBPM Workbench) ready to run (configurations and deployments already present)
You don't have to worry about the Java environment, the application server, the web applications or configuration ... just run the application using a single command:

  docker run -p 8080:8080 -d --name drools-wb jboss/drools-workbench-showcase:6.2.0.Final

Once finished, just remove it:

   docker stop ...

At this point, you can customize, replicate and distribute the applications! Learn more about Docker, its advantages and how to use it at the official site.
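The full run/stop/remove cycle can be spelled out as follows. This is a sketch based on standard Docker commands; the container name drools-wb comes from the run command above:

```shell
# Run the showcase image, publishing port 8080 (name as used above)
docker run -p 8080:8080 -d --name drools-wb jboss/drools-workbench-showcase:6.2.0.Final

# When finished: stop the container, then remove it
docker stop drools-wb
docker rm drools-wb

# Optionally remove the downloaded image as well
docker rmi jboss/drools-workbench-showcase:6.2.0.Final
```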

The environment you need

Do not worry about Java environments, application servers or database management systems, just install Docker:

   # For RHEL/Fedora based distributions:
   sudo yum -y install docker

More installation information at the official Docker documentation.

Are you using Windows? 

For Windows users, in order to use Docker, you have to install Boot2Docker. It provides a basic Linux environment where Docker can run. Please refer to the official documentation for the Docker installation on Windows platforms.

You are ready to run!

Drools & jBPM community Docker images

For the 6.2.0.Final community release, six Docker images have been released.  They fall into two main groups: Base images provide the base software with no custom configuration and are intended to be extended and customized by Docker users.  Showcase images provide applications that are ready to run out of the box (including, for example, some standard configuration); just run and use them!  Ideal for demos, evaluations, or getting started.
  • Base images 
    • Drools Workbench
    • KIE Execution Server
    • jBPM Workbench
  • Showcase images
    • Drools Workbench Showcase
    • KIE Execution Server Showcase
    • jBPM Workbench Showcase

    Let's dive into a detailed description of each image in the following sections.

    Drools Workbench

    This image provides the standalone Drools web authoring and rules management application for version 6.2.0.Final.  It does not include any custom configuration; it just provides a clean Drools Workbench application running in JBoss WildFly 8.1.  The goal of this image is to provide the base software and let users extend it, apply custom configurations, and build custom images.

    Fetch the image into your Docker host:

       docker pull jboss/drools-workbench:6.2.0.Final

    Customize the image by creating your own Dockerfile:

       FROM jboss/drools-workbench:6.2.0.Final

    Please refer to Appendix C for extending this image.
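As a hypothetical illustration of such an extension (the copied file and target path are assumptions, not taken from the image documentation; check the base image layout before relying on them), a derived Dockerfile might look like:

```dockerfile
# Hypothetical extension of the base image; paths are assumptions.
FROM jboss/drools-workbench:6.2.0.Final

# Example: overlay a customized WildFly configuration file
COPY standalone.xml /opt/jboss/wildfly/standalone/configuration/standalone.xml
```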

    Run a Drools Workbench container:

    docker run -p 8080:8080 -d --name drools-wb jboss/drools-workbench:6.2.0.Final

    Navigate to your Drools Workbench at:

       http://localhost:8080/drools-wb # Linux users
       http://<boot2docker_ip>:8080/drools-wb # Windows users

    Refer to Appendix A for more information about IP address and port bindings.

    Drools Workbench Showcase

    See it in Docker Hub

    This image provides the standalone Drools web authoring and rules management application for version 6.2.0.Final plus security configuration and some examples.
    Tip: This image inherits from the Drools Workbench one and adds custom configurations for WildFly security subsystem (security realms) and system properties for enabling the use of the examples repository. 
    The goal for this image is to provide a ready to run Drools Workbench application: just pull, run and use the Workbench.

    1. Pull the image:

      docker pull jboss/drools-workbench-showcase:6.2.0.Final

    2. Run the image:

      docker run -p 8080:8080 -d --name drools-wb-showcase jboss/drools-workbench-showcase:6.2.0.Final

    3. Navigate to the workbench at:

       http://localhost:8080/drools-wb # Linux users
       http://<boot2docker_ip>:8080/drools-wb # Windows users

    Refer to Appendix A for more information about IP address and port bindings.

    You can use admin/admin to log in by default. Refer to Appendix B for the default users and roles included.

    KIE Execution server

    This image provides the standalone rules execution component for version 6.2.0.Final, to handle rules via remote interfaces.
    More information for the KIE Execution Server can be found at the official documentation.
    This image does not include any custom configuration; it just provides a clean KIE Execution Server application running in JBoss WildFly 8.1.  The goal of this image is to provide the base software and let users extend it, apply custom configurations, and build custom images.

    Fetch the image into your Docker host:

       docker pull jboss/kie-server:6.2.0.Final

    Customize the image by creating your own Dockerfile:

       FROM jboss/kie-server:6.2.0.Final

    Please refer to Appendix C for extending this image.
    Run a KIE Execution Server container:

       docker run -p 8080:8080 -d --name kie-server jboss/kie-server:6.2.0.Final

    The KIE Execution Server is located at:

       http://localhost:8080/kie-server # Linux users
       http://<boot2docker_ip>:8080/kie-server # Windows users

    Refer to Appendix A for more information about IP address and port bindings.

    Example: use the remote REST API to perform server requests :

     http://localhost:8080/kie-server/services/rest/server # Linux
     http://<boot2docker_ip>:8080/kie-server/services/rest/server # Win

    KIE Execution Server Showcase

    See it in Docker Hub

    This image provides the standalone rules execution component version 6.2.0.Final, to handle rules via remote interfaces, plus a basic security configuration (including a default user and role).
    More information for the KIE Execution Server can be found at the official documentation. 
    Tip: This image inherits from the KIE Execution Server one and adds custom configuration for WildFly security subsystem (security realms).

    The goal of this image is to provide a ready to run KIE Execution Server: just pull, run and use the remote services.

    1. Pull the image:

       docker pull jboss/kie-server-showcase:6.2.0.Final

    2. Run the image:

       docker run -p 8080:8080 -d --name kie-server-showcase jboss/kie-server-showcase:6.2.0.Final

    3. The server is located at:

       http://localhost:8080/kie-server # Linux users
       http://<boot2docker_ip>:8080/kie-server # Windows users

        The REST API service is located at:
     http://localhost:8080/kie-server/services/rest/server # Linux  
     http://<boot2docker_ip>:8080/kie-server/services/rest/server # Win  

    Refer to Appendix A for more information about IP address and port bindings.

    You can use kie-server/kie-server to log in by default. Refer to Appendix B for the default users and roles included.

    jBPM Workbench

    This image provides the standalone version 6.2.0.Final of the jBPM Workbench: web-based authoring and management of your processes.  It does not include any custom configuration; it just provides a clean jBPM Workbench application running in JBoss WildFly 8.1.  The goal of this image is to provide the base software and let users extend it, apply custom configurations, and build custom images.

    Fetch the image into your Docker host:

       docker pull jboss/jbpm-workbench:6.2.0.Final

    Customize the image by creating your own Dockerfile:

       FROM jboss/jbpm-workbench:6.2.0.Final

    Please refer to Appendix C for extending this image.
    Run a jBPM Workbench container:

       docker run -p 8080:8080 -d --name jbpm-wb jboss/jbpm-workbench:6.2.0.Final

    Navigate to your jBPM Workbench at:

       http://localhost:8080/jbpm-console # Linux users
       http://<boot2docker_ip>:8080/jbpm-console # Windows users

    Refer to Appendix A for more information about IP address and port bindings.

    jBPM Workbench Showcase

    This image provides the standalone version 6.2.0.Final of the jBPM Workbench: web-based authoring and management of your processes. It includes the security and persistence configurations and some examples too.
    Tip: This image inherits from the jBPM Workbench one and adds custom configurations for WildFly security subsystem (security realms) and system properties for enabling the use of the examples repository. 
    The goal of this image is to provide a ready to run jBPM Workbench application: just pull, run and use the Workbench:

    1. Pull the image:

       docker pull jboss/jbpm-workbench-showcase:6.2.0.Final

    2. Run the image:

       docker run -p 8080:8080 -d --name jbpm-wb-showcase jboss/jbpm-workbench-showcase:6.2.0.Final

    3. Navigate into the workbench at:

       http://localhost:8080/jbpm-console # Linux users  
       http://<boot2docker_ip>:8080/jbpm-console # Windows users

    Refer to Appendix A for more information about IP address and port bindings.

    You can use admin/admin to log in by default. Refer to Appendix B for the default users and roles included.



    Appendix A - IP address and ports bindings for Docker containers

    Port bindings
    By default, when using any of the Drools & jBPM Docker images, port 8080 is exposed for the HTTP connector. This port is not published to the Docker host by default, so in order to reach the applications, please read the following instructions.

    The recommended way to run the containers is to specify the -p argument to the docker client:

      docker run -p 8080:8080 -d ....

    Done this way, the Docker daemon binds the container's internal port 8080 to port 8080 on the Docker host machine, so you can reach the applications through that port.


    If your Docker host machine's port 8080 is not available, run the containers with the -P command line argument instead. Docker then binds the internal port 8080 to an available free port on the Docker host, so in order to access the application you have to discover the bound port number.

    To discover a running container's ports, type the following command:

       docker ps -a

    This command outputs the processes and the port mappings for each running container:

    CONTAINER ID   IMAGE              ...   PORTS                  NAMES
    2a55fb...      jboss/drools-w...  ...   ...:49159->8080/tcp    drools-wb
    The PORTS column shows that the container's internal port 8080 is bound to port 49159 on the Docker host, so you can navigate to the applications at:


    Docker hostname & IP address
    The Docker hostname or IP address has to be specified in order to reach the container's applications.

    If you are running Docker on your local machine with a Linux-based OS, it defaults to localhost:


    If you are running Docker on another machine, or in a Windows environment where Boot2Docker is required, you have to specify the hostname (if DNS is available for it) or its IP address:

    Appendix B - Default applications users & roles

    The Showcase images Drools Workbench Showcase and jBPM Workbench Showcase include default users & roles:

    Drools & jBPM Workbench Showcase roles

    Role         Description
    admin        The administrator
    analyst      The analyst
    developer    The developer
    manager      The manager
    user         The end user
    kiemgmt      KIE management user
    Accounting   Accounting role
    PM           Project manager role
    HR           Human resources role
    sales        Sales role
    IT           IT role

    Drools & jBPM Workbench Showcase users

    Username    Password    Roles
    admin       admin       admin,analyst,kiemgmt
    krisv       krisv       admin,analyst
    john        john        analyst,Accounting,PM
    mary        mary        analyst,HR
    sales-rep   sales-rep   analyst,sales
    katy        katy        analyst,HR
    jack        jack        analyst,IT
    salaboy     salaboy     admin,analyst,IT,HR,Accounting

    For KIE Execution Server Showcase there is a single user and role:

    Username     Password     Roles
    kie-server   kie-server   kie-server

    Appendix C - Extending base images

    The base images are intended to be inherited from, so that you can add your own configurations or deployments.

    In order to extend the images, your Dockerfile must start with one of the following lines:

        FROM jboss/drools-workbench:6.2.0.Final 
        FROM jboss/kie-server:6.2.0.Final
        FROM jboss/jbpm-workbench:6.2.0.Final

    At this point, custom configurations and deployments can be added. Some notes:
    • JBoss WildFly is located at the path given by $JBOSS_HOME environment variable
    • $JBOSS_HOME points to /opt/jboss/wildfly/
    • The applications use the server in standalone mode:
      • Configurations located at $JBOSS_HOME/standalone/configuration/
      • Configuration files for the standalone-full profile are used
      • Deployments are located at $JBOSS_HOME/standalone/deployments/
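    Putting these notes together, a minimal extension Dockerfile might look like the following sketch. The configuration and WAR file names here are purely illustrative; only the FROM line and the standalone paths come from the notes above:

```dockerfile
# Hypothetical extension of the jBPM Workbench base image.
FROM jboss/jbpm-workbench:6.2.0.Final

# Replace the standalone-full profile configuration (illustrative file name).
COPY standalone-full.xml /opt/jboss/wildfly/standalone/configuration/

# Add a custom application to the deployments directory (illustrative WAR).
COPY my-app.war /opt/jboss/wildfly/standalone/deployments/
```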
    You can find more information at each official image page at Docker Hub:

    by Roger Martinez ( at June 18, 2015 09:09 PM

    June 17, 2015

    Drools & JBPM: Drools & jBPM meeting space needed in Barcelona

    We are looking to organise a team meeting in Barcelona towards the end of this year. We have limited to no budget for meeting space :( So I thought I'd see if anyone out there would like to volunteer the space - in return you'll have all the core Drools and jBPM developers on hand for a week :) We need a large room suitable for around 25-30 people sitting at tables, and one or two breakout rooms with around 10 people per room.


    by Mark Proctor ( at June 17, 2015 01:36 PM

    June 10, 2015

    Live on YouTube: Demonstration of Model Interchange among BPMN 2.0 Tools

    Next Wednesday, during the OMG Technical Meeting in Berlin, there will be another live demonstration of BPMN 2.0 tools and their interchange capabilities. The demo is performed by the OMG’s BPMN Model Interchange Working Group (MIWG) and will be streamed live on YouTube on Wednesday the 17th of June at 4:00pm Berlin time. The BPMN MIWG is composed of BPMN vendors as well as end users, and its mission is to improve, test and showcase the import and export capabilities of BPMN 2.0 tools.

    Camunda has been contributing to this working group since the very beginning, because BPMN interchange allows users …

    by Falko Menge at June 10, 2015 04:44 PM

    June 09, 2015

    Sandy Kemsley: Top 10 Trends of Digital Enterprise with @setrag at PegaWorld 2015

    I finished my visit to PegaWorld 2015 in the breakout session by Setrag Khoshafian, Pega’s chief BPM evangelist, on the top 10 trends for the adaptive digital enterprise: Context matters. Analyze and...

    [Content summary only, click through for full article and links]

    by sandy at June 09, 2015 08:01 PM

    Sandy Kemsley: The Personology of @RBSGroup at PegaWorld 2015

    Andrew McMullan, director of analytics and decisioning (aka “personologist”) at Royal Bank of Scotland, gave a presentation on how they are building a central (Pega-based) decisioning capability to...

    [Content summary only, click through for full article and links]

    by sandy at June 09, 2015 07:07 PM

    Sandy Kemsley: TD Bank at PegaWorld 2015

    I attended a breakout presented by TD Bank (there was also a TCS presenter, since they’ve done the implementation) on their workflow system for customer maintenance requests – it’s a bit of a signal...

    [Content summary only, click through for full article and links]

    by sandy at June 09, 2015 04:14 PM

    Sandy Kemsley: PegaWorld 2015 Day 2 Customer Keynotes: Big Data and Analytics at AIG and RBS

    After the futurist view of Brian Solis, we had a bit more down-to-earth views from two Pega customers, starting with Bob Noddin from AIG Japan on how to turn information that they have about...

    [Content summary only, click through for full article and links]

    by sandy at June 09, 2015 03:01 PM

    Sandy Kemsley: PegaWORLD 2015 Keynote with @BrianSolis: Innovate or Die!

    Brian Solis from Altimeter  Group was the starting keynote, talking about disruptive technology and how businesses can undergo digital transformation. One of the issues with companies and change is...

    [Content summary only, click through for full article and links]

    by sandy at June 09, 2015 01:54 PM

    June 08, 2015

    Sandy Kemsley: Pega 7 Express at PegaWORLD 2015

    Adam Kenney and Dennis Grady of Pega gave us the first look at Pega 7 Express: a new tool for building apps on top of the Pega infrastructure to allow Pega to push into the low-code end of the...

    [Content summary only, click through for full article and links]

    by sandy at June 08, 2015 04:56 PM

    Sandy Kemsley: PegaWORLD 2015 Keynote: CRM Evolved and Pega 7 Express

    Orlando in June? Check. Overloaded wifi? Check. Loud live band at 8am? Check. I must be at PegaWORLD 2015! Alan Trefler kicked off the first day (after the band) by looking at the new world of...

    [Content summary only, click through for full article and links]

    by sandy at June 08, 2015 03:29 PM

    June 04, 2015

    BPMCon 2015 – Secure the early-bird discount until 30.06.

    The newest OMG standard is called Decision Model and Notation (DMN) and enables a better implementation of business rules in your business processes. That is why the finest conference for Business Process Management (BPM) is dedicated to the topic of decisions this year: you will learn how the new DMN standard can be applied in BPM practice and which benefits can actually be realized.

    In addition, Zalando will share some inside stories in the keynote, describing how the company's rapid growth is being managed, in part with radical measures, so that agility is not lost. Further exciting BPM field reports come from Australia Post, Deutsche Bahn and …

    by Jakob Freund at June 04, 2015 10:47 AM

    May 28, 2015

    Sandy Kemsley: IBM ECM Strategy at Content2015

    Wrapping up the one-day IBM Content 2015 mini-conference in Toronto (repeated in several other cities across North America) is Feri Clayton, director of document imaging and capture. Feri and I were...

    [Content summary only, click through for full article and links]

    by sandy at May 28, 2015 08:41 PM

    Sandy Kemsley: IBM ECM and Cloud

    I’m at the IBM Content 2015 road show mini-conference in Toronto today, and sat in on a session with Mike Winter (who I know from my long-ago days at FileNet prior to its acquisition by IBM)...

    [Content summary only, click through for full article and links]

    by sandy at May 28, 2015 07:06 PM

    Sandy Kemsley: Making Yourself Invaluable: Content2015 Keynote by @markeaton7ft4

    I’m usually not a fan of “inspirational” keynotes at technical conferences that have nothing to do with the topic, and just have a few of the sponsor’s buzzwords sprinkled...

    [Content summary only, click through for full article and links]

    by sandy at May 28, 2015 02:15 PM

    May 27, 2015

    Drools & JBPM: More Eclipse Tooling enhancements

    The biggest complaint from our customers about the eclipse tooling for B*MS is that the cost of entry is too high; not only must a user be familiar with several different technologies, such as Git, maven, REST services and how these technologies are exposed by the eclipse tooling, but s/he must also understand the various Drools and jBPM configuration and definition files. Since there are only a few user-friendly/graphical editors that hide underlying file details, the user must become familiar with most of these file formats, and where in the Project or Repository hierarchy the file resides.

    One of the enhancements I have been working on will hopefully ease some of this burden by providing a "navigator" similar to the Eclipse Project Explorer, but designed specifically for Drools/jBPM projects (see below).
    At the root of this tree viewer are the app servers that have Drools/jBPM installed. Servers are managed (start, stop, debug) from the WST Servers view. At the next level is the Organizational Unit, then Repositories and finally Projects. Essentially, this viewer mimics the web console with the addition of multiple servers.

    The tree structure is cached whenever a connection to the server can be established. This allows the view to be used in "offline" mode if the server is down or network connection is unavailable. When the server is available again, the viewer synchronizes its cache with the server.

    Repositories are automatically cloned, and Projects are imported as they are requested by the user with a context menu action.

    I'm still in the design/experimenting phase right now, so if there's a feature you'd like to see, or if you have suggestions for improving this interface please post your comments here.

    You can also see a related post, showing my work on improving the wizards and runtime generation and configuration.

    by Robert Brodt ( at May 27, 2015 04:10 PM

    Drools & JBPM: Improved Drools & jBPM Eclipse wizard

    Bob has been working on improving our Drools & jBPM Eclipse wizards.

    • The user no longer needs to create runtimes. They can now be created automatically on the fly by the new project wizard.
    • The project wizard will now list examples from the GitHub repository and allow them to be selected and downloaded as part of the wizard.
    You can see a video of this here:

    Currently all the downloadable examples are jBPM; we still need to migrate the Drools examples over to this repository format.


    by Mark Proctor ( at May 27, 2015 10:01 AM

    May 19, 2015

    Drools & JBPM: A Comparative Study of Correlation Engines for Security Event Management

    This paper just came up on my Google alerts; you can download the full text from ResearchGate.
    "A Comparative Study of Correlation Engines for Security Event Management"

     It's an academic paper, published in the peer-reviewed proceedings of
    "10th International Conference on Cyber Warfare and Security (ICCWS-2015)"

    The paper evaluates the correlation performance of different open source engines with large rule sets and large data sets. I was very pleased to see how well Drools scaled at the top end. I'll quote this from the conclusion and copy the results charts.
    "As for the comparison study, it must be said that if the sole criteria was raw performance Drools would be considered the best correlation engine, for several reasons: its consistent behaviour and superior performance in the most demanding test cases."

    In Table 2 (first image) we scale from 200 rules to 500 rules, with 1 million events, with almost no speed loss - 67s vs 70s.

    In Table 1 (second image) our throughput increases as the event sets become much larger.

    I suspect the reason why our performance is lower for the smaller rule and event set numbers is the engine initialisation time for all the functionality we provide and all the indexing we do. As the matching time becomes large enough, due to larger rule and data sets, this startup time becomes much less significant in the overall figure.

    by Mark Proctor ( at May 19, 2015 11:46 PM

    May 18, 2015

    A noteworthy new blog about BPM

    There is a new kid on the block of BPM blogs: our tech lead Daniel Meyer has started blogging, and since Daniel strives for excellence in anything he does, I would strongly recommend subscribing to the feed, following him on Twitter, and reading his latest post about how to express asynchronous service invocations in BPMN.

    Disclaimer: No, he did not ask me to promote his new blog. In fact he will probably be mad at me because I did – because now it looks like he *did* ask for it – but I won’t mind

    by Jakob Freund at May 18, 2015 07:06 PM

    May 12, 2015

    Camunda BPM 7.3 Release Webinar on June 2nd

    Camunda BPM 7.3 will be released on May 31, 2015 (yep, right on schedule!), and it will be jam-packed with outstanding new features.

    My personal favorites are:

    Process Instance Modification: Flexibly start and stop any step within your process – you can even use it like a Star Trek-style “token transporter” and move your process instance from any current state into another. Check this out and be awestruck!

    Super-Flexible Authorizations: Define who is able to do what within Camunda – for example, the members of a group “Marketing” are only allowed to start, see and work on “their” …

    by Jakob Freund at May 12, 2015 07:38 PM

    Drools & JBPM: Validation and Verification for Decision Tables

    The decision tables are getting even more improvements beyond the UI work Michael has been working on:
    Zooming and Panning between Multiple Huge Interconnected Decision Tables
    Cell Merging, Collapsing and Sorting with Multiple Large Interconnected Decision Tables

    I am currently working on improving the validation and verification (V&V) of the decision tables: making it real time and improving the existing checks.

    Validation and verification are used to determine whether the given rules are complete and to look for bugs in the dtable author's logic. More about this subject.

    Features coming in the next release

    Real time Verification & Validation

    Previously the user had to press a button to find out whether the dtable was valid. Now the editor does the check in real time, removing the need to constantly hit the Validate button. This also makes the V&V faster, since there is no need to validate the entire table; only the effect of a changed field on the rest of the table needs to be checked.

    Finding Redundancy 

    To put it simply: two rows that are equal are redundant, but redundancy can be more subtle than that. The longer explanation is: redundancy exists when two rows perform the same actions when they are given the same set of facts.

    Redundancy might not be a problem if the redundant rules just set a value on an existing fact; the value is merely set twice. Problems occur when the two rules increment a counter or add more facts into the working memory. In either case, one of the two rows is not needed.



    Finding Subsumption

    Subsumption exists when one row does the same thing as another with a subset of the other rule's values/facts. In the simple example below I have a case where a fact with a max deposit below 2000 fires both rows.

    The problems with subsumption are similar to the case with redundancy.
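    To make the distinction concrete, the redundancy and subsumption checks described above can be sketched as containment tests over condition ranges. The sketch below is purely illustrative (it is not the Workbench V&V code): each row holds numeric condition intervals, one row subsumes another when its conditions cover the other's and the actions match, and mutual subsumption is redundancy:

```python
# Illustrative sketch only, not the actual Drools Workbench V&V code.
# Each row has numeric condition intervals and a single action.

def interval_contains(a, b):
    """True if interval a = (lo, hi) contains interval b."""
    return a[0] <= b[0] and b[1] <= a[1]

def subsumes(row_a, row_b):
    """Row A subsumes row B: same action, and A's conditions cover B's."""
    return (row_a["action"] == row_b["action"]
            and all(interval_contains(row_a["cond"][k], row_b["cond"][k])
                    for k in row_b["cond"]))

def redundant(row_a, row_b):
    """Two rows are redundant when each subsumes the other."""
    return subsumes(row_a, row_b) and subsumes(row_b, row_a)

# Hypothetical rows: both approve when the deposit is below 2000;
# the second row only covers deposits below 1000, a subset of the first.
r1 = {"cond": {"deposit": (0, 2000)}, "action": "approve"}
r2 = {"cond": {"deposit": (0, 1000)}, "action": "approve"}

print(subsumes(r1, r2))   # prints True: r1 covers every case r2 covers
print(redundant(r1, r2))  # prints False: r2 does not cover all of r1
```

    With interval conditions, subsumption is just containment; real decision tables also have to handle enumerations and missing (always-true) cells.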

    Finding Conflicts

    Conflicts can exist either within a single row or between rows.
    A single-row conflict prevents the row's actions from ever being executed.

    Single-row conflict - the second row checks that the amount is greater than 10000 and below 1

    A conflict between two rows exists when the conditions of both rules are met by the same set of facts, but the actions set existing fact fields to different values. The conditions might be redundant or just subsumptive.

    The problem here is that we do not know which action is performed last. In the example below: will the rate be set to 2 or 4 in the end? Without going into the details, the end result may be different on each run and with each software version.
    Two conflicting rows - both rows change the same fact to a different value


    Reporting Missing Columns

    In some cases, usually by accident, the user can delete all the condition or action columns.

    When the condition columns are removed, all the actions are executed; when the action columns are missing, the rows do nothing.
    The action columns are missing
    The condition columns are missing

    What to expect in the future releases?

    Better reporting

    As seen in the examples above, reporting of the issues is currently poor.
    The report should let the user know how serious the issue is, why it is happening and how to fix it.

    The different issue levels will be:
    • Error - Serious fault. It is clear that the author is doing something wrong. Conflicts are a good example of errors.
    • Warning - These are most likely serious faults. They do not prevent the dtable from working, but they need to be double-checked by the dtable author. Redundant/subsumptive rules are an example: maybe the actions do need to happen twice in some cases.
    • Info - The author might not want to have any conditions in the dtable. If the conditions are missing, each action gets executed; this can be used to insert a set of facts into the working memory. Still, it is good to point out that the conditions might have been deleted by accident.


    Finding Deficiency

    Deficiency causes the same kind of trouble that conflicts do: the conditions are too loose and the actions conflict.

    For example:
    If the loan amount is less than 2000 we do not accept it.
    If the person has a job we approve the loan.
    The problem is that we might have people with jobs asking for loans under 2000. Sometimes they get them, sometimes they do not.


    Finding Missing Ranges and Rows

    Is the table complete? In our previous examples we used the dtable to decide whether a loan application gets approved. One row in the dtable should always activate, no matter how the user fills out the loan application, either rejecting or approving the loan; otherwise the applicant never gets a loan decision.
    The goal of the V&V tool is to find these gaps for the dtable author.


    Finding Cycles

    The actions can insert new facts, and the conditions trigger the actions when new facts are inserted. This can cause an infinite number of activations.
    This is a common mistake, and the goal is to pick it up in the authoring phase with the V&V tool.
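    The cycle problem can be sketched outside any particular rule engine. In the toy forward-chaining loop below (purely illustrative, not Drools code), a rule whose action inserts a fact that matches its own condition keeps firing forever, so the loop is capped for the demonstration:

```python
# Illustrative sketch of rule cycling; not actual Drools code.

def run_rules(facts, rules, max_firings=10):
    """Naive forward chaining: fire matching rules, insert produced facts.
    Returns the number of firings (capped to avoid an infinite loop)."""
    firings = 0
    agenda = list(facts)
    while agenda and firings < max_firings:
        fact = agenda.pop()
        for condition, action in rules:
            if condition(fact):
                agenda.extend(action(fact))  # new facts re-enter the agenda
                firings += 1
    return firings

# Hypothetical cyclic rule: whenever it sees an "order" fact,
# its action inserts another "order" fact, re-triggering itself.
cyclic_rule = (lambda f: f["type"] == "order",
               lambda f: [{"type": "order"}])

print(run_rules([{"type": "order"}], [cyclic_rule]))  # prints 10 (the cap)
```

    A V&V check can spot this statically, by noticing that a row's action produces a fact pattern that its own (or another row's) condition matches.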

    by Toni Rikkola ( at May 12, 2015 05:20 PM

    May 07, 2015

    Sandy Kemsley: SapphireNow User Experience Q&A with Sam Yen

    Wrapping up day 2 of SAPPHIRE NOW 2015, a small group of bloggers met with Sam Yen, SAP’s Chief Design Officer, to talk about user experience at SAP. That, of course, means Fiori: the user...

    [Content summary only, click through for full article and links]

    by sandy at May 07, 2015 01:09 AM

    May 06, 2015

    Sandy Kemsley: SapphireNow 2015 Day 2 Keynote with Bernd Leukert

    The second day of SAP’s SAPPHIRENOW conference started with Bernd Leukert discussing some customers’ employees worry of being disintermediated by the digital enterprise, but how the...

    [Content summary only, click through for full article and links]

    by sandy at May 06, 2015 04:22 PM

    Sandy Kemsley: IoT Solutions Panel at SapphireNow 2015

    Steve Lucas, president of platform solutions at SAP, led a panel on the internet of things at SAPPHIRENOW 2015. He kicked off with some of their new IoT announcements: SAP HANA Cloud Platform (HCP)...

    [Content summary only, click through for full article and links]

    by sandy at May 06, 2015 12:30 PM

    May 05, 2015

    Sandy Kemsley: Consolidated Inbox in SAP Fiori at SapphireNow 2015

    I had a chance to talk with Benny Notheis at lunchtime today about the SAP Operational Intelligence product directions, and followed on to his session on a consolidated inbox that uses SAP’s...

    [Content summary only, click through for full article and links]

    by sandy at May 05, 2015 10:10 PM

    Sandy Kemsley: SapphireNow 2015 Day 1 Keynote with Bill McDermott

    Happy Cinco de Mayo! I’m back in Orlando for the giant SAP SAPPHIRE NOW and ASUG conference to catch up with the product people and hear about what organizations are doing with SAP solutions....

    [Content summary only, click through for full article and links]

    by sandy at May 05, 2015 03:57 PM

    May 01, 2015

    Keith Swenson: Analytics in the Swarm

    Big data is a style of data analysis that reflects a return to large, centralized data repositories. Processing power and memory are getting cheaper, while the bandwidth among all the smart devices remains a barrier to getting all the data together in one place for analysis. The trend is toward putting the analytics into the swarm of devices known as the Internet of Things (IoT).

    This is an excerpt from the chapter “Mining the Swarm” by Keith D Swenson, Sumeet Batra and Yasumasa Oshiro, all from Fujitsu America, published in the new book “BPM Everywhere.”

    Mainframe Origins

    The first advances in the field of computing machinery were big, clumsy, error-prone electrical and mechanical devices that were not only physically large but extremely expensive, requiring specially designed rooms and teams of attendants to keep them running. The huge up-front investment meant that the machines were reserved exclusively for the most important, most expensive and most valuable problems.

    We all know the story of Moore’s Law and how the cost of such machines dropped dramatically year after year. At first the cost savings meant only that such machines could be dramatically more powerful and could handle many programs running at the same time. The machine’s time was split into slices that could be used by different people at different times. Swapping machine time among different accounts represented, in the end, an overhead and a barrier to use. The groups running the machines needed to charge by the CPU cycle to pay for the machine. While there were times when the machine was under-utilized, it was never possible to say that there were ‘free’ cycles available to give away; the cost-recovery motive can’t allow that.

    Emergence of Personal Computers

    The PC revolution was not simply a logical step due to the decreased cost of computing machinery, but rather a different paradigm. By owning a small computer, the CPU cycles were there to be used, or not, as one pleased. CPU cycles were literally free after the modest capital cost of the PC had been paid. This liberated people to use the machines more freely, and opened the way to many classes of applications that would have been hard to justify economically on a time-share system. The electronic spreadsheet was born on the PC because, on a mainframe, spending expensive CPU power just to update the display for the user could not be justified. The mainframe approach would be to print all the numbers onto paper, have the analyst mark up the paper, get it as right as possible, and then have someone input the changes once. The spreadsheet application allowed a user to experiment with numbers; play with the relationships between quantities; try out different potential plans; and see which of many possible approaches looked better.

    The pendulum had swung from centralized systems to decentralized systems; new applications allowed CPU cycles to be used in new, innovative ways, but PC users were still isolated. Networking was still in its infancy, and in the 90’s it changed everything.


    World Wide Web

    The Internet meant that PCs were no longer simply equipment for computation, but became communications devices. New applications arrived for delivering content to users. The browser was invented to bring resources from those remote computers and assemble them into a coherent display on user demand. Early browsers were primitive, and there were many disagreements on what capabilities a browser should have to make the presentation of information useful to the user. The focus at that time was on the web server which had access to information in a raw form, and would format the information for display in a browser. Simply viewing the raw data is not that interesting, but actually processing that data in ways customized by the user was the powerful value-add that the web server could provide. Servers had plug-ins, and the Java and JavaScript languages were invented to make it easier to code these capabilities and put them on a server.

    The pendulum had swung back to the mainframe model of centralized computing. The web server, along with its big brother the application server, was the most important processing platform of that time. The web browser allowed you to connect to the results of any one of thousands of such web servers, but each web server was the source of a single kind of data.

    Apps, HTML5, and client computing

    Web 2.0 was the name of a trend for the web to change from a one-way flow of information to a two-way, collaborative flow that allowed users to be more involved. At the same time, an interesting technological change brought about the advent of ‘apps’: small programs that could be downloaded, installed, and run more or less automatically. This trend was launched on smart phones and branched out from there. HTML5 promises to bring the same capability to every browser. Once again the pendulum had swung in the direction of decentralization: servers provide data in a rawer form and apps format the display on a device much closer to the user.

    Cloud Computing & Big Data

    More recently the buzz terms are cloud computing and big data. Moving beyond the basic provision of first-order data, large computing platforms are collecting large amounts of data about people as they use the web platforms. Memory capacity has grown so quickly, and its cost dropped so quickly, that there is no longer any need to throw anything away. The huge piles of collected data can then be mined, and surprising new insights gained.

    Cell phones automatically report their position and velocity to the phone company. For cell phones moving quickly, or not so quickly, on a freeway, this is important information about traffic conditions. Google collects this information, determines where traffic is running slow and where it is running fast, and displays the result on maps, using colors to indicate how good or bad the traffic is. The cell phone was never designed as a traffic monitor; it took an insightful engineer to realize that, out of a large collection of information gathered for one purpose, good information about other things could be deduced.

    Big data means just that: data collected in such quantity that special machines are needed to process it. A better way to think of it is that the data collection is so large that, even at the fastest transfer speeds, it would take days or months to move it to another location. Having a special machine for analyzing data does not help at all if the data set needs to be transported to that machine and the time for the transfer would be prohibitive. Instead of bringing the data to the analysis machine, you have to send the analysis to the machine holding the data. The pendulum had once again swung to centralized machines with large collections of data.

    Analytics in the Swarm

    The theme of this chapter is to anticipate the next pendulum swing: big-data-style analytics will become available in a distributed fashion, away from the centralized stockpiles of information. While the challenge in big data is volume and velocity, what sensor technology and the IoT bring is variety: data that had not previously been leveraged, which we call “dark data.” Dark data is attracting people as a new data source for mining. Each hardware sensor collects specific data such as video, sound (in streams), social media texts, stocks, weather, temperature, location, or vital signs. Analyzing this data is a challenge since the devices are so distributed and relatively difficult to aggregate in the traditional ways. So the key to analyzing sensor data is how you extract useful data (metadata) or compress it, and how the devices interact with each other or with a central server. Some say that a machine-to-machine (M2M) approach is called for.

    There are a number of reasons to anticipate these trends, as well as evidence that this is beginning to happen today.

    Read the rest in “BPM Everywhere”, where there is more evidence that memory and processor performance is falling far faster than telecommunications cost, meaning that processing should move closer to the devices, along with examples of how analytics might be used to achieve greater operating efficiencies everywhere.

    by kswenson at May 01, 2015 10:53 AM

    April 28, 2015

    Keith Swenson: Montreal Conference

    I will be speaking in Montreal in May at a conference and at an associated workshop about Innovation and Business Processes.

    I have been asked to do a keynote at the MCETECH 2015 conference in Montreal.  The conference is all about bringing together researchers, decision makers, and practitioners interested in exploring the many facets of Internet applications and technologies.

    My talk will be on May 14th called “Robots don’t innovate: Innovation vs. Automation in BPM” where I will present many ideas from the “Thinking Matters” book.

    I will also be speaking May 12th at the “Workshop on Methodologies for Robustness Injection into Business Processes” where I will get to geek out a little more on the implementation side of BPM software engineering and distributed system design.

    Looking forward to my first visit to Montreal.  Hope to see some of you there!

    ALSO –  Book Released

    New book released:   BPM Everywhere: Internet of Things, Process of Everything

    by kswenson at April 28, 2015 10:38 AM

    April 27, 2015

    Drools & JBPM: Cell Merging, Collapsing and Sorting with Multiple Large Interconnected Decision Tables

    Last month I showed you videos of our proof-of-concept work, using Lienzo, to see how viable the HTML5 canvas is for multiple large interconnected decision tables.

    Michael has made more progress, adding cell merging and collapsing as well as column sorting, all still working on truly massive interconnected tables.

    We plan to make the generic core of this work available as a Lienzo grid component in the future, although we still need to figure out different data types for cells and how to do seamless in-cell editing rather than a popup.

    (click to turn on 720p HD and full screen)

    by Mark Proctor ( at April 27, 2015 10:18 PM

    Drools & JBPM: Domain Extensions for Data Modeller

    Walter is working on adding domain extensions to the Data Modeller. This will allow different domains to augment the model, such as custom annotations for JPA or OptaPlanner. Each domain is pluggable via a “facet” extension system. Currently, as a temporary solution, each domain extension is added as an item in the toolbar, but this will change soon. In parallel, Eder will be working on something similar to IntelliJ’s Tool Windows for side bars. Once that is ready, those domain extensions will be plugged in as facets and exposed via this tool window capability. Here is a video showing JPA and its annotations being used with the Data Modeller.

    (Click to  turn on 720p HD and full screen)
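    The facet idea described above can be sketched as a small registry in which each domain contributes the annotations it knows about. This is only an illustration of the pattern; the names and API are guesses, not the actual Data Modeller extension interface.

```python
# Sketch of a "facet"-style extension registry: each domain (e.g. JPA,
# OptaPlanner) plugs in the annotations it can contribute to a model.
# All names here are hypothetical, for illustration only.

FACETS = {}

def register_facet(domain, annotations):
    """A domain registers the annotations it contributes."""
    FACETS[domain] = annotations

def annotations_for(domains):
    """Collect the annotations that every active domain contributes."""
    result = []
    for d in domains:
        result.extend(FACETS.get(d, []))
    return result

# Two hypothetical domain plugins:
register_facet("jpa", ["@Entity", "@Id"])
register_facet("optaplanner", ["@PlanningEntity"])
```

    The tool-window idea then becomes a UI concern layered on top: each registered facet gets its own panel, without the core model knowing about any specific domain.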

    by Mark Proctor at April 27, 2015 02:48 PM

    April 23, 2015

    Keith Swenson: Process Focus vs. System Architecture

    Too much of a focus on the business process can cause a business solution to be poorly designed and problematic.  This is a story from several customers who followed the BPM methodology too well, and were blindsided by some nightmarish systems issues.  Too much process can be a real problem.

    Process is King

    We know that the mantra for BPM is to design everything as a process.  The process view of work allows you to assess how well work gets from beginning to end.  It allows you to watch and optimize cycle time, which is essential to customer satisfaction.

    BPM as a management practice is excellent.  However, many people see BPM as a way to design an application.  A process is drawn as a diagram, and from this the application is created.  This can be OK, but there is a particular pitfall I want to warn you about.

    A Sample Process

    Consider the following hypothetical process between servers in a distributed environment:

    Here we have a process in system B (in the middle) that splits into a couple of parallel branches.  Each branch uses a message to communicate with an external remote system (systems A and C) and start a process there.  When those processes complete, the messages come back and eventually the middle process completes.  This is a “remote subprocess” scenario.

    What is the matter with this?  This seems like a pretty straightforward process.  The middle process easily sends a message.  Receipt of that message easily starts a process.  At the end of that process, it easily sends a message back, which can easily be received.  What could go wrong?

    Reliability: Exactly-Once

    The assumption being made in this diagram is that the message is delivered exactly once.  “Exactly-once” is a term of art that means that the message is delivered with 100% reliability, and a duplicate is never seen by the receiver.

    Any failure to deliver a message would be a big problem:  either the sub-processes would not be started, or the main process would not get the message to continue.  The overall process would then be stuck.  Completely stuck.  The middle process would be inconsistent with the remote processes, and there is no way to ever regain consistency.

    So, then, why not just implement the system to have exactly-once message delivery?   Push the problem down to the transport level.  Build in reliability and checking so that you have exactly once delivery.  In a self-contained system, this can be done.  To be precise, within a single host, or a tightly bound set of hosts with distributed transactions (two phase commit) it is possible to do this.  But this diagram is talking about a distributed system.  These hosts are managed independently.  The next section reveals the shocking truth.

    Exactly-Once Delivery does not Exist

    In a distributed system where the machines are not logically tied and managed as a single system, it is not possible to implement — nor do you want to implement — true exactly once reliable message delivery.  Twice recently, a friend of mine from Microsoft referenced a particular blog post on this topic:  You Cannot Have Exactly-Once Delivery.  There is another discussion at: Why Is Exactly-Once Messaging Not Possible In A Distributed Queue?

    This is a truism that I have believed for a long time.  I never expect reliable message delivery.  There is a thought experiment that helps one understand why, even if we could implement exactly-once delivery, you would not want it.  Think about back-up, and restoring a server from backup.  Systems A, B, and C are managed separately.  That means they are backed up separately.  Imagine that a disk blows up on system C.  A replacement disk will be deployed, and the contents restored from backup, to a state from a few moments to a few hours ago.  Messages that were reliably delivered during that gap are now certainly not delivered, and the system is stuck.  The process that had been rolled back will send extra messages, which will in turn cause redundant processes on the remote systems, which might (if the interactions were more elaborate) cause them to get stuck.

    Exactly once delivery attempts to keep the state of systems A, B, and C in sync.  Everything works in the way that a Rube Goldberg machine works: as long as everything works exactly as expected you can complete the process, but if anything trips up in the middle all is lost.   The backup scenario destroys the illusion of distributed consistency.  System C is not in sync, and there is no way to ever get into sync again.

    So .. All is Lost?

    We need reliable business processes, and it turns out that can be done using a consistency-seeking approach.  What you have to do is assume that messages are unreliable (as they are).  From a business process point of view, you want to visualize the process as a message being delivered, but you do not want to architect the application to literally use this as the mechanism of coordination between the systems.

    You need a background task that reads the state of the three systems and attempts to get them into sync.  For example, when system B sends a message to system C, it also records the fact that it expects system C to run a subprocess.   System C, when receiving a message, records the fact that it has a subprocess running for system B.   A background task will ask system B for all the subprocesses that it expects to be running on system C, then ask system C for a list of all the processes it actually is running for system B.   If there is a discrepancy, it takes action.

    Consider, for example, system B having a process XYZ that is waiting on system C for a subprocess.  The consistency seeker asks system C if it has a process for XYZ running.  There are two problem scenarios: either there is no such process, in which case it tells system B to re-send the message starting the process; or the process is there but has already completed, in which case it tells system C to re-send the completion message.   So if things are out of sync, a repeat message is prompted.  The other requirement is that if, by bad luck, a redundant message is received, it is ignored.  Those two things, resending messages and ignoring duplicates, are the essential ingredients of implementing reliable processes on top of an unreliable transport, and it works in distributed systems.
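    The two ingredients can be sketched in a few lines of Python. This is a toy in-memory model, not a real distributed implementation: the class and method names are illustrative, and in practice both sides would persist their state and talk over an unreliable network.

```python
# Sketch of the consistency-seeking pattern: system B expects
# subprocesses to run on system C, messages may be lost, a background
# task reconciles the two sides, and receivers ignore duplicates.

class SystemC:
    def __init__(self):
        self.running = {}  # process id -> "running" or "done"

    def receive_start(self, process_id):
        # Ignoring duplicate starts makes re-sent messages harmless.
        if process_id not in self.running:
            self.running[process_id] = "running"

    def complete(self, process_id):
        self.running[process_id] = "done"


class SystemB:
    def __init__(self, remote):
        self.expected = set()   # subprocess ids we believe run on C
        self.finished = set()   # completions we have heard about
        self.remote = remote

    def receive_completion(self, process_id):
        # A set makes duplicate completion messages harmless too.
        self.finished.add(process_id)


def consistency_seeker(b, c):
    """Background task: compare expectation with reality, prompt re-sends."""
    for pid in b.expected:
        state = c.running.get(pid)
        if state is None:
            c.receive_start(pid)       # start message was lost: re-send it
        elif state == "done" and pid not in b.finished:
            b.receive_completion(pid)  # completion message was lost: re-send
```

    A lost start or a lost completion is repaired the next time the seeker runs, and because both receivers are idempotent, a message that was not actually lost does no harm when it arrives twice.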

    Consistency Seeking

    Consistency seeking solves the problem at the business process level, and not at the transport level.

    It even works if the system is restored from backup.  For example, imagine that system B (the middle system) is restored from a backup made yesterday, while systems A and C are left in today’s state.  In such a case, there may be processes that had been completed, but are not yet completed in the restored state.  The consistency-seeking mechanism will check, and will prompt the re-sending of the messages that will eventually bring the systems into a consistent state.  It is not perfect—there are situations where you cannot automatically return to a synchronized state—but it works for most common business scenarios.  It certainly works in the case where a simple message was lost.  It is far less fragile than a system that assumes every message is delivered exactly once.


    Process-oriented thinking causes us to think about processes in isolation.  We forget that real systems need to be backed up, real systems go up and down, real systems are reconfigured independently of each other.  The process-oriented approach ignores those realities to focus exclusively on one process, with the assumption that everything in that process is always perfectly consistent.

    This does not mean that you should not design with a process.  It remains important for the business to think about how your business is running as a process.   But, naïvely implementing the process exactly as designed will result in a system that is not architected for reliability in a distributed environment.  BPM is not a replacement for good system architecture.

    by kswenson at April 23, 2015 10:05 AM

    April 14, 2015

    Free: Camunda BPM Online Training

    My formidable Co-Founder Bernd Rücker created a self-paced training course for Camunda BPM. It consists of 4.5 hours of video plus a couple of hands-on exercises with sample solutions.

    You can complete this course if you want to get your feet wet with Camunda, plus it provides some valuable insights into best practices from our consulting experience, e.g. for creating UI in different technologies, writing unit tests or handling transactions.

    And it’s free! You just have to sign up for the Camunda BPM Network, and off you go.

    Get the Camunda BPM Online Training

    by Jakob Freund at April 14, 2015 01:07 PM

    Sandy Kemsley: London Calling To The Faraway Towns…For EACBPM

    I missed the IRM Business Process Management Europe conference in London last June, but will be there this year from June 15-18 with a workshop, plus a breakout session and a panel session. It’s...

    [Content summary only, click through for full article and links]

    by sandy at April 14, 2015 11:41 AM

    April 13, 2015

    Keith Swenson: bpmNEXT – Day 2

    Here are my notes from the second day of bpmNEXT on March 31, 2015.  Note: I spoke on day 3 and was too busy to take notes then, so these conclude my notes of the event.


    Michael Grohs, Sapiens DECISION – How to manage Decision Logic

    Decision-aware business models are simpler and easier to maintain, with less ambiguity than natural language descriptions.  Rule content can be managed by users because it is clearly separate from the program.  Communities make their own vocabularies.  Decision management can produce rules that run in several different rules engines.

    Showed the decision design tool.  Typically a whole team works on it, playing different roles.  There is a defined process for changing rules, and every change is tracked.  (The product has JavaScript “windows” inside a browser, and the demo seemed to have trouble managing the windowing.)

    A decision starts with an octagon, then some squares with the top corners chopped off, which represent rule families, linked together with arrows.  Each rule family declares what values (facts) it generates.  In a typical flat rule set the inferential information is lost, and once lost it is hard to maintain.  Opening a rule family, it looks like a table, essentially a decision table.  Each row is ORed with the earlier lines.   Not sure whether it is the first line that matches, or the last line that matches.

    Example was a rule base, and then a specialized rule base for a particular state: Florida.  You can see a side-by-side window with the differences highlighted.  There are logs of all changes.

    Q: What about knowledge workers?  Can they use it, and can they have their own rules?

    A: Right now focusing on the automation, the 80/20 part.  Everything is about faster and cheaper.  In the future we may think about more elaborate rules.  Agility and closed loop with knowledge worker is the next step.

    Gero Decker, Signavio – Business Decision Management

    BPM addresses 50% of the questions; the rest is making decisions.  New standard: Decision Model and Notation (DMN 1.0).   Signavio Decision Manager concentrates on the modeling and governance.

    Drew up a quick BPMN diagram.  Used a “business rule” task.  Open that up and see a decision tree.  Open one node in the tree and it looks like a decision table.  Created a quick example decision table.  The decision node in the tree can have “sub-decisions” which are more decision tables.  The product does some decision checking.  There is a rule testing capability as well.

    DMN is pretty powerful; it covers rules as well as predictive analytics, though the latter is not yet supported by Signavio.

    Export to DROOLS code, which is pretty flat view of all the rules.  The hierarchy is not apparent, but coded into the rules.  There is a declare statement at the top for run-time binding to the execution environment.

    As for execution of the rules in the DMN standard, there are differing hit policies:  first hit, multiple hits, some sort of weighting, last hit.  He demoed ‘exclusive’, which means that you cannot have overlapping rules; in the exclusive case it automatically shows you when rules overlap.
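    The difference between these hit policies can be sketched with a toy decision table, where each row is a condition/outcome pair. This illustrates FIRST-style and UNIQUE/exclusive-style evaluation only; it is not the DMN specification's actual data model.

```python
# Toy decision-table evaluator illustrating two hit policies:
# FIRST takes the first matching row; UNIQUE (the "exclusive" case)
# rejects overlapping rules when more than one row matches.

def evaluate(rows, facts, hit_policy="FIRST"):
    matches = [outcome for cond, outcome in rows if cond(facts)]
    if hit_policy == "FIRST":
        return matches[0] if matches else None
    if hit_policy == "UNIQUE":
        if len(matches) > 1:
            raise ValueError("overlapping rules under UNIQUE hit policy")
        return matches[0] if matches else None
    raise NotImplementedError(hit_policy)

# A tiny discount table whose rows overlap, so FIRST and UNIQUE differ:
table = [
    (lambda f: f["amount"] > 1000, "10% discount"),
    (lambda f: f["amount"] > 100,  "5% discount"),
]
```

    An amount of 2000 matches both rows: FIRST quietly returns the 10% row, while UNIQUE flags the overlap, which is exactly the check the demo surfaced automatically.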

    John Reynolds – Kofax – Digital World

    It used to be that BPM ignored the physical world, and BPEL is the best example of that.   Now we need to engage customers in the real world.  One rule: don’t force users to gather information that is already out there — use a robot instead.  Some information is there in paper form.  Claims that a lot of people still print their PDF files.  Scanning is what Kofax does, and Kapow was awarded last year for creating BPM processes.  Now, SignDoc for signing applications.

    For the demo, he got out a utility bill and a driver’s license.  Processes from the past too often assumed that all the information was already there.  He holds the smart phone over the document and captures it from the video stream, processing and cleaning up the image specifically for optical character recognition.  Processed on the phone.  The image has the coffee stain removed, and is made black and white.  This cleanup is a kind of “compression” which is important for mobile and storage.

    The documents are scanned in using some libraries for scanning that they make available to put into custom apps.  The user does not have to re-enter anything … just take pictures of the documents.

    These documents “teach” the transformation servers.  There is no coding, but instead teaching.  Characters are recognized, and then the fields on the document are recognized as well.  There is a manual correction step that feeds back to improve the recognition.

    Mike Marin – Mobile Case Management and Capture

    Mobile is no longer optional.  The use case is an insurance company that has unhappy customers and decides to implement a mobile app to make the customer experience better.  Will show content, capture, and case — all on the mobile device.

    Again, with the phone, took a picture of the document.  After taking the picture, there are some options for cleaning up the document and submit it to the process / case.  Can review the contents.

    Robert Shapiro – Process Discovery

    Take the event logs and mine them in order to work out a BPMN process model that will have the same statistical behaviors as the event log.  The example demo will be on stat orders.  Looking at analytics, we see that we are not meeting the objective KPI.  First figure out all the paths, analyze all the variants, and find a critical path.  We can see that one step is causing a lot of the delays.  Propose two different strategies: one to reduce time, another to reduce costs.  After optimization, we meet the delay time reduction criteria, and we see that the second case has better cost benefit values.

    Started the demo by opening an event log.  This creates a BPMN serialization.  He has used the idea of “strict BPMN” to enforce BPMN semantics on the model.  It found a top model and two sub-models.  It created events and gateways which were never in the log — they had to be figured out.  He showed a series of different models that had been mined.  One had found 3 parallel tasks.  The models also look at the data, and find correlations between different data items and paths.  This can be used to mine the branch conditions.  It can discover a 1-hour timer event task as well.

    Can even detect manipulations on data if the event log has data captured in the event log.
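    One building block behind this kind of mining can be sketched directly: deriving the "directly follows" relation from the event log, the raw counts from which sequence flows, parallelism, and branch probabilities are then inferred. The log format here is a simplifying assumption (one list of activity names per trace, already ordered).

```python
# Count how often activity B directly follows activity A across all
# traces of an event log -- the starting point for most discovery
# algorithms that reconstruct a process model from logs.

from collections import Counter

def directly_follows(log):
    """log: list of traces, each an ordered list of activity names."""
    pairs = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

# A hypothetical three-trace log with one branching point:
log = [
    ["receive order", "check stock", "ship"],
    ["receive order", "check stock", "reject"],
    ["receive order", "check stock", "ship"],
]
```

    From these counts a miner can see that "check stock" branches to "ship" two times out of three, which is also why, as noted in the Q&A, branch conditions never need to hold 100% of the time: they are statistical assessments.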

    Q: (Bruce) Impressive to figure this all out from the logs.  What happens if data causes a branch 90% of the time, but not 100%?

    A: It never requires 100%.  There are statistical assessments of the conditions.  Not 100% precise.

    Q: Is simulation the key difference from Disco and the others?

    A: You need to have a complete, executable model if you want to make changes and improve the model.

    Tim Stephenson – Omny Link – Toward Zero Code

    Looked at a bunch of self-coding approaches but found that things didn’t work too well.  Going to focus today on decisions.  WordPress claims that it runs 1/4 of the web.  “Firm Gains” is a site for selling your business.  The first step is to build a form, then a decision table.

    Demo started by logging into WordPress.  Edited what looked like a blog post, but it was actually a form.  Standard list of fields without any programming.  Went to another page, inserted a square-bracket-style WordPress shortcode to include the form, and it appeared.  Very easy, very simple workflow processes.

    Scott Francis – BP3 – Sleep at Night Again

    How to automate static analysis for BPM.   Your design team is not always experienced.  Once they select a technology, they implement, and many times the result is over-engineered and hard to maintain, and sometimes has to be thrown out.  In reality, the cruft does not get added all at once; instead it accumulates incrementally.  Neches is a tool to analyze the code, find problems early in the iterations, and keep them from building up.  It does a complexity measurement on the application.

    Neches is a SaaS tool.  You can sign up for an account, upload your application, and drill into it.  There can be many versions, and you can look at how the measurements have changed over time.  You can drill down into the individual metrics.  Example: the length of a JavaScript server script triggers a warning if longer than a threshold.  Particular rules can be excluded (if you don’t agree with them), or particular flagged issues can also be excluded.
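    The script-length rule mentioned above is the simplest kind of static-analysis check, and its shape can be sketched in a few lines. The rule structure, threshold, and exclusion mechanism here are illustrative guesses, not Neches' actual implementation.

```python
# Sketch of a threshold-based static-analysis rule: warn about server
# scripts whose line count exceeds a limit, with per-script exclusions
# for issues the team has decided to accept.

def check_script_length(scripts, threshold=50, excluded=()):
    """scripts: mapping of script name -> source text.

    Returns a list of (name, line_count) warnings.
    """
    warnings = []
    for name, source in scripts.items():
        if name in excluded:
            continue  # rules/issues can be excluded if you disagree
        lines = source.count("\n") + 1
        if lines > threshold:
            warnings.append((name, lines))
    return warnings
```

    Run against each uploaded version of the application, a rule like this gives exactly the trend line the tool shows: whether the number of flagged scripts is growing or shrinking over time.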

    Q: How complicated is it to create new rules?

    A: Not too hard.  Today this is not exposed, but internally we find it easy to do, and believe once this is exposed people will find it easy enough.

    Linus Chow – Oracle – Rapid Process Excellence

    Showed a web console for starting and interacting with BPM applications.  Mobile interface as well.


    And that is it.  On day 3 I had a presentation and was too busy to take good notes, since they sequester the laptops for the entire session beforehand.  Overall bpmNEXT remains a place for very forward discussion of new directions, a place that is helpful for me to stay on top of things.  The new venue — Santa Barbara — is likely to remain the choice for next year.  I am looking forward to it already.

    by kswenson at April 13, 2015 10:25 AM

    April 10, 2015

    From Push to Pull – External Tasks in BPMN processes

    A process engine typically calls services actively (e.g. via Java, REST or SOAP) from within a service task. But what if this is not possible because we cannot reach the service? Then we use a pattern we call the “External Task” – which I briefly want to describe today.


    Context and problem

    A couple of recent trends increased the need for this pattern, namely:

    Cloud: When running process/orchestration engines in the cloud you might not be able to reach the target service via network connections – and VPNs or tunneling are always cumbersome. It is …
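    Although the article is cut off here, the pull-based core of the pattern can be sketched: instead of the engine pushing a call to the service, a worker on the service side fetches and locks work from the engine, then reports completion. The classes and method names below are an in-memory illustration, not Camunda's actual external-task API.

```python
# Sketch of the "External Task" pattern: the engine only queues work,
# and a worker behind the firewall pulls it, does the job, and
# reports back. All names here are hypothetical.

import queue

class Engine:
    """Stand-in for a process engine exposing external tasks."""
    def __init__(self):
        self.tasks = queue.Queue()
        self.completed = []

    def create_external_task(self, task_id, payload):
        self.tasks.put((task_id, payload))

    def fetch_and_lock(self):
        # A real engine would lock the task with a timeout so another
        # worker can retry it if this one crashes.
        try:
            return self.tasks.get_nowait()
        except queue.Empty:
            return None

    def complete(self, task_id, result):
        self.completed.append((task_id, result))


def worker_poll_once(engine):
    """One polling cycle of a worker that does the 'service' work."""
    task = engine.fetch_and_lock()
    if task:
        task_id, payload = task
        engine.complete(task_id, payload.upper())  # the actual work
```

    Because only outbound connections from the worker to the engine are needed, this sidesteps the VPN/tunneling problem the article describes.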

    by Bernd Rücker at April 10, 2015 10:10 AM

    April 09, 2015

    Orchestration using BPMN and Microservices – Good or bad Practice?

    Martin Fowler recommends in his famous Microservices Article: “Smart endpoints and dumb pipes”. He states:

    The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible – they own their own domain logic and act more as filters in the classical Unix sense – receiving a request, applying logic as appropriate and producing a response. These are choreographed using simple RESTish protocols rather than complex protocols such as WS-Choreography or BPEL or orchestration by a central tool.

    I do not agree! I think even – …

    by Bernd Rücker at April 09, 2015 11:35 AM

    April 02, 2015

    bpmNEXT – the BPM industry event that *really* matters

    Picture taken by Benjamin Notheis from SAP, this year’s winner of the best-in-show award

    Clay Richardson from Forrester Research put it in a nutshell: “bpmNEXT means ‘Show me yours, I’ll show you mine'”.

    And show we did: All BPM Software Vendors that *really* matter were there, presenting the latest and greatest they have to offer – or will offer soon. This was not about Sales or Marketing, but just about showing-off the things we’re proud of, and showing it off to peers who understand and appreciate the passion behind it.

    But bpmNEXT is even more, it is the global gathering of a …

    by Jakob Freund at April 02, 2015 01:25 PM

    Thomas Allweyer: Users of Process Modeling Tools Are Largely Satisfied

    On average, the users surveyed in a new study by the firm BPM&O rate their process modeling tool with a grade of 2.6 (on the German school scale, where 1 is best).  So they are at least largely satisfied.  Interestingly, satisfaction declines the longer a tool has been in use.  The study’s authors attribute this to requirements and conditions changing over time, so that the originally chosen tool no longer fits quite as well.

    With a total of 64 participants, the study is not representative.  Nevertheless, it offers an interesting overview of users’ experiences and opinions.  The participants are predominantly modeling experts from BPM staff units, or process analysts.  The most widely used notation is BPMN; it was named twice as often as EPC.  Interestingly, value chain diagrams, which serve for overview representations and process landscapes, were also used comparatively rarely.

    The tools are mostly provided by the internal IT department.  SaaS offerings are used in only 15% of cases so far.  Ease of use is most important to the users.  A good portal for publishing the process models, as well as functions for involving the business departments, are also of high importance.  For the future, modelers want more powerful reporting capabilities and improved process portals from the tool vendors.

    The study can be downloaded (registration required).  A vendor survey conducted last year can be found there as well.  You can also take part yourself in the user survey, which is being continued on an ongoing basis.

    by Thomas Allweyer at April 02, 2015 10:02 AM

    April 01, 2015

    Sandy Kemsley: bpmNEXT 2015 Day 3 Demos: Camunda, Fujitsu and Best In Show

    Last demo block of the conference, and we’re focused on case management and unstructured processes. Camunda, CMMN and BPMN Combined Jakob Freund presented on OMG’s (relatively) new...

    [Content summary only, click through for full article and links]

    by sandy at April 01, 2015 07:02 PM

    Sandy Kemsley: bpmNEXT 2015 Day 3 Demos: IBM (again), Safira, Cryo

    It’s the last (half) day of bpmNEXT 2015, and we have five presentations this morning followed by the Best in Show award. Unfortunately, I have to leave at lunchtime to catch a flight, so you...

    [Content summary only, click through for full article and links]

    by sandy at April 01, 2015 05:31 PM

    Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: BP-3, Oracle

    We’re finishing up this full day of demos with a mixed bag of BPM application development topics, from integration and customization that aims to have no code, to embracing and measuring code...

    [Content summary only, click through for full article and links]

    by sandy at April 01, 2015 12:04 AM

    March 31, 2015

    Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Kofax, IBM, Process Analytica

    Our first afternoon demo session included two mobile presentations and one on analytics, hitting a couple of the hot buttons of today’s BPM. Kofax: Integrating Mobile Capture and Mobile...

    [Content summary only, click through for full article and links]

    by sandy at March 31, 2015 10:07 PM

    Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Sapiens Decision, Signavio

    We finished the morning demo sessions with two on the theme of decision modeling and management. Sapiens: How to Manage Business Logic Michael Grohs highlighted the OMG release of the Decision Model...

    [Content summary only, click through for full article and links]

    by sandy at March 31, 2015 07:24 PM

    Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Trisotech, Comindware, Bonitasoft

    The first group of demos on bpmNEXT day 2 had a focus on the links between architecture and process: from architectural modeling, to executable architecture, to loosely-coupled development...

    [Content summary only, click through for full article and links]

    by sandy at March 31, 2015 05:33 PM

    Keith Swenson: bpmNEXT – Day 1

    My notes from the first day of bpmNEXT 2015, March 30.

    Bruce Silver – Conference introduction

    Today we focus somewhere between BPM and Enterprise Architecture.  15 years ago we thought it was huge that we had one system to integrate human and back-end systems, and we have come a long way.  Now, there is still too much balkanization of the technology.

    Main Themes of the Conference:

    1. Breaking the barrier between BPM and Enterprise Architecture.  Anatoly and a colleague from Comindware are going to talk about the 3 gaps.  Denis Gagne will talk about the semantic graph to break down barriers.
    2. Bridge gap between process modeling and decision modeling.  Called “business rules” back then, as if this was an alternative to BPM.  Sapiens has started something called the “Decision Model” because this is too important to leave to the existing approach.  Signavio will also show business decision modeling.
    3. Bridge the gap between BPM and Case Management.  Camunda is offering a unified BPMN/CMMN execution.  Safira and Cryo will present on how BPM needs to be loosened up.  Kofax and IBM will present on mobile case management and capture.  How do we do case management on our smart phones?  Including signature capture.  IBM has put a lot of emphasis on design, so we might see some of that.
    4. Expanding into new things like the Internet of Things. Presentation from SAP and W4 will focus on this.
    5. Expanding into expert systems and machine learning.  BP3 will present on the automated analysis of BPM code.  Fujitsu (Keith) will present on reconciling independent experts.  IBM will talk about Watson not just winning Jeopardy, but how it can be used in the cloud with pre-trained services.  Living Systems (Whitestein): measurable intelligence in the process platform.
    6. Expanding into process mining, and Robert will speak about optimization of resources from this.
    7. Reaffirming core values of business empowerment. puts BPM in WordPress for non-programmers.  Oracle will talk about BPM in the public cloud.
    8. Reaffirm embracing continual change, a presentation by bonitasoft on building “living applications”.

    Nathaniel Palmer – What will BPM look like in 2020

    Today, BPM looks like well-defined, fixed routing of packages: channels, switches, but no awareness of what the other packages are doing.  Where does it need to be?  Like an Amazon warehouse with Kiva robots.  It needs to be data driven, goal oriented, adaptive, with intelligent automation.

    Three things:  Robots, Rules, and Relationship.

    An illustration of the change from 2005 to 2013 – the smartphone example at the announcement of the new pope.

    60% of people switching banks in the past year did so because of insufficient mobile banking capabilities.  Mobile support is the most important thing.  But don’t just transport the laptop UI to the mobile.  Gave an example of an Oldsmobile radio ad that was moved to TV by showing a static picture with the radio ad behind it.  The new medium affords new forms of content.   Showed an automated teller as the “state of automation today”, which was obviously not mobile.

    What can you do if you have mobile?  Kindle Fire has a “Mayday” button — you press it and get an instant conversation with a support person.  This instant connection enables “relationship”.  He showed the Echo from Amazon, because Echo can help walk you through an Amazon purchase.  Also showed Jibo, which was popularized through a Kickstarter campaign.  Not to automate tasks, but to interface with tasks.

    Another thing is wearables, including wearable workflow.  Tasks might change to no longer be a single discrete unit of work; remove the distinction between the task and the things that support the task.   The three-tier architecture is common today.  We need to move toward a four-tier architecture: client tier (mobile), delivery tier, aggregation tier, and services tier.  JSON and REST, and tasks need to be discoverable.

    Process mining and optimization.  data driven, goal oriented, adaptive, intelligent automation.

    Clay Richardson – Reinventing BPM for the age of the customer

    Nathaniel’s talk focused on customer experience.  10 years ago much of process was focused on back end systems, and we have changed.  Today it is how to engage customers with mobile.

    60% of all business leaders prioritize revenue growth and customer experience.

    Four periods of history:

    • (1900) age of manufacturing,
    • (1960) age of distribution,
    • (1990) age of information, and finally
    • (2010) the age of the customer.

    Told a story about a promotion combining Jaguar and Thomas Pink.  The packaging was excellent, but the reception was completely bad.  The bad impression is a perfect example of a process failure: the dealer had not been informed, they were not prepared, not engaging.  The customer really wanted a memorable, rich, engaging experience.

    Big challenge today is to get across from the old to the new.  42% of business people put better mobile support on critical or high priority.  Examples of new mobile apps to order pizza from Pizza Hut and Dominos.  Pizza Hut simply ported their web site to the phone and it took about 20 minutes to make the first order.  Dominos on the other hand made something that works very well: easy to order, buttons for what you ordered last time, and has a tracker to tell you where it is.  Another example is buying US savings bonds.  Clay helped to redesign this, and found that the changes required broke many of the assumptions in the back end systems.

    BPM people don’t have a lot of credibility for improving customer experience.  Need a new title: “Digital Customer Experience Architect”, to digitize the customer’s end-to-end experience.  Another is “Digital Operational Excellence Architect”, to drive rapid customer-centric innovation and to support prototyping.

    What has to change in BPM?  He produced a customer-centric BPM tech radar.  Two key items on this chart:  1) low-code platforms, 2) customer journey mapping.   Simple cloud orchestration, how to quickly program devices, how to connect devices.

    The customer-facing cadence is faster.  The real thing is a need for speed.  It used to take months to get things done; now, when touching customers, we need to work faster.  This is driving the “low code” approach.  Develop in weeks, release weekly; the method is test-and-learn, and adoption is now intuitive.

    Gave an example of a customer journey from Philips medical devices selling a life-alert bracelet.  There is an opportunity to redesign the delivery, because the older patient is often anxious, and the purchasing customer needs to be informed.  Another issue was billing: since it was being split three ways, make it easier to do this.

    Panel on BPM Business

    • Miguel Valdes Faura – Bonita Soft
    • Scott Francis – BP3 Global
    • Denis Gagne – Trisotech

    Miguel – Open Source is the key to building a successful ecosystem.  Akira Awata has translated the entire platform into Japanese and there is a big uptick in usage.  Before that, downloads in Japan were limited.  Open Source BPM Japan.   Now reselling subscription to businesses like Bridgestone and Sony.  Bonita BPM Essentials book developed on the open source version, and people can download and access the examples.  Banking regulation is changing in Switzerland, and some are using processes in Bonita to match these new rules.  There are large benefits to the open source model.

    Scott Francis – How to move from Lombardi to independent.  Started trying to be the best Lombardi partner, and then IBM partner.  People worry about time, money, and focus, and it is focus that is the easiest to lose track of.  Learned to find our own customers – IBM did not refer anything.  Service providers get a lot of pressure to pick up other products, for example pick up more IBM products, but it might be better to focus on one product and do the job really well.

    Denis Gagne – Two hobbies: Building an Ecosystem of BPM & standards work.  Still amazed at how much “BPM-101” needs to be taught.  There is a need for us as a general community to educate better.  BPM Incubator has more members outside of US than inside.  190 countries.  We all benefit if the BPM community is better informed.

    Q: (Neil) Convergent vs. Divergent Standards.  Why do standards sometimes work, and sometimes not?

    A: it is easier to have agreement when you are only touching one set of customers.  Bonita has 1000 customers, but they use only about 30% of BPMN.

    Q: (Bruce) People are building Apps.  Is the problem that the BPM platforms don’t provide something suitable for those Apps?

    A: (Miguel) This is an important question.  How to make sustainable apps?  We have been doing a poor job in the BPM industry of helping people make customized UIs.  There is a portal, and there is a level of customization, but you are constrained to the box.  You can’t say, put a button in the right corner of the screen.  How to change?

    1. A low-code approach that avoids the need for developers, or
    2. making things that support developers, to make them more powerful.

    (Scott) A lot of the people doing mobile apps, have no concept of process.  Once the data is shipped back to the server, they don’t care where it goes. Opportunity to fill the gap between mobile and back end.

    2015-03-30 11.42.36smNeil Ward-Dutton – Schroedinger’s BPM

    Is this the end of BPM?  Are we seeing the end of “business transformation”.  Where are we going next?

    Is it dead?  The term BPM is disappearing from conversations.  People don’t want to talk about it; instead they use smart process, case management, anything else.  BPM technology platforms are growing at 3% (Clay thinks 8%), and maintenance revenue is dominating license revenue.  However, there are a lot of inquiries, particularly from non-traditional sectors.  Actually we are probably in the very middle of the adoption curve.

    BPMS is fundamentally unlike most enterprise technologies: a really weird and horrible chimera, hard to map onto the ways that people normally work.  The innovators think they can use BPM to reinvent the way they work.  But the mainstream reject it, having tried it and wasted time and money: just another attempt to get us locked into an enterprise platform.  Culture change is too expensive.

    Someone created a “Customer Project Manager” role to help with premium in-home customer services.  They didn’t call it BPM; this was about agility.  Another example was a large bank whose IT-led, enterprise-wide transformation failed big time.

    They are embracing cloud aggressively.  They are using agile ways of working.  Low-cost propositions.  The lightweight approaches are about spending less up front.  Why are all these people out there building these apps but not really engaging with the back end?  The culture change is not coordinated: it is too scary.

    Low-code is what we used to call 4GL.

    New agile enterprise has no “target operating model.”  They don’t know what it will be.  This is not the way we did transformation ten years ago.  First instrument, then provide agility of services.

    Why would you do “simulation” when you could put the real solution in the hands of real users and observe how it works?

    The customer journey slide was very interesting.  Knowing the customer is not enough: build on that with surfacing, then acting, and finally shaping.  All of that needs to be done across marketing, sales, operations, and service.

    Advice: don’t fixate on SPAs, don’t obsess over traditional competitors, don’t fixate on throwing more in the box; do find ways to enhance BYOP, particularly with auditability, do look at the implications of digital strategies, do enable clients to take portfolio management approaches to business processes, and do partner, buy, build.

    Remy Glaisner – Myria Research – Chief Robotics Officers and BPM

    RIOS – Robotics and Intelligent Operational Systems: automation, robotics, and mobile technologies.  There will soon be many people calling themselves “Chief Robotics Officers.”  This is a completely open, nascent field with no leaders yet; an inflection point is expected in 2017-2018.  Manufacturing is already there, but agriculture is a ways off.

    Client acquisition is based largely on how fast you can deliver.  Automation including robotic automation, is quite important.

    By 2025 over 60% of manufacturers over $1B will have a Chief Robotics Officer (CRO) on staff.

    Benjamin Notheis, Harsh Jegadeesan – SAP

    Internet of Things.  There are others: Internet of People, Internet of Places, and Internet of Content.  All four of these together.  Wil van der Aalst talks about the Internet of Events.  IoE means massive data (bigger than big data).  Event stream processing senses patterns in real time; once a pattern is identified, one can respond with rules and processes.
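    The sense-and-respond idea can be sketched as a sliding-window pattern matcher over an event stream.  This is purely an illustrative sketch (the class, event names, and threshold rule are my own invention, not SAP's API):

    ```python
    from collections import deque

    class PatternDetector:
        """Fire a response rule when `threshold` events of one kind
        arrive within a sliding time window (hypothetical sketch)."""

        def __init__(self, kind, threshold, window_seconds, on_match):
            self.kind = kind
            self.threshold = threshold
            self.window = window_seconds
            self.on_match = on_match          # the "rule" to run on a match
            self.timestamps = deque()

        def feed(self, event_kind, ts):
            if event_kind != self.kind:
                return
            self.timestamps.append(ts)
            # drop events that have fallen out of the time window
            while self.timestamps and ts - self.timestamps[0] > self.window:
                self.timestamps.popleft()
            if len(self.timestamps) >= self.threshold:
                self.on_match(list(self.timestamps))
                self.timestamps.clear()

    # hypothetical usage: three pressure-drop events in 60s trigger a response
    alerts = []
    detector = PatternDetector("pressure_drop", threshold=3, window_seconds=60,
                               on_match=lambda evts: alerts.append(len(evts)))
    for t in [0, 10, 20]:
        detector.feed("pressure_drop", ts=t)
    # alerts is now [3]: the pattern fired once
    ```

    A real engine would hand the match off to a rules or process engine rather than a callback, but the windowing logic is the core of the pattern.
    
    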

    Presented a use case about a person who manages pipelines in L.A.  Events notify him that there is a problem.  The options to replace a pump are given, with different prices and different qualities of pump.  The demo is hard to describe here, so see the video.  At one point he assigned a task to someone just by typing “@manny escalate issue”; the user was found and the task assigned.  Very dynamic!  There was a visual depiction of incidents displayed as tiles, where the size of the tile represents the number in that category.

    The coolest part of the demo was when he showed the user interface display on a watch display.  One could see the task, see the data values and options, make an audio annotation of the task, and mark the task as completed.  All from the watch.

    Eclipse-based modeler showing extended BPMN.  Models can be imported.  This is compiled to JavaScript for running in the SAP cloud service; he referred to the JavaScript event loop.

    Q: does this use NetWeaver and/or work with it? A:  Basically, not much.  It is a new process engine implemented over the last 6 months or so.


    Francois Bonnet, W4

    Francois gave a great presentation and demo around a use case of monitoring the elderly and responding to falls.  He showed a “faintness sensor” based on a Raspberry Pi.  When it tilted for more than a few seconds, it started a BPMN process.  A heart-rate event might cause this process to escalate through various steps, such as calling the patient, calling a neighbor, and sending in a response team.  If the sensor got back upright, the process was cancelled.  If the fall happened too many times in a particular period, it started a different process.
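    The cancel-and-escalate logic of the demo can be sketched in a few lines.  This is a hypothetical reconstruction of the behavior described above, not W4's implementation (which ran as actual BPMN processes); the class and process names are mine:

    ```python
    class FallMonitor:
        """Sketch of the demoed logic: a tilt starts an escalation process,
        returning upright cancels it, and too many falls within a period
        starts a different (repeated-falls) process instead."""

        def __init__(self, repeat_limit=3, period=3600):
            self.repeat_limit = repeat_limit   # falls allowed within `period`
            self.period = period               # seconds
            self.fall_times = []
            self.active_process = None         # name of the running process

        def on_tilt(self, ts):
            # keep only falls still inside the period, then record this one
            self.fall_times = [t for t in self.fall_times
                               if ts - t <= self.period]
            self.fall_times.append(ts)
            if len(self.fall_times) >= self.repeat_limit:
                self.active_process = "repeated-falls-review"
            else:
                # escalation steps: call patient, call neighbor, send team
                self.active_process = "escalation"

        def on_upright(self):
            if self.active_process == "escalation":
                self.active_process = None     # patient got back up: cancel

    # hypothetical usage: two falls within 100 seconds
    monitor = FallMonitor(repeat_limit=2, period=100)
    monitor.on_tilt(ts=0)
    monitor.on_upright()       # first escalation cancelled
    monitor.on_tilt(ts=50)     # second fall in the period: different process
    ```

    In the demo this branching lived in BPMN event handlers; as noted below, the aggregate "too many falls" condition was the one part not modeled directly in BPMN.
    
    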

    It was pretty interesting that the event modeling was done effectively in BPMN; however, the aggregate event (falling too many times in a period) was not modeled directly in BPMN.

    Dan Neason, Living Systems

    Covered the Whitestein system.  All processes have a reflection capability so you can ask a running model what it is capable of doing.  Interesting demo, but hard to capture here.

    Jim Sinur – Swarming and Goal Directed Collaborative Processes

    BPM is not a sexy term any more.  What else do we go to?  There is the notion of Hybrid Processes.  We could go with that, but as Neil pointed out, growth is not that high.

    The idea we should follow is that of transforming the digital organization.  How do processes help organizations become digital organizations.

    Got some of this from Keith’s presentation last year, where he showed a video of starlings flocking (murmurating).  The idea is that birds guided by simple rules can act collectively in an emergent way.  But we need to think about flocks with starlings, ducks, geese, sparrows, etc.  We will have swarms of things, but consisting of robots, information systems, and everything else.
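    The "simple rules, emergent behavior" idea is the classic boids model.  A minimal one-dimensional sketch with only the cohesion rule (illustrative only; real boids add separation and alignment, and Jim's point is that the "birds" would be heterogeneous agents):

    ```python
    def step(positions, cohesion=0.1):
        """One tick of a single flocking rule: each agent drifts a fixed
        fraction of the way toward the flock's centre (cohesion only)."""
        centre = sum(positions) / len(positions)
        return [p + cohesion * (centre - p) for p in positions]

    # three scattered agents; no agent knows the goal, only the local rule
    flock = [0.0, 10.0, 20.0]
    for _ in range(50):
        flock = step(flock)
    # after many steps the agents have converged near the centre (10.0)
    ```

    No agent is told where to go; the grouping emerges from each agent applying the same trivial rule, which is the property the swarm metaphor borrows.
    
    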

    Processes should help organizations cope with the “big change” coming their way.  We force customers to go through a phone menu that matches an organization designed on industrial-age ideas.  Why force the customer through this?  Tomorrow there will be an “Uber” in every industry.

    Gave an example of an insurance company that wanted general reps to be able to handle all products.  They used AI systems to help, trying to get rid of specialization, but they failed because the rules technology was not available.

    In production in Norway is a service to help with dementia patients: each patient gets a wristband with GPS in it.  If the patient approaches or crosses a boundary, caregivers are notified and can go get him.

    “Going digital” is the goal, and there are a couple of ways to get there; “do it, try it, fix it” is one approach.  Today the process is often in control, but in the future the goals will be in control of the work and the process.

    Can you imagine a bunch of swarming agents deciding what to do next?  Agents have: a level of humanity, a level of collaboration, a level of intelligence, and a vector of goal-driven freedom.  Hybrid resources, hybrid process styles (cases, flows, forms), hybrid speed, hybrid goals, etc.

    Example of a bike store with a kiosk that analyzes the customer to determine mood, personality type, and body type.  She keys in information about the kinds of riding she would like, and it suggests a bike.  Imagine that there were many of these intelligent agents swarming to help sell this bike.

    Another example is using a swarm to find a suitable house by sending in a photo of the kind of house you want.  It could search for similar homes, and a bank might do this in order to also offer a mortgage.  Autonomous cars and robots raise legal issues: who do you sue when something goes wrong?

    Not just UI, not just mobile: how you treat customers and how you meet their needs is the important thing.

    That is it for the first day.  Then it was off to winetasting on the roof-top patio.


    by kswenson at March 31, 2015 11:20 AM

    March 30, 2015

    Sandy Kemsley: bpmNEXT 2015 Day 1 Demos: SAP, W4 and Whitestein

    The demo program kicked off in the afternoon, with time for three of them sandwiched between two afternoon keynotes. Demos are strictly limited to 30 minutes, with a 5-minute, 20-slide,...

    [Content summary only, click through for full article and links]

    by sandy at March 30, 2015 11:52 PM

    Sandy Kemsley: bpmNEXT 2015 Day 1: More Business of BPM

    Talking with people at the first break of the first day, I feel so lucky to be part of a community with so many people who are friends, and with whom you can have both enlightening and amusing...

    [Content summary only, click through for full article and links]

    by sandy at March 30, 2015 07:15 PM