Planet BPM

October 01, 2018

Keith Swenson: Adaptive Case Meeting in Copenhagen

I will be participating in this presentation / discussion of new trends in Adaptive Case Management and other support for knowledge workers in Copenhagen on Oct 12. Morten Marquard will show the latest developments in DCR graphs, Thomas Hildebrandt will talk about their EcoKnow project, and I will be introducing the new subject of “Emergent Synthetic Processes”.

The meeting will be in English.  There will be lots of time for questions and answers. It should be a very good discussion, so if you are in the area, I hope you can make it.

by kswenson at October 01, 2018 02:05 PM

September 28, 2018

Keith Swenson: Industry Templates and Process Re-use

In most BPM RFPs there is a request for access to industry templates, to allow for re-use and to get a head start.  Most BPM vendors have some offering.  The question is: are these of any value at all?

The Risk of Re-Use

One thing we learn in software is that it is easy to add things to software, but it is very hard to remove things.  When you add a new variable, a new class, a new module, you know that there are no existing references to it, and you can add in exactly what you need.   But if you find an existing module and want to change it, you have to find all the calls to that module, and analyze that code to determine which of the various promises that the API makes are being depended on.  This analysis of all the places that use a module can take more time and effort than just writing a new module from scratch.  Or put another way: the risk of causing a bug is far lower if you write a new module.

Extending this same concept to a process template: no organization has the same process as any other organization.  The processes you take from one organization will surely have to be modified.  When modifying, you will always find extra things that are probably not needed for the new organization, but for which it is very hard to tell whether you can remove them or not.  If you remove something that is actually called elsewhere, it might cause problems that appear later when you are in testing.  It is most often safest to start from scratch, and only add the pieces that are actually needed.

As a system architect, I would always recommend making a new, clean design based on the needs of the specific organization, because any savings due to re-use would be dwarfed by the effort and risk of having to carry a lot of unnecessary costly baggage along.

The Process Handbook

I remember in the 1990’s Tom Malone from MIT ran a project called “The Process Handbook” where he got a grant from the government to simply go and map out all of the processes for all the industries.  The idea was, once all the processes were known, they would be made available to everyone for free, and this would enhance the general capability of all industries.  Sounds like a good investment.  But it never worked!  The site is still there, but it looks like it has not been updated since 2003.

Could it be that the cost of re-using a process exceeds the cost of creating one from scratch?

Any Success Stories?

One thing I would like to know is whether ANYONE has seen a process model developed for one organization successfully re-used at a different organization.

How would you measure the value of this?  Did it save a lot of effort, or not?

Beyond the “perceived value” of having a head start in a process project, is there any real value in “industry templates”?

by kswenson at September 28, 2018 05:26 PM

July 19, 2018

Keith Swenson: What if the new Standard breaks the old tests?

Would you release a product to the public before you run the tests?  Whether you are in manufacturing, software, agriculture, or anything else, if you have a set of tests, you would run the tests before providing the final product to the public.  Makes sense, right?  The world of technology standards is different.

DMN 1.2 Spec Released

The OMG approved the release of DMN 1.2 a couple weeks ago, and it continues to go through various rounds of formalization that will take a couple more months, but the content of the standard is promised to be entirely fixed at this point.

The DMN TCK has more than 750 tests used to verify the correct running of DMN implementations. Currently these run against the 1.1 version of the spec.  The next step will be for vendors to implement the changes for version 1.2 in their engines, and then we will run the existing tests on those modified engines.

What if a Test Fails?

We would call this an upward compatibility problem.  It can happen anytime a new feature in the specification changes the way something works.  The committee always tries to avoid this when possible, but a new behavior is a new behavior, and if it happens to infringe on a prior behavior that was being used by someone, it causes an upward compatibility problem. Obvious cases can be avoided, but the ones that are not avoided can still be there by accident.

Such problems are hard to recognize in advance.  This is why most serious software projects have a large list of test cases that are carried from version to version.  (In one project here in Fujitsu, we have tests that were written more than 15 years ago that still run today, unmodified, in order to prove compatibility with the older versions.)   Before a new version is released, all the old tests are run.  If an incompatibility somewhere deep in the interactions of the various scenarios is found, it can be addressed before shipping the product.

What if a DMN test fails now?  What if an accidental incompatibility is found now, after the spec has been released?  That will have to be decided once the problem turns up.  Later.  After the spec is in circulation.

OMG Standards are not Tested

The OMG process for creating a standard does not involve any demonstration of an implementation actually running the standard. No running code is required before the standard is “released.”   No matter how sharp your designers are, you simply can not verify that everything works the way you think until you get code actually running.

That is one big reason we started the TCK: to assure that running implementations are actually tested and compared.  Demonstrating that the standard works is a critical part of the development of any standard.

What happens if a compatibility problem is found?  A decision will be made at that time.  The incompatibility might be left in the standard, because sometimes progress requires a break with the past.  Implementers then have a choice of which version to support, possibly including a flag to specify which version of the behavior you want.

The more insidious problems are the statements in the spec that are found to be unimplementable.  Several of these exist in the 1.1 spec: for example, the statement that claims that a reference to a list with one element in it is exactly the same as a reference to the only element of that list.  Implementing that was simply impossible, but it is still there in the 1.1 version of the spec.  It was changed in 1.2, but because the spec is released before any implementation, there are going to be things in the spec which are wrong.  Individual vendors decide not to implement what is impossible, and sometimes don’t implement things that are possible, and this makes the field of implementations lack the uniformity we all would like.  The de jure standard does not match the de facto standard.


Because the scope of change in DMN 1.2 is small, and due to the large amount of scrutiny, I don’t expect to find any large surprises after implementation.  There could be small ones, and they will be dealt with when discovered.

Still, it would be a benefit to the entire community if the OMG required at least one running implementation before releasing a spec.  The OMG won’t change in this regard, so it is up to the rest of the community to test the various implementations, and to discover what the real technology turns out to be.


by kswenson at July 19, 2018 10:10 AM

May 22, 2018

Keith Swenson: RPA / BPM Implementation Strategy

There is a broad misconception that RPA is about “business process” by itself. I have heard people say that they were going to switch from BPM to RPA. That is strange because the capabilities are quite different. It makes sense to use RPA and BPM together, and sometimes you can use one without the other, but only to solve different problems.


  • RPA – Robotic Process Automation – a software system with the ability to access another software system using the user interface with the capability to enter data into that other system, or extract it out.
  • BPM – Business Process Management – a software system designed to represent the flow of responsibility through an organization and to deal with the complexity of involving humans into a process.

Yes, they both have the word ‘process’ in them, but neither of these should be confused with an “Operating System Process” which also uses the word.  Process is a very general term saying that multiple things get done in a particular order, and all three of these accomplish a ‘process’ to some degree.  But there are big differences in the types of process that can be implemented.


Both allow a kind of automation. Both carry data for a specific instance.  Both can send and receive data to other systems.  Both offer a graphical representation of the automation.

Diagram Differences

The style and capabilities of the diagram are completely different.  Below you see a typical RPA-style diagram.

What you are seeing is boxes that represent things that the RPA tool will do:  login to salesforce, search contacts, create CSV file, and notify user.  There are branch conditions that allow you to get the RPA tool to do different things in different conditions, and to implement loops and such.  It is a programming language for telling the RPA tool what to do.

The RPA diagram represents automation. In such diagrams, the boxes represent operations and there can be branch nodes, etc. It is a style of visual programming, but what is missing is the representation of people and roles. RPA can access existing software systems, and can send or receive data through the web interface that a human normally uses. It does log in as a person, and could log in as many different people, but the diagrams do not represent the “roles” or “skills” that a person would require. While RPA is meant to replace people doing menial key-entry type jobs, the diagrams have no representation for assigning a task to a person to do. RPA does not provide a worklist that people can log into, to claim their workitem, and to tell the RPA system that you as a human are finished working on something. There is no way to represent the flow of responsibility through the people of an organization.

The most telling part is the “Notify User” step in the diagram above, because it shows that users are not first-class concepts in the diagram. What you cannot do is pick a box in this diagram and designate in any way that “Sally” will be responsible for doing that step, and that “Joe” will do the following step.   There simply is no concept in this diagram of assigning work to users, because this is not a diagram of work distributed through an organization.  This is just a visual programming language for the RPA tool to perform.   As a robot.
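The contrast is easy to see if you transliterate such a diagram into ordinary code. The sketch below uses hypothetical step names modeled on the diagram above; `bot` stands in for whatever scripted UI actions the RPA tool provides:

```python
def rpa_flow(bot):
    """The RPA diagram above, written as plain sequential code.
    Every step is performed by the robot; there is nowhere to say
    *who* performs a step.  All names here are illustrative."""
    bot.login("salesforce")
    contacts = bot.search_contacts()
    if contacts:                      # a branch condition in the diagram
        path = bot.create_csv(contacts)
        bot.notify_user(path)         # sends a message, but creates no human task
    return contacts
```

Notice that “Notify User” is just another operation the robot performs; there is no vocabulary for expressing “Sally is responsible for this step.”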

Compare to a process for a BPM system:

Here you have tasks, but these are designed to be done by people.  The task “Reproduce Problem” is a quality assurance task that can not be automated, but instead requires a human to do it.  This process is showing the flow of responsibility between three roles within the organization, and each swim lane represents not one person, but a class of people who have particular skills.

Not shown in this diagram is how those people are informed of their tasks, how they search for and find a suitable one, and how they claim the task.  There is no step showing “login” or any other trivial system interaction.  Not shown is the deadline to get the task done, nor the reminder they get if the deadline has passed.  Notifying users is an inherent part of the system, and does not need to be explicitly modeled.  Not shown is what happens if a task is reassigned from one person to another when halfway completed.  You can’t see in this diagram any logic to cope with daily shifts, work schedules, and vacation schedules.

Most important: the RPA diagram shows what the RPA system will be doing, while the BPM diagram shows the flow of responsibility between humans without getting bogged down in the details of how this coordination is accomplished.
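To make the contrast concrete, here is a minimal sketch of the kind of bookkeeping a BPM engine does behind a single human task. All names are hypothetical; the point is that role, deadline, and claiming are first-class concepts that no RPA diagram carries:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional, Set

@dataclass
class HumanTask:
    """Sketch of what a BPM engine tracks for one human task, none of
    which appears in an RPA diagram.  All field names are illustrative."""
    name: str
    role: str                       # a swim lane: a class of people, not one person
    deadline: datetime              # basis for reminders when it passes
    claimed_by: Optional[str] = None

    def claim(self, user: str, members_of_role: Set[str]) -> None:
        # The engine, not the diagram, enforces who may take the task.
        if user not in members_of_role:
            raise PermissionError(f"{user} is not in role {self.role}")
        self.claimed_by = user

# "Reproduce Problem" is offered to everyone in the QA lane; Sally claims it.
task = HumanTask("Reproduce Problem", role="QA",
                 deadline=datetime.now() + timedelta(days=2))
task.claim("sally", members_of_role={"sally", "joe"})
print(task.claimed_by)  # sally
```

In a real BPM system all of this (plus reassignment, shift, and vacation logic) is built in; the diagram never has to show it.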


You can use RPA to implement a business process, but you can also use Java, email, or Excel to implement one as well.  You can theoretically use any programmable system to implement this.  The question is not whether it is possible, but whether it is the right fit to be easy to implement and easy to maintain in the long run.

RPA might be able to implement a “straight through process” where no humans are involved, but it has no facility to represent a process where people need to participate and make decisions.

One could re-implement the responsibility flow capabilities from scratch in the RPA tool: implement worklists, vacation schedule logic, reassignment logic, notifications to people, and reminders to people.   The point, of course, is that these capabilities are built in to a BPM system, and you don’t have to reinvent them as explicit logic which is copied from implementation to implementation.

Working Together

What makes a lot more sense is to have a BPM based “work distribution model” which is a representation of the skills and capabilities of the people involved in the business, and distributes work to them.  And then, on individual tasks, you can automate SOME of them with RPA where the person was primarily employed to enter or extract data. Some RPA is deployed standalone to completely replace a user, and in other cases it is deployed in “attended” mode where the RPA helps replace some of the routine aspects of the work, but does not entirely replace the humans.
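A combined deployment can be sketched as a work distribution step in which some tasks happen to be performed by a bot rather than offered to people. This is purely illustrative; all names are invented:

```python
def dispatch(task, directory, bots):
    """Sketch: the BPM layer distributes work.  Steps that were pure
    key-entry are handed to an RPA bot; the rest are offered to the
    humans in the responsible role.  All names are invented."""
    if task in bots:                        # automated with RPA
        return bots[task]()
    return {"offered_to": directory[task]}  # humans claim it from a worklist

directory = {"Review": ["sally", "joe"]}             # role -> members
bots = {"Enter Invoice": lambda: "entered by robot"}

print(dispatch("Enter Invoice", directory, bots))  # entered by robot
print(dispatch("Review", directory, bots))         # {'offered_to': ['sally', 'joe']}
```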

Hopefully this helps clarify the role of both technologies. Clearly the future lies in systems that can handle both aspects: the RPA (replace humans when using a web interface) and the BPM (distribute work to humans) but it is important to understand the differences that exist today in the current implementations of each.

by kswenson at May 22, 2018 03:16 PM

April 20, 2018

Keith Swenson: DMN TCK Turns Two

Attending bpmNEXT conference this week.  It was here, two years ago, that we decided to start the DMN TCK effort.  How far have we come?

The original idea was simply to back up the written spec with some real running code.  The specification can’t possibly express the full detail necessary for an actual implementation.  Anyone faced with implementing DMN would have to make thousands of small design decisions between, say, behavior X or behavior Y.  Most of those are arbitrary in the sense that a user could work with either X or Y; what matters is that the product is consistent.  Moving from product to product requires that all implementations make the same choices at those levels.  The specification is, by its nature, ambiguous on these many small points.

After the first couple of months we abandoned the idea of creating a reference implementation, and pivoted successfully to the idea of providing a framework for publishing test cases which any implementation could use to verify and validate its execution.  The tests would be files constructed in XML: the DMN model in the standard format, and the test data in another XML format which we designed, containing input values to submit to the decision model and output values to compare the actual output to.  To use the tests, one needs to write a small “runner”: code that reads the files and runs the tests.
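A runner of this kind can be sketched in a few lines of Python. The XML element names and the engine entry point below are illustrative assumptions, not the actual TCK schema:

```python
import xml.etree.ElementTree as ET

def run_tests(test_xml, evaluate):
    """Sketch of a TCK-style runner.  `evaluate` stands in for a vendor
    engine's entry point: a callable taking a dict of input values and
    returning a dict of decision results.  The element names used here
    are illustrative, not the exact TCK schema."""
    outcomes = {}
    for case in ET.fromstring(test_xml).findall("testCase"):
        inputs = {n.get("name"): float(n.findtext("value"))
                  for n in case.findall("inputNode")}
        expected = {n.get("name"): float(n.findtext("value"))
                    for n in case.findall("resultNode")}
        # Pass or fail is just a comparison of actual vs. expected values.
        outcomes[case.get("id")] = (evaluate(inputs) == expected)
    return outcomes
```

A real runner would also load the DMN model file and hand it to the vendor engine, but the comparison logic really is this simple.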

All of the DMN examples and all the test files are available on GitHub and usable for free with a Creative Commons license.

An Example Test

Here is the XML representation of a simple DMN decision:

This decision simply calculates the yearly amount by multiplying the monthly amount by 12.  It tests the ability of an engine to parse that FEEL expression and correctly evaluate it.   And here is the XML test file:

As you can see, it puts 10,000 as the monthly amount, and specifies 120,000 as the yearly result to compare to.  DMN models can be much more complex than this, yet the test files generally remain simple lists of inputs and outputs.  It really is that easy to create tests, and that might drive widespread adoption.
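Stripped of the XML, the semantics of this model/test pair can be sketched in a few lines (the names mirror the example above, but the code is illustrative, not the TCK file format):

```python
def yearly_amount(monthly_amount):
    # The decision's FEEL expression: Monthly Amount * 12
    return monthly_amount * 12

# The test file pairs input values with the expected results:
test_case = {"inputs":   {"Monthly Amount": 10000},
             "expected": {"Yearly Amount": 120000}}

result = {"Yearly Amount": yearly_amount(test_case["inputs"]["Monthly Amount"])}
assert result == test_case["expected"]   # this is all a test run does
```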

The results for all vendors are collected and listed in this DMN TCK Results Site.

Collecting Tests

Once the basic pieces were in place, we started collecting tests.  The first few tests were models that Bruce Silver had developed for his book DMN Method and Style.  The next set of tests came from Red Hat during the testing and finalization of their FEEL evaluator.  Camunda added a number of tests.  In July of 2017 Actico entered the group, and converted hundreds of their internal tests into the standard form that could be used by everyone bringing the total number of tests to 588.

We welcome more tests.  Now that the framework is complete and functional, it is easy to add a DMN model and a set of input / output values, and that constitutes a new test that anyone can run.  Anyone who can make a GitHub pull request can contribute.  The team members will review the test, assure that it runs on at least one existing engine, and then include it in the official set.

Improving the DMN Specification

While it is not the goal for TCK members to discuss modifications to the DMN spec itself, it is necessary to discuss how to implement parts of the DMN spec, and on many occasions we have run into parts that had been interpreted differently by different vendors, or even parts which were impossible to implement.  In many cases the disagreement could be resolved by a careful re-reading of the spec.  In some cases the spec was ambiguous, and so a proposed clarification was submitted to the RTF group maintaining the spec.

  • Spec did not allow a time duration to be divided by another time duration
  • Spec did not allow a date and time type to be parsed from a string
  • Spec considered that a list with a single element was in all cases exactly the same as that element alone.   When we got into testing, we found that this causes any number of paradoxes in the parsing of formulas.

For each of these, and many more, we raised an issue with the RTF to get the spec changed or clarified.


The current effort has received a lot of support from these organizations:

There are seven vendors that have run the tests and contributed results to the results site.  However, the DM Community lists 18 vendors who claim DMN support.  Why are these lists different?  Why have so many vendors not run the tests?  What assurance do you, as a consumer, have that those vendors actually run DMN?  These are questions to ask those vendors directly.

What is Left to Do?

We are waiting for DMN 1.2 to be released so that we can add tests that are not compatible with 1.1 but that we expect will be supported by the new release.  When that happens we will create a number of tests around 1.2.

We have not really explored error cases beyond one or two.  Clearly there are many more error cases to check for.  We might also test DMN models with invalid FEEL expressions to make sure that engines react consistently to those.

The main focus, however, will be adding real-world DMN models which have been developed for actual operating business applications.  Those models test not only the execution of the engine in cases that are realistic, but also the suitability of DMN in general to real world problems.


by kswenson at April 20, 2018 12:52 PM

April 18, 2018

Drools & JBPM: The DMN Cookbook has been published

The Decision Model and Notation (DMN) standard offers something no previous attempt at standardization of decision modelling did: a simple, graphical, effective language for the documentation and modelling of business decisions. It defines both the syntax and the semantics of the model, allowing IT and business teams to "speak the same language". It also ensures interoperability between vendor tools that support the standard, and protects customers' investment and IP.

It was an honour to work with accomplished author Bruce Silver to write the "DMN Cookbook", a book that explains the features of the standard by examples, showing solutions for real modelling problems. It discusses what DMN offers that is different from traditional rules authoring languages, as well as how to leverage its features to create robust solutions.

Topics covered include:

  • What is DMN?
  • How DMN differs from traditional rule languages
  • DMN Basics
    • DRG elements and DRDs
    • Decision tables and other boxed expressions
    • FEEL
  • Decision services
  • Practical examples
    • Uniform Residential Loan Application: validation, handling null values, handling XML input
    • GSE Mortgage Eligibility: variations using a central registry
    • Canadian Sales Tax: variations without a central registry (dynamic and static composition)
    • Timing the Stock Market: modeling a state chart with DMN
    • Land Registry: DMN-enhanced Smart Contract
    • Decision Service Deployment: automated and manual
    • Decision Service Orchestration: BPMN or Microsoft Flow

More information on the book website.

Available on Amazon.

by Edson Tirelli at April 18, 2018 06:25 PM

Drools & JBPM: bpmNext 2018 day 1 videos are already online!

The organizers of bpmNEXT 2018 are outdoing themselves! The videos from the first day of the conference are already available.

In particular, the presentations from Denis Gagné, Bruce Silver and Edson Tirelli are directly related to Drools with content related to DMN. I also recommend the presentation from Vanessa Bridge, as it is related to BPM and the research we've been doing on blockchain.

Smarter Contracts with DMN: Edson Tirelli, Red Hat

Timing the Stock Market with DMN: Bruce Silver,

Decision as a Service (DaaS): The DMN Platform Revolution: Denis Gagné, Trisotech

Secure, Private, Decentralized Business Processes for Blockchains: Vanessa Bridge, ConsenSys

The Future of Process in Digital Business: Jim Sinur, Aragon Research

A New Architecture for Automation: Neil Ward-Dutton, MWD Advisors

Turn IoT Technology into Operational Capability: Pieter van Schalkwyk, XMPro

Business Milestones as Configuration: Joby O'Brien and Scott Menter, BPLogix

Designing the Data-Driven Company: Elmar Nathe, MID GmbH

Using Customer Journeys to Connect Theory with Reality: Till Reiter and Enrico Teterra, Signavio

Discovering the Organizational DNA: Jude Chagas Pereira, IYCON; Frank Kowalkowski, KCI


by Edson Tirelli at April 18, 2018 03:50 PM

April 13, 2018

Keith Swenson: Agile Best Practice: Start with the Empty

A co-worker on an agile project was showing me a feature he was perfecting, and it was looking pretty good with features to add and remove various settings as per the design.   I wanted to try it.  He said “sorry, the create function is not implemented yet.”   It got me thinking….

Start any development project with the function to create an empty record.

Nothing symbolizes Agile development better than that advice.  We all understand the “minimum viable product” which is the first version that you would ask a user to use.  Agile development within the team works on an even finer scale.   Ideally, the developer creates the smallest increment in functionality, makes it work, tests it, gets it to customer release quality, and pushes it to all the others on the team.

So ideally, for every button you add: you add that one button, you make it work, you test it, and you push it to the rest of the team before adding another button.  What you try to avoid is creating a whole bunch of things at once.  Don’t create all the button controls at once and then make them all work.  What you do create should be finished to the point of actually being able to use it.

In the conversation in question above, the developer had added some records to the database manually: database hacking.  This is not evil in itself, but worth thinking about from an agile perspective.  In this case the project will need a button to create this structure.  The simplest case is creating an empty record, or whatever the absolutely minimal record is.  It may not sound very interesting to have a button that creates an empty record, but you are going to need it anyway.  And you are going to need a test for the create-empty-record case.  Once it is finished, tested, and pushed to the rest of the team, they too can create empty records.  You can’t use anything else until you have the create button, so from the point of view of getting to “release quality”, nothing can be tested until you can create an empty record.
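As a sketch, that first increment really is this small. Everything here is hypothetical; the in-memory dictionary stands in for whatever database the project actually uses:

```python
import uuid

records = {}    # stand-in for the project's database

def create_empty_record():
    """Handler for the 'create' button: the absolutely minimal record."""
    record_id = str(uuid.uuid4())
    records[record_id] = {}     # nothing in it yet, and that is fine
    return record_id

# The matching test for the create-empty-record case:
def test_create_empty_record():
    rid = create_empty_record()
    assert rid in records and records[rid] == {}

test_create_empty_record()
```

Once this function and its test are checked in, every later feature, and every later test, has something real to build on.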

So a button to create an empty record is the logical starting point, but there is more to this idea:  you should not spend time creating more functionality until you have completed and checked this in.  With Agile, you want to avoid large check-ins of any kind.  Always do exactly one step at a time.  Implement the smallest increment you can, and check it in.  Implement the next increment, and check it in.  Don’t wait and implement a lot of things.  Lots of uncompleted things is technical debt.  Most importantly, don’t spend time perfecting advanced functionality, when the basic required function is not there yet.

But really?  Every button?  My practical guideline is to check in once a day.  Pick an increment in functionality that you can start and finish in one day.  Maybe that is actually an entire screen with five buttons.  Get it running.  Test it.  Make sure it is implemented to customer release quality.  And check it in.  Every day.

You may think that your small increment is useless to the others, and it sort of is, but we do it to force ourselves to keep the quality high and to eliminate technical debt.  The create-empty-record button may not seem that useful, but no matter how small the increment, it is not technical debt as long as it works and represents a real function that the final end user will actually need.

Start any development project with the function to create an empty record.  Develop, test, and commit that to the project, before doing anything else.

by kswenson at April 13, 2018 09:54 AM

April 03, 2018

Keith Swenson: Joining the DMN TCK

The DMN TCK is a group of people united around a single desire: we want to see DMN succeed as a decision modeling paradigm.  We want it to really work, and not simply be another passing fad.  We want transparency about what particular implementations can and can’t do, and we want an independent way to verify vendor claims.  Do you have a vested interest in DMN?  Maybe you should consider joining the TCK team to reduce your risk, and to assure the success of the DMN marketplace.

From all accounts DMN looks like a promising direction for modeling decisions.  It goes beyond simple decision tables, and allows you to link together a number of different logic formulations into a single structure.  The goal is for this formulation of the decision to be portable and reusable, both as a visually expressive model and as an executable program.

The group developing the standard does not have the time (or possibly even the inclination) to provide a reference implementation.  The DMN TCK is making a set of tests that will allow vendors to demonstrate precise behaviors across a broad range of functionality.  Currently we have 576 distinct tests, and we continue to add more tests all the time.

Are you interested in DMN success?

Consider these questions:

  • Do you have an investment in DMN modeling?
  • Have you trained people, and are they developing DMN models?
  • Do you want assurance that your models will run in the years to come?
  • Would you like assurance that your models will run the same on different vendors?
  • How certain are you that your developers are using DMN in the way that the designers of DMN anticipated?
  • How certain are you that new versions of DMN will not invalidate your models?
  • How much would you lose if your models became unusable or unreliable in the future?

Jumping onto a new technology always embodies a certain amount of risk.  I have plenty of battle wounds from adopting a new technology only to find the supporters go in an adverse direction incompatible with my use.  Anyone in the industry more than a few years has similar scars.  Maybe you already know that painful feeling when you are forced to drop one (beloved) technology, and to re-code everything in a different one.  Ouch!

Sometimes technology progresses in a direction because of a clear strategy that makes sense for the overall marketplace.   Sometimes it goes in a direction simply because there is no clear direction, and one must be picked for better or worse.

Sharing is Insurance

The best way to make sure that the technology does not go in an adverse direction is to share your existing DMN models with the vendors as specific test cases.  Each model is accompanied by a set of inputs and expected outputs.  It is easy to run and verify such a test case on each new release that comes out.

The existence of a concrete test case makes it much harder for a committee member, or a product vendor, to make a change that invalidates the test.  The test makes the case real; everything else is hypothetical.

Sharing Improves the Quality

We are particularly looking for models that come from a real world application.  We would like to see models from a broad variety of domains.  Richer tests are better.

Most of the tests today focus on one or another specific language feature.  It is not necessary for tests to be on an isolated feature.  It is better, in fact, for tests to represent real world examples.  Whether it is complex or simple, a model that uses DMN to solve a real world use case is better than any feature-focused test.

Users of DMN Technology

We welcome new members readily. We are specifically looking for users of DMN who can contribute DMN models as tests. Such a contribution is good in a number of ways:

  • Your test case will be reviewed by experts for consistency with the current standard.
  • As new versions of the vendor products come out, they will be tested against your model, to be sure that nothing you depend on is changed in any adverse way.

The risk you carry in your investment in DMN is reduced. At the same time, you are ensuring that DMN as a market is more likely to succeed for the long term.

Vendors of DMN Technology

We are also looking to grow the number of vendors supporting the TCK. All you have to do is pull a copy of the tests; set up your own runner to run them and package the results in the proper CSV format; and submit the results back to GitHub as a pull request. You will benefit from the best and most complete set of DMN tests available. Your results will be published alongside all the other top DMN vendors. Simply participating at the minimal level shows that you are an essential player in the DMN field.

Contributing test cases will also benefit you in the same way it benefits the users above: your investment in DMN details is more protected if there are running, reusable tests that run the way your software runs, and are available to all vendors to run. In the detailed areas that the written spec cannot possibly cover, your contributed tests may actually be setting the standard in a very real way.

You will also benefit by joining the weekly discussion of the current state of the DMN standard. Discussions of problem areas that others are encountering might be a tip to address those areas yourself before your user base gets there. Every vendor who has joined has invariably discovered gaps in their implementation that they were able to address quickly.

How do you Join?

Let me know:

  • Who you are
  • What organization you represent
  • What your interest is
  • Whether you are a user or vendor of DMN technology

Everything that the TCK team does is freely available to the public, so you don’t have to join to use the results.  But you will gain by being part of the action to set the standard, and reduce your risk of adopting this new technology.

by kswenson at April 03, 2018 10:30 AM

April 02, 2018

Keith Swenson: Worklist Performance Considerations

This is a technical deep dive into system design around providing a worklist to workers in a BPM system, and the technical tradeoffs you will encounter.


We assume a system where you have an application that is designed around business process diagrams.  Those diagrams describe tasks to be done by people, so you need work items: objects that communicate the need for a person to perform a particular task.  The user finds the work items assigned to them, picks one, and completes the task, thereby advancing the process to the next step, which might involve more work items for them or for others.

We assume that activities are really assigned to some sort of role or group that has the skill or responsibility necessary to do the job.  This is held separately in some sort of organizational directory which maps each group name to the list of individuals who are in the group at the time.

In order to scale such a system, we need to think about how this is structured in the database.  What we want to avoid is a full scan of the entire database every time someone wants to see a list of their tasks.  So we break the work items out into separate records which are indexed by the assigned user.  Suppose process instance 10005 has an activity “Review” assigned to a group called “Reviewers”.  If that group has 5 people in it, then 5 work item records are created in the work item table, one for each assignee.  This allows a single, highly efficient database query to find all the work items assigned to “alex”.  When that activity is completed, those 5 records are deleted so that they no longer show up in anyone’s work list.
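The fan-out described above can be sketched in a few lines. This is a hypothetical in-memory model, not any product's API; a real system would use a database table with an index on the assignee column, but the shape of the operations is the same.

```java
import java.util.*;

// Hypothetical sketch of the individual fan-out strategy: one record per
// (process activity, group member), keyed by assignee for fast lookup.
class WorkItemTable {
    // Simulates a table indexed by assignee: user -> work item ids
    private final Map<String, List<String>> byAssignee = new HashMap<>();

    // Fan one activity out to every current member of the assigned group.
    void create(String processId, String activity, Collection<String> groupMembers) {
        String itemId = processId + ":" + activity;
        for (String user : groupMembers) {
            byAssignee.computeIfAbsent(user, k -> new ArrayList<>()).add(itemId);
        }
    }

    // The single, index-friendly lookup: everything assigned to one user.
    List<String> worklistFor(String user) {
        return byAssignee.getOrDefault(user, Collections.emptyList());
    }

    // Completing the activity deletes every copy of the record.
    void complete(String processId, String activity) {
        String itemId = processId + ":" + activity;
        byAssignee.values().forEach(items -> items.remove(itemId));
    }
}
```

Note that `create` writes one record per member, which is exactly the write amplification discussed later when group membership changes.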

That is all pretty straightforward until you think about one thing: what happens when the members of the group change?  What if a new sixth person joins the group?

The answer to how this should be handled depends on the characteristics of the process and the rate of change of the group.

Group Level Work Items

As described above, work items were created for each individual, but there is another possibility:  create a single work item for the group.  That is, create a single work item for “Reviewers” instead of for the 5 individual members of the group.

Then, instead of searching for all the work items for “Alex” you search for work items for “Reviewers”.  But a person may be in multiple groups, so to find all of their work items, you will need to search for all the groups they are a member of.  If Alex is a member of 6 groups, then 6 database queries need to be run (or one fancy not-so-efficient query).

The advantage of group level work items is that when someone joins the group, all of the existing work items immediately become available to them.  With individual work items, when “Frank” joins the group of “Reviewers” he has no work items initially, because the existing work items were already created for the other five individuals: a query for work items assigned to “Frank” returns nothing.  With a group level work item, within a moment after Frank is made part of the group, all of the work items assigned to the group “Reviewers” are available to him.
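The group-level alternative can be sketched the same way (again a hypothetical in-memory model, names illustrative): membership is resolved against the directory at query time, so a new member sees existing items immediately.

```java
import java.util.*;

// Hypothetical sketch of the group-level strategy: one record per group
// per activity; the user's worklist is assembled from their groups.
class GroupWorkItemTable {
    private final Map<String, List<String>> byGroup = new HashMap<>();

    // One record for the whole group, regardless of how many members it has.
    void create(String processId, String activity, String group) {
        byGroup.computeIfAbsent(group, k -> new ArrayList<>())
               .add(processId + ":" + activity);
    }

    // One lookup per group the user currently belongs to
    // (membership comes from the organizational directory).
    List<String> worklistFor(Collection<String> groupsOfUser) {
        List<String> result = new ArrayList<>();
        for (String group : groupsOfUser) {
            result.addAll(byGroup.getOrDefault(group, Collections.emptyList()));
        }
        return result;
    }
}
```

Because nothing is keyed by individual, adding or removing a member touches only the directory, never the work item table.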

Refreshing the Work Items

An alternative to group level work items is an option to regenerate all the work items in the system.  That is, walk through all the process instances that involve the group “Reviewers” and regenerate all the work item records.  This will typically delete the existing work items and then create 6 work items, now that there are six people in the group.

This has the advantage that Frank can now just do a search for work items assigned to Frank and see all the reviewer tasks, but it is not instantaneous.  The group is changed, and then the refresh operation is done, which might take a few minutes and might not be scheduled until later.  Refreshing the work items on a large BPM installation can be a serious hit to the resources.  If you have 1 million processes, this operation alone might take the bulk of 20 minutes of server time, or at the very least put a serious load on the server that affects everyone else.  This is contrasted to the group level work item, which produces absolutely no additional load when adding or removing people, but causes some additional complication by having to check multiple work lists.


Determining which of these to use depends on the characteristics of the process being implemented.

Very Fast Processes

If the activity in question is handled very quickly, then there is no need to bother.  For example, in a trouble ticket system, the goal may be to accept the first activity in a matter of minutes.  If a process exists for only 5 minutes in that particular activity, then there is no real reason to worry about Frank not having all the current tasks.  Just wait 5 minutes and there will be new ones available for him to take up.  The additional bother of arranging to get work items on existing processes is simply not worth it.

For the most part we are considering a process activity which might hang around for days before being handled.  In that situation, you would not want to wait days for Frank to get tasks.  For the rest of the discussion, assume we are talking about tasks that take anywhere from days to months on average.

Frequency of Organizational Changes

The next question is how often the organization changes.  If Frank is hired on, and expected to stay years in the position, then you would expect that a person joining would be relatively rare.  If you add a person only every couple of months, then recalculating all the work items for each group is a reasonable thing.  The overhead at run time retrieving the work list is minimized by being able to do a single query, and the refresh is done only every month or so.

On the other extreme, if this is a charity volunteer organization where different people are dropping by, and the group of people doing the work changes daily or even more frequently, then the overhead of refreshing the work items is large.  It would need to be done daily or hourly, and on a large system that additional load would exceed the benefit.  In such a volatile organization, it is far better to choose group level work items.  Just have the person who is sitting in for an hour search for work items assigned to the group, and complete them that way.  There is no utility in creating individual level work items that will only exist for a day or so, on a task that typically takes more time than that.

The Balance

As a rule of thumb, updating the database is about a 10x heavier hit than just querying it.  Thus, if you use an update in order to save querying, you need to save ten queries for every database update you do.  This is not precise at all, particularly if the query size differs from the update size, but let’s use it to get a feel for the tradeoff.

If you have 100 people who each participate in 6 groups, and each fetches their work list once an hour, then with individual work items you will have 100 queries per hour, while with group level work items you will have 600 smaller queries per hour.  It is worth noting that with group level work items the work item table will also be 6 times smaller.

Say you have 100 processes alive at a time, distributed equally across 10 groups.

With 10 groups and each user in six of them, each group has 60 members.  There will be 6,000 individual work items, but only 100 group level work items.

Since each group has 10 processes, and each individual is in six groups, the typical individual worklist will retrieve 60 records.  Using group level work items you would run 6 queries (one for each group) retrieving 10 records each.

Changing a group and doing a mass update will cause 6,000 individual work items to be updated.  Because of the 10-to-1 weighting of updates versus queries, this is equivalent to querying 60,000 records, which at the rate above is roughly ten hours of normal worklist query load.

The real difference lies in this: what is the difference between making a single query for 60 items, versus 6 queries for 10 items each?  What is the overhead of submitting six queries, versus a single, larger query?  With proper indexing the difference should be slight.
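The back-of-envelope arithmetic above can be checked in a few lines. The 10x update-versus-query weighting is the rule of thumb from the text, not a measurement, and the class and method names here are purely illustrative.

```java
// Back-of-envelope check of the worklist sizing numbers:
// 100 users, 10 groups, 6 group memberships per user, 100 live processes.
class WorklistMath {
    static final int USERS = 100, GROUPS = 10, GROUPS_PER_USER = 6, PROCESSES = 100;

    // 100 users * 6 memberships / 10 groups = 60 members per group
    static int membersPerGroup()     { return USERS * GROUPS_PER_USER / GROUPS; }

    // One record per (process, member) in the individual strategy
    static int individualWorkItems() { return PROCESSES * membersPerGroup(); }

    // One record per process activity in the group-level strategy
    static int groupWorkItems()      { return PROCESSES; }

    // Cost of a full refresh, expressed in "records queried" units,
    // using the updates-cost-about-10x-queries rule of thumb.
    static int refreshCostInQueryUnits(int updateWeight) {
        return individualWorkItems() * updateWeight;
    }
}
```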

This analysis leads me to believe that if a user is in a fixed number of groups, asking the database for the work items by group might be more efficient than re-updating the database every time a person joins or leaves a group.

by kswenson at April 02, 2018 08:28 PM

February 26, 2018

Drools & JBPM: The Drools Executable Model is alive


The purpose of the executable model is to provide a pure Java-based representation of a rule set, together with a convenient Java DSL to programmatically create such a model. The model is low level and designed for the user to provide all the information it needs, such as the lambdas for index evaluation. This keeps it fast and avoids building in too many assumptions at this level. It is expected that higher level representations, perhaps more end-user focused, can be layered on top in the future. This work also strongly complements the rule unit work, which provides a Java-oriented way to provide data and control orchestration.


This model is generic enough to be independent from Drools but can be compiled into a plain Drools knowledge base. For this reason the implementation of the executable model has been split in 2 subprojects:
  1. drools-canonical-model is the canonical representation of a rule set model which is totally independent from Drools
  2. drools-model-compiler compiles the canonical model into Drools internal data structures making it executable by the engine
The introduction of the executable model brings a set of benefits in different areas:
  • Compile time: in Drools 6 a kjar contained the list of drl files and other Drools artifacts defining the rule base, together with some pre-generated classes implementing the constraints and the consequences. Those drl files had to be parsed and compiled from scratch when the kjar was downloaded from the Maven repository and installed in a KieContainer, making this process quite slow, especially for large rule sets. Conversely, it is now possible to package inside the kjar the Java classes implementing the executable model of the project rule base, and to recreate the KieContainer and its KieBases out of it in a much faster way. The kie-maven-plugin automatically generates the executable model sources from the drl files during the compilation process.
  • Runtime: in the executable model all constraints are defined as Java lambda expressions. The same lambdas are also used for constraint evaluation, which makes it possible to get rid of both mvel for interpreted evaluation and the jitting process that transformed the mvel-based constraints into bytecode, a process that caused slow warm-up.
  • Future research: the executable model will allow experimenting with new features of the rule engine without the need to encode them in the drl format and modify the drl parser to support them. 

    Executable Model DSLs

    One goal while designing the first iteration of the DSL for the executable model was to get rid of the notion of pattern and to consider a rule as a flow of expressions (constraints) and actions (consequences). For this reason we called it Flow DSL. Some examples of this DSL are available here.
    However, after having implemented the Flow DSL it became clear that the decision to avoid the explicit use of patterns obliged us to implement some extra logic that had both a complexity and a performance cost, since in order to properly recreate the data structures expected by the Drools compiler it is necessary to put the patterns together out of those apparently unrelated expressions.
    For this reason it was decided to reintroduce patterns in a second DSL that we called the Pattern DSL. This made it possible to bypass the algorithm that groups expressions, which has to fill an artificial semantic gap and is also time consuming at runtime.
    We believe that both DSLs are valid for different use cases, so we decided to keep and support both. In particular, the Pattern DSL is safer and faster (even if more verbose), so this is the DSL that will be automatically generated when creating a kjar through the kie-maven-plugin. Conversely, the Flow DSL is more succinct and closer to the way a user may want to programmatically define a rule in Java, and we plan to make it even less verbose by automatically generating, through a post-processor, the parts of the model defining indexing and property reactivity. In other terms, we expect that the Pattern DSL will be written by machines and the Flow DSL, eventually, by humans.

    Programmatic Build

    As evidenced by the test cases linked in the previous section, it is possible to programmatically define one or more rules in Java and then add them to a Model with a fluent API:

    Model model = new ModelImpl().addRule( rule );

    Once you have this model, which as explained is totally independent from Drools algorithms and data structures, it’s possible to create a KieBase out of it as follows:

    KieBase kieBase = KieBaseBuilder.createKieBaseFromModel( model );

    Alternatively, it is also possible to create an executable-model-based kieproject by starting from plain drl files, adding them to a KieFileSystem as usual:

    KieServices ks = KieServices.Factory.get();
    KieFileSystem kfs = ks.newKieFileSystem()
    .write( "src/main/resources/r1.drl", createDrl( "R1" ) );
    KieBuilder kieBuilder = ks.newKieBuilder( kfs );

    and then building the project using a new overload of the buildAll() method that accepts a class specifying which kind of project you want to build

    kieBuilder.buildAll( ExecutableModelProject.class );

    Doing so, the KieBuilder will generate the executable model (based on the Pattern DSL), and the resulting KieSession

    KieSession ksession = ks.newKieContainer( ks.getRepository().getDefaultReleaseId() ).newKieSession();

    will work with lambda expression based constraints as described in the first section of this document. In the same way it is also possible to generate the executable model from the Flow DSL by passing a different project class to the KieBuilder

    kieBuilder.buildAll( ExecutableModelFlowProject.class );

    but, for the reasons explained when discussing the two DSLs, it is better to use the pattern-based one for this purpose.

    Kie Maven Plugin

    In order to generate a kjar embedding the executable model using the kie-maven-plugin, it is necessary to add to the pom.xml file the dependencies for the two formerly mentioned subprojects implementing the model and its compiler:
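Based on the two subprojects named earlier, the dependency section would look roughly like this (the version property is a placeholder; use the Drools version of your project):

```xml
<dependencies>
  <dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-canonical-model</artifactId>
    <version>${drools.version}</version>
  </dependency>
  <dependency>
    <groupId>org.drools</groupId>
    <artifactId>drools-model-compiler</artifactId>
    <version>${drools.version}</version>
  </dependency>
</dependencies>
```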


    Also add the plugin to the plugins section.

    An example of a pom.xml file already prepared to generate the executable model is available here. By default the kie-maven-plugin still generates a drl based kjar, so it is necessary to run the plugin with the following argument:

    Where <VALUE> can be one of three values:


    Both YES and WITHDRL will generate the Java classes implementing the executable model corresponding to the drl files in the original project and add them to the kjar, the difference being that the first will exclude the drl files from the generated kjar, while the second will also include them. In this second case, however, the drl files play only a documentation role, since the KieBase will be built from the executable model regardless.

    Future developments

    As anticipated, one of the next goals is to make the DSLs, especially the Flow DSL, more user friendly, in particular by generating with a post-processor all the parts that can be automatically inferred, like the ones related to indexes and property reactivity.
    Orthogonally to the executable model, we improved the modularity and orchestration of rules, especially through the work done on rule units. This focus on pojo-ification complements this direction of research around pure Java DSLs, and we already have a few simple examples of how the executable model and rule units can be mixed for this purpose.

    by Mario Fusco at February 26, 2018 03:29 PM

    February 22, 2018

    Keith Swenson: Usability is a Factor in Security

    I am going once again through data security training.  That is not in itself bad, but the misguided and outdated recommendations propagate misinformation.  Security is very important, but why can’t the “security” experts learn?

    We are presented with this guidance:

    The most important aspect of a strong password is that it is easy to remember.  When you have 30 to 40 passwords to use just for official job functions (not counting all the personal online services) it can be a challenge to keep them all straight.  A hard to remember password will be written down … because it is hard to remember.  Duh!

    Writing down passwords is a security flaw.  Duh again!

    Therefore, it is quite clear that for a strong password to be successfully used, it must at the same time be easy to remember.  One can easily make a strong password that is easy to remember … without engaging in any of the prohibited actions mentioned on the right.

    The strongest password you can create is a list of letters that correspond to a complete phrase that you are familiar with.  Take a phrase that you know but that is obscure, probably something personal that you can easily remember, but that nobody would necessarily associate with you.  For this example I am using a very popular phrase, but in real use you should not use a popular phrase.

    “Now is the time for all good men to come to the aid of their country”

    Your password is the first letters of all the words: “nittfagmtcttaotc”

    That is a strong password.  Try it.  If you know the phrase, it is easy to type.  It is essentially a random collection of letters, and will be as hard for a password cracker to guess as any password.
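The first-letter transformation is purely mechanical, as this tiny sketch shows (illustrative only; a real password should of course never be derived by shared code):

```java
// Illustrative only: derive the first-letter mnemonic from a phrase.
class Mnemonic {
    static String firstLetters(String phrase) {
        StringBuilder sb = new StringBuilder();
        for (String word : phrase.trim().split("\\s+")) {
            sb.append(Character.toLowerCase(word.charAt(0)));
        }
        return sb.toString();
    }
}
```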

    It is even better if you are a little less regular about this and mix up the letters that you transform to.  One recommendation I saw was the phrase “This little piggy went to market” might become “tlpWENT2m”.   Another suggested the phrase might be “Try to crack my latest password, all you hackers” to become “t2cmlp,@yh”.

    You can have the strongest password in the world, but if you have to write it down, then all advantages of a strong password are lost.

    Why do these “security recommendations” ignore the fact that human factors are the single largest cause of unauthorized access?  There is some evidence that cryptic, hard to remember passwords are actually worse than simpler but longer phrases.  NIST guidelines recommend that people create long passphrases instead of short gobbledygook passwords.

    It continues with this page:

    What is missing?  Usability.  Again, you can make a super secure system, but if it is extremely difficult to use, then there is a strong disincentive to use it.

    For example, in Windows one can control access to every document down to exactly which users can read or update it.  But almost no business user today ever uses this!  It is too tedious.  Sometimes, on a network drive, one might set the access on a complete folder to a group or something, but that leaves the door open for a wide variety of people.  This is not really Microsoft’s fault: the paradigm of giving specific users access to specific files is just inherently tedious.

    The most important aspect of a security system is that a regular business user can easily navigate the controls to accurately and easily restrict access from the right people, and allow access to the right people.

    Part of a good usable system will be indicators that tell you when it is wrong and help you get around it.  For example, when a user should have access, and doesn’t, there should be an easy way to request and get access — without a lot of tedium.  The access control should be easily visible.

    When there is a failure of security or a mis-configuration, there should be a clear error message that accurately tells what was restricted and why.  Most primitive security thinking holds that no error should be produced, or if one is, that it should contain no discernible information.  All of this makes most data security environments so difficult to use that people avoid them.

    Eventually I encounter this screen in the training:

    The checked answer is the one you must choose to get the question correct.

    I had thought the security experts were simply oblivious to usability concerns, but it seems that they are actively against having passwords that are easy to remember!   They actually believe that a hard to remember password is better security!  Unbelievable!  It can be a big challenge to remember 30 to 40 passwords just for the job, all of which are expected to be different and changed reasonably often.

    Yes, I know, password managers like LastPass do a good job of solving this memory problem.  Very convenient.  The clear advantage is that you can have super strong passwords, as long as you want, and change them as often as you want.  I used it for a while, but I had some problems with it inserting itself into all the web pages I browsed.  I don’t remember exactly the problem, but I had to stop using it.

    Again, an easily remembered password will not be written down, and therefore will be safer and more secure.

    Super Ironic

    The email telling me about the security course has my username and password (for the course) directly in the email.  Yes, I know this is not a super-secure bit of information, but if you want to train people to behave the right way, it would make sense to demonstrate the correct behavior even when it is not necessary.  It is particularly ironic that a training class on security uses a bad security shortcut itself.

    Why do they do that?  It is too difficult otherwise!  They found people failed to do the course when they had to set up and maintain their (separate) password for the course.   Hello?   Is anyone listening?

    Another course

    I got done with another course, and the last step of that course was to fill in a survey.   I linked to the appropriate page, Logged in carefully.  The instructions said to press a button to fill in the survey.   This is what was presented to me:

    Is it access control gone wrong?  Maybe.  Probably.  The people who made the survey probably have no idea this is happening.  Usability is strictly a secondary consideration around security access problems.

    I am not the only one feeling this way


    by kswenson at February 22, 2018 04:58 PM

    February 07, 2018

    Drools & JBPM: Running multi Workbench modules on the latest IntelliJ Idea with live reloading (client side)

    NOTE: The instructions below apply only to the old version of the gwt-maven-plugin

    At some point in the past, IntelliJ released an update that made it impossible to run the Workbench using the GWT plugin. After exchanging ideas with people on the team and summing up solutions, some workarounds have emerged. This guide provides information on running any Errai-based application in the latest version of IntelliJ along with other modules, to take advantage of IntelliJ's (unfortunately limited) live reloading capabilities and speed up the development workflow.

    Table of contents

    1. Running Errai-based apps in the latest IntelliJ
    2. Importing other modules and use live reload for client side code
    3. Advanced configurations
    3.1. Configuring your project's pom.xml to download and unpack Wildfly for you
    3.2. Alternative workaround for non-patched Wildfly distros

    1. Running Errai-based apps in the latest IntelliJ

    As Max Barkley described on #logicabyss a while ago, IntelliJ has decided to hardcode gwt-dev classes on the classpath when launching Super Dev Mode in the GWT plugin. Since we're using the EmbeddedWildflyLauncher to deploy the Workbench apps, these dependencies are now deployed inside our Wildfly instance. Nothing too wrong with that, except that the gwt-dev jar depends on apache-jsp, which has a ServletContainerInitializer marker file that causes the deploy to fail.

    To solve that issue, the code that looks for the ServletContainerInitializer file and causes the deploy to fail was removed in custom patched versions of Wildfly, which are available in Maven Central under the org.jboss.errai group id.

    The following steps provide a quick guide to running any Errai-based application on the latest version of IntelliJ.

    1. Download a patched version of Wildfly and unpack it into any directory you like
    - For Wildfly 11.0.0.Final go here

    2. Import the module you want to work on (I tested with drools-wb)
      - Open IntelliJ, go to File -> Open.. and select the pom.xml file, hit Open then choose Open as Project

    3. Configure the GWT plugin execution like you normally would on previous versions of IntelliJ

    - VM Options:

    - Dev Mode parameters:
      -server org.jboss.errai.cdi.server.gwt.EmbeddedWildFlyLauncher

    4. Hit the Play button and wait for the application to be deployed

    2. Importing other modules and using live reload for client side code

    After being able to run a single webapp inside the latest version of IntelliJ, it might be very useful to have some of its dependencies imported as well, so that after changing client code in one of those dependencies you don't have to wait (way) too long for GWT to compile and bundle your application's JavaScript code again.

    Simply go to File > New > Module from existing sources.. and choose the pom.xml of the module you want to import.
    If you have kie-wb-common or appformer imported alongside another project, you'll most certainly have to apply a patch to the beans.xml file of your webapp.

    For drools-wb you can download the patch here. For other projects such as jbpm-wb, optaplanner-wb or kie-wb-distributions, you'll have to essentially do the same thing, but changing the directories inside the .diff file.

    If your webapp is up, hit the Stop button and then hit Play again. Now you should be able to re-compile any code changed inside IntelliJ much faster.

    3.1. Configuring your project's pom.xml to download and unpack Wildfly for you

    If you are used to a less manual workflow, you can use the maven-dependency-plugin to download and unpack a Wildfly instance of your choice to any directory you like.

    After you've added the snippet below to your pom.xml file, remember to add a "Run Maven Goal" before the Build of your application in the "Before launch" section of your GWT Configuration. Here I'm using the process-resources phase, but other phases are OK too.

                  <!-- Using a patched version of Wildfly -->
                  <!-- Unpacking it into /target/wildfly-11.0.0.Final -->
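A sketch of such a maven-dependency-plugin configuration is below. The Wildfly artifact coordinates here are an assumption for illustration; check the org.jboss.errai group in Maven Central for the actual ones.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-wildfly</id>
      <phase>process-resources</phase>
      <goals>
        <goal>unpack</goal>
      </goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <!-- Using a patched version of Wildfly (coordinates assumed) -->
            <groupId>org.jboss.errai</groupId>
            <artifactId>wildfly-dist</artifactId>
            <version>11.0.0.Final</version>
            <type>zip</type>
            <!-- Unpacking it into /target/wildfly-11.0.0.Final -->
            <outputDirectory>${project.build.directory}</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
```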

    3.2. Alternative workaround for non-patched Wildfly distros

    If you want to try a different version of Wildfly, or if you simply don't want to depend on any patched versions, you can still use official distros and exclude the ServletContainerInitializer file from the apache-jsp jar in your M2_REPO folder.

    If you're working on a Unix system, the following commands should do the job.

    1. cd ~/.m2/repository/

    2. zip -d org/eclipse/jetty/apache-jsp/{version}/apache-jsp-{version}.jar META-INF/services/javax.servlet.ServletContainerInitializer

    Because the file is excluded manually from the apache-jsp jar already in your local repository, Maven won't try to download the jar again. That makes this workaround permanent as long as you don't erase your ~/.m2/ folder. Keep in mind that if you ever need the apache-jsp jar to have this file back, the best option is to delete the apache-jsp dependency directory and let Maven download it again.

    New instructions for the new version of the gwt-maven-plugin are to come. Stay tuned!

    by Tiago Bento at February 07, 2018 03:25 PM

    January 25, 2018

    Keith Swenson: Product Trial Strategies

    Selling big complex products is always a challenge.  I recently was asked why not make the product simply available on the cloud for free sign-up and access so that people can try it out for free.  Here is my response.

    Been There, Done That

    In 2010 we launched our cloud based BPM initiative, and we set it up to allow free access to people.  We ran this until around 2014.  Obviously this was early days, and if we did it again now we might do a much better job.  We still have BPM in the cloud and the same thing on premises however you want it.  But we don’t offer free trials on the cloud.

    We learned a couple of things.  The main one is that enterprise application integration and BPM are inherently complex subjects.  The problem is not drawing a diagram.  The problem is wading through the myriad networks of existing systems to determine what needs to be called when, and what all the boundary conditions are going to be.  Your legacy systems were not designed to be “integrated to”, and they lack proper documentation of any kind for use in the next generation technology.

    VM Approach Preferred

    Instead of offering people a free trial on the cloud, we offer a free trial by downloadable VM.  You download a 4 to 5 gigabyte file, and in ten minutes you can have it running using VM Player or another such tool.  We put it on a freely distributable version of Linux, and include the community version of Postgres and free versions of everything else you would need.

    With a free VM, you have everything that you would get from a free cloud trial, plus these advantages:

    • Enterprise integration is not something you do casually in an hour or two of trying.  Even with the powerful tools we offer, it takes a serious effort to even detail a problem in that space, and you can only appreciate the powerful techniques when in the middle of a very sticky problem.
    • To put it in terms that most would understand: Oracle making a relational database available on the cloud as a try-before-you-buy service would make no sense because the kinds of things you have to do with a database are not done in a couple of hours of fiddling.
    • Downloading a VM is pretty quick and easy.  It takes about 10 minutes of work to get it running, but that is honestly not much more than accessing a cloud service.
    • With a cloud service, you can’t save versions as you go, and restore to that point like you can with a VM.  A VM will allow you to prepare a demo, and save it in that state, so that every demo starts in the same situation.  With the cloud, everything you do is final.
    • The agile approach means you want to try things out quickly.  With a VM you can do this with the confidence that if you decide that is a wrong direction, you can always go back to the last saved copy.
    • With the cloud you cannot give a copy to a coworker.  Giving a coworker access to your cloud instance means that they will be doing things in there while you are.  With a VM you can have as many copies as you want running simultaneously.
    • With a cloud service it is difficult to work on two independent projects at the same time.  If the vendor allows you two copies of the cloud service, then you could do it that way.  But with a VM you can have two or more copies, one for each concurrent project if you choose.  When one project goes on hiatus, you shut down the VM, assured that if it starts back up again you just need to restart the VM.  There is essentially no cost in storing the dormant VM, but that is not the case with free-trial cloud versions.
    • You cannot access a cloud service from a highly secure location.  You might or might not be able to bring a VM into such an environment.
    • Typically with a cloud approach you get a limited time, like one month, after which it is all lost.  You might think that is good for sales, but it only helps sales of very simple software.  Learning the details of enterprise integration takes months, and the prospect of losing it all after one month is a significant barrier to potential customers.


    I don’t mean to say that a “free trial on the cloud” approach is a bad idea.  It is great for products that can be learned in a few hours of fiddling.  But the above limitations are real when dealing with a system designed to handle big problems.  We have opted for a VM approach because it is a better approach for learning the system, teaching the system, doing development, and also for doing demonstrations of solutions built on the system.

    by kswenson at January 25, 2018 06:51 PM

    January 17, 2018

    Sandy Kemsley: A variety of opinions on what’s ahead for BPM in 2018

    I was asked to contribute to 2018 prediction posts on a couple of different sites, along with various other industry pundits. Here’s a summary. Predictions published The Year Ahead...

    [Content summary only, click through for full article and links]

    by sandy at January 17, 2018 02:47 PM

    January 11, 2018

    Sandy Kemsley: Prepping for OPEXWeek presentation on customer journey mapping – share your ideas!

    I’m headed off to OPEX Week in Orlando later this month, where I’ll give a presentation on customer journey mapping and how it results in process improvement as well as customer satisfaction/value....

    [Content summary only, click through for full article and links]

    by sandy at January 11, 2018 06:43 PM

    January 05, 2018

    Sandy Kemsley: Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

    I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having...

    [Content summary only, click through for full article and links]

    by sandy at January 05, 2018 02:03 PM

    January 03, 2018

    Keith Swenson: BPM and 2018

    Is it a new year already?  Hmmm.  Time to look around and reassess the situation.

    • Hottest topic last year was Robotic Process Automation (RPA).  The “robot” uses the regular HTML user interface to inject data into and extract data from systems that lack a proper data-level web service API.  I guess this means that SOA is dead; long live the new SOA.
    • The cloud is no longer scary.  Companies are moving data out of their data centers as quickly as they can, hoping to avoid the liability of actually holding sensitive data, and letting others take on that problem.
    • It seems that all business process systems have a case management component now.  Maybe this year we can finally completely merge the ACM and BPM awards programs.
    • Most important innovation in process space for 2018: Deep Learning.  Alpha-Go showed us a system that can play a game that was considered unsolvable only a few years ago, and it did this without any programming by humans.  Tremendous advances in (1) big data and (2) cheap parallel computation, but….
    • Most disappointing innovation for 2018: Deep Learning.  Learning systems really have not solved the broad, open-ended problems we face in the process space; practical systems remain limited to hand-coded algorithms.  Deep learning exhibits very quirky reliability: some amazing results, but lots of overwhelmingly problematic results on the long tail of exceptional situations.  In such a system it is hard to understand what has been learned, and hard to modify and adapt it without starting over.  Automatically improving a process requires understanding the business (cultural, moral, etc.) far outside the system, and this important step is only the beginning.
    • Process mining will continue to be under-appreciated in 2018.
    • SOAP can finally be ignored.  REST has won.
    • Decision Modeling continues to show promise, but it really is just an improvement on how to express computable programs, and it highlights the limitations of BPMN more than it represents anything new.   The DMN TCK had tremendous results helping to firm up the still-incomplete DMN spec.
    • Self-managed organizations continue to rise, and Slack seems to be the most sophisticated technology really needed to make this happen.

    What do we have to look forward to:

    • We will be holding another Adaptive Case Management Workshop this year alongside the EDOC conference in October in Stockholm.  Our 2017 experiment with running it in America failed because we could not attract attendees from outside of Europe.
    • The BPM conference will be in Sydney, Australia this year, and it should be as good as ever.
    • Open Rules is planning another Decision Camp, this time in Brussels in mid-September.

    So, indeed it is another year.  Happy New Year!


    Here is a helpful common sense video which lends some perspective about the state of artificial intelligence and deep learning:

    by kswenson at January 03, 2018 04:39 PM

    January 02, 2018

    Sandy Kemsley: ITESOFT | W4 Secure Capture and Process Automation digital business platform

    It’s been three years since I looked at ITESOFT | W4’s BPMN+ product, which was prior to W4’s acquisition by ITESOFT. At that time, I had just seen W4 for the first time at bpmNEXT 2014, and had...

    [Content summary only, click through for full article and links]

    by sandy at January 02, 2018 12:58 PM

    December 29, 2017

    Sandy Kemsley: Column 2 wrapup for 2017

    As the year draws to an end, I’m taking a look at what I wrote here this year, and what you were reading. I had fewer posts this year since I curtailed a lot of my conference travel, but still...

    [Content summary only, click through for full article and links]

    by sandy at December 29, 2017 01:54 PM

    December 22, 2017

    Sandy Kemsley: A Perfect Combination: Low Code and Case Management

    The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required)....

    [Content summary only, click through for full article and links]

    by sandy at December 22, 2017 05:46 PM

    December 15, 2017

    Sandy Kemsley: What’s in a name? BPM and DPA

    The term “business process management” (BPM) has always been a bit problematic because it means two things: the operations management practice of discovering, modeling and improving business...

    [Content summary only, click through for full article and links]

    by sandy at December 15, 2017 06:31 PM

    December 14, 2017

    Keith Swenson: 2017 BPM Awards

    There are a number of conclusions about the industry that we can make from this year’s WfMC Awards for Excellence in BPM.  Thirteen submissions won awards this year across a number of industries and practices.  First a summary of the cases:

    Key Takeaways

    The details of the winners can best be explored by reading the individual cases which will be available in a book later next year.  But across all of the winners, I saw these distinct trends:

    • BPM together with Case Management – in almost every study, the system was a hybrid that included both BPMN style pre-defined processes, as well as non-modeled goal-oriented cases that are structured while you work.  Predictable and unpredictable are implemented together.
    • No longer just Finance – While banking and insurance are still large users of BPM, they are no longer alone in the field.  This year’s cases came from retail, a utility, an internet provider, management consulting (two examples), construction, facility management, government (two examples), telecom, automotive, medical devices, and health care plans.
    • Avoid Big Bang – half of the studies pointed out that processes should not be perfected before use, but that such perfection is a waste.  Work in smaller chunks, but where each chunk is still a viable minimal process.
    • Agile Incremental Development – Implement part of the process and let it evolve as everyone learns what works and does not work. Clearly, ability to change on the fly is critical.  In several cases this was identified as the single most important ingredient of success.
    • Still Manual Work to be Automated – we are far from completely automated: most of the cases were fresh automation of manual processes, but a couple were re-work from earlier automation attempts.
    • South America – this region showed up remarkably strong this year, winning 6 of the 13 awards, followed by Europe with 3, the USA with 3, and Mexico with 1.  This seems to show that the South American market is maturing.
    • No Sign of Shakeout – the technology was from a variety of sources, some open source, some new to the field.  There is no evidence that all the cases are settling down to a few dominant vendors.
    • Digital Transformation Included – in every case we saw signs of attempts to fundamentally redesign the business using internet technologies.  The destiny of workflow, which became BPM, is to find its full fruition in digital transformation platforms.

    BPM Award Winners

    • DIA is a multinational retail company with more than 7000 stores across Spain, Portugal, Argentina, Brazil and China.  They had grown by acquisitions and mergers, and naturally their different product lines were being handled differently, which was causing delays.  They focused on standardizing new product introduction, decreasing the amount of time product managers devote to this by 70%, eliminating 50% of the purely administrative work, and reducing errors by 80%.
    • EPM is a public utility providing energy, gas and water in Colombia.  They were able to reduce service costs by 50% and measure a 60% increase in service level agreements.
    • FiberCorp offers cloud, internet, data, and video services in Argentina.  They wanted to reduce their time to market for new products and services, in the most extreme case reducing delay from 2 weeks down to 10 minutes.
    • Groupo A offers project management and education services in Brazil.  They have been on a 10 year journey to transform the way their 200 employees generate and deliver content.
    • Hilti AG is a construction industry leader supported by a very savvy team from the University of Liechtenstein.  They point out that BPM is used in two distinct ways: (1) to optimize existing processes, and (2) to innovate and bring about transformations in organizations.  They made a strong adoption of case management techniques along with a reduction in the number of separate ERP systems from 50 to 1.
    • ISS Facility Services operates facilities for the private and public sectors across Europe, Asia-Pacific, and North and South America. Again, it involves a combination of automation (process) with flexibility (case management).
    • New York State Backoffice Operations was challenged by Governor Cuomo to streamline their systems, and be ready to handle all invoices in less than 22 days.  Their 57 agencies had 57 different billing systems handling 700,000 invoices per year.  They reduced or eliminated the differences, reduced the number of data centers from 53 down to 10, and by doing everything on-line dramatically reduced the paper usage.
    • Pret Communications in Mexico wants to be the most competitive vendor in the telecom space through automation and case management.  Their most important lesson is to avoid building too much, because it is all going to change, so use an incremental agile approach.
    • Rio de Janeiro City Hall needed to streamline the granting of permits for businesses and buildings. They saved 1,230,000 sheets of paper, while allowing 45% of the permits to be completed in less than 30 minutes.  72% of the applicants are handled automatically, but the exceptions still get the full review and handling by people who are freed from the drudgery of the simple cases.  Interestingly, 40% of submissions can be automatically rejected, and when done within 30 minutes the applicant does not pay any fees.  Even though rejections happen more effectively, overall they increased the number of successful applications by 25%.
    • Solix offers program and process management to both public and private sectors; they integrated several separate systems to reduce the time and effort needed to support their processes.
    • Valeo is a French automotive supplier with 106,000 employees and 500 existing BPM applications.  By cutting the time for one step by 80%, to a minute or so, they were able to save the company 3,000 to 4,000 hours per month.
    • Vincula is a Brazilian medical device supplier with, for example, implants for knee, hip, back, and jaw.  They used BPM to implement the ability to change from indirect sales to direct sales, cutting out a step and improving their ability to know the customer and respond to needs.
    • WellCare Healthcare Plans is a Florida based health care service provider.  They implemented an adaptive case management system which reduced cycle time by 20%, reduced rework by 20% and eliminated 70% of the paper use.

    I will link the recording of the awards ceremony when it becomes available.


    by kswenson at December 14, 2017 11:50 AM

    December 12, 2017

    Keith Swenson: Conversation on Goal Oriented BPM

    A few weeks ago Peter Schooff recorded a discussion between us on the topic of cloud and goal oriented BPM. Here is the link:

    The transcript is copied here:

    Peter Schooff: How important is the cloud to digital transformation?

    Keith Swenson: It’s satisfying, after struggling for a decade trying to get people to move to the cloud, to see that people are no longer worried about it. It’s perfectly acceptable now. A lot of our services run in the cloud. We have figured out that when it comes to security breaches, it’s better to have your data centers run by people who do nothing else. That’s all they do, run data centers. That way, all the proper procedures are taken care of.

    So from that aspect, I’m seeing people accepting the cloud a tremendous amount. Now, still when it comes to digital transformation, I think … you can do that with data centers and in-house. You can do it out of house. I don’t think that should be a barrier. I don’t think you’d want to go to a pure cloud-only solution because then you kind of become trapped. Also you wouldn’t want to invest in something that only runs in-house. You’d want to have that flexibility. I think if you’re looking forward, you need to consider agility and the ability to move quickly back and forth between your on-premise and cloud. And make them work together in a true hybrid approach. That’s the safest approach for anybody.

    Peter Schooff: That’s great. We’ve touched on a lot of things. What would you say are the one or two key takeaways you think people should remember from this podcast?

    Keith Swenson: Okay, there’s one thing I can throw out there. Process is no longer the center of this thing. For many, many years, we’ve been preaching, let’s look at business process. And why we were looking at business process is because we wanted to take the focus off of functional programming. In other words, I’ve got an accounting department, and I handle accounts receivable. So I’m gonna optimize accounts receivable on its own. But accounts receivable is only one part of a longer process, and it’s more important that you look at the whole thing holistically and you identify what your goals are.

    So that’s why we moved to a process-oriented view on designing IT systems. But when I say that process is no longer the center of it, what I’m saying is that we still want that goal. We still want the long-term goal, but what’s happening is that we often can’t identify the process before we start. We can identify the goal from the beginning, so we want to be goal-centered, and that’s where case management comes in. You can assign a goal to a case. That’s where you’re gonna go. And then the process becomes auxiliary. It’s off on the side. And when you can say, “Oh okay, fine, to get to the goal, I could use this process,” you’ll bring in that process and use it. And you’ll bring in a bunch of different processes and combine them. But there may be aspects of your case that simply … you haven’t had the process for that, but you still have the goal.

    So I mean everything … We’ve unseated process as the center of the whole system. I mentioned earlier that sharing is easy, but controlling the sharing is difficult and challenging, so usability around security, access control. Making it natural, making it like a conversation. When you involve somebody in a conversation, they somehow automatically get the appropriate rights to the artifacts that they need to carry on the conversation. We’re still challenged in trying to find how to make that really work.

    Same thing with the constant change we see in our organizations. Say you’ve got a case that’s got 20 to 30 tasks assigned to people. And then a new person joins the organization. Now you have to go back and reassign all the tasks. There needs to be a better way to allow this stuff to just flow.

    You mentioned robotic process automation. That’s a very important integration technique. There’s another thing. Everybody, of course, knows about deep learning and analytics. That’s gonna be huge in digital transformation. In other words, we’re going to implement the systems. I said move quickly, deploy, but you also need to watch what you’re doing. And that’s where … they call them real-time analytics. They’re not strictly real-time, but anyway, analytics that are fairly current allow you to see how things are going and keep tabs on it. That’s incredibly important.

    And what is really needed is integrative platforms that bring all of these pieces together. Open source or proprietary or whatever it is, having them pre-fit into a platform that’s known to work together, giving you all those capabilities, that’s gonna be a key aspect of your … a central part of your digital transformation plan.

    Get Keith’s new book – When Thinking Matters in the Workplace: How Executives and Leaders of Knowledge Work Teams Can Innovate with Case Management – available at Amazon.

    by kswenson at December 12, 2017 08:01 PM

    December 08, 2017

    Sandy Kemsley: Tune in for the 2017 WfMC Global Awards for Excellence in BPM and Workflow

    I had the privilege this year of judging some of the entries for WfMC’s Global Awards for Excellence in BPM and Workflow, and next Tuesday the 12 winners will be announced in a webinar. Tune in to...

    [Content summary only, click through for full article and links]

    by sandy at December 08, 2017 01:10 PM

    December 04, 2017

    Sandy Kemsley: Presenting at OPEXWeek in January: customer journey mapping and lowcode

    I’ll be on stage for a couple of speaking slots at the OPEX Week Business Transformation Summit 2018 in Orlando the week of January 22nd: Tuesday afternoon, I’ll lead a breakout session in the...

    [Content summary only, click through for full article and links]

    by sandy at December 04, 2017 06:59 PM

    December 01, 2017

    Sandy Kemsley: Release webinar: @CamundaBPM 7.8

    I listened in on the Camunda 7.8 release webinar this morning – they issue product releases every six months like clockwork – to hear about the new features and upgrades from CEO Jakob Freund and VP...

    [Content summary only, click through for full article and links]

    by sandy at December 01, 2017 05:50 PM

    November 22, 2017

    Best Practice Talk about Process Digitalization in Hamburg

    Dear BPM experts,

    we would like to invite you to the next Best Practice Talk in Hamburg. This time, it will be all about Process Digitalization, with exciting presentations by Taxdoo, Hansa Flex, and Otto Group. The talk will take place on Nov 30, 2017.

    Please visit the event site on Xing for more information:

    See you next week!

    by Mirko Kloppenburg at November 22, 2017 08:59 PM

    November 21, 2017

    Sandy Kemsley: Fun times with low code and case management

    I recently held a webinar on low code and case management, along with Roger King and Nicolas Marzin of TIBCO (TIBCO sponsored the webinar). We tossed aside the usual webinar presentation style and...

    [Content summary only, click through for full article and links]

    by sandy at November 21, 2017 01:03 PM

    November 10, 2017

    Drools & JBPM: Building Business Applications with DMN and BPMN

    A couple weeks ago our own Matteo Mortari delivered a joint presentation and live demo with Denis Gagné from Trisotech at the virtual event.

    During the presentation, Matteo live-demoed a BPMN process and a couple of DMN decision models created using the Trisotech tooling and exported to Red Hat BPM Suite for seamless execution.

    Please note that no glue code was necessary for this demo. The BPMN process and the DMN models are natively executed in the platform, no Java knowledge needed.

    Enough talking, hit play to watch the presentation... :)

    by Edson Tirelli at November 10, 2017 06:20 PM

    October 27, 2017

    Sandy Kemsley: Machine learning in ABBYY FlexiCapture

    Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session...

    [Content summary only, click through for full article and links]

    by sandy at October 27, 2017 08:47 PM


    Sandy Kemsley: Capture microservices for BPO with iCapt and ABBYY

    Claudio Chaves Jr. of iCapt presented a session at ABBYY Technology Summit on how business process outsourcing (BPO) operations are improving efficiencies through service reusability. iCapt is a...

    [Content summary only, click through for full article and links]

    by sandy at October 27, 2017 06:07 PM

    Sandy Kemsley: Pairing @UiPath and ABBYY for image capture within RPA

    Andrew Rayner of UiPath presented at the ABBYY Technology Summit on robotic process automation powered by ABBYY’s FineReader Engine (FRE). He started with a basic definition of RPA —...

    [Content summary only, click through for full article and links]

    by sandy at October 27, 2017 04:51 PM

    Sandy Kemsley: ABBYY partnerships in ECM, BPM, RPA and ERP

    It’s the first session of the last morning of the ABBYY Technology Summit 2017, and the crowd is a bit sparse — a lot of people must have had fun at the evening event last night —...

    [Content summary only, click through for full article and links]

    by sandy at October 27, 2017 04:15 PM

    Sandy Kemsley: ABBYY mobile real-time recognition

    Dimitry Chubanov and Derek Gerber presented at the ABBYY Technology Summit on ABBYY’s mobile real-time recognition (RTR), which allows for recognition directly on a mobile device, rather than...

    [Content summary only, click through for full article and links]

    by sandy at October 27, 2017 12:11 AM

    October 26, 2017

    Sandy Kemsley: ABBYY Robotic Information Capture applies machine learning to capture

    Back in the SDK track at ABBYY Technology Summit, I attended a session on “robotic information capture” with FlexiCapture Engine 12, with lead product manager Andrew Zyuzin and director...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 10:20 PM

    Sandy Kemsley: ABBYY Recognition Server 5.0 update

    I’ve switched over to the FlexiCapture technical track at the ABBYY Technology Summit for a preview of the new version of Recognition Server to be released in the first half of 2018. Paula...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 09:27 PM

    Sandy Kemsley: ABBYY SDK update and FineReader Engine deep dive

    I attended two back-to-back sessions from the SDK track in the first round of breakouts at the 2017 ABBYY Technology Summit. All of the products covered in these sessions are developer tools for...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 08:05 PM

    Sandy Kemsley: The collision of capture, content and analytics

    Martyn Christian of UNDRSTND Group, who I worked with back in FileNet in 2000-1, gave a keynote at ABBYY Technology Summit 2017 on the evolution and ultimate collision of capture, content and...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 05:39 PM

    Sandy Kemsley: ABBYY corporate vision and strategy

    We have a pretty full agenda for the next two days of the 2017 ABBYY Technology Summit, and we started off with an address from Ulf Persson, ABBYY’s relatively new worldwide CEO (although he is...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 04:10 PM

    Sandy Kemsley: ABBYY analyst briefing

    I’m in San Diego for a quick visit to the ABBYY Technology Summit. I’m not speaking this year (I keynoted last year), but wanted to take a look at some of the advances that they’re...

    [Content summary only, click through for full article and links]

    by sandy at October 26, 2017 12:15 AM

    October 25, 2017

    Sandy Kemsley: Low code and case management discussion with @TIBCO

    I’m speaking on a webinar sponsored by TIBCO on November 9th, along with Roger King (TIBCO’s senior director of product management and strategy, and Austin Powers impressionist extraordinaire) and...

    [Content summary only, click through for full article and links]

    by sandy at October 25, 2017 04:11 PM

    October 24, 2017

    Sandy Kemsley: Citizen development with @FlowForma and @JohnRRymer

    I attended a webinar today sponsored by FlowForma and featuring John Rymer of Forrester talking about low-code platforms and citizen developers. Rymer made a distinction between three classes of...

    [Content summary only, click through for full article and links]

    by sandy at October 24, 2017 04:31 PM

    October 19, 2017

    Sandy Kemsley: Financial decisions in DMN with @JanPurchase

    Trisotech and their partner Lux Magi held a webinar today on the role of decision modeling and management in financial services firms. Jan Purchase of Lux Magi, co-author (with James Taylor) of...

    [Content summary only, click through for full article and links]

    by sandy at October 19, 2017 05:10 PM

    October 12, 2017

    5 Pillars of a Successful Java Web Application

    Last week, Alex Porcelli and I had the opportunity to present two talks related to our work at JavaOne San Francisco 2017: “5 Pillars of a Successful Java Web Application” and “The Hidden Secret of Java Open Source Projects.”

    It was great to share our cumulative experience over the years building the workbench and the web tooling for the Drools and jBPM platform, and both talks had great attendance (250+ people in the room).

    In this series of posts, we’ll detail our “5 Pillars of a Successful Java Web Application”, trying to give you an overview of our research and also a taste of participating in a great event like JavaOne.
    There are a lot of challenges related to building and architecting a web application, especially if you want to keep your codebase updated with modern techniques without throwing away a lot of your code every two years in favor of the latest trendy JS framework.
    In our team we have been able to keep a 7+-year-old Java application up to date, combining modern techniques with a legacy codebase of more than 1 million LOC, using an agile, sustainable, and evolutionary web approach.
    More than just choosing and applying any web framework as the foundation of our web application, we based our web application architecture on 5 architectural pillars that proved crucial for our platform’s success. Let's talk about them:

    1st Pillar: Large Scale Applications

    The first pillar is that every web application architecture should be concerned about the potential of becoming a long-lived and mission-critical application, or in other words, a large-scale application. Even if your web application is not exactly as big as ours (1M+ lines of web code, 150 sub-projects, 7+ years old), you should be concerned about the possibility that your small web app will become a big and important codebase for your business. What if your startup becomes an overnight success? What if your enterprise application needs to integrate with several external systems?
    Every web application should be built as a large-scale application because it is part of a distributed system and it is hard to anticipate what will happen to your application and company in two to five years.
    And for us, a critical tool for building these kinds of distributed and large-scale applications throughout the years has been static typing.

    Static Typing

    The debate of static vs. dynamic typing is very controversial. People who advocate in favor of dynamic typing usually argue that it makes the developer's job easier. This is true for certain problems.
    However, static typing and a strong type system, among other advantages, simplify identifying errors that can generate failures in production and, especially for large-scale systems, make refactoring more effective.
    Every application demands constant refactoring and cleaning; it’s a natural need. For large-scale applications, with codebases spread across multiple modules/projects, this task is even more complex. Confidence when refactoring comes from two factors: test coverage and the tooling that only a static type system is able to provide.
    For instance, we need a static type system to find all usages of a method, to extract classes, and, most importantly, to figure out at compile time whether we accidentally broke something.
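    As a minimal, hypothetical sketch (the class and names below are illustrative, not from our codebase), this is the kind of refactoring safety a static type system gives you:

```java
// Hypothetical example: renaming a method in a statically-typed codebase.
// After the rename, every stale call site fails at compile time, so the
// compiler, not production, finds the breakage across all modules.
public class Invoice {
    private final double amount;
    private final double taxRate;

    public Invoice(double amount, double taxRate) {
        this.amount = amount;
        this.taxRate = taxRate;
    }

    // Renamed from calculateTotal(); any caller still using the old name
    // is rejected by the compiler instead of failing at runtime.
    public double computeTotal() {
        return amount * (1.0 + taxRate);
    }

    public static void main(String[] args) {
        Invoice invoice = new Invoice(100.0, 0.25);
        System.out.println(invoice.computeTotal()); // prints 125.0
    }
}
```

    In a dynamically-typed codebase the stale call would only surface when that code path actually executes, which in a 1M+ LOC system may be long after the refactoring.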
    But we are in web development, and JavaScript is the language of the web. How can we get static typing so that we can refactor effectively in the browser?

    Using a transpiler

    A transpiler is a type of compiler that takes the source code of a program written in one programming language as its input and produces equivalent source code in another programming language.
    This is a well-known Computer Science problem and there are a lot of transpilers that output JavaScript. In a sense, JavaScript is the assembly of the web: the common ground across all the web ecosystems. We, as engineers, need to figure out what is the best approach to deal with JavaScript’s dynamic nature.
    A Java transpiler, for instance, takes the Java code and transpiles it to JavaScript at compile time. So we have all the advantages of a statically-typed language, and its tooling, targeting the browser.
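    To make this concrete, here is a hypothetical illustration (the JavaScript shown in the comments is only indicative; real transpiler output varies by tool and optimization level):

```java
// A small Java class as input to a Java-to-JavaScript transpiler.
// All type checking happens at compile time, on the Java side.
public class Greeter {
    private final String name;

    public Greeter(String name) {
        this.name = name;
    }

    public String greet() {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(new Greeter("web").greet()); // prints Hello, web!
    }
}

// A transpiler might emit JavaScript roughly like this (illustrative only):
//   function Greeter(name) { this.name = name; }
//   Greeter.prototype.greet = function () {
//     return 'Hello, ' + this.name + '!';
//   };
// Note that the emitted code carries no type information; the types were
// only needed at compile time, which is where the tooling and safety live.
```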

    Java-to-JavaScript Transpilation

    The transpiler that we use in our architecture is GWT. This choice is a bit controversial, especially because the GWT framework was launched in 2006, when the web was a very different place.
    But keep in mind that every piece of technology has its own good parts and bad parts. For sure there are some bad parts in GWT (like the Swing-style widgets and the multiple permutations per browser/language), but keep in mind that for our architecture what we are trying to achieve is static typing on the web, and for this purpose the GWT compiler is amazing.
    Our group is part of GWT steering committee, and the next generation of GWT is all about JUST these good parts. Basically removing or decoupling the early 2000 legacy and keeping only the good parts. In our opinion the best parts of GWT are:
    • Java to JavaScript transpiler: extreme JavaScript performance due to compiling optimizations and static typing in the web;
    • java.* emulation: excellent emulation of the main java libraries, providing runtime behavior/consistency;
    • JS Interop: almost transparent interoperability between Java <-> JavaScript. This is a key aspect of the next generation of GWT and the Drools/jBPM platform: embrace and interoperate (two-way) with the JS ecosystem.

    Google is currently working on a new transpiler called J2CL (short for Java-to-Closure, using the Google Closure Compiler) that will be the compiler used in GWT 3, the next major GWT release. The J2CL transpiler has a different architecture and scope, allowing it to overcome many of the disadvantages of the previous GWT 2 compiler.

    Whereas the GWT 2 compiler must load the entire AST of all sources (including dependencies), J2CL is not a monolithic compiler. Much like javac, it is able to individually compile source files, using class files to resolve external dependencies, leaving greater potential for incremental compilation.
    These three good parts are great, and in our opinion you should really consider using GWT as a transpiler in your web applications. But keep in mind that the most important point here is that GWT is just our implementation of this first pillar. You could equally consider other transpilers like TypeScript, Dart, Elm, Scala.js, PureScript, or TeaVM.
    The key point is that every web application should be handled as a large-scale application, and every large-scale application should be concerned about effective refactoring. The best way to achieve this is using statically-typed languages.
    This is the first of three posts about our 5 pillars of successful web applications. Stay tuned for the next ones.

    [I would like to thank Max Barkley and Alexandre Porcelli for kindly reviewing this article before publication, contributing to the final text, and providing great feedback.]

    by Eder Ignatowicz at October 12, 2017 09:13 PM

    October 09, 2017

    Sandy Kemsley: International BPM conference 2018 headed down under

    The international BPM conference for academics and researchers is headed back to Australia next year, September 9-14 in Sydney, hosted by the University of New South Wales. I’ve attended the...

    [Content summary only, click through for full article and links]

    by sandy at October 09, 2017 07:08 PM

    October 04, 2017

    Sandy Kemsley: Citrix Productivity Panel – the future of work

    I had a random request from Citrix to come out to a panel event that they were holding in downtown Toronto — not sure what media lists I’m on, but fun to check out to events I wouldn’t normally...

    [Content summary only, click through for full article and links]

    by sandy at October 04, 2017 10:31 PM

    September 26, 2017

    Sandy Kemsley: ABBYY Technology Summit 2017

    Welcome back after a nice long summer break! Last year, I gave the keynote at ABBYY’s Technology Summit, and I’m headed back to San Diego this year to just do the analyst stuff: attend briefings and...

    [Content summary only, click through for full article and links]

    by sandy at September 26, 2017 01:12 PM

    September 11, 2017

    Keith Swenson: Why Does Digital Transformation Need Case Management?

    A platform for digital transformation brings a number of different capabilities together: processes, agents, integration, analytics, decisions, and — perhaps most important — case management.  Why case management?  What does that really bring to the table and why is it needed?


    What is the big deal about case management?  People are often underwhelmed.  In many ways, case management is simply a “file folder on steroids.”  Essentially it is just a big folder that you can throw things into.  Traditional case management was centered on exactly that: a case folder, and really that is the only physical manifestation.  It is true that the folder serves as a collecting point for documents and data of any kind, but there is a little more to it.

    I already have shared folders, so why do I need anything more?  The biggest difference between case management and shared folders is how you gain access to the folder.

    My shared file system already has access control.  Right, but it is a question of granularity.  If access can be controlled only on the whole folder, every participant has all-or-nothing access, and that is too much.  At the other end of the spectrum, if every file can be assigned to any person, it gets too tedious: adding a person to a large case with 50 files can take significant effort, costing more than 10 minutes of work.  People may be joining and leaving the case on a daily basis, and going through all the documents for every person would leave you with a full-time job managing the access rights.  A case manager is too busy to do that.  A better approach has to be found, one that blends access control with the other things a case manager is already doing.

    For example, let’s say that you have a task, and the task is associated with 10 documents in the folder.  Changing the assignment of the task from one person to another should at the same time (and without any additional trouble) transfer the rights to access the associated documents.  It is reasonable to ask a case manager to assign a task to someone.  It is unreasonable to expect the case manager to go and manually adjust the access privileges for each of the 10 documents.  That is not only tedious, it is error prone.  Forget to give access to a critical document, and the worker can’t do the job.  Give access to the wrong document to someone with no need to know, and you might have a confidentiality violation.  This is one example of how case management blends tasking and access control together.  Another example is role-based access.
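    The idea can be sketched in a few lines of Java. This is a hypothetical illustration (the class and method names are invented for this sketch, not any product’s API): the documents belong to the task, so one reassignment call moves the access rights along with the work.

    ```java
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch of task-scoped access: reassigning a task moves
    // the access rights to its associated documents in the same operation.
    public class CaseTask {
        private final Set<String> documents = new HashSet<>();
        private String assignee;

        public CaseTask(String assignee, String... docs) {
            this.assignee = assignee;
            documents.addAll(Arrays.asList(docs));
        }

        // One call changes both the assignment and, implicitly, who may
        // read every associated document -- no per-document adjustment.
        public void reassign(String newAssignee) {
            this.assignee = newAssignee;
        }

        public boolean canAccess(String user, String doc) {
            return user.equals(assignee) && documents.contains(doc);
        }
    }
    ```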

    My shared file system already has role-based access control.  Many document management systems offer global roles that you can set up: a group for all managers, a group for all writers, a group for all editors.  You can assign privileges to such a group, and simply adding a person to the group gives them access to all the resources of the group.

    This is a complete misunderstanding of how cases have to work.  Each case needs its own groups of people to play particular roles just for that case.  For example, a case dedicated to closing a major deal with a customer will have a salesperson, a person to develop and give a demo, maybe a market analyst.  But you can’t use the global groups for salespeople, demo developers, and market analysts.  This case has a particular salesperson, not just anyone in the salesperson pool.  That particular salesperson will have special access to the case that no other salesperson should have.  A global role simply can’t fit the need.

    I could make individual roles for every case even in the global system.  Right, but creating and modifying global roles is often restricted to a person with global administration privileges.  The case manager needs the rights to create and change the roles for that case, and for no other case.  This right to manage roles needs to come automatically from being assigned to the case manager role for that case.  Case management adds mechanisms above the basic access control to avoid the tedium of having to manage thousands of individual access control settings.
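    As a rough sketch of case-scoped roles (again with invented names, assuming a simple role-to-person map per case): the right to change roles belongs to whoever holds the manager role of that case, and no global administrator is involved.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: each case carries its own role-to-person map,
    // and only that case's manager may change it.
    public class CaseRoles {
        private final Map<String, String> roles = new HashMap<>();

        public CaseRoles(String caseManager) {
            // the right to manage roles comes automatically with the case
            roles.put("manager", caseManager);
        }

        // Role changes are authorized per case, not by a global admin.
        public void assignRole(String actor, String role, String person) {
            if (!actor.equals(roles.get("manager"))) {
                throw new IllegalStateException(actor + " is not this case's manager");
            }
            roles.put(role, person);
        }

        public String holder(String role) {
            return roles.get(role);
        }
    }
    ```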

    So that is all it is, powerful access control?  There is more to it.  It must also have the ability to create tasks of any kind and assign them to people at any time.  This means that case management needs convenient ways to find all the tasks assigned to a particular person, and to (1) produce a work list of all currently assigned tasks, and (2) send email notifications of either the entire list, or just the items that are about to reach a deadline.  These are BPM-ish capabilities, but there is no need for a process diagram.  For routine, pre-defined processes just use a regular BPM product.  Case management is really more about completely ad-hoc tasks assigned as desired.
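    Those two queries can be sketched as follows; this is an illustrative assumption of how such a work list might be modeled, not any product’s actual API:

    ```java
    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical sketch: ad-hoc tasks queried two ways, as a per-person
    // work list and as the subset approaching a deadline (e.g. for email).
    public class WorkList {
        static class Task {
            final String name;
            final String assignee;
            final LocalDate due;
            Task(String name, String assignee, LocalDate due) {
                this.name = name;
                this.assignee = assignee;
                this.due = due;
            }
        }

        private final List<Task> tasks = new ArrayList<>();

        public void add(String name, String assignee, LocalDate due) {
            tasks.add(new Task(name, assignee, due));
        }

        // (1) the work list: all tasks currently assigned to one person
        public List<String> workListFor(String person) {
            return tasks.stream()
                    .filter(t -> t.assignee.equals(person))
                    .map(t -> t.name)
                    .collect(Collectors.toList());
        }

        // (2) tasks due within the notification window
        public List<String> dueSoon(LocalDate today, int days) {
            return tasks.stream()
                    .filter(t -> !t.due.isAfter(today.plusDays(days)))
                    .map(t -> t.name)
                    .collect(Collectors.toList());
        }
    }
    ```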

    So there is no pattern to the processes at all?  Sorry, I didn’t mean to imply that.  There are patterns.  Each case manager develops their own style for getting a case done, and they often reuse those patterns.  The list of common tasks is usually copied from case to case in order to be reused.  At the same time, the patterns are never exactly the same.  And they change after the case is started.

    Since tasks are assigned manually, there is a strong need for a capability to “see who is available” which takes into account skills, workload, vacation schedule, and other criteria to help locate the right person for the job.

    There are also well defined routine processes to be called upon as well, and you use BPM for that.  The tighter the BPM is integrated to the case management, the easier it will be for case managers to complete the work.


    The above discussion is not exhaustive; case management brings a number of capabilities to the table:

    • It is a dumping ground for all the work that can not be known in advance.  A kind of safety valve to catch work which does not fall neatly into the pre-defined buckets for process management.
    • It collects any kind of information and documents, and makes them available to people working on the case.
    • It offers powerful access control that is integrated into the case logical structure so that it is easier to use than a simple document-based access control system.
    • It offers tasking so that assignments can be made and tracked to completion.
    • There are often portal features that can reach out to external people to register themselves and to play a role in the case.
    • It has calendars and vacation schedules that give workers an awareness of who is available and who might be best to do a job.
    • Conversation about the case is simplified by connections to discussion topics, commenting capability, chat capability, unified communications, email, social media, etc.

    Knowledge workers need these capabilities because their work is inherently unpredictable.  A digital transformation platform brings all the tools together to make solutions that transform the business.  Knowledge workers constitute about 50% of the workforce, and that percentage is growing.  Any solution destined to transform the organization absolutely must have some case management capabilities.

    by kswenson at September 11, 2017 05:49 PM

    September 07, 2017

    Keith Swenson: Business Driven Software

    I liked the recent post from Silvie Spreeuwenberg when she asks “When to combine decisions, case management and artificial intelligence?”

    She correctly points out that “pre-defined workflows” are useful only in well-defined, scripted situations, and that more and more knowledge workers need to break out of these constraints to get things done.  She points to Adaptive Case Management.

    I would position it slightly differently.  The big push today is “Digital Transformation” but it is exactly what she is talking about:  you are combining aspects of traditional process management, with unstructured case management, separating out decision management, and adding artificial intelligence.

    I would go further and say that a Digital Transformation Platform (DXP) would need all that plus strong analytics, background processing agents, and robotic process automation. These become the basic ingredients that are combined for specific knowledge worker solutions.  I think Spreeuwenberg has rightly expressed the essence of an intuitive platform of capabilities to meet the needs of today’s business.

    She closes by saying she will be talking at the Institute of Risk Management — once again the domain of knowledge workers: risk management.

    by kswenson at September 07, 2017 10:49 PM

    September 01, 2017

    Keith Swenson: Update on DMN TCK

    Last year we started the Decision Model & Notation Technology Compatibility Kit (DMN-TCK) working group.  A lot has happened since the last time I wrote about this, so let me give you an update.

    Summary Points

    • We have running code!:  The tests are actual samples of DMN models, and the input/output values force a vendor to actually run them in order to demonstrate compliance.  This was the main goal and we have achieved it!
    • Beautiful results web site:  Vendors who participate are highlighted in an attractive site that lists all the tests that have passed.  It includes detail on all the tests that a vendor skips and why they skip them.  Thanks mainly to Edson Tirelli at Red Hat.
    • Six vendors included:  The updated results site, published today, has six vendors who are able to run the tests to demonstrate actual running compliance:  Actico, Camunda, Open Rules, Oracle, Red Hat, Trisotech.
    • Broad test set: The current 52 tests provide broad coverage of DMN capability, and will jump to 101 tests by mid-September.  Broad but not deep at this time: now that the framework is set up, it is simply a matter of filling in additional tests.
    • Expanding test set: Participating vendors are expanding the set of tests by drawing upon their existing test suites, converting them into the TCK format, and including them in the published set.  We are ready to enter a period of rapid test expansion.
    • All freely available: It is all open source and available on GitHub.

    How We Got Here

    It was April 2016 that DMN emerged onto the stage of the BPMNext conference as an important topic.  I expressed skepticism that any standard could survive without actual running code that demonstrated correct behavior.  Written specifications are simply not detailed enough to fully describe any software, particularly one that has an expression language as part of the deal.  Someone challenged me to do something about it.

    We started meeting weekly in summer of 2016, and have done so for a complete year.  There has been steady participation from Red Hat, Camunda, Open Rules, Trisotech, Bruce Silver and me, and more recently Oracle and Actico.

    I insisted that the models be the standard DMN XML-based format.  The TCK does not define anything about the DMN standard, but instead we simply define a way to test that an implementation runs according to the standard.   We did define a simple XML test case structure that has named input values, and named output values, using standard XML datatype syntax.  The test case consists purely of XML files which can be read and manipulated on any platform in any language.

    We also developed a runner, a small piece of Java code which will read the test cases,  make calls to an implementing engine, and test whether the results match.  It is not required to use this runner, because the Java interface to the engine is not part of the standard, however many vendors have found this a convenient way to get started on their own specific runner.
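    The runner pattern can be sketched roughly as follows. The interface and names here are illustrative assumptions only, not the actual TCK runner or any engine’s real Java interface: read named inputs, call the engine, and compare the actual outputs against the expected ones.

    ```java
    import java.util.Map;

    // Hypothetical sketch of the runner pattern. The engine interface
    // below is invented for illustration; the real TCK leaves the Java
    // hookup to each vendor's own runner.
    public class TckRunnerSketch {
        interface DmnEngine {
            Map<String, Object> evaluate(String modelFile, Map<String, Object> inputs);
        }

        // A test case passes when every expected named output matches the
        // value the engine actually produced.
        static boolean runTestCase(DmnEngine engine, String modelFile,
                                   Map<String, Object> inputs,
                                   Map<String, Object> expected) {
            Map<String, Object> actual = engine.evaluate(modelFile, inputs);
            return expected.entrySet().stream()
                    .allMatch(e -> e.getValue().equals(actual.get(e.getKey())));
        }
    }
    ```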

    As we worked on the tests, we uncovered dozens, possibly hundreds, of places where the DMN spec was ambiguous or unclear.  One participant would implement a set of tests, and it was — without exception — eye opening when the second participant tried to run them.  This is the way that implementing a new language (FEEL) naturally goes.  The spec simply can not get 100% of all the edge cases, and the implementation of the tests forced this debate into the public.  Working together with the RTF we were able to come to a common understanding of the correct behavior of the evaluation engine.  Working through these cases was probably the most valuable aspect of the TCK work.

    A vendor runs the tests and submits a simple CSV file with all the results back to the TCK.  These are checked into GitHub for all to see, and that is the basis for the data presented on the web site.   We open the repository for new tests and changes in tests for the first half of every month.  The second half of the month is then for vendors that wish to remain current, to run all the new tests, and produce new results.  The updated web site will then be generated on the first of the next month.  Today, September 1, we have all the new results for all the tests that were available before mid August.  This way vendors are assured the time they need to keep their results current.

    The current status is that we have a small set of test cases that provide broad but shallow coverage of DMN capabilities.  A vendor who can pass the tests will be demonstrating a fairly complete implementation of all the DMN capabilities, but there are only a couple of tests for each functional area.  The next step will be to drive deeper, and to design tests that verify that each functional area works correctly in a larger number of special situations.  Some of the participating vendors already have such tests available in a non-TCK format.  Our immediate goal is to encourage participating vendors to convert those tests and contribute them to the TCK repository.  (And I like to remind vendors that it is to their advantage to do so, because adding tests that you already pass makes the test suite stronger, and forces other vendors to support functionality that you already have.)

    What this means to Consumers

    You now have a reliable source to validate a vendor claim that they have implemented the DMN standard.  On the web site, you can drill down to each functional category, and even to the individual tests to see what a vendor has implemented.

    Some vendors skip certain tests because they think that particular functionality is not important.  You can drill down to those particular tests, and see why the vendor has taken this stance, and determine whether you agree.

    Then there are vendors who claim to implement DMN, but are not listed on the site.  Why not?  All of the files are open source, made freely available at GitHub in standard, readily accessible formats.  Ask questions.  Why would a DMN implementation avoid demonstrating conformance to the standard when the test suite is freely available?  Are you comfortable making the investment in time to use a particular product when it can not publicly demonstrate this level of conformance to the spec?

    What this means to Vendors

    There are certainly a number of vendors who are just learning of this effort now.  It is not too late to join.  The last participant to join had the tests running in under two weeks.  We welcome any and all new participants who want to demonstrate their conformance to the DMN spec.

    To join, you simply need to read all the materials that are publicly available on the web site, send a note to the group using GitHub, plan to attend weekly meetings, and submit your results for inclusion in the site.  The effort level could be anywhere from a couple hours up to a max of 1 day per week.

    The result of joining the TCK is that you will know that your implementation runs in exactly the same way as the other implementations.  Your product gains credibility, and customers gain confidence in it.  You will also be making the DMN market stronger as you reduce the risk that consumers have in adopting DMN as a way to model their decisions.


    I have had the honor of running the meetings, but I have done very little of the real work.  Credit for actually getting things done goes largely to Edson Tirelli from Red Hat, and Bruce Silver, and a huge amount of credit is due to Falko Menge from Camunda, Jacob Feldman from Open Rules, Denis Gagne from Trisotech, Volker Grossmann and Daniel Thanner from Actico, Gary Hallmark from Oracle, Octavian Patrascoiu from Goldman Sachs, Tim Stephenson for a lot of the early work, Mihail Popov from MITRE, and I am sure many other people from the various organizations who have helped actually get it working even though I don’t know them from the meetings.    Thanks everyone, and great work!

    by kswenson at September 01, 2017 06:07 PM

    August 29, 2017

    Keith Swenson: Blogging Platforms

    Today I am pretty frustrated by WordPress, so I am going to vent a bit.  Ten years ago I picked it as the platform to start my first blog on, and here you have it: I am still here.  Yet I have seen so many problems in recent days that I will be looking for an alternative platform.

    What Happened?

    I spent a lot of time trying to set up a blog for a friend who has a book coming out and needed a place to talk about it. I said “blogs are easy” but that was a mistake.  Three days later and the blog is still not presentable.

    Strange User Restrictions – Using my login, I created a blog for her using her full name as the name of the blog (e.g. jsmith).  Then I wanted to sign her up as a WordPress user with “jsmith” as her username.  You can’t do that.  Since there was a blog with that name, you are not allowed to register a user with that name.  The point is that the blog is her blog.  Her own blog is preventing her from having her username.  How silly is that?

    Given that I created the blog, there is no way to then set the password on the user for that name, and since there is no email associated, there is no way to reset the password.

    You can’t just register a user.  If you want to register a user, you have to create another blog!  It walks you through creation of a blog before you can specify a password for the user account.  We already had the blog created; I just needed a way for her to log in.  The only way we found was to create yet another blog until finally, with a username she didn’t want, she could set a password.  Blogs and users are different things … it really does not have to be so hard.

    You Can’t Move/Copy a Site – One of WordPress’s impressive claims is that you can always move your site.  I had never tried until now, and can say it does not work.  I had previously set the blog up at a different blog address, so I wanted to move it.  Simply export and then import, right?  No.  You download a ZIP file, but it only has one file in it, an XML file.  There are none of the graphics, none of the media, and none of the settings.  Since it downloaded a ZIP file, at the import prompt I tried to upload the ZIP file.  This produces an arcane error message saying that a particular file is missing.  Strange.  I downloaded the ZIP file a few times.  Always the same result.  There are two different export commands, and they produce different output!

    Finally I tried to upload the XML file alone.  I knew this had no chance of moving the pictures and media, but since there were none in the ZIP file anyway, I tried.  This avoided the error, and acted like it was working.  Eventually, I got a mess.  It just added the pages to the pages that were already there.  Some of the old pages had special roles, like home and blog, so I couldn’t delete them in order to make way for the imported home and blog pages.  I have the same theme, but NOTHING looks the same.  None of the featured images were there.  No media files at all.  The sidebar (footer) text blocks were different.  I was horrified.  All this time I thought you could move a blog and not lose things.  This was eye opening.

    Incomprehensible Manage Mode – I have been trying for months to find out how to get from the “new” admin mode back to the blog itself.  That is, you edit a page, and you want to see how the page looks.  It gives you a “preview” mode which causes a semblance of the page to appear on top of the admin mode, but that is not the same thing, and the links do not work the same way.  After hours of looking, I still can not find any way to get “out” of admin mode.  You can “preview” the page, and then “launch” the page full screen.  That seems to do it, but it is a small pain.  Until now I have just edited the URL to get back to my blog.  In fact, I have taken to bookmarking the blog I am editing, and using the bookmark every few minutes to get out of admin mode.  It is ridiculous.

    Visual Editor Damages Scripts – One of my blogs is about programming, so I have some programming samples.  If you accidentally open a post in the “visual” editor, it strips out all the indentation and does other things to it.  The problem is that you have no control over the editor until AFTER you click to edit.  It is a kind of Russian roulette.  If you click edit and the visual editor appears, and then you switch to the HTML editor, your post is already damaged.  What I have to do is click edit and see what mode it is in.  If visual, I switch to HTML.  Then I use the bookmark mentioned above to return to the blog, abandoning the edits.  Now I hit edit again and it comes back in the right HTML mode.  This is a real pain, since for some of my posts I would like to use the visual editor, while for others, because of the corruption, I must use the HTML editor.  I worry forever that I will get the visual editor on a post that has source code further down the page, and accidentally save it that way.

    Backslashes Disappear – Besides ruining the indentation, at times it will strip out all the backslashes.  I got a comment today on a post from a couple of years ago that the code was wrong: missing backslashes.  Sure enough.  I have struggled with that post, but I am sure that when I left it the last time, all the backslashes were in place.

    Old vs. New Admin Mode – Right now I am using the old admin mode to write this — thank god — though I don’t know how to reliably get it.  The new admin mode is missing some features.  A few months ago I spent about an hour trying to find the setting to turn off some option that had somehow gotten turned on.  I finally contacted support, and they told me to find the “old” admin UI, where the setting could be manipulated.

    Can’t Change Blogs Without Manually Typing the Address – This is the strangest thing.  If I am on one blog, I can go to the menu that switches blogs and choose another of my blogs, but there is no way to get back “out” of admin mode.  I end up editing the address line.  How hard would it be to give a simple list of my blogs and let me navigate there?  The new admin UI is a nightmare.  It didn’t use to be that bad!

    Login/Logout Moves Your Location – If you are on a page which you would like to edit, but you are not logged in, I would expect to be able to log in and then click edit on the page.  No chance with WordPress.  When you are done logging in, you are in some completely different place!  You can’t use the browser back button to get back to where you were (which is reasonable, but I am trying to find a way around the predicament).  I then usually have to go search for the post.

    Edit Does Not Return You to the Page – If you are on a page and click edit, when you are done editing you are not put back on the page you started on.  It looks like your page, but there is an extra bar at the top, and links don’t work.

    Managing Comments is Inscrutable – When reviewing and approving comments, I want a link that takes me to the page in question, so I can see the page and the comment.  I think there is a link that does this, but it is hard to find.  The main link takes you to the editor for that page.  Not what I want, and as mentioned above it is impossible to get from the editor to the page.  I often end up searching for the blog page using the search function.  Other links take you to the poster’s web site, which is not always what I want either.

    Vapid Announcements – When I make a hyperlink from one blog post to another of my own blog posts, why does it send me an email announcing that I have a new comment on those posts?  I know it makes a back-link, but for hyperlinked posts within a single blog it seems the email announcement is not useful in any way.

    Sloppy Tech – I looked at the XML file produced for the site, and they use CDATA sections to hold your blog posts.  Any use of CDATA is a hack because it does not encode all possible character sequences, whereas regular XML escaping works perfectly.  I realize I am getting to the bottom of the barrel of complaints, but I want to be complete here.
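    The underlying technical issue: a CDATA section has no way to represent the sequence ]]> (which terminates it), while ordinary entity escaping covers any character. A quick illustrative sketch (the class and helper names are mine, not WordPress code):

    ```java
    // Sketch: entity escaping handles any text; CDATA cannot contain "]]>".
    public class CdataDemo {
        static String escapeXml(String text) {
            return text.replace("&", "&amp;")
                       .replace("<", "&lt;")
                       .replace(">", "&gt;");
        }

        public static void main(String[] args) {
            String post = "a[0]]]>crash"; // a blog post that happens to contain ]]>
            // Naive CDATA wrapping produces malformed XML: the embedded ]]>
            // terminates the section early.
            String cdata = "<![CDATA[" + post + "]]>";
            System.out.println(cdata.indexOf("]]>") < cdata.length() - 3); // prints true: early terminator

            // Plain escaping round-trips cleanly for any input.
            System.out.println(escapeXml(post)); // prints a[0]]]&gt;crash
        }
    }
    ```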

    What I want?

    • Keep it simple.
    • Let me navigate through my site like normal, but put a single edit button on each page that is easy to find and not in different places for different themes.
    • Then, when done editing, put me BACK on that page.
    • When I log in, leave me on the same page that I started the login from.
    • When I switch blogs, take me to the actual blog and not the admin for that blog.
    • Give me a simple way to exit the admin mode back to the actual blog.
    • And make a single admin mode that has all the functionality.
    • Don’t corrupt my pages by taking backslashes and indentation out.  Protect my content as if it was valuable.
    • Provide a complete export that includes all the media and theme settings as well
    • Provide an import that read the export and sets up the blog to be EXACTLY as the original that you exported.

    Is that too much to ask for?

    As yet, I don’t know of any better blogging platform.  But I am going to start considering other options in earnest.


    PS. As a result of writing this post, it forced me to figure out how to reliably get to the “old” admin interface, which remains workable in a very predictable manner.  Maybe if I try hard, I can avoid using the “new” admin interface completely, and avoid all those quirky usability problems.

    PPS. Now a new “View Site” button appears in the “new” admin mode to get back to the site, but this has the strange side effect of logging you out.  That is, you can see the page, but you are no longer logged in.  Strange.

    by kswenson at August 29, 2017 06:49 AM

    August 09, 2017

    Drools & JBPM: Talking about Rule Engines at Software Engineering Radio

    I had the pleasure of talking to Robert Blumen, at Software Engineering Radio, about Drools and Rule Engines in general.

    If you don't know this podcast, I highly recommend their previous episodes as well. Very informative, technically oriented podcast.

    Hope you enjoy,

    by Edson Tirelli at August 09, 2017 01:08 AM

    August 07, 2017

    Keith Swenson: Still think you need BPEL?

    Fourteen years ago, IBM and Microsoft announced plans to introduce a new language called Business Process Execution Language (BPEL) to much fanfare and controversy.  This post takes a retrospective look at BPEL, how things have progressed, and ponders the point of it all.


    In 2002, BPM was a new term, and Web Services was a new concept.  The term BPM meant a lot of different things in that day, just as it still does today, but of the seven different kinds of BPM, the one that is relevant in this context is Process Driven Server Integration (PDSI).  Nobody actually had many real web services at that time, but it was clear that unifying such services with a standard protocol passing XML back and forth was a path to the future.  Having a way to integrate those web services was needed.  Both Microsoft and IBM had offerings in the integration space (BizTalk and FlowMark respectively).  Instead of battling against each other, they decided to join forces and propose an open standard language for such integration processes.

    In April 2003 a proposal was made to OASIS to form a working group to define a language called BPEL4WS (BPEL for Web Services).  I attended the inaugural meeting for that group with about 40 other high-tech professionals.  It was a rather noisy meeting, with people jockeying for position to control what was perceived to be the new lingua franca for business processes.  The conference calls were crazy, and we must credit the leaders with a lot of patience to stick with it and work through all the details.  The name was changed to WS-BPEL, and after a couple of years a spec was openly published as promised.


    BPEL was originally proposed as an interchange format.  That is, one should be able to take a process defined in one product, and move it to another product, and still be executable.  It was to be the universal language for Process Driven Server Integration.

    Both Microsoft and IBM were on board, as well as a whole host of wannabes.  A group called the Business Process Management Initiative dumped their similar programming language called BPML in favor of BPEL, a clear case of “if you can’t beat ’em, join ’em.”

    It was designed from the beginning to be a “Turing-Complete Programming Language” which is a great goal for a programming language, but what does that have to do with business?  The problem with the hype is that it confused the subject of “server integration” with human business processes.  While management was concerned with how to make their businesses run better, they were being sold a programming language for server integration.

    The hype peaked after the spec was announced, but before it was finally published.  This happens with most proposed specs: claims that the proposal can do everything are hard to refute until the spec is actually published; only then can they be accurately assessed.  For more than 4 years BPEL existed in this intermediate state where inflated expectations could thrive.

    Who Needs It?

    At the time, I could not see any need for a new programming language.  Analysts at Gartner and Forrester were strongly recommending companies go with products that included BPEL.  I confronted them, asking “Why is this programming language important?” And the candid answer was “We don’t know, we just know that a lot of major players are backing it, and that means it is going to be a winner.”  It was a case of widespread delusion.

    My position at the time was clear: as a programming language it is fine, but it has nothing to do with business processes.  It was Derek Miers who introduced me to the phrase “BPEL does not have any B in it.”   The language had a concept of a “participant”, but a participant was defined to be a web service, something with a WSDL interface.

    In 2007 I wrote an article called “BPEL: Who Needs It Anyway?” and it is still one of the most accessed articles on BPM.COM.  In that article I point out that translating a BPMN diagram into BPEL places a limitation on the kinds of diagrams that can be executed, and that directly interpreting the BPMN diagram, something that has become more popular in the meantime, does not have this limitation.

    If what we need is a language for PDSI, then why not use Java or C#?  Both of those languages have proven portability, as well as millions of supporters.  When I asked those working on BPEL why they didn’t just make an extension to an existing language, the incredible response was: “We need a language based on XML.”  Like you need a hole in the head.

    Attempted Rescue

    The process wonks knew that BPEL was inappropriate for human processes, but still wanting to join the party, they proposed the cleverly named “BPEL 4 People” together with “WS-HumanTask.”    The idea is that since people are not web services, and since BPEL can only interact with web services, we can define a standardized web service that represents a real person, and push tasks to it.  While it is not a bad idea, and it incorporates some of the task delegation ideas from WF-XML, it fails to meet the needs of a real human process system because it assumes that people are passive receptors of business tasks.

    When a task is sent to a web service for handling, there is no way to “change your mind” and reallocate that to someone else.  BPEL, which is a programming language for PDSI, unsurprisingly does not include the idea of “changing your mind” about whom to send the task to.  Generally, when programming servers, a task sent to a server is completed, period.  There is no need to send “reminders” to a server.  There are many aspects of a human process which are simply not, and never should be, a part of BPEL.  Patching it up with representing people as standardized web services does not address the fundamental problem that people do not at any level interact in the same way that servers do.

    Decline of BPEL

    Over time the BPM community has learned this lesson.  The first version of BPMN specification made the explicit assumption that you would want to translate to BPEL.  The latest version of BPMN throws that idea out completely, and proposes a new serialization format instead of BPEL.

    Microsoft pulled away from BPEL as a core part of their engine, first proposing that BPEL would be an interchange format that they would translate to their internal format.  Oracle acquired Collaxa, an excellent implementation of BPEL, and they even produced extensions of BPEL that allowed for round-trip processing of BPMN diagrams using BPEL as the file format.  But Oracle now appears to be pulling away from the BPEL approach in favor of a higher-level direct interpretation of a BPMN-like diagram.

    Later it became doubtful that processes expressed in BPEL are interchangeable at any level.  Of course, a simple process that sticks to the spec and only calls web services will work everywhere, but it seems that to accomplish something useful every vendor adds extensions — calls to server specific capabilities.  Those extensions are valid, and useful, but they limit the ability to exchange processes between vendors.

    Where Do We Go From Here?

    To be clear, BPEL did not fail as a server programming language.  An engine that is internally based on BPEL for Process Driven Server Integration should be able to continue to do that task well.  To the credit of those who designed it for this purpose, they did an exemplary job.   As far as I know, BPEL engines run very reliably.

    BPEL only failed as:

    • a universal representation of a process for exchange between engines, and
    • a representation of a business process that people are involved in.

    BPMN is more commonly used as a representation of people oriented processes for direct interpretation.  Yet portability of BPMN diagrams is still sketchy — and this has nothing to do with the serialization format, it has to do with the semantics being designed by a committee.  But that is a whole other discussion.

    The business process holy grail still eludes the industry as we discover that organizations consist of interaction patterns that are much more complex than we previously realized.  No simple solution will ever be found for this inherently complex problem, but the search for some means to keep it under control goes on.  What I hope we learned from this is to be cautious about overblown claims based on simplified assumptions, and to take a more studied and careful approach to standards in the future.


    by kswenson at August 07, 2017 10:25 AM

    August 04, 2017

    Keith Swenson: A Strange FEELing about Dates

    The new expression language for the Decision Model and Notation standard is called the Friendly Enough Expression Language (FEEL).  Overall it is a credible offering, and one that is much needed in decision modeling, where no specific grammar has emerged as the standard.   But I found the handling of date and time values a bit odd.  I want to start a public discussion on this, so I felt the best place to start is this blog post, which can serve as a focal point for discussion references.

    The Issue

    A lot of decisions will center on date and time values.  Decisions about fees will depend on deadlines.  Those deadlines will be determined by the date and time of other actions.  You need to be able to do things like calculate whether the current transaction is before or after a date-time that was calculated from other date-time values.

    FEEL includes a data type for date, for time (of day) and for date-time.  It offers certain math functions that can be performed between these types and other numbers.  It offers ways to compare the values.

    Strange case 1: Would you be surprised that in FEEL you can define three date-time values, x1, x2, and x3 such that when you compare them all of the following are true?:

    x1 > x2
    x2 > x3
    x3 > x1.

    All of those expressions are true.  They are not the same date-time; they are all different points in time (a few hours apart in real time), but the “greater than” operator is defined in a way that dates cannot actually be sorted into a single order.

    Strange Case 2: Would you be surprised that in FEEL you can define two date-time values, y1, and y2, such that all of the following are false?:

    y1 > y2
    y1 = y2
    y1 < y2

    That is right, y1 is neither greater than, equal to, nor less than y2.

    What is Happening?

    In short, the strangeness in handling these values comes from the way that time zones and GMT offsets are used.  Sometimes these offsets and time zones are significant, and sometimes not.  Sometimes the timezone is fixed to UTC.  Sometimes unspecified timezones come from the server locales, and other times from the value being compared to.

    Date-time inequalities (greater-than and less-than) are handled in a different way than equals comparisons.  When comparing greater or less than, the epoch value is used (that is, the number of seconds from Jan 1, 1970 to that instant in time; the timezone is considered in that calculation).  But when comparing two date-time values for equality, they are not equal unless they come from the exact same timezone.

    It gets stranger with date-time values that omit the timezone.  If one of the date-time values is defined without a timezone, then the two values are compared as if they were in the same timezone.  This kind of date-time has a value that changes depending upon the timezone of the data value being compared to!
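    Under the rules just described, both strange cases fall out directly.  Here is a toy model in Java of those comparison rules (hypothetical sample values; a sketch of the semantics as described above, not the actual FEEL implementation):

```java
// Toy model: a date-time is a wall-clock hour plus an optional UTC offset.
public class FeelToy {
    static class DT {
        final double localHour;  // wall-clock hour of the day
        final Double offset;     // offset from UTC in hours; null = no timezone
        DT(double localHour, Double offset) {
            this.localHour = localHour;
            this.offset = offset;
        }
    }

    // Greater-than: if either side lacks a timezone, compare wall-clock
    // values as if both were in the same zone; otherwise compare the
    // actual instants (epoch values).
    static boolean gt(DT a, DT b) {
        if (a.offset == null || b.offset == null) {
            return a.localHour > b.localHour;
        }
        return (a.localHour - a.offset) > (b.localHour - b.offset);
    }

    // Equality: equal only when the values have the same wall-clock time
    // AND come from the exact same timezone.
    static boolean eq(DT a, DT b) {
        return a.offset != null && a.offset.equals(b.offset)
                && a.localHour == b.localHour;
    }

    public static void main(String[] args) {
        // Strange case 1: a comparison cycle
        DT x1 = new DT(12.0, null); // 12:00, no timezone
        DT x2 = new DT(11.0, 0.0);  // 11:00 UTC
        DT x3 = new DT(13.0, 8.0);  // 13:00 at UTC+8 (= 05:00 UTC)
        System.out.println(gt(x1, x2) && gt(x2, x3) && gt(x3, x1)); // true

        // Strange case 2: same instant, different zones
        DT y1 = new DT(15.0, -4.0); // 3pm New York (summer)
        DT y2 = new DT(12.0, -7.0); // 12 noon California (summer)
        System.out.println(!gt(y1, y2) && !eq(y1, y2) && !gt(y2, y1)); // true
    }
}
```

    The cycle in case 1 closes because the meaning of “greater than” flips between wall-clock comparison and instant comparison depending on which operand happens to carry a timezone.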

    Date values, however, must be the date at midnight UTC.  Timestamps taken in the evening in California on Aug 13 will be greater than a date value of Aug 14!  The spec is actually ambiguous.  At one point it says that the date value must be UTC midnight.  UTC midnight of Aug 14 is Aug 13 in California.  At other points it says that the time value is ignored and the numeric day value (13) would be used.  The two different interpretations yield different days for the date-time to date conversion.

    It gets even worse when you consider time zones at the opposite ends of the timezone spectrum.  When I call team members in Japan, we always have to remember to specify the date at each end of the call … because even though we are meeting at one instant in time, it is always a different day there.  This affects your ability to convert times to dates and back.

    Time of day values oddly can have a time zone indicator.  This may not strike you as odd immediately, but it should.  Time zones vary their offset from GMT at different times of the year.  California is either 8 or 7 hours from GMT, depending on whether you are in winter or summer.  But the time-of-day value does not specify whether it is in summer or winter.  Subtracting two time-of-day values can give values varying by 0, 1 or 2 hours depending on the time of year that the subtraction is done, and it is not clear even how to determine the time of year to use.  The server’s current date?  Your model will give different results at different times of the year.  Also, you can combine a date and a time-of-day to get a date-time, but it is not clear what happens when the time-of-day has a timezone.  For example, if I combine an Aug 14 date with a time-of-day of 8pm in California, do I get Aug 13 or Aug 14 in California?  Time-of-day has to be positive (according to the spec), but this appears to add 24 hours in certain cases where the timezone offset is negative.

    If that is not enough, it is not clear that a DMN model will be interpreted the same way in different time zones.  Remember that phone call to Japan?  The same DMN model running in Japan will see a different date than the same model running in California.  If your business rule says that something has to happen by April 15, a given timestamp in Japan might be too late, while the exact same time in California still has hours to go.
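    The date-shift problem is easy to demonstrate with the standard Java Calendar API (a sketch using a hypothetical sample timestamp):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class SameInstantDifferentDate {
    // Calendar day of month for one instant, viewed from a given zone
    static int dayOfMonth(long epochMillis, String zone) {
        Calendar c = Calendar.getInstance(TimeZone.getTimeZone(zone));
        c.setTimeInMillis(epochMillis);
        return c.get(Calendar.DAY_OF_MONTH);
    }

    public static void main(String[] args) {
        // April 15, 8pm in California (PDT) as a single epoch instant
        Calendar ca = Calendar.getInstance(TimeZone.getTimeZone("America/Los_Angeles"));
        ca.clear();
        ca.set(2017, Calendar.APRIL, 15, 20, 0, 0);
        long t = ca.getTimeInMillis();

        System.out.println(dayOfMonth(t, "America/Los_Angeles")); // 15
        System.out.println(dayOfMonth(t, "Asia/Tokyo"));          // 16
    }
}
```

    One instant in time, two different calendar days.  A rule keyed to “April 15” applied to this timestamp passes in California and fails in Japan.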

    I write systems that collect data all over the world.  We correlate and process events from a server running in India and compare to one in Finland and another in Washington DC.   I am left scratching my head to figure out how I am going to write rules that work the same way on data from different locations, and so those rules run exactly the same way on servers running in different time zones.  It is critical that these decision models be clear, unambiguous, and run the same way in every location.

    Solution is Simple

    Given all the systems that support date and time, it is surprising that FEEL does not just borrow from something that has been shown to work.  I take my position from Java which has solved the problem nicely.  The date-time value is well defined as the epoch value (number of milliseconds since Jan 1, 1970).  Then Java offers a Calendar object for all the rest of the calculations and conversions that takes into account all the vagaries of specific timezone offsets including daylight time switching.  The Calendar offers calculations like converting a string representation to a date, and converting a date back to a string.  This is already well tested and proven, so just use it.

    First: In the DMN spec, date-time values should simply be compared by using the epoch value, the number of seconds since Jan 1, 1970 UTC.    This value is already what is used for greater-than and less-than comparisons.  The spec should be changed to do the same for the equals comparison.  This would make the date-time value for 3pm in New York equal 12 noon in California for that same day.  This seems clearly to be what you want.    The current spec says these are NOT the same time.  This would give a clear order for sorting all date-time values.
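    Epoch-based comparison is exactly what Java already provides.  A minimal sketch of the recommended equals semantics, using hypothetical sample times:

```java
import java.util.Calendar;
import java.util.TimeZone;

public class EpochEquality {
    // Epoch value (ms since Jan 1, 1970 UTC) for a local wall-clock time
    static long epochMillis(String zone, int year, int month, int day, int hour) {
        Calendar c = Calendar.getInstance(TimeZone.getTimeZone(zone));
        c.clear();
        c.set(year, month, day, hour, 0, 0);
        return c.getTimeInMillis();
    }

    public static void main(String[] args) {
        long ny = epochMillis("America/New_York", 2017, Calendar.AUGUST, 14, 15);    // 3pm New York
        long ca = epochMillis("America/Los_Angeles", 2017, Calendar.AUGUST, 14, 12); // 12 noon California
        System.out.println(ny == ca); // true: same instant, so equal under epoch comparison
    }
}
```

    Because the epoch value is a single number, greater-than, less-than, equality, and sorting all agree with each other automatically.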

    Second: The DMN spec should then define a default timezone for any model.  Any date or time value without a timezone indicator is interpreted to be in the default time zone of the model.  Date-time calculations (such as adding 3 days, or converting a date-time to a date or a time) use a calendar for that time zone locale.  A date value would then be the 24-hour period for that date in that default calendar.   A time of day would be in the default timezone, and would probably handle daylight time changes correctly.
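    A model-level default timezone might be sketched like this (the MODEL_DEFAULT constant and sample date are hypothetical, chosen only to illustrate the idea):

```java
import java.util.Calendar;
import java.util.TimeZone;

public class ModelDefaultZone {
    // Hypothetical model-level default timezone, for illustration only
    static final TimeZone MODEL_DEFAULT = TimeZone.getTimeZone("America/Los_Angeles");

    // A date value becomes the instant at midnight in the model's default
    // zone, no matter which server the model happens to run on.
    static long startOfDay(int year, int month, int day) {
        Calendar c = Calendar.getInstance(MODEL_DEFAULT);
        c.clear();
        c.set(year, month, day, 0, 0, 0);
        return c.getTimeInMillis();
    }

    public static void main(String[] args) {
        long start = startOfDay(2017, Calendar.AUGUST, 14);
        Calendar c = Calendar.getInstance(MODEL_DEFAULT);
        c.setTimeInMillis(start);
        c.add(Calendar.DAY_OF_MONTH, 1); // the calendar handles DST edges
        long end = c.getTimeInMillis();
        // Any timestamp in [start, end) falls "on" Aug 14 for this model,
        // regardless of the server's own timezone.
        System.out.println(end - start); // 86400000 (a normal 24-hour day)
    }
}
```

    Pinning the zone to the model, rather than to the server, is what makes the model execute identically everywhere.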

    This solves most of the strangeness.  Since the model defines its own timezone, it always executes exactly the same way, no matter where the model is being interpreted.  You are never dependent on the “local timezone” of the server.  And, since identical points in time always compare as equal, even if those points in time came from different locations, the rules around handling time are clear, unambiguous, and “friendly enough”.

    Final Note

    I don’t actually know the rationale for the unusual aspects of the specification.  Maybe there is some special reason for the arcane approach.  If so, one might need to invent a couple of new date functions to handle them along with the scheme above.  I would hazard a bet that those functions would be identical to ones already on the Java Calendar object.  We really don’t need to be inventing a new and incompatible way of dealing with date values.  But, I will wait for feedback and see.




    by kswenson at August 04, 2017 12:50 AM

    August 01, 2017

    Drools & JBPM: Drools, jBPM and Optaplanner Day: September 26 / 28, 2017 (NY / Washington)

    Red Hat is organizing a Drools, jBPM and Optaplanner Day in New York and Washington DC later this year to show how business experts and citizen developers can use business processes, decisions and other models to develop modern business applications.
    This free full day event will focus on some key aspects and several of the community experts will be there to showcase some of the more recent enhancements, for example:
    • Using the DMN standard (Decision Model and Notation) to define and execute decisions
    • Moving from traditional business processes to more flexible and dynamic case management
    • The rise of cloud for modeling, execution and monitoring
    The target audience is IT executives, architects, software developers, and business analysts who want to learn about the latest open source, low-code application development technologies.

    Detailed agenda and list of speakers can be found on each of the event pages.

    Places are limited, so make sure to register ASAP!

    by Edson Tirelli ( at August 01, 2017 11:00 PM

    July 13, 2017

    Sandy Kemsley: Insurance case management: SoluSoft and OpenText

    It’s the last session of the last morning at OpenText Enterprise World 2017 — so might be my last post from here if I skip out on the one session that I have bookmarked for late this...

    [Content summary only, click through for full article and links]

    by sandy at July 13, 2017 04:16 PM

    Sandy Kemsley: Getting started with OpenText case management

    I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by...

    [Content summary only, click through for full article and links]

    by sandy at July 13, 2017 02:57 PM