Planet BPM

May 19, 2015

Drools & JBPM: A Comparative Study of Correlation Engines for Security Event Management

This paper just came up on my Google Alerts; you can download the full text from ResearchGate.
"A Comparative Study of Correlation Engines for Security Event Management"

 It's an academic paper, published in the peer-reviewed proceedings of the
"10th International Conference on Cyber Warfare and Security (ICCWS-2015)".

The paper evaluates correlation performance for large rule sets and large data sets in different open source engines. I was very pleased to see how well Drools scaled at the top end. I'll quote this from the conclusion and copy the results charts.
"As for the comparison study, it must be said that if the sole criteria was raw performance Drools would be considered the best correlation engine, for several reasons: its consistent behaviour and superior performance in the most demanding test cases."

In Table 2 (first image) we scale from 200 rules to 500 rules, with 1 million events, with almost no speed loss - 67s vs 70s.

In Table 1 (second image) our throughput increases as the event sets become much larger.

I suspect the reason why our performance is lower for the smaller rule and event set numbers is the engine initialisation time for all the functionality we provide and for all the indexing we do. As the matching time becomes large enough, due to larger rule and data sets, this startup time becomes much less significant in the overall figure. A rough sketch of how you might separate the two costs follows.
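To make that concrete, here is a minimal sketch (not from the paper) of how one might time initialisation separately from matching, using the standard KIE API; the session name "eventsKS" and the Event fact class are hypothetical.

    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;

    public class StartupVsMatching {
        record Event(int id) {}  // stand-in fact type; a real benchmark would use richer events

        public static void main(String[] args) {
            long t0 = System.nanoTime();
            KieServices ks = KieServices.Factory.get();
            KieContainer kc = ks.getKieClasspathContainer();    // compiles and indexes the rules
            KieSession session = kc.newKieSession("eventsKS");  // hypothetical session name
            long t1 = System.nanoTime();                        // one-off initialisation ends here

            for (int i = 0; i < 1_000_000; i++) {
                session.insert(new Event(i));
            }
            session.fireAllRules();
            long t2 = System.nanoTime();                        // matching and firing ends here

            System.out.printf("init: %d ms, match and fire: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
            session.dispose();
        }
    }

On small rule and event sets the first figure dominates; on large ones it fades into the noise.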




by Mark Proctor (noreply@blogger.com) at May 19, 2015 11:46 PM

May 18, 2015

BPM-Guide.de: long-running.net – A noteworthy new blog about BPM

There is a new kid on the block of BPM blogs: our tech lead Daniel Meyer has started blogging, and since Daniel strives for excellence in anything he does, I would strongly recommend subscribing to the feed, following him on Twitter, and reading his latest post about how to express asynchronous service invocations in BPMN.

Disclaimer: No, he did not ask me to promote his new blog. In fact he will probably be mad at me because I did – because now it looks like he *did* ask for it – but I won’t mind.

by Jakob Freund at May 18, 2015 07:06 PM

May 12, 2015

BPM-Guide.de: Camunda BPM 7.3 Release Webinar on June 2nd

Camunda BPM 7.3 will be released on May 31, 2015 (yep, right on schedule!), and it will be jam-packed with outstanding new features.

My personal favorites are:

Process Instance Modification: Flexibly start and stop any step within your process – you can even use it like a Star Trek-style “token transporter” and move your process instance from any current state into another. Check this out and be awestruck!

Super-Flexible Authorizations: Define who is able to do what within Camunda – for example, the members of a group “Marketing” are only allowed to start, see and work on “their” …

by Jakob Freund at May 12, 2015 07:38 PM

Drools & JBPM: Validation and Verification for Decision Tables

Decision tables are getting even more improvements, beyond the UI work Michael has been working on:
Zooming and Panning between Multiple Huge Interconnected Decision Tables
Cell Merging, Collapsing and Sorting with Multiple Large Interconnected Decision Tables

I am currently working on improving the validation and verification of decision tables: making it real time and improving the existing V&V checks.

Validation and verification are used to determine whether the given rules are complete and to look for bugs in the dtable author's logic. More about this subject.

Features coming in the next release


Real time Verification & Validation

Previously the user had to press a button to find out whether the dtable was valid or not. Now the editor does the check in real time, removing the need to constantly hit the Validate button. This also makes the V&V faster, since there is no need to validate the entire table, just to check how the change of a field affects the rest of the table. A sketch of this incremental idea follows.
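To illustrate the idea, here is a minimal sketch (hypothetical types, not the actual editor code): when one cell changes, only the pairs involving the changed row need to be re-checked.

    import java.util.ArrayList;
    import java.util.List;

    // Row is one dtable row, Issue is a V&V finding, and checkPair stands in
    // for the pairwise checks (redundancy, subsumption, conflict).
    class IncrementalValidator {
        List<Issue> revalidate(List<Row> rows, int changedRow) {
            List<Issue> issues = new ArrayList<>();
            for (int other = 0; other < rows.size(); other++) {
                if (other == changedRow) continue;
                issues.addAll(checkPair(rows.get(changedRow), rows.get(other)));
            }
            return issues; // O(n) work per edit instead of O(n^2) for the whole table
        }
        List<Issue> checkPair(Row a, Row b) { return List.of(); } // real checks go here
    }
    class Row {}
    class Issue {}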




Finding Redundancy 

To put it simply: two rows that are equal are redundant, but redundancy can be more complicated than that. The longer explanation is: redundancy exists when two rows perform the same actions when they are given the same set of facts.

Redundancy might not be a problem if the redundant rules are just setting a value on an existing fact; the value simply gets set twice. Problems occur when the two rules increase a counter or add more facts into the working memory. In both cases the other row is not needed.




 

 

Finding Subsumption

Subsumption exists when one row does the same thing as another, but for a subset of the values/facts of the other rule. In the simple example below I have a case where a fact with a max deposit below 2000 fires both rows.

The problems with subsumption are similar to those with redundancy. The sketch below shows how the two checks relate.
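As a sketch of how both checks can be expressed (a hypothetical model, not the actual implementation): treat each condition as a numeric interval, so subsumption becomes interval containment, and redundancy is simply mutual subsumption.

    import java.util.List;

    // Each condition is modelled as a closed numeric interval [min, max];
    // a row is a list of condition intervals plus the action it performs.
    record Interval(double min, double max) {
        boolean contains(Interval other) { return min <= other.min && other.max <= max; }
    }
    record Row(List<Interval> conditions, String action) {}

    class PairChecks {
        // a subsumes b: every fact that fires b also fires a, and the actions agree
        static boolean subsumes(Row a, Row b) {
            if (!a.action().equals(b.action())) return false;
            for (int i = 0; i < a.conditions().size(); i++) {
                if (!a.conditions().get(i).contains(b.conditions().get(i))) return false;
            }
            return true;
        }
        // redundancy is mutual subsumption: both rows fire on exactly the same facts
        static boolean redundant(Row a, Row b) { return subsumes(a, b) && subsumes(b, a); }
    }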






Finding Conflicts

Conflicts can exist either on a single row or between rows.
A single row conflict prevents the row's actions from ever being executed.

Single row conflict - second row checks that amount is greater than 10000 and below 1






A conflict between two rows exists when the conditions of both rules are met by the same set of facts, but the actions set existing fact fields to different values. The conditions might be redundant, or one might subsume the other.

The problem here is: how do we know which action is performed last? In the example below, will the rate be set to 2 or 4 in the end? Without going into the details, the end result may be different on each run and with each software version.
Two conflicting rows - both rows change the same fact to a different value
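Continuing the interval sketch from the subsumption section (again hypothetical, not the editor's code), both kinds of conflict can be checked mechanically:

    class ConflictChecks {
        // single row conflict: two conditions on the same field can never hold at
        // once, e.g. amount > 10000 and amount < 1 intersect in the empty set
        static boolean impossible(Interval a, Interval b) {
            return a.max() < b.min() || b.max() < a.min();
        }

        // conflict between rows: some fact can fire both rows (no pair of
        // corresponding conditions is disjoint) yet the actions disagree
        static boolean conflict(Row a, Row b) {
            for (int i = 0; i < a.conditions().size(); i++) {
                if (impossible(a.conditions().get(i), b.conditions().get(i))) return false;
            }
            return !a.action().equals(b.action());
        }
    }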

 

Reporting Missing Columns

In some cases, usually by accident, the user can delete all the condition or action columns.

When the condition columns are removed, all the actions are executed; when the action columns are missing, the rows do nothing.
The action columns are missing
The condition columns are missing










What to expect in the future releases?


Better reporting

As seen in the examples above, issue reporting is currently poor.
The report should let the user know how serious the issue is, why it is happening and how to fix it.

The different issue levels will be:
  • Error - Serious fault. It is clear that the author is doing something wrong. Conflicts are a good example of errors.
  • Warning - These are most likely serious faults. They do not prevent the dtable from working, but they need to be double-checked by the dtable author. Redundant or subsuming rules, for example: maybe the actions need to happen twice in some cases.
  • Info - The author might not want to have any conditions in the dtable. If the conditions are missing, each action gets executed; this can be used to insert a set of facts into the working memory. Still, it is good to inform the author that the conditions might have been deleted by accident.

 

Finding Deficiency

Deficiency causes the same kind of trouble that conflicts do: the conditions are too loose and the actions conflict.

For example:
If the loan amount is less than 2000 we do not accept it.
If the person has a job we approve the loan.
The problem is, we might have people with jobs asking for loans that are under 2000. Sometimes they get them, sometimes they do not.


 

Finding Missing Ranges and Rows

Is the table complete? In our previous examples we used the dtable to decide whether a loan application gets approved. One row in the dtable should always activate, no matter how the user fills out the loan application, either rejecting or approving the loan; otherwise the applicant never gets a loan decision.
The goal of the V&V tool is to find these gaps for the dtable author, as in the sketch below.
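A sketch of the gap check over one numeric field, reusing the Interval record from the earlier sketches: sort the ranges the rows cover and report any part of the field's domain left uncovered.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    class GapFinder {
        static void reportGaps(List<Interval> covered, double lo, double hi) {
            List<Interval> sorted = new ArrayList<>(covered);
            sorted.sort(Comparator.comparingDouble(Interval::min));
            double reached = lo;
            for (Interval iv : sorted) {
                if (iv.min() > reached) {
                    System.out.printf("no row covers %.0f .. %.0f%n", reached, iv.min());
                }
                reached = Math.max(reached, iv.max());
            }
            if (reached < hi) System.out.printf("no row covers %.0f .. %.0f%n", reached, hi);
        }
    }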

 

Finding Cycles

The actions can insert new facts, and the conditions trigger actions when new facts are inserted. This can cause an infinite number of activations.
This is a common mistake, and the goal is to catch it in the authoring phase with the V&V tool, for example with a cycle check like the sketch below.
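One way to catch this (a sketch, not the actual implementation) is to build a graph with an edge from row A to row B whenever A's action inserts a fact type that B's conditions match, and then look for a cycle:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    class CycleFinder {
        // triggers maps each row id to the rows its actions can activate
        static boolean hasCycle(Map<String, List<String>> triggers) {
            Set<String> done = new HashSet<>(), inStack = new HashSet<>();
            for (String row : triggers.keySet()) {
                if (dfs(row, triggers, done, inStack)) return true;
            }
            return false;
        }

        private static boolean dfs(String row, Map<String, List<String>> triggers,
                                   Set<String> done, Set<String> inStack) {
            if (inStack.contains(row)) return true;  // back edge: a cycle exists
            if (done.contains(row)) return false;
            inStack.add(row);
            for (String next : triggers.getOrDefault(row, List.of())) {
                if (dfs(next, triggers, done, inStack)) return true;
            }
            inStack.remove(row);
            done.add(row);
            return false;
        }
    }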

by Toni Rikkola (noreply@blogger.com) at May 12, 2015 05:20 PM

May 07, 2015

Sandy Kemsley: SapphireNow User Experience Q&A with Sam Yen

Wrapping up day 2 of SAPPHIRE NOW 2015, a small group of bloggers met with Sam Yen, SAP’s Chief Design Officer, to talk about user experience at SAP. That, of course, means Fiori: the user...

[Content summary only, click through for full article and links]

by sandy at May 07, 2015 01:09 AM

May 06, 2015

Sandy Kemsley: SapphireNow 2015 Day 2 Keynote with Bernd Leukert

The second day of SAP’s SAPPHIRENOW conference started with Bernd Leukert discussing some customers’ employees worry of being disintermediated by the digital enterprise, but how the...

[Content summary only, click through for full article and links]

by sandy at May 06, 2015 04:22 PM

Sandy Kemsley: IoT Solutions Panel at SapphireNow 2015

Steve Lucas, president of platform solutions at SAP, led a panel on the internet of things at SAPPHIRENOW 2015. He kicked off with some of their new IoT announcements: SAP HANA Cloud Platform (HCP)...

[Content summary only, click through for full article and links]

by sandy at May 06, 2015 12:30 PM

May 05, 2015

Sandy Kemsley: Consolidated Inbox in SAP Fiori at SapphireNow 2015

I had a chance to talk with Benny Notheis at lunchtime today about the SAP Operational Intelligence product directions, and followed on to his session on a consolidated inbox that uses SAP’s...

[Content summary only, click through for full article and links]

by sandy at May 05, 2015 10:10 PM

Sandy Kemsley: SapphireNow 2015 Day 1 Keynote with Bill McDermott

Happy Cinco de Mayo! I’m back in Orlando for the giant SAP SAPPHIRE NOW and ASUG conference to catch up with the product people and hear about what organizations are doing with SAP solutions....

[Content summary only, click through for full article and links]

by sandy at May 05, 2015 03:57 PM

May 01, 2015

Keith Swenson: Analytics in the Swarm

Big data is a style of data analysis that reflects a return to large, centralized data repositories. Processing power and memory are getting cheaper, while the bandwidth among all the smart devices remains a barrier to getting all the data together in one place for analysis. The trend is toward putting the analytics into the swarm of devices known as the Internet of Things (IoT).

This is an excerpt from the chapter “Mining the Swarm” by Keith D Swenson, Sumeet Batra, and Yasumasa Oshiro, all from Fujitsu America, published in the new book “BPM Everywhere.”

Mainframe Origins

The first advances into the field of computing machinery were big, clumsy, error-prone electrical and mechanical devices that were not only physically large but extremely expensive, requiring specially designed rooms and teams of attendants to keep them running. The huge up-front investment necessary meant that the machines were reserved exclusively for the most important, most expensive, and most valuable problems.

We all know the story of Moore’s Law and how the cost of such machines dropped dramatically year after year. At first the cost savings meant only that such machines could be dramatically more powerful and could handle many programs running at the same time. The machines' time was split into slices that could be used by different people at different times. Swapping machine time among different accounts did, in the end, represent an overhead and a barrier to use. The groups running the machines needed to charge by the CPU cycle to pay for the machine. While there were times that the machine was under-utilized, it was never possible to really say that there were 'free' cycles available to give away. The cost-recovery motive can't allow that.

Emergence of Personal Computers

The PC revolution was not simply a logical step due to decreased costs of computing machinery, but rather a different paradigm. By owning a small computer, the CPU cycles were there to be used or not used as one pleased. CPU cycles were literally free after the modest capital cost of the PC had been paid. This liberated people to use the machines more freely, and opened the way to many classes of applications that would have been hard to justify economically on a time-share system. The electronic spreadsheet was born on the PC because spending expensive CPU power just to update the display for the user could not otherwise be justified. The mainframe approach would be to print all the numbers onto paper, have the analyst mark up the paper, get it as right as possible, and then have someone input the changes once. The spreadsheet application allowed a user to experiment with numbers; play with the relationships between quantities; try out different potential plans; and see which of many possible approaches looked better.

The pendulum had swung from centralized systems to decentralized systems; new applications allowed CPU cycles to be used in new, innovative ways, but PC users were still isolated. Networking was still in its infancy, but that changed in the '90s, and it changed everything.


World Wide Web

The Internet meant that PCs were no longer simply equipment for computation, but became communications devices. New applications arrived for delivering content to users. The browser was invented to bring resources from those remote computers and assemble them into a coherent display on user demand. Early browsers were primitive, and there were many disagreements on what capabilities a browser should have to make the presentation of information useful to the user. The focus at that time was on the web server which had access to information in a raw form, and would format the information for display in a browser. Simply viewing the raw data is not that interesting, but actually processing that data in ways customized by the user was the powerful value-add that the web server could provide. Servers had plug-ins, and the Java and JavaScript languages were invented to make it easier to code these capabilities and put them on a server.

The pendulum had swung back to the mainframe model of centralized computing. The web server, along with its big brother the application server, were the most important processing platforms at that time. The web browser allowed you to connect to the results of any one of thousands of such web servers, but each web server was the source of a single kind of data.

Apps, HTML5, and client computing

Web 2.0 was the name of a trend for the web to change from a one-way flow of information to a two-way, collaborative flow that allowed users to be more involved. At the same time, an interesting technological change brought about the advent of 'apps': small programs that could be downloaded, installed, and run more or less automatically. This trend was launched on smart phones and branched out from there. HTML5 promises to bring the same capability to every browser. Once again the pendulum had swung in the direction of decentralization; servers provide data in a more raw form and apps format the display on a device much closer to the user.

Cloud Computing & Big Data

More recently the buzz terms are cloud computing and big data. Moving beyond the basic provision of first-order data, large computing platforms are collecting large amounts of data about people as they use the web platforms. Memory capacity has grown so quickly, and its cost dropped so quickly, that there is no longer any need to throw anything away. The huge piles of data collected can then be mined, and surprising new insights gained.

Cell phones automatically report their position and velocity to the phone company. For cell phones moving quickly — or not so quickly — on a freeway, this is important information about traffic conditions. Google collects this information, determines where the traffic is running slow and where it is running fast, and displays the result on maps using colors to indicate traffic conditions. The cell phone was never designed as a traffic monitor. It took an insightful engineer to realize that, out of a large collection of information gathered for one purpose, good information about other things could be deduced.

Big data means just that: data that is collected in such quantity that special machines are needed to process it. A better way to think of it is that the data collection is so large that, even at the fastest transfer speeds, it would take days or months to move it to another location. The idea that you have a special machine for analyzing data does not help at all if the data set needs to be transported to that machine and the time for this transfer would be prohibitive. Instead of bringing the data to the analysis machine, you have to send the analysis to the machine holding the data. The pendulum had once again swung to centralized machines with large collections of data.

Analytics in the Swarm

The theme of this chapter is to anticipate the next pendulum swing: big-data-style analytics will become available in a distributed fashion, away from the centralized stockpiles of information. While the challenge in big data is volume and velocity, what sensor technology and the IoT bring is variety: data that had not previously been leveraged, what we call “dark data.” Dark data is attracting people as a new data source for mining. Each hardware sensor collects specific data such as video, sound (in streams), social media text, stock prices, weather, temperature, location, and vital signs. Analyzing this data is a challenge since these devices are so distributed and relatively difficult to aggregate in the traditional ways. So the keys to analyzing sensor data are how to extract useful data (metadata) or compress it, and how to interact with other devices or a central server. Some say that a machine-to-machine (M2M) approach is called for.

There are a number of reasons to anticipate these trends, as well as evidence that this is beginning to happen today.

Read the rest in “BPM Everywhere,” where there is more evidence for how memory and processor costs are falling far faster than telecommunications costs, meaning that processing should move closer to the devices, along with examples of how analytics might be used to achieve greater operating efficiencies everywhere.


by kswenson at May 01, 2015 10:53 AM

April 28, 2015

Keith Swenson: Montreal Conference

I will be speaking in Montreal in May at a conference and at an associated workshop about Innovation and Business Processes.

I have been asked to do a keynote at the MCETECH 2015 conference in Montreal.  The conference is all about bringing together researchers, decision makers, and practitioners interested in exploring the many facets of Internet applications and technologies.

My talk, on May 14th, is called “Robots don’t innovate: Innovation vs. Automation in BPM”, where I will present many ideas from the “Thinking Matters” book.

I will also be speaking May 12th at the “Workshop on Methodologies for Robustness Injection into Business Processes” where I will get to geek out a little more on the implementation side of BPM software engineering and distributed system design.

Looking forward to my first visit to Montreal.  Hope to see some of you there!

ALSO –  Book Released

New book released:   BPM Everywhere: Internet of Things, Process of Everything


by kswenson at April 28, 2015 10:38 AM

April 27, 2015

Drools & JBPM: Cell Merging, Collapsing and Sorting with Multiple Large Interconnected Decision Tables

Last month I showed you videos for our proof of concept work, using Lienzo, to see how viable HTML5 canvas is for multiple large interconnected decision tables.

Michael's made more progress, adding cell merging and collapsing as well as column sorting. All still working with truly massive interconnected tables.

We plan to make the generic core of this work available as a Lienzo grid component in the future, although we still need to figure out different data types for cells and how to do seamless in-cell editing rather than a popup.

(click to turn on 720p HD and full screen)

by Mark Proctor (noreply@blogger.com) at April 27, 2015 10:18 PM

Drools & JBPM: Domain Extensions for Data Modeller

Walter is working on adding domain extensions to the Data Modeller. These will allow different domains to augment the model - such as custom annotations for JPA or OptaPlanner. Each domain is pluggable via a "facet" extension system. Currently, as a temporary solution, each domain extension is added as an item in the toolbar, but this will change soon. In parallel, Eder will be working on something similar to IntelliJ's Tool Windows for side bars. Once that is ready, those domain extensions will be plugged in as facets and exposed via this tool window capability. Here is a video showing JPA and its annotations being used with the Data Modeller.

(Click to turn on 720p HD and full screen)

by Mark Proctor (noreply@blogger.com) at April 27, 2015 02:48 PM

April 23, 2015

Keith Swenson: Process Focus vs. System Architecture

Too much of a focus on the business process can cause a business solution to be poorly designed and problematic. This is a story from several customers who followed the BPM methodology too well, and were blindsided by some nightmarish systems issues. Too much process can be a real problem.

Process is King

We know that the mantra for BPM is to design everything as a process.  The process view of work allows you to assess how well work gets from beginning to end.  It allows you to watch and optimize cycle time, which is essential to customer satisfaction.

BPM as a management practice is excellent.  However, many people see BPM as a way to design an application.  A process is drawn as a diagram, and from this the application is created.  This can be OK, but there is a particular pitfall I want to warn you about.

A Sample Process

Consider the following hypothetical process between servers in a distributed environment:

Here we have a process in system B (in the middle) that splits into a couple of parallel branches. Each branch uses a message to communicate with an external remote system (systems A and C) and start a process there. When those processes complete, the messages come back and eventually the middle process completes. This is a “remote subprocess” scenario.

What is the matter with this?  This seems like a pretty straightforward process. The middle process easily sends a message. Receipt of that message easily starts a process. At the end of that process, it easily sends a message back, which can easily be received. What could go wrong?

Reliability: Exactly-Once

The assumption being made in this diagram is that the message is delivered exactly once.  “Exactly-once” is a term of art that means that the message is delivered with 100% reliability, and a duplicate is never seen by the receiver.

Any failure to deliver a message would be a big problem: either the sub-processes would not be started, or the main process would not get the message to continue. The overall process would then be stuck. Completely stuck. The middle process would be inconsistent with the remote processes, and there is no way to ever regain consistency.

So, then, why not just implement the system to have exactly-once message delivery? Push the problem down to the transport level. Build in reliability and checking so that you have exactly-once delivery. In a self-contained system, this can be done. To be precise, within a single host, or a tightly bound set of hosts with distributed transactions (two-phase commit), it is possible to do this. But this diagram is talking about a distributed system. These hosts are managed independently. The next section reveals the shocking truth.

Exactly-Once Delivery does not Exist

In a distributed system where the machines are not logically tied and managed as a single system, it is not possible to implement — nor do you want to implement — true exactly once reliable message delivery.  Twice recently, a friend of mine from Microsoft referenced a particular blog post on this topic:  You Cannot Have Exactly-Once Delivery.  There is another discussion at: Why Is Exactly-Once Messaging Not Possible In A Distributed Queue?

This is a truism that I have believed for a long time. I never expect reliable message delivery. There is a thought experiment that helps one understand why, even if we could implement exactly-once delivery, you would not want it. Think about back-up, and restoring a server from backup. Systems A, B, and C are managed separately. That means they are backed up separately. Imagine that a disk blows up on system C. That means a replacement disk will be deployed, and the contents restored from backup to a state from a few moments to a few hours ago. Messages that were reliably delivered during that gap are now certainly not delivered, and the system is stuck. The process that had been rolled back will send extra messages, which will in turn cause redundant processes on the remote systems, which might (if the interactions were more elaborate) cause them to get stuck.

Exactly once delivery attempts to keep the state of systems A, B, and C in sync.  Everything works in the way that a Rube Goldberg machine works: as long as everything works exactly as expected you can complete the process, but if anything trips up in the middle all is lost.   The backup scenario destroys the illusion of distributed consistency.  System C is not in sync, and there is no way to ever get into sync again.

So .. All is Lost?

We need reliable business processes, and it turns out that can be done using a consistency seeking approach.  What you have to do is to assume that messages are unreliable (as they are).  From a business process point of view, you want to visualize the process as a message delivered, but you do not want to architect the application to literally use this as the mechanism of coordination between the systems.

You need a background task that reads the state of the three systems and attempts to get them into sync. For example, when system B sends a message to system C, it also records the fact that it expects system C to run a subprocess. System C, when receiving a message, records the fact that it has a subprocess running for system B. A background task will ask system B for all the subprocesses that it expects to be running on system C, then it asks system C for a list of all the processes it actually is running for system B. If there is a discrepancy, it takes action.

Consider, for example, system B having a process XYZ that is waiting on system C for a subprocess. The consistency seeker asks system C if it has a process for XYZ running. There are two problem scenarios: either there is no such process, in which case it tells system B to re-send the message starting the process; or the process is there but has already completed, in which case it tells system C to resend the completion message. So if things are out of sync, a repeat message is prompted. The other requirement is that if, by bad luck, a redundant message is received, it is ignored. Those two things, resending messages and ignoring duplicates, are the essential ingredients of implementing reliable processes on top of an unreliable transport — and it works in distributed systems. A sketch of such a reconciliation task follows.
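Here is a minimal sketch of that consistency seeker; the SystemB and SystemC interfaces are hypothetical stand-ins for the two engines' query and messaging APIs, not any particular product's.

    import java.util.Set;

    interface SystemB {
        Set<String> expectedSubprocessesOnC();        // ids B believes C is running for it
        void resendStartMessage(String processId);
    }
    interface SystemC {
        Set<String> runningForB();
        Set<String> completedForB();
        void resendCompletionMessage(String processId);
    }

    class ConsistencySeeker {
        void reconcile(SystemB b, SystemC c) {
            Set<String> running = c.runningForB();
            Set<String> completed = c.completedForB();
            for (String id : b.expectedSubprocessesOnC()) {
                if (running.contains(id)) {
                    continue;                          // in sync, nothing to do
                } else if (completed.contains(id)) {
                    c.resendCompletionMessage(id);     // completion message was lost
                } else {
                    b.resendStartMessage(id);          // start message was lost
                }
            }
            // The receivers must ignore duplicates: a resent start for a process
            // that already exists, or a completion already seen, must be a no-op.
        }
    }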

Consistency Seeking

Consistency seeking solves the problem at the business process level, and not at the transport level.

It even works if the system is restored from backup.  For example, imagine that system B (the middle system) is restored from a backup made yesterday, while systems A and C are left in today’s state.  In such a case, there may be processes that had been completed, but are not yet completed in the restored state.  The consistency seeking mechanism will check, and will prompt the re-sending of the messages that will eventually bring the systems into a consistent state.  It is not perfect—there are situations where you can not automatically return to synchronized state—but it works for most common business scenarios.  It certainly works in the case where a simple message was lost.  It is far less fragile than the system that assumes that every message is delivered exactly-once.

Conclusion

Process-oriented thinking causes us to think about processes in isolation. We forget that real systems need to be backed up, real systems go up and down, real systems are reconfigured independently of each other. The process-oriented approach ignores all of that to focus exclusively on one process, with the assumption that everything in that process is always perfectly consistent.

This does not mean that you should not design with a process.  It remains important for the business to think about how your business is running as a process.   But, naïvely implementing the process exactly as designed will result in a system that is not architected for reliability in a distributed environment.  BPM is not a replacement for good system architecture.


by kswenson at April 23, 2015 10:05 AM

April 14, 2015

BPM-Guide.de: Free: Camunda BPM Online Training

My formidable Co-Founder Bernd Rücker created a self-paced training course for Camunda BPM. It consists of 4.5 hours of video plus a couple of hands-on exercises with sample solutions.

You can complete this course if you want to get your feet wet with Camunda, plus it provides some valuable insights into best practices from our consulting experience, e.g. for creating UIs in different technologies, writing unit tests or handling transactions.

And it’s free! You just have to sign up for the Camunda BPM Network, and off you go.

Get the Camunda BPM Online Training

by Jakob Freund at April 14, 2015 01:07 PM

Sandy Kemsley: London Calling To The Faraway Towns…For EACBPM

I missed the IRM Business Process Management Europe conference in London last June, but will be there this year from June 15-18 with a workshop, plus a breakout session and a panel session. It’s...

[Content summary only, click through for full article and links]

by sandy at April 14, 2015 11:41 AM

April 13, 2015

Keith Swenson: bpmNEXT – Day 2

Here are my notes from the second day of bpmNEXT, March 31, 2015. Note: I spoke on day 3 and was too busy to take notes then, so these conclude my notes of the event.

 

Michael Grohs, Sapiens DECISION – How to manage Decision Logic

Decision aware business models are simpler, easier to maintain.  Less ambiguity than natural language descriptions.  Rule content can be managed by users because it is clearly separate from the program.  Communities make their own vocabularies.  Decision management can produce rules that run in several different rules engines.

Showed the decision design tool. Typically a whole team works on it, playing different roles. There is a defined process for changing rules, and every change is tracked. (The product has these JavaScript “windows” inside a browser, and the demo seemed to have trouble managing the windowing.)

A decision starts with an octagon, then some squares with the top corners chopped off, which represent rule families, linked together with arrows. Each rule family declares what values (facts) it generates. In a typical flat rule set the inferential information is lost, and once lost it is hard to maintain. Opening a rule family, it looks like a table, essentially a decision table. Each row is ORed with the earlier lines. Not sure if it is the first line that matches, or the last line that matches.

The example was a rule base, and then a specialized rule base for a particular state: Florida. You can see a side-by-side window, with the differences highlighted. There are logs of all changes.

Q: What about knowledge workers?  Can they use it, and can they have their own rules?

A: Right now focusing on the automation, the 80/20 part.  Everything is about faster and cheaper.  In the future we may think about more elaborate rules.  Agility and closed loop with knowledge worker is the next step.

Gero Decker, Signavio – Business Decision Management

BPM addresses 50% of the questions. The rest is making decisions. There is a new standard, Decision Model and Notation (DMN 1.0). Signavio Decision Manager concentrates on the modeling and governance.

Drew up a quick BPMN diagram. Used a “business rule” task. Open that up and see a decision tree. Open one node in the tree and it looks like a decision table. Created a quick example decision table. A decision node in the tree can have “sub-decisions”, which are more decision tables. The product does some decision checking. There is a rule testing capability as well.

DMN is pretty powerful; it covers rules as well as predictive analytics – the latter not supported yet by Signavio.

Export to DROOLS code, which is pretty flat view of all the rules.  The hierarchy is not apparent, but coded into the rules.  There is a declare statement at the top for run-time binding to the execution environment.

As for execution of the rules in the DMN standard, there are differing hit policies: first hit, multiple hits, some sort of weighting, last hit. He demoed ‘exclusive’, which means that you cannot have overlapping rules; in the exclusive case the tool automatically shows you when rules overlap.

John Reynolds – Kofax – Digital World

Used to be that BPM ignored the physical world, and BPEL is the best example of that. Now we need to engage the customers in the real world. One rule: don’t force users to gather information that is already out there — use a robot instead. Some information is there in paper form. Claims that a lot of people still print their PDF files. Scanning is what Kofax does, and Kapow was awarded last year for creating BPM processes. Now, SignDoc for signing applications.

For the demo, he got out a utility bill and a driver’s license. Processes from the past too often assumed that all the information was already there. He holds the smart phone over the document, and it captures the document from the video stream, processing and cleaning up the image specifically for optical character recognition. Processed on the phone. The image has the coffee stain removed, and is made black and white. This cleanup is a kind of “compression”, which is important for mobile and storage.

The documents are scanned in using some libraries for scanning that they make available to put into custom apps. The user does not have to re-enter anything … just take pictures of the documents.

These documents “teach” the transformation servers. There is not any coding, but instead teaching. Characters are recognized, and then the fields on the document are recognized as well. There is a manual correction step that feeds back to improve the recognition.

Mike Marin – Mobile Case Management and Capture

Mobile is no longer optional. The use case is an insurance company that has unhappy customers and decides to implement a mobile app to make the customer experience better. Will show content, capture, and case — all on the mobile device.

Again, with the phone, took a picture of the document.  After taking the picture, there are some options for cleaning up the document and submit it to the process / case.  Can review the contents.

Robert Shapiro – Process Discovery

Take the event logs, and mine them in order to work out a BPMN process model that will have the same statistical behaviors as the event log.  Example demo will be on stat orders. Looking at analytics, we see that we are not meeting the objective KPI.  First figure out all the paths, and analyze all the variants, and find a critical path.  We can see that one step is causing a lot of the delays.  Propose two different strategies: one to reduce time, another to reduce costs.  After optimization, we meet the delay time reduction criteria, and we see that the second case has better cost benefit values.

Started the demo by opening an event log. This creates a BPMN serialization. He has used the idea of “strict BPMN” to enforce BPMN semantics on the model. It found a top model and two sub-models. It created events and gateways which were never in the log — they had to be figured out. He showed a series of different models that had been mined. One had found 3 parallel tasks. The models also look at the data, and find correlations between different data items and paths. This can be used to mine the branch conditions. Can discover a 1-hour timer event task as well.

Can even detect manipulations of data, if data values are captured in the event log.

Q: (Bruce) Impressive to figure this all out from the logs. What happens if data causes a branch 90% of the time, but not 100%?

A: It never requires 100%. There are statistical assessments of the conditions. Not 100% precise.

Q: Is simulation the key difference from Disco and the others?

A: you need to have a complete, executable model if you want to make changes and improve the model.

Tim Stephenson – Omny Link – Toward Zero Code

Looked at a bunch of self-coding tools but found that things didn’t work too well. Going to focus today on decisions. WordPress claims that it runs 1/4 of the web. “Firm Gains” is a business for selling your business. First step is to build a form. Then a decision table.

Demo started by logging into WordPress. Edited what looked like a blog post, but it was actually a form. Standard list of fields without any programming. Went to another page, inserted a square-bracket-style WordPress tag to include the form, and it appeared. Very easy, very simple workflow processes.

Scott Francis – BP3 – Sleep at Night Again

How to automate static analysis for BPM. Your design team is not always experienced. Once they select technology, they implement, and many times the result is over-engineered and hard to maintain, and sometimes has to be thrown out. In reality, the cruft does not get added all at once; instead it is incremental. Neches is a tool to analyze the code, find problems early in the iterations, and keep them from building up. It does a complexity measurement on the application.

Neches is a SaaS tool. You can sign up for an account. Users upload their application and drill into it. There can be many versions, and you can look at how the measurements have changed over time. You can drill down into the individual metrics. Example: the length of JavaScript server scripts triggers a warning if longer than a threshold. Particular rules can be excluded (if you don’t agree with them), or particular flagged issues can also be excluded.

Q: How complicated is it to create new rules?

A: Not too hard.  Today this is not exposed, but internally we find it easy to do, and believe once this is exposed people will find it easy enough.

Linus Chow – Oracle – Rapid Process Excellence

Showed a web console for starting and interacting with BPM applications.  Mobile interface as well.


And that is it. On day 3 I had a presentation and was too busy to focus on taking good notes, and for the entire session before mine the laptop was sequestered. Overall bpmNEXT remains a place for very forward discussion of new directions, a place that is helpful for me to stay on top of things. The new venue — Santa Barbara — is likely to remain the choice for next year. I am looking forward to it already.


by kswenson at April 13, 2015 10:25 AM

April 10, 2015

BPM-Guide.de: From Push to Pull – External Tasks in BPMN processes

A process engine typically calls services actively (e.g. via Java, REST or SOAP) from within a Service Task. But what if this is not possible because we cannot reach the service? Then we use a pattern we call “External Task” – which I briefly want to describe today. A rough sketch of the worker side follows.
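Roughly, the worker side of the pattern looks like the following sketch; TaskApi and Task are hypothetical client types, not the actual Camunda API.

    import java.util.List;
    import java.util.Map;

    interface TaskApi {
        List<Task> fetchAndLock(String topic, int max);  // claim up to max open tasks
        void complete(String taskId);
        void reportFailure(String taskId, String error);
    }
    interface Task {
        String id();
        Map<String, Object> variables();
    }

    class ExternalTaskWorker {
        private final TaskApi api;
        ExternalTaskWorker(TaskApi api) { this.api = api; }

        void run() throws InterruptedException {
            while (true) {
                // the worker polls, so only an outbound connection is needed
                for (Task task : api.fetchAndLock("payment", 10)) {
                    try {
                        doWork(task.variables());        // the actual service logic
                        api.complete(task.id());
                    } catch (Exception e) {
                        api.reportFailure(task.id(), e.getMessage());
                    }
                }
                Thread.sleep(5_000);  // simple poll interval; long polling also works
            }
        }
        void doWork(Map<String, Object> vars) { /* domain logic */ }
    }

The idea: the engine simply waits at the task, and the worker, which can live behind a firewall, pulls work whenever it can reach the engine.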

Picture on the right taken from http://www.from-push-to-pull.com/projects/what-is-pull-marketing/ – thanks!

Context and problem

A couple of recent trends increased the need for this pattern, namely:

Cloud: When running process/orchestration engines in the cloud, you might not be able to reach the target service via network connections – and VPNs or tunneling are always cumbersome. It is …

by Bernd Rücker at April 10, 2015 10:10 AM

April 09, 2015

BPM-Guide.de: Orchestration using BPMN and Microservices – Good or bad Practice?

Martin Fowler recommends in his famous Microservices Article: “Smart endpoints and dumb pipes”. He states:

The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible – they own their own domain logic and act more as filters in the classical Unix sense – receiving a request, applying logic as appropriate and producing a response. These are choreographed using simple RESTish protocols rather than complex protocols such as WS-Choreography or BPEL or orchestration by a central tool.

I do not agree! I think even – …

by Bernd Rücker at April 09, 2015 11:35 AM

April 02, 2015

BPM-Guide.de: bpmNEXT – the BPM industry event that *really* matters

Picture taken by Benjamin Notheis from SAP, this year’s winner of the best-in-show-award

Clay Richardson from Forrester Research put it in a nutshell: “bpmNEXT means ‘Show me yours, I’ll show you mine'”.

And show we did: All BPM Software Vendors that *really* matter were there, presenting the latest and greatest they have to offer – or will offer soon. This was not about Sales or Marketing, but just about showing-off the things we’re proud of, and showing it off to peers who understand and appreciate the passion behind it.

But bpmNEXT is even more, it is the global gathering of a …

by Jakob Freund at April 02, 2015 01:25 PM

Thomas Allweyer: Users of Process Modeling Tools Are Largely Satisfied

On average, the users surveyed in a new study by the firm BPM&O rate their process modeling tool with a grade of 2.6 (on the German school scale, where 1.0 is best). So they are at least largely satisfied. Interestingly, satisfaction declines the longer a tool has been in use. The study's authors attribute this to requirements and conditions changing over time, so that the originally chosen tool no longer fits quite as well.

With a total of 64 participants, the study is not representative. Nevertheless, it offers an interesting overview of users' experiences and opinions. Most participants are modeling experts from BPM staff units, or process analysts. The most widely used notation is BPMN; it was named twice as often as EPC. Interestingly, value chain diagrams, which serve for overview representations and process landscape maps, were used comparatively rarely.

The tools are usually provided by the internal IT department. SaaS offerings are used in only 15% of cases so far. Ease of use matters most to the users. A good portal for publishing process models and features for involving the business departments are also highly important. For the future, the modelers want more powerful reporting capabilities and improved process portals from the tool vendors.

The study can be downloaded at www.bpm-toolmarktmonitor.de (registration required). A vendor survey conducted last year can also be found there. In addition, you can take part in the user survey yourself, which is being continued on an ongoing basis.

by Thomas Allweyer at April 02, 2015 10:02 AM

April 01, 2015

Sandy Kemsley: bpmNEXT 2015 Day 3 Demos: Camunda, Fujitsu and Best In Show

Last demo block of the conference, and we’re focused on case management and unstructured processes. Camunda, CMMN and BPMN Combined Jakob Freund presented on OMG’s (relatively) new...

[Content summary only, click through for full article and links]

by sandy at April 01, 2015 07:02 PM

Sandy Kemsley: bpmNEXT 2015 Day 3 Demos: IBM (again), Safira, Cryo

It’s the last (half) day of bpmNEXT 2015, and we have five presentations this morning followed by the Best in Show award. Unfortunately, I have to leave at lunchtime to catch a flight, so you...

[Content summary only, click through for full article and links]

by sandy at April 01, 2015 05:31 PM

Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Omny.link, BP-3, Oracle

We’re finishing up this full day of demos with a mixed bag of BPM application development topics, from integration and customization that aims to have no code, to embracing and measuring code...

[Content summary only, click through for full article and links]

by sandy at April 01, 2015 12:04 AM

March 31, 2015

Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Kofax, IBM, Process Analytica

Our first afternoon demo session included two mobile presentations and one on analytics, hitting a couple of the hot buttons of today’s BPM. Kofax: Integrating Mobile Capture and Mobile...

[Content summary only, click through for full article and links]

by sandy at March 31, 2015 10:07 PM

Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Sapiens Decision, Signavio

We finished the morning demo sessions with two on the theme of decision modeling and management. Sapiens: How to Manage Business Logic Michael Grohs highlighted the OMG release of the Decision Model...

[Content summary only, click through for full article and links]

by sandy at March 31, 2015 07:24 PM

Sandy Kemsley: bpmNEXT 2015 Day 2 Demos: Trisotech, Comindware, Bonitasoft

The first group of demos on bpmNEXT day 2 had a focus on the links between architecture and process: from architectural modeling, to executable architecture, to loosely-coupled development...

[Content summary only, click through for full article and links]

by sandy at March 31, 2015 05:33 PM

Keith Swenson: bpmNEXT – Day 1

My notes from first day of bpmNEXT 2015, March 30.

Bruce Silver – Conference introduction

Today we focus somewhere between BPM and Enterprise Architecture.  15 years ago we thought it was huge that we had one system to integrate human and back-end systems, and we have come a long way.  Now, there is still too much balkanization of the technology.

Main Themes of the Conference:

  1. Breaking the barrier between BPM and Enterprise Architecture. Anatoly and a colleague from Comindware are going to talk about the 3 gaps. Denis Gagne will talk about the semantic graph to break down barriers.
  2. Bridge gap between process modeling and decision modeling.  Called “business rules” back then, as if this was an alternative to BPM.  Sapiens has started something called the “Decision Model” because this is too important to leave to the existing approach.  Signavio will also show business decision modeling.
  3. Bridge the gap between BPM and Case Management. Camunda is offering a unified BPMN/CMMN execution. Safira and Cryo will present on how BPM needs to be loosened up. Kofax and IBM will present on mobile case management and capture. How do we do case management on our smart phones? Including signature capture. IBM has put a lot of emphasis on design, so we might see some of that.
  4. Expanding into new things like the Internet of Things. Presentation from SAP and W4 will focus on this.
  5. Expanding into expert systems and machine learning. BP3 will present on the automated analysis of BPM code. Fujitsu (Keith) will present on reconciling independent experts. IBM will talk about Watson not just winning Jeopardy, but how it can be used in the cloud with pre-trained services. Living Systems (Whitestein): measurable intelligence in the process platform.
  6. Expanding into process mining, and Robert will speak about optimization of resources from this.
  7. Reaffirming core values of business empowerment.  Omny.link puts BPM in WordPress for non-programmers.  Oracle will talk about BPM in the public cloud.
  8. Reaffirm embracing continual change, a presentation by bonitasoft on building “living applications”.

Nathaniel Palmer – What will BPM look like in 2020?

Today, BPM looks like well-defined, fixed routing of packages: channels, switches, but no awareness of what the other packages are doing. Where does it need to be? Like an Amazon warehouse with Kiva robots. Needs to be data driven, goal oriented, adaptive, intelligent automation.

Three things:  Robots, Rules, and Relationship.

An illustration of the change from 2005 to 2013 – the smartphone example at the announcement of the new pope.

60% of people switching banks in the past year did so because of insufficient mobile banking capabilities. Mobile support is the most important thing. But don’t just transport the laptop UI to the mobile. Gave an example of an Oldsmobile radio ad that was moved to TV by showing a static picture with the radio ad behind it. The new medium affords new forms of content. Showed an automated teller as the “state of automation today”, which was obviously not mobile.

What can you do if you have mobile? The Kindle Fire has a “MayDay” button — you press it and get an instant conversation with a support person. This instant connection enables “relationship”. He showed the Echo from Amazon, because Echo can help walk you through an Amazon purchase. Also showed My Jibo, which was popularized through a kickstarter campaign. Not to automate tasks, but to interface with tasks.

Another thing is wearables, including wearable workflow. The task might change to no longer be a single discrete unit of work; remove the distinction between the task and the things that support the task. The three-tier architecture is common today. We need to move toward a four-tier architecture: client tier (mobile), delivery tier, aggregation tier, and services tier. JSON and REST, and tasks need to be discoverable.

Process mining and optimization.  data driven, goal oriented, adaptive, intelligent automation.


Clay Richardson – Reinventing BPM for the age of the customer

Nathaniel’s talk focused on customer experience.  10 years ago much of process was focused on back end systems, and we have changed.  Today it is how to engage customers with mobile.

60% of all business leaders prioritize revenue growth and customer experience.

Four periods of history:

  • (1900) age of manufacturing,
  • (1960) age of distribution,
  • (1990) age of information, and finally
  • (2010) the age of the customer.

Told a story about a promotion combining Jaguar and Thomas Pink. The packaging was excellent, but the reception was completely bad. The bad impression is a perfect example of a process failure: the dealer had not been informed, they were not prepared, not engaging. The customer really wanted a memorable, rich, engaging experience.

Big challenge today is to get across from the old to the new. 42% of business people put better mobile support at critical or high priority. Examples of new mobile apps to order pizza from Pizza Hut and Dominos: Pizza Hut simply ported their web site to the phone, and it took about 20 minutes to make the first order. Dominos, on the other hand, made something that works very well: easy to order, buttons for what you ordered last time, and a tracker to tell you where your order is. Another example is buying US savings bonds. Clay helped to redesign this, and found that the changes required broke many of the assumptions in the back-end systems.

BPM people don’t have a lot of credibility for improving customer experience. We need a new title, “Digital Customer Experience Architect”: first, digitize the customer's end-to-end experience. Another is “Digital Operational Excellence Architect”, to drive rapid customer-centric innovation and to support prototyping.

What has to change in BPM? He produced a customer-centric BPM tech radar. Two key items on this chart: 1) low-code platforms, 2) customer journey mapping. Simple cloud orchestration, how to quickly program devices, how to connect devices.

The customer-facing cadence is faster; the real driver is a need for speed. It used to take months to get things done. Now, when touching customers, we need to work faster. This is driving the “low code” approach: develop in weeks, release weekly, the method is test and learn, and adoption is intuitive now.

Gave an example of a customer journey from Philips medical devices to sell a life-alert bracelet. There was an opportunity to redesign the delivery because the older patient is often anxious, and the purchasing customer needs to be informed. Another issue was billing, since that was being split three ways; make it easier to do this.


Panel on BPM Business

  • Miguel Valdes Faura – Bonita Soft
  • Scott Francis – BP3 Global
  • Denis Gagne – Trisotech

Miguel – Open Source is the key to building a successful ecosystem. Akira Awata has translated the entire platform into Japanese and there is a big uptick in usage; before that, downloads in Japan were limited. Open Source BPM Japan. Now reselling subscriptions to businesses like Bridgestone and Sony. The Bonita BPM Essentials book was developed on the open source version, and people can download and access the examples. Banking regulation is changing in Switzerland, and some are using processes in Bonita to match these new rules. There are large benefits to the open source model.

Scott Francis – How to move from Lombardi to independent. Started by trying to be the best Lombardi partner, and then the best IBM partner. People worry about time, money, and focus, and it is focus that is the easiest to lose track of. Learned to find our own customers – IBM did not refer anything. Service providers get a lot of pressure to pick up other products, for example more IBM products, but it might be better to focus on one product and do the job really well.

Denis Gagne – Two hobbies: building an ecosystem of BPM, and standards work. Still amazed at how much “BPM 101” needs to be taught. There is a need for us as a general community to educate better. The BPM Incubator has more members outside of the US than inside; 190 countries. We all benefit if the BPM community is better informed.

Q: (Neil) Convergent vs. Divergent Standards.  Why do standards sometimes work, and sometimes not?

A: it is easier to have agreement when you are only touching one set of customers.  Bonita has 1000 customers, but they use only about 30% of BPMN.

Q: (Bruce) People are building Apps. Is the problem that the BPM platforms don’t provide something suitable for those Apps?

A: (Miguel) This is an important question. How do we make sustainable apps? We have been doing a poor job in the BPM industry of helping people make customized UIs. There is a portal, and there is a level of customization, but you are constrained to the box. You can’t say, put a button in the right corner of the screen. How to change?

  1. Low-code approach to avoid the need for developers,
  2. instead making things that support developers to make them more powerful.

(Scott) A lot of the people doing mobile apps have no concept of process. Once the data is shipped back to the server, they don’t care where it goes. There is an opportunity to fill the gap between mobile and back end.


Neil Ward-Dutton – Schroedinger’s BPM

Is this the end of BPM?  Are we seeing the end of “business transformation”.  Where are we going next?

Is it dead? The term BPM is disappearing from conversations. People don’t want to talk about it. Instead they use smart process, case management, anything else. BPM technology platforms are growing at 3% (Clay thinks 8%). Maintenance revenue is dominating license revenue. However, there are a lot of inquiries, particularly from non-traditional sectors. Actually we are probably in the very middle of the adoption curve.

BPMS is fundamentally unlike most enterprise technologies. A really weird and horrible chimera. Hard to map onto the ways that people normally work. The innovators think they can use BPM to reinvent the way they work. But the mainstream reject it, having tried it and wasted time and money. Just another attempt to get us locked into an enterprise platform. Culture change is too expensive.

Someone created a “Customer Project Manager” to help premium in-home customer services. Didn’t call it BPM. This was about agility. Another example was a large bank whose IT-led enterprise-wide transformation failed big time.

They are embracing cloud aggressively. They are using agile ways of working. Low-cost propositions. The lightweight approaches are about spending less up front. Why are there all these people out there building these apps, but not really engaging with the back end? The culture change is not coordinated: it is too scary.

Low-code is what we used to call 4GL.

New agile enterprise has no “target operating model.”  They don’t know what it will be.  This is not the way we did transformation ten years ago.  First instrument, then provide agility of services.

Why would you do “simulation” when you could put the real solution in the hands of real users and observe how it works?

Customer journey slide very interesting.  Knowing the customer is not enough: on that, build surfacing, then acting, and finally shaping.  That all needs to be done across marketing, sales, operations, and service.

Advice: don’t fixate on SPAs, don’t obsess over traditional competitors, don’t fixate on throwing more in the box; do find ways to enhance BYOP, particularly with auditability, do look at the implications of digital strategies, do enable clients to take portfolio management approaches to business processes, do partner, buy, build.


Remy Glaisner – Myria Research – Chief Robotics Officers and BPM

RIOS – Robotics and Intelligent Operational Systems: automation, robotics, and mobile technologies.  There will soon be many people calling themselves “Chief Robotics Officers.”  This is a completely open, nascent field — no leaders yet.  Inflection point expected in 2017-2018.  For manufacturing they are already there, but agriculture is a ways off.

Client acquisition is based largely on how fast you can deliver.  Automation, including robotic automation, is quite important.

By 2025 over 60% of manufacturers over $1B will have a Chief Robotics Officer (CRO) on staff.


Benjamin Notheis, Harsh Jegadeesan

Internet of Things.  There are others: Internet of People, Internet of Places, and Internet of Content.  All four of these together.  Wil van der Aalst talks about the Internet of Events.  IoE means massive data (bigger than big data).  Event stream processing senses patterns in real time.  Once a pattern is identified, one can respond with rules and processes.

Presented a use case about a person who manages pipelines in L.A.  Events notify that there is a problem.  The options to replace a pump are given, with different prices and different qualities of pump.  The demo is hard to describe here — so see the video.  At one point he assigned a task to someone just by typing “@manny escalate issue”; the user was found and the task assigned.  Very dynamic!  Had a visual depiction of incidents displayed as tiles, where the size of the tile represents the number in that category.

The coolest part of the demo was when he showed the user interface on a watch display.  One could see the task, see the data values and options, make an audio annotation of the task, and mark the task as completed.  All from the watch.

Eclipse-based modeler showing extended BPMN.  Models can be imported to http://bpmn.io.  This is compiled to JavaScript for running in the SAP cloud service; he referred to the JavaScript event loop.

Q: does this use NetWeaver and/or work with it? A:  Basically, not much.  It is a new process engine implemented over the last 6 months or so.



Francois Bonnet, W4

Francois gave a great presentation and demo around a use case of monitoring the elderly and responding to falls.  Showed a “faintness sensor” based on a Raspberry Pi processor.  When it tilted for more than a few seconds, it started a BPMN process.  A heart-rate event might cause this process to escalate through various steps, such as calling the person, calling a neighbor, and sending in a response team.  If the sensor got back upright, the process was cancelled.  If the fall happened too many times in a particular period, it started a different process.

It was pretty interesting that the event modeling was done effectively in BPMN; however, the aggregate event (falling too many times in a period) was not modeled directly in BPMN.


Dan Neason, Living Systems

Covered the Whitestein system.  All processes have a reflection capability so you can ask a running model what it is capable of doing.  Interesting demo, but hard to capture here.


Jim Sinur – Swarming and Goal Directed Collaborative Processes

BPM is not a sexy term any more.  What else do we go to?  There is a notion of Hybrid Processes.  Could go with that, but as Neil pointed out, growth is not that high.

The idea we should follow is that of transforming the digital organization.  How do processes help organizations become digital organizations?

Got some of this from Keith’s presentation last year, where he showed a video of starlings flocking (murmurating).  The idea is that birds guided by simple rules can act collectively in an emergent way.  But we need to think about flocks with starlings, ducks, geese, sparrows, etc.  We will have swarms of things, but they will consist of robots, information systems, and everything else.

Processes should help organizations cope with the “big change” coming their way.  We force customers to go through a phone menu that matches an organization designed on industrial-age ideas.  Why force the customer to do this?  Tomorrow there will be an “Uber” in every different industry.

Gave an example of an insurance company that wanted general reps to be able to handle all products.  They used AI systems to help.  Tried to get rid of specialization, but they failed because the rules technology was not available.

In production in Norway is a system to help with dementia patients.  Gave them a wristband with GPS inside.  If the patient approaches or crosses a boundary, caregivers are notified and go get him.

“Going digital” is the goal.  A couple of ways to get there.  “Do it, try it, fix it” is one approach.  Today the process is often in control.  But in the future the goals are in control of work and the process.

Can you imagine a bunch of swarming agents deciding what to do next?  Agents have: a level of humanity, a level of collaboration, a level of intelligence, and a vector of goal-driven freedom.  Hybrid resources, hybrid process styles (cases, flows, forms), hybrid speed, hybrid goals, etc.

Example of a bike store that has a kiosk that analyzes the customer to determine mood, kind of personality, and body type.  She keys in information about the kinds of riding she would like, and it suggests a bike.  Imagine that there were many of these intelligent agents swarming to help sell this bike.

Another example is using a swarm to find a suitable house by sending in a photo of the kind of house you want.  It could search for similar homes, and a bank might do this in order to also offer a mortgage.  Issues with autonomous cars and robots: legal issues.  Who do you sue when something goes wrong?

Not just UI, not just mobile, but how you treat customers and how you meet their needs is the important thing.


That is it for the first day.  Then it was off to winetasting on the roof-top patio.



by kswenson at March 31, 2015 11:20 AM

March 30, 2015

Sandy Kemsley: bpmNEXT 2015 Day 1 Demos: SAP, W4 and Whitestein

The demo program kicked off in the afternoon, with time for three of them sandwiched between two afternoon keynotes. Demos are strictly limited to 30 minutes, with a 5-minute, 20-slide,...

[Content summary only, click through for full article and links]

by sandy at March 30, 2015 11:52 PM

Sandy Kemsley: bpmNEXT 2015 Day 1: More Business of BPM

Talking with people at the first break of the first day, I feel so lucky to be part of a community with so many people who are friends, and with whom you can have both enlightening and amusing...

[Content summary only, click through for full article and links]

by sandy at March 30, 2015 07:15 PM

Sandy Kemsley: bpmNEXT 2015 Day 1: The Business of BPM

I can’t believe it’s already the third year of bpmNEXT, my favorite BPM conference, organized by Nathaniel Palmer and Bruce Silver. It’s a place to meet up with other BPM industry...

[Content summary only, click through for full article and links]

by sandy at March 30, 2015 05:44 PM

March 27, 2015

Sandy Kemsley: Going Beyond Process Modeling, Part 1

I recently wrote two white papers for Bizagi on going beyond process modeling to process execution: Bizagi is known for their free downloadable process modeler, but also have a full-featured BPMS for...

[Content summary only, click through for full article and links]

by sandy at March 27, 2015 02:55 PM

March 26, 2015

BPM-Guide.de: New Camunda Usergroup in Australia

Camunda is spreading, including in Australia. The first user group is already evolving, and they will meet for the second time next week.

If you would like to swing by and meet some other Camunda users, here is what you need to know:

Date: Tuesday, March 31
Time: 5pm Melbourne time
Place: Tuscan Bar – 79 Bourke Street, Melbourne

This time you can also meet Bernd Frey, one of our senior consultants who is currently down under and engaged in a fascinating Camunda project.

Many thanks to Phillip Spartalis, who is organizing this. He has agreed to share his email address here in case …

by Jakob Freund at March 26, 2015 01:29 AM

March 23, 2015

Thomas Allweyer: Praxisforum on 20 Years of Process Management

More than 20 years have passed since the publication of Hammer and Champy’s seminal book “Reengineering the Corporation”. The Praxisforum BPM & ERP is therefore dedicating a full-day event to the development of process management over these twenty years and to the state reached today. Alongside the historical retrospective, the program also features numerous practitioner talks, including from MAN, the Landschaftsverband Rheinland, BASF, Globus, and Bayer. Topics covered include process excellence, process maps, data management, process automation, and ERP implementation.
The conference takes place on June 16 near Koblenz. The full program and a registration form can be found here.

by Thomas Allweyer at March 23, 2015 09:21 AM

March 21, 2015

BPM-Guide.de: Review: Camunda Community Day in London

Yesterday we had our first Camunda Community Day in the UK. Thanks to our friends at 6point6 who organized this, we could meet in the famous Royal Institution. This was definitely the most decent location we have had for a community meeting so far!

It was a great half day of presentations, discussions and networking. Most of the attendees already knew existing BPM products, and when I described the Zero-Code BPM Myth they immediately knew what I was talking about. I also gave a little BPMN crash-course, and I did not use a single slide, but just live-modeled everything I explained …

by Jakob Freund at March 21, 2015 09:56 AM

March 18, 2015

Bruce Silver: Process-Driven Applications: A New Approach to Executable BPMN

One of the singular successes of BPM technology is a common language – BPMN – used both for process modeling and executable design.  At least in theory….   In reality, the BPMN created by the business analyst to represent the business requirements for implementation often bears little resemblance to the BPMN created by the BPMS developer, which must cope with real-world details of application integration.  That not only weakens the business-IT collaboration so central to BPM’s promise of business agility, but it leads to BPMN that must be revised whenever any backend system is updated or changed.  It doesn’t have to be that way, according to an interesting new book by Volker Stiehl of SAP, called Process-Driven Applications with BPMN (www.springer.com/978-3-319-07217-3).

Process-driven applications are executable BPMN processes with these characteristics:

  1. Strategic to the business, not situational apps.  They must be worth designing for the long term.
  2. Containing a mix of human and automated activities, not human-only or straight-through processing.
  3. Spanning functional and system boundaries, integrating with multiple systems of record.
  4. Performed (with local variations) in multiple areas of the company.
  5. Subject to change over time, either in business functionality or in underlying technical infrastructure, or both.

Stiehl identifies the following design objectives of process-driven applications:

  • Process-driven applications should be loosely coupled with the called back-end systems. They should be as independent as possible from the system landscape. After all, the composite does not know which systems it will actually run against.
  • Process-driven applications, because of their independence, should have their own lifecycles, which ideally are separate from the lifecycles of the systems involved. It is also desirable that the versions of a composite and the versions of the called back-end systems are independent of one another. This protects a composite from version changes in the involved applications.
  • Process-driven applications should work only with the data that they need to meet their business requirements. The aim is to keep the number of attributes of a business object within a composite to a minimum.
  • Process-driven applications should work with a canonical data type system, which enables a loose coupling with their environment at the data type level. They intentionally abstain from reusing data types and interfaces that may already exist in the back-end systems.
  • Process-driven applications should be non-invasive. They should not require any kind of adaptation or modification in the connected systems in order to use the functionality of a process-driven application.  Services in the systems to be integrated should be used exactly as they are.

[Figure: the Order Booking process as modeled by the business analyst]

Let’s look at a very simple example, an Order Booking process.  Here is the process model created by the business analyst in conjunction with the business.  Upon receipt of an order from the customer, an order entry clerk enters it into a form, from which the price is calculated.  Then an automated task charges the credit card.  If the charge does not succeed, a customer service rep contacts the customer to resolve the problem.  Once the charge succeeds, the process books the order in the ERP system, another automated task, and ends by returning a confirmation message to the customer.  If the charge fails and cannot be resolved, the process ends by sending a failure notice to the customer.

[Figure: the developer’s view of the same process in a conventional BPMS]

In the conventional BPMS scenario, here is the developer’s view.  It looks the same except that the simple service tasks have been replaced by subprocesses, and the service providers – the credit card processing and ERP booking services – are shown as black box pools with the request and response messages visible as message flows.  There are 2 reasons the service tasks were changed to subprocesses: One is to accommodate technical exception handling.  What happens if the service returns a fault, or times out?  Some system administrator has to intervene, fix the problem, and retry the action.  The BA isn’t going to put that in the BPMN, but it needs to be in the solution somewhere.  The second reason is to allow for asynchronous calls to the services, with separate send and receive steps.  You also notice that Book order is interacting with more than one ERP system.  Don’t you wish there was one ERP system that handled everything the customer could buy?  Well sometimes there is not, so the process must determine which one to use for each instance.  Actually an order could have some items booked in system A and other items booked in system B.  The business stakeholders, possibly even the business analyst, may be unaware of these technical details, but the developer must be fully aware.

[Figure: child level of the Charge credit card subprocess]

Here is the child level of Charge credit card.  It is invoked asynchronously, submitting a charge request and then waiting for a response.  If the service times out, an administrator must fix the issue and retry the charge.  The service returns either a confirmation if the charge succeeds or an error message if it fails.  Here we modeled this as two different messages; in other circumstances we might have modeled these as two different values of a single message.   If you remember your Method and Style, the child level has two end states, Charge ok and Charge failed, that match up with the gateway in the parent level.

[Figure: child level of the Book order subprocess]

And here is the child level of Book order.  A decision task needs to parse the order and determine, for each order item, whether it is handled by system A or system B.  Then there are separate booking subprocesses to submit the booking request and receive the confirmation for each order item in each system.  Finally an automated task consolidates all the item confirmations into an overall order confirmation.

So you already can see some of the problems with this approach.  The developer’s BPMN is no longer recognizable by the business, possibly even by the BA.  This reduces one of BPMN’s most important potential benefits, a common process description shared by business and IT.  Second, the integration details are inside the process model.  Whenever there are changes to the interface of either the credit card service, ERP system A, or system B, the process model must be changed as well.  If this process is repeated in various divisions of the company, using different ERP and credit systems, those process models will all be different.  And third, this tight binding of process activities to a SOA-defined interface to specific application systems means the process is manipulating heavyweight business objects that specify many details of no interest to the process.

All three of these problems illustrate what you could call the SOA fallacy in BPM.  In theory, SOA is supposed to maximize reuse of business functions performed on backend systems.  In practice, SOA has succeeded in enabling more consistent communications between processes and these systems, but the reuse as imagined by SOA architects has been difficult to achieve.  The actual reuse by business processes is frequently defeated by variation and change in the specific systems that perform the services.  So, instead the PDA approach seeks to maximize the actual reuse of business-defined functionality provided by services, not across different processes but across variations of the same process, caused by variation and change in the enterprise system landscape.  This is a radical difference in philosophy.

In his book, Volker Stiehl calls this new approach Process-Driven Architecture.  This architecture layers the process design and removes all integration details from the business process model, representing the Process-Driven Application, or PDA. The services specified in the PDA process make no reference to the actual interfaces and endpoints of specific backend systems.  Instead each service in the PDA process defines and references a fixed service contract interface, specifying just the elements needed to perform the required business function, regardless of the actual interface of the backend systems required.  This service contract interface is essentially defined by the business process – by the business, not the SOA architect or integration developer.

The data elements and types used in that interface are based on a canonical data model, not the elements and types specific to a backend system.  Remember, the object is not reuse of SOA endpoints and service interfaces across business processes, but reuse of this particular service contract interface across the system differences found in various divisions of the enterprise and across changes in these systems over time.  Ideally the PDA process, from a business perspective, is universal across the enterprise and stable over time.

Translation from this stable service contract interface, based on canonical data, to the occasionally changing interfaces and data of real backend systems is the responsibility of the Service Contract implementation layer.  What makes this nice is that BPMN can be used in this layer as well.  Each integration service call from the PDA process is represented in the architecture by a Service Contract Implementation (SCI) process defined in BPMN.  This process effectively binds the system-agnostic call by the PDA process to a specific system or systems used to implement the service.  It performs the data mapping required, issues the requests and waits for responses, and handles technical exceptions.  The PDA process doesn’t deal with any of this.  Moreover, the SCI process is non-invasive, meaning it should not require any change to existing backend systems or existing SOA services.  Everything required to link the PDA to these real systems and services must be designed into the SCI process.

The beauty of this architecture is that, unlike the conventional approach, the PDA process model is the same for the business analyst and the integration developer.  All of the variation and change inherent in the enterprise system landscape is encapsulated in the SCI process; the PDA process doesn’t change.  Effectively the executable process solution becomes truly business-driven.

[Figure: diagrammatic representation of the Process-Driven Architecture]

Here is a diagrammatic representation of the architecture. The steps in a PDA process, modeled in BPMN, represent various user interfaces and service calls. When the service call is implemented by a backend system, a business partner, or an external process, its interface – shown here as the Service Contract Interface – is defined by the PDA, not by the external system or process. For each call to the Service Contract Interface, a Service Contract Implementation process is defined, also in BPMN, to communicate with the backend system, trading partner, or external process, insulating the PDA process from all those details. The Service Contract Interface, based on a canonical data model, defines the interface between the PDA process and the SCI process. This neatly separates the work of the process designer, creating the executable PDA process, from that of the integration designer working in the Service Contract Implementation layer.  Since the PDA process and the SCI processes are both based on BPMN, the simplest thing is to use the same BPM Suite process engine for both, with communication between them using standard BPMN message flows.

[Figure: the Order Booking process reconfigured using the Process-Driven architecture]

Here is what it looks like with our simple order booking process reconfigured using Process-Driven architecture. The details of Charge credit card are no longer modeled in a child level diagram of the business process, but instead are modeled as a separate SCI process.  The charge credit card activity in the PDA process is truly a reusable business service.  It defines the service contract interface using only the business data required: the cardholder name, card number and expiration date, charge amount, return status, and confirmation number. It doesn’t know anything about how or where the credit card service is performed, whether it is performed by a machine or a person, the format of the data inputs and outputs, or the communications to the service provider. All of those integration details could change and the PDA process would not need to change.  The SCI process maps the canonical request to the input parameters of the actual service provider, issues the request, receives the response, maps that back to the canonical response format, and replies to the PDA service task.
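To make the service contract idea concrete, here is a minimal sketch in Java of what the canonical request and response for Charge credit card might look like. The class and field names are illustrative assumptions, not taken from Stiehl’s book; the point is simply that the contract carries only the business data listed above, with no trace of any particular card provider’s API.

    import java.math.BigDecimal;

    // Canonical service contract for "Charge credit card" (hypothetical names).
    // Only the business data the PDA process needs appears here.
    class ChargeCreditCardRequest {
        String cardholderName;
        String cardNumber;
        String expirationDate;   // e.g. "12/2017"
        BigDecimal chargeAmount;
    }

    class ChargeCreditCardResponse {
        enum Status { CHARGE_OK, CHARGE_FAILED } // matches the two BPMN end states
        Status status;
        String confirmationNumber;               // set only when status == CHARGE_OK
    }

Whatever provider-specific fields the SCI process must supply (merchant ids, transport headers, proprietary formats) stay on its side of the boundary; the PDA process never sees them.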

[Figure: the Charge credit card SCI process, including technical exception handling]

If the card service is temporarily unavailable or fails to return a response within a reasonable time period, a system administrator may be required to resolve the problem and resubmit the charge. The business user is not involved in this, and it should not be part of the PDA process. This too is part of the SCI process.  However, if the service returns a business exception, such as invalid credentials or the charge is declined, this must be handled by the business process, so this detail is part of the PDA. And in fact, it should be part of the business analyst’s model worked out in conjunction with the business.

This 2-layer architecture, consisting of a PDA process layer and a Service Contract Implementation layer, succeeds in isolating the business process model from the details of application integration. But there are some problems with it…

  • First, the BPMN engine running the PDA and SCI process must be able to connect directly to all of the backend systems, trading partners, and external processes involved. In many large-scale processes, in particular core processes, this is difficult if not impossible to achieve.
  • Second, a single SCI process may involve multiple backend systems and must be revised whenever any of them changes.
  • And third, things like flexible enterprise-scale communications, guaranteed message delivery, data mapping and message aggregation are handled more easily, reliably, and faster in an enterprise service bus than in a BPMN process. So we’d like to leverage that if possible.

The solution then is to split the SCI Layer in two, creating a 3 layer architecture.  The SCI process is divided into a stateful integration process and one or more stateless ESB processes.  Stateless here means short-running and able to run as a single unit of work or transaction. A stateless process cannot include human tasks, waiting for a message or a join, anything that takes time and requires maintaining the state of the instance. ESBs are designed to execute these very well. A single ESB process can send a message (or possibly N messages all at once) but does not wait for a response. A separate ESB process is instantiated with each response.

The stateful integration process can be long-running, meaning it can contain human tasks, it can wait for a message, or wait for parallel paths to join.  The stateful integration process can process a correlation id, linking an instance of the stateful process to the right process and activity instance in PDA. The stateless ESB processes cannot do this. More on that in a minute.

[Figure: the Book Order activity in conventional BPMN]

To illustrate this let’s look at the activity Book Order, which books the order in the ERP system and generates a confirmation for the customer. Recall that this is what it looked like in our conventional BPMN. We have two ERP systems, and the process needs to look up which system applies to each order item before issuing the booking request. Here you can see some of the defects we have just discussed: The process model must map to the details of system A and system B; the system administrator handling stuck booking requests must be a BPMS user; etc.

[Figure: Book Order in the 3-layer PDA architecture]

Here is what it looks like in the 3-layer PDA architecture. The PDA process is almost exactly as before. The subprocess Book order is simply an asynchronous send followed by a wait for the response, a simple long-running service call. The request message – the order – and the confirmation response message are defined by the PDA, that is, by the business, without regard to the parameters required by the ERP systems. Book order is a reusable business service in the sense that it can be used with any booking system, now or in the future.

The messy integration work is left to the SCI layer, here divided into a stateful integration process and 2 stateless ESB processes, one for sending and the other for processing the response. Here is what they do… Upon receipt of the order message from the PDA process, the SCI process first parses the order and looks up the ERP system associated with each order item. Really it just needs a count of the receivers of the ERP booking request message, so the process knows how many responses to wait for. This could be a service task or a business rule task depending on the implementation. Let’s say this receiver list is simply put in the header of the order message, which is then sent, using a send task, to the stateless ESB send process.

The ESB has the job of dealing with the details of the individual systems. First it splits the order message into separate variables for each system, that is, for each instance of the multi-instance Book in ERP system. For each system, this activity first looks up the interface of the request message, then maps the canonical order data to the system request parameters, providing any additional details required by the system interface, and then sends the ERP booking request to the system. A basic principle of the PDA approach is that the call to the external system or service is non-invasive, i.e. it must not be modified in any way in order to be integrated with the process. The integration process must accommodate its interface as-is.

I have shown the ESB process using BPMN but typically ESBs have their own modeling language and tooling. That’s fine. Since it’s a stateless process by definition, the BPMN is not asking the ESB to do anything that cannot be done in its native tooling.

The ERP system sends back its response, which triggers a second stateless process, ESB Receive. We’ve marked this as a multi-participant pool, meaning N instances of it will be created for a single order. The ESB does not know the count. Each ERP booking response simply triggers a new instance. Now here is something interesting: correlation. We need to correlate the booking response to a particular booking request. In a stateful process you can save a request id and use it to match up with the response. But the ESB processes are stateless. The Send process can’t communicate its request id to the Receive process. So the Receive process must parse the order content to uniquely determine the order instance. The receive process must also look up the service interface of the ERP system sending the response and then map the response back to the format expected by the stateful SCI process, the same for all of the called systems.
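Because no request id survives across the stateless boundary, correlation must be computable from message content alone, and both sides must derive the same key. A minimal sketch of the idea, assuming a hypothetical orderNumber element that the canonical order and the mapped booking response both carry:

    import org.w3c.dom.Document;

    // Hypothetical correlation-key extraction, shared by the stateful SCI
    // process and the stateless ESB Receive process. Both derive the SAME
    // key from message content, since the Send process cannot pass a
    // request id to the Receive process.
    class BookingCorrelation {

        // The stateful process derives the key when it sends the order...
        static String keyFromOrder(Document canonicalOrder) {
            return text(canonicalOrder, "orderNumber");
        }

        // ...and ESB Receive re-derives it from the booking response, after
        // mapping the system-specific response back to the canonical format.
        static String keyFromBookingResponse(Document canonicalResponse) {
            return text(canonicalResponse, "orderNumber");
        }

        private static String text(Document doc, String tag) {
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        }
    }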

Now back in the stateful integration process, the subprocess Receive booking response receives the message. Because it is stateful, this process can correlate message exchanges, so the message event is triggered only by a receiver response for this particular process instance. This booking service normally completes immediately, so if no response is received in one minute, something is wrong. Here we’ll say a system administrator resolves the problem and manually books the order in the external system. Even though this human intervention is required, it is outside the scope of the business user’s concerns, and not part of the PDA process. This multi-instance subprocess waits for a message from each receiver. Recall that we derived the count in the first step of this process. So it is quite general. It works for any number of receivers, as long as a receiver can be determined for each order item. This process doesn’t even need to know the technical details of the receiver, its endpoint, interface, or communications methods. All of that is delegated to the ESB. The stateful integration process does need to define a way to extract a unique instance id out of the original order message content, as this logic will be used by the ESB Receive process to provide correlation.

Once the ERP booking response is received, it is used to update a cross-reference table. What is that? This is a table that provides a uniform means of confirmation regardless of the physical system used to book each item. Each of those systems will provide a confirmation string in its own format. The Xref table links each system-specific confirmation string with the confirmation string for the order as a whole, in the format defined by the PDA.
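Conceptually, the Xref table is just a two-column mapping; in practice it would normally be a database table. A minimal in-memory sketch under that assumption, with hypothetical names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical stand-in for the cross-reference table: it links each
    // system-specific confirmation string to the single canonical order
    // confirmation defined by the PDA.
    class ConfirmationXref {

        private final Map<String, String> xref = new ConcurrentHashMap<>();

        void record(String systemId, String systemConfirmation,
                    String canonicalOrderConfirmation) {
            xref.put(systemId + ":" + systemConfirmation, canonicalOrderConfirmation);
        }

        String canonicalConfirmationFor(String systemId, String systemConfirmation) {
            return xref.get(systemId + ":" + systemConfirmation);
        }
    }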

One final detail before we leave this diagram… the message flows. The message flows linking PDA process to the stateful SCI process, as well as those linking the stateful SCI process to the ESB process, are standard message flows as implemented by the BPMS for process-to-process communication within the product and for reading and writing message queues. The message flows between the ESB processes and the backend systems are more flexible. The transport and message format are probably determined by the external system, whether that is a web service call, file transfer, EDI or whatever the ESB can handle. This communications complexity is completely removed from the BPMS, which is the strength of the ESB approach.

There is a bit more to it, but if you are interested, I suggest you get the book.

The post Process-Driven Applications: A New Approach to Executable BPMN appeared first on Business Process Watch.

by bruce at March 18, 2015 09:20 PM

Keith Swenson: ‘Fail fast, fail often’ is essential advice for innovators

Yes, it is a negative statement, but in uttering it, you desensitize the team to a harmful fear of failure.

I am responding today to an article in The Globe and Mail titled “‘Fail fast, fail often’ may be the stupidest business mantra of all time.”  The article criticizes this saying on two points.  First, business people have a hard time saying it, and don’t come across as credible.  Second, the statement focuses on the negative which is … negative.   The author proposed an improved statement: “Succeed fast, adjust or move on.”

Recasting it like this shows that the author does not understand the point of making the statement in the first place.  Psychologists have demonstrated that people naturally have a bias against loss.  In carefully designed tests, people value a $100 loss as equivalent to a $200 gain.  That is, people are naturally very loss averse.  Irrationally so.  People naturally tend to form groups that punish failure as a way to prevent even small failures.

Saying “succeed fast” does not really give the option for failure.  Fear of failure has a powerful inhibiting effect which needs to be countered, particularly in an organization that strives to be innovative.

While success is the goal, there is one thing worse than failure, and that is doing nothing.  If you do nothing, you always lose.  On average, people will come up with good ideas, but not always.  If you fear failure, if you have a culture that punishes failure, then members will not try.  They will wait until they are sure they have a success, and only then act.  Many, many opportunities will be lost, because the risk of failure might be a fraction of the benefit of success, but that risk prevents action.

When a leader says “fail fast, fail often” they make it clear that failure is a word that we can talk about.  Failure is no longer taboo.  It may be hard for them to say it — nobody said that leadership was easy.  They want success, they don’t like talking about failure, but doing so makes it clear that the culture would rather see you try and sometimes fail than not try at all.

Some say that you can only learn from your mistakes.  But you can’t learn if you don’t make any mistakes.  If people are too fearful to try, you won’t have a learning organization.

Another silicon valley statement is: “Don’t ask for permission, ask for forgiveness.”  This focuses on the negative as well, but it is essential to the spirit of innovation that you make it clear that success is not required 100% of the time, and action is valued over inaction.

While the ‘fail fast, fail often’ statement is negative, it inoculates the group against a crippling fear of failure.  Far from being the stupidest mantra of all time, it shows depth of wisdom and skill in leadership.  The writer of this article clearly does not understand the dynamics of an innovative organization.

(See “When Thinking Matters in the Workplace” chapter 4: “Agile Management” on Amazon)


by kswenson at March 18, 2015 03:33 PM

March 16, 2015

Sandy Kemsley: Effektif BPM Goes Open Source

On a call with Tom Baeyens last week, he told me about their decision to turn the engine and APIs of Effektif BPM into an open source project: not a huge surprise since he was a driver behind two...

[Content summary only, click through for full article and links]

by sandy at March 16, 2015 11:05 AM

March 13, 2015

Drools & JBPM: Reactive Incremental Graph Reasoning with Drools

Today Mario got a first working version of incremental reactive graphs for Drools. This means people no longer need to flatten their models to a purely relational representation to get reactivity. It provides a hybrid language and engine for both relational and graph-based reasoning. To differentiate between relational joins and reference traversal, a new XPath-like notation was introduced that can be used inside patterns. Like XPath, it supports collection iteration.

Here is a simple example that finds all men in the working memory:
Man( $toy: /wife/children[age > 10]/toys )

For each man it navigates the wife reference and then the children reference; the children reference is a list. For each child in the list that is over ten, it will navigate to that child's toys list. With the XPath notation, if the leaf property is a collection it will iterate it, and the variable binds to each iteration value. If there are two children over the age of 10, who have 3 toys each, it would execute 6 times.
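For readers trying to picture the object model behind the pattern, here is a minimal sketch of the kind of POJOs the path traverses. The names are inferred from the rule itself, not copied from the actual test code linked below:

    import java.util.ArrayList;
    import java.util.List;

    // Domain model implied by Man( $toy: /wife/children[age > 10]/toys ).
    class Toy {
        String name;
    }

    class Child {
        int age;                               // filtered by [age > 10]
        List<Toy> toys = new ArrayList<Toy>(); // leaf collection; $toy binds per Toy
    }

    class Woman {
        List<Child> children = new ArrayList<Child>();
    }

    class Man {
        Woman wife;                            // start of the /wife/... path
    }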

As it traverses each reference a hook is injected to support incremental reactivity. If a new child is added or removed, or if an age changes, it will propagate the incremental changes. The incremental nature means these hooks are added and removed as needed, which keeps it efficient and light.

You can follow some of the unit tests here:
https://github.com/mariofusco/drools/blob/xpath/drools-compiler/src/test/java/org/drools/compiler/xpath/XpathTest.java

It's still very early pre-alpha stuff, but I think it is exciting.

by Mark Proctor (noreply@blogger.com) at March 13, 2015 11:34 PM

March 12, 2015

Thomas Allweyer: Continuous Process Management Is Often Still Lacking

[Image: cover of the study “Reifegrad des Geschäftsprozessmanagements 2015”] Anyone who deals with process management regularly will hardly be surprised by this finding: while more and more companies have introduced measures to improve their processes, they pay considerably less attention to the controlling and further development of their process management. 216 participants from the German-speaking region, who have been working on process management for an average of nine years, took part in the recently published study “Reifegrad des Geschäftsprozessmanagements 2015”. The survey was based on a maturity model developed by iProcess that comprises five maturity levels. On average, the companies reached maturity level two. So there is still considerable room for development: in many places the focus is still primarily on modeling and analyzing processes, not on continuously reviewing and evolving the process management itself. This may well yield “quick wins” through concrete process improvements, but the far greater potential of a fully closed process management cycle goes unused.

Interestingly, no clear correlation between company size and process management maturity could be established. Small and medium-sized enterprises (SMEs) can certainly keep up with much larger organizations; flat hierarchies and closer customer contact among everyone involved make it easier for SMEs to manage their processes. By contrast, there were clear differences between industries. Maturity is particularly high in real estate and retail, and the transport sector and banks are also quite well positioned. The study’s authors attribute this to the fact that these industries have very personnel- and knowledge-intensive processes and face high competitive pressure. It also emerged that companies with many distributed branches have a higher process maturity, since standardizing processes is of great importance to them.


Minonne, C.; Koch, A.; Ginsburg, V.:
Reifegrad des Geschäftsprozessmanagements 2015. Eine empirische Untersuchung.
iProcess AG (Ltd.), Luzern 2015
Reading sample and ordering information here

by Thomas Allweyer at March 12, 2015 01:01 PM

March 11, 2015

Sandy Kemsley: KofaxTransform 2015 In Pictures

As I prepared to depart Las Vegas, I flicked through some of my photos from the past couple of days and decided to share. First, the great work of the ImageThink team of graphic recorders:   ...

[Content summary only, click through for full article and links]

by sandy at March 11, 2015 07:30 PM

March 10, 2015

Sandy Kemsley: Analytics For Kofax TotalAgility With @Altosoft

Last session here at Kofax Transform, and as much I’d like to be sitting around the pool, I also like to squeeze every bit out of these events, and support the speakers who get this most...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 11:15 PM

Sandy Kemsley: Smarter Processes With Kapow Integration

I’m in a Kofax Transform breakout session on Kapow Integration together with KTA; I missed documenting the first part of the session when my Bluetooth keyboard stopped talking to my Android...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 09:43 PM

Sandy Kemsley: Process Intelligence at KofaxTransform

It’s after lunch on the second (last) day of Kofax Transform, and the bar for keeping my attention in a session has gone up somewhat. To that end, I’m in a session with Scott Opitz and...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 08:47 PM

Sandy Kemsley: Kofax Claims Agility SPA

Continuing with breakout sessions at Kofax Transform is a presentation on the Claims Agility smart process application that Kofax is creating for US healthcare claims processing, based on the KTA...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 06:45 PM

Sandy Kemsley: TotalAgility Product Update At KofaxTransform

In a breakout session at Kofax Transform, Dermot McCauley gave us an update on the TotalAgility product vision and strategy. He described five vital communities impacted by their product innovation:...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 06:17 PM

Sandy Kemsley: KofaxTransform 2015: Day 2 Customer Keynotes

I had a chance to hear Tom Knapp from Waterstone Mortgage speak yesterday at the analyst briefing here at Kofax Transform, and we have him to kick off this morning’s keynote. They started their...

[Content summary only, click through for full article and links]

by sandy at March 10, 2015 04:21 PM

Drools & JBPM: UF Dashbuilder - Activity monitoring in jBPM

syndicated from http://dashbuilder.blogspot.com.es/2015/03/uf-dashbuilder-in-jbpm-for-activity.html

Last week, the jBPM team announced the 6.2.0.Final release (announcement here). In this release (like in previous ones) you can author processes, rules, data models, forms and all the assets of a BPM project. You can also create or clone existing projects from remote GIT repositories and group such repositories into different organizational units. Everything can be done from the jBPM authoring console (aka KIE Workbench), a unified UI built using the Uberfire framework & GWT.

   In this latest release, they have also added a new perspective to monitor the activity of the source GIT repositories and organizational units managed by the tooling (see screenshot below). The perspective itself is just a dashboard displaying several indicators about the commit activity. From the dashboard controls it is possible to:

  • Show the overall activity on our repositories
  • Select a single organizational unit or repository
  • List the top contributors
  • Show only the activity for a specific time frame

  In this video you can see the dashboard in action (do not forget to select HD).

Contributors Perspective

  Organizational units can be managed from the menu Authoring>Administration>Organizational Units. Every time an organizational unit is added or removed the dashboard is updated.

Administration - Organizational Units 

   Likewise, from the Authoring>Administration>Repositories view we can create, clone or delete repositories. The dashboard will always feed from the list of repositories available.

Administration - Repositories



   As shown, activity monitoring in jBPM can be applied not only to the process business domain but also to the authoring lifecycle, in order to get a detailed view of the ongoing development activities.

How it's made


The following diagram shows the overall design of the dashboard architecture. Components in grey are platform components, blue ones are specific to the contributors dashboard.

Contributors dashboard architecture

  These are the steps the backend components take to build the contributors data set:

  • The ContributorsManager asks the platform services for the set of available org. units & repos. 
  • Once it has such information, it builds a data set containing the commit activity.
  • The contributors dataset is registered into the Dashbuilder's DataSetManager.

   All the steps above are executed at application start-up time. Once running, the ContributorsManager also receives notifications from the platform services about any changes to the registered org. units & repositories, so that the contributors data set is synced up accordingly.
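   For the curious, something like the following is presumably happening under the hood. This is only a rough sketch: the Commit type is made up for illustration, and the builder method names are assumed from the Dashbuilder data set API of that generation, so they may differ in detail from the real ContributorsManager code:

    import java.util.Date;
    import java.util.List;

    import org.dashbuilder.dataset.DataSet;
    import org.dashbuilder.dataset.DataSetBuilder;
    import org.dashbuilder.dataset.DataSetFactory;

    // Hypothetical commit record; the real ContributorsManager reads
    // these from the platform's GIT services.
    class Commit {
        String org, repo, author;
        Date date;
    }

    // Rough sketch: assemble the commit activity into a data set that
    // the displayers can feed from.
    class ContributorsDataSetSketch {

        DataSet buildFrom(List<Commit> commits) {
            DataSetBuilder builder = DataSetFactory.newDataSetBuilder()
                    .label("organization")   // column definitions
                    .label("repository")
                    .label("author")
                    .date("date");
            for (Commit c : commits) {
                builder.row(c.org, c.repo, c.author, c.date);
            }
            return builder.buildDataSet();
        }
    }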


   From the UI perspective, the jBPM contributors dashboard is an example of a hard-coded dashboard built using the Dashbuilder Displayer API, which was introduced in this blog entry. The ContributorsDashboard component is just a GWT composite widget containing several Displayer instances feeding from the contributors data set.

   (The source code of the contributors perspective can be found here)

    This has been a good example of how to leverage the Dashbuilder technology to build activity monitoring dashboards. In the future, we plan to apply the technology in other areas within jBPM, for instance an improved version of the jBPM process dashboard. We will keep you posted!

by Mark Proctor (noreply@blogger.com) at March 10, 2015 03:24 PM

March 09, 2015

Drools & JBPM: Zooming and Panning between Multiple Huge Interconnected Decision Tables

Michael has started the work on revamping our web-based decision tables. We've been experimenting with HTML5 canvas with great results, using the excellent Lienzo tool. First we needed to ensure we could scale to really large decision tables, with thousands of rows. Secondly we wanted to be able to pan and zoom between related or interconnected decision tables. We'll be working towards Decision Model and Notation support, which allows networked diagrams of decision tables.

You can watch the video here, don't forget to select HD:
https://www.youtube.com/watch?v=WgZTdfLis0Q

Notice in the video that, while you can manually pan and zoom, it also has links between tables. When you select a link, it animates the pan and zoom to the linked location. From 25s to 47s it shows that we can have a really large number of rows and keep excellent performance, while at 55s it shows the pan speed with these large tables. Initially the example starts with 50% of cells populated; at 1m in we change that to 100% populated and demonstrate that we still have excellent performance.




by Mark Proctor (noreply@blogger.com) at March 09, 2015 11:55 PM

Sandy Kemsley: Kofax Altosoft For Operational Intelligence

Wayne Chambliss and Rich Rabin of Kofax Altosoft gave a presentation at Kofax Transform, most of which was a demo, on becoming an operational intelligence guru. This is my first real look at the...

[Content summary only, click through for full article and links]

by sandy at March 09, 2015 09:48 PM

Sandy Kemsley: Tablets And Digital Signatures At AIA Life

Just to maximize confusion, we have a second AIA at the Kofax Transform conference: this morning, Aia referred to the customer communications management company recently acquired by Kofax; this...

[Content summary only, click through for full article and links]

by sandy at March 09, 2015 08:44 PM

Sandy Kemsley: Kofax Analyst Briefing And Portfolio Update

Following the Kofax Transform day 1 keynotes, we had a separate session for financial and industry analysts to be briefed on the products and financials. After a brief introduction from Reynolds...

[Content summary only, click through for full article and links]

by sandy at March 09, 2015 08:03 PM

BPinPM.net: Insights from German flagship conference Wirtschaftsinformatik 2015

Last week, the BPinPM.net team visited the conference Wirtschaftsinformatik 2015 in Osnabrück. The topic of this year’s conference was Smart Enterprise Engineering.

Over three days, several research and business tracks gave visitors insights into emerging trends in information systems. Furthermore, different companies (e.g., Thyssen Krupp and SAP) presented their next steps toward achieving a digitalized business.

For example, Thyssen Krupp wants to use Big Data and digitalization to revolutionize their elevator business. In the future, elevators won’t travel solely vertically but also horizontally. This video was shown in the keynote. We found it very impressive, and it fits very well with our scheduled innovation workshop “BPM meets the Innovation Helix“. So, we want to share it with you:

https://www.youtube.com/watch?v=KUa8M0H9J5

Besides the business tracks, the German research community presented their recent work. The presented papers dealt with Business Process Management, Information Systems Usage, and Social Media and Collective Intelligence.

We are proud that one of our team members also presented her work at the conference. Janina Kettenbohrer talked about the impact of employees’ attitudes toward their jobs on the acceptance of business process standardization. She and her two colleagues, Dr. Andreas Eckhardt and Prof. Dr. Daniel Beimborn, developed a theoretical model which explains how job-related attributes (e.g., autonomy or skill variety), work-role fit, co-worker relations, and the wider process environment influence employees’ perception of the meaningfulness of work and, consequently, their acceptance of process standardization. If you are interested in Janina’s latest work, you can find her paper here:

http://www.wi2015.uni-osnabrueck.de/Files//WI2015-D-14-00270.pdf

If you’re interested in testing the model in your organization and in finding out how to successfully implement process standards, please contact Janina.

by Mirko Kloppenburg at March 09, 2015 08:02 PM

Sandy Kemsley: Kicking off KofaxTransform 2015: Day 1 Keynotes

I’m in Vegas for a couple of days for the Kofax Transform conference. Kofax has built their business beyond their original scanning and capture capabilities (although many customers still use...

[Content summary only, click through for full article and links]

by sandy at March 09, 2015 04:38 PM

Drools & JBPM: jBPM 6.2.0.Final released

The bits for the jBPM 6.2 release are now available for you to download and try out!

Version 6.2 comes with a few new features and a lot of bug fixes!  New features include, among others, EJB support, (improved) OSGi support, Camel endpoints, a new asset management feature (introducing a development branch and a release branch, and promoting assets between the two), social profiles and feeds, and the ability to extend the workbench with your own plugins!

More details below, but if you want to jump right in:

Downloads
Documentation
Release Notes

Ready to give it a try but not sure how to start?  Take a look at the jbpm-installer chapter.

jBPM 6.2 is released alongside Drools (for business rules) and Optaplanner (for planning and constraint solving), check out the new features in the Drools release blog, including a brand new rules execution server and the Optaplanner release blog as well.

A big thank you to everyone who contributed to this release!

Some highlights from the release notes.

Core services

  • EJB: the jBPM execution server (that is for example embedded in our web-based workbench) now also comes with an EJB interface.  A refactoring of the underlying jbpm-services now makes the execution services accessible using pure Java, CDI, EJB and Spring. Remote interfaces using REST and JMS are still available as well, of course!  A lot more details are described in Maciej's blog here.
  • Deployments (defining which versions of which projects are currently active in the execution server) are now by default stored in the database.  This greatly simplifies the architecture in a clustered environment in case you are only using our runtime side of our web tooling (for example by having dedicated execution servers in production).
  • Our asynchronous job executor has improved support for requeuing failed jobs and for recurring jobs (e.g. daily tasks).
  • OSGi: Full core engine functionality is now available on top of OSGi.  A significant number of additional jars (including for example the human task service, the runtime managers, full persistence, etc.) were "OSGi-fied". Specific extensions and tests showing it in action are available for Apache Karaf and Aries Blueprint (in the droolsjbpm-integration repository).
  • Camel endpoint URIs: A new out-of-the-box service task has been implemented for using Apache Camel to connect a process to the outside world using some of the numerous Camel endpoint URIs. The service task allows you, for example, to specify how to pass data to an FTP endpoint by configuring properties such as hostname, port, username, payload, etc. for some common endpoints like (S)FTP, File, JMS, XSLT, etc., but you can use virtually any of the available endpoints by defining the URI yourself (http://camel.apache.org/uris.html). A minimal sketch of the equivalent plain-Camel call follows this list.
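As referenced above, here is a rough sketch of what such a service task boils down to, expressed as a plain Camel producer call. The endpoint URI values (host, port, credentials, payload) are placeholders, and the real service task assembles the URI from the configured task properties rather than hard-coding it:

    import org.apache.camel.CamelContext;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.impl.DefaultCamelContext;

    // Conceptual equivalent of the Camel service task: hand the payload
    // to a Camel endpoint URI and let the component (here FTP, which
    // requires camel-ftp on the classpath) do the rest.
    public class CamelEndpointSketch {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            context.start();
            ProducerTemplate template = context.createProducerTemplate();
            // The task's hostname/port/username/payload properties end up
            // assembled into a URI much like this one:
            template.sendBody("ftp://user@host:21/orders?password=secret", "<order/>");
            context.stop();
        }
    }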

Workbench
  • Form Modeler comes with improved support for adding custom logic to your forms using JavaScript on change events, support for configurable ComboBox and RadioGroup fields, and simple List types.
  • Asset management: It is now possible to make a repository a "managed repository".  This allows you to split up a repository into multiple branches, one for doing development and one for releasing.  Users can then request various assets to be promoted to the release branch when ready.  This promotion process, and the linked build and deploy processes, are defined using a BPMN2 process as well and include approval and build tasks.  Check the documentation for more details.

  • Social features, like user profiles (including gravatar pictures), and various event feeds, like the most recent assets you worked on and recent changes by other users.


  • Contributors perspective is a new out-of-the-box report (using the new dashbuilder technology) that gives high-level insight in who is changing what in your repositories.
  • Pluggable workbench: you can now extend the workbench with your own views, menus, etc. using workbench plugins. Available features include creation of perspectives via a programmatic or a drag-and-drop interface, and creation of new screens, editors, splash screens and dynamic menus.

by Kris Verlaenen (noreply@blogger.com) at March 09, 2015 02:39 PM

Sandy Kemsley: Software AG Analyst Day: The Enterprise Gets Digital

After the DST Advance conference in Phoenix two weeks ago, I headed north for a few days vacation at the Grand Canyon. Yes, there was snow, but it was lovely: Back at work, I spent a day last week in...

[Content summary only, click through for full article and links]

by sandy at March 09, 2015 12:42 PM

March 06, 2015

Drools & JBPM: Drools 6.2.0.Final Released

We are happy to announce the latest and greatest Drools 6.2.0.Final release.

This release in particular had a greater focus on improved usability and features that make the project easier to use (and adopt). Lots of improvements on the workbench UI, support for social activities and plugin management, as well as a brand new Execution Server for rules are among the new features.

Improved Wizards

Execution Server Management UI

Social activities


Contributors dashboard


Perspective editors


Here are a few links of interest:

We would like to take this opportunity to thank all the community members for their contributions to this release, and also JetBrains and Syncro Soft for the open source licenses to their products that greatly help our developers!

Happy drooling!


   



by Edson Tirelli (noreply@blogger.com) at March 06, 2015 03:54 PM

March 05, 2015

Thomas Allweyer: FireStart Can Do Both: End-to-End Business Modeling and Process Execution

[Image: FireStart Outlook integration] Usually, different systems are used for business-level process modeling and for process execution. Some BPMS vendors do offer value chain diagrams and the like, but their capabilities for business-level process documentation and analysis usually lag far behind dedicated process modeling tools. The FireStart BPM Suite from Prologics is a positive exception. The platform enables collaborative modeling in a user-friendly graphical modeling environment with the familiar look and feel of office products. The models are stored in a central repository. Publishing the models to a process portal and generating process handbooks are supported, as are version management and audit-proof storage of the models. The role-based, customizable process portal has a modern interface built with HTML5. The powerful search and other functions are invoked via overlay menus that appear when needed, as known from Google Maps, for example.

Besides processes, you can also model process landscapes, organization charts, data models, IT landscapes and risks, and easily link them to the activities in the process models. This enables fully integrated enterprise modeling. Particularly in connection with BPMN models this cannot be taken for granted; even prominent modeling platforms often show weaknesses here, as this study demonstrates. The highlight: when viewing a process model, you can switch between the BPMN and EPC notations at any time and continue modeling in the other notation. In the EPC view, assigned organizational units, IT systems and the like are shown as separate objects connected to the respective activities by arrows. In the BPMN view, small icons in the activity symbols indicate which other object types are linked. This ability to switch between BPMN and EPC should particularly increase acceptance in business departments, many of which are used to the EPC notation.

[Screenshot: FireStart cycle time analysis in a Gantt chart] International companies will welcome the integrated translation function, which can translate the models into a large number of languages without any further effort. Even if automated translation may not always be perfect, it greatly eases the understanding of the models across national subsidiaries.

Special process views are available for process analysis. For example, the timeline of a process can be displayed as a Gantt chart. If you change the durations of individual process steps, the effect on the overall cycle time becomes immediately apparent. Since FireStart, unlike pure modeling tools, can also execute processes, such a cycle time analysis is possible not only on the basis of estimated values but also using the real data of executed process instances. In a matrix view, process costs can be analyzed, and notes on weaknesses and improvement suggestions can be attached to the individual activities.

The integrated modeling of processes, organization charts, data and so on is not limited to a business-level view; it is also exploited in the transition to process execution. For example, the business data objects are enriched with technical details so that they can hold concrete data during process execution. Concrete users are assigned to the organizational units from the organization chart, and the modeled IT systems are annotated with interface and invocation information. The execution-oriented configuration of the models is made easy for the modeler in many places: if you drag a data object onto a user task, for instance, its data fields become directly available in that task's form.

[Screenshot: FireStart process cost analysis] The integration of business-level process modeling in the same tool also pays off during process execution. The portal used by the people working on a process is the same one used to publish the process models, so participants can look up the process documentation at any time while carrying out a process. In addition, running process instances can be tracked in the process model. The portal has a responsive design, so tasks can also be handled conveniently on tablets and smartphones, including gesture control. An integration with Microsoft SharePoint and Outlook is offered as well, so employees can receive their tasks in their familiar inbox and complete the associated forms entirely within Outlook, without having to switch to the separate portal. In general, FireStart plays to its strengths in the integration with Microsoft products; in addition, connectors to SAP and other systems are available, and various standards such as web services are of course supported too.

In the most recent BPM study by Fraunhofer IESE, FireStart finished in the top group, scoring above average in most of the categories examined. Given the rich functionality of its modeling component, it is no surprise that it came out ahead of all the other BPM systems in the “process modeling” category in particular.

by Thomas Allweyer at March 05, 2015 08:39 AM

March 02, 2015

BPinPM.net: Invitation to “BPM meets the Innovation Helix” Workshop

“Quo Vadis, BPM?” – this was the title of the keynote speech given by Dr. Bernhard Krusche at our recent BPinPM.net Process Management Conference, and most of the conference participants agreed that the challenges of the digital transformation of organizations will also challenge BPM.

Dr. Krusche’s idea of combining the tools of successful innovation processes with structured BPM started a discussion on what this fusion of classical BPM and new innovation methodologies could look like.

Thus, we decided to set up a workshop to explore the “Innovation Helix”, which was invented by Dr. Krusche and Prof. Sonja Zillner, and to match it with the BPM Life Cycle.

To learn more about this innovation workshop, please check the event details…

by Mirko Kloppenburg at March 02, 2015 09:45 PM

February 27, 2015

Thomas Allweyer: IT Strategy Survey Launched

Last year, a study by Scheer Management came to the sobering conclusion that many companies fail to actually implement their corporate strategies. At the operational level, hardly anything of what the executive suite had worked out as strategy usually arrived. Now the Saarbrücken-based consultancy has launched a new survey, this time on the subject of IT strategies. It examines which elements IT strategies comprise in practice and how they are communicated and implemented.

The survey is aimed at all management levels and industries. Answering the questions takes about 15 minutes. You can take part via this link.

by Thomas Allweyer at February 27, 2015 07:50 AM