Planet BPM

July 13, 2017

Sandy Kemsley: Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so might be my last post from here if I skip out on the one session that I have bookmarked for late this...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 04:16 PM

Sandy Kemsley: Getting started with OpenText case management

I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 02:57 PM

Sandy Kemsley: OpenText Process Suite becomes AppWorks Low Code

“What was formerly known as Process Suite is AppWorks Low Code, since it has always been an application development environment and we don’t want the focus to be on a single technology...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 01:57 PM

July 12, 2017

Sandy Kemsley: OpenText Process Suite Roadmap

Usually I live-blog sessions at conferences, publishing my notes at the end of each, but here at OpenText Enterprise World 2017, I realized that I haven’t taken a look at OpenText Process Suite...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 10:03 PM

Sandy Kemsley: OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 02:47 PM

July 11, 2017

Sandy Kemsley: OpenText Enterprise World keynote with @markbarrenechea

I’m at OpenText Enterprise World 2017  in Toronto; there is very little motivating me to attend the endless stream of conferences in Vegas, but this one is in my backyard. There have been a...

[Content summary only, click through for full article and links]

by sandy at July 11, 2017 05:44 PM

July 03, 2017

Drools & JBPM: Drools, jBPM and Optaplanner are switching to agile delivery!

Today we would like to give everyone in the community a heads up about some upcoming changes that we believe will be extremely beneficial to the community as a whole.

The release of Drools, jBPM and Optaplanner version 7.0 a few weeks ago brought more than just a new major release of these projects.

About a year ago, the core team and Red Hat started investing in improving a number of processes related to the development of the projects. One of the goals was to move from an upfront-planning, waterfall-like development process to a more iterative, agile one.

The desire to deliver features earlier and more often to the community, as well as to better adapt to devops-managed cloud environments, required changes from the ground up: from how the team manages branches, to how it automates builds, to how it delivers releases. A challenge for any development team, but even more so for a team that is essentially remote, with developers spread all over the world.

Historically, Drools, jBPM and Optaplanner aimed for a cadence of 2 releases per year. Some versions with a larger scope took a bit longer, some were a bit faster, but on average that was the norm.

With version 7.0 we started a new phase in the project. We are now working with 2-week sprints, and with an overall goal of releasing one minor version every 2 sprints. That is correct, one minor version per month on average.

We are currently in a transition phase, but we intend to release version 7.1 at the end of the next sprint (~6 weeks after 7.0), and then we are aiming to release a new version every ~4 weeks after that.

Reducing the release timeframe brings a number of advantages, including:
  • More frequent releases give the community earlier access to new features, allowing users to try them and provide valuable feedback to the core team.
  • Reducing the scope of each release allows us to do more predictable releases and to improve our testing coverage, maintaining a more stable release stream.
  • Bug fixes are included in each release as usual, giving users more frequent access to them as well.
It is important to note that we will continue to maintain backward compatibility between minor releases (as much as possible; this is even more important in the context of managed cloud deployments, where seamless upgrades are the norm), and the scope of features is expected to remain similar to what it was before. That has two implications:
  • If before we would release version 7.1 around ~6 months after 7.0, we will now release roughly 6 new versions in those 6 months (7.1, 7.2, ..., 7.6), but the amount of features will be roughly equivalent. That is, the old version 7.1 is roughly equivalent in feature terms to the new versions 7.1 through 7.6 combined. We are just splitting the scope into smaller chunks and delivering them earlier and more often.
  • Users that prefer not to update so often lose nothing. For instance, a user that updated every 6 months can continue to do so, but instead of jumping from one minor version to the next, they will jump 5-6 minor versions. Again, this is not a problem, because the scope is roughly the same as before and backward compatibility between versions is maintained.
This is of course a work in progress, and we will continue to evolve and adapt the process to better fit the community's and users' needs. We strongly believe, though, that this is a huge step forward and a milestone in the project's maturity.

by Edson Tirelli (noreply@blogger.com) at July 03, 2017 10:48 PM

June 22, 2017

Keith Swenson: Complex Project Delay

I run large complex software projects.  A naive understanding of complex project management can be more dangerous than not knowing anything about it.  This is a recent experience.

Setting

A large, important customer wanted a new capability.  Actually, they thought they already had the capability, but discovered that the existing one didn’t quite do what they needed.  They were willing to wait for development, though they felt they really deserved the feature, and we agreed.  “Can we have it by spring of next year?”    “That seems reasonable,” I said.

At that time we had about 16 months.  We were finishing up a release cycle, so nothing was urgent; I planned on a 12-month cycle starting in a few months.  I will call the start of the project “Month 12” and count down to the deadline.

We have a customer account executive (let’s call him AE) who has asked to be the single point of contact with this large, important customer.  This makes sense, because you don’t want the company making a lot of commitments on the side without at least one person keeping a list of them all and making sure they are followed through on.

Shorten lines of communication if you can.  Longer lines of communication make it harder to have reliable communication, so more effort is needed.

Palpable Danger

The danger in any such project is that you have a fixed time period, but the precise requirements are not specified.  Remember that the devil is in the details.  Often we throw around terms like “easy to use,” “friendly,” and “powerful,” and those can mean anything in detail.  Even terms that seem very specific, like “conformance to spec XYZ,” can leave considerable room for interpretation by the reader.  All written specifications are ambiguous.

The danger is that you get deep into the project, and it comes to light that the customer expects functionality X.  If X is known at the beginning, the design can incorporate it from the start, and it might be relatively inexpensive.  But retrofitting X into a project that is half completed can multiply that cost by ten.  The goal, then, is to get all the required capabilities to a suitable level of detail before you start work.

A software project is a lot like piloting a big oil tanker.  You get a number of people going in different, but coordinated, directions.  As the software starts to take form, all the boundaries between the parts that the people are working on gradually firm up and become difficult to change.  As the body of code becomes large, the cost of making small changes increases.  In my experience, at about the halfway point, the entire oil tanker is steaming along in a certain direction, and it becomes virtually impossible to change course without drastic consequences.

With a clear agreement up front, you avoid last minute changes.   The worst thing that can happen is that late in the project, the customer says “But I really expected this to run on Linux.”   (Or something similar).   Late discoveries like this can be the death knell.   If this occurs, there are only two possibilities: ship without X and disappoint the customer, or change course to add X, and miss the deadline.  Either choice is bad.

Danger lies in the unknown.  If it is possible to shed light and bring in shared understanding, the risk for danger decreases.

Beginning to Build a Plan

In month 12, I put together a high level requirements document.  This is simply to create an unambiguous “wish list” that encompasses the entire customer expectation.  It does NOT include technical details on how they will be met.  That can be a lot of work.  Instead, we just want the “wishes” at this time.

If we have agreement on that, we can then flesh out the technical details in a specification for the development.  This is a considerable amount of work, and it is important that this work be focused on the customer wishes.

I figured on a basic timetable like this:

  • Step 1: one month to agree on requirements  (Month 12)
  • Step 2: one month to develop and agree on specification (Month 11)
  • Step 3: one month to make a plan and agree on schedule (Month 10)
  • Step 4: about 4 months of core development (Months 9-6)
  • Step 5: about 4 months of QA/finishing (Months 5-2)
  • leaving one month spare just in case we need it. (Month 1)

Of course, if the customer comes back with extensive requirements, we might have to rethink the whole schedule.  Maybe this is a 2 year project.  We won’t know until we get agreement on the requirements.

Then AE comes to a meeting and announces that the customer is fully expecting delivery of this new capability in Month 3!  Change of schedule!  This cuts us down to only 9 months to deliver.  But more important, we have no agreement yet on what is to be delivered.  This is the classic failure mode: agreeing to a hard schedule before the details of what is to be delivered are worked out.  This point should be obvious to all.

The requirements document is 5 pages, and one of those pages is the title page.  It should be an afternoon’s worth of work to read it, gather people, and get this basic agreement.

Month 12 comes to an end.  Finally, toward the middle of Month 11, the customer comes back with an extensive response.  Most of what they are asking for in “requirements” are things that the product already does, so no real problem.  There are a few things that we cannot complete on this schedule, so we need to push back.  But I am worried: we are six weeks into a task that should have been completed a month earlier.

Deadlines are missed one day at a time.

We’ve Got Plenty of Time

At the end of Month 11, I revised the requirements and gave them to AE.  AE’s response was not to give them to the customer.  He said, “Let’s work on and understand this first, before we give it to the customer.”  This dragged on for another couple of weeks, so we are now 8 weeks into the project, and we have not completed the first step, originally planned to take one month.

I press AE on this.  We are slipping day by day, week by week.  This is how projects are missed.  What was originally planned as 1 month out of twelve is now close to 2 months, out of only 9.  We are getting squeezed!

AE says:  “What is the concern?  We have 8 more months to go!  What does a few weeks matter out of 8 months?”

The essence of naive thinking that causes projects to fail is the idea that there is plenty of time and we can waste some.

Everything Depends

Let’s count backwards along the dependency chain:

  • We want to deliver a good product that pleases the customer
  • This depends on using our resources wisely and getting everything done
  • This depends on not having any surprises late in the project about customer desires which waste developer time
  • This depends on having a design that meets all the expectations of the customer
  • This depends on having a clear understanding of what the customer wants before the shape of the project starts to ossify.
  • This depends on having clear agreement on the customer desires before all of the above happens.

Each of these cascades, and a poor job in any step causes repercussions that get amplified as last minute changes echo through the development.

I also want to say that this particular customer is not flaky.  They are careful in planning what they want, and don’t show any excessive habit of changing direction.  They are willing to wait a year for this capability.  I believe they have a good understanding of what they want — this step of getting agreement on the requirements is really just a way to make sure that the development team understands it as well.

Why Such a Stickler?

AE says: “You should be able to go ahead and start without the agreement on requirements.  We have 8 more months; we can surely take a few more weeks or months getting this agreement.”

Step 2 is to draw up a specification and to share that with the customer.  Again, we want to be transparent so that we avoid any misunderstanding that might cause problems late in the project.  However, writing a spec takes effort.

Imagine that I ask someone to write a spec for features A, B, and C.  Say that is two weeks of work.  Then the customer asks for feature D, and that causes a complete change in A, B, and C.  For example, given A, B, and C we might decide to write in Python, and that will have an effect on the way things are structured.  Then the customer requires running in an environment where Python is not available.  That simple change would require us to start completely over.  All the work on the Python design is wasted work that we have to throw out, and it could cost us up to a month on the project, making the entire project late.  However, if we know about D before we start, we don’t waste that time.

Step 2 was planned to take a month, so if we steal 2 weeks from it by being lazy about getting agreement on the requirements, we have already lost half the time needed.  It is not likely that we can do this step in half the time.  And those two weeks might be wasted, causing us to need even more time.  Delaying the completion of step 1 can increase the time needed for step 2, ultimately cascading all the way to final delivery.

Coming to agreement on the requirements should take 10% of the time, but if not done, could have repercussions that cost far more than 10% of the time.  It is important to treat those early deadlines as if the final delivery of the project depended on them.

Lack of attention to setting up the project at the front always has an amplified effect toward the end of the project.

But What About Agile?

Agile development is about optimizing the team’s work to be as productive as possible, but it is very hard to predict accurate deliveries at specific dates in the future.  I can clearly say we will have great capabilities next year, and the year after.  But in this situation the customer has a specific expectation in a specific time frame.

Without a clear definition of what they want, the time to develop is completely unpredictable.  There is huge risk in having an agreed-upon date but no agreed-upon detailed functionality.

Since the customer understands what they want, the most critical and urgent thing is to capture that desire in a document we both can agree on.  The more quickly that is done, the greater the reduction in risk and danger.

Even when developing in an agile way, the better we understand things up front, the better the whole project will go.  Don’t leave things in the dark just because you are developing in an agile way.  It is a given that there are many things that can’t be known in the course of a project, but that gives no license to purposefully ignore things that can be known.

Conclusions

Well-run projects act as if early deadlines are just as important as late deadlines.  Attention to detail is not something that just appears at the last moment; it must start early and run through the entire project.

Most software projects fail because of a lack of clear agreement on what will satisfy the customer.  It is always those late discoveries that cause projects to miss deadlines.  A well-run project requires strict attention to clarifying the goals as early as possible.

Do not ignore early deadlines.  Act as if every step of a project is as important as the final delivery.   Because every step is as important as the final delivery.

by kswenson at June 22, 2017 05:02 PM

June 21, 2017

Sandy Kemsley: Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 06:08 PM

Sandy Kemsley: Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions at the second day at Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging markets for consumer IoT devices. His premise is...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 04:40 PM

June 20, 2017

Sandy Kemsley: Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make...

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 07:56 PM

Sandy Kemsley: IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live....

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 04:18 PM

June 13, 2017

BPM-Guide.de: “Obviously there are many solutions out there advertising brilliant process execution, finding the “right” one turns out to be a tricky task.” – Interview with Fritz Ulrich, Process Development Specialist

Fritz graduated with a Bachelor’s in Information Systems from WWU Münster in 2013 and has since been working for Duni GmbH in the area of Process Development (responsible for all kinds of BPM topics and Duni’s BPM framework) and as a Project Manager.

by Darya Niknamian at June 13, 2017 08:00 AM

June 05, 2017

Keith Swenson: Initial DMN Test Results

The initial Decision Model and Notation (DMN) TCK test results are in!   The web site is up showing the results from three vendors.

Tests of Correctness

There are currently 52 tests, which require a conforming DMN implementation to read a DMN model in the standard DMN XML-based file format.   Along with the model are a set of input values, and values to compare the outputs against.  Everything is a file, so no matter what technology environment the DMN implementation requires, it need only read the files and run the models.
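
To make this concrete, here is a minimal sketch (not the official TCK runner) of how such a file-driven test might be executed against the Drools 7 DMN engine; the namespace, model name, input value, and expected value follow the style of the first TCK test and are illustrative assumptions.

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class TckStyleTest {
    public static void main(String[] args) {
        // The .dmn model file is read from the classpath of a KIE project.
        KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
        DMNRuntime runtime = container.newKieSession().getKieRuntime(DMNRuntime.class);

        // Namespace and model name are declared in the test's .dmn file (assumed values).
        DMNModel model = runtime.getModel("https://example.org/0001-input-data-string",
                                          "0001-input-data-string");

        // Input values come from the test case file.
        DMNContext ctx = runtime.newContext();
        ctx.set("Full Name", "John Doe");

        // Run the model and compare the decision result to the expected value.
        DMNResult result = runtime.evaluateAll(model, ctx);
        Object actual = result.getDecisionResultByName("Greeting Message").getResult();
        boolean pass = "Hello John Doe".equals(actual);

        // One line per test rolls up into the CSV file reported back to the committee.
        System.out.println("0001-input-data-string,001," + (pass ? "SUCCESS" : "FAILURE"));
    }
}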

The results of running the tests are reported back to the committee by way of a simple CSV file.  The three vendors who have done this to date are Red Hat with the Drools rules engine, Trisotech with their web-based modelers (which also leverage the Drools implementation), and Camunda with their Camunda BPM.   It is worth mentioning that one more implementation has been involved to verify and validate the tests (created by Bruce Silver), but it is not included in the results since it is not commercialized.

What we all get from this is the assurance that an implementation really is running the standard model in a standard way.  This can help you avoid the costly mistake of adopting a technology that takes you down a blind alley.

Open Invitation

This is an open invitation for anyone working in the DMN space:

  • If you are developing DMN technology, you can take the tests for free and try them out.  When your implementation does well, send us the results and we can put you on the board to let everyone know.
  • If you are using DMN from some vendor, ask them if they have looked at the tests, and if not, why not?

The tests are all freely available, and there are links from the web site directly to the test models and data.

Acknowledgement

I certainly want to acknowledge the hard work of people at Red Hat, Trisotech, Camunda, Open Rules (who will be releasing their results soon), Bruce Silver, and several others who made this all come about.


by kswenson at June 05, 2017 11:30 AM

May 29, 2017

Drools & JBPM: New KIE persistence API on 7.0

This post introduces the upcoming Drools and jBPM persistence API. The motivation for creating a persistence API that is not bound to JPA (as persistence in Drools and jBPM was until the 7.0.0 release) is to allow a clean integration of persistence mechanisms other than JPA. While JPA is a great API, it is tightly bound to a traditional RDBMS model, with the drawbacks inherited from there: being hard to scale, and difficult to get good performance from on ever-scaling systems. With the new API we open the door to integrating various general NoSQL databases, as well as to creating tightly tailor-made persistence mechanisms for optimal performance and scalability.
At the time of this writing, several implementations have been made: the default JPA mechanism; two generic NoSQL implementations, backed by Infinispan and MapDB, which will be available as contributions; and a single tailor-made NoSQL implementation, discussed briefly later in this post.

The changes made to the Drools and jBPM persistence mechanisms, their new features, and the way they allow clean new persistence implementations for KIE components are the basis for a new, soon-to-be-added experimental MapDB integration module. The existing Infinispan adaptation has been changed to accommodate the new structure.
Because of this refactor, we can now have other persistence implementations for KIE without depending on JPA, unless the specific persistence implementation is itself JPA-based. It has, however, implied a set of changes:

Creation of drools-persistence-api and jbpm-persistence-api

In version 6, most of the persistence components and interfaces were only present in the JPA projects, from which they had to be reused by other persistence implementations. We refactored these projects so that these interfaces can be reused without pulling in the JPA dependencies each time. Here's the new set of dependencies:
<dependency>
 <groupId>org.drools</groupId>
 <artifactId>drools-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.jbpm</groupId>
 <artifactId>jbpm-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>

The first thing to mention about the classes in this refactor is that the persistence model used by KIE components for KieSessions, WorkItems, ProcessInstances and CorrelationKeys is no longer a JPA class, but an interface. These interfaces are:
  • PersistentSession: For the JPA implementation, this interface is implemented by SessionInfo. For the upcoming MapDB implementation, MapDBSession is used.
  • PersistentWorkItem: For the JPA implementation, this interface is implemented by WorkItemInfo, and by MapDBWorkItem for MapDB.
  • PersistentProcessInstance: For the JPA implementation, this interface is implemented by ProcessInstanceInfo, and by MapDBProcessInstance for MapDB.
The important part is that, if you were using the JPA implementation and wish to continue doing so, you can keep using the same classes as before; all components are prepared to work through these interfaces.
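
For reference, here is a minimal sketch of that unchanged JPA usage, following the documented Drools/jBPM persistence pattern; the persistence unit name is the standard jBPM one and may differ in your environment, and the transaction manager setup is elided.

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.EnvironmentName;
import org.kie.api.runtime.KieSession;
import org.kie.internal.persistence.jpa.JPAKnowledgeService;

public class JpaSessionFactory {
    public static KieSession newPersistentSession(KieBase kbase) {
        // Standard jBPM persistence unit; adjust to match your persistence.xml.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("org.jbpm.persistence.jpa");

        Environment env = KieServices.Factory.get().newEnvironment();
        env.set(EnvironmentName.ENTITY_MANAGER_FACTORY, emf);
        // In a real setup, a JTA TransactionManager is also registered in the environment.

        // Same call as in version 6; the session is stored via the SessionInfo JPA class,
        // which now implements the PersistentSession interface.
        return JPAKnowledgeService.newStatefulKnowledgeSession(kbase, null, env);
    }
}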

PersistenceContext, ProcessPersistenceContext and TaskPersistenceContext refactors

Interfaces of persistence contexts in version 6 were dependent on the JPA implementations of the model. In order to work with other persistence mechanisms, they had to be refactored to work with the runtime model (ProcessInstance, KieSession, and WorkItem, respectively), build the implementations locally, and be able to return the right element if requested by other components (ProcessInstanceManager, SignalManager, etc.).
Also, for components like TaskPersistenceContext there were multiple dynamic HQL queries used in the task service code which would not be implementable in another persistence model. To avoid this, they were changed to use criteria-like mechanisms instead. This way, the different filtering objects can be used by other persistence mechanisms to create the queries they require.

Task model refactor

The way the current task model relates tasks to content, comment, attachment and deadline objects was also dependent on the way JPA stores that information, or more precisely, the way ORMs relate those types. So a refactor of the task persistence context interface was introduced to do the relation between components for us, if desired. Most of the methods are still there, and the different tables can still be used, but if we just want to use a Task to bind everything together as an object (the way a NoSQL implementation would do it), we now can. The JPA implementation still relates objects by ID. Other persistence mechanisms like MapDB just add the sub-objects to the task object, which they can fetch from internal indexes.
Another thing that changed in the task model is that, before, we had different interfaces to represent a Task (Task, InternalTask, TaskSummary, etc.) that were incompatible with each other. For JPA this was fine, because they would represent different views of the same data.
In general, the motivation behind this mix of interfaces is to allow optimizations towards table-based stores, which is by no means a bad thing. For non-table-based stores, however, these optimizations might not make sense. Making these interfaces compatible allows implementations where the runtime objects retrieved from the store implement a multitude of the interfaces without breaking any runtime behavior. Making these interfaces compatible can be viewed as a first step; a further refinement would be to let these interfaces extend each other to underline the model and make the implementations simpler.
(For other types of implementation like MapDB, where it is always cheaper to get the Task object directly than to create a different object, we needed to be able to return a Task and have it work as a TaskSummary if the interface requests it. All interfaces now share the same method names to allow for this.)

Extensible TimerJobFactoryManager / TimerService

In version 6, the only possible implementations of a TimerJobFactoryManager were bound at construction to the values of the TimerJobFactoryType enum. A refactor was done to extend the existing types and allow other types of timer job factories to be added dynamically.

Creating your own persistence. The MapDB case

All these interfaces can be implemented anew to create a completely different persistence model, if desired. For MapDB, this is exactly what was done. In the case of the MapDB implementation that is still under review, there are three new modules:
  • org.kie:drools-persistence-mapdb
  • org.kie:jbpm-persistence-mapdb
  • org.kie:jbpm-human-task-mapdb
These modules implement the full task model using MapDB implementation classes. Anyone who wants another type of implementation for the KIE components can follow these steps to get one going (a sketch of step 5 appears after the list):
  1. Create modules for mixing the persistence API projects with a persistence implementation mechanism dependencies
  2. Create a model implementation based on the given interfaces with all necessary configurations and annotations
  3. Create your own (Process|Task)PersistenceContext(Manager) classes, to implement how to store persistent objects
  4. Create your own managers (WorkItemManager, ProcessInstanceManager, SignalManager) and factories with all the necessary extra steps to persist your model.
  5. Create your own KieStoreServices implementation, which creates a session with the required configuration, and add it to the classpath
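
As an illustration of step 5, here is a minimal sketch of a custom KieStoreServices backed by a hypothetical "MyStore" engine; MyStoreSessionService and everything inside it are placeholders for your own persistence logic, and the exact interface methods should be checked against your kie-api version.

import org.kie.api.KieBase;
import org.kie.api.persistence.jpa.KieStoreServices;
import org.kie.api.runtime.Environment;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.KieSessionConfiguration;

public class MyStoreKieStoreServices implements KieStoreServices {

    public KieSession newKieSession(KieBase kbase, KieSessionConfiguration conf, Environment env) {
        // Create a fresh session whose state is written to MyStore on each command.
        return new MyStoreSessionService(kbase, conf, env).getKieSession();
    }

    public KieSession loadKieSession(int id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
        return loadKieSession(Long.valueOf(id), kbase, conf, env);
    }

    public KieSession loadKieSession(Long id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
        // Look up the serialized PersistentSession by id in MyStore and rebuild it.
        return new MyStoreSessionService(id, kbase, conf, env).getKieSession();
    }

    // Placeholder for a command service that persists session state to MyStore.
    static class MyStoreSessionService {
        private final KieSession session;

        MyStoreSessionService(KieBase kbase, KieSessionConfiguration conf, Environment env) {
            this.session = kbase.newKieSession(conf, env);
            // ...wrap 'session' so that each command serializes its state into MyStore...
        }

        MyStoreSessionService(Long id, KieBase kbase, KieSessionConfiguration conf, Environment env) {
            // ...read the serialized session state for 'id' from MyStore before rebuilding...
            this.session = kbase.newKieSession(conf, env);
        }

        KieSession getKieSession() {
            return session;
        }
    }
}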

You’re not alone: The MultiSupport case

MultiSupport is a Denmark-based company that has used this refactor to create its own persistence implementation. They provide an archiving product focused on O(1) archive retrieval, and had a strong interest in getting their internal processes to work using the same persistence mechanism they use for their archives.
We worked on an implementation that allowed for an improvement in response time for large databases. Given their internal mechanism for lookup and retrieval of data, they were able to create an implementation with millions of active tasks which had virtually no degradation in response time.
In MultiSupport's words: "We have used the persistence API to create a tailored store, based on our in-house storage engine; our motivation has been to provide unlimited scalability, extended search capabilities, simple distribution and a performance we struggled to achieve with the JPA implementation. We think this can be used as a showcase of just how far you can go with the new persistence API. With the current JPA implementation and a dedicated SQL server we achieved an initial performance of less than 10 'start process' operations per second; now, with the upcoming release, on a single application server we have more than tenfold that performance."

by Marian Buenosayres (noreply@blogger.com) at May 29, 2017 10:35 PM

May 08, 2017

BPM-Guide.de: “From a development point of view, it is important that the BPM software be open.” – Interview with Michael Kirven, VP of IT

As the Vice President of Business Solutions within the IT Applications area at People’s United Bank, I manage a team of developers across multiple development technologies, with a focus on bringing efficiencies to the bank’s back-office areas. I’ve been with People’s United Bank since 1999, and before that I was a commercial software developer for several different startup companies.

by Darya Niknamian at May 08, 2017 08:00 AM

Keith Swenson: bpmNEXT Keynotes

Talks by Nathaniel Palmer, Jim Sinur, Neil Ward-Dutton, and Clay Richardson kick off this bellwether event in the process industry.  The big theme is digital transformation.

Very interesting to see how all four play off of each other and reflect a surprisingly lucid view of the trends in the process space.

Nathaniel Palmer (¤)

Kicked off the event with an excellent overview of the industry.  Exponential organizations define the decade, and where is BPM in that?  He has been promoting Robotic Process Automation as a key topic for a number of years now, and it has finally come into popular usage.  Who thought India would be a hotspot for this interest?  Lots of kinds of robots: conversational assistants (Echo), a robot lawyer accessed by web page, mobile robots that assist people in stores, industrial robots, etc.  Tasks have different meanings and different behavior based on the way that you interact with them.

We are going to need an army of robot lawyers to combat an army of robot lawyers for every transaction we do.  Previous laws are based on a utility curve that assumes a limit to how much effort you would be willing to put in, but robots don’t care about that utility curve.

Biggest contribution is this architecture for a suite of capabilities needed for a digital transformation platform, which includes process management, decision management, machine learning and automation.

We should consider this architecture as a common understanding of where BPM is going.

[Graphic: digital transformation platform architecture]


Jim Sinur (¤)

BPM is morphing: goal-directed, autonomous, and robotic.  2017 trends:

  • Predictive apps get smarter.  Predictive and cognitive.  Decision criteria
  • Big data, deep learning
    • Machine learning is easy today; found 112 machine learning algorithms.  Takes a lot of horsepower.
    • Medium scale is deep learning on the fly and updating knowledge as you go.
    • High cost: cognitive computing that is expert and highly trained on specific topics.  Watson takes a long time to train.  Healthcare is a big focus.
  • IoT: how to manage, how to talk.  NEST protocol gaining steam.  Autonomy at the edge, not just smart centrally.  Things will have smarter chips.  Predefined processes of the past will give way to dynamic processes.  Example: GM has paint robots that bid on jobs and optimize the order of work.  Different booths have different quality ratings.
  • Sales engagement platforms (SEP) and service engagement platforms
  • Video-enabled business applications.  BPM is more about collaborative work management.  All the fast-rising companies have included workflow.  Training.
  • Chat bots and digital assistants.  Considers his Amazon Echo to be such a thing.
  • Virtual reality will be more and more important.  Gamification.  Glasses now.  Google glass a failure.
  • Work Hubs and Platforms.
  • Drones are being put to work.  surveillance.  Delivery.
  • Blockchain – very few in production.  Contained, constrained, small volume, real time.  Builds.  May require new kinds of chips.  Security and integrity are crucial.

Digital business platform: (1) business applications, (2) processes and cases, (3) machines & sensors, (4) cognition and calculations, (5) data and systems integration.  A real live solution involves a drone that flies along a pipeline checking its status.  A spectrum of vendors.  Change management is a key part of all this.  Digital transformation is mistakenly compared to enterprise re-engineering.  Digital identity is critical.

Digital DNA:  goal driven processes, robots, cognitive, digital assistants, software bots, intelligent process, controllers, deep learning, learning, voice, RPA, sensors, machine learning.


Neil Ward-Dutton (¤)

The new wave of Automation

Context – a major shift in the experience of automation.  A shift in how we interact with machines.  We are used to shaping our lives around automated systems, training ourselves.  Automation lines are designed for the robots.  But it is going the other way.

  1. way they interact with their environment
  2. flexing and recommending, and
  3. packaging work for us.
  • First industrial automation (flour mill) 1785
  • Learning system only since the 1960s/1970s.

Layers – three layers: Interaction, Insight, Integration.

  • Interaction is sensing and responding.
  • Insight is about moving away from static analysis of plans toward dynamic reevaluation from moment to moment.
  • Integration is about componentization and automatable resources.  Not just EAI, but more openness.

Drivers – why are we seeing these things now?

  1. rapidly evolving – the fundamental assumption used to be that computing resources were scarce: expensive, hard to get, and mistakes costly.  But that is changing.
  2. business pressures
    • Customer experience excellence.  How to create customer journeys that are enticing.
    • Decouple knowledge from labor.
    • Perform at speed
  3. familiarity – automation, bots, recommendations

It is crazy that the people who turn over the most, need the least training, and cost the least are the people who talk to the customers most.

Impacts

  • Insight: an HR system that identifies people who are likely to leave, and what that person’s leaving would cost.
  • Integration:  Does RPA belong here?  Primarily integration.

Follow the money: expert assistants (increase impact of experts), case advisors (make everyone as good as the best), task automators (highly procedural routine tasks).  And personal productivity.

It is bullshit to say that a particular job will disappear — maybe tasks, but not entire jobs.

Your opportunity.

Layers: (see graphic at the end of section) highly automated tasks on lower levels.  Sense and respond above that.  Then human personal productivity assistants – useful but low value.

  • Chatbots – text based or speech based interactions, but no real smarts.
  • Recommendation services – expert, next best action
  • Smart Infrastructure – maintenance, management

Virtuous cycle, three steps:

  • Shift to self service in terms of access to tools.  Integration tools.  Process.  Kanban kind of tools.  Tools that can be used by a much broader audience.
  • Shift to (network) platforms.  Aggregates data and insights.  IaaS was really about cost and scale.  This is different.  Shared, networked, cloud platform.  Not about the underlying technology, but data and insights.  If everyone has to build their own, they won’t have access to aggregated data.  But the value of a networked cloud platform is access to data, aggregating insights, quicker.
  • Shift to learning systems.  Figuring out how to take recommendation services.

SnapLogic: integration assistant.
Dell Boomi StepLogic: operational data from customer history.  Can identify customers that are struggling.

[Graphics: Neil Ward-Dutton’s slides, including the layers diagram referenced above]


Clay Richardson (¤)

How to Survive the Great Digital Migration

Clay has a new startup, Digital Fast Forward, and is also advising at American University.

A poll found that digital transformation is the #1 priority for BPM, but the teams don’t know what to do.  Prediction: 75% of BPM programs will fail.  Referenced the World Economic Forum’s Fourth Industrial Revolution.  Creativity used to be low on the list; now it is near the top.  Companies are not seeing this yet.

Digital gold rush versus the digital drought.  They get the technology part, but not the skill part.  Less than 20% of companies have the skills.

AT&T’s competitors are not just Verizon and Sprint, but also tech giants like Amazon and Google. For the company to survive in this environment, Mr. Stephenson needs to retrain its 280,000 employees so they can improve their coding skills, or learn them, and make quick business decisions based on a fire hose of data coming into the company.

Strategies to address these: hire, reinvent, and outsource.  Try to take all three.

Maintain –> waterfall, implement –> agile/scrum, experiment –> ????? (design thinking?)  Actually very ad hoc.

Design, Validate, Learn.  Check out Google design sprints.  How do you move quickly into design sprints?  How many are familiar with Objectives and Key Results (OKR)?

Not just what you learn, but HOW you learn it.  Has to be interactive and immersive.  Like a hackathon.

Digital innovation boot camp: 6 weeks in Silicon Valley.  Retrain to become digital experts.  Put together for immersion.  Real-world experiences.  Tripled the volume of digital innovation ideas.  Accelerated speed to green-light digital projects.

Incorporated ‘Escape Room’ concepts into training exercises that he ran.  People love learning in interactive, immersive situations.

Digital platforms must evolve to support experimentation: AI, robotics, mobile, low code, IoT.  They will have to bring in rapid prototyping, OKR management, and hypothesis boards.  Need the cycle of build, measure, learn.



by kswenson at May 08, 2017 07:40 AM

April 25, 2017

Drools & JBPM: Just a few... million... rules... per second!

How would you architect a solution capable of executing literally millions of business rules per second? That also integrates hybrid solutions in C++ and Java? While at the same time drives latency down? And that is consumed by several different teams/customers?

Here is your chance to ask the team from Amadeus!

They prepared a great presentation for you at the Red Hat summit next week:

Decisions at a fast pace: scaling to multi-million transactions/second at Amadeus

During the session they will talk about their journey from requirements to the solution they built to meet their huge demand for decision automation. They will also talk about how a collaboration with Red Hat helped to achieve their goals.

Join us for this great session on Thursday, May 4th, at 3:30pm!



by Edson Tirelli (noreply@blogger.com) at April 25, 2017 03:41 PM

April 24, 2017

Drools & JBPM: DMN demo at Red Hat Summit

We have an event packed full of Drools, jBPM and Optaplanner content coming next week at the Red Hat Summit, but if you would like to know more about Decision Model and Notation and see a really cool demo, then we have the perfect session for you!

At the Decision Model and Notation 101 session, attendees will get a taste of what DMN brings to the table: how it allows business users to model executable decisions in a fun, high-level, graphical language that promotes interoperability and preserves their investment by preventing vendor lock-in.

But this will NOT be your typical slideware presentation. We have prepared a really nice demo of the end-to-end DMN solution announced by Trisotech a few days ago. During the session you will see a model being created with the Trisotech DMN Modeler, statically analyzed using the Method&Style DT Analysis module and executed in the cloud using Drools/Red Hat BRMS.

Come and join us on Tuesday, May 2nd at 3:30pm.

It is a full 3-course meal, if you will. And you can follow that up with drinks at the reception happening from 5pm-7pm at the partner Pavilion, where you can also talk to us at the Red Hat booth about this and anything else you are interested in.

Happy Drooling!



by Edson Tirelli (noreply@blogger.com) at April 24, 2017 11:52 PM

April 20, 2017

Sandy Kemsley: Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest...

[Content summary only, click through for full article and links]

by sandy at April 20, 2017 02:28 PM

April 14, 2017

Keith Swenson: AdaptiveCM Workshop in America for first time

The Sixth International AdaptiveCM Workshop will be associated with the EDOC conference this year, which will be held in Quebec City in October 2017; this is the first opportunity for many US and Canadian researchers to attend without having to travel to Europe.

Since 2011 the AdaptiveCM Workshop has been the premier place to present and discuss leading-edge ideas on adaptive case management and other non-workflow approaches to supporting business processes and knowledge workers in general.  It has been held in conjunction with the EDOC conference twice before, and with the BPM conference twice as well; however, it has always been held in Europe.

Key dates:

  • Paper submission deadline – May 7, 2017
  • Notification of acceptance – July 16, 2017
  • Camera ready – August 6, 2017
  • Workshop – October 10, 2017

Papers are welcome on the following topics:

  • Non-workflow BPM: how does one specify and support working patterns that are not fixed in advance, that depend upon cooperation, and where the elaboration of the working pattern for a specific case is a product of the work itself.  Past workshops have included papers on CMMN and Dynamic Condition Response Graphs.
  • Adaptive Case Management: experience with and approaches to how knowledge workers use their time in an agile way, including empirical studies of how knowledge work teams share and control their information, with contributions from vendors like Computas and ISIS Papyrus.
  • Decision Modeling and Management: a new extension of the workshop this year, encouraging papers that explore the ways a decision model might be used apart from a strictly defined process diagram for flexible knowledge work.

The biggest challenge is that many people working on systems for knowledge workers don’t know their systems have features in common with others.  For example: a system to help lawyers file all the right paperwork with the courts may not be seen initially as having commonality with a system to help maintenance workers handle emergency repairs.  Those commonalities exist — because people must manage their time in the face of change — and understanding their common structure is critical to allowing agile organizations to operate more effectively.

Titles of papers in recent years:

  • On the analysis of CMMN expressiveness: revisiting workflow patterns
  • Semantics of Higraphs for Process Modeling and Analysis
  • Limiting Variety by Standardizing and Controlling Knowledge Intensive Processes
  • Using Open Data to Support Case Management
  • Declarative Process Modelling from the Organizational Perspective.
  • Automated Event Driven Dynamic Case Management
  • Collective Case Decisions Without Voting
  • A Case Modelling Language for Process Variant Management in Case-based Reasoning
  • An ontology-based approach for defining compliance rules by knowledge workers in ACM: A repair service management case
  • Dynamic Context Modeling for ACM
  • Towards Structural Consistency Checking in ACM
  • Examining Case Management Demand using Event Log Complexity Metrics
  • Process-Aware Task Management Support for Knowledge-Intensive Business Processes: Findings, Challenges, Requirements
  • Towards a pattern recognition approach for transferring knowledge in ACM
  • How can the blackboard metaphor enrich collaborative ACM systems?
  • Dynamic Condition Response Graphs for Trustworthy Adaptive Case Management

Collaboration between research and practice

Participants in the past have come from all the key research institutions across Europe as well as some of the key vendors of flexible work support systems.  This year we hope to attract more interest from researchers and practitioners from Canada, the US, and the western hemisphere, together with the core EDOC community drawn from all over the world.  Meet and discuss approaches and techniques, and spend a day investigating and sharing all the latest ideas.

I will be there in Quebec City in October for sure, and hope to see as many of you as can make it!

Download the PDF Handout.

by kswenson at April 14, 2017 12:28 PM

April 12, 2017

Drools & JBPM: DMN Quick Start Program announced

Trisotech, a Red Hat partner, announced today the release of the DMN Quick Start Program.

Trisotech, in collaboration with Bruce Silver Associates, Allegiance Advisory and Red Hat, is offering the definitive Decision Management Quick Start Success Program. This unique program provides the foundation for learning, modeling, analyzing, testing, executing and maintaining DMN level-3-compliant decision models, as well as best practices to incorporate in an enterprise-level Decision Management Center of Excellence.

The solution is a collaboration between the partner companies around the DMN standard. This is just one more advantage of standards: not only are users free from the costs of vendor lock-in, but standards also allow vendors to collaborate in order to offer customers complete solutions.

by Edson Tirelli (noreply@blogger.com) at April 12, 2017 10:43 PM

April 11, 2017

Drools & JBPM: An Open Source perspective for the youngsters

Please allow me to take a break from the technical/community oriented posts and talk a bit about something that has been on my mind a lot lately. Stick with me and let me know what you think!

Twenty-one years ago, Leandro Komosinski, one of the best teachers (mentor might be more appropriate) I had, told me in one of our meetings:

"You should never stop learning. In our industry, if you stop learning, after three years you are obsolete. Keep that up for 5 years and you are relegated to maintaining legacy systems or, worse, you are out of the market completely."

While this seems pretty obvious today, it was a big insight for that 18-year-old boy. I don’t really have any data to back this claim or the timeframes mentioned, but that advice stuck with me ever since.

It actually applies to everything; it doesn’t need to be technology. The gist of it: it is important to never stop learning, never stop growing, personally and professionally.

That brings me to the topic I would like to talk about. Nowadays, I talk to a lot of young developers. Unfortunately, several of them, when asked “What do you like to do? What is your passion?”, either don’t know or just offer generic answers: “I like software development”.

"But, what do you like in software development? Which books have you been reading? Which courses are you taking?" And the killer question: "which open source projects are you contributing to?"

The typical answer is: “The company I work for does not give me time to do it.”

Well, let me break it down for you: “this is not about the company you work for. This is about you!” :) 

What is your passion? How do you fuel it? What are you curious about? How do you learn more about it?

It doesn’t need to be software, it can be anything that interests you, but don’t waste your time. Don’t wait for others to give you time. Make your own time.

And if your passion is technology or software, then it is even easier. Open Source is a lot of things to a lot of people, but let me skip the ideology and give you a personal perspective on it: it is a way to learn, to grow, to feed your inner kid, to show what you care for, to innovate, to help.

If you think about Open Source as “free labour” or “work”, you are doing it wrong. Open source is like starting a master’s degree and writing your thesis, except you don’t have teachers (you have communities), you don’t have classes (you do your own exploratory research), you don’t have homework (you apply what you learn) and you don’t have a diploma (you have your project to proudly flaunt to the world).

It doesn’t matter if your project is used by the Fortune 500 or if it is your little pet that you feed every now and then. The important part is: did you grow by doing it? Are you better now than you were when you started?

So here is my little advice for the youngsters (please take it at face value):

- Be restless, be inquisitive, be curious, be innovative, be loud! Look for things that interest you in technology, arts, sociology, nature, and go after them. Just never stop learning, never stop growing. And if your passion is software development, then your open source dream project is probably a google search away.

Happy Drooling,
Edson

by Edson Tirelli (noreply@blogger.com) at April 11, 2017 06:40 PM

April 03, 2017

BPM-Guide.de: BPM software should evolve and interoperate with other standards and tools – Interview with Judy Fainor, Chief Architect

Judy Fainor is the Chief Architect at Sparta Systems where she is responsible for enterprise software design, technology direction, and architecture. She has over 25 years of experience in product development including leading patent initiatives, speaking at technical conferences and interacting with Fortune 500 customers. Prior to Sparta Systems she was responsible for the architectural strategy of the IBM Optim Data Management portfolio where she led research and development projects that spanned IBM’s global labs including Japan, India, China, Israel and North America while also participating on the IBM Software Group Architecture Board.

by Darya Niknamian at April 03, 2017 08:00 AM

March 31, 2017

Drools & JBPM: A sneak peek into what is coming! Are you ready?

As you might have guessed already, 2017 will be a great year for Drools, jBPM and Optaplanner! We have a lot of interesting things in the works! And what better opportunity to take a look under the hood at what is coming than joining us for a session, a side talk, or a happy hour at the upcoming conferences?

Here is a short list of the sessions we have at two great conferences in the next month! The team and I hope to meet you there!

Oh, and check the bottom of this post for a discount code for the Red Hat Summit registration!


Santa Barbara, California, April 18-20, 2017

by Edson Tirelli (noreply@blogger.com) at March 31, 2017 11:53 PM

March 21, 2017

Drools & JBPM: DMN 1.1 XML: from modeling to automation with Drools 7.0

I am a freelance consultant, but I am acting today as a PhD student. The global context of my thesis is Enterprise Architecture (EA), which requires modeling the enterprise. As one aspect of EA is business process modeling, I have been using BPMN for years, but this notation is not very appropriate for representing decision criteria: a cascade of nested gateways quickly becomes difficult to understand and then to modify. So, when the OMG published the first version 1.0 Beta of the DMN specification in 2014, I found that DMN was a very interesting notation for modeling decision-making. I succeeded in developing my own DMN modeling tool, based on the DMN metamodel, using the Sirius plugin for Eclipse. But even the subsequent “final” version 1.0 of the DMN specification was not very mature.

The latest version 1.1 of DMN, published in June 2016, is quite good. In the meantime, software vendors (at least twenty) have launched good modeling tools, such as Signavio Decision Manager (free for academics), used for this article. This Signavio tool was already able to generate specific DRL files for running DMN models on the current version 6 of the BRMS Drools. In addition to the graphics, some vendors recently added the capability to export DMN models (diagram & decision tables) into “DMN 1.1 XML” files, which are compliant with the DMN specification. And the good news is that BRMS like Drools (future version 7, available in Beta) are able to run these DMN XML files to automate decision-making (a few lines of Java code are required to invoke these high-level DMN models).
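
For illustration, here is a minimal sketch of those few lines, using the Drools 7 (Beta) DMN API; the namespace, model name, input name, and decision name are placeholders for whatever the exported DMN XML file actually declares.

import org.kie.api.KieServices;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNRuntime;

public class DmnXmlInvoker {
    public static void main(String[] args) {
        // The exported "DMN 1.1 XML" file sits in the resources of a KIE project.
        DMNRuntime dmnRuntime = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession()
                .getKieRuntime(DMNRuntime.class);

        // Namespace and name are declared in the <definitions> root of the DMN file.
        DMNModel model = dmnRuntime.getModel("http://example.org/my-model", "MyDecisionModel");

        DMNContext context = dmnRuntime.newContext();
        context.set("Customer Age", 35); // an input data node of the model (placeholder)

        // Evaluate all decisions and read one result by its decision name.
        Object discount = dmnRuntime.evaluateAll(model, context)
                .getDecisionResultByName("Discount").getResult();
        System.out.println("Discount = " + discount);
    }
}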

This new approach of treating the “DMN 1.1 XML” interchange model directly is better for tool independence and model portability. Here is a short comparison between the former, classic but specific, solution and this new and generic solution, using the tool Signavio Decision Manager (latest version 10.13.0). MDA (Model Driven Architecture) and its three models, CIM, PIM & PSM, give us the appropriate reading grid for this comparison:

The classic, specific DMN solution (from Signavio Decision Manager to the BRMS Drools) maps onto the three MDA models as follows:

  • CIM (Computation Independent Model): representation model for business, independent of computer considerations. Artifacts: DRD (Decision Requirements Diagram) + Decision Tables.
  • PIM (Platform Independent Model): design model for computing, independent of the execution platform. Artifacts: none (this level is skipped).
  • PSM (Platform Specific Model): design model for computing, specific to the execution platform. Artifacts: DRL (Drools Rule Language) + DMN Formulae Java8-1.0-SNAPSHOT.jar.

The visible aspect of DMN is its emblematic Decision Requirements Diagram (DRD), which can be complemented with Decision Tables representing the business logic for decision-making. A DRD and its Decision Tables compose a CIM model, independent of any computer considerations.

Then, in the classic but specific DMN solution, Signavio Decision Manager is able, from a business DMN model (DRD diagram and Decision Tables), to export a DRL file directly for a Drools rules engine. This solution skips the intermediate PIM level, which is not very compliant with the MDA concept. Note that this DRL file needs a specific Signavio jar library with DMN formulae.

The new, generic DMN solution (from Signavio Decision Manager or other tools, to the BRMS Drools or other BRMS) maps onto the same three MDA models:

  • CIM (Computation Independent Model): representation model for business, independent of computer considerations. Artifacts: DRD (Decision Requirements Diagram) + Decision Tables.
  • PIM (Platform Independent Model): design model for computing, independent of the execution platform. Artifacts: DMN 1.1 XML (interchange model) containing FEEL expressions.
  • PSM (Platform Specific Model): design model for computing, specific to the execution platform. Artifacts: none (this level is no longer needed).

The invisible aspect of DMN is its DMN XML interchange model, very useful for exchanging a model between modeling tools. DMN XML is also very useful for going from model to automation. The DMN XML model takes computer considerations into account, but as it is defined in the DMN specification, a standard published by the OMG (Object Management Group), it is independent of any execution platform, so it is a PIM model. DMN XML complies with the DMN metamodel and can be checked against an XSD schema provided by the OMG. The latest version 1.1 of DMN has refined this DMN XML format.

As DMN is a declarative language, a DMN XML file contains essentially declarations. The business logic it includes can be expressed with FEEL (Friendly Enough Expression Language) expressions. All entities required for a DMN model (input data, decision tables, rules, output decisions, etc.) are exported into the DMN XML file through a mechanism called serialization. That is why automation is now possible from DMN XML directly. Note that not all DMN modeling tools can export (or import) the DMN XML format.

With the new generic DMN solution, Signavio Decision Manager is now able, from the same business DMN model (DRD diagram and decision tables), to export the “DMN 1.1 XML” interchange model. As the future 7.0.0 version of Drools is able to interpret the “DMN 1.1 XML” format directly, the last level, PSM, specific to the execution platform, is no longer needed.

The new generic DMN solution, which does not skip the PIM level, is definitely better than the specific one and is a good basis for automating decision-making. Another advantage, as Signavio points out, is that this new approach using “DMN 1.1 XML” reduces vendor lock-in.

Thierry BIARD

by Thierry Biard (noreply@blogger.com) at March 21, 2017 03:10 PM

March 18, 2017

Sandy Kemsley: Twelve years – and a million words – of Column 2

In January, I read Paul Harmon’s post at BPTrends on predictions for 2017, and he mentioned that it was the 15th anniversary of BPTrends. This site hasn’t been around quite that long, but today marks...

[Content summary only, click through for full article and links]

by sandy at March 18, 2017 01:17 PM

March 13, 2017

BPinPM.net: Invitation to Best Practice Talk in Hamburg

Dear BPM-Experts,

To facilitate knowledge exchange and networking, we would like to invite you to our first “Best Practice Talk” about process management. The event will take place on March 30, 2017 at the Mercure Hotel Hamburg Mitte.

Experts from Olympus, ECE, and PhantoMinds will provide inspiring presentations and we will have enough time to discuss BPM questions. 🙂

Please visit our xing event for all the details:
https://www.xing.com/events/best-practice-talk-prozessmanagement-hamburg-1788523

See you in Hamburg!
Mirko

by Mirko Kloppenburg at March 13, 2017 08:31 PM

March 12, 2017

Drools & JBPM: DroolsJBPM organization on GitHub to be renamed to KieGroup


   In preparation for the 7.0 community release in a few weeks, the "droolsjbpm" organization on GitHub will be renamed to "kiegroup". This is scheduled to happen on Monday, March 13th.

   While the rename has no effect on the code itself, if you have cloned the code repository, you will need to update your local copy with the proper remote URL, changing it from:

https://github.com/droolsjbpm/<repository>.git

   To:

https://github.com/kiegroup/<repository>.git

   You can do this with, for example: git remote set-url origin https://github.com/kiegroup/<repository>.git
   Unfortunately, the URL redirect feature in GitHub will not support this rename, so you will likely have to update the URL manually on your local machines.

   Sorry for the inconvenience. 

by Edson Tirelli (noreply@blogger.com) at March 12, 2017 02:56 PM

March 08, 2017

BPM-Guide.de: My Interview with 5 Beautifully Unique Women at Camunda

Let me start by saying Happy International Women’s Day to my fellow females and males who are taking it upon themselves to discuss and share their emotions, experiences and challenges faced by women around the globe.

With the recent article by former Uber employee Susan Fowler and the theme of this years International Women’s Day: Women in the Changing World of Work: Planet 50-50 by 2030, I wanted to highlight the diversity of women I have the privilege of working with everyday to further showcase that women continue to take on a variety of roles, changing the workforce.

I am a firm …

by Darya Niknamian at March 08, 2017 07:34 AM

February 03, 2017

Sandy Kemsley: AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and...

[Content summary only, click through for full article and links]

by sandy at February 03, 2017 01:15 PM

February 02, 2017

Drools & JBPM: AI Engineer - Entando are Hiring

Entando are looking to hire an AI Engineer, in Italy, to work closely with the Drools team building a next generation platform for integrated and hybrid AI. Together we'll be looking at how we can build systems that leverage and integrate different AI paradigms for the contextual awareness domain - such as enhancing our complex event processing,  building fuzzy/probability rules extensions or looking at Case Based Learning/Reasoning to help with predictive behavioural automation.

The application link can be found here.

by Mark Proctor (noreply@blogger.com) at February 02, 2017 08:41 PM

Drools & JBPM: Drools & jBPM are Hiring

The Drools and jBPM team are looking to hire. The role requires a generalist able to work with both front-end and back-end code. We need a flexible and dynamic person who can handle whatever is thrown at them and relishes the challenge of learning new things on the fly. Ideally, although not a requirement, you'll be able to show some contributions to open source projects. You'll work closely with some key customers, implementing their requirements in our open source products.

This is a remote role, and we can potentially hire in any country where there is a Red Hat office, although you may be expected to do very occasional travel to visit clients.

The application link for the role can be found here:

Mark

by Mark Proctor (noreply@blogger.com) at February 02, 2017 08:22 PM

January 26, 2017

Keith Swenson: How do you want that Standard?

You know the old adage: if you want something real bad, you get it real bad. If you want something worse, you get it worse. This comes to mind when I think about the DMN proposed standard. Why? There is something about the process that technical standards go through…

I am writing again about the Decision Model and Notation (DMN) specification which is a promising approach to modeling decision logic.  It is designed by a committee.

Committees

A group of people come together to accomplish a goal. We call it a committee. It most certainly is not an efficient machine for producing high-quality design ideas. Actually, it more closely resembles a troupe of clowns doing slapstick than a machine. I have participated in that, I know. There are many different agendas:

The True Believer: This is someone who really, really wants to make an excellent contribution to the industry. They work very, very hard. Unfortunately they follow a course set by myths and legends of the last system they designed. They fear going down the wrong blind alley. They tend to zealously follow a new, innovative, and possibly untested, direction. The true believer spends a lot of time on Reddit.

The Gold Digger: This is a consultant who knows that complicated, complex documentation of any kind needs a host of experts who can help explain it to people. Like everyone, they fear ambiguity in the spec, but they also fear incompleteness and simplicity. Justified by an attempt to be complete, they tend to drive the spec to be endlessly long and complex, and to include as many side topics as possible. The gold digger sticks to LinkedIn.

The Vendor Defender: The defender knows that the principal risk is that someone else will implement this before they do. Therefore they contribute copiously when the spec appears to be going in a way contrary to their existing technology investments, but sit back and relax when it appears that the committee is going nowhere. Their fear is that the spec will be finished before they have resources to implement it. They tend to quickly bring up all the (obscure) problems with the spec (particularly ones that conflict with their existing approach) but are slothful when it comes to finding solutions that they don't already have. The defender watches MSNBC and CNN.

The Parade Master: This is a person who is primarily interested in the marketing value of having a well branded name, a logo, and the ability to claim support by many different products.  Their fear is that nobody will pay attention to the effort.  They tend to push the spec to be very easy to implement in superficial ways in order to claim support and to include all the proper buzz terms in all the right places.  You can find them on Twitter.

The Professor: This is a person from academia who is probably quite knowledgeable about all the existing approaches, even some from ancient history more than 5 years ago.  The professor typically proposes well thought out, consistent approaches without regard to pragmatic aspects of whether the average user can understand it or not.  Their fear is that this effort will needlessly duplicate an earlier one, or fail to leverage an earlier good work.  The professor, beyond blogging, has hacked Siri and Google Analytics together to bring them feeds from The Onion.

Levels Of Conformance

Different people with different agendas work together to make a document that leads the industry in a new direction. Some want it super complete and super detailed; some want everything that works and only things that work; and others want a minimal set just barely enough to glorify the claim of having implemented it. The solution is to allow for levels of compliance, and DMN is no exception. There are three levels of conformance:

  • Level 3 – implementations must conform to the visual notation guidelines of the spec, both for the overall picture (DRG) and for the parts that compose the overall graph (DRD). There are requirements on the metadata of these parts, and the decision models must be expressed in the FEEL expression language.
  • Level 2 – like above, but the actual expressions can be in a simplified language.
  • Level 1 – like above, except that there is no requirement on how the conditions that you base the decision on are expressed. The expressions do not need to be executable, and could in fact be arbitrary pseudo code snippets that look like conditional expressions but “are not meant to be interpreted automatically.”

Level 1 compliance is essentially useless for designing a decision model that actually makes decisions for you. Since the expressions can be literally anything, there is no way to design a model once and use it for anything other than printing it out and looking at it. Clearly, vendors are making decision tables that work, but they each work differently, with completely different kinds of expressions and different interpretations.

Even within the areas that are supposedly enforced, there are many optional aspects of the model. There are diagrams listed with the caveat that they are only examples and that many other examples would be possible, without stating how those might be constrained. There are places which actually state that the design is implementation dependent.

This is quite convenient for vendors. You can take almost any implementation of decision tables and claim level 1 conformance, as long as you make the graphics conform to some fairly basic layout requirements.

What Does the Customer Want?

The purpose of a specification is captured in the goals of the standard. DMN lists these goals:

  • The primary goal of DMN is to provide a common notation that is readily understandable by all business users, ….   DMN creates a standardized bridge for the gap between the business decision design and decision implementation. DMN notation is designed to be useable alongside the standard BPMN business process notation.
  • Another goal is to ensure that decision models are interchangeable across organizations via an XML representation.

You want to be able to make decision models that can be created by one person and understood by another. The decision logic written by one person must be unambiguous, it must be clear, and it must not be mistaken for meaning something else. Level 1 conformance simply does not meet either goal to any degree. The decision expressions can use any syntax, any vocabulary, and any semantics. By way of analogy, it is a little bit like saying that the message from the designers can use any language (French, German, or Creole) just as long as it uses the Roman alphabet. The fundamental thing about a decision is how you write the basic conditions.

Clearly, allowing any expression language — even ones that are not formalized — helps the vendors.  They all have different languages, and the spec does not require that they do anything about that.

It is similarly clear that if you take a model from a level 1 tool and bring it to another tool, there is no guarantee that the second tool can read and display it. Most tools require that the expressions be in their own expression language, so a model in a different language will most likely fail to be read.

What Do You Need?

If you are considering DMN as a user, consider what you need.  You are going to invest a lot of hours into learning the details of DMN.

 


by kswenson at January 26, 2017 06:09 AM

January 23, 2017

Sandy Kemsley: BPM skills in 2017–ask the experts!

Zbigniew Misiak over at BPM Tips decided to herd the cats, and asked a number of BPM experts on the skills that are required – and not relevant any more – as we move into 2017 and beyond. I was happy...

[Content summary only, click through for full article and links]

by sandy at January 23, 2017 01:20 PM

January 19, 2017

Sandy Kemsley: AIIM Toronto seminar: @jasonbero on Microsoft’s ECM

I’ve recently rejoined AIIM — I was a member years ago when I did a lot of document capture and workflow implementation projects, but drifted away as I became more focused on process...

[Content summary only, click through for full article and links]

by sandy at January 19, 2017 03:42 PM

January 18, 2017

Keith Swenson: DMN Technology Compatibility Kit (TCK)

A few months ago I wrote about the Decision Model and Notation standard effort. Since getting involved at that time, I am happy to report a lot of progress, but at the same time there is much further to go.

What is DMN?

Decision Model and Notation promises to be a standard way for business users to define complex decision logic so that other business users (that is non-programmers) can view and understand the logic, while at the same time the logic can be evaluated and used in process automation and other applications.

A decision table is an example of a way of expressing such logic that is both visually representable and executable. DMN takes decision tables to the next level. It allows you to build a graph (called a DRG) of elements, where each element can be a decision table or one of a number of other kinds of basic decision expression blocks. That very high-level, simplified view of DMN should be sufficient for this discussion.

Pipe Dream?

I have seen a lot of standards specs in my time. Most standards are documents that are drawn up by a group of technologists who have high hopes of solving an important problem. Most standards documents are not worth the paper they are printed on. The ones that don’t make it are quickly forgotten. The difference between the proposed standards that disappear (the pipe dreams) and those that survive has to do with adoption. Anyone can write a spec and propose a standard but only adopted standards matter.

I became convinced early last year that the time was right for something beyond decision tables, and DMN seemed to be drawing the right comments from the right people. However, I was shocked to find that nobody had actually implemented it. A couple of vendors claimed to implement it, but when I pressed further, I found that what they claimed to implement was a tiny fraction, and often that fraction had been done in an incompatible way. In other words, the vendor had something similar to DMN, and they were calling it DMN in order to get a free ride on the bandwagon.

Running Code

The problem with a specification that does not have running code is that the English-language text is subject to interpretation. Until implemented, the precise meaning of phrases of the spec cannot be known. I say: the code is 10 times more detailed than the spec can ever be; until you have the code you cannot be sure of the intent of the spec. Once code is written and running, you can compare implementations and sort out the differences.

What is a TCK?

What we need is running code. In the Java community since the 1990s there have been groups that get together to build parts of the implementation, or running technological pieces that help in making the implementations a reality. It is more than a spec. A TCK (Technology Compatibility Kit) might include code that is part of the final implementation. Or it might be test cases that could be run. Or anything else beyond the spec itself that helps implementers create a successful implementation.

At the 2016 bpmNEXT conference we decided to form a TCK for DMN. The goal is simple: DMN aims to be a standard way of expressing conditional logic, and we need to assure that this logic runs the same on every implementation. What we need, then, is simply a set of test cases: a sample DMN diagram, with some context data, and the expected results.

Let’s Collect some DMN Models

The DMN specification defines an XML based file format for a DMN diagram. Using this, you can write out a DMN diagram to a file, and read it back in again. All of the tags necessary are defined, along with specific name spaces. Each tag is clearly associated with an element of the DMN diagram. This part of the spec is quite clear. It really should be just a matter of contacting vendors with existing implementations, and asking them to send some example files.

I was surprised to find that of the 16 vendors who claim DMN compatibility, essentially none of them could read and write the standard format. Without the ability to transfer a model from one tool to another, there is no easy way to assure that separate implementations actually function the same way. Reading and writing the standard file format is relegated in the spec to a level 3 compatibility requirement. The committee does not provide DMN file examples aimed at assuring that import/export works consistently across implementations.

Trisotech was building a modeling tool that imported and exported the format, but they hoped to leverage other implementations to evaluate the rules. Bruce Silver, in his research for his book on DMN, had implemented his own evaluation engine to read and execute the format. There was a challenge in May to encourage support of the format. Open Rules, Camunda, and One Decision demonstrated or released converters. Red Hat was committed to creating a DMN evaluation engine based directly on the standard file format and execution semantics. It is all hampered because Level 1 compliance allows vendors to claim compatibility with virtually no assurance that users' efforts will be transferable elsewhere.

There is, however, a deep commitment in the DMN community to make the standard work.  From Bruce’s and Red Hat’s implementations we were able to assemble a set of test decision models with good coverage of the full DMN standard.

The Rest of the Test Case

The other thing we need is technically outside of the standard, and that is a way to define a set of input data and expected results. The DMN standard defines the names and types of data values that must be supplied to the model, but it is expected that each execution environment will get those values either from a running process or another kind of application data store, and where the data comes from is outside the scope of the DMN standard.

We decided to define a simple, transparent XML file structure. The file contains a set of 'tests'. Each test contains a set of input data, named according to the requirements of the DMN model being tested. Each test also has a set of data values which are the expected outputs of the execution. We even defined how to compare floating point numbers, and to what precision values must match.

Testing whether your implementation is correct becomes a very simple task. Regardless of the technology used to implement the DMN standard, one needs code that can read the test values from the input file, give them to the DMN model execution, take the results, and compare them to the expected results. If they match, you pass the test. It does not matter whether your implementation is in Java, C++, XSLT, C#, Microsoft BASIC, Perl, or whatever. Any language can read the test input values and compare the output to the expected results.
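
To make the comparison step concrete, here is a minimal Java sketch of how a test harness might match numeric results to a fixed precision. The scale of 8 digits is an assumption for illustration; the actual TCK defines its own matching precision.

import java.math.BigDecimal;
import java.math.RoundingMode;

public class ResultComparator {

    // Assumed precision for this sketch; the TCK specifies its own rules.
    private static final int SCALE = 8;

    // Compare an expected value from the test file with an actual engine result.
    public static boolean matches(Object expected, Object actual) {
        if (expected instanceof Number && actual instanceof Number) {
            BigDecimal e = new BigDecimal(expected.toString()).setScale(SCALE, RoundingMode.HALF_UP);
            BigDecimal a = new BigDecimal(actual.toString()).setScale(SCALE, RoundingMode.HALF_UP);
            return e.compareTo(a) == 0;
        }
        return expected == null ? actual == null : expected.equals(actual);
    }

    public static void main(String[] args) {
        // Floating point noise below the matching precision is tolerated.
        System.out.println(matches(0.1 + 0.2, 0.3)); // prints: true
    }
}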

[Figure: test set diagram]

A “runner” is needed to load the test into the engine and to evaluate the results. Most vendors will need to implement their own runner according to their own technical needs; the TCK only defines the file formats to be read for the test. The TCK also makes a Java-based runner available as open source, but implementations are not required to use it.

Results

The status is today that we have:

  • A set of tests ready today
  • All available as open source
  • Tests touch upon a broad range of DMN requirements.
  • Each test is defined according to a specific capability mentioned in the DMN document.
  • Each test has a DMN model expressed in the file format defined by the standard.
  • Test input and expected values are in a file format that is simple to read.
  • Every test has been executed in two completely independent implementations of DMN: one written in Java, and the other written in XSLT.
  • The entire test suite is completely transparent: each file can be examined and reviewed by any member of the public by accessing them at the DMN-TCK GitHub site.

Over time we will improve these tests, and develop many more tests, to increase the coverage of DMN capability.  We hope to get contributions from more vendors who want to see DMN succeed.  Yet we already have a good, useful test set today.

If you are a consumer of decision logic, and you are thinking of purchasing an implementation of DMN, and you don't want to be locked into a vendor-specific not-quite-standard implementation, you should ask your vendor whether they can run these tests. Or better yet, you can try running them yourself. You simply can't have a serious implementation of DMN without demonstrating that it can run these fairly straightforward DMN TCK tests. If the tests don't run, ask your vendor why. Do you feel comfortable with the answer?

Conclusion

The success of DMN depends upon getting implementations that run the same way. Talking about DMN will never assure they run the same way. Advertisements and brochures do not assure that your investment in a DMN model will be usable anywhere else. The only way to assure this is to have a common core of tests that can quickly and easily demonstrate that implementations work and get the same results. That is what you want in any decision logic: the same results for the same inputs, every time. Ask your vendor if they can run the DMN-TCK tests.

Acknowledgements

No effort like this succeeds without a lot of dedication and long hours by key team members. Eight people have contributed to this TCK, but I want to especially highlight two in particular. Edson Tirelli, technical lead for the Red Hat DMN project, was tireless in his thorough examination of the specification and its implementation in Java. Bruce Silver has also been a monumental motivation for the TCK, and made a separate implementation in XSLT. Working through all the differences between these two implementations, and coming to a common understanding of all the points of the spec, gives us all confidence that the existing tests are robust and accurate.


by kswenson at January 18, 2017 07:44 PM

January 17, 2017

Thomas Allweyer: A study on process management and digital transformation worth reading

What role does process management play in the digital transformation of companies? Researchers at the Zurich University of Applied Sciences (ZHAW) investigated this question in this year's edition of their study on business process management. While it is stressed again and again that successful digitalization projects do not work without business processes that are optimally aligned with them, most process management activities still seem to be concerned with the efficiency of internal processes rather than with, say, the customer experience. And indeed, a very large share of the study's respondents name efficiency as an objective of process management. The most important motivation, however, is achieving a high level of transparency. Customer satisfaction is also gaining importance as a process management goal; it is now prioritized about as highly as efficiency.

The study's authors find that the transparency gained through process management is indeed used to identify digitalization potential for customer interactions and weakly structured processes, but often not systematically. For example, process models are usually not linked to the customer journeys used in digitalization initiatives. There is therefore a risk that newly developed front-end solutions will not be integrated with back-end systems in the form of end-to-end processes, creating new silos. Many companies are also still hesitant about making their processes more flexible; the topic of adaptive case management continues to lead a shadow existence. The picture is similar for the use of customer data: these data are only rarely used to optimize and design processes in a customer-oriented way.

In addition to the online survey, five case studies were examined in a workshop and are presented in detail in the study report. They come from different industries such as vehicle leasing, insurance, public administration and telecommunications. Some of them genuinely concern the end-to-end digitalization of customer interactions and changes to business models, as in the case of a vehicle leasing provider whose sales had so far mainly gone through car dealers: in the future, the entire process can also be handled completely online, with the lessee identifying themselves via video. A project of the Canton of Zurich enables citizens to handle all interactions with authorities related to a move completely electronically. Other projects deal more with conventional process automation, for example for service management. In many cases, internal process improvements are needed first as a prerequisite for digitalizing customer-facing processes.

In their conclusion, the study's authors argue that process management needs to engage more with the methods and tools of other management disciplines, such as innovation management, enterprise architecture management, knowledge management and customer experience management. This would make it possible to explore the opportunities and limits of process digitalization more effectively.

Download the study

by Thomas Allweyer at January 17, 2017 10:18 AM

BPM-Guide.de: Orchestration of microservices and the JAX

From my point of view, 2016 was the year the idea of microservices finally broke through. The topic is extremely present and will not go away by being ignored. We ourselves published an article on "BPM + Microservices" in Java Magazin back in 2015: how can order be brought into a pile of (micro)services? These thoughts were also worked up in the whitepaper BPM & Microservices. Quite some time has passed since then, and there have been many discussions about it. We have learned a lot; personally, for example, I would now rather speak of "orchestration" than of "BPM". I will cover this and many other current experiences from practice in my …

by Bernd Rücker at January 17, 2017 08:22 AM

January 13, 2017

Sandy Kemsley: BPM books for your reading list

I noticed that Zbigniew’s reading list of BPM books for 2017 included both of the books where I have author credit on Amazon: Social BPM, and Best Practices for Knowledge Workers. You can find the...

[Content summary only, click through for full article and links]

by sandy at January 13, 2017 04:20 PM

January 08, 2017

Drools & JBPM: DMN runtime example with Drools

As announced last year, Drools 7.0 will have full runtime support for DMN models at compliance level 3.

The runtime implementation is, at the time of this blog post, feature complete, and the team is now working on nice-to-have improvements, bug fixes and user friendliness.

Unfortunately, we will not have full authoring capabilities in time for the 7.0 release, but we are working on it for the future. The great thing about standards, though, is that there is no vendor lock-in. Any tool that supports the standard can be used to produce the models that can be executed using the Drools runtime engine. One company that has a nice DMN modeller is Trisotech, and their tools work perfectly with the Drools runtime.

Another great resource about DMN is Bruce Silver's website Method & Style. In particular, I highly recommend his book for anyone who wishes to learn more about DMN.

Anyway, I would like to give users a little taste of what is coming and show one example of a DMN model and how it can be executed using Drools.

The Decision Management Community website periodically publishes challenges for anyone interested in trying to provide a solution for simple decision problems. This example is my solution to their challenge from October/2016.

Here are the links to the relevant files:

* Solution explanation and documentation
* DMN source file
* Example code to execute the example
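
For a flavor of the runtime API before clicking through, here is a minimal sketch of how a DMN model can be evaluated with the Drools DMN API. The model namespace, model name and input variable are placeholders for illustration, not the values from the challenge solution; the sketch also assumes the .dmn file is packaged in a kjar on the classpath.

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class DmnExampleRunner {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        DMNRuntime dmnRuntime = container.newKieSession().getKieRuntime(DMNRuntime.class);

        // Placeholder namespace and model name: use the values from your .dmn file.
        DMNModel model = dmnRuntime.getModel("http://example.com/dmn", "my-model");

        // Placeholder input: set one context entry per input data node in the DRD.
        DMNContext context = dmnRuntime.newContext();
        context.set("Customer Level", "Gold");

        // Evaluate all decisions in the model and print the resulting context.
        DMNResult result = dmnRuntime.evaluateAll(model, context);
        System.out.println(result.getContext());
    }
}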

I am also reproducing a few of the diagrams below, but take a look at the PDF for the complete solution and the documentation.

Happy Drooling!





by Edson Tirelli (noreply@blogger.com) at January 08, 2017 10:25 PM

January 06, 2017

Thomas Allweyer: How do you develop process maps?

Process maps are a frequently used tool for structuring an organization's activities. But how do you develop a process map? And what makes a good process map? These questions are addressed in the article "Prozesslandkarten entwickeln – Vorgehen, Qualitätskriterien und Nutzen" by Appelfeller, Boentert and Laumann, published in the journal Führung und Organisation (zfo), issue 6/2016.

The literature offers a total of five idealized approaches to developing a process map:

  1. Deriving the processes from the organization's goals (goal-based approach)
  2. Composing the processes from individual activities (activity-based approach)
  3. Deriving the processes from the objects handled in the organization (object-based approach)
  4. Deriving the process map from the map of another existing or idealized organization (reference-model-based approach)
  5. Decomposing the organization's functions into sub-functions and assembling them into processes (function-based approach)

In practice, several of these approaches are usually combined. The authors illustrate a possible procedure using the example of the process map for Münster University of Applied Sciences (Fachhochschule Münster).

Finally, a number of quality criteria are discussed. A process map should convey the cross-departmental process mindset and support the organization's strategic direction. Further criteria concern, among other things, the appropriate naming of processes, a systematic structure, and suitability for the various user groups.

 

by Thomas Allweyer at January 06, 2017 09:21 AM

January 02, 2017

BPM-Guide.de: Camunda in 2016 and 2017

Camunda has had an outstanding 2016:

Tremendous Growth

More than 120 customers are now using Camunda BPM Enterprise which allowed us to grow our annual revenue by an incredible 82%.

Our revenue stream is subscription based, and more than 98% of our customers decided to renew their subscription (some of them entering their fourth year of subscription). Since we are not talking about SaaS here, but rather the enterprise subscription for an open source software product, this number speaks to the actual value of our enterprise services such as support and maintenance.

Spreading world-wide

Our customers as well as our 50 system integration partners are …

by Jakob Freund at January 02, 2017 08:07 AM

December 28, 2016

Thomas Allweyer: Flexible case management systems still see little use

According to the Gartner Group, business process management systems (BPMS), with their model-based approach, are well suited as a basis for case management frameworks (CMF) that support weakly structured, knowledge-intensive processes. The analysts therefore dedicate one of their "Magic Quadrant" reports specifically to BPM-platform-based CMFs. With their case management modules, BPMS vendors compete with, among others, vendors of enterprise content management (ECM) and customer relationship management (CRM) systems, who have likewise enriched their products with case management functionality.

In addition, established standard software exists for many concrete application areas. The advantage of BPMS-based CMFs is above all their considerably higher flexibility and adaptability. Prebuilt templates for specific industries and use cases are also offered for many of these platforms.

The adoption of case management frameworks is still comparatively low. The study's authors estimate that so far less than 20% of the companies for which it would be suitable use this technology, and that this will change only slowly over the next few years.

In recent years there has been much discussion of adaptive case management (ACM), in which workers develop and change the workflows while handling a case. According to Gartner, this is so far more hype than reality. With most vendors, the adaptation options at runtime are still largely limited to choices defined at design time. Demand for comprehensive adaptability has also remained limited so far.

Download the report from Appian (registration required)

by Thomas Allweyer at December 28, 2016 02:42 PM

December 22, 2016

Sandy Kemsley: RPA just wants to be free: @WorkFusion RPA Express

Last week, WorkFusion announced that their robotic process automation product, RPA Express, will be released in 2017 as a free product; they published a blog post as well as the press release, and...

[Content summary only, click through for full article and links]

by sandy at December 22, 2016 09:23 PM

December 17, 2016

Drools & JBPM: Introducing Drools Fiddle

Drools Fiddle is the fiddle for Drools. Like many other fiddle tools, Drools Fiddle allows both technical and business users to play around with Drools, and aims to make Drools accessible to everyone.



The entry point to Drools Fiddle is the DRL editor (top left panel), which allows users to define and implement both fact models and business rules using the Drools Rule Language. Once the rules are defined, they can be compiled into a KieBase by clicking the Build button.

If the KieBase is built successfully, the visualization panel on the right will display the fact types as well as the rules as graph nodes. For instance, this DRL will be displayed as follows:



// Declare a fact type with a single int attribute.
declare MyFactType
    value : int
end

// When a MyFactType fact has value 42, modify it to 41.
rule "MyRule"
when
    f : MyFactType( value == 42 )
then
    modify( f ) { setValue( 41 ) }
end



All the actions performed on the working memory are represented by arrows in this graph. The purpose of the User icon is to identify the actions performed directly by the user.

For example, let's see how we can dynamically insert fact instances into the working memory. After the KieBase compilation, the Drools Facts tab is displayed on the left:




This form allows you to create instances of the fact types previously declared in the DRL. For each instance inserted into the working memory, a blue node is displayed in the Visualization tab. The arrow coming from the User icon shows that this action was performed manually by the user.

Once your working memory is ready, you can trigger the fireAllRules method by clicking the Fire button. As a result, all the events occurring in the engine (rule matches, fact insertions, updates and deletions) are displayed in the visualization tab. In the above example, we can see that the fact inserted by the user in step 1 triggered the rule "MyRule", which in turn modified the value of the fact from 42 to 41.
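
For comparison, here is a rough Java sketch of what the fiddle does behind the scenes for this scenario. It assumes the DRL above is compiled into a kjar on the classpath under a package named com.example; both the packaging and the package name are assumptions for illustration.

import org.kie.api.KieServices;
import org.kie.api.definition.type.FactType;
import org.kie.api.runtime.KieSession;

public class FiddleByHand {
    public static void main(String[] args) throws Exception {
        // Assumes the DRL is packaged in a kjar on the classpath (package com.example).
        KieSession session = KieServices.Factory.get()
                .getKieClasspathContainer()
                .newKieSession();

        // Look up the declared type and create an instance with value == 42.
        FactType factType = session.getKieBase().getFactType("com.example", "MyFactType");
        Object fact = factType.newInstance();
        factType.set(fact, "value", 42);

        // Insert the fact (the "user action" in the fiddle) and fire the rules.
        session.insert(fact);
        session.fireAllRules();

        // "MyRule" matched and modified the fact from 42 to 41.
        System.out.println(factType.get(fact, "value")); // prints: 41
        session.dispose();
    }
}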

Some additional features have been implemented in order to enhance the user experience: 
  • Step-by-step debugging of the engine events.
  • Persistence: the Save button associates a unique URI with a DRL snippet so it can be shared with the community, e.g.: http://droolsfiddle.tk/#/VYxQ4rW6

So far, only a minimum set of features has been implemented to showcase the Drools Fiddle concept, but there are still a lot of exciting features in the pipeline:

  • Multi tabbed DRL editor
  • Decision table support
  • Sequence diagram representation of rule engine events
  • Fact history visualization
  • Improvement of log events visualization
  • KieSession persistence to resume stateful sessions
  • Integration within Drools Workbench

The source code of Drools Fiddle is available on GitHub under the Apache v2 License, and you can access the application at http://droolsfiddle.tk. Should you wish to contribute, pull requests are welcome ;)

We would love to hear feedback from the Drools community in order to improve the fiddle and make it evolve in the right direction.

by Julien Vipret & Matteo Casalino

by Julien VIPRET (noreply@blogger.com) at December 17, 2016 01:35 PM

December 12, 2016

Thomas Allweyer: BPMN practice handbook extended to cover CMMN and DMN

The Praxishandbuch BPMN by camunda founders Jakob Freund and Bernd Rücker has recently appeared in its fifth edition. The main additions are compact descriptions of the two newer standards from the BPMN environment: "Case Management Model and Notation" (CMMN) for describing weakly structured, flexible case handling, and "Decision Model and Notation" (DMN) for modeling and specifying decision logic. The book describes not only the standards and their notation elements themselves, but also how the three notations sensibly interact. Highly structured BPMN processes and flexible case handling described in CMMN can trigger each other, and wherever more complex decisions arise, it can be helpful in both BPMN and CMMN models to reference decision diagrams and tables according to DMN.

The chapter on automation has also been fundamentally revised. Here, too, the topics of case management and the execution of decision logic have been added, and newer practical tips drawn from the experience of numerous process automation projects have been incorporated. In return, the extensive XML examples present in earlier editions have been removed, since they were probably hardly ever read. Otherwise the structure of the book is unchanged: as before, the reader gets a comprehensive introduction to BPMN and the camunda method framework.


Freund, J.; Rücker, B.:
Praxishandbuch BPMN – Mit Einführung in CMMN und DMN
5th edition, Hanser 2016.
The book on amazon.

by Thomas Allweyer at December 12, 2016 09:13 AM

December 07, 2016

Sandy Kemsley: TechnicityTO 2016: Challenges, Opportunities and Change Agents

The day at Technicity 2016 finished up with two panels: the first on challenges and opportunities, and the second on digital change agents. The challenges and opportunities panel, moderated...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 08:35 PM

Sandy Kemsley: TechnicityTO 2016: Open data driving business opportunities

Our afternoon at Technicity 2016 started with a panel on open data, moderated by Andrew Eppich, managing director of Equinix Canada, and featuring Nosa Eno-Brown, manager of Open...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 07:04 PM

Sandy Kemsley: TechnicityTO 2016: IoT and Digital Transformation

I missed a couple of sessions, but made it back to Technicity in time for a panel on IoT moderated by Michael Ball of AGF Investments, featuring Zahra Rajani, VP Digital Experience at Jackman...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 05:32 PM

Sandy Kemsley: Exploring City of Toronto’s Digital Transformation at TechnicityTO 2016

I’m attending the Technicity conference today in Toronto, which focuses on the digital transformation efforts in our city. I’m interested in this both as a technologist, since much of my...

[Content summary only, click through for full article and links]

by sandy at December 07, 2016 02:36 PM

December 05, 2016

December 01, 2016

Sandy Kemsley: What’s on your agenda for 2017? Some BPM conferences to consider

I just saw a call for papers for a conference for next October, and went through to do a quick update of my BPM Event Calendar. I certainly don’t attend all of these events, but like to keep track of...

[Content summary only, click through for full article and links]

by sandy at December 01, 2016 05:31 PM

November 24, 2016

BPM-Guide.de: Camunda BPM 7.6 Roadshow

Berlin 16.01. | Hamburg 17.01. | Düsseldorf 18.01. | Stuttgart 19.01. | München 20.01. | Zürich 24.01. | Wien 25.01.

The most important news about Camunda BPM 7.6

The Camunda BPM 7.6 roadshow will visit a total of 7 cities in January 2017. Each event runs from 9 a.m. to 12 noon and is free of charge.

At this event you will learn everything about Camunda BPM, the new features in version 7.6 and much more.

Camunda co-founder Bernd Rücker and other Camunda contacts will be on site, and we look forward to seeing you again or meeting you in person.

Note: attendance is free, but places are limited. See you soon!

Dates, agenda and registration: 16.01, 9-12, Berlin (register for free now); 17.01, 9-12, Hamburg (register for free now); 18.01, 9-12, Düsseldorf (register for free now); 19.01, 9-12 …

by Jakob Freund at November 24, 2016 09:45 AM

November 18, 2016

Sandy Kemsley: Intelligent Capture enables Digital Transformation at #ABBYYSummit16

I’ve been in beautiful San Diego for the past couple of days at the ABBYY Technology Summit, where I gave the keynote yesterday on why intelligent capture (including recognition technologies and...

[Content summary only, click through for full article and links]

by sandy at November 18, 2016 03:04 PM

November 11, 2016

Drools & JBPM: Red Hat BRMS and BPMS Roadmap Presentation (Nov 22nd, London)

Original link: http://www.c2b2.co.uk/red_hat_brms_and_bpms_roadmap_presentation

Featuring Drools, jBPM, OptaPlanner, DashBuilder, UberFire and Errai
For our second JBUG this November we're delighted to welcome back Red Hat Platform Architect Mark Proctor, who will be part of a panel of speakers presenting roadmap talks on each component technology.
We're fortunate to have so many project leads in one room at the same time, and it's a fantastic opportunity to come along and ask questions about the future plans for BRMS and BPMS.
The talk will look at how the 7 series is shifting gears, presenting a vision for low-code application development in the cloud - with a much stronger focus on quality and maturity over previous releases.
Key topics will include:
  • The new Rich Client Platform
  • The new BPMN2 Designer
  • New Case Management and Modelling
  • Improved Advanced Decision Tables and new Decision Model and Notation (DMN) support
  • Improved Forms and Page building
  • Fully integrated DashBuilder reporting
  • New OptaPlanner features & performance improvements
There will be opportunities for questions and the chance to network with the team over a beer and slice of pizza.
Registration
Attendees must register at the Skills Matter website prior to the meet-up. Please register only if you intend to come along. Follow this link to register: https://skillsmatter.com/meetups/8489-jboss-november-meetup.
Agenda
18:30 – 18:45     Meet up at Skills Matter with a beer at the bar
18:45 – 19:45     Part One
19:45 – 20:00     Refreshment break
20:00 – 20:30     Part Two
20:30                    Pizza, beer and networking
Speakers
Mark Proctor
Mark is a Red Hat Platform Architect and co-creator of the Drools project - the leading Java Open Source rules system. In 2005 Mark joined JBoss as lead of the Drools project. In 2006, when Red Hat acquired JBoss, Mark’s role evolved into his current position as platform architect for the Red Hat JBoss BRMS (Business Rules Management System) and BPMS (Business Process Management System) platforms - which incorporate the Drools and jBPM projects.
Kris Verlaenen
Kris is the JBoss BPM project lead, and is interested in pretty much everything related to business process management. He is particularly fascinated by healthcare - an area that has already demonstrated the need for flexible business processes.
Geoffrey De Smet
Geoffrey is the founder and project lead of OptaPlanner (http://www.optaplanner.org), the leading open source constraint satisfaction solver in Java. He started coding Java in 1999, regularly participates in academic competitions, and enjoys assisting developers in optimizing challenging planning problems of real-world enterprises. He is also a contributor to a variety of other open source projects.
Mauricio Salatino
Mauricio Salatino is a Drools/jBPM Senior Software Engineer in Red Hat, and author of the jBPM5 and jBPM Developer Guide, and the Drools 6 Developer Guide. His main task right now is to develop the next generation cloud capability for the BRMS and BPMS platforms - which includes the Drools and jBPM technologies.
Max Barkley
Max is a Software Engineer at Red Hat and the Errai project lead. Joining Red Hat as an intern in 2013, he took on his current role after graduating H.B.Sc. Mathematics from the University of Toronto in 2015.

by Mark Proctor (noreply@blogger.com) at November 11, 2016 02:17 AM

November 10, 2016

Thomas Allweyer: Paper for download: Now the robots are coming to automate the processes

In the past, the topic of process automation was inseparably linked to workflow or business process management systems (BPMS). More recently, however, a new approach has been attracting attention: robotic process automation (RPA). Instead of extensive automation projects, software robots are installed that simply use the existing user interfaces and therefore require no deeper integration. Case studies report immense savings, even compared with similar integration projects based on conventional BPM systems. Reason enough to take a closer look at RPA.

My paper "Robotic Process Automation – Neue Perspektiven für die Prozessautomatisierung" examines the RPA approach. It explains the typical characteristics of RPA systems, outlines possible areas of application, works out the potential benefits, and distinguishes RPA from other kinds of systems. One important point is the expected impact on employees and jobs. Finally, it gives an overall assessment and discusses possible further developments.

Download: Robotic Process Automation – Neue Perspektiven für die Prozessautomatisierung.

by Thomas Allweyer at November 10, 2016 11:43 AM

October 31, 2016

Drools & JBPM: Drools 7 to support DMN (Decision Model and Notation)

The Decision Model and Notation (DMN) specification is a relatively new standard by OMG (Object Management Group) that aims to do for business rules and business decisions what BPMN (its sibling specification) did for business processes: standardize the notation and execution semantics to enable both its use by business users and the interchange of models between tools from different vendors.

The Drools team has been actively following the specification and the direction it is taking. The team believes that, in accordance with its long time commitment to open standards, it is now time to support the specification and provide a compliant implementation for the benefit of its users.

The specification defines among other things:


  1. an expression language called FEEL used to express constraints and decisions
  2. a graphical language to model decision requirements
  3. a metamodel and runtime semantics for decision models
  4. an XML-based interchange format for decision models


As part of the investigation, the Drools team implemented a PoC that is now public and available here. The PoC already covers:


  • a complete, compliance level 3, FEEL language implementation (a short usage sketch follows this list).
  • complete support for the XML-based interchange format for marshalling and unmarshalling.
  • A partial implementation of the metamodel and runtime semantics 
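
To make the FEEL item above concrete, here is a minimal sketch of evaluating a standalone FEEL expression. The class and method names follow the kie-dmn-feel API as it later shipped with Drools 7, so treat them as an approximation of what the PoC exposes.

import java.util.HashMap;
import java.util.Map;

import org.kie.dmn.feel.FEEL;

public class FeelQuickCheck {
    public static void main(String[] args) {
        FEEL feel = FEEL.newInstance();

        // Input variables referenced by name inside the FEEL expression.
        Map<String, Object> inputs = new HashMap<>();
        inputs.put("age", 20);

        // FEEL supports conditionals, ranges, lists and more at compliance level 3.
        Object result = feel.evaluate("if age >= 18 then \"adult\" else \"minor\"", inputs);
        System.out.println(result); // prints: adult
    }
}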

We expect to have a complete runtime implementation released with Drools 7.0 (expected for Q1/2017).

On a related note, this is also a great opportunity for community involvement. This being a standard implementation, and relatively isolated from other existing components, it is the perfect chance for any community member that wishes to get involved with Drools and open source development to get his/her hands dirty and help bring this specification to life. Contact me on the Drools mailing list or on IRC if you would like to help.

We will publish several blog posts on this subject over the next few weeks, with both general explanations of the specification and details of our plans and implementation. Below you can find a quick Q&A. Feel free to ask any additional questions you might have about this subject on the mailing list.

Happy Drooling!

Questions & Answers


1. What DMN version and what compliance level will Drools support?

Drools is implementing DMN version 1.1 support at compliance level 3.

2. Is DMN support integrated with the Drools platform?

Yes, the DMN implementation leverages the whole Drools platform (including, among other things, the deployment model, infrastructure and tooling). DMN models are a first class citizen in the platform and an additional asset that can be included in kjars. DMN models will be supported in the kie-server and decision services exposed via the usual kie-server interfaces.

3. Is Drools DMN integrated with jBPM BPMN?

At the moment of this announcement, the integration is not implemented yet, but we expect it will be fully functional by the time Drools and jBPM 7.0 release (Q1 2017).

4. Will FEEL be a supported dialect for DRL rules? 

At the moment this is not clear and requires additional research. While FEEL works well as part of the XML-based interchange format, its syntax (that supports spaces and special characters as part of identifiers) is ambiguous and cannot be easily embedded into another language like DRL. We will discuss this topic further in the upcoming months.

by Edson Tirelli (noreply@blogger.com) at October 31, 2016 07:55 PM

BPM-Guide.de: BPM Day at WJAX

Next week it is that time again: the WJAX in Munich opens its doors. On Wednesday, November 9, there will again be a BPM Day with an exciting program: first, Kai Jamella will talk about BPM and microservices. Then I will give an introduction to workflow with BPMN and case management with CMMN, and devote a separate interactive talk to DMN. This will be followed by experience reports from Wolfgang Strunk (Sixt Leasing) as well as Ringo Roidl and David Ibl (both Lebensversicherung von 1871 a. G. München (LV 1871)). See you there!

by Bernd Rücker at October 31, 2016 07:53 AM

October 28, 2016

Sandy Kemsley: Keynoting at @ABBYY_USA Technology Summit

I’ve been in the BPM field since long before it was called BPM, starting with imaging and workflow projects back in the early 1990s. Although my main focus is on process now (hence the name of my...

[Content summary only, click through for full article and links]

by sandy at October 28, 2016 05:12 PM