Planet BPM

November 10, 2017

Drools & JBPM: Building Business Applications with DMN and BPMN

A couple of weeks ago our own Matteo Mortari delivered a joint presentation and live demo with Denis Gagné from Trisotech at the BPM.com virtual event.

During the presentation, Matteo live-demoed a BPMN process and a couple of DMN decision models created using the Trisotech tooling and exported to Red Hat BPM Suite for seamless execution.

Please note that no glue code was necessary for this demo. The BPMN process and the DMN models are natively executed in the platform; no Java knowledge is needed.

Enough talking, hit play to watch the presentation... :)


by Edson Tirelli (noreply@blogger.com) at November 10, 2017 06:20 PM

October 27, 2017

Sandy Kemsley: Machine learning in ABBYY FlexiCapture

Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 08:47 PM

Sandy Kemsley: Capture microservices for BPO with iCapt and ABBYY

Claudio Chaves Jr. of iCapt presented a session at ABBYY Technology Summit on how business process outsourcing (BPO) operations are improving efficiencies through service reusability. iCapt is a...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 06:07 PM

Sandy Kemsley: Pairing @UiPath and ABBYY for image capture within RPA

Andrew Rayner of UiPath presented at the ABBYY Technology Summit on robotic process automation powered by ABBYY’s FineReader Engine (FRE). He started with a basic definition of RPA —...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 04:51 PM

Sandy Kemsley: ABBYY partnerships in ECM, BPM, RPA and ERP

It’s the first session of the last morning of the ABBYY Technology Summit 2017, and the crowd is a bit sparse — a lot of people must have had fun at the evening event last night —...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 04:15 PM

Sandy Kemsley: ABBYY mobile real-time recognition

Dimitry Chubanov and Derek Gerber presented at the ABBYY Technology Summit on ABBYY’s mobile real-time recognition (RTR), which allows for recognition directly on a mobile device, rather than...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 12:11 AM

October 26, 2017

Sandy Kemsley: ABBYY Robotic Information Capture applies machine learning to capture

Back in the SDK track at ABBYY Technology Summit, I attended a session on “robotic information capture” with FlexiCapture Engine 12, with lead product manager Andrew Zyuzin and director...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 10:20 PM

Sandy Kemsley: ABBYY Recognition Server 5.0 update

I’ve switched over to the FlexiCapture technical track at the ABBYY Technology Summit for a preview of the new version of Recognition Server to be released in the first half of 2018. Paula...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 09:27 PM

Sandy Kemsley: ABBYY SDK update and FineReader Engine deep dive

I attended two back-to-back sessions from the SDK track in the first round of breakouts at the 2017 ABBYY Technology Summit. All of the products covered in these sessions are developer tools for...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 08:05 PM

Sandy Kemsley: The collision of capture, content and analytics

Martyn Christian of UNDRSTND Group, who I worked with back in FileNet in 2000-1, gave a keynote at ABBYY Technology Summit 2017 on the evolution and ultimate collision of capture, content and...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 05:39 PM

Sandy Kemsley: ABBYY corporate vision and strategy

We have a pretty full agenda for the next two days of the 2017 ABBYY Technology Summit, and we started off with an address from Ulf Persson, ABBYY’s relatively new worldwide CEO (although he is...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 04:10 PM

Sandy Kemsley: ABBYY analyst briefing

I’m in San Diego for a quick visit to the ABBYY Technology Summit. I’m not speaking this year (I keynoted last year), but wanted to take a look at some of the advances that they’re...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 12:15 AM

October 25, 2017

Sandy Kemsley: Low code and case management discussion with @TIBCO

I’m speaking on a webinar sponsored by TIBCO on November 9th, along with Roger King (TIBCO’s senior director of product management and strategy, and Austin Powers impressionist extraordinaire) and...

[Content summary only, click through for full article and links]

by sandy at October 25, 2017 04:11 PM

October 24, 2017

Sandy Kemsley: Citizen development with @FlowForma and @JohnRRymer

I attended a webinar today sponsored by FlowForma and featuring John Rymer of Forrester talking about low-code platforms and citizen developers. Rymer made a distinction between three classes of...

[Content summary only, click through for full article and links]

by sandy at October 24, 2017 04:31 PM

October 19, 2017

Sandy Kemsley: Financial decisions in DMN with @JanPurchase

Trisotech and their partner Lux Magi held a webinar today on the role of decision modeling and management in financial services firms. Jan Purchase of Lux Magi, co-author (with James Taylor) of...

[Content summary only, click through for full article and links]

by sandy at October 19, 2017 05:10 PM

October 12, 2017

5 Pillars of a Successful Java Web Application

Last week, Alex Porcelli and I had the opportunity to present two talks related to our work at JavaOne San Francisco 2017: “5 Pillars of a Successful Java Web Application” and “The Hidden Secret of Java Open Source Projects.”

It was great to share our cumulative experience over the years building the workbench and the web tooling for the Drools and jBPM platform and both talks had great attendance (250+ people in the room).


In this series of posts, we’ll detail our “5 Pillars of a Successful Java Web Application”, trying to give you an overview of our research and also a taste of participating in a great event like JavaOne.
There are a lot of challenges related to building and architecting a web application, especially if you want to keep your codebase updated with modern techniques without throwing away a lot of your code every two years in favor of the latest trendy JS framework.
In our team we are able to successfully keep a 7+ year old Java application up-to-date, combining modern techniques with a legacy codebase of more than 1 million LOC, with an agile, sustainable, and evolutionary web approach.
More than just choosing and applying any web framework as the foundation of our web application, we based our web application architecture on 5 architectural pillars that proved crucial for our platform’s success. Let's talk about them:

1st Pillar: Large Scale Applications

The first pillar is that every web application architecture should be concerned about the potential of becoming a long-lived and mission-critical application, or in other words, a large-scale application. Even if your web application is not as big as ours (1M+ lines of web code, 150 sub-projects, 7+ years old), you should be concerned about the possibility that your small web app will become a big and important codebase for your business. What if your startup becomes an overnight success? What if your enterprise application needs to integrate with several external systems?
Every web application should be built as a large-scale application because it is part of a distributed system and it is hard to anticipate what will happen to your application and company in two to five years.
And for us, a critical tool for building these kinds of distributed and large-scale applications throughout the years has been static typing.

Static Typing

The debate of static vs. dynamic typing is very controversial. People who advocate in favor of dynamic typing usually argue that it makes the developer's job easier. This is true for certain problems.
However, static typing and a strong type system, among other advantages, simplify identifying errors that can generate failures in production and, especially for large-scale systems, make refactoring more effective.
Every application demands constant refactoring and cleaning. It’s a natural need. For large-scale ones, with codebases spread across multiple modules/projects, this task is even more complex. The confidence when refactoring is related to two factors: test coverage and the tooling that only a static type system is able to provide.
For instance, we need a static type system to find all usages of a method, to extract classes, and most importantly to figure out at compile time whether we accidentally broke something.
But we are in web development and JavaScript is the language of the web. How can we have static typing in order to refactor effectively in the browser?

Using a transpiler

A transpiler is a type of compiler that takes the source code of a program written in one programming language as its input and produces equivalent source code in another programming language.
This is a well-known Computer Science problem and there are a lot of transpilers that output JavaScript. In a sense, JavaScript is the assembly of the web: the common ground across all the web ecosystems. We, as engineers, need to figure out what is the best approach to deal with JavaScript’s dynamic nature.
A Java transpiler, for instance, takes the Java code and transpiles it to JavaScript at compile time. So we have all the advantages of a statically-typed language, and its tooling, targeting the browser.
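
As a rough illustration of the idea (the class below and the emitted output shown in the comment are invented for this example; real GWT/J2CL output is optimized and renamed):

    // Java input to the transpiler
    public class Tax {
        public static double total(double net, double rate) {
            return net * (1 + rate);
        }
    }

    // Conceptually, the transpiler emits equivalent JavaScript, something like:
    //   function total(net, rate) { return net * (1 + rate); }
    // The behavior is defined by the statically-typed Java source, and the compiler
    // can check every call site of total() before any code reaches the browser.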

Java-to-JavaScript Transpilation

The transpiler that we use in our architecture is GWT. This choice is a bit controversial, especially because the GWT framework was launched in 2006, when the web was a very different place.
But keep in mind that every piece of technology has its own good parts and bad parts. For sure there are some bad parts in GWT (like the Swing-style widgets and the multiple permutations per browser/language), but for our architecture what we are trying to achieve is static typing on the web, and for this purpose the GWT compiler is amazing.
Our group is part of the GWT steering committee, and the next generation of GWT is all about JUST these good parts: basically removing or decoupling the early-2000s legacy and keeping only the good parts. In our opinion the best parts of GWT are:
  • Java to JavaScript transpiler: extreme JavaScript performance due to compile-time optimizations and static typing on the web;
  • java.* emulation: excellent emulation of the main Java libraries, providing runtime behavior/consistency;
  • JS Interop: almost transparent two-way interoperability between Java and JavaScript. This is a key aspect of the next generation of GWT and the Drools/jBPM platform: embrace and interoperate (both ways) with the JS ecosystem (see the sketch just below).
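
As a small, hypothetical sketch of what this two-way interop looks like with the jsinterop annotations (the Greeter and Json classes here are invented for illustration and are not part of the Drools/jBPM tooling):

    import jsinterop.annotations.JsPackage;
    import jsinterop.annotations.JsType;

    // Exported to JavaScript (when JsInterop exports are enabled):
    // callable from JS as new Greeter().greet("web").
    @JsType(namespace = JsPackage.GLOBAL, name = "Greeter")
    public class Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // The other direction: a Java view of the browser's built-in JSON object,
    // so JSON.stringify can be called from statically-typed Java code.
    @JsType(isNative = true, namespace = JsPackage.GLOBAL, name = "JSON")
    class Json {
        public static native String stringify(Object value);
    }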

Google is currently working on a new transpiler called J2CL (short for Java-to-Closure, using the Google Closure Compiler) that will be the compiler used in GWT 3, the next major GWT release. The J2CL transpiler has a different architecture and scope, allowing it to overcome many of the disadvantages of the previous GWT 2 compiler.

Whereas the GWT 2 compiler must load the entire AST of all sources (including dependencies), J2CL is not a monolithic compiler. Much like javac, it is able to individually compile source files, using class files to resolve external dependencies, leaving greater potential for incremental compilation.
These three good parts are great, and in our opinion you should really consider using GWT as a transpiler in your web applications. But keep in mind that the most important point here is that GWT is just our implementation of the first pillar. You can consider using other transpilers like TypeScript, Dart, Elm, ScalaJS, PureScript, or TeaVM.
The key point is that every web application should be handled as a large-scale application, and every large-scale application should be concerned about effective refactoring. The best way to achieve this is using statically-typed languages.
This is the first of three posts about our 5 pillars of successful web applications. Stay tuned for the next ones.

[I would like to thank Max Barkley and Alexandre Porcelli for kindly reviewing this article before publication, contributing to the final text, and providing great feedback.]


by Eder Ignatowicz (noreply@blogger.com) at October 12, 2017 09:13 PM

October 09, 2017

Sandy Kemsley: International BPM conference 2018 headed down under

The international BPM conference for academics and researchers is headed back to Australia next year, September 9-14 in Sydney, hosted by the University of New South Wales. I’ve attended the...

[Content summary only, click through for full article and links]

by sandy at October 09, 2017 07:08 PM

October 04, 2017

Sandy Kemsley: Citrix Productivity Panel – the future of work

I had a random request from Citrix to come out to a panel event that they were holding in downtown Toronto — not sure what media lists I’m on, but fun to check out to events I wouldn’t normally...

[Content summary only, click through for full article and links]

by sandy at October 04, 2017 10:31 PM

September 26, 2017

Sandy Kemsley: ABBYY Technology Summit 2017

Welcome back after a nice long summer break! Last year, I gave the keynote at ABBYY’s Technology Summit, and I’m headed back to San Diego this year to just do the analyst stuff: attend briefings and...

[Content summary only, click through for full article and links]

by sandy at September 26, 2017 01:12 PM

September 20, 2017

September 11, 2017

Keith Swenson: Why Does Digital Transformation Need Case Management?

A platform for digital transformation brings a number of different capabilities together: processes, agents, integration, analytics, decisions, and — perhaps most important — case management.  Why case management?  What does that really bring to the table and why is it needed?

Discussion

What is the big deal about case management?  People are often underwhelmed.  In many ways, case management is simply a “file folder on steroids.”  Essentially it is just a big folder that you can throw things into.  Traditional case management was centered on exactly that: a case folder, and really that is the only physical manifestation.  It is true that the folder serves as a collecting point for documents and data of any kind — but there is a little more to it.

I already have shared folders, so why do I need anything more?  The biggest difference between case management and shared folders is how you gain access to the folder.

My shared file system already has access control.  Right, but it is a question of granularity.  If access can be controlled only for the whole folder, every participant has all-or-nothing access, and that is too much.  At the other end of the spectrum, if every file can be assigned to any person, it gets too tedious: adding a person to a large case with 50 files can take significant effort, costing more than 10 minutes of work.  People may be joining and leaving the case on a daily basis, and going through all the documents for every person might leave you with a full-time job managing the access rights.  A case manager is too busy to do that.  A better approach has to be found that blends the access control together with the other things that a case manager is doing.

For example, let’s say that you have a task, and the task is associated with 10 documents in the folder.  Changing the assignment of the task from one person to another should at the same time (and without any additional trouble) change the rights to access the associated documents from one person to the other.  It is reasonable to ask a case manager to assign a task to someone.  It is unreasonable to expect the case manager to go and manually adjust the access privileges for each of the 10 documents.  It is not only tedious, it is error prone.  Forget to give access to a critical document, and the worker can’t do the job.  Give access to the wrong document to someone with no need to know, and that might constitute a confidentiality violation.  This is one example of how case management blends task assignment and access control together.  Another example is role-based access.
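
As a purely illustrative sketch (not any particular product’s API), tying document access to task assignment might look something like this, where reassigning the task moves the document rights in the same step:

    import java.util.*;

    // Illustrative only: a task that carries access to its associated documents.
    class CaseTask {
        private final Set<String> documentIds;
        private final Map<String, Set<String>> documentAcl; // document id -> users with access
        private String assignee;

        CaseTask(Set<String> documentIds, Map<String, Set<String>> documentAcl) {
            this.documentIds = documentIds;
            this.documentAcl = documentAcl;
        }

        // Reassigning the task revokes the old assignee's access and grants the new one's.
        void assignTo(String user) {
            if (assignee != null) {
                documentIds.forEach(doc ->
                        documentAcl.getOrDefault(doc, new HashSet<>()).remove(assignee));
            }
            assignee = user;
            documentIds.forEach(doc ->
                    documentAcl.computeIfAbsent(doc, d -> new HashSet<>()).add(user));
        }
    }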

My shared file system already has role-based access control.  Many document management systems offer global roles that you can set up: a group for all managers, a group for all writers, a group for all editors.  You can assign privileges to such a group, and simply adding a person to the group gives them access to all the resources of the group.

This is a complete misunderstanding of how cases have to work.  Each case needs its own groups of people to play particular roles just for that case.  For example, a case dedicated to closing a major deal with a customer will have a salesperson, a person to develop and give a demo, maybe a market analyst.  But you can’t use the global groups for salespeople, demo developers, and market analysts.  This case has a particular salesperson, not just anyone in the salesperson pool.  That particular salesperson will have special access to the case that no other salesperson should have.  A global role simply can’t fit the need.

I could make individual roles for every case even in the global system.  Right, but creating and modifying global roles is often restricted to a person with global administration privileges.  The case manager needs the rights to create and change the roles for that case, and for no other case.  This right to manage roles needs to come automatically from being assigned to the case manager role for that case.  Case management adds mechanisms above the basic access control to avoid the tedium of having to manage thousands of individual access control settings, as the sketch below illustrates.
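
A minimal, purely illustrative sketch of case-scoped roles (again, not any product’s actual API): each case keeps its own role membership, and only that case’s manager may change it.

    import java.util.*;

    // Illustrative only: roles scoped to a single case rather than global groups.
    class Case {
        private final String caseManager;
        private final Map<String, Set<String>> roles = new HashMap<>(); // role name -> members

        Case(String caseManager) {
            this.caseManager = caseManager;
        }

        // Only the case manager of this particular case may manage its roles.
        void assignRole(String requestor, String role, String user) {
            if (!requestor.equals(caseManager)) {
                throw new SecurityException("Only the case manager can manage roles for this case");
            }
            roles.computeIfAbsent(role, r -> new HashSet<>()).add(user);
        }

        boolean hasRole(String user, String role) {
            return roles.getOrDefault(role, Collections.emptySet()).contains(user);
        }
    }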

So that is all it is, powerful access control?  There is more to it.  It must also have the ability to create tasks of any kind and assign them to people at any time.  This means that the case management system needs convenient ways to find all the tasks assigned to a particular person, and to (1) produce a work list of all currently assigned tasks, and (2) send email notifications of either the entire list or the items that are just about to reach a deadline.  These are BPM-ish capabilities, but there is no need for a process diagram.  For routine, pre-defined processes, just use a regular BPM product.  Case management is really more about completely ad-hoc tasks assigned as desired.

So there is no pattern to the processes at all?  Sorry, I didn’t mean to imply that.  There are patterns.  Each case manager develops their own style for getting a case done, and they often reuse those patterns.  The list of common tasks is usually copied from case to case in order to be reused.  At the same time, the patterns are never exactly the same.  And they change after the case is started.

Since tasks are assigned manually, there is a strong need for a capability to “see who is available” which takes into account skills, workload, vacation schedule, and other criteria to help locate the right person for the job.

There are also well defined routine processes to be called upon as well, and you use BPM for that.  The tighter the BPM is integrated to the case management, the easier it will be for case managers to complete the work.

Summary

The discussion above is not exhaustive; to summarize some of the capabilities that case management brings to the table:

  • It is a dumping ground for all the work that can not be known in advance.  A kind of safety valve to catch work which does not fall neatly into the pre-defined buckets for process management.
  • It collects any kind of information and documents, and makes them available to people working on the case.
  • It offers powerful access control that is integrated into the case logical structure so that it is easier to use than a simple document-based access control system.
  • It offers tasking so that assignments can be made and tracked to completion.
  • There are often portal features that can reach out to external people to register themselves and to play a role in the case.
  • It has calendars and vacation schedules that give workers an awareness of who is available and who might be best to do a job.
  • Conversation about the case is simplified by connections to discussion topics, commenting capability, chat capability, unified communications, email, social media, etc.

Knowledge workers need these capabilities because their work is inherently unpredictable.  A digital transformation platform brings all the tools together to make solutions that transform the business.  Knowledge workers constitute about 50% of the workforce, and that percentage is growing.  Any solution destined to transform the organization absolutely must have some case management capabilities.


by kswenson at September 11, 2017 05:49 PM

September 07, 2017

Keith Swenson: Business Driven Software

I liked the recent post from Silvie Spreeuwenberg where she asks “When to combine decisions, case management and artificial intelligence?”

She correctly points out that “pre-defined workflows” are useful only in well-defined, scripted situations, and that more and more knowledge workers need to break out of these constraints to get things done.  She points to Adaptive Case Management.

I would position it slightly differently.  The big push today is “Digital Transformation”, and it is exactly what she is talking about: you are combining aspects of traditional process management with unstructured case management, separating out decision management, and adding artificial intelligence.

I would go further and say that a Digital Transformation Platform (DXP) would need all that plus strong analytics, background processing agents, and robotic process automation.  These become the basic ingredients that are combined into a specific knowledge worker solution.  I think Spreeuwenberg has rightly expressed the essence of an intuitive platform of capabilities to meet the needs of today’s business.

She closes by saying she will be talking at the Institute of Risk Management — once again the domain of knowledge workers: risk management.


by kswenson at September 07, 2017 10:49 PM

September 01, 2017

Keith Swenson: Update on DMN TCK

Last year we started the Decision Model & Notation Technical Compatibility Kit (DMN-TCK) working group.  A lot has happened since the last time I wrote about this, so let me give you an update.

Summary Points

  • We have running code: The tests are actual samples of DMN models, and the input/output values force a vendor to actually run them in order to demonstrate compliance.  This was the main goal, and we have achieved it!
  • Beautiful results web site:  Vendors who participate are highlighted in an attractive site that lists all the tests that have passed.  It includes detail on all the tests that a vendor skips and why they skip them.  Thanks mainly to Edson Tirelli at Red Hat.
  • Six vendors included:  The updated results site, published today, has six vendors who are able to run the tests to demonstrate actual running compliance:  Actico, Camunda, Open Rules, Oracle, Red Hat, Trisotech.
  • Broad test set: The current 52 tests provide broad coverage of DMN capabilities.  This will jump to 101 tests by mid-September.  Broad but not deep at this time: now that the framework is set up, it is simply a matter of filling in additional tests.
  • Expanding test set: Participating vendors are expanding the set of tests by drawing upon their existing test suites, converting them into the TCK format, and including them in the published set.  We are ready to enter a period of rapid test expansion.
  • All freely available: It is all open source and available on GitHub.

How We Got Here

It was April 2016 that DMN emerged onto the stage of the BPMNext conference as an important topic.  I expressed skepticism that any standard could survive without actual running code that demonstrated correct behavior.  Written specifications are simply not detailed enough to describe any software, particularly one that has an expression language as part of the deal.  Someone challenged me to do something about it.

We started meeting weekly in summer of 2016, and have done so for a complete year.  There has been steady participation from Red Hat, Camunda, Open Rules, Trisotech, Bruce Silver and me, and more recently Oracle and Actico.

I insisted that the models be the standard DMN XML-based format.  The TCK does not define anything about the DMN standard, but instead we simply define a way to test that an implementation runs according to the standard.   We did define a simple XML test case structure that has named input values, and named output values, using standard XML datatype syntax.  The test case consists purely of XML files which can be read and manipulated on any platform in any language.

We also developed a runner, a small piece of Java code which will read the test cases, make calls to an implementing engine, and test whether the results match.  It is not required to use this runner, because the Java interface to the engine is not part of the standard; however, many vendors have found it a convenient way to get started on their own specific runner.
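
To give a feel for what such a runner does, here is a simplified, hypothetical sketch (the real runner, test-case schema, and engine interfaces live in the DMN TCK GitHub repository and differ in detail):

    import java.util.Map;

    // Hypothetical engine interface for illustration only; the Java interface to a
    // real engine is vendor-specific and is not defined by the DMN standard.
    interface DmnEngine {
        Map<String, Object> evaluate(String modelFile, Map<String, Object> namedInputs);
    }

    class TestCaseRunner {
        // A test passes when the engine's outputs match the expected named values.
        static boolean run(DmnEngine engine, String modelFile,
                           Map<String, Object> namedInputs, Map<String, Object> expectedOutputs) {
            Map<String, Object> actual = engine.evaluate(modelFile, namedInputs);
            return expectedOutputs.equals(actual);
        }
    }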

As we worked on the tests, we uncovered dozens, possibly hundreds, of places where the DMN spec was ambiguous or unclear.  One participant would implement a set of tests, and it was — without exception — eye opening when the second participant tried to run them.  This is the way that implementing a new language (FEEL) naturally goes.  The spec simply cannot cover 100% of the edge cases, and the implementation of the tests forced this debate into the public.  Working together with the RTF we were able to come to a common understanding of the correct behavior of the evaluation engine.  Working through these cases was probably the most valuable aspect of the TCK work.

A vendor runs the tests and submits a simple CSV file with all the results back to the TCK.  These are checked into GitHub for all to see, and that is the basis for the data presented on the web site.   We open the repository for new tests and changes in tests for the first half of every month.  The second half of the month is then for vendors that wish to remain current, to run all the new tests, and produce new results.  The updated web site will then be generated on the first of the next month.  Today, September 1, we have all the new results for all the tests that were available before mid August.  This way vendors are assured the time they need to keep their results current.

The current status is that we have a small set of test cases that provide broad but shallow coverage of DMN capabilities.  A vendor who can pass the tests will be demonstrating a fairly complete implementation of all the DMN capabilities, but there are only a couple of tests in each functional area.  The next step will be to drive deeper, and to design tests that verify that each functional area works correctly in a larger number of special situations.  Some of the participating vendors already have such tests available in a non-TCK format.  Our immediate goal is then to encourage participating vendors to convert those tests and contribute them to the TCK repository.  (And I like to remind vendors that it is to their advantage to do so, because adding tests that you already pass makes the test suite stronger, and forces other vendors to comply with functionality that you already have.)

What this means to Consumers

You now have a reliable source to validate a vendor claim that they have implemented the DMN standard.  On the web site, you can drill down to each functional category, and even to the individual tests to see what a vendor has implemented.

Some vendors skip certain tests because they think that particular functionality is not important.  You can drill down to those particular tests, and see why the vendor has taken this stance, and determine whether you agree.

Then there are vendors who claim to implement DMN, but are not listed on the site.  Why not?  Open source: All of the files are made freely available at GitHub in standard, readily-accessible formats.   Ask questions.  Why would a DMN implementation avoid demonstrating conformance to the standard when it is freely available?  Are you comfortable making the investment in time to use a particular product, when it can not demonstrate publicly this level of conformance to the spec?

What this means to Vendors

There are certainly a number of vendors who are just learning of this effort now.  It is not too late to join.  The last participant to join had the tests running in under two weeks.  We welcome any and all new participants who want to demonstrate their conformance to the DMN spec.

To join, you simply need to read all the materials that are publicly available on the web site, send a note to the group using GitHub, plan to attend weekly meetings, and submit your results for inclusion in the site.  The effort level could be anywhere from a couple hours up to a max of 1 day per week.

The result of joining the TCK is that you will know that your implementation runs in exactly the same way as the other implementations.  Your product gains credibility, and customers gain confidence in it.  You will also be making the DMN market stronger as you reduce the risk that consumers face in adopting DMN as a way to model their decisions.

Acknowledgements

I have had the honor of running the meetings, but I have done very little of the real work.  Credit for actually getting things done goes largely to Edson Tirelli from Red Hat, and Bruce Silver, and a huge amount of credit is due to Falko Menge from Camunda, Jacob Feldman from Open Rules, Denis Gagne from Trisotech, Volker Grossmann and Daniel Thanner from Actico, Gary Hallmark from Oracle, Octavian Patrascoiu from Goldman Sachs, Tim Stephenson for a lot of the early work, Mihail Popov from MITRE, and I am sure many other people from the various organizations who have helped actually get it working even though I don’t know them from the meetings.    Thanks everyone, and great work!


by kswenson at September 01, 2017 06:07 PM

August 29, 2017

Keith Swenson: Blogging Platforms

Today I am pretty frustrated by WordPress, so I am going to vent a bit.  Ten years ago I picked it as the platform to start my first blog on, and here you have it: I am still here.  Yet I have seen so many problems in recent days that I will be looking for an alternative platform.

What Happened?

I spent a lot of time trying to set up a blog for a friend who has a book coming out and needed a place to talk about it.  I said “blogs are easy,” but that was a mistake.  Three days later, the blog is still not presentable.

Strange User Restrictions – Using my login, I created a blog for her using her full name as the name of the blog (e.g. jsmith).  Then I wanted to sign her up as a WordPress user with “jsmith” as her username.  You can’t do that.  Since there was a blog with that name, you are not allowed to register a user with that name.  The point is that the blog is her blog.  Her own blog is preventing her from having her username.  How silly is that?

Given that I created the blog, there is no way to then set the password on the user for that name, and since there is no email associated, there is no way to reset the password.

You can’t just register a user.  If you want to register a user, you have to create another blog!  It walks you through creation of a blog before you can specify a password for the user account.  We already had the blog created; I just needed a way for her to log in.  The only way we found to do that was to create yet another blog, until finally, with the username she didn’t want, she could set a password on that account.  Blogs and users are different things … it really does not have to be so hard.

You Can’t Move/Copy a Site – One of the impressive features WordPress claims is that you can always move your site.  I had never tried until now, and I can say it does not work.  I had previously set the blog up on a different blog address, so I wanted to move it.  Simply export and then import, right?  No.  You download a ZIP file, but it only has one file in it, an XML file.  There are none of the graphics, none of the media, and none of the settings.  Since it downloaded a ZIP file, at the import prompt I tried to upload the ZIP file.  This produces an arcane error message saying that a particular file is missing.  Strange.  I downloaded the ZIP file a few times.  Always the same result.  There are two different export commands, and they produce different output!

Finally I tried to upload the XML file alone.  I knew this had no chance of moving the pictures and media, but since there were none in the ZIP file anyway, I tried.  This avoided the error, and it acted like it was working.  Eventually, I got a mess.  It just added the pages to the pages that were already there.  Some of the old pages had special roles, like home and blog, so I couldn’t delete them in order to make way for the imported home and blog pages.  I have the same theme, but NOTHING looks the same.  None of the featured images were there.  No media files at all.  The sidebar (footer) text blocks were different.  I was horrified.  All this time I thought you could move a blog and not lose things.  This was eye opening.

Incomprehensible Manage Mode – I have been trying for months to find out how to get from the “new” admin mode back to the blog itself.  That is, you edit a page, and you want to see how the page looks.  It gives you a “preview” mode which causes a semblance of the page to appear on top of the admin mode, but that is not the same thing, and the links do not work the same way.  After hours of looking, I still cannot find any way to get “out” of admin mode.  You can “preview” the page, and then “launch” the page full screen.  That seems to do it, but it is a small pain.  Until now I have just edited the URL to get back to my blog.  In fact, I have taken to bookmarking the blog I am editing, and using the bookmark every few minutes to get out of admin mode.  It is ridiculous.

Visual Editor Damages Scripts – One of my blogs is about programming, so I have some programming samples.  If you accidentally open such a post in the “visual” editor, it strips out all the indentation and does other things to the code.  The problem is that you have no control over the editor until AFTER you click to edit.  It is a kind of Russian roulette.  If you click edit and the visual editor appears, and then you switch to the HTML editor, your post is already damaged.  What I have to do is click edit and see what mode it is in.  If visual, I switch to HTML.  Then I use the bookmark mentioned above to return to the blog, abandoning the edits.  Now I hit edit again and it comes back in the right HTML mode.  This is a real pain, since for some of my posts I would like to use the visual editor, and for others, because of the corruption, I must use the HTML editor.  I worry forever that I will get the visual editor on a post that has source code further down the page, and accidentally save it that way.

Backslashes Disappear – Besides ruining the indentation, at times it will strip out all the backslashes.  I got a comment today on a post from a couple of years ago saying that the code was wrong: missing backslashes.  Sure enough.  I have struggled with that post, but I am sure that when I left it the last time, all the backslashes were in place.

Old vs. New Admin Mode – Right now I am using the old admin mode to write this — thank god — though I don’t know how to reliably get to it.  The new admin mode is missing some features.  A few months ago I spent about an hour trying to find the setting to turn off some option that had somehow gotten turned on.  I finally contacted support, and they told me to find the “old” admin UI, where the setting could be manipulated.

Can’t Change Blogs Without Manually Typing the Address – This is the strangest thing.  If I am on one blog, I can go to the menu that switches blogs and choose another of my blogs, but there is no way to get back “out” of admin mode.  I end up editing the address line.  How hard would it be to give a simple list of my blogs and allow me to navigate there?  The new admin UI is a nightmare.  It didn’t use to be that bad!

Login / Logout Moves Your Location – If you are on a page which you would like to edit, but you are not logged in, I would expect to be able to log in and then click edit on the page.  No chance with WordPress.  When you are done logging in, you are in some completely different place!  You can’t use the browser back button to get back to where you were (which is reasonable, but I am trying to find a way around the predicament).  I then usually have to go search for the post.

Edit Does Not Return You to the Page – If you are on a page and click edit, when you are done editing you are not put back on the page you started on.  It looks like your page, but there is an extra bar at the top, and the links don’t work.

Managing Comments is Inscrutable – When reviewing and approving comments, I want a link that takes me to the page in question, so I can see the page and the comment.  I think there is a link that does this, but it is hard to find.  The main link takes you to the editor for that page.  Not what I want, and as mentioned above it is impossible to get from the editor to the page.  I often end up searching for the blog page using the search function.  Other links take you to the poster’s web site, which is not always what I want either.

Vapid Announcements – When I make a hyperlink from one of my blog posts to another of my own blog posts, why does it send me an email announcing that I have a new comment on those posts?  I know it makes a back-link, but for hyperlinked posts within a single blog the email announcement is not useful in any way.

Sloppy Tech – I looked at the XML file produced for the site, and they use CDATA sections to hold your blog posts.  Any use of CDATA is a hack, because it does not encode all possible character sequences, whereas regular XML encoding works perfectly.  I realize I am getting to the bottom of the barrel of complaints, but I want to be complete here.

What I want?

  • Keep it simple.
  • Let me navigate through my site like normal, but put a single edit button on each page that is easy to find and not in different places for different themes.
  • Then, when done editing, put me BACK on that page.
  • When I log in, leave me on the same page that I started the login from.
  • When I switch blogs, take me to the actual blog and not the admin for that blog.
  • Give me a simple way to exit the admin mode back to the actual blog.
  • And make a single admin mode that has all the functionality.
  • Don’t corrupt my pages by taking backslashes and indentation out.  Protect my content as if it was valuable.
  • Provide a complete export that includes all the media and theme settings as well.
  • Provide an import that reads the export and sets up the blog to be EXACTLY like the original that you exported.

Is that too much to ask for?

As yet, I don’t know of any better blogging platform.  But I am going to start considering  other options in earnest.

Postscript

PS. As a result of writing this post, I was forced to figure out how to reliably get to the “old” admin interface, which remains workable in a very predictable manner.  Maybe if I try hard, I can avoid using the “new” admin interface completely, and avoid all those quirky usability problems.

PPS. Now a new “View Site” button appears in the “new” admin mode to get back to the site, but this has the strange side effect of logging you out.  That is, you can see the page, but you are no longer logged in.  Strange.


by kswenson at August 29, 2017 06:49 AM

August 09, 2017

Drools & JBPM: Talking about Rule Engines at Software Engineering Radio

I had the pleasure of talking to Robert Blumen, at Software Engineering Radio, about Drools and Rule Engines in general.

http://www.se-radio.net/2017/08/se-radio-episode-299-edson-tirelli-on-rules-engines/

If you don't know this podcast, I highly recommend their previous episodes as well. Very informative, technically oriented podcast.

Hope you enjoy,
Edson

by Edson Tirelli (noreply@blogger.com) at August 09, 2017 01:08 AM

August 07, 2017

Keith Swenson: Still think you need BPEL?

Fourteen years ago, IBM and Microsoft announced plans to introduce a new language called Business Process Execution Language (BPEL) to much fanfare and controversy.  This post takes a retrospective look at BPEL, how things have progressed, and ponders the point of it all.

Origins

In 2002, BPM was a new term, and Web Services was a new concept.  The term BPM meant a lot of different things in that day, just as it still does today, but of the seven different kinds of BPM, the one that is relevant in this context is Process Driven Server Integration (PDSI).  Nobody actually had many real web services at that time, but it was clear that unifying such services with a standard protocol passing XML back and forth was a path to the future.  Having a way to integrate those web services was needed.  Both Microsoft and IBM had offerings in the integration space (BizTalk and FlowMark respectively).  Instead of battling against each other, they decided to join forces and propose an open standard language for such integration processes.

In April 2003 a proposal was made to OASIS to form a working group to define a language called BPEL4WS (BPEL for Web Services).  I attended the inaugural meeting for that group with about 40 other high tech professionals.  It was a rather noisy meeting, with people jockeying for position to control what was perceived to be the new lingua franca for business processes.  The conference calls were crazy, and we must credit the leaders with a lot of patience to stick with it and work through all the details.  The name was changed to WS-BPEL, and after a couple of years a spec was openly published as promised.

Hype

BPEL was originally proposed as an interchange format.  That is, one should be able to take a process defined in one product, and move it to another product, and still be executable.  It was to be the universal language for Process Driven Server Integration.

Both Microsoft and IBM were on board, as well as a whole host of wannabes.  A group called the Business Process Management Initiative dumped their similar programming language, BPML, in favor of BPEL, in a clear case of “if you can’t beat ’em, join ’em.”

It was designed from the beginning to be a “Turing-Complete Programming Language” which is a great goal for a programming language, but what does that have to do with business?  The problem with the hype is that it confused the subject of “server integration” with human business processes.  While management was concerned with how to make their businesses run better, they were being sold a programming language for server integration.

The hype existed after the spec was announced, but before it was finally published.  This happens with most proposed specs: claims that the proposal can do everything are hard to refute until the spec is finally published.  Only then can claims be accurately refuted.  For more than four years BPEL existed in this intermediate state where inflated expectations could thrive.

Who Needs It?

At the time, I could not see any need for a new programming language.  Analysts at Gartner and Forrester were strongly recommending that companies go with products that included BPEL.  I confronted them, asking “Why is this programming language important?”  And the candid answer was “We don’t know; we just know that a lot of major players are backing it, and that means it is going to be a winner.”  It was a case of widespread delusion.

My position at the time was clear: as a programming language it is fine, but it has nothing to do with business processes.  It was Derek Miers who introduced me to the phrase “BPEL does not have any B in it.”   The language had a concept of a “participant”, but a participant was defined to be a web service, something with a WSDL interface.

In 2007 I wrote an article called “BPEL: Who Needs It Anyway?” and it is still one of the most accessed articles on BPM.COM.  In that article I point out that translating a BPMN diagram into BPEL places a limitation on the kinds of diagrams that can be executed.  I also point out that directly interpreting the BPMN diagram, something that has become more popular in the meantime, does not have this limitation.

If what we need is a language for PDSI, then why not use Java or C#?  Both of those languages have proven portability, as well as millions of supporters.  When I asked those working on BPEL why they didn’t just make an extension to an existing language, the response was the incredible: “We need a language based on XML.”  Like you need a hole in the head.

Attempted Rescue

The process wonks knew that BPEL was inappropriate for human processes, but still wanting to join the party, there was a proposal for the cleverly named “BPEL 4 People” together with “WS-HumanTask.”  The idea is that since people are not web services, and since BPEL can only interact with web services, we can define a standardized web service that represents a real person and push tasks to it.  It is not a bad idea, and it incorporates some of the task delegation ideas from WF-XML, but it fails to meet the needs of a real human process system because it assumes that people are passive receptors of business tasks.

When a task is sent to a web service for handling, there is no way to “change your mind” and reallocate it to someone else.  BPEL, which is a programming language for PDSI, unsurprisingly does not include the idea of “changing your mind” about whom to send the task to.  Generally, when programming servers, a task sent to a server is completed, period.  There is no need to send “reminders” to a server.  There are many aspects of a human process which are simply not, and never should be, a part of BPEL.  Patching it up by representing people as standardized web services does not address the fundamental problem that people do not at any level interact in the same way that servers do.

Decline of BPEL

Over time the BPM community has learned this lesson.  The first version of the BPMN specification made the explicit assumption that you would want to translate to BPEL.  The latest version of BPMN throws that idea out completely and proposes a new serialization format instead of BPEL.

Microsoft pulled away from it as a core part of their engine as well, first proposing that BPEL would be an interchange format that they would translate to their internal format.  Oracle acquired Collaxa, an excellent implementation of BPEL, and they even produced extensions of BPEL that allowed for round-trip processing of BPMN diagrams using BPEL as the file format.  But Oracle now appears to be pulling away from the BPEL approach in favor of a higher-level direct interpretation of a BPMN-like diagram.

Later it became doubtful that processes expressed in BPEL are interchangeable at any level.  Of course, a simple process that sticks to the spec and only calls web services will work everywhere, but it seems that to accomplish anything useful every vendor adds extensions — calls to server-specific capabilities.  Those extensions are valid and useful, but they limit the ability to exchange processes between vendors.

Where Do We Go From Here?

To be clear, BPEL did not fail as a server programming language.  An engine that is internally based on BPEL for Process Driven Server Integration should be able to continue to do that task well.  To the credit of those who designed it for this purpose, they did an exemplary job.  As far as I know, BPEL engines run very reliably.

BPEL only failed as

  • a universal representation of a process for exchange between engines;
  • a representation of a business process that people are involved in.

BPMN is more commonly used as a representation of people-oriented processes for direct interpretation.  Yet portability of BPMN diagrams is still sketchy — and this has nothing to do with the serialization format; it has to do with the semantics being designed by a committee.  But that is a whole other discussion.

The business process holy grail still eludes the industry as we discover that organizations consist of interaction patterns that are much more complex than we previously realized.  No simple solution will ever be found for this inherently complex problem, but the search for some means to keep it under control goes on.  What I hope we learned from this is to be cautious about overblown claims based on simplified assumptions, and to take a more studied and careful approach to standards in the future.


by kswenson at August 07, 2017 10:25 AM

August 04, 2017

Keith Swenson: A Strange FEELing about Dates

The new expression language for the Decision Model and Notation standard is called the Friendly Enough Expression Language (FEEL).  Overall it is a credible offering, and one that is much needed in decision modeling, where no specific grammar has emerged as the standard.  But I found the handling of date and time values a bit odd.  I want to start a public discussion on this, so I felt the best place to start is this blog post, which can serve as a focal point for discussion references.

The Issue

A lot of decisions will center on date and time values.  Decisions about fees will depend on deadlines.  Those deadlines will be determined by the date and time of other actions.  You need to be able to do things like calculate whether the current transaction is before or after a date-time that was calculated from other date-time values.

FEEL includes a data type for date, for time (of day) and for date-time.  It offers certain math functions that can be performed between these types and other numbers.  It offers ways to compare the values.

Strange case 1: Would you be surprised that in FEEL you can define three date-time values, x1, x2, and x3 such that when you compare them all of the following are true?:

x1 > x2
x2 > x3
x3 > x1.

All of those expressions are true.  They are not the same date-time; they are all different points in time (a few hours apart in real time), but the “greater than” operator is defined in a way that dates cannot actually be sorted into a single order.

Strange Case 2: Would you be surprised that in FEEL you can define two date-time values, y1, and y2, such that all of the following are false?:

y1 > y2
y1 = y2
y1 < y2

That is right, y1 is neither greater than, equal to, nor less than y2.

What is Happening?

In short, the strangeness in handling these values comes from the way that time zones and GMT offsets are used.  Sometimes these offsets and time zones are significant, and sometimes not.  Sometimes the timezone is fixed to UTC.  Sometimes unspecified timezones come from the server locales, and other times from the value being compared to.

Date-time inequalities (greater-than and less-than) are handled in a different way than equals comparisons.  When comparing greater or less than, the epoch value is used (that is, the actual number of seconds from that instant in time since Jan 1, 1970, with the timezone considered in that calculation).  But when comparing two date-time values for equality, they are not equal unless they come from the exact same timezone.

It gets stranger with date-time values that omit the timezone.  If one of the date-time values is defined without a timezone, then the two values are compared as if they were in the same timezone.  This kind of date-time has a value that changes depending upon the timezone of the data value being compared to!

Date values, however, must be the date at midnight UTC.  Timestamps taken in the evening in California on Aug 13 will be greater than a date value of Aug 14!  The spec is actually ambiguous.  At one point it says that the date value must be UTC midnight, and UTC midnight of Aug 14 is still Aug 13 in California.  At other points it says that the time value is ignored and the numeric day value (13) would be used.  The two different interpretations yield different days for the date-time to date conversion.

It gets even worse when you consider time zones at opposite ends of the timezone spectrum.  When I call team members in Japan, we always have to remember to specify the date at each end of the call … because even though we are meeting at one instant in time, it is always a different day there.  This affects your ability to convert times to dates and back.

Time-of-day values, oddly, can have a time zone indicator.  This may not strike you as odd immediately, but it should.  Time zones vary their offset from GMT at different times of the year.  California is either 8 or 7 hours behind GMT, depending on whether you are in winter or summer.  But the time-of-day value does not specify whether it is in summer or winter.  Subtracting two time-of-day values can give results varying by 0, 1, or 2 hours depending on the time of year that the subtraction is done, and it is not even clear how to determine the time of year to use.  The server’s current date?  Your model will give different results at different times of the year.  Also, you can combine a date and a time-of-day to get a date-time, but it is not clear what happens when the time-of-day has a timezone.  For example, if I combine an Aug 14 date with a time-of-day of 8pm in California, do I get Aug 13 or Aug 14 in California?  Time-of-day has to be positive (according to the spec), but this appears to add 24 hours in certain cases where the timezone offset is negative.

If that is not enough, it is not clear that the DMN model will be interpreted the same way in different time zones.  Remember that phone call to Japan?  The same DMN model running in Japan will see a different date than the same model running in California.  If your business rule says that something has to happen by April 15, a given timestamp in Japan might be too late, while the exact same time in California still has hours to go.

I write systems that collect data all over the world.  We correlate and process events from a server running in India and compare them to events from one in Finland and another in Washington DC.  I am left scratching my head trying to figure out how I am going to write rules that work the same way on data from different locations, so that those rules run exactly the same way on servers running in different time zones.  It is critical that these decision models be clear, unambiguous, and run the same way in every location.

Solution is Simple

Given all the systems that support date and time, it is surprising that FEEL does not just borrow from something that has been shown to work.  I take my position from Java, which has solved the problem nicely.  The date-time value is well defined as the epoch value (the number of milliseconds since Jan 1, 1970).  Then Java offers a Calendar object for all the rest of the calculations and conversions, which takes into account all the vagaries of specific timezone offsets, including daylight time switching.  The Calendar offers calculations like converting a string representation to a date, and converting a date back to a string.  This is already well tested and proven, so just use it.

First: In the DMN spec, date-time values should simply be compared using the epoch value — the number of seconds since Jan 1, 1970 UTC.  This value is already what is used for greater-than and less-than comparisons.  The spec should be changed to do the same for equals comparisons.  This would make the date-time value for 3pm in New York equal to 12 noon in California for that same day, which seems clearly to be what you want.  The current spec says these are NOT the same time.  This would also give a clear order for sorting all date-time values.
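
For comparison, here is a small sketch of the behavior being argued for, using the java.time API (this illustrates epoch-based comparison in Java, not DMN/FEEL itself): the two values name the same instant, so an epoch comparison treats them as equal, while comparing the zoned values directly keeps the zone distinction.

    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class EpochEquality {
        public static void main(String[] args) {
            // 3pm in New York and 12 noon in California on the same day
            ZonedDateTime newYork = ZonedDateTime.of(2017, 8, 14, 15, 0, 0, 0,
                    ZoneId.of("America/New_York"));
            ZonedDateTime california = ZonedDateTime.of(2017, 8, 14, 12, 0, 0, 0,
                    ZoneId.of("America/Los_Angeles"));

            // Same epoch instant, so an epoch-based comparison says "equal"
            System.out.println(newYork.toInstant().equals(california.toInstant())); // true

            // Comparing the zoned values directly keeps the zone distinction
            System.out.println(newYork.equals(california)); // false
        }
    }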

Second: The DMN spec should then define a default timezone for any model.  Any date or time value without a timezone indicator is interpreted to be in the model's default time zone.  Date-time calculations (such as adding 3 days, or converting a date-time to a date or a time) use a calendar for that time zone locale.  A date value would then be the 24-hour period for that date in that default calendar.  A time of day would be in the default timezone, and would probably handle daylight time changes correctly.
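
Here is a short sketch of what a model-level default timezone buys you; the zone IDs and the April 15 deadline are only illustrative, and this is ordinary java.util.Calendar code rather than anything DMN-specific:

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class ModelDefaultZone {

    // Convert an instant to a calendar date in the given zone.
    static String dateIn(long epochMillis, String zoneId) {
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone(zoneId));
        cal.setTimeInMillis(epochMillis);
        return String.format("%04d-%02d-%02d",
                cal.get(Calendar.YEAR), cal.get(Calendar.MONTH) + 1, cal.get(Calendar.DAY_OF_MONTH));
    }

    public static void main(String[] args) {
        // 8pm on April 15, 2017 in California, expressed as an epoch instant.
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone("America/Los_Angeles"));
        cal.clear();
        cal.set(2017, Calendar.APRIL, 15, 20, 0, 0);
        long instant = cal.getTimeInMillis();

        System.out.println(dateIn(instant, "America/Los_Angeles")); // 2017-04-15
        System.out.println(dateIn(instant, "Asia/Tokyo"));          // 2017-04-16, past an "April 15" deadline
        // If the model itself fixes the zone used for the conversion, the rule gives
        // the same answer no matter which server happens to evaluate it.
    }
}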

This solves most of the strangeness.  Since the model defines its own timezone, it always executes exactly the same way, no matter where the model is being interpreted.  You are never dependent on the “local timezone” of the server.  And since identical points in time always compare as equal, even if those points in time came from different locations, the rules around handling time are clear, unambiguous, and “friendly enough”.

Final Note

I don’t actually know the rationale for the unusual aspects of the specification.  Maybe there is some special reason for the arcane approach.  If so, one might need to invent a couple of new date functions to handle them along with the scheme above.  I would hazard a bet that those functions would be identical to ones already on the Java Calendar object.  We really don’t need to be inventing a new and incompatible way of dealing with date values.  But I will wait for feedback and see.

by kswenson at August 04, 2017 12:50 AM

August 01, 2017

Drools & JBPM: Drools, jBPM and Optaplanner Day: September 26 / 28, 2017 (NY / Washington)

Red Hat is organizing a Drools, jBPM and Optaplanner Day in New York and Washington DC later this year to show how business experts and citizen developers can use business processes, decisions and other models to develop modern business applications.
This free, full-day event will focus on some key aspects, and several community experts will be there to showcase some of the more recent enhancements, for example:
  • Using the DMN standard (Decision Model and Notation) to define and execute decisions
  • Moving from traditional business processes to more flexible and dynamic case management
  • The rise of cloud for modeling, execution and monitoring
The event is aimed at IT executives, architects, software developers, and business analysts who want to learn about the latest open source, low-code application development technologies.

Detailed agenda and list of speakers can be found on each of the event pages.

Places are limited, so make sure to register ASAP!

by Edson Tirelli (noreply@blogger.com) at August 01, 2017 11:00 PM

July 13, 2017

Sandy Kemsley: Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so might be my last post from here if I skip out on the one session that I have bookmarked for late this...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 04:16 PM

Sandy Kemsley: Getting started with OpenText case management

I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 02:57 PM

Sandy Kemsley: OpenText Process Suite becomes AppWorks Low Code

“What was formerly known as Process Suite is AppWorks Low Code, since it has always been an application development environment and we don’t want the focus to be on a single technology...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 01:57 PM

July 12, 2017

Sandy Kemsley: OpenText Process Suite Roadmap

Usually I live-blog sessions at conferences, publishing my notes at the end of each, but here at OpenText Enterprise World 2017, I realized that I haven’t taken a look at OpenText Process Suite...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 10:03 PM

Sandy Kemsley: OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 02:47 PM

July 11, 2017

Sandy Kemsley: OpenText Enterprise World keynote with @markbarrenechea

I’m at OpenText Enterprise World 2017  in Toronto; there is very little motivating me to attend the endless stream of conferences in Vegas, but this one is in my backyard. There have been a...

[Content summary only, click through for full article and links]

by sandy at July 11, 2017 05:44 PM

July 03, 2017

Drools & JBPM: Drools, jBPM and Optaplanner are switching to agile delivery!

Today we would like to give everyone in the community a heads-up on some upcoming changes that we believe will be extremely beneficial to the community as a whole.

The release of Drools, jBPM and Optaplanner version 7.0 a few weeks ago brought more than just a new major release of these projects.

About a year ago, the core team and Red Hat started investing in improving a number of processes related to the development of the projects. One of the goals was to move from an upfront-planning, waterfall-like development process to a more iterative, agile development process.

The desire to deliver features earlier and more often to the community, as well as to better adapt to devops-managed cloud environments, required changes from the ground up: from how the team manages branches to how it automates builds and delivers releases. That is a challenge for any development team, but even more so for a team that is essentially remote, with developers spread all over the world.

Historically, Drools, jBPM and Optaplanner aimed for a cadence of 2 releases per year. Some versions with a larger scope took a bit longer, some were a bit faster, but on average that was the norm.

With version 7.0 we started a new phase in the project. We are now working with 2-week sprints, and with an overall goal of releasing one minor version every 2 sprints. That is correct, one minor version per month on average.

We are currently in a transition phase, but we intend to release version 7.1 at the end of the next sprint (~6 weeks after 7.0), and then we are aiming to release a new version every ~4 weeks after that.

Reducing the release timeframe brings a number of advantages, including:
  • More frequent releases give the community earlier access to new features, allowing users to try them and provide valuable feedback to the core team. 
  • Reducing the scope of each release allows us to do more predictable releases and to improve our testing coverage, maintaining a more stable release stream.
  • Bug fixes as usual are included in each release, allowing users more frequent access to them as well. 
It is important to note that we will continue to maintain backward compatibility between minor releases (as much as possible; this is even more important in the context of managed cloud deployments, where seamless upgrades are the norm), and the overall scope of features is expected to remain similar to what it was before. That has two implications:
  • If before we would release version 7.1 around ~6 months after 7.0, we will now release roughly 6 new versions in those 6 months (7.1, 7.2, ..., 7.6), but the amount of features will be roughly equivalent. I.e., the old version 7.1 is roughly equivalent in terms of features to the scope of the new versions 7.1, ..., 7.6 combined. It just splits the scope into smaller chunks and delivers earlier and more often.
  • Users that prefer not to update so often will not lose anything. For instance, a user that updated every 6 months can continue to do so, but instead of jumping from one minor version to the next, they will jump 5-6 minor versions. This is not a problem, again, because the scope is roughly the same as before and the backward compatibility between versions is the same.
This is of course a work in progress and we will continue to evolve and adapt the process to better fit the community's and users' needs. We strongly believe, though, that this is a huge step forward and a milestone in the project's maturity.

by Edson Tirelli (noreply@blogger.com) at July 03, 2017 10:48 PM

June 22, 2017

Keith Swenson: Complex Project Delay

I run large complex software projects.  A naive understanding of complex project management can be more dangerous than not knowing anything about it.  This is a recent experience.

Setting

A large important customer wanted a new capability.  Actually, they thought they already had the capability, but discovered that the existing capability didn’t quite do what they needed.  They were willing to wait for development, however they felt they really deserved the feature, and we agreed.  “Can we have it by spring of next year?”    “That seems reasonable” I said.

At that time we had about 16 months.  We were finishing up a release cycle, so nothing urgent, I planned on a 12 month cycle starting in a few months.  I will start the project in “Month 12” and count down to the deadline.

We have a customer account executive (let’s call him AE) who has asked to be the single point of contact with this large, important customer.  This makes sense because you don’t want the company making a lot of commitments on the side without at least one person keeping a list of them all and making sure they are all followed through on.

Shorten lines of communication if you can.  Longer lines of communication make it harder to have reliable communication, so more effort is needed.

Palpable Danger

The danger in any such project is that you have a fixed time period, but the precise requirements are not specified.  Remember that the devil is in the details.  Often we throw around the terms “Easy to Use”, “Friendly”, and “Powerful”, and those can mean anything in detail.  Even terms that seem very specific, like “Conformance to spec XYZ”, can leave considerable interpretation to the reader.  All written specifications are ambiguous.

The danger is that you will get deep into the project, and it will come to light that the customer expects functionality X.  If X is known at the beginning, the design can incorporate it from the beginning, and it might be relatively inexpensive.  But retrofitting X into a project when it is half completed can multiply that cost by ten times.  The goal then is to get all the required capabilities to a suitable level of detail before you start work.

A software project is a lot like piloting a big oil tanker.  You get a number of people going in different, but coordinated, directions.  As the software starts to take form, all the boundaries between the parts that the people are working on gradually firm up and become difficult to change.  As the body of code becomes large, the cost of making small changes increases.  In my experience, at about the halfway point, the entire oil tanker is steaming along in a certain direction, and it becomes virtually impossible to change course without drastic consequences.

With a clear agreement up front, you avoid last minute changes.   The worst thing that can happen is that late in the project, the customer says “But I really expected this to run on Linux.”   (Or something similar).   Late discoveries like this can be the death knell.   If this occurs, there are only two possibilities: ship without X and disappoint the customer, or change course to add X, and miss the deadline.  Either choice is bad.

Danger lies in the unknown.  If it is possible to shed light and bring in shared understanding, the risk for danger decreases.

Beginning to Build a Plan

In month 12, I put together a high level requirements document.  This is simply to create an unambiguous “wish list” that encompasses the entire customer expectation.  It does NOT include technical details on how they will be met.  That can be a lot of work.  Instead, we just want the “wishes” at this time.

If we have agreement on that, we can then flesh out the technical details in a specification for the development.  This is a considerable amount of work, and it is important that this work be focused on the customer wishes.

I figured on a basic timetable like this:

  • Step 1: one month to agree on requirements  (Month 12)
  • Step 2: one month to develop and agree on specification (Month 11)
  • Step 3: one month to make a plan and agree on schedule (Month 10)
  • Step 4: about 4 months of core development (Months 9-6)
  • Step 5: about 4 months of QA/finishing (Months 5-2)
  • leaving one month spare just in case we need it. (Month 1)

Of course, if the customer comes back with extensive requirements, we might have to rethink the whole schedule.  Maybe this is a 2 year project.  We won’t know until we get agreement on the requirements.

Then AE comes to a meeting and announces that the customer is fully expecting to get the delivery of this new capability in Month 3!  Change of schedule!  This cuts us down to having only 9 months to deliver.  But more importantly, we have no agreement yet on what is to be delivered.  This is the classic failure mode: agreeing to a hard schedule before the details of what is to be delivered are worked out.  This point should be obvious to all.

The requirements document is 5 pages, one of those pages is the title page.  It should be an afternoon’s amount of work to read, gather people, and get this basic agreement.

Month 12 comes to an end.  Finally, toward the middle of Month 11, the customer comes back with an extensive response.  Most of what they are asking for in “requirements” are things that the product already does, so no real problem.  There are a few things that we cannot complete on this schedule, so we need to push back.  But I am worried: we are six weeks into a task that should have been completed a month earlier.

Deadlines are missed one day at a time.

We’ve Got Plenty of Time

At the end of Month 11, I revised the requirements and gave them to AE.  AE’s response was not to give them to the customer.  He said “let’s work on and understand this first, before we give it to the customer.”  This drags on for another couple of weeks, so we are now 8 weeks into the project, and we have not completed the first step, originally planned to take one month.

I press AE on this.  We are slipping day by day, week by week.  This is how project deadlines are missed.  What was originally planned as 1 month out of twelve is now close to 2 months, out of 9.  We are getting squeezed!

AE says:  “What is the concern?   We have 8 more months to go!   What do a few weeks matter out of 8 months?”

The essence of naive thinking that causes projects to fail is the idea that there is plenty of time and we can waste some.

Everything Depends

Let’s count backwards through the dependencies:

  • We want to deliver a good product that pleases the customer
  • This depends on using our resources wisely and getting everything done
  • This depends on not having any surprises late in the project about customer desires which waste developer time
  • This depends on having a design that meets all the expectations of the customer
  • This depends on having a clear understanding of what the customer wants before the shape of the project starts to ossify.
  • This depends on having clear agreement on the customer desires before all of the above happens.

Each of these cascades, and a poor job in any step causes repercussions that get amplified as last minute changes echo through the development.

I also want to say that this particular customer is not flakey.  They are careful in planning what they want, and don’t show any excessive habit of changing their direction.  They are willing to wait a year for this capability.  I believe they have a good understanding of what they want — this step of getting agreement on the requirements is really just a way to make sure that the development team understands what the customer wants.

Why Such a Stickler?

AE says: “You should be able to go ahead and start without the agreement on requirements.  We have 8 more months; we can surely take a few more weeks or months getting this agreement.”

Step 2 is to draw up a specification and to share that with the customer.  Again, we want to be transparent so that we avoid any misunderstanding that might cause problems late in the project.  However, writing a spec takes effort.

Imagine that I ask someone to write a spec for features A, B, and C.   Say that is two weeks of work.   Then the customer asks for feature D, and that causes a complete change in A, B, and C.  For example, given A, B, and C we might decide to write in Python, and that will have an effect on the way things are structured.  Then the customer requires running in an environment where Python is not available.  That simple change would require us to start completely over.  All the work on the Python design is wasted work which we have to throw out, and could cause us to lose up to a month of time on the project, causing the entire project to be late.   However, if we know “D” before we start, we don’t waste that time.

Step 2 was planned to take a month, so if we steal 2 weeks from that, by being lazy about getting agreement on the requirements, we already lose half the time needed.  It is not likely that we can do this step in half the time.  And the two weeks might be wasted, causing us to need even more time.  Delaying the completion of step 1, can cause an increase in time of step 2, ultimately cascading all the way to final delivery.

Coming to agreement on the requirements should take 10% of the time, but if not done, could have repercussions that cost far more than 10% of the time.  It is important to treat those early deadlines as if the final delivery of the project depended on them.

Lack of attention to setting up the project at the front, always has an amplified effect toward the end of the project.

But What About Agile?

Agile development is about organizing the work of the team to be optimally productive, but it is very hard to predict accurate deliveries at specific dates in the future.  I can clearly say we will have great capabilities next year, and the year after.  But in this situation the customer has a specific expectation in a specific time frame.

Without a clear definition of what they want, the time to develop is completely unpredictable.  There is a huge risk in having an agreed-upon date but no agreed-upon detailed functionality.

Since the customer understands what they want, the most critical and urgent thing is to capture that desire in a document we both can agree on.  The more quickly that is done, the greater the reduction in risk and danger.

Even when developing in an agile way, the better we understand things up front, the better the whole project will go.  Don’t leave things in the dark just because you are developing in an agile way.  It is a given that there are many things that can’t be known in the course of a project, but that gives no license to purposefully ignore things that can be known.

Conclusions

Well-run projects act as if early deadlines are just as important as late deadlines.  Attention to detail is not something that just appears at the last moment.  It must start early and run through the entire project.

Most software projects fail because of a lack of clear agreement on what will satisfy the customer.  It is always those late discoveries that cause projects to miss deadlines.  A well-run project requires strict attention to clarifying the goals as early as possible.

Do not ignore early deadlines.  Act as if every step of a project is as important as the final delivery.   Because every step is as important as the final delivery.

by kswenson at June 22, 2017 05:02 PM

June 21, 2017

Sandy Kemsley: Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 06:08 PM

Sandy Kemsley: Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions at the second day at Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging markets for consumer IoT devices. His premise is...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 04:40 PM

June 20, 2017

Sandy Kemsley: Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make...

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 07:56 PM

Sandy Kemsley: IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live....

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 04:18 PM

June 13, 2017

BPM-Guide.de: “Obviously there are many solutions out there advertising brilliant process execution, finding the “right” one turns out to be a tricky task.” – Interview with Fritz Ulrich, Process Development Specialist

Fritz graduated with a Bachelors of Information Systems at WWU Münster in 2013 and since then has been working for Duni GmbH in the area of Process Development (responsible for all kinds of BPM topics and Duni’s BPM framework) and as a Project Manager.

by Darya Niknamian at June 13, 2017 08:00 AM

June 05, 2017

Keith Swenson: Initial DMN Test Results

The initial Decision Model and Notation (DMN) TCK test results are in!   The web site is up, showing the results from three vendors.

Tests of Correctness

There are currently 52 tests which require a conforming DMN implementation to read a DMN model in the standard DMN XML-based file format.   Along with the model are a set of input values, and the expected values to compare the outputs to.  Everything is a file, so that no matter what technology environment the DMN implementation requires, it need only read the files and run the models.

The results of running the tests are reported back to the committee by way of a simple CSV file.  The three vendors who have done this to date are Red Hat with the DROOLS rules engine, Trisotech with their web-based modeling tools, which also leverage the DROOLS implementation, and Camunda with their Camunda BPM.   It is worth mentioning that one more implementation has been involved to verify and validate the tests created by Bruce Silver, but it is not included in the results since it is not commercialized.

What we all get from this is the assurance that an implementation really is running the standard model in a standard way.  This can help you avoid a costly mistake of adopting a technology that takes you down a blind alley.

Open Invitation

This is an open invitation for anyone working in the DMN space:

  • If you are developing DMN technology, you can take the tests for free and try them out.  When your implementation does well, send us the results and we can put you on the board to let everyone know.
  • If you are using DMN from some vendor, ask them if they have looked at the tests, and if not, why not?

The tests are all freely available, and there are links from the web site directly to the test models and data.

Acknowledgement

I certainly want to acknowledge the hard work of people at Red Hat, Trisotech, Camunda, Open Rules (who will be releasing their results soon), Bruce Silver, and several others who made this all come about.


by kswenson at June 05, 2017 11:30 AM

May 29, 2017

Drools & JBPM: New KIE persistence API on 7.0

This post introduces the upcoming Drools and jBPM persistence API. The motivation for creating a persistence API that is not bound to JPA (as persistence in Drools and jBPM was until the 7.0.0 release) is to allow clean integration of persistence mechanisms other than JPA. While JPA is a great API, it is tightly bound to a traditional RDBMS model, with the drawbacks inherited from there: being hard to scale and difficult to get good performance from on ever-scaling systems. With the new API we open up for the integration of various general NoSQL databases, as well as the creation of tightly tailor-made persistence mechanisms, to achieve optimal performance and scalability.
At the time of this writing several implementations have been made: the default JPA mechanism; two generic NoSQL implementations, backed by Infinispan and MapDB, which will be available as contributions; and a single tailor-made NoSQL implementation discussed briefly later in this post.

The changes made to the Drools and jBPM persistence mechanisms, their new features, and how they allow clean new persistence implementations for KIE components are the basis for a new, soon-to-be-added experimental MapDB integration module. The existing Infinispan adaptation has been changed to accommodate the new structure.
Because of this refactor, we can now have other persistence implementations for KIE without depending on JPA, unless our specific persistence implementation is JPA-based. It has, however, implied a set of changes:

Creation of drools-persistence-api and jbpm-persistence-api

In version 6, most of the persistence components and interfaces were only present in the JPA projects, from where they had to be reused by other persistence implementations. We had to refactor these projects so that these interfaces can be reused without pulling in the JPA dependencies each time. Here's the new set of dependencies:
<dependency>
 <groupId>org.drools</groupId>
 <artifactId>drools-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.jbpm</groupId>
 <artifactId>jbpm-persistence-api</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>

The first thing to mention about the classes in this refactor is that the persistence model used by KIE components for KieSessions, WorkItems, ProcessInstances and CorrelationKeys is no longer a JPA class, but an interface. These interfaces are:
  • PersistentSession: For the JPA implementation, this interface is implemented by SessionInfo. For the upcoming MapDB implementation, MapDBSession is used.
  • PersistentWorkItem: For the JPA implementation, this interface is implemented by WorkItemInfo, and MapDBWorkItem for MapDB
  • PersistentProcessInstance: For the JPA implementation, this interface is implemented by ProcessInstanceInfo, and MapDBProcessInstance for MapDB
The important part is that, if you were using the JPA implementation, you can continue doing so with the same classes as before. All components are prepared to work with these interfaces, which brings us to our next point.

PersistenceContext, ProcessPersistenceContext and TaskPersistenceContext refactors

The persistence context interfaces in version 6 were dependent on the JPA implementations of the model. In order to work with other persistence mechanisms, they had to be refactored to work with the runtime model (ProcessInstance, KieSession, and WorkItem, respectively), build the implementations locally, and be able to return the right element if requested by other components (ProcessInstanceManager, SignalManager, etc.).
Also, for components like TaskPersistenceContext there were multiple dynamic HQL queries used in the task service code which would not be implementable in another persistence model. To avoid this, they were changed to use specific mechanisms closer to a Criteria. This way, the different filtering objects can be used in different ways by other persistence mechanisms to create the required queries.

Task model refactor

The way the current task model relates tasks to content, comment, attachment and deadline objects was also dependent on the way JPA stores that information, or more precisely, the way ORMs relate those types. So a refactor of the task persistence context interface was introduced to handle the relation between components for us, if desired. Most of the methods are still there, and the different tables can still be used, but if we just want to use a Task to bind everything together as an object (the way a NoSQL implementation would do it), we now can. For the JPA implementation, it still relates objects by ID. For other persistence mechanisms like MapDB, it just adds the sub-object to the task object, which it can fetch from internal indexes.
Another thing that was changed for the task model is that, before, we had different interfaces to represent a Task (Task, InternalTask, TaskSummary, etc) that were incompatible with each other. For JPA, this was ok, because they would represent different views of the same data.
But in general the motivation behind this mix of interfaces is to allow optimizations towards table-based stores - by no means a bad thing. For non-table-based stores, however, these optimizations might not make sense. Making these interfaces compatible allows implementations where the runtime objects retrieved from the store implement a multitude of the interfaces without breaking any runtime behavior. Making these interfaces compatible can be viewed as a first step; a further refinement would be to let these interfaces extend each other to underline the model and make the implementations simpler.
(For other types of implementation like MapDB, where it would always be cheaper to get the Task object directly than to create a different object, we needed to be able to return a Task and make it work as a TaskSummary if the interface requires it. All interfaces now share the same method names to allow for this.)

Extensible TimerJobFactoryManager / TimerService

In version 6, the only possible implementations of a TimerJobFactoryManager were bound at construction time to the values of the TimerJobFactoryType enum. A refactor was done to extend the existing types, allowing other types of timer job factories to be added dynamically.

Creating your own persistence. The MapDB case

All these interfaces can be implemented anew to create a completely different persistence model, if desired. For MapDB, this is exactly what was done. In the case of the MapDB implementation that is still under review, there are three new modules:
  • org.kie:drools-persistence-mapdb
  • org.kie:jbpm-persistence-mapdb
  • org.kie:jbpm-human-task-mapdb
These are meant to implement the whole task model using MapDB implementation classes. Anyone who wishes to have another type of implementation for the KIE components can just follow these steps to get an implementation going:
  1. Create modules for mixing the persistence API projects with a persistence implementation mechanism dependencies
  2. Create a model implementation based on the given interfaces with all necessary configurations and annotations
  3. Create your own (Process|Task)PersistenceContext(Manager) classes, to implement how to store persistent objects
  4. Create your own managers (WorkItemManager, ProcessInstanceManager, SignalManager) and factories with all the necessary extra steps to persist your model.
  5. Create your own KieStoreServices implementation that creates a session with the required configuration, and add it to the classpath
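
For completeness, once the MapDB modules listed above are released, pulling them in should look much like the JPA dependencies shown earlier. The artifact IDs below come from that module list; the version is only an assumption, mirroring the 7.0.0-SNAPSHOT used earlier in this post, so check the actual released coordinates:
<dependency>
 <groupId>org.kie</groupId>
 <artifactId>drools-persistence-mapdb</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.kie</groupId>
 <artifactId>jbpm-persistence-mapdb</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>
<dependency>
 <groupId>org.kie</groupId>
 <artifactId>jbpm-human-task-mapdb</artifactId>
 <version>7.0.0-SNAPSHOT</version>
</dependency>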

You’re not alone: The MultiSupport case

MultiSupport is a Denmark-based company that has used this refactor to create its own persistence implementation. They provide an archiving product that is focused on creating an O(1) archive retrieval system, and had a strong interest in getting their internal processes to work using the same persistence mechanism they use for their archives.
We worked on an implementation that allowed for an increase in the response time for large databases. Given their internal mechanism for lookup and retrieval of data, they were able to create an implementation with millions of active tasks which had virtually no degradation in response time.
In MultiSupport we have used the persistence API to create a tailored store based on our in-house storage engine; our motivation has been to provide unlimited scalability, extended search capabilities, simple distribution, and a level of performance we struggled to achieve with the JPA implementation. We think this can be used as a showcase of just how far you can go with the new persistence API. With the current JPA implementation and a dedicated SQL server we had achieved an initial performance of less than 10 ‘start process’ operations per second; now, with the upcoming release, on a single application server we achieve performance more than 10-fold higher.

by Marian Buenosayres (noreply@blogger.com) at May 29, 2017 10:35 PM

May 08, 2017

BPM-Guide.de: “From a development point of view, it is important that the BPM software be open.” – Interview with Michael Kirven, VP of IT

As the Vice President of Business Solutions within the IT Applications area at People’s United Bank I manage a team of developers across multiple development technologies with a focus towards bringing efficiencies to the bank’s back office areas. I’ve been with People’s United Bank since 1999 and before that I was a commercial software developer for several different startup companies.

by Darya Niknamian at May 08, 2017 08:00 AM

Keith Swenson: bpmNEXT Keynotes

Talks by Nathaniel Palmer, Jim Sinur, Neil Ward-Dutton, and Clay Richardson kick off this bellwether event in the process industry.  The big theme is digital transformation.

BPMAllFour
Very interesting to see how all four play off of each other, and reflect a surprisingly lucid view of the trends in the process space.

Nathaniel Palmer (¤)

Kicked off the event and gave an excellent overview of the industry.  Exponential organizations define the decade, and where is BPM in that?  He has been promoting Robotic Process Automation as a key topic for a number of years now, and it has finally come into popular usage.  Who thought India would be a hotspot for this interest?  Lots of kinds of robots: conversational assistants (Echo), a robot lawyer accessed by web page, mobile robots that assist people in stores, industrial robots, etc.  Tasks have different meanings and different behavior based on the way that you interact with them.

We are going to need an army of robot lawyers to combat an army of robot lawyers for every transaction we do.  Previous laws are based on a utility curve that assumes a limit to how much effort you would be willing to put in, but robots don’t care about that utility curve.

Biggest contribution is this architecture for a suite of capabilities needed for a digital transformation platform, which includes process management, decision management, machine learning and automation.

We should consider this architecture as a common understanding of where BPM is going.

BPMFramework


Jim Sinur (¤)

BPM is morphing.  Goal directed, autonomous and robots.  2017 Trends:

  • Predictive apps get smarter.  Predictive and cognitive.  Decision criteria
  • Big data, deep learning
    • Machine learning is easy today; found 112 machine learning algorithms.  Takes a lot of horsepower.
    • Medium scale is deep learning on the fly and updating knowledge as you go.
    • High cost is cognitive computing is expert and highly trained on specific topics.  Watson takes a long time to train.  Healthcare is a big focus.
  • IoT: how to manage, how to talk.  NEST protocol gaining steam.  Autonomy at the edge, not just smart centrally.  Things will have smarter chips.  The predefined processes of the past will give way to dynamic processes.  Example: GM has paint robots that bid on jobs, and optimize the order of work.  Different booths have different quality ratings.
  • Sales engagement platforms (SEP) and service engagement platforms
  • Video enabled business applications.  BPM is more about collaborative work management.  All the fast upcoming company has included workflow.  Training.
  • Chat bots and digital assistants.  Considers his Amazon Echo to be such a thing.
  • Virtual reality will be more and more important.  Gamification.  Glasses now.  Google glass a failure.
  • Work Hubs and Platforms.
  • Drones are being put to work.  surveillance.  Delivery.
  • Block Chain – In production very few.  Contained, constrained, small volume, real time.  Builds.  May require new kinds of chips.  Security and integrity is crucial.

Digital business platform:  (1) business applications (2) processes and cases, (3) machines & sensors (4) cognition and calculations (5) data and systems integration.  Real live solution involves drone that flies checking pipeline status.  Spectrum of vendors.  Change management is a key part of all this. Digital transformation is mistakenly compared to enterprise re-engineering.  Digital Identity is critical.

Digital DNA:  goal driven processes, robots, cognitive, digital assistants, software bots, intelligent process, controllers, deep learning, learning, voice, RPA, sensors, machine learning.

BPMSinur


Neil Ward-Dutton (¤)

The new wave of Automation

Context – A major shift in the experience of automation.  A shift in how we interact with machines.  We are used to shaping our lives around automated systems.  Training ourselves.  Automation lines are designed for the robots.   But it is going the other way.

  1. way they interact with their environment
  2. flexing and recommending, and
  3. packaging work for us.
  • First industrial automation (flour mill) 1785
  • Learning system only since the 1960s/1970s.

Layers – Three layers; Interaction, Insight, Integration.

  • Interaction is sensing and responding.
  • Insight is about moving away from static analysis of plans, but dynamic reevaluation from moment to moment.
  • Integration is about componentization, automate resources.  Not just EAI, but more openness.

Drivers – why are we seeing these things now?

  1. rapidly evolving – the fundamental assumption was that computing resources were scarce.  Expensive, hard to get, and mistakes are costly.  But that is changing.
  2. business pressures
    • Customer experience excellence.  How to create customer journeys that are enticing.
    • Decouple knowledge from labor.
    • Perform at speed
  3. familiarity – automation, bots, recommendations

It is crazy that the people who turn over the most, need the least training, and cost the least are the people who talk to the customers the most.

Impacts

  • Insight: some HR system that identifies people who are likely to leave, and what the cost of that person leaving would mean.
  • Integration:  Does RPA belong here?  Primarily integration.

Follow the money: expert assistants (increase impact of experts), case advisors (make everyone as good as the best), task automators (highly procedural routine tasks).  And personal productivity.

It is bullshit to say that a particular job will disappear — maybe tasks, but not entire jobs.

Your opportunity.

Layers: (see graphic at the end of section) highly automated tasks on lower levels.  Sense and respond above that.  Then human personal productivity assistants – useful but low value.

  • Chatbots – text based or speech based interactions, but no real smarts.
  • Recommendation services – expert, next best action
  • Smart Infrastructure – maintenance, management

Virtuous cycle, three steps:

  • Shift to self service in terms of access to tools.  Integration tools.  Process.  Kanban kind of tools.  Tools that can be used by a much broader audience.
  • Shift to (network) platforms.  Aggregates data and insights.  IaaS was really about cost and scale.  This is different.  Shared, networked, cloud platform.  Not about the underlying technology, but data and insights.  If everyone has to build their own, they won’t have access to aggregated data.  But the value of a networked cloud platform is access to data, aggregating insights, quicker.
  • Shift to learning systems.  Figuring out how to take recommendation services.

SnapLogic: integration assistant.
Boomie Dell StepLogic:  operational data from customers history.  Can identify customers that are struggling.

BPMNeal

BPMNeal2


Clay Richardson (¤)

How to Survive the Great Digital Migration

Clay has a new startup, Digital Fast Forward, and is also advising at American University.

A poll found that digital transformation is the #1 priority for BPM.  But the teams don’t know what to do.  Prediction that 75% of BPM programs will fail.   Referenced the World Economic Forum’s Fourth Industrial Revolution.  Creativity used to be low on the list, now near the top.  Companies are not seeing this yet.

Digital gold rush versus the digital drought.  They get the technology part, but not the skill part.  Less than 20% of companies have the skills.

AT&T’s competitors are not just Verizon and Sprint, but also tech giants like Amazon and Google. For the company to survive in this environment, Mr. Stephenson needs to retrain its 280,000 employees so they can improve their coding skills, or learn them, and make quick business decisions based on a fire hose of data coming into the company.

Strategies to address these: hire, reinvent, and outsource.  Try to take all three.

Maintain –> waterfall,   implement–>agile/scrum,   experiment –> ?????    (design thinking?)  Actually very ad-hoc.

Design,  Validate, Learn    Check out Google design sprints.   How do you move quickly into design sprints.  How many are familiar with Objectives and Key Results (OKR)?

Not just what you learn, but HOW you learn it.  Has to be interactive and immersive.  Like a hackathon.

Digital innovation boot camp.  6 weeks in silicon valley.  retrain to become digital experts.  Put together for immersion.  Real world experiences.  Tripled the volume of digital innovation ideas.  Accelerated speed to green light digital projects.

Incorporated ‘Escape Room’ concepts into training exercises that he ran.  People love learning in interactive, immersive situations.

Digital platforms must evolve to support experimentation.  AI, robotics, mobile, low code, IoT.   They are going to have to bring rapid prototyping, OKR management, and hypothesis boards.  Need the cycle of build, measure, learn.

BPMClay


by kswenson at May 08, 2017 07:40 AM

April 25, 2017

Drools & JBPM: Just a few... million... rules... per second!

How would you architect a solution capable of executing literally millions of business rules per second? That also integrates hybrid solutions in C++ and Java? While at the same time drives latency down? And that is consumed by several different teams/customers?

Here is your chance to ask the team from Amadeus!

They prepared a great presentation for you at the Red Hat summit next week:

Decisions at a fast pace: scaling to multi-million transactions/second at Amadeus

During the session they will talk about their journey from requirements to the solution they built to meet their huge demand for decision automation. They will also talk about how a collaboration with Red Hat helped to achieve their goals.

Join us for this great session on Thursday, May 4th, at 3:30pm!



by Edson Tirelli (noreply@blogger.com) at April 25, 2017 03:41 PM

April 24, 2017

Drools & JBPM: DMN demo at Red Hat Summit

We have an event packed full of Drools, jBPM and Optaplanner content coming next week at the Red Hat Summit, but if you would like to know more about Decision Model and Notation and see a really cool demo, then we have the perfect session for you!

At the Decision Model and Notation 101 session, attendees will get a taste of what DMN brings to the table: how it allows business users to model executable decisions using a fun, high-level, graphical language that promotes interoperability and preserves their investment by preventing vendor lock-in.

But this will NOT be your typical slideware presentation. We have prepared a really nice demo of the end-to-end DMN solution announced by Trisotech a few days ago. During the session you will see a model being created with the Trisotech DMN Modeler, statically analyzed using the Method&Style DT Analysis module and executed in the cloud using Drools/Red Hat BRMS.

Come and join us on Tuesday, May 2nd at 3:30pm.

It is a full 3-course meal, if you will. And you can follow that up with drinks at the reception happening from 5pm-7pm at the partner Pavilion, where you can also talk to us at the Red Hat booth about it and anything else you are interested in.

Happy Drooling!



by Edson Tirelli (noreply@blogger.com) at April 24, 2017 11:52 PM

April 20, 2017

Sandy Kemsley: Cloud ECM with @l_elwood @OpenText at AIIM Toronto Chapter

Lynn Elwood, VP of Cloud and Services Solutions at OpenText, presented on managing information in a cloud world at today’s AIIM chapter meeting in Toronto. This is of particular interest...

[Content summary only, click through for full article and links]

by sandy at April 20, 2017 02:28 PM

April 14, 2017

Keith Swenson: AdaptiveCM Workshop in America for first time

The Sixth International AdaptiveCM Workshop will be associated with the EDOC conference this year, which will be held in Quebec City in October 2017 and is the first opportunity for many US and Canadian researchers to attend without having to travel to Europe.

Since 2011 the AdaptiveCM Workshop has been the premier place to present and discuss leading-edge ideas for advanced case management and other non-workflow approaches to supporting business processes and knowledge workers in general.  It has been held in conjunction with the EDOC conference twice before, and the BPM conference twice as well; however, it has always been held in Europe in the past.

Key dates:

  • Paper submission deadline – May 7, 2017
  • Notification of acceptance – July 16, 2017
  • Camera ready – August 6, 2017
  • Workshop – October 10, 2017

Papers are welcome on the following topics:

  • Non-workflow BPM: how does one specify working patterns that are not fixed in advance, that depend upon cooperation, and where the elaboration of the working pattern for a specific case is a product of the work itself.  Past workshops have included papers on CMMN and Dynamic Condition Response Graphs.
  • Adaptive Case Management: experience and approaches for how knowledge workers use their time in an agile way, including empirical studies of how knowledge work teams share and control their information, with contributions from vendors like Computas and ISIS Papyrus.
  • Decision Modeling and Management: is a new extension of the workshop this year to encourage papers that explore the ways that a decision model might be used away from a strictly defined process diagram for flexible knowledge work.

The biggest challenge is that many people working on systems for knowledge workers don’t know their systems have features in common with others.  For example: a system to help lawyers file all the right paperwork with the courts may not be seen initially as having commonality with a system to help maintenance workers handle emergency repairs.  Those commonalities exist — because people must manage their time in the face of change — and understanding their common structure is critical to allowing agile organizations to operate more effectively.

Titles of papers in recent years:

  • On the analysis of CMMN expressiveness: revisiting workflow patterns
  • Semantics of Higraphs for Process Modeling and Analysis
  • Limiting Variety by Standardizing and Controlling Knowledge Intensive Processes
  • Using Open Data to Support Case Management
  • Declarative Process Modelling from the Organizational Perspective.
  • Automated Event Driven Dynamic Case Management
  • Collective Case Decisions Without Voting
  • A Case Modelling Language for Process Variant Management in Case-based Reasoning
  • An ontology-based approach for defining compliance rules by knowledge workers in ACM: A repair service management case
  • Dynamic Context Modeling for ACM
  • Towards Structural Consistency Checking in ACM
  • Examining Case Management Demand using Event Log Complexity Metrics
  • Process-Aware Task Management Support for Knowledge-Intensive Business Processes: Findings, Challenges, Requirements
  • Towards a pattern recognition approach for transferring knowledge in ACM
  • How can the blackboard metaphor enrich collaborative ACM systems?
  • Dynamic Condition Response Graphs for Trustworthy Adaptive Case Management
Collaboration between research and practice

Participants in the past have come from all the key research institutions across Europe, as well as some of the key vendors of flexible work support systems.  This year we hope to attract more interest from researchers and practitioners from Canada, the US, and the western hemisphere, together with the core EDOC community drawn from all over the world.   Meet and discuss approaches and techniques, and spend a day investigating and sharing all the latest approaches.

I will be there in Quebec City in October for sure, and hope to see as many of you as can make it!

download the: PDF Handout

 


by kswenson at April 14, 2017 12:28 PM

April 12, 2017

Drools & JBPM: DMN Quick Start Program announced

Trisotech, a Red Hat partner, announced today the release of the DMN Quickstart Program.

Trisotech, in collaboration with Bruce Silver Associates, Allegiance Advisory and Red Hat, is offering the definitive Decision Management Quick Start Success Program. This unique program provides the foundation for learning, modeling, analyzing, testing, executing and maintaining DMN level 3-compliant decision models, as well as best practices to incorporate in an enterprise-level Decision Management Center of Excellence.

The solution is a collaboration between the partner companies around the DMN standard. This is just one more advantage of standards: not only are users free from the costs of vendor lock-in, but standards also allow vendors to collaborate in order to offer customers complete solutions.

by Edson Tirelli (noreply@blogger.com) at April 12, 2017 10:43 PM

April 11, 2017

Drools & JBPM: An Open Source perspective for the youngsters

Please allow me to take a break from the technical/community oriented posts and talk a bit about something that has been on my mind a lot lately. Stick with me and let me know what you think!

Twenty one years ago, Leandro Komosinski, one of the best teachers (mentor might be more appropriate) I had, told me in one of our meetings:

"- You should never stop learning. In our industry, if you stop learning, after three years you are obsolete. Do it for 5 years and you are relegated to maintaining legacy systems or worse, you are out of the market completely. "

While this seems pretty obvious today, it was a big insight to that 18-year-old boy. I don’t really have any data to back this claim or the timeframes mentioned, but that advice stuck with me ever since.

It actually applies to everything, it doesn’t need to be technology. The gist of it: it is important to never stop learning, never stop growing, personally and professionally.

That brings me to the topic I would like to talk about. Nowadays, I talk to a lot of young developers. Unfortunately, several of them when asked “What do you like to do? What is your passion?” either don’t know or just offer generic answers: “I like software development”.

"But, what do you like in software development? Which books have you been reading? Which courses are you taking?" And the killer question: "which open source projects are you contributing to?"

The typical answer is: “- the company I work for does not give me time to do it.” 

Well, let me break it down for you: “this is not about the company you work for. This is about you!” :) 

What is your passion? How do you fuel it? What are you curious about? How do you learn more about it?

It doesn’t need to be software, it can be anything that interests you, but don’t waste your time. Don’t wait for others to give you time. Make your own time.

And if your passion is technology or software, then it is even easier. Open Source is a lot of things to a lot of people, but let me skip ideology. Let me give you a personal perspective for it: it is a way to learn, to grow, to feed your inner kid, to show what you care for, to innovate, to help.

If you think about Open Source as “free labour” or “work”, you are doing it wrong. Open source is like starting a masters degree and writing your thesis, except you don’t have teachers (you have communities), you don’t have classes (you do your own exploratory research), you don’t have homework (you apply what you learn) and you don’t have a diploma (you have your project to proudly flaunt to the world). 

It doesn’t matter if your project is used by the Fortune 500 or if it is your little pet that you feed every now and then. The important part is: did you grow by doing it? Are you better now than you were when you started?

So here is my little advice for the youngsters (please take it at face value):

- Be restless, be inquisitive, be curious, be innovative, be loud! Look for things that interest you in technology, arts, sociology, nature, and go after them. Just never stop learning, never stop growing. And if your passion is software development, then your open source dream project is probably a google search away.

Happy Drooling,
Edson

by Edson Tirelli (noreply@blogger.com) at April 11, 2017 06:40 PM

April 03, 2017

BPM-Guide.de: BPM software should evolve and interoperate with other standards and tools – Interview with Judy Fainor, Chief Architect

Judy Fainor is the Chief Architect at Sparta Systems where she is responsible for enterprise software design, technology direction, and architecture. She has over 25 years of experience in product development including leading patent initiatives, speaking at technical conferences and interacting with Fortune 500 customers. Prior to Sparta Systems she was responsible for the architectural strategy of the IBM Optim Data Management portfolio where she led research and development projects that spanned IBM’s global labs including Japan, India, China, Israel and North America while also participating on the IBM Software Group Architecture Board.

by Darya Niknamian at April 03, 2017 08:00 AM

March 31, 2017

Drools & JBPM: A sneak peek into what is coming! Are you ready?

As you might have guessed already, 2017 will be a great year for Drools, jBPM and Optaplanner! We have a lot of interesting things in the works! And what better opportunity to take a look under the hood at what is coming than joining us for a session, a side talk, or a happy hour at the upcoming conferences?

Here is a short list of the sessions we have at two great conferences in the next month! The team and I hope to meet you there!

Oh, and check the bottom of this post for a discount code for the Red Hat Summit registration!


Santa Barbara, California April 18-20, 2017

by Edson Tirelli (noreply@blogger.com) at March 31, 2017 11:53 PM

March 21, 2017

Drools & JBPM: DMN 1.1 XML: from modeling to automation with Drools 7.0

I am a freelance consultant, but I am acting today as a PhD student. The global context of my thesis is Enterprise Architecture (EA), which requires modeling the enterprise. As one aspect of EA is business process modeling, I have been using BPMN for years, but this notation is not very appropriate for representing decision criteria: a cascade of nested gateways quickly becomes difficult to understand and then to modify. So, when OMG published the first version 1.0 Beta of the DMN specification in 2014, I found DMN a very interesting notation for modeling decision-making. I succeeded in developing my own DMN modeling tool, based on the DMN metamodel, using the Sirius plugin for Eclipse. But even the subsequent “final” version 1.0 of the DMN specification was not very accomplished.

The latest version 1.1 of DMN, published in June 2016, is quite good. In the meantime, software vendors (at least twenty) have launched good modeling tools, such as Signavio Decision Manager (free for academics), which is used for this article. This Signavio tool was already able to generate specific DRL files for running DMN models on the current version 6 of the Drools BRMS. In addition to the graphics, some vendors recently added the capability to export DMN models (diagram & decision tables) into "DMN 1.1 XML" files, which are compliant with the DMN specification. And the good news is that BRMSs like Drools (future version 7, available in beta) are able to run these DMN XML files to automate decision-making (a few lines of Java code are required to invoke these high-level DMN models).
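
Those "few lines" look roughly like the following sketch, based on the Drools 7 DMN API as documented for the beta. It assumes the exported .dmn file is packaged in a kjar with a default KieSession available on the classpath, and the namespace, model name, input name and value are placeholders to replace with whatever your export actually contains:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.dmn.api.core.DMNContext;
import org.kie.dmn.api.core.DMNDecisionResult;
import org.kie.dmn.api.core.DMNModel;
import org.kie.dmn.api.core.DMNResult;
import org.kie.dmn.api.core.DMNRuntime;

public class RunDmnModel {
    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();   // kjar containing the .dmn file
        KieSession session = container.newKieSession();           // assumes a default ksession in kmodule.xml
        DMNRuntime dmnRuntime = session.getKieRuntime(DMNRuntime.class);

        // Placeholders: use the namespace and name attributes of your <definitions> element.
        DMNModel model = dmnRuntime.getModel("http://example.com/dmn/loan", "Loan Approval");

        DMNContext ctx = dmnRuntime.newContext();
        ctx.set("Applicant Age", 35);                              // placeholder input data name and value

        DMNResult result = dmnRuntime.evaluateAll(model, ctx);
        for (DMNDecisionResult dr : result.getDecisionResults()) {
            System.out.println(dr.getDecisionName() + " = " + dr.getResult());
        }
        session.dispose();
    }
}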

This new approach of processing the "DMN 1.1 XML" interchange model directly is better for tool independence and model portability. Here is a short comparison between the former classic but specific solution and this new, generic solution, using Signavio Decision Manager (latest version 10.13.0). MDA (Model Driven Architecture) and its three models, CIM, PIM & PSM, give us an appropriate reading grid for this comparison:

3 MDA models | Description | Classic specific DMN solution (from Signavio Decision Manager to BRMS Drools)
CIM (Computation Independent Model) | Representation model for business, independent of computer considerations | DRD (Decision Requirements Diagram) + Decision Tables
PIM (Platform Independent Model) | Design model for computing, independent of the execution platform | (none: the PIM level is skipped)
PSM (Platform Specific Model) | Design model for computing, specific to the execution platform | DRL (Drools Rule Language) + DMN Formulae Java8-1.0-SNAPSHOT.jar

The visible aspect of DMN is its emblematic Decision Requirements Diagram (DRD), which can be complemented with Decision Tables representing the business logic for decision-making. A DRD and its Decision Tables make up a CIM model, independent of any computer considerations.

In the classic but specific DMN solution, Signavio Decision Manager can export, from a business DMN model (DRD diagram and Decision Tables), a DRL file directly for the Drools rules engine. This solution therefore skips the intermediate PIM level, which is not very compliant with the MDA concept. Note that this DRL file requires a specific Signavio jar library containing the DMN formulae.

3 MDA models | Description | New generic DMN solution (from Signavio Decision Manager or other tools, to BRMS Drools or other BRMSs)
CIM (Computation Independent Model) | Representation model for business, independent of computer considerations | DRD (Decision Requirements Diagram) + Decision Tables
PIM (Platform Independent Model) | Design model for computing, independent of the execution platform | DMN 1.1 XML (interchange model) containing FEEL expressions
PSM (Platform Specific Model) | Design model for computing, specific to the execution platform | (none: the PSM level is no longer needed)

The invisible aspect of DMN is its DMN XML interchange model, which is very useful for exchanging a model between modeling tools. DMN XML is also very useful for going from model to automation. The DMN XML model takes computer considerations into account, but as it is defined in the DMN specification, a standard published by the OMG (Object Management Group), it is independent of any execution platform, so it is a PIM model. DMN XML complies with the DMN metamodel and can be checked against an XSD schema provided by the OMG. The latest version 1.1 of DMN has refined this DMN XML format.
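
As an illustration of that check, here is a minimal sketch using the standard Java XML validation API. The file names (DMN11.xsd, myDecisionModel.dmn) are assumptions: they should point to a local copy of the OMG-provided DMN 1.1 schema and to the model exported by the modeling tool:

```java
import java.io.File;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class DmnXmlSchemaCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical file names: a local copy of the OMG DMN 1.1 XSD
        // and the DMN XML file exported by the modeling tool.
        File dmnSchema = new File("DMN11.xsd");
        File dmnModel = new File("myDecisionModel.dmn");

        // Build a W3C XML Schema validator from the DMN 1.1 XSD.
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(dmnSchema);
        Validator validator = schema.newValidator();

        // Throws an exception if the file does not comply with the schema.
        validator.validate(new StreamSource(dmnModel));
        System.out.println("The DMN XML file complies with the DMN 1.1 XSD.");
    }
}
```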

As DMN is a declarative language, a DMN XML file essentially contains declarations. The business logic it includes can be expressed with FEEL (Friendly Enough Expression Language) expressions. All entities required for a DMN model (input data, decision tables, rules, output decisions, etc.) are exported into the DMN XML file through a mechanism called serialization. That is why automation is now possible directly from DMN XML. Note that not all DMN modeling tools can export (or import) the DMN XML format.

With the new generic DMN solution, Signavio Decision Manager can now export, from the same business DMN model (DRD diagram and Decision Tables), the "DMN 1.1 XML" interchange model. As the future 7.0.0 version of Drools is able to interpret the "DMN 1.1 XML" format directly, the last level, the PSM specific to the execution platform, is no longer needed.

The new generic DMN solution, which does not skip the PIM level, is definitely better than the specific one and is a good basis for automating decision-making. Another advantage, as Signavio points out, is that this new approach using "DMN 1.1 XML" reduces vendor lock-in.

Thierry BIARD

by Thierry Biard (noreply@blogger.com) at March 21, 2017 03:10 PM

March 18, 2017

Sandy Kemsley: Twelve years – and a million words – of Column 2

In January, I read Paul Harmon’s post at BPTrends on predictions for 2017, and he mentioned that it was the 15th anniversary of BPTrends. This site hasn’t been around quite that long, but today marks...

[Content summary only, click through for full article and links]

by sandy at March 18, 2017 01:17 PM

March 13, 2017

BPinPM.net: Invitation to Best Practice Talk in Hamburg

Dear BPM-Experts,

To facilitate knowledge exchange and networking, we would like to invite you to our first "Best Practice Talk" about process management. The event will take place on March 30, 2017, at the Mercure Hotel Hamburg Mitte.

Experts from Olympus, ECE, and PhantoMinds will provide inspiring presentations and we will have enough time to discuss BPM questions. 🙂

Please visit our XING event for all the details:
https://www.xing.com/events/best-practice-talk-prozessmanagement-hamburg-1788523

See you in Hamburg!
Mirko

by Mirko Kloppenburg at March 13, 2017 08:31 PM

March 12, 2017

Drools & JBPM: DroolsJBPM organization on GitHub to be renamed to KieGroup


In preparation for the 7.0 community release in a few weeks, the "droolsjbpm" organization on GitHub will be renamed to "kiegroup". This is scheduled to happen on Monday, March 13th.

While the rename has no effect on the code itself, if you have cloned any of the code repositories, you will need to update your local copy with the proper remote URL, changing it from:

https://github.com/droolsjbpm/<repository>.git

to:

https://github.com/kiegroup/<repository>.git

Unfortunately, the URL redirect feature in GitHub will not support this rename, so you will likely have to update the URL manually on your local machines (for example, with git remote set-url origin <new URL>).

Sorry for the inconvenience.

by Edson Tirelli (noreply@blogger.com) at March 12, 2017 02:56 PM

March 08, 2017

BPM-Guide.de: My Interview with 5 Beautifully Unique Women at Camunda

Let me start by saying Happy International Women’s Day to my fellow females and males who are taking it upon themselves to discuss and share their emotions, experiences and challenges faced by women around the globe.

With the recent article by former Uber employee Susan Fowler and the theme of this year's International Women's Day, Women in the Changing World of Work: Planet 50-50 by 2030, I wanted to highlight the diversity of women I have the privilege of working with every day, to further showcase that women continue to take on a variety of roles, changing the workforce.

I am a firm …

by Darya Niknamian at March 08, 2017 07:34 AM

February 03, 2017

Sandy Kemsley: AIIM breakfast meeting on Feb 16: digital transformation and intelligent capture

I’m speaking at the AIIM breakfast meeting in Toronto on February 16, with an updated version of the presentation that I gave at the ABBYY conference in November on digital transformation and...

[Content summary only, click through for full article and links]

by sandy at February 03, 2017 01:15 PM