Planet BPM

February 22, 2018

Keith Swenson: Usability is a Factor in Security

I am once again going through data security training.  That is not in itself bad, but the misguided and outdated recommendations propagate misinformation.  Security is very important, but why can’t the “security” experts learn?

We are presented with this guidance:

The most important aspect of a strong password is that it is easy to remember.  When you have 30 to 40 passwords to use just for official job functions (not counting all the personal online services) it can be a challenge to keep them all straight.  A hard-to-remember password will be written down … because it is hard to remember.  Duh!

Writing down passwords is a security flaw.  Duh again!

Therefore, it is quite clear that for a strong password to be used successfully, it must at the same time be easy to remember.  One can easily make a strong password that is easy to remember … without engaging in any of the prohibited actions mentioned on the right.

The strongest password you can create is a list of letters that corresponds to a complete phrase that you are familiar with.  Take a phrase that you know but that is obscure, probably something personal to you which you can easily remember, but which nobody would necessarily associate with you.  For this example I am using a very popular phrase, but in real use you should not use a popular phrase.

“Now is the time for all good men to come to the aid of their country”

Your password is the first letter of each word: “nittfagmtcttaotc”

That is a strong password.  Try it.  If you know the phrase, it is easy to type.  It is essentially a random collection of letters, and it will be as hard for a password-cracking program to guess as any password.

It is even better if you are a little less regular about this and mix up how you transform the letters.  One recommendation I saw turns the phrase “This little piggy went to market” into “tlpWENT2m”.  Another turns “Try to crack my latest password, all you hackers” into “t2cmlp,@yh”.
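As a quick illustration of the first-letters transformation, here is a throwaway Java sketch (obviously you would not paste a real phrase or password into a program you keep around):

  public class PassphraseInitials {
      // Builds a password from the first letter of each word in a memorable phrase.
      static String initials(String phrase) {
          StringBuilder sb = new StringBuilder();
          for (String word : phrase.toLowerCase().split("\\s+")) {
              if (!word.isEmpty()) sb.append(word.charAt(0));
          }
          return sb.toString();
      }

      public static void main(String[] args) {
          // Prints: nittfagmtcttaotc
          System.out.println(initials("Now is the time for all good men to come to the aid of their country"));
      }
  }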

You can have the strongest password in the world, but if you have to write it down, then all advantages of a strong password are lost.

Why do these “security recommendations” ignore the fact that human factors are the single largest cause of unauthorized access?  There is some evidence that cryptic, hard-to-remember passwords are actually worse than simpler but longer passphrases.  NIST guidelines recommend that people create long passphrases instead of gobbledygook short passwords.
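For a rough, back-of-the-envelope sense of why length can beat complexity, compare the entropy of the two styles (my own arithmetic, assuming each character or word is chosen at random; the figures are illustrative, not from NIST):

  public class EntropyCompare {
      static double log2(double x) { return Math.log(x) / Math.log(2); }

      public static void main(String[] args) {
          // bits of entropy = length * log2(pool size)
          double cryptic    = 8 * log2(72);    // 8 random chars from ~72 printable symbols ≈ 49 bits
          double passphrase = 5 * log2(7776);  // 5 random words from a 7776-word list ≈ 65 bits
          System.out.printf("cryptic: %.1f bits, passphrase: %.1f bits%n", cryptic, passphrase);
      }
  }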

It continues with this page:

What is missing?  Usability.  Again, you can make a super secure system, but if it is extremely difficult to use, then there is a strong disincentive to use it.

For example, in Windows one can control access to every document down to exactly which users can read or update it.  But almost no business user today ever uses that capability!  It is too tedious.  Sometimes, on a network drive, one might grant access to an entire folder to a group, but that leaves the door open to a wide variety of people.  This is not really Microsoft’s fault: the paradigm of granting specific users access to specific files is just inherently tedious.

The most important aspect of a security system is that a regular business user can easily navigate the controls to accurately restrict access for the wrong people and allow access for the right people.

Part of a good usable system will be indicators that tell you when something is wrong and help you get around it.  For example, when a user should have access and doesn’t, there should be an easy way to request and get access — without a lot of tedium.  The access control should be easily visible.

When there is a security failure or a misconfiguration, there should be a clear error message that accurately states what was restricted and why.  The more primitive security thinking holds that no error should be produced, or, if one is, that it should contain no discernible information.  All of this makes most data security environments so difficult to use that people avoid them.

Eventually I encounter this screen in the training:

The checked answer is the one you must choose to answer the question correctly.

I had thought the security experts were simply oblivious to usability concerns, but it seems that they are actively against having passwords that are easy to remember!  They actually believe that having a hard-to-remember password is better security!  Unbelievable!  It can be a big challenge to remember 30 to 40 passwords just for the job, all of which are expected to be different and changed reasonably often.

Yes, I know, password managers like LastPass do a good job of solving this memory problem.  Very convenient.  The clear advantage is that you can have super strong passwords, as long as you want, and change them as often as you want.  I used it for a while, but I had some problems with it inserting itself into all the web pages I browsed.  I don’t remember exactly the problem, but I had to stop using it.

Again, an easily remembered password will not be written down, and therefore will be safer and more secure.

Super Ironic

The email telling me about the security course has my username and password (for the course) directly in the message.  Yes, I know this is not a super-secure bit of information, but if you want to train people to behave the right way, it would make sense to demonstrate the correct behavior even when it is not strictly necessary.  It is particularly ironic that a training class on security uses a bad security shortcut itself.

Why do they do that?  It is too difficult otherwise!  They found that people failed to complete the course when they had to set up and maintain a separate password just for the course.  Hello?  Is anyone listening?

Another course

I finished another course, and the last step of that course was to fill in a survey.  I followed the link to the appropriate page and logged in carefully.  The instructions said to press a button to fill in the survey.  This is what was presented to me:

Is it access control gone wrong?  Maybe.  Probably.  The people who made the survey probably have no idea this is happening.  Usability is strictly a secondary consideration around security access problems.

I am not the only one feeling this way

 

by kswenson at February 22, 2018 04:58 PM

February 07, 2018

Drools & JBPM: Running multi Workbench modules on the latest IntelliJ Idea with live reloading (client side)


NOTE: The instructions below apply only to the old version of the gwt-maven-plugin

At some point in the past, IntelliJ released an update that made it impossible to run the Workbench using the GWT plugin. After exchanging ideas with people on the team and pooling the solutions, some workarounds have emerged. This guide explains how to run any Errai-based application in the latest version of IntelliJ, along with other modules, to take advantage of IntelliJ's (unfortunately limited) live reloading capabilities and speed up the development workflow.


Table of contents


1. Running Errai-based apps in the latest IntelliJ
2. Importing other modules and using live reload for client side code
3. Advanced configurations
3.1. Configuring your project's pom.xml to download and unpack Wildfly for you
3.2. Alternative workaround for non-patched Wildfly distros



1. Running Errai-based apps in the latest IntelliJ


As Max Barkley described on #logicabyss a while ago, IntelliJ has decided to hardcode gwt-dev classes onto the classpath when launching Super Dev Mode in the GWT plugin. Since we're using the EmbeddedWildflyLauncher to deploy the Workbench apps, these dependencies are now deployed inside our Wildfly instance. Nothing too wrong with that, except that the gwt-dev jar depends on apache-jsp, which has a ServletContainerInitializer marker file that causes the deploy to fail.

To solve that issue, the code that looks for the ServletContainerInitializer file and causes the deploy to fail was removed in custom patched versions of Wildfly, which are available in Maven Central under the org.jboss.errai group id.

The following steps provide a quick guide to running any Errai-based application on the latest version of IntelliJ.


1. Download a patched version of Wildfly and unpack it into any directory you like
- For Wildfly 11.0.0.Final go here

2. Import the module you want to work on (I tested with drools-wb)
  - Open IntelliJ, go to File -> Open.. and select the pom.xml file, hit Open then choose Open as Project

3. Configure the GWT plugin execution like you normally would on previous versions of IntelliJ

- VM Options:
  -Xmx6144m
  -Xms2048m
  -Dorg.uberfire.nio.git.dir=/tmp/drools-wb
  -Derrai.jboss.home=/Users/tiagobento/drools-wb/drools-wb-webapp/target/wildfly-11.0.0.Final


- Dev Mode parameters:
  -server org.jboss.errai.cdi.server.gwt.EmbeddedWildFlyLauncher


4. Hit the Play button and wait for the application to be deployed


2. Importing other modules and using live reload for client side code


Once you are able to run a single webapp inside the latest version of IntelliJ, it can be very useful to import some of its dependencies as well, so that after changing client code in a dependency you don't have to wait (way) too long for GWT to compile and bundle your application's JavaScript code again.

Simply go to File > New > Module from existing sources.. and choose the pom.xml of the module you want to import.
If you have kie-wb-common or appformer imported alongside another project, you'll most certainly have to apply a patch to the beans.xml file of your webapp.

For drools-wb you can download the patch here. For other projects such as jbpm-wb, optaplanner-wb or kie-wb-distributions, you'll essentially have to do the same thing, but change the directories inside the .diff file.

If your webapp is up, hit the Stop button and then hit Play again. Now you should be able to recompile any code changed inside IntelliJ much faster.



3. Advanced configurations


3.1. Configuring your project's pom.xml to download and unpack Wildfly for you


If you are used to a less manual workflow, you can use the maven-dependency-plugin to download and unpack a Wildfly instance of your choice to any directory you like.

After you've added the snippet below to your pom.xml file, remember to add a "Run Maven Goal" step before the Build step of your application in the "Before launch" section of your GWT run configuration. Here I'm using the process-resources phase, but other phases are OK too.

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <executions>
      <execution>
        <id>unpack</id>
        <phase>process-resources</phase>
        <goals>
          <goal>unpack</goal>
        </goals>
        <configuration>
          <artifactItems>
            <artifactItem>
              <!-- Using a patched version of Wildfly -->
              <groupId>org.jboss.errai</groupId>
              <artifactId>wildfly-dist</artifactId>
              <version>11.0.0.Final</version>
              <type>zip</type>
              <overWrite>false</overWrite>
              <!-- Unpacking it into /target/wildfly-11.0.0.Final -->
              <outputDirectory>${project.build.directory}</outputDirectory>
            </artifactItem>
          </artifactItems>
          <skip>${gwt.compiler.skip}</skip>
        </configuration>
      </execution>
    </executions>
  </plugin>



3.2. Alternative workaround for non-patched Wildfly distros


If you want to try a different version of Wildfly, or if you simply don't want to depend on any patched versions, you can still use official distros and exclude the ServletContainerInitializer file from the apache-jsp jar in your M2_REPO folder.

If you're working on a Unix system, the following commands should do the job.

1. cd ~/.m2/repository/

2. zip -d org/eclipse/jetty/apache-jsp/{version}/apache-jsp-{version}.jar META-INF/services/javax.servlet.ServletContainerInitializer

Because you remove the file from inside the apache-jsp jar rather than deleting the jar itself, Maven won't try to download the artifact again. That makes this workaround permanent as long as you don't erase your ~/.m2/ folder. Keep in mind that if you ever need the apache-jsp jar to have this file back, the best option is to delete the apache-jsp dependency directory and let Maven download it again.


Instructions for the new version of the gwt-maven-plugin are coming soon, stay tuned!




by Tiago Bento (noreply@blogger.com) at February 07, 2018 03:25 PM

January 25, 2018

Keith Swenson: Product Trial Strategies

Selling big, complex products is always a challenge.  I was recently asked why we don't simply make the product available on the cloud with free sign-up and access so that people can try it out for free.  Here is my response.

Been There, Done That

In 2010 we launched our cloud-based BPM initiative, and we set it up to allow people free access.  We ran this until around 2014.  Obviously those were early days, and if we did it again now we might do a much better job.  We still have BPM in the cloud and the same thing on premises, however you want it.  But we don’t offer free trials on the cloud.

We learned a couple of things.  The main one is that enterprise application integration and BPM are inherently complex subjects.  The problem is not drawing a diagram.  The problem is wading through the myriad networks of existing systems to determine what needs to be called when, and what all the boundary conditions are going to be.  Your legacy systems were not designed to be “integrated to”, and they lack proper documentation of any kind for use with next-generation technology.

VM Approach Preferred

Instead of offering people a free trial on the cloud, we offer a free trial as a downloadable VM.  You download a 4 to 5 gigabyte file, and in ten minutes you can have it running using VMware Player or another such tool.  We put it on a freely distributable version of Linux, include the community version of Postgres, and free versions of everything else you would need.

With a free VM, you have everything that you would get from a free cloud trial; beyond that, consider these points:

  • Enterprise integration is not something you do casually in an hour or two of trying.  Even with the powerful tools we offer, it takes a serious effort to even detail a problem in that space, and you can only appreciate the powerful techniques when in the middle of a very sticky problem.
  • To put it in terms that most would understand: Oracle making a relational database available on the cloud as a try-before-you-buy service would make no sense because the kinds of things you have to do with a database are not done in a couple of hours of fiddling.
  • Downloading a VM is pretty quick and easy.  It takes about 10 minutes of work to get it running, but that is honestly not much more than accessing a cloud service.
  • With a cloud service, you can’t save versions as you go, and restore to that point like you can with a VM.  A VM will allow you to prepare a demo, and save it in that state, so that every demo starts in the same situation.  With the cloud, everything you do is final.
  • The agile approach means you want to try things out quickly.  With a VM you can do this with the confidence that if you decide that is a wrong direction, you can always go back to the last saved copy.
  • With the cloud you cannot give a copy to a coworker.  Giving a coworker access to your cloud instance means that they will be doing things in there while you are.  With a VM you can have as many copies as you want running simultaneously.
  • With a cloud service it is difficult to work on two independent projects at the same time.  If the vendor allows you two copies of the cloud service, then you could do it that way.  But with a VM you can have two or more copies, one for each concurrent project if you choose.  When one project goes on hiatus, you shut down the VM, assured that when the project starts back up you just need to restart the VM.  There is essentially no cost to storing the dormant VM — but that is not the case with the free-trial cloud versions.
  • You cannot access a cloud service from a highly secure location.  You might or might not be able to bring a VM into such a situation.
  • Typically with a cloud approach, you get a limited time… like one month … after which it is all lost.  You might think that is good for sales, but it only works for sales of very simple software.  Learning the details of enterprise integration takes months, and the prospect of losing it all after one month is a significant barrier to potential customers.

Conclusion

I don’t mean to say that a “free trial on the cloud” approach is a bad idea.  It is great for products that can be learned in a few hours of fiddling.  But the above limitations are real when dealing with a system designed to handle big problems.  We have opted for a VM approach because it is a better approach for learning the system, teaching the system, doing development, and also for doing demonstrations of solutions built on the system.

by kswenson at January 25, 2018 06:51 PM

January 17, 2018

Sandy Kemsley: A variety of opinions on what’s ahead for BPM in 2018

I was asked to contribute to 2018 prediction posts on a couple of different sites, along with various other industry pundits. Here’s a summary. BPM.com: Predictions BPM.com published The Year Ahead...

[Content summary only, click through for full article and links]

by sandy at January 17, 2018 02:47 PM

January 11, 2018

Sandy Kemsley: Prepping for OPEXWeek presentation on customer journey mapping – share your ideas!

I’m headed off to OPEX Week in Orlando later this month, where I’ll give a presentation on customer journey mapping and how it results in process improvement as well as customer satisfaction/value....

[Content summary only, click through for full article and links]

by sandy at January 11, 2018 06:43 PM

January 05, 2018

Sandy Kemsley: Vega Unity 7: productizing ECM/BPM systems integration for better user experience and legacy modernization

I recently had the chance to catch up with some of my former FileNet colleagues, David Lewis and Brian Gour, who are now at Vega Solutions and walked me through their Unity 7 product release. Having...

[Content summary only, click through for full article and links]

by sandy at January 05, 2018 02:03 PM

January 03, 2018

Keith Swenson: BPM and 2018

Is it a new year already?  Hmmm.  Time to look around and reassess the situation.

  • Hottest topic last year: Robotic Process Automation (RPA).  The “robot” uses the regular HTML user interface to inject data into and extract data from systems that lack a proper data-level web service API.  I guess this means that SOA is dead – long live the new SOA.
  • The cloud is no longer scary.  Companies are moving data out of their data centers as quickly as they can, hoping to avoid the liability of actually holding sensitive data, and letting others take on that problem.
  • It seems that all business process systems have a case management component now.  Maybe this year we can finally completely merge the ACM and BPM awards programs.
  • Most important innovation in the process space for 2018: Deep Learning.  AlphaGo showed us a system that can play a game that was considered unsolvable only a few years ago, and it did this without any programming by humans.  This rests on tremendous advances in (1) big data and (2) cheap parallel computation, but….
  • Most disappointing innovation for 2018: Deep Learning.  Learning systems really have not solved the broad, open-ended problems we need solved in the process space; we are still limited to hand-coded algorithms.  Deep learning exhibits very quirky reliability: some amazing results, but lots of overwhelmingly problematic results on the long tail of exceptional situations.  In such a system it is hard to understand what has been learned, and hard to modify and adapt it without starting over.  Automatically improving a process requires understanding the business (cultural, moral, etc.) far outside the system.  This is only the beginning.
  • Process mining will continue to be under-appreciated in 2018.
  • SOAP can finally be ignored.  REST has won.
  • Decision Modeling continues to show promise, but it really is just an improvement on how to express computable programs, and it highlights the limitations of BPMN more than it represents anything new.  The DMN TCK made tremendous progress in helping to firm up the still-incomplete DMN spec.
  • Self-managed organizations continue to rise, and Slack seems to be the most sophisticated technology really needed to make this happen.

What do we have to look forward to?

  • We will be holding another Adaptive Case Management Workshop this year at the time of the EDOC conference in October in Stockholm.  Our 2017 experiment trying to run this in America failed due to inability to attract attendees from outside of Europe.
  • The BPM conference will be in Sydney Australia this year and it should be as good as ever.
  • Open Rules is planning another Decision Camp, this time in Brussels in mid-September.

So, indeed it is another year.  Happy New Year!

——————–

Here is a helpful common sense video which lends some perspective about the state of artificial intelligence and deep learning:

by kswenson at January 03, 2018 04:39 PM

January 02, 2018

Sandy Kemsley: ITESOFT | W4 Secure Capture and Process Automation digital business platform

It’s been three years since I looked at ITESOFT | W4’s BPMN+ product, which was prior to W4’s acquisition by ITESOFT. At that time, I had just seen W4 for the first time at bpmNEXT 2014, and had...

[Content summary only, click through for full article and links]

by sandy at January 02, 2018 12:58 PM

December 29, 2017

Sandy Kemsley: Column 2 wrapup for 2017

As the year draws to an end, I’m taking a look at what I wrote here this year, and what you were reading. I had fewer posts this year since I curtailed a lot of my conference travel, but still...

[Content summary only, click through for full article and links]

by sandy at December 29, 2017 01:54 PM

December 22, 2017

Sandy Kemsley: A Perfect Combination: Low Code and Case Management

The paper that I wrote on low code and case management has just been published – consider it a Christmas gift! It’s sponsored by TIBCO, and you can find it here (registration required)....

[Content summary only, click through for full article and links]

by sandy at December 22, 2017 05:46 PM

December 15, 2017

Sandy Kemsley: What’s in a name? BPM and DPA

The term “business process management” (BPM) has always been a bit problematic because it means two things: the operations management practice of discovering, modeling and improving business...

[Content summary only, click through for full article and links]

by sandy at December 15, 2017 06:31 PM

December 14, 2017

Keith Swenson: 2017 BPM Awards

There are a number of conclusions about the industry that we can draw from this year’s WfMC Awards for Excellence in BPM.  Thirteen submissions won awards this year across a number of industries and practices.  Below are the key takeaways, followed by a summary of the cases:

Key Takeaways

The details of the winners can best be explored by reading the individual cases which will be available in a book later next year.  But across all of the winners, I saw these distinct trends:

  • BPM together with Case Management – in almost every study, the system was a hybrid that included both BPMN style pre-defined processes, as well as non-modeled goal-oriented cases that are structured while you work.  Predictable and unpredictable are implemented together.
  • No longer just Finance – While banking and insurance are still large users of BPM, they are no longer alone in the field.  This year cases were from retail, utility, internet provider, two examples of management consulting, construction, facility management, two examples of government, telecom, automotive, medical devices, and health care plans.
  • Avoid Big Bang – half of the studies pointed out that processes should not be perfected before use, and that such perfection is a waste.  Work in smaller chunks, where each chunk is still a viable minimal process.
  • Agile Incremental Development – implement part of the process and let it evolve as everyone learns what works and what does not. Clearly, the ability to change on the fly is critical.  In several cases this was identified as the single most important ingredient of success.
  • Still Manual Work to be Automated – we are far from completely automated: most of the cases were fresh automation of manual processes, but a couple were reworks of earlier automation attempts.
  • South America – this region showed up remarkably strong this year, winning 6 of the 13 awards, followed by Europe with 3, the USA with 3, and Mexico with 1.  This seems to show that the South American market is maturing.
  • No Sign of Shakeout – the technology was from a variety of sources, some open source, some new to the field.  There is no evidence that all the cases are settling down to a few dominant vendors.
  • Digital Transformation Included – in every case we saw signs of attempts to fundamentally redesign the business using internet technologies.  The destiny of workflow, which became BPM, is to find its full fruition in digital transformation platforms.

BPM Award Winners

  • DIA is a multinational retail company with more than 7000 stores across Spain, Portugal, Argentina, Brazil and China.  They had grown by acquisition and merging and naturally their different product lines were being handled differently, and this was causing delays.  They focused on standardizing new product introduction, decreasing the amount of time product managers devote to this by 70%, eliminating 50% of the purely administrative work, and reducing errors by 80%.
  • EPM is a public utility providing energy, gas and water in Colombia.  They were able to reduce service costs by 50% and measure a 60% increase in service level agreements.
  • FiberCorp offers cloud, internet, data, and video services in Argentina.  They wanted to reduce their time to market for new products and services, in the most extreme case reducing delay from 2 weeks down to 10 minutes.
  • Groupo A offers project management and education services in Brazil.  They have been on a 10 year journey to transform the way their 200 employees generate and deliver content.
  • Hilti AG is a construction industry leader supported by a very savvy team from the University of Liechtenstein.  They point out that BPM is used in two distinct ways: (1) to optimize existing processes, and (2) to innovate and bring about transformations in organizations.  They strongly adopted case management techniques and reduced the number of separate ERP systems from 50 to 1.
  • ISS Facility Services operates facilities for the private and public sectors across Europe, Asia-Pacific, and North and South America. Again, their solution involves a combination of automation (process) with flexibility (case management).
  • New York State Backoffice Operations was challenged by Governor Cuomo to streamline their systems, and be ready to handle all invoices in less than 22 days.  Their 57 agencies had 57 different billing systems handling 700,000 invoices per year.  They reduced or eliminated the differences, reduced the number of data centers from 53 down to 10, and by doing everything on-line dramatically reduced the paper usage.
  • Pret Communications in Mexico wants to be the most competitive vendor in the telecom space through automation and case management.  Their most important lesson is to avoid building too much, because it is all going to change, so use an incremental agile approach.
  • Rio de Janeiro City Hall needed to streamline the granting of permits for businesses and buildings. They saved 1,230,000 sheets of paper, while allowing 45% of the permits to be completed in less than 30 minutes.  72% of the applicants are handled automatically, but the exceptions still get the full review and handling by people who are freed from the drudgery of the simple cases.  Interestingly, 40% of submissions can be automatically rejected, and when this is done within 30 minutes the applicant does not pay any fees.  Even though rejections happen more effectively, overall they increased the number of successful applications by 25%.
  • Solix, which offers program and process management to both the public and private sectors, integrated several separate systems to reduce the time and effort needed to support their processes.
  • Valeo is a French automotive supplier with 106,000 employees and 500 existing BPM applications.  By cutting the time for one step by 80%, to a minute or so, they were able to save the company 3,000 to 4,000 hours per month.
  • Vincula is a Brazilian medical device supplier offering, for example, implants for the knee, hip, back, and jaw.  They used BPM to implement the ability to change from indirect sales to direct sales, cutting out a step and improving their ability to know the customer and respond to needs.
  • WellCare Healthcare Plans is a Florida-based health care service provider.  They implemented an adaptive case management system which reduced cycle time by 20%, reduced rework by 20%, and eliminated 70% of paper use.

I will link the recording of the awards ceremony when it becomes available.

 

by kswenson at December 14, 2017 11:50 AM

December 12, 2017

Keith Swenson: Conversation on Goal Oriented BPM

A few weeks ago Peter Schooff recorded a discussion between us on the topic of cloud and goal oriented BPM. Here is the link:

https://bpm.com/bpm-today/blogs/1241-the-cloud-and-goal-oriented-bpm

The transcript is copied here:

Peter Schooff: How important is the cloud to digital transformation?

Keith Swenson: It’s satisfying after struggling for a decade with trying to get people to move to the cloud. It’s satisfying to see that people are no longer worried about the cloud. It’s perfectly accessible. A lot of our services run in the cloud. We have figured out that when it comes to security breaches, it’s better to have your IT system. You know, you have your data centers run by people who do nothing else. That’s all they do, run data centers. That way, all the proper procedures are taken care of.

So from that aspect, I’m seeing people accepting the cloud a tremendous amount. Now, still when it comes to digital transformation, I think … you can do that with data centers and in-house. You can do it out of house. I don’t think that should be a barrier. I don’t think you’d want to go to a pure cloud-only solution because then you kind of become trapped. Also you wouldn’t want to invest in something that only runs in-house. You’d want to have that flexibility. I think if you’re looking forward, you need to consider agility and the ability to move quickly back and forth between your on-premise and cloud. And make them work together in a true hybrid approach. That’s the safest approach for anybody.

Peter Schooff: That’s great. We’ve touched on a lot of things. What would you say are the one or two key takeaways you think people should remember from this podcast?

Keith Swenson: Okay, there’s one thing I can throw out there. Process is no longer the center of this thing. For many, many years, we’ve been preaching, let’s look at business process. And why we were looking at business process is because we wanted to take the focus off of functional programming. In other words, I’ve got an accounting department, and I handle accounts receivable. So I’m gonna optimize accounts receivable on its own. But accounts receivable is only one part of a longer process, and it’s more important that you look at the whole thing holistically and you identify what your goals are.

So that’s why we moved to a process-oriented view on designing IT systems. But when I say that process is no longer the center of it, what I’m saying is that we still want that goal. We still want the long-term goal, but what’s happening is that we often can’t identify the process before we start. We can identify the goal from the beginning, so we want to be goal-centered, and that’s where case management comes in. You can assign a goal to a case. That’s where you’re gonna go. And then the process becomes auxiliary. It’s off on the side. And when you can say, “Oh okay, fine, to get to the goal, I could use this process,” you’ll bring in that process and use it. And you’ll bring in a bunch of different processes and combine them. But there may be aspects of your case that simply … you haven’t had the process for that, but you still have the goal.

So I mean everything … We’ve unseated process as the center of the whole system. I mentioned earlier that sharing is easy, but controlling the sharing is difficult and challenging, so usability around security, access control. Making it natural, making it like a conversation. When you involve somebody in a conversation, they somehow automatically get the appropriate rights to the artifacts that they need to carry on the conversation. We’re still challenged in trying to find how to make that really work.

Same thing with the constant change we see in our organizations. Say you’ve got a case that’s got 20 to 30 tasks assigned to people. And then a new person joins the organization. Now you have to go back and reassign all the tasks. There needs to be a better way to allow this stuff to just flow.

You mentioned robotic process automation. That’s a very important integration technique. There’s another thing. Everybody, of course, knows about deep learning and analytics. That’s gonna be huge in digital transformation. In other words, we’re going to implement the systems. I said move quickly, deploy, but you also need to watch what you’re doing. And that’s where … they call them real-time analytics. They’re not strictly real-time, but anyway, analytics that are fairly current allows you to see how things are going and keep tabs on it. That’s incredibly important.

And what is really needed is integrative platforms that bring all of these pieces together. Open source or proprietary or whatever it is, having them pre-fit into a platform that’s known to work together, giving you all those capability, that’s gonna be a key aspect of your … a central part of your digital transformation plan.

Get Keith’s new book – When Thinking Matters in the Workplace: How Executives and Leaders of Knowledge Work Teams Can Innovate with Case Management – available at Amazon.

by kswenson at December 12, 2017 08:01 PM

December 08, 2017

Sandy Kemsley: Tune in for the 2017 WfMC Global Awards for Excellence in BPM and Workflow

I had the privilege this year of judging some of the entries for WfMC’s Global Awards for Excellence in BPM and Workflow, and next Tuesday the 12 winners will be announced in a webinar. Tune in to...

[Content summary only, click through for full article and links]

by sandy at December 08, 2017 01:10 PM

December 04, 2017

Sandy Kemsley: Presenting at OPEXWeek in January: customer journey mapping and lowcode

I’ll be on stage for a couple of speaking slots at the OPEX Week Business Transformation Summit 2018 in Orlando the week of January 22nd: Tuesday afternoon, I’ll lead a breakout session in the...

[Content summary only, click through for full article and links]

by sandy at December 04, 2017 06:59 PM

December 01, 2017

Sandy Kemsley: Release webinar: @CamundaBPM 7.8

I listened in on the Camunda 7.8 release webinar this morning – they issue product releases every six months like clockwork – to hear about the new features and upgrades from CEO Jakob Freund and VP...

[Content summary only, click through for full article and links]

by sandy at December 01, 2017 05:50 PM

November 22, 2017

BPinPM.net: Best Practice Talk about Process Digitalization in Hamburg

Dear BPM experts,

we would like to invite you to the next Best Practice Talk in Hamburg. This time, it will be all about Process Digitalization, with exciting presentations by Taxdoo, Hansa Flex, and Otto Group. The talk will take place on Nov 30, 2017.

Please visit the event site on Xing for more information:
https://www.xing.com/events/best-practice-talk-prozessmanagement-hamburg-3-1875930

See you next week!
Mirko

by Mirko Kloppenburg at November 22, 2017 08:59 PM

November 21, 2017

Sandy Kemsley: Fun times with low code and case management

I recently held a webinar on low code and case management, along with Roger King and Nicolas Marzin of TIBCO (TIBCO sponsored the webinar). We tossed aside the usual webinar presentation style and...

[Content summary only, click through for full article and links]

by sandy at November 21, 2017 01:03 PM

November 10, 2017

Drools & JBPM: Building Business Applications with DMN and BPMN

A couple weeks ago our own Matteo Mortari delivered a joint presentation and live demo with Denis Gagné from Trisotech at the BPM.com virtual event.

During the presentation, Matteo live demo'd a BPMN process and a couple DMN decision models created using the Trisotech tooling and exported to Red Hat BPM Suite for seamless execution.

Please note that no glue code was necessary for this demo. The BPMN process and the DMN models are natively executed in the platform, no Java knowledge needed.

Enough talking, hit play to watch the presentation... :)


by Edson Tirelli (noreply@blogger.com) at November 10, 2017 06:20 PM

October 27, 2017

Sandy Kemsley: Machine learning in ABBYY FlexiCapture

Chip VonBurg, senior solutions architect at ABBYY, gave us a look at machine learning in FlexiCapture 12. This is my last session for ABBYY Technology Summit 2017; there’s a roadmap session...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 08:47 PM

Sandy Kemsley: Capture microservices for BPO with iCapt and ABBYY

Claudio Chaves Jr. of iCapt presented a session at ABBYY Technology Summit on how business process outsourcing (BPO) operations are improving efficiencies through service reusability. iCapt is a...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 06:07 PM

Sandy Kemsley: Pairing @UiPath and ABBYY for image capture within RPA

Andrew Rayner of UiPath presented at the ABBYY Technology Summit on robotic process automation powered by ABBYY’s FineReader Engine (FRE). He started with a basic definition of RPA —...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 04:51 PM

Sandy Kemsley: ABBYY partnerships in ECM, BPM, RPA and ERP

It’s the first session of the last morning of the ABBYY Technology Summit 2017, and the crowd is a bit sparse — a lot of people must have had fun at the evening event last night —...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 04:15 PM

Sandy Kemsley: ABBYY mobile real-time recognition

Dimitry Chubanov and Derek Gerber presented at the ABBYY Technology Summit on ABBYY’s mobile real-time recognition (RTR), which allows for recognition directly on a mobile device, rather than...

[Content summary only, click through for full article and links]

by sandy at October 27, 2017 12:11 AM

October 26, 2017

Sandy Kemsley: ABBYY Robotic Information Capture applies machine learning to capture

Back in the SDK track at ABBYY Technology Summit, I attended a session on “robotic information capture” with FlexiCapture Engine 12, with lead product manager Andrew Zyuzin and director...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 10:20 PM

Sandy Kemsley: ABBYY Recognition Server 5.0 update

I’ve switched over to the FlexiCapture technical track at the ABBYY Technology Summit for a preview of the new version of Recognition Server to be released in the first half of 2018. Paula...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 09:27 PM

Sandy Kemsley: ABBYY SDK update and FineReader Engine deep dive

I attended two back-to-back sessions from the SDK track in the first round of breakouts at the 2017 ABBYY Technology Summit. All of the products covered in these sessions are developer tools for...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 08:05 PM

Sandy Kemsley: The collision of capture, content and analytics

Martyn Christian of UNDRSTND Group, who I worked with back in FileNet in 2000-1, gave a keynote at ABBYY Technology Summit 2017 on the evolution and ultimate collision of capture, content and...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 05:39 PM

Sandy Kemsley: ABBYY corporate vision and strategy

We have a pretty full agenda for the next two days of the 2017 ABBYY Technology Summit, and we started off with an address from Ulf Persson, ABBYY’s relatively new worldwide CEO (although he is...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 04:10 PM

Sandy Kemsley: ABBYY analyst briefing

I’m in San Diego for a quick visit to the ABBYY Technology Summit. I’m not speaking this year (I keynoted last year), but wanted to take a look at some of the advances that they’re...

[Content summary only, click through for full article and links]

by sandy at October 26, 2017 12:15 AM

October 25, 2017

Sandy Kemsley: Low code and case management discussion with @TIBCO

I’m speaking on a webinar sponsored by TIBCO on November 9th, along with Roger King (TIBCO’s senior director of product management and strategy, and Austin Powers impressionist extraordinaire) and...

[Content summary only, click through for full article and links]

by sandy at October 25, 2017 04:11 PM

October 24, 2017

Sandy Kemsley: Citizen development with @FlowForma and @JohnRRymer

I attended a webinar today sponsored by FlowForma and featuring John Rymer of Forrester talking about low-code platforms and citizen developers. Rymer made a distinction between three classes of...

[Content summary only, click through for full article and links]

by sandy at October 24, 2017 04:31 PM

October 19, 2017

Sandy Kemsley: Financial decisions in DMN with @JanPurchase

Trisotech and their partner Lux Magi held a webinar today on the role of decision modeling and management in financial services firms. Jan Purchase of Lux Magi, co-author (with James Taylor) of...

[Content summary only, click through for full article and links]

by sandy at October 19, 2017 05:10 PM

October 12, 2017

5 Pillars of a Successful Java Web Application

Last week, Alex Porcelli and I had the opportunity to present two talks related to our work at JavaOne San Francisco 2017: “5 Pillars of a Successful Java Web Application” and “The Hidden Secret of Java Open Source Projects”.

It was great to share our cumulative experience over the years building the workbench and the web tooling for the Drools and jBPM platform and both talks had great attendance (250+ people in the room).


In this series of posts, we’ll detail our "5 Pillars of a Successful Java Web Application”, trying to give you an overview of our research and also a taste of participating in a great event like Java One.
There are a lot of challenges related to building and architecting a web application, especially if you want to keep your codebase updated with modern techniques without throwing away a lot of your code every two years in favor of the latest trendy JS framework.
In our team we are able to successfully keep a 7+ year old Java application up-to-date, combining modern techniques with a legacy codebase of more than 1 million LOC, with an agile, sustainable, and evolutionary web approach.
Rather than just choosing and applying some web framework as the foundation of our web application, we based our web application architecture on 5 architectural pillars that proved crucial for our platform’s success. Let's talk about them:

1st Pillar: Large Scale Applications

The first pillar is that every web application architecture should be concerned about the potential of becoming a long-lived and mission-critical application, or in other words, a large-scale application. Even if your web application is not exactly big like ours (1M+ lines of web code, 150 sub-projects, 7+ years old), you should be concerned about the possibility that your small web app will become a big and important codebase for your business. What if your startup becomes an overnight success? What if your enterprise application needs to integrate with several external systems?
Every web application should be built as a large-scale application because it is part of a distributed system and it is hard to anticipate what will happen to your application and company in two to five years.
And for us, a critical tool for building these kinds of distributed and large-scale applications throughout the years has been static typing.

Static Typing

The debate of static vs. dynamic typing is very controversial. People who advocate in favor of dynamic typing usually argue that it makes the developer's job easier. This is true for certain problems.
However, static typing and a strong type system, among other advantages, simplify identifying errors that can generate failures in production and, especially for large-scale systems, make refactoring more effective.
Every application demands constant refactoring and cleaning. It’s a natural need. For large-scale applications, with codebases spread across multiple modules/projects, this task is even more complex. Confidence when refactoring comes from two factors: test coverage and the tooling that only a static type system is able to provide.
For instance, we need a static type system to find all usages of a method, to extract classes, and, most importantly, to figure out at compile time whether we accidentally broke something.
But we are in web development and JavaScript is the language of the web. How can we have static typing in order to refactor effectively in the browser?

Using a transpiler

A transpiler is a type of compiler that takes the source code of a program written in one programming language as its input and produces equivalent source code in another programming language.
This is a well-known Computer Science problem and there are a lot of transpilers that output JavaScript. In a sense, JavaScript is the assembly of the web: the common ground across all the web ecosystems. We, as engineers, need to figure out what is the best approach to deal with JavaScript’s dynamic nature.
A Java transpiler, for instance, takes the Java code and transpiles it to JavaScript at compile time. So we have all the advantages of a statically-typed language, and its tooling, targeting the browser.

Java-to-JavaScript Transpilation

The transpiler that we use in our architecture is GWT. This choice is a bit controversial, especially because the GWT framework was launched in 2006, when the web was a very different place.
But keep in mind that every piece of technology has its own good parts and bad parts. For sure there are some bad parts in GWT (like the Swing-style widgets and the multiple permutations per browser/language), but for our architecture what we are trying to achieve is static typing on the web, and for this purpose the GWT compiler is amazing.
Our group is part of the GWT steering committee, and the next generation of GWT is all about just these good parts: basically removing or decoupling the early-2000s legacy and keeping only the good parts. In our opinion the best parts of GWT are:
  • Java to JavaScript transpiler: extreme JavaScript performance due to compiler optimizations and static typing on the web;
  • java.* emulation: excellent emulation of the main java libraries, providing runtime behavior/consistency;
  • JS Interop: almost transparent interoperability between Java and JavaScript. This is a key aspect of the next generation of GWT and of the Drools/jBPM platform: embrace and interoperate (two-way) with the JS ecosystem (see the sketch below).
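To make the JS Interop point concrete, here is a minimal sketch of my own (not from the talk) showing an exported Java type and a native binding, using the jsinterop annotations available since GWT 2.8; exporting typically also requires the -generateJsInteropExports compiler flag:

  import jsinterop.annotations.JsPackage;
  import jsinterop.annotations.JsType;

  // Exported to JavaScript: callable from JS as new Greeter().greet("web")
  @JsType
  public class Greeter {
      public String greet(String name) {
          return "Hello, " + name;
      }
  }

  // Native binding: maps the browser's global JSON object into Java
  @JsType(isNative = true, namespace = JsPackage.GLOBAL, name = "JSON")
  class Json {
      public static native String stringify(Object obj);
  }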

Google is currently working on a new transpiler called J2CL (short for Java-to-Closure, using the Google Closure Compiler) that will be the compiler used in GWT 3, the next major GWT release. The J2CL transpiler has a different architecture and scope, allowing it to overcome many of the disadvantages of the previous GWT 2 compiler.

Whereas the GWT 2 compiler must load the entire AST of all sources (including dependencies), J2CL is not a monolithic compiler. Much like javac, it is able to individually compile source files, using class files to resolve external dependencies, leaving greater potential for incremental compilation.
These three good parts are great, and in our opinion you should really consider using GWT as a transpiler in your web applications. But keep in mind that the most important point here is that GWT is just our implementation of the first pillar. You can consider using other transpilers like TypeScript, Dart, Elm, ScalaJS, PureScript, or TeaVM.
The key point is that every web application should be handled as a large-scale application, and every large-scale application should be concerned about effective refactoring. The best way to achieve this is using statically-typed languages.
This is the first of three posts about our 5 pillars of successful web applications. Stay tuned for the next ones.

[I would like to thank Max Barkley and Alexandre Porcelli for kindly reviewing this article before publication, contributing to the final text, and providing great feedback.]


by Eder Ignatowicz (noreply@blogger.com) at October 12, 2017 09:13 PM

October 09, 2017

Sandy Kemsley: International BPM conference 2018 headed down under

The international BPM conference for academics and researchers is headed back to Australia next year, September 9-14 in Sydney, hosted by the University of New South Wales. I’ve attended the...

[Content summary only, click through for full article and links]

by sandy at October 09, 2017 07:08 PM

October 04, 2017

Sandy Kemsley: Citrix Productivity Panel – the future of work

I had a random request from Citrix to come out to a panel event that they were holding in downtown Toronto — not sure what media lists I’m on, but fun to check out to events I wouldn’t normally...

[Content summary only, click through for full article and links]

by sandy at October 04, 2017 10:31 PM

September 26, 2017

Sandy Kemsley: ABBYY Technology Summit 2017

Welcome back after a nice long summer break! Last year, I gave the keynote at ABBYY’s Technology Summit, and I’m headed back to San Diego this year to just do the analyst stuff: attend briefings and...

[Content summary only, click through for full article and links]

by sandy at September 26, 2017 01:12 PM

September 20, 2017

September 11, 2017

Keith Swenson: Why Does Digital Transformation Need Case Management?

A platform for digital transformation brings a number of different capabilities together: processes, agents, integration, analytics, decisions, and — perhaps most important — case management.  Why case management?  What does that really bring to the table and why is it needed?

Discussion

What is the big deal about case management?  People are often underwhelmed.  In many ways, case management is simply a “file folder on steroids.”  Essentially it is just a big folder that you can throw things into.  Traditional case management was centered on exactly that: a case folder, and really that is the only physical manifestation.  It is true that the folder is a collecting point for documents and data of any kind — but there is a little more to it.

I already have shared folders, so why do I need anything more?  The biggest difference between case management and shared folders is how you gain access to the folder.

My shared file system already has access control.  Right, but it is a question of granularity.  If access can be controlled only for the whole folder, every participant has all-or-nothing access, and that is too much.  At the other end of the spectrum, if every file can be assigned to any person, it gets to be too tedious: adding a person to a large case with 50 files can take significant effort, costing more than 10 minutes of work.  People may be joining and leaving the case on a daily basis, and going through all the documents for every person might leave you with a full-time job managing access rights.  A case manager is too busy to do that.  A better approach has to be found that blends the access control together with the other things a case manager is doing.

For example, let’s say that you have a task, and the task is associated with 10 documents in the folder.  Changing the assignment of the task from one person to another should at the same time (and without any additional trouble) change the rights to access the associated documents from one person to another.  It is reasonable to ask a case manager to assign a task to someone.  It is unreasonable to expect the case manager to go and manually adjust the access privileges for each of the 10 documents.  That is not only tedious, it is error prone.  Forget to give access to a critical document, and the worker can’t do the job.  Give access to the wrong document to someone with no need to know, and you might have a confidentiality violation.  This is one example of how case management blends task management and access control together (sketched below).  Another example is role-based access.
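As a minimal sketch of that blending (hypothetical names, not any particular product's API), reassigning a task could transfer the derived document rights in one step:

  import java.util.ArrayList;
  import java.util.List;

  interface AccessControl {
      void grant(String user, String documentId);
      void revoke(String user, String documentId);
  }

  class CaseTask {
      private String assignee;
      private final List<String> documentIds = new ArrayList<>();

      void addDocument(String documentId) {
          documentIds.add(documentId);
      }

      // Reassigning the task also moves the access rights of its documents.
      void reassign(String newAssignee, AccessControl acl) {
          for (String doc : documentIds) {
              if (assignee != null) acl.revoke(assignee, doc);
              acl.grant(newAssignee, doc);
          }
          assignee = newAssignee;
      }
  }

The case manager only ever sees the reassignment; the document-level rights follow automatically.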

My shared file system already has role-based access control.  Many document management systems offer global roles that you can set up: a group for all managers, a group for all writers, a group for all editors.  You can assign privileges to such a group, and simply adding a person to the group gives them access to all the resources of the group.

This is a complete misunderstanding of how cases have to work.  Each case needs its own groups of people to play particular roles just for that case.  For example, a case dedicated to closing a major deal with a customer will have a salesperson, a person to develop and give a demo, maybe a market analyst.  But you can’t use the global groups for salespeople, demo developers, and market analysts.  This case has a particular salesperson, not just anyone in the salesperson pool.  That particular salesperson will have special access to the case that no other salesperson should have.  A global role simply can’t fit the need.

I could make individual roles for every case even in the global system.  Right, but creating and modifying global roles is often restricted to a person with global administration privileges.  The case manager needs the right to create and change the roles for that case, and for no other case.  This right to manage roles needs to come automatically from being assigned to the case manager role for that case.  Case management adds mechanisms above the basic access control to avoid the tedium of having to manage thousands of individual access control settings.
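A per-case role model might look something like the following hypothetical sketch (again, the names are mine, not from any product); the point is that only the case's own manager can change that case's roles:

  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  class Case {
      // Roles are scoped to this case only: role name -> members.
      private final Map<String, Set<String>> roles = new HashMap<>();

      Case(String caseManager) {
          roles.put("case-manager", new HashSet<>(Set.of(caseManager)));
      }

      // Only this case's manager may change this case's roles.
      void assignRole(String actingUser, String role, String member) {
          if (!roles.getOrDefault("case-manager", Set.of()).contains(actingUser)) {
              throw new SecurityException("Only this case's manager can change its roles");
          }
          roles.computeIfAbsent(role, r -> new HashSet<>()).add(member);
      }

      boolean hasRole(String user, String role) {
          return roles.getOrDefault(role, Set.of()).contains(user);
      }
  }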

So that is all it is, powerful access control?  There is more to it.  It must also have the ability to create tasks of any kind and assign them to people at any time.  This means that the case management system needs convenient ways to find all the tasks assigned to a particular person, and to (1) produce a work list of all currently assigned tasks, and (2) send email notifications of either the entire list or the items that are just about to reach a deadline.  These are BPM-ish capabilities, but there is no need for a process diagram.  For routine, pre-defined processes, just use a regular BPM product.  Case management is really more about completely ad-hoc tasks assigned as desired.

So there is no pattern to the processes at all?  Sorry, I didn’t mean to imply that.  There are patterns.  Each case manager develops their own style for getting a case done, and they often reuse those patterns.  The list of common tasks is usually copied from case to case in order to be reused.  At the same time, the patterns are never exactly the same, and they change after the case is started.

Since tasks are assigned manually, there is a strong need for a capability to “see who is available” which takes into account skills, workload, vacation schedule, and other criteria to help locate the right person for the job.

There are also well defined routine processes to be called upon, and you use BPM for that.  The tighter the BPM is integrated with the case management, the easier it will be for case managers to complete the work.

Summary

The above discussion is not an exhaustive list, but it gives a sense of the capabilities that case management brings to the table:

  • It is a dumping ground for all the work that cannot be known in advance:  a kind of safety valve to catch work which does not fall neatly into the pre-defined buckets of process management.
  • It collects any kind of information and documents, and makes them available to people working on the case.
  • It offers powerful access control that is integrated into the logical structure of the case, so that it is easier to use than a simple document-based access control system.
  • It offers tasking so that assignments can be made and tracked to completion.
  • There are often portal features that can reach out to external people to register themselves and to play a role in the case.
  • It has calendars and vacation schedules that give workers an awareness of who is available and who might be best to do a job.
  • Conversation about the case is simplified by connections to discussion topics, commenting capability, chat capability, unified communications, email, social media, etc.

Knowledge workers need these capabilities because their work is inherently unpredictable.  A digital transformation platform brings all the tools together to make solutions that transform the business.  Knowledge workers constitute about 50% of the workforce, and that percentage is growing.  Any solution destined to transform the organization absolutely must have some case management capabilities.

by kswenson at September 11, 2017 05:49 PM

September 07, 2017

Keith Swenson: Business Driven Software

I liked the recent post from Silvie Spreeuwenberg in which she asks “When to combine decisions, case management and artificial intelligence?”

She correctly points out that “pre-defined workflows” are useful only in well defined, scripted situations, and that more and more knowledge workers need to break out of these constraints to get things done.  She points to Adaptive Case Management.

I would position it slightly differently.  The big push today is “Digital Transformation,” and that is exactly what she is talking about:  you are combining aspects of traditional process management with unstructured case management, separating out decision management, and adding artificial intelligence.

I would go further and say that a Digital Transformation Platform (DXP) would need all that plus strong analytics, background processing agents, and robotic process automation.  These become the basic ingredients that are combined for a specific knowledge worker solution.  I think Spreeuwenberg has rightly expressed the essence of an intuitive platform of capabilities to meet the needs of today’s business.

She closes by saying she will be talking at the Institute of Risk Management — once again the domain of knowledge workers: risk management.

by kswenson at September 07, 2017 10:49 PM

September 01, 2017

Keith Swenson: Update on DMN TCK

Last year we started the Decision Model & Notation Technology Compatibility Kit (DMN-TCK) working group.  A lot has happened since the last time I wrote about this, so let me give you an update.

Summary Points

  • We have running code!:  The tests are actual samples of DMN models, and the input/output values force a vendor to actually run them in order to demonstrate compliance.  This was the main goal and we have achieved it!
  • Beautiful results web site:  Vendors who participate are highlighted in an attractive site that lists all the tests they have passed.  It includes detail on all the tests that a vendor skips and why they were skipped.  Thanks mainly to Edson Tirelli at Red Hat.
  • Six vendors included:  The updated results site, published today, has six vendors who are able to run the tests to demonstrate actual running compliance:  Actico, Camunda, Open Rules, Oracle, Red Hat, Trisotech.
  • Broad test set: The current 52 tests provide broad coverage of DMN capabilities, and the count will jump to 101 tests by mid September.  Broad but not deep at this time: now that the framework is set up, it is simply a matter of filling in additional tests.
  • Expanding test set: Participating vendors are expanding the set of tests by drawing upon their existing test suites, converting them into the TCK format, and including them in the published set.  We are ready to enter a period of rapid test expansion.
  • All freely available: It is all open source and available on GitHub.

How We Got Here

It was at the BPMNext conference in April 2016 that DMN emerged onto the stage as an important topic.  I expressed skepticism that any standard could survive without actual running code that demonstrated correct behavior.  Written specifications are simply not detailed enough to describe any software, particularly one that has an expression language as part of the deal.  Someone challenged me to do something about it.

We started meeting weekly in summer of 2016, and have done so for a complete year.  There has been steady participation from Red Hat, Camunda, Open Rules, Trisotech, Bruce Silver and me, and more recently Oracle and Actico.

I insisted that the models be in the standard DMN XML-based format.  The TCK does not define anything about the DMN standard; instead we simply define a way to test that an implementation runs according to the standard.  We did define a simple XML test case structure that has named input values and named output values, using standard XML datatype syntax.  The test cases consist purely of XML files which can be read and manipulated on any platform in any language.

We also developed a runner, a small piece of Java code which will read the test cases, make calls to an implementing engine, and test whether the results match.  Using this runner is not required, because the Java interface to the engine is not part of the standard; however, many vendors have found it a convenient way to get started on their own specific runner.
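
The real runner lives in the TCK repository on GitHub; the sketch below only shows the shape of the idea, with an invented engine interface and made-up file and value names, since the actual Java interface to each engine is vendor-specific:

```java
import java.util.Map;

// Hypothetical engine interface: each vendor supplies their own adapter.
interface DmnEngine {
    Map<String, Object> evaluate(String modelFile, Map<String, Object> inputs);
}

public class TckRunnerSketch {
    // Read a test case (model + named inputs + expected outputs),
    // call the engine, and check whether the results match.
    static boolean runCase(DmnEngine engine, String modelFile,
                           Map<String, Object> inputs, Map<String, Object> expected) {
        Map<String, Object> actual = engine.evaluate(modelFile, inputs);
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        // In the real runner the inputs and expected values are parsed from
        // the XML test-case files; here they are hard-coded for illustration.
        DmnEngine fakeEngine = (model, in) -> Map.of("Decision Result", "yes");
        boolean pass = runCase(fakeEngine, "example-model.dmn",
                               Map.of("Some Input", "value"),
                               Map.of("Decision Result", "yes"));
        System.out.println(pass ? "PASS" : "FAIL");
    }
}
```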

As we worked on the tests, we uncovered dozens, possibly hundreds, of places where the DMN spec was ambiguous or unclear.  One participant would implement a set of tests, and it was — without exception — eye opening when the second participant tried to run them.  This is the way that implementing a new language (FEEL) naturally goes.  The spec simply cannot cover 100% of the edge cases, and the implementation of the tests forced this debate into the open.  Working together with the RTF we were able to come to a common understanding of the correct behavior of the evaluation engine.  Working through these cases was probably the most valuable aspect of the TCK work.

A vendor runs the tests and submits a simple CSV file with all the results back to the TCK.  These are checked into GitHub for all to see, and that is the basis for the data presented on the web site.   We open the repository for new tests and changes in tests for the first half of every month.  The second half of the month is then for vendors that wish to remain current, to run all the new tests, and produce new results.  The updated web site will then be generated on the first of the next month.  Today, September 1, we have all the new results for all the tests that were available before mid August.  This way vendors are assured the time they need to keep their results current.

The current status is that we have a small set of test cases that provide broad but shallow coverage of DMN capabilities.  A vendor who can pass the tests will be demonstrating a fairly complete implementation of all the DMN capabilities, but there are only a couple of tests on each functional area.  The next step will be to drive deeper, and to design tests that verify that each functional area works correctly in a larger number of special situations.  Some of the participating vendors already have such tests available in a non-TCK format.  Our immediate goal, then, is to encourage participating vendors to convert those tests and contribute them to the TCK repository.  (And I like to remind vendors that it is to their advantage to do so, because adding tests that you already pass makes the test suite stronger, and forces other vendors to comply with functionality that you already have.)

What this means to Consumers

You now have a reliable source to validate a vendor claim that they have implemented the DMN standard.  On the web site, you can drill down to each functional category, and even to the individual tests to see what a vendor has implemented.

Some vendors skip certain tests because they think that particular functionality is not important.  You can drill down to those particular tests, and see why the vendor has taken this stance, and determine whether you agree.

Then there are vendors who claim to implement DMN, but are not listed on the site.  Why not?  Open source: All of the files are made freely available at GitHub in standard, readily-accessible formats.   Ask questions.  Why would a DMN implementation avoid demonstrating conformance to the standard when it is freely available?  Are you comfortable making the investment in time to use a particular product, when it can not demonstrate publicly this level of conformance to the spec?

What this means to Vendors

There are certainly a number of vendors who are just learning of this effort now.  It is not too late to join.  The last participant to join had the tests running in under two weeks.  We welcome any and all new participants who want to demonstrate their conformance to the DMN spec.

To join, you simply need to read all the materials that are publicly available on the web site, send a note to the group using GitHub, plan to attend weekly meetings, and submit your results for inclusion in the site.  The effort level could be anywhere from a couple of hours up to a maximum of one day per week.

The result of joining the TCK is that you will know that your implementation runs in exactly the same way as the other implementations.  Your product gains credibility, and customers gain confidence in it.  You will also be making the DMN market stronger as you reduce the risk that consumers have in adopting DMN as a way to model their decisions.

Acknowledgements

I have had the honor of running the meetings, but I have done very little of the real work.  Credit for actually getting things done goes largely to Edson Tirelli from Red Hat, and Bruce Silver, and a huge amount of credit is due to Falko Menge from Camunda, Jacob Feldman from Open Rules, Denis Gagne from Trisotech, Volker Grossmann and Daniel Thanner from Actico, Gary Hallmark from Oracle, Octavian Patrascoiu from Goldman Sachs, Tim Stephenson for a lot of the early work, Mihail Popov from MITRE, and I am sure many other people from the various organizations who have helped actually get it working even though I don’t know them from the meetings.    Thanks everyone, and great work!

by kswenson at September 01, 2017 06:07 PM

August 29, 2017

Keith Swenson: Blogging Platforms

Today I am pretty frustrated by WordPress so I am going to vent a bit.  10 years ago I picked it as the platform to start my first blog on, and here you have it: I am still here.  Yet I have seen so many problems in recent days that I will be looking for an alternative platform.

What Happened?

I spent a lot of time trying to set up a blog for a friend who has a book coming out and needed a place to talk about it. I said “blogs are easy” but that was a mistake.  Three days later and the blog is still not presentable.

Strange User Restrictions – Using my login, I created a blog for her using her full name as the name of the blog (e.g. jsmith).  Then, I wanted to sign her up as a WordPress user with “jsmith” as her username.  You can’t do that.  Since there was a blog with that name, you are not allowed to register a user with that name.  The point is that the blog is her blog.  Her own blog is preventing her from having her username.  How silly is that?

Given that I created the blog, there is no way to then set the password on the user for that name, and since there is no email associated, there is no way to reset the password.

You can’t just register a user.  If you want to register a user, you have to create another blog!  It walks you through creation of a blog before you can specify a password for the user account.  We already had the blog created; I just needed a way for her to log in.  The only way we found to do that was to create yet another blog until finally, with a username she didn’t want, she could set a password on that account.  Blogs and users are different things … it really does not have to be so hard.

You Can’t Move/Copy a Site – One of the impressive claims is that WordPress lets you always move your site.  I have never tried until now, and can say it does not work.  I had previously set the blog up on a different blog address, so I wanted to move it.  Simply export and then import, right?  No.  You download a ZIP file, but it only has one file in it, an XML file.  There are none of the graphics, none of the media, and none of the settings.  Since it downloaded a zip file, at the import prompt I tried to upload the ZIP file.  This produces an arcane error message saying that a particular file is missing.  Strange.  I downloaded the zip file a few times.  Always the same result.  There are two different export commands, and they produce different output!

Finally I try to upload the XML file alone.  I know this has no chance of moving the pictures and media, but since there were none in the ZIP file anyway, I tried.  This avoided the error, and acted like it was working.  Eventually, I got a mess.  It just added the pages to the pages that were there.  Some of the old pages had special roles, like home and blog, so I can’t delete them in order to make way for the imported home and blog pages.  I have the same theme, but NOTHING looks the same.  None of the featured images were there.  No media files at all.  The sidebar (footer) text blocks were different.  I was horrified.  All this time I thought you could move a blog and not lose things.  This was eye opening.

Incomprehensible Manage Mode – I have been trying for months to find out how to get from the “new” admin mode back to the blog itself.  That is, you edit a page, and you want to see how the page looks.  It gives you a “preview” mode which causes a semblance of the page to appear on top of the admin mode, but that is not the same thing, and the links do not work the same way.  After hours of looking, I still can not find any way to get “out” of admin mode.  You can “preview” the page, and then “launch” the page full screen.  That seems to do it, but it is a small pain.  Until now I have just edited the URL to get back to my blog URL.  In fact, I have taken to bookmarking the blog I am editing, and using the bookmark every few minutes to get out of admin mode.  It is ridiculous.

Visual Editor Damages Scripts – One of my blogs is about programming, so I have some programming samples.  If you accidentally open one of those in the “visual” editor, it strips out all the indentation and does other things to it.  The problem is that you have no control of the editor until AFTER you click to edit.  It is a kind of Russian roulette.  If you click edit and the visual editor appears, and then you switch to the HTML editor, your post is already damaged.  What I have to do is click edit and see what mode it is in.  If visual, I switch to HTML.  Then I use the bookmark mentioned above to return to the blog, abandoning the edits.  Now I hit edit again and it comes back in the right HTML mode.  This is a real pain, since for some of my posts I would like to use the visual editor, and for others, because of the corruption, I must use the HTML editor.  I worry forever that I will get the visual editor on a post that has source code further down the page, and accidentally save it that way.

Backslashes disappear – Besides ruining the indentation, at times it will strip out all the backslashes.  I got a comment today on a post from a couple of years ago that the code was wrong: missing backslashes.  Sure enough.  I have struggled with that post, but I am sure that when I left it the last time, all the backslashes were in place.

Old vs. New Admin Mode – Right now I am using the old admin mode to write this — thank goodness — though I don’t know how to guarantee getting it.  The new admin mode is missing some features.  A few months ago I spent about an hour trying to find the setting to turn off some option that had somehow gotten turned on.  I finally contacted support, and they told me to find the “old” admin UI and the setting could be manipulated there.

Can’t change blogs without manually typing the address in – This is the strangest thing.  If I am on one blog, I can go to the menu that switches blogs, and choose another of my blogs, but there is no way to get back “out” of admin mode.  I end up editing the address line.  How hard would it be to give a simple list of my blogs and allow me to navigate there?  The new admin UI is a nightmare.  It didn’t use to be that bad!

Login / Logout moves your location – If you are on a page which you would like to edit, but you are not logged in, I would expect to be able to log in and then click edit on the page.  No chance with WordPress.  When you are done logging in, you are in some completely different place!  You can’t use the browser back button to get back to where you were (this is reasonable, but I am trying to find a way around the predicament).  I then usually have to go search for the post.

Edit does not return you to the page – If you are on a page and click edit, when you are done editing you are not put back on the page you started on.  It looks like your page, but there is an extra bar at the top, and links don’t work.

Managing Comments is Inscrutable – When reviewing and approving comments, I want a link that takes me to the page in question, so I can see the page and the comment.  I think there is a link that does this, but it is hard to find.  The main link takes you to the editor for that page.  Not what I want, and as mentioned above it is impossible to get from the editor to the page.  I often end up searching for the blog page using the search function.  Other links take you to the poster’s web site, which is not always what I want either.

Vapid Announcements – When I make a hyperlink from one blog post to another of my own blog posts, why does it send me an email announcing that I have a new comment on those posts?  I know it makes a back-link, but for hyperlinked posts within a single blog it seems the email announcement is not useful in any way.

Sloppy Tech – I looked at the XML file produced for the site, and they use CDATA sections to hold your blog posts.  Any use of CDATA is a hack, because it does not encode all possible character sequences, while regular XML encoding works perfectly.  I realize I am getting to the bottom of the barrel of complaints, but I want to be complete here.

What I want?

  • Keep it simple.
  • Let me navigate through my site like normal, but put a single edit button on each page that is easy to find and not in different places for different themes.
  • Then, when done editing, put me BACK on that page.
  • When I log in, leave me on the same page that I started the login from.
  • When I switch blogs, take me to the actual blog and not the admin for that blog.
  • Give me a simple way to exit the admin mode back to the actual blog.
  • And make a single admin mode that has all the functionality.
  • Don’t corrupt my pages by taking backslashes and indentation out.  Protect my content as if it was valuable.
  • Provide a complete export that includes all the media and theme settings as well.
  • Provide an import that reads the export and sets up the blog to be EXACTLY like the original that you exported.

Is that too much to ask for?

As yet, I don’t know of any better blogging platform.  But I am going to start considering  other options in earnest.

Postscript

PS. As a result of writing this post, I was forced to figure out how to reliably get to the “old” admin interface, which remains workable in a very predictable manner.  Maybe if I try hard, I can avoid using the “new” admin interface completely, and avoid all those quirky usability problems.

PPS. Now a new “View Site” button appears in the “new” admin mode to get back to the site, but this has the strange side effect of logging you out.  That is, you can see the page, but you are no longer logged in.  Strange.

by kswenson at August 29, 2017 06:49 AM

August 09, 2017

Drools & JBPM: Talking about Rule Engines at Software Engineering Radio

I had the pleasure of talking to Robert Blumen, at Software Engineering Radio, about Drools and Rule Engines in general.

http://www.se-radio.net/2017/08/se-radio-episode-299-edson-tirelli-on-rules-engines/

If you don't know this podcast, I highly recommend their previous episodes as well. Very informative, technically oriented podcast.

Hope you enjoy,
Edson

by Edson Tirelli (noreply@blogger.com) at August 09, 2017 01:08 AM

August 07, 2017

Keith Swenson: Still think you need BPEL?

Fourteen years ago, IBM and Microsoft announced plans to introduce a new language called Business Process Execution Language (BPEL) to much fanfare and controversy.  This post takes a retrospective look at BPEL, how things have progressed, and ponders the point of it all.

Origins

In 2002, BPM was a new term, and Web Services was a new concept.  The term BPM meant a lot of different things in that day, just as it still does today, but of the seven different kinds of BPM, the one that is relevant in this context is Process Driven Server Integration (PDSI).  Nobody actually had many real web services at that time, but it was clear that unifying such services with a standard protocol passing XML back and forth was a path to the future.  Having a way to integrate those web services was needed.  Both Microsoft and IBM had offerings in the integration space (BizTalk and FlowMark respectively).  Instead of battling against each other, they decided to join forces and propose an open standard language for such integration processes.

In April 2003 a proposal was made to OASIS to form a working group to define a language called BPEL4WS (BPEL for Web Services).  I attended the inaugural meeting for that group with about 40 other high tech professionals.  It was a rather noisy meeting with people jockeying for position to control what was perceived to be the new lingua franca for business processes.  The conference calls were crazy, and we must credit the leaders with a lot of patience to stick with it and work through all the details.  The name was changed to WS-BPEL, and after a couple of years a spec was openly published as promised.

Hype

BPEL was originally proposed as an interchange format.  That is, one should be able to take a process defined in one product, and move it to another product, and still be executable.  It was to be the universal language for Process Driven Server Integration.

Both Microsoft and IBM were on board, as well as a whole host of wannabes.  A group called the Business Process Management Initiative dumped their similar programming language called BPML in favor of BPEL, a clear case of “if you can’t beat ’em, join ’em.”

It was designed from the beginning to be a “Turing-Complete Programming Language” which is a great goal for a programming language, but what does that have to do with business?  The problem with the hype is that it confused the subject of “server integration” with human business processes.  While management was concerned with how to make their businesses run better, they were being sold a programming language for server integration.

The hype existed after the spec was announced, but before it was finally published.  This happens with most proposed specs: claims that the proposal can do everything are hard to refute until the spec is finally published.  Only then can claims be accurately refuted.  For more than 4 years BPEL existed in this intermediate state where inflated expectations could thrive.

Who Needs It?

At the time, I could not see any need for a new programming language.  Analysts at Gartner and Forrester were strongly recommending companies go with products that included BPEL.  I confronted them, asking “Why is this programming language important?” And the candid answer was “We don’t know, we just know that a lot of major players are backing it, and that means it is going to be a winner.”  It was a case of widespread delusion.

My position at the time was clear: as a programming language it is fine, but it has nothing to do with business processes.  It was Derek Miers who introduced me to the phrase “BPEL does not have any B in it.”   The language had a concept of a “participant”, but a participant was defined to be a web service, something with a WSDL interface.

In 2007 I wrote an article called “BPEL: Who Needs It Anyway?” and it is still one of the most accessed articles on BPM.COM.  In that article I point out that translating a BPMN diagram into BPEL places a limitation on the kinds of diagrams that can be executed.  I point out that directly interpreting the BPMN diagram, something that has become more popular in the meantime, does not have this limitation.

If what we need is a language for PDSI, then why not use Java or C#?  Both of those languages have proven portability, as well as millions of supporters.  When I asked those working on BPEL why they didn’t just make an extension to an existing language, the response was the incredible: “We need a language based on XML.”  Like you need a hole in the head.

Attempted Rescue

The process wonks knew that BPEL was inappropriate for human processes, but still wanting to join the party, they proposed the cleverly named “BPEL 4 People” together with “WS-HumanTask.”  The idea is that since people are not web services, and since BPEL can only interact with web services, we can define a standardized web service that represents a real person, and push tasks to it.  It is not a bad idea, and it incorporates some of the task delegation ideas from WF-XML, but it fails to meet the needs of a real human process system because it assumes that people are passive receptors of business tasks.

When a task is sent to a web service for handling, there is no way to “change your mind” and reallocate that task to someone else.  BPEL, which is a programming language for PDSI, unsurprisingly does not include the idea of “changing your mind” about whom to send the task to.  Generally, when programming servers, a task sent to a server is completed, period.  There is no need to send “reminders” to a server.  There are many aspects of a human process which are simply not, and never should be, a part of BPEL.  Patching it up by representing people as standardized web services does not address the fundamental problem that people do not at any level interact in the same way that servers do.

Decline of BPEL

Over time the BPM community has learned this lesson.  The first version of the BPMN specification made the explicit assumption that you would want to translate to BPEL.  The latest version of BPMN throws that idea out completely, and proposes a new serialization format instead of BPEL.

Microsoft pulled away from it as a core part of their engine, first proposing that BPEL would be an interchange format that they would translate to their internal format.  Oracle acquired Collaxa, an excellent implementation of BPEL, and they even produced extensions of BPEL that allowed for round-trip processing of BPMN diagrams using BPEL as the file format.  But Oracle now appears to be pulling away from the BPEL approach in favor of a higher-level direct interpretation of a BPMN-like diagram.

Later it became doubtful that processes expressed in BPEL are interchangeable at any level.  Of course, a simple process that sticks to the spec and only calls web services will work everywhere, but it seems that to accomplish something useful every vendor adds extensions — calls to server specific capabilities.  Those extensions are valid, and useful, but they limit the ability to exchange processes between vendors.

Where Do We Go From Here?

To be clear, BPEL did not fail as a server programming language.  An engine that is internally based on BPEL for Process Driven Server Integration should be able to continue to do that task well.  To the credit of those who designed it for this purpose, they did an exemplary job.  As far as I know, BPEL engines run very reliably.

BPEL only failed as

  • a universal representation of a process for exchange between engines.
  • a representation of a business process that people are involved in.

BPMN is more commonly used as a representation of people oriented processes for direct interpretation.  Yet portability of BPMN diagrams is still sketchy — and this has nothing to do with the serialization format, it has to do with the semantics being designed by a committee.  But that is a whole other discussion.

The business process holy grail still eludes the industry as we discover that organizations consist of interaction patterns that are much more complex than we previously realized.  No simple solution will ever be found for this inherently complex problem, but the search for some means to keep it under control goes on.  What I hope we learned from this is to be cautious about overblown claims based on simplified assumptions, and to take a more studied and careful approach to standards in the future.

by kswenson at August 07, 2017 10:25 AM

August 04, 2017

Keith Swenson: A Strange FEELing about Dates

The new expression language for the Decision Model and Notation standard is called the Friendly Enough Expression Language (FEEL).  Overall it is a credible offering, and one that is much needed in decision modeling, where no specific grammar has emerged as the standard.  But I found the handling of date and time values a bit odd.  I want to start a public discussion on this, so I felt the best place to start is a blog post here, which can serve as a focal point for discussion references.

The Issue

A lot of decisions will center on date and time values.  Decisions about fees will depend on deadlines.  Those deadlines will be determined by the date and time of other actions.  You need to be able to do things like calculate whether the current transaction is before or after a date-time that was calculated from other date-time values.

FEEL includes a data type for date, for time (of day) and for date-time.  It offers certain math functions that can be performed between these types and other numbers.  It offers ways to compare the values.

Strange case 1: Would you be surprised that in FEEL you can define three date-time values, x1, x2, and x3 such that when you compare them all of the following are true?:

x1 > x2
x2 > x3
x3 > x1.

All of those expressions are true.  They are not the same date-time; they are all different points in time (a few hours apart in real time), but the “greater than” operator is defined in a way that means dates cannot actually be sorted into a single order.

Strange Case 2: Would you be surprised that in FEEL you can define two date-time values, y1, and y2, such that all of the following are false?:

y1 > y2
y1 = y2
y1 < y2

That is right, y1 is neither greater than, equal to, nor less than y2.

What is Happening?

In short, the strangeness in handling these values comes from the way that time zones and GMT offsets are used.  Sometimes these offsets and time zones are significant, and sometimes not.  Sometimes the timezone is fixed to UTC.  Sometimes unspecified timezones come from the server locale, and other times from the value being compared to.

Date-time inequalities (greater-than and less-than) are done in a different way than equals comparisons.  When comparing greater or less than, the epoch value is used — that is, the actual number of seconds from Jan 1, 1970 to that instant in time, with the timezone considered in that calculation.  But when comparing two date-time values for equality, they are not equal unless they come from the exact same timezone.
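
The same split between “same instant” and “same instant with the same offset” exists in Java’s own java.time API, which makes it easy to see why this surprises people.  This is Java behavior shown as an analogy, not FEEL code:

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;

public class SameInstantDemo {
    public static void main(String[] args) {
        // The same point in time, written with two different UTC offsets.
        OffsetDateTime newYork    = OffsetDateTime.of(2017, 8, 14, 15, 0, 0, 0, ZoneOffset.ofHours(-4));
        OffsetDateTime california = OffsetDateTime.of(2017, 8, 14, 12, 0, 0, 0, ZoneOffset.ofHours(-7));

        System.out.println(newYork.toInstant().equals(california.toInstant())); // true: same instant
        System.out.println(newYork.isEqual(california));                        // true: compares the instant
        System.out.println(newYork.equals(california));                         // false: offsets differ
    }
}
```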

It gets stranger with date-time values that omit the timezone.  If one of the date-time values is defined without a timezone, then the two values are compared as if they were in the same timezone.  This kind of date-time has a value that changes depending upon the timezone of the data value being compared to!

Date values, however, must be the date at midnight UTC.  A timestamp taken in the evening in California on Aug 13 will be greater than a date value of Aug 14!  The spec is actually ambiguous.  At one point it says that the date value must be UTC midnight.  UTC midnight of Aug 14 is still Aug 13 in California.  At other points it says that the time value is ignored and the numeric day value (13) would be used.  The two different interpretations yield different days for the date-time to date conversion.

It gets even worse when you consider time zones at the opposite ends of the timezone spectrum.  When I call team members in Japan, we always have to remember to specify the date at each end of the call … because even though we are meeting at one instant in time, it is always a different day there.  This affects your ability to convert times to dates and back.

Time of day values oddly can have a time zone indicator.  This may not strike you as odd immediately, but it should.  Time zones vary their offset from GMT at different times of the year.  California is either 8 or 7 hours from GMT, depending on whether you are in the summer or the winter.  But the time-of-day value does not specify whether it is in summer or winter.  Subtracting two time-of-day values can give results varying by 0, 1 or 2 hours depending on the time of year that the subtraction is done, and it is not even clear how to determine the time of year to use.  The server’s current date?  Your model will give different results at different times of the year.  Also, you can combine a date and a time-of-day to get a date-time, but it is not clear what happens when the time-of-day has a timezone.  For example, if I combine an Aug 14 date with a time-of-day of 8pm in California, do I get Aug 13, or Aug 14 in California?  Time-of-day has to be positive (according to the spec), but this appears to add 24 hours in certain cases where the timezone offset is negative.
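
The root of the problem is that a named time zone does not have a single offset; the offset depends on the date.  Java’s zone rules make that easy to demonstrate (again Java as an analogy, not FEEL):

```java
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class OffsetBySeason {
    public static void main(String[] args) {
        ZoneId la = ZoneId.of("America/Los_Angeles");

        // The offset for the same wall-clock time differs between winter and summer,
        // so a bare time-of-day with a zone cannot be resolved without a date.
        ZoneOffset january = la.getRules().getOffset(LocalDate.of(2017, 1, 15).atStartOfDay());
        ZoneOffset july    = la.getRules().getOffset(LocalDate.of(2017, 7, 15).atStartOfDay());

        System.out.println(january); // -08:00
        System.out.println(july);    // -07:00
    }
}
```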

If that is not enough, it is not clear that a DMN model will be interpreted the same way in different time zones.  Remember that phone call to Japan?  The same DMN model running in Japan will see a different date than the same model running in California.  If your business rule says that something has to happen by April 15, a given timestamp in Japan might be too late, while the exact same time in California still has hours to go.

I write systems that collect data all over the world.  We correlate and process events from a server running in India and compare them to ones in Finland and in Washington DC.  I am left scratching my head to figure out how I am going to write rules that work the same way on data from different locations, and that run exactly the same way on servers in different time zones.  It is critical that these decision models be clear, unambiguous, and run the same way in every location.

Solution is Simple

Given all the systems that support date and time, it is surprising that FEEL does not just borrow from something that has been shown to work.  I take my position from Java, which has solved the problem nicely.  The date-time value is well defined as the epoch value (the number of milliseconds since Jan 1, 1970).  Then Java offers a Calendar object for all the rest of the calculations and conversions, one that takes into account all the vagaries of specific timezone offsets, including daylight time switching.  The Calendar offers operations like converting a string representation to a date, and converting a date back to a string.  This is already well tested and proven, so just use it.

First: In the DMN spec, date-time values should simply be compared by using the epoch value — the number of seconds since Jan 1, 1970 UTC.  This value is already what is used for greater-than and less-than comparisons.  The spec should be changed to do the same for equals comparisons.  This would make the date-time value for 3pm in New York equal to 12 noon in California on that same day, which seems clearly to be what you want.  The current spec says these are NOT the same time.  This change would give a clear order for sorting all date-time values.

Second: The DMN spec should then define a default timezone for each model.  Any date or time value without a timezone indicator is interpreted to be in the default time zone of the model.  Date-time calculations (such as adding 3 days, or converting from date-time to date or time) use a calendar for that time zone locale.  A date value would then be the 24-hour period for that date in that default calendar.  A time of day would be for the default timezone, and would probably handle daylight time changes correctly.
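
Here is a sketch of how the two proposals would look using Java’s java.time classes (substituting ZonedDateTime and ZoneId for the older Calendar, purely for illustration):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class ModelTimezoneSketch {
    // First proposal: compare date-times by their instant (epoch value) only.
    static int compareDateTimes(ZonedDateTime a, ZonedDateTime b) {
        return a.toInstant().compareTo(b.toInstant());
    }

    // Second proposal: every model carries a default zone; values without a
    // timezone, and conversions such as date-time to date, use that zone.
    static LocalDate toDate(Instant dateTime, ZoneId modelDefaultZone) {
        return dateTime.atZone(modelDefaultZone).toLocalDate();
    }

    public static void main(String[] args) {
        ZoneId modelZone = ZoneId.of("America/Los_Angeles"); // defined once per model
        ZonedDateTime tokyo = ZonedDateTime.parse("2017-08-14T09:00:00+09:00[Asia/Tokyo]");
        ZonedDateTime la    = ZonedDateTime.parse("2017-08-13T17:00:00-07:00[America/Los_Angeles]");

        System.out.println(compareDateTimes(tokyo, la));           // 0: same instant, so equal
        System.out.println(toDate(tokyo.toInstant(), modelZone));  // 2017-08-13, no matter where it runs
    }
}
```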

This solves most of the strangeness.  Since the model defines the timezone for the model, it always executes exactly the same way, no matter where the model is being interpreted.  You are never dependent on the “local timezone” of the server.  And, since identical points in time always compare as equal, even if those points in time came from different locations, the rules around handling time are clear, unambiguous, and “friendly enough”.

Final Note

I don’t actually know the rationale for the unusual aspects of the specification.  Maybe there is some special reason for the arcane approach.  If so, one might need to invent a couple of new date functions to handle it, alongside the scheme above.  I would hazard a bet that those functions would be identical to ones already on the Java Calendar object.  We really don’t need to be inventing a new and incompatible way of dealing with date values.  But I will wait for feedback and see.

by kswenson at August 04, 2017 12:50 AM

August 01, 2017

Drools & JBPM: Drools, jBPM and Optaplanner Day: September 26 / 28, 2017 (NY / Washington)

Red Hat is organizing a Drools, jBPM and Optaplanner Day in New York and Washington DC later this year to show how business experts and citizen developers can use business processes, decisions and other models to develop modern business applications.
This free full day event will focus on some key aspects and several of the community experts will be there to showcase some of the more recent enhancements, for example:
  • Using the DMN standard (Decision Model and Notation) to define and execute decisions
  • Moving from traditional business processes to more flexible and dynamic case management
  • The rise of cloud for modeling, execution and monitoring
The intended audience is IT executives, architects, software developers, and business analysts who want to learn about the latest open source, low-code application development technologies.

Detailed agenda and list of speakers can be found on each of the event pages.

Places are limited, so make sure to register ASAP!

by Edson Tirelli (noreply@blogger.com) at August 01, 2017 11:00 PM

July 13, 2017

Sandy Kemsley: Insurance case management: SoluSoft and OpenText

It’s the last session of the last morning at OpenText Enterprise World 2017 — so might be my last post from here if I skip out on the one session that I have bookmarked for late this...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 04:16 PM

Sandy Kemsley: Getting started with OpenText case management

I had a demo from Simon English at the OpenText Enterprise World expo earlier this week, and now he and Kelli Smith are giving a session on their dynamic case management offering. English started by...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 02:57 PM

Sandy Kemsley: OpenText Process Suite becomes AppWorks Low Code

“What was formerly known as Process Suite is AppWorks Low Code, since it has always been an application development environment and we don’t want the focus to be on a single technology...

[Content summary only, click through for full article and links]

by sandy at July 13, 2017 01:57 PM

July 12, 2017

Sandy Kemsley: OpenText Process Suite Roadmap

Usually I live-blog sessions at conferences, publishing my notes at the end of each, but here at OpenText Enterprise World 2017, I realized that I haven’t taken a look at OpenText Process Suite...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 10:03 PM

Sandy Kemsley: OpenText Enterprise World 2017 day 2 keynote with @muhismajzoub

We had a brief analyst Q&A yesterday at OpenText Enterprise World 2017 with Mark Barrenechea (CEO/CTO), Muhi Majzoub (EVP of engineering) and Adam Howatson (CMO), and today we heard more from...

[Content summary only, click through for full article and links]

by sandy at July 12, 2017 02:47 PM

July 11, 2017

Sandy Kemsley: OpenText Enterprise World keynote with @markbarrenechea

I’m at OpenText Enterprise World 2017  in Toronto; there is very little motivating me to attend the endless stream of conferences in Vegas, but this one is in my backyard. There have been a...

[Content summary only, click through for full article and links]

by sandy at July 11, 2017 05:44 PM

July 03, 2017

Drools & JBPM: Drools, jBPM and Optaplanner are switching to agile delivery!

Today we would like to give everyone in the community a heads up at some upcoming changes that we believe will be extremely beneficial to the community as a whole.

The release of Drools, jBPM and Optaplanner version 7.0 a few weeks ago brought more than just a new major release of these projects.

About a year ago, the core team and Red Hat started investing in improving a number of processes related to the development of the projects.  One of the goals was to move from an upfront-planning, waterfall-like development process to a more iterative, agile development.

The desire to deliver features earlier and more often to the community, as well as to better adapt to devops-managed cloud environments, required changes from the ground up: from how the team manages branches to how it automates builds and how it delivers releases.  This is a challenge for any development team, but even more so for a team that is essentially remote, with developers spread all over the world.

Historically, Drools, jBPM and Optaplanner aimed for a cadence of 2 releases per year. Some versions with a larger scope took a bit longer, some were a bit faster, but on average that was the norm.

With version 7.0 we started a new phase in the project. We are now working with 2-week sprints, and with an overall goal of releasing one minor version every 2 sprints. That is correct, one minor version per month on average.

We are currently in a transition phase, but we intend to release version 7.1 at the end of the next sprint (~6 weeks after 7.0), and then we are aiming to release a new version every ~4 weeks after that.

Reducing the release timeframe brings a number of advantages, including:
  • More frequent releases give the community earlier access to new features, allowing users to try them and provide valuable feedback to the core team.
  • Reducing the scope of each release allows us to do more predictable releases and to improve our testing coverage, maintaining a more stable release stream.
  • Bug fixes as usual are included in each release, allowing users more frequent access to them as well. 
It is important to note that we will continue to maintain backward compatibility between minor releases (as much as possible - this is even more important in the context of managed cloud deployments, where seamless upgrades are the norm), and the scope of features is expected to remain similar to what it was before.  That has two implications:
  • If before we would release version 7.1 around ~6 months after 7.0, we will now release roughly 6 new versions in those 6 months (7.1, 7.2, ..., 7.6), but the amount of features will be roughly equivalent.  I.e., the old version 7.1 is roughly equivalent in terms of features to the scope of the new versions 7.1, ..., 7.6 combined.  It just splits the scope into smaller chunks and delivers earlier and more often.
  • Users that prefer not to update so often will not lose anything.  For instance, a user that updated every 6 months can continue to do so, but instead of jumping from one minor version to the next, they will jump 5-6 minor versions.  This is not a problem, again, because the scope is roughly the same as before and the backward compatibility between versions is the same.
This is of course work in progress and we will continue to evolve and adapt the process to better fit the community's and user's needs. We strongly believe, though, that this is a huge step forward and a milestone on the project maturity level.

by Edson Tirelli (noreply@blogger.com) at July 03, 2017 10:48 PM

June 22, 2017

Keith Swenson: Complex Project Delay

I run large complex software projects.  A naive understanding of complex project management can be more dangerous than not knowing anything about it.  This is a recent experience.

Setting

A large important customer wanted a new capability.  Actually, they thought they already had the capability, but discovered that the existing capability didn’t quite do what they needed.  They were willing to wait for development, however they felt they really deserved the feature, and we agreed.  “Can we have it by spring of next year?”    “That seems reasonable” I said.

At that time we had about 16 months.  We were finishing up a release cycle, so nothing was urgent; I planned on a 12 month cycle starting in a few months.  I will call the start of the project “Month 12” and count down to the deadline.

We have a customer account executive (let’s call him AE) who has asked to be the single point of contact to this large, important customer.  This makes sense, because you don’t want the company making a lot of commitments on the side without at least one person keeping a list of them all and making sure they are followed through on.

Shorten lines of communication if you can.  Longer lines of communication make it harder to have reliable communication, so more effort is needed.

Palpable Danger

The danger in any such project is that you have a fixed time period, but the precise requirements are not specified.  Remember that the devil is in the details.  Often we throw about terms like “Easy to Use,” “Friendly,” and “Powerful,” and those can mean anything in detail.  Even terms that seem very specific, like “Conformance to spec XYZ,” can include considerable interpretation by the reader.  All written specifications are ambiguous.

The danger is that you will get deep into the project, and it will come to light that the customer expects functionality X.  If X is known at the beginning, the design can incorporate it from the start, and it might be relatively inexpensive.  But retrofitting X into a project when it is half completed can multiply that cost by ten times.  The goal then is to get all the required capabilities to a suitable level of detail before you start work.

A software project is a lot like piloting a big oil tanker.  You get a number of people going in different, but coordinated, directions.  As the software starts to take form, all the boundaries between the parts that the people are working on gradually firm up and become difficult to change.  As the body of code becomes large, the cost of making small changes increases.  In my experience, at about the halfway point, the entire oil tanker is steaming along in a certain direction, and it becomes virtually impossible to change course without drastic consequences.

With a clear agreement up front, you avoid last minute changes.   The worst thing that can happen is that late in the project, the customer says “But I really expected this to run on Linux.”   (Or something similar).   Late discoveries like this can be the death knell.   If this occurs, there are only two possibilities: ship without X and disappoint the customer, or change course to add X, and miss the deadline.  Either choice is bad.

Danger lies in the unknown.  If it is possible to shed light and bring in shared understanding, the risk decreases.

Beginning to Build a Plan

In month 12, I put together a high level requirements document.  This is simply to create an unambiguous “wish list” that encompasses the entire customer expectation.  It does NOT include technical details on how they will be met.  That can be a lot of work.  Instead, we just want the “wishes” at this time.

If we have agreement on that, we can then flesh out the technical details in a specification for the development.  This is a considerable amount of work, and it is important that this work be focused on the customer wishes.

I figured on a basic timetable like this:

  • Step 1: one month to agree on requirements  (Month 12)
  • Step 2: one month to develop and agree on specification (Month 11)
  • Step 3: one month to make a plan and agree on schedule (Month 10)
  • Step 4: about 4 months of core development (Months 9-6)
  • Step 5: about 4 months of QA/finishing (Months 5-2)
  • leaving one month spare just in case we need it. (Month 1)

Of course, if the customer comes back with extensive requirements, we might have to rethink the whole schedule.  Maybe this is a 2 year project.  We won’t know until we get agreement on the requirements.

Then AE comes to a meeting and announces that the customer is fully expecting to get the delivery of this new capability in Month 3!  Change of schedule!  This cuts us down to having only 9 months to deliver.  But more importantly, we have no agreement yet on what is to be delivered.  This is the classic failure mode: agreeing to a hard schedule before the details of what is to be delivered are worked out.  This point should be obvious to all.

The requirements document is 5 pages, and one of those pages is the title page.  It should be an afternoon’s worth of work to read it, gather people, and get this basic agreement.

Month 12 comes to an end.  Finally, toward the middle of Month 11, the customer comes back with an extensive response.  Most of what they are asking for in “requirements” are things that the product already does, so no real problem.  There are a few things that we cannot complete on this schedule, so we need to push back.  But I am worried; we are six weeks into a task that should have been completed a month earlier.

Deadlines are missed one day at a time.

We’ve Got Plenty of Time

At the end of Month 11, I revised the requirements and gave them to AE.  AE’s response was not to give them to the customer.  He said “let’s work on and understand this first, before we give it to the customer.”  This drags on for another couple of weeks, so we are now 8 weeks into the project, and we have not completed the first step originally planned for one month.

I press AE on this.  We are slipping day by day, week by week.  This is how project deadlines are missed.  What was originally planned as 1 month out of twelve is now close to 2 months, out of a remaining 9.  We are getting squeezed!

AE says:  “What is the concern?  We have 8 more months to go!  What do a few weeks matter out of 8 months?”

The essence of naive thinking that causes projects to fail is the idea that there is plenty of time and we can waste some.

Everything Depends

Lets count backwards on the dependency:

  • We want to deliver a good product that pleases the customer
  • This depends on using our resources wisely and getting everything done
  • This depends on not having any surprises late in the project about customer desires which waste developer time
  • This depends on having a design that meets all the expectation of the customer
  • This depends on having a clear understanding of what the customer wants before the shape of the project starts to ossify.
  • This depends on having clear agreement on the customer desires before all of the above happens.

Each of these cascades, and a poor job in any step causes repercussions that get amplified as last minute changes echo through the development.

I also want to say that this particular customer is not flakey.  They are careful in planning what they want, and don’t show any excessive habit of changing their direction.  They are willing to wait a year for this capability.  I believe they have a good understanding of what they want — this step of getting agreement on the requirements is really just a way to make sure that the development team understands what the customer wants.

Why Such a Stickler?

AE says: “You should be able to go ahead and start without the agreement on requirements.  We have 8 more months; we can surely take a few more weeks or months getting this agreement.”

Step 2 is to draw up a specification and to share that with the customer.  Again, we want to be transparent so that we avoid any misunderstanding that might cause problems late in the project.  However, writing a spec takes effort.

Imagine that I ask someone to write a spec for features A, B, and C.  Say that is two weeks of work.  Then the customer asks for feature D, and that causes a complete change in A, B, and C.  For example, given A, B, and C we might decide to write in Python, and that will have an effect on the way things are structured.  Then the customer requires running in an environment where Python is not available.  That simple change would require us to start completely over.  All the work on the Python design is wasted work which we have to throw out, and could cause us to lose up to a month of time on the project, causing the entire project to be late.  However, if we know “D” before we start, we don’t waste that time.

Step 2 was planned to take a month, so if we steal 2 weeks from that by being lazy about getting agreement on the requirements, we have already lost half the time needed.  It is not likely that we can do this step in half the time.  And the two weeks might be wasted, causing us to need even more time.  Delaying the completion of step 1 can cause an increase in the time needed for step 2, ultimately cascading all the way to final delivery.

Coming to agreement on the requirements should take 10% of the time, but if not done, could have repercussions that cost far more than 10% of the time.  It is important to treat those early deadlines as if the final delivery of the project depended on them.

Lack of attention to setting up the project at the front always has an amplified effect toward the end of the project.

But What About Agile?

Agile development is about organizing the work of the team to be optimally productive, but it is very hard to predict accurate deliveries at specific dates in the future.  I can clearly say we will have great capabilities next year, and the year after.  But this situation is one where the customer has a specific expectation in a specific time frame.

Without a clear definition of what they want, the time to develop is completely unpredictable.  There is a huge risk in having an agreed-upon date, but no agreed-upon detailed functionality.

Since the customer understands what they want, the most critical and urgent thing is to capture that desire in a document we both can agree on.  The more quickly that is done, the greater the reduction in risk and danger.

Even when developing in an agile way, the better we understand things up front, the better the whole project will go.  Don’t leave things in the dark just because you are developing in an agile way.  It is a given that there are many things that can’t be known in the course of a project, but that gives no license to purposefully ignore things that can be known.

Conclusions

Well run projects act as if early deadlines are just as important as late deadlines.  Attention to detail is not something that just appears at the last moment.  It must start early, and run through the entire project.

Most software projects fail because of a lack of clear agreement on what will satisfy the customer.  It is always those late discoveries that cause projects to miss deadlines.  A well run project requires strict attention to clarifying the goals as early as possible.

Do not ignore early deadlines.  Act as if every step of a project is as important as the final delivery.  Because every step is as important as the final delivery.

by kswenson at June 22, 2017 05:02 PM

June 21, 2017

Sandy Kemsley: Smart City initiative with @TorontoComms at BigDataTO

Winding down the second day of Big Data Toronto, Stewart Bond of IDC Canada interviewed Michael Kolm, newly-appointed Chief Transformation Officer at the city of Toronto, on the Smart City...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 06:08 PM

Sandy Kemsley: Consumer IoT potential: @ZoranGrabo of @ThePetBot has some serious lessons on fun

I’m back for a couple of sessions at the second day at Big Data Toronto, and just attended a great session by Zoran Grabovac of PetBot on the emerging markets for consumer IoT devices. His premise is...

[Content summary only, click through for full article and links]

by sandy at June 21, 2017 04:40 PM

June 20, 2017

Sandy Kemsley: Data-driven deviations with @maxhumber of @borrowell at BigDataTO

Any session at a non-process conference with the word “process” in the title gets my attention, and I’m here to see Max Humber of Borrowell discuss how data-driven deviations allow you to make...

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 07:56 PM

Sandy Kemsley: IBM’s cognitive, AI and ML with @bigdata_paulz at BigDataTO

I’ve been passing on a lot of conferences lately – just too many trips to Vegas for my liking, and insufficient value for my time – but tend to drop in on ones that happen in Toronto, where I live....

[Content summary only, click through for full article and links]

by sandy at June 20, 2017 04:18 PM

June 13, 2017

BPM-Guide.de: “Obviously there are many solutions out there advertising brilliant process execution, finding the “right” one turns out to be a tricky task.” – Interview with Fritz Ulrich, Process Development Specialist

Fritz graduated with a Bachelor’s in Information Systems at WWU Münster in 2013 and since then has been working for Duni GmbH in the area of Process Development (responsible for all kinds of BPM topics and Duni’s BPM framework) and as a Project Manager.

by Darya Niknamian at June 13, 2017 08:00 AM