Planet BPM

September 29, 2014

Keith Swenson: 3 Innovative Approaches to Process Modeling

In a post titled “Business Etiquette Modeling” I made a plea for modeling business processes such that they naturally deform themselves as needed to accommodate changes.  If we model a fixed process diagram, it is too fragile and can be costly to maintain manually.  While I was at the EDOC conference and the BPM conference, I saw three papers that introduce innovations which, while not completely defined solutions, represent solid research on steps in the right direction.  Here is a quick summary of each.

(1) Implementation Framework for Production Case Management: Modeling and Execution

(Andreas Meyer, Nico Herzberg, Mathias Weske of the Hasso Plattner Institute and Frank Puhlmann of Bosch, EDOC 2014 pages 190-199)

This approach is aimed specifically at production case management, which means it supports a knowledge worker who has to decide in real time what to do, although the kinds of things such a worker might do are well known in advance.  The example used is that of a travel agent: we can identify all the various things that a travel agent might be able to do, but they might combine these actions in an unlimited variety of ways.  If we draw a fixed diagram, we end up restricting the travel agent unnecessarily.  Think about it: a travel agent might book one hotel one day, book flights the next, book another hotel, then change the flights, then cancel one of the hotel bookings — it is simply not possible to say that there is a single, simple process that a travel agent will always follow.

Instead of drawing a single diagram, the approach suggested is to draw separate little process snippets of all the things that a travel agent might do.  Here is the interesting part: the same activity might appear in multiple snippets.  At run time the system combines the snippets dynamically based on conditions.  Each task in each snippet is linked to things that are required before that task would be triggered, so based on the current case instance information, a particular task might or might not appear as needed.  Dynamic instance data determines how the current process is constructed.  Activities have required inputs and produce outputs, which form part of the conditions on whether they are included in a particular instance.

[figure: process snippets for the travel agent example]

Above are some examples of the process snippets that might be used for a travel agent.  Note that “Create Offer” and “Validate Offer” appear in two different snippets with slightly different conditions.  The ultimate process would be assembled at run time in a way that depends upon the details of the case.  I would have to refer you to the paper for the full details on how this works, but I was impressed by Andreas’ presentation.  I am not sure this is exactly the right approach, but I am sure that we need this kind of research in this direction.
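To make the mechanism concrete, here is a minimal sketch of the snippet-enabling idea in plain Java.  The type and data-object names are my own invention, not the framework from the paper; it only illustrates how required inputs on snippets let the case data decide what is currently available.

import java.util.*;

public class SnippetAssembly {

    record Snippet(String name, Set<String> requiredInputs, Set<String> producedOutputs) {}

    // A snippet is enabled when every data object it requires exists in the case.
    static List<Snippet> enabledSnippets(Collection<Snippet> repository, Set<String> caseData) {
        List<Snippet> enabled = new ArrayList<>();
        for (Snippet s : repository) {
            if (caseData.containsAll(s.requiredInputs())) {
                enabled.add(s);
            }
        }
        return enabled;
    }

    public static void main(String[] args) {
        List<Snippet> repository = List.of(
            new Snippet("Create Offer", Set.of("travel request"), Set.of("offer")),
            new Snippet("Validate Offer", Set.of("offer"), Set.of("validated offer")),
            new Snippet("Book Hotel", Set.of("validated offer"), Set.of("hotel booking")));

        // The case starts with only a travel request; as outputs are produced,
        // more snippets become available and the process assembles itself.
        Set<String> caseData = new HashSet<>(Set.of("travel request"));
        System.out.println(enabledSnippets(repository, caseData)); // only Create Offer

        caseData.add("offer");
        System.out.println(enabledSnippets(repository, caseData)); // now also Validate Offer
    }
}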

(2) Informal Process Essentials

(C. Timurhan Sungur, Tobias Binz, Uwe Breitenbücher, Frank Leymann, University of Stuttgart, EDOC 2014 pages 200-209)

They describe the need to support “informal processes” which is not exactly what I am looking for.  Informal means “having a relaxed, friendly, or unofficial style, manner, or nature; a style of writing or conversational speech characterized by simple grammatical structures.”  What I am looking for are processes that are well crafted, official, meaningful, accurate, and at the same time responsive to external changes.   Formal/informal is not the same relationship as fixed/adaptive.  However, they do cover some interesting ideas that are relevant.  They specify four properties:

  1. Implicit Business Logic – the logic is not explicit until run time
  2. Different Relationships Among Resources – interrelated sets of individuals are used to accomplish more complex goals
  3. Resource Participation in Multiple Processes – people are not dedicated to a single project.
  4. Changing Resources – dynamic teams assembled as needed.

These properties look a lot like innovative knowledge worker patterns, and so this research is likely to be relevant.  They identify the following requirements for meeting the need:

  1. Enactable Informal Process Representation
  2. Resource Relationships Definition
  3. Resource Visibility Definition
  4. Support for Dynamically Changing Resources

It seems that these approaches need to focus more on resources, roles, and relationships, and less on the specific sequences of activities.  Then from that, one should be able to generate the actual process needed for a particular instance.

The tricky part is how to find an expert who can model this.  One of the reasons for drawing a BP diagram is that drawing a diagram simplifies the job of creating the process automation.  Getting to the underlying relationships might be more accurate and adaptive, but it is not simpler.

(3) oBPM – An Opportunistic Approach to Business Process Modeling and Execution

(David Grünert, Elke Brucker-Kley and Thomas Keller, Institute for Business Information Management, Winterthur, Switzerland, BPMS2 Workshop at BPM 2014)

This paper comes the closest to Business Etiquette Modeling, because it is specifically about the problem of creating a business process with a strict sequence of user tasks.  This top-down approach tends to be over-constrained.  Since this is the BPM and Social Software Workshop, the paper tries to find a way to be more connected to social technology, and to take a more bottom-up approach.  They call it “opportunistic” BPM because the idea is that the actual process flow can be generated after the details of the situation are known.  Such a process can take advantage of the opportunities automatically, without needing a process designer to tweak the process every time.

The research has centered on modeling roles, the activities that those roles typically do, and the artifacts that are either generated or consumed.  They leverage an extension of the UML use case modeling notation, and it might look a little like this:

[figure: use-case-style model of roles, activities, and artifacts]

The artifacts (documents, etc.) have a state themselves.  When a particular document enters a particular state, it enables a particular activity for a particular role.  To me this shows a lot of promise.  Upon examination, there are weaknesses to this approach: modeling the state diagram for a document would seem to be a challenge, because the states that a document can be in are intricately tied to the process you want to perform.  Our preconception of the process might overly restrict the state chart, which in turn limits what processes could be generated.  Also, there is a data model that Grünert admitted would have to be created by a data model expert, but perhaps there are a limited number of data models, and maybe they don’t change that often.  Somehow, all of this would have to be discoverable automatically from the working of the knowledge workers in order to eliminate the huge up-front cost of having to model all this explicitly.  Again, I refer you to the actual paper for the details.
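The core mechanic is simple enough to sketch in a few lines of Java.  The states, roles, and activities below are invented for illustration, not taken from the paper:

import java.util.*;

public class ArtifactStateExample {

    enum OfferState { DRAFT, SUBMITTED, APPROVED }

    // Each rule says: when the artifact is in this state, this role may do this activity.
    record Enablement(OfferState trigger, String role, String activity) {}

    static final List<Enablement> RULES = List.of(
        new Enablement(OfferState.DRAFT, "Clerk", "Submit offer"),
        new Enablement(OfferState.SUBMITTED, "Manager", "Approve offer"),
        new Enablement(OfferState.APPROVED, "Clerk", "Send offer to customer"));

    static List<Enablement> enabledFor(OfferState current) {
        List<Enablement> result = new ArrayList<>();
        for (Enablement r : RULES) {
            if (r.trigger() == current) {
                result.add(r);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // When the document enters SUBMITTED, the Manager's approval task appears.
        System.out.println(enabledFor(OfferState.SUBMITTED));
    }
}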

Net-Net

What this shows is that there is research being done to take process to the next level.  Perhaps a combination of these approaches might leave us with the ultimate solution: a system that can generate process maps on demand that are appropriate for a specific situation.  This would be exactly like your GPS unit, which can generate a route from point A to point B given the underlying map of what is possible.  What we are looking for is a way to map what the underlying role interactions could possibly be, along with a set of rules about what might be appropriate when.  Just as you might add a new highway to a GPS map, you might add a new rule, and all the existing business processes would automatically change if that new rule applies to the case.  We are not there yet, but this research shows promise.


by kswenson at September 29, 2014 04:48 PM

September 23, 2014

Thomas Allweyer: From the Pyramid to the House – New Edition of the Praxishandbuch BPMN

[image: cover of Praxishandbuch BPMN 2.0, fourth edition]

The fourth edition of the widely used Praxishandbuch BPMN 2.0 by Jakob Freund and Bernd Rücker has recently appeared. The main difference from the third edition: the camunda method framework, previously depicted as a pyramid, has been changed and is now visualized in the form of a house. The change was prompted by some misunderstandings that occasionally arose in connection with the pyramid. In the pyramid, the level of the technical, i.e. executable, process model was placed below the level of the operational process model. This led many readers to conclude that the technical level must necessarily be a refinement of the operational level. With that they associated the expectation that the technical process models always had to be created after the operational models, and that the responsibilities for these levels were neatly divided between the business departments and IT.

These views, however, do not match the authors’ intentions. In the new depiction as a house, the roof still contains the level of the strategic process model. The house itself, however, consists of just one floor, the operational process model. It is divided into a “human process flow” and a “technical process flow”, both of which sit on the same level. The human process flow is carried out by the process participants. The technical process flow is executed by a software system, typically a process engine. Usually there are close interactions between the human and the technical flow. In agile process development, both flows are developed together, with business and IT experts collaborating closely.

Otherwise, only minor changes have been made to the book. Since the XML-based BPEL standard for executable processes has lost much of its importance, it is now covered only briefly. Finally, a short overview of the open source platform “camunda BPM”, which is developed under the authors’ leadership, has been added.


Freund, J.; Rücker, B.:
Praxishandbuch BPMN 2.0. 4th edition.
Hanser 2014
The book at amazon.

by Thomas Allweyer at September 23, 2014 06:36 AM

September 22, 2014

BPM-Guide.de: Thanks for an awesome BPMCon 2014

Awesome location, awesome talks and most of all: awesome attendees. This year’s BPMCon was indeed the “schönste BPM-Konferenz” (“most beautiful BPM conference”) I’ve ever seen. Thank you so much to all who made it happen, including Guido Fischermanns for the moderation, Sandy Kemsley for her keynote about the Zero-Code BPM Myth, all those BPM practitioners who presented their lessons [...]

by Jakob Freund at September 22, 2014 05:46 PM

September 19, 2014

Drools & JBPM: The Birth of Drools Pojo Rules

A few weeks back I blogged about our plans for a clean low level executable mode, you can read about that here.

We now have our first rules working, and you can find the project with unit tests here. None of this requires drools-compiler any more, and allows people to write DSLs without ever going through DRL and heavy compilation stages.

It's far off our eventual plans for the executable model, but it's a good start that fits our existing problem domain. Here is a code snippet from the example in the project above; it uses the classic Fire Alarm example from the documentation.

We plan to build Scala and Clojure DSLs in the near future too, using the same technique as below.

public static class WhenThereIsAFireTurnOnTheSprinkler {
    Variable<Fire> fire = any(Fire.class);
    Variable<Sprinkler> sprinkler = any(Sprinkler.class);

    // Pattern: a sprinkler that is off, in the same room as a fire
    Object when = when(
        input(fire),
        input(sprinkler),
        expr(sprinkler, s -> !s.isOn()),
        expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
    );

    // Consequence: turn the sprinkler on and tell the engine the fact changed
    public void then(Drools drools, Sprinkler sprinkler) {
        System.out.println("Turn on the sprinkler for room " + sprinkler.getRoom().getName());
        sprinkler.setOn(true);
        drools.update(sprinkler);
    }
}

public static class WhenTheFireIsGoneTurnOffTheSprinkler {
    Variable<Fire> fire = any(Fire.class);
    Variable<Sprinkler> sprinkler = any(Sprinkler.class);

    // Pattern: a sprinkler that is on, with no fire left in its room
    Object when = when(
        input(sprinkler),
        expr(sprinkler, Sprinkler::isOn),
        input(fire),
        not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
    );

    // Consequence: turn the sprinkler off and tell the engine the fact changed
    public void then(Drools drools, Sprinkler sprinkler) {
        System.out.println("Turn off the sprinkler for room " + sprinkler.getRoom().getName());
        sprinkler.setOn(false);
        drools.update(sprinkler);
    }
}

by Mark Proctor (noreply@blogger.com) at September 19, 2014 06:03 PM

September 18, 2014

Sandy Kemsley: What’s Next In camunda – Wrapping Up Community Day

We finished the camunda community day with an update from camunda on features coming in 7.2 next month, and the future roadmap. camunda releases the community edition in advance of the commercial...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 04:12 PM

Sandy Kemsley: camunda Community Day technical presentations

The second customer speaker at camunda’s community day was Peter Hachenberger from 1&1 Internet, describing how they use Signavio and camunda BPM to create their Process Platform, which is...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 02:59 PM

Sandy Kemsley: Australia Post at camunda Community Day

I am giving the keynote at camunda’s BPMcon conference tomorrow, and since I arrived in Berlin a couple of days early, camunda invited me to attend their community day today, which is the open...

[Content summary only, click through for full article and links]

by sandy at September 18, 2014 11:53 AM

September 17, 2014

Drools & JBPM: Decision Camp is just 1 Month away (SJC 13 Oct)

Decision Camp, San Jose (CA), October 2014, is only one month away, and is free for all attendees who register. Follow the link here for more details on the agenda and registration.

by Mark Proctor (noreply@blogger.com) at September 17, 2014 02:50 AM

September 16, 2014

Drools & JBPM: Workbench Multi Module Project Structure Support

The upcoming Drools and jBPM community 6.2 release will add support for Maven multi-module projects. Walter has prepared a video showing the work in progress. While not shown in this video, the multi-module projects will have managed support to assist with automating version updates and releases, and will have full support for multiple version streams across Git branches.

There is no audio, but it's fairly self-explanatory. The video starts by creating a single project, and then shows how the wizard can convert it to a multi-module project. It then proceeds to add and edit modules, also demonstrating how the parent pom information is configured. The video also shows how this can work across different repositories without a problem - each with its own project structure page. Repositories can also be unmanaged, which allows for user-created single projects, much as we have now with 6.0 and 6.1, which means previous repositories will continue to work as they did before.

Don't forget to switch the video to 720p, and watch it full screen. YouTube does not always select that by default, and the video is fuzzy without it.




by Mark Proctor (noreply@blogger.com) at September 16, 2014 10:25 PM

September 15, 2014

Sandy Kemsley: Survey on Mobile BPM and DM

James Taylor of Decision Management Solutions and I are doing some research into the use and integration of BPM (business process management) and DM (decision management) technology into mobile...

[Content summary only, click through for full article and links]

by sandy at September 15, 2014 04:52 PM

Drools & JBPM: Setting up the Kie Server (6.2.Beta version)

Roger Parkinson did a nice blog on how to set up the Kie Server 6.2.Beta version to play with.

This is still under development (hence Beta) and we are working on improving both setup and features before the final release, but following his blog steps you can easily set up your environment to play with it.

Just one clarification: while the workbench can connect to and manage/provision multiple remote kie-servers, they are designed to work independently, and one can use the REST services exclusively to manage/provision the kie-server. In that case, it is not necessary to use the workbench.

Here are a few test cases showing off how to use the client API (a helper wrapper around the REST calls) in case you wanna try:

https://github.com/droolsjbpm/droolsjbpm-integration/blob/master/kie-server/kie-server-services/src/test/java/org/kie/server/integrationtests/KieServerContainerCRUDIntegrationTest.java

https://github.com/droolsjbpm/droolsjbpm-integration/blob/master/kie-server/kie-server-services/src/test/java/org/kie/server/integrationtests/KieServerIntegrationTest.java

Thanks Roger!

by Edson Tirelli (noreply@blogger.com) at September 15, 2014 03:59 PM

Thomas Allweyer: Should We Still Teach the “Classic” BPMS Concept?

So far I have received very positive reactions to my new BPMS book. Among other things, however, the question arose whether the classic, process-model-driven BPMS concept that I explain in the book with many example processes is still up to date at all. Given an ever-growing share of knowledge workers, shouldn't one rather look at newer and more flexible approaches, such as Adaptive Case Management (ACM)?

Certainly, the classic BPMS philosophy must be critically questioned regarding its suitability for different areas of application. For most weakly structured and knowledge-intensive processes it is indeed not sensible, and usually not even possible, to define the complete flow in advance in the form of a BPMN model. Adaptive Case Management is better suited for such processes. But that does not mean that the conventional BPM approach is completely obsolete. The book is meant to provide a solid introduction to the field. There are a number of reasons why I restricted it to process-model-based BPMS:

  • The vast majority of the BPMS available on the market today use the process-model-based approach. There are certainly pure ACM systems, but at least for now they are in the minority. Case management is frequently offered as additional functionality on classic BPM platforms.
  • The classic BPM concept is quite well developed in theory and practice. The corresponding systems have reached a high level of maturity. It is thus an established approach that forms a foundation of this field.
  • ACM, on the other hand, is a rather new approach that is still very much in development. It is therefore difficult to identify fundamentals that will not already be outdated in a few years.
  • Knowing the classic BPM fundamentals helps in understanding ACM and other new approaches. Concepts such as process definitions and instances reappear in ACM in the form of case templates and cases. Likewise, one should understand what the correlation of messages is about; whether a message is assigned to a process instance or to a case makes little difference. Some advantages of the ACM approach only really become apparent when compared with the classic concept, where, for example, employees cannot simply add entirely new work steps while the process is running.
  • Even in case handling there are often parts that run as structured processes. The classic BPM concept will therefore probably not be replaced completely. Instead, ACM and BPMS functionality will complement each other.
  • The number of structured and standardized processes is unlikely to decrease in the future. On the one hand, there are more and more fully automated processes, which are necessarily highly structured – at least until highly intelligent and autonomous software agents have prevailed across the board. On the other hand, more and more processes must be scalable in order to be handled efficiently over the internet. For this they must be highly structured and standardized. When someone orders something from a large online retailer, nobody first thinks individually about how to fulfill this customer’s wishes; a completely standardized process runs instead. It may be that processes with strong employee involvement will increasingly be supported by ACM, and classic process engines will then more likely be found controlling fully automated processes. But the number of possible applications will not become smaller.

Anyone who starts out by studying the fundamentals of classic BPMS is therefore definitely on the right track. And the best way to understand them is to try them out yourself. That is why there are numerous example processes for the book, which can be downloaded and executed with the open source system “Bonita”.

by Thomas Allweyer at September 15, 2014 10:54 AM

September 13, 2014

Keith Swenson: BPM2014 Keynote: Keith Swenson

I was honored to give the keynote on the second day of the BPM2014 conference, and promised to answer questions, so here are the slides and summary.

Slides are at slideshare:

(Slideshare no longer has the ability to put audio together with the slides, so I apologize that the slides alone probably don’t make a lot of sense.  I hope to get invited to present the same talk at another event where they record video.)

Twitter Responses

[tweet images]

Nice of you to notice!  The talk went on schedule and as far as I know there was nothing that I forgot to say.

[tweet image]

excellent!

[tweet image]

It is a little of both.  There is a tendency for managers of all types, especially less experienced managers, to want to over-constrain the processes.  At the same time, programmers tend to implement restrictions very literally and without any wiggle room.  I don’t think we can let either one off the hook.

[tweet images]

This was one of my key points:  if our goal is to make ‘business’ successful, maybe there is more to it than just increasing raw efficiency in terms of reducing expenses.  Maybe an excellent business needs to keep their knowledge workers experienced, and possibly our IT systems should be helping to exercise the knowledge workers.

[tweet image]

This tweet got the most favorites and retweets.  I had not realized that this was not clear before, so let me state it here.  I included in the presentation the definition of BPM that was gathered earlier this year.  I mentioned that this was not exactly the definition that I had formerly held, but the discussion included a broad spectrum of BPM experts, and so I am willing to go along with this definition.

Under this new definition, ANYTHING and EVERYTHING that makes your business processes better is included.  Some of you thought this all the time.  Previously, I had subscribed to a different (and wrong) definition of BPM, which was a bit more restrictive, and that is why in the past I have stressed the distinction between BPM and ACM.  However, this new, agreed-upon definition allows a BPM method to have or not have models, to have or not have execution, etc.  So BPM clearly includes ACM, because ACM also is a way of supporting business and processes.  This is now the definition that so many have pledged to support, and I can support it as well.

I am still teaching myself to say “Workflow-style BPM” or “traditional-BPM” instead of simply ‘BPM’, and I have not completely mastered that change.

[tweet images]

There is no doubt: knowledge work is more satisfying to do.  I spoke to some attendees afterwards who felt I was being ‘unfair’ to the routine workers: they are doing their jobs too, why pick on them just because their job is routine?  I am not sure how to respond to that.  I think most people find routine work dull and boring.  Sure, it is a job, but most people would like to be doing more interesting things, and that generally is knowledge work that depends upon expertise you acquire.  In general, automating routine work will allow a typical business to employ more knowledge workers, particularly if the competitors are doing so.  It is somewhat unlikely that all routine workers will switch and become knowledge workers, but some will, and for the most part the transition will occur by hiring exclusively knowledge workers, and losing routine workers by attrition.



by kswenson at September 13, 2014 08:00 AM

September 11, 2014

Tom Baeyens: 5 Types Of Cloud Workflow

Last Wednesday, Box Workflow was announced. It was an expected move for them to go higher up the stack as the cost of storage “races very quickly toward zero”.  It made me realize there are actually five different types of workflow solutions available on the cloud.

Box, Salesforce, Netsuite and many others have bolted workflow on top of their products.  In this case workflow is offered as a feature on a product with a different focus.  The advantage is that it is well integrated with the product and available when you already have the product.  The downside can be that the scope is mostly limited to the product.

Another type is BPM as a service (aka cloud-enabled BPM).  BPM as a service has an online service for which you can register an account and use the product online without setting up or maintaining any IT infrastructure for it.  The cloud poses a different set of challenges and opportunities for BPM.  We at Effektif provide a product that is independent, focused on BPM, and born and raised in the cloud.  In our case, we could say that our on-premise version is actually the afterthought.  Usually it’s the other way round: most cloud-enabled BPM products were created for on-premise first and have since been tweaked to run on the cloud.  My opinion ‘might’ be a bit biased, but I believe that today’s hybrid enterprise environments are very different from the on-premise-only days.  Ensuring that a BPM solution integrates seamlessly with other cloud services is non-trivial, especially when it needs to integrate just as well with existing on-premise products.

BPM platform as a service (bpmPaaS) is an extension of virtualization.  These are prepackaged images of BPM solutions that can be deployed on a hosting provider.  So you rent a virtual machine with a hosting provider and you then have a ready-to-go image that you can deploy on that machine to run your BPM engine.  As an example, you can have a look at Red Hat’s bpmPaaS cartridge.

Amazon Simple Workflow Service is in many ways unique and a category of its own, in my opinion.  It is a developer service that in essence stores the process instance data and takes care of the distributed locking of activity instances.  All the rest is up to the user to code.  The business logic in the activities has to be coded.  But what makes Amazon’s workflow really unique is that you can (well.. have to) code the logic between the activities yourself as well.  There's no diagram involved.  So when an activity is completed, your code has to perform the calculation of what activities have to be done next.  I think it provides a lot of freedom, but it’s also courageous of them to fight the uphill battle against the user’s expectations of a visual workflow diagram builder.
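To illustrate the shape of that “logic between the activities”, here is a plain-Java sketch of a decider-style function.  This is not the actual Amazon API, just the concept: given the history of completed activities, code rather than a diagram decides what runs next.

import java.util.List;

public class DeciderSketch {

    // Decide the next activities purely from what has already completed.
    static List<String> decideNext(List<String> completed) {
        if (!completed.contains("ChargeCreditCard")) {
            return List.of("ChargeCreditCard");
        }
        if (!completed.contains("ShipOrder")) {
            return List.of("ShipOrder");
        }
        return List.of(); // nothing left to schedule: the workflow is done
    }

    public static void main(String[] args) {
        System.out.println(decideNext(List.of()));                    // [ChargeCreditCard]
        System.out.println(decideNext(List.of("ChargeCreditCard")));  // [ShipOrder]
    }
}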

Then there are IFTTT and Zapier.  These are in my opinion iconic online services because they define a new product category.  At the core, they provide an integration service.  Integration has traditionally been one of the most low-level technical aspects of software automation.  Yet they managed to provide it as an online service, enabling everyone to accomplish significant integrations without IT or developer involvement.  I refer to those services a lot because they have transformed something that was complex into something simple.  That, I believe, is a significant accomplishment.  We at Effektif are on a similar mission.  BPM has been quite technical and complex.  Our mission is also to remove the need for technical expertise so that you can build your own processes.

by Tom Baeyens (noreply@blogger.com) at September 11, 2014 09:57 AM

September 10, 2014

BPinPM.net: BPM meets Digital Age – Win the new book „Management by Internet“ by Willms Buhse

[image: Evaluation of Digital Age BPM ideas]

Together with eight fearless BPM experts from four different organizations, we went on an exciting journey to bring together Digital Age and BPM. Supported by Dr. Willms Buhse and his experts from doubleYUU, we have developed a number of possibilities to combine web 2.0 and social media features as well as digital leadership aspects with business process management.

Today, we are going to introduce the results of this workshop series in more detail. – And please don’t miss the chance to win a copy of the inspiring book “Management by Internet” by Willms Buhse at the end of this article. The book covers a lot of the aspects we combined with BPM and provides practical examples of how to benefit from the Digital Age as a manager.

The overall goal of the workshop series was to increase the acceptance and the benefit of BPM through the implementation of Digital Age elements. Within the first workshop session, we developed more than 70 ideas, which we clustered into six areas of interest for further evaluation: ‘Participation’, ‘Training and Communication’, ‘Feedback and Exchange’, ‘Search Engine’, ‘Process Transparency’, and ‘Mobile Access’.

Based on an evaluation of these six areas by BPM experts from the participating organizations, we developed prototypes for the eleven highest-ranked ideas during the second workshop in an overnight delivery session. Afterwards, these prototypes went through a second evaluation cycle by employees of the participating organizations.

[image: Search like Google]

The biggest winners of the evaluation by the employees were the ideas related to the ‘Search Engine’. Obviously, employees expect the search engine of the BPM system to be as fast and precise as Google. But – as we have learned from Willms and his team – it is not at all fair to compare Google with the search engine of a BPM system. Google processes many more search requests which can be analyzed, and Google invests an immense amount of money to optimize its algorithms. But there is still the expectation by the employees to have a search like Google. Thus, we discussed ideas like tagging, result ranking, and previews to push the BPM search engine towards Google expectations.

[image: the "Like-Button" failed]

The biggest loser of the evaluation was the “Like-Button”, which was represented by a “heart” in our prototypes. Taking a closer look at the results, we realized that it probably doesn’t make sense to “like” a process. The result of our discussion was to redesign the button into a “Helpful”-Button, which can be clicked by users to indicate that the process description was helpful for them.

Now, we are going to wrap up all the learnings for a more detailed presentation of the results during our BPinPM.net Conference in November, as well as prepare the prototypes for further evaluation. In addition, we will present detailed insights into the current implementation status of Digital Age BPM at one of the participating organizations at the conference. So if you are interested in more details, please meet us at the conference. :-)

To provide even more insights into the Digital Age elements we discussed during the workshop, we are going to raffle a copy of the new “Management by Internet” book by Willms Buhse. So don’t wait and enter the lottery here…

Best regards,
Mirko

by Mirko Kloppenburg at September 10, 2014 12:48 PM

September 09, 2014

Keith Swenson: Business Etiquette Modeling: a new paradigm for process

The AdaptiveCM 2014 workshop this past Monday provided a very interesting discussion of the state of the art in adaptive case management and other non-workflow-oriented ways of supporting knowledge work. While there I presented, and we discussed, an intriguing new way to think about processes which I call “Business Etiquette Modeling”.

Processes Emerge from Individual Interaction

The key to this approach is to treat a business process as an epiphenomenon: a secondary effect that results from business interactions, but is not primary to them.  The primary thing that is happening is interaction between people.  If those interactions are tuned properly, business results.

I have found the following video to be helpful in giving a concrete idea of emergent behavior that we can discuss.  Watch the video, particularly between 0:30 and 1:30.  The behavior of the flock of birds, called murmuration, is the way that the groups of birds appear to bunch, expand, and swirl.  The birds themselves have no idea they are doing this.  Take a look (click this link to access the video – strange problem with WordPress at the moment):

The behavior of the flock is analogous to the business that an organization is engaged in.  With regular top-down or outside-in processes, you start with the emergent business behavior that you want to support, and model that directly.  To refer to the analogy, you draw descriptions of the bunching, flowing, and swirling of the flock, and from that you would come up with specific flight paths that individual birds would need to follow to get that overall behavior.  However, that is not how the birds actually do it!

You can simulate this murmuration behavior by endowing individual birds with a few simple rules: match speed with nearby birds, try to stay near the group of birds, and leave enough space to avoid hitting other birds.  Computer simulation using these rules produces flock behavior very similar to the starlings shown in the video.
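For the curious, the classic simulation rules (often called “boids”) fit in a few lines.  This is my own simplified two-dimensional sketch of the three rules just mentioned, with arbitrary weighting constants:

import java.util.List;

public class Boid {
    double x, y, vx, vy;

    // One simulation step combining the three rules.
    void step(List<Boid> neighbors) {
        double avgVx = 0, avgVy = 0, cx = 0, cy = 0, sepX = 0, sepY = 0;
        for (Boid b : neighbors) {
            avgVx += b.vx; avgVy += b.vy;        // rule 1: match speed (alignment)
            cx += b.x; cy += b.y;                // rule 2: stay near the group (cohesion)
            double dx = x - b.x, dy = y - b.y;
            double dist = Math.hypot(dx, dy);
            if (dist > 0 && dist < 2.0) {        // rule 3: keep space (separation)
                sepX += dx / dist; sepY += dy / dist;
            }
        }
        int n = neighbors.size();
        if (n > 0) {
            vx += 0.05 * (avgVx / n - vx) + 0.01 * (cx / n - x) + 0.1 * sepX;
            vy += 0.05 * (avgVy / n - vy) + 0.01 * (cy / n - y) + 0.1 * sepY;
        }
        x += vx; y += vy;
    }

    public static void main(String[] args) {
        Boid a = new Boid(); a.x = 0; a.vx = 1;
        Boid b = new Boid(); b.x = 1; b.vx = 0;
        a.step(List.of(b));
        System.out.println(a.x + ", " + a.vx); // a slows and keeps its distance
    }
}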

[figure: emergent flock behavior (left) and the rules that produce it (right)]

On the left you see the emergent flock behavior, and on the right the rules that produce it, but there is no known way to derive the rules from the flock behavior.  (These rules were found by trial & error experimentation in the simulator.)

The behavior of the birds in a flock emerges from the behaviors of the individual bird interactions — there is no guidance at the flock level.  This is very much like business:  an organization has many individual people interacting, and the business emerges as a result.  Obviously the interaction of people is far more complex than the birds, and business equally more complex than flock behavior, but the analogy holds: business can be modified indirectly by changing the rules of behavior of individuals.

Top-Down Design Runs Into Trouble

Consider the bird flock again, and try to reproduce this behavior the way that a typical BPM approach would.  In BPM we would define the overall process that is desired, and then we would determine the steps of everyone along the way to make that happen.  For the bird flock, that would be like outlining the shape of the flock, stating that the goal is a particular shape and a particular swooping, and then calculating the flight paths of each of the birds in order to get the desired output.  That might seem like a daunting task for so many birds, but it is doable.  The result is that you will have a precisely defined flock flying pattern.

This pattern would be very fragile.  If you tried to fly where a tree was in the way, some of the pre-calculated bird trajectories would hit the tree.  If there was a hawk in the region, some of the birds would quite likely be captured, because the path is fixed.  To fix this, you would have to go back to the overall flock design, come up with a shape that avoids the specific tree, or makes a hole for the predator, and then calculate all the bird trajectories again.  The bird flock behavior has become fragile because any small perturbation in the context requires manually revisiting, and modifying, the overall plan.

With the bottom-up approach, these situations are cleanly handled by adding a couple more rules: avoid trees and other stationary things, and always keep a certain distance from a predator.  By adding those rules, the behavior of the flock becomes stable in the face of those perturbations.  If we design the rules properly, the birds are able to determine their own flight paths.  They do so as they fly, and automatically take into account any need to change the overall flock structure.  Flocks automatically avoid trees, and they automatically make a hole where a predator flies.  Note of course that we cannot be 100% sure of what the flock will exactly look like when it is flying, but we do know that it will have the swooping behavior, as well as avoiding trees and predators.

The problem with modeling the high-level epiphenomenon directly is that once you specify the exact flight paths of the birds, the result is very fragile.  Yes, you get a precise overall behavior, but you get only that exact behavior.  When the conditions change, you are stuck, and it is hard to change.  If, however, you model the micro-level rules, the resulting macro-level behavior automatically adapts to the new, unanticipated situation without any additional work.

What is an Etiquette Model?

Etiquette is a term that refers to the rules of interactions between individuals.  Each individual follows their own rules, and if these rules are defined well enough, proper business behavior will emerge.  We can’t call this “Business Rule Modeling” because that already exists, and means something quite different. The term ‘etiquette’ implies that the rules are specifically for guiding the behavior of individuals at the interpersonal level.

The etiquette model defines explicitly how individuals in particular roles interact with others.  There would be a set of tasks that might be performed, as well as conditions on when to perform each task, structured as a kind of heuristic that can be used as needed. Selection criteria might include specific goals that an individual might have (such as “John is responsible for customer X.”) as well as global utilities (such as “try to minimize costs” or “assure that the customer goes away satisfied.”)  The set of heuristics is over-constrained, meaning that the individual does not simply follow all the rules, but has to weigh the options and choose the best guess for the specific situation.


For example, a role like “Purchasing Agent” would be fully defined by all the actions that a purchasing agent might take, and the conditions that would be necessary for such a role player to take action.  They might purchase something only when the requesting party presents a properly formed “purchase request” document which carries the proper number of approvals from the right people in the organization.  Defined this way, any number of different business processes might have a “purchase by purchaser” step within them, and the rules for purchasing would be consistent across all of them.  If there is a need to make a change to the behavior of the purchaser, those ‘etiquette’ rules could be changed, and as a result all of the processes that involve purchasing would be automatically modified in a consistent way.
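As a sketch of what one such etiquette rule might look like in executable form (my own hypothetical types, not a real system), note that the condition belongs to the role, not to any one process diagram:

import java.util.Set;

public class PurchasingEtiquette {

    record PurchaseRequest(boolean properlyFormed, Set<String> approvers) {}

    // The Purchasing Agent acts only on a properly formed request carrying
    // enough approvals, no matter which process produced the request.
    static boolean mayPurchase(PurchaseRequest request, int requiredApprovals) {
        return request.properlyFormed() && request.approvers().size() >= requiredApprovals;
    }

    public static void main(String[] args) {
        PurchaseRequest req = new PurchaseRequest(true, Set.of("manager", "finance"));
        System.out.println(mayPurchase(req, 2)); // true: every process gets the same purchasing behavior
    }
}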

Isn’t this the Functional Orientation that BPM avoids?

The answer is yes and no.  Yes, it is modeling very fine-grained behavior with a set of heuristics that tell what one individual will do to the exclusion of all others.  There is a real danger that the rules for one role might be defined in such a way as to throw a tremendous burden on everyone else in the organization.  This could decrease the overall efficiency of the organization.  We cannot optimize one role’s etiquette rules to the exclusion of all other roles — we need to consider how the resulting end-to-end business process appears.

Given the heuristics and guidelines for all the individuals that will be involved in a process, it is possible to simulate what the resulting business processes will be.  Using predictive analytics, we can estimate the efficiency of that process, and particular waste points can be identified.  This can be used to modify the etiquette of the individual participants so that overloaded individuals do slightly fewer things, and underloaded individuals do a bit more, and so that the overall end-to-end process is optimized.

The result is that you achieve the goals of BPM: you are engaged in a practice of continually improving your business processes.  But you do so without directly dictating the form of the process!  You dictate how individuals interact, and the business process naturally emerges from that.

Is this Worth the Trouble?

The amazing result of this approach is that the resulting business process is anti-fragile!  When a perturbation appears in the organization, the business processes can automatically, and instantly, change to adapt to the situation.  A simple example is a heuristic for one role to pick up some tasks from another role if that other role is overloaded.  Normally it is more efficient for Role X to do that task, but if, because of an accident, several of the people who normally play Role X end up in the hospital for a few weeks, the business process automatically, and instantly, adjusts to the new configuration, without any involvement of a process designer or anyone else.

Consider a sales example.  There can be multiple heuristics for closing a deal: one that explores all possible product configurations to identify the ideal match with the customer and maximizes revenue for the company, and another heuristic that gets things approximately right but closes very quickly.  As you get closer to the end of the month, the priority to close business in the month might shift from the more accurate heuristic, to the quick-and-dirty heuristic in order to get business into that month’s accounting results.  These kinds of adaptations are incredibly hard to model using the standard workflow diagram type approach.

The Amazon Example

Wil van der Aalst in his keynote at EDOC 2014 reminded me of a situation that happened to me recently with some orders from Amazon.  On one day I ordered two books and one window sticker from Amazon.  On the next day, I remembered about another book, and ordered that.  The result was that a few days later I received all three books in a single shipment, and the window sticker came a week after that separately.  The first order was broken into two parts for shipping, and then the second order was combined together with part of the first order.

This is actually very hard to model using BPMN.  You can make a BPMN process for a particular item, such as a book, which starts by being ordered and is ultimately shipped, but the treatment of the order, by splitting when necessary and combining when necessary, will not appear in the BPMN diagram.  It is hard (or impossible) to include the idea of “optimize shipping costs” in a process that represents the behavior of only a single item of the purchase.

When you model the business etiquette of the particular roles, it is very easy to include a heuristic to split an order into parts when the parts are coming from different vendors.  Not every order is split up.  There are guidelines for when to use this heuristic that dictate when it should and should not be done.  The same goes for the shipper, who might have a heuristic to combine shipments if they are close enough together, so that shipping costs can be reduced.
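A rough sketch of those two heuristics in Java (invented types, just to show that they live at the role level rather than in an end-to-end diagram):

import java.util.*;

public class ShippingHeuristics {

    record Item(String sku, String vendor) {}
    record Shipment(String vendor, String address, List<Item> items) {}

    // Heuristic 1: split an order when its items come from different vendors.
    static List<Shipment> splitByVendor(List<Item> order, String address) {
        Map<String, List<Item>> byVendor = new HashMap<>();
        for (Item i : order) {
            byVendor.computeIfAbsent(i.vendor(), v -> new ArrayList<>()).add(i);
        }
        List<Shipment> shipments = new ArrayList<>();
        byVendor.forEach((vendor, items) -> shipments.add(new Shipment(vendor, address, items)));
        return shipments;
    }

    // Heuristic 2: combine pending shipments from the same vendor to the same address.
    static Collection<Shipment> combine(List<Shipment> pending) {
        Map<String, Shipment> merged = new LinkedHashMap<>();
        for (Shipment s : pending) {
            merged.merge(s.vendor() + "|" + s.address(), s, (a, b) -> {
                a.items().addAll(b.items());
                return a;
            });
        }
        return merged.values();
    }
}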

This approach allows for supporting things like the Kanban method which constrains the number of instances that can be in a particular step at a time.  BPMN has no way to express these kinds of constraints that cross multiple processes.

Summary

Let’s discuss this approach.  My rather cursory search did not turn up any research on this approach of representing business processes by representing the interactions between individual roles in the organization, although on Monday at the BPM conference I saw a good paper called “Opportunistic Business Process Modeling” which was a start in this direction.  I will make links to research projects if I find some.

This approach also works well for adaptive case management.  The heuristics and guidelines can be used within a case to guide the behavior of the case manager and other participants.  If this is done, then even though you cannot predict the course of a single instance, you can use predictive analytics to approximate the handling of future cases.  This technique might be a new tool in the BPM toolkit.


by kswenson at September 09, 2014 04:45 AM

September 05, 2014

Keith Swenson: Final Keynote EDOC 2014: Barbara Weber

Barbara Weber is a professor at the University of Innsbruck in Austria.  Next year she will be hosting the BPM 2015 conference at that location.  She gave a talk on how they are studying the difficulties of process modeling.  My notes follow:

Most process model research focuses on the end product of process models. Studies have shown that a surprisingly large number of existing models, from 10% to 50%, have errors.  Generally process models are created and then the quality of the final model is measured, in terms of complexity of the model, model notation, and secondary notation, and measuring accuracy, speed, and mental effort.  Other studies take collections of industrial models, measure size, control flow complexity, and other metrics, and look for errors like deadlocks and livelocks.

The standard process modeling lifecycle is (1) elicitation, and then (2) formalization. Good communication skills are needed in the first part; the second part requires skills in a particular notation. She calls this PPM (the process of process modeling). Understanding this better would help both practice and teaching. This can be captured from a couple of different perspectives:

1) logging of modeling interactions
2) tracking of eye movement
3) video and audio
4) biofeedback collecting heart rate etc.

The Nautilus Project focused on logging the modeling environment. The Cheetah Experimental Platform (CEP) guides modelers through sessions, and it also records the entire session and plays it back later.  The resulting events can be imported into a process mining tool to analyze the process of process modeling.  She showed some details of the log file that is captured.

Logging at the fine-grained level was not going anywhere, because the result looked like a spaghetti diagram.  They broke the formalization stage into five phases:

  • Problem understanding: what the problem is, what has to be modeled, what notation to use
  • Method finding: how to map the things into the modeling notation
  • Modeling: actually doing the drawing on the canvas
  • Reconciliation: improving the understandability of the model, like factoring, layout, and typographic clues, all of which make maintenance easier
  • Validation: searching for quality issues, comparing external and internal representations, covering syntactic, semantic, and pragmatic quality issues

They validated this with users doing “think aloud” work.  They could then map the different kinds of events to these phases.  For example, creating elements belongs to the modeling phase, while moving and editing existing elements is more often the reconciliation phase.  She showed charts from two users: one spent a lot of time in problem understanding and then built quickly; the other user proceeded quite a bit more slowly, adding and removing things over time.
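A toy version of that event-to-phase mapping might look like this (the event type names are invented; Cheetah's real log format differs):

public class PhaseClassifier {

    enum Phase { MODELING, RECONCILIATION, UNKNOWN }

    // Creating elements indicates the modeling phase; moving and re-editing
    // existing elements more often indicates reconciliation.
    static Phase classify(String eventType) {
        switch (eventType) {
            case "CREATE_NODE":
            case "CREATE_EDGE":
                return Phase.MODELING;
            case "MOVE_NODE":
            case "RENAME_NODE":
            case "RELAYOUT":
                return Phase.RECONCILIATION;
            default:
                return Phase.UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("CREATE_NODE") + " / " + classify("MOVE_NODE"));
    }
}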

Looking at different users, they found (unsurprisingly) that less experienced users take a lot more time in the ‘problem understanding’ phase.  In ‘method finding’ they found that people with a lot of domain knowledge were significantly more effective.  At the end there are long understanding phases that occur around the clean up.  They did not look at ‘working memory capacity’ as a factor, even though it is well known that this is a factor in most kinds of modeling.  

The second project, “Modeling Mind”, took a look at eye movements and other biofeedback while modeling.  These additional events in the log will add more dimensions of analysis.  With eye tracking you measure the number of fixations and the mean fixation duration.  Then you define areas of interest (modeling canvas, text description, etc.)  They found that eye trace patterns matched well to the phases of modeling.  During initial understanding, people spend a lot of time on the text description with quick glances elsewhere.  During the building of the model, naturally you look at the canvas and the tool bar.  During reconciliation there is a lot of looking from model to text and back.

What they would then like is to get a continuous measure of mental effort.  That would give an indication of when people are working hard, and when that changes.  These might give some important clues.  Nothing available at the moment to make this easy, but they are trying to capture this.  For example, maybe measuring the size of the pupil.  Heart rate variability is another way to approximate this.

Conclusion: it is not sufficient to look only at the results of process modeling — the process maps that result — but we really need to look at the process of process modeling: what people are actually doing at the time, and how they accomplish the outcome.  This is the important thing you need to know in order to build better modeling environments, better notations and tools, and ultimately increase the quality of process models.  This might also produce a way to detect errors that are being made during the modeling, and possibly ways to avoid those errors.

Note that today there was no discussion of the elicitation phase (process discovery), but that is an area they are studying as well.

The tool they use (Cheetah) is open source, and so there are opportunities for others to become involved.

Q&A

Can the modeling tool simulate a complete modeling environment?  Some of the advanced tools check at run time and don’t allow certain syntactic errors.  Can you simulate this? –  The editor models BPMN, and there is significant ability to configure the way it interacts with the user.

Sometimes it is unclear what is the model, and what is the description of the model.  Is this kept clearly separated in your studies?  Do we need more effort to distinguish these more in modelers?  – We consider that modeling consists of everything including understanding what you have to do, sense making, and then the drawing of the model.

This is similar to cognitive modeling.  Have you considered using brain imaging techniques?  – we will probably explore that.  There is a student now starting to look at these things. We need to think carefully whether the subject is important enough for such a large investment.

Have you considered making small variations in the description, for example a tricky keyword, and seeing how this affects the task?  – We did do one study where we had the same, slightly modified requirements to model.  These can have a large effect.

Starting from a greenfield scenario, right?  What about using these for studying process improvement on existing models? – There has been a little bit of study of this.  The same approach should work well.  It would definitely be interesting to do more work on this.

 


by kswenson at September 05, 2014 08:15 AM

Thomas Allweyer: BPM in Practice Discusses ACM, the Internet of Things, and More

“Enterprise BPM 2.0, Adaptive Case Management and the Internet of Things – how does it all fit together?”, asks Dirk Slama, author of the recommendable book “Enterprise BPM”, in his keynote at the workshop “BPM in Practice” on October 9 in Hamburg. Adaptive Case Management and its practical application will be picked up and explored in depth by several speakers in the subsequent parallel tracks. Further topics include the validation of process models in scenarios, process mining, collaboration across tools and organizations, decision management, and the practical path from model to automation in 45 minutes.

The detailed program and a registration form can be found here.

by Thomas Allweyer at September 05, 2014 08:08 AM

September 04, 2014

BPM-Guide.de: New Edition: Praxishandbuch BPMN 2.0

The latest edition is available in stores now – for example at Amazon. Unfortunately, as always, all the reviews of the previous edition are lost on Amazon, meaning we are starting again from zero. So if anyone has the time and inclination to share their opinion of the book there (again), we would be more [...]

by Jakob Freund at September 04, 2014 08:13 AM

September 03, 2014

Keith Swenson: Opening Keynote EDOC 2014: Wil van der Aalst

Wil van der Aalst, the foremost expert in workflow and process mining, spoke this morning on the overlap between data science and business process, and showed how process mining is the super glue between them.  What follows are the notes I made at the event.

Data science is a rapidly growing field.  As evidence he mentioned that Philips currently has 80 openings for data scientists, and plans to hire 50 more every year over the next few years.  That is probably a lot more than computer scientists.  Four main questions for data science:

  • what happened?
  • why did it happen?
  • what will happen in the future?
  • what is the best that could happen?

These are the fundamental questions of data science, and it is incredibly important. A good data scientist is not just computer science, not just statistics, not just databases, but a combination of nine or ten different subjects.

People talk about Big Data, and usually move on to MapReduce, Hadoop, etc.  But this is not the key: he calls that “Big Blah Blah”.  Process is the important subject.  The reason for mining data is to improve the organization or the service it provides.  For example, improve the functioning of a hospital by examining data, or improve the use of X-ray machines.

Process mining breaks out into four fields: process model analysis; the data mining world, which focuses on the data without consideration of the process; performance questions about how well the process is running; and compliance: how many times the process is being done correctly or incorrectly.

He showed an example of a mined process.  It seems that ProM will output SVG animations that can be played back later, showing the flow of tokens through the process.  He talked about the slider in ProM that increases or decreases the complexity of the displayed diagram by selecting or unselecting differing amounts of the unusual traces.  They also show particular instances of a process using red dashed lines placed on top of the normal process in blue solid lines.  He reminded everyone that the diagrams were not modeled, but mined directly from the data without human input.

Data mining is quite a bit more appealing to business people than pure process modeling because it has real performance measures in it.  IT people are also interested because the analytic output relates to real-world situations.  Process mining works at design time, but it also works at run time.  You can mine the processes from event streams as they are being created.

There will be more and more data in the future to mine.  Internet of things: your shaving device will be connected to the internet.  Even the baby’s teething ring will be connected, so that parents will know when the baby is getting teeth.

He showed an ER diagram of key process mining concepts.  Mentioned specifically the XES event format.

Can you mine SAP?  Yes, but a typical SAP installation has tens of thousands of tables.  You need to understand the data model.  You need to scope and select the data for mining.  This is a challenge.  You need to flatten the event data into a nice log table, with case id (instance id), event id, timestamp, activity name, and other attributes.  This produces a flat model without complicated relationships.  Very seldom do people look at more complicated models with many-to-many relationships, and this remains one of the key challenges.
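The flattened shape he describes is essentially one record per event.  A minimal sketch in Java (field names are illustrative; XES defines the real standard attributes):

public record FlatEvent(
        String caseId,                               // the chosen instance id, e.g. a ticket number
        String eventId,
        java.time.Instant timestamp,
        String activityName,
        java.util.Map<String, String> attributes) {} // everything else, kept flat

Note that choosing a different caseId (ticket, seat, or booking in the example below) yields a different process model from the same underlying events.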

He gave an example of booking tickets for a concert venue.  It is easy to extract the events that occurred.  The hard part is to understand what questions you want to ask about the events.  The first choice is to decide what the process instance id is from all the things going on.  If the process is the lifecycle of a ticket, that is one model.  If it is the lifecycle of the seat, you get a different process model.  For the lifecycle of a booking, yet another process is generated.  If we focus on the lifecycle of a ticket, then process mining is complicated by the fact that multiple tickets may share the same booking, and the same set of payments.  What if a band cancels a concert?  That would affect many tickets and many bookings.

Another classical example is Amazon, where you might look at order lines, orders, and/or deliveries.  I can order 2 books today and 3 more tomorrow, and they may come in 4 different shipments spread over the next few weeks.  Try to draw a process model of this using BPMN: it is very difficult.  You need to think clearly about this before you start drawing pictures.

Data quality problems: there may be missing data, incorrect data, imprecise data, and additional irrelevant data.  He gave examples of these for process instances (cases), events, and other attributes.  So in summary there are three main challenges: finding the data, flattening the data, and data quality problems.

He gave 12 guidelines for logging (G4L), so that systems are designed to capture high quality information in the first place, and big data might be able to make use of it later.

Process mining and conformance checking try to say something about the real process, but all you can see are examples of the existing process.  There is a difference between the examples and the real process.  We cannot know what the real process is when we have not seen all possible examples.  If you look at hospital data, there may be one patient who was 80 years old, drunk, and had a problem.  This example may or may not say something about how other people are handled.

  • True Positives: traces possible in the model, and also possible in the real process
  • True Negatives: not possible in the model, and not found in real life
  • False Positives: traces that are possible in the model, but cannot (or did not) happen in reality
  • False Negatives: traces not possible in the model, but that happen in real life.

He showed a Venn diagram of this.  You can try to apply precision metrics to process mining, but you can't do much: your process log only contains a fraction of what is really possible.  From this sample, you can look at what matches the model or not, and that gives you some measure of the log file, but not necessarily of reality.  An event log will never say "this cannot happen"; you only see positive examples.  If you look at a sample of university students, MOST students will follow a unique path.  If you look at hospital patients, most will follow a unique path.  It is hard then to talk about the fraction that fits a particular process.  Consider a silicon wafer test machine: you have one trace with 50,000 events, and no two traces will match exactly with this number of events.
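
In other words, the only measurement the log really supports is how much of the observed behavior the model can replay.  Here is a rough sketch of that idea under my own simplifying assumptions: a trace is a list of activity names, and the model is reduced to the finite set of traces it accepts.

import java.util.List;
import java.util.Set;

class FitnessSketch {
    // A log contains only positive examples, so the best we can measure is the
    // fraction of observed traces that the model can replay. This says something
    // about the log, but nothing about behavior that was never observed.
    static double traceFitness(List<List<String>> logTraces, Set<List<String>> modelTraces) {
        if (logTraces.isEmpty()) {
            return 1.0; // an empty log trivially fits any model
        }
        long fitting = logTraces.stream().filter(modelTraces::contains).count();
        return (double) fitting / logTraces.size();
    }
}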

You are never interested in making a model that fits 100% of the event log; if you had a model that contained all possible traces, it would not be very useful.  He used an analogy of the four forces on an airplane: lift, drag, gravity, thrust.  Lift = fitness (ability to explain the observed behavior), gravity = simplicity (Occam's Razor), thrust = generalization (avoid over-fitting), and drag = precision (avoid under-fitting).  Different situations require differing amounts of each.

Everything so far has been about one process.  How then do you go to multiple processes?  If we look at the study behavior of Dutch students and international students, we might find that the behavior of Dutch students is usually different from that of international students.  Comparative process mining allows you to mine parts of the process and show the two processes side by side.  You are interested in differences in performance, and differences in conformance.  This leads to the notion of a process cube, with dimensions of time, department, location, amount, gender, level, priority, etc.  You can take a database, extract with a particular filter, and generate the process, but this is tedious.  The solution is to put everything in a process cube, and then apply process mining to slices of the cube.  For example, a car rental agency might look at three different offices, three different time periods, and three different types of customers.  He gave a real example of building permits in different Dutch municipalities.
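
A minimal sketch of the slicing idea, using a hypothetical event type of my own (real process cube tooling is far richer): fix some of the dimensions, then group the remaining events by case so that each cell can be mined separately.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class ProcessCubeSketch {
    static class CubeEvent {
        final String caseId, activity, office, period, customerType;
        CubeEvent(String caseId, String activity, String office,
                  String period, String customerType) {
            this.caseId = caseId; this.activity = activity;
            this.office = office; this.period = period;
            this.customerType = customerType;
        }
    }

    // One slice of the cube: fix the office and period dimensions, then
    // group the surviving events by case id, ready for mining per cell.
    static Map<String, List<CubeEvent>> slice(List<CubeEvent> events,
                                              String office, String period) {
        return events.stream()
                     .filter(e -> e.office.equals(office) && e.period.equals(period))
                     .collect(Collectors.groupingBy(e -> e.caseId));
    }
}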

He records all his lectures, and the students can watch the lectures off-line.  There is a lot of interesting data, because they know which parts of the lectures students watch multiple times.  Students can control the speed of playback, so they can also look at which parts students typically play faster.  They are correlating this with grades at the end of the course, and they can compare students from different origins.  Standard OLAP techniques do not generally work here because we are dealing with events.  He showed a model of students who passed versus students who failed.  For students who passed, the most likely first event is "watch lecture 1".  For the students who failed, the most likely first event is "take an exam" (only after failing do they go back and watch the lectures).

In conclusion: many of these things are mature enough to use in an industrial situation, but there are the many challenges mentioned above.  There is a MOOC on Process Mining on Coursera this fall; there are 3000 registered students, and it will start in October.

Q&A

In many years at SAP I have not seen a lot of reflection on past decisions.  Is this really going to be used?  SAP is not designed well to capture events.  If you go to a hospital, things are much easier to mine, even if the systems are built ad hoc.  Also, there is a lack of maturity in process mining: you really need to be trained, and you need to see it work.

Philosophically, does the nature of the process really matter?  It is crucial that you isolate your notion of a process instance.  Once you have identified the process you have in mind, the process mining will work well.  But there is a broad spectrum of process types.  There are spaghetti processes and lasagna processes.  A lasagna process is fairly structured, and process mining of the overall process is not interesting, because people already know it; instead you want to look at bottlenecks.  For spaghetti processes every trace is unique, and the value comes from an aggregate overview of the process and the exceptions.

Is the case management metaphor more valuable than the process management metaphor?  This is an illustration that the classical workflow metaphor is too narrow.  The problem is that in reality there are many-to-many relationships, but when we go to the model we have to simplify.  It is quite important for this community to bridge this gap.  This is probably the main reason that process modeling formats have not become standard: they are too simple.  For example, using the course data, there is a model of the student, and a completely different model of the course, coming from the exact same data.

About real-time event detection: how do you construct a sliding window of events to mine, and how does mining relate to complex event processing?  Event correlation is about how to translate lower-level things into higher-level things.  Generating a model is extremely fast, so this can be done nearly in real time, and map-reduce could be used to distribute some of the processing.  On the other hand, conformance checking is extremely expensive, and the complexity of that problem remains an issue.  We are developing online variants of process mining which no longer require storing the entire event log.
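
One way to picture such an online variant (my sketch, not their implementation): keep only the last activity seen per case plus a bounded window of recent directly-follows relations, and update the counts incrementally as events stream in.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Sketch of an online miner: track the last activity per case, update
// directly-follows counts as events stream in, and evict the oldest
// relations once the window is full, so the whole log is never stored.
class OnlineDfgSketch {
    private final int windowSize;
    private final Deque<String[]> window = new ArrayDeque<>();        // recent [from, to] pairs
    private final Map<String, String> lastActivity = new HashMap<>(); // per case id
    private final Map<String, Integer> dfCounts = new HashMap<>();    // "A -> B" to count

    OnlineDfgSketch(int windowSize) { this.windowSize = windowSize; }

    void onEvent(String caseId, String activity) {
        String previous = lastActivity.put(caseId, activity);
        if (previous == null) return;          // first event of this case
        update(previous, activity, +1);
        window.addLast(new String[]{previous, activity});
        if (window.size() > windowSize) {      // forget the oldest relation
            String[] oldest = window.removeFirst();
            update(oldest[0], oldest[1], -1);
        }
    }

    private void update(String from, String to, int delta) {
        dfCounts.merge(from + " -> " + to, delta, Integer::sum);
    }
}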

What about end users?  In model-driven engineering it is possible to incorporate end users into the engineering; how far are we from involving end users in process mining?  There will probably be different types of end users.  The first type will be data scientists, who do the analysis of the data and get the competitive advantage; once educated, data scientists will have no problem leveraging process mining.  There are other kinds of users who can be involved to varying degrees.  For example, use a map of Germany as a metaphor: some people are very interested in a map, but most people glance at it casually and don't worry about it.  But if you project data onto the map, then a lot more people are interested.  The same with process maps: put information on them that is relevant to people, and people will become more interested and more involved.

 


by kswenson at September 03, 2014 08:54 AM

September 02, 2014

Drools & JBPM: Activity Insight coming in Drools & jBPM 6.2

The next Drools and jBPM 6.2 release will include new Activity pages that provide insight into projects. Early versions of both features should be ready to test drive in the upcoming Beta2 release, at the end of next week.

The first Activity page captures events and publishes them as timelines, as a sort of social activities system, which was previously blogged about in detail here.  Notice that it also now does user profiles. This allows events such as "new repository" or "file edited" to be captured, indexed, and filtered for display in custom user dashboards. It will come with a number of out-of-the-box filters, but should become user-extensible over time.


We have a video here, using an old CSS and layout. The aim is to allow user-configurable dashboards for different activity types.

We have also added Git repository charting for contributors, using the DashBuilder project. There is a short video showing this in action here.



by Mark Proctor (noreply@blogger.com) at September 02, 2014 08:38 PM

September 01, 2014

Thomas Allweyer: Book Review: Process Quality Depends on the Interplay with IT Systems

The title is perhaps a little misleading: this English-language book is not about aligning business processes and information systems in general, but specifically about the effects on process quality. In the first part, the author develops a reference model for comprehensively describing all the different aspects that make up the quality of business processes. The Business Process Quality Reference Model (BPQRM) is based on a standard for the quality of software products. The definitions of the quality attributes used there were adapted terminologically to the domain of business processes. One can certainly ask whether business processes have entirely different quality attributes than software products. A case study from a university hospital shows, however, that the model can be applied well in practice: based on this quality model, a questionnaire for assessing a process was developed, which was successfully used for weak-point analysis and met with high acceptance among the process experts involved.

If you want to integrate comprehensive quality information into a business process model, you face the problem that there are very many different quality attributes, which cannot all be displayed at the same time. An extension for BPMN and other process modeling notations is therefore presented, in which each activity receives icons for the categories of quality attributes used. If, for example, attributes from the categories "maturity" and "availability" are defined for an activity, two icons are displayed for it. Clicking on an icon shows the associated quality attributes and their values in a properties window.

The second part of the book uses a simulation study to investigate the influence that the interplay of business processes and IT systems has on overall process quality. Considering the processes and the IT systems in isolation is less meaningful, since the quality of an otherwise optimized process can be impaired, for instance, by the poor availability of an IT system. As an example, the study examines the quality attribute "performance", which lends itself particularly well to simulation. A case study shows that the author's method for the integrated simulation of processes and IT systems predicts performance better than conventional approaches; in particular, it is better at predicting strong load fluctuations.

The book grew out of a dissertation, which is why the writing style is quite academic. Some longer passages that engage with related work will hardly interest practice-oriented readers. Yet the methods developed by the author have considerable practical relevance, and they provide valuable impulses for the further development of process quality and simulation.


Robert Heinrich:
Aligning Business Processes and Information Systems
New Approaches to Continuous Quality Engineering
Springer 2014.
The book at amazon.

by Thomas Allweyer at September 01, 2014 02:31 PM

August 27, 2014

Thomas Allweyer: Status Quo Prozessmanagement Survey Launched

At regular intervals, the study "Status Quo Prozessmanagement" examines how the various facets of process management are implemented in practice, and which trends can be identified. This time the survey is being conducted jointly by BPM&O and Bearingpoint. All participants in the current study will receive the detailed results at the end of the year.
To the survey: Status Quo Prozessmanagement.

by Thomas Allweyer at August 27, 2014 08:08 AM

August 26, 2014

Drools & JBPM: Pluggable Knowledge with Custom Assemblers, Weavers and Runtimes

As part of the Bayesian work I've refactored much of Kie to have clean extension points. I wanted to make sure that all the working parts for a Bayesian system could be built without adding any code to the existing core.

So now each knowledge type can have its own package, assembler, weaver and runtime. Knowledge is no longer added directly into KiePackage; instead it goes into an encapsulated knowledge package for that domain, which is then added to KiePackage. The assembler stage is used when parsing and assembling the knowledge definitions. The weaving stage weaves those knowledge definitions into an existing KieBase. Finally the runtime encapsulates and provides the runtime for the knowledge.

drools-beliefs contains the Bayesian integration and is a good starting point to see how this works:
https://github.com/droolsjbpm/drools/tree/beliefs/drools-beliefs/

For this to work you add a META-INF/kie.conf file, and it will be discovered and made available:
https://github.com/droolsjbpm/drools/blob/beliefs/drools-beliefs/src/main/resources/META-INF/kie.conf

The file uses the MVEL syntax and specifies one or more services:
[
    'assemblers' : [ new org.drools.beliefs.bayes.assembler.BayesAssemblerService() ],
    'weavers' : [ new org.drools.beliefs.bayes.weaver.BayesWeaverService() ],
    'runtimes' : [ new org.drools.beliefs.bayes.runtime.BayesRuntimeService() ]
]

Github links to the package and service implementations:
Bayes Package
Assembler Service
Weaver Service
Runtime Service


Here is a quick unit test showing things working end to end; notice how the runtime can be looked up and accessed. The test uses the old API, but this will work fine with the declarative kmodule.xml approach too. The only bit that is still hard-coded is ResourceType.BAYES, as ResourceType is an enum. We will probably refactor that to be a standard class instead, so that it's not hard-coded.

The code to lookup the runtime:
StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();
BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);

The unit test:
KnowledgeBuilder kbuilder = new KnowledgeBuilderImpl();
kbuilder.add( ResourceFactory.newClassPathResource("Garden.xmlbif", AssemblerTest.class), ResourceType.BAYES );

KnowledgeBase kbase = getKnowledgeBase();
kbase.addKnowledgePackages( kbuilder.getKnowledgePackages() );

StatefulKnowledgeSessionImpl ksession = (StatefulKnowledgeSessionImpl) kbase.newStatefulKnowledgeSession();

BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);
BayesInstance instance = bayesRuntime.getInstance( Garden.class );
assertNotNull( instance );

jBPM is already refactored out from core and compiler, although it uses its own interfaces for this. We plan to port the existing jBPM approach to this, and eventually all the Drools functionality will be done this way too. This will create a clean KIE core and compiler in which rules, processes, Bayes, or any other user knowledge type are all added as plugins.

A community member is already working on a new type declaration system that will utilise these extensions. Here is an example of what this new type system will look like:
https://github.com/sotty/metaprocessor/blob/master/deklare/src/test/resources/test1.ktd

by Mark Proctor (noreply@blogger.com) at August 26, 2014 11:55 PM

August 25, 2014

Drools & JBPM: Drools - Bayesian Belief Network Integration Part 4

This follows my earlier Part 3 posting in May.

I have integrated the Bayesian system into the Truth Maintenance System, with a first end-to-end test. It's still very raw, but it demonstrates how the TMS can be used to provide evidence via logical insertions.

The BBN variables are mapped to fields on the Garden class. Evidence is applied as a logical insert, using a property reference indicating it's evidence for the variable mapped to that property.  If there is conflicting evidence for the same field, then the fact becomes undecided.

The rules are added via a String, while the BBN is added from a file. This code uses the new pluggable knowledge types, which allow pluggable parsers, builders and runtimes. This is how the Bayesian stuff is added cleanly, without touching the core, but I'll blog about that another time.

String drlString = "package org.drools.bayes; " +
"import " + Garden.class.getCanonicalName() + "; \n" +
"import " + PropertyReference.class.getCanonicalName() + "; \n" +
"global " + BayesBeliefFactory.class.getCanonicalName() + " bsFactory; \n" +
"dialect 'mvel'; \n" +
" " +
"rule rule1 when " +
" String( this == 'rule1') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule 1\"); \n" +
" insertLogical( new PropertyReference(g, 'cloudy'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +

"rule rule2 when " +
" String( this == 'rule2') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule2\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +

"rule rule3 when " +
" String( this == 'rule3') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule3\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {1.0,0.0} ) ); \n " +
"end " +


"rule rule4 when " +
" String( this == 'rule4') \n" +
" g : Garden()" +
"then " +
" System.out.println(\"rule4\"); \n" +
" insertLogical( new PropertyReference(g, 'sprinkler'), bsFactory.create( new double[] {0.0,1.0} ) ); \n " +
"end " +
"\n";

KnowledgeBuilder kBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
kBuilder.add( ResourceFactory.newByteArrayResource(drlString.getBytes()),
ResourceType.DRL );
kBuilder.add( ResourceFactory.newClassPathResource("Garden.xmlbif", AssemblerTest.class), ResourceType.BAYES );

KnowledgeBase kBase = KnowledgeBaseFactory.newKnowledgeBase();
kBase.addKnowledgePackages( kBuilder.getKnowledgePackages() );

StatefulKnowledgeSession ksession = kBase.newStatefulKnowledgeSession();

NamedEntryPoint ep = (NamedEntryPoint) ksession.getEntryPoint(EntryPointId.DEFAULT.getEntryPointId());

BayesBeliefSystem bayesBeliefSystem = new BayesBeliefSystem( ep, ep.getTruthMaintenanceSystem());

BayesBeliefFactoryImpl bayesBeliefValueFactory = new BayesBeliefFactoryImpl(bayesBeliefSystem);

ksession.setGlobal( "bsFactory", bayesBeliefValueFactory);

BayesRuntime bayesRuntime = ksession.getKieRuntime(BayesRuntime.class);
BayesInstance<Garden> instance = bayesRuntime.createInstance(Garden.class);
assertNotNull( instance );

assertTrue(instance.isDecided());
instance.globalUpdate();
Garden garden = instance.marginalize();
assertTrue( garden.isWetGrass() );

FactHandle fh = ksession.insert( garden );
FactHandle fh1 = ksession.insert( "rule1" );
ksession.fireAllRules();
assertTrue(instance.isDecided());
instance.globalUpdate(); // rule1 has added evidence, update the bayes network
garden = instance.marginalize();
assertTrue(garden.isWetGrass()); // grass was wet before rule1 and continues to be wet


FactHandle fh2 = ksession.insert( "rule2" ); // applies 2 logical insertions
ksession.fireAllRules();
assertTrue(instance.isDecided());
instance.globalUpdate();
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // new evidence means grass is no longer wet

FactHandle fh3 = ksession.insert( "rule3" ); // adds an additional support for the sprinkler, belief set of 2
ksession.fireAllRules();
assertTrue(instance.isDecided());
instance.globalUpdate();
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // nothing has changed

FactHandle fh4 = ksession.insert( "rule4" ); // rule4 introduces a conflict, and the BayesFact becomes undecided
ksession.fireAllRules();

assertFalse(instance.isDecided());
try {
instance.globalUpdate();
fail( "The BayesFact is undecided, it should throw an exception, as it cannot be updated." );
} catch ( Exception e ) {
// this should fail
}

ksession.delete( fh4 ); // the conflict is resolved, so it should be decided again
ksession.fireAllRules();
assertTrue(instance.isDecided());
instance.globalUpdate();
garden = instance.marginalize();
assertFalse(garden.isWetGrass() );// back to grass is not wet


ksession.delete( fh2 ); // takes the sprinkler belief set back to 1
ksession.fireAllRules();
instance.globalUpdate();
garden = instance.marginalize();
assertFalse(garden.isWetGrass() ); // still grass is not wet

ksession.delete( fh3 ); // no sprinkler support now
ksession.fireAllRules();
instance.globalUpdate();
garden = instance.marginalize();
assertTrue(garden.isWetGrass()); // grass is wet again

by Mark Proctor (noreply@blogger.com) at August 25, 2014 04:04 AM

August 24, 2014

Keith Swenson: Collective Adaptive Systems (CAS)

The BPM 2014 conference, Sept 7-12, has been moved from Israel to Eindhoven, Holland (because of unrest in the Middle East), and I will be giving a keynote on Wednesday Sept 10.  There will be an interesting workshop on Business Processes in Collective Adaptive Systems (BPCAS’14) on Monday, associated with a group called FoCAS (Fundamentals of Collective Adaptive Systems).

What is a Collective Adaptive System?

Also sometimes called “Adaptive Collective Systems,” they are described as “heterogeneous collections of autonomous task-oriented systems that cooperate on common goals forming a collective system.”  While this is wide open to interpretation, the key point is that the units are assumed to be (potentially) autonomous.  I think this is a more natural way of looking at human organizations, which form automatically from humans who are themselves quite complex and autonomous.

FoCAS describes its purpose:  “The socio-technical fabric of our society more and more depends on systems that are constructed as a collective of heterogeneous components and that are tightly entangled with humans and social structures. Their components increasingly need to be able to evolve, collaborate and function as a part of an artificial society.”

Nature – A strong orientation to working the way that biological systems work.  Natural systems are referenced frequently as they try to tease out the essential capabilities behind the working of ecosystems, cellular systems, herd dynamics, etc.  I particularly like the non-machine, non-Taylorist approach.

Automated or Facilitated? – There are a mix of approaches.  Some of the research seems oriented toward facilitating humans in an organization, and some is toward replacing humans with automated, yet flexible, systems.

Non-Uniform – Another thing I like about this approach is they do not assume that there is a single uniform process system.  So much of BPM research assumes that all actors will interact with a single process.  This approach assumes from the beginning that there will be many diverse components interacting in complex ways.  Diversity is the important ingredient for stability in the face of unexpected changes.

FoCAS offers a free book to get an overview of the situation: “Adaptive Collective Systems: Herding Black Sheep”, 75 pages that cover the need and the various approaches they are trying.

Research Projects

These projects (all in Europe) are associated with FoCAS:

  • Allow Ensembles – human oriented pervasive business processes.  Define processes as flow, but it is expected in real life that the process will need to be changed.  No single system, but the idea that there will be a large number of separate systems.  There are different goals at different levels: individual and collective.  Non-functional requirements are called “utilities” (e.g. reduce smog, increase efficiency).  Processes are defined in cells and cells collaborate.  Example given is a travel scenario for two people that has to be adapted.  Clearly the person involved is able to modify the route, although it is not clear whether they want to make this ‘automatic’ or not.  Supply chain is another example.
  • Assisi | bf – Project to interact with collections of animals (or presumably humans) in order to influence behavior.  Examples were bees and fish.  Ultimately this is for influencing human “swarm intelligence.”  Compares their work to Google, Wikipedia, Facebook, and Twitter.
  • CASSTING – Stands for collective adaptive systems (CAS) and Synthesis With Non-Zero-Sum Games (STING).  They use a game theory approach to evolve the correct independent units.
  • DIVERSIFY – Goal is to learn how biodiversity emerges in ecosystems. These systems are plastic and able to adapt to many kinds of changes.  This is quite different from the software we use today, which is usually picked from one of a small number of variants; this is fragile.  If systems can be made more diverse, they might be more robust.  One challenge is the overlap between math/statistics, computer science, and biology.  What is the nature of software that supports diversity?  Simply scrambling code will not work.
  • QUANTICOL – Quantitative modeling of collective adaptive systems, made of components which have state and communicate with other components.  They are looking at the smart grid. Edinburgh has a bus system which reports positions every 30 seconds, and they are looking at how to adapt to emerging roadwork or traffic patterns.  Can traffic lights be tweaked to optimize the system?   Looks to me like ‘automatic’ adapting, without explicit ways for people to manipulate the system.
  • Smart Society – The key to making something robust is diversity.  Ethics, trust, and reputation matter.  There is a large semantic gap between human systems and computer systems.
  • Swarm organ – Machines and technology are quite fragile, while biology can do amazing things, like self-healing.  They are studying morphogenesis: how do cells form organs, and there are multiple strategies for how this might happen.  Why do it one way rather than another?  The idea is that you might make self-organizing systems that form themselves into the systems that we use.

Net/Net

Robust systems will need to be designed in this way, with a lot of collaborating yet diverse systems, each advocating different goals.  People need to be part of these systems, and must interact fluidly with them.  The collective adaptive systems approach is a distinctly non-Taylorist approach that is worth watching.


by kswenson at August 24, 2014 10:10 AM

August 23, 2014

Sandy Kemsley: Moving Hosts

I’m moving hosts for this blog this weekend; if you can’t reach the site, try clearing your cache or just waiting a while for the new DNS to propagate. Update: all done. If you see anything weird,...

[Content summary only, click through for full article and links]

by sandy at August 23, 2014 04:04 PM

August 21, 2014

Thomas Allweyer: Fraunhofer IAO: First of Several BPM Tool Studies Provides a Market Overview

The Fraunhofer Institute for Industrial Engineering (IAO) in Stuttgart has announced no fewer than four market studies on BPM tools. The first, a general market overview, is already available; it is to be complemented over the course of the year by studies on social BPM, compliance in business processes, and the monitoring of business processes. The market overview covers a total of 28 vendors with 27 tools, and the field of participants is quite heterogeneous: the range extends from simple modeling tools to comprehensive BPM suites including process execution and monitoring. A pure process mining tool is represented as well.

The information, collected from the vendors via an online questionnaire, largely concerns the vendors and their terms. Only a few questions were asked about actual functionality; here the reader is referred to the studies on specific topics that are still to come. We do learn that almost all of the tools considered have an integrated repository, and that BPMN is by far the most widely used notation. In general, the authors of the study see a trend toward more comprehensive tools that support all phases of the process life cycle.

The study first gives an introduction to process management and current developments. The key results of the market overview are then summarized. Further details on the products and vendors can be found in the individual profiles: for each vendor, the answers to the online questionnaire are printed, along with four pages of self-presentation.

The market overview can be downloaded here.

by Thomas Allweyer at August 21, 2014 10:25 AM

August 19, 2014

Drools & JBPM: Drools Mailing List migration to Google Groups

Drools community member,

The Drools team is moving the rules-users and rules-dev lists to Google Groups. This will allow users to have combined email and web access to the group.
New Forum Information : http://drools.org/community/forum.html (click link to view)

The rules-users mailing list has become high volume, and it seems natural to split the group into those asking for help with setup, configuration, installation and administration, and those asking for help with authoring and executing rules. For this reason rules-users will be split into two groups: drools-setup and drools-usage.

Drools Setup - https://groups.google.com/forum/#!forum/drools-setup (click link to subscribe)
Drools Usage - https://groups.google.com/forum/#!forum/drools-usage (click link to subscribe)

The rules-dev mailing list will move to drools-development. 

Drools Development - https://groups.google.com/forum/#!forum/drools-development (click link to subscribe)

Google Groups limits the number of invitations, so we were unable to send them out. For this reason you will need to subscribe manually.

The Drools Team

by Mark Proctor (noreply@blogger.com) at August 19, 2014 03:38 PM

August 15, 2014

Drools & JBPM: Drools Execution Server demo (6.2.0.Beta1)

As some of you know already, we are introducing a new Drools Execution Server in version 6.2.0.

I prepared a quick video demo showing what we have done so far (version 6.2.0.Beta1). Make sure you select "Settings -> 720p" and watch it in full screen.


by Edson Tirelli (noreply@blogger.com) at August 15, 2014 12:53 AM

August 12, 2014

Thomas Allweyer: Open Innovation – Process Optimizations Sought in Radiology

Via the open innovation platform of the medical technology cluster “Medical Valley”, located in the Nuremberg region, experts are being sought to contribute to the solution of concrete problems in medical technology. That these need not always be purely technical solutions is shown by a current call for proposals on process optimization in radiology.

Inefficient information exchange between patients, referring physicians, and radiology centers often leads to long throughput times and high costs. Proposals are therefore sought for improving the entire process of examining patients with imaging procedures. The proposals should in particular also consider suitable IT support. Submissions are possible until September 29, 2014 via the Medical Valley platform.

by Thomas Allweyer at August 12, 2014 08:30 AM

August 11, 2014

Drools & JBPM: JUDCon 2014 Brazil: Call for Papers

The International JBoss Users and Developer Conference, the premier JBoss developer event “By Developers, For Developers,” is pleased to announce that the call for papers for JUDCon: 2014 Brazil, which will be held in São Paulo on September 26th, is now open! Got something to say? Say it at JUDCon: 2014 Brazil! The call for papers ends at 5 PM on August 22nd, 2014 São Paulo time, and selected speakers will be notified by August 29th, so don't delay!

http://www.jboss.org/events/JUDCon/2014/brazil/cfp

by Edson Tirelli (noreply@blogger.com) at August 11, 2014 01:17 PM

August 06, 2014

Thomas Allweyer: Congress on Process Management in the Financial Industry

Process managers from banks and insurance companies will meet from October 27 to 29 in Wiesbaden for the “PEX Process Excellence Finance”. How do you achieve process excellence and agility in an ever more strongly regulated environment? This question is likely to occupy many of the participants. Numerous practical talks by speakers from renowned financial institutions will provide ample material for discussion.

For example, it will be presented how process management helped a bank successfully convert its business model from a transaction bank to a provider of securities settlement services. As before, many banks are busy further industrializing the production of their services, so the role of shared service centers, the integration of service partners, and the improvement of customer orientation in back-office processes are on the program. The increasing digitalization of banking is also on the agenda in Wiesbaden, as are success factors for process and change management.

Together with Sven Schnägelberger, I will give a workshop presenting an overview of current developments in BPM tools and technologies, with guidance on selecting the right solution.

Further information is available on the PEX Finance 2014 website.

 

by Thomas Allweyer at August 06, 2014 11:25 AM

August 04, 2014

Keith Swenson: Organize for Complexity Book

Niels Pflaeging’s amazing little book, Organize for Complexity, gives good advice on how to create self managing organization that are resilient and stable.

There is a lot to like about the book.  It is short: only 114 pages.  Lots of hand-drawn diagrams illustrate the concepts.  Instead of bogging down in lengthy descriptions, it keeps statements clear and to the point.

Alpha and Beta

Alpha is a Taylorist way of running an organization.  It is the embodiment of command & control, Theory X, hierarchical, structured, machine-like, bureaucratic traditional organizations.  The reason that alpha-style organizations have worked is an accident of history.  Marketplaces, and subsequently manufacturing environments, were long ago quite complex, but the dawn of the industrial age brought a century or so in which markets were sluggish and complexity was much diminished.  During this period of diminished complexity, alpha-style organizations were able to thrive.  However, this came to an end in the 1970s or 1980s, and the world has become more complex again.

Beta is the style of organizing that is effective at dealing with complexity, with a focus on Theory Y, decentralization, agility, and self-organization.  He suggests we should form people into teams with a clear boundary.  Keep everything completely transparent within the team so everyone knows what is going on.  Give challenges to the entire team (or better, let them self-identify the tasks) and recognize accomplishments of the team, not of individuals.  Done correctly, the members of the teams will work out the details, taking on the tasks best suited to themselves, without regard to roles, titles, job positions, status symbols, etc.

The book spends a good deal of time motivating why this works.   One subject I have covered a lot on this blog: a machine-like approach cannot work against complexity.  Analytic decomposition of a complex situation, and addressing parts of a complex system, can actually do more harm than good.  The one ‘silver bullet’ is that human beings have the ability to work in the face of complexity, so you must set up the organization to leverage native human intelligence. (Reminds me of human 1.0.)

Networked Organizations

The goal is to make an organization networked along informal lines, and also along value-creating lines.  Instead of a centralized command center pushing ideas out, the network is formed with a periphery which deals directly with the market, while the center supports the periphery.  The network is driven by the periphery, very much the same as a pull organization.  I agree, and have argued that such an organization is indeed more robust and able to handle complexity (see ‘“Pull” Systems are Antifragile’).  The networked organization decentralizes decision making, putting it closer to the customer, resulting in faster and better decisions.

Leadership

Since teams are self-organizing, leadership works a little … differently.  Leadership needs to focus on improving the system, and not so much on the tasks and activities.  Radical transparency, connectedness, and team culture are all important; you might even call it collaborative planning.  He even spends some time discussing the steps you might have to take to transform an organization from an ‘alpha’ to a ‘beta’ working mode.

Summary

I really love the book.  It should be quite accessible to managers and leaders in any organization.  Like most inspirational books, it makes things sound easier than they are.  Ideally, each team, and each team member, would get paid in proportion to the value the team/member provides each time period, as if the organization were a form of idealized market.  But some forms of value are nebulous and defy measurement.  Also, people band into organizations in order to gain the stability that comes from a fixed structure, so that they don’t have to worry about how their own bills will be paid at the end of the month.  There will always be someone taking the risk, and as a result having a commanding influence.  One can’t be a purist; it is pragmatic to expect that a mixture of alpha and beta will always be in force.  Still, the book gives an excellent overview of the principles of a networked organization to strive for, along with a reasonable explanation of why they work, as the title suggests, in the face of complexity.

 


by kswenson at August 04, 2014 02:35 PM

August 01, 2014

Keith Swenson: The third era of process support: Empathy

Rita Gunther McGrath’s post this week on the HBR Blog called Management’s Three Eras: A Brief History has a lesson for those of us designing business process technology.  The parallel between management and process technology might be stronger than we normally admit.

According to McGrath, management didn’t really exist before the industrial revolution, at which time it came into being to coordinate these larger organizations.  The organization was conceptualized as a machine to produce products.  The epitome of this thinking is captured by F.W. Taylor and others who preached scientific management.

Early process technology was similarly oriented around viewing the organization as a machine.  Workflow, and later business process management (BPM), was all about finding the one best process, and constructing machinery to help enforce those best processes.

The second phase of management emerged in the decades after WWII, when organizations started to focus on expertise and to provide services.  Peter Drucker invented the term “knowledge work” and Douglas McGregor described Theory Y, a management style distinguished from the earlier Theory X.  Command and control does not work, and a new contract with workers is needed to retain their talent and expertise.

There is a second phase in process technology as well, with the recent dramatic rise of interest in case management technologies to support knowledge workers, allow them to leverage their expertise, and enable the far more agile organizations necessary to provide services.

McGrath proposes that we are at the dawn of a third era in management.  The first era was machine-like, to produce products; the second collaborative, to provide advanced services; the third will be to create “complete and meaningful experiences.”  She says this is a new era of empathy.  A pull organization would be empathetic in the sense that customer desires rather directly drive the working of the organization.  This might be the management style that Margaret Wheatley, Myron Kellner-Rogers, Fritjof Capra, and other new-path writers are hinting at.

We should brace ourselves for a similar emergence of technology that will enhance and improve our ability to work together in this more empathetic style.  A hyper-social organization might be the organizing principle.  What will that new process technology look like?  I don’t know, but we have some time to sort that out.

Management I emerged in the 1800s to 1950, while that early process technology appeared in the 1980s and 1990s.   Management II emerged in the 1950s and 1960s, and the corresponding process technology started appearing in a real way around 2010.  If Management III is appearing now, perhaps we have until 2020 to get to the point where the technology to support it is being worked out. That leaves us plenty of time to work out the details.

Or maybe not.  What if Management III is emerging concomitant with the social and enterprise 2.0 technology we see starting to be used today?  What if Management I was originally tied inherently to the rise of steam and electric power, while Management II inherently came with the technology of telephones and telefaxes?  If Management III is tied directly to new social technologies, it might be that by the time it fully emerges, the technology base will be set.  We see the technology support for Management I & II as separate because the information technology came later, but that is not the case for Management III.  It might be happening now.

Surely in the future, when we look back on these times, we will recognize the early attempts at systems that support an empathetic style of management starting here and now.  We need only look for it, and recognize it for what it is.

 


by kswenson at August 01, 2014 02:34 PM

Thomas Allweyer: The Winners of the BPMS Book Have Been Decided

Many thanks to everyone who took part in the drawing for the BPMS book.

One copy each was won by:

  • Dr. Wiebke Dresp, Rösrath
  • Tim Pidun, Dresden
  • Dr. Tobias Walter, Offenbach

Congratulations! The books are on their way to you.

More information about the book at www.kurze-prozesse.de/bpms-buch

by Thomas Allweyer at August 01, 2014 10:49 AM

July 30, 2014

Sandy Kemsley: BP3 Brazos Portal For IBM BPM: One Ring To Rule Them All?

Last week BP3 announced the latest addition to their Brazos line of UI tooling for IBM BPM: Brazos Portal. Scott Francis gave me a briefing a few days before the announcement, and he had Ivan...

[Content summary only, click through for full article and links]

by sandy at July 30, 2014 12:38 PM

July 28, 2014

BPinPM.net: Leading BPM – Agenda of 2014 BPinPM.net Conference revealed!

In a growing number of organizations, the focus of BPM is moving towards leadership-oriented topics to increase the acceptance and benefit of process management systems. Basics like process modeling and compliance management are already quite mature and widely discussed. Thus, we are going to put the motto “Leading BPM” into reality and will pay attention to upcoming areas such as “real” BPM training (not system training) for employees and management, change management aspects, and activities to strengthen the acceptance of BPM systems.

To facilitate these topics, we joined forces with BPM experts from business areas like engineering, finance, aerospace, the social sector, and the chemical industry to perform a number of workshops to identify best practices. Combined with the latest insights from the scientific world and practical examples, the results of the workshops will be presented at the 2014 BPinPM.net Process Management Conference on Nov 24/25 at the Lufthansa Training & Conference Center Seeheim in the Frankfurt area, Germany.

In addition, we will focus on future-oriented BPM topics and present detailed results of our “Digital Age BPM” workshop series, which we performed in cooperation with digital leadership expert Willms Buhse and his doubleYUU team. In a group of five organizations from various sectors, we experimented with bringing together BPM and digital-age aspects such as social media, web 2.0, agile management, and mobile devices. The results are quite fascinating and will be presented on day two of the conference.

For the first time ever, we will give the BPM2thePeople Award to an organization from the social or education sector for its achievements in applying BPM methods. Read more about the award on its website: http://www.BPM2thePeople.org

For sure, the conference will also offer enough space for knowledge exchange with other BPM experts. And especially to facilitate networking, we will offer several speed-dating sessions during the breaks.

Finally, Samsung will support us with the latest mobile devices to continue our paperless conference approach and to enable live polls and digital networking. Many thanks to Samsung! :-)

So don’t miss this year’s BPinPM.net Conference and register now!

 

 

PS: Currently, we are offering an early bird discount of 10 percent!

Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international BPinPM.net community as well. Please feel free to contact the team.

by Mirko Kloppenburg at July 28, 2014 12:05 PM

Keith Swenson: Wirearchy – a pattern for an adaptive organization?

What is a Wirearchy?  How does it work?  When should it be considered?  When should it be avoided?  What are the advantages?  This post covers the basic elements of a Wirearchy.

What is a Wirearchy?

Jon Husband has a blog “wirearchy.com” which as you can tell from the name is dedicated to the subject.

It is an organizing principle.  Instead of the top-down, command-and-control hierarchy that we are used to, a wirearchy organizes around champions and channels.  It is an organization designed for a networked world.  He says:

The working definition of Wirearchy is “a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology”.

The description reads a little like the Communist Manifesto, with the employee being liberated from the oppression of bureaucracy, where “rapid flows of information are like electronic grains of sand, eroding the pillars of rigid traditional hierarchies.”  There is no doubt that information technology is having a profound effect on how we organize, and a Wirearchy is an honest attempt to distill the trends that are already happening around us.

Taylorism

Husband feels that Taylorism, or Scientific Management, is coded into the traditional hierarchy.  Scientific management can be seen as the application of Enlightenment (reductionist) principles to work processes: breaking highly complicated manufacturing into a sequence of discrete, well-defined steps, so that work can be passed from person to person in a factory-like setting. It is surprising that he draws a parallel between hierarchies and scientific management, because the latter is between 100 and 200 years old, while hierarchies have been used since ancient times and don’t seem to be related to the industrial revolution at all.  Hierarchies worked for the Egyptians.

“first we shape our structures, then our structures shape us” -Churchill

Is it Technology?

Husband claims that the concept of wirearchy has nothing to do with technology.  I think I know what he means: it is an organization of human interactions, not something designed into a piece of software.  Thus a wirearchy would be what we used to call “the grapevine”, an informal network of communications.  In this sense wirearchies have always existed.

To say that it has nothing to do with technology is not really honest.  It is the expansion of telecommunications technologies that allows so many more people to be connected than before.  It is information technology that allows a wirearchy to be more than just a gossip network.

Indeed Husband seems to contradict himself.  Consider the advice to a manager: “become knowledgeable about online work systems and how the need for collaboration is changing the nature of work.”   A wirearchy is not instigated by a specific technology system, but there is no doubt that a wirearchy results from the new modes of communication of social technology in general.

Not a Revolution

Husband does not expect traditional hierarchies to be replaced by wirearchies.  Hierarchies remain, but wirearchies explain some of the changes we are seeing in the interconnected world.

I really want to compare this to Francois Gossieaux’s “Human 1.0″, which holds that social technologies are allowing us to work together in a much more natural way.  People have always built their own networks, but during the industrial revolution there was a strong incentive to organize into much more rigid organizational structures.  Call those rigid structures from industrialization and scientific management “human 2.0″.  Social networks will then allow us to be just as productive, while getting back to relating to each other in the way that people always have.

The Big Shift: Push vs. Pull

Hagel et al. talk about social technology bringing about a shift from push-oriented organizations to pull organizations.  The point of a wirearchy is that initiatives do not start from the top and get pushed to the workers.  Instead, initiatives can start from any place, and be carried out by ad-hoc teams that know each other and share common goals.  That sounds very much like a pull organization: the edges of the organization, in direct contact with the customer, make key decisions about what will be offered, and are then supported by the rest of the organization to deliver the results.  The hierarchy does not go away; instead, the focus is on how it is used, and where the initiative comes from.

Agility

One of the central themes is responsiveness to change.  He says people should “be aware of, and identify, the changes and prepare for more change on an ongoing basis.”  In other words, prepare to be agile.  Don’t forget, it was Alvin Toffler in his 1970 book “Future Shock” who said exactly the same thing: in the future, success will depend less on perfecting a particular mode of work, and more on learning how to rapidly and continually adopt new patterns of work. The idea that we need to adapt quickly is not new.

But Still … Highly Relevant

Reading the above, I may seem critical of the originality of wirearchy, but let me clarify: wirearchy is a way of seeing and talking about what is happening.  Many others are seeing the same thing, and that is why it is so important.  He has written a number of posts highlighting this.

Harold Jarche has also written a number of posts on wirearchy.

Net-Net

Organizations that do not adapt to the changes that social technology brings to the market and to the office will be left behind by those who do.  There is no question that such pressures exist.  It is useful to talk about a wirearchy as a view of how organizations are changing, and as a guiding principle to help determine the best future course of action available to organizations.

 

 


by kswenson at July 28, 2014 10:39 AM

July 25, 2014

Thomas Allweyer: Agile Methods Continue to Advance

For the second time since 2012, the BPM lab at Koblenz University of Applied Sciences, under the direction of Ayelt Komus, has surveyed the adoption of agile methods. The study’s authors were pleased to count more than 600 participants from 30 nations. “Two years later, agile methods such as Scrum and IT Kanban are even better established, and have increasingly arrived in daily practice outside software development as well,” the authors summarize.

Almost two thirds of the participants started working agile only within the last four years. Agile methods are mostly not applied in their pure form, but in combination with elements of other, often classical approaches. Scrum is still the most widely used method, but Kanban and design thinking show significantly higher growth rates than other methods. Overall, the current survey again rated agile methods considerably more positively and more successful than classical project management methods.

The final report of the study is available from www.status-quo-agile.de.

by Thomas Allweyer at July 25, 2014 10:21 AM

July 21, 2014

Drools & JBPM: Drools Executable Model (Rules in pure Java)

The Executable Model is a re-design of the lowest-level Drools model handled by the engine. In the current series (up to 6.x) the executable model has grown organically over the last 8 years, and was never really intended to be targeted by end users. Those wishing to write rules programmatically were advised to do it via code generation and target DRL, which was not ideal. There was never any drive to make this more accessible to end users, because the extensive use of anonymous classes in Java would have been unwieldy. With Java 8 and lambdas this changes, and the opportunity to make a more compelling model that is accessible to end users becomes possible.

This new model is generated during the compilation process of higher-level languages, but can also be used on its own. The goal is for this Executable Model to be self-contained and avoid the need for any further byte code munging (analysis, transformation or generation). From this model's perspective, everything is provided either by the code or by higher-level language layers. For example, indexes must be provided as arguments, which the higher-level language generates through analysis when it targets the Executable Model.
   
It is designed to map well to fluent builders, leveraging Java 8's lambdas. This will make it more appealing to Java developers and language developers. It also allows low-level engine features to be designed and tested independent of any language, which means we can innovate at the engine level without having to worry about the language layer.
   
The Executable Model should be generic enough to map into multiple domains. It will be a low-level dataflow model in which you can address functional reactive programming models, but still be usable to build a rule-based system too.

The following example provides a first view of the fluent DSL used to build the executable model
         
DataSource persons = sourceOf(new Person("Mark", 37),
                              new Person("Edson", 35),
                              new Person("Mario", 40));

Variable<Person> markV = bind(typeOf(Person.class));

Rule rule = rule("Print age of persons named Mark")
    .view(
        input(markV, () -> persons),
        expr(markV, person -> person.getName().equals("Mark"))
    )
    .then(
        on(markV).execute(mark -> System.out.println(mark.getAge()))
    );

The previous code defines a DataSource containing a few person instances and declares the Variable markV of type Person. The rule itself contains the usual two parts: the LHS is defined by the set of inputs and expressions passed to the view() method, while the RHS is the action defined by the lambda expression passed to the then() method.

Analyzing the LHS in more detail, the statement
         
input(markV, () -> persons)
binds the objects from the persons DataSource to the markV variable, pattern matching by the object class. In this sense the DataSource can be thought of as the equivalent of a Drools entry-point.

Conversely the expression
         
expr(markV, person -> person.getName().equals("Mark"))
uses a Predicate to define a condition that the object bound to the markV Variable has to satisfy in order to be successfully matched by the engine. Note that, as anticipated, the pattern matching is not evaluated by a constraint generated as the result of any sort of analysis or compilation process; it is merely executed by applying the lambda expression implementing the predicate (in this case, person -> person.getName().equals("Mark")) to the object to be matched. In other words, this DSL produces the executable model of a rule that is equivalent to the one resulting from parsing the following drl:
         
rule "Print age of persons named Mark"
when
markV : Person( name == "Mark" ) from entry-point "persons"
then
System.out.println(markV.getAge());
end
A rete builder that can be fed the rules defined with this DSL is also under development. In particular, it is possible to add these rules to a CanonicalKieBase and then create KieSessions from it as for any other normal KieBase.
         
CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(rule);

KieSession ksession = kieBase.newKieSession();
ksession.fireAllRules();
Of course the DSL also allows you to define more complex conditions, like joins:
         
Variable<Person> markV = bind(typeOf(Person.class));
Variable<Person> olderV = bind(typeOf(Person.class));

Rule rule = rule("Find persons older than Mark")
        .view(
                input(markV, () -> persons),
                input(olderV, () -> persons),
                expr(markV, mark -> mark.getName().equals("Mark")),
                expr(olderV, markV, (older, mark) -> older.getAge() > mark.getAge())
        )
        .then(
                on(olderV, markV)
                        .execute((p1, p2) -> System.out.println(p1.getName() + " is older than " + p2.getName()))
        );
or existential patterns:
 
Variable<Person> oldestV = bind(typeOf(Person.class));
Variable<Person> otherV = bind(typeOf(Person.class));

Rule rule = rule("Find oldest person")
        .view(
                input(oldestV, () -> persons),
                input(otherV, () -> persons),
                not(otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge())
        )
        .then(
                on(oldestV)
                        .execute(p -> System.out.println("Oldest person is " + p.getName()))
        );
Here not() stands for the negation of an expression, so the form used above is actually just a shortcut for
 
not( expr( otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge() ) )
Accumulate is also already supported, in the following form:
 
Variable<Person> person = bind(typeOf(Person.class));
Variable<Integer> resultSum = bind(typeOf(Integer.class));
Variable<Double> resultAvg = bind(typeOf(Double.class));

Rule rule = rule("Calculate sum and avg of all persons having a name starting with M")
        .view(
                input(person, () -> persons),
                accumulate(expr(person, p -> p.getName().startsWith("M")),
                           sum(Person::getAge).as(resultSum),
                           avg(Person::getAge).as(resultAvg))
        )
        .then(
                on(resultSum, resultAvg)
                        // "result" is a holder object defined outside this snippet
                        .execute((sum, avg) -> result.value = "total = " + sum + "; average = " + avg)
        );
To provide one last, more complete use case, the executable model of the classical fire and alarm example can be defined with this DSL as follows.
 
Variable<Room> room = any(Room.class);
Variable<Fire> fire = any(Fire.class);
Variable<Sprinkler> sprinkler = any(Sprinkler.class);
Variable<Alarm> alarm = any(Alarm.class);

Rule r1 = rule("When there is a fire turn on the sprinkler")
        .view(
                input(fire),
                input(sprinkler),
                expr(sprinkler, s -> !s.isOn()),
                expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
        )
        .then(
                on(sprinkler)
                        .execute(s -> {
                            System.out.println("Turn on the sprinkler for room " + s.getRoom().getName());
                            s.setOn(true);
                        })
                        .update(sprinkler, "on")
        );

Rule r2 = rule("When the fire is gone turn off the sprinkler")
        .view(
                input(sprinkler),
                expr(sprinkler, Sprinkler::isOn),
                input(fire),
                not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
        )
        .then(
                on(sprinkler)
                        .execute(s -> {
                            System.out.println("Turn off the sprinkler for room " + s.getRoom().getName());
                            s.setOn(false);
                        })
                        .update(sprinkler, "on")
        );

Rule r3 = rule("Raise the alarm when we have one or more fires")
        .view(
                input(fire),
                exists(fire)
        )
        .then(
                execute(() -> System.out.println("Raise the alarm"))
                        .insert(() -> new Alarm())
        );

Rule r4 = rule("Lower the alarm when all the fires have gone")
        .view(
                input(fire),
                not(fire),
                input(alarm)
        )
        .then(
                execute(() -> System.out.println("Lower the alarm"))
                        .delete(alarm)
        );

Rule r5 = rule("Status output when things are ok")
        .view(
                input(alarm),
                not(alarm),
                input(sprinkler),
                not(sprinkler, Sprinkler::isOn)
        )
        .then(
                execute(() -> System.out.println("Everything is ok"))
        );

CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(r1, r2, r3, r4, r5);

KieSession ksession = kieBase.newKieSession();

// phase 1
Room room1 = new Room("Room 1");
ksession.insert(room1);
FactHandle fireFact1 = ksession.insert(new Fire(room1));
ksession.fireAllRules();

// phase 2
Sprinkler sprinkler1 = new Sprinkler(room1);
ksession.insert(sprinkler1);
ksession.fireAllRules();

assertTrue(sprinkler1.isOn());

// phase 3
ksession.delete(fireFact1);
ksession.fireAllRules();
In this example it's possible to note a few more things:

  • Some repetitions are necessary to bind the parameters of an expression to the formal parameters of the lambda expression evaluating it. Hopefully it will be possible to overcome this issue using the -parameters compilation argument once this JDK bug is resolved.
  • any(Room.class) is a shortcut for bind(typeOf(Room.class))
  • The inputs don't declare a DataSource. This is a shortcut to state that those objects come from a default empty DataSource (corresponding to the Drools default entry-point). In fact, in this example the facts are programmatically inserted into the KieSession.
  • Using an input without providing any expression for that input is actually a shortcut for input(alarm), expr(alarm, a -> true)
  • In the same way, an existential pattern without any condition, like not(fire), is another shortcut for not( expr( fire, f -> true ) ) (both expansions are shown in the sketch after this list)
  • Java 8 syntax also allows a predicate to be defined as a method reference accessing a boolean property of a fact, as in expr(sprinkler, Sprinkler::isOn)
  • The RHS, together with the block of code to be executed, also provides a fluent interface to define the working memory actions (inserts/updates/deletes) that have to be performed when the rule is fired. In particular, update also takes a varargs of Strings reporting the names of the properties changed in the updated fact, as in update(sprinkler, "on"). Once again this information has to be provided explicitly, because the executable model has to be created without the need for any code analysis.
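To make the last two shortcuts concrete, here is rule r4 from the example above rewritten with the expansions spelled out by hand. This is only a sketch, not code from the post, and it assumes the same statically imported DSL factory methods used throughout:

// Hedged sketch: r4 with the shortcuts expanded manually, assuming the same
// statically imported DSL factory methods used in the examples above.
Rule r4Expanded = rule("Lower the alarm when all the fires have gone")
        .view(
                input(fire),
                not(expr(fire, f -> true)),   // expanded form of not(fire)
                input(alarm),
                expr(alarm, a -> true)        // expansion implied by the bare input(alarm)
        )
        .then(
                execute(() -> System.out.println("Lower the alarm"))
                        .delete(alarm)
        );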

by Mario Fusco (noreply@blogger.com) at July 21, 2014 04:48 PM

July 20, 2014

Drools & JBPM: jBPM6 Developer Guide coming out soon!

Hello everyone. This post is just to let you know that jBPM6 Developer Guide is about to be published, and you can pre-order it from here and get a 20% to 37% discount on your order! With this book, you can learn how to:
  • Model and implement different business processes using the BPMN2 standard notation
  • Understand how and when to use the different tools provided by the JBoss Business Process Management (BPM) platform
  • Learn how to model complex business scenarios and environments through a step-by-step approach
Here is a list of what you will find in each chapter:

Chapter 1, Why Do We Need Business Process Management?, introduces the BPM discipline. This chapter will provide the basis for the rest of the book, by providing an understanding of why and how the jBPM6 project has been designed, and the path its evolution will follow.  
Chapter 2, BPM Systems Structure, goes in depth into understanding what the main pieces and components inside a Business Process Management System (BPMS) are. This chapter introduces the concept of BPMS as the natural follow up of an understanding of the BPM discipline. The reader will find a deep and technical explanation about how a BPM system core can be built from scratch and how it will interact with the rest of the components in the BPMS infrastructure. This chapter also describes the intimate relationship between the Drools and jBPM projects, which is one of the key advantages of jBPM6 in comparison with all the other BPMSs, as well as existing methodologies where a BPMS connects with other systems.
Chapter 3, Using BPMN 2.0 to Model Business Scenarios, covers the main constructs used to model our business processes, guiding the reader through an example that illustrates the most useful modeling patterns. The BPMN 2.0 specification has become the de facto standard for modeling executable business processes since it was released in early 2011, and is recommended to any BPM implementation, even outside the scope of jBPM6.  
Chapter 4, Understanding the Knowledge Is Everything Workbench, takes a look into the tooling provided by the jBPM6 project, which will enable the reader to both define new processes and configure a runtime to execute those processes. The overall architecture of the tooling provided will be covered as well in this chapter.
Chapter 5, Creating a Process Project in the KIE Workbench, dives into the required steps to create a process definition with the existing tooling, as well as to test it and run it. The BPMN 2.0 specification will be put into practice as the reader creates an executable process and a compiled project where the runtime specifications will be defined.
Chapter 6, Human Interactions, covers in depth the Human Task component inside jBPM6. A big feature of BPMS is the capability to coordinate human and system interactions. It also describes how the existing tooling builds a user interface using the concepts of task lists and task forms, exposing the end users involved in the execution of multiple process definitions’ tasks to a common interface.
Chapter 7, Defining Your Environment with the Runtime Manager, covers the different strategies provided to configure an environment to run our processes. The reader will see the configurations for connecting external systems, human task components, persistence strategies and the relation a specific process execution will have with an environment, as well as methods to define their own custom runtime configuration.
Chapter 8, Implementing Persistence and Transactions, covers the shared mechanisms between the Drools and jBPM projects used to store information and define transaction boundaries. When we want to support processes that coordinate systems and people over long periods of time, we need to understand how the process information can be persisted.  
Chapter 9, Integration with other Knowledge Definitions, gives a brief introduction to the Drools Rule Engine, which is used to mix business processes with business rules to define advanced and complex scenarios. We also cover Drools Fusion, an added feature of the Drools Rule Engine that provides temporal reasoning, allowing business processes to be monitored, improved, and covered by business scenarios that require temporal inferences.  
Chapter 10, KIE Workbench Integration with External Systems, describes the ways in which the provided tooling can be extended with extra features, along with a description of all the different extension points provided by the API and exposed by the tooling. A set of good practices is described in order to give the reader a comprehensive way to deal with different scenarios a BPMS will likely face.
Appendix A, The UberFire Framework, goes into detail about the base utility framework used by the KIE Workbench to define its user interface. The reader will learn the structure and use of the framework, along with a demonstration that will enable the extension of any component in the workbench distribution you choose.

Hope you like it! Cheers,

by Marian Buenosayres (noreply@blogger.com) at July 20, 2014 09:10 PM

July 18, 2014

Drools & JBPM: Kie Uberfire Social Activities

The Uberfire Framework has a new extension: Kie Uberfire Social Activities. In this initial version, this Uberfire extension provides an extensible architecture to capture, handle, and present (in a timeline style) configurable types of social events.


  • Basic Architecture
An event is any type of CDI event and is handled by its respective adapter. The adapter is a CDI managed bean which implements the SocialAdapter interface. The main responsibility of the adapter is to translate a CDI event into a Social Event. This social event is captured and persisted by Kie Uberfire Social Activities in the respective timelines (basically a user timeline and a type timeline).
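As a rough illustration, an adapter might look something like the sketch below. The event type DocumentCreated and every member except getTimelineFilters() (which is discussed later in this post) are hypothetical, since the full SocialAdapter interface is not shown here:

import java.util.Collections;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;

// Hedged sketch only: the method names and types other than
// getTimelineFilters() are illustrative, not the actual SocialAdapter API.
@ApplicationScoped
public class DocumentCreatedAdapter implements SocialAdapter<DocumentCreated> {

    // Translate the plain CDI event into a Social Event that the framework
    // will persist into the user and type timelines.
    @Override
    public SocialActivitiesEvent toSocialEvent(DocumentCreated cdiEvent) {
        return new SocialActivitiesEvent(cdiEvent.getUser(), "DOCUMENT_CREATED");
    }

    // Pluggable URL filters for this adapter's timeline (described below).
    @Override
    public List<TimelineFilter> getTimelineFilters() {
        return Collections.emptyList();
    }
}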

This is the basic architecture and workflow of this component:

Basic Architecture


  • Timelines

There are many ways to interact with and display a timeline. This section will briefly describe each of them.

a-) Atom URL

Social Activities provides a custom URL for each event type. This URL is accessible at http://project/social/TYPE_NAME.



The user timelines work the same way, being accessible at http://project/social-user/USER_NAME.

Another cool feature is that an adapter can provide its own pluggable URL filters. By implementing the method getTimelineFilters of the SocialAdapter interface, it can do anything it wants with its timeline. These filters are accessible through query parameters, e.g. http://project/social/TYPE_NAME?max-results=1.


b-) Basic Widgets

Social Activities also includes some basic (extensible) widgets. There are two types of timeline widgets: simple and regular widgets.

Simple Widget

Regular Widget

The ">" symbol on 'Simple Widget' is a pagination component. You can configure it by an easy API. With an object SocialPaged( 2 ) you creates a pagination with 2 items size. This object helps you to customize your widgets using the methods canIGoBackward() and canIGoForward() to display icons, and  forward() and backward() to set the navigation direction.
The Social Activities component has initial support for avatars. If you provide a user e-mail to the API, the gravatar image will be displayed in these widgets.


c-) Drools Query API

Another way to interact with a timeline is through the Social Timeline Drools Query API. This API executes one or more DRLs against all cached events in a timeline. It's a great way to merge different types of timelines.



  • Followers/Following Social Users

A user can follow another social user. When a user generates a social event, this event is replicated into the timelines of all his followers. Social Activities also provides basic widgets to follow another user, show all social users, and display a user's following list.


It is important to mention that the current implementation lists social users through a "small hack": we search the Uberfire default git repository for branch names (each Uberfire user has his own branch) and extract the list of social users from them.

This hack is needed because we don't have direct access to the user base (due to the container-based authentication).



  • Persistence Architecture

The persistence architecture of Social Activities is built on two concepts: a local cache and file persistence. The local cache is an in-memory cache that holds all recent social events. These events are kept only in this cache until the max-events threshold is reached. The size of this threshold is configured by the system property org.uberfire.social.threshold (default value 100).
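Assuming the property is read at startup, the threshold could be raised either with a JVM flag or programmatically, for example:

// Raise the max-events threshold from its default of 100; equivalent to
// starting the JVM with -Dorg.uberfire.social.threshold=500
System.setProperty("org.uberfire.social.threshold", "500");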

When the threshold is reached, Social Activities persists the current cache to the file system (the system.git repository, social branch). Inside this branch there is a social-files directory with the following structure:



  • userNames: a file that contains the names of all social users
  • a file for each user (named after the user) that contains the user's data as JSON
  • a directory for each social event type
  • a directory USER_TIMELINE that contains the specific user timelines


Each directory keeps a file LAST_FILE_INDEX that points to the most recent timeline file.




Inside each file there is a persisted list of Social Events in JSON format:

({"timestamp":"Jul16,2014,5:04:13PM","socialUser":{"name":"stress1","followersName":[],"followingName":[]},"type":"FOLLOW_USER","adicionalInfo":["follow stress2"]})

Separating the individual JSON entries there is a HEX value with the size in bytes of the JSON. The file is read by Social Activities in reverse order.

The METADATA file currently holds only the number of social events in that file (used for pagination support).

It is important to mention that this whole structure is transparent to the widgets and to pagination. The whole file structure and the respective cache are MERGED to compose a timeline.

  • Clustering
In case your application uses Uberfire in a cluster environment, Kie Social Activities also supports distributed persistence. Its cluster sync is built on top of the UberfireCluster support (Apache Zookeeper and Apache Helix).


Each node broadcasts social events to the cluster via a cluster message SocialClusterMessage.NEW_EVENT containing the Social Event data. With this message, all the nodes receive the event and can store it in their own local caches. At that point, all node caches are consistent.
When the cache of a node reaches the threshold, that node locks the filesystem and persists its cache there. Then the node sends a SOCIAL_FILE_SYSTEM_PERSISTENCE message to the cluster, notifying all the nodes that the cache has been persisted to the filesystem.
If any node receives a new event during this persistence process, that stale event is merged during the sync.
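Conceptually, a node's handling of these cluster messages might look like the sketch below; only the two message names come from the description above, everything else is illustrative:

// Hedged sketch of the cluster sync described above; not the actual API.
void onClusterMessage(SocialClusterMessage type, SocialActivitiesEvent event) {
    switch (type) {
        case NEW_EVENT:
            // another node broadcast a new event: add it to the local cache
            // so that all node caches stay consistent
            localCache.add(event);
            break;
        case SOCIAL_FILE_SYSTEM_PERSISTENCE:
            // another node persisted the shared cache to the filesystem,
            // so the locally cached events can be dropped
            localCache.clear();
            break;
    }
}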

  • Stress Test and Performance

In my GitHub account there is an example stress test class used to test the performance of this project. This class isn't imported into our official repository.

The results of that test show that Social Activities can write ~1000 events per second on my personal laptop (MacBook Pro, Intel Core i5 2.4 GHz, 8 GB 1600 MHz DDR3 RAM, SSD). In a single-instance environment, it wrote 10k events in 7s, 100k events in 48s, and 500k events in 512s.
  • Demo
A sample project of this feature can be found in my GitHub account, or you can just download and install the war of this demo. Please note that this repository has moved from my account to our official uberfire extensions repository.

  • Roadmap
This is an early version of Kie Uberfire Social Activities. In the next versions we plan to provide:

  • A "Notification Center" tool, inspired by OSX notification tool; (far term)
  • Integrate this project with dashbuilder KPI's;(far term)
  • A purge tool, able to move old events from filesystem to another persistence store; (short term)
  • In this version, we only provide basic widgets. We need to create a way to allow to use customized templates on this widgets.(near term)
  • A dashboard to group multiple social widgets.(near term)

If you want to start contributing to Open Source, this is a nice opportunity. Feel free to contact me!

by ederign (noreply@blogger.com) at July 18, 2014 07:40 PM

Thomas Allweyer: My New Book: A Practice-Oriented Introduction to Business Process Management Systems

The new book is about Business Process Management Systems (BPMS), i.e. systems for process execution. What is the best way to learn how such a system works? By trying it out yourself. Just as one writes and runs many example programs when learning a programming language, a newcomer to BPMS should model and execute as many executable processes as possible. For this reason the book contains more than 50 example processes, which can be downloaded from the book's website and tried out.

Among them are not only the simple standard processes used in typical beginner tutorials, but also implementations of more complex tasks, such as multiple participants, exception handling, collaboration of several processes across different systems, and much more.

Process modeling with BPMN plays a central role here. However, an executable process consists not only of a process model, but also of numerous other elements, such as data, user dialogs, user roles and organizational structures, business rules, application functionality, and so on. These aspects are also explained in detail and applied in practice using many further examples. The reader learns how to create and use complex data objects, define message flows, specify user dialogs and screen flows, write scripts, integrate web services, select users dynamically, use decision tables, and much more.

The handling of individual steps in the process portal and the administration of a BPMS are not neglected either, nor are process monitoring and controlling. The book deliberately focuses on the classical BPMS concept. Newer developments such as Adaptive Case Management and Social BPM are mentioned but not covered in depth; much is still in flux in these areas. The classical BPMS concept will continue to play an essential role in the future, above all for standardized processes, and a solid understanding of the established BPMS approach is an important prerequisite for understanding newer developments.

So that every reader can try out and extend the example processes, they were created with the freely available Community Edition of Bonita BPM. The fundamentals taught in the book are general, however, and can be transferred to other BPM systems. Since every system has its peculiarities, the book explains in some places, by way of example, how a particular aspect was implemented in Bonita. The underlying principle should be found in every typical BPM system, even though the concrete implementation may differ. The book contains no details on operating Bonita; the information needed to execute the processes with Bonita can be found on the book's website.

The book is therefore also useful for users of other BPMS. Bonita can easily be installed as an additional learning environment on ordinary PCs. An additional learning effect arises from implementing individual example processes in another system. I am very interested in such experiences and will gladly publish processes ported to other systems on the website.

Since the feature set of the Bonita Community Edition is not as extensive as that of some commercial systems, creative solutions and workarounds had to be developed in several places. For example, the system provides neither complex nor event-based gateways. From a didactic point of view such restrictions are often not bad at all, since working out how to achieve the desired behavior by other means is particularly instructive.

The book is aimed at all newcomers to Business Process Management Systems who want not only to understand the concepts theoretically but also to apply them in practice. The target audience thus includes students of computer science, business informatics, and related fields, as well as developers and process modelers from industry who want to work their way into the topic. It is also useful, ahead of a system selection, to engage intensively with the concrete problems of BPMS-based development, in order to be able to discuss with vendors at eye level and ask specific questions.

And here is a small raffle: anyone who would like to receive the book for free can send an e-mail with the subject "Verlosung BPMS-Buch" to info@kurze-prozesse.de by July 31, 2014. Three copies of the book will be raffled off among all senders. Participants agree that, in the event of winning, their name and location will be published. The judges' decision is final.

Website for the book – with the processes available for download
Order the book at amazon.

by Thomas Allweyer at July 18, 2014 09:13 AM

July 11, 2014

Keith Swenson: bpmNEXT talk on Personal Assistants

Here is a video from my presentation at bpmNEXT in March 2014, presenting the idea that in the future we might see a kind of agent, which I call a personal assistant, cloning and synchronizing projects such that large-scale processes actually emerge from the interactions of these agents.

Background

The presentation stands on its own, and you can access the slides at slideshare, so I won't repeat any of that here, but rather give you some of the context.

bpmNEXT is a meeting of the elite in the process technology world, and it is always a great thrill to meet and debate with everyone all together in one place.  Asilomar is such a nice location to hang out, and the hosts always make sure there is plenty of wine to lubricate the conversation.  About 6 months earlier Jim Sinur released a new book talking about agents, and I think a lot of people are rather misinformed about agents.  In a certain sense, a BPM suite is actually just an agent, because it is programmable. If programmability and autonomy are the only things to an agent, then what is the big deal?  So I kept asking every person attending the conference: "what is an agent?"  Is this really something new, or just the same old thing with inflated terminology?

I think there is a real use for an agent to help work out the interface between different domains of control.  That is a really difficult problem.  The SOA people ignored it, and simply said that we would have WSDL interfaces in UDDI repositories.  WSDL does not work because it does not define the meaning behind the data values.  Data values are defined only by name and type, which really tells you nothing.  Different organizations typically use different names for the same thing, so a WSDL interface falls down when the names don’t match.

What if an autonomous agent could work out those details for us?  Within my organization it is pretty easy to come to agreement on terms and processes, but when bridging to another organization, there is a whole negotiation that needs to go on.  You can easily imagine an interchange something like this:

  • Agent A:  Hey there!  I have some work to be done, could you do it?
  • Agent B:  Well, yes, I do consulting from time to time, what do you need done?
  • Agent A: I can’t really tell you until you sign the non-disclosure.
  • Agent B: Well, what kind of work would it be, and I can tell you if I might do it.
  • Agent A: It is in the area of helping with a patient.  Do you help with skeletal problems of the back?
  • Agent B: Yes, I help a lot of people with back problems, it sounds like the sort of thing I might be able to help with.  What time frame are we talking about?
  • Agent A: Patient is in mild discomfort, so I would expect a consultation in the next two weeks would be acceptable.
  • Agent B: Great I have several openings next week.  What kind of non-disclosure agreement should be set up?
  • Agent A: The normal.  Here (passing document) is the standard form.  I see we have used this same form in the past.
  • Agent B: OK, I have noted that this agreement is in force with this patient.  Can I have the name of the patient?
  • Agent A: It is ‘Alex Demo’ and here is the task that is assigned: “investigate back problem”.   Would you like to take this assignment?
  • Agent B: Yes, I automatically accept tasks with that description.  Can you give me the pointer to the case folder?
  • Agent A: OK, the task has been marked as accepted, and you have been given rights as an 'attending subspecialist'.  Here (passing URL) is the link.
  • Agent B: OK, I am downloading the associated files, and I will take it from here.  I will update you when I have some results.
  • (Agent B notifies Charles about the new case, and at the same time sends a request to Alex for preferred appointment times.)

The dialog is described using the first person pronoun 'I', but understand that the agents are speaking on behalf of their owners.  The owners have 'programmed', in some sense of the word, the agents to take these actions on their behalf.  That is why I use the term "personal assistant".

The point about this exchange is that we programmers always want to simplify it into a single exchange: (1a) send the job request, and (1b) receive the result back.  This exchange instead makes use of progressive disclosure on both sides.  The delegating side does not want to disclose information about the patient until it is clarified that the receiving party is willing and able to help.  Similarly, the receiving side may not want to disclose the full laundry list of services that can be performed, especially when different parties describe those tasks using different terms.  I have probably grossly oversimplified the exchange over the work to be done, which very well might include identifiers of specific work drawn from standard tables of services.  Also, keep in mind that the requester does not really know what actual treatment is needed: part of Charles' job is to determine that.  So the exchange is not really about doing a particular treatment, but rather about taking ownership of the case for a particular aspect of solving the problem.

Agent B might have all sorts of rules that need to be tested or satisfied before accepting the job.  Agent A might have rules as well, such as probing for background information on previous patients.  It is possible that information is being gathered so that the humans can then make the decision to offer/accept the task before proceeding.  The high-level takeaway is that there is not simply a WSDL definition on one side, and a call to the service on the other.

In light of all this, I am demonstrating a framework and a protocol that can accomplish this kind of negotiation.  Yes, it has to get a lot more elaborate, but we have to start someplace, and that place is in basic referral, replication, and synchronization of case data.

What really drives me is the way that this will cause processes to emerge directly from the rules.  Over time, pathways will emerge, from medical centers to supporting specialists, to pharmacies and other service providers.  Just like in the business world, each party decides the kinds of jobs it will offer and/or accept depending upon the specialization of the person.   The processes themselves can form out of those rules without being specified in elaborate detail in advance.  The processes that emerge will be resilient and will automatically adapt to environmental changes.  It is a whole new world.


by kswenson at July 11, 2014 10:00 PM

July 10, 2014

Keith Swenson: AdaptiveCM Workshop in Germany September 1

Things are shaping up for a really great workshop spending a day talking about the latest research findings and possibilities for Adaptive Case Management.  It will be September 1 in Ulm, Germany. I am hoping to see all of those Europeans who have a hard time getting the travel budget to come to America.  Register now.

Program

8:00-9:00 – Registration
Session 1: Opening (Ilia Bider)
9:00-09:15 – Presentation of participants
9:15-10:30 – Keynote: “There is Nothing Routine about Innovation”. Keith Swenson
10:30-11:00 – Coffee Break
Session 2. Research (Keith Swenson)
11:00-11:30 “Research Challenges in Adaptive Case Management: A Literature Review”. Matheus Hauder, Simon Pigat and Florian Matthes
11:30-12:00 “Examining Case Management Demand using Event Log Complexity Metrics”. Marian Benner-Wickner, Matthias Book, Tobias Brückmann and Volker Gruhn
12:00-12:30 – “Process-Aware Task Management Support for Knowledge-Intensive Business Processes: Findings, Challenges, Requirements”. Nicolas Mundbrod and Manfred Reichert
12:30-14:00 Lunch
Session 3. Practice
14:00-14:30 “A Case for Declarative Process Modelling: Agile Development of a Grant Application System”. Søren Debois, Thomas Hildebrandt, Morten Marquard and Tijs Slaats
14:30-15:00 “Towards a pattern recognition approach for transferring knowledge in ACM”. Thanh Tran Thi Kim, Christoph Ruhsam, Max J. Pucher, Maximilian Kobler and Jan Mendling
15:00-15:30 “How can the blackboard metaphor enrich collaborative ACM systems?”. Helle Frisak Sem, Steinar Carlsen and Gunnar John Coll
15:30-16:00 – Coffee Break
Session 4. Ideas
16:00-16:30 “Towards Aspect Oriented Adaptive Case Management”. Amin Jalali and Ilia Bider
16:30-17.30 – Brainstorming
17:30-17:45 – Closing

Demo

Separately, I will also be demonstrating the Cognoscenti system as an open source platform for use in research around adaptive case management.

Hope to see you there!

Update

Here is a link to a review of the workshop.


by kswenson at July 10, 2014 09:03 PM

July 02, 2014

John Evdemon: Blog moved

I'm finally starting to blog again, but I've decided to move to a different platform. My new blog is at http://looselycoupledthinking.com and has two formats: a noteblog, and a traditional long-form blog. Most of my Twitter posts are available on my Link Blog. ...(read more)

by John_Evdemon at July 02, 2014 09:45 PM

June 27, 2014

Drools & JBPM: Compiling GWT applications on Windows

If you're a developer using Microsoft Windows and you've ever developed a GWT application of any size, you've probably encountered the command-line length limitation (http://support.microsoft.com/kb/830473).

The gwt-maven-plugin constructs a command line statement to invoke the GWT compiler, containing what can be a very extensive classpath declaration. The length of the command line statement can easily exceed the maximum supported by Microsoft Windows, leaving the developer unable to compile their GWT application without resorting to tricks such as mapping a drive to their local Maven repository to shorten the classpath entries.

Hopefully this will soon become a thing of the past!

I've submitted a Pull Request to the gwt-maven-plugin project to provide a more concrete solution. With this patch the gwt-maven-plugin is able to compile GWT applications of any size on Microsoft Windows without developers needing to devise tricks.

Until the pull request is accepted and merged you can compile kie-drools-wb or kie-wb by fetching my fork of the gwt-maven-plugin and building it locally. No further changes are then required to compile kie-wb.

Happy hunting!


by Michael Anstis (noreply@blogger.com) at June 27, 2014 04:24 PM

Thomas Allweyer: Modeling, Simulation and Execution in the Cloud with IYOPRO

The product name IYOPRO is an abbreviation of "Improve Your Processes". The cloud-based solution does indeed offer quite a bit that can be very useful for process improvement, from process modeling through simulation and activity-based process costing to process execution.

Particularly remarkable is the seamless integration of all these functionalities. With many other products you need several separate components, sometimes even from different vendors, to cover what is completely integrated in IYOPRO. For instance, no separate deployment to a server is required to execute a process, since it resides in the integrated repository from the start. The model editor and the process portal for process execution are operated through the same unified browser interface as the simulation or the reporting.

Even the feature set of the free basic version for process modeling is remarkable and goes well beyond what one is used to from free modeling tools. Being able to create hierarchical process maps and BPMN collaboration diagrams is not that unusual. But IYOPRO also offers multi-language support, permissions management, collaborative modeling in teams, generation of process documentation in Word format, and animation of the sequence flow. Elsewhere, this is only found in paid offerings.

Modeling in the browser is very fluid and intuitive. Many activities, such as aligning symbols, selecting the next element, or fitting the entire diagram into the modeling window, can be carried out quite elegantly. And whoever wants to change the vertically displayed label of a horizontal pool does not have to tilt their head: the model rotates by 90 degrees for text entry and then turns back to its original position. Such details help determine how pleasant the work is for the modeler. The built-in conformance check points out violations of the BPMN syntax or, for example, unlabeled elements.

Anyone who wants to simulate their processes or execute them with the integrated process engine must opt for one of the paid IYOPRO versions. Corresponding model types are available for the additions to the BPMN models required for process execution. For example, organizational charts can be modeled as the basis for role definitions, as can data models for generating database schemas. There is also a form editor, web service integration, and further tools as required by a powerful BPM system.

A particular strength of IYOPRO is its sophisticated component for the dynamic simulation of processes. It allows a very precise specification of the process logic with a wide variety of statistical distributions, resource requirements, shift calendars, and so on. The simulation is also used in particular as a tool for activity-based process costing. With the help of simulation, one can determine for shared resources which shares of their usage are attributable to the individual processes, allowing the respective costs to be allocated more accurately by cause. A simulation does initially require considerable effort for data collection and validation, but the savings achieved through uncovered optimization potential and better decision-making foundations can pay off quite quickly.

Certainly there are modeling suites on the market with a larger repertoire of methods, and there are process execution systems with a larger range of functions. In return, IYOPRO scores with its high degree of integration across all components, from business-oriented modeling through analysis to execution. As a cloud solution, no installation is required, and only running costs for the software licenses are incurred. This combination should be very interesting, especially for many medium-sized companies.

by Thomas Allweyer at June 27, 2014 06:50 AM

June 25, 2014

BPM-Guide.de: Let’s go US

We are excited to announce the official incorporation of camunda Inc., registered in San Francisco, California. Camunda Inc. will market our product camunda BPM in North America. Besides FINRA and Sony, there are already several US-based enterprise edition customers, and with BP3 and Trisotech, there are also strong partners available for consulting services around [...]

by Jakob Freund at June 25, 2014 09:31 PM

June 24, 2014

Keith Swenson: Late-Structured Processes

The term “unstructured” has always bothered me, because without structure you have randomness.  When knowledge workers get things done, it is not random in any way.  They accomplish things in a very structured way; it is just not possible to know ahead of time how it will be structured.

Last week at the BPM & Case Management Summit I presented my talk on how different technology should be brought to bear based on how predictable the work being supported is.  There is work on the left of the spectrum that is very predictable, and on the right very unpredictable.

Examples of highly predictable work are the work done at an automobile factory or a fast food restaurant.  This work is predictable mainly because the environment is carefully controlled.  The factory is designed to supply the right things at the right time, and while there may be some (anticipated) variability in the mix of models being produced, one can clearly predict that each car will need four tires, mounted on four rims, attached to the wheels, etc.  A fast food restaurant takes an order and fulfills it in a few minutes in a very repeatable way.

As you move to the right across the spectrum, we consider shorter predictability horizons.  Integration with other IT systems (the second pillar) means you have to be prepared on a monthly/yearly scale for systems to change.   Human processes (the third pillar) need to cope with people going on vacation, getting sick, learning new skills, and changing positions, with a weekly/monthly predictability horizon.   The fourth pillar is production case management, where the operations that one might do are well known, but when to do them is decided on a daily basis.  With adaptive case management (the fifth pillar) you also have an hourly/daily predictability horizon, but the operations themselves cannot always be known in advance, and the knowledge worker plays a bigger role in planning the course of events.

Now compare the predictability horizon with the length of the process.  In the case of fast food, I can predict a month in advance how a particular type of food will be prepared (after the order is received), and it only takes a couple of minutes to do the preparation.  We call this predictable because the process is much shorter than the predictability horizon.  The other extreme might be patient care, which can take months or years, while our ability to predict is quite a bit shorter than that.  New procedures, new treatments, and new drugs are continually entering the market, while a given patient episode might last months or even years.  While treating the patient, decisions are made, and the course of treatment can be predicted for certain durations; it is just that those durations are shorter than the overall process.  When this situation occurs, we call it unpredictable, because we cannot say when the process begins how it will unfold.

Patient care is not random, and it is not unstructured.  Unstructured implies that no thinking is being done, that no planning is necessary, and that there is no control.  The truth is exactly the opposite: there is quite a bit of thinking and planning being done, and quite a bit of control over what happens.  The work is not unstructured; it is simply structured while the work is going on.  The planning and the working happen at the same time, and not as discrete phases in the lifecycle of the process.

For this reason I propose the term “late-structured” to explain what knowledge workers do in case management.   They actively plan and structure the work; they just don't do it as a separate phase.  There are other implications of this: since you cannot separate the planning from the working, clearly both the planning and the working need to be done by the same person.  Knowledge workers must plan, to some extent, their own work.   Also, there is little point in creating elaborate models of the work, since further planning will change them, and it is likely that each instance of the process will be unique.

There is no loss of control.  Late-structured processes can still be analyzed after the fact the same way that any process can, and so one can assess how efficiently the work was done, as well as whether it complies with all the laws and customs.

When using the term “unstructured,” it is easy to get confused about the nature of the work, thinking instead that things unfold randomly in an uncontrolled way.  If you think about it as late-structured work, where the length of the process is longer than the ability to predict what will happen, but prediction and planning still proceed, you gain a better understanding of what is really going on.


by kswenson at June 24, 2014 06:05 PM

Thomas Allweyer: Version 3.0 of the BPM Common Body of Knowledge Now Published in German

While the English edition of the BPM Common Body of Knowledge version 3.0 has been on the market for some time, it has now also been published in German. That this took a while is due to the fact that the English text was not merely translated: a total of ten authors have adapted it to the conditions in the German-speaking countries.

I have already written a blog entry about the English edition.

Guido Fischermanns offers some remarks on the German edition in his blog.


European Association of Business Process Management EABPM (ed.):
BPM CBOK® – Business Process Management BPM Common Body of Knowledge, Version 3.0, Leitfaden für das Prozessmanagement
Verlag Dr. Götz Schmidt, Wettenberg 2014.
The book at amazon.

by Thomas Allweyer at June 24, 2014 10:13 AM

June 23, 2014

Sandy Kemsley: BPM In Healthcare: Exploring The Uses

I recently wrote a paper on BPM in healthcare for Siemens Medical Systems: it was interesting to see the uses of both structured processes and case management in this context. You can download it...

[Content summary only, click through for full article and links]

by sandy at June 23, 2014 02:11 PM

June 21, 2014

Keith Swenson: BPM and Case Management Summit 2014

Here are some notes from this year's BPM & Case Management Summit in Washington DC.

Wow, what a conference!  This is the first major summit that includes case management.  The location was excellent, and so was the venue: The Ritz.  A number of new vendors were there, particularly in the case management space:  Frame Solutions,  AINS eCase,  Emerge Adapt Case Blocks.  It was great to see so many old friends, and to make some new ones as well.  It was nice to see Connie Moore, who was awarded the Marvin L. Manheim Award for Significant Contributions in the Field of Workflow.


Pan of the meeting room, thanks to Chuck Webster

Jim Sinur

The first keynote was given by Jim Sinur, who said that Adaptive Case Management is the on-ramp for intelligent business processes.  It was a good overview of the current situation in process management:  old-style automation is doing well, but the current challenges are newer, more flexible, less structured, and more knowledge-worker-oriented processes.

He presented the spectrum of process types, as well as his process IQ five-axis spider chart.  He challenged us to ask what process will be like when we have the equivalent of 1000 Watsons available in the cloud to research answers to questions for us, and reinforced that we will have ‘personal assistants’ to help us run our processes.

NFSA

It was quite an honor to see two people from the Norwegian Food Safety Authority (NFSA).  I have written about this use case before.  It is such an important use for the kind of flexibility that case management affords.   The most interesting comment came at the end, in response to a question:  even though extensive use cases were created to explore and understand what the users needed to be able to do, no modeling was done in BPMN or CMMN.  Instead, the text of the use case was taken directly to the ‘Task Template’, which is a simple list of tasks that drives a particular scenario.

Setrag Khoshafian

He talked about the “Internet of Things” (IoT). The market is estimated in the trillions of dollars. Big data today is nothing compared to what we will have when all these things start chatting with each other. “The largest and most durable wearable computer will be the car.” The process of everything.

He used the acronym SMACT: Social, Mobile, Analytic, Cloud, Things.

Where is the knowledge? You might have policy and procedure manuals, but you still need access to experts. Sometimes it is all written down, but only certain people know how to understand and interpret what is written. Applications are developed, but then changed, and the design artifacts no longer match. Knowledge is sometimes represented in the code, and also in the patterns of interactions. You can extract this (process mining), and the results are often surprising.

He presented a spectrum of work along these lines:

  1. system, very structured work – flow charts, very popular, useful
  2. clerical worker
  3. knowledge-assisted worker. This is the majority of white collar workers. They get assistance from various types of intelligence in the BPM environment.
  4. knowledge worker. Unstructured, dynamic. Knowledge workers do not like to be told what to do.

One problem with self-driving cars is that they could get hacked. Can we really assume that this will be taken care of?

Device-directed warranty scenario: imagine there is a sensor that determines that the CO2 level in a car is too high. It sends a message to the manufacturer, which brings it together with product info, customer info, and warranty info into a CASE. Then it is determined that service is required, and the right people are notified. Then come a sub-case for the service order, and a sub-case for a warranty claim.  This is the idea of the kind of thing that might be possible today with the IoT.

Whitestein

A presentation of the Living Systems Process Suite, where goals drive everything. A governance goal describes how something should be achieved in order to be optimized. Layered process scoping: strategic goals over multiple instances, tactical goals for a particular item or case, then the process activities. When you get down to the process level, they use BPMN. These layered goals give them the ACM capability.

They call them “agents” because they act as independent process evaluators: the current situation is compared against the conditions you set, to bring the system in line with the goals.  If the current state is later found to be wrong, the agent can kill that process and start another. Agents are intelligent enough to start, stop, and modify running processes, and can insert ad-hoc tasks (issuing a request, performing a query, acting on results).

A question was asked: what about conflicting goals? Goals are in a hierarchy, and that helps prioritize the agents, but you need to take care when designing the goals to avoid a deadlock situation.

Clay Richardson

The first keynote on the second day, excellent as well, was about “design thinking.”  He sees BPM systems moving from holistic to specific, from linkages to context, from logic to empathy, and from deductive logic to abductive logic.

One of the keys is empathy.   Not empathy with the system, but empathy with the customer.  We might see a transition from process models to journey maps, from capability maps to personas, and from target operating models (TOM) to storytelling (of how the customer engages).  He feels there are two camps: transaction BPM and engagement BPM.

He cited the example of a Domino's Pizza app:  it shows where the pizza is in the process:  being tossed, in the oven, on the way, or the delivery person knocking on the door.  This is more than just the minimum needed to buy a pizza; it really represents the desire of the customer to know what is happening now.

Instead of focusing on cost efficiency, we should focus on revenue growth.  Reconnect to the customer journey and the customer experience.

Roger Baker, Chief Strategy Officer, Agilex

He gave an excellent talk on agile methodology and why it is needed.  The agile method is defined as 2-week sprints, small teams, requirements discovery, constant prioritization, continuous testing, frequent small releases, and communications, communications, communications.   About 1/3 of what is in a requirements document are things the writers wish they had but will never use.  He said these are like the “froth on the beer” — you want to see it, but it is otherwise not useful.  Agile development is a full-contact approach, from execs to workers.  Strict adherence to schedule.  The hardest part is “truth telling” — people don't want to tell you they are having a problem, but if they don't, the problem can explode.  Raise a problem when you see it, and get help.  If you have a problem and stay quiet, then we will find someone else to do the job.

He shifted the VA to an agile approach, and they were delivering, so Congress passed a new law in Jan 2011 which changed all the rules.  The VA delivered on 83% of milestones.  You have to plan on some failures, and when they happen, fail fast.

Waterfall assumes:

  • detailed requirements are clear from the beginning of the project
  • assumes requirements don't change
  • progress can be measured by documents produced
  • assumes that mega programs are manageable by normal humans
  • IT systems are IT's responsibility

Agile assumes:

  • Detailed requirements are NOT clear. Users will know it when they see it
  • Requirements and priorities will change
  • produced software is the only measure
  • users and management need constant reassurance
  • everyone must be involved

Only the business knows the process.  Business must take ownership of the process.


Steinar Carlsen

He talked about organizations and value formation. People do tasks; they don't necessarily do processes. They have to relate to customers, authorities, and partners, in a constant flux of change.

How is coordination of value production achieved? Email? Hearsay? Sharepoint? Proposition: there should be an integrated task management system. When a task spins off another task, you have an emergent task management system.

Step details: mandatory, repeatable, pre-condition, include-condition, post-condition.

To design tasks, they use a “knowledge editor”. It is not a graphical tool, but text based, with tasks saved in XML.


Rudy Montoya – CIO, Texas Attorney General

The keynote speaker on the third day.  He was involved in creating case management systems for things like crime victims compensation and legal case management.

As an example, consider the explosion in West, Texas.  When it went off, they had to respond at a time when they had no idea whether this was a crime or terrorism.  The old system required that all information be gathered before they created the case: they needed to verify that a crime occurred before starting the case, and there is a lot of work necessary to get to that point.  Case management starts with the data that exists, and builds forward to the classification of the case and its particulars.

They solved this in about 12 months, implemented in 3 phases:

1) eliminated the legacy document management system
2) replaced the mainframe
3) implemented a web portal

Euan McCreath

A very interesting presentation on how Emerge Adapt have implemented a real adaptive case management system.  They showed a great slide on the difference between an adaptive approach and a traditional approach.


The key elements defined were data structures, then buckets. The process was very simple. New buckets could be created on the fly, and new tasks could be created. Buckets are related to work queues. One could move from any state to any other state, but after a while certain moves were locked out by constraints in the process model.

My Talk

I presented the following slides:

And, as evidence, Charles Webster took this photo of me:


Sorry to everyone who gave talks that I was not able to see.  There were simply too many to see them all!

Other blog posts:


by kswenson at June 21, 2014 10:36 AM

June 19, 2014

Thomas Allweyer: Award for BPM Initiatives in the Education and Social Sector

Anyone working on process management in the education and social sector can apply for the newly created "BPM2thePeople" award. The submitted initiative should serve as a role model and thus include aspects that are interesting for other organizations as well. The second criterion to be assessed is the degree of innovation. And finally, it is about the efficient use of resources.

The award is put out by the Process Management Alliance, which originally emerged from an initiative of Lufthansa Technik and organizes the annual BPinPM.net conference.

The prize is endowed with 2,500 euros; the award ceremony will take place at this year's BPinPM.net conference on November 24-25 in Seeheim near Frankfurt.

You can apply here.

by Thomas Allweyer at June 19, 2014 12:26 PM

June 17, 2014

BPM-Guide.de: Webinar: BPMN with camunda BPM

I will give a webinar on July 17 about best practices around BPMN, especially in terms of business-IT alignment. Will this be a camunda BPM pitch as well? Of course! But hey, that's how it goes: 1) Collect 4+ years of intensive consulting experience around BPMN, write a book etc. etc. 2) Discover that the [...]

by Jakob Freund at June 17, 2014 11:46 PM

BPinPM.net: BPM2thePeople Award – Spread the word and win a conference ticket!

This week, we are starting a new project to foster process management awareness in the education and social sectors. The BPM2thePeople Award is our prize for best-practice examples that increase the quality of processes in organizations from the education or social sector.

The winner of the award serves as a role model for other organizations and supports the future development towards a full establishment of BPM in these sectors.

Nowadays, many organizations in the education and social sectors are still wary of such topics and feel insecure about implementing BPM projects to improve the management of their processes. It is time to change that thinking, now!

All organizations from these two sectors (e.g., schools, kindergartens, universities, homes for the elderly, sheltered workshops) that invest in BPM could win the BPM2thePeople Award. The award will be handed over during our BPinPM.net Process Management Conference in November 2014, and the winner will receive a prize of 2,500 euros.

The final decision will be made by a jury of BPM experts from business and research, based on three criteria: the project’s “role model function”, “innovation”, and “efficiency”. But even if an organization does not see its project in all of those dimensions, it should not hesitate to apply by the end of August!

Because this blog is primarily read by BPM professionals, we ask you to spread the word and invite people from the education and social sectors to apply for the award. Please share this post, or simply go to the award’s website and invite others:
http://www.BPM2thePeople.org/#einladen

As a THANK YOU, we will raffle off a ticket for this year’s BPinPM.net Process Management Conference among all supporters.

Best regards,
Mirko

by Mirko Kloppenburg at June 17, 2014 06:56 PM

June 16, 2014

Sandy Kemsley: Webinar On Collaborative Business Process Analysis In The Cloud

I’m giving a webinar on Wednesday, June 18 (11am Eastern) on social cloud-based BPA, sponsored by Software AG – you can register here to watch it live. I’ve written a white paper going into this...

[Content summary only, click through for full article and links]

by sandy at June 16, 2014 11:51 AM

Keith Swenson: Open Source Adaptive Case Management

Interested in trying out Adaptive Case Management without a huge investment?  Cognoscenti might be the option for you.  This post contains most of the contents of a paper I will be presenting in Germany in September on the Cognoscenti open source system, which I have used in demos at the last two BPMNext conferences. For anyone wanting to experiment with ACM capabilities, a free solution might be worth trying.

The EDOC conference in Germany is mainly for researchers, so most of this post focuses on ways to experiment with the capabilities, and less on simply using them out of the box.

Demo: Cognoscenti
Open Source Software for Experimentation on
Adaptive Case Management Approaches

Abstract: Cognoscenti is an experimental system for exploring different approaches to supporting complex, unpredictable work patterns. The tendency in such work environments is to build increasingly sophisticated interaction patterns, which ultimately overwhelm the user with options. The challenge is to keep the necessary cognitive concepts very simple and allow the knowledge worker a lot of freedom, while at the same time offering structural support where necessary for security and access control. Cognoscenti is freely available as an open source platform with a basic set of capabilities for tracking documents, notes, goals, and roles, which might be used for further exploration into knowledge worker support patterns.

Introduction

Fujitsu’s leadership in the business process space goes back to 1991. In 2008, the Advanced Software Design Team started a prototype project from scratch to explore innovative directions in enterprise team work support. Cognoscenti became the test bed for experimental collaboration features to demonstrate properties of an adaptive case management system for supporting knowledge workers. Features that proved to work well were subsequently implemented in other products. In 2013, internal company changes left the project without any specific strategic value. Since some people were using it as a productivity tool for managing their work, the decision was made to make it available as an open source project for anyone to use and possibly help maintain.

One experiment was to implement preliminary versions of the “Project Exchange Protocol”, which allows case management systems and business process management (BPM) systems to exchange notes, documents, and goals using only representational state transfer (REST) oriented web service calls. Cognoscenti is available as a free reference implementation of these protocols for testing protocol implementations. This paper seeks to demonstrate the open source system, its capabilities, and how research projects might use the software for their own research.

Architecture

Cognoscenti stores information in XML files in the file system. This was done for two reasons:

1) To avoid complication in installing the system. Requiring and initializing a database restricts the environments the system can be deployed to. XML offers a flexible schema that can be evolved efficiently, a task that can be quite complicated in a database. This allows prototype projects built on Cognoscenti to experiment easily with new capabilities.

2) To allow direct manipulation of the files by users. The documents appear as files in the file system which can be opened and edited directly, even when the Cognoscenti server is not running. Changes are detected by file date and size.
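
A minimal sketch of the kind of check this implies, with invented names (the actual Cognoscenti code may do this differently): compare a stored modification date and size against the file on disk.

    import java.io.File;

    // Hypothetical sketch: detect out-of-band edits by comparing the
    // last recorded modification date and size with the file on disk.
    class AttachmentRecord {
        long recordedModTime;
        long recordedSize;

        boolean hasChangedOnDisk(File f) {
            return f.lastModified() != recordedModTime
                    || f.length() != recordedSize;
        }
    }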

Conceptual Object Model

The root of everything is an index which is initialized by scanning the file system. From this you can retrieve “Site” objects, “Project” objects, and “UserProfile” objects.

The Site object represents a space both on the disk and address space on the web. A Site has a set of owners and executives all of whom are allowed to create projects in the site. A Site has a visual style that applies to all projects contained by that site. The site is mapped to a particular folder in the file system, and all of the contained projects are folders within that one.

The Project object is the space where most of the work takes place. A project has a collection of notes (small message-like documents with wiki-style formatting), attached documents, goals, roles, history, and email messages. All of the artifacts for a project are stored in the project folder on disk. There is a special subfolder named “.cog” where all the housekeeping information about the project is kept, such as old versions of documents. When the server detects that a file has changed, it displays an option to the user to commit those changes, which causes a copy of that file to be saved as a version inside the housekeeping folder.

While Sites and Projects are represented in one directory tree, user information is kept in a folder that is disassociated from the sites and projects. The UserProfile object contains personal information for a particular user: OpenID addresses, email addresses, and settings. Because the user preferences are disassociated from the sites and projects, a user may play any role in any site or project without restriction. A user logs in once, and can then access any number of projects and sites that they have access to.
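
The description above suggests an object model roughly like the following skeleton. The field names are guesses for illustration; the real classes in the Cognoscenti source will differ.

    import java.io.File;
    import java.util.List;

    // Hypothetical skeletons of the conceptual objects described above.
    class Site {
        File folder;                      // maps to a folder in the file system
        List<String> ownersAndExecutives; // may create projects in the site
        List<Project> projects;           // each project is a folder in the site
    }

    class Project {
        File folder;                 // the ".cog" subfolder holds housekeeping
        List<String> notes;          // small wiki-formatted documents
        List<File> documents;
        List<String> goals;
        List<String> roles;
    }

    class UserProfile {
        List<String> openIds;        // kept apart from sites and projects,
        List<String> emailAddresses; // so a user can play any role anywhere
    }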

Implementation Details

Cognoscenti is written in Java and runs in any servlet container, such as Apache Tomcat. The user interface is based on the Spring framework, with some browser-side capability from the Yahoo User Interface library and Google Web Toolkit; however, grafting on a new user interface for specialized-purpose projects is easily supported.

The entire code base is licensed under the Apache license, freely available to anyone who wants it.

Innovative Concepts

Security and Access Control

Cognoscenti is first and foremost a collaborative case management system designed for many people to work safely with sensitive information: health care information, social worker information, legal case information, and the like. Access control needs to be a primary consideration. It is easy, even trivial, to make a system that restricts access to particular artifacts to particular named users. But there is a problem with that: managing the many-to-many relationship between all the artifacts and users directly can be tedious and overwhelming. This leads either to users leaving access too open, so that too many people have it, or leaving access too restricted, so that people cannot get the information they need to do the job.

An indication that users are frustrated with the access control mechanism is when they take a document out of the document repository in order to email it to the people they want to have it. This subversion of the access control mechanism is dangerous, because email itself is an unsafe medium for sensitive documents.

The developers of Cognoscenti view security as a usability problem: it must be easy enough to use that people get the security right, so that only the people who need access are getting it. These principles must be followed:

1) It must be easy for a normal, non-technical business user to express the correct security constraint to meet their needs.

2) Such an expression must meet the natural requirements of a social situation, and not merely the technical requirements of the system.

3) As teams change and evolve, the security constraints must be constructed in such a way that they track the changing requirements, without needing tedious maintenance by the users.

4) No surprises: the meaning of the access control settings must be clear to non-technical users.

These requirements are considerably higher than what most current systems meet. For example, the Windows file system requires the user to do a kind of set algebra in order to determine whether a particular user can see a particular document or not.

Affordances for Change

If the project membership is entirely static, it is not difficult to set such a system up correctly so that the fixed set of members has proper access. However, projects are not static. Imagine a police detective working to solve a crime who needs the help of an expert. That expert will need access to the case folder. Imagine how it would be if the detective had to invite the expert, and then go to every document and grant access. The preferred expert might not be available, and the job might be done by the expert’s assistant. Imagine if the detective then had to change the access control of all the documents again. And once the immediate goal is done, it might be appropriate to remove that access. In a real project we expect new people to join and leave every day. It does not take much change before the management of access rights overwhelms the detective (and he resorts to email).

One experiment built into Cognoscenti is the idea that if a person is assigned to an active goal, they automatically get access to the project documents. The person assigned to a goal can also delegate the assignment to another person, in effect automatically giving that person access to the project folder without further trouble. An additional interesting aspect is that when the goal is completed, the person who did it, if they have no other access, automatically loses access, which is appropriate in certain situations.
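
A compact sketch of that rule, with invented names rather than the actual Cognoscenti source: access is derived from live goal assignments instead of being stored per document.

    import java.util.List;

    // Hypothetical sketch: access is computed from active goal assignments,
    // so it appears on assignment or delegation and disappears on completion.
    class Goal {
        String assignee;
        boolean active;

        void delegate(String newAssignee) {
            this.assignee = newAssignee; // delegation transfers access too
        }
    }

    class ProjectAccess {
        boolean canAccessDocuments(String user, List<Goal> goals) {
            // No per-document grants to maintain; completed goals stop counting.
            return goals.stream()
                        .anyMatch(g -> g.active && user.equals(g.assignee));
        }
    }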

Roles

It became clear that part of the solution would involve creating intermediate constructs, called roles, which represent groups of people who are treated equivalently. Roles, by themselves, are not very innovative, but in a standard implementation the maintenance of roles can be tedious and time-consuming. Cognoscenti explores the usability problems around creating and using roles.

Roles are highly contextual, so some experimentation was done to associate roles automatically with certain actions, or to have roles modified as the result of actions in a natural way that does not require extensive maintenance by the users. For example, adding a user to an email message might, optionally, also add that user to an associated role.

Roles were unified with the concept of a view. That is, a role is a group of people in a particular context, but it also contains elements that control how those people see the project. The reason for this is to reduce the number of different conceptual objects that the user must deal with.

Role names are also used as a form of tagging of the content. A document can be associated with particular roles as it is added into the folder, as a way of categorizing the documents. Goals can be associated with roles so that when a person is added to a role, they are automatically assigned the goals and have access to the documents. Roles give a lot of flexibility, but the challenge remains to make them easy enough to use that the case manager does not need to spend a lot of time creating a bunch of roles ahead of time; instead, roles are created easily, in a natural way, whenever needed by the emerging case.
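
Pulling the last three paragraphs together, a role might look roughly like this hypothetical skeleton: one construct carrying members, view settings, and tagged content, so the user has fewer concepts to manage.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: a role unifies membership, a view of the
    // project, and tagging of documents and goals in one construct.
    class Role {
        String name;
        List<String> members = new ArrayList<>();
        List<String> viewSettings = new ArrayList<>();    // role doubles as a view
        List<String> taggedDocuments = new ArrayList<>(); // role name used as a tag
        List<String> associatedGoals = new ArrayList<>(); // assigned to new members
    }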

Representation of Goals

Central to any work management system is the idea of tasks, activities, or goals. The challenge here was to explore the usability problems that prevent most users from keeping an accurate task list. Effort focused on how to make it really easy to create goals and assign them to others. Much attention was given to making goal lists as easy as a checklist. The challenge is to make the creation of a new goal, the assignment of a person to it, and the notification of that user easier than sending an email asking someone to do something. If it is easier than an email, people will use it. It also needs to be easy for the person receiving the request to access the case, even when they have had no prior knowledge of that particular system.

An adaptive system needs to build up, over time, reusable templates for when similar situations are recognized in the future. It would be easy to provide a programming language of some sort to allow automation of future cases; however, this approach is not suitable because the intended knowledge workers are not themselves programmers. Effort was spent on making templates result from normal use of the system, without having to focus on programming-like activities.

The second challenge with templates was deciding what is and is not significant in a previous case. In some cases a role from a previous case should be recreated with the same users in it, and in other cases the role should be empty.

A third challenge is deferred template use. Many template systems assume that the template will be known and invoked at the time of case creation. The problem is that users do not always know which template is appropriate at creation time. Knowledge workers will be handed a case to work on without knowing anything about it. The job of the knowledge worker is to discover the details and handle whatever work needs to be done, figuring it out on the fly. A knowledge worker needs a place to work, to start collecting those details, and only later determine which template to bring in.

Restructuring Over Time

Another challenge is that knowledge workers don’t necessarily know which parts will be significant at the time they start working. What initially looks like a simple goal might turn into a major project by itself. And sometimes what is expected to be a large project turns out to be trivial.

An experimental feature put into Cognoscenti is the ability to create a simple goal and then, when it starts to look more complicated, put subgoals under it. If it continues to gain complexity, the original goal can be converted to a complete project on its own. Projects can be linked to goals in other projects, as if they were that goal. Status reports can be compiled from goals across multiple projects so that everything appears consolidated in one project. Many experiments were done to make it easy for users to convert back and forth between goals and projects.
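
A rough sketch of that escalation path, again with invented names: a goal accumulates subgoals and can later be promoted to a project that stays linked to its origin, so status can still roll up.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of goal-to-project escalation.
    class WorkItem {
        String title;
        List<WorkItem> subgoals = new ArrayList<>();
        WorkItem linkedGoal; // a promoted project stays linked to its origin

        // Promote a goal that has grown too complex into its own project.
        WorkItem promoteToProject() {
            WorkItem project = new WorkItem();
            project.title = this.title;
            project.subgoals = this.subgoals; // subgoals move to the project
            project.linkedGoal = this;        // status can still roll up
            this.subgoals = new ArrayList<>();
            return project;
        }
    }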

Document Repository Support

Knowledge workers are often required to use organizational document repositories, and the philosophy behind Cognoscenti is that such repositories are good for organizations in general. The designers of Cognoscenti, however, built features to help knowledge workers who are required to use multiple repositories, often different document storage places for different aspects of their lives. For example, a doctor may keep patient data in the clinic system, but at the same time be part of a local university research organization that keeps its thought-leading documentation in a different location, while the community outreach program they volunteer for has yet another.

One of the challenges with secure document repositories is letting coworkers who are involved in a project access the same information. For example, a doctor accepts a job to verify the results of a research paper located in a secure repository, but would like their recent intern to make the first pass. There are two standard ways to do this: download the file and email it to the intern, or print it out and hand over the hard copy. Both are unacceptable because if the document is updated in the original repository, the intern has no access to the updated version. It is equally unacceptable for the doctor to give the intern their username and password to access the repository directly.

Cognoscenti resolves this by using a synchronized copy. The doctor accesses the repository through Cognoscenti, which places a copy of the document into the project. Now the doctor can give the intern access to the copy. But the copy is synchronized with the original, optionally in both directions, so that changes in one can easily be refreshed to the other.

As you might easily imagine, this is technically quite easy to do, but making it usable, and specifically making it easier than emailing a copy of the document, requires some careful thinking about the user interface.
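
The shape of the mechanism might be something like the following sketch; the names and the one-way/two-way flag are assumptions for illustration, not the project’s actual design.

    // Hypothetical sketch of the synchronized-copy idea: a local copy in
    // the project folder tracks a document in an external repository.
    class SynchronizedCopy {
        String remoteUrl;     // location in the external repository
        String localPath;     // the copy inside the project folder
        boolean twoWay;       // optionally synchronize in both directions

        void refreshFromRemote() {
            // Pull newer remote content down into the local copy.
        }

        void pushToRemote() {
            if (twoWay) {
                // Send local edits back to the original repository.
            }
        }
    }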

Federated Case Support

Just as knowledge workers are required to use more than one document repository, Cognoscenti will not be the only case management system used by the pool of people who need to contribute to a given case. Therefore, Cognoscenti is designed to live in a world where it presents views of a case to others, and other case systems hold synchronized copies of those views. There is an explicit upstream/downstream relationship between cases, which can be either one-way or two-way. Again, this is not technically difficult, but the real research is in making what ends up being a complicated collection of capabilities understandable enough, and easy enough, that users will actually use them.

Project Exchange Protocol

In order to implement federated case support across different vendors or different types of case systems, the protocol for the exchange of information needs to be defined independently of any single implementation. The Workflow Management Coalition (WfMC) has been working on interoperability of collaborative systems for more than 20 years, and this effort is related to that work. Cognoscenti represents a reference implementation of the standard protocol.
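
The post does not spell out the protocol’s endpoints, but since the exchange is described as REST-oriented, a downstream case pulling an upstream view might look conceptually like this sketch; the URL and payload format are invented for illustration.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical illustration only: the real Project Exchange Protocol
    // endpoints and payloads are not documented in this post.
    public class ExchangeSketch {
        public static void main(String[] args) throws Exception {
            // A downstream case pulls the upstream case's view of its goals.
            URL url = new URL("https://upstream.example.com/case/123/goals");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // e.g., XML describing the goals
                }
            }
        }
    }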

Location

The open source project, including source, executables, and available documentation, can be accessed from the following URL: https://code.google.com/p/cognoscenti/

An online video demo using Cognoscenti from the BPMNext conference is available at https://www.youtube.com/watch?v=x-oAAjM6Wh0.

Plans and Directions

The goal in presenting this demo at EDOC 2014 is not to show numerous accomplishments, but rather to introduce a platform that may be useful for further experimentation in usability. The system is freely available to anyone, and runs in a non-proprietary open environment.

It is the desire of the author that Cognoscenti can be helpful in resolving some of the stickier issues around usability of knowledge work environments, by making a full collaborative adaptive case management system available for free for use in clinical trials involving real knowledge workers.

Acknowledgment

Many thanks to Fujitsu for supporting this work on the open source project. Significant contributions to the development of Cognoscenti came from Shamim Quader, Sameer Pradhan, Kumar Raja, Jim Farris, Sandia Yang, CY Chen, Rajiv Onat, Neal Wang, Dennis Tam, Shikha Srivastava, Anamika Chaudhari, Ajay Kakkar, Rajeev Rastogi, and many more people at Fujitsu around the world.

by kswenson at June 16, 2014 10:06 AM