Planet BPM

August 21, 2014

Thomas Allweyer: Fraunhofer IAO: First of Several BPM Tool Studies Provides a Market Overview

The Stuttgart-based Fraunhofer Institute for Industrial Engineering (IAO) has announced no fewer than four market studies on BPM tools. The first, a general market overview, is already available. Over the course of the year it is to be complemented by studies on Social BPM, compliance in business processes, and business process monitoring. The market overview covers a total of 28 vendors with 27 tools. The field of participants is quite heterogeneous, ranging from simple modeling tools to comprehensive BPM suites including process execution and monitoring. A pure process mining tool is represented as well.

The information, collected from the vendors via an online questionnaire, largely concerns the vendors themselves and their terms and conditions. Only a few questions were asked about actual functionality; here the study refers readers to the forthcoming studies on specific topics. We do learn, however, that almost all of the tools examined have an integrated repository, and that BPMN is by far the most widely used notation. In general, the study's authors see a trend towards more comprehensive tools that support all phases of the process lifecycle.

The study begins with an introduction to process management and current developments, followed by a summary of the key findings of the market overview. Further details on the products and vendors can be found in the individual profiles. For each vendor, the answers to the online questionnaire are printed, along with four pages of self-description.

The market overview can be downloaded here.

by Thomas Allweyer at August 21, 2014 10:25 AM

August 19, 2014

Drools & JBPM: Drools Mailing List migration to Google Groups

Drools community member,

The Drools team is moving the rules-users and rules-dev lists to Google Groups. This will give users combined email and web access to the groups.
New Forum Information: http://drools.org/community/forum.html (click link to view)

The rules-users mailing list has become high volume, and it seems natural to split the group into those asking for help with setup, configuration, installation, and administration, and those asking for help with authoring and executing rules. For this reason rules-users will be split into two groups: drools-setup and drools-usage.

Drools Setup - https://groups.google.com/forum/#!forum/drools-setup (click link to subscribe)
Drools Usage - https://groups.google.com/forum/#!forum/drools-usage (click link to subscribe)

The rules-dev mailing list will move to drools-development. 

Drools Development - https://groups.google.com/forum/#!forum/drools-development (click link to subscribe)

Google Groups limits the number of invitations, so we were unable to send them. For this reason you will need to subscribe manually.

The Drools Team

by Mark Proctor (noreply@blogger.com) at August 19, 2014 03:38 PM

August 15, 2014

Drools & JBPM: Drools Execution Server demo (6.2.0.Beta1)

As some of you know already, we are introducing a new Drools Execution Server in version 6.2.0.

I prepared a quick video demo showing what we have done so far (version 6.2.0.Beta1). Make sure you select "Settings -> 720p" and watch it in full screen.


by Edson Tirelli (noreply@blogger.com) at August 15, 2014 12:53 AM

August 12, 2014

Thomas Allweyer: Open Innovation – Process Optimizations Sought in Radiology

Via the open innovation platform of the medical technology cluster "Medical Valley", based in the Nuremberg region, experts are being sought to contribute to solving concrete problems in medical technology. That these do not always have to be purely technical solutions is shown by a current call for proposals on process optimization in radiology.

Inefficient information exchange between patients, referring physicians, and radiology centers frequently leads to long cycle times and high costs. Proposals are therefore sought for improving the entire process of patient examination with imaging procedures. The proposals should in particular also consider suitable IT support. Submissions are possible until September 29, 2014 via the Medical Valley platform.

by Thomas Allweyer at August 12, 2014 08:30 AM

August 11, 2014

Drools & JBPM: JUDCon 2014 Brazil: Call for Papers

The International JBoss Users and Developer Conference, and premier JBoss developer event “By Developers, For Developers,” is pleased to announce that the call for papers for JUDCon: 2014 Brazil, which will be held in São Paulo on September 26th, is now open! Got Something to Say? Say it at JUDCon: 2014 Brazil! Call for papers ends at 5 PM on August 22nd, 2014 São Paulo time, and selected speakers will be notified by August 29th, so don't delay!

http://www.jboss.org/events/JUDCon/2014/brazil/cfp

by Edson Tirelli (noreply@blogger.com) at August 11, 2014 01:17 PM

August 06, 2014

Thomas Allweyer: Congress on Process Management in the Financial Industry

Process managers from banks and insurance companies will meet in Wiesbaden from October 27 to 29 for the "PEX Process Excellence Finance". How do you achieve process excellence and agility in an ever more heavily regulated environment? This question is likely to occupy many of the participants. Numerous practical talks by speakers from renowned financial institutions will provide ample material for discussion.

For example, one presentation will show how a bank's process management helped it successfully transform its business model from a transaction bank into a provider of securities settlement services. Many banks are still busy industrializing the production of their services, so the role of shared service centers, the integration of service partners, and improving customer orientation in back-office processes will be discussed. The increasing digitalization of banking is also on the agenda in Wiesbaden, as are success factors for process and change management.

Together with Sven Schnägelberger, I will present a workshop giving an overview of current developments in BPM tools and technologies and offering guidance on selecting the right solution.

Further information is available on the PEX Finance 2014 website.


by Thomas Allweyer at August 06, 2014 11:25 AM

August 04, 2014

Keith Swenson: Organize for Complexity Book

Niels Pflaeging's amazing little book, Organize for Complexity, gives good advice on how to create self-managing organizations that are resilient and stable.

There is a lot to like about the book.  It is short: only 114 pages.  Lots of hand-drawn diagrams illustrate the concepts.  Instead of bogging down in lengthy descriptions, it keeps statements clear and to the point.

Alpha and Beta

Alpha is a Taylorist way of running an organization.  It is the embodiment of command & control, Theory X, hierarchical, structured, machine-like, bureaucratic traditional organizations.  The reason that alpha-style organizations have worked is an accident of history.  Marketplaces, and subsequently manufacturing environments, were quite complex long ago, but the dawn of the industrial age brought a century or so in which markets were sluggish and complexity greatly diminished.  During this period of diminished complexity, alpha-style organizations were able to thrive.  However, this came to an end in the 1970s or 1980s, and the world has become more complex again.

Beta is the style of organizing that is effective at dealing with complexity, with a focus on Theory Y, decentralization, agility, and self-organization.  He suggests we should form people into teams with a clear boundary.  Keep everything completely transparent within the team so everyone knows what is going on.  Give challenges to the entire team (or better, let the team self-identify the tasks) and recognize accomplishments of the team, not of individuals.  Done correctly, the members of the teams will work out the details, taking on the tasks best suited to themselves, without regard to roles, titles, job positions, status symbols, etc.

The book spends a good deal of time motivating why this works.  One subject I have covered a lot on this blog: a machine-like approach cannot work against complexity.  Analytic decomposition of a complex situation, and addressing parts of a complex system, can actually do more harm than good.  The one 'silver bullet' is that human beings have the ability to work in the face of complexity, so you must set up the organization to leverage native human intelligence. (Reminds me of Human 1.0.)

Networked Organizations

The goal is to make an organization networked along informal lines, and also along value-creating lines.  Instead of a centralized command center pushing ideas out, the network is formed with a periphery which deals directly with the market, while a center supports the periphery.  The network is driven by the periphery, very much the same as a pull organization.  I agree, and have argued that such an organization is indeed more robust and able to handle complexity (see '"Pull" Systems are Antifragile').  The networked organization decentralizes decision making, putting it closer to the customer, resulting in faster and better decisions.

Leadership

Since teams are self-organizing, leadership works a little … differently.  Leadership needs to focus on improving the system, and not so much on the tasks and activities.  Radical transparency, connectedness, and team culture are all important.  You might even call it collaborative planning.  He even spends some time discussing the steps you might have to take to transform an organization from an 'alpha' to a 'beta' working mode.

Summary

I really love the book.  It should be quite accessible to managers and leaders in any organization.  Like most inspirational books, it makes things sound easier than they are.  Ideally, each team, and each team member, would get paid proportionally to the value the team or member provides in each time period, as if the organization were a form of idealized market.  But some forms of value are nebulous and defy measurement.  Also, people band into organizations in order to gain the stability that comes from a fixed structure, so that they don't have to worry about how their own bills will be paid at the end of the month.  There will always be someone taking the risk, and as a result having a commanding influence.  One can't be a purist; it is pragmatic to expect that a mixture of alpha and beta will always be in force.  Still, the book gives an excellent overview of the principles of a networked organization to strive for, along with a reasonable explanation of why they work, as the title suggests, in the face of complexity.



by kswenson at August 04, 2014 02:35 PM

August 01, 2014

Keith Swenson: The third era of process support: Empathy

Rita Gunther McGrath’s post this week on the HBR Blog called Management’s Three Eras: A Brief History has a lesson for those of us designing business process technology.  The parallel between management and process technology might be stronger than we normally admit.

According to McGrath, management didn't really exist before the industrial revolution, at which time it came into being to coordinate these larger organizations.  The organization was conceptualized as a machine to produce products.  The epitome of this thinking is captured by F.W. Taylor and others who preached scientific management.

Early process technology was similarly oriented around viewing the organization as a machine.  Workflow, and later business process management (BPM), was all about finding the one best process, and constructing machinery to help enforce those best processes.

The second phase of management emerged in the decades after WWII, when organizations started to focus on expertise and to provide services.  Peter Drucker invented the term "knowledge work", and Douglas McGregor described Theory Y, a management style distinguished from the earlier Theory X.  Command and control does not work, and a new contract with workers is needed to retain their talent and expertise.

There is a second phase in process technology as well, with the recent dramatic rise in interest in case management technologies to support knowledge workers, to allow them to leverage their expertise, and to enable the far more agile organizations necessary to provide services.

McGrath proposes that we are at the dawn of a third era in management.  The first era was machine-like, to produce products; the second collaborative, to provide advanced services; the third will be to create "complete and meaningful experiences."  She says this is a new era of empathy.  A pull organization would be empathetic in the sense that customer desires rather directly drive the working of the organization.  This might be the management style that Margaret Wheatley, Myron Kellner-Rogers, Fritjof Capra, and other writers on this new path are hinting at.

We should brace ourselves for a similar emergence of technology that will enhance and improve our ability to work together in this more empathetic style.  A hyper-social organization might be the organizing principle.  What will that new process technology look like?  I don’t know, but we have some time to sort that out.

Management I emerged from the 1800s to 1950, while that early process technology appeared in the 1980s and 1990s.   Management II emerged in the 1950s and 1960s, and the corresponding process technology started appearing in a real way around 2010.  If Management III is appearing now, perhaps we have until 2020 to get to the point where the technology to support it is being worked out. That leaves us plenty of time to work out the details.

Or maybe not.  What if Management III is emerging concomitantly with the social and Enterprise 2.0 technology we see starting to be used today?  What if Management I was originally tied inherently to the rise of steam and electric power, while Management II inherently came with the technology of telephones and telefaxes?  If Management III is tied directly to new social technologies, it might be that by the time it fully emerges, the technology base will already be set.  We see the technology support for Management I & II as separate because the information technology came later, but that is not the case for Management III.  It might be happening now.

Surely in the future, when we look back on these times, we will recognize the early attempts at systems that support an empathetic style of management starting here and now.  We need only look for it, and recognize it for what it is.



by kswenson at August 01, 2014 02:34 PM

Thomas Allweyer: The Winners of the BPMS Book Have Been Drawn

Many thanks to everyone who took part in the draw for the BPMS book.

One copy each goes to:

  • Dr. Wiebke Dresp, Rösrath
  • Tim Pidun, Dresden
  • Dr. Tobias Walter, Offenbach

Congratulations! The books are on their way to you.

Further information about the book at www.kurze-prozesse.de/bpms-buch

by Thomas Allweyer at August 01, 2014 10:49 AM

July 30, 2014

Sandy Kemsley: BP3 Brazos Portal For IBM BPM: One Ring To Rule Them All?

Last week BP3 announced the latest addition to their Brazos line of UI tooling for IBM BPM: Brazos Portal. Scott Francis gave me a briefing a few days before the announcement, and he had Ivan...

[Content summary only, click through for full article and links]

by Sandy Kemsley at July 30, 2014 12:38 PM

July 28, 2014

BPinPM.net: Leading BPM – Agenda of 2014 BPinPM.net Conference revealed!

In a growing number of organizations, the focus of BPM is moving towards leadership-oriented topics in order to increase the acceptance and benefit of process management systems. Basics like process modeling and compliance management are already quite mature and widely discussed. Thus, we are going to put the motto "Leading BPM" into practice and will pay attention to upcoming areas such as "real" BPM training (not system training) for employees and management, change management aspects, and activities to strengthen the acceptance of BPM systems.

To facilitate these topics, we have joined forces with BPM experts from areas such as engineering, finance, aerospace, the social sector, and the chemical industry to run a number of workshops identifying best practices. Combined with the latest insights from the scientific world and practical examples, the results of the workshops will be presented at the 2014 BPinPM.net Process Management Conference on Nov 24/25 at the Lufthansa Training & Conference Center Seeheim near Frankfurt, Germany.

In addition, we will focus on future-oriented BPM topics and present detailed results of our "Digital Age BPM" workshop series, which we conducted in cooperation with digital leadership expert Willms Buhse and his doubleYUU team. In a group of five organizations from various sectors, we experimented with bringing BPM together with digital-age aspects such as social media, Web 2.0, agile management, and mobile devices. The results are quite fascinating and will be presented on day two of the conference.

For the first time ever, we will present the BPM2thePeople Award to an organization from the social or education sector for its achievements in applying BPM methods. Read more about the award on its website: http://www.BPM2thePeople.org

Of course, the conference will also offer plenty of space for knowledge exchange with other BPM experts. And especially to facilitate networking, we will offer several speed dating sessions during the breaks.

Finally, Samsung will support us with the latest mobile devices to continue our paperless conference approach and to enable live polls and digital networking. Many thanks to Samsung! :-)

So don’t miss this year’s BPinPM.net Conference and register now!


PS: Currently, we are offering an early bird discount of 10 percent!

Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international BPinPM.net community as well. Please feel free to contact the team.

by Mirko Kloppenburg at July 28, 2014 12:05 PM

Keith Swenson: Wirearchy – a pattern for an adaptive organization?

What is a Wirearchy?  How does it work?  When should it be considered?  When should it be avoided?  What are the advantages?  This post covers the basic elements of a Wirearchy.

What is a Wirearchy?

Jon Husband has a blog, "wirearchy.com", which as you can tell from the name is dedicated to the subject.

It is an organizing principle.  Instead of the top-down, command-and-control hierarchy that we are used to, a wirearchy organizes around champions and channels.  It is an organization designed around a networked world.  He says:

The working definition of Wirearchy is "a dynamic two-way flow of power and authority, based on knowledge, trust, credibility and a focus on results, enabled by interconnected people and technology".

The description reads a little like the Communist Manifesto, with the employee being liberated from the oppression of bureaucracy, where "rapid flows of information are like electronic grains of sand, eroding the pillars of rigid traditional hierarchies."  There is no doubt that information technology is having a profound effect on how we organize, and a wirearchy is an honest attempt to distill the trends that are already happening around us.

Taylorism

Husband feels that Taylorism, or Scientific Management, is coded into the traditional hierarchy.  Scientific management can be seen as the application of Enlightenment (reductionist) principles to work processes: breaking highly complicated manufacturing into a sequence of discrete, well-defined steps, so that work can be passed from person to person in a factory-like setting.  It is surprising that he draws a parallel between hierarchies and scientific management, because the latter is between 100 and 200 years old, while hierarchies have been used since ancient times and don't seem to be related to the industrial revolution at all.  Hierarchies worked for the Egyptians.

“first we shape our structures, then our structures shape us” -Churchill

Is it Technology?

Husband claims that the concept of wirearchy has nothing to do with technology.  I think I know what he means: it is an organization of human interactions, not something designed into a piece of software.  Thus a wirearchy would be what we used to call "the grapevine", an informal network of communications.  In this sense wirearchies have always existed.

To say that it has nothing to do with technology is not really honest.  It is the expansion of telecommunications technologies that allows so many more people to be connected than before.  It is information technology that allows a wirearchy to be more than just a gossip network.

Indeed, Husband seems to contradict himself.  Consider the advice to a manager: "become knowledgeable about online work systems and how the need for collaboration is changing the nature of work."   A wirearchy is not instigated by a specific technology system, but there is no doubt that a wirearchy results from the new modes of communication that social technology in general brings.

Not a Revolution

Husband does not expect traditional hierarchies to be replaced by wirearchies.  Hierarchies remain, but wirearchies explain some of the changes we are seeing in the interconnected world.

I really want to compare this to Francois Gossieaux's "Human 1.0", which is the idea that social technologies are allowing us to work together in a much more natural way.  People have always built their own networks, but during the industrial revolution there was a strong incentive to organize into much more rigid organizational structures.  Call those rigid structures from industrialization and scientific management "Human 2.0".  Social networks will then allow us to be just as productive, but get back to relating to each other in the way that people always have.

The Big Shift: Push vs. Pull

Hagel et al. talk about social technology bringing about a shift from push-oriented organizations to pull organizations.  The point of a wirearchy is that initiatives do not start from the top and get pushed to the workers.  Instead, initiatives can start from any place, and be carried out by ad-hoc teams that know each other and share common goals.  That sounds very much the same as a pull organization: the edges of the organization in direct contact with the customer make key decisions about what will be offered, and then are supported by the rest of the organization to deliver the results.  The hierarchy does not go away; instead the focus is on how it is used, and where the initiative comes from.

Agility

One of the central themes is responsiveness to change.  He says people should "be aware of, and identify, the changes and prepare for more change on an ongoing basis."  In other words, prepare to be agile.  Don't forget, it was Alvin Toffler in his 1970 book "Future Shock" who said exactly the same thing: in the future, success will depend less on perfecting a particular mode of work, and more on learning how to rapidly and continually adopt new patterns of work.  The idea that we need to adapt quickly is not new.

But Still … Highly Relevant

Reading the above, I seem critical of the originality of wirearchy, but let me clarify.  Wirearchy is a way of seeing and talking about what is happening.  Many others are seeing the same thing, and that is why it is so important.  He has written many posts highlighting these ideas.

Harold Jarche has also written a number of posts on wirearchy.

Net-Net

Organizations that do not adapt to the changes that social technology brings to the market and to the office will be left behind by those who do.  There is no question that such pressures exist.  It is useful to talk about a wirearchy as a view of how organizations are changing, and as a guiding principle to help determine the best future course of action available to organizations.



by kswenson at July 28, 2014 10:39 AM

July 25, 2014

Thomas Allweyer: Agile Methods Continue to Advance

For the second time since 2012, the BPM Lab at Hochschule Koblenz, led by Ayelt Komus, has surveyed the adoption of agile methods. The study's authors were pleased to count more than 600 participants from 30 countries. "Two years on, agile methods such as Scrum and IT Kanban are even more established and have increasingly arrived in daily practice outside software development as well," the authors summarize.

Almost two thirds of the participants only started working agile within the last four years. Agile methods are mostly not applied in their pure form, but in combination with elements of other, often classical approaches. Scrum remains the most widely used method, but Kanban and Design Thinking show markedly higher growth rates than other methods. Overall, in the current survey agile methods were once again rated considerably more positively and as more successful than classical project management methods.

The final report of the study is available via www.status-quo-agile.de.

by Thomas Allweyer at July 25, 2014 10:21 AM

July 21, 2014

Drools & JBPM: Drools Executable Model (Rules in pure Java)

The Executable Model is a re-design of the lowest-level Drools model handled by the engine. In the current series (up to 6.x) the executable model has grown organically over the last 8 years and was never really intended to be targeted by end users. Those wishing to write rules programmatically were advised to do it via code generation and target drl, which was not ideal. There was never any drive to make this more accessible to end users, because the extensive use of anonymous classes in Java would have been unwieldy. With Java 8 and lambdas this changes, and the opportunity to make a more compelling model that is accessible to end users becomes possible.

This new model is generated during the compilation process of higher-level languages, but can also be used on its own. The goal is for this Executable Model to be self-contained and avoid the need for any further bytecode munging (analysis, transformation, or generation). From this model's perspective, everything is provided either by the code or by higher-level language layers. For example, indexes must be provided as arguments, which the higher-level language generates through analysis when it targets the Executable Model.
   
It is designed to map well to fluent builders, leveraging Java 8's lambdas. This will make it more appealing to Java developers and language developers. It will also allow low-level engine features to be designed and tested independently of any language, which means we can innovate at the engine level without having to worry about the language layer.
   
The Executable Model should be generic enough to map into multiple domains. It will be a low-level dataflow model in which you can address functional reactive programming models, but one that is still usable for building a rule-based system too.

The following example provides a first view of the fluent DSL used to build the executable model:
         
DataSource persons = sourceOf(new Person("Mark", 37),
                              new Person("Edson", 35),
                              new Person("Mario", 40));

Variable<Person> markV = bind(typeOf(Person.class));

Rule rule = rule("Print age of persons named Mark")
        .view(
            input(markV, () -> persons),
            expr(markV, person -> person.getName().equals("Mark"))
        )
        .then(
            on(markV).execute(mark -> System.out.println(mark.getAge()))
        );

The previous code defines a DataSource containing a few person instances and declares the Variable markV of type Person. The rule itself contains the usual two parts: the LHS is defined by the set of inputs and expressions passed to the view() method, while the RHS is the action defined by the lambda expression passed to the then() method.

Analyzing the LHS in more detail, the statement
         
input(markV, () -> persons)
binds the objects from the persons DataSource to the markV variable, pattern matching by the object class. In this sense the DataSource can be thought of as the equivalent of a Drools entry-point.

Conversely the expression
         
expr(markV, person -> person.getName().equals("Mark"))
uses a Predicate to define a condition that the object bound to the markV Variable has to satisfy in order to be successfully matched by the engine. Note that, as anticipated, the evaluation of the pattern matching is not performed by a constraint generated as a result of any sort of analysis or compilation process; it is merely executed by applying the lambda expression implementing the predicate (in this case, person -> person.getName().equals("Mark")) to the object to be matched. In other words, the former DSL produces the executable model of a rule that is equivalent to the one resulting from parsing the following drl.
         
rule "Print age of persons named Mark"
when
    markV : Person( name == "Mark" ) from entry-point "persons"
then
    System.out.println(markV.getAge());
end
A rete builder that can be fed with the rules defined in this DSL is also under development. In particular it is possible to add these rules to a CanonicalKieBase and then create KieSessions from it as for any other normal KieBase.
         
CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(rule);

KieSession ksession = kieBase.newKieSession();
ksession.fireAllRules();
Of course the DSL also allows defining more complex conditions, like joins:
         
Variable<Person> markV = bind(typeOf(Person.class));
Variable<Person> olderV = bind(typeOf(Person.class));

Rule rule = rule("Find persons older than Mark")
        .view(
            input(markV, () -> persons),
            input(olderV, () -> persons),
            expr(markV, mark -> mark.getName().equals("Mark")),
            expr(olderV, markV, (older, mark) -> older.getAge() > mark.getAge())
        )
        .then(
            on(olderV, markV)
                .execute((p1, p2) -> System.out.println(p1.getName() + " is older than " + p2.getName()))
        );
or existential patterns:
 
Variable<Person> oldestV = bind(typeOf(Person.class));
Variable<Person> otherV = bind(typeOf(Person.class));

Rule rule = rule("Find oldest person")
        .view(
            input(oldestV, () -> persons),
            input(otherV, () -> persons),
            not(otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge())
        )
        .then(
            on(oldestV)
                .execute(p -> System.out.println("Oldest person is " + p.getName()))
        );
Here the not() stands for the negation of any expression, so the form used above is actually only a shortcut for
 
not( expr( otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge() ) )
Accumulate is also already supported, in the following form:
 
Variable<Person> person = bind(typeOf(Person.class));
Variable<Integer> resultSum = bind(typeOf(Integer.class));
Variable<Double> resultAvg = bind(typeOf(Double.class));

Rule rule = rule("Calculate sum and avg of all persons having a name starting with M")
        .view(
            input(person, () -> persons),
            accumulate(expr(person, p -> p.getName().startsWith("M")),
                       sum(Person::getAge).as(resultSum),
                       avg(Person::getAge).as(resultAvg))
        )
        .then(
            on(resultSum, resultAvg)
                .execute((sum, avg) -> result.value = "total = " + sum + "; average = " + avg)
        );
To provide one last, more complete use case, the executable model of the classical fire alarm example can be defined with this DSL as follows.
 
Variable<Room> room = any(Room.class);
Variable<Fire> fire = any(Fire.class);
Variable<Sprinkler> sprinkler = any(Sprinkler.class);
Variable<Alarm> alarm = any(Alarm.class);

Rule r1 = rule("When there is a fire turn on the sprinkler")
        .view(
            input(fire),
            input(sprinkler),
            expr(sprinkler, s -> !s.isOn()),
            expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom()))
        )
        .then(
            on(sprinkler)
                .execute(s -> {
                    System.out.println("Turn on the sprinkler for room " + s.getRoom().getName());
                    s.setOn(true);
                })
                .update(sprinkler, "on")
        );

Rule r2 = rule("When the fire is gone turn off the sprinkler")
        .view(
            input(sprinkler),
            expr(sprinkler, Sprinkler::isOn),
            input(fire),
            not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom()))
        )
        .then(
            on(sprinkler)
                .execute(s -> {
                    System.out.println("Turn off the sprinkler for room " + s.getRoom().getName());
                    s.setOn(false);
                })
                .update(sprinkler, "on")
        );

Rule r3 = rule("Raise the alarm when we have one or more fires")
        .view(
            input(fire),
            exists(fire)
        )
        .then(
            execute(() -> System.out.println("Raise the alarm"))
                .insert(() -> new Alarm())
        );

Rule r4 = rule("Lower the alarm when all the fires have gone")
        .view(
            input(fire),
            not(fire),
            input(alarm)
        )
        .then(
            execute(() -> System.out.println("Lower the alarm"))
                .delete(alarm)
        );

Rule r5 = rule("Status output when things are ok")
        .view(
            input(alarm),
            not(alarm),
            input(sprinkler),
            not(sprinkler, Sprinkler::isOn)
        )
        .then(
            execute(() -> System.out.println("Everything is ok"))
        );

CanonicalKieBase kieBase = new CanonicalKieBase();
kieBase.addRules(r1, r2, r3, r4, r5);

KieSession ksession = kieBase.newKieSession();

// phase 1
Room room1 = new Room("Room 1");
ksession.insert(room1);
FactHandle fireFact1 = ksession.insert(new Fire(room1));
ksession.fireAllRules();

// phase 2
Sprinkler sprinkler1 = new Sprinkler(room1);
ksession.insert(sprinkler1);
ksession.fireAllRules();

assertTrue(sprinkler1.isOn());

// phase 3
ksession.delete(fireFact1);
ksession.fireAllRules();
In this example it's possible to note a few more things:

  • Some repetitions are necessary to bind the parameters of an expression to the formal parameters of the lambda expression evaluating it. Hopefully it will be possible to overcome this issue using the -parameters compilation argument once this JDK bug is resolved.
  • any(Room.class) is a shortcut for bind(typeOf(Room.class))
  • The inputs don't declare a DataSource. This is a shortcut to state that those objects come from a default empty DataSource (corresponding to the Drools default entry-point). In fact in this example the facts are programmatically inserted into the KieSession.
  • Using an input without providing any expression for that input is actually a shortcut for input(alarm), expr(alarm, a -> true)
  • In the same way an existential pattern without any condition like not(fire) is another shortcut for not( expr( fire, f -> true ) )
  • Java 8 syntax also allows defining a predicate as a method reference accessing a boolean property of a fact, as in expr(sprinkler, Sprinkler::isOn)
  • The RHS, together with the block of code to be executed, also provides a fluent interface to define the working memory actions (inserts/updates/deletes) to be performed when the rule fires. In particular, update also takes a varargs of Strings reporting the names of the properties changed in the updated fact, as in update(sprinkler, "on"). Once again this information has to be provided explicitly, because the executable model has to be created without the need for any code analysis.

by Mario Fusco (noreply@blogger.com) at July 21, 2014 04:48 PM

July 20, 2014

Drools & JBPM: jBPM6 Developer Guide coming out soon!

Hello everyone. This post is just to let you know that jBPM6 Developer Guide is about to get published, and you can pre-order it from here and get from a 20% to a 37% discount on your order! With this book, you can learn how to:
  • Model and implement different business processes using the BPMN2 standard notation
  • Understand how and when to use the different tools provided by the JBoss Business Process Management (BPM) platform
  • Learn how to model complex business scenarios and environments through a step-by-step approach
Here is a list of what you will find in each chapter:

Chapter 1, Why Do We Need Business Process Management?, introduces the BPM discipline. This chapter lays the basis for the rest of the book by providing an understanding of why and how the jBPM6 project has been designed, and the path its evolution will follow.
Chapter 2, BPM Systems Structure, goes in depth into understanding what the main pieces and components inside a Business Process Management System (BPMS) are. This chapter introduces the concept of BPMS as the natural follow up of an understanding of the BPM discipline. The reader will find a deep and technical explanation about how a BPM system core can be built from scratch and how it will interact with the rest of the components in the BPMS infrastructure. This chapter also describes the intimate relationship between the Drools and jBPM projects, which is one of the key advantages of jBPM6 in comparison with all the other BPMSs, as well as existing methodologies where a BPMS connects with other systems.
Chapter 3, Using BPMN 2.0 to Model Business Scenarios, covers the main constructs used to model our business processes, guiding the reader through an example that illustrates the most useful modeling patterns. The BPMN 2.0 specification has become the de facto standard for modeling executable business processes since it was released in early 2011, and is recommended to any BPM implementation, even outside the scope of jBPM6.  
Chapter 4, Understanding the Knowledge Is Everything Workbench, takes a look into the tooling provided by the jBPM6 project, which will enable the reader to both define new processes and configure a runtime to execute those processes. The overall architecture of the tooling provided will be covered as well in this chapter.
Chapter 5, Creating a Process Project in the KIE Workbench, dives into the required steps to create a process definition with the existing tooling, as well as to test it and run it. The BPMN 2.0 specification will be put into practice as the reader creates an executable process and a compiled project where the runtime specifications will be defined.
Chapter 6, Human Interactions, covers in depth the Human Task component inside jBPM6. A big feature of BPMS is the capability to coordinate human and system interactions. It also describes how the existing tooling builds a user interface using the concepts of task lists and task forms, exposing the end users involved in the execution of multiple process definitions’ tasks to a common interface.
Chapter 7, Defining Your Environment with the Runtime Manager, covers the different strategies provided to configure an environment to run our processes. The reader will see the configurations for connecting external systems, human task components, persistence strategies and the relation a specific process execution will have with an environment, as well as methods to define their own custom runtime configuration.
Chapter 8, Implementing Persistence and Transactions, covers the shared mechanisms between the Drools and jBPM projects used to store information and define transaction boundaries. When we want to support processes that coordinate systems and people over long periods of time, we need to understand how the process information can be persisted.  
Chapter 9, Integration with other Knowledge Definitions, gives a brief introduction to the Drools Rule Engine. It is used to mix business processes with business rules, to define advanced and complex scenarios. We also cover Drools Fusion, an added feature of the Drools Rule Engine that provides temporal reasoning, allowing business processes to be monitored, improved, and covered by business scenarios that require temporal inferences.
Chapter 10, KIE Workbench Integration with External Systems, describes the ways in which the provided tooling can be extended with extra features, along with a description of all the different extension points provided by the API and exposed by the tooling. A set of good practices is described in order to give the reader a comprehensive way to deal with different scenarios a BPMS will likely face.
Appendix A, The UberFire Framework, goes into detail about the base utility framework used by the KIE Workbench to define its user interface. The reader will learn the structure and use of the framework, along with a demonstration that will enable the extension of any component in the workbench distribution you choose.

Hope you like it! Cheers,

by Marian Buenosayres (noreply@blogger.com) at July 20, 2014 09:10 PM

July 18, 2014

Drools & JBPM: Kie Uberfire Social Activities

The Uberfire Framework has a new extension: Kie Uberfire Social Activities. In this initial version, this Uberfire extension provides an extensible architecture to capture, handle, and present (in a timeline style) configurable types of social events.


  • Basic Architecture
An event is any type of CDI event and will be handled by its respective adapter. The adapter is a CDI managed bean which implements the SocialAdapter interface. The main responsibility of the adapter is to translate a CDI event into a Social Event. This social event will be captured and persisted by Kie Uberfire Social Activities in the respective timelines (basically the user and type timelines).
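To make this concrete, an adapter might look roughly like the sketch below. Apart from the SocialAdapter interface and its getTimelineFilters() method, which are named in this post, every type and signature here is an assumption for illustration, not the actual API:

import java.util.Collections;
import java.util.Date;
import java.util.List;
import javax.enterprise.context.ApplicationScoped;

// Sketch only: SocialAdapter and getTimelineFilters() are named in this post;
// all other names (DocumentSavedEvent, toSocial, SocialActivitiesEvent,
// SocialUser, TimelineFilter) are illustrative assumptions.
@ApplicationScoped
public class DocumentSavedAdapter implements SocialAdapter<DocumentSavedEvent> {

    // Translate the observed CDI event into a Social Event, which the
    // framework then persists into the user and type timelines.
    @Override
    public SocialActivitiesEvent toSocial(DocumentSavedEvent event) {
        return new SocialActivitiesEvent(new SocialUser(event.getUserName()),
                                         "DOCUMENT_SAVED",   // the event type timeline
                                         new Date());
    }

    // Pluggable filters, reachable as query parameters on the Atom URL,
    // e.g. http://project/social/DOCUMENT_SAVED?max-results=1
    @Override
    public List<TimelineFilter> getTimelineFilters() {
        return Collections.emptyList();
    }
}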

That is the basic architecture and workflow of this tech:

Basic Architecture


  • Timelines

There are many ways to interact with and display a timeline. This section briefly describes each of them.

a-) Atom URL

Social Activities provides a custom URL for each event type. This URL is accessible at http://project/social/TYPE_NAME.



The user timelines work in the same way, accessible at http://project/social-user/USER_NAME.

Another cool feature is that an adapter can provide its own pluggable URL filters. By implementing the method getTimelineFilters from the SocialAdapter interface, it can do anything it wants with its timeline. These filters are accessible via a query parameter, e.g. http://project/social/TYPE_NAME?max-results=1.


b-) Basic Widgets

Social Activities also includes some basic (extendable) widgets. There are two types of timeline widgets: simple and regular widgets.

Simple Widget

Regular Widget

The ">" symbol on the 'Simple Widget' is a pagination component. You can configure it through a simple API: with an object SocialPaged( 2 ) you create a pagination with a page size of 2 items. This object helps you customize your widgets, using the methods canIGoBackward() and canIGoForward() to display icons, and forward() and backward() to set the navigation direction.
The Social Activities component has initial support for avatars. If you provide a user e-mail to the API, the gravatar image will be displayed in these widgets.
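A minimal sketch of how a widget might drive the pagination API described above; only SocialPaged( 2 ) and its four navigation methods are taken from this post, while the surrounding class and the refresh() helper are hypothetical:

public class TimelinePagerSketch {
    // Only SocialPaged and its four navigation methods come from the post;
    // everything else in this class is a hypothetical illustration.
    private final SocialPaged paged = new SocialPaged(2);  // pages of 2 items

    public boolean showForwardIcon()  { return paged.canIGoForward();  }
    public boolean showBackwardIcon() { return paged.canIGoBackward(); }

    public void onForwardClicked() {
        paged.forward();      // set navigation direction to "next page"
        refresh();
    }

    public void onBackwardClicked() {
        paged.backward();     // set navigation direction to "previous page"
        refresh();
    }

    private void refresh() { /* hypothetical: re-query events and redraw the widget */ }
}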


c-) Drools Query API

Another way to interact with a timeline is through the Social Timeline Drools Query API. This API executes one or more DRLs on a timeline over all cached events. It's a great way to merge different types of timelines.



  • Followers/Following Social Users

A user can follow another social user.  When a user generates a social event, this event is replicated into the timelines of all his followers. Social Activities also provides basic widgets to follow another user, show all social users, and display a user's following list.


It is important to mention that the current implementation lists social users through a "small hack": we search the default Uberfire git repository for branch names (each Uberfire user has his own branch) and extract the list of social users from them.

This hack is needed because we don't have direct access to the user base (due to container-based authentication).



  • Persistence Architecture

The persistence architecture of Social Activities is built on two concepts: local cache and file persistence. The local cache is an in-memory cache that holds all recent social events. Events are kept only in this cache until the maximum-events threshold is reached. The size of this threshold is configured by the system property org.uberfire.social.threshold (default value 100).
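For illustration, the threshold could be raised to 500 events by setting the property before the component first reads it; this Java one-liner is a sketch, equivalent to starting the JVM with -Dorg.uberfire.social.threshold=500:

// Must run before Social Activities first reads the property.
System.setProperty("org.uberfire.social.threshold", "500");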

When the threshold is reached, Social Activities persists the current cache into the file system (the system.git repository, social branch). Inside this branch there is a social-files directory with this structure:



  • userNames: a file that contains all social user names
  • a file for each user (named after him) that contains a JSON with the user's data
  • a directory for each social event type
  • a directory "USER_TIMELINE" that contains the individual user timelines


Each directory keeps a file "LAST_FILE_INDEX" that points to the most recent timeline file.




Inside each file, there is a persisted list of Social Events in JSON format:

({"timestamp":"Jul16,2014,5:04:13PM","socialUser":{"name":"stress1","followersName":[],"followingName":[]},"type":"FOLLOW_USER","adicionalInfo":["follow stress2"]})

Each JSON is separated by a HEX marker and the size in bytes of the JSON. The file is read by Social Activities in reverse order.

The METADATA file currently holds only the number of social events in that file (used for pagination support).

It is important to mention that this whole structure is transparent to the widgets and pagination. The file structure and the respective cache are MERGED to compose a timeline.

  • Clustering
If your application is using Uberfire in a cluster environment, Kie Uberfire Social Activities also supports distributed persistence. Its cluster sync is built on top of UberfireCluster support (Apache ZooKeeper and Apache Helix).


Each node broadcasts social events to the cluster via a cluster message SocialClusterMessage.NEW_EVENT containing the Social Event data. With this message, all the nodes receive the event and can store it in their own local cache; at that point all node caches are consistent.
When a node's cache reaches the threshold, it locks the file system and persists its cache there. The node then sends a SOCIAL_FILE_SYSTEM_PERSISTENCE message to the cluster, notifying all the nodes that the cache has been persisted.
If any node receives a new event during this persistence process, the stale event is merged during the sync.

  • Stress Test and Performance

In my GitHub account there is an example stress test class used to test the performance of this project. This class isn't part of our official repository.

That test found that Social Activities can write ~1000 events per second on my personal laptop (MacBook Pro, Intel Core i5 2.4 GHz, 8 GB 1600 MHz DDR3, SSD). In a single-instance environment it wrote 10k events in 7 s, 100k in 48 s, and 500k events in 512 s.
  • Demo
A sample project for this feature can be found in my GitHub account, or you can just download and install the WAR of this demo. Please note that this repository has moved from my account to our official Uberfire extensions repository.

  • Roadmap
This is an early version of Kie Uberfire Social Activities. In the next versions we plan to provide:

  • A "Notification Center" tool, inspired by the OS X notification tool (far term)
  • Integration of this project with Dashbuilder KPIs (far term)
  • A purge tool, able to move old events from the file system to another persistence store (short term)
  • A way to use customized templates with the widgets; in this version we only provide basic widgets (near term)
  • A dashboard to group multiple social widgets (near term)

If you want to start contributing to open source, this is a nice opportunity. Feel free to contact me!

by ederign (noreply@blogger.com) at July 18, 2014 07:40 PM

Thomas Allweyer: My New Book: A Practice-Oriented Introduction to Business Process Management Systems

The new book is about Business Process Management Systems (BPMS), i.e. systems for process execution. What is the best way to learn how such a system works? By trying it out yourself. Just as you write and run many example programs when learning a programming language, for getting started with a BPMS you should model and execute as many executable processes as possible. For this reason the book contains more than 50 example processes, which can be downloaded from the book's website and tried out.

These include not only simple standard processes of the kind used in typical beginner tutorials, but also implementations of more complex tasks, such as multiple participants, exception handling, collaboration of several processes in different systems, and many more.

Process modeling with BPMN plays a central role here. An executable process, however, consists not only of a process model but also of numerous further elements, such as data, user dialogs, user roles and organizational structures, business rules, application functionality, and so on. These aspects are also explained in detail and applied in practice using many further examples. The reader learns how to create and use complex data objects, define message flows, specify user dialogs and screen flows, write scripts, integrate web services, select users dynamically, use decision tables, and much more.

The handling of individual steps in the process portal and the administration of a BPMS are also covered, as are process monitoring and controlling. The book deliberately focuses on the classical BPMS concept. More recent developments such as Adaptive Case Management and Social BPM are mentioned but not treated in depth; much in these areas is still in flux. The classical BPMS concept will continue to play an essential role, above all for standardized processes, and a solid understanding of the established BPMS approach is an important prerequisite for understanding newer developments.

So that every reader can try out the example processes and develop them further, they were created with the freely available Community Edition of Bonita BPM. The fundamentals taught in the book are general, however, and can be transferred to other BPM systems. Since every system has its peculiarities, some passages explain by way of example how a particular aspect was implemented in Bonita. The underlying principle should be found in any typical BPM system, although the concrete implementation may differ. The book contains no details on operating Bonita; the information needed to run the processes with Bonita can be found on the book's website.

The book is therefore also useful for users of other BPMS. Bonita can easily be installed as an additional learning environment on standard PCs. An additional learning effect arises from implementing individual example processes in another system. I am very interested in such experiences and will gladly publish processes ported to other systems on the website.

Since the feature set of the Bonita Community Edition is not as extensive as that of some commercial systems, creative solutions and workarounds had to be developed in several places. For example, the system provides neither complex nor event-based gateways. From a didactic point of view such limitations are often not bad at all, since working out how to achieve the desired behavior by other means is particularly instructive.

The book is aimed at all newcomers to Business Process Management Systems who want not only to understand the concepts in theory but also to apply them in practice. The target audience thus includes students of computer science, business informatics, and related degree programs, as well as practitioners: developers and process modelers who want to work their way into the subject. Even in the run-up to a system selection, it is useful to have engaged closely with the concrete problems of BPMS-based development, so that you can discuss with vendors on an equal footing and ask specific questions.

And here is a small giveaway: anyone who would like to receive the book free of charge can send an email with the subject "Verlosung BPMS-Buch" to info@kurze-prozesse.de by July 31, 2014. Three copies of the book will be raffled among all senders. Participants agree that, in the event of winning, their name and town will be published. Any recourse to legal action is excluded.

Website for the book – with the processes available for download
Order the book on amazon.

by Thomas Allweyer at July 18, 2014 09:13 AM

July 11, 2014

Keith Swenson: bpmNEXT talk on Personal Assistants

Here is a video from my presentation at bpmNEXT of March 2014 presenting the idea that in the future we might see a kind of agent, which I call a personal assistant, cloning and synchronizing projects such that large-scale processes actually emerge from the interactions of these agents.

Background

The presentation stands on its own (you can access the slides at slideshare), so I won't repeat any of that here, but rather give you some of the context.

bpmNEXT is a meeting of the elite in the process technology world, and it is always a great thrill to meet and debate with everyone all together in one place. Asilomar is such a nice location to hang out, and the hosts always make sure there is plenty of wine to lubricate the conversation. About 6 months earlier Jim Sinur had released a new book talking about agents, and I think a lot of people are rather misinformed about agents. In a certain sense, a BPM suite is actually just an agent because it is programmable. If programmability and autonomy are all there is to an agent, then what is the big deal? So to every person attending the conference, I kept asking "what is an agent?" Is this really something new, or just the same old thing with inflated terminology?

I think there is a real use for an agent to help work out the interface between different domains of control.  That is a really difficult problem.  The SOA people ignored it, and simply said that we would have WSDL interfaces in UDDI repositories.  WSDL does not work because it does not define the meaning behind the data values.  Data values are defined only by name and type, which really tells you nothing.  Different organizations typically use different names for the same thing, so a WSDL interface falls down when the names don’t match.

What if an autonomous agent could work out those details for us?  Within my organization it is pretty easy to come to agreement on terms and processes, but when bridging to another organization, there is a whole negotiation that needs to go on.  You can easily imagine an interchange something like this:

  • Agent A:  Hey there!  I have some work to be done, could you do it?
  • Agent B:  Well, yes, I do consulting from time to time, what do you need done?
  • Agent A: I can’t really tell you until you sign the non-disclosure.
  • Agent B: Well, what kind of work would it be, and I can tell you if I might do it.
  • Agent A: It is in the area of helping with a patient.  Do you help with skeletal problems of the back?
  • Agent B: Yes, I help a lot of people with back problems, it sounds like the sort of thing I might be able to help with.  What time frame are we talking about?
  • Agent A: Patient is in mild discomfort, so I would expect a consultation in the next two weeks would be acceptable.
  • Agent B: Great I have several openings next week.  What kind of non-disclosure agreement should be set up?
  • Agent A: The normal.  Here (passing document) is the standard form.  I see we have used this same form in the past.
  • Agent B: OK, I have noted that this agreement is in force with this patient.  Can I have the name of the patient?
  • Agent A: It is ‘Alex Demo’ and here is the task that is assigned: “investigate back problem”.   Would you like to take this assignment?
  • Agent B: Yes, I automatically accept tasks with that description.  Can you give me the pointer to the case folder?
  • Agent A: OK, the task has been marked as accepted, and you have been given rights as an 'attending subspecialist'.  Here (passing URL) is the link.
  • Agent B: OK, I am downloading the associated files, and I will take it from here.  I will update you when I have some results.
  • (Agent B notifies Charles about the new case, and at the same time sends a request to Alex for preferred appointment times.)

The dialog is described using the first-person pronoun 'I', but understand that the agents are speaking on behalf of their owners.  The owners have 'programmed' the agents, in some sense of the word, to take these actions on their behalf.  That is why I use the term "personal assistant".

The point about this exchange is that we programmers always want to simplify it into a single exchange: (1a) send the job request, and (1b) receive the result back.  This exchange instead makes use of progressive disclosure on both sides.  The delegating side does not want to disclose information about the patient until it is clear that the receiving party is willing and able to help.  Similarly, the receiving side may not want to disclose the full laundry list of services that can be performed, especially when different parties describe those tasks using different terms.  I have probably grossly oversimplified the exchange over the work to be done, which very well might include identifiers of specific work drawn from standard tables of services.  Also, keep in mind that the requester does not really know what actual treatment is needed: part of Charles' job is to determine that.  So the exchange is not really about doing a particular treatment, but rather about taking ownership of the case for a particular aspect of solving the problem.
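To make this concrete, here is a minimal sketch of how the receiving agent's side of such a progressive-disclosure exchange might look in code. Everything here, the phases, the class names, the matching rule, is a hypothetical illustration of the idea, not part of any actual personal assistant framework or protocol:

    // A minimal sketch of progressive disclosure between two agents.
    // All names here are hypothetical illustrations.
    import java.util.Optional;

    enum Phase { INQUIRY, NDA, ASSIGNMENT, ACCEPTED }

    record Message(Phase phase, String detail) {}

    class ReceivingAgent {
        // A rule the owner has "programmed" into the assistant.
        private final String specialty = "skeletal problems";

        Optional<Message> respond(Message incoming) {
            switch (incoming.phase()) {
                case INQUIRY:
                    // Disclose only whether the broad category matches;
                    // no patient details have been shared yet.
                    return incoming.detail().contains(specialty)
                            ? Optional.of(new Message(Phase.NDA, "standard non-disclosure form"))
                            : Optional.empty();
                case NDA:
                    // Only once the NDA is in force may the patient be named.
                    return Optional.of(new Message(Phase.ASSIGNMENT, "send task and case link"));
                case ASSIGNMENT:
                    // Accept, then synchronize the case folder.
                    return Optional.of(new Message(Phase.ACCEPTED, "downloading case folder"));
                default:
                    return Optional.empty();
            }
        }
    }

Each round discloses a little more, and either side can break off before anything sensitive has been shared.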

Agent B might have all sorts of rules that need to be tested or satisfied before accepting the job.  Agent A might have rules as well, such as probing for background information on previous patients.  It is possible that information is being gathered so that the humans can then make the decision to offer/accept the task before proceeding.  The high-level takeaway is that there is not simply a WSDL definition on one side, and a call to the service on the other.

In light of all this, I am demonstrating a framework and a protocol that can accomplish this kind of negotiation.  Yes, it has to get a lot more elaborate, but we have to start someplace, and that place is in basic referral, replication, and synchronization of case data.

What really drives me is the way that this will allow processes to emerge directly from the rules.  Over time, pathways will emerge, from medical centers to supporting specialists, to pharmacies and other service providers.  Just as in the business world, each party decides the kinds of jobs it will offer and/or accept depending upon the specialization of the person.  The processes themselves can form out of those rules without being specified in elaborate detail in advance.  The processes that emerge will be resilient and will automatically adapt to environmental changes.  It is a whole new world.


by kswenson at July 11, 2014 10:00 PM

July 10, 2014

Keith Swenson: AdaptiveCM Workshop in Germany September 1

Things are shaping up for a really great workshop to spend a day talking about the latest research findings and possibilities for Adaptive Case Management.  It will be September 1 in Ulm, Germany.  I am hoping to see all of those Europeans who have a hard time getting the travel budget to come to America.  Register now.

Program

8:00-9:00 – Registration
Session 1: Opening (Ilia Bider)
9:00-09:15 – Presentation of participants
9:15-10:30 – Keynote: “There is Nothing Routine about Innovation”. Keith Swenson
10:30-11:00 – Coffee Break
Session 2. Research Session (Keith Swenson)
11:00-11:30 “Research Challenges in Adaptive Case Management: A Literature Review”. Matheus Hauder, Simon Pigat and Florian Matthes
11:30-12:00 “Examining Case Management Demand using Event Log Complexity Metrics”. Marian Benner-Wickner, Matthias Book, Tobias Brückmann and Volker Gruhn
12:00-12:30 – “Process-Aware Task Management Support for Knowledge-Intensive Business Processes: Findings, Challenges, Requirements”. Nicolas Mundbrod and Manfred Reichert
12:30-14:00 Lunch
Session 3. Practice
14:00-14:30 “A Case for Declarative Process Modelling: Agile Development of a Grant Application System”. Søren Debois, Thomas Hildebrandt, Morten Marquard and Tijs Slaats
14:30-15:00 “Towards a pattern recognition approach for transferring knowledge in ACM”. Thanh Tran Thi Kim, Christoph Ruhsam, Max J. Pucher, Maximilian Kobler and Jan Mendling
15:00-15:30 “How can the blackboard metaphor enrich collaborative ACM systems?”. Helle Frisak Sem, Steinar Carlsen and Gunnar John Coll
15:30-16:00 – Coffee Break
Session 4. Ideas
16:00-16:30 “Towards Aspect Oriented Adaptive Case Management”. Amin Jalali and Ilia Bider
16:30-17:30 – Brainstorming
17:30-17:45 – Closing

Demo

Separately, I will also be demonstrating the Cognoscenti system as an open source platform for use in research around adaptive case management.

Hope to see you there!


by kswenson at July 10, 2014 09:03 PM

July 02, 2014

John Evdemon: Blog moved

I'm finally starting to blog again but I've decided to move to a different platform. My new blog is at http://looselycoupledthinking.com and has two formats: a noteblog and a traditional long-form blog. Most of my Twitter posts are available on my Link Blog. ...(read more)

by John_Evdemon at July 02, 2014 09:45 PM

June 27, 2014

Drools & JBPM: Compiling GWT applications on Windows

If you're a developer using Microsoft Windows and you've ever developed a GWT application of any size you've probably encountered the command-line length limitation (http://support.microsoft.com/kb/830473).

The gwt-maven-plugin constructs a command line statement to invoke the GWT compiler, containing what can be a very extensive classpath declaration. The length of the command line statement can easily exceed the maximum supported by Microsoft Windows, leaving the developer unable to compile their GWT application without resorting to tricks such as mapping a drive to their local Maven repository to shorten the classpath entries.

Hopefully this will soon become a thing of the past!

I've submitted a Pull Request to the gwt-maven-plugin project to provide a more concrete solution. With this patch the gwt-maven-plugin is able to compile GWT applications of any size on Microsoft Windows without developers needing to devise tricks.

Until the pull request is accepted and merged you can compile kie-drools-wb or kie-wb by fetching my fork of the gwt-maven-plugin and building it locally. No further changes are then required to compile kie-wb.

Happy hunting!


by Michael Anstis (noreply@blogger.com) at June 27, 2014 04:24 PM

Thomas Allweyer: Modeling, Simulation, and Execution in the Cloud with IYOPRO

The product name IYOPRO is an abbreviation of “Improve Your Processes”. The cloud-based solution indeed offers quite a lot that can be very useful for process improvement: from process modeling through simulation and activity-based process costing to process execution.

Particularly remarkable is the seamless integration of all these functionalities. With many other products you need several separate components, sometimes even from different vendors, to cover what is completely integrated in IYOPRO. For example, no separate deployment to a server is required to execute a process, since it resides in the integrated repository from the start. The model editor and the process portal for process execution can be operated through the same unified browser interface as the simulation or the reporting.

Even the range of functions of the free basic version for process modeling is remarkable and goes well beyond what one is used to from free modeling tools. Being able to create hierarchical process maps and BPMN collaboration diagrams is not that unusual. But IYOPRO additionally offers multi-language support, permissions management, collaborative modeling in a team, generation of process documentation in Word format, and animation of the sequence flow. Elsewhere this is only found in paid offerings.

Modeling in the browser is very fluid and intuitive. Many activities, such as aligning symbols, selecting the next element, or fitting the entire diagram into the modeling window, can be carried out quite elegantly. And anyone who wants to change the vertically displayed label of a horizontal pool does not have to tilt their head: the model rotates 90 degrees for text entry, then rotates back to its original position. Such details help determine how pleasant the work is for the modeler. The built-in conformance check points out violations of the BPMN syntax or, for example, elements that have not been labeled.

Anyone who wants to simulate their processes or execute them with the integrated process engine must opt for one of the paid IYOPRO editions. Corresponding model types are available for the additions to the BPMN models that are necessary for process execution. For example, organizational charts can be modeled as the basis for role definitions, as can data models for generating database schemas. There is also a form editor, integration of web services, and further tools of the kind a capable BPM system requires.

A particular strength of IYOPRO is its sophisticated component for the dynamic simulation of processes. It allows a very exact specification of the process logic with a wide variety of statistical distributions, resource requirements, shift calendars, and so on. The simulation is also used, in particular, as a tool for activity-based process costing. With the help of simulation, one can determine for shared resources what share of their usage falls to each individual process. This allows the respective costs to be allocated more accurately according to cause. While a simulation initially requires quite a lot of effort for data collection and validation, the savings achieved through uncovered optimization potential and better decision-making can pay for that effort quite quickly.

Certainly one can find modeling suites on the market with a larger repertoire of methods. Likewise, there are process execution systems with a larger range of functions. In return, IYOPRO scores with a high degree of continuity across all components, from business-oriented modeling through analysis to execution. As a cloud solution, no installation is required, and the only costs are the ongoing software license fees. This combination should be very interesting, particularly for many mid-sized companies.

by Thomas Allweyer at June 27, 2014 06:50 AM

June 25, 2014

BPM-Guide.de: Let’s go US

We are excited to announce the official incorporation of camunda Inc., registered in San Francisco, California. Camunda Inc. will market our product camunda BPM in North America. Besides FINRA and Sony, there are already several US based enterprise edition customers, and with BP3 and Trisotech, there are also strong partners available for consulting services around [...]

by Jakob Freund at June 25, 2014 09:31 PM

June 24, 2014

Keith Swenson: Late-Structured Processes

The term “unstructured” has always bothered me, because without structure you have randomness.  When knowledge workers get things done, it is not random in any way.  They accomplish things in a very structured way, it is just not possible to know ahead of time how it will be structured.

Last week at the BPM & Case Management Summit I presented my talk on how different technology should be brought to bear based on how predictable the work being supported is.  There is work on the left of the spectrum that is very predictable, and on the right very unpredictable.

Examples of highly predictable work are the jobs being done at an automobile factory or a fast food restaurant.  This work is predictable mainly because the environment is carefully controlled.  The factory is designed to supply the right things at the right time, and while there may be some (anticipated) variability in the mix of models being produced, one can clearly predict that each car will need four tires, mounted on four rims, attached to the wheel, etc.  A fast food restaurant takes an order, and fulfills it in a few minutes in a very repeatable way.

As you move to the right across the spectrum, we consider shorter predictability horizons.  Integration with other IT systems (the second pillar) means you have to be prepared on a monthly/yearly scale for systems to change.  Human processes (the third pillar) need to cope with people going on vacations, getting sick, learning new skills, and changing positions, with a weekly/monthly predictability horizon.  The fourth pillar is production case management, where the operations that one might do are well known, but when to do them is decided on a daily basis.  With adaptive case management (fifth pillar) you also have an hourly/daily predictability horizon, but the operations themselves cannot always be known in advance, and the knowledge worker plays a bigger role in planning the course of events.

Now compare the predictability horizon with the length of the process.  In the case of fast food, I can predict a month in advance how a particular type of food will be prepared (after the order is received), and it only takes a couple of minutes to do the preparation.  We call this predictable because the process is much shorter than the predictability horizon.  The other extreme might be patient care, which can take months or years, while our ability to predict is quite a bit shorter than that.  New procedures, new treatments, new drugs are continually entering the market, while a given patient episode might last months or even years.  While treating the patient, decisions are made, and the course of treatment can be predicted for certain durations; it is just that those durations are shorter than the overall process.  When this situation occurs, we call it unpredictable, because we cannot say when the process begins how the process will unfold.

Patient care is not random and it is not unstructured.  Unstructured implies that there is no thinking being done, that there is no planning necessary, and that there is no control.  The truth is exactly the opposite; there is quite a bit of thinking and planning being done, and quite a bit of control over what happens.  The work is not unstructured, it is simply structured while the work is going on.  The planning and the working happen at the same time, and not as discrete phases in the lifecycle of the process.

For this reason I propose the term “late-structured” to explain what knowledge workers do in case management.  They actively plan and structure the work; they just don’t do it as a separate phase.  There are other implications of this: since you cannot separate the planning from the working, clearly both the planning and the working need to be done by the same person.  Knowledge workers must plan, to some extent, their own work.  Also, there is little point in creating elaborate models of the work, since further planning will change them, and it is likely that each instance of the process will be unique.

There is no loss of control.  Late-structured processes can still be analyzed after the fact the same way that any process can, so one can assess how efficiently the work was done, as well as whether it complies with all the laws and customs.

When using the term “unstructured,” it is easy to get confused about the nature of the work, thinking instead that things unfold randomly in an uncontrolled way.  If you think about it as late-structured work, where the length of the process is longer than the ability to predict what will happen, but prediction and planning still proceed, you gain a better understanding of what is really going on.


by kswenson at June 24, 2014 06:05 PM

Thomas Allweyer: Version 3.0 of the BPM Common Body of Knowledge now published in German

The English edition of the BPM Common Body of Knowledge in version 3.0 has been on the market for some time, and now it has also been published in German. That it took a while is because the English text was not merely translated; rather, a total of ten authors adapted it to the circumstances in the German-speaking countries.

I have already written a blog entry about the English edition.

Guido Fischermanns offers some remarks about the German edition in his blog.


European Association of Business Process Management EABPM (ed.):
BPM CBOK® – Business Process Management BPM Common Body of Knowledge, Version 3.0, Leitfaden für das Prozessmanagement
Verlag Dr. Götz Schmidt, Wettenberg 2014.
The book at amazon.

by Thomas Allweyer at June 24, 2014 10:13 AM

June 23, 2014

Sandy Kemsley: BPM In Healthcare: Exploring The Uses

I recently wrote a paper on BPM in healthcare for Siemens Medical Systems: it was interesting to see the uses of both structured processes and case management in this context. You can download it...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 23, 2014 02:11 PM

June 21, 2014

Keith Swenson: BPM and Case Management Summit 2014

Here are some notes from this year's BPM & Case Management Summit in Washington DC.

Wow, what a conference!  This is the first major summit that includes case management.  The location was excellent, and so was the venue: The Ritz.  A number of new vendors were there, particularly in the case management space: Frame Solutions, AINS eCase, Emerge Adapt Case Blocks.  It was great to see so many old friends, as well as to make some new ones.  It was nice to see Connie Moore, who was awarded the Marvin L. Manheim Award for Significant Contributions in the Field of Workflow.

(pan of the meeting room, thanks to Chuck Webster)

Jim Sinur

The first keynote was given by Jim Sinur, who said that Adaptive Case Management is the on-ramp for intelligent business processes.  It was a good overview of the current situation in process management: old-style automation is doing well, but the current challenges are newer, more flexible, less structured, and more knowledge-worker-oriented processes.

He presented the spectrum of process types, as well as his process IQ five-axis spider chart.  He challenged us to ask what process will be like when we have the equivalent of 1000 Watsons available in the cloud to research answers to questions for us, and reinforced that we will have ‘personal assistants’ to help us run our processes.

NFSA

It was quite an honor to see two people from the Norwegian Food Safety Authority (NFSA).  I have written about this use case before.  It is such an important use for the kind of flexibility that case management affords.  The most interesting comment came at the end, in response to a question: even though extensive use cases were created to explore and understand what the users needed to be able to do, no modeling was done in BPMN or CMMN.  Instead, the text of the use case was taken directly to the ‘Task Template’, which is a simple list of tasks that drives a particular scenario.

Setrag Khoshafian

Talked about the “internet of things” (IoT). The market is estimated in the trillions of dollars. Big data today is nothing compared to what we will have when all these things start chatting with each other. “The largest and most durable wearable computer will be the car.” The process of everything.

Used the acronym Social Mobile Analytic Cloud Things: SMACT

Where is the knowledge? You might have policy and procedure manuals; however, you still need access to experts. Sometimes it is all written down, but only certain people know how to understand and interpret what is written. Applications are developed, but then changed, and the design artifacts no longer match. Knowledge is sometimes represented in the code, and also in the patterns of interactions. You can extract this (process mining) and the results are often surprising.

He presented a spectrum of work along these lines:

  1. system, very structured work – flow charts, very popular, useful
  2. clerical worker
  3. knowledge-assisted worker. This is the majority of white collar workers. They get assistance from various types of intelligence in the BPM environment.
  4. knowledge worker. Unstructured, dynamic; knowledge workers do not like to be told what to do.

One problem with self-driving cars is that they could get hacked. Can we really assume that this will be taken care of?

Device-directed warranty scenario: Imagine there is a sensor that determines that the CO2 level in a car is too high. It sends a message to the manufacturer, which brings this together with product info, customer info, and warranty info into a CASE. Then it is determined that service is required, and the right people are notified. Then there is a sub-case for a service order, and a sub-case for a warranty claim.  This is the kind of thing that might be possible today with the IoT.

Whitestein

Presentation of the Living Systems Process Suite, where goals drive everything. A governance goal describes how something should be achieved in order to be optimized. Layered process scoping: strategic goals over multiple instances, tactical goals for a particular item or case, then process activities. When you get down to the process level they use BPMN. These layered goals give them the ACM capability.

They call them “agents” because they act as independent process evaluators: the current situation is compared against the conditions you set, to bring the system in line with the goals.  If the current state is later found to be wrong, the agent can kill that process and start another. Agents are intelligent enough to start, stop, and modify running processes, and can insert ad-hoc tasks (issuing a request, performing a query, acting on results).

A question was asked: what about conflicting goals? Goals are in a hierarchy, and that helps prioritize the agents, but you need to take care when designing the goals to avoid a deadlock situation.

Clay Richardson

First keynote on the second day, excellent as well, about “design thinking.”  He sees BPM systems moving from holistic to specific, from linkages to context, from logic to empathy, and from deductive logic to abductive logic.

One of the keys is empathy.  Not empathy with the system, but empathy with the customer.  We might see a transition from process models to journey maps, from capability maps to personas, and from target operating models (TOM) to storytelling (of how the customer engages).  He feels there are two camps: transaction BPM and engagement BPM.

He cited the example of a Domino’s Pizza app: it shows where the pizza is in the process: being tossed, in the oven, on the way, or the delivery person knocking on the door.  This is more than just the minimum needed to buy a pizza; it really addresses the customer’s desire to know what is happening right now.

Instead of focusing on cost efficiency, we should focus on revenue growth.  Reconnect to the customer journey and customer experience.

Roger Baker, Chief Strategy Officer, Agilex

Gave an excellent talk on agile methodology and why it is needed.  The agile method is defined as 2-week sprints, small teams, requirements discovery, constant prioritization, continuous testing, frequent small releases, and communications, communications, communications.  About 1/3 of what is in a requirements document are things the writers wish they had but will never use.  He said these are like the “froth on the beer” — you want to see it, but it is otherwise not useful.  Agile development is a full-contact approach, from execs to workers.  Strict adherence to schedule.  The hardest part is “truth telling” — people don’t want to tell you they are having a problem, but if they stay silent the problem can explode.  Raise a problem when you see it, and get help.  If you have a problem and stay quiet, then we will find someone else to do the job.

He shifted the VA to an agile approach, and they were delivering, so Congress passed a new law in January 2011 which changed all the rules.  The VA delivered on 83% of milestones.  You have to plan on some failures, and when they happen, fail fast.

Waterfall assumes:

  • detailed requirements are clear from the beginning of the project
  • Assumes they don’t change
  • progress can be measured by documents produced
  • assumes that mega programs are manageable by normal humans
  • IT systems are IT's responsibility

Agile assumes:

  • Detailed requirements are NOT clear. They will know it when they see it
  • Requirements and priorities will change
  • produced software is the only measure
  • users and management need constant reassurance
  • everyone must be involved

Only the business knows the process.  Business must take ownership of the process.

RogerBaker

Steinar Carlsen

Talking about organizations and value formation. People do tasks; they don’t necessarily do processes. They have to relate to customers, authorities, and partners, all in a constant flux of change.

How is coordination of value production achieved? Email? Hearsay? SharePoint? Proposition: there should be an integrated task management system. When a task spins off another task, you have an emergent task management system.

Step details: mandatory, repeatable, pre-condition, include-condition, post-condition.

To design tasks, they use a “knowledge editor”: not a graphical tool, but text based, with the result saved in XML.
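As a rough illustration of the step attributes listed above, here is a sketch of what such a step definition might look like as a data structure; the field names mirror the attributes from the talk, but the class itself is hypothetical, not the actual data model of the system Carlsen described:

    import java.util.function.Predicate;

    // Hypothetical sketch of a task step with the attributes listed above.
    record TaskStep<C>(
            String name,
            boolean mandatory,             // must this step be done?
            boolean repeatable,            // may it be done more than once?
            Predicate<C> preCondition,     // when may the step start?
            Predicate<C> includeCondition, // is the step relevant to this case?
            Predicate<C> postCondition) {  // when does the step count as done?

        boolean isReady(C caseData) {
            return includeCondition.test(caseData) && preCondition.test(caseData);
        }
    }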


Rudy Montoya – CIO, Texas Attorney General

Keynote speaker on the third day.  He was involved in creating case management systems for things like crime victims' compensation and legal case management.

He gave the example of the explosion in West, Texas.  When it went off, they had to respond at a time when they had no idea whether this was a crime or terrorism.  The old system required that all information be together before they created the case; they needed to verify that a crime had occurred before starting the case, and there is a lot of work necessary to get to that point.  Case management starts with the data that exists, and builds forward to the classification of the case and its particulars.

They solved this in about 12 months, implemented in three phases:

1) eliminated the legacy document management system
2) replaced the mainframe
3) implemented a web portal

Euan McCreath

Very interesting presentation on how Emerge Adapt have implemented a real adaptive case management system.  There was a great slide on the difference between an adaptive approach and a traditional approach.


Key elements defined were data structures, then buckets. The process was very simple. New buckets could be created on the fly, and new tasks could be created. Buckets are related to work queues. Work could move from any state to any other state, but after a while certain moves were locked out by constraints in the process model.

My Talk

I presented the following slides:

And, as evidence, Charles Webster took a photo of me.


Sorry to everyone who gave talks that I was not able to see.  There were simply too many to see them all!



by kswenson at June 21, 2014 10:36 AM

June 19, 2014

Thomas Allweyer: Award for BPM initiatives in the education and social sectors

Anyone working on process management in the education and social sectors can apply for the newly created “BPM2thePeople” award. The submitted initiative should serve as a role model and thus include aspects that are interesting for other organizations as well. The second criterion assessed is the degree of innovation. And finally, it is about the efficient use of resources.

The award is offered by the Process Management Alliance, which originally emerged from an initiative of Lufthansa Technik and organizes the annual BPinPM.net conference.

The prize comes with 2,500 euros; the award ceremony will take place at this year's BPinPM.net conference on November 24-25 in Seeheim near Frankfurt.

You can apply here.

by Thomas Allweyer at June 19, 2014 12:26 PM

June 17, 2014

BPM-Guide.de: Webinar: BPMN with camunda BPM

I will give a webinar on July 17 about best practices around BPMN, especially in terms of business-IT alignment. Will this be a camunda BPM pitch as well? Of course. But hey, that’s how it goes: 1) Collect 4+ years of intensive consulting experience around BPMN, write a book etc. etc. 2) Discover that the [...]

by Jakob Freund at June 17, 2014 11:46 PM

BPinPM.net: BPM2thePeople Award – Spread the word and win a conference ticket!

This week, we are starting a new project to foster process management awareness in the education and social sectors. – The BPM2thePeople Award is our prize for best practice examples that increase the quality of processes in organizations from the education or social sector.

The winner of the award serves as an ideal for other organizations and supports the future development towards full establishment of BPM in these sectors.

Nowadays, many organizations from the education and social sectors are still afraid of such topics and feel insecure about implementing BPM projects to improve the management of their processes. – It is time to change that thinking, now!

All organizations from these two sectors (e.g., schools, kindergartens, universities, homes for the elderly, workshops for the handicapped) that invest in BPM could be the winner of the BPM2thePeople Award. The award will be handed over during our BPinPM.net Process Management Conference in November 2014, and the winner will receive a prize of 2,500 euros.

The final decision will be made by a jury of BPM experts from business and research, based on the three criteria “role model function”, “innovation”, and “efficiency” of the projects. But even if an organization does not see its project in all of those dimensions, it should not hesitate to apply before the end of August!

Because this blog is primarily read by BPM professionals, we ask you to spread the word and invite people from the education and social sectors to apply for the award. Please share this post or simply go to the website of the award and invite others:
http://www.BPM2thePeople.org/#einladen

As a THANK YOU we will raffle a ticket for this year’s BPinPM.net Process Management Conference among all supporters.

Best regards,
Mirko

by Mirko Kloppenburg at June 17, 2014 06:56 PM

June 16, 2014

Sandy Kemsley: Webinar On Collaborative Business Process Analysis In The Cloud

I’m giving a webinar on Wednesday, June 18 (11am Eastern) on social cloud-based BPA, sponsored by Software AG – you can register here to watch it live. I’ve written a white paper going into this...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 16, 2014 11:51 AM

Keith Swenson: Open Source Adaptive Case Management

Interested in trying out Adaptive Case Management without a huge investment?  Cognoscenti might be the option for you.  This post contains most of the contents of a paper I will be presenting in Germany in September on the Cognoscenti open source system which I have used in demos at the last two BPMNext conferences. To anyone wanting to experiment with ACM capabilities, a free solution might be worth trying.

The EDOC conference in Germany is mainly for researchers, and so most of this post focuses more on ways to experiment with the capabilities, and less about simply using the capabilities out of the box.

Demo: Cognoscenti
Open Source Software for Experimentation on
Adaptive Case Management Approaches

Abstract: Cognoscenti is an experimental system for exploring different approaches to supporting complex, unpredictable work patterns. The tendency with such work environments is to build increasingly sophisticated interaction patterns, which ultimately overwhelm the user with options. The challenge is to keep the necessary cognitive concepts very simple and allow the knowledge worker a lot of freedom, but at the same time offer structural support where necessary for security and access control. Cognoscenti is freely available as an open source platform with a basic set of capabilities for tracking documents, notes, goals, and roles, which might be used for further exploration into knowledge worker support patterns.

Introduction

Fujitsu has leadership in the business process space going back to 1991. In 2008, the Advanced Software Design Team started a prototype project from scratch to explore innovative directions in enterprise team work support. Cognoscenti became the test bed for experimental collaboration features to demonstrate properties of an adaptive case management system for supporting knowledge workers. Features that proved to work well were subsequently implemented in the other products. In 2013 internal company changes left the project without any specific strategic value. Since some people were using it as a productivity tool for managing their work, the decision was made to make it available as an open source project for anyone to use and possibly to help maintain.

One experiment was to implement preliminary versions of the “Project Exchange Protocol”, which allows case management systems and business process management (BPM) systems to exchange notes, documents, and goals using only representational state transfer (REST) oriented web service calls. Cognoscenti is available as a free reference implementation of these protocols for testing of protocol implementations. This paper seeks to demonstrate the open source system, its capabilities, and how research projects might use the software for their own research.

Architecture

Cognoscenti stores information in XML files in the file system. This was done for two reasons:

1) to avoid complication in installing the system. Requiring and initializing a database restricts the environments that it can be deployed to. XML offers a flexible schema that can be evolved efficiently – a task that can be quite complicated in a database. This allows prototype projects built on Cognoscenti to experiment easily with capabilities.

2) to allow direct manipulation of the files by users. The documents appear as files in the file system which can be opened and edited directly – even when the Cognoscenti server is not running. Changes are detected by file date and size.
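A minimal sketch of that change-detection idea, assuming a class that remembers the date and size recorded at the last commit (the names are illustrative, not Cognoscenti's actual source):

    import java.io.File;

    class TrackedDocument {
        private final File file;
        private long knownModified;   // timestamp recorded at the last commit
        private long knownSize;       // size recorded at the last commit

        TrackedDocument(File file) {
            this.file = file;
            commit();
        }

        // True if the file was edited outside the server since the last commit.
        boolean hasExternalChange() {
            return file.lastModified() != knownModified || file.length() != knownSize;
        }

        // Called when the user commits; a version copy would also be saved.
        final void commit() {
            knownModified = file.lastModified();
            knownSize = file.length();
        }
    }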

Conceptual Object Model

The root of everything is an index which is initialized by scanning the file system. From this you can retrieve “Site” objects, “Project” objects, and “UserProfile” objects.

The Site object represents a space both on the disk and address space on the web. A Site has a set of owners and executives all of whom are allowed to create projects in the site. A Site has a visual style that applies to all projects contained by that site. The site is mapped to a particular folder in the file system, and all of the contained projects are folders within that one.

The Project object is the space where most of the work takes place. A project has a collection of notes (small message-like documents with wiki-style formatting), attached documents, goals, roles, history, and email messages. All of the artifacts for a project are stored in the project folder on disk. There is a special sub-folder named “.cog” which is where all the housekeeping information about the project is kept, such as old versions of documents, etc. When the server detects that a file has changed, it will display an option to the user to commit those changes, which causes a copy of that file to be saved as a version inside the housekeeping folder.

While Sites and Projects are represented in one directory tree, user information comes from a folder that is disassociated from the sites and projects. The UserProfile object contains personal information for a particular user: OpenID addresses, email addresses, and settings. Because the user preferences are disassociated from the sites and projects, a user may play any role in any site or project without restriction. A user logs in once, and can access any number of projects and sites that they have access to.
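An illustrative sketch of this conceptual object model, with the containment relationships described above (the classes are simplified stand-ins, not the real source):

    import java.nio.file.Path;
    import java.util.List;

    class Site {
        Path folder;             // maps to a folder in the file system
        List<String> owners;     // owners and executives may create projects
        List<Project> projects;  // each project is a sub-folder of the site
    }

    class Project {
        Path folder;             // all artifacts live in this folder
        Path housekeeping;       // the ".cog" sub-folder: old versions, history
        List<Path> documents;
        List<String> notes;      // small wiki-style formatted messages
        List<String> goals;
        List<String> roles;
    }

    class UserProfile {          // kept apart from sites and projects, so a
        String openId;           // user may play any role anywhere
        List<String> emailAddresses;
    }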

Implementation Details

Cognoscenti is written in Java and runs in any servlet container, such as Apache Tomcat. The user interface is based on the Spring framework, with some browser-side capability from the Yahoo User Interface library and Google Web Toolkit; however, grafting a new user interface on for special-purpose projects is easily supported.

The entire code base is licensed under the Apache license, freely available to anyone who wants it.

Innovative Concepts

Security and Access Control

Cognoscenti is first and foremost a collaborative case management system designed for lots of people to work safely with sensitive information: health care information, social worker information, legal case information, etc. Access control needs to be a primary consideration. It is easy, or even trivial, to make a system that restricts access to particular artifacts to particular named users. But there is a problem with that: managing the many-to-many relationship between all the artifacts and users directly can be tedious and overwhelming. This leads either to users leaving the access too open, so that too many people have access, or alternatively leaving the access too restricted, so that people cannot get the information they need to do the job.

An indication that users are frustrated with the access control mechanism is seen when they take a document out of the document repository in order to email it to the people they want to give it to. This subversion of the access control mechanism is dangerous, because email itself is an unsafe medium for sensitive documents.

The developers of Cognoscenti view security as a usability problem: it must be easy enough to use that people get the security right, so that only the people who need access are getting it. These principles must be followed:

1) It must be easy for a normal, non-technical business user to express the correct security constraint to meet their needs.

2) Such an expression must meet the natural requirements of a social situation, and not merely the technical requirements of the system.

3) As teams change and evolve, the security constraint is constructed in such a way that it tracks the changing requirements, without needing tedious maintenance by the users.

4) No surprises: the meaning of the access control settings must be clear to non-technical users.

These requirements are considerably higher than those met by most current systems. For example, the Windows file system requires the user to do a kind of set algebra in order to determine whether a particular user can see a particular document or not.

Affordances for Change

If the project is entirely static in terms of membership, it is not difficult to get any such system set up correctly so that the fixed set of members has proper access. However, projects are not static. Imagine a police detective working to solve a crime who needs the help of an expert. That expert will need access to the case folder. Imagine how it would be if the police detective had to invite the expert, and then go to every document and grant them access. The preferred expert might not be available, and the job might be done by the expert’s assistant. Imagine how it would be if the detective then had to change the access control of all the documents. And once the immediate goal is done, it might be appropriate to remove their access. In a real project we expect new people to join and leave every day. It does not take too much change before the management of the access rights overwhelms the detective (and he resorts to email).

One experiment built into Cognoscenti is the idea that if a person is assigned to an active goal, they automatically get access to the documents. Goals also have the ability for the person assigned to a goal to delegate the assignment to another person, in effect automatically giving them access to the project folder without further trouble. This has an additional interesting aspect: when the goal is completed, the person doing the goal, if they have no other access, will automatically lose access, which is appropriate in certain situations.
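A sketch of that experiment, using hypothetical names: access is granted either by explicit membership or by assignment to an active goal, so delegating a goal hands over access, and completing the goal lets it lapse:

    import java.util.HashSet;
    import java.util.Set;

    class Goal {
        boolean active = true;
        final Set<String> assignees = new HashSet<>();
    }

    class CaseFolder {
        final Set<Goal> goals = new HashSet<>();
        final Set<String> members = new HashSet<>(); // explicit members

        // Access is explicit membership OR assignment to any active goal.
        boolean hasAccess(String user) {
            return members.contains(user)
                    || goals.stream().anyMatch(g -> g.active && g.assignees.contains(user));
        }

        // Delegation: handing off the goal also hands off folder access.
        void delegate(Goal goal, String from, String to) {
            goal.assignees.remove(from);
            goal.assignees.add(to);
        }

        // When the goal completes, access lapses unless granted elsewhere.
        void complete(Goal goal) {
            goal.active = false;
        }
    }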

Roles

It became clear that part of the solution will involve creating intermediate constructs, called roles, which represent groups of people who are treated equivalently. Roles, by themselves, are not very innovative, but in a standard implementation of roles, the maintenance of the roles can be tedious and time consuming. Cognoscenti explores the usability problems around roles and use of roles.

Roles are highly contextual, so some experimentation was done to associate roles automatically with certain actions, or to have roles modified as the result of actions in a natural way that does not require extensive maintenance by the users. For example, adding a user to an email message might, optionally, also add that user to an associated role.

Roles were unified with the concept of a view. That is, a role is a group of people in a particular context, but it also contains elements that control how those people see the project. The reason for this is to reduce the number of different conceptual objects that the user must deal with.

Role names are also used as a form of tagging of the content. A document can be associated with particular roles as it is added into the folder, as a way of categorizing the documents. Goals can be associated with roles so that when a person is added to a role, they are automatically assigned the goals, and they have access to the documents. The use of roles gives a lot of flexibility, but the challenge remains to make the usability easy enough that the case manager does not need to spend a lot of time creating a bunch of roles ahead of time; instead, roles are created easily, in a natural way, whenever needed by the emerging case.

Representation of Goals

Central to any work management system is the idea of tasks, activities, or goals. The challenge here was to explore the usability problems that prevent most users from keeping an accurate task list. Effort focused on how to make it really easy to create goals and assign them to others. Much attention was given to making goal lists as easy as a checklist. The challenge is to make the creation of a new goal, the assignment of a person to it, and the notification of that user easier than sending an email asking someone to do something. If it is easier than an email, people will use it. It also needs to be easy for the person receiving the request to access the case, even when they had no prior knowledge of that particular system.

An adaptive system needs to build up, over time, reusable templates for when similar situations are recognized in the future. It would be easy to provide a programming language of some sort to allow automation of future cases; however, this approach is not suitable because the intended knowledge workers are not themselves programmers. Effort was spent on trying to make templates that result from normal use of the system, without having to focus on programming-like activities.

The second challenge with templates was deciding what is and is not significant in a previous case. In some cases a previous use of a role should create a role with the same users in it, and in other cases the role should be empty.

A third challenge is deferred template use. Many template systems assume that the template will be known and invoked at the time of case creation. The problem is that users do not always know which template is appropriate at creation time. Knowledge workers will be handed a case to work on without knowing anything about the case. The job of the knowledge worker is to discover the details and handle whatever work needs to be done, figuring it out on the fly. A knowledge worker needs a place to work, to start collecting those details, and later determine which template to bring in.

Restructuring Over Time

Another use case challenge is that knowledge workers don’t necessarily know which parts will be significant at the time that they start working. What might initially look like a simple goal might turn into a major project by itself. And sometimes what is expected to be a large project turns out to be trivial.

An experimental feature put into Cognoscenti is the ability to create a simple goal, and then, when it looks a little more complicated, put subgoals under it. If it continues to gain complexity, the original goal can be converted to a complete project on its own. Projects can be linked to goals in other projects, as if they were that goal. Status reports can be compiled from goals across multiple projects to make it look like everything is consolidated in one project. Many experiments were done with trying to make it easy for users to convert back and forth between goals and projects.

Document Repository Support

Knowledge workers are often required to use organizational document repositories, and the philosophy behind Cognoscenti is that such repositories are good for organizations in general. The designers of Cognoscenti, however, designed features to help knowledge workers when they are required to use multiple repositories – often different document storage places for different aspects of their lives. For example, a doctor may keep patient data in the clinic system, but at the same time be part of a local university research organization which has thought-leading documentation in a different location, while the community outreach program where they volunteer has yet another.

One of the challenges with secure document repositories is letting coworkers who are involved in a project access the same information. For example, a doctor accepts a job to verify the results of a research paper located in a secure repository, but would like their recent intern to make the first pass. There are two standard ways to do this: download the file and email it to the intern, or print it out and give the hard copy to the intern. Both of these are unacceptable, because if the document is updated in the original repository, the intern has no access to the updated version. It is equally unacceptable for the doctor to give out their username and password for the intern to access the repository directly.

Cognoscenti resolves this by using a synchronized copy. The doctor accesses the repository through Cognoscenti, which places a copy of the document into the project. Now the doctor can give the intern access to the copy. But the copy is synchronized with the original – optionally in both directions – so that changes in one can easily be refreshed to the other.

As you might easily imagine, this is technically quite easy to do, but making it usable for users – specifically, making it easier than emailing a copy of the document – requires some careful thinking about the user interface.
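The mechanics might be sketched like this, assuming a hypothetical Repository interface for the original store; as noted, the hard part is the user experience, not the copying:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    interface Repository {
        Path fetch(String docId) throws IOException;               // download the original
        void push(String docId, Path updated) throws IOException;  // write changes back
    }

    class SynchronizedCopy {
        private final Repository origin;
        private final String docId;
        private final Path localCopy;  // lives inside the project folder

        SynchronizedCopy(Repository origin, String docId, Path localCopy) {
            this.origin = origin;
            this.docId = docId;
            this.localCopy = localCopy;
        }

        // Pull the latest original over the project copy.
        void refreshFromOrigin() throws IOException {
            Files.copy(origin.fetch(docId), localCopy, StandardCopyOption.REPLACE_EXISTING);
        }

        // Optionally, push local edits back to the original repository.
        void pushToOrigin() throws IOException {
            origin.push(docId, localCopy);
        }
    }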

Federated Case Support

Just as knowledge workers are required to use more than one document repository, it is also the case that Cognoscenti will not be the only case management system used by the pool of people who need to contribute to a given case. Therefore, Cognoscenti is designed to live in a world where it presents views of a case to others, and where other case systems will have synchronized copies of those views. There is an explicit upstream/downstream relationship between cases which can be either one-way or two-way. Again, this is not technically difficult, but the real research is in making what ends up being a complicated collection of capabilities understandable enough, and easy enough, that users will actually use them.

Project Exchange Protocol

In order to implement federated case support across different vendors or different types of case systems, the protocol for exchange of information needs to be defined independently of any single implementation. The Workflow Management Coalition has been working on interoperability of collaborative systems for more than 20 years, and this effort is related to the work of the WfMC. Cognoscenti represents a reference implementation of a standard protocol.

Location

The open source project, the source, executables and available documentation can be accessed from the following URL: https://code.google.com/p/cognoscenti/

An online video demo using Cognoscenti from the BPMNext conference is available at https://www.youtube.com/watch?v=x-oAAjM6Wh0 .

Plans and Directions

The goal in presenting this demo at EDOC 2014 is not to show numerous accomplishments, but rather to introduce a platform that may be useful for other experimentation in usability. The system is freely available to anyone, and runs in a non-proprietary open environment.

It is the desire of the author that Cognoscenti can be helpful in resolving some of the stickier issues around usability of knowledge work environments, by making a full collaborative adaptive case management system available for free for use in clinical trials involving real knowledge workers.

Acknowledgment

Many thanks to Fujitsu for supporting this work on the open source project. Significant contributions to the development of Cognoscenti came from Shamim Quader, Sameer Pradhan, Kumar Raja, Jim Farris, Sandia Yang, CY Chen, Rajiv Onat, Neal Wang, Dennis Tam, Shikha Srivastava, Anamika Chaudhari, Ajay Kakkar, Rajeev Rastogi, and many more people at Fujitsu around the world.


by kswenson at June 16, 2014 10:06 AM

June 12, 2014

Thomas Allweyer: Everything about the enterprise architecture notation ArchiMate

ArchiMate is a standardized notation for modeling enterprise architectures that has been gaining adoption recently. A whole series of modeling tool vendors have already added ArchiMate to their method portfolios. The official specification can be downloaded from the website of The Open Group, a consortium for IT standards. But as with most standards, the official specification document is not necessarily the best basis for learning the notation.

The English-language book “Mastering ArchiMate” provides a well-founded introduction to the various notation elements and the underlying metamodel of ArchiMate. Beyond that, it also describes in great detail how the individual constructs can be used in practice to model various aspects and architectures that occur frequently in the real world.

ArchiMate often offers many different ways to represent a given situation, which is why some experience and consistent modeling conventions are needed to develop usable models. In addition, audience-specific model views can be used, which draw on only a subset of the full range of models. Here the experience of the author Gerben Wierda pays off: as lead enterprise architect at a financial services provider, he has created very extensive ArchiMate models.

The various patterns presented are very instructive. They include, among others, the modeling of desktop applications, two- and three-tier architectures, software-as-a-service scenarios, high-availability database clusters, and many more. Even if one may ask whether, and for what purpose, one would actually model the respective system at the presented level of detail in practice, one still learns a lot about ArchiMate. Some discussions may seem somewhat academic. After all, one can argue at length about whether an Excel file with macros is a data object or an application, and whether Excel should then be regarded as an application or as part of the infrastructure. On the other hand, such considerations force you to think carefully about how the IT landscape is structured.

In many places Wierda also discusses fundamental questions that arise in architecture modeling, such as the difference between business processes and business functions. A separate chapter is devoted to the relationship between business process modeling with BPMN and EA modeling with ArchiMate. Since an enterprise architecture also encompasses business processes, functions, roles, etc., it makes sense to link these contents with the corresponding constructs in BPMN models.

Finally, the advantages and disadvantages of ArchiMate are discussed and suggestions for improvement are developed. Despite some weaknesses, the author judges ArchiMate to be very usable in practice. Occasionally, however, he sees reason to interpret some of ArchiMate's rules somewhat loosely in order to produce an easily understandable diagram.

The introductory chapter offers an accessible entry into the basics of ArchiMate. A large part of the book, however, is not light fare and is recommended more for ArchiMate experts. The author is well aware of this, which is why he offers a discounted short version on the book's website, covering only roughly the first half of the book. An excerpt containing the introductory chapter can even be requested free of charge.

For beginners in particular, many of the example models are likely to appear daunting at first glance because of their high level of detail. Wierda mentions that he works with model landscapes comprising several tens of thousands of elements. These models also fulfill the role of a configuration management database in which all IT-related elements of the company are managed. Whether it is really always sensible to manage all these details in graphical models may be doubted, especially since the various model views presented by Wierda apparently require maintaining partly redundant model information.

For practical use it is probably more sensible to create less detailed graphical models and to manage the individual IT assets in an ordinary configuration management database. That in no way diminishes the great achievement Wierda has delivered with his well-founded and comprehensive analysis and explanation of the ArchiMate standard. It is just that a large part of these remarks is better suited to experts.


Gerben Wierda:
Mastering ArchiMate
Edition II
The book at amazon.
Website for the book

by Thomas Allweyer at June 12, 2014 02:15 PM

BPM-Guide.de: New Whitepaper: The Zero-Code BPM Myth

Yay! We had 400+ registrations for our webinar with Sandy Kemsley, covering the “Zero-Code BPM Myth” and comparing that to a developer-friendly BPM approach like camunda BPM delivers. In case you missed it, there is a recording: http://camunda.com/landing/webinar-developer-friendly-bpm/ And there is also a whitepaper! Sandy wrote it and I think it is a very fine [...]

by Jakob Freund at June 12, 2014 12:52 AM

June 11, 2014

Sandy Kemsley: Developer-Friendly BPM

I gave a webinar today sponsored by camunda on developer-friendly BPM, discussing the myth of zero-code BPM. I covered the different paradigms of BPM development, that is, fully model-driven versus...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 11, 2014 09:37 PM

June 10, 2014

Sandy Kemsley: Becoming A Digital Enterprise: McKinsey At PegaWORLD

The day 2 keynotes at PegaWORLD 2014 wrapped up with Vik Sohoni of McKinsey, who talked about becoming a digital enterprise, and the seven habits that they observe in successful digital enterprises:...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 10, 2014 03:08 PM

Sandy Kemsley: PegaWORLD: Service Excellence At BNY Mellon

Jeffrey Kuhn, EVP of client service delivery at BNY Mellon, spoke in the morning keynote at PegaWORLD about the journey over the 230-year history of the bank towards improved customer focus....

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 10, 2014 01:38 PM

June 09, 2014

Sandy Kemsley: PegaWORLD Breakout: The Process Of Everything

Setrag Khoshafian and Bruce Williams of Pega led a breakout session discussing the crossover between the internet of things (IoT) — also known as the internet of everything (IoE) or the...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 04:25 PM

Sandy Kemsley: A Vision Of Business Transformation At PegaWORLD

The second half of today’s keynote started with a customer panel of C-level executives: Bruce Mitchell, CTO at Lloyds Banking Group, Jessica Kral, CIO for Medicare & Retirement at...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 03:07 PM

Sandy Kemsley: PegaWORLD Gets Big

My attendance at PegaWORLD has been spotty the past few years because of conflicts with other conferences during June, so it was a bit of a surprise to show up in DC this week to a crowd of more than...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 01:25 PM

Keith Swenson: XWand Cloud for Financial Data Exchange

Fujitsu is announcing today XWand Cloud, a new online server for financial information exchange. What is it? Why is it important?

XBRL Format

The offering is centered around the eXtensible Business Reporting Language (XBRL). This is a comprehensive format for exchanging financial information between parties. Each XBRL document is a financial report of some type. Think of it as a spreadsheet full of values. The usual problem with a spreadsheet is that while a number (e.g. $2,534,210) is completely clear, what exactly that number represents is not. Specifying which kinds of values are included in the number, what time period it covers, the geography, or which parts of the company it applies to has to come separately from the number itself.

That is where the taxonomy comes in. Part of an XBRL “filing” is a set of associated documents that define the terms both descriptively and mathematically. The root taxonomy is produced by the regulatory agency, so all companies have to comply with those meanings, but industries and individual companies can extend the taxonomies in order to report things that are specific to their business, not just the things common to every business.

Furthermore, each value reported is associated clearly with a point in time, or a time period, and potentially a specific region. The result is a complete definition that can be automatically read and understood by the receiver.
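
To make this concrete, here is a minimal Java sketch of the information an XBRL fact carries along with the bare number. The record shape and the entity name are invented for illustration; only the concept name follows the US GAAP taxonomy naming convention.

import java.math.BigDecimal;
import java.time.LocalDate;

// A bare spreadsheet cell holds only "2534210"; an XBRL fact also says which
// taxonomy concept, reporting entity, period, and unit the number belongs to.
public class XbrlFactSketch {

    record Fact(String concept, BigDecimal value, String unit,
                String entity, LocalDate periodStart, LocalDate periodEnd) {}

    public static void main(String[] args) {
        Fact revenue = new Fact(
                "us-gaap:Revenues",          // concept defined by the regulator's taxonomy
                new BigDecimal("2534210"),   // the value itself
                "USD",                       // unit of measure
                "Example Corp",              // reporting entity (hypothetical)
                LocalDate.of(2013, 1, 1),    // period start
                LocalDate.of(2013, 12, 31)); // period end
        System.out.println(revenue);
    }
}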

The XBRL format has revolutionized the US Securities and Exchange Commission, which adopted XBRL a few years ago; today all 15,000 publicly traded companies must report their figures to the SEC in XBRL format. The SEC automatically receives this information and, because the semantic definition of the figures is available, extracts the figures it needs and can compare companies to each other on an apples-to-apples basis. At the same time, this has opened up a large potential for analytics across industry sectors, because these reports are freely available from the SEC to analysts, who can easily consume the reports and use the values. XBRL greatly improves the efficiency of monitoring public companies.

XBRL is being widely adopted not only in the US but in Europe as well, where the European Union’s European Banking Authority (EBA) requires reports from the national central banks to be delivered in XBRL format, and EIOPA, the European insurance industry regulator, has stated that insurance companies, under the Solvency II regulation, will have to submit their first interim reports in XBRL, most probably from the beginning of 2016.

XWand and Fujitsu Cloud

Fujitsu has been a leader in the XBRL space, playing a key role in the creation of the standard. Fujitsu’s product, Interstage XWand, is recognized as a leading product in the marketplace.

The other key ingredient is Fujitsu’s Trusted Public S5 cloud, a high-availability, high-security cloud hosting environment that is able to handle this kind of application.

XWand Cloud

XWand Cloud brings these together in a free offering that spotlights both products. When submitting a financial report to the SEC, the one thing financial filers want to know is whether the entire report is valid according to a set of standard validation rules. XWand can do this.

XWC_Screen

Marked as a “beta”, the offering is currently modest. Users can register for a free account in order to easily and quickly upload their reports to XWand Cloud and get a thorough validation check. The resulting report pinpoints where the problems in the document are, if any. If the filing is proper and complete, the report will show a clean bill of health.

Fujitsu is not saying anything about where this is ultimately headed, but the cloud-based platform opens possibilities for collaboration around the reports, possibly reviews and approvals, as well as selective distribution of the information to third parties. There is a growing community of XBRL suppliers, and XWand Cloud could be a meeting place where such parties can offer specialized services.

Cross Organization Integration

I have been saying for a while that the correct way to integrate ICT systems from different companies is to exchange documents that are fully self-describing in the way that XBRL is. Because an XBRL document comes with the precise semantics described by the taxonomy, the standard service-oriented-architecture problem with “API versions” is avoided. The two parties must agree on the standard taxonomy (set by the regulatory agency or possibly an industry authority), but they then map these values into their own independent systems. What we are seeing is the beginning of true integration of financial systems across organizations.

I believe that use of XBRL will expand beyond the financial field. The same technology could be used to describe just about anything, for example giving product information to an e-commerce store, defining product materials for outsourcing, or describing services that might be provided and exchanged. This approach will allow looser, and more complete, integration across many fields.

Who knows where it will go? XWand Cloud is a small step in this direction, bringing Fujitsu’s Interstage XWand capability together with Fujitsu’s cloud offering.


by kswenson at June 09, 2014 12:25 PM

Sandy Kemsley: Webinar On Developer-Friendly BPM And The Zero-Code Myth

I’m giving a webinar on Wednesday this week (June 11) on developer-friendly BPM and the myth of zero-code BPM when it comes to many complex integrated core business processes. It’s sponsored by...

[Content summary only, click through for full article and links]

by Sandy Kemsley at June 09, 2014 11:12 AM

June 06, 2014

Keith Swenson: SSL Browser Nonsense

Thank you WordPress! WordPress has turned on HTTPS for all blogs, and my blog is hosted at WordPress. They deserve recognition for being proactive in the fight for privacy. But we need more from the browsers.

Let me ask you a question.  Did you access this blog at HTTP://social-biz.org?   Or did you use HTTPS://social-biz.org?  The second one, https, is more secure, more private.  Click on this link and try it.

You will probably get a threatening warning.  Oh no!  This might not be the site you were looking for.  But with HTTP you are equally unsure about the site.  It still might not be the site.  Did you get a warning with HTTP?  No you didn’t.

The reason you get this is that I am too cheap to go buy a certificate for this site.  My blog is available for free, and I don’t make any money from it, so it is pretty hard for me to justify spending to get a certificate for this purpose.  At the same time, privacy experts are suggesting that all internet traffic should be HTTPS.  The warning is unnecessary, especially given that with HTTP you don’t get a warning either.  Since HTTPS without a certificate is no less secure than plain HTTP, there is no reason for the warning in this situation.  Here is what those warnings look like today.

On Firefox, the warning looks like this:

https_security_moz2

 

This scary warning still has a “Get me out of here!” button.  To get by this, you have to first open the heading that says “I Understand the Risks”, and only at that point is the button to add an exception exposed.  Click that, and Mozilla remembers it!  On future visits to this site you will not get the scary warning.  Kudos to Mozilla, as this is a significant advance in usability.  At least you get the scary warning only once.   After clicking through, the address link looks like this:

https_icon_moz

It looks mostly like a normal HTTP site, and the warning symbol is suitable.  If you are accessing a branded site, you will not see the site icon, which is reasonable since you don’t have assurance that the site is genuine, though one might argue that you didn’t have that assurance with HTTP either, so why show it there?  If it had been fully signed, it would look like this:

https_icon_moz2

On Chrome it looks like this:

https_security_chrome

This is direct and to the point.  This is better than Mozilla because the button to proceed is immediately available.  After pressing this, like Mozilla, Chrome remembers the fact, and you are not bugged next time you come here.  After clicking through, you get a display on the address bar like one of these:

https_icon_chrome  https_security_chrome2

I feel this is pretty suitable.  You should not have any assurance that this is an authentic site, and it should look mostly like a regular HTTP site; some indication is OK.  Regular HTTP should also be shown with a red line through it, since you have no assurance in that case that the site is authentic.  As I said earlier, it is inconsistent to make a big deal out of not certifying the site with HTTPS when HTTP is equally uncertified.  Here is what Chrome looks like for a fully signed site:

https_security_chrome3

On Internet Explorer it looks like this:

https_security_ie

The scary recommendation is to “close and do not continue.”  As I have pointed out elsewhere, there is actually no greater chance that this is a rogue site than if you were using HTTP, which has no certificate at all.  Therefore this recommendation is unwarranted.  With IE you will get this scary warning every time you visit the site.  It does not remember that you clicked through and approved this once.  What is perhaps even more concerning is the address bar:

https_icon_ie

This looks completely like a regular HTTP site, and that is good.   When you access a fully trusted site, it looks very similar, only the color is green!  It does show a lock icon, and you can access more information about the certificate.  The only problem is the warning page coming up every single time.

https_icon_ie2

What should the behavior be?

Quite simple: there should not be any warning at all when using an uncertified connection. It should look and act essentially exactly the same as a regular HTTP link, although some visual indicator in the address bar is acceptable.

The lock symbol, or the special site specific display, should be displayed only when a correct, signed certificate is presented and the browser can then indicate that the site is authentic.

If the browser wants to go the extra mile in keeping people safe, it should remember whether a site used a certificate last time.  If so, any link to the site using HTTP should be automatically converted to HTTPS if you click on it.  Then, if a site that you know should have a certificate fails to provide a correct one, then, and only then, display the scary warning.  It should say:

This site normally has a signed certificate, but this time something is wrong with the certificate, and this might be an impostor site.  Are you sure you want to proceed?

That is it … display the warning ONLY if you have reason to believe that the site intended to have a proper certificate in the first place.
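
As a rough illustration, here is a minimal Java sketch of that proposed policy. The class and method names are hypothetical, and a real browser would of course key its history on more than the host name.

import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed rule: warn only when a site that previously
// presented a valid certificate now fails to do so.
public class CertWarningPolicy {

    private final Map<String, Boolean> hadValidCertBefore = new HashMap<>();

    // Returns true if the browser should show the scary warning.
    public boolean shouldWarn(String host, boolean presentsCert, boolean certIsValid) {
        boolean expected = hadValidCertBefore.getOrDefault(host, false);
        if (presentsCert && certIsValid) {
            hadValidCertBefore.put(host, true);  // raise expectations for next time
        }
        // Plain HTTP, or self-signed HTTPS on a site with no history: no warning,
        // just a neutral address-bar indicator. Warn only on a broken expectation.
        return expected && !(presentsCert && certIsValid);
    }

    // The companion rule: upgrade HTTP links for sites known to support HTTPS.
    public String rewriteUrl(String url, String host) {
        if (url.startsWith("http://") && hadValidCertBefore.getOrDefault(host, false)) {
            return "https://" + url.substring("http://".length());
        }
        return url;
    }
}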

 


by kswenson at June 06, 2014 12:47 PM

BPM+ (Martin Wieschollek): Toolmarktmonitor 2014

BPM&O has published a market study on BPM tools. The study presents and compares 22 tools from the D-A-CH region. The software products covered focus on the design and analysis of business processes. Anyone who wants an up-to-date overview, or who is thinking about introducing a new tool, will find useful information here. http://bit.ly/Toolmarktmonitor

by Martin Wieschollek at June 06, 2014 08:35 AM

June 04, 2014

BPM+ (Martin Wieschollek): The Process Profile

Not only when building a process map, but whenever you talk about processes, it is very helpful to describe what “the process” actually is. One way to do this is the SIPOC diagram from Six Sigma. A SIPOC diagram describes all the essential components of a process: S – Supplier, I – Inputs, P – [...]

by Martin Wieschollek at June 04, 2014 12:35 PM

May 28, 2014

Sandy Kemsley: June BPM Conferences

After a month at home, I’m hitting the road for a few vendor events. First, a couple where I’m attending, but not presenting: IBM Content 2014 in Toronto (so technically not hitting much of the road...

[Content summary only, click through for full article and links]

by Sandy Kemsley at May 28, 2014 11:24 AM

May 26, 2014

BPinPM.net: First results in pushing ‘Digital Age BPM’ ahead!

Good news: We found fearless BPM experts!

Two weeks ago, we met for the first session of our workshop series. After getting to know each other and understanding the process management systems in the different organizations, we started to familiarize ourselves with the possibilities of web 2.0 and social media in the context of BPM.

After dreaming and dreading what the future might hold for us, we began to design useful approaches for embedding social media in process management environments. The initial skepticism faded after we generated many applicable ideas – but it was not washed away. Our main goal crystallized in one word: benefit. Whatever we are going to design, the premise is the advantage the users gain by applying it.

Through clustering, we defined six core focuses: ‘Participation’, ‘Training and Communication’, ‘Feedback and Exchange’, ‘Search Engine’, ‘Process Transparency’, and ‘Mobile Access’.

To meet our premise of ‘Benefit’ we combined the concrete ideas of ‘Digital Age’ applications within the core focuses with personas. Personas are fictional characters that stand for different stakeholder groups by representing their average character traits and capabilities.

Hence, the question is: “Which ‘Digital Age’ application is beneficial to whom?”

The task for our next workshop session is to find out which web 2.0 applications and social media functions might contribute a real advantage in the context of BPM. We all agreed to do some ‘homework’ before the next workshop – by dint of these evaluations we hope to get a detailed understanding of the perceived benefit of our concrete ‘Digital Age’ applications.

We are curious what the results of our evaluation will reveal, we are excitedly preparing the next workshop, and we are looking forward to pushing “Digital Age BPM” further ahead!

Best regards,
Mirko

 

by Mirko Kloppenburg at May 26, 2014 08:57 PM

BPM-Guide.de: Webinar: Developer-Friendly BPM

“Buy now! BPM without programming!” This is how many BPM vendors lure their customers into the ‘zero-code BPM trap’. But as soon as you try to create a solution that goes beyond the vacation workflow from the vendor presentation, the suffering begins. In this free live webinar, the independent BPM industry expert Sandy Kemsley challenges [...]

by Jakob Freund at May 26, 2014 01:08 AM

May 23, 2014

Bruce Silver: Visualizing Responsive Processes

Merging BPMN and CMMN standards in OMG is, for the moment at least, a dead issue.  The question remains how best to visually represent logic formerly known as case management, which I will henceforth refer to as responsive processes.  Responsive processes are driven forward by events (including ad-hoc user action) and conditions, rather than by sequence flow.  In a responsive process, an activity is enabled to start when its preconditions are satisfied.

I believe that a BPMN 2.0 process engine that can execute event subprocesses, including those with Escalation and Conditional triggers, can implement many if not most features of a responsive process, as IBM’s BPM 8.5.5 amply demonstrates.  To be more precise, it should be able to implement a responsive process in which all activities, including those that CMMN calls discretionary, are specified at design time.  CMMN goes beyond this, however, in allowing the design, or “plan,” to be modified arbitrarily at runtime on an instance by instance basis.  We cannot assume that a BPMN 2.0 engine can handle this, but at this point I am not sure how critical this feature is.  It may turn out to be critical, but for now let’s call it responsive-plus.

Whether or not you agree with me that BPMN as a “language” can handle responsive processes, you probably agree that as a notation it fails to visually communicate the responsive process logic.  IBM’s Case Designer is a little better  at that, and CMMN is a little better still. But I think all of these fall well short of the mark.  So I have been thinking about what kind of notation would achieve that goal.

As I have said previously, responsive process/case logic is inherently much more difficult to represent in a printed diagram than flowchart-based logic.  Scoped event logic tends toward some kind of state diagram, but I think it’s safe to say that business users (and most business analysts) would have a hard time with state diagrams and find them unacceptable.  There is one form of diagram that could possibly fit the bill if sufficiently enhanced, and that is a Gantt chart, such as you might find in Microsoft Project.  In a Gantt chart, activities are enabled by preconditions called dependencies.  A Gantt chart has a very primitive notion of dependency, which is limited to completion (or possibly start) of another activity.  It has no notion of end state, for example – an activity completing successfully versus unsuccessfully.

A Gantt chart takes the form of a table – an indented list of activities, each row specifying the activity’s start and end (both anticipated and actual).  The indents (usually reinforced by a hierarchical numbering scheme) provide aggregation of activities for summary reporting, but if we give these summary activities their own entry and exit conditions they become subprocesses, or what CMMN calls stages.  Anticipated start – actually, enablement – is based on the dependency logic, and actual start is based on when work actually starts.  (CMMN has this distinction in the runtime state model, but BPMN unfortunately does not.)  Each row in the Gantt chart also contributes a bar in a calendar view of the table.  A vertical line slicing through the calendar view separates past from future.  Things to the left of the line are actuals, things to the right are anticipated.

Gantt charts provide something that most BPM users instinctively desire – an estimate of when the process will complete, based on its current state.   (BPM Suites are happy to provide this in the runtime if you purchase the predictive analytics module, but Gantt makes it part of the model itself.)  BPMN has no standard property to record the mean activity duration, although many modeling tools provide this to support simulation.  Gantt charts require that property.

Gantt charts also have the responsive-plus feature of being modifiable at runtime, including addition of new activities and dependencies.  That sounds great!  But they cheat, because a normal Gantt chart describes only a single instance of the process.  It does not pretend to describe the general case, including alternative paths to the end state.  In fact, the whole idea of exception end states – for the process as a whole or for individual activities – is absent.

Economy and expressiveness are key to visually communicating responsive process logic.  We want to pack the most semantic value into the simplest diagram possible.  The fewer distinct shapes and icons the better. Connectors are extremely valuable in communicating the dependency logic.  Not all Gantt charts have them, but MS Project uses them quite effectively.  An arrow into the left end of a bar indicates a precondition; an arrow into the right end of a bar indicates a completion condition.  In MS Project, the precondition is always either completion (arrow out of the right end) or actual start (arrow out of the left end) of an activity.  We’d like to extend this to event triggers and data conditions as well.  CMMN supports 4 basic event types: state change in an activity (such as completion), state change in an information item, timer (relative to some other selected trigger), and ad-hoc user action.  I think that’s about right, but we probably need to add BPMN Message, Error, and possibly Signal, and maybe distinguish interrupting and non-interrupting.  As with sequence flows in BPMN, the label of a connector can be used to suggest the data condition.  (The full data condition could be stuffed into the spreadsheet part of the Gantt, to the left of the chart.)  We should also use line styles on the connectors and border styles on events to denote different triggering semantics.  If done right, we could eliminate CMMN’s diamond sentry shapes, which add graphical complexity but little incremental semantic value.
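
To suggest how little machinery this dependency model needs, here is a minimal Java sketch. Every name in it is invented for illustration, and it covers only the precondition kinds discussed above: end-state dependencies, data conditions, and ad-hoc user action.

import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Activities have no sequence flow; each becomes enabled when all of its
// preconditions hold, or when a user starts it at their discretion.
public class ResponsiveProcessSketch {

    enum EndState { NONE, COMPLETED_OK, COMPLETED_WITH_ERROR }

    static class Activity {
        final String name;
        EndState endState = EndState.NONE;
        boolean userRequested = false;  // CMMN-style discretionary start
        final List<BooleanSupplier> preconditions = new ArrayList<>();

        Activity(String name) { this.name = name; }

        // A Gantt-style dependency, extended with the end state that a plain
        // Gantt chart cannot express ("enabled only if Review completed OK").
        void dependsOn(Activity other, EndState required) {
            preconditions.add(() -> other.endState == required);
        }

        void dependsOnData(BooleanSupplier dataCondition) {
            preconditions.add(dataCondition);
        }

        boolean isEnabled() {
            return userRequested
                    || preconditions.stream().allMatch(BooleanSupplier::getAsBoolean);
        }
    }

    public static void main(String[] args) {
        Activity review = new Activity("Review claim");
        Activity pay = new Activity("Pay claim");
        boolean[] claimApproved = {false};

        pay.dependsOn(review, EndState.COMPLETED_OK);  // completion dependency
        pay.dependsOnData(() -> claimApproved[0]);     // data condition

        review.endState = EndState.COMPLETED_OK;
        claimApproved[0] = true;
        System.out.println(pay.name + " enabled: " + pay.isEnabled());  // true
    }
}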

Like CMMN, our responsive process model needs an information model that can be referenced in both data conditions and in state change events.  BPMN 2.0 doesn’t really have this, and without it, Conditional events are kind of useless because the only data visible to them are process variables and properties.  The information model should include both data and documents, so changes in content value, metadata, and lifecycle state can all be recognized as events.  CMMN already has this, but it does not reveal the logic clearly in the printed diagram.

In a followup post, I will put up some examples of what this could look like.

 

 

The post Visualizing Responsive Processes appeared first on Business Process Watch.

by bruce at May 23, 2014 09:01 PM

BPM-Guide.de: camunda BPM Online Training available

Get a camunda BPM training when and where you want. The new self-paced online course is now available. The course provides participants with the head start needed for creating powerful process applications. It includes more than 6 hours of easy-to-follow training videos, hands-on exercises and lab tutorials as well as weekly 2-hour live sessions (Monday 8am [...]

by Jakob Freund at May 23, 2014 07:29 PM

Drools & JBPM: Running drools-wb with GWT's SuperDevMode

Like most, I like surprises!

Some surprises aren't always welcome though; and one such surprise bit me yesterday.

As a good citizen I upgraded my installation of Google Chrome when advised a new version was available. With hindsight I don't know why I so gleefully went along with the upgrade (after all, I'd recently removed the latest version from my mobile telephone as it didn't "feel" as good... anyway I digress).

The surprise was that Chrome 35 stops supporting GWT's "DevMode" (something I'd long been used to with FireFox), and from GWT 2.6.0 support for "DevMode" is coming to an end ("GWT Development Mode will no longer be available for Chrome sometime in 2014, so we improved alternate ways of debugging. There are improvements to Super Dev Mode, asserts, console logging, and error messages.")

Options were to find an installation of Chrome 34, or to switch to SuperDevMode (which seems inevitable anyway). Electing for the latter, I present my findings on how to configure your webapp and IDE, and how to run (or debug) it in "SuperDevMode".

These instructions are for IDEA (NetBeans will probably follow a similar route).

(1) Create a regular GWT Launcher:


(2) Create a new GWT Launcher for SuperDevMode:



(3) Add the following to your webapp's gwt.xml (module) file:

  <!-- SuperDevMode -->
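  <!-- xsiframe: the cross-site iframe linker required by SuperDevMode;
       devModeRedirectEnabled: lets the "Dev Mode On" bookmarklet redirect to the code server;
       compiler.useSourceMaps: lets the browser debugger map compiled JS back to Java sources;
       xsiframe.failIfScriptTag: tolerate <script> tags in the module file -->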
  <add-linker name="xsiframe"/>
  <set-configuration-property name="devModeRedirectEnabled" value="true"/>
  <set-property name="compiler.useSourceMaps" value="true"/>
  <set-configuration-property name='xsiframe.failIfScriptTag' value='false'/>

(4) Launch your regular webapp (the "classic" GWT Launcher):

... <tick>, <tock>, <tick>, <tock> while it compiles and launches...


(5) Launch the SuperDevMode code server (the "SuperDevMode" GWT Launcher):

... <tick>, <tock>, <tick>, <tock> while it compiles and launches...


​(6) Drag the "Dev Mode On" and "Dev Mode Off" buttons to your bookmark bar (as advised) - but we don't normally read these sort of things, right! ;)

(7) Go back to the webapp's browser tab

(8) Click on the "Dev Mode On" bookmark you created in step (6)



(9) Click on "compile"



(10) Delete the "codesvr" part of the URL and press enter (dismiss the popups that appear; which ones depends on what browser your GWT module targets; e.g. I had to dismiss a popup about using Chrome, but my GWT module targets FireFox).



​(11) Done!



(12) What's that? You want to debug your application?!?

This isn't too bad. Just launch both your "classic" GWT Launcher in debug mode and the "SuperDevMode" GWT Launcher in normal mode.

Server-side code needs break-points in IDEA, and client-side break-points need to be added using Chrome's Developer Tools (you'll need to make sure "sourceMaps" are enabled, but this appears to be the default in Chrome 35).

Accessing Chrome's debugger:



Debugging:



Simple!

It takes a bit of getting used to debugging client-side stuff in Chrome, and server-side stuff in IDEA, but it's not terrible (although don't expect to be able to introspect everything in Chrome like you used to in IDEA).

I hope this helps (probably more so as others find "DevMode" stops working for them... and since we will move to GWT 2.6.1 --- for IE10 support --- it is coming sooner rather than later).

Have fun!

Mike

by Michael Anstis (noreply@blogger.com) at May 23, 2014 09:47 AM

May 22, 2014

Drools & JBPM: London (May 26th) Drools & jBPM community contributor meeting

London, Chiswick, May 26th to May 30th

During next week a large percentage of the Drools team, some of the jBPM team and some community members will be meeting in London (Chiswick). There won’t be any presentations; we’ll just be in a room hacking, designing, exchanging ideas and planning. This is open to community members who wish to contribute towards Drools or jBPM and want help with those contributions. This also includes people working on open source or academic projects that utilise Drools or jBPM. Email me if you want to attend; our location may vary (but stay within Chiswick) each day.

We will not be able to make the daytime available to people looking for general Drools or jBPM guidance (unless you want to buy us all lunch). But we will be organising evening things (like bowling) and could make Wednesday or Thursday evening open to people wanting general chats and advice. Email me if you’re interested, and after discussing with the team, I’ll let you know.

Those currently attending:
Mark Proctor (mon-fri) Group architect
Edson Tirelli (mon-fri) Drools backend, and project lead
Mario Fusco (mon-fri) Drools backend
Davide Sottara (wed-fri) Drools backend
Alex Porcelli (mon-fri) Drools UI
Michael Anstis (thu-fri) Drools UI
Kris Verlaenen (wed-thu) jBPM backend, and project lead
Mauricio Salatino (mon-fri) jBPM tasks and general UI

by Mark Proctor (noreply@blogger.com) at May 22, 2014 11:28 AM

May 20, 2014

Bruce Silver: Method and Style Wizard Generates BPMN Automatically

itp commerce has just released a new BPMN Method and Style wizard that automatically creates well-structured BPMN from a simple interview.  In my BPMN training, the “Method” is the hardest part because it asks students to describe the process top-down and abstractly, as opposed to the bottom-up “what came next?” format of the SME fact-finding.  It’s especially hard when you’re first learning the shapes and symbols, and have all those label-matching style rules to keep in mind, as well.  Process Modeler for Visio now lets a wizard do all the work.  Modelers just need to answer questions about activities, their end states, and what comes after what.  The wizard generates hierarchical BPMN automatically.  Great job, guys!

I made a 13-minute video that explains the issue and demonstrates the tool in action.  Check it out here.

If the “good BPMN” idea is something you’re interested in, there’s still room in my next BPMN Method and Style class, June 3-5, which includes the bpmnPRO gamified eLearning app and post-class certification.  More info on that here.

The post Method and Style Wizard Generates BPMN Automatically appeared first on Business Process Watch.

by bruce at May 20, 2014 11:22 PM

Thomas Allweyer: Free tool enables modeling in the browser – online and offline

More and more vendors of modeling tools are offering free entry-level versions that make it easy to try out the basic functionality in smaller scenarios. The Bochum-based company GBTEC has now also released a Free Web Edition of its modeling tool BIC Design, which lets you model not only processes in BPMN or EPC, but also value chains, organizational charts and IT landscapes. In addition there are universal diagrams, which give you complete freedom to connect arbitrary elements with each other. Modeling takes place entirely in the browser. Unlike with other tools, however, the models are not stored in the cloud but locally on the user's computer. You can keep modeling even while the computer is offline.

No installation is required. When the Free Web Edition is first opened via the vendor's link, the tool is loaded. The models reside exclusively in the browser's local storage. The next time the link is opened, your models are therefore automatically available again. This also works when you are not connected to the internet. Only a few special functions, such as export or printing, require you to be online. One consequence of this concept, however, is that the models are deleted when the browser cache is cleared. You should therefore back up your work regularly using the export function.

When working with the HTML 5-based web interface, you no longer notice that you are working in a browser. All interface elements behave the way you are used to from local applications. New models, for instance, do not open in separate browser tabs, but within the integrated modeling interface. Most functions can be reached conveniently via context menus on the individual model objects.

The range of functions is remarkable for a free tool. Naturally, model hierarchies can be built, and models of different types can be included. For example, you can attach an organizational chart to the pool of a BPMN model. The resulting model hierarchy, together with the objects it contains and their various attributes, can be evaluated in the form of process handbooks and Excel reports, for example. The tool recognizes objects with the same name and, when a name is changed, asks whether all objects with that name should be changed as well. However, this only works within a single model, not across different models.

A wide range of formatting options helps in developing visually appealing models. It may seem almost self-evident that models can be enriched with free-form text and arbitrary graphical elements, yet this capability is missing in quite a few other tools, particularly pure BPMN tools. The ability to tilt a complete diagram, or parts of it, by an arbitrary angle is even less common. Diagonal models like the one in the figure above will not be needed that often, but it is quite practical to be able to quickly flip a model by 90 degrees so that it fits better on a page.

Modeling itself works quite intuitively. Only when you want to change the routing of edges do you have to experiment a little until you get the knack of where to grab an edge to achieve the desired result. BPMN modelers will wonder why the desired edge type (sequence flow, message flow, ...) has to be selected for every edge, since in most cases only one edge type is allowed anyway. There are also no further syntax checks. For example, nothing prevents you from drawing a message flow within a pool, although the BPMN specification forbids this.

Anyone who knows the full version of BIC Design or other repository-based modeling platforms will miss the ability to integrate and navigate across views. For example, it is not directly possible to find all processes in which a particular organizational unit is involved. These features are reserved for the full version. Models created with the Free Web Edition can be transferred to the full version.

Link to BIC Design Free Web Edition

by Thomas Allweyer at May 20, 2014 08:21 AM

May 19, 2014

BPM-Guide.de: BPMCon 2014 – agenda complete

What do LVM Versicherung, DVAG, Provinzial NordWest and Wüstenrot have in common? They are all speaking on September 19 at the most beautiful BPM conference of the year! This year's BPMCon takes place in a spectacular building on the banks of the Spree in Berlin. Alongside hands-on reports from practice, the Canadian BPM luminary Sandy Kemsley debunks the “zero-code BPM myth”, and ghostbuster Bernd Rücker defeats [...]

by Jakob Freund at May 19, 2014 11:06 PM

May 15, 2014

Thomas Allweyer: Modeling tools – big differences in total cost of ownership

How high are the costs of introducing and using process modeling tools? According to the newly published BPM&O Toolmarktmonitor, typical projects with up to ten people incur an average of €8,500 per single-user license over five years, i.e. €1,700 per year. This includes the effort for installation, configuration, licenses, maintenance and training. There are big differences between the 22 tool vendors examined: the range runs from €2,000 to €20,000 for the five-year period considered. For larger projects and enterprise licenses these costs can drop considerably.

The study covered tools available in the German-speaking region that are specifically aimed at the design and analysis of processes. Process automation was explicitly out of scope. The focus was on functionality in the areas of modeling, model management, reporting, process controlling, process portals and simulation. A total of 155 individual criteria were surveyed. The study summarizes the information provided by the vendors. In a concrete tool selection you therefore still need to check more closely how particular stated functions are actually implemented in the respective tool.

The tool vendors represented in the German-speaking market are mostly smaller companies with up to 50 employees, most of which have been active in the market for many years. Coverage of the surveyed functionality is quite high in most areas, particularly in reporting and the portal. Bigger differences showed up above all in controlling/monitoring and simulation.

The study gives a fairly good overview of how the tools on the market cover the various functional areas in principle. However, for each category examined it only states, for a few selected functions, how many tools provide them. You do not learn which tools these are. If needed, you have to ask the individual vendors yourself as part of your own tool selection. As an aid, the study presents a procedure for tool selection and introduction. It also contains a short profile of each tool vendor.

Download the study from BPM&O (registration required)

by Thomas Allweyer at May 15, 2014 08:43 AM

May 14, 2014

Bruce Silver: BPMN and CMMN Compared

IBM’s presentation at bpmNEXT of their implementation of case management inside of BPMN (and their subsequent launch of same at Impact) inspired Paul Harmon to start a lively thread on BPTrends on whether BPMN and CMMN should be merged.  To me the answer is an obvious “yes,” but I doubt it will happen anytime soon.  Most of the sentiment on BPTrends is either against or (more often) completely beside the point.  Fred Cummins, a honcho on the OMG committee that oversees both standards, was sneeringly dismissive of the idea.  BPMN, you see, is procedural while CMMN is declarative. There’s no comparison.  Yeah, right.

OK, so let’s look at the CMMN spec.  Here is the one example of a case model in the spec, which I will explain.

claimscase

The outer container, with the tab that looks like a manila folder, is the case file.  All activities in the case are contained within it.  Isn’t that like a pool in BPMN?  No, nothing at all like it!

The octagons, called stages, are fragments of case logic.  You can nest stages inside other stages.  Isn’t that sort of like a subprocess in BPMN?  NO!  Stop saying that.

The rounded rectangles are tasks, and the icon in the upper left signifies the task type.  I know that sounds like BPMN tasks, but I assure you, NOTHING LIKE THEM!

The rounded rectangles with the dashed border are discretionary, meaning things in the diagram that may not be executed in every instance.  Oh, BPMN has nothing like that!

The # markers mean retriggerable tasks.  In BPMN all non-interrupting events are implicitly retriggerable.  So there’s a big difference right there.

The dashed connectors (I think they are supposed to be dotted) represent dependencies.  The white diamond on a shape means an entry condition, and the connector into that diamond means that completion of the task at the other end of the connector is part of the entry condition.  In BPMN, instead of a diamond at the end of a connector, we have the diamond at the start of the connector, which is a solid arrow… so NOTHING AT ALL LIKE THIS!  Well, actually there is a difference, since there could be other parts of the entry condition, such as “a user just decided to do it.”  And you’re right, BPMN sequence flow can’t do that!  But a BPMN Escalation event subprocess can do that.

The double rings that look like BPMN intermediate events are CMMN event listeners.  The two shown here mean “a user just decided to do it.”  Kind of like an Escalation event sub in BPMN.  The black diamonds are exit conditions.  So this diagram means a user could decide to set the milestone Claims processed and close the case, or just close the case.

Here is the same case logic in BPMN.  What???!!

claimsbpmn

The operational semantics are essentially identical. They both include activities initiated ad-hoc by a user and possibly other conditions, sometimes constrained by the current state of the case/process.  Neither one really communicates the event dependency logic clearly in the diagram, although CMMN does a better job:  A BPMN Escalation event could represent ad hoc user action or an explicit throw, and Parallel-Multiple event could represent any event plus condition; CMMN at least tries to suggest the dependency with a connector.  But honestly, representing this type of logic clearly in a printed diagram is really hard!

Actually there is a lot in the CMMN spec to like, and it would be good if BPMN were expanded to include it.  Timer events, for example, are much more usable.  In BPMN, the start of the timer is the start of the activity or process level the event is attached to, and the deadline is a literal value.  In CMMN, the start is some selected event and the deadline is an expression.  Is that something that only “knowledge workers” need, as opposed to the mindless droids that use BPM?  I doubt it.  State changes in any case information – not just “documents” as some would have you believe, but data as well – can trigger case activities, and BPMN should have that also.

Here is the simple truth: There is a mix of procedural and declarative logic in most business processes.   CMMN expresses the declarative logic a bit better than BPMN, but only “hints” at the simplest procedural logic, as you see in the claims example.  As anyone who has been through my BPMN Method and Style book or training knows, the key to communicating process logic in a diagram is labeling, and CMMN fails totally there.  The thing most in need of labeling – the dependency connector – doesn’t even exist in the semantic model!  An entry condition merely has a sourceRef pointer to a task or other precursor object.  No connector element means no name attribute to hold a label.  I looked through the schema; maybe I just missed it…  Also, CMMN for some unexplained reason has NO graphical model at all!  After a false start, BPMN 2.0 eventually came up with a nice solution for that, completely separable from the semantic model, but CMMN didn’t use it (or substitute something else).  I guess model interchange between tools wasn’t a priority there.

The bottom line is that both BPMN and CMMN would benefit by unification.  The separation is purely vendor-driven and counterproductive.

 

The post BPMN and CMMN Compared appeared first on Business Process Watch.

by bruce at May 14, 2014 11:04 PM

May 07, 2014

Drools & JBPM: Drools - Bayesian Belief Network Integration Part 3


This follows on from my earlier Part 2 posting in April.

Things now work end to end, and I have a clean separation between the creation of the JunctionTree with the initialisation of all state, and the state that changes after evidence insertion. This separation ensures that multiple instances of the same bayesian network can be created cheaply.

I'm now working on integrating this into the belief system. One issue is whether to automatically update the bayesian network as soon as the evidence changes. Updating the network is expensive: if you insert three pieces of evidence, you only want it to update once, not three times. So for now I will add a dirty check and allow users to call update explicitly. As best practice I will recommend people separate reasoning over the results of the bayesian network from entering new evidence, so that it becomes clearer when it's efficient to call update.
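
As a minimal sketch of that dirty check, here is a hypothetical wrapper (the wrapper class itself is invented for illustration; BayesEngine, BayesLikelyhood, setLikelyhood and globalUpdate are the types and calls used in the code below):

// Inserting evidence only marks the network dirty; the expensive
// junction-tree propagation runs once, on demand.
public class DirtyCheckingBayesNetwork {
    private final BayesEngine engine;
    private boolean dirty = false;

    public DirtyCheckingBayesNetwork(BayesEngine engine) {
        this.engine = engine;
    }

    public void addEvidence(BayesLikelyhood likelyhood) {
        engine.setLikelyhood(likelyhood);  // cheap: no propagation yet
        dirty = true;
    }

    // Insert three pieces of evidence, then call update(): one propagation, not three.
    public void update() {
        if (dirty) {
            engine.globalUpdate();         // expensive global update
            dirty = false;
        }
    }
}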

For now I'm only dealing with hard evidence. We will be using superiority rules to resolve conflicting evidence for a variable. Any unresolved conflicts will leave a variable marked as "Undecided". Handling of soft or virtual evidence would be nice; this would add a way to resolve conflicting evidence statistically, but for now it is out of scope. There is a paper here on how to do it, if anyone wants to help me :)

I'll be committing this to github in a few days, for now if anyone is interested,  here is the jar in a zip form from dropbox.

--update--
The XMLBIF parser provided by Horacio Antar is now integrated and tested. I'm just working on refactoring Drools for pluggable knowledge types, to fully integrate Bayesian as a new type of knowledge.

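// The snippet below is the body of a JUnit test exercising the classic
// burglary/earthquake alarm network. Assumed context not shown in the post:
// the surrounding test class and its imports; the Bayes types themselves live
// in the in-progress Drools code, so package names may still change.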
Graph<BayesVariable> graph = new BayesNetwork();

GraphNode<BayesVariable> burglaryNode = graph.addNode();
GraphNode<BayesVariable> earthquakeNode = graph.addNode();
GraphNode<BayesVariable> alarmNode = graph.addNode();
GraphNode<BayesVariable> johnCallsNode = graph.addNode();
GraphNode<BayesVariable> maryCallsNode = graph.addNode();

BayesVariable burglary = new BayesVariable<String>("Burglary", burglaryNode.getId(), new String[]{"true", "false"}, new double[][]{{0.001, 0.999}});
BayesVariable earthquake = new BayesVariable<String>("Earthquake", earthquakeNode.getId(), new String[]{"true", "false"}, new double[][]{{0.002, 0.998}});
BayesVariable alarm = new BayesVariable<String>("Alarm", alarmNode.getId(), new String[]{"true", "false"}, new double[][]{{0.95, 0.05}, {0.94, 0.06}, {0.29, 0.71}, {0.001, 0.999}});
BayesVariable johnCalls = new BayesVariable<String>("JohnCalls", johnCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.90, 0.1}, {0.05, 0.95}});
BayesVariable maryCalls = new BayesVariable<String>("MaryCalls", maryCallsNode.getId(), new String[]{"true", "false"}, new double[][]{{0.7, 0.3}, {0.01, 0.99}});

BayesVariableState burglaryState;
BayesVariableState earthquakeState;
BayesVariableState alarmState;
BayesVariableState johnCallsState;
BayesVariableState maryCallsState;

JunctionTreeNode jtNode1;
JunctionTreeNode jtNode2;
JunctionTreeNode jtNode3;

JunctionTree jTree;

BayesEngine engine;

@Before
public void setUp() {
    connectParentToChildren(burglaryNode, alarmNode);
    connectParentToChildren(earthquakeNode, alarmNode);
    connectParentToChildren(alarmNode, johnCallsNode, maryCallsNode);

    burglaryNode.setContent(burglary);
    earthquakeNode.setContent(earthquake);
    alarmNode.setContent(alarm);
    johnCallsNode.setContent(johnCalls);
    maryCallsNode.setContent(maryCalls);

    JunctionTreeBuilder jtBuilder = new JunctionTreeBuilder(graph);
    jTree = jtBuilder.build();
    jTree.initialize();

    jtNode1 = jTree.getRoot();
    jtNode2 = jtNode1.getChildren().get(0).getChild();
    jtNode3 = jtNode1.getChildren().get(1).getChild();

    engine = new BayesEngine(jTree);

    burglaryState = engine.getVarStates()[burglary.getId()];
    earthquakeState = engine.getVarStates()[earthquake.getId()];
    alarmState = engine.getVarStates()[alarm.getId()];
    johnCallsState = engine.getVarStates()[johnCalls.getId()];
    maryCallsState = engine.getVarStates()[maryCalls.getId()];
}

@Test
public void testInitialize() {
    // johnCalls
    assertArray(new double[]{0.90, 0.1, 0.05, 0.95}, scaleDouble(3, jtNode1.getPotentials()));

    // maryCalls
    assertArray(new double[]{0.7, 0.3, 0.01, 0.99}, scaleDouble(3, jtNode2.getPotentials()));

    // burglary, earthquake, alarm
    assertArray(new double[]{0.0000019, 0.0000001, 0.0009381, 0.0000599, 0.0005794, 0.0014186, 0.0009970, 0.9960050},
                scaleDouble(7, jtNode3.getPotentials()));
}

@Test
public void testNoEvidence() {
    engine.globalUpdate();

    assertArray(new double[]{0.052139, 0.947861}, scaleDouble(6, engine.marginalize("JohnCalls").getDistribution()));
    assertArray(new double[]{0.011736, 0.988264}, scaleDouble(6, engine.marginalize("MaryCalls").getDistribution()));
    assertArray(new double[]{0.001, 0.999}, scaleDouble(3, engine.marginalize("Burglary").getDistribution()));
    assertArray(new double[]{0.002, 0.998}, scaleDouble(3, engine.marginalize("Earthquake").getDistribution()));
    assertArray(new double[]{0.002516, 0.997484}, scaleDouble(6, engine.marginalize("Alarm").getDistribution()));
}

@Test
public void testAlarmEvidence() {
    BayesEngine nue = new BayesEngine(jTree);

    nue.setLikelyhood(new BayesLikelyhood(graph, jtNode3, alarmNode, new double[]{1.0, 0.0}));
    nue.globalUpdate();

    assertArray(new double[]{0.9, 0.1}, scaleDouble(6, nue.marginalize("JohnCalls").getDistribution()));
    assertArray(new double[]{0.7, 0.3}, scaleDouble(6, nue.marginalize("MaryCalls").getDistribution()));
    assertArray(new double[]{0.374, 0.626}, scaleDouble(3, nue.marginalize("Burglary").getDistribution()));
    assertArray(new double[]{0.231, 0.769}, scaleDouble(3, nue.marginalize("Earthquake").getDistribution()));
    assertArray(new double[]{1.0, 0.0}, scaleDouble(6, nue.marginalize("Alarm").getDistribution()));
}

@Test
public void testEathQuakeEvidence() {
    BayesEngine nue = new BayesEngine(jTree);

    nue.setLikelyhood(new BayesLikelyhood(graph, jtNode3, earthquakeNode, new double[]{1.0, 0.0}));
    nue.globalUpdate();

    assertArray(new double[]{0.297, 0.703}, scaleDouble(6, nue.marginalize("JohnCalls").getDistribution()));
    assertArray(new double[]{0.211, 0.789}, scaleDouble(6, nue.marginalize("MaryCalls").getDistribution()));
    assertArray(new double[]{0.001, 0.999}, scaleDouble(3, nue.marginalize("Burglary").getDistribution()));
    assertArray(new double[]{1.0, 0.0}, scaleDouble(3, nue.marginalize("Earthquake").getDistribution()));
    assertArray(new double[]{0.291, 0.709}, scaleDouble(6, nue.marginalize("Alarm").getDistribution()));
}

@Test
public void testJoinCallsEvidence() {
    BayesEngine nue = new BayesEngine(jTree);

    nue.setLikelyhood(new BayesLikelyhood(graph, jtNode1, johnCallsNode, new double[]{1.0, 0.0}));
    nue.globalUpdate();

    assertArray(new double[]{1.0, 0.0}, scaleDouble(6, nue.marginalize("JohnCalls").getDistribution()));
    assertArray(new double[]{0.04, 0.96}, scaleDouble(6, nue.marginalize("MaryCalls").getDistribution()));
    assertArray(new double[]{0.016, 0.984}, scaleDouble(3, nue.marginalize("Burglary").getDistribution()));
    assertArray(new double[]{0.011, 0.989}, scaleDouble(3, nue.marginalize("Earthquake").getDistribution()));
    assertArray(new double[]{0.043, 0.957}, scaleDouble(6, nue.marginalize("Alarm").getDistribution()));
}

@Test
public void testEarthquakeAndJohnCallsEvidence() {
    BayesEngine nue = new BayesEngine(jTree);

    nue.setLikelyhood(new BayesLikelyhood(graph, jtNode1, johnCallsNode, new double[]{1.0, 0.0}));
    nue.setLikelyhood(new BayesLikelyhood(graph, jtNode3, earthquakeNode, new double[]{1.0, 0.0}));
    nue.globalUpdate();

    assertArray(new double[]{1.0, 0.0}, scaleDouble(6, nue.marginalize("JohnCalls").getDistribution()));
    assertArray(new double[]{0.618, 0.382}, scaleDouble(6, nue.marginalize("MaryCalls").getDistribution()));
    assertArray(new double[]{0.003, 0.997}, scaleDouble(3, nue.marginalize("Burglary").getDistribution()));
    assertArray(new double[]{1.0, 0.0}, scaleDouble(3, nue.marginalize("Earthquake").getDistribution()));
    assertArray(new double[]{0.881, 0.119}, scaleDouble(6, nue.marginalize("Alarm").getDistribution()));
}

by Mark Proctor (noreply@blogger.com) at May 07, 2014 03:41 AM

May 06, 2014

Bruce Silver: Details on BPMN Master Class

Details of my BPMN Master Class on June 2 and 9 have now been finalized.  If you know BPMN Method and Style and you want to take the next step, this class is for you!

The class is split into two 5-hour sessions one week apart, so students will have time to complete problem sets assigned at the end of the first session and mail them in before the second session, when selected solutions will be presented.  Here is the outline of the class:

Day 1

  1. Overview and Objectives
  2. Method and Style Review
    • Instance alignment
    • Hierarchical modeling and gateway end state test
    • Avoiding deadlocks, multimerge, and unsafe models
    • Big 3 event types – Message, Timer, Error
    • Loop vs MI activity
  3. Batching and Multi-Pool Models
  4. Signal, Conditional, Escalation Events
  5. Event Subprocesses
  6. Problem Set Assignment

Day 2

  1. Problem Set Presentations and Discussion
  2. Enterprise Process Map
  3. Case Management and Declarative BPMN
    • CMMN vs BPMN
    • Ad hoc activities in BPMN
    • Event-condition-action pattern
    • Declarative BPMN
  4. Your Scenarios and Patterns
  5. Master Class Certification Exercise

The Master Class is open to students who are already Method and Style certified, but it begins with a quick review of some of its more technical concepts: alignment of the activity and process instance; using gateways to test child-level end state; merging parallel and conditionally parallel flows; basic patterns for Message, Timer, and Error events; and the difference between Loop and Multi-Instance activities.  We then go into mostly new material, beginning with how to deal with batching in end-to-end business processes, using multiple BPMN processes coordinated via messages and shared data.  We’ll spend some time on the “lesser” Level 2 event types – Signal, Conditional, and Escalation – why each is a little strange, and the most important use cases for each one.  We finish Day 1 with event subprocesses, which will prove extremely valuable when we get to case management and “declarative BPMN” on Day 2.

At the end of the session, four homework exercises will be assigned based on the Day 1 material. Students will mail in their solutions prior to Day 2, at which time selected solutions will be presented to the class and discussed.  Students are also invited to send in their own questions and scenarios, which we will discuss on Day 2 as well.  That thorny problem you have been struggling with in your own process models?  Send it in, and we’ll discuss various ways to model it on Day 2. In addition, on Day 2 we will discuss how BPMN models relate to enterprise BPM architecture models, a topic rarely given adequate treatment.  We’ll also explore how BPMN can do what it’s not supposed to be able to do: case management.  We’ll look at how escalation event subprocesses, parallel-multiple events, and other BPMN 2.0 constructs can be used to describe ad hoc behavior and “declarative” process models.

At the end of Day 2, we explain the certification exercise.  As in the BPMN Method and Style class, students have 60 days to complete the certification.  I’ll be using itp commerce Process Modeler for Visio in my slides, but students have the option of using Signavio instead.  Sixty-day use of either tool is provided as part of the training.

Sound interesting?  The class runs June 2 and 9, live-online, from 11am to 4pm ET each day (that’s 5pm to 10pm CET in Europe).  We will use internet audio, and students are encouraged to use a headset and microphone to facilitate 2-way voice discussion.  Click here to register by credit card, or contact me by email to sign up by PO.

The post Details on BPMN Master Class appeared first on Business Process Watch.

by bruce at May 06, 2014 06:26 PM