Planet BPM

May 23, 2016

Thomas Allweyer: BPM systems are becoming low-code development platforms

After the "zero code" promise of some vendors turned out to be unrealistic, the term "low code" has been cropping up more and more often lately. It describes platforms that are meant to simplify software development considerably by means of suitable tools. Fourteen such platforms were recently evaluated by the market research firm Forrester. Among them are quite a number of BPM systems, such as Appian, AgilePoint, Bizagi, K2 and Nintex. With their graphical modeling environments for flow control, their form editors and their database connectors, these systems already come with a whole range of features that significantly reduce the amount of conventional programming required.

Forrester defines low-code platforms as systems for the rapid delivery of business applications with a minimum of hand coding and low initial investment in setup, training and deployment. Many companies today depend on developing even large, complex and reliable solutions within days or weeks rather than months. Low-code platforms are supposed to make this possible.

The market is currently quite broad and fragmented. Depending on the systems' focus, Forrester distinguishes between "database application platforms", "request handling platforms", "mobile first application platforms" and "process application platforms", the latter covering the BPM systems already mentioned. There is a noticeable tendency for vendors to extend the functional scope of their systems towards "general purpose platforms" that can be used to develop quite different types of enterprise applications.

The Forrester analysts name the following as the most important features:

  • Graphical configuration of virtual data models and drag-and-drop integration of data sources
  • Declarative tools for defining business logic and workflows by means of process models, decision tables and business rules
  • Drag-and-drop construction of responsive user interfaces, with automatic generation of UIs for different devices
  • Tools for managing development, testing and deployment

The study also places particular emphasis on support for cloud deployment and mobile app stores. Vendors should hold cloud security certifications for this as well. Finally, vendors that offer a freemium model with a free version and tutorials are rated positively, since this allows getting started without elaborate training and high upfront investment.


The Forrester Wave™: Low-Code Development Platforms, Q2 2016
Download the study from the Appian site (registration required)

by Thomas Allweyer at May 23, 2016 11:53 AM

May 19, 2016

Sandy Kemsley: Analytics customer keynote at TIBCONOW 2016

Michael O’Connell hosted the last general session for TIBCO NOW 2016, focusing on analytics customer stories with the help of five customers: State Street, Shell, Vestas, Monsanto and Western...

[Content summary only, click through for full article and links]

by sandy at May 19, 2016 12:26 AM

May 18, 2016

Sandy Kemsley: ING Turkey’s journey to becoming a digital bank

I wanted to catch an ActiveMatrix BPM customer breakout session here at TIBCONOW 2016, so sat in on Rahsan Kalci from ING Turkey talking about their transformation to a digital bank using BPM,...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 10:58 PM

Sandy Kemsley: ActiveMatrix BPM update at TIBCONOW

Roger King, head of BPM product management, gave us an update on ActiveMatrix BPM and Nimbus. The most recent updates in AMX BPM have focused on data and case management. As we saw in the previous...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 09:57 PM

Sandy Kemsley: Case management at TIBCONOW 2016

Breakout sessions continue with Jeremy Smith and Nicolas Marzin of TIBCO presenting their case management functionality. Marzin went through the history of process and how we have moved from...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 08:46 PM

Sandy Kemsley: Intelligent Business Operations at TIBCONOW 2016

Nicolas Marzin of TIBCO gave a breakout session on making business operations intelligent, starting with the drivers of efficiency, agility, quality and transparency. There are a number of challenges...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 07:17 PM

Sandy Kemsley: Closing the loop with analytics: TIBCONOW 2016 day 2 keynote

Yesterday at TIBCO NOW 2016, we heard about the first half of TIBCO’s theme — interconnect everything — and today, Matt Quinn introduced the second half — augment intelligence...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 06:25 PM

May 17, 2016

Sandy Kemsley: TIBCO Nimbus for regulatory compliance at Bank of Montreal

It’s the first afternoon of breakout sessions at TIBCO NOW 2016, and Alex Kurm from Bank of Montreal is presenting how the bank has used Nimbus for process documentation, to serve the goals of...

[Content summary only, click through for full article and links]

by sandy at May 17, 2016 11:54 PM

Sandy Kemsley: Destination: Digital at the TIBCONOW 2016 day 1 keynote

TIBCO had a bit of a hiatus on their conference while they were being acquired, but are back in force this week in Las Vegas with TIBCO NOW 2016. The theme is “Destination: Digital” with...

[Content summary only, click through for full article and links]

by sandy at May 17, 2016 07:14 PM

May 16, 2016

Keith Swenson: AI as an Interface to Business Process

Dag Kittlaus demonstrated Viv last week; the business software world should pay attention.  “Viv” is a conversational approach to interacting with systems.  The initial presentation talks about personal applications, but there are even greater opportunities in the workplace.

What is it?

If you have not yet seen it, then take a look at the Techcrunch Video.  It is billed as an artificial intelligence personal assistant.  Dag Kittlaus brought Siri to Apple to provide basic spoken language recognition to the iPhone.  Viv goes a lot further.  It takes what you say and starts creating a map of what you want.  As you say more, it modifies and refines the map.  It taps information service providers, and these are combined in real time based on a semantic model of those services.

This is precisely what Nathaniel Palmer was presenting in his forward-looking presentation at the bpmNEXT conference, and coincidentally something I brought up as well.  Businesses moved from big heavy equipment, to laptops, and then to smart phones.  Mobile is so last year!  The devices got more portable, and the graphical user interface got better over the years, but the paradigm remained the same: humans collect the information together, and submit it to the system, to allow the system to process it.  You write an email, edit it to final form, and then send it.  You fill out an expense report, and then submit it.

A conversational UI is very different.  You have a single agent that you contact by voice message, text message, email and, yes, probably also by web forms, which then in turn interfaces with the system software.  It learns about you and the kinds of things you normally want, so that it can understand what you are talking about and translate for the relatively dumber systems.

I was not that impressed

All of the examples were simple, one-off requests.  Asking for the weather, or asking a more complicated query, shows some nice parsing capability, but it is still just a single query with a single answer.  Dynamic program generation?  Software that writes itself?  Give me a break: every screen generator, every application generator, generates a program that executes.  That claim is a bit hyperbolic.  The important thing is not that it creates a sequence of steps that satisfies the intent, but that it is able to understand the intent in the first place.

Order flowers.  I could simply phone someone and order flowers.  I can order an Uber car without needing an assistant.  Booking a hotel is only a few mouse clicks.  That is always the problem with demonstrations — they have to be simple enough to grasp, short enough to complete in a few minutes, but hopefully compelling enough to convey the potential.

The most interesting part is that after he has the list of flowers, he simply says “what about tulips” and Viv refines the result.  This shows the power of the back-and-forth conversation.  The conversation constitutes a kind of learning that works together with you to incrementally get to what you want to do.  That is the news: Viv has an impressive understanding of you and what you mean with a few words, and it extends that understanding on a case-by-case basis.

What is the Potential?

One of the biggest problems with BPM is this idea that you have to know everything at the time that the process starts.  You have to put all your expenses into the expense report for processing.  You need to fill in the complete order form before you can purchase something.  As we illustrated in Mastering the Unpredictable, many businesses have to start working long before they know all the data.  The emergency room has to accept patients long before they know what care is needed.

The conversational approach to applications will radically transform the ability of software to help out.  Instead of being required to give the full details up front, you can tell the agent what you know now.  It can start working on part of that.  Later, you tell it a little more, maybe after reviewing what it had found so far.  If it is heading down the wrong path, you steer it back in the right direction.

I personally hate forms that ask for every potential bit of information that might be needed somewhere in the process.  Like at the doctor’s office, where you fill in the same details every time, most of which are not going to be needed on this visit, but there is a spot there just in case.  A conversational approach would allow me to add information as it is needed.


With a group of people this starts to get really interesting.  The doctor is unsure of the direction to go with a patient, so they bring an expert into the conversation.  That expert could start asking questions about the patient.  The agent answers when it can, but it can also pass those questions on to the doctor and the patient.  The conversation is facilitated by the map that represents the case so far.  The agent learns what needs to be done, and over time can facilitate this interaction by learning what the various participants normally mean by their spoken words.

It is not that far-fetched.  It will radically change the way we think about our business applications.  It certainly is disruptive.  This demonstration of Viv makes it clear that this is already happening today.  You might want to buckle your seat belts.

 


by kswenson at May 16, 2016 12:54 PM

May 13, 2016

Drools & JBPM: #Drools & #jBPM @ #JBCNConf 2016 (Barcelona, June)

Great news! Once again the amazing and flamboyant leaders of the Java User Group from Barcelona have managed to put together their annual conference, JBCNConf. And, of course, Drools & jBPM will be there. Take a look at their website for more information about the talks and speakers, and if you are close enough to Barcelona I hope to see you all there.
This year I will be doing a Drools workshop there (Thursday, the first day of the conference), hoping to introduce people to Drools in a very hands-on session. So if you are looking to start using Drools straight away, this is a great opportunity to do so. If you are a more advanced user and want to bring your examples or issues to the workshop, you are more than welcome. I will be sharing the projects that I will be using in the workshop a couple of weeks before the event, so you can take a look and bring more questions to the session. It is also probable that I will be bringing with me freshly printed copies of the new Mastering Drools book, so you might be able to get some copies for free :)
Maciej Swiderski will be covering jBPM and knowledge-driven microservices this year. I totally recommend this talk to anyone interested in how to improve your microservices by adopting tools to formalise and automate domain-specific knowledge.
Finally, this year Maciej and I will be giving the closing talk of the conference, titled "The Open Source Way", where we will be sharing with the audience the main benefits of getting involved with the open source community & projects, but most importantly we will be sharing how to achieve that. If you are already an open source project contributor and you plan to attend the conference, get in touch!
Stay tuned for more news, and get in touch if you want to hang around with us before and after the conference!

by salaboy (noreply@blogger.com) at May 13, 2016 09:05 AM

May 12, 2016

Thomas Allweyer: Current edition of the BPMN book published in English

In the meantime, the current edition of my BPMN book, which has been extended in particular with a collection of modeling patterns, has been published in English. The second English edition corresponds in content to the third German edition. When ordering the book, make sure to look for the correct ISBN (and, if necessary, search for it directly); on various international Amazon sites in particular, you are often shown only the old edition. Since the book is printed on demand, it can in any case be delivered within a few days, even if Amazon sometimes states otherwise.

More information about the book (including direct links to the order pages)

by Thomas Allweyer at May 12, 2016 10:06 AM

May 11, 2016

Keith Swenson: DMN at bpmNEXT 2016

bpmNEXT is two and a half days of intense examination and evaluation of the leading trends in the business process community, and Decision Model and Notation (DMN) was clearly highlighted this year.

This is the year for DMN

The Decision Model and Notation (DMN) standard was released in mid-2015. There are several implementations, but none of them is quite mature yet.  If you are not familiar with DMN, here is what you need to know:

  • You can think of it simplistically as a tree of decision tables. There is so much more to it than that, but probably 80% of usage will be a tree of decision tables
  • It has a specific expression language that allows the writing of conditions and results
  • Actually it is a tree of block expressions. A block expression can be a decision table, a simple if/then/else statement, or a number of other types of expression.
  • The results of blocks lower in the tree can be used in blocks further up.

The idea is to represent complicated expressions in a compact, reusable way.
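
To make the "tree of decision tables" idea concrete, here is a minimal, self-contained Java sketch — not a DMN engine and with no FEEL support: each table returns the output of its first matching rule, and the output of a lower decision feeds into a decision further up the tree. The table names, inputs and the first-hit policy are all illustrative assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class DecisionTreeSketch {

    // One row of a decision table: a condition on the inputs and the output it produces.
    record Rule(Predicate<Map<String, Object>> condition, Object output) {}

    // A decision table returns the output of the first matching rule (a "first hit" policy).
    record DecisionTable(String name, List<Rule> rules) {
        Object evaluate(Map<String, Object> inputs) {
            return rules.stream()
                    .filter(r -> r.condition().test(inputs))
                    .map(Rule::output)
                    .findFirst()
                    .orElse(null);
        }
    }

    public static void main(String[] args) {
        // Lower decision: classify the customer from the order history.
        DecisionTable customerClass = new DecisionTable("customerClass", List.of(
                new Rule(in -> ((Integer) in.get("ordersPerYear")) >= 12, "frequent"),
                new Rule(in -> true, "occasional")));

        // Upper decision: the discount uses the result of the lower decision, which is
        // how results from blocks lower in the tree feed into blocks further up.
        DecisionTable discount = new DecisionTable("discount", List.of(
                new Rule(in -> "frequent".equals(in.get("customerClass"))
                        && ((Double) in.get("orderTotal")) > 100.0, 0.10),
                new Rule(in -> "frequent".equals(in.get("customerClass")), 0.05),
                new Rule(in -> true, 0.0)));

        Map<String, Object> inputs = new java.util.HashMap<>(
                Map.of("ordersPerYear", 15, "orderTotal", 250.0));
        inputs.put("customerClass", customerClass.evaluate(inputs));

        System.out.println("discount: " + discount.evaluate(inputs)); // discount: 0.1
    }
}
```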

In general, the market response to DMN has been very good.  Some business rule purists say it is too technical; however, it strikes a balance between what you would need to do in a programming language and a completely natural-language rule implementation.  Like BPMN, it will probably tend to be used by specialists, but there is also a good chance, like BPMN, that the results will at least be readable by regular business users.  In my talk, I claimed “This is the Year for DMN”.

Demonstrations:

  • Denis Gagne, Trisotech, demonstrated DMN modeling as part of his suite of cloud based process modeling tools.  Execution is notably absent.
  • Alvin To, Oracle, demonstrated their version, which only supports linear box expressions (as opposed to the more general tree structure), paying particular attention to their contribution to the spec: FEEL (Friendly Enough Expression Language).
  • Larry Goldberg, Sapiens, demonstrated their ability to create DMN models and transform them into a large variety of execution formats.
  • Jacob Feldman, OpenRules, demonstrated his rules optimization capability.
  • Jakob Freund, Camunda, has an implementation that focuses on single decision tables.

Missing Run-time

Most of the demonstrations focused on the modeling of the decisions.  This is a problem.  The specification covers the modeling; however, as with any software standard, the devil is in the details.  You can model using several tools in exactly the same way, but there is no guarantee that the execution of the model will be the same.  A similar situation existed with BPMN, where different implementations treated things like the Inclusive-OR node completely differently.  The model is meaningless unless you can show that the models actually produce the same decisions — and that requires a standard run-time library that can execute the models and show what they actually mean.

The semantics are described in the specification using words that can never be precise enough to ensure reliable interoperability.  Until an actual reference implementation is available, there will be no way to decide who has interpreted these words correctly.   The problems occur in what might seem to be pathological edge cases, but experience shows that these are surprisingly more numerous than anyone anticipates.

Call To Action

For this reason I am calling for a standard implementation of the DMN evaluator that is widely available to everyone.  A reference implementation.  I think it needs to be an open source implementation, one that works well enough that products can actually use the code in a product.  Much like the way that Apache web server has eliminated the need for each company to write their own web server.

WfMC will be starting a working group to identify and promote the best open source implementation of DMN run-time.  We don’t want to invent yet another implementation, so we plan to identify the best existing implementation and promote it.  There are a couple of good examples out there.

If you believe you have a good open source implementation of DMN run-time then please leave a comment on this blog post.

If you are interested in helping identify and recognize the best implementation, leave a comment as well.

Resources


by kswenson at May 11, 2016 05:27 AM

April 25, 2016

Thomas Allweyer: How much process intelligence do companies have?

The survey for this year's BPM study by the ZHAW School of Management and Law has recently been launched, while the publication of the preceding study on the topic of process intelligence is itself still quite recent. The term "process intelligence" is commonly understood to mean the collection and analysis of process-related data. In this study the term is used more broadly: it covers all the capabilities of an organization that enable it to deal intelligently with its processes, and comprises the areas of "creative intelligence", "analytical intelligence" and "practical intelligence". For example, the abilities to anchor process management strategically, to optimize processes and to steer process execution are also part of process intelligence. The BPM study 2015 examined the state of process intelligence in companies, using five case studies on the one hand and a survey on the other.

The case studies describe process improvement projects at three companies (Axa Winterthur, St. Galler Kantonalbank and Hoffmann-La Roche) and two city administrations (Lausanne and Konstanz). Very different methods and tools were used, such as process mining, simulation, process automation, business rules management, Lean Six Sigma, value stream analysis and an approach to agile business process management. The case studies are described in detail, and for each one it is worked out which aspects of process intelligence were used and improved.

The survey made clear that in many companies there is a gap between aspiration and reality with regard to the benefit potential of BPM. Efficiency gains and customer orientation are named as the most important goals, yet only few companies actually carry out measures related to these goals. Only about one fifth of the respondents state that they systematically identify standardization and automation potential, or that they monitor operational process performance. Accordingly, business intelligence tools are so far used only rarely in connection with business process management. IT support for weakly structured, knowledge-intensive processes is also not very developed at present. In particular, BPM is still hardly seen in connection with topics such as digitalization, the development of innovations or the optimization of the customer experience. What process management can contribute to these strategic future topics is being examined in the BPM 2016 study that has just been launched.

Download the study at www.zhaw.ch/iwi/prozessintelligenz

by Thomas Allweyer at April 25, 2016 07:35 AM

April 21, 2016

Sandy Kemsley: bpmNEXT 2016 demo: Capital BPM and Fujitsu

Our final demo session of bpmNEXT — can’t believe it’s all over. How I Learned to Tell the Truth with BPM – Gene Rawls, Capital BPM Their Veracity tool overlays architecture...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 06:53 PM

Sandy Kemsley: bpmNEXT 2016 demos: Appian, Bonitasoft, Camunda and Capital BPM

Last day of bpmNEXT 2016 already, and we have a full morning of demos in two sessions, the first of which has a focus on more technical development. Intent-Driven, Future-Proof User Experience...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 05:28 PM

Sandy Kemsley: bpmNEXT 2016 demos: IBM, Orquestra, Trisotech and BPM.com

On the home stretch of the Wednesday agenda, with the last session of the four last demos for the day. BPM in the Cloud: Changing the Playing Field – Eric Herness, IBM IBM Bluemix...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 12:33 AM

April 20, 2016

Sandy Kemsley: bpmNEXT 2016 demos: Oracle, OpenRules and Sapiens DECISION

This afternoon’s first demo session shifts the focus to decision management and DMN. Decision Modeling Service – Alvin To, Oracle Oracle Process Cloud as an alternative to their Business...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 10:09 PM

Sandy Kemsley: bpmNEXT 2016 demos: W4 and BP3

Second round of demos for the day, with more case management. This time with pictures! BPM and Enterprise Social Networks for Flexible Case Management – Francois Bonnet, W4 (now ITESOFT Group)...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 07:05 PM

Sandy Kemsley: bpmNEXT 2016 demos: Salesforce, BP Logix and RedHat

Day 2 of bpmNEXT is all demos! Four sessions with a total of 12 demos coming up, with most of the morning focused on case management. Cloud Architecture Accelerating Innovation in Application...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 05:33 PM

April 19, 2016

Sandy Kemsley: bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016. Process Intelligence – Sven Wagner-Boysen, Signavio Signavio allows creating a BPMN model with definitions of KPIs for the...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 11:26 PM

Sandy Kemsley: bpmNEXT 2016 demo session: 8020 and SAP

My panel done — which probably set some sort of record for containing exactly 50% of the entire female attendees at the conference — we’re on to the bpmNEXT demo session: each is 5...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 10:05 PM

Sandy Kemsley: Building a Value-Added BPM Business panel at bpmNEXT

BPM implementations aren’t just about the software vendors, since the vendor vision of “just take it out of the box and run it” or “have your business analyst build...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 07:05 PM

Sandy Kemsley: Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 06:06 PM

Sandy Kemsley: bpmNEXT 2016

It’s back! My favorite conference of the year, where the industry insiders get together to exchange stories and show what cool stuff that they’re working on, bpmNEXT is taking place this...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 05:01 PM

April 18, 2016

Thomas Allweyer: Survey on BPM and digital transformation launched

Under the guiding question "Customer value through digital transformation?", the School of Management and Law at the Zurich University of Applied Sciences (ZHAW) has launched the survey for its BPM study 2016. This year the focus is in particular on the potential of process management for optimizing customer experiences and for developing and implementing new business models. The study aims to examine which concepts and methods are already being used in these areas and to what extent they are part of companies' digital transformation. Participation in the survey is possible as of now. Link to the survey.

by Thomas Allweyer at April 18, 2016 06:33 PM

Drools & JBPM: Drools 6.4.0.Final is available

The latest and greatest Drools 6.4.0.Final release is now available for download.

This is an incremental release on our previous build that brings several improvements in the core engine and the web workbench.

You can find more details, downloads and documentation here:




Read below some of the highlights of the release.

You can also check the new releases for:




Happy drooling.

Drools Workbench

New look and feel

The general look and feel of the entire workbench has been updated to adopt PatternFly. The update brings a cleaner, more lightweight and more consistent user experience throughout every screen, allowing users to focus on the data and the tasks by removing all unnecessary visual elements. Interactions and behaviour remain mostly unchanged, limiting the scope of this change to visual updates.


Various UI improvements

In addition to the PatternFly update described above which targeted the general look and feel, many individual components in the workbench have been improved to create a better user experience. This involved making sure the default size of modal popup windows is appropriate to fit the corresponding content, adjusting the size of text fields as well as aligning labels, and improving the resize behaviour of various components when used on smaller screens.


New Locales

Locales ru (Russian) and zh_TW (Traditional Chinese) have now been added.

New Decision Server Management UI

The KIE Execution Server Management UI has been completely redesigned to reflect major improvements introduced recently. Besides the fact that the new UI has been built from scratch following the best practices provided by PatternFly, the new interface expands on previous features, giving users more control of their servers.


Core Engine


Better Java 8 compatibility

It is now possible to use Java 8 syntax (lambdas and method references) in the Right Hand Side (then) part of a rule.

More robust incremental compilation

The incremental compilation (dynamic rule-base update) had some relevant flaws when one or more rules with a subnetwork (rules with complex existential patterns) were involved, especially when the same subnetwork was shared among different rules. This issue required a partial rewrite of the existing incremental compilation algorithm, followed by a complete audit that has also been validated by a brand new test suite of more than 20,000 test cases in this area alone.

Improved multi-threading behaviour

The engine's code dealing with multi-threading has been partially rewritten in order to remove a large number of synchronisation points and improve stability and predictability.


OOPath improvements

OOPath has been introduced with Drools 6.3.0. In Drools 6.4.0 it has been enhanced to support a number of new features.


by Edson Tirelli (noreply@blogger.com) at April 18, 2016 03:50 PM

Drools & JBPM: Official Wildfly Swarm #Drools Fraction

Official what? A long title for a quite small but useful contribution. Wildfly Swarm allows us to create rather small and self-contained applications including just what we need from the Wildfly Application Server. In this post we will be looking at the Drools Fraction provided to work with Wildfly Swarm. The main idea behind this fraction is to provide a quick way to bundle the Drools Server along with your own services inside a jar file that you can run anywhere.

Microservices World

Nowadays, with microservices being a trending topic, we need to make sure that we can bundle our services as decoupled from other software as possible. For such a task we can use Wildfly Swarm, which allows us to create our services using a set of fractions instead of a whole JEE container. It also saves us a lot of time by allowing us to run our application without the need to download or install a JEE container. With Swarm we can just run java -jar <our services.jar> and we are ready to go.
In the particular case of Drools, the project provides a web application called Kie-Server (Drools Server), which offers a set of REST/SOAP/JMS endpoints to use as a service. You can load your domain-specific rules inside this server and create new containers to use your different sets of rules. But again, if we want to use it, we need to worry about how to install it in Tomcat, Wildfly, Jetty, WebSphere, WebLogic, or any other servlet container. Each of these containers represents a different challenge when it comes to configuration, so instead of that we can use the Wildfly Swarm Drools Fraction, which basically enables the Drools Server inside your Wildfly Swarm application. In a way you are bundling the Drools Server with your own custom services. By doing this, you can start the Drools Server by running java -jar <your.jar> and you are ready to go.
Imagine, in contrast, dealing with several instances of servlet containers and deploying the WAR file to each of those containers. It gets worse if those containers are not all the same "brand" and version.
So let's take a quick look at an example of how you can get started using the Wildfly Swarm Drools Fraction.

Example

I recommend you to take a look at the Wildfly Swarm Documentation first, to get you started on using Wildfly Swarm. If you know the basics, then you can include the Drools Fraction.
I've created an example using this fraction here: https://github.com/Salaboy/drools-workshop/tree/master/drools-server-swarm
The main goal of this example is to show how simple it is to get started with the Drools Fraction, and for that reason I'm not including any other service in this project. You are not restricted by that, and you can expose your own endpoints.
Notice in the pom.xml file two things:
  1. The Drools Server Fraction: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L18. By adding this dependency, the fraction is activated when Wildfly Swarm bootstraps.
  2. The wildfly-swarm plugin: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L25. Notice in the plugin configuration that we are pointing to the App class, which basically just starts the container. (This can be avoided, but I wanted to show that if you want to start your own services or do your own deployments you can do that inside that class.)
If you compile and package this project by doing mvn clean install, you will find in the target/ directory a file called:
drools-server-swarm-1.0-SNAPSHOT-swarm.jar which you can start by doing
```
java -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar
```
For this example, we will include one more flag when we start our project to make sure that our Drools Server can resolve the artefacts that I'm going to use later on, so it will be like this:
```
java -Dkie.maven.settings.custom=../src/main/resources/settings.xml -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar
```
By adding the "kie.maven.setting.custom" flag here we are letting the Drools Server know that we had configured an external maven repository to be used to resolve our artefacts. You can find the custom settings.xml file here.
Once you start this project and everything boots up (less than 2 seconds to start the wildfly-swarm core + less than 14 seconds to boot up the Drools Server), you are ready to start creating your KIE containers with your domain-specific rules.
You can find the output of running this app here. Notice the binding address for the http port:
WFLYUT0006: Undertow HTTP listener default listening on [0:0:0:0:0:0:0:0]:8083
Now you can start sending requests to http://localhost:8083/drools to interact with the server.
I've also included in this project a Postman collection for you to test some very simple requests like:
  • Getting All the registered Containers -> GET http://localhost:8083/drools/server/containers
  • Creating a new container - > PUT http://localhost:8083/drools/server/containers/sample
  • Sending some commands like Insert Fact + Fire All Rules -> POST http://localhost:8083/drools/server/containers/instances/sample
You can import this file into Postman and fire the requests against your newly created Drools Server. Besides knowing which URLs to PUT, POST or GET data to, you also need to know the required headers and authentication details:
Headers
Authentication -> Basic
User: kieserver
Password: kieserver1!
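
For completeness, here is a minimal Java sketch of calling the "list containers" endpoint above from code instead of Postman, using only the URL and the kieserver credentials shown in this post; the JSON Accept header is my own assumption, so adjust it to your setup.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ListContainers {
    public static void main(String[] args) throws Exception {
        // GET all registered containers from the Drools Server started above.
        URL url = new URL("http://localhost:8083/drools/server/containers");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        // Basic authentication with the default kieserver user.
        String auth = Base64.getEncoder()
                .encodeToString("kieserver:kieserver1!".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Accept", "application/json"); // assumption: request a JSON response

        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            in.lines().forEach(System.out::println); // prints the container list
        }
    }
}
```
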
Finally, you can find the source code of the Fraction here: https://github.com/wildfly-swarm/wildfly-swarm-drools
There are tons of things that can be improved, helpers to be provided, bugs to be fixed, so if you are up to the task, get in touch and let's make the Drools fraction better for everyone.

Summing up

While I'm still writing the documentation for this fraction, you can start using it right away. Remember that the main goal of these Wildfly Swarm extensions is to make your life easier and save you some time when you need to get something like the Drools Server in a small, isolated bundle that doesn't require a server to be installed and configured.
If you have any questions about the Drools Fraction don't hesitate to write a comment here.



by salaboy (noreply@blogger.com) at April 18, 2016 01:21 PM

April 15, 2016

Thomas Allweyer: Insight conference discusses modeling in the digital enterprise

The conference organized by the modeling specialist MID in Nuremberg is by now probably the largest German-language event around the topic of modeling. Under the motto "Models Drive Digital", the ubiquitous topic of digitalization was once again in the foreground this year. Both the opening keynote by innovation researcher Nick Sohnemann and the closing talk by Ranga Yogeshwar revolved around the sometimes breathtakingly fast developments our society is confronted with and that will change every industry, with the television journalist Yogeshwar also making numerous critical remarks. For instance, it can be observed that innovations often reinforce inequality.

Another plenary talk presented the digitalization strategy of FC Bayern München. The largest sports club in the world is also a large company with, in part, very specific IT requirements. For example, the planning, monitoring and control of the arrival and departure of tens of thousands of visitors to a home game must be supported end to end. Registering as a club member must also be possible via an app, not least because particularly devoted fans want to register their newborn offspring with the club straight from the delivery room.

At the organizer MID, everything revolves around the "smartfacts" platform, which integrates models from a wide variety of tools in a collaborative environment. The managing directors Andreas Ditze and Jochen Seemann presented the latest developments, including improved support for review and approval processes, the integration of a web modeler, and the preparation of process models in the form of a "Process Guidance" that leads end users through processes step by step.

The talk program offered a total of ten parallel tracks to choose from. In addition to digitalization, topics such as business process management, agile methods, business intelligence, master data management and SAP were on the program. During the breaks, participants could try out data glasses and other gadgets or experience knowledge transfer through serious games.

It is often observed that precisely the pioneers of digital transformation hardly use established modeling methods. These are regarded as too heavyweight to be helpful for the rapid development and implementation of digital business models. Nick Sohnemann already pointed out in the opening talk that, according to Google Trends, interest in the search term "Business Process Modeling" has dropped sharply. And Elmar Nathe, who is responsible for the digitalization topic at MID, told me that there are customers who, after a rather rough sketch of the functional architecture, jump straight into coding and largely do without more detailed modeling — even though the missing documentation is likely to cause problems in maintenance and further development.

Managing director Jochen Seemann quoted a Gartner study according to which 80% of companies will not achieve the hoped-for success with their digital strategies due to an insufficient BPM maturity level. In this respect, topics such as process management and process modeling play an important role in the digital enterprise, because the new business models only work if the processes and systems needed to implement them are under control. MID observes that topics such as model-driven development are also attracting increased interest again. Car manufacturers, for example, are increasingly relying on model-based approaches to get a grip on the variety of hardware and software variants.

by Thomas Allweyer at April 15, 2016 09:45 AM

April 11, 2016

BPinPM.net: Invitation to BPinPM.net Conference 2016 – The Human Side of BPM: From Process Operation to Process Innovation

We are very happy to invite you to the most comprehensive Best Practice in Process Management Conference ever! Meet us at Lufthansa Training & Conference Center and join the journey from Process Operation to Process Innovation.

It took more than a year to evaluate, to double-check, and to combine all workshop results into a new and holistic approach for sustainable process management.

But now, the ProcessInnovation8 is there and will guide us at the conference! 🙂

The ProcessInnovation8 provides direction to BPM professionals and management throughout the phases Process Strategy, Process Enhancement, Process Implementation, Process Steering, and Process Innovation, while keeping a special focus on the human side of BPM to maximize the acceptance and benefit of BPM.

To share our learnings, introduce practical examples, discuss the latest BPM insights, experience the BPinPM.net community, enjoy the dinner, and, and, and…, we are looking forward to meeting you in Seeheim! 🙂

Please order your tickets now. Capacity is limited and the early bird tickets will be available for a short period of time only.

Please visit the conference site to access the agenda and to get all the details…

 

 

Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international BPinPM.net community as well. Please feel free to contact the team.

by Mirko Kloppenburg at April 11, 2016 07:17 PM

April 10, 2016

Tom Debevoise: Lists in Decision Model and Notation

This image was inspired by Nick Broom's post to the DMN group on LinkedIn.

The use case posed by Nick is here: https://www.linkedin.com/groups/4225568/4225568-6123464175038586884

In the Signavio Decision Modeler’s implementation of DMN, we provide the ability to check whether a set contains an element of another input item or a static set. The expression it uses in the column is an equivalent of the set intersection operator. The DMN diagram above does this in three different ways:

1)      With the Signavio ‘Multi-Decision’ extension to DMN. This iterates through an input that is a list and checks item by item whether the inputs match.

2)      An internal operator that tests whether one item or set of items exists as a subset of another, using a fixed subset

3)      An internal operator that tests whether one item or set of items exists as a subset of another, using an input data type

You do not need the multi-decision to support a simple data type list. However, if the input item is a list of complex types (multi-attribute types) or complex logic is needed, then the multi-decision is needed.

The Signavio export for this diagram is here.
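
As a plain-Java illustration of the two checks the decision logic above relies on — is the intersection of two lists non-empty, and is one list a subset of the other — here is a small sketch. It is not Signavio's implementation, and the cover names are made up for the example.

```java
import java.util.List;
import java.util.Set;

public class ListChecks {
    public static void main(String[] args) {
        List<String> requestedCovers = List.of("theft", "fire");       // input item that is a list
        Set<String> includedCovers = Set.of("fire", "flood", "storm"); // static set to compare against

        // Intersection test: true if at least one requested cover is included.
        boolean anyMatch = requestedCovers.stream().anyMatch(includedCovers::contains);

        // Subset test: true only if every requested cover is included.
        boolean allMatch = includedCovers.containsAll(requestedCovers);

        System.out.println("intersection non-empty: " + anyMatch); // true
        System.out.println("subset: " + allMatch);                 // false
    }
}
```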

 

by Tom Debevoise at April 10, 2016 07:03 PM

Thomas Allweyer: Website for the BPMN book updated

I am currently preparing the English edition of the current edition of the BPMN book. In the process, I noticed a few small things in the German book that could be improved. There are also a few changes and additions to the sources in the bibliography and to the listed internet links. I have therefore taken the opportunity to update the website for the book: www.kurze-prozesse.de/bpmn-buch

by Thomas Allweyer at April 10, 2016 10:25 AM

April 08, 2016

Thomas Allweyer: Free modeling tools put to the test

In its latest study, BPM&O examined 17 free process modeling tools. Only tools whose free use is unlimited in time and which impose no restrictions on model size were included. Technical requirements, interfaces, model types and links, languages, documentation and support were evaluated. Some of the tools offer a considerable range of functions and are quite suitable for short-term use in projects or for bridging the procurement period of a commercial modeling platform. Nevertheless, the study's authors conclude, one has to be aware that all of the freely available modeling tools are essentially just better drawing tools. They cannot meaningfully support comprehensive process management, since essential functions such as collaboration capabilities or process portals are missing. The videos that BPM&O created for each of the tools examined give an impression of how the modeling functions are operated. Link to download the study (registration required).

by Thomas Allweyer at April 08, 2016 12:21 PM

April 06, 2016

Drools & JBPM: User and group management in jBPM and Drools Workbenches

Introduction

This article talks about a new feature that allows the administration of the application's users and groups using an intuitive and friendly user interface that comes integrated in both jBPM and Drools Workbenches.

User and group management
Before covering the installation, setup and usage of this feature, this article introduces some concepts that need to be understood first.

This article is split into the following sections:
  • Security management providers and capabilities
  • Installation and setup
  • Usage
Notes: 
  • This feature is included from version 6.4.0.Final.
  • Sources available here.


Security management providers

A security environment is usually provided by the use of a realm. Realms are used to restrict access to the different application resources. So realms contain information about users, groups, roles, permissions and any other related information.

In most typical scenarios the application's security is delegated to the container's security mechanism, which in turn consumes a given realm. It's important to consider that several realm implementations exist: for example, Wildfly provides a realm based on the application-users.properties/application-roles.properties files, Tomcat provides a realm based on the tomcat-users.xml file, etc. So keep in mind that there is no single security realm to rely on; it can be different in each installation.

The jBPM and Drools workbenches are no exception: they're built on top of the Uberfire framework (aka UF), which also delegates authorization and authentication to the underlying container's security environment, so the consumed realm is given by the concrete deployment configuration.

 
Security management providers

Due to the potentially different security environments that have to be supported, the user and group management feature provides a well-defined management services API with some default built-in security management providers. A security management provider is the formal name given to a concrete user and group management service implementation for a given realm.

At this moment, by default there are three security management providers available:
Keep an eye out for new security management providers in further releases. You can easily build and register your own security management provider if none of the defaults fits your environment.

 
Security management providers' capabilities

Each security realm can support different operations. For example, consider the use of a Wildfly realm based on properties files. The content of the application-users.properties file looks like this:

admin=207b6e0cc556d7084b5e2db7d822555c
salaboy=d4af256e7007fea2e581d539e05edd1b
maciej=3c8609f5e0c908a8c361ca633ed23844
kris=0bfd0f47d4817f2557c91cbab38bb92d
katy=fd37b5d0b82ce027bfad677a54fbccee
john=afda4373c6021f3f5841cd6c0a027244
jack=984ba30e11dda7b9ed86ba7b73d01481
director=6b7f87a92b62bedd0a5a94c98bd83e21
user=c5568adea472163dfc00c19c6348a665
guest=b5d048a237bfd2874b6928e1f37ee15e
kiewb=78541b7b451d8012223f29ba5141bcc2
kieserver=16c6511893651c9b4b57e0c027a96075

As you can see, it's based on key-value pairs where the key is the username and the value is the hashed value of the user's password. So a user is just defined by the key, its username; it does not have a name, an address, etc.

On the other hand, consider the use of a realm provided by a Keycloak server. The information for a user is composed by more user meta-data, such as surname, address, etc, as in the following image:

Admin user edit using the Keycloak sec. management provider

So the different services and client-side components of the user and group management API are based on capabilities. Capabilities are used to expose or restrict the available functionality provided by the different services and client-side components. Examples of capabilities are:
  • Create user
  • Update user
  • Delete user
  • Update user attributes
  • Create group
  • Assign groups
  • Assign roles 
  • etc

Each security management provider must specify the set of capabilities it supports. From the previous examples you can note that the Wildfly security management provider does not support the capability of managing a user's attributes - the user is only composed of the user name. On the other hand, the Keycloak provider does support this capability.

The different views and user interface components rely on the capabilities supported by each provider, so if a capability is not supported by the provider in use, the UI does not provide the views for the management of that capability. As an example, consider that a concrete provider does not support deleting users - the delete user button on the user interface will not be available.
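
To illustrate this capability-driven behaviour, here is a small sketch of the idea only — it is not the actual Uberfire security-management API; the enum values and the provider descriptor are invented for the example.

```java
import java.util.EnumSet;
import java.util.Set;

public class CapabilityCheckSketch {

    enum Capability { CREATE_USER, UPDATE_USER, DELETE_USER, UPDATE_USER_ATTRIBUTES }

    // Hypothetical provider descriptor: a properties-file realm would not report
    // attribute management, while a Keycloak realm would.
    record Provider(String name, Set<Capability> capabilities) {
        boolean supports(Capability c) { return capabilities.contains(c); }
    }

    public static void main(String[] args) {
        Provider wildfly = new Provider("wildfly-properties",
                EnumSet.of(Capability.CREATE_USER, Capability.UPDATE_USER, Capability.DELETE_USER));

        // The UI only renders the attributes section if the capability is reported.
        if (wildfly.supports(Capability.UPDATE_USER_ATTRIBUTES)) {
            System.out.println("show attributes editor");
        } else {
            System.out.println("hide attributes editor"); // printed for this provider
        }
    }
}
```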

Please take a look at the concrete service provider documentation to check all the supported capabilities for each one, the default ones can be found here.

If the security environment is not supported by any of the default providers, you can build your own. Please keep updated on further articles about how to create a custom security management provider.

 
Installation and setup

Before considering the installation and setup steps, please note that the following Drools and jBPM distributions come with built-in, pre-installed security management providers by default:
If your realm settings are different from the defaults, please read each provider's documentation in order to apply the concrete settings.

On the other hand, if you're building your own security management provider or need to include it on an existing application, consider the following installation options:
  • Enable the security management feature on an existing WAR distribution
     
  • Setup and installation in an existing or new project (from sources)
NOTE: If no security management provider is installed in the application, there will be no available user interface for managing the security realm. Once a security management provider is installed and setup, the user and group management user interfaces are automatically enabled and accessible from the main menu.

Enable the security management feature on an existing WAR distribution
Given an existing WAR distribution of either the Drools or the jBPM workbench, follow these steps in order to install and enable the user management feature:

  1. Ensure the following libraries are present on WEB-INF/lib:
    • WEB-INF/lib/uberfire-security-management-api-6.4.0.Final.jar
    • WEB-INF/lib/uberfire-security-management-backend-6.4.0.Final.jar
        
  2. Add the concrete library for the security management provider to use in WEB-INF/lib:
    • Example: WEB-INF/lib/uberfire-security-management-wildfly-6.4.0.Final.jar
    • If the concrete provider you're using requires more libraries, add those as well. Please read each provider's documentation for more information.
        
  3. Replace the whole content for file WEB-INF/classes/security-management.properties, or if not present, create it. The settings present on this file depend on the concrete implementation you're using. Please read each provider's documentation for more information.
      
  4. If you're deploying on Wildfly or EAP, please check if the WEB-INF/jboss-deployment-structure.xml requires any update. Please read each provider's documentation for more information.

Setup and installation in an existing or new project (from sources)

If you're building an Uberfire based web application and you want to include the user and group management feature, please read this instructions.

Disabling the security management feature

The security management feature can be disabled, so that no services or user interface are available, by any of the following:

  • Uninstalling the security management provider from the application

    When no concrete security management provider is installed in the application, the user and group management feature will be disabled and no services or user interface will be presented to the user.
       
  • Removing or commenting the security management configuration file

    Removing or commenting all the lines in the configuration file located at WEB-INF/classes/security-management.properties will disable the user and group management feature and no services or user interface will be presented to the user.


Usage

The user and group management feature is presented using two different perspectives that are available from the main Home menu (considering that the feature is enabled) as:
User and group management menu entries
Read the following sections for using both user and group management perspectives.

User management

The user management interface is available from the User management menu entry in the Home menu.

The interface is presented using two main panels:  the users explorer on the west panel and the user editor on the center one:

User management perspective

The users explorer, on west panel, lists by default all the users present on the application's security realm:

Users explorer panel
In addition to listing all users, the users explorer allows:

  • Searching users


    When specifying the search pattern in the search box, the users list will be reduced and will display only the users that match the search pattern.

    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Creating new users:



    By clicking on the Create new user button, a new screen will be presented on the center panel to perform a new user creation.
The user editor, on the center panel, is used to create, view, update or delete users. When creating a new user or clicking an existing user in the users explorer, the user editor screen is opened.

To view an existing user, click on an existing user in the Users Explorer to open the User Editor screen. For example, viewing the admin user when using the Wildfly security management provider results in this screen:

Viewing the admin user
The same admin user view operation, but using the Keycloak security management provider instead of the Wildfly one, results in this screen:

Using the Keycloak sec. management provider
As you can see, the user editor when using the Keycloak security management provider includes the user attributes management section, which is not present when using the Wildfly one. So remember that the information and actions available on the user interface depend on each provider's capabilities (as explained in previous sections).

Viewing a user in the user editor provides the following information (if the provider supports it):
  • The user name
  • The user's attributes
  • The assigned groups
  • The assigned roles
In order to update or delete an existing user, click on the Edit button present near to the username in the user editor screen:

Editing admin user
Once the user editor is presented in edit mode, different operations can be performed (if the security management provider in use supports them):
  • Update the user's attributes



    Existing user attributes can be updated, such as the user name, the surname, etc. New attributes can be created as well, if the security management provider supports it.
  • Update assigned groups

    A group selection popup is presented when clicking on Add to groups button:



    This popup screen allows the user to search and select or deselect the groups assigned for the user currently being edited.
  • Update assigned roles

    A role selection popup is presented when clicking on Add to roles button:



    This popup screen allows the user to search and select or deselect the roles assigned for the user currently being edited.
  • Change user's password

    A change password popup screen is presented when clicking on the Change password button:

  • Delete user

    The user currently being edited can be deleted from the realm by clicking on the Delete button.
Group management

The group management interface is available from the Group management menu entry in the Home menu.

The interface is presented using two main panels:  the groups explorer on the west panel and the group editor on the center one:

Group management perspective
The groups explorer, on west panel, lists by default all the groups present on the application's security realm:

Groups explorer
In addition to listing all groups, the groups explorer allows:

  • Searching for groups

    When specifying the search pattern in the search box, the groups list will be reduced and will display only the groups that match the search pattern.
    Groups explorer filtered using search
    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Create new groups



    By clicking on the Create new group button, a new screen is presented on the center panel to perform the creation of a new group. Once the new group has been created, users can be assigned to it:
    Assign users to the recently created group
The group editor, on the center panel, is used to create, view or delete groups. When creating a new group or clicking an existing group in the groups explorer, the group editor screen is opened.

To view an existing group, click on an existing group in the Groups Explorer to open the Group Editor screen. For example, viewing the sales group results in this screen:


Viewing the sales group
To delete an existing group just click on the Delete button.


by Roger Martinez (noreply@blogger.com) at April 06, 2016 06:17 PM

April 05, 2016

Thomas Allweyer: BPM & ERP in the digital enterprise

The 9th Praxisforum BPM & ERP examines the many facets of IT and process management in the age of digitalization. No less a figure than Professor August-Wilhelm Scheer has been secured as keynote speaker; his topic: "Digitalization devours the world". The question of what role process management plays in the digitalized enterprise can be discussed, among other formats, at several topic tables. Several short talks in Pecha Kucha format also promise concise food for discussion. And Cornelius Clauser, head of the SAP Productivity Consulting Group, argues in his closing talk "From Paper to Impact" for a new orientation of BPM. Before that, participants can expect a whole series of practitioner talks, including from Boehringer Ingelheim, EnBW, Infraserv and Zalando. In addition, the results of the international study BPM Compass will be presented; participation in the survey is still possible until May 8.
The one-day event takes place on June 21 in Höhr-Grenzhausen near Koblenz. On the day before, participants can also attend an intensive workshop on process management, and on the following day a hands-on workshop on "Agile and hybrid methods in a classical environment". More information is available at www.bpmerp.de.

by Thomas Allweyer at April 05, 2016 06:25 PM

April 04, 2016

Drools & JBPM: Mastering #Drools 6 book is out!

Hi everyone, just a quick post to share the good news! The book is out and ready to ship! You can buy it from Packt or from Amazon directly. I'm happy to announce also that we are going to be presenting the book next week in Denmark, with the local JBug: http://www.meetup.com/jbug-dk/events/229407454/ if you are around or know someone that might be interested in attending please let them know!

Mastering Drools 6
The book covers a wide range of topics from the basic ones including how to set up your environment and how to write simple rules, to more advanced topics such as Complex Event Processing and the core of the Rule Engine, the PHREAK algorithm.

by salaboy (noreply@blogger.com) at April 04, 2016 09:02 AM

March 24, 2016

Thomas Allweyer: Decision tables in the cloud

Anyone who wants to execute business logic in the form of decision tables according to the "Decision Model and Notation" (DMN) standard and integrate it into an application can use a new cloud service from Camunda. The decision table can be created via a web interface, or created with an offline editor, uploaded and deployed with a single click. Execution of the decision logic can be triggered via a REST API, which makes simple integration into arbitrary applications possible. Code examples for several common programming languages are available. However, the service is still only a beta test, and it is not yet known how long it will remain available free of charge.

by Thomas Allweyer at March 24, 2016 11:14 AM

March 23, 2016

Drools & JBPM: Packt is doing it again: 50% off on all eBooks and Videos

Packt Publishing has another great promotion going: 50% off on all Packt eBooks and Videos until April 30th.

It is a great opportunity to grab all those Drools books as well as any others you might be interested in.

Click on the image below to be redirected to their online store:




by Edson Tirelli (noreply@blogger.com) at March 23, 2016 10:20 PM

March 21, 2016

Drools & JBPM: High Availability Drools Stateless Service in Openshift Origin

Hi everyone! In this blog post I wanted to cover a simple example showing how easy it is to scale our Drools Stateless services by using Openshift 3 (Docker and Kubernetes). I will be showing how we can scale our service by provisioning new instances on demand and how these instances are load balanced by Kubernetes using a round robin strategy.

Our Drools Stateless Service

First of all we need a stateless Kie Session to play around with. In this simple example I've created a food recommendation service to demonstrate what kind of scenarios you can build up using this approach. All the source code can be found inside the Drools Workshop repository hosted on github: https://github.com/Salaboy/drools-workshop/tree/master/drools-openshift-example
In this project you will find 4 modules:
  • drools-food-model: our business model including the domain classes, such as Ingredient, Sandwich, Salad, etc
  • drools-food-kjar: our business knowledge, here we have our set of rules to describe how the food recommendations will be done.
  • drools-food-services: using WildFly Swarm I'm exposing a domain-specific service encapsulating the rule engine. A set of REST services is exposed here so our clients can interact with it.
  • drools-controller: by using the Kubernetes Java API we can programmatically provision new instances of our Food Recommendation Service on demand in the Openshift environment.
Our unit of work will be the Drools-Food-Services project, which exposes the REST endpoints to interact with our stateless sessions.
Also notice that there is another Service that gives us very basic information about where our Service is running: https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/api/NodeStatsService.java
We will call this service to know exactly which instance of the service is answering our clients later on.
The rules for this example are simple and don't do much. If you are looking to learn Drools, I recommend you create more meaningful rules and share them with me so we can improve the example ;) You can take a look at the rules here:
As you might expect: Sandwiches for boys and Salads for girls :)
One last important thing to notice about our service is how the rules are picked up by the Service Endpoint. I'm using the Drools CDI extension to @Inject a KieContainer, which is resolved using the KIE-CI module, explained in some of my previous posts.
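For readers who haven't used KIE-CI before, here is a minimal sketch of the equivalent programmatic resolution (not the CDI wiring used in the workshop code); the GAV coordinates and class name are illustrative assumptions, not taken from the original project:

import org.kie.api.KieServices;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

public class FoodRecommendationBootstrap {

    // Resolve the food kjar through KIE-CI and create a stateless session from it.
    public StatelessKieSession newFoodSession() {
        KieServices ks = KieServices.Factory.get();
        // GAV coordinates are illustrative; use the real ones from the drools-food-kjar module
        ReleaseId releaseId = ks.newReleaseId("org.drools.workshop", "drools-food-kjar", "1.0-SNAPSHOT");
        KieContainer kieContainer = ks.newKieContainer(releaseId);
        // Assumes the kjar's kmodule.xml declares a default stateless ksession
        return kieContainer.newStatelessKieSession();
    }
}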
We will bundle this project into a Docker Image that can be started as many times as we want/need. If you have a Docker client installed in your local environment you can start this food recommendation service by looking at the salaboy/drools-food-services image which is hosted in hub.docker.com/salaboy
By starting the Docker image without even knowing what is running inside we immediately notice the following advantages:
  • We don't need to install Java or any other tool besides Docker
  • We don't need to configure anything to run our Rest Service
  • We don't even need to build anything locally due to the fact that the image is hosted in hub.docker.com
  • We can run on top of any operating system
At the same time we notice the following disadvantages:
  • We need to know in which IP and Port our service is exposed by Docker
  • If we run more than one image we need to keep track of all the IPs and Ports and notify all our clients about them
  • There is no built-in way of load balancing between different instances of the same Docker image
To solve these disadvantages, Openshift, and more specifically Kubernetes, comes to our rescue!

Provisioning our Service inside Openshift

As I mentioned before, if we just start creating new Docker image instances of our service, we soon find out that our clients will need to know how many instances we have running and how to contact each of them. This is obviously no good, and for that reason we need an intermediate layer to deal with this problem. Kubernetes provides us with this layer of abstraction and provisioning, which allows us to create multiple instances of our Pods (an abstraction on top of the Docker image) and to configure Replication Controllers and Services for them.
The Replication Controller concept provides a way to define how many instances of our service should be running at a given time. Replication controllers are in charge of guaranteeing that if we need at least 3 instances running, those instances are running all the time. If one of these instances dies, the replication controller will automatically spawn a new one for us.
Services in Kubernetes solve the problem of having to know the details of each and every Docker instance. Services allow us to provide a facade for our clients to use when interacting with the instances of our Pods. The Service layer also allows us to define a strategy (called session affinity) for how to load balance our Pod instances behind the service. There are two built-in strategies: ClientIP and Round Robin.
So we need two things now: an installation of Openshift Origin (v3) and our Drools Controller project, which will interact with the Kubernetes REST endpoints to provision our Pods, Replication Controllers and Services.
For the Openshift installation, I recommend you to follow the steps described here: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc
I'm running here in my laptop the Vagrant option (second option) described in the previous link.
Finally, there is an ultra-simple example of how to use the Kubernetes API to provision, in this case, our drools-food-services into Openshift.
Notice that we are defining everything at runtime, which is really cool, because we can start from scratch or modify existing Services, Replication Controllers and Pods.
You can take a look at the drools-controller project, which shows how we can create a Replication Controller that points to our Docker image and defines 1 replica (one replica is created by default).
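To give a rough idea of what such provisioning code can look like, here is a minimal sketch assuming the Fabric8 kubernetes-client is used to talk to the Kubernetes API; the names, labels, ports and namespace are illustrative and the actual drools-controller code may differ:

import io.fabric8.kubernetes.api.model.ReplicationController;
import io.fabric8.kubernetes.api.model.ReplicationControllerBuilder;
import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.api.model.ServiceBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class DroolsFoodProvisioner {

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {

            // Replication Controller: keep one replica of the salaboy/drools-food-services image alive
            ReplicationController rc = new ReplicationControllerBuilder()
                .withNewMetadata().withName("drools-food-services").endMetadata()
                .withNewSpec()
                    .withReplicas(1)
                    .addToSelector("app", "drools-food-services")
                    .withNewTemplate()
                        .withNewMetadata().addToLabels("app", "drools-food-services").endMetadata()
                        .withNewSpec()
                            .addNewContainer()
                                .withName("drools-food-services")
                                .withImage("salaboy/drools-food-services")
                                .addNewPort().withContainerPort(8080).endPort()
                            .endContainer()
                        .endSpec()
                    .endTemplate()
                .endSpec()
                .build();
            client.replicationControllers().inNamespace("default").create(rc);

            // Service: a stable facade in front of the Pods, load balanced by Kubernetes
            Service svc = new ServiceBuilder()
                .withNewMetadata().withName("drools-food-services").endMetadata()
                .withNewSpec()
                    .addToSelector("app", "drools-food-services")
                    .addNewPort().withPort(9999).withNewTargetPort(8080).endPort()
                .endSpec()
                .build();
            client.services().inNamespace("default").create(svc);
        }
    }
}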
If you log in into the Openshift Console you will be able to see the newly created service with the Replication Controller and just one replica of our Pod. By using the UI (or the APIs, changing the Main class) we can provision more replicas, as many as we need. The Kubernetes Service will make sure to load balance between the different pod instances.
Voila! Our Services Replicas are up and running!
Now if you access the NodeStat service by doing a GET to the mapped Kubernetes Service port, you will see which Pod is answering that request. If you execute the request multiple times you should be able to see that the Round Robin strategy is kicking in.
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-k9gym","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-pzqlu","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
In the same way you can interact with the Stateless Sessions in each of these 3 Pods. In this case, you don't really need to know which Pod is answering your request, you just need to get the job done by any of them.

Summing up

By leveraging the Openshift Origin infrastructure we manage to simplify our architecture by not reinventing mechanisms that already exist in tools such as Kubernetes & Docker. In following posts I will be writing about some other nice advantages of using this infrastructure, such as roll ups to upgrade the version of our services, and adding security and API Management to the mix.
If you have questions about this approach please share your thoughts.

by salaboy (noreply@blogger.com) at March 21, 2016 06:21 PM

Thomas Allweyer: A standard for the EPC

Event-driven process chains (EPCs) are still widely used for modeling business processes, especially for representing the business view. And although this notation has existed for almost a quarter of a century, there is, unlike for the much younger BPMN, still no binding standard for it. The consequences are differing interpretations, and thus inconsistent usage and a lack of options for exchanging EPCs between different tools. This is now set to change. Under the leadership of Professors Oliver Thomas of the University of Osnabrück and Jörg Becker of the University of Münster, a working group for EPC standardization has now been founded. The work on the standard is supported by a wiki collaboration platform, which can be reached at www.epc-standard.org. Anyone interested in contributing can register there as a participant.

by Thomas Allweyer at March 21, 2016 12:30 PM

March 19, 2016

Drools & JBPM: Keycloak SSO Integration into jBPM and Drools Workbench

Introduction


Single Sign On (SSO) and related token exchange mechanisms are becoming the most common scenario for authentication and authorization in different environments on the web, especially when moving into the cloud.

This article talks about the integration of Keycloak with jBPM or Drools applications in order to use all the features provided by Keycloak. Keycloak is an integrated SSO and IDM for browser applications and RESTful web services. Learn more about it on the Keycloak home page.

The result of the integration with Keycloak has lots of advantages such as:
  • Provide an integrated SSO and IDM environment for different clients, including jBPM and Drools workbenches
  • Social logins - use your Facebook, Google, Linkedin, etc accounts
  • User session management
  • And much more...
       
Next sections cover the following integration points with Keycloak:

  • Workbench authentication through a Keycloak server
    It basically consists of securing both the web client and remote service clients through the Keycloak SSO. So both the web interface and remote service consumers (whether a user or a service) will authenticate through KC.
       
  • Execution server authentication through a Keycloak server
    Consists of securing the remote services provided by the execution server (as it does not provide a web interface). Any remote service consumer (whether a user or a service) will authenticate through KC.
      
  • Consuming remote services
    This section describes how third party clients can consume the remote service endpoints provided by both the Workbench and the Execution Server.
       
Scenario

Consider the following diagram as the environment for this article's example:

Example scenario

Keycloak is a standalone process that provides remote authentication, authorization and administration services that can be potentially consumed by one or more jBPM applications over the network.

Consider these main steps for building this environment:
  • Install and setup a Keycloak server
      
  • Create and setup a Realm for this example - Configure realm's clients, users and roles
      
  • Install and setup the SSO client adapter & jBPM application

Notes: 

  • The resulting environment and the different configurations for this article are based on the jBPM (KIE) Workbench, but the same ones can also be applied to the KIE Drools Workbench.
  • This example uses the latest 6.4.0.CR2 community release version

Step 1 - Install and setup a Keycloak server


Keycloak provides an extensive documentation and several articles about the installation on different environments. This section describes the minimal setup for being able to build the integrated environment for the example. Please refer to the Keycloak documentation if you need more information.

Here are the steps for a minimal Keycloak installation and setup:
  1. Download latest version of Keycloak from the Downloads section. This example is based on Keycloak 1.9.0.Final.
      
  2. Unzip the downloaded distribution of Keycloak into a folder; let's refer to it as
    $KC_HOME

      
  3. Run the KC server - This example is based on running both Keycloak and jBPM on same host. In order to avoid port conflicts you can use a port offset for the Keycloak's server as:

        $KC_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
      
  4. Create a Keycloak's administration user - Execute the following command to create an admin user for this example:

        $KC_HOME/bin/add-user.sh -r master -u 'admin' -p 'admin'
The Keycloak administration console will be available at http://localhost:8180/auth/admin (use the admin/admin for login credentials)

Step 2 - Create and setup the demo Realm


Security realms are used to restrict access to the different application resources.

Once the Keycloak server is running, the next step is to create a realm. This realm will provide the different users, roles, sessions, etc. for the jBPM application(s).

Keycloak provides several examples for the realm creation and management, from the official examples to different articles with more examples.

You can create the realm manually or just import the given json files.

Creating the realm step by step

Follow these steps in order to create the demo realm used later in this article:
  1. Go to the Keycloak administration console and click on Add realm button. Give it the name demo.
      
  2. Go to the Clients section (from the main admin console menu) and create a new client for the demo realm:
    • Client ID: kie
    • Client protocol: openid-connect
    • Access type: confidential
    • Root URL: http://localhost:8080
    • Base URL: /kie-wb-6.4.0.Final
    • Redirect URIs: /kie-wb-6.4.0.Final/*
The resulting kie client settings screen:

Settings for the kie client

Note: As you can see in the above settings, the value kie-wb-6.4.0.Final is assumed for the application's context path. If your jBPM application will be deployed on a different context path, host or port, just use your concrete settings here.

The last step for being able to use the demo realm from the jBPM workbench is to create the application's users and roles:
  • Go to the Roles section and create the roles admin, kiemgmt and rest-all
      
  • Go to the Users section and create the admin user. Set the password to the value "password" in the Credentials tab and unset the Temporary switch.
      
  • In the Users section navigate to the Role Mappings tab and assign the admin, kiemgmt and rest-all roles to the admin user
Role mappings for admin user


Importing the demo realm

Import both:

  • Demo Realm - Click on Add Realm and use the demo-realm.json file
      
  • Realm users - Once demo realm imported, click on Import in the main menu and use the demo-users-0.json file as import source
At this point a Keycloak server is running on the host, setup with a minimal configuration set. Let's move to the jBPM workbench setup.

Step 3 - Install and setup jBPM workbench


For this tutorial let's use a Wildfly as the application server for the jBPM workbench, as the jBPM installer does by default.

Let's assume, after running the jBPM installer, the $JBPM_HOME as the root path for the Wildfly server where the application has been deployed.

Step 3.1 - Install the KC adapter

In order to use Keycloak's authentication and authorization modules from the jBPM application, the Keycloak adapter for Wildfly must be installed on our server at $JBPM_HOME. Keycloak provides multiple adapters for different containers out of the box; if you are using another container or need to use another adapter, please take a look at the adapters configuration in the Keycloak docs. Here are the steps to install and setup the adapter for Wildfly 8.2.x:

  1. Download the adapter from here
      
  2. Execute the following commands:

     
    cd $JBPM_HOME/
    unzip keycloak-wf8-adapter-dist.zip // Install the KC client adapter

    cd $JBPM_HOME/bin
    ./standalone.sh -c standalone-full.xml // Setup the KC client adapter.

    // ** Once server is up, open a new command line terminal and run:
    cd $JBPM_HOME/bin
    ./jboss-cli.sh -c --file=adapter-install.cli
Step 3.2 - Configure the KC adapter

Once the KC adapter is installed into Wildfly, the next step is to configure it in order to specify different settings, such as the location of the authentication server, the realm to use and so on.

Keycloak provides two ways of configuring the adapter:
  • Per WAR configuration
  • Via Keycloak subsystem 
In this example let's use the second option, the Keycloak subsystem, so our WAR is free from this kind of settings. If you want to use the per-WAR approach, please take a look here.

Edit the configuration file $JBPM_HOME/standalone/configuration/standalone-full.xml and locate the subsystem configuration section. Add the following content:

<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
<secure-deployment name="kie-wb-6.4.0-Final.war">
<realm>demo</realm>
<realm-public-key>MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Q3RNbrVBcY7xbpkB2ELjbYvyx2Z5NOM/9gfkOkBLqk0mWYoOIgyBj4ixmG/eu/NL2+sja6nzC4VP4G3BzpefelGduUGxRMbPzdXfm6eSIKsUx3sSFl1P1L5mIk34vHHwWYR+OUZddtAB+5VpMZlwpr3hOlfxJgkMg5/8036uebbn4h+JPpvtn8ilVAzrCWqyaIUbaEH7cPe3ecou0ATIF02svz8o+HIVQESLr2zPwbKCebAXmY2p2t5MUv3rFE5jjFkBaY25u4LiS2/AiScpilJD+BNIr/ZIwpk6ksivBIwyfZbTtUN6UjPRXe6SS/c1LaQYyUrYDlDpdnNt6RboQIDAQAB</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">925f9190-a7c1-4cfd-8a3c-004f9c73dae6</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>
</subsystem>

If you have imported the example json files from this article in step 2, you can just use the same configuration as above with your concrete deployment name. Otherwise please use your values for these configurations:
  • Name for the secure deployment - Use your concrete application's WAR file name
      
  • Realm - Is the realm that the applications will use, in our example, the demo realm created on step 2.
      
  • Realm Public Key - Provide here the public key for the demo realm. It's not mandatory, if it's not specified, it will be retrieved from the server. Otherwise, you can find it in the Keycloak admin console -> Realm settings ( for demo realm ) -> Keys
      
  • Authentication server URL - The URL for the Keycloak's authentication server
      
  • Resource - The name for the client created on step 2. In our example, use the value kie.
      
  • Enable basic auth - For this example let's enable the Basic authentication mechanism as well, so clients can use both Token (Bearer) and Basic approaches to perform the requests.
      
  • Credential - Use the password value for the kie client. You can find it in the Keycloak admin console -> Clients -> kie -> Credentials tab -> Copy the value for the secret.

For this example you have to take care to use your concrete values for the secure-deployment name, realm-public-key and credential password. You can find detailed information about the KC adapter configurations here.

Step 3.3 - Run the environment

At this point a Keycloak server is up and running on the host, and the KC adapter is installed and configured for the jBPM application server. You can run the application using:

    $JBPM_HOME/bin/standalone.sh -c standalone-full.xml

You can navigate into the application once the server is up at:


jBPM & SSO - Login page 
Use your Keycloak's admin user credentials to login: admin/password

Securing workbench remote services via Keycloak

Both the jBPM and Drools workbenches provide different remote service endpoints that can be consumed by third party clients using the remote API.

In order to authenticate those services through Keycloak, the BasicAuthSecurityFilter must be disabled. Apply the following modifications to the WEB-INF/web.xml file (the app deployment descriptor) in jBPM's WAR file:

1.- Remove the filter :

 <filter>
  <filter-name>HTTP Basic Auth Filter</filter-name>
<filter-class>org.uberfire.ext.security.server.BasicAuthSecurityFilter</filter-class>
<init-param>
<param-name>realmName</param-name>
<param-value>KIE Workbench Realm</param-value>
</init-param>
</filter>

<filter-mapping>
<filter-name>HTTP Basic Auth Filter</filter-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</filter-mapping>

2.- Constrain the remote services URL patterns as follows:

<security-constraint>
<web-resource-collection>
<web-resource-name>remote-services</web-resource-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>rest-all</role-name>
</auth-constraint>
</security-constraint>


Important note: The user that consumes the remote services must be a member of the rest-all role. As described in step 2, the admin user in this example is already a member of the rest-all role.





Execution server


The KIE Execution Server provides a REST API that can be consumed by any third party client. This section is about how to integrate the KIE Execution Server with the Keycloak SSO in order to delegate the third party clients' identity management to the SSO server.
Consider the above environment up and running, i.e. having:
  • A Keycloak server running and listening on http://localhost:8180/auth
      
  • A realm named demo with a client named kie for the jBPM Workbench
      
  • A jBPM Workbench running at http://localhost:8080/kie-wb-6.4.0-Final
Follow these steps in order to add an execution server into this environment:


  • Create the client for the execution server on Keycloak
  • Install and setup the Execution Server (with the KC client adapter)
Step 1 - Create the client for the execution server on Keycloak

For each execution server that is going to be deployed, you have to create a new client on the demo realm in Keycloak.
  1. Go to the KC admin console -> Clients -> New client
  2. Name: kie-execution-server
  3. Root URL: http://localhost:8280/  
  4. Client protocol: openid-connect
  5. Access type: confidential ( or public if you want so, but not recommended )
  6. Valid redirect URIs: /kie-server-6.4.0.Final/*
  7. Base URL: /kie-server-6.4.0.Final
In this example the admin user already created in previous steps is the one used for the client requests. So ensure that the admin user is a member of the kie-server role in order to use the execution server's remote services. If the role does not exist, create it.

Note: This example considers that the execution server will be configured to run using a port offset of 200, so the HTTP port will be available at localhost:8280

Step 2 - Install and setup the KC client adapter and the Execution server

At this point, a client named kie-execution-server is ready on the KC server to use from the execution server. Let's install, setup and deploy the execution server:
  
1.- Install another Wildfly server to use for the execution server, and the KC client adapter as well. You can follow the above instructions for the Workbench or the official adapters documentation.
  
2.- Edit the standalone-full.xml file from the Wildfly server's configuration path and configure the KC subsystem adapter as:

<secure-deployment name="kie-server-6.4.0.Final.war">
<realm>demo</realm>
<realm-public-key>
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB
</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie-execution-server</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">e92ec68d-6177-4239-be05-28ef2f3460ff</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>

Consider your concrete environment settings if different from this example:
  • Secure deployment name -> use the name of the execution server war file being deployed
      
  • Public key -> Use the demo realm public key or leave it blank; if not specified, the server will retrieve it.
       
  • Resource -> This time, instead of the kie client used in the WB configuration, use the kie-execution-server client
      
  • Enable basic auth -> Up to you. You can enable Basic auth for third party service consumers
       
  • Credential -> Use the secret key for the kie-execution-server client. You can find it in the Credentials tab of the KC admin console.
       
Step 3 - Deploy and run an Execution Server

Just deploy the execution server in Wildfly using any of the available mechanisms.
Run the execution server using this command:
$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=<ID> -Dorg.kie.server.user=<USER> -Dorg.kie.server.pwd=<PWD> -Dorg.kie.server.location=<LOCATION_URL>  -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTOLLER_PASSWORD>  
Example:

$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=kieserver1 -Dorg.kie.server.user=admin -Dorg.kie.server.pwd=password -Dorg.kie.server.location=http://localhost:8280/kie-server-6.4.0.Final/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb-6.4.0.Final/rest/controller -Dorg.kie.server.controller.user=admin -Dorg.kie.server.controller.pwd=password  
Important note:  The users that will consume the execution server remote service endpoints must have the role kie-server assigned. So create and assign this role in the KC admin console for the users that will consume the execution server remote services.
  
Once up, you can check the server status as follows (this request assumes Basic authentication; see the Consuming remote services section below for more information):
 
curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/

Consuming remote services

In order to use the different remote services provided by the Workbench or by an Execution Server, your client must be authenticated on the KC server and have a valid token to perform the requests.

NOTE: Remember that in order to use the remote services, the authenticated user must have assigned:

  • The role rest-all for using the WB remote services
  • The role kie-server for using the Execution Server remote services

Please ensure necessary roles are created and assigned to the users that will consume the remote services on the Keycloak admin console.

You have two options to consume the different remote service endpoints:

  • Using basic authentication, if the application's client supports it
  • Using Bearer ( token) based authentication

Using basic authentication

If the KC client adapter configuration has Basic authentication enabled, as proposed in this guide for both the WB (step 3.2) and the Execution Server, you can avoid the token grant/refresh calls and just call the services as in the following examples.

Example for a WB remote repositories endpoint:

curl http://admin:password@localhost:8080/kie-wb-6.4.0.Final/rest/repositories

Example to check the status for the Execution Server :

curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/

Using token based authentication

The first step is to create a new client on Keycloak that allows third party remote service clients to obtain a token. It can be done as follows:
  • Go to the KC admin console and create a new client using this configuration:
    • Client id: kie-remote
    • Client protocol: openid-connect
    • Access type: public
    • Valid redirect URIs: http://localhost/
         
  • As we are going to manually obtain a token and invoke the service let's increase the lifespan of tokens slightly. In production access tokens should have a relatively low timeout, ideally less than 5 minutes:
    • Go to the KC admin console
    • Click on your Realm Settings
    • Click on Tokens tab
    • Change the value for Access Token Lifespan to 15 minutes ( That should give us plenty of time to obtain a token and invoke the service before it expires ) 

Once a public client for our remote clients has been created, you can now obtain the token by performing an HTTP request to the KC server's tokens endpoint. Here is an example for command line:

RESULT=`curl --data "grant_type=password&client_id=kie-remote&username=admin&password=password" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token`

TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`

At this point, if you echo the $TOKEN it will output the token string obtained from the KC server, which can now be used to authorize further calls to the remote endpoints. For example, if you want to check the internal jBPM repositories:

curl -H "Authorization: bearer $TOKEN" http://localhost:8080/kie-wb-6.4.0.Final/rest/repositories
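For a Java client, the same two steps (password grant against the token endpoint, then a Bearer-authenticated call) could look roughly like the following sketch using only the JDK's HttpURLConnection; it reuses the admin/password credentials and the kie-wb-6.4.0.Final context path from the examples above, and the naive string handling that extracts access_token mirrors the sed pipeline rather than proper JSON parsing:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class KieRemoteClient {

    public static void main(String[] args) throws Exception {
        // 1. Obtain a token from the demo realm using the public kie-remote client
        String form = "grant_type=password&client_id=kie-remote&username=admin&password=password";
        HttpURLConnection tokenCall = (HttpURLConnection) new URL(
                "http://localhost:8180/auth/realms/demo/protocol/openid-connect/token").openConnection();
        tokenCall.setRequestMethod("POST");
        tokenCall.setDoOutput(true);
        tokenCall.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = tokenCall.getOutputStream()) {
            out.write(form.getBytes(StandardCharsets.UTF_8));
        }
        String json = read(tokenCall);
        // Crude extraction of the access_token value, equivalent to the sed pipeline above
        String token = json.replaceAll(".*\"access_token\":\"", "").replaceAll("\".*", "");

        // 2. Call a workbench remote endpoint with the Bearer token
        HttpURLConnection repos = (HttpURLConnection) new URL(
                "http://localhost:8080/kie-wb-6.4.0.Final/rest/repositories").openConnection();
        repos.setRequestProperty("Authorization", "Bearer " + token);
        System.out.println(read(repos));
    }

    private static String read(HttpURLConnection conn) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
        }
        return sb.toString();
    }
}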


by Roger Martinez (noreply@blogger.com) at March 19, 2016 09:19 PM

March 17, 2016

Keith Swenson: Develop for Self-Managed Organizations

Here is a message from my friend, Robert Gilman, about participating with us on an open source platform for supporting a sociocratic organization.  It is the most interesting thing I have been involved in for years.

Message from the Context Institute

Do you have or do you know someone who would be willing to use programming skills on an open-source project that could really make a difference for a better world? The project is part of the movement toward effective self management. In a real sense, it is working on a new category of software.

Starting in November of 2014, John Buck (Sociocracy author and trainer), Ian Gilman (software developer) and Robert Gilman (sustainability pioneer and futurist) started on a quest for good integrated software that supports self-management in organizations with distributed decision-making (like Sociocracy and many of the organizations profiled in Reinventing Organizations). We searched for existing software and looked for existing components that might be patched together, but no luck, so we started to build software, now called Weaver, that could serve this need.

We got a big boost in April 2015 when Keith Swenson joined our team of volunteers and brought his existing group-management platform called Cognoscenti. We’ve all been working since to adapt and extend Cognoscenti to the needs of sociocratic and similar groups.

We’ve made considerable progress, enough so that we could really use some additional part-time volunteer programming help. If you, or someone you know, would like to work on this open-source project under Keith’s leadership, using modern technologies like AngularJS and Bootstrap – please contact Robert Gilman.

My Addendum

Some of you are probably already familiar with Cognoscenti as I have been using it for most of my recent demonstrations of collaborative software.  It has advanced tremendously in the last 9 months.  We have added a lot to support meetings, agendas, discussion topics, decisions (not branch nodes!),  proposals, discussion rounds, and making it all work to make group decision making easier and more inclusive.  Also, the entire user interface has been rewritten in Angular.js and Bootstrap.   It is not about automation, it is about facilitating good decisions.  It is all freely available.  If you are interested in groups of people working together, you probably owe it to yourself to take another look.  If you want to help, contact Robert Gilman.  Questions about the software, just make a comment right here.


by kswenson at March 17, 2016 10:14 AM

March 14, 2016

Thomas Allweyer: Review of the CPOs@BPM&O 2016

The two-day conference, organized by BPM&O, a consultancy specializing in process management, attracted over 80 participants to Cologne. The program offered a varied mix of practitioner reports, expert talks, tool presentations and hands-on workshops. "Quo vadis, process management?" asked the BPM&O managing directors Thilo Knuppertz and Uwe Feddern at the opening, and observed that process management has in recent years fully arrived in the business departments. While their contacts in companies used to sit mostly in IT, today they mostly talk to the business. Knuppertz explained the role processes play in successful and rapid strategy execution. In particular, the much-invoked digital transformation can only succeed if the magic triangle of customers, products and processes is properly aligned. Many companies have achieved improvements in individual processes in recent years with methods such as Six Sigma or Lean Management, but are now finding that enterprise-wide process management is required to secure these gains permanently.

Experience reports from different industries provided practical examples of introducing process management. The Frankfurt public transport company traffiQ uses the EDEN maturity model as the basis for assessing and developing its process management initiatives; along the way, the organizational structure and leadership structures were also changed. As an employee survey showed, however, not all changes were received positively, so the criticized deficits are now being addressed specifically. Process management is still quite new at Stadtwerke Karlsruhe, where the agenda includes, among other things, integrating the various management systems that exist in the company, such as quality, environmental and energy management. Thorsten Speil reported that the benefit of process management is repeatedly called into question. Therefore a workshop was held in which all division heads assessed the benefit potentials most important from their point of view. Constant communication is of central importance; time and again one finds that employees know nothing at all about the process management initiative.

Matthias Adelhard of the measurement instrument manufacturer Diehl Metering runs the introduction of process management as a change project and accordingly applies change management methods, benefiting from his training as a systemic organizational developer. Interestingly, while most process management practitioners agree with the statement that successful change management is one of the most important success factors for process management, hardly any of them has a qualification in organizational development. This was also confirmed by a quick poll of the audience.

During the first day, several tool vendors presented their process portals, which can be used to publish information about processes on the intranet. Most showed examples from concrete customer installations, giving an impression of the wide range of navigation and collaboration options. All vendors have made considerable progress in recent years in terms of usability, role-based configuration options and support for mobile devices. In an audience vote, none of the established platform vendors prevailed, but rather the still fairly new product "Ask Delphi" from the company MTAC. It does not bring process models or formal descriptions to the intranet, but rather role-specific instructions, videos, e-learning sequences and the like, which support employees in carrying out their work in a quite intuitive way.

The second day was dedicated to innovations in process management. Prof. Mevius of the Konstanz Institute for Process Control took up the topic of digitalization again and emphasized that the added value of new technologies arises through processes. Process management and BPM software have reached a high level of maturity today; nevertheless, goals such as customer satisfaction are often not achieved to the desired extent. His credo: people must be put much more at the center. The goal is a user experience like the one people are used to from the consumer world. For example, BPMN models are an excellent instrument for experts, but not for business users. As an example of supporting better process discovery, he showed an intuitive, multimedia-based app for capturing processes. Ultimately, however, the "process experience" arises above all during process execution. He presented several examples for this as well, e.g. the integration of in-ear devices to support employees while working on processes.

Lars Büsing of Learnical argued for integrating innovation into day-to-day business. Faced with threats to many established business models, companies look for success formulas to copy, but that does not work in a complex and chaotic environment. What is needed is innovation, which is not primarily about individual inventions but about continuous learning. Findings from brain research indicate that innovation cannot be forced through willful effort. Methods like "Lego Serious Play" are successful precisely because new ideas arise above all through play and through exchange between people.

The participants could then experience this themselves in various workshops. In addition to Lego Serious Play, which makes process-related questions literally graspable, a workshop on collaborative process modeling with t.BPM was offered, in which the modeling symbols, available as small tiles, are first placed on a table and can thus be rearranged very easily. A third round centered on the game "Slotter", which allows the interplay within processes to be tried out and optimized. Uwe Feddern moderated a dialogue workshop in which clear rules help ensure that everyone has a say and that a better understanding of the concerns and opinions of the other group members is reached than would be the case in an ordinary discussion.

The conference was rounded off by trend researcher Walter Matthias Kunze, who argued that digital change leads to social change and that companies therefore have to get serious about implementing new leadership values. This includes transferring responsibility to self-organized teams and dismantling controls. Companies such as the Brazilian firm Semco, the advertising agency Ministry, but also XING are examples of how this can work. With the increasing spread of technology, a countertrend is also emerging, namely a growing need for ethical and spiritual values. Companies must take this into account, respect the values and ideals of their customers and employees, and act and communicate credibly.

The visitors experienced a top-class conference which, in addition to the talks, was characterized by a high degree of interaction and intensive discussions.

by Thomas Allweyer at March 14, 2016 09:14 AM

March 09, 2016

Thomas Allweyer: Free ticket raffle for Insight 2016

Update 12.3.16: The raffle is closed, the winners have been notified. Many thanks to all participants!
MID has kindly provided five free tickets for Insight 2016, which takes place on April 12 in Nuremberg. Anyone who wants to win one should send me an email by this coming Friday, March 11, stating the title of Ranga Yogeshwar's talk. As always, legal recourse is excluded.

by Thomas Allweyer at March 09, 2016 10:52 AM

March 04, 2016

Keith Swenson: Key Process Activities for 2016

Six key process activities coming in 2016: Adaptive CM Workshop, ACM Awards, BPM Next, BPM and Case Management Global Summit, BPM 2016 Conference and (updated) CBI Conference.

1. Adaptive CM Workshop – Sept 5 or 6, 2016

This marks the fifth year that we have been able to hold this full day International Workshop on Adaptive Case Management and other non-workflow approaches to BPM.   Past workshops have been the premier place to publish rigorously peer reviewed scientific papers on the groundbreaking new technologies.  See the submission instructions.   Submission abstracts are due 10 April 2016, and notification to authors in June 2016.  Co-located with the IEEE EDOC 2016 September 5-9, 2016 in Vienna, Austria.

2. ACM Awards – Apply Now

The WfMC will be running another ACM awards program to recognize excellent use of case management style approaches to supporting knowledge workers.  The awards are designed to help show how flexible task-tracking software is increasingly used by knowledge workers with unpredictable work patterns.  Winners are recognized on the hall of fame site (see sample page highlighting a winner) and in a ceremony at the BPM and Case Management Summit in June. Each winning use case is published so that others can know about the good work you have been doing, and can follow your lead.  This series of books is the premier source of best practices in the case management space.  Submit proposals and abstracts now for help and guidance in preparing a high quality entry; final submission is due April 2016.

3. BPM Next – April 19-21, 2016

The meeting of the gurus in the BPM space.  BPM Next is where the leaders of the industry come together to discuss evolving new approaches, and to help understand the leading trends.  The engineering-oriented talks are required to have a demo of actual running code to avoid imaginative, but unrealistic, fantasies.  This year the presentations will all start with an "Ignite" presentation which has exactly 20 slides and lasts exactly 5 minutes, to rein in the gurus' natural tendency for lengthy and wordy presentations.  The program is already set; however, attendee registration is still open.  This year it will be held again in the quaint old town of Santa Barbara.

4. BPM and Case Management Global Summit – June 2016

The premier independent industry show for the full range of process technologies.  Many of last year’s attendees described this as the best, most informative, conference on BPM and ACM that they had ever seen.  It’s the third year to be held at the Ritz-Carlton in Washington DC.  The last two years have seen this as the premier place for serious discussions of both Case Management and BPM.

5. BPM 2016 Conference

This year the BPM2016 Conference will be held September 18-22, 2016 in exotic Rio de Janeiro, Brazil.  The conference includes the Main Track, Doctoral Consortium, Workshops, Demos, Tutorials and Panels, Industry track, and other Co-located Events.  (I can't go this year, but I sure wish I was!)

6. CBI 2016 Conference (updated)

The IEEE Conference on Business Informatics will be held Aug 28 – Sept 01 in Paris.  There you can submit invited case reports of around 10 pages showing experience with the technology.  The deadline is floating (until mid July)


by kswenson at March 04, 2016 08:02 PM

March 03, 2016

Thomas Allweyer: Participation in the new study BPM Compass now possible

The colleagues Komus and Gadatsch from the universities of applied sciences in Koblenz and Bonn-Rhein-Sieg are well known for their studies on process and IT management. The newly launched survey "BPM Compass" on the success factors of process management has a larger scope than before. First, Professor Jan Mendling of the Vienna University of Economics and Business and the Gesellschaft für Prozessmanagement have been won as partners; second, the design and execution of the study are supported by an advisory board of renowned experts from practice. And finally, the survey is also available in English for international participants. All practitioners who deal with business processes in their organizations can participate until May 8. Link to the questionnaire.

by Thomas Allweyer at March 03, 2016 04:11 PM

March 02, 2016

Sandy Kemsley: DSTAdvance16 Keynote with @KevinMitnick

Hacker and security consultant Kevin Mitnick gave today’s opening keynote at DST’s ADVANCE 2016 conference. Mitnick became famous for hacking into a lot of places that he shouldn’t...

[Content summary only, click through for full article and links]

by sandy at March 02, 2016 05:01 PM

March 01, 2016

Sandy Kemsley: DSTAdvance16 Day 1 Keynote with @PeterGSheahan

I’m back at DST‘s annual AWD ADVANCE user conference, where I’ll be speaking this afternoon on microservices and component architectures. First, however, I’m sitting in on the...

[Content summary only, click through for full article and links]

by sandy at March 01, 2016 05:00 PM

Thomas Allweyer: Leading case management platforms integrate predictive analytics

In a recently published study, Forrester evaluated 14 platforms for dynamic case management, i.e. for supporting weakly structured, knowledge-intensive processes. The systems rated as leaders were praised in particular for the integration of powerful analytics functions and the provision of prebuilt applications for various use cases.

At Pega, for example, historical maintenance data and real-time data are used to generate repair recommendations. With IBM, too, automatic predictions from predictive analytics functions can be integrated into case handling, for instance for fraud detection. Both vendors use powerful, and not exactly cheap, big data analytics components for this.

Appian, the third vendor rated as a "Leader", scores among other things with an app market that currently offers customers 32 application-specific solutions. In many cases such an app should cover at least a large part of the requirements, which significantly reduces development time.

There are considerable differences between the various systems and the underlying approaches. Some vendors offer strong enterprise content management functionality, whereas other systems treat content merely as another data type. The analysts see room for improvement in rules management, among other areas: different kinds of rules, e.g. for user navigation or for routing cases, are still scattered across multiple places. The development tools for user interfaces could also be improved. There is great savings potential here, since about 50% of the spending on external developers goes to user interaction.

The Forrester Wave: Dynamic Case Management, Q1 2016.
The 14 Providers That Matter Most And How They Stack Up.
Download at Appian (registration required).

by Thomas Allweyer at March 01, 2016 09:42 AM

February 25, 2016

Sandy Kemsley: BPM and IoT in Home and Hospice Healthcare with @PNMSoft

I listened in on a webinar by Vasileios Kospanos of PNMSoft today about business process management (BPM) and the internet of things (IoT). They started with some basic definitions and origins of IoT...

[Content summary only, click through for full article and links]

by sandy at February 25, 2016 04:42 PM

Thomas Allweyer: A classical approach to business process optimization

The structure of the present book follows the "RAIL" procedure model for process improvement projects developed by the author. This procedure model comprises the phases project preparation, as-is analysis, target concept, implementation and integration, as well as ongoing optimization. The cross-cutting tasks "project management and control" and "change management" are also part of it.

In this respect the term business process management is defined much more narrowly than is usually the case today. All aspects of strategic process management are left out. The RAIL model also explicitly contains "no feedback loops that ensure a continuous review of the business processes for efficiency. Establishing this review process is the task of general management" (p. 67). So a "classical" approach to process management is pursued, one that is often referred to as "business process optimization". Accordingly, the classics of process orientation are cited in many places. The business process reengineering concept of Hammer and Champy, for example, is explained in detail in its own subchapter, unfortunately without a critical assessment from today's perspective.

The book begins with a historical overview and a clarification of the most important terms. Then various procedure models for process management are presented and assessed. Based on this analysis, Gronau develops his RAIL procedure model. The present volume deals mainly with the as-is analysis and the target concept as well as the cross-cutting topic of project management. The other elements of the procedure model are reserved for a second volume.

The chapter on the as-is analysis deals in particular with various methods for capturing the current state, such as interviews, questionnaires and observations, as well as criteria and tools for weak-point analysis. Process modeling, as an important instrument, also gets its own chapter. Various modeling methods are presented, including well-known notations such as UML, EPC and BPMN, but also the "Knowledge Modeling and Description Language" (KMDL) developed at the author's institute for examining knowledge-intensive processes. For the target concept, various heuristics are discussed in addition to the business process reengineering approach already mentioned. Finally, there is a chapter with an overview of central terms and methods of project management.

It is a bit of a pity that several currently discussed questions are not addressed, such as the role of business process management as an enabler of digital business models, or the use of agile methods in process management.


Gronau, Norbert:
Geschäftsprozessmanagement in Wirtschaft und Verwaltung: Analyse, Modellierung und Konzeption.
GITO 2016.
The book at amazon.

by Thomas Allweyer at February 25, 2016 08:10 AM

February 19, 2016

Thomas Allweyer: Digitalization: basic structures are often missing

According to this study, the German economy is still very hesitant when it comes to digitalization. "Clear strategies are the exception," the authors note, since only one in three of the surveyed companies with more than 25 million euros in revenue assigns strategic importance to digital transformation. Accordingly, investments in this topic of the future are low, and in many cases the necessary basic structures are missing.

In many places, business processes do not show the degree of digitalization that would be required to implement new digital business models. In more than 50% of companies, at least two out of five cases are still handled on paper, and usually less than half of all documents are available in digital form. There are clear differences between industries: IT and telecommunications companies are comparatively well positioned, whereas the logistics and utilities sectors in particular still have some catching up to do.


d.velop AG:
Branchenatlas Digitale Transformation
The free study can be requested here.

by Thomas Allweyer at February 19, 2016 08:01 AM

February 17, 2016

Sandy Kemsley: When Lack Of System Integration Incurs Costs – And Embarrassment

BPM systems are often used as integrating mechanisms for disparate systems, passing along information from one to another to ensure that they stay in sync. They aren’t the only type of systems used...

[Content summary only, click through for full article and links]

by sandy at February 17, 2016 05:32 PM

February 13, 2016

Keith Swenson: To link or not to link

Google assumes that if you make a link to a site, then that is a worthwhile link to remember. Your link is a “vote” in increasing the popularity of that page in the search results. When a page contains false or misleading information, you don’t want to make that page easier to find. That is where DoNotLink comes in.

Essentials

The PageRank algorithm works on the idea that the more links point to a page, the more valuable that page is. But when you link to a misleading page, you only increase the reach of that misleading page. You would like to link without raising the rank of that page in the search results.
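As a rough illustration (a minimal sketch with a hypothetical link graph, not taken from the original post), the core idea looks like this in Python: rank is repeatedly redistributed along outgoing links, so every inbound link acts as a vote for its target.

```python
# Minimal PageRank sketch over a hypothetical link graph: each page's
# score is repeatedly redistributed along its outgoing links, so every
# inbound link acts as a "vote" for the target page (simplified; real
# PageRank also handles dangling pages properly).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:          # dangling page: contributes nothing here
                continue
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# The made-up "misleading" page gains rank just because "blog" links to it.
graph = {"blog": ["misleading"], "misleading": [], "other": ["blog"]}
print(pagerank(graph))
```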

I recently discovered DoNotLink (http://donotlink.com/) which allows you to create an obfuscated link that will redirect the browser there for readers, but keeps search engines from following the link to the destination.

It is easy to use. Paste the URL into the input box and hit continue. It remembers that address and gives you a much shorter address to use in your web page or blog post. Clicking on the link will redirect the browser to that site … after a small delay.


What about nofollow?

If you are editing the HTML, you can add rel="nofollow" to the "a" tag of a link. When search engines find this attribute in the tag, they will avoid following the link and avoid raising the rank of the destination page. I don't know whether every search engine honors it, but I would bet that Google does, and does any other search engine matter?

You can only use nofollow if you have control of the HTML. Blogging sites, social media sites, Twitter, and any automatically converted links will not have this attribute set. The DoNotLink site will work no matter how the URL is given and no matter how the hyperlink tag is created.
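If you do control the HTML and want to apply this in bulk, a small script can do it. The following is a minimal sketch (assuming the BeautifulSoup library is installed; the HTML fragment is made up for illustration) that marks every link in a fragment as rel="nofollow":

```python
# Sketch: mark every link in an HTML fragment as rel="nofollow",
# assuming the BeautifulSoup library (bs4) is available.
from bs4 import BeautifulSoup

html = '<p>See <a href="http://example.com/misleading">this page</a>.</p>'
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a"):
    a["rel"] = "nofollow"   # tells crawlers not to pass rank to the target

print(soup)
# <p>See <a href="http://example.com/misleading" rel="nofollow">this page</a>.</p>
```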

Conclusion

Make a note about DoNotLink; save it someplace convenient.  Use it whenever making a link to a site that you would rather nobody ever find again.  No more guilt.

 


by kswenson at February 13, 2016 07:59 PM

Drools & JBPM: Free Webinar: Decisions-as-a-Service with Drools/Red Hat BRMS

Red Hat will be hosting a free webinar on Tuesday, Feb 23rd, on Decisions-as-a-Service with Drools/Red Hat BRMS.

This is the perfect opportunity to watch how easy it is to author and publish decision services with Drools/Red Hat BRMS.

For more details and to register, click here.

Happy Drooling.

by Edson Tirelli (noreply@blogger.com) at February 13, 2016 02:45 PM

February 12, 2016

Thomas Allweyer: No Business Analysis Without Process Management

It is surprising that the word "business process management" does not appear in the title of this book, because the central importance of business processes for any business analysis runs like a common thread through the entire work.

The work is structured around a cycle model. Starting from the business strategy, concrete business cases are defined. This is followed by the elicitation of the business processes, on the basis of which business requirements are derived and the business architecture is optimized. The cycle is closed by the evaluation of success.

Drawing on his extensive practical experience as a process consultant and on empirical studies, the author shows which problems and deficits frequently occur in practice and how they can be addressed. In particular, he stresses how important the consistent use of methods is for success.

In addition to well-known concepts and procedures, Minonne also presents some methodological extensions and developments of his own, for example an approach to process elicitation and a consistent scheme for classifying key figures and their dependencies, based on the level at which they are used (organization, process, activity) and their relevance to success. At the end of each chapter there is a series of review questions with detailed solution hints.

In addition, the application of the described methods is illustrated throughout by a continuous case study of a construction company. In some places these concrete example applications are rather brief; a somewhat more comprehensive presentation would make them easier to follow and understand in depth. The results of a study conducted by the author on the status quo of business process management maturity, on the other hand, are described in considerable detail.


Minonne, Clemente:
Business-Analyse
Konzepte, Methoden und Instrumente der Business-Architektur
Schäffer-Poeschel 2016.
The book at amazon.

by Thomas Allweyer at February 12, 2016 11:05 AM

Drools & JBPM: Drools @ JUG Cork, Ireland (2nd March 2016)

Hi all, I'm writing this post to share that I will be giving a presentation at the Java User Group in Cork, Ireland.
For more information about the meetup go here: http://www.meetup.com/corkjug/events/226397805/
The session is titled "Go back home knowing how to use Drools 6 in your own projects" and it will focus on the basics to get you started with Drools and the common pitfalls that new developers might run into. We will also quickly cover some common architectural patterns for using Drools and how it can be integrated into your existing applications.

If you are planning to attend, please share this blog post with your friends and post a message here or on the meetup page with the topics that you would like to see covered during the presentation. As usual, I'm pretty open to covering what the audience considers most important.

I will be back with more information closer to the meetup date.

by salaboy (noreply@blogger.com) at February 12, 2016 10:31 AM

February 09, 2016

Thomas Allweyer: 3rd IT-Radar: Process Managers Ignore Innovations

Security is at the top of the agenda of the process management and IT experts who took part in the latest survey for the "IT-Radar für BPM und ERP". Remarkably, the topic of security had not appeared at all among the twelve most important topics in the two previous surveys from 2012 and 2013. The various security incidents that have since become public have evidently prompted a rethink. The topic of governance also entered the top twelve as a newcomer, in fourth place. Master data management, SOA, automated testing and EAM, on the other hand, dropped out of the top dozen.

There are, however, also aspects that were rated as important in all three surveys, such as compliance, the integration of processes and IT, BPMN and process automation. The focus therefore still lies mainly on the classical topics, whereas no current trend such as big data, Industrie 4.0, cloud computing or digitalization made it into the top ten. The authors of the study see the danger that process and IT experts are increasingly drifting away from the priorities of corporate management: while the executive level is engaging with innovative approaches such as digitalization and big data, IT and process owners concentrate almost exclusively on securely mastering the current processes and systems.

At least the study participants predict that these innovation topics will gain in importance in the future. But even in the longer term they rate them as less important than security, compliance and process automation. Users, software vendors and academics largely agree on this.

Information about the study is available at it-radar.info, where the recently completed report on the results can also be requested.

by Thomas Allweyer at February 09, 2016 10:48 AM

February 08, 2016

Sandy Kemsley: Bruce Silver Now Stylish With DMN As Well As BPMN

I thought that Bruce Silver’s blog had been quiet for a while: turns out that he moved to a new, more representative domain name, and my feed reader wasn’t updating from there. He’s rebranding his...

[Content summary only, click through for full article and links]

by sandy at February 08, 2016 04:35 PM