Planet BPM

June 17, 2016

Drools & JBPM: UberFire Forms Builder for jBPM

The new UberFire form builder, which will be part of the jBPM 7.0 distribution, is making great progress. Underneath it is a Bootstrap grid system, but it addresses a problem with other Bootstrap layout builders, which require the user to explicitly add the grid layout first: instead, it dynamically alters the underlying grid as the user drags and places the components. The same builder code will be used for the latest DashBuilder dashboards too. There are more CSS improvements to come, but you can watch a video below (don't forget to turn on HD and watch it full screen) demonstrating nested form capabilities. Eventually you should be able to build and deploy these types of applications live on OpenShift. Good work Pere and Eder.


by Mark Proctor (noreply@blogger.com) at June 17, 2016 06:04 PM

Thomas Allweyer: Process Analysis from a Scientific Perspective

The English-language book "Process Analytics" gives an overview of various aspects and methods of process analysis from a scientific perspective. The focus is on techniques that evaluate process-related data from IT systems. In the past, many methods were developed that assumed the processes under study were executed entirely by a workflow or BPM system. In practice, however, a large share of processes is not controlled by such a process engine. Process-related data may therefore be scattered across many different systems and exist in inconsistent forms. Moreover, many processes are only weakly structured; their concrete flow emerges ad hoc during execution. Besides flow-related data, such as the start and end times of the activities performed, numerous other data can be of interest, e.g. the business objects being processed. Process execution often produces very large volumes of data, which is why many of the techniques described in the book build on approaches from the field of "big data".

The book is divided into six chapters. The first chapter gives an overview of the topic of process analytics and its most important questions. Chapter 2 introduces the foundations of IT-supported business processes. The subject of the third chapter is algorithms for "process matching", i.e. for comparing process models and finding similar process models. This can be of interest for large collections of process models, e.g. when you want to reuse processes, identify process variants, or check compliance with rules. Chapter 4 discusses query techniques and languages for process models and executed process instances. Much as SQL can be used to formulate database queries, process query languages can be used to find process models with certain properties or to retrieve information about what is happening in the processes.

Chapter 5 deals with the organization of process data and methods for analyzing them. This includes building "process spaces": cross-system aggregations of all data and information related to a process, together with different views onto them. The process data can be made available for further processing in the form of data services. Various analysis techniques are presented, including process mining, and cross-cutting concerns spanning processes, such as security and reliability, are discussed.

The sixth chapter provides a summarizing overview of the analysis functions of various BPM systems, as well as an outlook on further research directions. The systems considered include both commercial and open source software. A case study illustrates the combined use of different tool functionalities and analysis methods. It is striking that the methods presented in the preceding chapters are hardly to be found in the tools presented. The book describes where these techniques could be applied in the case study, but not with which tools this should be done. In places it points to basic technologies from the "big data" field, on top of which the methods specific to process analysis would of course still have to be programmed.

It is also not entirely clear how the discussed tools were selected. The list includes the free tool "ARIS Express", which is severely limited in functionality, but not the commercial ARIS suite from the same vendor, which offers far more extensive analysis methods. It is also surprising that a platform like "smartfacts" from MID is missing, with which the kind of cross-system process model collections described in the preceding chapters can be realized.

In the other chapters, too, one or another current development relevant to the topic is not taken into account. For example, in connection with the integration of business rules and process models, the book discusses the SBVR standard, which is not widely used in practice, but not DMN (Decision Model and Notation), which is already in practical use in many places.

Overall, the book nevertheless provides a good overview that should be of particular interest to researchers and tool vendors.


Beheshti, S.; Benatallah, B. et al.:
Process Analytics
Concepts and Techniques for Querying and Analyzing Process Data
Springer 2016
The book at amazon.

by Thomas Allweyer at June 17, 2016 03:47 PM

Thomas Allweyer: How to Model Parallel Checks in BPMN

One of the modeling patterns I describe in the new edition of the BPMN book is "Parallel Checks". When different persons need to check applications, requests, etc. according to different criteria, these checks can be carried out in parallel. There are two different ways to model this. The simple solution only requires basic BPMN elements, while the more sophisticated solution requires a sub-process and a terminate end event. We start with the simple solution.

Since each check can have a positive or negative result, there can be many different combinations of positive and negative results. If all these possible combinations are considered, the models quickly become large and confusing. However, in most cases it is not important exactly which of the checks have a positive or a negative outcome. Instead, only two cases need to be considered: Either all checks have a positive result, or at least one check has a negative result.

Therefore, in the first diagram the checking activities are not directly followed by exclusive splits. Instead, the parallel paths are joined before there is an exclusive split that distinguishes whether all checks have produced a positive result, or not.

[Figure: Parallel Checks 1]

In this model, all parallel checks are always carried out in full, even if one of the checks has already produced a negative result and the other checks are no longer required.

This can be avoided by using a terminate end event, as in the following diagram. If both checks are successful, both parallel tokens reach the end event of the sub-process, and the parent process continues. If one of the checks produces a negative result, its token flows to the terminate end event. This immediately terminates the entire sub-process, regardless of where the other token is. It may either still be in front of the checking activity, or it may already have reached the normal end event.

[Figure: Parallel Checks 2]

In the parent process, one token is emitted from the sub-process, regardless of whether the application has been accepted or rejected. Therefore, the sub-process is followed by an exclusive gateway that routes the sequence flow according to the sub-process's result.
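The terminate behaviour itself is the engine's job, but the pattern's semantics can be illustrated outside BPMN. Below is a minimal plain-Java sketch (class and names invented for illustration) that runs checks in parallel and cancels the remaining ones as soon as one fails, which is exactly what the terminate end event does to the sibling token:

[code]
import java.util.List;
import java.util.concurrent.*;

public class ParallelChecks {

    // Runs all checks in parallel and cancels the remaining ones as soon
    // as one check fails: the plain-Java analogue of a terminate end event.
    static boolean runChecks(List<Callable<Boolean>> checks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(checks.size());
        CompletionService<Boolean> completed = new ExecutorCompletionService<>(pool);
        checks.forEach(completed::submit);
        try {
            for (int i = 0; i < checks.size(); i++) {
                Future<Boolean> next = completed.take();  // next finished check
                boolean passed;
                try {
                    passed = next.get();
                } catch (ExecutionException e) {
                    passed = false;                        // a failing check counts as negative
                }
                if (!passed) {
                    pool.shutdownNow();                    // "terminate": interrupt the other checks
                    return false;
                }
            }
            return true;                                   // all checks positive
        } finally {
            pool.shutdown();
        }
    }
}
[/code]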

More about BPMN and modeling patterns can be found in the second edition of "BPMN 2.0 – Introduction to the Standard for Business Process Modeling".

by Thomas Allweyer at June 17, 2016 11:36 AM

June 13, 2016

Thomas Allweyer: AuraPortal Shows What Is Possible in BPM Without Coding

In the past, many BPMS vendors went out on a limb with zero-coding promises. Except for very small demonstration processes, however, they frequently could not keep them. It is therefore now widely considered unrealistic to build serious process applications without having to write program code at least here and there. Accordingly, some BPMS vendors now market their products no longer as "zero code" but as "low code" platforms (cf. the Forrester report).

In stark contrast, the company AuraPortal confidently positions its BPM suite as the only true "no code" platform, with which even complex processes can be automated entirely without coding. Such a claim initially arouses skepticism. What is shown in AuraPortal's introductory presentations resembles what is possible in other BPMSs as well: a small process is modeled graphically and a simple form with a few fields is created. The whole thing is then executed, whereby the process participants are given their respective tasks in task lists, from which they can start working on them.

Extensive functionality out of the box

When the requirements become somewhat more complex, most BPMS demonstrations sooner or later reach the point where a small script has to be programmed or hand-written code has to be integrated somewhere. This is the case, for example, when the selection of the next performer follows special rules, when forms change dynamically based on input values, when complex data structures are used, or when individual analysis reports are required.

I had the opportunity to see AuraPortal demonstrated. I was impressed, on the one hand, by the large range of functionality provided out of the box, and on the other by how quickly and easily even somewhat more difficult requirements can be implemented. Among other things, AuraPortal includes a fully integrated document management system, modules for web content management and for building online shops, and a business intelligence component, to name just a few. No third-party components are used; everything is developed entirely in house and very seamlessly integrated. I was particularly interested in whether and how the more complex problems described above can really be solved without programming. And indeed, for each of my questions I was shown, comprehensibly, how it can be implemented by means of modeling and configuration.

How is this done? For one thing, a great deal of prefabricated functionality is available, which already covers a very large share of typical requirements. For example, there is already a large number of possible strategies for assigning tasks to performers. For another, extensive configuration options are available. The form editor, for instance, offers numerous settings for every single form element. This makes the editor area with its settings quite extensive, but in most cases the default settings already cover simple cases quite well. Developing very sophisticated, dynamic forms, on the other hand, requires a good knowledge of the various options. Extensive calculations or input validations naturally require entering the corresponding mathematical formulas or regular expressions; program code, however, is not required.

Even complex rules without program code

Even where the standard functions on offer are not sufficient, business rules, for example, can be set up and integrated. This, too, is possible without programming. For instance, the assignment of a task to performers need not happen only via predefined mechanisms, such as roles or manual selection in a preceding step. Instead, rules can be attached that are evaluated during process execution to determine the next performer. The business rules themselves are entered in tabular form, and arbitrarily complex formulas can be used where necessary.

The "no code" claim therefore does not appear to be exaggerated. One can certainly think up functionality that is not included in AuraPortal's standard scope and would therefore require programming. For applications that automate and support business processes, however, the coverage seems quite comprehensive. I was also allowed a look at the rather extensive model of the processes that AuraPortal itself uses internally for project management and control. According to the company, it works internally exclusively with this application, which was likewise developed entirely without programming; that is, it also implements all the functionality for which ERP or CRM systems are otherwise used.

Expanded activities in the German-speaking region

To avoid any misunderstanding: doing away with programming does not mean that it suddenly becomes child's play to build comprehensive process applications. Complex processes and requirements call for extensive, well-thought-out models, settings, formulas, and so on. This requires precise knowledge of the system and the underlying concepts, as well as a strong capacity for analytical thinking. On the other hand, you do not need to master a programming language, and in particular the effort that otherwise often goes into writing boilerplate code, repeated again and again in similar form, is eliminated. In general, there is hardly any need in AuraPortal to do the same thing twice, since practically everything you create once can be reused in other processes.

That the approach seems to work is shown by numerous successful implementations at companies from a wide variety of industries, including well-known names such as General Motors, Toyota, Carrefour, Danone, KPN and Santander. AuraPortal, which is based in Spain, operates worldwide, with a particularly large number of installations in South America. In the German-speaking region, the BPM vendor, which Gartner has called "one of the best kept secrets in the iBPMS market", is so far less well known. That, however, is set to change: together with partners, the company is currently expanding its activities in this market. Even though the competition is not exactly small, AuraPortal, with its considerable range of functionality, should attract a fair amount of interest here as well.

by Thomas Allweyer at June 13, 2016 09:27 AM

June 12, 2016

BPM-Guide.de: Scientific performance benchmark of open source BPMN engines

In May 2016, a group of authors from the universities of Stuttgart (Germany) and Lugano (Switzerland) conducted a thorough performance benchmark of three open source BPMN process engines, Camunda being one of them.

As the authors state in their introduction:

“This work proposes the first microbenchmark for WfMSs that can execute BPMN 2.0 workflows. To this end, we focus on studying the performance impact of well-known workflow patterns expressed in BPMN 2.0 with respect to three open source WfMSs. We executed all the experiments under a reliable environment and produced a set of meaningful metrics.”

Besides Camunda, two other well-known …

by Jakob Freund at June 12, 2016 03:49 PM

June 08, 2016

Drools & JBPM: Tutorial oriented user guides for Drools and jBPM

Community member Nicolas Heron is creating tutorial-oriented user guides for Drools and jBPM (Red Hat BRMS and BPMS). He's focusing on the backends first, but the guides will eventually cover all the web tooling too, as well as installation and setup.

All this work is available from bitbucket, using asciidoc and gitbook (free for public projects), so I encourage you all to get involved and help Nicolas out by reviewing and providing feedback.

Click the Table of Contents to get started:
https://www.gitbook.com/book/nheron/droolsonboarding/details

Or just read the pdf:
https://www.gitbook.com/download/pdf/book/nheron/droolsonboarding

He's just finished the Drools parts, and will be moving on to other areas next.

by Mark Proctor (noreply@blogger.com) at June 08, 2016 02:15 PM

June 07, 2016

Drools & JBPM: DecisionCamp And RuleML 2016, 6-9 July New York

This year RuleML 2016 is hosted by Stony Brook University, New York, USA. Decision Camp 2016 is co-located at the same event. I'll be presenting at DecisionCamp and helping to chair the industrial track at RuleML. Looking forward to seeing everyone there and spending a week immersed in discussions on reasoning systems :)

http://2016.ruleml.org
http://2016.ruleml.org/decisioncamp

RuleML Schedule

Decision Camp Schedule (pasted below)

July 6, 2016

OMG DMN 1.2 RTF Meeting at DecisionCAMP, 10:00 - 17:00
The Revision Task Force (RTF) for DMN 1.2 will be meeting at Stony Brook University, room NCS 220. The meeting is open only to members of the RTF, but others are welcome to meet members of the RTF at the DecisionCAMP on the 7th and 8th.

July 7, 2016

Start | End | Title | Authors
9:00 | 9:15 | Welcome and Kickoff | Jacob Feldman
9:15 | 10:00 | Modeling Decision-Making Processes: Melding Process Models and Decision Models | Alan Fish
10:00 | 10:15 | Coffee Break |
10:15 | 10:50 | Oracle Decision Modeling Service | Gary Hallmark, Alvin To
10:50 | 11:25 | Decision Management at the Speed of Events | Daniel Selman
11:25 | 12:00 | Factors Affecting Rule Performance | Charles Forgy
12:00 | 12:35 | DMN: how to satisfy multiple objectives? | Jan Vanthienen
12:35 | 14:00 | Lunch Break |
14:00 | 15:00 | Natural Language Access to Data: It Needs Reasoning (RuleML Keynote) | Richard Waldinger
15:00 | 15:35 | Welcome to Method for Parsing Regulations into DMN | Tom Debevoise, Will Thomas
15:35 | 16:10 | Using Machine Learning, Business Rules, and Optimization for Flash Sale Pricing | Igor Elbert, Jacob Feldman
16:10 | 16:25 | Coffee Break |
16:25 | 17:00 | Improving BRMS Efficiency and Performance and Using Conflict Resolution | James Owen, Charles Forgy
17:00 | 18:00 | QnA Panel "DMN from OMG, Vendor, and Practitioner Perspectives" | Moderated by Bruce Silver
19:00 | | Joint Dinner |
July 8, 2016 

Start | End | Title | Authors
9:00 | 10:00 | DMN as a Decision Modeling Language (RuleML Keynote) | Bruce Silver
10:00 | 10:15 | Coffee Break |
10:15 | 10:50 | Solving the "Last Mile" in model based development | Larry Goldberg
10:50 | 11:25 | What-If Analyzer for DMN-based Decision Models (Challenge Demo) | Jacob Feldman
11:25 | 12:00 | Advanced Decision Analytics via Deep Reasoning on Diverse Data: For Health Care and More | Benjamin Grosof, Janine Bloomfield
12:00 | 12:35 | The Decision Boundary Map: An Interactive Visual Interface to Make Informed Decisions and Selections in the Presence of Tradeoffs | Shenghui Cheng, Klaus Mueller
12:35 | 14:00 | Lunch Break |
15:15 | 15:50 | Learning Rule Base Programming with Classic Computer Games | Mark Proctor

by Mark Proctor (noreply@blogger.com) at June 07, 2016 11:29 PM

Sandy Kemsley: Pega 7 roadmap at Pegaworld 2016

I finished up Pegaworld 2016 at a panel of Pega technology executives who provided the vision and roadmap for CRM and Pega 7. Don Schuerman moderated the panel, which included Bill Baggott, Kerim...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 11:20 PM

Sandy Kemsley: American Express digital transformation at Pegaworld 2016

Howard Johnson and Keith Weber from American Express talked about their digital transformation to accommodate their expanding market of corporate card services for global accounts, middle market and...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 10:17 PM

Sandy Kemsley: Rethinking personal data: Pegaworld 2016 panel

I attended a breakout panel on how the idea and usage of personal data are changing. It was moderated by Alan Marcus of the World Economic Forum (nice socks!), and included Richard Archdeacon of HP, Rob...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 07:31 PM

Sandy Kemsley: Pegaworld 2016 day 2 keynote: digital transformation and the 4th industrial revolution

Day 2 of Pegaworld 2016 – another full day on the schedule. The keynote started with Gilles Leyrat, SVP of Customer and Partner Services at Cisco, discussing how they became a more digital...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 06:17 PM

June 06, 2016

Sandy Kemsley: OpenSpan at Pegaworld 2016: RPA meets BPM

Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I...

[Content summary only, click through for full article and links]

by sandy at June 06, 2016 07:03 PM

Sandy Kemsley: Pegaworld 2016 Day 1 Keynote: Pega direction, Philips and Allianz

It seems like I was just here in Vegas at the MGM Grand…oh, wait, I *was* just here. Well, I’m back for Pegaworld 2016, and 4,000 of us congregated in the Grand Garden Arena for the...

[Content summary only, click through for full article and links]

by sandy at June 06, 2016 06:03 PM

June 01, 2016

Drools & JBPM: Parallel Drools is coming - 12 core machine benchmark results

We are working on a number of different usage patterns for multi-core processing. Our first attempt is at fireAllRules batch processing (no rule chaining) of 1000 facts against an increasing number of rules: 12, 48, 192, and 768, with one join per rule. The break-even point is around 48 rules: below that, the running time is under 100ms and the thread coordination costs start to cancel out the advantage. But beyond 48 rules, things get better, much faster.

[Chart: benchmark results in ms/op; smaller is better]


The benchmark machine has 12 cores, which we map to 12 partitions; rules are split evenly across the partitions. This is all organised by the engine, not by end-user code. There are still a lot more improvements we can make, to achieve a more optimal rule-to-partition assignment and to avoid sending all data to all partitions.
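For context, the usage shape being measured (build a session, batch-insert 1000 facts, then a single fireAllRules call) can be sketched with the public KIE API. The rule below is an illustrative stand-in for the generated benchmark rules, not the actual benchmark code; KieHelper is a convenience class from kie-internal:

[code]
import org.kie.api.KieBase;
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.KieSession;
import org.kie.internal.utils.KieHelper;

public class BatchFireSketch {
    public static void main(String[] args) {
        // One rule with a single join; the benchmark scales this shape
        // up to 12, 48, 192 and 768 rules.
        String drl =
            "rule R0 when\n" +
            "    $a : Integer()\n" +
            "    Integer( this == $a )\n" +  // the single join
            "then\n" +
            "end\n";

        KieBase kieBase = new KieHelper().addContent(drl, ResourceType.DRL).build();
        KieSession session = kieBase.newKieSession();

        for (int i = 0; i < 1000; i++) {     // 1000 facts, as in the post
            session.insert(i);
        }
        long start = System.nanoTime();
        session.fireAllRules();              // batch evaluation, no rule chaining
        System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
        session.dispose();
    }
}
[/code]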

Next we'll be turning our attention to long-running fireUntilHalt stream use cases.

We don't have any code yet that others can run, as it's still a bit of a hack. But as we progress, we'll tidy things up and try to get it into a state where others can try it.

by Mark Proctor (noreply@blogger.com) at June 01, 2016 06:54 PM

Thomas Allweyer: Process Orientation Is Stagnating

Given the numerous offerings, publications and conferences on the topic of BPM, one would expect the process management maturity of many companies to be rising. According to the current study "The State of the BPM Market", which appears every two years, this is not the case. For ten years, the surveys have shown that the number of companies with a high maturity level has stayed the same. The authors assume that there is a small number of truly process-oriented companies. In a number of other companies, promising process management initiatives keep emerging, but after a while the commitment wanes considerably.

The declining interest is often connected with a change in leadership: if other topics are on a successor's agenda, BPM loses importance. In any case, only 24% of the study participants said they receive support from top management for their work with processes. On the positive side, the trend from the previous study towards integrated, enterprise-wide initiatives has continued. In contrast, interest in purely incremental improvement approaches for individual processes, such as Six Sigma, is declining. Overall there was little change compared to the previous survey, which appeared two years ago.


Paul Harmon, Celia Wolf:
The State of Business Process Management – 2014
Download at BPTrends

by Thomas Allweyer at June 01, 2016 07:52 AM

May 31, 2016

Sandy Kemsley: Camunda BPM 7.5: CMMN, BPMN element templates, and more

I attended an analyst briefing earlier today with Jakob Freund, CEO of Camunda, on the latest release of their product, Camunda BPM 7.5. This includes both the open source version available for free...

[Content summary only, click through for full article and links]

by sandy at May 31, 2016 05:17 PM

May 27, 2016

Drools & JBPM: Drools & jBPM are Hiring - Web Developer needed for Low-Code/No-Code framework

This position is now filled. Thank you.
------
The Drools & jBPM projects are looking to hire a web developer to help build and improve our low-code/no-code web framework and workbench. This framework aims to make it possible to model business applications, end to end, fully within a web-based environment, utilising data models, forms, workflow, rules, and case management.

The initial focus of the work will be on improving how the workbench uses and exposes Git and Maven. You'll be expected to figure out a Git workflow suitable for our target users and build a UI to simplify how they work with it. This will also include a pull-request-like system to control code reviews and code contributions. The main aim will be to simplify and hide as much complexity as possible. You will be working extensively with our User Experience group to achieve these goals.

Over time you will tackle other aspects of our low-code/no-code framework, and it is expected that a percentage of your time will go towards general sustaining work across the product, i.e. bug fixing and maintenance.

We are looking for someone passionate about software development, who can demonstrate they love what they do - such as contributing to open source projects in their own time.

The work will make extensive use of Java, GWT, Errai and UberFire. You do not need GWT, Errai or UberFire experience, but you should have a strong understanding of general web technologies and a willingness to learn. A working knowledge of Git and Maven will be necessary, and you will be asked to give ideas on how to achieve a workflow that is more suitable for less technical people. No prior experience of rules or workflow is necessary, but it helps.

The role is remote and can be based in any location in which Red Hat has an office. Salaries are based on country ranges and you should check salary suitability with the recruiter. You may apply through this generic job requisition page: https://careers-redhat.icims.com/jobs/52676/senior-software-engineer/job

by Mark Proctor (noreply@blogger.com) at May 27, 2016 06:29 PM

May 23, 2016

Thomas Allweyer: BPM Systems Are Becoming Low-Code Development Platforms

After the "zero code" promises of many a vendor turned out to be unrealistic, one increasingly encounters the term "low code". It characterizes platforms that aim to simplify software development considerably by means of suitable tools. Fourteen such platforms were recently evaluated by the market research firm Forrester. Among them is a whole series of BPM systems, such as Appian, AgilePoint, Bizagi, K2 and Nintex. With their graphical modeling environments for flow control, their form editors and their database connectors, these systems already bring a whole range of features that significantly reduce the amount of conventional programming required.

Forrester defines low-code platforms as systems for the rapid delivery of business applications with a minimum of hand-coding and low initial investment in setup, training and deployment. Many companies today depend on developing even large, complex and reliable solutions within days or weeks instead of months. Low-code platforms are intended to make this possible.

The market is currently quite broad and fragmented. Depending on a system's focus, Forrester distinguishes between "database application platforms", "request handling platforms", "mobile first application platforms" and "process application platforms"; the BPM systems already mentioned fall into the last category. There is a discernible tendency for vendors to extend the functionality of their systems towards general-purpose platforms with which quite different types of enterprise applications can be developed.

As the most important features, the Forrester analysts name:

  • Graphical configuration of virtual data models and drag & drop integration of data sources
  • Declarative tools for defining business logic and workflows using process models, decision tables and business rules
  • Building responsive user interfaces via drag & drop, with automatic generation of interfaces for different devices
  • Tools for managing development, testing and deployment

The study also places particular value on support for cloud deployment and mobile app stores. Vendors should also hold certifications for cloud security. Last but not least, vendors that offer a freemium model with a free version and tutorials are rated positively, as this makes it possible to get started without elaborate training and high initial investment.


The Forrester Wave™: Low-Code Development Platforms, Q2 2016
Download the study from the Appian site (registration required)

by Thomas Allweyer at May 23, 2016 11:53 AM

May 19, 2016

Sandy Kemsley: Analytics customer keynote at TIBCONOW 2016

Michael O’Connell hosted the last general session for TIBCO NOW 2016, focusing on analytics customer stories with the help of five customers: State Street, Shell, Vestas, Monsanto and Western...

[Content summary only, click through for full article and links]

by sandy at May 19, 2016 12:26 AM

May 18, 2016

Sandy Kemsley: ING Turkey’s journey to becoming a digital bank

I wanted to catch an ActiveMatrix BPM customer breakout session here at TIBCONOW 2016, so sat in on Rahsan Kalci from ING Turkey talking about their transformation to a digital bank using BPM,...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 10:58 PM

Sandy Kemsley: ActiveMatrix BPM update at TIBCONOW

Roger King, head of BPM product management, gave us an update on ActiveMatrix BPM and Nimbus. The most recent updates in AMX BPM have focused on data and case management. As we saw in the previous...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 09:57 PM

Sandy Kemsley: Case management at TIBCONOW 2016

Breakout sessions continue with Jeremy Smith and Nicolas Marzin of TIBCO presenting their case management functionality. Marzin went through the history of process and how we have moved from...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 08:46 PM

Sandy Kemsley: Intelligent Business Operations at TIBCONOW 2016

Nicolas Marzin of TIBCO gave a breakout session on making business operations intelligent, starting with the drivers of efficiency, agility, quality and transparency. There are a number of challenges...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 07:17 PM

Sandy Kemsley: Closing the loop with analytics: TIBCONOW 2016 day 2 keynote

Yesterday at TIBCO NOW 2016, we heard about the first half of TIBCO’s theme — interconnect everything — and today, Matt Quinn introduced the second half — augment intelligence...

[Content summary only, click through for full article and links]

by sandy at May 18, 2016 06:25 PM

May 17, 2016

Sandy Kemsley: TIBCO Nimbus for regulatory compliance at Bank of Montreal

It’s the first afternoon of breakout sessions at TIBCO NOW 2016, and Alex Kurm from Bank of Montreal is presenting how the bank has used Nimbus for process documentation, to serve the goals of...

[Content summary only, click through for full article and links]

by sandy at May 17, 2016 11:54 PM

Sandy Kemsley: Destination: Digital at the TIBCONOW 2016 day 1 keynote

TIBCO had a bit of a hiatus on their conference while they were being acquired, but are back in force this week in Las Vegas with TIBCO NOW 2016. The theme is “Destination: Digital” with...

[Content summary only, click through for full article and links]

by sandy at May 17, 2016 07:14 PM

May 16, 2016

Keith Swenson: AI as an Interface to Business Process

Dag Kittlaus demonstrated Viv last week; the business software world should pay attention. "Viv" is a conversational approach to interacting with systems. The initial presentation talks about personal applications, but there are even greater opportunities in the workplace.

What is it?

If you have not yet seen it, take a look at the TechCrunch video. It is billed as an artificial intelligence personal assistant. Dag Kittlaus brought Siri to Apple to provide basic spoken language recognition to the iPhone. Viv goes a lot further. It takes what you say and starts creating a map of what you want. As you say more, it modifies and refines the map. It taps information service providers, and these are combined in real time based on a semantic model of those services.

This is precisely what Nathaniel Palmer was presenting in his forward-looking presentation at the bpmNEXT conference, and coincidentally something I brought up as well. Businesses moved from big heavy equipment, to laptops, and then to smart phones. Mobile is so last year! The devices got more portable, and the graphical user interface got better over the years, but the paradigm remained the same: humans collect the information together and submit it to the system, to allow the system to process it. You write an email, edit it to final form, and then send it. You fill out an expense report, and then submit it.

A conversational UI is very different. You have a single agent that you contact by voice message, text message, email, and yes, probably also by web forms, which then in turn interfaces with the system software. It learns about you and the kinds of things you normally want, so that it can understand what you are talking about and translate for the relatively dumber systems.

I was not that impressed

All of the examples were simple, one-off requests. Ask for the weather, or ask a more complicated query that shows some nice parsing capability, but it is still just a single query with a single answer. Dynamic program generation? Software that writes itself? Give me a break: every screen generator, every application generator, generates a program that executes. This is a bit hyperbolic. The important thing is not that it creates a sequence of steps that satisfy the intent, but that it is able to understand the intent in the first place.

Order flowers? I could call the one person and order flowers. I can order an Uber car without needing an assistant. Booking a hotel is only a few mouse clicks. That is always the problem with demonstrations: they have to be simple enough to grasp, short enough to complete in a few minutes, but hopefully compelling enough to convey the potential.

The most interesting part is after he has the list of flowers: he simply says "what about tulips" and Viv refines the selection. This shows the power of the back-and-forth conversation. The conversation constitutes a kind of learning that works together with you to incrementally get to what you want to do. That is the news: Viv has an impressive understanding of you and what you mean with a few words, and it extends that understanding on a case-by-case basis.

What is the Potential?

One of the biggest problems with BPM is the idea that you have to know everything at the time the process starts. You have to put all your expenses into the expense report for processing. You need to fill in the complete order form before you can purchase something. As we illustrated in Mastering the Unpredictable, many businesses have to start working long before they know all the data. The emergency room has to accept patients long before it knows what care is needed.

The conversational approach to applications will radically transform the ability of software to help out.  Instead of being required to give the full details up front, you can tell the agent what you know now.  It can start working on part of that.  Later, you tell it a little more, maybe after reviewing what it had found so far.  If it is heading down the wrong path, you steer it back in the right direction.

I personally hate forms that ask for every potential bit of information that might be needed somewhere in the process. Like at the doctor's office, where you fill in the same details every time, most of which are not going to be needed on this visit, but there is a spot there just in case. A conversational approach would allow me to add information as it is needed.


With a group of people this starts to get really interesting. The doctor is unsure of the direction to go with a patient, so they bring an expert into the conversation. That expert could start asking questions about the patient. The agent answers when it can, but it can also pass those questions on to the doctor and the patient. The conversation is facilitated by the map that represents the case so far. The agent learns what needs to be done, and over time can facilitate this interaction by learning what the various participants normally mean by their spoken words.

It is not that far-fetched. It will radically change the way we think about our business applications. It certainly is disruptive. This demonstration of Viv makes it clear that this is already happening today. You might want to buckle your seat belts.



by kswenson at May 16, 2016 12:54 PM

May 13, 2016

Drools & JBPM: #Drools & #jBPM @ #JBCNConf 2016 (Barcelona, June)

Great news! Once again the amazing and flamboyant leaders of the Java User Group from Barcelona have managed to put together their annual conference, JBCNConf. And, of course, Drools & jBPM will be there. Take a look at their website for more information about the talks and speakers, and if you are close enough to Barcelona I hope to see you all there.
This year I will be doing a Drools workshop there (Thursday, the first day of the conference), hoping to introduce people to Drools in a very hands-on session. So if you are looking to start using Drools straight away, this is a great opportunity to do so. If you are a more advanced user and want to bring your examples or issues to the workshop, you are more than welcome. I will be sharing the projects that I will be using in the workshop a couple of weeks before the event, so you can take a look and bring more questions to the session. It is also probable that I will be bringing freshly printed copies of the new Mastering Drools book with me, so you might be able to get some copies for free :)
Maciej Swiderski will be covering jBPM and knowledge-driven microservices this year. I totally recommend this talk to anyone interested in how to improve your microservices by adopting tools to formalise and automate domain-specific knowledge.
Finally, this year Maciej and I will be giving the closing talk of the conference, titled "The Open Source Way", where we will share with the audience the main benefits of getting involved with the open source community and projects, but most importantly how to achieve that. If you are already an open source project contributor and you plan to attend the conference, get in touch!
Stay tuned for more news, and get in touch if you want to hang around with us before and after the conference!

by salaboy (noreply@blogger.com) at May 13, 2016 09:05 AM

May 12, 2016

Thomas Allweyer: Current Edition of the BPMN Book Published in English

The current edition of my BPMN book, which has been extended in particular by a collection of modeling patterns, has meanwhile been published in English. The second English edition corresponds in content to the third German edition. When ordering the book, you should pay attention to the correct ISBN (and if necessary search for it directly); on various international Amazon websites in particular, often only the old edition is displayed. Since the book is printed on demand, it can in any case be delivered within a few days, even if Amazon sometimes states otherwise.

Further information about the book (incl. direct links to the order pages)

by Thomas Allweyer at May 12, 2016 10:06 AM

May 11, 2016

Keith Swenson: DMN at bpmNEXT 2016

bpmNEXT is two and a half days of intense examination and evaluation of the leading trends in the business process community, and Decision Model and Notation (DMN) was clearly highlighted this year.

This is the year for DMN

The Decision Model and Notation standard was released in mid-2015. There are several implementations, but none of them is quite mature yet. If you are not familiar with DMN, here is what you need to know:

  • You can think of it simplistically as a tree of decision tables. There is so much more to it than that, but probably 80% of usage will be a tree of decision tables
  • It has a specific expression language that allows the writing of conditions and results
  • Actually it is a tree of block expressions. A block expression can be a decision table, a simple if/then/else statement, or a number of other types of expression.
  • The results of blocks lower in the tree can be used in blocks further up.

The idea is to represent complicated expressions in a compact, reusable way.
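As a plain-Java analogy (an invented example, not DMN or FEEL syntax), here are two "decision tables" where the lower one's output feeds an input of the one above it, which is the tree structure DMN makes explicit and reusable:

[code]
public class DecisionTreeSketch {

    // Lower "decision table": derives a risk category from raw inputs.
    static String risk(int age, int claims) {
        if (age < 25 && claims > 0) return "HIGH";
        if (age < 25 || claims > 2) return "MEDIUM";
        return "LOW";
    }

    // Top "decision table": its first input column is the output of risk().
    static boolean eligible(String risk, int income) {
        if (risk.equals("HIGH")) return false;
        return income >= 20_000;
    }

    public static void main(String[] args) {
        String r = risk(23, 1);                   // -> "HIGH"
        System.out.println(eligible(r, 30_000));  // -> false
    }
}
[/code]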

In general, the market response to DMN has been very good. Some business rule purists say it is too technical; however, it strikes a balance between what you would need to do in a programming language and a completely natural-language rule implementation. Like BPMN, it will probably tend to be used by specialists, but there is also a good chance, as with BPMN, that the results will at least be readable by regular business users. In my talk, I claimed "This is the Year for DMN".

Demonstrations:

  • Denis Gagne, Trisotech, demonstrated DMN modeling as part of his suite of cloud-based process modeling tools. Execution is notably absent.
  • Alvin To, Oracle, demonstrated their version, which only supports linear box expressions (as opposed to the more general tree structure), paying particular attention to their contribution to the spec: FEEL (Friendly Enough Expression Language).
  • Larry Goldberg, Sapiens, demonstrated their ability to create DMN models and transform them into a large variety of execution formats.
  • Jacob Feldman, Open Rules, demonstrated his rules optimization capability.
  • Jakob Freund, Camunda, has an implementation that focuses on single decision tables.

Missing Run-time

Most of the demonstrations focused on the modeling of the decisions. This is a problem. The specification covers the modeling, but as with any software standard, the devil is in the details. You can model using several tools in exactly the same way, but there is no guarantee that the execution of the model will be the same. A similar situation existed with BPMN, where different implementations treated things like the Inclusive-OR node completely differently. A model is meaningless unless you can show that implementations actually produce the same decisions, and that requires a standard run-time library that can execute the models and show what they actually mean.

The semantics are described in the specification using words that can never be precise enough to ensure reliable interoperability. Until an actual reference implementation is available, there will be no way to decide who has interpreted these words correctly. The problems occur in what might seem to be pathological edge cases, but experience shows that these are far more numerous than anyone anticipates.

Call To Action

For this reason I am calling for a standard implementation of the DMN evaluator that is widely available to everyone: a reference implementation. I think it needs to be an open source implementation, one that works well enough that vendors can actually use the code in a product, much like the way the Apache web server has eliminated the need for each company to write its own web server.

WfMC will be starting a working group to identify and promote the best open source implementation of a DMN run-time. We don't want to invent yet another implementation, so we plan to identify the best existing implementation and promote it. There are a couple of good examples out there.

If you believe you have a good open source implementation of a DMN run-time, then please leave a comment on this blog post.

If you are interested in helping identify and recognize the best implementation, leave a comment as well.



by kswenson at May 11, 2016 05:27 AM

May 09, 2016

April 25, 2016

Thomas Allweyer: How Much Process Intelligence Do Companies Have?

The survey for this year's BPM study by the ZHAW School of Management and Law was launched recently, even though the publication of the preceding study, on process intelligence, is not long past. The term "process intelligence" is usually understood to mean the collection and analysis of process-related data. This study takes a broader view: process intelligence encompasses all of an organization's capabilities that enable it to deal intelligently with its processes, comprising the subareas of "creative intelligence", "analytical intelligence" and "practical intelligence". Thus the abilities to anchor process management strategically, to optimize processes and to control them are also part of process intelligence. The BPM study 2015 examined how well companies are doing in terms of process intelligence, by means of five case studies on the one hand and a survey on the other.

The case studies describe process improvement projects at three companies (Axa Winterthur, St. Galler Kantonalbank and Hoffmann-La Roche) and two city administrations (Lausanne and Konstanz). Quite different methods and tools were used, e.g. process mining, simulation, process automation, business rules management, Lean Six Sigma, value stream mapping and an approach to agile business process management. The cases are described in detail, and for each one it is worked out which aspects of process intelligence were used and improved.

The survey made clear that in many companies aspiration and reality diverge with regard to the potential benefits of BPM. Efficiency gains and customer orientation are named as the most important goals, yet only a few companies carry out measures aimed at these goals. Only about a fifth of the respondents in each case state that they systematically identify standardization and automation potential, or that they monitor operational process performance. Correspondingly, business intelligence tools have so far been used only rarely in connection with business process management. IT support for weakly structured, knowledge-intensive processes is also not very developed at present. In particular, BPM is still hardly seen in connection with topics such as digitalization, the development of innovations or the optimization of the customer experience. What process management can offer for these strategic future topics is being examined in the BPM study 2016 that has just been launched.

Download the study at www.zhaw.ch/iwi/prozessintelligenz

by Thomas Allweyer at April 25, 2016 07:35 AM

April 21, 2016

Sandy Kemsley: bpmNEXT 2016 demo: Capital BPM and Fujitsu

Our final demo session of bpmNEXT — can’t believe it’s all over. How I Learned to Tell the Truth with BPM – Gene Rawls, Capital BPM Their Veracity tool overlays architecture...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 06:53 PM

Sandy Kemsley: bpmNEXT 2016 demos: Appian, Bonitasoft, Camunda and Capital BPM

Last day of bpmNEXT 2016 already, and we have a full morning of demos in two sessions, the first of which has a focus on more technical development. Intent-Driven, Future-Proof User Experience...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 05:28 PM

Sandy Kemsley: bpmNEXT 2016 demos: IBM, Orquestra, Trisotech and BPM.com

On the home stretch of the Wednesday agenda, with the last session of the four last demos for the day. BPM in the Cloud: Changing the Playing Field – Eric Herness, IBM IBM Bluemix...

[Content summary only, click through for full article and links]

by sandy at April 21, 2016 12:33 AM

April 20, 2016

Sandy Kemsley: bpmNEXT 2016 demos: Oracle, OpenRules and Sapiens DECISION

This afternoon’s first demo session shifts the focus to decision management and DMN. Decision Modeling Service – Alvin To, Oracle Oracle Process Cloud as an alternative to their Business...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 10:09 PM

Sandy Kemsley: bpmNEXT 2016 demos: W4 and BP3

Second round of demos for the day, with more case management. This time with pictures! BPM and Enterprise Social Networks for Flexible Case Management – Francois Bonnet, W4 (now ITESOFT Group)...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 07:05 PM

Sandy Kemsley: bpmNEXT 2016 demos: Salesforce, BP Logix and RedHat

Day 2 of bpmNEXT is all demos! Four sessions with a total of 12 demos coming up, with most of the morning focused on case management. Cloud Architecture Accelerating Innovation in Application...

[Content summary only, click through for full article and links]

by sandy at April 20, 2016 05:33 PM

April 19, 2016

Sandy Kemsley: bpmNEXT 2016 demo session: Signavio and Princeton Blue

Second demo round, and the last for this first day of bpmNEXT 2016. Process Intelligence – Sven Wagner-Boysen, Signavio Signavio allows creating a BPMN model with definitions of KPIs for the...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 11:26 PM

Sandy Kemsley: bpmNEXT 2016 demo session: 8020 and SAP

My panel done — which probably set some sort of record for containing exactly 50% of the entire female attendees at the conference — we’re on to the bpmNEXT demo session: each is 5...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 10:05 PM

Sandy Kemsley: Building a Value-Added BPM Business panel at bpmNEXT

BPM implementations aren’t just about the software vendors, since the vendor vision of “just take it out of the box and run it” or “have your business analyst build...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 07:05 PM

Sandy Kemsley: Positioning Business Modeling panel at bpmNEXT

We had a panel of Clay Richardson of Forrester, Kramer Reeves of Sapiens and Denis Gagne of Trisotech, moderated by Bruce Silver, discussing the current state of business modeling in the face of...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 06:06 PM

Sandy Kemsley: bpmNEXT 2016

It’s back! My favorite conference of the year, where the industry insiders get together to exchange stories and show what cool stuff that they’re working on, bpmNEXT is taking place this...

[Content summary only, click through for full article and links]

by sandy at April 19, 2016 05:01 PM

April 18, 2016

Thomas Allweyer: Survey on BPM and Digital Transformation Launched

Under the guiding question "Customer benefit through digital transformation?", the School of Management and Law at the Zurich University of Applied Sciences (ZHAW) has launched the survey for its BPM study 2016. This year the focus is in particular on the potential of process management for optimizing customer experiences and for developing and implementing new business models. The study will examine which concepts and methods are already being used in these areas and to what extent they are part of the digital transformation of companies. Participation in the survey is possible as of now. Link to the survey.

by Thomas Allweyer at April 18, 2016 06:33 PM

Drools & JBPM: Drools 6.4.0.Final is available

The latest and greatest Drools 6.4.0.Final release is now available for download.

This is an incremental release on our previous build that brings several improvements in the core engine and the web workbench.

You can find more details, downloads and documentation here:




Read below some of the highlights of the release.

You can also check the new releases for:




Happy drooling.

Drools Workbench

New look and feel

The general look and feel of the entire workbench has been updated to adopt PatternFly. The update brings a cleaner, more lightweight and more consistent user experience throughout every screen, allowing users to focus on the data and the tasks by removing all unnecessary visual elements. Interactions and behaviour remain mostly unchanged, limiting the scope of this change to visual updates.


Various UI improvements

In addition to the PatternFly update described above which targeted the general look and feel, many individual components in the workbench have been improved to create a better user experience. This involved making sure the default size of modal popup windows is appropriate to fit the corresponding content, adjusting the size of text fields as well as aligning labels, and improving the resize behaviour of various components when used on smaller screens.


New Locales

Locales ru (Russian) and zh_TW (Chinese Traditional) have now been added.

New Decision Server Management UI

The KIE Execution Server Management UI has been completely redesigned to accommodate major improvements introduced recently. Besides the fact that the new UI has been built from scratch following best practices provided by PatternFly, the new interface expands the previous features, giving users more control of their servers.


Core Engine


Better Java 8 compatibility

It is now possible to use Java 8 syntax (lambdas and method references) in the Right Hand Side (then) part of a rule.
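For example, a rule along these lines (illustrative domain classes, not taken from the release notes) now compiles, since the consequence block is plain Java:

[code]
rule "Flag all lines of pending orders"
when
    $o : Order( status == "PENDING" )
then
    // Java 8 lambdas and method references are now accepted
    // in the consequence, which is compiled as Java code
    $o.getLines().forEach( line -> line.setFlagged( true ) );
end
[/code]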

More robust incremental compilation

The incremental compilation (dynamic rule-base update) had some relevant flaws when one or more rules with a subnetwork (rules with complex existential patterns) were involved, especially when the same subnetwork was shared among different rules. This issue required a partial rewrite of the existing incremental compilation algorithm, followed by a complete audit that has been validated by a brand new test suite of more than 20,000 test cases in this area alone.

Improved multi-threading behaviour

The engine's code dealing with multi-threading has been partially rewritten to remove a large number of synchronisation points and improve stability and predictability.


OOPath improvements

OOPath was introduced in Drools 6.3.0. In Drools 6.4.0 it has been enhanced to support a number of new features.


by Edson Tirelli (noreply@blogger.com) at April 18, 2016 03:50 PM

Drools & JBPM: Official Wildfly Swarm #Drools Fraction

Official what? A long title for a quite small but useful contribution. Wildfly Swarm allows us to create rather small and self-contained applications including just what we need from the Wildfly Application Server. In this post we will be looking at the Drools Fraction provided to work with Wildfly Swarm. The main idea behind this fraction is to provide a quick way to bundle the Drools Server along with your own services inside a jar file that you can run anywhere.

Microservices World

Nowadays, with microservices a trending topic, we need to make sure that we can bundle our services as decoupled from other software as possible. For such a task we can use Wildfly Swarm, which allows us to create our services using a set of fractions instead of a whole JEE container. It also saves us a lot of time by allowing us to run our application without needing to download or install a JEE container. With Swarm we can just run java -jar <our services.jar> and we are ready to go.
In the particular case of Drools, the project provides a Web Application called Kie-Server (Drools Server) which offers a set of REST/SOAP/JMS endpoints to use as a service. You can load your domain specific rules inside this server and create new containers to use your different set of rules. But again, if we want to use it, we will need to worry about how to install it in Tomcat, Wildfly, Jetty, WebSphere, WebLogic, or any other Servlet Container. Each of these containers represent a different challenge while it comes to configurations, so instead of that we can start using the Wildfly Swarm Drools Fraction, which basically enables the Drools Server inside your Wildfly Swarm application. In a way you are bundling the Drools Server with your own custom services. By doing this, you can start the Drools Server by doing java -jar <your.jar> and you ready to go.
Imagine the other situation of dealing with several instances of Servlet Containers and deploying the WAR file to each of those containers. It gets worst if those containers are not all the same "brand" and version.
So let's take a quick look at an example of how you can get started using the Wildfly Swarm Drools Fraction.

Example

I recommend taking a look at the Wildfly Swarm documentation first, to get started on using Wildfly Swarm. If you know the basics, then you can include the Drools Fraction.
I've created an example using this fraction here: https://github.com/Salaboy/drools-workshop/tree/master/drools-server-swarm
The main goal of this example is to show how simple it is to get started with the Drools Fraction, and for that reason I'm not including any other service in this project. You are not restricted by that, and you can expose your own endpoints.
Notice in the pom.xml file two things:
  1. The Drools Server Fraction: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L18 By adding this dependency, the fraction is going to be activated when Wildfly Swarm bootstraps.
  2. The wildfly-swarm plugin: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L25. Notice in the plugin configuration that we are pointing to the App class, which basically just starts the container. (This can be avoided, but I wanted to show that if you want to start your own services or do your own deployments you can do that inside that class; a sketch of such a class follows.)
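To make that more concrete, here is a minimal sketch of what such an App class might look like, assuming the org.wildfly.swarm.container.Container API from the Swarm 1.x line (the class name and contents are illustrative, not the exact code from the example project):
[code]

import org.wildfly.swarm.container.Container;

public class App {

    public static void main(String[] args) throws Exception {
        // Boot the Swarm container; the Drools Server fraction activates
        // itself simply by being present on the classpath.
        Container container = new Container();
        container.start();
        // Custom services or extra deployments could be registered here.
    }
}

[/code]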
If you compile and package this project by doing mvn clean install, you will find in the target/ directory a file called:
drools-server-swarm-1.0-SNAPSHOT-swarm.jar which you can start by doing
[code]

java -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar

[/code]
For this example, we will include one more flag when we start our project to make sure that our Drools Server can resolve the artefacts that I'm going to use later on, so it will be like this:
[code]

java -Dkie.maven.settings.custom=../src/main/resources/settings.xml -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar

[/code]
By adding the "kie.maven.settings.custom" flag here we are letting the Drools Server know that we have configured an external maven repository to be used to resolve our artefacts. You can find the custom settings.xml file here.
Once you start this project and everything boots up (less than 2 seconds to start wildfly-swarm core + less than 14 to boot up the drools server) you are ready to start creating your KIE Containers with your domain specific rules.
You can find the output of running this app here. Notice the binding address for the http port:
WFLYUT0006: Undertow HTTP listener default listening on [0:0:0:0:0:0:0:0]:8083
Now you can start sending requests to http://localhost:8083/drools to interact with the server.
I've also included in this project a Chrome Postman collection for you to test some very simple requests like:
  • Getting All the registered Containers -> GET http://localhost:8083/drools/server/containers
  • Creating a new container -> PUT http://localhost:8083/drools/server/containers/sample
  • Sending some commands like Insert Fact + Fire All Rules -> POST http://localhost:8083/drools/server/containers/instances/sample
You can import this file into Postman and fire the requests against your newly created Drools Server. Besides knowing which URLs to PUT, POST or GET data to, you also need to know about the required headers and authentication details (a plain-Java sketch of the first request follows below):
Headers
Authentication -> Basic
User: kieserver
Password: kieserver1!
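If you prefer plain Java over Postman, here is a hedged, minimal sketch of the first request (listing the registered containers) using nothing but the JDK's HttpURLConnection and the credentials above:
[code]

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class DroolsServerClient {

    public static void main(String[] args) throws Exception {
        // Basic auth header for the kieserver/kieserver1! user.
        String auth = Base64.getEncoder()
                .encodeToString("kieserver:kieserver1!".getBytes("UTF-8"));

        // GET all the registered containers.
        URL url = new URL("http://localhost:8083/drools/server/containers");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Accept", "application/xml");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

[/code]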
Finally, you can find the source code of the Fraction here: https://github.com/wildfly-swarm/wildfly-swarm-drools
There are tons of things that can be improved, helpers to be provided, bugs to be fixed, so if you are up to the task, get in touch and let's make the Drools fraction better for everyone.

Summing up

While I'm still writing the documentation for this fraction, you can start using it right away. Remember that the main goal of these Wildfly Swarm extensions is to make your life easier and save you some time when you need to get something like the Drools Server in a small, isolated bundle that doesn't require a server to be installed and configured.
If you have any questions about the Drools Fraction don't hesitate to write a comment here.



by salaboy (noreply@blogger.com) at April 18, 2016 01:21 PM

April 15, 2016

Thomas Allweyer: Insight Conference Discusses Modeling in the Digital Enterprise

The conference organized by the modeling specialist MID in Nuremberg is by now probably the largest German-language event around the topic of modeling. Under the motto "Models Drive Digital", the ubiquitous topic of digitalization took center stage here this year as well. Both the opening keynote by innovation researcher Nick Sohnemann and the closing talk by Ranga Yogeshwar revolved around the at times breathtakingly fast developments that our society is confronted with and that will change every industry, with the television journalist Yogeshwar also making numerous critical remarks. For instance, he observed that innovations often lead to a reinforcement of inequality.

Another plenary talk presented the digitalization strategy of FC Bayern München. The largest sports club in the world is also a large company with, in part, very specific IT requirements. For example, the planning, monitoring and control of the arrival and departure of tens of thousands of visitors to a home game must be supported end to end. Signing up as a club member must also be possible via an app, not least because particularly devoted fans want to register their newborn offspring with the club straight from the delivery room.

At the host MID, everything revolves around the "smartfacts" platform, which integrates models from a wide variety of tools in a collaborative environment. The managing directors Andreas Ditze and Jochen Seemann presented the latest developments, including improved support for review and approval processes, the integration of a web modeler, and the presentation of process models in the form of a "Process Guidance" that leads end users through processes step by step.

The talk program offered a total of ten parallel tracks to choose from. Besides digitalization, topics such as business process management, agile methods, business intelligence, master data management and SAP were on the agenda. During the breaks, attendees could try out smart glasses and other gadgets or experience knowledge transfer through serious games.

One often finds that it is precisely the pioneers of digital transformation who hardly use any established modeling methods. These are considered too heavyweight to be helpful for the rapid development and implementation of digital business models. Nick Sohnemann already pointed out in the opening talk that, according to Google Trends, interest in the search term "Business Process Modeling" has declined sharply. And Elmar Nathe, who is responsible for the topic of digitalization at MID, told me that there are customers who, after a rather rough sketch of the functional architecture, jump straight into coding and largely do without more detailed modeling, even though the lack of documentation is likely to cause problems during maintenance and further development.

Managing director Jochen Seemann quoted a Gartner study according to which 80% of companies will not achieve the hoped-for success with their digital strategies due to an insufficient BPM maturity level. In this respect, topics such as process management and process modeling play an important role in the digital enterprise, because the new business models only work if the processes and systems needed to implement them are under control. MID observes that topics such as model-driven development are also attracting renewed interest. For example, automotive groups increasingly rely on model-based approaches to get a grip on the variety of hardware and software variants.

by Thomas Allweyer at April 15, 2016 09:45 AM

April 11, 2016

BPinPM.net: Invitation to BPinPM.net Conference 2016 – The Human Side of BPM: From Process Operation to Process Innovation

We are very happy to invite you to the most comprehensive Best Practice in Process Management Conference ever! Meet us at Lufthansa Training & Conference Center and join the journey from Process Operation to Process Innovation.

It took more than a year to evaluate, to double-check, and to combine all workshop results into a new and holistic approach for sustainable process management.

But now, the ProcessInnovation8 is there and will guide us at the conference! 🙂

The ProcessInnovation8 provides direction to BPM professionals and management throughout the phases Process Strategy, Process Enhancement, Process Implementation, Process Steering, and Process Innovation, while keeping a special focus on the human side of BPM to maximize the acceptance and benefit of BPM.

To share our learnings, introduce practical examples, discuss the latest BPM insights, experience the BPinPM.net community, enjoy the dinner, and, and, and…, we are looking forward to meeting you in Seeheim! 🙂

Please order your tickets now. Capacity is limited and the early bird tickets will be available for a short period of time only.

Please visit the conference site to access the agenda and to get all the details…


Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international BPinPM.net community as well. Please feel free to contact the team.

by Mirko Kloppenburg at April 11, 2016 07:17 PM

April 10, 2016

Tom Debevoise: Lists in Decision Model Notation

This image was inspired by Nick Broom's post to the DMN group on LinkedIn.

The use case posed by Nick is here: https://www.linkedin.com/groups/4225568/4225568-6123464175038586884

In the Signavio Decision Modeler's implementation of DMN, we provide the ability to check whether a set contains an element of another input item or a static set. The expression it uses in the column is an equivalent of the intersection set operator. The DMN diagram above does this in 3 different ways:

1)      With the Signavio 'Multi-Decision' extension to DMN. This iterates through an input that is a list and checks item by item whether the inputs match.

2)      With an internal operator that tests whether one item or set of items exists as a subset of another, using a fixed subset

3)      With an internal operator that tests whether one item or set of items exists as a subset of another, using an input data type

You do not need the multi-decision to support a simple data type list. However, if the input item is a list of complex types (multi-attribute types) or complex logic is needed, then the multi-decision is required.

The Signavio export for this diagram is here.

 

by Tom Debevoise at April 10, 2016 07:03 PM

Thomas Allweyer: Website for the BPMN Book Updated

I am currently preparing the English edition of the current edition of the BPMN book. In doing so, I noticed a few small things in the German book that could be improved. There are also a few changes and additions to the sources in the bibliography and the listed internet links. I have therefore taken the opportunity to update the website for the book: www.kurze-prozesse.de/bpmn-buch

by Thomas Allweyer at April 10, 2016 10:25 AM

April 08, 2016

Thomas Allweyer: Free Modeling Tools Put to the Test

In its latest study, BPM&O examined 17 free process modeling tools. Only tools whose free use is unlimited in time and that impose no restrictions on model size were included. Technical requirements, interfaces, model types and links, languages, documentation and support were evaluated. Some of the tools offer a considerable range of functions and are quite suitable for short-term use in projects or for bridging the procurement time of a commercial modeling platform. Nevertheless, the study's authors conclude, one must be aware that all the freely available modeling tools are essentially glorified drawing tools. They cannot meaningfully support comprehensive process management, since essential functions such as collaboration capabilities or process portals are missing. The videos that BPM&O produced for each of the examined tools give an impression of how their modeling functions are operated. Link to download the study (registration required).

by Thomas Allweyer at April 08, 2016 12:21 PM

April 06, 2016

Drools & JBPM: User and group management in jBPM and Drools Workbenches

Introduction

This article talks about a new feature that allows the administration of the application's users and groups using an intuitive and friendly user interface that comes integrated into both the jBPM and Drools Workbenches.

User and group management
Before covering the installation, setup and usage of this feature, this article introduces some concepts that need to be completely understood beforehand.

So this article is split into these sections:
  • Security management providers and capabilities
  • Installation and setup
  • Usage
Notes: 
  • This feature is included from version 6.4.0.Final.
  • Sources available here.


Security realms

A security environment is usually provided by the use of a realm. Realms are used to restrict access to the different application resources. So a realm contains information about users, groups, roles, permissions and any other related information.

In most typical scenarios the application's security is delegated to the container's security mechanism, which in turn consumes a given realm. It's important to consider that there exist several realm implementations; for example, Wildfly provides a realm based on the application-users.properties/application-roles.properties files, Tomcat provides a realm based on the tomcat-users.xml file, etc. So keep in mind that there is no single security realm to rely on; it can be different in each installation.

The jBPM and Drools workbenches are no exception: they're built on top of the Uberfire framework (aka UF), which delegates authorization and authentication to the underlying container's security environment as well, so the consumed realm is given by the concrete deployment configuration.

 
Security management providers

Due to the potentially different security environments that have to be supported, the users and groups management feature provides a well-defined management services API with some default built-in security management providers. A security management provider is the formal name given to a concrete user and group management service implementation for a given realm.

At this moment, by default there are three security management providers available:
Keep an eye out for new security management providers in further releases. You can easily build and register your own security management provider if none of the defaults fits your environment.

 
Security management providers' capabilities

Each security realm can support different operations. For example, consider the use of a Wildfly realm based on properties files. The content of the application-users.properties file looks like:

admin=207b6e0cc556d7084b5e2db7d822555c
salaboy=d4af256e7007fea2e581d539e05edd1b
maciej=3c8609f5e0c908a8c361ca633ed23844
kris=0bfd0f47d4817f2557c91cbab38bb92d
katy=fd37b5d0b82ce027bfad677a54fbccee
john=afda4373c6021f3f5841cd6c0a027244
jack=984ba30e11dda7b9ed86ba7b73d01481
director=6b7f87a92b62bedd0a5a94c98bd83e21
user=c5568adea472163dfc00c19c6348a665
guest=b5d048a237bfd2874b6928e1f37ee15e
kiewb=78541b7b451d8012223f29ba5141bcc2
kieserver=16c6511893651c9b4b57e0c027a96075

As you can see, it's based on key-value pairs where the key is the username, and the value is the hashed value for the user's password. So a user is defined just by the key, its username; it does not have a name, an address, etc.

On the other hand, consider the use of a realm provided by a Keycloak server. The information for a user is composed of more user metadata, such as surname, address, etc, as in the following image:

Admin user edit using the Keycloak sec. management provider

So the different services and client-side components from the users and group management API are based on capabilities. Capabilities are used to expose or restrict the available functionality provided by the different services and client-side components. Examples of capabilities are:
  • Create user
  • Update user
  • Delete user
  • Update user attributes
  • Create group
  • Assign groups
  • Assign roles 
  • etc

Each security management provider must specify the set of capabilities it supports. From the previous examples you can note that the Wildfly security management provider does not support the capability of managing a user's attributes - the user is only composed of the user name. On the other hand, the Keycloak provider does support this capability.

The different views and user interface components rely on the capabilities supported by each provider, so if a capability is not supported by the provider in use, the UI does not provide the views for the management of that capability. As an example, if a concrete provider does not support deleting users, the delete user button on the user interface will not be available.

Please take a look at the concrete service provider documentation to check all the supported capabilities for each one; the default ones can be found here.

If the security environment is not supported by any of the default providers, you can build your own. Please keep updated on further articles about how to create a custom security management provider.

 
Installation and setup

Before considering the installation and setup steps, please note that the following Drools and jBPM distributions come with built-in, pre-installed security management providers by default:
If your realm settings are different from the defaults, please read each provider's documentation in order to apply the concrete settings.

On the other hand, if you're building your own security management provider or need to include it on an existing application, consider the following installation options:
  • Enable the security management feature on an existing WAR distribution
     
  • Setup and installation in an existing or new project (from sources)
NOTE: If no security management provider is installed in the application, there will be no available user interface for managing the security realm. Once a security management provider is installed and setup, the user and group management user interfaces are automatically enabled and accessible from the main menu.

Enable the security management feature on an existing WAR distribution
Given an existing WAR distribution of either Drools and jBPM workbenches, follow these steps in order to install and enable the user management feature:

  1. Ensure the following libraries are present on WEB-INF/lib:
    • WEB-INF/lib/uberfire-security-management-api-6.4.0.Final.jar
    • WEB-INF/lib/uberfire-security-management-backend-6.4.0.Final.jar
        
  2. Add the concrete library for the security management provider to use in WEB-INF/lib:
    • Example: WEB-INF/lib/uberfire-security-management-wildfly-6.4.0.Final.jar
    • If the concrete provider you're using requires more libraries, add those as well. Please read each provider's documentation for more information.
        
  3. Replace the whole content for file WEB-INF/classes/security-management.properties, or if not present, create it. The settings present on this file depend on the concrete implementation you're using. Please read each provider's documentation for more information.
      
  4. If you're deploying on Wildfly or EAP, please check if the WEB-INF/jboss-deployment-structure.xml requires any update. Please read each provider's documentation for more information.

Setup and installation in an existing or new project (from sources)

If you're building an Uberfire-based web application and you want to include the user and group management feature, please read these instructions.

Disabling the security management feature

The security management feature can be disabled, so that no services or user interface will be available, by either of the following:

  • Uninstalling the security management provider from the application

    When no concrete security management provider is installed in the application, the user and group management feature will be disabled and no services or user interface will be presented to the user.
       
  • Removing or commenting the security management configuration file

    Removing or commenting all the lines in the configuration file located at WEB-INF/classes/security-management.properties will disable the user and group management feature and no services or user interface will be presented to the user.


Usage

The user and group management feature is presented using two different perspectives, available from the main Home menu (provided the feature is enabled):
User and group management menu entries
Read the following sections for using both user and group management perspectives.

User management

The user management interface is available from the User management menu entry in the Home menu.

The interface is presented using two main panels:  the users explorer on the west panel and the user editor on the center one:

User management perspective

The users explorer, on the west panel, lists by default all the users present in the application's security realm:

Users explorer panel
In addition to listing all users, the users explorer allows:

  • Searching users


    When specifying the search pattern in the search box the users list will be filtered to display only the users that match the search pattern.

    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Creating new users:



    By clicking on the Create new user button, a new screen will be presented on the center panel to perform a new user creation.
The user editor, on the center panel, is used to create, view, update or delete users. When a new user is being created, or an existing user is clicked in the users explorer, the user editor screen is opened.

To view an existing user, click on an existing user in the Users Explorer to open the User Editor screen. For example, viewing the admin user when using the Wildfly security management provider results in this screen:

Viewing the admin user
Same admin user view operation but when using the Keycloak security management provider, instead of the Wildfly's one, results in this screen:

Using the Keycloak sec. management provider
As you can see, the user editor when using the Keycloak sec. management provider includes the user attributes management section, which is not present when using the Wildfly one. So remember that the information and actions available in the user interface depend on each provider's capabilities (as explained in previous sections).

Viewing a user in the user editor provides the following information (if provider supports it):
  • The user name
  • The user's attributes
  • The assigned groups
  • The assigned roles
In order to update or delete an existing user, click on the Edit button near the username in the user editor screen:

Editing admin user
Once the user editor is presented in edit mode, different operations can be done (if the security management provider in use supports them):
  • Update the user's attributes



    Existing user attributes, such as the user name, the surname, etc, can be updated. New attributes can be created as well, if the security management provider supports it.
  • Update assigned groups

    A group selection popup is presented when clicking on the Add to groups button:



    This popup screen allows the user to search and select or deselect the groups assigned for the user currently being edited.
  • Update assigned roles

    A role selection popup is presented when clicking on the Add to roles button:



    This popup screen allows the user to search and select or deselect the roles assigned for the user currently being edited.
  • Change user's password

    A change password popup screen is presented when clicking on the Change password button:

  • Delete user

    The user currently being edited can be deleted from the realm by clicking on the Delete button.
Group management

The group management interface is available from the Group management menu entry in the Home menu.

The interface is presented using two main panels:  the groups explorer on the west panel and the group editor on the center one:

Group management perspective
The groups explorer, on the west panel, lists by default all the groups present in the application's security realm:

Groups explorer
In addition to listing all groups, the groups explorer allows:

  • Searching for groups

    When specifying the search pattern in the search box the groups list will be filtered to display only the groups that match the search pattern.
    Groups explorer filtered using search
    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Create new groups



    By clicking on the Create new group button, a new screen will be presented on the center panel to perform a new group creation. Once the new group has been created, you can assign users to it:
    Assign users to the recently created group
The group editor, on the center panel, is used to create, view or delete groups. When a new group is being created, or an existing group is clicked in the groups explorer, the group editor screen is opened.

To view an existing group, click on it in the Groups Explorer to open the Group Editor screen. For example, viewing the sales group results in this screen:


Viewing the sales group
To delete an existing group just click on the Delete button.


by Roger Martinez (noreply@blogger.com) at April 06, 2016 06:17 PM

April 05, 2016

Thomas Allweyer: BPM & ERP in the Digital Enterprise

The 9th Praxisforum BPM & ERP sheds light on the many facets of IT and process management in the age of digitalization. No less a figure than Professor August-Wilhelm Scheer has been secured as keynote speaker. His topic: "Digitalization is devouring the world". The question of what significance process management has in the digitalized enterprise can also be discussed at various themed tables. Several short talks in Pecha Kucha format promise pointed food for discussion as well. And Cornelius Clauser, the head of the SAP Productivity Consulting Group, argues for a new orientation of BPM in his closing talk "From Paper to Impact". Before that, attendees can look forward to a whole series of practitioner talks, including ones from Böhringer Ingelheim, EnBW, Infraserv and Zalando. In addition, the results of the international study BPM Compass will be presented; participation is still possible until May 8th.
The one-day event takes place on June 21st in Höhr-Grenzhausen near Koblenz. There is also the option of attending an intensive workshop on process management on the day before, as well as a practical workshop on "Agile and hybrid methods in a classical environment" on the following day. More information is available at www.bpmerp.de.

by Thomas Allweyer at April 05, 2016 06:25 PM

April 04, 2016

Drools & JBPM: Mastering #Drools 6 book is out!

Hi everyone, just a quick post to share the good news! The book is out and ready to ship! You can buy it from Packt or from Amazon directly. I'm also happy to announce that we are going to be presenting the book next week in Denmark with the local JBug: http://www.meetup.com/jbug-dk/events/229407454/ If you are around or know someone that might be interested in attending, please let them know!

Mastering Drools 6
The book covers a wide range of topics, from basic ones such as how to set up your environment and how to write simple rules, to more advanced topics such as Complex Event Processing and the core of the rule engine, the PHREAK algorithm.

by salaboy (noreply@blogger.com) at April 04, 2016 09:02 AM

March 24, 2016

Thomas Allweyer: Decision Tables in the Cloud

Anyone who wants to execute business logic in the form of decision tables according to the "Decision Model and Notation" (DMN) standard and integrate it into an application can use a new cloud service from Camunda. A decision table can be created via a web interface, or built with an offline editor, uploaded and deployed with a single click. Execution of the decision logic is triggered via a REST API, which makes simple integration into any application possible. Code examples for several common programming languages are available. For now, however, this is only a beta test, and it is not yet known how long it will remain free of charge.

by Thomas Allweyer at March 24, 2016 11:14 AM

March 23, 2016

Drools & JBPM: Packt is doing it again: 50% off on all eBooks and Videos

Packt Publishing has another great promotion going: 50% off on all Packt eBooks and Videos until April 30th.

It is a great opportunity to grab all those Drools books as well as any others you might be interested in.

Click on the image below to be redirected to their online store:




by Edson Tirelli (noreply@blogger.com) at March 23, 2016 10:20 PM

March 21, 2016

Drools & JBPM: High Availability Drools Stateless Service in Openshift Origin

Hi everyone! In this blog post I want to cover a simple example showing how easy it is to scale our Drools Stateless services by using Openshift 3 (Docker and Kubernetes). I will be showing how we can scale our service by provisioning new instances on demand and how these instances are load balanced by Kubernetes using a round robin strategy.

Our Drools Stateless Service

First of all we need a stateless Kie Session to play around with. In this simple example I've created a food recommendation service to demonstrate what kind of scenarios you can build using this approach. All the source code can be found inside the Drools Workshop repository hosted on github: https://github.com/Salaboy/drools-workshop/tree/master/drools-openshift-example
In this project you will find 4 modules:
  • drools-food-model: our business model including the domain classes, such as Ingredient, Sandwich, Salad, etc
  • drools-food-kjar: our business knowledge, here we have our set of rules to describe how the food recommendations will be done.
  • drools-food-services: using wildfly swarm I'm exposing a domain-specific service encapsulating the rule engine. Here a set of REST services is exposed so our clients can interact with it.
  • drools-controller: by using the Kubernetes Java API we can programmatically provision new instances of our Food Recommendation Service on demand in the Openshift environment.
Our unit of work will be the Drools-Food-Services project, which exposes the REST endpoints to interact with our stateless sessions.
Also notice that there is another Service that gives us very basic information about where our Service is running: https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/api/NodeStatsService.java
We will call this service to know exactly which instance of the service is answering our clients later on.
The rules for this example are simple and not doing much. If you are looking to learn Drools, I recommend you create more meaningful rules and share them with me so we can improve the example ;) You can take a look at the rules here:
As you might expect: Sandwiches for boys and Salads for girls :)
One last important thing to notice about our service is how the rules are being picked up by the Service Endpoint. I'm using the Drools CDI extension to @Inject a KieContainer which is resolved using the KIE-CI module, explained in some of my previous posts.
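As a rough sketch of that wiring (the GAV coordinates below are illustrative; the real ones live in the workshop repository), the endpoint can get its KieContainer injected like this:

import javax.inject.Inject;

import org.kie.api.cdi.KReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class FoodRecommendationResource {

    // KIE-CI resolves this kjar from the configured Maven repositories.
    // Hypothetical GAV, used here only for illustration.
    @Inject
    @KReleaseId(groupId = "org.drools.workshop",
                artifactId = "drools-food-kjar",
                version = "1.0-SNAPSHOT")
    private KieContainer kieContainer;

    public void recommend(Object customer) {
        KieSession ksession = kieContainer.newKieSession();
        try {
            ksession.insert(customer);
            ksession.fireAllRules();
        } finally {
            ksession.dispose();
        }
    }
}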
We will bundle this project into a Docker image that can be started as many times as we want/need. If you have a Docker client installed in your local environment you can start this food recommendation service by looking at the salaboy/drools-food-services image which is hosted in hub.docker.com/salaboy
By starting the Docker image, without even knowing what is running inside, we immediately notice the following advantages:
  • We don't need to install Java or any other tool besides Docker
  • We don't need to configure anything to run our Rest Service
  • We don't even need to build anything locally due to the fact that the image is hosted in hub.docker.com
  • We can run on top of any operating system
At the same time we notice the following disadvantages:
  • We need to know on which IP and port our service is exposed by Docker
  • If we run more than one image we need to keep track of all the IPs and ports and notify all our clients about them
  • There is no built-in way to load balance between different instances of the same Docker image
To solve these disadvantages, Openshift, and more specifically Kubernetes, comes to our rescue!

Provisioning our Service inside Openshift

As I mentioned before, if we just start creating new Docker image instances of our service, we soon find out that our clients will need to know how many instances we have running and how to contact each of them. This is obviously not good, and for that reason we need an intermediate layer to deal with this problem. Kubernetes provides us with this layer of abstraction and provisioning, which allows us to create multiple instances of our Pods (an abstraction on top of the Docker image) and configure Replication Controllers and Services for them.
The concept of a Replication Controller provides a way to define how many instances of our service should be running at a given time. Replication controllers are in charge of guaranteeing that if we need at least 3 instances running, those instances are running all the time. If one of these instances dies, the replication controller will automatically spawn one for us.
Services in Kubernetes solve the problem of knowing each and every Docker instance's details. Services allow us to provide a facade for our clients to use to interact with the instances of our Pods. The Service layer also allows us to define a strategy (called session affinity) for how to load balance the Pod instances behind the service. There are two built-in strategies: ClientIP and Round Robin.
So we need two things now: an installation of Openshift Origin (v3) and our Drools Controller project, which will interact with the Kubernetes REST endpoints to provision our Pods, Replication Controllers and Services.
For the Openshift installation, I recommend you to follow the steps described here: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc
I'm running the Vagrant option (the second option described in the previous link) here on my laptop.
Finally, an ultra-simple example can be found of how to use the Kubernetes API to provision, in this case, our drools-food-services into Openshift.
Notice that we are defining everything at runtime, which is really cool, because we can start from scratch or modify existing Services, Replication Controllers and Pods.
You can take a look at the drools-controller project, which shows how we can create a Replication Controller which points to our Docker image and defines 1 replica (one replica is created by default).
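As an illustrative sketch of that idea, assuming the Fabric8 Kubernetes Java client (the names, labels and namespace here are made up for the example), creating such a Replication Controller programmatically could look roughly like this:

import io.fabric8.kubernetes.api.model.ReplicationController;
import io.fabric8.kubernetes.api.model.ReplicationControllerBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ProvisionFoodService {

    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // One replica of the drools-food-services image, selected by
            // a hypothetical "app" label.
            ReplicationController rc = new ReplicationControllerBuilder()
                .withNewMetadata()
                    .withName("drools-food-services")
                .endMetadata()
                .withNewSpec()
                    .withReplicas(1)
                    .addToSelector("app", "drools-food-services")
                    .withNewTemplate()
                        .withNewMetadata()
                            .addToLabels("app", "drools-food-services")
                        .endMetadata()
                        .withNewSpec()
                            .addNewContainer()
                                .withName("drools-food-services")
                                .withImage("salaboy/drools-food-services")
                            .endContainer()
                        .endSpec()
                    .endTemplate()
                .endSpec()
                .build();

            client.replicationControllers().inNamespace("default").create(rc);
        }
    }
}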
If you log in to the Openshift Console you will be able to see the newly created service with the Replication Controller and just one replica of our Pod. By using the UI (or the APIs, changing the Main class) we can provision more replicas, as many as we need. The Kubernetes Service will make sure to load balance between the different Pod instances.
Voila! Our Services Replicas are up and running!
Now if you access the NodeStat service by doing a GET to the mapped Kubernetes Service port, you will get the Pod that is answering that request. If you execute the request multiple times you should be able to see the Round Robin strategy kicking in.
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-k9gym","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-pzqlu","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
In the same way you can interact with the Stateless Sessions in each of these 3 Pods. In such a case, you don't really need to know which Pod is answering your request, you just need to get the job done by any of them.

Summing up

By leveraging the Openshift Origin infrastructure we manage to simplify our architecture by not reinventing mechanisms that already exist in tools such as Kubernetes & Docker. In following posts I will be writing about some other nice advantages of using this infrastructure, such as rolling updates to upgrade the version of our services, and adding security and API Management to the mix.
If you have questions about this approach please share your thoughts.

by salaboy (noreply@blogger.com) at March 21, 2016 06:21 PM

Thomas Allweyer: A Standard for the EPC

Event-driven process chains (EPCs) are still widely used for modeling business processes, above all for representing them from a business perspective. And although this notation has existed for almost a quarter of a century, there is, in contrast to the much younger BPMN, still no binding standard for it. The consequences are differing interpretations, and with them inconsistent usage and a lack of interchange options for EPCs between different tools. This is now set to change. Under the leadership of professors Oliver Thomas of the University of Osnabrück and Jörg Becker of the University of Münster, a working group on EPC standardization has been founded. Work on the standard is supported by a wiki collaboration platform, which can be reached at www.epc-standard.org. Anyone interested in contributing can register there as a participant.

by Thomas Allweyer at March 21, 2016 12:30 PM

March 19, 2016

Drools & JBPM: Keycloak SSO Integration into jBPM and Drools Workbench

Introduction


Single Sign On (SSO) and related token exchange mechanisms are becoming the most common scenario for authentication and authorization in different environments on the web, especially when moving into the cloud.

This article talks about the integration of Keycloak with jBPM or Drools applications in order to use all the features provided by Keycloak. Keycloak is an integrated SSO and IDM for browser applications and RESTful web services. Learn more about it on the Keycloak home page.

The integration with Keycloak brings lots of advantages, such as:
  • Provide an integrated SSO and IDM environment for different clients, including jBPM and Drools workbenches
  • Social logins - use your Facebook, Google, Linkedin, etc accounts
  • User session management
  • And much more...
       
Next sections cover the following integration points with Keycloak:

  • Workbench authentication through a Keycloak server
    It basically consists of securing both the web client and remote service clients through the Keycloak SSO, so either web interface or remote service consumers (whether a user or a service) will authenticate through KC.
       
  • Execution server authentication through a Keycloak server
    Consists of securing the remote services provided by the execution server (as it does not provide a web interface). Any remote service consumer (whether a user or a service) will authenticate through KC.
      
  • Consuming remote services
    This section describes how third party clients can consume the remote service endpoints provided by both the Workbench and the Execution Server.
       
Scenario

Consider the following diagram as the environment for this article's example:

Example scenario

Keycloak is a standalone process that provides remote authentication, authorization and administration services that can be potentially consumed by one or more jBPM applications over the network.

Consider these main steps for building this environment:
  • Install and setup a Keycloak server
      
  • Create and setup a Realm for this example - Configure realm's clients, users and roles
      
  • Install and setup the SSO client adapter & jBPM application

Notes: 

  • The resulting environment and the different configurations for this article are based on the jBPM (KIE) Workbench, but the same ones can also be applied to the KIE Drools Workbench.
  • This example uses the latest 6.4.0.CR2 community release version

Step 1 - Install and setup a Keycloak server


Keycloak provides an extensive documentation and several articles about the installation on different environments. This section describes the minimal setup for being able to build the integrated environment for the example. Please refer to the Keycloak documentation if you need more information.

Here are the steps for a minimal Keycloak installation and setup:
  1. Download latest version of Keycloak from the Downloads section. This example is based on Keycloak 1.9.0.Final.
      
  2. Unzip the downloaded distribution of Keycloak into a folder; let's refer to it as
    $KC_HOME

      
  3. Run the KC server - This example is based on running both Keycloak and jBPM on the same host. In order to avoid port conflicts you can use a port offset for the Keycloak server as:

        $KC_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
      
  4. Create a Keycloak's administration user - Execute the following command to create an admin user for this example:

        $KC_HOME/bin/add-user.sh -r master -u 'admin' -p 'admin'
The Keycloak administration console will be available at http://localhost:8180/auth/admin (use the admin/admin for login credentials)

Step 2 - Create and setup the demo Realm


Security realms are used to restrict access to the different application resources.

Once the Keycloak server is running, the next step is creating a realm. This realm will provide the different users, roles, sessions, etc for the jBPM application/s.

Keycloak provides several examples for the realm creation and management, from the official examples to different articles with more examples.

You can create the realm manually or just import the given json files.

Creating the realm step by step

Follow these steps in order to create the demo realm used later in this article:
  1. Go to the Keycloak administration console and click on Add realm button. Give it the name demo.
      
  2. Go to the Clients section (from the main admin console menu) and create a new client for the demo realm:
    • Client ID: kie
    • Client protocol: openid-connect
    • Access type: confidential
    • Root URL: http://localhost:8080
    • Base URL: /kie-wb-6.4.0.Final
    • Redirect URIs: /kie-wb-6.4.0.Final/*
The resulting kie client settings screen:

Settings for the kie client

Note: The above settings assume the value kie-wb-6.4.0.Final for the application's context path. If your jBPM application will be deployed on a different context path, host or port, just use your concrete settings here.

The last step for being able to use the demo realm from the jBPM workbench is to create the application's user and roles:
  • Go to the Roles section and create the roles admin, kiemgmt and rest-all
      
  • Go to the Users section and create the admin user. Set the password with value "password" in the credentials tab, and unset the temporary switch.
      
  • In the Users section navigate to the Role Mappings tab and assign the admin, kiemgmt and rest-all roles to the admin user
Role mappings for admin user


Importing the demo realm

Import both:

  • Demo Realm - Click on Add Realm and use the demo-realm.json file
      
  • Realm users - Once the demo realm is imported, click on Import in the main menu and use the demo-users-0.json file as the import source
At this point a Keycloak server is running on the host, set up with a minimal configuration. Let's move on to the jBPM workbench setup.

Step 3 - Install and setup jBPM workbench


For this tutorial let's use Wildfly as the application server for the jBPM workbench, as the jBPM installer does by default.

Let's assume, after running the jBPM installer, that $JBPM_HOME is the root path of the Wildfly server where the application has been deployed.

Step 3.1 - Install the KC adapter

In order to use Keycloak's authentication and authorization modules from the jBPM application, the Keycloak adapter for Wildfly must be installed on our server at $JBPM_HOME. Keycloak provides multiple adapters for different containers out of the box; if you are using another container or need to use another adapter, please take a look at the adapters configuration from the Keycloak docs. Here are the steps to install and set up the adapter for Wildfly 8.2.x:

  1. Download the adapter from here
      
  2. Execute the following commands:

     
    cd $JBPM_HOME/
    unzip keycloak-wf8-adapter-dist.zip // Install the KC client adapter

    cd $JBPM_HOME/bin
    ./standalone.sh -c standalone-full.xml // Setup the KC client adapter.

    // ** Once server is up, open a new command line terminal and run:
    cd $JBPM_HOME/bin
    ./jboss-cli.sh -c --file=adapter-install.cli
Step 3.2 - Configure the KC adapter

Once the KC adapter is installed into Wildfly, the next step is to configure the adapter in order to specify different settings, such as the location of the authentication server, the realm to use, and so on.

Keycloak provides two ways of configuring the adapter:
  • Per WAR configuration
  • Via Keycloak subsystem 
In this example let's use the second option, the Keycloak subsystem, so our WAR is free from this kind of settings. If you want to use the per WAR approach, please take a look here.

Edit the configuration file $JBPM_HOME/standalone/configuration/standalone-full.xml and locate the subsystem configuration section. Add the following content:

<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
<secure-deployment name="kie-wb-6.4.0-Final.war">
<realm>demo</realm>
<realm-public-key>MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Q3RNbrVBcY7xbpkB2ELjbYvyx2Z5NOM/9gfkOkBLqk0mWYoOIgyBj4ixmG/eu/NL2+sja6nzC4VP4G3BzpefelGduUGxRMbPzdXfm6eSIKsUx3sSFl1P1L5mIk34vHHwWYR+OUZddtAB+5VpMZlwpr3hOlfxJgkMg5/8036uebbn4h+JPpvtn8ilVAzrCWqyaIUbaEH7cPe3ecou0ATIF02svz8o+HIVQESLr2zPwbKCebAXmY2p2t5MUv3rFE5jjFkBaY25u4LiS2/AiScpilJD+BNIr/ZIwpk6ksivBIwyfZbTtUN6UjPRXe6SS/c1LaQYyUrYDlDpdnNt6RboQIDAQAB</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">925f9190-a7c1-4cfd-8a3c-004f9c73dae6</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>
</subsystem>

If you have imported the example json files from this article in step 2, you can just use the same configuration as above, using your concrete deployment name. Otherwise please use your values for these configurations:
  • Name for the secure deployment - Use your concrete application's WAR file name
      
  • Realm - Is the realm that the applications will use, in our example, the demo realm created on step 2.
      
  • Realm Public Key - Provide here the public key for the demo realm. It's not mandatory; if it's not specified, it will be retrieved from the server. Otherwise, you can find it in the Keycloak admin console -> Realm settings (for the demo realm) -> Keys
      
  • Authentication server URL - The URL for the Keycloak's authentication server
      
  • Resource - The name for the client created on step 2. In our example, use the value kie.
      
  • Enable basic auth - For this example let's enable the Basic authentication mechanism as well, so clients can use both Token (Bearer) and Basic approaches to perform the requests.
      
  • Credential - Use the password value for the kie client. You can find it in the Keycloak admin console -> Clients -> kie -> Credentials tab -> Copy the value for the secret.

For this example you have to take care to use your concrete values for the secure-deployment name, realm-public-key and credential password. You can find detailed information about the KC adapter configurations here.

Step 3.3 - Run the environment

At this point a Keycloak server is up and running on the host, and the KC adapter is installed and configured for the jBPM application server. You can run the application using:

    $JBPM_HOME/bin/standalone.sh -c standalone-full.xml

You can navigate into the application once the server is up at:


jBPM & SSO - Login page 
Use your Keycloak admin user credentials to login: admin/password

Securing workbench remote services via Keycloak

Both the jBPM and Drools workbenches provide different remote service endpoints that can be consumed by third party clients using the remote API.

In order to authenticate those services through Keycloak, the BasicAuthSecurityFilter must be disabled. Apply these modifications to the WEB-INF/web.xml file (the app deployment descriptor) in jBPM's WAR file:

1.- Remove the filter:

 <filter>
  <filter-name>HTTP Basic Auth Filter</filter-name>
<filter-class>org.uberfire.ext.security.server.BasicAuthSecurityFilter</filter-class>
<init-param>
<param-name>realmName</param-name>
<param-value>KIE Workbench Realm</param-value>
</init-param>
</filter>

<filter-mapping>
<filter-name>HTTP Basic Auth Filter</filter-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</filter-mapping>

2.- Constrain the remote services URL patterns as:

<security-constraint>
<web-resource-collection>
<web-resource-name>remote-services</web-resource-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>rest-all</role-name>
</auth-constraint>
</security-constraint>


Important note: The user that consumes the remote services must be a member of the role rest-all. As described in step 2, the admin user in this example is already a member of the rest-all role.





Execution server


The KIE Execution Server provides a REST API that can be consumed by any third party client. This section is about how to integrate the KIE Execution Server with the Keycloak SSO in order to delegate the third party clients' identity management to the SSO server.
Consider the above environment up and running, that is, consider having:
  • A Keycloak server running and listening on http://localhost:8180/auth
      
  • A realm named demo with a client named kie for the jBPM Workbench
      
  • A jBPM Workbench running at http://localhost:8080/kie-wb-6.4.0-Final
Follow these steps in order to add an execution server into this environment:


  • Create the client for the execution server on Keycloak
  • Install setup and the Execution server ( with the KC client adapter  )
Step 1 - Create the client for the execution server on Keycloak

For each execution server that is going to be deployed, you have to create a new client on the demo realm in Keycloak.
  1. Go to the KC admin console -> Clients -> New client
  2. Name: kie-execution-server
  3. Root URL: http://localhost:8280/  
  4. Client protocol: openid-connect
  5. Access type: confidential (or public if you want, but not recommended)
  6. Valid redirect URIs: /kie-server-6.4.0.Final/*
  7. Base URL: /kie-server-6.4.0.Final
In this example the admin user already created in previous steps is the one used for the client requests. So ensure that the admin user is a member of the role kie-server in order to use the execution server's remote services. If the role does not exist, create it.

Note: This example considers that the execution server will be configured to run using a port offset of 200, so the HTTP port will be available at localhost:8280

Step 2 - Install and setup the KC client adapter and the Execution server

At this point, a client named kie-execution-server is ready on the KC server to use from the execution server. Let's install, setup and deploy the execution server:
  
1.- Install another Wildfly server to use for the execution server, and the KC client adapter as well. You can follow the above instructions for the Workbench or follow the official adapters documentation.
  
2.- Edit the standalone-full.xml file from the Wildfly server's configuration path and configure the KC subsystem adapter as:

<secure-deployment name="kie-server-6.4.0.Final.war">
<realm>demo</realm>
<realm-public-key>
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB
</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie-execution-server</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">e92ec68d-6177-4239-be05-28ef2f3460ff</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>

Consider your concrete environment settings if they are different from this example:
  • Secure deployment name -> use the name of the execution server war file being deployed
      
  • Public key -> Use the demo realm public key or leave it blank; the server will provide one in that case
       
  • Resource -> This time, instead of the kie client used in the WB configuration, use the kie-execution-server client
      
  • Enable basic auth -> Up to you. You can enable Basic auth for third party service consumers
       
  • Credential -> Use the secret key for the kie-execution-server client. You can find it in the Credentials tab of the KC admin console.
       
Step 3 - Deploy and run an Execution Server

Just deploy the execution server in Wildfly using any of the available mechanisms.
Run the execution server using this command:
$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=<ID> -Dorg.kie.server.user=<USER> -Dorg.kie.server.pwd=<PWD> -Dorg.kie.server.location=<LOCATION_URL>  -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTOLLER_PASSWORD>  
Example:

$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=kieserver1 -Dorg.kie.server.user=admin -Dorg.kie.server.pwd=password -Dorg.kie.server.location=http://localhost:8280/kie-server-6.4.0.Final/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb-6.4.0.Final/rest/controller -Dorg.kie.server.controller.user=admin -Dorg.kie.server.controller.pwd=password  
Important note: the users that will consume the execution server's remote service endpoints must have the kie-server role assigned, so create this role in the KC admin console and assign it to those users.
  
Once it is up, you can check the server status as follows (Basic authentication is assumed for this request; see the Consuming remote services section below for more information):
 
curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/
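
If everything is wired correctly, the endpoint replies with a success message that includes the server's information, such as its version. The exact fields vary across versions; a rough sketch of the shape, not the literal 6.4.0 payload:

<response type="SUCCESS" msg="Kie Server info">
  <kie-server-info>
    <version>6.4.0.Final</version>
  </kie-server-info>
</response>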

Consuming remote services

In order to use the different remote services provided by the Workbench or by an Execution Server, your client must be authenticated on the KC server and have a valid token to perform the requests.

NOTE: Remember that in order to use the remote services, the authenticated user must have the following roles assigned:

  • The role rest-all for using the WB remote services
  • The role kie-server for using the Execution Server remote services

Please ensure necessary roles are created and assigned to the users that will consume the remote services on the Keycloak admin console.

You have two options to consume the different remote service endpoints:

  • Using basic authentication, if the application's client supports it
  • Using Bearer (token) based authentication

Using basic authentication

If the KC client adapter configuration has Basic authentication enabled, as proposed in this guide for both the WB (step 3.2) and the Execution Server, you can avoid the token grant/refresh calls and just call the services as in the following examples.

Example for a WB remote repositories endpoint:

curl http://admin:password@localhost:8080/kie-wb-6.4.0.Final/rest/repositories

Example to check the status of the Execution Server:

curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/

Using token based authentication

The first step is to create a new client on Keycloak that allows third party remote service clients to obtain a token. It can be done as follows:
  • Go to the KC admin console and create a new client using this configuration:
    • Client id: kie-remote
    • Client protocol: openid-connect
    • Access type: public
    • Valid redirect URIs: http://localhost/
         
  • As we are going to manually obtain a token and invoke the service, let's increase the token lifespan slightly. In production, access tokens should have a relatively short lifespan, ideally less than 5 minutes:
    • Go to the KC admin console
    • Click on your Realm Settings
    • Click on Tokens tab
    • Change the value for Access Token Lifespan to 15 minutes (that should give us plenty of time to obtain a token and invoke the service before it expires)

Once a public client for our remote clients has been created, you can obtain a token by performing an HTTP request to the KC server's token endpoint. Here is an example for the command line:

RESULT=`curl --data "grant_type=password&client_id=kie-remote&username=admin&password=password" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token`

TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`
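
The sed extraction above depends on the exact layout of the JSON response; if Python is available on your machine, a more robust alternative is to parse the response as JSON:

TOKEN=`echo $RESULT | python -c "import sys, json; print(json.load(sys.stdin)['access_token'])"`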

At this point, if you echo $TOKEN, it will output the token string obtained from the KC server, which can now be used to authorize further calls to the remote endpoints. For example, if you want to check the internal jBPM repositories:

curl -H "Authorization: bearer $TOKEN" http://localhost:8080/kie-wb-6.4.0.Final/rest/repositories


by Roger Martinez (noreply@blogger.com) at March 19, 2016 09:19 PM