Planet BPM

August 17, 2016

Drools & JBPM: Red Hat BPMS and BRMS 7.0 Roadmap Document - With a Focus on UI Usability

BPMS and BRMS 6.x laid a lot of foundations, but the UI aspects fell short in a number of areas with regard to maturity and usability.

In the last 4 years Red Hat has made considerable investment in the BPMS and BRMS space. Our engineering numbers have tripled, and so have our QE numbers. We also now have a number of User Experience and Design (UXD) people to improve our UI designs and usability.

As a result, we hope the 7.x series will take our product to a whole new level, with a much stronger focus on maturity and usability, now that we have the talent and bandwidth to deliver.

We had an internal review where we had to demonstrate how we were going to go about delivering a kick-ass product in 7.0. I thought I would share in this blog what we produced, which is a roadmap document with a focus on UI usability. The live version can be found on Google Docs, here - feel free to leave comments.

Enjoy :)

Mark
BPMS and BRMS Platform Architect.

Other Links:
Drools 7.0 Happenings  (Includes videos)
Page and Form Builder Improvements (Video blog)
Security Management (Detailed blog on 7.0 improvements)
User and Group Management (Detailed blog on 7.0 improvements)
--------

About This Document

This document presents the 7.0 roadmap with an eye on usability - where, how and for whom. It is an aggressive and optimistic plan for 7.0, and it is fully expected that some items, or parts of some items, will eventually be pushed to 7.1 to ensure we can deliver close to on time. Longer-term 7.1-and-onward items are not discussed in this document, although it does touch on some items that would be raised as a result of reading it - such as the "What's Not Being Improved" (for 7.0) section.

Wider field feedback remains limited, with a scarcity of specifics. This makes it challenging to take a more evidence-based approach to planning, one which can stand up strongly to scrutiny on all sides. However, engineering and UXD have been working with the field on this topic, primarily through Jim Tyrrell and Justin Holmes, over the last year, and this document represents the culmination of many of those discussions. As such it represents a good heuristic, based on the information and resources available to us at the time.

Understanding Feedback from the Field

Broadly speaking, we have two types of customers:
  1. Those who want developers to use our product, oftentimes embedded in their apps
  2. Those who want a cross-functional team to use our product

Generally speaking, we do quite well with customer 1, but we have a huge challenge with customer 2. The market has set a pretty clear expectation on features and quality for targeted audiences, with IBM ODM and Pega's BPM/Case Management. Almost every type 2 customer either has a significant deployment of these two competitors in place, or the decision maker has done significant work with these products in the past. Moreover, customer 2 is interested in larger, department- or organization-wide deployments, while customer 1 is usually interested in project-level deployments.

Customer 2 is primarily upset with our authoring experience, both in Eclipse and in Business Central. It is uncommon for customer 1 or 2 to be upset about missing features or functions in our runtime (especially now that 6.3 has been released with a solid execution server and management function), and when she is, our current process to resolve these gaps works well. Therefore, the field feedback in this document (and our current process) is focused on the authoring experience. This isn't to say other elements of the product are perfect, but simply an acknowledgement that we have limited time and energy and that the authoring experience is the most important barrier to success with customer 2.

The key issues we have on the authoring side are fundamental (customer stories available here on request - some are a bit off color). Generally, these issues fall into three areas, which are further enumerated in "Product Analysis and Planned Changes."
  1. Lack of support for a team centric workflow - Functional
    1. See Asset Manager (we need to add detail here)
  2. Knowledge Asset Editors
    1. See BPMN2 designer (functional / reliable), decision table editor (usable) and data modeller (usable), forms (usable)
  3. Navigation between functions and layout of those functions in the design perspective
    1. See Design (Authoring perspective)
    2. Deepak - (usable/reliable)
    3. Aimee - Functional

Introduction - The Product Maturity Model and What Usability Is

Version 6.x has done well getting BRMS and BPMS to where they are today, with a strong revenue stream. The product maturity model (see image below) is a useful tool for discussing product improvements. It demonstrates that we are low on the model and need to mature and move up if we are to continue to improve sales. Too many aspects of the system, within the UI, may be considered neither functional (F), nor reliable (R), nor usable (U). The purpose of this document is to articulate a plan to address these issues, and in particular to highlight the type of users the tool is being designed for and what they'll be doing with it. The goal for 7.0 is to get as close as possible to the "chasm" described in the model, with an aim to go beyond it as 7.x matures.



When discussing usability it’s very important we understand whether we are talking about lack of features (F), too many or too serious defects (R) or poor UI design (U).

Quite often people report an issue as usability simply because they want to go from A to D but get stuck at B or C - either because the functionality is not there to complete the task, or because it's too buggy and they cannot progress. So while good UI design is important, we must balance our efforts across F, R and U to become usable - a focus on UI design alone will not help usability if the underlying product is neither reliable nor functional. Commonly this is called Human Centered Design. By leveraging this common vocabulary, we can foster a more effective and inclusive dialogue with the wider team. So going forward, we are asking our stakeholders to employ the usability model presented here, and in particular the Functional, Reliable, Usable and Convenient terms.

High Level Goal

A minimum viable product for case management is the main goal for 7.0. Case management provides a well-defined end-to-end use case for product management, engineering and UXD. This is more than just adding another feature: when a user creates an end-to-end case management solution they will need to use most aspects of our system. Case management also has a clear set of target audiences (personas) for the design UI and the case worker UI. This allows us to identify where, how and for whom our "fit and finish" efforts are spent, ensuring a strongly directed focus on what we do and making it easier to communicate this - with, hopefully, a more realistic understanding of expectations from others within the organisation.

High Level Plan

When considering the plan as a whole, the initial target user, or persona, for the 7.0 design UI is that of a casual or low-skilled developer, who typically favours tooled (low-code) environments where possible. See Deepak in Personas.

Where possible and where it makes sense, designs will be optimized for the less technical, citizen developers of the Aimee and Cameron personas, with either optional advanced functionality for Deepak or common-denominator designs suitable for all personas. While citizen developers are not the primary focus for 7.0, they will become increasingly important and should ideally be targeted from 7.1 onwards, so it's important that as much as practically possible is done in this direction in 7.0. See "The advent of the citizen developer".

7.0 will primarily focus on all the components and parts that a Business Central user will come into contact with while building a case management solution. For each of those areas we will aim for a sustained effort, over a long period of time, to ensure depth and maturity, with UXD fully involved.

The aim for case management, the targeted components it uses and the Deepak persona is to achieve an acceptable level of functional, reliable and usable. For 7.1 we hope to look more holistically across the system and cross the chasm to become convenient. To become convenient we will need a strong effort looking at the end-to-end user interaction with the system, streamlining all the steps users go through and making it easier and faster for them to achieve the goals they set out to achieve.

Detailed plans here
Detailed resource allocation, here.

Product Changes Done (6.3)

  • The whole of Business Central was updated to PatternFly for v6.3. (See screenshots at end.)
  • The execution server UI has been fully redesigned with UXD involvement and great field feedback. (See screenshots at end.)
    • “I want to congratulate you on the great work on the new kie server management features and UI. It's surprisingly intuitive and does just what it needs to do. Keep up the good work!” (Justin Holmes, Business Automation Practice Lead).
  • The process runtime views have been augmented with the redesigned and newly integrated DashBuilder. They look great and have already had good feedback. (See screenshots at end.)

Product Analysis and Planned Changes

The 7.0 development cycle only started in early/mid May, so we do not yet have UXD input (wireframes/CSS/HTML) for every area. This UXD input will take time and will be produced incrementally across the product throughout the 7.0 life cycle. What we do have, included below, is an outline of where those efforts will be.
  • Design (Authoring perspective)
    • Problem:
      • The authoring perspective is designed for power users, and fails to work for less technical personas.
      • The project configuration works just like normal editors, which is confusing.
      • The project explorer mixes switching org/repo/project and navigation, which crowds the area. It’s also repository oriented.
      • Versioning and branching are too hard, and commits do not squash, creating long unreadable logs for every small save.
    • Solution:
      • See UXD wire diagrams for most of what is described here, although there is still more to do.
      • Create new views for navigating projects that are content- and information-oriented and more suitable for the casual coder, moving towards the citizen developer. Make things project-oriented.
      • Centralise project settings, and improve their reliability and usability.
  • Support for Collaborative Team Based Workflow
    • Problem:
      • Most customers using Business Central want it to support a team, which generally reflects the Deepak, Aimee, Paula and Cameron from our personas.
      • We have no clear workflow for changes to be approved and promoted in the team.
        • The asset manager (versioning and branch management) needs an overhaul. It is extremely confusing, to the point of not being functional even for technical users. The current feature does unusual branching/merging with Git in a single repo, so it's too technical for Aimee yet confusing for Deepak, as it doesn't follow conventions.
        • The screens are far too small to be usable, and the actual workflow can be quite confusing.
        • The feature hasn't been through QE.
      • The single Git repository model can make integrating Business Central into a CI/CD flow complicated. It's doable now that we have Git hooks, but it is far from convenient. Given our strength in the CI/CD space, this needs to get to convenient.
    • Solution
      • Underlying changes going on for the cloud work (every user gets their own fork) will put in place the backend that will make this easier to progress. Exactly how we will improve the UXD here, to hide and simplify Git, has to be investigated. We have a hiring slot open for someone to focus on this area.
      • We will move to a repository per user. This will support a pull-request-style workflow in the tool between users.
      • A repo per user will also simplify CI/CD.
      • To be clear, 7.0 will work to improve around the scope of what we have in 6.x now, as we have limited time left for 7.0, with the aim of being minimally viable for Deepak. It's not clear how easy we can make this for Aimee too. Likewise, wider collaborative workflow really needs to be considered future work, to avoid expectation problems.
  • BPMN Designer
    • Problem
      • The BPMN designer is the most important area in the product and also the area that gets the most complaints. These complaints are primarily about reliability: Oryx was inherited from an old community project (for time to market) and came with too much technical debt. There are lots of small details which can detract from the overall experience.
      • Oryx is not testable, and regressions happen with almost every fix, making it very hard and costly to stabilise.
    • Solution
      • Work with the Lienzo (a modern canvas library) community to build a new Visio-like tool that can support BPMN2 and provide a commercial-quality experience.
      • Have a strong focus on enabling testability.
      • Real time drawing of shapes and lines during drag. Including real time alignment and distribution guidelines and snap.
      • Proper orthogonal lines, with multipoint support, and heuristics to provide minimal number of turns for each line.
      • Reduced and more attractive property panels (designed by UXD) for each of the node types, focusing on hiding technical details and (also) targeting less technical users.
      • Change palette from accordion to vertical bar with fly-outs. Support standard and compact palettes.
    • Eclipse
      • To unify the authoring experience across web and Eclipse, we are investigating using web-based modelling components inside Eclipse, without the need for Business Central or any other server. However, this is a research topic and we are unable to promise anything. We plan to investigate decision tables first, as they are simpler, requiring only a single view (and also using Lienzo), which may make 7.0. If that goes well, we will look into the designer - but this is not planned for 7.0.
      • Until we have a supported Lienzo-based BPMN2 designer for Eclipse, we will continue to support and maintain the existing Eclipse plug-in. The existing items, such as project wizards, will remain and have support.
  • Administration/Settings
    • Problem:
      • Administration and settings are spread out in different locations and are neither consistent nor intuitive. In some cases, such as imports, they have been buggy.
    • Solution:
      • Centralise administrations and settings and ensure they are consistent and intuitive.
      • Ensure all administration and settings are reliable.
      • Work with UXD on improving designs.
        • Designs TBD.
  • Case Management
    • This does not exist yet, but UXD are involved. They have produced visionary documents, which go beyond what we can implement now, and are working with us to produce more incremental and simpler steps that we can achieve for 7.0.
  • Decision Tables
    • Problem
      • There are not a lot of complaints about decision tables, other than that they could be more attractive. The main issue is that they are not as functional as our competitors' offerings.
    • Solution
      • Focus the two Drools UI developers solely on decision tables and on moving towards Decision Model and Notation (DMN), an OMG standard for decision tables that complements BPMN2.
      • Must support tabular chaining (part of the DMN spec), design-time verification and validation, and Excel import/export.
      • Work with UXD to improve the aesthetics.
  • Reporting (DashBuilder)
    • Problem
      • DashBuilder is already a mature and well-featured product, with few complaints. However, it came from Polymita and uses a different technology stack, which produces a design mismatch - it's not PatternFly. Nor can its charts be easily integrated into other pages, which is necessary for process views and case management.
    • Solution
      • An effort has been under way for some time to port DashBuilder to the same technology as the rest of the platform and adopt PatternFly. The results can already be seen in the improved jBPM process views in 6.3, and we should have full migration for 7.0.
  • Forms
    • Problem
      • This is an inherited Polymita item which was written in a different technology stack; it never integrated well, nor is it PatternFly, creating an impedance mismatch.
      • It has some powerful parts, but its layout capabilities are too limited: users are restricted to adding new items in rows only. There is no row spanning, and there are no grid-like views.
    • Solution
      • A new effort has been under way for some time that ports the forms to the same technology stack as the rest of the platform and adopts PatternFly.
      • We are focusing on a Bootstrap grid layout system, to ensure we have intuitive and powerful layout capabilities. We have invested in a dynamic-grid system for Bootstrap grids, to avoid the issue of having to design your layout first, as it's hard to change afterwards.
      • We are working with UXD to redesign each of the editors for the form components.
  • Data Modeller
    • Problem
      • There are fewer complaints about this item than others, probably due to its simpler nature. But UXD have a number of requests to try to improve the overall experience anyway.
    • Solution
      • Support simple business types (i.e. number, string, currency), optionally and in addition to Java types; we won't lose the ability to use the Java types when required.
      • Layout changes and CSS improvements.
      • Longer term we need a visual ERD/UML-style modeller, but that will not happen for 7.0.
  • Data Services/Management
    • This does not exist yet, but it is necessary for case management to work end-to-end. It entails the system allowing data sources to be used, tables to be viewed and their data to be edited. More importantly, it allows design-time data-driven components for forms.

What's Not Being Improved for 7.0

  • 7.1 will need a stronger focus on becoming more convenient and pleasurable. This will require streamlining how the user uses the tool as a whole, making it easier and faster for them to get things done. Wizards, task-oriented flows and generally improved interaction design will be essential here.
  • General
    • Refactoring
  • BRMS
    • Guided Editor
    • Scenario/Simulation
      • We hope to pick this up for 7.1 in 2017.
    • DSLs
  • BPMS
    • Redesign of the navigation
    • Major redesign of process instance list or task list (though adding features to support case management)
      • More focus on building custom case applications that can be tailored specifically to what the customer needs
  • Product Installer
    • It is unclear if the product team will be improving the usability of the installer and patching.
  • Product Portal and Download
    • It is unclear if the product team will be improving how product and patches are found.

Other Notable Roadmap Work

  • Drools
    • Drools is currently focusing on enabling multi-core scalability for CEP use cases, as well as high availability for CEP use cases. There is also ongoing longer-term research into POJO rules and a DRL replacement (which will most likely be a superset of Java).
  • jBPM
    • Horizontal scaling for the cloud is the main focus for jBPM. It presents a number of challenges, relating to how processes running on different services work with each other, as well as how signals and messages are routed and information is collected and aggregated.
  • OptaPlanner
    • Horizontal scaling through Partitioned Search is the main focus for OptaPlanner.

Organisational Changes Done and Ongoing

  • The group is now focusing engineers on specific parts of the product for longer periods of time. This will bring depth and maturity to the areas those engineers work on.
    • 6.x focus was on rapid breadth expansion of features. This gave time to market, which allowed the revenue growth we have, but comes with the pains we have now. The shift to depth will help address this.
  • Migrating to PatternFly
    • This allows engineering and UXD to be more fully engaged, ensures our product is consistent with all other Red Hat products, and allows Business Central to leverage ongoing research from the PatternFly team.
  • The UXD team has increased from 1 person to 2.5, with one person dedicated to providing HTML and CSS to developers.
  • Usability testing of primary workflows and new features with participants representing target Personas for the given workflows/features.
  • The field has become, and continues to become, more engaged via the BPM and BRMS Community of Practice initiative, and in particular Justin Holmes and Jim Tyrrell's involvement.
    • They have attended multiple team meetings now, and provide constant feedback and guidance. This has been invaluable.
    • The field engages with UXD in a twice-monthly meeting, which led the effort in developing personas. These design tools provide a structure for discussions about who our users are and what we need to build in order to make them happy. Today, these personas are all focused on the design/authoring experience, as this is currently the field's biggest perceived gap in features and we want to focus our effort as much as possible.
    • Jim Tyrrell is proposing to lead a regular field UXD review, looking at any changes going on in community as they happen. This effort should be scheduled roughly every 3 weeks.
    • We should also think about bringing in System Integrator consulting partners to help with designing our offering.
    • Engineering releases of the product are being consumed by SAs and consultants in order to do exploratory testing before GA.
  • More continuous sustaining effort: organisational and planning changes to support a continuous effort on improving the quality of the platform across the board.  Rather than continuous switching of developer’s focus or postponing bug fixing towards the end of the cycle, there should be a continuous effort to fix known issues (large and small) to improve the overall quality and experience.  Currently set at 20% on average across the team (where some developers are much more focused on sustaining than others).
  • The documentation team have agreed to move to the same tooling (asciidoc) and content source (git repo) as engineering. This should make it easier for them to stay in sync and add value.
    • For 6.x and prior, the documentation team had been siloed, using a completely different tool chain and document source. They were unable to effectively track community docs, meaning that product docs lagged behind, lacked content and were often wrong. This meant the product docs devalued the product compared to community. We would typically hear field people say they wished they could just show community docs to customers rather than product docs - a situation that cannot be allowed to continue.
  • A subcontractor has been hired to assist with user guide and getting-started documentation in a tutorial format, as well as installation and setup, to improve the onboarding experience. This work is currently focused on 6.x, but it will be updated to 7.0 towards the end of the project life cycle.
  • QE are now working far more closely with engineering, adding tests upstream into community and ensuring they run earlier and regressions are found faster. We have also been working to embed the QE team within engineering, so that there is greater communication, and thus understanding and collaboration, between engineering and QE (which did not happen on 6.x or earlier).
  • We have greatly improved our PR process, with gatekeepers and an insistence that all code, backend and frontend, is now reviewed for tests. 6.x had no community-provided UI tests; this is no longer the case for 7.x.
  • We have also improved our CI/CD situation.

6.3 Improvement Images

Execution Server

Data Modeller (Before and After)

jBPM Runtime Views (Before and After)

by Mark Proctor (noreply@blogger.com) at August 17, 2016 12:27 AM

August 05, 2016

Drools & JBPM: Page and Form builder for Bootstrap responsive grid views - a progress update

Eder has made great progress on the page and form builders, which are built on top of Bootstrap responsive grid views.

We love the responsive aspects of Bootstrap grid views, but felt existing tools (such as Layoutit) exposed the construction of the grid too much to users. Furthermore, changing the structure of a page after it has been made and populated is not easy. We wanted something that built the grid automatically and invisibly, based on the dragging and positioning of components - a simplified sketch of the idea follows below.
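
To make that concrete, here is a minimal sketch (an assumption for illustration only, not the actual builder code): each time a component is dropped into a row, the builder can recompute that row's Bootstrap column spans so they always sum to the 12-column grid, with no grid markup ever edited by hand.

import java.util.List;

final class RowLayout {
    static final int GRID_COLUMNS = 12; // Bootstrap's fixed column count

    // Distribute the 12 grid columns across the components in a row;
    // leftover columns widen the leftmost components by one each.
    static int[] recomputeSpans(List<String> componentsInRow) {
        int n = componentsInRow.size();
        int[] spans = new int[n];
        if (n == 0) {
            return spans; // empty row: nothing to lay out
        }
        int base = GRID_COLUMNS / n;
        int remainder = GRID_COLUMNS % n;
        for (int i = 0; i < n; i++) {
            spans[i] = base + (i < remainder ? 1 : 0);
        }
        return spans;
    }
}

Dropping a third component into a two-component row would then change the spans from 6/6 to 4/4/4 automatically.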

The latest results can be seen in this YouTube video (best to watch full screen and select HD):
 https://www.youtube.com/watch?v=LZdU7cCUfrM

We have other videos, from earlier revisions of the tool, that you can also watch, as well as peripheral related tools.
Page Builder
Form Builder
Page/App Deployment
Page Permissions
User and Groups

by Mark Proctor (noreply@blogger.com) at August 05, 2016 11:54 PM

July 29, 2016

Drools & JBPM: Security management in jBPM & Drools workbenches


The jBPM 7 release will include an integrated security management solution to allow administrator users to manage the application’s users, groups and permissions using an intuitive and friendly user interface. Once released, users will be able to configure who can access the different resources and features available in the workbench.

In that regard, a first implementation of the user & group management features was announced about 3 months ago (see the announcement here). This is the second article in the series, and it describes what permissions are and how they extend the user and group management features in order to deliver a full security management solution. Before going further, let's introduce some concepts:

Basic concepts

Roles vs Groups

Users can be assigned more than one role and/or group. It is always mandatory to assign at least one role to the user, otherwise he/she won't be able to log in.

Roles are defined at the application server level, as <security-role> entries in the application's web.xml descriptor. Groups, on the other hand, are a more flexible concept, since they can be defined at runtime. Both can be used together without any trouble. Groups are recommended, as they are more flexible than roles.
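
For reference, a role such as "admin" is declared in web.xml using the standard Java EE element (a minimal example):

<security-role>
  <role-name>admin</role-name>
</security-role>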

Permissions

A permission is basically something the user can do within the application - usually, an action related to a specific resource. For instance:

  • View a perspective 
  • Save a project 
  • View a repository 
  • Delete a dashboard 

A permission can be granted or denied and it can be global or resource specific. For instance:

  • Global: “Create new perspectives” 
  • Specific: “View the home perspective” 

As you can see, a permission is a resource + action pair. In the concrete case of a perspective we have read, update, delete and create as the available actions. That means there are four possible permissions that could be granted for perspectives.
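
Expressed in the policy-file syntax shown later in this post, those four permissions for, say, the admin role would be:

role.admin.permission.perspective.create=true
role.admin.permission.perspective.read=true
role.admin.permission.perspective.update=true
role.admin.permission.perspective.delete=true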


Permissions do not necessarily need to be tied to a resource. Sometimes it is also necessary to protect access to specific features, for instance "generate a sales report". That means permissions can be used not only to protect access to resources but also to protect custom features within the application.

Authorization policy

The set of permissions assigned to every role and/or group is called the authorization (or security) policy. Every application contains a single security policy which is used every time the system checks a permission.

The authorization policy is initialized from a file called WEB-INF/classes/security-policy.properties under the application's WAR structure.

NOTE: If no policy is defined then the authorization management features are disabled and the application behaves as if all the resources & features were granted by default. 

Here is an example of a security policy file:

# Role "admin"
role.admin.permission.perspective.read=true
role.admin.permission.perspective.read.Dashboard=false


# Role "user" 
role.user.permission.perspective.read=false 
role.user.permission.perspective.read.Home=true 
role.user.permission.perspective.read.Dashboard=true

Every entry defines a single permission, which is assigned to a role or group. On application start-up, the policy file is loaded and stored in memory.

Usage

The Security Management perspective is available under the Home section in the workbench's top menu bar.

The next screenshot shows how this new perspective looks:

Security Management Perspective

Compared to the previous version, this new perspective integrates into a single UI the management of roles, groups & users, as well as the editing of the permissions assigned to both roles & groups. In concrete terms:
  • List all the roles, groups and users available 
  • Create & delete users and groups 
  • Edit users, assign roles or groups, and change user properties
  • Edit both roles & groups security settings, which include: 
    • The home perspective a user will be directed to after login 
    • The permissions granted or denied to the different workbench resources and features available 

All of the above together provides a complete user and group management subsystem, as well as a permission configuration UI for protecting access to workbench resources and features.

Role management

Selecting the Roles tab in the left sidebar shows all the application roles:

Unlike users and groups, roles cannot be created nor deleted, as they come from the application's web.xml descriptor.

NOTE: User & group management features were described in detail in this previous article

After clicking on a role in the left sidebar, the role editor is opened on the right of the screen; it is exactly the same editor used for groups.

Security settings editor

Security Settings 

The above editor is used to set several security settings regarding both roles and groups.

Home perspective

This is the perspective the user is directed to after login. It makes it possible to have different home pages for different users, since users can be assigned different roles or groups.

Priority

This setting determines which other settings (home perspective, permissions, …) take precedence for users with more than one role or group assigned.

Without this setting, it wouldn't be possible to determine which role/group should take precedence. For instance, an administrative role should have higher priority than a non-administrative one. For users granted both administrative and non-administrative roles, administrative privileges will always win, provided the administrative role's priority is greater than the other's.
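
In the policy file this is just another per-role entry, in the same format as the example shown later in this post (the numbers here are illustrative; the higher priority wins):

# Administrators outrank regular users when both roles are assigned
role.admin.priority=10
role.user.priority=1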

Permissions

Currently, the workbench supports the following permission categories.

  • Workbench: General workbench permissions, not tied to any specific resource type. 
  • Perspectives: If access to a perspective is denied then it will not be shown in any of the application menus. Update, Delete and Create permissions change the behaviour of the perspective management plugin editor. 
  • Organizational Units: Sets who can Create, Update or Delete organizational units from the Organizational Unit section of the Administration perspective. Also sets which organizational units are visible in the Project Explorer in the Project Authoring perspective. 
  • Repositories: Sets who can Create, Update or Delete repositories from the Repositories section of the Administration perspective. Also sets which repositories are visible in the Project Explorer in the Project Authoring perspective. 
  • Projects: In the Project Authoring perspective, sets who can Create, Update, Delete or Build projects from the Project Editor screen, as well as which projects are visible in the Project Explorer. 


For perspectives, organizational units, repositories and projects, it is possible to define global permissions and then add single-instance exceptions. For instance, Read access can be granted to all perspectives while denying access to just an individual perspective. This is called the "grant all, deny a few" strategy.

The opposite, the "deny all, grant a few" strategy, is also supported:
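
Reusing the policy-file entries shown earlier, a "deny all, grant a few" configuration for the user role would look like this:

# Deny access to all perspectives, then grant just a few
role.user.permission.perspective.read=false
role.user.permission.perspective.read.Home=true
role.user.permission.perspective.read.Dashboard=true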

NOTE: In the example above, the Update and Delete permissions are disabled, as it does not make sense to define such permissions if the user is not even able to read perspectives.

Security Policy Storage

The security policy is stored in the workbench's VFS - more concretely, in a Git repo called "security". The ACL table is stored in a file called "security-policy.properties" under the "authz" directory. Here is an example of the entries this file contains:

role.admin.home=HomePerspective 
role.admin.priority=0 
role.admin.permission.perspective.read=true 
role.admin.permission.perspective.create=true 
role.admin.permission.perspective.delete=true 
role.admin.permission.perspective.update=true

Every time the ACL is modified from the security settings UI, the changes are stored in the Git repo. Initially, when the application is deployed for the first time, there is no security policy stored in Git. However, the application might need to set up a default policy with the different access profiles for each of the application roles.

To support default policies, the system allows a security policy to be declared as part of the webapp's content. This can be done just by placing a security-policy.properties file on the webapp's resource classpath (the WEB-INF/classes directory inside the WAR archive is a valid location). On app start-up the following steps are executed:

  • Check if an active policy is already stored in Git 
  • If not, check if a policy has been defined on the webapp's classpath 
  • If found, that policy is stored in Git 

The above is an auto-deploy mechanism, which is used in the workbench to set up its default security policy. A sketch of this logic appears below.
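
As a rough illustration of those steps in plain Java (the paths and class name here are assumptions for the sketch; the real workbench performs this against its Git-backed VFS):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

final class PolicyDeployer {
    // Illustrative only: "security/authz" stands in for the Git-backed VFS.
    void deployDefaultPolicy() throws Exception {
        Path active = Paths.get("security", "authz", "security-policy.properties");
        if (Files.exists(active)) {
            return; // step 1: an active policy is already stored
        }
        // step 2: look for a default policy on the webapp's classpath
        try (InputStream bundled = getClass().getResourceAsStream("/security-policy.properties")) {
            if (bundled != null) {
                // step 3: store the bundled policy as the active one
                Files.createDirectories(active.getParent());
                Files.copy(bundled, active);
            }
        }
    }
}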

One slight variation of the deployment process is the ability to split the "security-policy.properties" file into smaller pieces, so that it is possible, for example, to define one file per role. The split files must start with the "security-module-" prefix, for instance "security-module-admin.properties". The deployment mechanism will read and deploy both the "security-policy.properties" file and all the optional "security-module-?.properties" files found on the classpath.

Notice that, even when using the split approach, the "security-policy.properties" file must always be present, as it is used as a marker file by the security subsystem to locate the other policy files. This split mechanism allows for a better organization of the whole security policy.

Authorization API

Uberfire provides a complete API around permissions. The AuthorizationManager is the main interface for checking whether permissions are granted to users:
@Inject
AuthorizationManager authzManager; // injected security service

Perspective perspective1;
User user;
...
// true if the user is granted access to the perspective
boolean result = authzManager.authorize(perspective1, user);

Using the fluent API, the same check can also be expressed as:

authzManager.check(perspective1, user)
    .granted(() -> ...)
    .denied(() -> ...);
The security check calls always use the permissions defined in the security policy.
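
As a hypothetical usage sketch (the menu API below is invented for illustration; only authorize() comes from the API above), the boolean form lends itself to conditionally building the UI:

// Only expose the Dashboard menu entry when the current user may read it
if (authzManager.authorize(dashboardPerspective, currentUser)) {
    menu.add("Dashboard"); // hypothetical menu API
}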

For those interested in these APIs, an entire chapter can be found in the Uberfire documentation.

Summary


The features described above bring even more flexibility to the workbench. Users and groups can be created right from the workbench, new assets like perspectives or projects can be authored and, finally, specific permissions can be granted or denied for those assets.
In the future, as the authoring capabilities improve, more permission types will be added. The ultimate goal is to deliver zero/low-code, very flexible and customizable tooling that allows users to develop, build and deploy business applications in the cloud.


by David Gutiérrez (noreply@blogger.com) at July 29, 2016 06:58 AM

July 28, 2016

Thomas Allweyer: Book with a Guide to Building Process Applications with Bonita

[Cover: Designing Efficient BPM Applications]

This English-language book introduces the development of process applications with the BPM system "Bonita", whose free Community Edition I also use in my own teaching and in my BPMS book. The ability to build complete process applications was introduced last year, along with several other new features, in Bonita version 7.

In many BPMS installations, a task list forms the central user interface. It contains all the tasks an employee has to perform, which may come from different processes. A process application, by contrast, has an individually tailored interface; for users it is not even immediately apparent that they are working with a BPMS. As an example, the book develops a travel expense application. The entry point is a web page with an overview of your own travel requests and their approval status. From here you can submit new travel requests, or change or cancel existing ones, each of which starts a process. Managers additionally see the requests submitted by their staff on the start page and can approve or reject them.

The book walks you step by step through developing this process application. First, the web page serving as the start page is created and tested with manually entered sample data. In the following steps, the BPMN model of the travel expense request is built up and successively extended. Then come the assignment of tasks to participants, the data model, the user dialogs for the individual steps, the formulation of conditions at gateways, and interfaces to external systems, for example for sending e-mails automatically. For the case that a manager forgets to process a request, several escalation steps are built in. In the final stage of the application, several processes work together: the process for cancelling a request, for instance, ensures that the associated request process, which may still be running, is aborted.

The individual steps are described in great detail, so they are easy to follow along with on the system. If, however, you do not want to implement the example with Bonita yourself, reading the book is less worthwhile, since general discussion makes up only a small part compared to the step-by-step instructions. In most places it is explained quite well why things are done. Unfortunately, this is not always the case for some parameters and settings of the user interface; sometimes a piece of code simply has to be typed in without being explained in detail. That makes it harder to transfer the content to your own developments later.

At one point, an extension provided on the book's website, a so-called "REST API Extension", has to be imported. Unfortunately, it did not work for me in the example process as described. This may be because I was using a newer Bonita version than the one used in the book. Unfortunately, such REST API extensions can only be created in the paid Bonita edition, so it could not be examined more closely. An integration with Google Calendar presented in the book can likewise only be used if you have a paid Google Apps service account.

Despite these limitations, the book is useful for anyone who wants to work their way into the system seriously, since it complements the various tutorials and documentation provided on the Bonita website with a series of instructive examples for solving various problems.


Christine McKinty, Antoine Mottier:
Designing Efficient BPM Applications: A Process-Based Guide for Beginners
O’Reilly, 2016
The book at amazon

by Thomas Allweyer at July 28, 2016 01:07 PM

July 20, 2016

Thomas Allweyer: Textbook on Process Management in Purchasing and Logistics

[Cover: Prozessmanagement in Einkauf und Logistik]

This book provides a well-founded, process-oriented account of purchasing and logistics. Processes in these areas have many special characteristics, and accordingly there are numerous methods and concepts dealing specifically with the analysis and design of supply chains. The book presents them within the context of end-to-end process management.

The work consists of six chapters in total. The introductory chapter covers the fundamental concepts of process management on the one hand, and of purchasing and logistics on the other, closing with a look at how current megatrends such as globalization and resource scarcity affect supply chains. Chapter two presents various process modeling methods. Alongside general notations such as BPMN, it explains above all methods with a specific connection to logistics, such as the material flow matrix and value stream mapping. Chapter three is devoted to process analysis. Here, too, a generally applicable procedure is described, with a particular emphasis on analyzing service quality in logistics and purchasing.

Chapter four is about redesigning and improving processes. Selected process improvement concepts discussed include lean management methods, the outsourcing of logistics services, and Industry 4.0. For the broad and still fairly young topic of Industry 4.0, selected practical examples are described instead of concrete recommendations.

Consequences for the organizational structure are the topic of chapter 5, in particular the design of a process-oriented procurement organization and the building of flexible and resilient supply chains, in which risk management plays a central role. The concluding chapter 6 deals with supply chain controlling, discussing among other things the various qualitatively and quantitatively measurable aspects at the operational and strategic controlling levels.


Liebetruth, Th.:
Prozessmanagement in Einkauf und Logistik
Springer 2016
The book at amazon.

by Thomas Allweyer at July 20, 2016 10:16 AM

July 06, 2016

Sandy Kemsley: 10 years on WordPress, 11+ blogging

This popped up in the WordPress Android app the other day: This blog started in March 2005 (and my online journalling goes back to 2000 or so), but I passed through a Moveable Type phase before...

[Content summary only, click through for full article and links]

by sandy at July 06, 2016 02:19 PM

July 05, 2016

Sandy Kemsley: Take Mike Marin’s CMMN survey: learn something and help CMMN research

Mike Marin, who had a hand in creating FileNet’s ECM platform and continued the work at IBM as chief architect on their Case Manager product, is taking a bit of time away from IBM to complete...

[Content summary only, click through for full article and links]

by sandy at July 05, 2016 01:10 PM

Thomas Allweyer: More Than Just Control Flow: An Integrated Method Portfolio for Executable Processes

[Cover: Hagenberg Business Process Modelling Method]

Process modeling notations such as BPMN are a very good tool for capturing the control flow of business processes. For process automation, however, a number of further aspects matter which cannot be modeled as well, such as the specification of user dialogs or more complex ways of assigning activities to actors. With BPMN, for example, you cannot model that a certain task may only be performed by the user who has already performed another task in the same process.

The "Hagenberg Process Modelling Method" comprises methods for capturing such aspects and integrates them with BPMN models. The name goes back to the Austrian town of Hagenberg, whose Software Competence Center carried out the research underlying the methodology.

This English-language book is a scientific publication that requires some prior knowledge. It is therefore aimed mainly at researchers, as well as at vendors of modeling tools and BPM systems.

The following methods and method extensions are described:

  • Extension of BPMN tasks with "deontic operators". Colors and additions to the task labels indicate whether tasks are, for example, obligatory, permitted or forbidden, possibly depending on the results of preceding activities. This allows BPMN diagrams to be drawn more compactly, since numerous gateways can be omitted.
  • Modeling of actors. In contrast to conventional BPMN diagrams, where actors are usually assigned via pools and lanes, the possible actors are annotated on the activities. Among other things, this distinguishes whether several roles act jointly or alternatively. The roles used are modeled in a separate role diagram. Finally, rules can be formulated to express, for example, that two activities must be performed by different people.
  • Modeling of user interactions. A further diagram type, the workflow chart, is used to specify the user dialogs. It models the forms displayed in the user interface together with the subsequent server actions. Two kinds of server actions are distinguished: immediate actions are performed directly after a form is submitted, while deferred actions are entered into user task lists and are thus only executed once a user starts them.
    This overlaps with BPMN diagrams: since deferred actions are at the same time tasks in their own right, they appear both in the workflow chart and in the process diagram.
  • Extended communication options via events. Although the BPMN standard includes a great many event types, there are further relevant aspects, such as the lifetime of a trigger, or the requirement that users can decide which events the process should react to. For this purpose, additional properties for events are defined, and "event pools" are introduced for events that are not assigned to any specific process.

The book describes these concepts formally using Abstract State Machines and illustrates them with application examples. Finally, it describes how the methods can be integrated and used in developing executable processes. Executing models created with the presented method portfolio requires a software platform, and the authors present the architecture of such an "Enhanced Process Platform" in detail.

Anyone involved in developing BPM tools and methods should profit from the book. It discusses many relevant questions that conventional methods cover little or not at all. One might ask whether there are not further, equally important aspects that the Hagenberg method does not cover either, such as the integration of business rules that do not relate to actor assignment, or the definition of measuring points for KPIs. A comparison of the presented approach with the concepts of CMMN (Case Management Model and Notation) would also be interesting.

For successful adoption in practice, it would certainly help to make the graphical representations of the methods more intuitive. The diagrams presented are not yet very user-friendly, especially if more business-oriented modelers are to be addressed as well.


Felix Kossak et al.:
Hagenberg Business Process Modelling Method
Springer 2016
The book at amazon.

by Thomas Allweyer at July 05, 2016 07:50 AM

June 29, 2016

Thomas Allweyer: Food for Thought for the "Process Revolution"

[Cover: Process Revolution]

This English-language e-book by Australian management consultant Craig Reid offers numerous suggestions and food for thought on the changes that companies and their processes need today. The main part consists of more than 50 mini-chapters. Each of these two- to three-page mini-chapters picks out one aspect, illustrates it with an example from practice, and gives tips on how to tackle the topic in your own company. Some of this covers long-established principles of process orientation, such as reducing redundant checking activities or dissolving functional silos. Above all, however, the focus is on the customer and their experience with the company, and so the author warns against structuring and standardizing processes too heavily if the customer experience suffers as a result.

Reid preaches constant change and an agile approach. Several chapters also take a critical look at overly detailed process documentation and heavyweight methods. More useful, he argues, are simple means of documentation that employees can understand, such as simple flowcharts on wrapping paper.

Ultimately, all process initiatives are about creating value for the company. And while your own company is still busy analyzing detailed process models, the competition may long since have implemented new innovations.

Conclusion: this book does not go into detail, but it is fun to read. Along the way it conveys a process-oriented and agile way of thinking and motivates you to tackle one topic or another directly in your own environment.


Craig Reid:
The Process Revolution
The Process Improvement Group 2016
Newsletter sign-up and e-book download

by Thomas Allweyer at June 29, 2016 09:00 AM

June 28, 2016

Sandy Kemsley: Now available: Best Practices for Knowledge Workers

I couldn’t make it to the BPM and Case Management Summit in DC this week, but it includes the launch of the new book, Best Practices for Knowledge Workers: Innovation in Adaptive Case Management, for...

[Content summary only, click through for full article and links]

by sandy at June 28, 2016 08:23 PM

BPM-Guide.de: BPMCon 2016: Agenda Complete

+++ Discount only until 30 June - register now +++

The agenda for BPMCon 2016 on 16 September in Berlin is now complete, and here are the highlights:

In his keynote, the respected BPM analyst Neil Ward-Dutton will explain what role BPM plays in the digitalization of companies.

Following that, Jakob Freund will describe his vision for using BPM in the cloud - a topic he will explore in more depth in the afternoon, in an end-to-end look from modeling to execution.

Bernd Rücker will show how the BPM standards BPMN (workflow control), CMMN (case management) and DMN (decision automation) can be successfully applied and combined.

Concrete …

by Jakob Freund at June 28, 2016 07:32 AM

June 27, 2016

Vishal Saxena: Long silence...

In a world of quiet revolution, there is nothing better than keeping heads down and delivering on a promise. Roubroo is fully integrated (initial release) and, on top of that, easy to use.

Some excerpts from users:
http://www.nojitter.com/post/240171778/getting-handson-with-avaya-breeze-engagement-designer


by Vishal Saxena (noreply@blogger.com) at June 27, 2016 11:29 PM

June 17, 2016

Drools & JBPM: UberFire Forms Builder for jBPM

The new UberFire form builder, which will be part of the jBPM 7.0 distribution, is making great progress. Underneath it is a Bootstrap grid system, but it addresses the issue of other Bootstrap layout builders that require the user to explicitly add the grid layout first. Instead, it dynamically alters the underlying grid as the user drags and places the components. The same builder code will be used for the latest DashBuilder dashboards too. There are more CSS improvements to come, but you can watch a video below (don't forget to turn on HD and watch it full screen) demonstrating nested form capabilities. Eventually you should be able to build and deploy these types of applications live on OpenShift. Good work Pere and Eder.


by Mark Proctor (noreply@blogger.com) at June 17, 2016 06:04 PM

Thomas Allweyer: Process Analysis from a Scientific Perspective

[Cover: Process Analytics]

The English-language book "Process Analytics" gives an overview of various aspects and methods of process analysis from a scientific perspective. The focus is on techniques that evaluate process-related data from IT systems. In the past, many methods were developed which assumed that the processes under study were executed entirely by a workflow or BPM system. In practice, however, a large share of processes is not controlled by such a process engine. Process-related data may therefore be scattered across many different systems and exist in inconsistent forms. Moreover, many processes are only weakly structured; their concrete flow only emerges ad hoc during execution. Besides flow-related data, such as the start and end times of the activities performed, many other data can be of interest, for example the business objects being processed. Process execution often produces very large volumes of data, which is why the techniques described in the book frequently build on approaches from the field of "big data".

The book is organized into six chapters. The first chapter gives an overview of process analytics and its key questions. Chapter 2 presents the fundamentals of IT-supported business processes. Chapter 3 covers algorithms for "process matching", i.e. for comparing process models and finding similar process models. This can be interesting for large collections of process models, for example when you want to reuse processes, identify process variants, or check compliance rules. Chapter 4 discusses query techniques and languages for process models and executed process instances. Just as SQL lets you formulate database queries, process query languages let you find process models with certain properties or retrieve information about what is happening in the processes.

Chapter 5 deals with the organization of process data and methods for analyzing it. This includes building "process spaces": the cross-system aggregation of all data and information relating to a process, together with different views onto it. The process data can be made available for further processing in the form of data services. Various analysis techniques are presented, including process mining, and cross-cutting concerns such as security and reliability are discussed.

The sixth chapter gives a summarizing overview of the analysis capabilities of various BPM systems, along with an outlook on further research directions. The systems considered include both commercial and open-source software. A case study illustrates the combined use of various tool capabilities and analysis methods. It is striking here that the methods presented in the preceding chapters are hardly found in the tools presented: the book describes where these techniques could be used in the case study, but not with which tools this should be done. In part it points to basic technologies from the "big data" field, where the techniques specific to process analysis would of course first have to be programmed.

It is also not entirely clear how the discussed tools were selected. The list includes the functionally very limited free tool "ARIS Express", but not the commercial ARIS suite from the same vendor, which offers far more extensive analysis methods. It is also surprising that a platform like "smartfacts" from MID is missing, which can be used to build system-spanning process model collections of the kind described in the preceding chapters.

The other chapters, too, leave out one or another recent development relevant to the topic. For example, in the context of integrating business rules and process models, the book discusses the SBVR standard, which is not widely used in practice, but not DMN (Decision Model and Notation), which is already in practical use in many places.

All in all, the book nevertheless provides a good overview that should be of particular interest to researchers and tool vendors.


Beheshti, S.; Benatallah, B. et al.:
Process Analytics
Concepts and Techniques for Querying and Analyzing Process Data
Springer 2016
The book at amazon.

by Thomas Allweyer at June 17, 2016 03:47 PM

Thomas Allweyer: How to Model Parallel Checks in BPMN

One of the modeling patterns I describe in the new edition of the BPMN book is "Parallel Checks". When different persons need to check applications, requests, etc. according to different criteria, these checks can be carried out in parallel. There are two different ways to model this. The simple solution only requires basic BPMN elements, while the more sophisticated solution requires a sub-process and a terminate end event. We start with the simple solution.

Since each check can have a positive or a negative result, there can be many different combinations of results: with n independent checks there are 2^n possible outcomes, so three checks already yield eight cases. If all these possible combinations are modeled explicitly, the diagrams quickly become large and confusing. However, in most cases it is not important exactly which of the checks have a positive or a negative outcome. Instead, only two cases need to be considered: either all checks have a positive result, or at least one check has a negative result.

Therefore, in the first diagram the checking activities are not directly followed by exclusive splits. Instead, the parallel paths are joined before there is an exclusive split that distinguishes whether all checks have produced a positive result, or not.

[Figure: Parallel Checks, simple solution]

In this model, all parallel checks are always carried out entirely, even if one of the checks has already had a negative result, and the other checks would not be required anymore.

This can be avoided by using a terminate end event, as in the following diagram. If both checks are successful, both parallel tokens reach the end event of the sub-process, and the parent process continues. If one of the checks produces a negative result, its token flows to the terminate end event. This immediately terminates the entire sub-process, regardless of where the other token is. It may either still be in front of the checking activity, or it may already have reached the normal end event.

[Figure: Parallel Checks with a terminate end event]

In the parent process, one token is emitted from the sub-process, regardless of whether the application has been accepted or rejected. Therefore, the sub-process is followed by an exclusive gateway that routes the sequence flow according to the sub-process's result.
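
For readers who prefer the XML view, the structure of the second solution can be sketched as below. This is a minimal, hand-written fragment for illustration only: all IDs and names are invented, and the sequence flows are omitted, so it is not a complete executable model.

[code]
<subProcess id="parallelChecks" name="Perform parallel checks">
  <startEvent id="start"/>
  <parallelGateway id="fork"/>            <!-- one outgoing path per check -->
  <userTask id="checkA" name="Check criteria A"/>
  <userTask id="checkB" name="Check criteria B"/>
  <exclusiveGateway id="resultA" name="A positive?"/>
  <exclusiveGateway id="resultB" name="B positive?"/>
  <endEvent id="checkPassed"/>            <!-- normal end for positive paths -->
  <endEvent id="checkFailed">
    <terminateEventDefinition/>           <!-- consumes all remaining tokens -->
  </endEvent>
  <!-- sequence flows omitted for brevity -->
</subProcess>
[/code]

The important piece is the terminateEventDefinition: a token arriving at that end event consumes every other token in the sub-process, which gives exactly the early-abort behaviour described above.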

More about BPMN and modeling patterns can be found in the second edition of "BPMN 2.0 – Introduction to the Standard for Business Process Modeling":

by Thomas Allweyer at June 17, 2016 11:36 AM

June 13, 2016

Thomas Allweyer: AuraPortal Shows What BPM Can Do Without Coding

In the past, many BPMS vendors went out on a limb with zero-coding promises. Except for very small demonstration processes, however, they often could not keep them. Today it is therefore usually considered unrealistic to build serious process applications without having to write program code at least here and there. As a consequence, some BPMS vendors now market their products not as "zero code" but as "low code" platforms (see the Forrester report on this).

In stark contrast, the company AuraPortal confidently positions its BPM suite as the only true "no code" platform, with which even complex processes can be automated entirely without coding. Such a claim initially raises skepticism. What is shown in AuraPortal's introductory presentations resembles what is possible in other BPM systems as well: a small process is modeled graphically and a simple dialog with a few fields is created. The whole thing is then executed, and the process participants receive their respective tasks in task lists, from which they can start working on them.

Extensive Out-of-the-Box Functionality

When the requirements become somewhat more complex, most BPMS demonstrations sooner or later reach the point where a small script has to be programmed or hand-written code has to be integrated at one place or another. This is the case, for example, when the selection of the next task owner follows special rules, when dialogs change dynamically depending on input values, when complex data structures are used, or when individual analysis reports are needed.

I had the opportunity to get a demonstration of AuraPortal. I was impressed both by the extensive functionality that is available out of the box and by how quickly and easily even somewhat more difficult requirements can be implemented. Among other things, AuraPortal includes a fully integrated document management system, modules for web content management and for building online shops, and a business intelligence component, to name just a few. No third-party components are used; everything is developed in-house and very seamlessly integrated. I was particularly interested in whether and how the more complex problems described above can really be solved without programming. And indeed, for every one of my questions I was shown comprehensibly how it can be implemented by means of modeling and configuration.

How does this work? On the one hand, a great deal of prefabricated functionality is available that already covers a very large share of typical requirements. For instance, there is already a considerable number of possible strategies for assigning tasks to users. On the other hand, extensive configuration options are available. The form editor, for example, offers numerous settings for every single dialog element. This makes the editor area with its configuration options quite extensive. In most cases, however, simple scenarios can already be covered quite well with the default settings. Developing very sophisticated, dynamic dialogs, on the other hand, requires a good knowledge of the various options. Comprehensive calculations or input validations naturally do require entering the corresponding mathematical formulas or regular expressions, but no program code.

Complex Rules Without Program Code, Too

Even where the standard functions on offer are not sufficient, business rules, for example, can be defined and integrated, again without programming. The assignment of a task to users, for instance, is not limited to predefined mechanisms such as roles or manual selection in a preceding step. Instead, you can attach rules that are evaluated during process execution to determine the next task owner. The business rules themselves are entered in tabular form, and arbitrarily complex formulas can be used where needed.

The "no code" claim therefore does not seem exaggerated. One can certainly think up functionality that is not included in AuraPortal's standard scope and would therefore require programming. For applications that automate and support business processes, however, the coverage appears to be quite comprehensive. I was also allowed a look at the rather extensive model of the processes AuraPortal uses internally for project management and control. According to the company, it works internally exclusively with this application, which was likewise developed entirely without programming, i.e. it also implements all the functionality for which ERP or CRM systems are otherwise used.

Increased Activities in the German-Speaking Region

To avoid any misunderstanding: eliminating programming does not mean that building comprehensive process applications suddenly becomes child's play. Complex processes and requirements call for extensive and well-thought-out models, settings, formulas, etc. This requires a precise knowledge of the system and its underlying concepts, as well as strong analytical thinking skills. On the other hand, you do not need to master a programming language, and in particular the effort that otherwise often goes into writing boilerplate code, repeated over and over in similar form, is reduced. In general, there is hardly any need to do the same thing twice in AuraPortal, since practically everything you have created once can be reused in other processes.

That the approach seems to work is demonstrated by numerous successful implementations at companies from a wide range of industries, including well-known names such as General Motors, Toyota, Carrefour, Danone, KPN, and Santander. AuraPortal, which is based in Spain, operates worldwide, with a particularly large number of installations in South America. In the German-speaking region, the BPM vendor, which Gartner has called "one of the best kept secrets in the iBPMS market", is still less well known. That is set to change: together with partners, the company is currently expanding its activities in this market. Even though the competition is anything but small, AuraPortal's considerable functionality should attract quite some interest here as well.

by Thomas Allweyer at June 13, 2016 09:27 AM

June 12, 2016

BPM-Guide.de: Scientific performance benchmark of open source BPMN engines

In May 2016, a group of authors from the universities of Stuttgart (Germany) and Lugano (Switzerland) conducted a thorough performance benchmark of three open source BPMN process engines, Camunda being one of them.

As the authors state in their introduction:

“This work proposes the first microbenchmark for WfMSs that can execute BPMN 2.0 workflows. To this end, we focus on studying the performance impact of well-known workflow patterns expressed in BPMN 2.0 with respect to three open source WfMSs. We executed all the experiments under a reliable environment and produced a set of meaningful metrics.”

Besides Camunda, two other well-known …

by Jakob Freund at June 12, 2016 03:49 PM

June 08, 2016

Drools & JBPM: Tutorial oriented user guides for Drools and jBPM

Community member Nicolas Heron is creating tutorial-oriented user guides for Drools and jBPM (Red Hat BRMS and BPMS). He's focusing on the backends first, but the guides will eventually cover all the web tooling too, as well as installation and setup.

All this work is available on Bitbucket, using AsciiDoc and GitBook (free for public projects), so I encourage you all to get involved and help Nicolas out by reviewing and providing feedback.

Click the Table of Contents to get started:
https://www.gitbook.com/book/nheron/droolsonboarding/details

Or just read the pdf:
https://www.gitbook.com/download/pdf/book/nheron/droolsonboarding

He's just finished the Drools parts, and will be moving on to other areas next.

by Mark Proctor (noreply@blogger.com) at June 08, 2016 02:15 PM

June 07, 2016

Drools & JBPM: DecisionCamp And RuleML 2016, 6-9 July New York

This year RuleML 2016 is hosted by Stony Brook University, New York, USA. DecisionCamp 2016 is co-located at the same event. I'll be presenting at DecisionCamp and helping to chair the industrial track at RuleML. Looking forward to seeing everyone there and spending a week immersed in discussions on reasoning systems :)

http://2016.ruleml.org
http://2016.ruleml.org/decisioncamp

RuleML Schedule

Decision Camp Schedule (pasted below)

July 6, 2016

OMG DMN 1.2 RTF Meeting at DecisionCAMP, 10:00–17:00
The Revision Task Force (RTF) for DMN 1.2 will be meeting at Stony Brook University, room NCS 220. The meeting is open only to members of the RTF, but others are welcome to meet members of the RTF at the DecisionCAMP on the 7th and 8th.

July 7, 2016

  • 9:00–9:15  Welcome and Kickoff (Jacob Feldman)
  • 9:15–10:00  Modeling Decision-Making Processes: Melding Process Models and Decision Models (Alan Fish)
  • 10:00–10:15  Coffee Break
  • 10:15–10:50  Oracle Decision Modeling Service (Gary Hallmark, Alvin To)
  • 10:50–11:25  Decision Management at the Speed of Events (Daniel Selman)
  • 11:25–12:00  Factors Affecting Rule Performance (Charles Forgy)
  • 12:00–12:35  DMN: how to satisfy multiple objectives? (Jan Vanthienen)
  • 12:35–14:00  Lunch Break
  • 14:00–15:00  Natural Language Access to Data: It Needs Reasoning (RuleML Keynote) (Richard Waldinger)
  • 15:00–15:35  Welcome to Method for Parsing Regulations into DMN (Tom Debevoise, Will Thomas)
  • 15:35–16:10  Using Machine Learning, Business Rules, and Optimization for Flash Sale Pricing (Igor Elbert, Jacob Feldman)
  • 16:10–16:25  Coffee Break
  • 16:25–17:00  Improving BRMS Efficiency and Performance and Using Conflict Resolution (James Owen, Charles Forgy)
  • 17:00–18:00  Q&A Panel "DMN from OMG, Vendor, and Practitioner Perspectives" (moderated by Bruce Silver)
  • 19:00  Joint Dinner
July 8, 2016 

  • 9:00–10:00  DMN as a Decision Modeling Language (RuleML Keynote) (Bruce Silver)
  • 10:00–10:15  Coffee Break
  • 10:15–10:50  Solving the "Last Mile" in model based development (Larry Goldberg)
  • 10:50–11:25  What-If Analyzer for DMN-based Decision Models (Challenge Demo) (Jacob Feldman)
  • 11:25–12:00  Advanced Decision Analytics via Deep Reasoning on Diverse Data: For Health Care and More (Benjamin Grosof, Janine Bloomfield)
  • 12:00–12:35  The Decision Boundary Map: An Interactive Visual Interface to Make Informed Decisions and Selections in the Presence of Tradeoffs (Shenghui Cheng, Klaus Mueller)
  • 12:35–14:00  Lunch Break
  • 15:15–15:50  Learning Rule Base Programming with Classic Computer Games (Mark Proctor)

by Mark Proctor (noreply@blogger.com) at June 07, 2016 11:29 PM

Sandy Kemsley: Pega 7 roadmap at Pegaworld 2016

I finished up Pegaworld 2016 at a panel of Pega technology executives who provided the vision and roadmap for CRM and Pega 7. Don Schuerman moderated the panel, which included Bill Baggott, Kerim...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 11:20 PM

Sandy Kemsley: American Express digital transformation at Pegaworld 2016

Howard Johnson and Keith Weber from American Express talked about their digital transformation to accommodate their expanding market of corporate card services for global accounts, middle market and...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 10:17 PM

Sandy Kemsley: Rethinking personal data: Pegaworld 2016 panel

I attended a breakout panel on how the idea and usage of personal data are changing, moderated by Alan Marcus of the World Economic Forum (nice socks!), and including Richard Archdeacon of HP, Rob...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 07:31 PM

Sandy Kemsley: Pegaworld 2016 day 2 keynote: digital transformation and the 4th industrial revolution

Day 2 of Pegaworld 2016 – another full day on the schedule. The keynote started with Gilles Leyrat, SVP of Customer and Partner Services at Cisco, discussing how they became a more digital...

[Content summary only, click through for full article and links]

by sandy at June 07, 2016 06:17 PM

June 06, 2016

Sandy Kemsley: OpenSpan at Pegaworld 2016: RPA meets BPM

Less than two months ago, Pega announced their acquisition of OpenSpan, a software vendor in the robotic process automation (RPA) market. That wasn’t my first exposure to OpenSpan, however: I...

[Content summary only, click through for full article and links]

by sandy at June 06, 2016 07:03 PM

Sandy Kemsley: Pegaworld 2016 Day 1 Keynote: Pega direction, Philips and Allianz

It seems like I was just here in Vegas at the MGM Grand…oh, wait, I *was* just here. Well, I’m back for Pegaworld 2016, and 4,000 of us congregated in the Grand Garden Arena for the...

[Content summary only, click through for full article and links]

by sandy at June 06, 2016 06:03 PM

June 01, 2016

Drools & JBPM: Parallel Drools is coming - 12 core machine benchmark results

We are working on a number of different usage patterns for multi-core processing. Our first attempt is at fireAllRules batch processing (no rule chaining) of 1000 facts against 12, 48, 192, and 768 rules, with one join per rule. The break-even point is around 48 rules: below 48 rules the running time was less than 100ms, and the thread co-ordination costs start to cancel out the advantage. But beyond 48 rules, things get better, much faster.

[Benchmark chart: smaller is better (ms/op)]


The machine used has 12 cores, which we divided into 12 partitions, with the rules split evenly across the partitions. This is all organised by the engine, not by end-user code. There are still a lot more improvements we can make, to get a more optimal rule-to-partition assignment and to avoid sending all data to all partitions.
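
To make the pattern concrete, here is a minimal sketch in plain Java of what partitioned batch evaluation looks like conceptually. The Partition interface and every name in it are hypothetical stand-ins, not Drools API; and in this naive version all facts go to all partitions, which is exactly one of the inefficiencies mentioned above.

[code]
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of partitioned fireAllRules batch processing.
// Each partition owns a disjoint subset of the rules.
public class PartitionedBatch {

    interface Partition {              // stand-in for a per-partition rule session
        void insert(Object fact);
        int fireAllRules();
    }

    static int fireAllPartitions(List<Partition> partitions, List<Object> facts)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (Partition p : partitions) {
                results.add(pool.submit(() -> {
                    for (Object fact : facts) {
                        p.insert(fact);        // naive: all data to all partitions
                    }
                    return p.fireAllRules();   // evaluate this partition's rules
                }));
            }
            int totalFired = 0;
            for (Future<Integer> result : results) {
                totalFired += result.get();    // wait for every partition to finish
            }
            return totalFired;
        } finally {
            pool.shutdown();
        }
    }
}
[/code]

The submit/get co-ordination visible in that loop is the overhead that eats the gains below roughly 48 rules.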

Next we'll be turning our attention to long-running fireUntilHalt stream use cases.

We don't have any code yet that others can run, as it's still a bit of a hack. But as we progress, we'll tidy things up and try to get it to a point where others can try it.

by Mark Proctor (noreply@blogger.com) at June 01, 2016 06:54 PM

Thomas Allweyer: Process Orientation Is Stagnating

Given the numerous offerings, publications, and conferences on BPM, one would assume that the process management maturity of many companies is rising. According to the current study "The State of the BPM Market", which appears every two years, this is not the case. For ten years the surveys have shown that the number of companies with a high maturity level has remained the same. The authors assume that there is a small number of truly process-oriented companies. In a number of other companies, promising process management initiatives emerge again and again, but after some time the commitment wanes considerably.

The declining interest is often connected with a change in leadership. If other topics are on a successor's agenda, BPM loses importance. In any case, only 24% of the study participants reported receiving support from top management for their work with processes. On the positive side, the trend from the previous study towards integrated, enterprise-wide initiatives has continued. In contrast, interest in purely incremental improvement approaches for individual processes, such as Six Sigma, is declining. Overall, there was little change compared to the previous study, which appeared two years ago.


Paul Harmon, Celia Wolf:
The State of Business Process Management – 2014
Download at BPTrends

by Thomas Allweyer at June 01, 2016 07:52 AM

May 31, 2016

Sandy Kemsley: Camunda BPM 7.5: CMMN, BPMN element templates, and more

I attended an analyst briefing earlier today with Jakob Freund, CEO of Camunda, on the latest release of their product, Camunda BPM 7.5. This includes both the open source version available for free...

[Content summary only, click through for full article and links]

by sandy at May 31, 2016 05:17 PM

May 27, 2016

Drools & JBPM: Drools & jBPM are Hiring - Web Developer needed for Low-Code/No-Code framework

This position is now filled. Thank you.
------
The Drools & jBPM projects are looking to hire a web developer to help build and improve our low-code/no-code web framework and workbench. This framework aims to make it possible to model business applications, end to end, fully within a web-based environment, utilising data models, forms, workflow, rules, and case management.

The initial focus of the work will be around improving how the workbench uses and exposes Git and Maven. You'll be expected to figure out a Git workflow suitable for our target users and build a UI to simplify how they work with it. This will also include a pull-request-like system to control code reviews and code contributions. The main aim will be to simplify and hide as much complexity as possible. You will be working extensively with our User Experience group to achieve these goals.

Over time you will tackle other aspects of our low-code/no-code framework, and it is expected that a percentage of your time will go to general sustaining work across the product, i.e. bug fixing and maintenance.

We are looking for someone passionate about software development, who can demonstrate they love what they do - such as contributing to open source projects in their own time.

The work will make extensive use of Java, GWT, Errai and UberFire. You do not need GWT, Errai or UberFire experience, but you should have a strong understanding of general web technologies and a willingness to learn. A working knowledge of Git and Maven will be necessary, and you will be asked to contribute ideas on how to achieve a workflow that is more suitable for less technical people. No prior experience of rules or workflow is necessary, but it helps.

The role is remote and can be based in any location in which Red Hat has an office. Salaries are based on country ranges, and you should check salary suitability with the recruiter. You may apply through this generic job requisition page: https://careers-redhat.icims.com/jobs/52676/senior-software-engineer/job

by Mark Proctor (noreply@blogger.com) at May 27, 2016 06:29 PM

May 23, 2016

Thomas Allweyer: BPM Systems Are Turning into Low-Code Development Platforms

After the "zero code" promise of quite a few vendors turned out to be unrealistic, the term "low code" has recently been cropping up more and more. It characterizes platforms that aim to substantially simplify software development through suitable tools. Fourteen such platforms were recently evaluated by the market research firm Forrester. Among them are a number of BPM systems, such as Appian, AgilePoint, Bizagi, K2, and Nintex. With their graphical modeling environments for flow control, their form editors, and their database connectors, these systems already bring along a whole range of features that significantly reduce the amount of conventional programming required.

Forrester defines low-code platforms as systems for the rapid delivery of business applications with a minimum of hand-coding and low initial investment in setup, training, and deployment. Many companies today depend on developing even large, complex, and reliable solutions within days and weeks instead of months. Low-code platforms are meant to make this possible.

The market is currently quite broad and fragmented. Depending on a system's focus, Forrester distinguishes between "database application platforms", "request handling platforms", "mobile first application platforms", and "process application platforms", the last of which include the BPM systems already mentioned. There is a discernible tendency for vendors to extend the functionality of their systems towards general-purpose platforms with which very different types of enterprise applications can be developed.

The Forrester analysts name as the most important features:

  • Graphical configuration of virtual data models and drag-and-drop integration of data sources
  • Declarative tools for defining business logic and workflows by means of process models, decision tables, and business rules
  • Drag-and-drop construction of responsive user interfaces, with automatic generation of UIs for different devices
  • Tools for managing development, testing, and deployment

The study additionally places special value on support for cloud deployment and mobile app stores; vendors should also hold cloud security certifications for this. Finally, vendors that offer a freemium model with a free version and tutorials are rated positively, since this enables getting started without elaborate training and high initial investment.


The Forrester Wave™: Low-Code Development Platforms, Q2 2016
Download the study from the Appian website (registration required)

by Thomas Allweyer at May 23, 2016 11:53 AM

May 16, 2016

Keith Swenson: AI as an Interface to Business Process

Dag Kittlaus demonstrated Viv last week; the business software world should pay attention. "Viv" is a conversational approach to interacting with systems. The initial presentation talks about personal applications, but there are even greater opportunities in the workplace.

What is it?

If you have not yet seen it, take a look at the TechCrunch video. It is billed as an artificial intelligence personal assistant. Dag Kittlaus brought Siri to Apple to provide basic spoken language recognition to the iPhone. Viv goes a lot further. It takes what you say and starts creating a map of what you want. As you say more, it modifies and refines the map. It taps information service providers, and these are combined in real time based on a semantic model of those services.

This is precisely what Nathaniel Palmer was presenting in his forward-looking presentation at the bpmNEXT conference, and coincidentally something I brought up as well. Businesses moved from big heavy equipment, to laptops, and then to smart phones. Mobile is so last year! The devices got more portable, and the graphical user interface got better over the years, but the paradigm remained the same: humans collect the information together and submit it to the system, to allow the system to process it. You write an email, edit it to final form, and then send it. You fill out an expense report, and then submit it.

A conversational UI is very different. You have a single agent that you contact by voice message, text message, email, and yes, probably also by web forms, which then in turn interfaces with the system software. It learns about you, and the kinds of things you normally want, so that it can understand what you are talking about and translate for the relatively dumber systems.

I was not that impressed

All of the examples were simple, one-off requests. Ask for the weather, or ask a more complicated query that shows some nice parsing capability; it is still just a single query with a single answer. Dynamic program generation? Software that writes itself? Give me a break: every screen generator, every application generator, generates a program that executes. This is a bit hyperbolic. The important thing is not that it creates a sequence of steps that satisfies the intent, but that it is able to understand the intent in the first place.

Order flowers? I could just call and order flowers. I can order an Uber car without needing an assistant. Booking a hotel is only a few mouse clicks. That is always the problem with demonstrations: they have to be simple enough to grasp, short enough to complete in a few minutes, but hopefully compelling enough to convey the potential.

The most interesting part comes after he has the list of flowers: he simply says "what about tulips" and Viv refines the results. This shows the power of the back-and-forth conversation. The conversation constitutes a kind of learning that works together with you to incrementally get to what you want to do. That is the news: Viv has an impressive understanding of you and what you mean with a few words, and it extends that understanding on a case-by-case basis.

What is the Potential?

One of the biggest problems with BPM is the idea that you have to know everything at the time the process starts. You have to put all your expenses into the expense report for processing. You need to fill in the complete order form before you can purchase something. As we illustrated in Mastering the Unpredictable, many businesses have to start working long before they know all the data. The emergency room has to accept patients long before they know what care is needed.

The conversational approach to applications will radically transform the ability of software to help out.  Instead of being required to give the full details up front, you can tell the agent what you know now.  It can start working on part of that.  Later, you tell it a little more, maybe after reviewing what it had found so far.  If it is heading down the wrong path, you steer it back in the right direction.

I personally hate forms that ask for every potential bit of information that might be needed somewhere in the process. Like at the doctor's office, where you fill in the same details every time, most of which are not going to be needed on this visit, but there is a spot there just in case. A conversational approach would allow me to add information as it is needed.


With a group of people this starts to get really interesting. The doctor is unsure of the direction to take with a patient, so they bring an expert into the conversation. That expert could start asking questions about the patient. The agent answers when it can, but it can also pass those questions on to the doctor and the patient. The conversation is facilitated by the map that represents the case so far. The agent learns what needs to be done, and over time can facilitate this interaction by learning what the various participants normally mean by their spoken words.

It is not that far-fetched. It will radically change the way we think about our business applications. It certainly is disruptive. This demonstration of Viv makes it clear that this is already happening today. You might want to buckle your seat belts.

by kswenson at May 16, 2016 12:54 PM

May 13, 2016

Drools & JBPM: #Drools & #jBPM @ #JBCNConf 2016 (Barcelona, June)

Great news! Once again the amazing and flamboyant leaders of the Java User Group from Barcelona have managed to put together their annual conference, JBCNConf. And, of course, Drools & jBPM will be there. Take a look at their website for more information about the talks and speakers, and if you are close enough to Barcelona I hope to see you all there.
This year I will be doing a Drools workshop there (Thursday, the first day of the conference), hoping to introduce people to Drools in a very hands-on session. So if you are looking to start using Drools straight away, this is a great opportunity to do so. If you are a more advanced user and want to bring your examples or issues to the workshop, you are more than welcome. I will be sharing the projects that I will be using in the workshop a couple of weeks before the event, so you can take a look and bring more questions to the session. It is also probable that I will be bringing with me freshly printed copies of the new Mastering Drools book, so you might be able to get some copies for free :)
Maciej Swiderski will be covering jBPM and knowledge-driven microservices this year. I totally recommend this talk to anyone interested in how to improve your microservices by adopting tools to formalise and automate domain-specific knowledge.
Finally, this year Maciej and I will be giving the closing talk of the conference, titled "The Open Source Way", where we will share with the audience the main benefits of getting involved with open source communities & projects, but most importantly how to achieve that. If you are already an open source project contributor and you plan to attend the conference, get in touch!
Stay tuned for more news, and get in touch if you want to hang around with us before and after the conference!

by salaboy (noreply@blogger.com) at May 13, 2016 09:05 AM

May 12, 2016

Thomas Allweyer: Current Edition of the BPMN Book Now Available in English

In the meantime, the current edition of my BPMN book, which has been extended in particular by a collection of modeling patterns, has been published in English. The second English edition corresponds in content to the third German edition. When ordering the book, make sure to look for the correct ISBN (and, if necessary, search for it directly); on various international Amazon websites in particular, you are often shown only the old edition. Since the book is printed on demand, it can in any case be delivered within a few days, even if amazon sometimes states otherwise.

More information about the book (incl. direct links to the order pages)

by Thomas Allweyer at May 12, 2016 10:06 AM

May 11, 2016

Keith Swenson: DMN at bpmNEXT 2016

bpmNEXT is two and a half days of intense examination and evaluation of the leading trends in the business process community, and Decision Model and Notation (DMN) was clearly highlighted this year.

This is the year for DMN

The Decision Model and Notation (DMN) standard was released in mid-2015. There are several implementations, but none of them is quite mature yet. If you are not familiar with DMN, here is what you need to know:

  • You can think of it simplistically as a tree of decision tables. There is much more to it than that, but probably 80% of usage will be a tree of decision tables.
  • It has a specific expression language that allows the writing of conditions and results
  • Actually it is a tree of block expressions. A block expression can be a decision table, a simple if/then/else statement, or one of a number of other types of expression.
  • The results of blocks lower in the tree can be used in blocks further up.

The idea is to represent complicated expressions in a compact, reusable way.
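
To make the "tree of block expressions" idea concrete, here is a deliberately tiny Java sketch. It is not FEEL and not any vendor's API; all names are invented for illustration. Each node evaluates its child decisions first and publishes their results into the context, so that tables higher up the tree can use them as inputs.

[code]
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical illustration of a DMN-style decision tree: child results
// become inputs of the parent's own decision table.
public class DecisionNode {

    static class Row {
        final Predicate<Map<String, Object>> condition;
        final Object result;
        Row(Predicate<Map<String, Object>> condition, Object result) {
            this.condition = condition;
            this.result = result;
        }
    }

    private final String name;
    private final List<DecisionNode> children;
    private final List<Row> table;   // one decision table, first-hit policy

    DecisionNode(String name, List<DecisionNode> children, List<Row> table) {
        this.name = name;
        this.children = children;
        this.table = table;
    }

    Object evaluate(Map<String, Object> context) {
        for (DecisionNode child : children) {
            context.put(child.name, child.evaluate(context)); // lower results feed upper blocks
        }
        for (Row row : table) {
            if (row.condition.test(context)) {
                return row.result;   // first matching row wins
            }
        }
        return null;                 // no row matched
    }
}
[/code]

A real DMN engine layers typed inputs, hit policies other than first-hit, and the FEEL expression language on top of this basic evaluation order.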

In general, the market response to DMN has been very good. Some business rule purists say it is too technical; however, it strikes a balance between what you would need to do in a programming language and a completely natural-language rule implementation. Like BPMN, it will probably tend to be used by specialists, but there is also a good chance, like BPMN, that the results will at least be readable by regular business users. In my talk, I claimed "This is the Year for DMN".

Demonstrations:

  • Denis Gagne, Trisotech, demonstrated DMN modeling as part of his suite of cloud based process modeling tools.  Execution is notably absent.
  • Alvin To, Oracle, demonstrated their version, which only supports linear boxed expressions (as opposed to the more general tree structure), paying particular attention to their contribution to the spec: FEEL (Friendly Enough Expression Language).
  • Larry Goldberg, Sapiens, demonstrated their ability to create DMN models and transform them into a large variety of execution formats.
  • Jacob Feldman, Open Rules, demonstrated his rules optimization capability.
  • Jakob Freund, Camunda, has an implementation that focuses on single decision tables.

Missing Run-time

Most of the demonstrations focused on the modeling of the decisions. This is a problem. The specification covers the modeling; however, as with any software standard, the devil is in the details. You can model using several tools in exactly the same way, but there is no guarantee that the execution of the model will be the same. A similar situation existed with BPMN, where different implementations treated things like the inclusive-OR node completely differently. The model is meaningless unless you can show that the models actually produce the same decisions, and that requires a standard run-time library that can execute the models and show what they actually mean.

The semantics are described in the specification using words that can never be precise enough to ensure reliable interoperability. Until an actual reference implementation is available, there will be no way to decide who has interpreted these words correctly. The problems occur in what might seem to be pathological edge cases, but experience shows that these are far more numerous than anyone anticipates.

Call To Action

For this reason I am calling for a standard implementation of the DMN evaluator that is widely available to everyone: a reference implementation. I think it needs to be an open source implementation, one that works well enough that vendors can actually use the code in a product, much like the way the Apache web server has eliminated the need for each company to write their own web server.

WfMC will be starting a working group to identify and promote the best open source implementation of a DMN run-time. We don't want to invent yet another implementation, so we plan to identify the best existing one and promote it. There are a couple of good examples out there.

If you believe you have a good open source implementation of DMN run-time then please leave a comment on this blog post.

If you are interested in helping identify and recognize the best implementation, leave a comment as well.

by kswenson at May 11, 2016 05:27 AM

April 25, 2016

Thomas Allweyer: How Much Process Intelligence Do Companies Have?

The survey for this year's BPM study by the ZHAW School of Management and Law was launched recently, even though the publication of the preceding study on process intelligence is not long past. The term "process intelligence" is commonly understood to mean the collection and analysis of process-related data. This study takes a broader view: it encompasses all the capabilities of an organization that enable it to deal intelligently with its processes, comprising the areas of "creative intelligence", "analytical intelligence", and "practical intelligence". Thus the abilities to anchor process management strategically, to optimize processes, and to control processes are also part of process intelligence. The BPM study 2015 examined how companies are doing in terms of process intelligence, using five case studies on the one hand and a survey on the other.

The case studies describe process improvement projects at three companies (Axa Winterthur, St. Galler Kantonalbank, and Hoffmann-La Roche) and two city administrations (Lausanne and Konstanz). Quite different methods and tools were used, e.g. process mining, simulation, process automation, business rules management, Lean Six Sigma, value stream analysis, and a method for agile business process management. The case studies are described in detail, and for each one the report works out which aspects of process intelligence were used and improved.

The survey made clear that in many companies there is a gap between aspiration and reality when it comes to the potential benefits of BPM. Efficiency gains and customer orientation are named as the most important goals, yet only a few companies carry out measures aimed at these goals. Only about a fifth of the respondents state that they systematically identify standardization and automation potential, or that they monitor operational process performance. Accordingly, business intelligence tools have so far been used only quite rarely in connection with business process management. IT support for weakly structured, knowledge-intensive processes is also not very developed at present. In particular, BPM is hardly seen yet in connection with topics such as digitalization, the development of innovations, or the optimization of the customer experience. The possibilities process management offers for these strategic future topics are examined in the BPM study 2016 that has just been launched.

Download the study at www.zhaw.ch/iwi/prozessintelligenz

by Thomas Allweyer at April 25, 2016 07:35 AM

April 18, 2016

Thomas Allweyer: Survey on BPM and Digital Transformation Launched

Under the guiding question "Customer benefit through digital transformation?", the School of Management and Law at the Zurich University of Applied Sciences (ZHAW) has launched the survey for its BPM study 2016. This year the focus is in particular on the potential of process management for optimizing customer experiences and for developing and implementing new business models. The study aims to examine which concepts and methods are already being used in these areas and to what extent they are part of companies' digital transformation. Participation in the survey is possible as of now. Link to the survey.

by Thomas Allweyer at April 18, 2016 06:33 PM

Drools & JBPM: Drools 6.4.0.Final is available

The latest and greatest Drools 6.4.0.Final release is now available for download.

This is an incremental release on our previous build that brings several improvements to the core engine and the web workbench.

You can find more details, downloads and documentation here:




Read below some of the highlights of the release.

You can also check the new releases for:




Happy drooling.

Drools Workbench

New look and feel

The general look and feel of the entire workbench has been updated to adopt PatternFly. The update brings a cleaner, more lightweight, and more consistent user experience throughout every screen, allowing users to focus on the data and the tasks by removing all unnecessary visual elements. Interactions and behaviour remain mostly unchanged, limiting the scope of this change to visual updates.


Various UI improvements

In addition to the PatternFly update described above which targeted the general look and feel, many individual components in the workbench have been improved to create a better user experience. This involved making sure the default size of modal popup windows is appropriate to fit the corresponding content, adjusting the size of text fields as well as aligning labels, and improving the resize behaviour of various components when used on smaller screens.


New Locales

Locales ru (Russian) and zh_TW (Chinese Traditional) have now been added.

New Decision Server Management UI

The KIE Execution Server Management UI has been completely redesigned to reflect major improvements introduced recently. Besides the fact that the new UI has been built from scratch following best practices provided by PatternFly, the new interface expands previous features, giving users more control of their servers.


Core Engine


Better Java 8 compatibility

It is now possible to use Java 8 syntax (lambdas and method references) in the Right Hand Side (then) part of a rule.
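
As a flavour of what that enables, a rule consequence can now hold a lambda or a method reference directly. The following DRL fragment is illustrative only and assumes a hypothetical Person fact class; it is not taken from the release itself:

[code]
// Illustrative only: assumes a Person fact class with an age field and getName().
rule "Log adults with a Java 8 lambda"
when
    $p : Person( age >= 18 )
then
    // a lambda defined and invoked inside the consequence
    Runnable log = () -> System.out.println( "Adult: " + $p.getName() );
    log.run();
end
[/code]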

More robust incremental compilation

The incremental compilation (dynamic rule-base update) had some relevant flaws when one or more rules with a subnetwork (rules with complex existential patterns) were involved, especially when the same subnetwork was shared among different rules. This issue required a partial rewrite of the existing incremental compilation algorithm, followed by a complete audit that has been validated by a brand new test suite of more than 20,000 test cases in this area alone.

Improved multi-threading behaviour

The engine's code dealing with multi-threading has been partially rewritten to remove a large number of synchronisation points and improve stability and predictability.


OOPath improvements

OOPath has been introduced with Drools 6.3.0. In Drools 6.4.0 it has been enhanced to support a number of new features.
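
As a reminder of the syntax, an OOPath expression navigates an object graph XPath-style inside a pattern and can filter along the way. The domain model below (a Student with a plan, exams, and grades) is a made-up example for illustration, not taken from the release notes:

[code]
// Hypothetical domain: a Student has a plan, a plan has exams, an exam has grades.
rule "React to low grades"
when
    // navigate student -> plan -> exams -> grades, keeping only low results
    Student( $lowGrade : /plan/exams/grades[ result < 6 ] )
then
    System.out.println( "Low grade found: " + $lowGrade );
end
[/code]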


by Edson Tirelli (noreply@blogger.com) at April 18, 2016 03:50 PM

Drools & JBPM: Official Wildfly Swarm #Drools Fraction

Official what? A long title for a quite small but useful contribution. Wildfly Swarm allows us to create rather small and self-contained applications including just what we need from the Wildfly Application Server. In this post we will be looking at the Drools Fraction provided to work with Wildfly Swarm. The main idea behind this fraction is to provide a quick way to bundle the Drools Server along with your own services inside a jar file that you can run anywhere.

Microservices World

Nowadays, while microservices are a trending topic, we need to make sure that we can bundle our services as decoupled from other software as possible. For such a task, we can use Wildfly Swarm, which allows us to create our services using a set of fractions instead of a whole JEE container. It also saves us a lot of time by allowing us to run our application without needing to download or install a JEE container. With Swarm we can just run java -jar <our services.jar> and we are ready to go.
In the particular case of Drools, the project provides a web application called Kie-Server (Drools Server) which offers a set of REST/SOAP/JMS endpoints to use as a service. You can load your domain-specific rules inside this server and create new containers to use your different sets of rules. But again, if we want to use it, we need to worry about how to install it in Tomcat, Wildfly, Jetty, WebSphere, WebLogic, or any other servlet container. Each of these containers represents a different challenge when it comes to configuration, so instead we can use the Wildfly Swarm Drools Fraction, which basically enables the Drools Server inside your Wildfly Swarm application. In a way, you are bundling the Drools Server with your own custom services. By doing this, you can start the Drools Server by running java -jar <your.jar> and you are ready to go.
Imagine the alternative situation of dealing with several instances of servlet containers and deploying the WAR file to each of those containers. It gets worse if those containers are not all the same "brand" and version.
So let's take a quick look at an example of how you can get started using the Wildfly Swarm Drools Fraction.

Example

I recommend that you take a look at the Wildfly Swarm documentation first to get started with Wildfly Swarm. Once you know the basics, you can include the Drools Fraction.
I've created an example using this fraction here: https://github.com/Salaboy/drools-workshop/tree/master/drools-server-swarm
The main goal of this example is to show how simple it is to get started with the Drools Fraction, and for that reason I'm not including any other services in this project. You are not restricted by that, and you can expose your own endpoints.
Notice in the pom.xml file two things:
  1. The Drools Server Fraction: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L18 By adding this dependency, the fraction is going to be activated when Wildfly Swarm bootstraps.
  2. The wildfly-swarm plugin: https://github.com/Salaboy/drools-workshop/blob/master/drools-server-swarm/pom.xml#L25. Notice in the plugin configuration that we are pointing to the App class, which basically just starts the container. (This can be avoided, but I wanted to show that if you want to start your own services or do your own deployments you can do that inside that class.)
If you compile and package this project with mvn clean install, you will find in the target/ directory a file called drools-server-swarm-1.0-SNAPSHOT-swarm.jar, which you can start by running
[code]

java -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar

[/code]
For this example, we will include one more flag when we start our project, to make sure that our Drools Server can resolve the artefacts that I'm going to use later on, so the command will look like this:
[code]

java -Dkie.maven.settings.custom=../src/main/resources/settings.xml -jar drools-server-swarm-1.0-SNAPSHOT-swarm.jar

[/code]
By adding the "kie.maven.settings.custom" flag here we are letting the Drools Server know that we have configured an external Maven repository to be used to resolve our artefacts. You can find the custom settings.xml file here.
Once you start this project and everything boots up (less than 2 seconds to start the wildfly-swarm core, plus less than 14 to boot up the Drools Server) you are ready to start creating your KIE containers with your domain-specific rules.
You can find the output of running this app here. Notice the binding address for the http port:
WFLYUT0006: Undertow HTTP listener default listening on [0:0:0:0:0:0:0:0]:8083
Now you can start sending requests to http://localhost:8083/drools to interact with the server.
I've also included in this project a Chrome Postman collection for you to test some very simple requests, such as:
  • Getting All the registered Containers -> GET http://localhost:8083/drools/server/containers
  • Creating a new container - > PUT http://localhost:8083/drools/server/containers/sample
  • Sending some commands like Insert Fact + Fire All Rules -> POST http://localhost:8083/drools/server/containers/instances/sample
You can import this file into Postman and fire the requests against your newly created Drools Server. Besides knowing which URLs to PUT, POST, or GET data to, you also need to know the required headers and authentication details:
Headers
Authentication -> Basic
User: kieserver
Password: kieserver1!
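Putting those pieces together, a quick smoke test from the command line could look like the following; curl is just my choice here, and the exact headers should be double-checked against the Postman collection:
[code]

curl -u kieserver:kieserver1! -H "Accept: application/json" http://localhost:8083/drools/server/containers

[/code]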
Finally, you can find the source code of the Fraction here: https://github.com/wildfly-swarm/wildfly-swarm-drools
There are tons of things that can be improved, helpers to be provided, and bugs to be fixed, so if you are up to the task, get in touch and let's make the Drools Fraction better for everyone.

Summing up

While I'm still writing the documentation for this fraction, you can start using it right away. Remember that the main goal of these Wildfly Swarm extensions is to make your life easier and save you some time when you need to get something like the Drools Server in a small, isolated bundle that doesn't require a server to be installed and configured.
If you have any questions about the Drools Fraction don't hesitate to write a comment here.



by salaboy (noreply@blogger.com) at April 18, 2016 01:21 PM

April 15, 2016

Thomas Allweyer: Insight Conference Discusses Modeling in the Digital Enterprise

The conference organized by the modeling specialist MID in Nuremberg is by now probably the largest German-language event on the topic of modeling. Under the motto "Models Drive Digital", the ubiquitous topic of digitalization took center stage here as well this year. Both the opening keynote by innovation researcher Nick Sohnemann and the closing talk by Ranga Yogeshwar revolved around the sometimes breathtakingly fast developments our society is confronted with and that will change every industry, with the television journalist Yogeshwar also adding numerous critical remarks. For instance, it can be observed that innovations often lead to a reinforcement of inequality.

Another plenary talk presented the digitalization strategy of FC Bayern München. The largest sports club in the world is also a large company with, in part, very specific IT requirements. For example, the planning, monitoring, and control of the arrival and departure of tens of thousands of visitors to a home game must be supported end to end. Registering as a club member must also be possible via an app, not least because particularly devoted fans want to register their newborn offspring with the club straight from the delivery room.

At the organizer MID, everything revolves around the "smartfacts" platform, which integrates models from a wide variety of tools in a collaborative environment. Managing directors Andreas Ditze and Jochen Seemann presented the latest developments, including improved support for review and approval processes, the integration of a web modeler, and the presentation of process models in the form of a "process guidance" that leads end users through processes step by step.

The conference program offered a total of ten parallel tracks. Besides digitalization, topics such as business process management, agile methods, business intelligence, master data management, and SAP were on the program. During the breaks, participants could try out data glasses and other gadgets, or experience knowledge transfer through serious games.

It is frequently observed that precisely the pioneers of digital transformation hardly use established modeling methods. These are considered too heavyweight to be helpful for the rapid development and implementation of digital business models. Nick Sohnemann pointed out in the opening talk that, according to Google Trends, interest in the search term "Business Process Modeling" has dropped sharply. And Elmar Nathe, who is responsible for the digitalization topic at MID, told me that there are customers who, after a rather rough sketch of the functional architecture, dive straight into coding and largely forgo more precise modeling, even though the missing documentation is likely to cause problems in maintenance and further development.

Managing director Jochen Seemann quoted a Gartner study according to which 80% of companies will not achieve the hoped-for success with their digital strategies due to an insufficient BPM maturity level. In this respect, topics such as process management and process modeling play an important role in the digital enterprise, because the new business models only work if the processes and systems required to implement them are mastered. MID observes that topics such as model-driven development are also attracting renewed interest. Car manufacturers, for example, are increasingly relying on model-based approaches to get a grip on the variant diversity in hardware and software.

by Thomas Allweyer at April 15, 2016 09:45 AM

April 11, 2016

BPinPM.net: Invitation to BPinPM.net Conference 2016 – The Human Side of BPM: From Process Operation to Process Innovation

We are very happy to invite you to the most comprehensive Best Practice in Process Management Conference ever! Meet us at Lufthansa Training & Conference Center and join the journey from Process Operation to Process Innovation.

It took more than a year to evaluate, to double-check, and to combine all workshop results into a new and holistic approach for sustainable process management.

But now, the ProcessInnovation8 is there and will guide us at the conference! 🙂

The ProcessInnovation8 provides direction to BPM professionals and management throughout the phases Process Strategy, Process Enhancement, Process Implementation, Process Steering, and Process Innovation, while keeping a special focus on the human side of BPM to maximize the acceptance and benefit of BPM.

To share our learnings, introduce practical examples, discuss the latest BPM insights, experience the BPinPM.net community, enjoy the dinner, and, and, and…, we are looking forward to meeting you in Seeheim! 🙂

Please order your tickets now. Capacity is limited and the early bird tickets will be available for a short period of time only.

Please visit the conference site to access the agenda and to get all the details…


Again, this will be a local conference in Germany, but if enough non-German-speaking experts are interested, we will think about ways to share the know-how with the international BPinPM.net community as well. Please feel free to contact the team.

by Mirko Kloppenburg at April 11, 2016 07:17 PM

April 10, 2016

Tom Debevoise: Lists in Decision Model Notation

This image was inspired by Nick Broom's post to the DMN group on LinkedIn.

The use case posed by Nick is here: https://www.linkedin.com/groups/4225568/4225568-6123464175038586884

In the Signavio Decision Modeler's implementation of DMN, we provide the ability to check whether a set contains an element of another input item or a static set. The expression it uses in the column is an equivalent of the intersection set operator. The DMN diagram above does this in three different ways:

1) With the Signavio 'Multi-Decision' extension to DMN. This iterates through an input that is a list and checks, item by item, whether the inputs match.

2) An internal operator that tests whether one item, or set of items, exists as a subset of another, using a fixed subset

3) An internal operator that tests whether one item, or set of items, exists as a subset of another, using an input data type

You do not need the multi-decision to support a simple data type list. However, if the input item is a list of complex (multi-attribute) types, or if complex logic is needed, then the multi-decision is required.
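
Stripped of the notation, the checks above reduce to plain set operations. Here is a minimal Java sketch of the two flavors of test (the names are illustrative only; the actual evaluation happens inside the decision engine's expressions):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ListConditionSketch {

    // Intersection-style check: true if the input list shares at least
    // one element with the reference set (options 1 and 2 above).
    static boolean containsAnyOf(Set<String> input, Set<String> reference) {
        Set<String> copy = new HashSet<>(input);
        copy.retainAll(reference); // set intersection
        return !copy.isEmpty();
    }

    // Subset-style check: true if every element of the input list is
    // contained in the reference set (option 3, driven by input data).
    static boolean isSubsetOf(Set<String> input, Set<String> reference) {
        return reference.containsAll(input);
    }

    public static void main(String[] args) {
        Set<String> channels = new HashSet<>(Arrays.asList("email", "sms"));
        Set<String> allowed = new HashSet<>(Arrays.asList("email", "sms", "letter"));
        System.out.println(containsAnyOf(channels, allowed)); // true
        System.out.println(isSubsetOf(channels, allowed));    // true
    }
}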

The Signavio export for this diagram is here.

by Tom Debevoise at April 10, 2016 07:03 PM

Thomas Allweyer: Website for the BPMN book updated

I am currently preparing the English edition of the current edition of the BPMN book. In the process, I noticed a few small things in the German book that could be improved. There are also a few changes and additions to the sources in the bibliography and to the listed internet links. I have therefore taken the opportunity to update the website accompanying the book: www.kurze-prozesse.de/bpmn-buch

by Thomas Allweyer at April 10, 2016 10:25 AM

April 08, 2016

Thomas Allweyer: Free modeling tools put to the test

In its latest study, BPM&O examined 17 free process modeling tools. Only tools were included whose free use is unlimited in time and which impose no restrictions on model size. Technical requirements, interfaces, model types and links, languages, documentation and support were evaluated. Some of the tools offer a considerable range of functions and are quite suitable for short-term use in projects or for bridging the procurement period of a commercial modeling platform. Nevertheless, the study's authors conclude, one must be aware that all of the freely available modeling tools are little more than drawing aids. They cannot meaningfully support comprehensive process management, since essential functions are missing, such as collaboration capabilities or process portals. The videos that BPM&O produced for each of the examined tools give an impression of how their modeling functions handle. Link to the study download (registration required).

by Thomas Allweyer at April 08, 2016 12:21 PM

April 06, 2016

Drools & JBPM: User and group management in jBPM and Drools Workbenches

Introduction

This article talks about a new feature that allows the administration of the application's users and groups using an intuitive and friendly user interface that comes integrated into both the jBPM and Drools Workbenches.

User and group management
Before covering the installation, setup and usage of this feature, this article introduces some concepts that need to be understood first.

This article is split into the following sections:
  • Security management providers and capabilities
  • Installation and setup
  • Usage
Notes: 
  • This feature is included from version 6.4.0.Final.
  • Sources available here.


Security management providers

A security environment is usually provided by the use of a realm. Realms are used to restrict access to the different application resources. So realms contain information about users, groups, roles, permissions and any other related information.

In most typical scenarios the application's security is delegated to the container's security mechanism, which in turn consumes a given realm. It's important to consider that there are several realm implementations; for example, Wildfly provides a realm based on the application-users.properties/application-roles.properties files, Tomcat provides a realm based on the tomcat-users.xml file, etc. So keep in mind that there is no single security realm to rely on; it can be different in each installation.

The jBPM and Drools workbenches are no exception: they're built on top of the Uberfire framework (aka UF), which delegates authorization and authentication to the underlying container's security environment as well, so the consumed realm is given by the concrete deployment configuration.

 

Due to the potentially different security environments that have to be supported, the user and group management feature provides a well-defined management services API along with some default built-in security management providers. A security management provider is the formal name given to a concrete user and group management service implementation for a given realm.

At this moment, by default there are three security management providers available:
  • Wildfly / EAP - based on the application-users.properties / application-roles.properties realm files
  • Tomcat - based on the tomcat-users.xml realm file
  • Keycloak - based on a Keycloak server realm

Keep an eye out for new security management providers in further releases. You can easily build and register your own security management provider if none of the defaults fits your environment.

 
Security management providers' capabilities

Each security realm can support different operations. For example, consider the use of a Wildfly realm based on properties files. The contents of the application-users.properties file look like:

admin=207b6e0cc556d7084b5e2db7d822555c
salaboy=d4af256e7007fea2e581d539e05edd1b
maciej=3c8609f5e0c908a8c361ca633ed23844
kris=0bfd0f47d4817f2557c91cbab38bb92d
katy=fd37b5d0b82ce027bfad677a54fbccee
john=afda4373c6021f3f5841cd6c0a027244
jack=984ba30e11dda7b9ed86ba7b73d01481
director=6b7f87a92b62bedd0a5a94c98bd83e21
user=c5568adea472163dfc00c19c6348a665
guest=b5d048a237bfd2874b6928e1f37ee15e
kiewb=78541b7b451d8012223f29ba5141bcc2
kieserver=16c6511893651c9b4b57e0c027a96075

As you can see, it's based on key-value pairs where the key is the username and the value is the hashed value for the user's password. So a user is defined just by the key, its username; it does not have a name, address, or any other attribute.
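
As a side note, the hashed values above are produced by WildFly's add-user utility, which, for the properties realms, stores the hex-encoded MD5 of "username:realm:password". A minimal Java sketch of that scheme, assuming the default ApplicationRealm realm name:

import java.math.BigInteger;
import java.security.MessageDigest;

public class PropertiesRealmHashSketch {
    public static void main(String[] args) throws Exception {
        // The properties realm stores HEX(MD5("user:realm:password"));
        // "ApplicationRealm" is the default realm name for
        // application-users.properties.
        String raw = "admin" + ":" + "ApplicationRealm" + ":" + "admin";
        byte[] digest = MessageDigest.getInstance("MD5").digest(raw.getBytes("UTF-8"));
        System.out.println(String.format("%032x", new BigInteger(1, digest)));
    }
}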

On the other hand, consider the use of a realm provided by a Keycloak server. The information for a user is composed of more user metadata, such as surname, address, etc., as in the following image:

Admin user edit using the Keycloak sec. management provider

So the different services and client side components of the user and group management API are based on capabilities. Capabilities are used to expose or restrict the available functionality provided by the different services and client side components. Examples of capabilities are:
  • Create user
  • Update user
  • Delete user
  • Update user attributes
  • Create group
  • Assign groups
  • Assign roles 
  • etc

Each security management provider must specify the set of capabilities it supports. From the previous examples you can see that the Wildfly security management provider does not support the capability for managing a user's attributes - the user is only composed of the user name. On the other hand, the Keycloak provider does support this capability.

The different views and user interface components rely on the capabilities supported by each provider, so if a capability is not supported by the provider in use, the UI does not provide the views for managing it. As an example, if a concrete provider does not support deleting users, the delete user button on the user interface will not be available.
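
Conceptually, the pattern behind this capability-driven UI can be sketched as follows (all type and method names here are hypothetical, for illustration only; the real API is defined by the uberfire-security-management modules linked below):

import java.util.Set;

enum Capability {
    CAN_ADD_USER, CAN_UPDATE_USER, CAN_DELETE_USER,
    CAN_MANAGE_ATTRIBUTES, CAN_ASSIGN_GROUPS, CAN_ASSIGN_ROLES
}

interface SecurityManagementProvider {
    // The set of operations this provider's realm supports.
    Set<Capability> getCapabilities();
}

class UserEditorView {
    private final SecurityManagementProvider provider;

    UserEditorView(SecurityManagementProvider provider) {
        this.provider = provider;
    }

    // Render the delete button only if the provider supports deleting users.
    boolean isDeleteButtonVisible() {
        return provider.getCapabilities().contains(Capability.CAN_DELETE_USER);
    }
}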

Please take a look at the concrete service provider documentation to check all the supported capabilities for each one; the default ones can be found here.

If the security environment is not supported by any of the default providers, you can build your own. Please keep updated on further articles about how to create a custom security management provider.

 
Installation and setup

Before considering the installation and setup steps, please note that the Drools and jBPM distributions come with built-in, pre-installed security management providers by default. If your realm settings are different from the defaults, please read each provider's documentation in order to apply your concrete settings.

On the other hand, if you're building your own security management provider or need to include it on an existing application, consider the following installation options:
  • Enable the security management feature on an existing WAR distribution
     
  • Setup and installation in an existing or new project (from sources)
NOTE: If no security management provider is installed in the application, there will be no user interface available for managing the security realm. Once a security management provider is installed and set up, the user and group management user interfaces are automatically enabled and accessible from the main menu.

Enable the security management feature on an existing WAR distribution
Given an existing WAR distribution of either the Drools or jBPM workbench, follow these steps in order to install and enable the user management feature:

  1. Ensure the following libraries are present on WEB-INF/lib:
    • WEB-INF/lib/uberfire-security-management-api-6.4.0.Final.jar
    • WEB-INF/lib/uberfire-security-management-backend-6.4.0.Final.jar
        
  2. Add the concrete library for the security management provider to use in WEB-INF/lib:
    • Example: WEB-INF/lib/uberfire-security-management-wildfly-6.4.0.Final.jar
    • If the concrete provider you're using requires more libraries, add those as well. Please read each provider's documentation for more information.
        
  3. Replace the entire content of the file WEB-INF/classes/security-management.properties, or create it if not present. The settings in this file depend on the concrete implementation you're using. Please read each provider's documentation for more information.
      
  4. If you're deploying on Wildfly or EAP, please check whether the WEB-INF/jboss-deployment-structure.xml requires any update. Please read each provider's documentation for more information.

Setup and installation in an existing or new project (from sources)

If you're building an Uberfire based web application and you want to include the user and group management feature, please read these instructions.

Disabling the security management feature

The security management feature can be disabled, so that no services or user interface are available, by either of the following:

  • Uninstalling the security management provider from the application

    When no concrete security management provider is installed in the application, the user and group management feature will be disabled and no services or user interface will be presented to the user.
       
  • Removing or commenting the security management configuration file

    Removing or commenting all the lines in the configuration file located at WEB-INF/classes/security-management.properties will disable the user and group management feature and no services or user interface will be presented to the user.


Usage

The user and group management feature is presented using two different perspectives that are available from the main Home menu (provided the feature is enabled) as:
User and group management menu entries
Read the following sections for using both user and group management perspectives.

User management

The user management interface is available from the User management menu entry in the Home menu.

The interface is presented using two main panels:  the users explorer on the west panel and the user editor on the center one:

User management perspective

The users explorer, on the west panel, lists by default all the users present in the application's security realm:

Users explorer panel
In addition to listing all users, the users explorer allows:

  • Searching users


    When specifying the search pattern in the search box, the users list will be reduced to display only the users that match the search pattern.

    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Creating new users:



    By clicking on the Create new user button, a new screen will be presented on the center panel to perform a new user creation.
The user editor, on the center panel, is used to create, view, update or delete users. When creating a new user or clicking an existing user in the users explorer, the user editor screen is opened.

To view an existing user, click on an existing user in the Users Explorer to open the User Editor screen. For example, viewing the admin user when using the Wildfly security management provider results in this screen:

Viewing the admin user
The same admin user view operation, but using the Keycloak security management provider instead of the Wildfly one, results in this screen:

Using the Keycloak sec. management provider
As you can see, the user editor when using the Keycloak security management provider includes the user attributes management section, which is not present when using the Wildfly one. So remember that the information and actions available in the user interface depend on each provider's capabilities (as explained in previous sections).

Viewing a user in the user editor provides the following information (if provider supports it):
  • The user name
  • The user's attributes
  • The assigned groups
  • The assigned roles
In order to update or delete an existing user, click on the Edit button near the username in the user editor screen:

Editing admin user
Once the user editor is in edit mode, different operations can be performed (if the security management provider in use supports them):
  • Update the user's attributes



    Existing user attributes can be updated, such as the user name, the surname, etc. New attributes can be created as well, if the security management provider supports it.
  • Update assigned groups

    A group selection popup is presented when clicking on Add to groups button:



    This popup screen allows the user to search and select or deselect the groups assigned for the user currently being edited.
  • Update assigned roles

    A role selection popup is presented when clicking on Add to roles button:



    This popup screen allows the user to search and select or deselect the roles assigned for the user currently being edited.
  • Change user's password

    A change password popup screen is presented when clicking on the Change password button:

  • Delete user

    The user currently being edited can be deleted from the realm by clicking on the Delete button.
Group management

The group management interface is available from the Group management menu entry in the Home menu.

The interface is presented using two main panels:  the groups explorer on the west panel and the group editor on the center one:

Group management perspective
The groups explorer, on the west panel, lists by default all the groups present in the application's security realm:

Groups explorer
In addition to listing all groups, the groups explorer allows:

  • Searching for groups

    When specifying the search pattern in the search box, the groups list will be reduced to display only the groups that match the search pattern.
    Groups explorer filtered using search
    Search patterns depend on the concrete security management provider being used by the application. Please read each provider's documentation for more information.
  • Create new groups



    By clicking on the Create new group button, a new screen will be presented on the center panel to perform the new group creation. Once the new group has been created, users can be assigned to it:
    Assign users to the recently created group
The group editor, on the center panel, is used to create, view or delete groups. When creating a new group or clicking an existing group in the groups explorer, the group editor screen is opened.

To view an existing group, click on it in the Groups Explorer to open the Group Editor screen. For example, viewing the sales group results in this screen:


Viewing the sales group
To delete an existing group just click on the Delete button.


by Roger Martinez (noreply@blogger.com) at April 06, 2016 06:17 PM

April 05, 2016

Thomas Allweyer: BPM & ERP in the digital enterprise

The 9th Praxisforum BPM & ERP examines the many facets of IT and process management in the age of digitalization. No less a figure than Professor August-Wilhelm Scheer has been secured as keynote speaker; his topic: "Digitalization is devouring the world". The question of what significance process management has in the digitalized enterprise can also be discussed at various theme tables, and several short talks in Pecha Kucha format promise sharply focused food for discussion. Cornelius Clauser, head of the SAP Productivity Consulting Group, argues in his closing talk "From Paper to Impact" for a new orientation of BPM. Before that, participants can look forward to a whole series of practical talks, including from Böhringer Ingelheim, EnBW, Infraserv and Zalando. In addition, the results of the international study BPM Compass, in which participation is still possible until May 8, will be presented.
The one-day event takes place on June 21 in Höhr-Grenzhausen near Koblenz. There is also the opportunity to attend an intensive process management workshop on the day before, as well as a practical workshop on "Agile and hybrid methods in a classical environment" on the following day. Further information is available at www.bpmerp.de.

by Thomas Allweyer at April 05, 2016 06:25 PM

April 04, 2016

Drools & JBPM: Mastering #Drools 6 book is out!

Hi everyone, just a quick post to share the good news! The book is out and ready to ship! You can buy it from Packt or from Amazon directly. I'm also happy to announce that we are going to be presenting the book next week in Denmark with the local JBug: http://www.meetup.com/jbug-dk/events/229407454/ - if you are around or know someone who might be interested in attending, please let them know!

Mastering Drools 6
The book covers a wide range of topics from the basic ones including how to set up your environment and how to write simple rules, to more advanced topics such as Complex Event Processing and the core of the Rule Engine, the PHREAK algorithm.

by salaboy (noreply@blogger.com) at April 04, 2016 09:02 AM

March 24, 2016

Thomas Allweyer: Decision tables in the cloud

Anyone who wants to execute business logic in the form of decision tables following the "Decision Model and Notation" (DMN) standard and integrate it into an application can use a new cloud service from Camunda. Decision tables can be created via a web interface, or built with an offline editor, uploaded, and deployed with a single click. Execution of the decision logic is triggered via a REST API, which allows simple integration into any application. Code examples for several common programming languages are available. For now, however, this is only a beta test, and it is not yet known how long it will remain available free of charge.

by Thomas Allweyer at March 24, 2016 11:14 AM

March 23, 2016

Drools & JBPM: Packt is doing it again: 50% off on all eBooks and Videos

Packt Publishing has another great promotion going: 50% off on all Packt eBooks and Videos until April 30th.

It is a great opportunity to grab all those Drools books as well as any others you might be interested in.

Click on the image below to be redirected to their online store:




by Edson Tirelli (noreply@blogger.com) at March 23, 2016 10:20 PM

March 21, 2016

Drools & JBPM: High Availability Drools Stateless Service in Openshift Origin

Hi everyone! In this blog post I wanted to cover a simple example showing how easy it is to scale our Drools Stateless services using Openshift 3 (Docker and Kubernetes). I will be showing how we can scale our service by provisioning new instances on demand, and how these instances are load balanced by Kubernetes using a round robin strategy.

Our Drools Stateless Service

First of all we need a stateless Kie Session to play around with. In this simple example I've created a food recommendation service to demonstrate what kind of scenarios you can build using this approach. All the source code can be found in the Drools Workshop repository hosted on github: https://github.com/Salaboy/drools-workshop/tree/master/drools-openshift-example
In this project you will find 4 modules:
  • drools-food-model: our business model, including the domain classes such as Ingredient, Sandwich, Salad, etc.
  • drools-food-kjar: our business knowledge; here we have our set of rules describing how the food recommendations will be made.
  • drools-food-services: using WildFly Swarm I'm exposing a domain-specific service encapsulating the rule engine. Here a set of REST services is exposed so our clients can interact (see the sketch after this list).
  • drools-controller: by using the Kubernetes Java API we can programmatically provision new instances of our Food Recommendation Service on demand in the Openshift environment.
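
For illustration, such a domain-specific facade can be as small as the following JAX-RS sketch over a stateless session (the endpoint path and the Person/Suggestion fact classes are hypothetical, not the workshop's actual code):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

@Path("/food")
public class FoodRecommendationResource {

    // Resolve the classpath KieContainer once; the rules come from the kjar.
    private final KieContainer container =
            KieServices.Factory.get().getKieClasspathContainer();

    @GET
    @Path("/recommend")
    @Produces(MediaType.APPLICATION_JSON)
    public Suggestion recommend(@QueryParam("person") String name) {
        StatelessKieSession session = container.newStatelessKieSession();
        Suggestion suggestion = new Suggestion();
        // One-shot execution: insert the facts and fire all rules.
        session.execute(java.util.Arrays.asList(new Person(name), suggestion));
        return suggestion;
    }
}

class Person {
    private final String name;
    Person(String name) { this.name = name; }
    public String getName() { return name; }
}

class Suggestion {
    private String meal;
    public String getMeal() { return meal; }
    public void setMeal(String meal) { this.meal = meal; }
}
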
Our unit of work will be the drools-food-services project, which exposes the REST endpoints to interact with our stateless sessions.
Also notice that there is another Service that gives us very basic information about where our Service is running: https://github.com/Salaboy/drools-workshop/blob/master/drools-openshift-example/drools-food-services/src/main/java/org/drools/workshop/food/endpoint/api/NodeStatsService.java
We will call this service to know exactly which instance of the service is answering our clients later on.
The rules for this example are simple and don't do much. If you are looking to learn Drools, I recommend you create more meaningful rules and share them with me so we can improve the example ;) You can take a look at the rules here:
As you might expect: Sandwiches for boys and Salads for girls :)
One last important thing to see about our service is how the rules are picked up by the service endpoint: I'm using the Drools CDI extension to @Inject a KieContainer, which is resolved using the KIE-CI module, explained in some of my previous posts.
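
For reference, the injection point looks roughly like the following sketch (the GAV coordinates and the class name are assumptions; check the workshop sources for the real ones):

import javax.inject.Inject;

import org.kie.api.cdi.KReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.StatelessKieSession;

public class RulesHolder {

    // The Drools CDI extension resolves this container through KIE-CI
    // from the given GAV; these coordinates are hypothetical.
    @Inject
    @KReleaseId(groupId = "org.drools.workshop",
                artifactId = "drools-food-kjar",
                version = "1.0-SNAPSHOT")
    private KieContainer container;

    public StatelessKieSession newSession() {
        return container.newStatelessKieSession();
    }
}
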
We will bundle this project into a Docker image that can be started as many times as we want/need. If you have a Docker client installed in your local environment, you can start this food recommendation service by looking at the salaboy/drools-food-services image, which is hosted on hub.docker.com/salaboy
By starting the Docker image, without even knowing what is running inside, we immediately notice the following advantages:
  • We don't need to install Java or any other tool besides Docker
  • We don't need to configure anything to run our Rest Service
  • We don't even need to build anything locally, due to the fact that the image is hosted on hub.docker.com
  • We can run on top of any operating system
At the same time, we notice the following disadvantages:
  • We need to know in which IP and Port our service is exposed by Docker
  • If we run more than one image we need to keep track of all the IPs and Ports and notify to all our clients about those
  • There is no built-in way to load balance between different instances of the same Docker image
To solve these disadvantages, Openshift, and more specifically Kubernetes, comes to our rescue!

Provisioning our Service inside Openshift

As I mentioned before, if we just start creating new Docker image instances of our service, we soon find out that our clients will need to know how many instances we have running and how to contact each of them. This is obviously no good, and for that reason we need an intermediate layer to deal with this problem. Kubernetes provides us with this layer of abstraction and provisioning, which allows us to create multiple instances of our PODs (an abstraction on top of the Docker image) and configure Replication Controllers and Services for them.
The concept of a Replication Controller provides a way to define how many instances of our service should be running at a given time. Replication controllers are in charge of guaranteeing that if we need at least 3 instances running, those instances are running all the time. If one of these instances dies, the replication controller will automatically spawn a new one for us.
Services in Kubernetes solve the problem of knowing each and every Docker instance's details. Services allow us to provide a facade that our clients can use to interact with the instances of our Pods. The Service layer also allows us to define a strategy (called session affinity) for how to load balance the Pod instances behind the service. There are two built-in strategies: ClientIP and Round Robin.
So we need two things now: an installation of Openshift Origin (v3), and our drools-controller project, which will interact with the Kubernetes REST endpoints to provision our Pods, Replication Controllers and Services.
For the Openshift installation, I recommend you to follow the steps described here: https://github.com/openshift/origin/blob/master/CONTRIBUTING.adoc
I'm running here in my laptop the Vagrant option (second option) described in the previous link.
Finally, there is an ultra simple example of how to use the Kubernetes API to provision, in this case, our drools-food-services into Openshift.
Notice that we are defining everything at runtime, which is really cool, because we can start from scratch or modify existing Services, Replication Controllers and Pods.
You can take a look at the drools-controller project, which shows how we can create a Replication Controller which points to our Docker image and defines 1 replica (one replica is created by default).
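
To give an idea of what that provisioning code looks like from Java, here is a rough sketch using the fabric8 kubernetes-client (whether the drools-controller module uses exactly this client and these names is an assumption; see the repository for the real code):

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ProvisionReplicasSketch {
    public static void main(String[] args) {
        try (KubernetesClient client = new DefaultKubernetesClient()) {
            // Create a replication controller that keeps one replica of the
            // food service image alive; Kubernetes respawns pods that die.
            client.replicationControllers().inNamespace("default").createNew()
                .withNewMetadata().withName("drools-food").endMetadata()
                .withNewSpec()
                    .withReplicas(1)
                    .addToSelector("app", "drools-food")
                    .withNewTemplate()
                        .withNewMetadata().addToLabels("app", "drools-food").endMetadata()
                        .withNewSpec()
                            .addNewContainer()
                                .withName("drools-food")
                                .withImage("salaboy/drools-food-services")
                                .addNewPort().withContainerPort(8080).endPort()
                            .endContainer()
                        .endSpec()
                    .endTemplate()
                .endSpec()
                .done();
        }
    }
}
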
If you log into the Openshift Console you will be able to see the newly created service with the Replication Controller and just one replica of our Pod. By using the UI (or the APIs, changing the Main class) we can provision more replicas, as many as we need. The Kubernetes Service will make sure to load balance between the different Pod instances.
Voila! Our Services Replicas are up and running!
Now if you access the NodeStat service by doing a GET to the mapped Kubernetes Service port, you will get the Pod that is answering that request. If you execute the request multiple times, you should be able to see the Round Robin strategy kicking in.
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-k9gym","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-pzqlu","version":"version 1"}
wget http://localhost:9999/api/node {"node":"drools-controller-8tmby","version":"version 1"}
In the same way you can interact with the Stateless Sessions in each of these 3 Pods. In that case, you don't really need to know which Pod is answering your request; you just need to get the job done by any of them.

Summing up

By leveraging the Openshift Origin infrastructure, we manage to simplify our architecture by not reinventing mechanisms that already exist in tools such as Kubernetes & Docker. In following posts I will be writing about some other nice advantages of using this infrastructure, such as rolling upgrades of our service versions, and adding security and API Management to the mix.
If you have questions about this approach please share your thoughts.

by salaboy (noreply@blogger.com) at March 21, 2016 06:21 PM

Thomas Allweyer: A standard for the EPC

Event-driven process chains (EPCs) are still widely used for modeling business processes, above all for representing the business view. And although this notation has existed for almost a quarter of a century, there is, unlike for the much younger BPMN, still no binding standard for it. The consequences are differing interpretations, and with them inconsistent usage and a lack of ways to exchange EPCs between different tools. That is now set to change. Under the lead of professors Oliver Thomas of the University of Osnabrück and Jörg Becker of the University of Münster, a working group on EPC standardization has been founded. Work on the standard is supported by a wiki collaboration platform, reachable at www.epc-standard.org. Anyone interested in contributing can register there as a participant.

by Thomas Allweyer at March 21, 2016 12:30 PM

March 19, 2016

Drools & JBPM: Keycloak SSO Integration into jBPM and Drools Workbench

Introduction


Single Sign On (SSO) and related token exchange mechanisms are becoming the most common scenario for authentication and authorization in different environments on the web, especially when moving into the cloud.

This article talks about the integration of Keycloak with jBPM or Drools applications in order to use all the features provided by Keycloak. Keycloak is an integrated SSO and IDM for browser applications and RESTful web services. Learn more about it on the Keycloak home page.

The result of the integration with Keycloak has lots of advantages such as:
  • Provide an integrated SSO and IDM environment for different clients, including jBPM and Drools workbenches
  • Social logins - use your Facebook, Google, Linkedin, etc accounts
  • User session management
  • And much more...
       
Next sections cover the following integration points with Keycloak:

  • Workbench authentication through a Keycloak server
    It basically consists of securing both the web client and remote service clients through the Keycloak SSO, so either web interface users or remote service consumers (whether a user or a service) will authenticate through KC.
       
  • Execution server authentication through a Keycloak server
    Consists of securing the remote services provided by the execution server (as it does not provide a web interface). Any remote service consumer (whether a user or a service) will authenticate through KC.
      
  • Consuming remote services
    This section describes how third party clients can consume the remote service endpoints provided by both the Workbench and the Execution Server.
       
Scenario

Consider the following diagram as the environment for this article's example:

Example scenario

Keycloak is a standalone process that provides remote authentication, authorization and administration services that can be potentially consumed by one or more jBPM applications over the network.

Consider these main steps for building this environment:
  • Install and setup a Keycloak server
      
  • Create and setup a Realm for this example - Configure realm's clients, users and roles
      
  • Install and setup the SSO client adapter & jBPM application

Notes: 

  • The resulting environment and the different configurations in this article are based on the jBPM (KIE) Workbench, but the same ones can also be applied to the KIE Drools Workbench.
  • This example uses the latest 6.4.0.CR2 community release version

Step 1 - Install and setup a Keycloak server


Keycloak provides extensive documentation and several articles about installation on different environments. This section describes the minimal setup needed to build the integrated environment for the example. Please refer to the Keycloak documentation if you need more information.

Here are the steps for a minimal Keycloak installation and setup:
  1. Download the latest version of Keycloak from the Downloads section. This example is based on Keycloak 1.9.0.Final.
      
  2. Unzip the downloaded distribution of Keycloak into a folder; let's refer to it as
    $KC_HOME

      
  3. Run the KC server - This example is based on running both Keycloak and jBPM on the same host. In order to avoid port conflicts you can use a port offset for the Keycloak server as:

        $KC_HOME/bin/standalone.sh -Djboss.socket.binding.port-offset=100
      
  4. Create a Keycloak administration user - Execute the following command to create an admin user for this example:

        $KC_HOME/bin/add-user.sh -r master -u 'admin' -p 'admin'
The Keycloak administration console will be available at http://localhost:8180/auth/admin (use admin/admin as the login credentials)

Step 2 - Create and setup the demo Realm


Security realms are used to restrict access to the different application resources.

Once the Keycloak server is running, the next step is creating a realm. This realm will provide the different users, roles, sessions, etc. for the jBPM application(s).

Keycloak provides several examples for the realm creation and management, from the official examples to different articles with more examples.

You can create the realm manually or just import the given json files.

Creating the realm step by step

Follow these steps in order to create the demo realm used later in this article:
  1. Go to the Keycloak administration console and click on Add realm button. Give it the name demo.
      
  2. Go to the Clients section (from the main admin console menu) and create a new client for the demo realm:
    • Client ID: kie
    • Client protocol: openid-connect
    • Access type: confidential
    • Root URL: http://localhost:8080
    • Base URL: /kie-wb-6.4.0.Final
    • Redirect URIs: /kie-wb-6.4.0.Final/*
The resulting kie client settings screen:

Settings for the kie client

Note: As you can see in the above settings, the value kie-wb-6.4.0.Final is used for the application's context path. If your jBPM application will be deployed on a different context path, host or port, just use your concrete settings here.

The last step before being able to use the demo realm from the jBPM workbench is to create the application's user and roles:
  • Go to the Roles section and create the roles admin, kiemgmt and rest-all
      
  • Go to the Users section and create the admin user. Set the password with value "password" in the credentials tab, unset the temporary switch.
      
  • In the Users section navigate to the Role Mappings tab and assign the admin, kiemgmt and rest-all roles to the admin user
Role mappings for admin user


Importing the demo realm

Import both:

  • Demo Realm - Click on Add Realm and use the demo-realm.json file
      
  • Realm users - Once the demo realm is imported, click on Import in the main menu and use the demo-users-0.json file as the import source
At this point a Keycloak server is running on the host, set up with a minimal configuration. Let's move on to the jBPM workbench setup.

Step 3 - Install and setup jBPM workbench


For this tutorial let's use Wildfly as the application server for the jBPM workbench, as the jBPM installer does by default.

Let's assume, after running the jBPM installer, that $JBPM_HOME is the root path for the Wildfly server where the application has been deployed.

Step 3.1 - Install the KC adapter

In order to use the Keycloak authentication and authorization modules from the jBPM application, the Keycloak adapter for Wildfly must be installed on our server at $JBPM_HOME. Keycloak provides multiple adapters for different containers out of the box. If you are using another container or need to use another adapter, please take a look at the adapters configuration in the Keycloak docs. Here are the steps to install and set up the adapter for Wildfly 8.2.x:

  1. Download the adapter from here
      
  2. Execute the following commands:

     
    cd $JBPM_HOME/
    unzip keycloak-wf8-adapter-dist.zip // Install the KC client adapter

    cd $JBPM_HOME/bin
    ./standalone.sh -c standalone-full.xml // Setup the KC client adapter.

    // ** Once server is up, open a new command line terminal and run:
    cd $JBPM_HOME/bin
    ./jboss-cli.sh -c --file=adapter-install.cli
Step 3.2 - Configure the KC adapter

Once the KC adapter is installed into Wildfly, the next step is to configure it in order to specify different settings, such as the location of the authentication server, the realm to use, and so on.

Keycloak provides two ways of configuring the adapter:
  • Per WAR configuration
  • Via Keycloak subsystem 
In this example let's use the second option, the Keycloak subsystem, so our WAR is free of this kind of settings. If you want to use the per WAR approach, please take a look here.

Edit the configuration file $JBPM_HOME/standalone/configuration/standalone-full.xml and locate the subsystem configuration section. Add the following content:

<subsystem xmlns="urn:jboss:domain:keycloak:1.1">
<secure-deployment name="kie-wb-6.4.0-Final.war">
<realm>demo</realm>
<realm-public-key>MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA2Q3RNbrVBcY7xbpkB2ELjbYvyx2Z5NOM/9gfkOkBLqk0mWYoOIgyBj4ixmG/eu/NL2+sja6nzC4VP4G3BzpefelGduUGxRMbPzdXfm6eSIKsUx3sSFl1P1L5mIk34vHHwWYR+OUZddtAB+5VpMZlwpr3hOlfxJgkMg5/8036uebbn4h+JPpvtn8ilVAzrCWqyaIUbaEH7cPe3ecou0ATIF02svz8o+HIVQESLr2zPwbKCebAXmY2p2t5MUv3rFE5jjFkBaY25u4LiS2/AiScpilJD+BNIr/ZIwpk6ksivBIwyfZbTtUN6UjPRXe6SS/c1LaQYyUrYDlDpdnNt6RboQIDAQAB</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">925f9190-a7c1-4cfd-8a3c-004f9c73dae6</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>
</subsystem>

If you have imported the example json files from this article in step 2, you can just use the same configuration as above, using your concrete deployment name. Otherwise please use your own values for these configurations:
  • Name for the secure deployment - Use your concrete application's WAR file name
      
  • Realm - Is the realm that the applications will use, in our example, the demo realm created on step 2.
      
  • Realm Public Key - Provide here the public key for the demo realm. It's not mandatory; if not specified, it will be retrieved from the server. Otherwise, you can find it in the Keycloak admin console -> Realm settings (for the demo realm) -> Keys
      
  • Authentication server URL - The URL for the Keycloak's authentication server
      
  • Resource - The name for the client created on step 2. In our example, use the value kie.
      
  • Enable basic auth - For this example let's enable the Basic authentication mechanism as well, so clients can use both Token (Bearer) and Basic approaches to perform the requests.
      
  • Credential - Use the password value for the kie client. You can find it in the Keycloak admin console -> Clients -> kie -> Credentials tab -> Copy the value for the secret.

For this example you have to take care to use your concrete values for the secure-deployment name, realm-public-key and credential password. You can find detailed information about the KC adapter configurations here.

Step 3.3 - Run the environment

At this point a Keycloak server is up and running on the host, and the KC adapter is installed and configured for the jBPM application server. You can run the application using:

    $JBPM_HOME/bin/standalone.sh -c standalone-full.xml

You can navigate into the application once the server is up at http://localhost:8080/kie-wb-6.4.0.Final:


jBPM & SSO - Login page 
Use your Keycloak's admin user credentials to login: admin/password

Securing workbench remote services via Keycloak

Both the jBPM and Drools workbenches provide different remote service endpoints that can be consumed by third party clients using the remote API.

In order to authenticate those services through Keycloak, the BasicAuthSecurityFilter must be disabled. Apply the following modifications to the WEB-INF/web.xml file (app deployment descriptor) of the jBPM WAR file:

1. Remove the filter:

 <filter>
  <filter-name>HTTP Basic Auth Filter</filter-name>
<filter-class>org.uberfire.ext.security.server.BasicAuthSecurityFilter</filter-class>
<init-param>
<param-name>realmName</param-name>
<param-value>KIE Workbench Realm</param-value>
</init-param>
</filter>

<filter-mapping>
<filter-name>HTTP Basic Auth Filter</filter-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</filter-mapping>

2. Constrain the remote services URL patterns as:

<security-constraint>
<web-resource-collection>
<web-resource-name>remote-services</web-resource-name>
<url-pattern>/rest/*</url-pattern>
<url-pattern>/maven2/*</url-pattern>
<url-pattern>/ws/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>rest-all</role-name>
</auth-constraint>
</security-constraint>


Important note: The user that consumes the remote services must be a member of the rest-all role. As described in step 2, the admin user in this example is already a member of the rest-all role.

Execution server


The KIE Execution Server provides a REST API that can be consumed by any third party client. This section is about how to integrate the KIE Execution Server with the Keycloak SSO in order to delegate the third party clients' identity management to the SSO server.
Consider the above environment running, with:
  • A Keycloak server running and listening on http://localhost:8180/auth
      
  • A realm named demo with a client named kie for the jBPM Workbench
      
  • A jBPM Workbench running at http://localhost:8080/kie-wb-6.4.0-Final
Follow these steps in order to add an execution server into this environment:


  • Create the client for the execution server on Keycloak
  • Install and set up the Execution Server (with the KC client adapter)
Step 1 - Create the client for the execution server on Keycloak

For each execution server that is going to be deployed, you have to create a new client in the demo realm in Keycloak.
  1. Go to the KC admin console -> Clients -> New client
  2. Name: kie-execution-server
  3. Root URL: http://localhost:8280/  
  4. Client protocol: openid-connect
  5. Access type: confidential (or public if you prefer, but not recommended)
  6. Valid redirect URIs: /kie-server-6.4.0.Final/*
  7. Base URL: /kie-server-6.4.0.Final
In this example, the admin user already created in the previous steps is the one used for the client requests. So ensure that the admin user is a member of the kie-server role in order to use the execution server's remote services. If the role does not exist, create it.

Note: This example assumes that the execution server will be configured to run using a port offset of 200, so the HTTP port will be available at localhost:8280

Step 2 - Install and setup the KC client adapter and the Execution server

At this point, a client named kie-execution-server is ready on the KC server for use from the execution server. Let's install, set up and deploy the execution server:
  
1. Install another Wildfly server to use for the execution server, plus the KC client adapter as well. You can follow the instructions above for the Workbench, or follow the official adapters documentation.
  
2. Edit the standalone-full.xml file from the Wildfly server's configuration path and configure the KC subsystem adapter as:

<secure-deployment name="kie-server-6.4.0.Final.war">
<realm>demo</realm>
<realm-public-key>
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCrVrCuTtArbgaZzL1hvh0xtL5mc7o0NqPVnYXkLvgcwiC3BjLGw1tGEGoJaXDuSaRllobm53JBhjx33UNv+5z/UMG4kytBWxheNVKnL6GgqlNabMaFfPLPCF8kAgKnsi79NMo+n6KnSY8YeUmec/p2vjO2NjsSAVcWEQMVhJ31LwIDAQAB
</realm-public-key>
<auth-server-url>http://localhost:8180/auth</auth-server-url>
<ssl-required>external</ssl-required>
<resource>kie-execution-server</resource>
<enable-basic-auth>true</enable-basic-auth>
<credential name="secret">e92ec68d-6177-4239-be05-28ef2f3460ff</credential>
<principal-attribute>preferred_username</principal-attribute>
</secure-deployment>

Consider your concrete environment settings if they are different from this example:
  • Secure deployment name -> use the name of the execution server war file being deployed
      
  • Public key -> Use the demo realm public key, or leave it blank and the server will retrieve it
       
  • Resource -> This time, instead of the kie client used in the WB configuration, use the kie-execution-server client
      
  • Enable basic auth -> Up to you. You can enable Basic auth for third party service consumers
       
  • Credential -> Use the secret key for the kie-execution-server client. You can find it in the Credentials tab of the KC admin console.
       
Step 3 - Deploy and run an Execution Server

Just deploy the execution server in Wildfly using any of the available mechanisms.
Run the execution server using this command:
$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=<ID> -Dorg.kie.server.user=<USER> -Dorg.kie.server.pwd=<PWD> -Dorg.kie.server.location=<LOCATION_URL>  -Dorg.kie.server.controller=<CONTROLLER_URL> -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTOLLER_PASSWORD>  
Example:

$EXEC_SERVER_HOME/bin/standalone.sh -c standalone-full.xml -Djboss.socket.binding.port-offset=200 -Dorg.kie.server.id=kieserver1 -Dorg.kie.server.user=admin -Dorg.kie.server.pwd=password -Dorg.kie.server.location=http://localhost:8280/kie-server-6.4.0.Final/services/rest/server -Dorg.kie.server.controller=http://localhost:8080/kie-wb-6.4.0.Final/rest/controller -Dorg.kie.server.controller.user=admin -Dorg.kie.server.controller.pwd=password  
Important note: The users that will consume the execution server remote service endpoints must have the kie-server role assigned. So create and assign this role in the KC admin console to the users that will consume the execution server remote services.
  
Once up, you can check the server status as follows (assuming Basic authentication for this request; see the next section, Consuming remote services, for more information):
 
curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/

Consuming remote services

In order to use the different remote services provided by the Workbench or by an Execution Server, your client must be authenticated on the KC server and have a valid token to perform the requests.

NOTE: Remember that in order to use the remote services, the authenticated user must have assigned:

  • The role rest-all for using the WB remote services
  • The role kie-server for using the Execution Server remote services

Please ensure the necessary roles are created and assigned to the users that will consume the remote services in the Keycloak admin console.

You have two options to consume the different remote service endpoints:

  • Using basic authentication, if the application's client supports it
  • Using Bearer (token) based authentication

Using basic authentication

If the KC client adapter configuration has Basic authentication enabled, as proposed in this guide for both the WB (step 3.2) and the Execution Server, you can avoid the token grant/refresh calls and just call the services as in the following examples.

Example for a WB remote repositories endpoint:

curl http://admin:password@localhost:8080/kie-wb-6.4.0.Final/rest/repositories

Example for checking the status of the Execution Server:

curl http://admin:password@localhost:8280/kie-server-6.4.0.Final/services/rest/server/

Using token based authentication

The first step is to create a new client on Keycloak that allows third party remote service clients to obtain a token. It can be done as follows:
  • Go to the KC admin console and create a new client using this configuration:
    • Client id: kie-remote
    • Client protocol: openid-connect
    • Access type: public
    • Valid redirect URIs: http://localhost/
         
  • As we are going to manually obtain a token and invoke the service let's increase the lifespan of tokens slightly. In production access tokens should have a relatively low timeout, ideally less than 5 minutes:
    • Go to the KC admin console
    • Click on your Realm Settings
    • Click on Tokens tab
    • Change the value for Access Token Lifespan to 15 minutes ( That should give us plenty of time to obtain a token and invoke the service before it expires ) 

Once a public client for our remote clients has been created, you can obtain a token by performing an HTTP request to the KC server's token endpoint. Here is an example for the command line:

RESULT=`curl --data "grant_type=password&client_id=kie-remote&username=admin&password=password" http://localhost:8180/auth/realms/demo/protocol/openid-connect/token`

TOKEN=`echo $RESULT | sed 's/.*access_token":"//g' | sed 's/".*//g'`

At this point, if you echo the $TOKEN it will output the token string obtained from the KC server, which can now be used to authorize further calls to the remote endpoints. For example, if you want to check the internal jBPM repositories:

curl -H "Authorization: bearer $TOKEN" http://localhost:8080/kie-wb-6.4.0.Final/rest/repositories
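
For Java-based consumers, the same password grant can be performed with plain JDK classes. A minimal sketch, assuming the same demo realm, kie-remote client and admin/password user as above (a real client would parse the JSON response instead of printing it):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class TokenGrantSketch {
    public static void main(String[] args) throws Exception {
        // Same password grant as the curl call above.
        URL url = new URL("http://localhost:8180/auth/realms/demo/protocol/openid-connect/token");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        con.setDoOutput(true);
        con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        String form = "grant_type=password&client_id=kie-remote&username=admin&password=password";
        con.getOutputStream().write(form.getBytes(StandardCharsets.UTF_8));
        try (InputStream in = con.getInputStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            // The response is a JSON document containing "access_token".
            System.out.println(scanner.next());
        }
    }
}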


by Roger Martinez (noreply@blogger.com) at March 19, 2016 09:19 PM

March 17, 2016

Keith Swenson: Develop for Self-Managed Organizations

Here is a message from my friend, Robert Gilman, about participating with us on an open source platform for supporting a sociocratic organization.  It is the most interesting thing I have been involved in for years.

Message from the Context Institute

Do you have, or do you know someone who has, programming skills they would be willing to use on an open-source project that could really make a difference for a better world? The project is part of the movement toward effective self-management. In a real sense, it is working on a new category of software.

Starting in November of 2014, John Buck (Sociocracy author and trainer), Ian Gilman (software developer) and Robert Gilman (sustainability pioneer and futurist) started on a quest for good integrated software that supports self-management in organizations with distributed decision-making (like Sociocracy and many of the organizations profiled in Reinventing Organizations). We searched for existing software and looked for existing components that might be patched together, but no luck, so we started to build software, now called Weaver, that could serve this need.

We got a big boost in April 2015 when Keith Swenson joined our team of volunteers and brought his existing group-management platform called Cognoscenti. We’ve all been working since to adapt and extend Cognoscenti to the needs of sociocratic and similar groups.

We’ve made considerable progress, enough so that we could really use some additional part-time volunteer programming help. If you, or someone you know, would like to work on this open-source project under Keith’s leadership, using modern technologies like AngularJS and Bootstrap – please contact Robert Gilman.

My Addendum

Some of you are probably already familiar with Cognoscenti, as I have been using it for most of my recent demonstrations of collaborative software. It has advanced tremendously in the last 9 months. We have added a lot to support meetings, agendas, discussion topics, decisions (not branch nodes!), proposals, and discussion rounds, and to make group decision making easier and more inclusive. Also, the entire user interface has been rewritten in Angular.js and Bootstrap. It is not about automation, it is about facilitating good decisions. It is all freely available. If you are interested in groups of people working together, you probably owe it to yourself to take another look. If you want to help, contact Robert Gilman. Questions about the software? Just make a comment right here.


by kswenson at March 17, 2016 10:14 AM

March 14, 2016

Thomas Allweyer: A look back at CPOs@BPM&O 2016

The two-day conference, organized by the process management consultancy BPM&O, drew more than 80 participants to Cologne. The program offered a varied mix of practice reports, expert talks, tool presentations and hands-on workshops. "Quo vadis, process management?" asked BPM&O managing directors Thilo Knuppertz and Uwe Feddern at the opening, observing that the topic of process management has fully arrived in the business departments in recent years: whereas their contacts in companies used to be located mainly in IT, today they mostly talk to the business side. Knuppertz explained the role processes play in successful and rapid strategy execution. The much-invoked digital transformation, in particular, can only succeed if the magic triangle of customers, products and processes is properly aligned. Many companies have achieved improvements in individual processes in recent years with methods such as Six Sigma or Lean Management, but are now finding that enterprise-wide process management is required to secure these gains permanently.

Experience reports from different industries provided practical examples of introducing process management. The Frankfurt public transport company traffiQ uses the EDEN maturity model as the basis for evaluating and developing its process management initiatives. The organizational and leadership structures were also changed in the course of this. As an employee survey showed, however, not all changes were received positively, which is why the identified deficits are now being addressed specifically. Process management is still quite new at Stadtwerke Karlsruhe. Among other things, the integration of the various management systems present in the company is on the agenda, e.g. quality, environmental and energy management. Thorsten Speil reported that the benefit of process management is repeatedly called into question. A workshop was therefore held in which all division heads rated the benefit potentials most important from their point of view. Constant communication is of central importance: again and again they find employees who know nothing at all about the process management initiative.

Matthias Adelhard of the measuring instrument manufacturer Diehl Metering runs the introduction of process management as a change project and accordingly uses change management methods, benefiting from his training as a systemic organizational developer. Interestingly, while most process management practitioners agree that successful change management is one of the most important success factors for process management, hardly any of them has a qualification in organizational development. This was confirmed by a short poll of the audience.

In the course of the first day, several tool vendors presented their process portals, which publish information about processes on the intranet. Mostly, examples from concrete customer installations were shown, giving an impression of the diverse navigation and collaboration options. All vendors have made great strides in recent years in usability, role-based configuration options and support for mobile devices. In an audience vote, none of the established platform vendors won; instead, the still quite new product "Ask Delphi" from MTAC did. It brings no process models or formal descriptions to the intranet, but rather role-specific instructions, videos, e-learning sequences and the like, which support employees in carrying out their work in a quite intuitive way.

The second day was devoted to innovations in process management. Prof. Mevius of the Konstanz Institute for Process Control took up the topic of digitalization again and emphasized that the added value of new technologies arises through processes. Process management and BPM software have reached a high level of maturity today; nevertheless, goals such as customer satisfaction are often not achieved to the desired extent. His credo: people must be placed much more at the center, the goal being a user experience like the one familiar from the consumer world. BPMN models, for example, are an excellent instrument for experts, but not for business users. As an example of support for better process capture he showed an intuitive, multimedia-based app for recording processes. Ultimately, however, the "process experience" arises above all during process execution. He presented several examples for this as well, e.g. the integration of in-ear devices to support employees while working through processes.

Lars Büsing of Learnical argued for integrating innovation into day-to-day business. Faced with threats to many established business models, companies look for success formulas to copy, but in a complex and chaotic environment that does not work. What is needed is innovation, which is not primarily about individual inventions but about continuous learning. Findings from brain research indicate that innovation cannot be forced through sheer willpower. Methods such as “Lego Serious Play” are successful precisely because new ideas emerge above all through play and through exchange between people.

The participants could then experience this for themselves in various workshops. Besides Lego Serious Play, which makes process-related questions literally “graspable”, there was a workshop on collaborative process modeling with t.BPM, in which the modeling symbols, available as small tiles, are first laid out on a table and can thus be rearranged very easily. A third round centered on the game “Slotter”, which lets participants try out and optimize the interplay within processes. Uwe Feddern moderated a dialogue workshop in which clear rules ensure that everyone gets a chance to speak and that a better understanding of the concerns and opinions of the other group members is reached than in an ordinary discussion.

The conference was rounded off by the trend researcher Walter Matthias Kunze, who argued that digital transformation leads to social transformation, and that companies must therefore get serious about implementing new leadership values. This includes handing responsibility over to self-organized teams and dismantling controls. Companies such as the Brazilian firm Semco, the advertising agency Ministry, and also XING are examples of how this can work. As technology spreads, a countertrend emerges as well: a growing need for ethical and spiritual values. Companies must take this into account, respect the values and ideals of their customers and employees, and act and communicate credibly.

Visitors experienced a top-class conference that stood out not only for its talks but also for a high degree of interaction and intensive discussion.

by Thomas Allweyer at March 14, 2016 09:14 AM

March 09, 2016

Thomas Allweyer: Raffle of Free Tickets for Insight 2016

Update 12.3.16: The raffle has ended and the winners have been notified. Thank you to all participants!
MID has kindly provided five free tickets for Insight 2016, which takes place on April 12 in Nuremberg. If you would like to win one, send me an email by next Friday, March 11, stating the title of Ranga Yogeshwar’s talk. As always, the judges’ decision is final.

by Thomas Allweyer at March 09, 2016 10:52 AM

March 04, 2016

Keith Swenson: Key Process Activities for 2016

Six key process activities coming in 2016: Adaptive CM Workshop, ACM Awards, BPM Next, BPM and Case Management Global Summit, BPM 2016 Conference and (updated) CBI Conference.

1. Adaptive CM Workshop – Sept 5 or 6, 2016

This marks the fifth year that we have been able to hold this full-day International Workshop on Adaptive Case Management and other non-workflow approaches to BPM. Past workshops have been the premier place to publish rigorously peer-reviewed scientific papers on these groundbreaking new technologies. See the submission instructions. Submission abstracts are due 10 April 2016, with notification to authors in June 2016. The workshop is co-located with IEEE EDOC 2016, September 5-9, 2016 in Vienna, Austria.

2. ACM Awards – Apply Now

The WfMC will be running another ACM awards program to recognize excellent use of case management style approaches to supporting knowledge workers. The awards are designed to show how flexible, task-tracking software is increasingly used by knowledge workers with unpredictable work patterns. Winners are recognized on the hall of fame site (see the sample page highlighting a winner) and in a ceremony at the BPM and Case Management Summit in June. Each winning use case is published so that others can learn about the good work you have been doing and follow your lead. This series of books is the premier source of best practices in the case management space. Submit proposals and abstracts now for help and guidance in preparing a high-quality entry; final submissions are due in April 2016.

3. BPM Next – April 19-21, 2016

The meeting of the gurus in the BPM space. BPM Next is where the leaders of the industry come together to discuss evolving new approaches and to make sense of the leading trends. The engineering-oriented talks are required to include a demo of actual running code, to avoid imaginative but unrealistic fantasies. This year every presentation will start with an “Ignite” segment of exactly 20 slides lasting exactly 5 minutes, to rein in the gurus’ natural tendency toward lengthy and wordy presentations. The program is already set; however, attendee registration is still open. This year it will again be held in the quaint old town of Santa Barbara.

4. BPM and Case Management Global Summit – June 2016

The premier independent industry show for the full range of process technologies. Many of last year’s attendees described this as the best, most informative conference on BPM and ACM that they had ever attended. This is the third year it will be held at the Ritz-Carlton in Washington DC. The last two years have established it as the premier venue for serious discussions of both Case Management and BPM.

5. BPM 2016 Conference

This year the BPM2016 Conference will be held September 18-22, 2016 in exotic Rio de Janeiro, Brazil. The conference includes the Main Track, Doctoral Consortium, Workshops, Demos, Tutorials and Panels, Industry Track, and other co-located events. (I can’t go this year, but I sure wish I were!)

6. CBI 2016 Conference (updated)

The IEEE Conference on Business Informatics will be held Aug 28 – Sept 01 in Paris. There you can submit invited case reports of around 10 pages describing experience with the technology. The deadline is floating (until mid-July).


by kswenson at March 04, 2016 08:02 PM

March 03, 2016

Thomas Allweyer: New Study BPM Compass Open for Participation

Professors Komus and Gadatsch of the universities in Koblenz and Bonn-Rhein-Sieg are well known for their studies on process and IT management. Their newly launched survey “BPM Compass”, on the success factors of process management, is broader in scope than its predecessors. First, Professor Jan Mendling of the Wirtschaftsuniversität Wien and the Gesellschaft für Prozessmanagement have joined as partners; second, the design and execution of the study are supported by an advisory board of renowned practitioners. Finally, the survey is also available in English for international participants. All practitioners who deal with the business processes in their organizations can take part until May 8. Link to the questionnaire.

by Thomas Allweyer at March 03, 2016 04:11 PM

March 01, 2016

Thomas Allweyer: Leading Case Management Platforms Integrate Predictive Analytics

In a recently published study, Forrester evaluated 14 platforms for dynamic case management, i.e., for supporting weakly structured, knowledge-intensive processes. The systems rated as leaders were praised in particular for integrating powerful analytics functions and for providing prebuilt applications for various use cases.

At Pega, for example, historical maintenance data and real-time data are used to generate repair recommendations. At IBM, too, automatic predictions from predictive analytics functions can be integrated into case handling, for instance for fraud detection. Both vendors use powerful (and not exactly cheap) analytics components from the big data domain for this.

Appian, the third vendor rated as a “Leader”, scores among other things with an app market that currently offers customers 32 application-specific solutions. In many cases such an app should cover at least a large part of the requirements, which significantly reduces development time.

There are significant differences between the various systems and their underlying approaches. Some vendors offer strong enterprise content management functionality, whereas other systems treat content merely as another data type. The analysts see room for improvement in rules management, among other areas: different kinds of rules, e.g., for user navigation or for routing cases, are still scattered across multiple places. The development tools for user interfaces also still have room to grow. Considerable savings could be made here, since around 50% of the spending on external developers goes into user interaction.

The Forrester Wave: Dynamic Case Management, Q1 2016.
The 14 Providers That Matter Most And How They Stack Up.
Download from Appian (registration required).

by Thomas Allweyer at March 01, 2016 09:42 AM

February 25, 2016

Thomas Allweyer: A Classical Approach to Business Process Optimization

The structure of this book follows the “RAIL” procedure model for process improvement projects developed by the author. The model comprises the phases of project preparation, as-is analysis, to-be design, implementation and integration, and ongoing optimization. Added to these are the cross-cutting tasks of project management and control as well as change management.

In this respect, the notion of business process management is defined much more narrowly than is usual today: all aspects of strategic process management are left out. The RAIL model also explicitly contains “no control loops that ensure a continuous review of business processes for efficiency. Setting up this review process is the task of general management” (p. 67). The book thus pursues a “classical” approach to process management, one that is often better described as “business process optimization”. Accordingly, the classics of process orientation are cited in many places. The business process reengineering concept of Hammer and Champy, for example, is explained in detail in its own subchapter, unfortunately without a critical assessment from today’s perspective.

The book begins with a historical overview and a clarification of the most important terms. Various procedure models for process management are then presented and evaluated. Based on this analysis, Gronau develops his RAIL procedure model. The present volume focuses on the as-is analysis and the to-be design, as well as on the cross-cutting topic of project management. The other elements of the procedure model are reserved for a second volume.

The chapter on as-is analysis deals in particular with various methods for capturing the current state, such as interviews, questionnaires, and observations, as well as with criteria and tools for weak-point analysis. A separate chapter is devoted to process modeling as an important instrument. Various modeling methods are presented, including well-known notations such as UML, EPC, and BPMN, but also the “Knowledge Modeling and Description Language” (KMDL), developed at the author’s institute for examining knowledge-intensive processes. For the to-be design, various heuristics are discussed alongside the business process reengineering approach mentioned above. A chapter giving an overview of central terms and methods of project management concludes the volume.

It is somewhat regrettable that various currently debated questions are not addressed, such as the role of business process management as an enabler of digital business models, or the use of agile methods in process management.


Gronau, Norbert:
Geschäftsprozessmanagement in Wirtschaft und Verwaltung: Analyse, Modellierung und Konzeption.
GITO 2016.
The book on amazon.

by Thomas Allweyer at February 25, 2016 08:10 AM