Friday, January 3, 2020

Designing an Event-Driven Business Process at Scale: a Health Management Example.

This post was originally published on Red Hat Developer as a three-part series. To read the original post, visit developer.redhat.com.

The concept of a business process (BP) or workflow (WF) and the discipline and practice of business process management (BPM) have been around since the early '90s. Since then, WF/BPM tools have evolved considerably. More recently, a convergence of different tools has been taking place, adding decision management (DM) and case management (CM) to the mix. The ascendance of data science, machine learning, and the various forms of artificial intelligence in the last few years has further complicated the picture. The mature field of BPM has been subsumed into the much-hyped pseudo-novelties of digital business automation, digital reinvention, digital everything, etc., with the new entries of "low code" and robotic process automation (RPA).

A common requirement of business applications today is to be event-driven: specific events should trigger a workflow or decision in real time. This leads to a fundamental problem. In realistic situations there are many different types of events, each one requiring specific handling. An event-driven business application may have hundreds of qualitatively different workflows or processes. As new types of events arise in today's ever-changing business conditions, new processes have to be designed and deployed as quickly as possible.

This situation is different from the common requirement of scalability at run time. It is not just a problem of making an architecture scale to a very large number of events per second; that problem is in many respects easy to solve. The problem of scalability at design time is what I am concerned with here.

In this post I will show a concrete example from the health management industry implemented with jBPM, the open source business automation suite.

This example will illustrate several BPMN constructs as they are implemented in jBPM:
  • Business process abstraction
  • Service tasks or work item handlers
  • REST API call from within a process
  • Sending email from within a process
  • Sending and catching signals
  • Timer based reminders and escalations

█ The Business Use Case


Population health management (PHM) is an important approach to health care that leverages recent advances in technology to aggregate health data across a variety of sources, analyze these data into a unified and actionable view of the patient, and trigger specific actions that should improve both clinical and financial results. Of course, this implies handling protected personal information, which should be done in full compliance with existing legislation, or controversy will ensue.

An insurance company or health management organization tracks a considerable wealth of information about the health history of every member. For example, a member known to have a certain medical condition is supposed to do certain things periodically, such as visiting a doctor or undergoing a test. Missing such actions should trigger a workflow in the PHM system to make sure that the member gets back on track.

▓ Detailed Requirements


For the sake of example, let's say that a member has rheumatoid arthritis. This person is supposed to take a DMARD drug. It is easy to check periodically whether a prescription for such a drug has been filled over a certain period of time, say one year. If no such prescription has been filled in the past year for a member with this condition, certain actions should be taken by a given actor:

 Task Code  Activities  Actor
 A490.0  The member's doctor should be notified.  PRO
 B143  An insurance channel worker should perform related administrative tasks.  CHW
 C178  The member should be educated.  MEM
 C201  The member should talk to a pharmacist.  RXS

Certain tasks should occur only after another task has completed:

 Task Code  Predecessor Task
 A490.0  (none)
 B143  (none)
 C178  A490.0
 C201  A490.0

The task life cycle should be determined by the change of task status where the task status has the following values and meanings:

 Status Description
 Inactive Not the season for this task.
 Suppressed Specific member situation dictates this task should be suppressed.
 Closed Soft close: assumed completed until hard close notification.
 Completed Hard close: verification of completion received.
 Expired Not completed prior to expiration.

The distinction between closed (soft close) and completed (hard close) is important. An actor is allowed to soft close a task, but the task should be considered completed only upon verification of the task outcome. Only then should the task contribute to the measurement of key performance indicators (KPIs).
A soft close is accomplished by the actor either clicking a Complete button or specifying that the task is Not Applicable (and providing an explanation, with supplemental data that can be uploaded). A hard close requires a notification from some external system.

 Code Soft Close Hard Close
 A490.0 Completed or N/A. HEDIS Engine Compliance.
 B143 Completed or N/A. Provider and pharmacy attestation.
 C178 Completed or N/A. With soft close.
 C201 Click to call pharmacist. With soft close.

If a task actor is late in completing the assigned task, a reminder must be sent to the corresponding actor, and if the task is not closed after a certain period of time, the action must be escalated to a manager:

 Task Code  Reminder Frequency  Escalation After  Escalation Actor
 A490.0  14 days  30 days  PEA
 B143  30 days  90 days  MCH
 C178  7 days  60 days  MRX
 C201  7 days  30 days  CHW

In any case, each task should expire by the end of the year.

The last requirement is that it should be possible to prevent a task from being executed during a defined suppression period. This is the equivalent of snoozing an alarm clock.

The following table briefly describes all the actors mentioned so far:

 Actor  Description
 PRO  Provider
 MEM  Member
 CHW  Community Health Worker
 RXS  Pharmacist
 PEA  Provider Engagement Advocate
 MCH  Community Health Manager
 MRX  Pharmacy Manager

In essence these requirements define a workflow that must be completed for a given PHM event or trigger such as the member missing a DMARD prescription.

You don't want to have to redo all the implementation work if the member is diabetic and did not receive a statin medication within the year, instead of missing a DMARD prescription. There are possibly hundreds of distinct events/triggers in PHM, and having to model the workflow of each one of them separately does not scale. This is the most important requirement from a business perspective: the design of the implementation must be able to scale to as many different types of triggers as there could possibly be, and must be such that new triggers can be added with the least possible amount of effort.


█ Implementation as a Business Process


A complete business process implementing these requirements in jBPM can be imported from GitHub. However, I encourage you to build it from scratch by following the detailed steps below.

Business processes in jBPM follow the BPMN 2.0 specification. You can design a business process to be data-driven as well as event-driven. However, business processes are essentially a realization of imperative, procedural programming, which means that the business logic has to be explicitly spelled out in its entirety.

To satisfy the business scalability requirement as much as possible, the implementation should be completely data-driven. The trigger workflow should be parameterized with data fed to the process engine in some fashion, so that a single business process definition is capable of handling any trigger event.

▓ The Data Model


Most of the workflow-related properties of a task are contained in the custom data type Task:

 Attribute Description
 id The id of the task.
 original id The task code.
 status The status tracking the task life cycle.
 predecessor The task preceding the current one in the task workflow.
 close The task closing type (soft or hard).
 close signal The signal to hard close the task.
 reminder initiation When the first reminder should occur.
 reminder frequency The frequency of the reminders.
 escalated A flag indicating if an escalation should occur.
 escalation timer When an escalation should occur.
 suppressed A flag indicating if task is suppressed.
 suppression period The period of time the task has to remain suppressed.


The custom data type TaskActorAssignment holds the information needed to assign the task to an actor:

 Attribute Description
 actor The actor of the task
 channel The application (user interface) where the task is performed (data entry)
 escalation actor The actor responsible for the escalation
 escalation channel The application (user interface) used in the escalation


The custom data type Reminder holds the information needed to send a reminder to the task actor:

 Attribute Description
 address The (email) address of the actor of the task
 subject The subject of the reminder
 body The content of the reminder
 from The (email) address sending the reminder


All of this data is retrieved from a service. You need to represent the response of the service as the custom data type Response:

 Attribute Description
  task The task data
  assignment The task actor assignment data
  reminder The reminder data


It is understood that, for any given PHM event or trigger, the service will produce a list of such response objects, one for each task in the PHM event workflow.

The Java class diagram shown in Figure 1 summarizes what is needed:

Java Class Diagram
Figure 1: Java class diagram.

You should implement the model in Java. All classes must be serializable. Overriding the toString method is optional but it helps when tracing process execution.
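A minimal sketch of such model classes is below. In the actual project they are separate classes in the com.health_insurance.phm_model package; they are nested here for compactness, and the attribute lists are abbreviated to the ones used later in the process (the full set is in the tables above).

```java
import java.io.Serializable;

// Sketch of the data model; in the real project each nested class is a
// separate top-level class, and every class carries the full attribute set.
public class Model {

    public static class Task implements Serializable {
        private String id;
        private String originalId;   // the task code, e.g. "A490.0"
        private String status;
        private String predecessor;  // null when the task has no predecessor
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getOriginalId() { return originalId; }
        public void setOriginalId(String originalId) { this.originalId = originalId; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
        public String getPredecessor() { return predecessor; }
        public void setPredecessor(String predecessor) { this.predecessor = predecessor; }
        @Override  // optional, but helpful when tracing process execution
        public String toString() {
            return "Task[id=" + id + ", code=" + originalId + ", status=" + status + "]";
        }
    }

    public static class Reminder implements Serializable {
        private String address, subject, body, from;
        // getters and setters omitted for brevity
    }

    public static class TaskActorAssignment implements Serializable {
        private String actor, channel, escalationActor, escalationChannel;
        // getters and setters omitted for brevity
    }

    public static class Response implements Serializable {
        private Task task;
        private TaskActorAssignment assignment;
        private Reminder reminder;
        public Task getTask() { return task; }
        public void setTask(Task task) { this.task = task; }
        public TaskActorAssignment getAssignment() { return assignment; }
        public void setAssignment(TaskActorAssignment a) { this.assignment = a; }
        public Reminder getReminder() { return reminder; }
        public void setReminder(Reminder reminder) { this.reminder = reminder; }
    }

    public static void main(String[] args) {
        Task t = new Task();
        t.setId("1");
        t.setOriginalId("A490.0");
        System.out.println(t);
    }
}
```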

This model can be imported into jBPM from GitHub.

▓ The Trigger Process

After creating the project in jBPM for the business process implementing the trigger workflow, make sure that the model project is a dependency in the project settings.

Create the process in the process designer with the properties shown in Figure 2:


Trigger Process Properties
Figure 2: Trigger process properties.

and with the process variables shown in Figure 3.

Trigger Process Variables
Figure 3: Trigger process variables.

Add the imports as shown in Figure 4.

Trigger Process Imports
Figure 4: Trigger process imports.


Then draw the diagram in the process designer as shown in Figure 5.

Trigger Process Diagram
Figure 5: Trigger process diagram.


This type of business application is typically subscribed in some fashion to a topic of a data streaming solution such as Apache Kafka. However, we are not concerned with the precise data-feeding mechanism right now, and the process is simply started via the REST API. We will cover the integration of the business process with a streaming source in a future post.

The first activity is an external service call that gets all the data needed to execute the subprocess as a function of the member token and the trigger id. This frees the streaming application that starts the process from the burden of orchestrating data services. The activity is implemented with the REST work item handler, which comes pre-installed in jBPM. You will need to implement the service called by this service task; I will show you how to do that later.

Once the data are available, a multiple instance subprocess iterates over each task in the trigger workflow. You need to implement the requirement that certain tasks must come after given tasks in the workflow. In our example, tasks C178 and C201 must follow A490.0.

▒ The "Get the Data" Service Task
You should now configure the service task with an on entry action and an on exit action. The on exit action is needed because the REST service delivers the data as a list of maps: the actual Response objects must be obtained by converting each map in the list. There is no way around this. So the collection that contains the list of Response objects is the variable pDataList and not pResult.
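The conversion can be sketched in plain Java as follows. In the process itself this logic lives in the on exit action, where kcontext.getVariable("pResult") supplies the raw list and kcontext.setVariable("pDataList", dataList) stores the result; the stand-in Response below, built from the three maps, is an assumption about the model project's constructor.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the on-exit conversion: each map from the REST result becomes
// one Response object (one per task in the trigger workflow).
public class ResponseConversion {

    // Stand-in for com.health_insurance.phm_model.Response
    public static class Response {
        final Map<String, Object> task, assignment, reminder;
        @SuppressWarnings("unchecked")
        Response(Map<String, Object> raw) {
            this.task = (Map<String, Object>) raw.get("task");
            this.assignment = (Map<String, Object>) raw.get("assignment");
            this.reminder = (Map<String, Object>) raw.get("reminder");
        }
    }

    public static List<Response> convert(List<Map<String, Object>> result) {
        List<Response> dataList = new ArrayList<>();
        for (Map<String, Object> raw : result) {
            dataList.add(new Response(raw));
        }
        return dataList;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> raw = List.of(
            Map.of("task", Map.of("id", "1"), "assignment", Map.of(), "reminder", Map.of()));
        System.out.println(convert(raw).size() + " response(s) converted");
    }
}
```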

The only parameters to configure are

  • Method as GET, 
  • Url as pGetInfoUrl, 
  • ContentType as application/json, 
  • ResultClass as java.util.List,
  • Result as pResult.


In real life more parameters will be needed; for example, the BPM system will have to authenticate in order to retrieve the data. For the sake of this exercise, however, you will keep this service as simple as possible.
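In plain Java terms, what the handler does with these parameters amounts to an HTTP GET whose JSON body is then deserialized into a java.util.List (the ResultClass parameter) and stored in pResult. A sketch with java.net.http (the example URL is an assumption):

```java
import java.net.URI;
import java.net.http.HttpRequest;

// What the REST work item handler builds from the parameters above.
public class GetTheData {
    public static HttpRequest buildRequest(String getInfoUrl) {
        return HttpRequest.newBuilder()
                .uri(URI.create(getInfoUrl))                // the Url parameter (pGetInfoUrl)
                .header("Accept", "application/json")       // content negotiation (ContentType)
                .GET()                                      // the Method parameter
                .build();
    }

    public static void main(String[] args) {
        // The URL is a placeholder; sending the request is omitted so the
        // sketch runs without a server (HttpClient.send would perform the call).
        HttpRequest req = buildRequest("http://localhost:3000/triggerinfo");
        System.out.println(req.method() + " " + req.uri());
    }
}
```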

▒ The Multiple Instance Subprocess
Configure the multiple instance subprocess with parallel execution (you want all subprocess instances to start at the same time), pDataList as the collection and pData as the item (see Figure 6).
Multiple instance subprocess implementation
Figure 6: Multiple instance subprocess implementation.


The individual task workflow is modeled as a reusable subprocess with the properties shown in Figure 7.

Task subprocess implementation
Figure 7: Task subprocess implementation.

The task subprocess input variables should be defined as in Figure 8.
Task subprocess variables.
Figure 8: Task subprocess variables.
The subprocess variables are initialized in a service task:
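The initialization can be sketched as below. In the process this runs as a script where each kcontext.setVariable call copies one piece of the incoming Response (pData) into a process variable; a Map stands in for the process context here, and the variable names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the variable-initialization script: unpack the Response into
// the process variables used by the sorting logic and the task subprocess.
public class InitVariables {
    public static Map<String, Object> initialize(Map<String, Object> pData) {
        Map<String, Object> kcontext = new HashMap<>();   // stand-in for the process context
        kcontext.put("pTask", pData.get("task"));          // kcontext.setVariable("pTask", ...)
        kcontext.put("pAssignment", pData.get("assignment"));
        kcontext.put("pReminder", pData.get("reminder"));
        return kcontext;
    }

    public static void main(String[] args) {
        System.out.println(initialize(Map.of("task", Map.of("id", "1"),
                "assignment", Map.of(), "reminder", Map.of())).keySet());
    }
}
```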



The task sorting logic is based on the predecessor property of the task as shown in Figure 9.
Task sorting logic
Figure 9: Task sorting logic.
The first diverging exclusive gateway lets the process proceed to the task subprocess if this property is null. You configure this branch of the gateway as in Figure 10.
The task has no predecessor
Figure 10: The task has no predecessor.
If the current task has a predecessor, the catching intermediate signal will wait for the signal that the predecessor task has closed before going further. You should configure this signal as in Figure 11.
Predecessor task has closed
Figure 11: Predecessor task has closed.

You will need to create the Task subprocess. Leave the "Called Element" property blank until you have done that.

The only variable that needs to be passed to the subprocess is pData of type com.health_insurance.phm_model.Response as in Figure 12:

Data passed to the Task subprocess
Figure 12: Data passed to the Task subprocess.

Finally, the process sends the signal that the current Task subprocess is closed. Note that the name of the signal is parameterized with the current task id, as in Figure 13.
The current Task subprocess is closed
Figure 13: The current Task subprocess is closed.

▓ The Task Subprocess


Now you can create the Task subprocess with the properties shown in Figure 13.


Task subprocess properties
Figure 13: Task subprocess properties.


The Task subprocess needs variables to be defined as in Figure 14.

Task subprocess variables
Figure 14: Task subprocess variables.

The type org.jbpm.document.Document of the variable sSupplementalData is available out of the box.

Now draw the diagram of the Task subprocess following Figure 15.

Task subprocess diagram
Figure 15: Task subprocess diagram.

Once the process variables are initialized in a script task, a user task must be completed. A reminder is set for the completion of the user task, and an escalation is defined as well. Both the reminder and the escalation are implemented with timers leading to subprocesses that you need to implement. See below for a note concerning timers from a scalability perspective.

Note that we have implemented the task suppression requirement using a timer as well.

The hard close requirement is realized as an embedded subprocess that simply catches a signal. Following the requirements, the escalation follows from a timer on this subprocess.

▒ The Expired? Gate
Expired? gate
Figure 16: Expired? gate.

The two branches of the "Expired?" exclusive gateway (Figure 16) should be configured using the expirationDate attribute of the Task class.

The configuration of the Yes branch is shown in Figure 17.
Yes branch of the Expired? gate
Figure 17: Yes branch of the Expired? gate.

▒ The Suppressed? Gate
The two branches of the "Suppressed?" exclusive gateway (Figure 18) should be configured using the suppressed boolean attribute of the Task class.

Suppressed? gate
Figure 18: Suppressed? gate.

Now configure the "Yes" branch as in Figure 19.

Yes branch of the Suppressed? gate
Figure 19: Yes branch of the Suppressed? gate.
The suppression of the task is accomplished by a timer. The timer will cause the process to wait for the period specified in the suppressionPeriod attribute of the Task object as shown in Figure 20.
Suppress Task timer
Figure 20: Suppress Task timer.

▒ The Human Task
Next you should configure the human task. Figure 21 shows the human task and the related logic in the Task subprocess diagram.

Human task logic in the Task subprocess
Figure 21: Human task logic in the Task subprocess.

The task actor is resolved from the task actor assignment process variable as shown in Figure 22.

Human task properties
Figure 22: Human task properties.

You want the ability to capture some text as well as supplemental documentation that can be uploaded. You also want the ability to complete the task with a Not Applicable or Not Available response. These requirements can be satisfied by configuring the task parameters as shown in Figure 23.

Human task parameters
Figure 23: Human task parameters.
You need to satisfy the requirement of sending a periodic reminder to the task actor. There is a built-in notification capability in jBPM that allows sending email messages to notify groups and/or individuals to complete a task. For example you could configure notifications for the Task activity as shown in Figure 24.

Built-in task notification
Figure 24. Built-in task notification.


In many cases the built-in notification capability may be adequate. However, there are cases where one needs more control over the notification process. The advantage of using timers for this purpose is that they allow designing as complex a reminder process as one could want. In this example you will implement timer-based reminders.

Configure the timer on the task border, which triggers the first reminder after the period of time defined in the reminderInitiation attribute of the Task object, as shown in Figure 25.

Reminder timer.
Figure 25: Reminder timer.
▒ The Reminder Subprocess

Now you should configure the subprocess Reminder as shown in Figure 26. Notice that the properties "Independent" and "Abort Parent" must both be left unchecked, while "Wait for Completion" is checked. If you don't configure it this way, the Reminder subprocess will not stop as it should.

Reminder subprocess properties
Figure 26: Reminder subprocess properties.

Set the subprocess parameters to be the Reminder and Task objects as in Figure 27.

Figure 27: Reminder input parameters.

After completion of the task, a signal should be sent to stop the Reminder subprocess, as shown in Figure 28. Note that the signal scope should be the process instance.
Stop reminder signal.
Figure 28: Stop reminder signal.

▒ The "What Type of Close?" Gate
An exclusive gate decides if the task close state should be soft or hard.

The soft close branch of the gate configuration is shown in Figure 29.

What Type of Close? gate SOFT branch
Figure 29: What Type of Close? gate SOFT branch.
The hard close branch is configured in Figure 30.

What Type of Close? gate HARD branch
Figure 30: What Type of Close? gate HARD branch.

▒ The Hard Close Embedded Subprocess
If the task close is hard, the process must wait for a confirmation signal from an external system. You have to do this in an embedded subprocess because of the escalation requirement.

The subprocess is very simple: it just catches an intermediate signal, as shown in Figure 31.

Hard close signal catch
Figure 31: Hard close signal catch.
You also need to implement an escalation subprocess in case the task is not closed within a given time. Again, jBPM has a built-in capability to reassign a task if it is not completed in a timely fashion. For example, you could configure reassignment of the task as shown in Figure 32.

Reassignment of a task
Figure 32: Reassignment of a task.

However, the requirements say that you need to base the reassignment on the reception of a confirmation, coming from an external system, that the task has been closed. Moreover, the period must be configurable as a variable because it depends on the specific task. For these reasons you need to implement the escalation as a timer-triggered subprocess.

A border timer determines when an escalation is needed based on the value of the escalationTimer attribute of the Task object as seen in Figure 33.

Figure 33: Escalation timer.

There is yet another way to implement SLA escalations. The ProcessEventListener interface has two methods that capture an SLA violation event and can be implemented with custom code specifying what to do in such an event:
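A compilable sketch of such a listener is below. The real interface is org.kie.api.event.process.ProcessEventListener, whose beforeSLAViolated and afterSLAViolated methods receive an org.kie.api.event.process.SLAViolatedEvent; a minimal stand-in for the event type is declared here so the sketch runs on its own.

```java
// Sketch of a custom SLA listener; in a real deployment this class would
// implement org.kie.api.event.process.ProcessEventListener and be registered
// in the deployment descriptor.
public class SlaEscalationListener {

    interface SLAViolatedEvent { String getProcessInstanceId(); }  // stand-in type

    String lastViolated;  // recorded here only so the sketch is observable

    public void beforeSLAViolated(SLAViolatedEvent event) {
        // nothing to do before the violation is recorded
    }

    public void afterSLAViolated(SLAViolatedEvent event) {
        // custom escalation logic goes here, e.g. signalling the Escalation subprocess
        lastViolated = event.getProcessInstanceId();
        System.out.println("SLA violated for process instance " + lastViolated);
    }

    public static void main(String[] args) {
        new SlaEscalationListener().afterSLAViolated(() -> "42");
    }
}
```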


▒ The Escalation Subprocess
Figure 34 shows the Escalation subprocess configuration:

Escalation subprocess properties
Figure 34: Escalation subprocess properties.


Define the parameters of the subprocess. You need to pass the Task and the TaskActorAssignment objects as shown in Figure 35.

Escalation subprocess parameters
Figure 35: Escalation subprocess parameters.


▓ The Reminder Subprocess


You should now create the Reminder subprocess with the properties shown in Figure 36.

Reminder subprocess properties
Figure 36: Reminder subprocess properties.

Now define the variables of this subprocess as in Figure 37.

Reminder subprocess variables
Figure 37: Reminder subprocess variables.

Now create the process diagram as shown in Figure 38.

Reminder subprocess diagram
Figure 38: Reminder subprocess diagram.


The timer causes the email reminder to be executed at the frequency defined in the reminderFrequency attribute of the Task object, as seen in Figure 39.

Reminder timer configuration
Figure 39: Reminder timer configuration.
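jBPM timer delays and periods can be written as ISO-8601 durations, so the values from the requirements table become P7D, P14D, P30D, and so on, with the timer expression referencing the Task attribute (for example #{pData.task.reminderFrequency}; the exact expression is an assumption). java.time uses the same notation, which makes the mapping easy to check:

```java
import java.time.Period;

// The ISO-8601 period strings used in the timer configuration, parsed with
// java.time to confirm what they mean.
public class TimerPeriods {
    public static int days(String isoPeriod) {
        return Period.parse(isoPeriod).getDays();
    }

    public static void main(String[] args) {
        System.out.println("A490.0 reminder frequency P14D = " + days("P14D") + " days");
        System.out.println("C178 reminder frequency P7D = " + days("P7D") + " days");
    }
}
```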

The Email Reminder task uses the Email custom work item handler that comes pre-installed in jBPM.

Figure 40 shows all the service task parameters.

Email custom work item handler input parameters
Figure 40: Email custom work item handler input parameters.

You can use the On Entry Action to set the parameters from the _Reminder variable passed from the Task subprocess:
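The mapping can be sketched as follows. To, From, Subject, and Body are the Email work item parameter names; the Reminder attribute names come from the data model above, and a Map stands in for the work item here (in the process the values are assigned in the On Entry Action).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the on-entry mapping from the _Reminder input to the Email
// work item parameters.
public class EmailOnEntry {
    public static Map<String, Object> toEmailParameters(Map<String, String> reminder) {
        Map<String, Object> params = new HashMap<>();
        params.put("To", reminder.get("address"));       // Reminder.address
        params.put("From", reminder.get("from"));        // Reminder.from
        params.put("Subject", reminder.get("subject"));  // Reminder.subject
        params.put("Body", reminder.get("body"));        // Reminder.body
        return params;
    }

    public static void main(String[] args) {
        System.out.println(toEmailParameters(Map.of(
            "address", "member@example.com", "from", "phm@example.com",
            "subject", "Reminder", "body", "Please complete your task.")));
    }
}
```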



Don't forget to catch the signal to stop the reminder, as shown in Figure 41.

Stop reminder signal catch
Figure 41: Stop reminder signal catch.


▓ The Escalation Subprocess


This process is the simplest of all. It's just a human task as seen in Figure 42.

Escalation subprocess diagram
Figure 42: Escalation subprocess diagram.

▓ The Get the Data Service


The Get the Data service is implemented with the Express framework on Node.js and can be cloned from GitHub. It is a simple REST service with hard-coded responses. Its only purpose is to support unit testing of the business process described in this post.

Here is the app.js:



and this is the deployment descriptor in src/main/resources/META-INF/kie-deployment-descriptor.xml:
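The part of the descriptor that matters here is the registration of the REST work item handler. A sketch, with element names following the jBPM deployment descriptor schema (the no-argument constructor is an assumption):

```xml
<deployment-descriptor>
    <!-- sketch: only the work item handler registration is shown -->
    <work-item-handlers>
        <work-item-handler>
            <resolver>mvel</resolver>
            <identifier>new org.jbpm.process.workitem.rest.RESTWorkItemHandler()</identifier>
            <name>Rest</name>
        </work-item-handler>
    </work-item-handlers>
</deployment-descriptor>
```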


▓ The Email Service


This is the deployment descriptor of the Email service task:
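A sketch of the registration, with the SMTP parameters read from environment variables (the variable names are assumptions; the four-argument EmailWorkItemHandler constructor takes host, port, user, and password):

```xml
<deployment-descriptor>
    <work-item-handlers>
        <work-item-handler>
            <resolver>mvel</resolver>
            <!-- host, port, user, and password come from the environment -->
            <identifier>new org.jbpm.process.workitem.email.EmailWorkItemHandler(
                System.getenv("SMTP_HOST"), System.getenv("SMTP_PORT"),
                System.getenv("SMTP_USER"), System.getenv("SMTP_PASSWORD"))</identifier>
            <name>Email</name>
        </work-item-handler>
    </work-item-handlers>
</deployment-descriptor>
```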

The SMTP server parameters are passed in as environment variables. When trying it out, one can use an email test service such as Mailtrap.

▓ Considerations When Using Timers


By default, jBPM uses the EJB timer service to implement timers when deployed on a Java EE (Jakarta EE) server. EJB timers are not recommended for high-volume situations. Quartz is a better alternative, and it is also available in a Spring Boot deployment.

Another option is to use the SLA Due Date property of a node. A blog by Maciej Swiderski covers the SLA Due Date capability in jBPM.

Each capability, timers or SLA, has pros and cons and should be adopted depending on the specifics of your use case. Here, although the number of members may be large, the number of PHM triggers per member in a given time period is typically small, and the period of each timer is long (weeks), so the chance of many timers firing at the same instant is low.

▓ Forms


Data entry forms can be automatically generated for each human task and for each process. They are mostly useful for quickly testing a process execution during development. Typically, the production user interface of a human task is custom made. In this specific use case the actor of each task enters data using an existing application that passes the data over to jBPM through the REST API exposed by jBPM. Therefore you will not concern yourself with UI development.

▓ Event Listeners


The process is configured to use event listeners to trace process and rule information at runtime. If you want to take advantage of them, you will need to clone two projects and build two jar files to be installed in the lib directory of the KIE Server; otherwise, just unregister them in the process deployment descriptor.


▓ Demo Users


A shell script is provided in the directory src/main/sh that creates the users and groups needed to run a few scenarios with this business process. The script is intended to be used with JBoss WildFly.

▓ Conclusion


After reading this tutorial, and hopefully having built a jBPM project with the population health management business processes, you should have learned about:
  • Business process abstraction
  • Service tasks or work item handlers
  • REST API call from within a process
  • Sending email from within a process
  • Sending and catching signals
  • Timer based reminders and escalations
The next blog article will walk you through a couple of live scenarios.

Tuesday, April 23, 2019

Red Hat Decision Manager on Heroku

Introduction


Salesforce is a very popular CRM, and if you use it you have most likely developed quite sophisticated workflows on it. Most importantly, you may need to automate complex logic as well as allow non-technical users to configure this logic by themselves, without waiting for a programmer to implement it. If this is the case, business rules are definitely the right approach, and having business rules in Salesforce becomes a critical requirement.

What about using Drools, the popular open source rule engine? Or Red Hat Decision Manager, the supported version of Drools? Any third-party business rules engine will need to get input data from Salesforce through one of the available APIs. The issue is that there are limits on the number of such calls, as well as a cost.

There is a solution to this problem. Salesforce acquired Heroku in 2010. Heroku offers a cloud application development platform. If you are a Salesforce customer and develop applications on Heroku you can take advantage of an add-on called Heroku Connect. This add-on synchronizes Salesforce data with a Postgres database running on the Heroku platform without suffering the limitations or cost mentioned earlier. Clearly Salesforce wants to motivate developers to use Heroku as the platform for serious Salesforce applications.

The solution is now clear: run the KIE Server (the Drools business rules and business process runtime) on Heroku, and use the Heroku Connect Postgres database both to read the data to process and to write the output of business rules execution.

The question then is: how do you get the KIE Server to run on Heroku? And how can you run Red Hat Decision Manager on Heroku as a licensed Red Hat customer and still get support? Customers can create custom container images with a Red Hat product and still get a reasonable level of support for that product, as long as they follow certain rules when customizing the Red Hat official container image. The Red Hat official container image is intended to be deployed on Red Hat OpenShift, and that is the platform where it receives full support; Heroku is definitely not a supported platform. For Red Hat to grant you an exception and give reasonable commercial support in this case, it is critical that the amount of customization required to get the Red Hat Decision Manager KIE Server running on Heroku be absolutely minimal.

Bear in mind that the Heroku platform, compared with Red Hat OpenShift for example, has quite a few critical limitations.

Heroku applications are normally deployed and executed in a container technology called a dyno. This technology is not compatible with OCI container images such as the ones used by Docker or OpenShift; an OCI container runs on Heroku on top of a dyno. The problem is that a dyno cannot have a persistent volume. If a Heroku developer needs persistent storage, he or she has to use a Postgres database available on Heroku. This pretty much rules out deploying Business Central on Heroku, because this application needs a persistent volume. Business Central will have to run somewhere else, where a persistent volume is available, and deploy the KIE container generated from the business rules project to a KIE Server running on Heroku. In fact, one could simply add the business rules executable artifact to the KIE Server image during a custom build and then deploy the image with the KIE container already in place. To keep things as simple as possible, you will not do that in this exercise.

Another Heroku requirement is that the HTTP port the KIE Server listens on must be defined by the environment variable PORT. None of the other ports exposed by JBoss EAP can be exposed, because there is only one PORT variable available.

With these limitations in mind, you can now proceed with the exercise.

Of course, I cannot promise that this procedure will produce a Red Hat Decision Manager KIE Server deployment on Heroku that Red Hat will support. But with the HTTP port being the only modification, one can hope that an exception can be made.

Deployment Exercise


Log in to the Red Hat Container Registry with docker and pull the latest Red Hat Decision Manager KIE Server container image (version 7.3 at the time of this writing). Run the image and verify that it is working by requesting the server information through the KIE Server REST API. Make sure that you are using the port mapped to 8080 by docker and that you are authenticating to the server with the default credentials.


The Red Hat official container images are intended for deployment on OpenShift. The container CMD is /opt/eap/bin/openshift-launch.sh. If you log in to the container and view this script, you will see that it starts the JBoss EAP server with a configuration file called standalone-openshift.xml. Copy this JBoss EAP configuration file from the container to the local directory.


The HTTP port is bound in the socket-binding-group section in several places.


You just need to replace it with the environment variable PORT as expected by Heroku.
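The change is a one-line substitution wherever the http socket binding appears. The before line is the usual form in the EAP OpenShift configuration; ${env.PORT:8080} is standard JBoss EAP expression syntax for reading an environment variable with a default:

```xml
<!-- before: standalone-openshift.xml binds HTTP to the jboss.http.port property -->
<socket-binding name="http" port="${jboss.http.port:8080}"/>

<!-- after: Heroku injects the port to listen on in the PORT environment variable -->
<socket-binding name="http" port="${env.PORT:8080}"/>
```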


Now create a .dockerignore file with this content:
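A plausible .dockerignore for this build excludes everything except the modified configuration file (a sketch; the original content is not reproduced here):

```
*
!standalone-openshift.xml
```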


as well as a Dockerfile such as this:
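A minimal sketch of such a Dockerfile follows. The image tag and the credential variable names are assumptions; the only real customization is copying in the modified configuration file:

```dockerfile
# Start from the official Red Hat Decision Manager KIE Server image (tag assumed)
FROM registry.redhat.io/rhdm-7/rhdm73-kieserver-openshift:1.0

# Overwrite the configuration with the copy whose HTTP port reads env.PORT
COPY standalone-openshift.xml /opt/eap/standalone/configuration/standalone-openshift.xml

# Default KIE Server credentials; passing these as build arguments did not work,
# as noted in the text
ENV KIE_ADMIN_USER=adminUser \
    KIE_ADMIN_PWD=RedHat
```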


Setting the server user credentials in the Dockerfile should not be necessary, since it should be possible to pass them as build arguments, but that does not seem to work; more about this in a moment. Now log in to the Heroku CLI and then to the Heroku container registry. Create the Heroku app, then run the container:push command, which builds the image and pushes it to the registry. According to the documentation, one should be able to set environment variables in the image during the build with --arg VAR1=val1,VAR2=val2, but it did not work for me. Once the image is in the registry, the container:release command makes it available, and you can test that the KIE Server is indeed running on Heroku.