Posted about 6 years ago by [email protected] (Matteo Mortari)
 
We are always looking to improve the performance of the Drools DMN open source engine. We recently reviewed a DMN use case where the actual population of the Input Data nodes varied to some degree; this highlighted a suboptimal behavior of the engine, which we improved in recent releases. I would like to share our findings!

Benchmark development

As we started running a supporting benchmark for this use case, especially when investigating the scenario of large DMN models with sparsely populated input data nodes, we noticed some strange results: the flamegraph data highlighted a substantial performance hit when logging messages, which consumed very significant time in comparison to the application logic itself.
This flamegraph specifically highlights that a large portion of time is consumed by stacktrace synthesis, artificially induced by the logging framework. The correction, in this case, was to tune the logging configuration to avoid this problem; specifically, we disabled a feature of the logging framework which is very convenient during debugging activities, enabling you to quickly locate the original calling class and method: unfortunately this feature comes at the expense of synthesizing stacktraces, which originally contaminated the benchmark results. Lesson learned here: always check first whether non-functional requirements are actually masking the real issue!
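The post does not name the logging framework or the exact setting; as an illustration only, with Logback the equivalent mistake would be a layout pattern using caller-data conversion words, which require synthesizing a stacktrace for every log event:

    <!-- logback.xml (illustrative): caller-data conversion words such as
         %class and %method force a stacktrace to be synthesized per event -->
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
      <encoder>
        <!-- expensive: <pattern>%d %-5level %class.%method - %msg%n</pattern> -->
        <pattern>%d %-5level %logger{36} - %msg%n</pattern>
      </encoder>
    </appender>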
This was a necessary, propaedeutic step before proceeding to investigate the use case in more detail.

Improving performance
Moving on and focusing on the DMN optimizations, we developed a benchmark general enough to be representative, while still highlighting the use case that was presented to us. This benchmark consists of a DMN model with many (500) decision nodes to be evaluated. Another parameter controls how sparsely the input data nodes are populated for evaluation, ranging from a value of 1, where all inputs are populated, to 2, where only one out of two inputs is actually populated, and so on.
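The benchmark harness itself is not shown in the post; a minimal sketch of how such a sparseness parameter could drive input population through the Kie DMN API might look like the following (the input names and the runtime bootstrap are assumptions for illustration):

    // Sketch only: populates every Nth input of a DMN model and evaluates it.
    import org.kie.dmn.api.core.DMNContext;
    import org.kie.dmn.api.core.DMNModel;
    import org.kie.dmn.api.core.DMNResult;
    import org.kie.dmn.api.core.DMNRuntime;

    public class SparsenessBenchmark {
        // 'runtime' and 'model' are assumed to be created elsewhere,
        // e.g. from a KieContainer holding the 500-decision model.
        static DMNResult evaluate(DMNRuntime runtime, DMNModel model, int sparseness) {
            DMNContext context = runtime.newContext();
            for (int i = 0; i < 500; i++) {
                if (i % sparseness == 0) {         // sparseness 1 = all inputs,
                    context.set("input" + i, 47);  // 2 = every other input, etc.
                }
            }
            return runtime.evaluateAll(model, context);
        }
    }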
This specific benchmark proved to be instrumental in highlighting some potential improvements.
Setting the comparison baseline to Drools release 7.23.0.Final, the first optimization, implemented with DROOLS-4204, focused on improving context handling while evaluating FEEL expressions and offered a ~3x improvement; a further optimization, implemented with DROOLS-4266 and focusing on a specific case of decision table input clauses, demonstrated an additional ~2x improvement on top of DROOLS-4204. We also collected these measurements in the following graphs.
This graph highlights the compounding improvements in the case of sparseness factor equal to 1, where all inputs are populated; this was a very important result, as it represented the main, “happy path” scenario in the original use case. In other words, we achieved a ~6x improvement in comparison to running the same use case on 7.23.0.Final. The lesson I learned here is to always strive for this kind of compounding improvement when possible, as they really build on top of each other, for greater results! For completeness, we repeated the analysis with sparseness factor equal to 2 (1 out of every 2 inputs is actually populated) and 50 (1 out of every 50 inputs is actually populated), with the following measurements:
Results show that the optimizations were also significant for a sparseness factor equal to 2, but the improvements become less relevant as this factor grows; this is expected, as the impact of decision node evaluation on the overall execution becomes less relevant. For completeness, the analysis was also performed with another, already existing benchmark for a single decision table consisting of many rule rows:
Results show that these code changes, considered as a whole, still offered a relevant improvement, although clearly not of the same magnitude as for the original use case. This was another important check to ensure that these improvements were not overfitting to the specific use case.
Conclusions
Considering Drools release 7.23.0.Final as the baseline, and a reference benchmark consisting of a DMN model with many decision nodes to be evaluated, we implemented several optimizations that, once combined, demonstrated a total ~6x speed-up on that specific use case! I hope this was an interesting post highlighting some of the dimensions to look into in order to achieve better performance; let us know your thoughts and feedback. You can already benefit today from these Kie DMN open source engine improvements in the most recent releases of Drools!
      
Posted over 6 years ago by [email protected] (Edoardo Vacchi)
 
    
This is the second post of a series of updates on the Kogito initiative and our efforts to bring Drools to the cloud. In this post we delve into the details of rule units and show you why we are excited about them.

An All-Encompassing Execution Model for Rules
If you’ve been carefully scrutinising the Drools manual looking for new features at every recent release, you may have noticed that the term rule unit has been sitting there for a while, as an extremely experimental feature. In short, a rule unit is both a module for rules and a unit of execution—the reason why we are not calling them modules is to avoid confusion with JVM modules. In Kogito, we are revisiting and expanding upon our original prototype.  
A rule unit collects a set of rules together with a description of the working memory such rules act upon. The description of the working memory is written as a regular Java class with DataSource fields. Each data source represents a typed partition of the working memory, and different types of data sources exist, with different features. For instance, in the following example we use an append-only data source, called a data stream.
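The example class did not survive the feed extraction; a minimal sketch of such a working-memory description, assuming the Kogito rule unit API of the time (class, package and field names are illustrative):

    // Illustrative sketch of a rule unit's working-memory description.
    import org.kie.kogito.rules.DataSource;
    import org.kie.kogito.rules.DataStream;
    import org.kie.kogito.rules.RuleUnitData;

    public class MonitoringService implements RuleUnitData {
        // an append-only data source: values can be appended, never removed
        private final DataStream<Event> events = DataSource.createStream();

        public DataStream<Event> getEvents() {
            return events;
        }

        // 'Event' stands in for any plain domain class
        public static class Event {
            public int severity;
            public String message;
        }
    }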
Rules of a given rule unit are collected in DRL files with the unit declaration. Each rule in a unit has visibility over all the data sources that have been declared in the corresponding class. In fact, the class and the collection of DRL files of a unit form a whole: you can think of such a whole as a single class where fields are globals scoped to the current unit, and methods are rules. In fact, the use of fields supersedes the use of DRL globals.
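A hedged DRL sketch of a unit-scoped rule, matching over the events stream declared in the class above with OOPath syntax (the constraint values are invented):

    package com.example;
    unit MonitoringService;

    rule "HighSeverityEvent" when
        // OOPath expression over the 'events' data source of the unit
        $e : /events[ severity > 5 ]
    then
        System.out.println("high severity event: " + $e.message);
    end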
A rule unit is submitted for execution to a scheduler, and rule units may decide to yield their execution to other rule units, effectively putting them into execution. Rule units may also be put in a long-running state; in this case, other rule units may run concurrently at the same time, and because DataSources can be shared across units, units can be coordinated by exchanging messages.
In a certain way, rule units behave as “actors” exchanging messages. However, in a very distinctive way, rule units allow for much more complex chains of execution, which are proper to rule-based reasoning. Compare this with the actor model: pattern matches in Akka, for instance, are strictly over single messages (see the sketch below). This is unsurprising, because actors process one message at a time. In a rule engine, we are allowed to write several rules reacting upon the entire state of the working memory at execution time: this significantly departs from a pure actor model design, but at the same time it gives a great deal of flexibility in the way you may write the business logic of your application.
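The snippet quoted from Akka's manual did not survive the feed extraction; the following comparable example uses Akka's classic Java API (actor and message types are illustrative):

    // A classic Akka actor: each match clause reacts to exactly one message.
    import akka.actor.AbstractActor;

    public class PongActor extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                .match(String.class, msg ->
                    // the handler sees a single message, never the whole state
                    getSender().tell(msg + " pong", getSelf()))
                .build();
        }
    }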
Data Sources

It is worth spending a few words on data sources as well. The data source construct can be seen as both a partition and an abstraction over the traditional working memory. Different kinds of data sources will be available: full-featured data stores may support adding, removing and updating values, allowing for more traditional operations over the working memory; the more constrained, append-only data streams would be easier to integrate with external data sources and data sinks, such as Camel connectors. Such constraints would also be valuable to enable more advanced use cases, such as parallel, thread-safe execution and a persisted shared channel (e.g. Kafka) across the nodes of an OpenShift cluster, realizing a fully distributed rule engine.
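A sketch contrasting the two flavors; the method names follow the Kogito rule unit API as we understand it, so treat them as assumptions:

    import org.kie.kogito.rules.DataSource;
    import org.kie.kogito.rules.DataStore;
    import org.kie.kogito.rules.DataStream;

    public class DataSourceFlavors {
        public static void main(String[] args) {
            DataStore<String> store = DataSource.createStore();
            store.add("value");          // stores also allow update and removal

            DataStream<String> stream = DataSource.createStream();
            stream.append("event");      // streams are append-only
        }
    }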
Kogito: ergo Cloud

The parallel and distributed use cases are intriguing, but we need to get there with baby steps. However, this does not mean that the first steps won't be as exciting in their own way. For Kogito we want to stress the cloud-native, stateless use case, where control flow is externalized using processes and, with the power of Quarkus, we can compile this into super-fast native binaries. This is why in the next few weeks we will complete and release rule units for automated REST service implementation.
In this use case, the typed, Java-based declaration of a rule unit is automatically mapped to the signature of a REST endpoint: POSTing to the endpoint implies instantiating the unit, inserting data into the data sources, firing rules, and returning the response payload. The response is computed using a user-provided query. For instance, users may post events using the auto-generated /monitoring-service endpoint, with a query along the lines of the sketch below.
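The accompanying snippets were lost in the feed extraction; as an illustration only, the user-provided query could be a unit-scoped DRL query like the following (query and field names are assumptions):

    // in the MonitoringService unit's DRL
    query Warnings
        $e : /events[ severity > 5 ]
    end

A JSON payload POSTed to the endpoint would populate the unit's data sources before the rules fire, and the query result becomes the response body.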
The reply will be the result of the query.

Cloudy with a Chance of Rules
We have presented our vision for the next generation of our rule engine in Kogito and beyond. The stateless use case is only the first step towards what we think will be a truly innovative take on rule engines. In the following months we will work on delivering better support for scheduling and deploying units in parallel (local) and distributed (on OpenShift) settings, so stay tuned for more. In the meantime, we do want to hear from you about the direction we are taking.
The future of Drools is cloudy… and bright!
      
Posted over 6 years ago by [email protected] (Edoardo Vacchi)
 
    
The Kogito initiative is our pledge to bring our business automation suite to the cloud and the larger Kubernetes ecosystem. But what does this mean for our beloved rule engine, Drools? In this post we introduce modular rule bases using rule units: a feature that has been experimental for a while in Drools 7, but that will be instrumental for Kogito, where it will play a much bigger role. This is the first post of a series where we will give you an overview of this feature (read part 2).
Bringing Drools Further

Drools is our state-of-the-art, high-performance, feature-rich open source rule engine. People love it because it is a swiss-army knife for the many problems that can be solved using rule-based artificial intelligence. But as the computer programming landscape evolves, we need to think of ways to bring Drools further as well. As you may already know, Kogito is our effort to make Drools and jBPM really cloud-native and well-suited for serverless deployments: we are embracing the Quarkus framework and GraalVM’s native binary compilation for super-fast startup times and a low memory footprint; but we are not stopping there.
The way we want to drive Drools evolution further is twofold. On the one hand, we want to make our programming model easier to reason about, by providing better ways to define boundaries in a rule base with a better concept of module. On the other hand, the concept of modular programming dates back at least to the 1970s and to Parnas’ original seminal paper; needless to say, if our contribution stopped there, we would be bringing nothing new to the plate. In the last few years, computing has evolved, slowly but steadily embracing the multicore and distributed revolution; yet, to this day, many general-purpose programming languages do not really make it simple to write parallel or distributed programs. With a rule-based programming system we have the chance to propose something different: a rule engine that is great when stand-alone, but outstanding in the cloud.
Modular Rule Bases. As you already know, Drools provides a convenient way to partition sets of rules into knowledge bases. Such knowledge bases can be composed together, yielding larger sets of rules. When a knowledge base is instantiated (the so-called session), rules are put together in the same execution environment (the production memory), and values (the facts) are all inserted together in the same working memory.
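As a refresher, a minimal sketch of the traditional API (assuming the project defines a default session in its kmodule.xml):

    import org.kie.api.KieServices;
    import org.kie.api.runtime.KieContainer;
    import org.kie.api.runtime.KieSession;

    public class TraditionalSession {
        public static void main(String[] args) {
            // load the knowledge bases defined on the classpath
            KieContainer container = KieServices.Factory.get().getKieClasspathContainer();
            // one session = one shared production memory and working memory
            KieSession session = container.newKieSession();
            session.insert(new Object());  // every fact lands in the same working memory
            session.fireAllRules();
            session.dispose();
        }
    }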
This model is very simple and powerful, but in some senses it is also very limited. It is very simple because, as a user of the rule base, you just worry about your data: the values are inserted into the working memory, and the engine does its magic. It is very powerful because, as a rule author, you can rely upon the rules you have written to realize complex flows of reasoning, without worrying about how and when they will trigger.
At the same time, such an execution model lacks many of the principles that, over the years, we have learned are good programming practice. For instance, there is no proper notion of a module: it is not possible to perfectly isolate one rule from another, or to properly partition the working memory. As the rule base scales up in complexity, it may become harder to understand which rules trigger, and why. In some senses, it is as if you were programming in an odd world where proper encapsulation of state does not exist, as if years of programming language evolution had not happened.
Object-Oriented Programming. The term object-oriented programming has been overloaded over the years to mean a lot of different things; it has to do with inheritance, with encapsulation of state, with code reuse, and with polymorphism. These terms often get confused, but they are not really related: you can reuse code without inheritance, you can encapsulate state without objects, you can write polymorphic code without classes. Very recent imperative programming languages such as Go and Rust do not come with proper classes, yet they support a form of object-orientation; there is even a beautiful 2015 talk from C++’s dad, Bjarne Stroustrup, showing how his child supports object-orientation without inheritance.
 Alan Kay, who fathered the term in his Smalltalk days at Xerox, in his inspiring lecture at OOPSLA 1997 said «I made up the term "object-oriented", and I can tell you I did not have C++ in mind». In fact, the idea of objects that Alan Kay pioneered was more similar to the concept of actors and microservices. In proper object-oriented programming, objects encapsulate their internal state and expose their behavior by exchanging messages (usually called methods) with the external world.  
Today actor systems have seen a renaissance, message buses are very central to what we now call reactive programming, and microservices are almost taken for granted. So, we wondered: what would it mean for Drools to become a first-class citizen of this new programming landscape?

Kogito, ergo Cloud

In the next post we will see our take on rule-based, modular programming, using rule units. Rule units will provide an alternative to plain knowledge base composition and an extended model of execution. We believe that rule units will make room for a wider spectrum of use cases, including parallel and distributed architectures. Stay tuned to read how they fit in the Kogito story, and the exciting possibilities that they may open for the future of our automation platform.
 
 
  
      
 
Posted over 6 years ago by [email protected] (Mark Proctor)
 
WEBINAR

Title: Re-imagining business automation: Convergence of decisions, workflow, AI/ML, RPA—vision and futures
Time: June 20, 2019, 5:00 p.m. BST (UTC+1)
Registration: https://www.redhat.com/en/events/webinar/re-imagining-business-automation-convergence-decisions-workflow-aiml-rpa%E2%80%94vision-and-futures
      
 
 
Posted over 6 years ago by [email protected] (Edoardo Vacchi)
 
    
Image courtesy of Massimiliano Dessì
You can find the full source code for this blog post in the submarine-examples repository.

Different programming languages are better for different purposes. Imagine how hard it would be to query a database using an imperative language: luckily, we use SQL for that. Now, imagine how useless a rule engine would be if defining rules were not convenient! This is the reason why Drools comes with its own custom language, the DRL. The Drools Rule Language is a so-called domain-specific language, a special-purpose programming language specifically designed to make interaction with a rule engine easier. In particular, a rule is made of two main parts, the condition and the consequence.
The condition is a list of logic predicates, usually pattern matches, while the consequence is written using an imperative language, usually Java.
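The DRL snippet shown in the original post was lost in extraction; a representative example, mirroring the canDrink rule from the Submarine article later in this feed:

    rule R1 when
        $r : Result()                                 // condition: logic predicates
        $p : Person( age >= 18 )
    then
        $r.setValue( $p.getName() + " can drink" );   // consequence: imperative Java
    end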
An Abstract Rule Engine

Rules are what really make a rule engine; after all, that's what a rule engine does: processing rules. Thus, it might sound logical for the engine to be a bit entangled with the language for rule definitions. Our engine is no longer specially tied to the DRL; but it used to be. In the last year or so, we spent a lot of time unbundling the innards of the DRL from the guts of the Drools core. The result of this effort is what we called the Canonical Model: an abstract representation of the components that make up a rule engine, including rule definitions. Incidentally, this also paved the way for supporting GraalVM and the Quarkus framework; but our goal was also different: we wanted to abstract our engine from the rule language.
Internally, the DRL is now translated into the canonical representation; but, as we said previously, this canonical model is described using Java code. While this representation is not currently intended to be hand-coded, it is entirely possible to do so; a simple rewriting of the previous DRL rule is sketched below. Although the rule definition is now embedded in a Java “host” language, it still shows the main features of a DRL definition, namely the logic condition and the imperative consequence (introduced by the on...execute pair). In other words, this is a so-called embedded or internal domain-specific language. A small disclaimer applies: code of this kind works, but our translator takes extra steps for best performance, such as introducing indexes. In fact, one of the reasons why we do not intend this API for public consumption is that, currently, a naive rewrite like this may produce inefficient rules.
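The embedded snippet did not survive the feed extraction; a naive sketch in the style of the executable model API (mirroring the canDrink example from the Submarine article later in this feed), deliberately omitting the index specification the translator would normally add:

    var r = declarationOf(Result.class, "$r");
    var p = declarationOf(Person.class, "$p");
    var rule = rule("com.example", "R1").build(
        pattern(r),                                   // logic condition
        pattern(p).expr("e", x -> x.getAge() >= 18),
        on(p, r).execute(($p, $r) ->                  // imperative consequence
            $r.setValue($p.getName() + " can drink")));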
A Polyglot Automation Platform

As part of our journey experimenting with our programming model, we wanted to see whether it was feasible to interact with our engine using different programming languages. DRL aside, the canonical model rule definition API is pure Java. But GraalVM is not only a tool to generate native binaries: in fact, this is only one of the capabilities of this terrific project. GraalVM is, first and foremost, the one VM to rule them all: that is, a polyglot runtime, with first-class support for both JVM languages and many other dynamic programming languages, and with a state-of-the-art JIT compiler that easily matches or exceeds the industry standards. For instance, there is already support for R, Ruby, JavaScript and Python; and, compared to writing a JIT compiler from scratch, the Truffle framework makes it remarkably easy to write your own and fine-tune it to perfection.
GraalVM gave us a great occasion to show how easy it could be to make Drools polyglot and, above all, to play an awful practical joke on our beloved, hard-working, conference-speaking, JavaScript-hating, resident Java Champion and tech lead Mario! Enter drools.js:

And here's a picture of Mario screaming in fear at the monster we have created.
Jokes aside, this experiment is a window onto one of the many possible futures of our platform. The world of application development today is polyglot. We cannot ignore this, and we are trying to understand how to reach a wider audience with our technologies, be it our rule engine or our workflow orchestration engine; in fact, we are doing the same experiments with other parts of the platform, such as jBPM. jBPM provides its own DSL for workflow definition. Although this is, again, work in progress, it shows a lot of promise as well. Behold: jbpm.js!
Conclusion

The DRL has served its purpose for a very long time, and we already provide different ways to interact with our powerful engine, such as DMN and PMML; but power users will always want to reach for finer tuning and write their own rules. The canonical model API is still a work in progress and, above all, an internal API that is not intended for human consumption; but, if there is enough interest, we do plan to work further on a more convenient embedded DSL for rule definition. Through the power of GraalVM, we will be able to realize an embedded DSL that is just as writable in Java as in any other language that GraalVM supports. And this includes JavaScript; sorry Mario!
      
 
Posted over 6 years ago by [email protected] (Mario Fusco)
 
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” - Edsger W. Dijkstra

Rule-based artificial intelligence (AI) is often overlooked, possibly because people think it’s only useful in heavyweight enterprise software products. However, that’s not necessarily true. Simply put, a rule engine is just a piece of software that allows you to separate domain- and business-specific constraints from the main application flow. We are part of the team developing and maintaining Drools—the world’s most popular open source rule engine and part of Red Hat—and, in this article, we will describe how we are changing Drools to make it part of the cloud and serverless revolution.
Technical overview

Our main goal was to make the core of the rule engine lighter, isolated, easily portable across different platforms, and well-suited to run in a container. The software development landscape has changed a lot in the past 20 years. We are moving more and more toward a polyglot world, which is one reason why we are working to make our technology work across a lot of different platforms. This is also why we started looking into GraalVM, the new Oracle Labs polyglot virtual machine (VM) ecosystem, consisting of:
A polyglot VM runtime, an alternative to the Java virtual machine (JVM), with a just-in-time (JIT) compiler that improves the efficiency and speed of applications over traditional HotSpot. This is also the “proper” GraalVM.
A framework to write efficient dynamic programming languages (e.g., JavaScript, Python, and R) and to mix and match them (Truffle).
A tool to compile programs ahead-of-time (AOT) into a native executable.
Meanwhile at Red Hat, another team was already experimenting with GraalVM and native binary generation for application development. This effort has been realized in a new project you may have heard of called Quarkus. The Quarkus project is a best-of-breed Java stack that works on the good old JVM but is also especially tailored for GraalVM, native binary compilation, and cloud-native application development. GraalVM is an amazing tool, but it also comes with some (understandable) limitations. Thus, Quarkus is designed to integrate seamlessly with GraalVM and native image generation, as well as provide useful utilities to overcome any related limitations. In particular, Drools used to make extensive use of dynamic class generation, class-loading, and quite a bit of reflection. To produce fast, efficient, and small native executables, Graal performs aggressive inlining and dead-code elimination, and it operates under a closed-world assumption: that is, the compiler removes any references to classes and methods that are not statically reachable in the code. In other words, unrestricted reflective calls and dynamic class loading are a no-go. Although this may at first sound like a showstopper, here we will document in detail how we modified the core of Drools to overcome such limitations, and we will explain why such limitations are not evil and can even be liberating.

The Executable Model

In a rule engine, facts are inserted into a working memory. Rules describe actions to take when certain constraints over the facts that are inserted into the working memory become true. For instance, the sentence “when the sun goes down, turn on the lights” expresses a rule over the sun. The fact is that the sun is going down. The action is to turn on the lights. In a rule engine, we insert the “sun is going down” fact inside the working memory. When we fire the rules, the action of turning on the lights will execute. A rule definition has the form
constraints → consequence
The constraints part, also called the left-hand side of the rule, describes the constraints that activate the rule and make it ready to fire; the consequence part, also called the right-hand side of the rule, contains the action that the rule will take when it is fired. In Drools, a rule is written using the Drools Rule Language (in short, DRL), and it has the form:

    rule R1 when
        $r : Result()                                 // constraints
        $p : Person( age >= 18 )
    then
        $r.setValue( $p.getName() + " can drink" );   // consequence
    end

Constraints are written using a form of pattern matching over the data (Java objects) inserted into the working memory. Actions are basically a block of Java code with a few Drools-specific extensions. Historically, the DRL used to be a dynamic language that was interpreted at runtime by the Drools engine. In particular, the pattern matching syntax had a major drawback: it made extensive use of reflection, unless the engine detected that a constraint was “hot” enough for further optimization, that is, if it had been evaluated a certain number of times; in that case the engine would compile it into bytecode on the fly. About one year ago, for performance reasons, we decided to do away with runtime reflection and dynamic code generation and completed the implementation of what we called the Drools Executable Model, providing a pure Java-based representation of a rule set, together with a convenient Java DSL to programmatically define such a model. To give an idea of how this Java API looks, let’s consider again the simple Drools rule reported above. The rule will fire if the working memory contains any Result instance and any instance of Person where the age field is greater than or equal to 18. The consequence is to set the value of the Result object to a String saying that the person can drink. The equivalent rule expressed with the executable model API looks like the following (pretty-printed for readability):

    var r = declarationOf(Result.class, "$r");
    var p = declarationOf(Person.class, "$p");
    var rule = rule("com.example", "R1").build(
        pattern(r),
        pattern(p).expr("e", p -> p.getAge() >= 18,
            alphaIndexedBy(int.class, GREATER_OR_EQUAL, 1, this::getAge, 18),
            reactOn("age")),
        on(p, r).execute(($p, $r) ->
            $r.setValue($p.getName() + " can drink")));

As you can see, this representation is more verbose and harder to understand, partly because of the Java syntax, but mostly because it explicitly contains lots of details, such as the specification of how Drools should internally index a given constraint, which was implicit in the corresponding DRL. We did this on purpose because we wanted a totally explicit rule representation that did not require any convoluted inference or reflection sorcery. However, we knew it would be crazy to ask users to be aware of all such intricate details, so we wrote a compiler to translate DRL into the equivalent Java code. We achieved this using JavaParser, a really nice open source library that allows one to parse, modify, and generate any Java source code through a convenient API. In all honesty, when we designed and implemented the executable model, we didn’t have strictly GraalVM in mind. We simply wanted an intermediate and pure Java representation of the rule that could be efficiently interpreted and executed by the engine.
Yet, by completely avoiding reflection and dynamic code generation, the executable model was key to allowing us to support native binary generation with Graal. For instance, because the new model expresses all constraints as lambda predicates, we don’t need to optimize the constraint evaluators through bytecode generation and dynamic classloading, which are totally forbidden in native image generation. The design and implementation of the executable model taught us an important lesson in the process of making Drools compatible with Graal: any limitation can be overcome with a sufficient amount of code generation. We will further discuss this in the next section.

Overcoming other Graal limitations

Having a plain Java model of a Drools rule base was a very good starting point, but more work was needed to make our project compatible with native binary generation. The executable model makes reflection largely unnecessary; however, our engine still needs reflection for one last feature, called property reactivity. Our plan is to get rid of reflection altogether but, because the change is nontrivial, for now we resorted to a handy feature of the binary image compiler. This feature does support a form of reflection, provided that we declare upfront the classes we will need to reflect upon at runtime. This can be supplied by providing a JSON descriptor file to the compiler or, if you are using Quarkus, you can just annotate the domain classes. For instance, in the rule shown above, our domain classes would be Result and Person. Then we can write:

    [ {
        "name" : "org.drools.simple.project.Person",
        "allPublicMethods" : true
    }, {
        "name" : "org.drools.simple.project.Result",
        "allPublicMethods" : true
    } ]

Then, we can instruct the native binary compiler with the flag:

    -H:ReflectionConfigurationFiles=reflection.json

We segregated other redundant reflection trickery into a dynamic module and implemented an alternative static version of the same components that users can choose to import into their project. This approach is especially useful for binary image generation, but it has benefits for regular use cases as well. In particular, avoiding reflection and dynamic loading can result in faster startup time and an improved runtime. At startup time, Drools projects read an XML descriptor called the kmodule, where the user declaratively defines the configuration of the project. Usually, we parse this XML file and load it into memory, but our current XStream-based parser uses a lot of reflection; so, first, we can load the XML with an alternative strategy that avoids reflection. However, we can go further: if we can guarantee that the in-memory representation of the XML will never change across runs, and we can afford to run a quick code-generation phase before repackaging a project for deployment, then we can avoid loading the XML at each boot-up altogether. In fact, we are now able to translate the XML file into a class file that will be loaded at startup time, like any other hand-coded class. Here’s a comparison of the XML with a snippet of the generated code (again, pretty-printed for readability). The generated code is more verbose because it makes explicit all of the configuration defaults.
    <kmodule xmlns="http://www.drools.org/xsd/kmodule">
      <kbase name="simpleKB" eventProcessingMode="cloud"
             packages="org.drools.simple.project">
        <ksession name="simpleKS" default="true" type="stateful"
                  clockType="realtime"/>
      </kbase>
    </kmodule>

    var m = KieServices.get().newKieModuleModel();
    var kb = m.newKieBaseModel("simpleKB");
    kb.setEventProcessingMode(CLOUD);
    kb.addPackage("org.drools.simple.project");
    var ks = kb.newKieSessionModel("simpleKS");
    ks.setDefault(true);
    ks.setType(STATEFUL);
    ks.setClockType(ClockTypeOption.get("realtime"));
Another issue with startup time is dynamic classpath scanning. Drools supports alternative ways to take decisions other than DRL-based rules, such as decision tables, the Decision Model and Notation (DMN), or predictive models using the Predictive Model Markup Language (PMML). Such extensions are implemented as dynamically loadable modules that are hooked into the core engine by scanning the classpath at boot time. Although this is extremely flexible, it is not essential: even in this case, we can avoid runtime classpath scanning and provide static wiring of the required components, either by generating code at build time or by providing an explicit API for end users to hook components in manually. We resorted to providing a pre-built static module with a minimal core:

    private Map<Class<?>, Object> serviceMap = new HashMap<>();

    private void wireServices() {
        serviceMap.put(ServiceInterface.class,
                       Class.forName("org.drools.ServiceImpl").newInstance());
        // … more services here
    }

Note that, although here we are using Class.forName(), the compiler is smart enough to recognize the constant and substitute it with an actual constructor. Of course, it is possible to simplify this further by generating a chain of if statements. Finally, we tied everything together by getting rid of the last few pre-executable-model leftovers: the legacy Drools class-loader. This was the culprit behind the following apparently cryptic error message:

    Error: unsupported features in 2 methods
    Detailed message:
    Error: com.oracle.graal.pointsto.constraints.UnsupportedFeatureException:
    Unsupported method java.lang.ClassLoader.defineClass(String, byte[], int, int, ProtectionDomain)
    is reachable: The declaring class of this element has been substituted, but this element is not
    present in the substitution class
    To diagnose the issue, you can add the option --report-unsupported-elements-at-runtime.
    The unsupported element is then reported at run time when it is accessed the first time.
    Trace:
        at parsing org.drools.dynamic.common.DynamicComponentsSupplier$DefaultByteArrayClassLoader.defineClass(DynamicComponentsSupplier.java:49)
    Call path from entry point to org.drools.dynamic.common.DynamicComponentsSupplier$DefaultByteArrayClassLoader.defineClass(String, byte[], ProtectionDomain):

Really, however, the message is pretty clear: our custom class-loader is able to dynamically define a class, which is useful when you generate bytecode at runtime. But, if the codebase relies completely on the executable model, we can avoid this altogether, so we isolated the legacy class-loader into the dynamic module. This was the last step necessary to successfully generate a native image of our simple test project, and the results exceeded our expectations, thereby confirming that the time and effort we spent in this experiment were well invested. Indeed, executing the main class of our test case on a normal JVM takes 43 milliseconds with an occupation of 73MB of memory. The corresponding native image generated by Graal is timed at less than 1 millisecond and uses only 21MB of memory.

Integrating with Quarkus

Once we had a first version of Drools compatible with Graal native binary generation, the next natural step was to start leveraging the features provided by Quarkus and to try to create a simple web service with it. We noticed that Quarkus offers a different and simpler mechanism to let the compiler know that we need reflection on a specific class.
In fact, instead of having to declare this in a JSON file as before, you can annotate the class of your domain model as follows:

    @RegisterForReflection
    public class Person { … }

We also decided to go one small step further with our code generation machinery. In particular, we added one small interface to the Drools code:

    public interface KieRuntimeBuilder {
        KieSession newKieSession();
        KieSession newKieSession(String sessionName);
    }

so that when the Drools compiler creates the executable model from the DRL files, it also generates an implementation of this class. This implementation has the purpose of supplying a Drools session automatically configured with the rules and the parameters defined by the user. After that, we were ready to put both the dependency injection and REST support provided by Quarkus to work, and we developed a simple web service exercising the Drools runtime:

    @Path("/candrink/{name}/{age}")
    public class CanDrinkResource {

        @Inject
        KieRuntimeBuilder runtimeBuilder;

        @GET
        @Produces(MediaType.TEXT_PLAIN)
        public String canDrink( @PathParam("name") String name,
                                @PathParam("age") int age ) {
            KieSession ksession = runtimeBuilder.newKieSession();
            Result result = new Result();
            ksession.insert(result);
            ksession.insert(new Person( name, age ));
            ksession.fireAllRules();
            return result.toString();
        }
    }

The example is straightforward enough to not require any further explanation and is fully deployable as a microservice in an OpenShift cluster. Thanks to the extremely low startup time—due to the work we did on Drools and the low overhead of Quarkus—this microservice is fast enough to be deployable in a KNative cloud. You can find the full source code on GitHub.

Introducing Submarine

These days, rule engines are seldom a matter of discussion. This is because they just work. A rule engine is not necessarily antithetical to a cloud environment, but work might be needed to fit the new paradigm. This was the story of our journey. We started with courage and curiosity. In the next few months, we will push this work forward to become more than a simple prototype, to realize a complete suite of business automation tools ready for the cloud. The name of the initiative is Submarine, from the famous Dijkstra quote. So, sit tight, and get ready to dive in. This article was originally published on the Red Hat Developer blog here.
      
 
Posted almost 7 years ago by [email protected] (Mario Fusco)
 
Article originally posted here by Alex Porcelli. Five years ago, in November 2013, we released 6.0.0.Final, which included the first version of the Workbench. It has been a long journey so far, but today we’re announcing that we’re retiring the Workbench brand and officially adopting Business Central as the new brand!
Historical Context
The KIE group has been developing web tooling for more than a decade now. The first public release that shipped a browser-based tool was 5.0.1 (May 2009), but its R&D started way before that.
Tip
KIE stands for Knowledge Is Everything, and it is pronounced exactly like the word key in English: /kē/.
The first experiments with a web-based tool to manage a central repository for business rules, providing an intuitive user interface to create and package rules, started back in September 2006 as jbrms. After a couple of years, it was renamed to Guvnor, and this is the name that would become known as the official name of our first web tooling.
Figure 1. Guvnor Guided Rule Editor User Interface.
The next generation, which became known as Workbench, started its R&D in May 2012, with multiple goals and challenges that can be summarized in the following two major key points:
A modular, plug-in based, composable tooling, to avoid the single project/repo that Guvnor had become over the years
A Virtual File System based storage, to avoid getting stuck with a technology like JCR
Figure 2. Workbench Original NSEW Compass Drag-and-Drop
If you’re feeling nostalgic, you can play this video and go back to May 2013, a few months before the release. And if you have time and want to see how our freshly released second generation looked in practice, there is a full playlist.
Evolution
The Workbench has been distributed in two flavors: KIE Drools Workbench and KIE Workbench. Initially, the KIE Workbench was shipped with an embedded jBPM engine, which made the distributions significantly different. However, with the KIE Server release and the engine unification, the embedded jBPM engine was removed from the Workbench, and the differences between the two distros became just a matter of showing/hiding some user interface items.
It’s also clear that the current Workbench has very little in common with its original form. Over the years it has not only become more polished and stable, but its target audience has also evolved, from developer-focused to a collaborative platform between practitioners and developers.
Based on the above facts, and also looking for a more concise branding strategy, a decision was made: unify the distributions and re-brand them as Business Central!
Business Central
So what’s in the new name? The major positive impact is that we now have a single distribution and a single term to reference the KIE group web tooling, which also unifies the product and community terminology.
Here’s a quick walkthrough of the changes you’ll see in the new Business Central:
Profiles and Entitlements
By default, Business Central bootstraps with all features enabled, which includes Drools, OptaPlanner, and jBPM. However, for those who are not taking advantage of our jBPM engine, we provide a Profiles option in the settings that allows admins to adjust Business Central to display only the profile relevant to their needs.
The default profile can also be defined at startup using the org.kie.workbench.profile system property, with the following possible values (an example follows the list):
FULL
PLANNER_AND_RULES
FORCE_FULL
FORCE_PLANNER_AND_RULES
The main difference with the "FORCE_" values is that they also hide the settings configuration, forcing the chosen profile.
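For instance, on a WildFly-based installation the property could be passed at startup like this (the launch script name is an assumption depending on your distribution):

    ./standalone.sh -Dorg.kie.workbench.profile=PLANNER_AND_RULES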
Conclusion
After five years, the KIE group has decided that it was time to retire the Workbench brand. Our web tooling has evolved quite a lot, and the use of the word Workbench, a common term for developers, didn’t reflect its current state.
The consolidation and re-branding to Business Central aims to provide a clear message about its target audience with a concise communication strategy. If you’re interested in giving it a try, Business Central is available to download today!
      