A CEP Quiz: streams and relations

October 28, 2011

Last week, we were training some of Oracle’s top consultants and partners on CEP.

The training took place in Reading, UK (near London). Beautiful weather, contrary to what I was told is the usual English predicament for October.

At the end, we gave a quiz, which I am reproducing here:

Consider an input channel C1 of the event type E1, defined as having a single property called p1 of type Integer.

Examples of events of type E1 are { p1 = 1 } and { p1 = 2 }.

This input channel C1 is connected to a processor P, which is then connected to another (output) channel C2, whose output is sent to cache CACHE. Assume CACHE is keyed on p1.

Next, consider the following three CQL queries, which reside on processor P:

  1. SELECT * FROM C1[NOW]
  2. ISTREAM (SELECT * FROM C1[NOW])
  3. DSTREAM (SELECT * FROM C1[NOW])

Finally, send a single event e1 = { p1 = 1 } to C1.

The question is: what should be the content of the cache at the end for each one of these three queries?

To answer this, a couple of points need to be observed.

First, as I have mentioned in the past, CEP deals with two main concepts: that of a stream and that of a relation.

A stream is a container of events that is unbounded and supports only inserts. Why only inserts? Well, because there is no such thing as a stream delete; think about it: how could we delete an event that has already happened?!

A relation, on the other hand, is a container of events that is bounded at any point in time. A relation supports inserts, deletes, and updates.

Second, remember that a cache is treated like a table or, more precisely, like a relation, and therefore supports insert, delete, and update operations. If the query outputs a stream, then the events inserted into the stream are mapped to inserts (or puts) into the cache. If the query outputs a relation, then inserts into the relation are likewise mapped to puts into the cache; however, a delete on the relation becomes a remove of an entry in the cache.

Third, keep in mind that the operators ISTREAM (i.e. insert stream) and DSTREAM (i.e. delete stream) convert relations into streams. The former converts relation inserts into stream inserts, but ignores relation updates and deletes. The latter converts relation deletes into stream inserts, and ignores relation inserts and updates (in reality, things are a bit more complicated, but let’s ignore the details for the time being).

Fourth, we want the answer as if time has moved on from ‘now’. For all purposes, say we are measuring time in seconds, and that we sent event e1 at time 1 second and want the answer at time 2 seconds.

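To make the second and third points concrete, here is a small Java sketch of how relation deltas drive a keyed cache and which deltas ISTREAM and DSTREAM would turn into stream events. The names and the delta sequence are hypothetical, for illustration only; this is not the Oracle CEP API, and it does not give away the quiz answer.

import java.util.HashMap;
import java.util.Map;

public class RelationDeltaSketch {

    static final boolean INSERT = true, DELETE = false;

    public static void main(String[] args) {
        Map<Integer, Integer> cache = new HashMap<>();   // keyed on p1, like CACHE

        // An arbitrary sequence of deltas on a relation of E1 events { p1 = n }.
        boolean[] kinds  = { INSERT, INSERT, DELETE };
        int[]     values = { 1,      2,      1      };

        for (int i = 0; i < kinds.length; i++) {
            int p1 = values[i];
            if (kinds[i] == INSERT) {
                cache.put(p1, p1);        // relation insert -> cache put
                System.out.println("ISTREAM would emit { p1 = " + p1 + " }");
            } else {
                cache.remove(p1);         // relation delete -> cache remove
                System.out.println("DSTREAM would emit { p1 = " + p1 + " }");
            }
        }

        // Had the query emitted a stream (via ISTREAM or DSTREAM), the cache
        // would only ever see puts; only a relation output can remove entries.
        System.out.println("cache = " + cache);   // prints: cache = {2=2}
    }
}
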
I will post the answer in a follow-up post next week.

The crucial point of this exercise is to understand the difference between two of the most important CEP concepts, the STREAM and the RELATION, and how they relate to each other.


Oracle OpenWorld 2011

September 10, 2011

For those attending OOW this year, I will be co-presenting two sessions:

  • Complex Event Processing and Business Activity Monitoring Best Practices (Venue / Room: Marriott Marquis – Salon 3/4, Date and Time: 10/3/11, 12:30 – 13:30)

In this first session, we talk about how best to integrate CEP and BAM. BAM (Business Activity Monitoring) is a great fit for CEP, as it can serve as the CEP dashboard for visualizing and acting on complex events that are found to be business-relevant.

  • Using Real-Time GPS Data with Oracle Spatial and Oracle Complex Event Processing (Venue / Room: Marriott Marquis – Golden Gate C3, Date and Time: 10/3/11, 19:30 – 20:15)

In this second talk, we walk through our own and our customers’ real-world experience using GPS together with Oracle Spatial and CEP. The combination of CEP and Spatial has become an important trend and a very useful scenario.

If you are in San Francisco at that time, please stop by to chat.


Blending Space and Time in CEP

July 31, 2011

Space and time are two dimensions that are increasingly important in today’s online world. It is therefore no surprise that the blending of CEP and spatial is a natural one, and ever more important.

Recently at DEBS 2011, I presented our work on integrating Oracle Spatial with Oracle CEP, in which we seamlessly reference spatial functions and types in CQL (i.e. our event processing language). This allows us to implement geo-fencing and telematics within the real-time processing of CEP.

For example, consider the following query:

SELECT shopId, customerId
FROM Location-Stream [NOW] AS loc, Shop
WHERE contains@spatial(Shop.geometry, loc.point)

A few noteworthy points:

  • Location-Stream is a stream of events containing the customer’s location as a point, perhaps being emitted by a GPS.
  • Shop is a relation defining the physical location of shops as a geometry.
  • The contains predicate function tests whether a point (event) from the stream is contained by any of the geometries (rows) of the relation.

This single query selects an event in time (i.e. now) and joins it with a spatial table!

[Figure: joining a point stream with a geometry relation]

The join happens in memory, aided by an R-tree data structure, which is also provided by the spatial library or, as we call it, the spatial cartridge.

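Oracle’s spatial cartridge itself is not shown here, but the shape of such an R-tree-assisted join can be sketched with the open-source JTS library; the geometries below are, of course, made up for illustration:

import java.util.List;

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.Envelope;
import com.vividsolutions.jts.geom.Geometry;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;
import com.vividsolutions.jts.index.strtree.STRtree;

public class GeoFenceJoinSketch {
    public static void main(String[] args) {
        GeometryFactory gf = new GeometryFactory();

        // The "Shop" side of the join: geo-fences indexed in an R-tree.
        STRtree index = new STRtree();
        Geometry shop1 = gf.toGeometry(new Envelope(0, 10, 0, 10));
        Geometry shop2 = gf.toGeometry(new Envelope(20, 30, 0, 10));
        index.insert(shop1.getEnvelopeInternal(), shop1);
        index.insert(shop2.getEnvelopeInternal(), shop2);

        // A location event from the stream: the customer's current point.
        Point loc = gf.createPoint(new Coordinate(5, 5));

        // The join: the index prunes candidates cheaply, and contains()
        // performs the exact test, much like contains@spatial in the query.
        List<?> candidates = index.query(loc.getEnvelopeInternal());
        for (Object candidate : candidates) {
            Geometry fence = (Geometry) candidate;
            if (fence.contains(loc)) {
                System.out.println("customer is inside " + fence);
            }
        }
    }
}
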
Further, as detailed in the presentation, CQL accomplishes this in a pluggable form using links and cartridges, but that is the subject of a future post…


Event Processing Patterns at DEBS 2011

July 13, 2011

This week, I co-presented a tutorial on event processing design patterns and their mapping to the EPTS reference architecture at DEBS 2011. For this presentation, I had the pleasure to work together with Paul Vincent (TIBCO), Adrian Paschke (Freie Universitaet Berlin), and Catherine Moxey (IBM).

Event processing design patterns are an interesting and new topic, one which I have talked about in the past. However, we have added an interesting aspect: the illustration of the event processing patterns using different event processing languages. Specifically, we showed how to realize filtering, aggregation, enrichment, and routing using both a stream-based EPL and a rule-based (i.e. logic-programming) EPL.

This not only made it easier to understand and learn these patterns, but also in my opinion showed how both approaches are mostly similar in their fundamentals and achieve similar results.

For example, there is a trivial and obvious mapping between a stream event and a rule’s fact. Likewise, the specification of predicate conditions (e.g. a < b and c = d) is practically identical in both.

Nonetheless, there are a few areas where one approach is a better fit. For example, aggregation is typically easier to express in the stream-based idiom, whereas semantic reasoning on events is best expressed in the logic-programming style, as I learned using Prova.

The intent is for this work ultimately to be published as a book, and with that in mind we continue to extend it with additional event processing patterns.

We welcome feedback, particularly around new event processing patterns to be documented and included in this project.


Concurrency at the EPN

January 3, 2011

Opher, from IBM, recently posted an interesting article explaining the need for an EPN.

As I have posted in the past, an Event Processing Network (EPN) is a directed graph that specifies the flow of events from and to event processing agents (EPAs) or, for short, processors.

Opher’s posting raised the valid question of why we need an EPN at all; couldn’t we instead just let all processors receive all events and drop those that are not of interest?

He pointed out two advantages of using an EPN: firstly, it improves usability; secondly, it improves efficiency.

I believe there is a third advantage: the EPN allows one to specify concurrency.

For example, the following EPN specifies three sequential processors A, B, and C. It is clear from the graph that processor C only commences processing its events after B has finished, which likewise only processes its events after they have been processed by A.

[Figure: processors connected in sequence, A → B → C]

Conversely, in the following EPN, processors B and C execute in parallel, though only after the events have been processed by A.

[Figure: processor A fanning out to processors B and C in parallel]

Events are concurrent by nature; therefore, being able to specify concurrency is a very important aspect of designing a CEP system. Surely, there are cases where the concurrency model can be inferred from the queries (i.e. rules) themselves by looking at their dependencies; however, that is not always the case, or rather, that may not in itself be enough.

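As a rough illustration in plain Java (hypothetical processors, not a CEP product API), the parallel shape can be wired with one thread per processor and queues for the graph’s edges:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class EpnConcurrencySketch {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        BlockingQueue<String> toB = new LinkedBlockingQueue<>();
        BlockingQueue<String> toC = new LinkedBlockingQueue<>();

        pool.submit(() -> {                       // processor A
            for (int i = 1; i <= 3; i++) {
                String event = "e" + i;           // A's processing result
                toB.add(event);                   // fan out to B...
                toC.add(event);                   // ...and to C
            }
            return null;
        });
        pool.submit(() -> {                       // processor B, concurrent with C
            for (int i = 0; i < 3; i++) System.out.println("B processed " + toB.take());
            return null;
        });
        pool.submit(() -> {                       // processor C, concurrent with B
            for (int i = 0; i < 3; i++) System.out.println("C processed " + toC.take());
            return null;
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
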
By the way, do you see any resemblance between an EPN and a Petri net? This is not merely coincidental, but that is the subject of a later posting.


Hadoop’s Programming Model

December 12, 2010

Hadoop is a Java implementation of Map-Reduce. Map-Reduce is a software architecture used to process large amounts of data, also known as “big data”, in a distributed fashion. It is based upon the idea of mapping data items into key and value pairs, which are grouped by key and reduced into a single value. From a service perspective, Hadoop allows an application to map, group, and reduce data across a distributed cloud of machines, thus permitting applications to process an enormous amount of data.

A common Hadoop application is the processing of data located in web sites. For example, let’s consider an application that counts the number of occurrences of a word in web pages. In other words, if we had 10 web pages, where each uses the word “hello” twice, then we expect the result to include the key and value pair: {“hello” -> 20}. This word counting application can be easily hosted in Hadoop by using Hadoop’s map and reduce services in the following form:

  1. Generate a map of word token to word occurrence for each word present in the input web pages. For example, if a page includes the word “hello” only once, we should generate the map entry {“hello” -> 1}.
  2. Reduce all maps that have been grouped together by Hadoop with the same key into a single map entry, where the key is the word token and the value is the sum of all values in that group. In other words, Hadoop collects all maps that have the same key, that is, the same word token, and then groups them together, providing the application with their values. The application is then responsible for reducing all values into a single item. For example, if step one generated the entries {“hello” -> 1} and {“hello” -> 2}, then we reduce these to a single entry {“hello” -> 3}.

Following, we have a walk-through of a simple scenario:

  • Load each web page as input.

    web-page-1: “first web-page”

    web-page-2: “and the second and final web-page”

  • Map each input (i.e. page) into a collection of (word, occurrences) pairs.

    {(first, 1), (web-page, 1)},

    {(and, 2), (the, 1), (second, 1), (final, 1), (web-page, 1)}

  • Group all pairs by ‘word’. Thus, the output will be collections in which all member pairs have the same ‘word’.

    {(first, 1)},

    {(web-page, 1), (web-page, 1)},

    {(and, 2)},

    {(the, 1)},

    {(second, 1)},

    {(final, 1)}

  • For each group, reduce to a single pair by summing together all word occurrences.

    {(first, 1)},

    {(web-page, 2)},

    {(and, 2)},

    {(the, 1)},

    {(second, 1)},

    {(final, 1)}

  • Store each pair.

We have described a word counting Hadoop application; the next task is to implement it using Hadoop’s programming model. Hadoop provides two basic programming models. The first is a collection of Java classes, centered on the Mapper and Reducer interfaces. The application extends a base class called MapReduceBase and implements the Mapper and Reducer interfaces, specifying the data types of the input and output data. The application then registers its Mapper and Reducer classes with a Hadoop job, together with the distributed location of the input and output, and fires it away into the framework. The framework takes care of reading the data from the input location, calls back the application’s Mapper and Reducer classes when needed in a concurrent and distributed fashion, and writes the result to the output location.

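For illustration, here is a condensed sketch of the classic WordCount example written against that interface style (the org.apache.hadoop.mapred API of the time); treat it as a sketch rather than production code:

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class WordCount {

    // Step 1 (map): emit {word -> 1} for every token of the input line.
    public static class WordCountMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                out.collect(word, ONE);
            }
        }
    }

    // Step 2 (reduce): Hadoop has grouped the pairs by word; sum the counts.
    public static class WordCountReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> out, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) sum += values.next().get();
            out.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws IOException {
        JobConf job = new JobConf(WordCount.class);
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));   // distributed input location
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // distributed output location
        JobClient.runJob(job);                                   // fire it away into the framework
    }
}
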
The second option is to use a domain-specific language called Pig. Pig defines keywords such as FOREACH, GROUP, and GENERATE, which fit naturally onto the map, group, and reduce actions. Using Pig, a developer can write a Hadoop application in a matter of a few lines of code, almost as if writing a SQL query, although Pig is more imperative in style, rather than declarative like SQL.

map_result = FOREACH webpage GENERATE FLATTEN(count_word_occurrences(*));
key_groups = GROUP map_result BY $0;
output = FOREACH key_groups GENERATE sum_word_occurrences(*);

Hadoop is configured through XML configuration files. A good part of Hadoop deals with distributing jobs across a distributed file system; hence a large aspect of Hadoop’s configuration relates to configuring the servers in the processing cloud.

Hadoop is an excellent example of a newly created application framework targeted at an emergent problem presented by the web, where we need to deal with mammoth amounts of data in a soft real-time fashion. As new problems arise, they will be accompanied by new solutions, some of which will certainly take the form of new development platforms, as Hadoop does.


Event Processing Reference Architecture at DEBS 2010

July 29, 2010

Recently, the EPTS architecture working group, which I am part of, presented its reference architecture for event processing at DEBS 2010, which took place in Cambridge on July 12th. The presentation can be found on SlideShare.

The presentation first highlights general concepts around event processing and reference architecture models, the latter based upon IEEE. This is needed so that we can normalize the architectures of the different vendors and players into a cohesive set. Then, individual architectures were presented by the University of Trento (Themis Palpanas), TIBCO (Paul Vincent), Oracle (myself), and IBM (Catherine Moxey).

Following is the functional view of Oracle’s EP reference architecture:

[Figure: functional view of Oracle’s EP reference architecture]

The pattern for each architecture presentation is to describe different views of the system, in particular a conceptual view, a logical view, a functional view, and a deployment view. Finally, a common event-processing use-case, specifically the Fast Flower Delivery use-case, was selected to be mapped to each architecture, thus showing how each architecture models and solves the same problem.

Having presented the different architectures, we then consolidated them all into a single reference model, which becomes the EPTS reference architecture for Event Processing.

What are the next steps?

We need to select further event processing use-cases and continue applying them to the reference architecture, hopefully fine-tuning and expanding it. In particular, I feel we should tackle some distributed CEP scenarios, in an attempt to improve our deployment models and validate the logical and functional views.

Furthermore, I should mention that at Oracle we are also working on a general EDA architecture that collaborates with SOA. More on this subject later.


New Point Release for Oracle CEP 11gR1

May 6, 2010

This week, Oracle announced the release of Oracle CEP 11gR1 11.1.1.3.

Even though it is a point release, there are noteworthy improvements and features:

Integration of CQL with Java

CQL (like any other event processing language) allows the authoring of event processing applications at a higher level of abstraction. This makes it less suitable for low-level tasks, such as String manipulation and other programming-in-the-small problems, and it lacks the rich libraries of general-purpose programming languages (e.g. Java), which have been built up over many years of usage.

In this new release of Oracle CEP, we solve this problem by fully integrating the Java programming language into CQL. This is done at the type-system level, rather than through user-defined functions or call-outs, allowing the usage of Java classes (e.g. constructors, methods, fields) directly in CQL in a blended form.

[Figure: a CQL query that instantiates and uses the Java class Date]

In this example, we make use of the Java class Date by invoking its constructor, and then we invoke the instance method toString() on the new object.

The JDK has several useful utility classes, such as Date, String, and the regular-expression support, making it a natural complement to CQL.

Integration of CQL and Spatial

Location tracking and CEP go hand-in-hand. One example of a spatial-related CEP application is automobile traffic monitoring, where the automobile location is received as a continuous event stream.

Oracle CEP now supports the direct usage of spatial types (e.g. Geometry) and spatial functions in CQL, as shown by the next example, which verifies whether “the current location of a bus is contained within a pre-determined arrival location”.

[Figure: a CQL query using spatial types and functions]

One very important aspect of this integration is that indexing of the spatial types (e.g. Geometry) is also handled in the appropriate form. In other words, not only is a user able to leverage the spatial package, but OCEP also takes care of using the right indexing mechanism for the spatial data, such as an R-tree instead of a hash-based index.

High-Availability Adapters

CEP applications are characterized by their quick response time. This applies to highly available CEP applications as well; hence a common approach for achieving high availability in CEP systems is to use an active/active architecture.

In the previous release of OCEP, several APIs were made available for OCEP developers to create their own active/active HA CEP solutions.

[Figure: an HA OCEP application]

In this new release, we take a step further and provide several built-in adapters to aid in the creation of HA OCEP applications. Amongst these, there are HA adapters that synchronize time in the upstream portions of the EPN and synchronize events in the downstream portions of the EPN, as illustrated in the previous figure.

Much More…

There is much more; please check the documentation for the full set of features and their details, but here are other examples:

  • Visualizer’s Event Injection and Trace, which allows a user to easily and dynamically send events into, and receive events from, a running application without having to write any code
  • Management of the revision history of an OCEP configuration
  • Deployment of OCEP application libraries, facilitating re-use of adapters, JDBC drivers, and event-types
  • Support for a WITHIN clause in CQL’s pattern matching, to limit the amount of time to wait for a pattern to be detected
  • Support for aliases in CQL, facilitating the creation and management of complex queries
  • Support for TABLE functions in CQL, allowing a query to invoke a function that returns a full set of rows (i.e. a table)


Dealing with different timing models in CEP

March 20, 2010

Time can be a subtle thing to understand in CEP systems.

Following, we have a scenario that yields different results depending on how one interprets time.

First, let’s define the use-case:

Consider a stream of events arriving at a rate of one event every 500 milliseconds. Each event contains a single property named value, of type int. We want to calculate the average of this value property for the last 1 second of events. Note how simple the use-case is.

Is the specification of this use-case complete? Not yet: so far we have described the input and the event processing (EP) function, but we have not specified when to output. Let’s do so: we want to output every time the calculated average value changes.

As an illustration, the following CQL implements this EP function:

ISTREAM( SELECT AVG(value) AS average FROM stream [RANGE 1 second] )

The final question is: how should we interpret time? Say we let the CEP system timestamp the events as they arrive, using the wall-clock (i.e. CPU clock) time. We shall call this the system-timestamped timing model.

Table 1 shows the output of this use-case for a set of input events when applying the system-timestamped model:

[Table 1: input events and corresponding output under the system-timestamped model]

What is particularly interesting in this scenario is the output event o4. A CEP system that supports the system-timestamped model can progress time as the wall clock progresses. Say that our system has a heartbeat of 300 milliseconds. This means that at time 1300 milliseconds (i.e. i3 + heartbeat) the CEP system is able to automatically update the stream window by expiring the event i1. Note that this only happens when the stream window is full and thus events can be expired.

Next, let’s assume that time is defined by the application itself, which is done by including a timestamp property in the event. Let’s look at what happens when we input the same set of events at exactly the same times as if we were using the wall-clock time:

[Table 2: input events and corresponding output under the application-timestamped model]

Interestingly enough, we now only get four output events; that is, event o4 is missing. When time is defined by the application, the CEP system itself does not know how to progress time. In other words, even though the wall clock may have progressed several minutes, the application time may not have changed at all. This means that the CEP system can only determine that time has moved on when it receives a new input event with an updated timestamp property. Hence we don’t get a situation like the previous case, where the CEP system itself was able to expire events automatically.

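A small Java sketch may help; it is illustrative only (the names are made up), showing a 1-second range window whose expiration is driven by event arrival, plus a heartbeat call that only the system-timestamped model can legitimately make:

import java.util.ArrayDeque;

public class TimingModelSketch {

    static final long RANGE_MS = 1000;                             // [RANGE 1 second]
    private final ArrayDeque<long[]> window = new ArrayDeque<>();  // {timestamp, value}
    private Double lastAverage = null;

    void onEvent(long ts, long value) {
        window.addLast(new long[] { ts, value });
        advanceTo(ts);     // application time: only events move time forward
    }

    // In the system-timestamped model the engine may also call this on its
    // own heartbeat; in the application-timestamped model it cannot, because
    // only the arrival of a new event reveals that time has moved on.
    void advanceTo(long now) {
        while (!window.isEmpty() && window.peekFirst()[0] <= now - RANGE_MS)
            window.removeFirst();                                  // expire old events
        double avg = window.stream().mapToLong(e -> e[1]).average().orElse(0.0);
        if (lastAverage == null || avg != lastAverage) {           // output on change
            System.out.println("t=" + now + "ms average=" + avg);
            lastAverage = avg;
        }
    }

    public static void main(String[] args) {
        TimingModelSketch query = new TimingModelSketch();
        query.onEvent(500, 10);
        query.onEvent(1000, 20);
        query.onEvent(1500, 30);   // expires the event at t=500ms
        query.advanceTo(2100);     // heartbeat: legal only with system time
    }
}
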
In summary, in this simple example we went through two different timing models: system-timestamped and application-timestamped. Timing models are very important in CEP, as they allow great flexibility; however, you must be aware of the caveats.


Logic Programming (LP) Extensions for CEP

October 30, 2009

Logic programming, as supported by inference engines, has recently been extended to support the development of Complex Event Processing (CEP) applications. One such example is the product Drools Fusion.

Let’s investigate the logic programming extensions needed for CEP by grouping them into functional groups:

1. Event definition

The first logical step in supporting CEP is to be able to annotate an object as an event. If one considers the definition of an event as an “observable thing that happens”, a natural mapping in logic programming is a fact; hence events can be modeled as a type of fact carrying additional metadata, such as an expiration and a timestamp. The latter is crucial for supporting temporal constraints, as we shall see later.

Finally, it is important that the inference engine understands the concept of a clock, so that it can determine the current time. This is particularly important when supporting the idea of non-events.

2. Input handling

CEP applications are flooded with events; hence there is a need to limit the events that make it into the working memory of the inference engine.

Drools Fusion provides two facilities for doing this: the ability to partition the input set, and the definition of windows of events.

To partition the input set, the developer can define entry points, which function as named channels, providing an entity that can receive and send events. This is done with the following syntax:

Event from entry-point Channel-Name

For example, the following clause informs the engine that we are interested in an event named Stock coming from the channel Exchange-Market:

Stock() from entry-point “Exchange-Market”

Generally, the semantics of a channel are similar to those of a JMS queue; that is, an event is removed from the channel after a client has consumed it. However, due to the nature of inference engines, this is not the case for entry points: the event continues to exist until it has been retracted from the working memory, either explicitly by the developer, or implicitly through some other facility, such as the @expires metadata associated with the event definition, or the window function, as we shall see next.

Drools Fusion supports the definition of time-based and length-based sliding windows:

Event over window:time(time-argument)
Event over window:length(number-argument)

A window defines a workable sub-set of a possibly infinite set of events. A time-based window is defined in terms of time, and a length-based window in terms of a number of events. A sliding window moves on each progression of time for time-based windows, or on each new event for length-based windows. For example, the following clause informs the engine that we are interested only in stocks that happened in the last 1 minute:

Stock() over window:time(1m)

Hence, one can use windows to control the retraction of events from the working memory. However, there is some model impedance when compared to other CEP models. As logic programming is centered on symbols (e.g. variables, functions) under a single namespace, so to speak, the retraction of an event happens across the inference engine’s entire working memory; that is, it is not limited to an entry point, or channel, which may have been the desired effect. For example, the use-case could be to have different windows for different exchange markets, each exchange market being defined as a separate channel.

Sliding windows are just one of the common patterns for window functions. Other window functions are:

  • Time-based and length-based batching windows: in this case the progression of the window happens in batches of time or events.

  • Partitioned windows: in this case sub-windows are created for each partition of the input events, as established by some partitioning attribute of the events.

  • User-defined windows: these are arbitrary windows plugged in by the developer.

Does logic programming have any model impedance towards these window functions? This remains an open research question; however, intuitively, batching windows should not prove to be a problem. The other two types are less clear.

3. Event matching

In logic programming, event matching relates to the unification process or, in simpler terms, the “if condition” of the rules. Events are ground terms (facts); hence inference engines already have a native and powerful mechanism for matching events through first-order logic predicates, which include the (non-exhaustive) list of operators: and, or, exists, not, for-all, collect, accumulate, etc.

In particular, collect and accumulate allow for the creation of complex events from simple events, which is one of the primary definitions of CEP.

Furthermore, the temporal operators defined by Allen [2] have been added: after, before, coincides, during, finishes, includes, meets, overlaps, and starts.

For example, the following clause matches a FireAlarm event that occurs after a SmokeDetection event:

FireAlarm(this after SmokeDetection())

The support for these operators is based upon the fact that each event is tagged with a timestamp, which can either be set by the event producer (external) or by the inference engine itself (internal).

Also, as the presence of operators such as coincides and during should make clear, the time domain is defined as interval-based, rather than point-in-time. The advantages of an interval-based approach are well explained in [2]; however, it is worth noticing that most other CEP solutions are point-in-time based.

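These interval operators can be stated precisely. Here is a short Java sketch of a few of Allen’s relations over simple long-valued intervals; this is a simplified rendition for illustration, not Drools Fusion’s implementation:

public class Interval {
    final long start, end;   // assume start < end

    Interval(long start, long end) { this.start = start; this.end = end; }

    boolean before(Interval y)    { return end < y.start; }
    boolean after(Interval y)     { return y.end < start; }
    boolean meets(Interval y)     { return end == y.start; }
    boolean overlaps(Interval y)  { return start < y.start && y.start < end && end < y.end; }
    boolean during(Interval y)    { return y.start < start && end < y.end; }
    boolean starts(Interval y)    { return start == y.start && end < y.end; }
    boolean finishes(Interval y)  { return end == y.end && y.start < start; }
    boolean coincides(Interval y) { return start == y.start && end == y.end; }

    public static void main(String[] args) {
        Interval smoke = new Interval(10, 20);
        Interval alarm = new Interval(25, 40);
        System.out.println(alarm.after(smoke));    // true: the FireAlarm example above
        System.out.println(smoke.overlaps(alarm)); // false: a gap separates them
    }
}
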
An important matching function is correlation (i.e. the join operator). In logic programming, a join is achieved through the unification of terms on common variables [6]. This process has been the subject of numerous research efforts and optimizations; in particular, most modern inference engines are implemented using the Rete approach [9]. However, at the time of writing, the author has not been able to find relevant work on the performance of joins in inference engines when subjected to the event volumes and latency requirements of CEP.

4. External data integration

An example of external data integration is to join a stream of customer sales transactions with a remote database containing the customer profiles, with the intent of verifying the customer’s risk level for a sales transaction.

In the context of logic programming, data takes the role of facts; hence integration with external data relates to being able to work with a remote working memory in a seamless and efficient manner.

However, the common practice for supporting such a scenario is to “copy” the data over to the local working memory of the inference engine that is handling the events. In other words, the external data is inserted into the local inference engine, which then allows the enrichment to proceed normally.

This approach provides good performance; however, it has the following drawbacks:

  • It is only applicable when the size of the external data is reasonably small; it is not viable to copy entire RDBMS tables into the working memory.

  • Data integrity is lost, as the data is duplicated and no longer in sync with the original. For example, the original data can change, and the inference engine would be working with stale data until the change gets propagated.

An alternative approach is to use associative memory, or tuple spaces [5]. Nevertheless, this also assumes that the external data is part of the overall system, even though it is distributed.

The underlying issue is to realize that the external data is owned by external systems; therefore what is needed is a native bridge in the inference engine, whereby it can seamlessly convert between the logic programming paradigm and the external system’s model. An example of this approach is Java programs using the JDBC API [12] to integrate with external database systems.

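A minimal sketch of that bridge idea follows, assuming a JDBC driver on the classpath; the H2 in-memory database and the schema below are purely illustrative. Instead of copying the customer table into working memory, the profile is looked up on demand per event, so the data stays owned by, and consistent with, the external system:

import java.sql.*;

public class EnrichmentBridge {

    // Look up the risk level for one customer, on demand, per event.
    static String riskLevel(Connection conn, long customerId) throws SQLException {
        String sql = "SELECT risk_level FROM customer_profile WHERE customer_id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : "UNKNOWN";
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:crm", "sa", "")) {
            // Set up an example "external" table (illustrative only).
            try (Statement s = conn.createStatement()) {
                s.execute("CREATE TABLE customer_profile(customer_id BIGINT, risk_level VARCHAR(10))");
                s.execute("INSERT INTO customer_profile VALUES (42, 'LOW')");
            }
            // A sales event arrives carrying customerId = 42; enrich it on demand.
            System.out.println("risk = " + riskLevel(conn, 42));
        }
    }
}
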
5. Output handling

Conversely to input handling, output handling relates to how to convert the RHS (i.e. the right-hand side of the if-then statement) back into events or, more properly, into streams that flow out through output channels.

Before we address this, it is important to ask why this is needed, as it is not directly clear from our requirements. The reason is that a CEP application is a component of a larger event processing network (EPN); that is, a CEP application plays the role of an event processing agent (EPA) node that receives events from upstream nodes in the network and whose output must be sent to the nodes downstream of it. Please see reference [8] for the definition of EPNs and EPAs.

This larger context entails that the CEP application must be able to send output that conforms to the expected semantics, as defined by the EPN, to facilitate communication.

These semantics, which are similar to those of the input events, are presented here informally:

  • Events must be able to be published to different channels. Within each channel, events must be ordered in time, forming a stream.

Getting back to our original question: how does one generate a stream of events out of the RHS, that is, out of the facts that exist in the working memory?

Consider that the facts (i.e. objects) in the working memory have no ordering constraints; that is, they are not ordered in time by any means. Further, we may be interested in different types of conversions; for example, we may want to notify not only when a new object is inserted into the working memory, but also when an object is removed from it.

In general, there is no first-class support for publishing time-ordered events out of an inference engine.

One approach is to support a new send(Object event, String entryPoint) verb in the action clause, which guarantees the time-ordering restrictions.

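Here is a sketch of what such a send verb could do under the covers: buffer each channel’s output and release events in timestamp order, so that downstream EPN nodes see a proper stream. The API is hypothetical, for illustration only:

import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class OrderedOutput {

    static class Event {
        final long timestamp;
        final String payload;
        Event(long timestamp, String payload) { this.timestamp = timestamp; this.payload = payload; }
    }

    // One timestamp-ordered buffer per output channel (entry point).
    private final Map<String, PriorityQueue<Event>> channels = new HashMap<>();

    void send(Event event, String entryPoint) {
        channels.computeIfAbsent(entryPoint,
                k -> new PriorityQueue<>(Comparator.comparingLong((Event e) -> e.timestamp)))
                .add(event);
    }

    // Release everything at or before the given time, in timestamp order.
    void flush(String entryPoint, long upToTime) {
        PriorityQueue<Event> q = channels.get(entryPoint);
        while (q != null && !q.isEmpty() && q.peek().timestamp <= upToTime)
            System.out.println(entryPoint + " <- " + q.poll().payload);
    }

    public static void main(String[] args) {
        OrderedOutput out = new OrderedOutput();
        out.send(new Event(30, "FireAlarm"), "alerts");      // RHS actions may fire
        out.send(new Event(10, "SmokeDetection"), "alerts"); // in any order...
        out.flush("alerts", 50);                             // ...the stream stays ordered
    }
}
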
Conclusion

Due to its declarative nature, logic programming is a natural fit for performing event matching. First-order predicates suffice for several complex pattern-matching scenarios and, as seen, logic programming can be extended to support temporal constraints. Conversely, the integration of logic programming inference engines into the larger event processing network (EPN) ecosystem is a less natural fit. Finally, future work is needed to research the performance of joins and other complex operators in light of the demanding CEP environment.

References

1. Luckham, D. The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems (2002).

2. Allen, J.F. An Interval-Based Representation of Temporal Knowledge (1981).

3. The Drools Fusion User Guide (2009), http://downloads.jboss.com/drools/docs/5.0.1.26597.FINAL/drools-fusion/html_single/index.html#d0e1169.

4. Carriero, N. “Coordination Languages and their Significance”, Communications of the ACM (1992).

5. Wells, G. “Coordination Languages: Back to the Future with Linda”, Rhodes University.

6. Kowalski, R. “Predicate Logic as Programming Language”, in Proceedings IFIP Congress, Stockholm, North Holland Publishing Co., 1974, pp. 569–574.

7. Arasu, A., Babcock, B., Babu, S., Cieslewicz, J., Datar, M., Ito, K., Motwani, R., Srivastava, U., and Widom, J. STREAM: The Stanford Data Stream Management System (2004).

8. Schulte, R. and Bradley, A. A Gartner Reference Architecture for Event Processing Networks, ID G00162454, 2009.

9. Forgy, C. “Rete: A Fast Algorithm for the Many Pattern/Many Object Pattern Match Problem”, Artificial Intelligence, 19, pp. 17–37, 1982.

10. Luckham, D. and Schulte, R. Event Processing Glossary – Version 1.1 (2008).

11. Mitchell, J. Concepts in Programming Languages (2003).

12. JDBC API, http://java.sun.com/javase/technologies/database/.