New IBM algorithm models how the order of prior actions impacts events


Actions have consequences. And typically, when something happens, the order of the causes of the event really does matter. But understanding exactly how each action affects the final result is not always easy. Our latest work, “Order-Dependent Event Models for Agent Interactions,” presented at the International Joint Conference on Artificial Intelligence (IJCAI), can help.

Take two countries historically in conflict. Say one makes a negative statement about the other, leading the latter to retaliate, though not as strongly. It then becomes more likely that the first country will react much less aggressively than it would have if the order of those initial actions had been reversed.

It is possible to learn such behavior from datasets involving events that occur irregularly over time, following a so-called “multivariate point process.” These include electronic health records, wearable device data, financial data around trades by automated agents, user behavior in online retail and entertainment, and similar types of data. (See Fig. 1(a), below, for an example of three types of events occurring over a month.)
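
To make this concrete, here is a minimal sketch, with invented times and labels, of what such an event stream looks like as data: a sequence of irregularly spaced, timestamped occurrences of a handful of event types.

```python
# A toy multivariate event stream: occurrences of three event types ("A",
# "B", "C") arriving irregularly over time. Times (in days) and labels are
# invented purely for illustration.
stream = [
    (1.2, "A"),
    (3.7, "B"),
    (4.1, "A"),
    (9.5, "C"),
    (17.0, "B"),
    (22.3, "C"),
]

# Occurrence times of one event type, in order -- the kind of raw input an
# event-model learner consumes.
a_times = [t for t, label in stream if label == "A"]
print(a_times)  # [1.2, 4.1]
```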

Such events are said to have “ordinal impacts,” and until now it has been particularly challenging to understand how, when, and to what degree the various causes of such events actually affect the outcome.

This is where we come in.

To address the problem of ordinal impacts, our team at IBM T. J. Watson Research Center has developed Ordinal Graphical Event Models (OGEMs), new dynamic, probabilistic graphical models for events. These models belong to the broader family of statistical and causal models called graphical event models (GEMs), which represent temporal relations in processes governed by a multivariate point process. In our previous work on GEMs, the rate of an event’s occurrence at any time depends only on the presence or absence of its causes in the recent past; the order in which those causes occurred is ignored.
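
As a rough illustration of that earlier, order-blind setting, the hypothetical sketch below looks up a rate for an event type C based only on which of its parents, A and B, appeared in a recent window. The function name, window length, and rates are all assumptions made for illustration, not the paper’s notation.

```python
# Hypothetical sketch of a non-ordinal GEM intensity lookup: the rate of
# event type C depends only on which parents (A, B) occurred in a recent
# window before time t -- their order is ignored. Rates and the window
# length are made up.

def parents_in_window(stream, parents, t, window):
    """Set of parent labels observed in the interval (t - window, t]."""
    return frozenset(label for time, label in stream
                     if label in parents and t - window < time <= t)

stream = [(1.2, "A"), (3.7, "B"), (4.1, "A"), (4.6, "B")]

# One rate per subset of {A, B}: presence/absence matters, order does not.
rate_C = {
    frozenset():           0.05,
    frozenset({"A"}):      0.40,
    frozenset({"B"}):      0.25,
    frozenset({"A", "B"}): 0.90,
}

active = parents_in_window(stream, parents={"A", "B"}, t=5.0, window=2.0)
print(active, rate_C[active])  # e.g. frozenset({'A', 'B'}) 0.9
```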

OGEMs go a step further. They aim to capture the effect of the order in which preceding events have occurred and to detail how each one has affected the event of interest. They do so using a new algorithm we’ve developed, which takes event streams as input and learns both an event’s causes and a quantification of how the order of those causes affects it.

The algorithm provides a data-driven, domain-agnostic alternative to traditional approaches that rely on hand-crafted models and prior domain knowledge to represent the behavior of single or multiple agents. It uses a graph with a node for each type of event, a window for each node that gives the user-specified historical time period over which the order is determined, and a set of conditional intensity parameters for each node in the graph. (See Fig. 1(b), below, for an illustrative OGEM.)
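
A hypothetical sketch of the pieces each node carries, following the description above: its parents in the learned graph, a user-specified look-back window, and a table of conditional intensity rates keyed by the order in which parents occurred. The field names and numbers are our own illustration, not the paper’s notation.

```python
from dataclasses import dataclass, field

@dataclass
class OGEMNode:
    label: str
    parents: tuple                                # parent event types in the graph
    window: float                                 # look-back horizon, e.g. in days
    rates: dict = field(default_factory=dict)     # ordered parent tuple -> rate

node_C = OGEMNode(
    label="C",
    parents=("A", "B"),
    window=7.0,
    rates={
        (): 0.05,
        ("A",): 0.40,
        ("B",): 0.25,
        ("A", "B"): 0.90,   # A occurred, then B
        ("B", "A"): 0.30,   # B occurred, then A -- a different rate
    },
)
print(node_C.rates[("B", "A")])  # 0.3
```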

The graph itself may include cycles and even self-loops for event labels, capturing the dynamics of the process. For instance, event type C in Fig. 1(b) depends on historical occurrences of event types A and B, meaning there is a parameter for every potential order of every subset of {A, B}.
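
For this two-parent example, that means five parameters: one for the empty history, one each for “only A” and “only B,” and one each for “A then B” and “B then A.” The sketch below enumerates them and shows one possible convention (an assumption on our part) for reading off the observed order, based on each parent’s most recent occurrence inside the window.

```python
from itertools import permutations

# Enumerating the five parameters for node C with parents {A, B}: one per
# order of every subset.
orders = [subset
          for k in range(3)                        # subsets of size 0, 1, 2
          for subset in permutations(("A", "B"), k)]
print(orders)  # [(), ('A',), ('B',), ('A', 'B'), ('B', 'A')]

def observed_order(stream, parents, t, window):
    """Order in which each parent type most recently occurred in (t - window, t]."""
    last_seen = {}
    for time, label in stream:
        if label in parents and t - window < time <= t:
            last_seen[label] = time               # keep the latest occurrence
    return tuple(sorted(last_seen, key=last_seen.get))

stream = [(1.0, "B"), (2.5, "A"), (3.2, "B")]
print(observed_order(stream, {"A", "B"}, t=4.0, window=5.0))  # ('A', 'B')
```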

We’ve conducted an empirical investigation using event streams from various domains, such as healthcare, politics, and employment role changes, and have shown that OGEMs fit various datasets better than many other interpretable baseline models.

Fig. 1(a): An illustrative event stream with three types of events occurring irregularly over a month. Fig. 1(b): An illustrative Ordinal Graphical Event Model over these three types of events; the model includes a graph, a window for every node, and conditional intensity rate parameters for all nodes.

The main strength of OGEMs is their ability to reveal which orders of causal events are influential for a particular event of interest. This way, they enable what we call a “causal orders analysis.” In the IJCAI paper, we demonstrate this analysis with a qualitative investigation of social unrest-related events on an extract of a political event dataset called ICEWS, composed of machine-generated events in four Latin American countries over five years.

Analysts in professions such as business, intelligence, and finance would likely find such an interpretable model and its order-related insights beneficial for the purpose of knowledge discovery.

Still, OGEMs aren’t without limitations. The main one is that the number of parameters grows super-exponentially with the number of parents (causes), which prevents complex models from being learned from finite data. We’ve addressed this challenge in follow-up work, in a forthcoming AAAI 2021 paper where the parameters of the OGEM are captured through a tree representation.
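
To get a feel for why the parameter count blows up so quickly, count what a single node needs: one conditional intensity per ordered subset of its n parents, which approaches e · n!. The short back-of-the-envelope computation below is our own tabulation of that growth.

```python
from math import perm

# Parameters for one node with n parents: one per ordered subset, i.e.
# sum over k of n! / (n - k)!, which approaches e * n! -- super-exponential.

def num_ordinal_params(n):
    return sum(perm(n, k) for k in range(n + 1))

for n in range(1, 8):
    print(n, num_ordinal_params(n))
# 1 2
# 2 5
# 3 16
# 4 65
# 5 326
# 6 1957
# 7 13700
```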

