

Load Shedding via Window Drop in Complex Event Processing
Supervisor: M. Sc. Ahmad Slo
Examiner: Prof. Dr. rer. nat. Dr. h. c. Kurt Rothermel

Thesis Description

The tremendous increase in data volume and the need to interpret this data in real time to extract useful information have motivated many research communities to develop technologies that process such huge data online. Complex event processing (CEP) is an effective approach to processing such streams of data. CEP is used in many domains such as IoT, social media, e-commerce, etc.

Parallel CEP is an established paradigm for processing such huge data streams. A powerful parallelization technique employed in CEP is data parallelization, where each CEP operator is composed of three components: a splitter, operator instances, and a merger. The splitter partitions the input event stream into windows, which are processed in parallel by different operator instances. The merger reorders the produced complex events before emitting them.
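As a rough illustration of this splitter/operator-instance/merger structure, the following is a minimal sketch, not the prototype framework's actual API; the class, the count-based windowing, and the trivial "A followed by B" pattern are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of data-parallel CEP: a splitter partitions the event
// stream into count-based windows, operator instances process windows in
// parallel, and a merger restores window order before emitting results.
public class DataParallelCep {
    record WindowResult(int seq, List<String> complexEvents) {}

    public static List<WindowResult> process(List<String> events, int windowSize, int parallelism) {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        List<Future<WindowResult>> futures = new ArrayList<>();
        // Splitter: partition the input stream into consecutive windows.
        int seq = 0;
        for (int i = 0; i < events.size(); i += windowSize) {
            final int s = seq++;
            final List<String> window = events.subList(i, Math.min(i + windowSize, events.size()));
            // Operator instance: detect a toy pattern (event "A" immediately followed by "B").
            futures.add(pool.submit(() -> {
                List<String> matches = new ArrayList<>();
                for (int j = 0; j + 1 < window.size(); j++) {
                    if (window.get(j).equals("A") && window.get(j + 1).equals("B")) {
                        matches.add("A->B@window" + s);
                    }
                }
                return new WindowResult(s, matches);
            }));
        }
        // Merger: collect the results and reorder them by window sequence number.
        List<WindowResult> results = new ArrayList<>();
        for (Future<WindowResult> f : futures) {
            try {
                results.add(f.get());
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        results.sort(Comparator.comparingInt(WindowResult::seq));
        pool.shutdown();
        return results;
    }
}
```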

However, in burst situations, the input stream volume may exceed the system capacity. This increases the processing latency of events or may even bring down the whole system. One way to handle burst situations is to drop part of the input data, a technique known as load shedding.

This Master thesis investigates dropping windows as a load shedding mechanism. The goal is to design a model that can predict the utility of windows. The utility of a window depends on the number of complex events in the window and on the processing latency of the window. This model should then be used to drop windows that have low utility values, i.e., windows with little impact on the quality of results.
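A toy sketch of this idea follows. The utility function and the latency budget here are illustrative placeholders, not the model the thesis is to develop; under overload, the windows with the lowest utility scores are the ones dropped:

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of utility-based load shedding: each window gets a utility
// score from its predicted number of complex events and its predicted
// processing latency; under overload, lowest-utility windows are dropped.
public class WindowShedder {
    record Window(int id, double predictedMatches, double predictedLatencyMs) {
        // Illustrative utility: more expected complex events raise utility,
        // higher expected processing latency lowers it.
        double utility() {
            return predictedMatches / (1.0 + predictedLatencyMs);
        }
    }

    // Keep the highest-utility windows that fit within the latency budget;
    // the rest are shed.
    public static List<Window> shed(List<Window> queued, double latencyBudgetMs) {
        List<Window> byUtility = new ArrayList<>(queued);
        byUtility.sort((a, b) -> Double.compare(b.utility(), a.utility()));
        List<Window> kept = new ArrayList<>();
        double used = 0;
        for (Window w : byUtility) {
            if (used + w.predictedLatencyMs() <= latencyBudgetMs) {
                kept.add(w);
                used += w.predictedLatencyMs();
            } // else: drop the window (load shedding)
        }
        return kept;
    }
}
```

In practice the predicted match count and latency would come from the learned model, and the budget from the current system load.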


Tasks

  • Understand the available prototype CEP framework.
  • Develop a model to predict the number of complex events in a window and the processing latency of a window.
  • Derive the utility of windows from these predictions.
  • Design an algorithm that, in case of overload, drops the windows that keep the overall gain maximized.
  • Implement and integrate the proposed model and algorithm in the framework.
  • Evaluate the developed model and algorithm extensively.
  • Document the concepts, algorithms, and evaluations in written form.
  • Present your results in the department colloquium.


Requirements

  • Good background in machine learning.
  • Very good programming knowledge in Java.
  • Good background in parallel and multithreaded programming.