Forum: Processing Real-time Data Streams on Accelerator-based Systems
| Content Provider | Semantic Scholar |
|---|---|
| Author | Verner, Uri |
| Copyright Year | 2013 |
| Abstract | Stream processing is one of the most difficult problems from the algorithmic and system design perspectives. This is because the data processing rate should not fall behind the aggregate throughput of arriving data streams, otherwise leading to buffer explosion or packet loss. Stateful processing, which is the focus of this work, also requires the previous processing results to be available for computations on newly arrived data. Even more challenging is the problem of hard real-time stream processing. In such applications, each stream specifies its deadline, which restricts the maximum time arrived data may stay in the system. The deadline requirement fundamentally changes the system design space, rendering throughput-optimized stream processing techniques inappropriate for several reasons. First, tight deadlines may prevent computations from being distributed across multiple computers because of unpredictable network delay. Furthermore, schedulability criteria must be devised in order to predetermine whether a given set of streams can be processed without violating their deadline requirements and exceeding the aggregate system throughput. Runtime predictions must thus take into account every aspect of the processing pipeline. A precise and detailed performance model is therefore crucial. The runtime prediction problem, hard in the general case, is even more challenging here: to allow deadline-compliant processing, the predictions must be conservative, in conflict with the goal of higher aggregate throughput. A compute engine that aims to manipulate hard real-time streams can be as small as a smartphone or as large as a data center. It is commonly agreed that an important building block of such engines is a system that combines CPUs and Graphics Processing Units (GPUs), where the OS, scheduler, drivers and applications are running on the CPUs, while the GPUs are used as efficient accelerators for applications that need to manipulate data streams. |
| File Format | PDF, HTM/HTML |
| Alternate Webpage(s) | https://www.amrita.edu/icdcn/Uri.pdf |
| Language | English |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Article |
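The schedulability criteria mentioned in the abstract are tests that decide up front whether a set of streams can meet its deadlines on the available resources. As an illustration only, not taken from the thesis, the sketch below shows a classical density-based sufficient test for EDF scheduling of periodic streams on a single processing resource; the `Stream` fields and the example numbers are hypothetical.

```python
# Minimal sketch (assumption, not the thesis's method): a density-based
# sufficient schedulability test for EDF, assuming each stream delivers a
# packet every `period` seconds and needs at most `wcet` seconds of
# conservatively estimated processing time before its relative `deadline`.

from dataclasses import dataclass

@dataclass
class Stream:
    wcet: float      # worst-case processing time per packet (seconds)
    period: float    # packet inter-arrival time (seconds)
    deadline: float  # relative deadline (seconds)

def edf_schedulable(streams: list[Stream]) -> bool:
    """Conservative check: each stream's demand is charged against the
    tighter of its deadline and period; total density must not exceed 1."""
    density = sum(s.wcet / min(s.deadline, s.period) for s in streams)
    return density <= 1.0

# Hypothetical example: three streams whose aggregate density is about 0.73,
# so the test deems them schedulable without deadline misses.
streams = [Stream(wcet=0.002, period=0.010, deadline=0.010),
           Stream(wcet=0.005, period=0.020, deadline=0.015),
           Stream(wcet=0.001, period=0.005, deadline=0.005)]
print(edf_schedulable(streams))  # True
```

This test is sufficient but not necessary: it may reject stream sets that a more detailed analysis (or the precise performance model the abstract calls for) would accept, which matches the abstract's point that deadline compliance requires conservative runtime predictions at the cost of aggregate throughput.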