Cloud-based data management and processing systems, spanning tens of thousands of machines and supporting thousands of concurrent users, require new architectures, programming models and system designs that go beyond approaches used in fixed-size compute clusters.
A wide range of systems and platforms have been proposed to address these needs, ranging from batch-oriented systems such as MapReduce or Apache Hadoop to new, real-time streaming platforms such as Apache S4.
To enable new types of big data applications, it is necessary to overcome challenges spread across the areas of software systems, distributed systems, networking, data management and security research.
This workshop seeks submissions describing radical new alternatives for how data can be processed and managed in cloud environments. The goal is to attract research that has the potential to underpin the next generation of scalable and efficient cloud data applications on top of high-level, flexible and generic platforms.
Keynote Speakers
- Ant Rowstron, Principal Researcher, Microsoft Research Cambridge:
"The hardware is evolving but do cloud applications care?"
Abstract: We are beginning to see changes in the design of clusters being built for commodity data centers that could be highly disruptive. The main motivation for the change is the drive to increase server density and reduce costs, and, while it is early days, we see many major companies beginning to support the new approaches. An important question is what this means for the software platforms that run in the data center. Is it a case of new hardware, new opportunities, or is this change even going to be visible to the applications running on these data centers? In this talk, I will describe some of the innovations we are seeing in cluster design, try to give a feel for why they are happening, and highlight how they could impact data processing in the future. In particular, I'll use MapReduce as an example of where the hardware changes could impact performance.
Bio: Ant Rowstron works in Networked Systems, at the junction of Networking, Systems and Distributed Systems. Most recently he has been interested in "big data" and in particular in building hardware (clusters) and software stacks to support big data processing. He has been exploring this from a number of angles, including using ideas from HPC in the data center, in a project called CamCube.
- Leendert van Doorn, Corporate Fellow, AMD:
"Challenges in Securing the Cloud"
Abstract: Scale and complexity are a deadly combination when it comes to securing the Cloud. In this talk I will discuss Cloud security from a systems perspective and in particular from a semiconductor manufacturer's perspective. I'll take a critical look at current and future system trends (both hardware and software). I'll talk about new security technologies that will appear in hardware and encourage the audience to take a closer look at some "forgotten" technologies. I'll specifically focus on what works well and what does not in a world where scale and cost are the overriding engineering principles.
Bio: Dr. Leendert van Doorn is a Corporate Fellow/CVP at Advanced Micro Devices (AMD), where he is responsible for software strategy and, in particular, has corporate technology responsibility for virtualization, security, system software and development tools. Before joining AMD he was a Senior Manager at IBM's T.J. Watson Research Center, where he worked on secure hypervisors, trusted computing, and secure coprocessors.
Important Dates
Paper submission deadline: January 27, 2013 (extended to February 10, 2013)
Notification of acceptance: February 27, 2013 (delayed)
Camera-ready deadline: March 13, 2013
Workshop: April 14, 2013
You can find information about the workshop location on the EuroSys'13 website. Please email any questions to the organisers.