Workshops

International Workshop on Resource brokering with blockchain (RBchain)


Blockchain is a distinctive technology built on fundamental concepts of computer science, and it has proven instrumental for many applications, ranging from financial to governmental services. In many ways, it has transformed application design and development, prompting a rethinking of how information is distributed and processed across the underlying infrastructure. While the majority of these applications are in the financial domain, key opportunities lie in connecting infrastructural services, such as Cloud, Fog and Edge computing, with blockchain to create and provide new applications and services.

The aim of this workshop is to bring together researchers and practitioners working in distributed systems, cryptography, and security from both academia and industry, who are interested in the technology and theory of blockchains and their protocols. The workshop will provide the international research community with a venue to discuss and present the requirements of blockchain fundamentals and applications and to bring forward new designs that meet those requirements. We encourage contributions that analyze a large variety of blockchain applications from a network, storage and computational perspective, to understand how they may take full advantage of the underlying infrastructure. The program will consist of submitted papers and one or more invited speakers presenting recent results in the area of blockchains, which should also be of interest to the audience of CloudCom 2018.

For more details visit the workshop website.

2nd International Workshop on Uncertainty in Cloud Computing


Cloud computing is an emerging technology that offers various services on demand. The correct characterization and management of Cloud environment objects (clouds, data centers, providers, services, data, users, etc.) is the first step towards effective provisioning and integration of cloud services. However, the Cloud computing environment is often subject to uncertainty, which can be attributed to the incompleteness and imprecision of the available Cloud information, as well as to the highly dynamic nature of the environment.

Cloud services are associated with some uncertainty in their information, including QoS levels, user ratings, available resources, workload and performance changes, dynamic elasticity, availability zones, service descriptions, etc. The highly dynamic cloud environment adds a further factor of uncertainty, as it may negatively affect the quality of cloud services and, consequently, their provisioning and integration. This uncertainty about the context of cloud services raises the question of how far the available cloud information can be trusted, and brings additional challenges to cloud actors. Therefore, handling uncertainty in cloud environments is of paramount importance for maintaining the sustainable use of the technology.

Extensive research has been conducted to address uncertainty issues in various fields, including computational biology, e-commerce, social networks, decision making, data integration, location-based services and, recently, the Internet of Things. However, uncertainty issues in the context of Cloud computing remain unresolved. A main motivation for addressing uncertainty in the cloud, in order to satisfy user needs, is the growing reliance on this highly dynamic, cross-platform environment, which can be viewed as a large distributed container of uncertain Cloud services and their related data.

More information can be found on the workshop website.

ADON – International Workshop on Anomaly Detection on the Cloud and the Internet of Things


Anomalies arise in systems as a result of malicious user behavior or of unscheduled changes in system operation. With the advent of the cloud, similar behavior is now detected in virtualized environments such as those of cloud providers, now affecting system operation at scale and a much larger number of users, with significant economic and operational impact. Although cloud systems are considered more efficient than legacy on-premises systems, for example in terms of reliability and security, they are exposed to a much larger number of users and to the Internet. At the same time, due to its scalability and affordability, the cloud is considered the ideal environment for deploying IoT applications. This exposes the cloud to even more risks, as IoT operates at the periphery of the cloud and is generally less protected than the cloud itself. In particular, the advent of the cloud and the Internet of Things (IoT) opens up new possibilities in the design and development of methodologies that ensure reliable security protection and, in case this fails, of methodologies for detecting and dealing with the cause and point of system failure.

Malicious behavior detection

Anomaly detection for malicious behavior, which is typically expressed as (a) fraud detection, in which authorized or unauthorized users operate the system for the purpose of unfair or unlawful gain, and (b) intrusion detection, in which unauthorized users attempt to disrupt normal system operation.

Large scale system failures

Anomaly detection for large scale system failures caused by heavy (CPU, network and memory) workloads or by faulty/misconfigured resources. A special case of system failure occurs when parts of the system fail to operate as scheduled due to power failure or material fatigue (e.g. disk failure). See the sketch below for a toy illustration.
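
As a toy illustration of detecting such failures from monitoring data, the following Python sketch scans hypothetical resource-utilization records for threshold violations; the record format, field names and limits are assumptions made for this example, not part of the workshop description.

    # Hypothetical log records: (timestamp, cpu_util, mem_util).
    # Thresholds below are illustrative assumptions only.
    records = [
        ("12:00:00", 0.42, 0.55),
        ("12:00:30", 0.97, 0.58),  # CPU saturated
        ("12:01:00", 0.40, 0.99),  # memory exhausted
    ]

    CPU_LIMIT = 0.90
    MEM_LIMIT = 0.95

    for ts, cpu, mem in records:
        # Flag any record where a resource exceeds its limit.
        if cpu > CPU_LIMIT or mem > MEM_LIMIT:
            print(f"{ts}: possible resource exhaustion "
                  f"(cpu={cpu:.2f}, mem={mem:.2f})")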

IoT systems

Anomaly detection in IoT systems, which operate at the periphery of the cloud and are generally less protected than the cloud itself, and are therefore particularly exposed to both malicious behavior and system failures.

Anomaly detection has been studied extensively in recent years, and new methods are now becoming available on the cloud. Depending on the application, anomalies can be detected either in real time (typically by analyzing stream data acquired during the application's and system's operation) or in batch (by analyzing system log data). Stream processing methods and systems (for example Storm, Spark and Flink) and big data analysis techniques (as log data eventually become big), combined with Machine Learning techniques for adapting anomaly detection to the peculiarities of the data and the operating environment, are of particular importance to the design of anomaly detection methods. Combined with methods for security analysis in virtualized environments such as the cloud, a new era of anomaly detection methods will soon arise.
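
As a minimal sketch of the real-time case, the following Python snippet flags values in a metric stream whose rolling z-score exceeds a threshold; the window size, threshold and synthetic data are illustrative assumptions, and a production system would typically run such logic inside a stream processor such as Storm, Spark or Flink.

    from collections import deque
    import math

    class RollingZScoreDetector:
        """Flag values deviating strongly from a rolling window's mean.

        window and threshold are illustrative choices, not values
        prescribed by any particular system or workshop paper.
        """

        def __init__(self, window=100, threshold=3.0):
            self.values = deque(maxlen=window)  # recent history
            self.threshold = threshold

        def observe(self, x):
            """Return True if x looks anomalous relative to recent history."""
            anomalous = False
            if len(self.values) >= 10:  # wait for a minimal history
                mean = sum(self.values) / len(self.values)
                var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
                std = math.sqrt(var)
                anomalous = std > 0 and abs(x - mean) / std > self.threshold
            self.values.append(x)
            return anomalous

    # Example: detect a spike in a synthetic metric stream (e.g. CPU load).
    detector = RollingZScoreDetector()
    stream = [0.30 + 0.01 * (i % 5) for i in range(200)] + [0.95]
    alerts = [i for i, x in enumerate(stream) if detector.observe(x)]
    print(alerts)  # -> [200], the index of the injected spike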

The purpose of this Workshop is to bring together experts from the fields of distributed computing systems, including security, cloud and the Internet of Things, as well as experts on algorithms for signal processing, log analysis, pattern recognition and statistical learning models, working on all aspects of anomaly detection such as those referred to above.

More information can be found on the workshop website.

1st International Workshop on Next Generation Clouds for Extreme Data Analytics (XtremeCLOUD 2018)


The goal of the 1st International Workshop on Next Generation Clouds for Extreme Data Analytics (XtremeCLOUD) is to bring together researchers and practitioners from both academia and industry to explore, discuss and possibly redefine the state of the art in Cloud Computing with respect to heterogeneity, resource management and scalability, including methods and tools applied to any part of the algorithms and computing infrastructures involved, as well as use cases and applications related to extreme data analytics. This workshop solicits original research on fundamental aspects of Cloud Computing that enable extreme scale data processing, as well as on the design, implementation and evaluation of novel tools and methods for optimizing Big Data applications and workflows.

Topics of Interest
  • Hardware-aware Big Data frameworks
  • Benchmarking/modeling of performance/cost/energy consumption of resource demanding cloud applications on heterogeneous hardware
  • Algorithms, methods and tools to improve the utilization and scalability of Cloud infrastructures
  • Holistic and efficient management of Cloud resources
  • Novel architectures and programming models for extreme scale data processing
  • Extreme scale batch/streaming applications optimization
  • Scheduling algorithms and tools for heterogeneous execution
  • Applications and use cases of Big Data analytics over heterogeneous architectures/hardware
  • Visionary ideas on extreme data analytics and heterogeneous environments

More information can be found on the workshop website.
