Are you working in the shipping industry? Do you have a management, technical, or IT-related role in a large company or an SME interested in implementing Big Data technologies to improve shipping? Are you interested in optimizing maintenance, spare-parts inventory planning, and dynamic routing while cutting costs? Then you should join BigDataStack’s webinar on 26 June 2019!
BigDataStack, a leading project in which ATC and 13 other companies and research institutions collaborate to deliver an architecture of a complete stack based on a frontrunner infrastructure management system that drives decisions according to data aspects, is organizing a series of three webinars for end-users to learn more about the technologies developed within the project and put into practice in its three industry use cases.
The organizations using the BigDataStack technologies in these use cases will explain how they use them and how they improve end-users' daily work. BigDataStack technology providers will explain the technologies in more detail and answer any questions you may have about applying them in your own organization.
About the BigDataStack Technologies for Shipping Webinar
The BigDataStack algorithms will optimize maintenance, spare-parts inventory planning, and dynamic routing, and help cut their costs. The resulting predictions will be provided to DANAOS, a leading international maritime player operating more than 60 container ships.
The webinar focuses on the added value of BigDataStack technologies for shipping:
- Performing predictive analytics on both streaming and stored/historical data, a key enabler for optimizing all processes.
- The underlying infrastructure system will allow larger datasets to be exploited for more accurate predictions, while the CEP approach over cross-streams and federated environments (since different data are obtained from different sources) will enable additional aspects (e.g. inventory locations) to be combined and considered, which is not feasible today (see the sketch after this list).
- Moreover, the overall maintenance process will be modelled through the Process Modelling framework, and process mining techniques will provide insights into points of optimization and potential bottlenecks.
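To make the CEP idea above concrete, here is a minimal, hypothetical sketch of pattern detection over a live sensor stream combined with historical records. Everything here (the `reorder_alerts` function, the ship identifiers, the vibration figures) is illustrative only and is not part of the BigDataStack APIs:

```python
# Minimal, hypothetical sketch of CEP-style logic: a rolling window over a
# live sensor stream is compared against stored/historical baselines.
from collections import deque
from statistics import mean

# Historical baseline: average engine-vibration level per ship (assumed data).
historical_baseline = {"ship-01": 2.1, "ship-02": 1.8}

def reorder_alerts(stream, window_size=5, threshold=1.5):
    """Yield an alert when the rolling mean of a ship's vibration readings
    exceeds its historical baseline by `threshold` times."""
    windows = {}  # per-ship sliding windows of recent readings
    for ship_id, reading in stream:
        window = windows.setdefault(ship_id, deque(maxlen=window_size))
        window.append(reading)
        baseline = historical_baseline.get(ship_id)
        if baseline and len(window) == window_size and mean(window) > threshold * baseline:
            yield (ship_id, mean(window), baseline)

# Example usage with a simulated stream of (ship, vibration) events.
events = [("ship-01", 2.0), ("ship-01", 3.4), ("ship-01", 3.6),
          ("ship-01", 3.5), ("ship-01", 3.8), ("ship-02", 1.7)]
for ship, rolling, base in reorder_alerts(events):
    print(f"{ship}: rolling mean {rolling:.2f} exceeds baseline {base}")
```

In a federated setting, the same pattern logic would run over streams arriving from different sources (e.g. vessel sensors and inventory locations), which is exactly the combination the bullet above describes.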
Speakers
Dimosthenis Kyriazis
Technical Coordinator of the BigDataStack project and Assistant Professor at the University of Piraeus (Department of Digital Systems), will give an overview of the project and the BigDataStack architecture.
Stathis Plitsos
Head of Development at DeepSea Technologies – DANAOS, will present the DANAOS shipping use case and show how BigDataStack technologies optimize maintenance, spare-parts inventory planning, and dynamic routing, and help cut costs.
Orlando Avila
Ph.D. in Computer Science – Artificial Intelligence and senior software architect at Atos, will discuss the technologies used in the BigDataStack shipping use case.
You can register here.
About BigDataStack
BigDataStack is a Research & Innovation Action (RIA) funded under the H2020 programme of the European Commission; it lasts 36 months and kicked off on 1 January 2018. The project aims to deliver an architecture of a complete stack based on a frontrunner infrastructure management system that drives decisions according to data aspects, making it fully scalable, runtime-adaptable, and highly performant to address the needs of big data operations and data-intensive applications. Furthermore, the stack goes beyond purely infrastructure elements by introducing techniques for the dimensioning of big data applications, the modelling and analysis of processes, and the provision of data-as-a-service exploiting a proposed seamless analytics framework.
ATC leads all the interaction mechanisms of the BigDataStack platform. These mechanisms aim at:
- Increased and predictable performance of data operations and data-intensive applications by dimensioning them in terms of the required infrastructure resources.
- Efficiency and agility through declarative process modelling, allowing stakeholders to specify functionality-based process models that the modelling framework automatically turns into process analytics and mining tasks, which are then analyzed through the data mining mechanisms (a minimal illustration follows this list).
- Usability and extensibility by delivering a toolkit that allows big data practitioners both to ingest their analytics tasks (through declarative methods) and to set their requirements and preferences.
- Exploitation of the analytics outcomes through the visualization environment, providing a complete view of the data (e.g. outcomes of incremental queries) as well as of the infrastructure.

ATC is responsible for delivering the Declarative Process Modelling framework and the Visualization framework. Furthermore, ATC leads Innovation Management, supporting the project's innovation strategy and exploitation activities.
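As a purely illustrative sketch (the actual BigDataStack modelling framework and its interfaces are not shown here), a declaratively specified process model might be mined against an event log roughly as follows; all names and data are hypothetical:

```python
# Purely illustrative sketch of a declarative, functionality-based process
# model mined against an event log; none of these names reflect the actual
# BigDataStack framework.

# A stakeholder declares *what* the maintenance process looks like ...
process_model = {
    "name": "maintenance",
    "steps": ["inspect", "order_part", "receive_part", "repair"],
}

# ... and an event log of observed executions (case id, step) is mined
# against it to surface deviations and potential bottlenecks.
event_log = [
    ("case-1", "inspect"), ("case-1", "order_part"),
    ("case-1", "receive_part"), ("case-1", "repair"),
    ("case-2", "inspect"), ("case-2", "repair"),  # skipped ordering steps
]

def mine_deviations(model, log):
    """Report, per case, the declared steps that never appear in the log."""
    expected = set(model["steps"])
    observed = {}
    for case, step in log:
        observed.setdefault(case, set()).add(step)
    return {case: sorted(expected - steps) for case, steps in observed.items()}

print(mine_deviations(process_model, event_log))
# {'case-1': [], 'case-2': ['order_part', 'receive_part']}
```

The point of the declarative style is that stakeholders describe the process once, and analysis tasks like the deviation check above are derived automatically rather than hand-coded for each question.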