DELIVERABLES

FACTLOG DOCUMENTATION

Every factory has different operational contexts and needs. In FACTLOG, we propose a generic technology pipeline to model all assets as digital twins and offer cognition capabilities (the ability to understand, reason and act through optimization). This offering should be open, configurable per case and supported both by models and services that can be deployed and adapted to the different industry needs.

This deliverable aims to bridge our project vision and approach with the real needs of industry. The overall approach was to work in two inter-related ways: first, to identify “what the offer is”, i.e. to create an operational model of FACTLOG, explain how it works and mention indicative problems and challenges that we can address; second, to talk to the users (industry) to understand their real challenges and needs.

In D1.1, we present the results of those two approaches and describe the overall FACTLOG operational model, in which all its enablers (cognition, analytics, optimization, etc.) are placed in an integrated way. We also detail the user needs and how the FACTLOG operational model addresses them.

The operational model follows a generic flow pattern: it starts with the concept of modelling any asset/system/process in the industry as a network of inter-related Digital Twins (DTs). Those DTs interact with the physical assets in a bilateral way: collecting data and sending data. At the operational phase, data streams are collected from different information sources using a messaging service. Using a reasoning engine (combining both data-driven and model-driven approaches), we can identify patterns of behaviour and potential shortfalls. Simulation and forecasting can propagate the behaviour of the system into the near future and assess the impact of an identified anomaly. Last, using robust optimization methods, we can improve decision-making (planning, scheduling, auto-configuration, etc.).

FACTLOG has five interesting and different industrial cases. Most of them correspond to needs for predictive maintenance, anomaly detection and mitigation, energy monitoring, scheduling and optimal machine operation status. We have identified the needs and reference scenarios (information flow and actors in line with the FACTLOG operational model) that will help in the definition of the system specifications, boundaries and pilot particularities for deployments. As a next step, we will conduct workshops with external industrial players to confirm the operational model and identify further scenarios, which will give us a more comprehensive picture of the market perspectives of our solution. This was planned for the period of the pilot analysis, but due to the COVID-19 outbreak we faced problems in organizing such workshops. We expect to do this in the coming months, and an updated version of the deliverable will be submitted with these findings.

Through a holistic requirements elicitation approach, we expect such reference scenarios to be further detailed in the next step, which is the definition of the use cases and the functional/non-functional requirements.

Download

The main goal of this deliverable is to provide the foundation for the realization of the cognition-driven solutions in the pilots. Since this type of system is novel, this deliverable also explains some of the basic concepts, such as cognition and the cognition process. It appears that a metaphor of human cognition, as taken from psychology, can be a very suitable basis for developing the concept for an efficient design of cognition-driven industrial systems, which we refer to as the Cognitive Factory Framework (CFF).

The main advantage of the CFF is that it brings together two perspectives. One is the way in which human cognition deals with new information/situations, especially in the case of unknowns (when it is not known how to react based on existing models and past data). The other perspective is the industry-process oriented one: how a process behaves under variations (internal, external), i.e. how (un)stable the process performance (KPIs) is in such situations. The main goal of the CFF is to make industry processes able to deal with variations efficiently, based on the analogy to human cognition. More precisely, we define a cognition process consisting of four basic phases:

  1. Detect variations
  2. Understand root causes of variations
  3. Understand the impact of variations
  4. Find optimal reaction

We also define the roles of four basic technologies (data analytics, knowledge graphs, process modelling and simulation, and optimization) in these phases, illustrating that the envisioned cognition-driven processing is feasible.
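Purely as an illustration of how the four phases could be orchestrated across these four technologies, the sketch below shows one possible structure; the component names and method signatures are hypothetical and do not correspond to the actual FACTLOG interfaces.

```python
# Illustrative sketch of the four-phase cognition cycle (component names and
# methods are hypothetical, not the actual FACTLOG interfaces).

class CognitionCycle:
    def __init__(self, analytics, knowledge_graph, process_model, optimizer):
        self.analytics = analytics   # data analytics: detects variations
        self.kg = knowledge_graph    # knowledge graph: explains root causes
        self.psm = process_model     # process modelling & simulation: assesses impact
        self.optimizer = optimizer   # optimization: finds the best reaction

    def run(self, data_stream):
        # 1. Detect variations in the incoming data
        variations = self.analytics.detect_variations(data_stream)
        for v in variations:
            # 2. Understand root causes using domain knowledge
            causes = self.kg.explain(v)
            # 3. Understand the impact by simulating the near future
            impact = self.psm.simulate_impact(v, causes)
            # 4. Find the optimal reaction under the given constraints
            yield self.optimizer.best_action(v, causes, impact)
```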

In addition, we provide a deep analysis of all pilots regarding the realization of Cognitive Factory Framework, demonstrating that CFF is general enough to be applied in various use cases / scenarios.

This deliverable will serve as a kind of guideline for the realization of the specific components in other WPs, which combined together (based on the architecture described in D1.3) will provide desired cognition-driven solutions in all pilots.

Download

This deliverable reports on the specification of the FACTLOG architecture, reflecting the work performed in the context of Tasks 1.3 “Requirements analysis” and 1.4 “Architecture definition”, and the outcomes thereof. Following the analysis of the pilot scenarios identified, as well as the elaboration of use cases and system requirements, this deliverable proceeds with describing the components and main functions of the FACTLOG ecosystem.

To a great extent, the FACTLOG architecture is centred around the concept of Digital Twins (DTs). In the context of Cyber-Physical Systems (CPS), a digital twin of a production system can address the challenge of making systems easily and quickly reconfigurable, by being applied for realising and testing reconfiguration scenarios in a simulative environment; recommissioning of the production system therefore takes less time and enables a higher system availability. FACTLOG in fact extends this concept with the Enhanced Cognitive Twin (ECT), which can be seen as the evolution of the digital twin in the big data era. As such, it adds cognitive capabilities to the digital twin, enabling learning from the vast streams of data that flow through it and thus the continuous modelling of the physical element’s behaviour. In other words, an ECT has all the characteristics of a digital twin, with the addition of artificial intelligence features enabling it to optimize operation as well as continuously test what-if scenarios, paving the way for predictive maintenance and an overall more flexible and efficient production, using the operation data stored in the digital twin throughout its lifecycle. Furthermore, having an intelligent cyber layer expands the digital twin with self-x capabilities such as self-learning or self-healing, facilitating its inner data management as well as its autonomous communication with other digital twins.

In order to achieve the above, a digital twins platform is put in place as the focal point of access to the operational context of all manufacturing entities (MEs) in FACTLOG, facilitating the realisation of the three main aspects characterising digital twins: synchronisation with the physical asset, active data acquisition from the production environment, and the ability of simulation through the appropriate interfaces. On this basis, a number of cognitive services cooperate in order to turn digital twins into ECTs.

The analytics modules hold the critical role of extracting knowledge from the data ingested into FACTLOG. This may take various forms, beginning with the creation of data-driven models based on detected anomalies and variations in past data. They may also perform predictions, which, combined with the above models, help identify situations of interest and thus facilitate variation understanding and root-cause establishment in the cognition cycle. Certain analytics tools are also in a position to perform simulations based on industry data, which can in turn be exploited for indicating potential actions to improve the situation (insights).

However, data is not the only source of knowledge in FACTLOG; the solution also incorporates human domain experience, formally expressed by means of two frameworks, the Knowledge Graph Model (KGM) and the Process & Simulation Model (PSM). The former formalises the rather static aspects of FACTLOG deployments, denoting a generic ontology representation and description of all FACTLOG mechanisms and tools, but also of the specific MEs, thereby guiding their consistent representation through digital twins. The latter models the dynamic behaviour of each considered manufacturing process, be it a continuous or a discrete one. Both the KGM and the PSM are invaluable in putting the results generated by other components in context, as well as in verifying and evaluating them, while they themselves are refined through their interactions with other modules, thus exploiting newly acquired knowledge towards the continuous evolution of FACTLOG cognition capabilities.

FACTLOG also contains a dedicated module which can solve well-defined short-, mid- and long-term production optimization problems. Depending on each particular production setting, it can address a variety of optimization issues, ranging from production schedules for approval at the shopfloor to re-scheduling and re-configuration of machines or entire processes, taking into account a variety of constraints. In doing so, the optimization module uses appropriate outputs of the analytics as inputs to perform its functions, guided throughout the procedure by the KGM and PSM. This is, among others, a good example of how new knowledge extracted from the data and used to refine the behaviour models of the digital twin benefits the cognition process: since the data analytics results are directly incorporated into the system and models, the digital twin can incrementally improve its behaviour and features and thus steadily improve the optimization assistance it provides to production.

Finally, in order to provide for effective interaction, coordination and orchestration of FACTLOG components and operations, the architecture includes the Message and Service Bus (MSB), a mediation middleware between the components of the FACTLOG ecosystem. The MSB comprises a messaging and streaming system providing for asynchronous and point-to-point message exchange between the system entities, circulation of events, and delivery of data in a real-time or batch-oriented fashion. Furthermore, the MSB supports integration of the production infrastructure objects and data sources by means of appropriate connectors. The MSB also incorporates the functionality for the orchestration of FACTLOG components and operations as regards the execution of dataflows.
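To illustrate the messaging side of such a bus, the sketch below publishes and consumes a sensor reading on a Kafka-based deployment (Kafka streams are the messaging technology referenced elsewhere in the project documentation); the broker address and topic name are hypothetical placeholders, not the actual MSB configuration.

```python
# Minimal sketch of publishing a sensor reading to the bus and consuming it,
# assuming a Kafka-based deployment (kafka-python); broker and topic are invented.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="msb.factlog.local:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("dt.sensor-readings", {"asset": "line-1", "temp_c": 71.4, "ts": "2022-05-10T09:00:00Z"})
producer.flush()

consumer = KafkaConsumer(
    "dt.sensor-readings",
    bootstrap_servers="msb.factlog.local:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    reading = message.value  # e.g. routed on to analytics or a digital twin
    print(reading)
    break
```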

This document describes in detail all the requirements for the analytics system to be set up and used in the FACTLOG project. The requirements are collected and presented on a per-pilot basis, considering the types and volumes of data available for each pilot as well as the target problems. Finally, the design specifications for the analytics system are described, both from the conceptual and the technical standpoint.

Each pilot use case has a set of target problem scenarios it wants to address with the technology from the FACTLOG platform. This deliverable presents all these scenarios for each pilot and identifies the role of analytics in them. Once each scenario is formulated as an analytics problem, the methodology for addressing it is identified. The data sources and types are also inspected, and it is assessed whether they are appropriate for the planned approach. An aggregated overview of the requirements is given for clarity.

Based on the requirements, a design specification for the analytics system is drafted. First, the analytics system is placed in relation to the other components. Its role as a building block of the cognitive factory framework is explained, as well as the interactions it has with the optimisation system and the knowledge graph and process models. Then, a set of tools (i.e. analytics libraries and platforms) is identified, along with the methods and approaches that address the requirements collected in the preceding sections of the document.

The deliverable is a comprehensive collection of requirements for the analytics system and the specification of its conceptual and technical design. It is built on the information and insights collected regarding the pilots and the project challenges up to the time of its preparation. The requirements and specifications may evolve as the project progresses and any adaptations will be reported in future deliverables.

Download

This document presents the analytics platform developed in the FACTLOG project for use in the process industry. The platform analyses the data from the manufacturing systems and produces insights and predictions which inform the other components. This includes predicting future states of systems from past sensor readings and machine settings; computing the most likely values of missing readings; building specialised models of key infrastructure assets from the pilots, such as distillation columns; and identifying and analysing unusual situations in complex multivariate datasets.

The methodology section (Section 2) introduces the main concepts on which the platform tools are based. The stream forecasting problem is framed, where future values of a time series are predicted from past observations and possibly contextual data. The approach based on Artificial Neural Networks is introduced and extended with the capability to impute missing values in the time series in a pre-processing layer. A specialised approach for modelling distillation columns is presented, as they are a common asset in two of the pilots. Finally, the methodology for detecting and analysing unusual situations for the cognition process is described.
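As a purely illustrative stand-in for the ANN-based approach (not the actual FACTLOG implementation), the following sketch predicts the next value of a time series from a sliding window of past observations, with missing readings imputed in a pre-processing step; the data and model sizes are invented.

```python
# Illustrative stream-forecasting sketch: forecast the next value of a series
# from a window of past observations, with an imputation pre-processing layer.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPRegressor

def make_windows(series, width=10):
    X, y = [], []
    for i in range(len(series) - width):
        X.append(series[i:i + width])
        y.append(series[i + width])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
series[rng.choice(500, 25, replace=False)] = np.nan   # simulate missing readings

X, y = make_windows(series)
mask = ~np.isnan(y)                                    # targets must be observed
model = make_pipeline(SimpleImputer(strategy="mean"),  # imputation layer
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))
model.fit(X[mask], y[mask])
print("next-step forecast:", model.predict(series[-10:].reshape(1, -1)))
```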

Implementation details are given in Section 3. The focus is on the description of the API structure of the Batch Learning Forecasting component. The description includes building forecasting models and data imputation as well as details regarding deployment of multiple instances and attaching the models to a messaging queue (i.e. a Kafka stream). A similar technical description is given for the distillation columns model.

Finally, demonstration scenarios are presented in Section 4. These include example tool deployments from the Tupras, JEMS and Continental pilots with their input data, model setup, evaluation setup, and results. Though these deployments are still in active development, the results achieved show promise.

Download

This deliverable describes a manufacturing chain recommender system used to recommend decision-making options regarding the manufacturing process and logistics, the latter based on demand forecasts. While the recommendations across the manufacturing process are based on domain knowledge and heuristics, the models regarding logistics are based on demand forecasts and collected data regarding different shipping options. The models we developed are relatively simple but introduce great added value for the end users (e.g., shop floor managers, production planners, and logisticians), aiding them in decision-making.

The heuristic version for the manufacturing process has already been deployed to production. The recommender system applied to the logistics use case was developed with real data but has not yet been deployed to a production environment; we expect to incorporate this solution into Qlector LEAP in the future. Qlector LEAP is an AI platform dedicated to the co-creation of optimal plans and the maintenance of lean production. In the last two years, Qlector LEAP has been recognised by the Slovenian Chamber of Commerce and Industry as one of the most innovative solutions.

This document presents the anomaly detection system of the FACTLOG project. In the FACTLOG architecture, anomaly detection methods play the role of the eyes and ears. They monitor the operation of the manufacturing environments and raise alerts when unexpected situations occur. Other components, such as process modelling and optimisation, are then invoked to take over and find appropriate actions to address whatever may have happened.

The anomaly detection methods must be broad and flexible enough to detect a wide variety of anomalies, and they must offer ways for calibration, so that they detect all the critical issues while not flooding the system with spurious and meaningless alerts. Section 2 describes the selection of methods chosen for use. This includes a wide selection of univariate and multivariate algorithms, from simpler approaches based on running averages and thresholds, through detection using machine learning models such as the Isolation Forest, to deep-learning based methods such as Generative Adversarial Networks (GANs). For complex event processing, where more meaningful events need to be identified, StreamStory is presented, which builds a hierarchical Markov model of a multivariate system. The model can be used to run Monte-Carlo simulations of system state transitions or inspected manually in the graphical interface to gain insights into the system operation.
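As a minimal illustration of one of the model families mentioned above, the sketch below flags anomalous multivariate readings with an Isolation Forest; the features, thresholds and data are invented and are not the calibrated pilot settings.

```python
# Minimal multivariate anomaly-detection sketch with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(1000, 2))  # e.g. temperature, vibration
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([[50.5, 1.21],   # typical operation
                         [63.0, 1.90]])  # unusual combination
labels = detector.predict(new_readings)  # -1 flags an anomaly, 1 is normal
scores = detector.decision_function(new_readings)
for reading, label, score in zip(new_readings, labels, scores):
    if label == -1:
        print(f"alert: anomalous reading {reading} (score {score:.3f})")
```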

In Section 3, deployments of the anomaly detection methods in the FACTLOG use cases are presented. These examples include detecting parameter value drift in the Continental use case, the unsupervised detection of anomalies in the JEMS use case, both on the level of individual readings and on the level of system-wide events, and finally an example of detecting organisational anomalies through the QlectorLEAP application.

The implementation details of the anomaly detection module developed during FACTLOG are presented in Section 4. The section covers some of the methodological details of the implemented methods, the specification of their APIs, and the technical guidelines needed for their deployment.

Download

This deliverable is related to Task 3.1: Design and life-cycle management of (Enhanced) Cognitive Twins, which is responsible for the maintenance of the models of the usual/normal behaviour of an industry asset in order to keep the operation of the Twin valid. The main problem is the so-called model drift.

The Enhanced Cognitive Twin is an extension of the Digital Twin which models an asset's behaviour from the cognition point of view, as detailed in D1.2. This means that the models support the cognition-driven behaviour of the Enhanced Cognitive Twin.

In this deliverable we describe a novel process for maintaining Digital Twin models. In order to support model life-cycle management, we adapted the Autonomic Computing MAPE architecture and present approaches for model Creation, Usage and Change Detection.
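As a schematic illustration of how a MAPE-style loop could manage model drift, the sketch below monitors prediction error, detects drift against a baseline and triggers retraining; the error metric, threshold and retraining hook are hypothetical, not the implemented Task 3.1 services.

```python
# Schematic MAPE-style loop for model life-cycle management
# (Monitor, Analyse, Plan, Execute); thresholds and hooks are assumptions.
import numpy as np

class ModelLifecycleManager:
    def __init__(self, model, retrain_fn, drift_threshold=0.2):
        self.model = model
        self.retrain_fn = retrain_fn
        self.drift_threshold = drift_threshold
        self.baseline_error = None

    def monitor(self, X, y_true):
        y_pred = self.model.predict(X)
        return float(np.mean(np.abs(y_pred - y_true)))   # mean absolute error

    def analyse(self, error):
        if self.baseline_error is None:
            self.baseline_error = error
            return False
        return error > (1 + self.drift_threshold) * self.baseline_error  # drift detected?

    def plan_and_execute(self, X, y_true):
        error = self.monitor(X, y_true)
        if self.analyse(error):                      # model no longer valid
            self.model = self.retrain_fn(X, y_true)  # change: rebuild the model
            self.baseline_error = None               # reset baseline after retraining
        return error
```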

We also present the results from the Continental pilot.

This deliverable accompanies the software code developed in the scope of the task T3.1.

The document explains the conceptual background, with a description of the services architecture, the analytical tools developed, the technologies selected, the implementation approach and design, and the final pilot-specific implementation. The following chapter introduces the conceptual approach to services design.

The Services section (Section 3) introduces the architectural approach to service implementation and the data structures. The section describes each service implemented, with its basic methodology, background technology and functional capabilities. In this way, the utilization of the analytical tools developed is presented with a description of their limitations and generalizability scope.

The utilization of the designed services is presented in Section 4, where individual pilot use cases are addressed through the lenses of the pilot use cases, pilot data with individual data structures, and validation of results in trained models and service pipeline configurations. The demonstration scenarios explain and demonstrate the reliability and validity of the use of individual services in the FACTLOG project. More importantly, key findings also indicate constraints and conditions for using specific services, such as streaming sensor data for non-typical sensors (for example, electric motor power consumption), and how to address this issue in further use of the services. Most importantly, the basic service data structures were augmented with error metrics, which provide important KPIs for accurate integration and utilization of the capabilities.

Finally, the results of the pilot implementation and demonstration were summarized and contextualized to give strategic guidelines for further service utilization: possible combinations of services, key preconditions, how different use cases and data can be treated at the level of service utilization, how to use error metrics as KPIs, and how to achieve higher reliability and generalizability.

The main goal of this deliverable is to boost the cognition process (i.e., the management of enhanced cognitive digital twins) with the factory knowledge, derived mainly in WP4, that is required for the internal reasoning processes. One of the main challenges is to understand the role of process and domain knowledge in the cognition process formalized in T3.1, leading to a completely new view on factory knowledge and consequently on the methods for processing and validating it. The main outcome is a set of interfaces for accessing that knowledge. In addition, feedback about the validity of the knowledge will be returned to support the FACTLOG platform development.

In this deliverable, ontology and knowledge graph models are first investigated. Then, cognitive factory services and knowledge graph modelling are identified to provide all the functionalities of the knowledge graph models. Next, an ontology based on BFO is introduced and, based on this ontology, knowledge graph models are developed. Finally, the integration of the knowledge graph models with the FACTLOG platform is demonstrated using three approaches: 1) integration based on OWL models; 2) integration based on Neo4j; 3) integration based on HTTP.
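As an indicative sketch of the second integration path (Neo4j), the snippet below writes and queries a graph node with the official Python driver; the connection details, node labels and properties are hypothetical placeholders, not the actual FACTLOG graph schema.

```python
# Minimal sketch of a Neo4j-based knowledge-graph integration using the
# official Python driver (v5 API); URI, credentials and schema are invented.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def upsert_machine(tx, machine_id, line):
    tx.run("MERGE (m:Machine {id: $id}) SET m.line = $line", id=machine_id, line=line)

def machines_on_line(tx, line):
    result = tx.run("MATCH (m:Machine {line: $line}) RETURN m.id AS id", line=line)
    return [record["id"] for record in result]

with driver.session() as session:
    session.execute_write(upsert_machine, "press-01", "line-A")
    print(session.execute_read(machines_on_line, "line-A"))
driver.close()
```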

Download

This deliverable is related to Task 3.5: Cognition in (pro)action: resolving unknown unknowns with Cognitive Twins, which is responsible for introducing, realizing and validating proactivity (proactively resolving currently “unresolvable” situations, i.e. unknown unknowns) in process industry scenarios through cognition (the D2CogniPro system).

D2CogniPro is a new generation of intelligent system which uses human-cognition-inspired processing to improve the detection, analysis and resolution of critical situations.

This document reports on the development of the D2CogniPro system by providing:

a) the methods implemented in the system,
b) a walkthrough of the current implementation of the system (prototype), and
c) the application in selected use cases.

This deliverable accompanies the software code developed in the scope of the Task 3.5.

It is important to emphasize that the D2CogniPro system was developed entirely within FACTLOG. In this reporting period, we worked on the implementation of the advanced learning methods.

Process Simulation Modelling is an overall methodological scheme, suitable for modelling and simulating most or even all types of process industries, as well as for providing the capacities and services involved in the process.

In the context of FACTLOG, Process Simulation Model denotes a generic model with all related methods, algorithms, mechanisms, services and tools it directly uses, integrated into an overall modelling application or platform. In any specialized model, these methods, algorithms and mechanisms do not change. Process Simulation Modelling interconnects and interoperates with external AI tools, Optimisation tools, Analytics tools, etc.

To model any system, its state space needs to be defined, i.e., the variables that govern the behaviour of the system with respect to the metrics being estimated. If the variables continuously change over time, it is a continuous system. If the system state instantaneously changes at discrete points in time, instead of continuously, it is a discrete system.

The baseline for system modelling will be the process modelling methodology, whereby the system modelled is analysed into machines/processes, production stages, resource stocks/warehouses, inputs, outputs and material/energy flows. All model entities are organised into a hierarchical inheritance registry that provides prototype reconfigurable building blocks for building any industrial system model.
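Purely as an illustration of such reconfigurable building blocks (the class names and attributes below are hypothetical, not the PSM Tool's actual entities), a plant model could be composed from a small hierarchy of entities covering machines, stocks and material flows:

```python
# Illustrative sketch of reconfigurable process-model building blocks:
# a small class hierarchy from which a plant model could be composed.
from dataclasses import dataclass

@dataclass
class ModelEntity:
    name: str

@dataclass
class ResourceStock(ModelEntity):
    level: float = 0.0               # current material in the stock

@dataclass
class Machine(ModelEntity):
    rate: float = 1.0                # units processed per time step
    input_stock: ResourceStock = None
    output_stock: ResourceStock = None

    def step(self, dt=1.0):
        # Material flow: move up to rate*dt units from input to output stock.
        moved = min(self.rate * dt, self.input_stock.level)
        self.input_stock.level -= moved
        self.output_stock.level += moved

raw = ResourceStock("raw-material", level=100.0)
finished = ResourceStock("finished-goods")
press = Machine("press-01", rate=5.0, input_stock=raw, output_stock=finished)
for _ in range(10):                  # simulate ten time steps
    press.step()
print(finished.level)                # 50.0 units produced
```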

The process models developed in the context of FACTLOG have the following attributes:

  • They are dynamic in the sense that key flows, as well as key process control settings
    are continuously updated in near real-time, in connection to a real-time monitoring
    system maintaining a digital Shadow of the physical System.
  • They are adaptive, in the sense that (a) the models support/facilitate the physical
    system’s adaptation to new goals/targets/conditions/possibilities, and (b) the models
    can adapt closely following any structural, methodological or operational changes
    made to the physical systems.

The Process Simulation and Modelling Tool is a tool developed in the context of Task 4.1, in order to address the process modelling and simulation requirements of the FACTLOG project. PSM Tool allows the user not only to create a process industry model but also to simulate the operation of such an industry. An Application Programming Interface has also been created, to assist the integration of Process Simulation Modelling module into the Digital Twins Platform of FACTLOG.

FACTLOG has five interesting and different industrial pilot cases (i.e., TUPRAS, JEMS1, CONT, BRC, and PIA). This document presents the outputs from the transformation of use cases, defined in WP1, into Process Models. A deep analysis of the pilot cases and the corresponding process models are provided.

1 JEMS pilot did not meet its objectives, especially with regards to the integration of the FACTLOG system to its plant since there is not yet an operative plant in Slovenia.

Download

This document is the technical report on WP4 Knowledge Graph and Process Modelling. It presents the FACTLOG ontology, developed based on BFO and IoF, and knowledge graph modelling based on OWL. Furthermore, it introduces a proposed shop floor case with its ontology and OWL models. Finally, reasoning and querying with SQWRL and SPARQL are presented to explain how to analyse the knowledge graph models using the defined rules.
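As a minimal illustration of querying such OWL-based models with SPARQL (here via rdflib; the ontology file name, namespace and property names are hypothetical placeholders, not the FACTLOG ontology itself):

```python
# Minimal sketch of a SPARQL query over an OWL-based knowledge graph with rdflib.
from rdflib import Graph

g = Graph()
g.parse("factlog_shopfloor.owl", format="xml")  # hypothetical export of the KG model

query = """
PREFIX fl: <http://example.org/factlog#>
SELECT ?machine ?process
WHERE {
    ?machine a fl:Machine ;
             fl:participatesIn ?process .
}
"""
for row in g.query(query):
    print(row.machine, row.process)
```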

Download

In the context of FACTLOG, a Process Modelling and Simulation methodology is developed. The implemented models are parametric, since the data used for these models fall into three main categories: static data, which refer to the structural characteristics of the industrial systems of the pilots; dynamic data from the pilots, which refer mainly to the products being processed in their systems and to the appearance of events that change system states (such as machine breakdowns); and dynamic data received from the associate services. In particular, the operation of the associate services (mainly optimisation, analytics and knowledge graphs) provides useful data that are used to define the scenarios under study (in the case of schedules) or to make the simulated scenarios much more realistic (as in the case of analytics that can detect patterns regarding the appearance of certain events in the system over the simulated time horizon). The described data from the associate services justify the cognitive characteristics of the models developed.

After the definition and description of the proposed methodology, prototypes of realistic process models of the pilot systems (realistic with respect to the types and quantity of data) are developed. The implemented models are used to simulate alternative scenarios, KPIs are calculated, and the results obtained are cross-validated with the results received from the optimisation service. From the values of the calculated KPIs, useful conclusions regarding the behaviour and the efficiency of the modelled industrial system are extracted. These KPIs are of general use, but pilot-specific KPIs can also be calculated with respect to the individual needs and available data.

The system cognitive process models developed in the context of FACTLOG have the following attributes:

  • They are fully parametric. This makes them easy to update, expandable and useful for studying a wide variety of scenarios regarding the complexity and the behaviour of the system under completely different situations.
  • They are dynamic in the sense that key flows, as well as key process control settings, are continuously updated in near real-time, in connection to a real-time monitoring system maintaining a digital Shadow of the physical System.
  • They are interconnected through APIs with the associate services. This makes them much more realistic as the data used can describe realistic behaviours of increased complexity and from many different points of view.

The Process Simulation and Modelling Tool is developed in the context of T4.1 and T4.3, to address the process modelling and simulation requirements of the FACTLOG project. PSM Tool allows the user not only to create a process industry model but also to simulate the operation of such an industrial system. An Application Programming Interface has also been created to assist the integration of the Process Simulation Modelling module into the Digital Twins Platform of FACTLOG.

Download

This document presents the work done regarding the design and development of knowledge graph (KG) operated cognitive services. The development work builds on initial services designed under D3.2 Data Analytics as a Cognitive Services and ontology-based KG designed in deliverable D4.2.

Initially, the technology background is presented, with a description of the services architecture, the conceptual design and the approach to integrating the KG into the cognitive API analytical workflow. The redesign and upgrade of the initial API are presented, as well as its functional design.

Furthermore, the ontology model used and the KG extraction methods are explained, including the use of domain-specific concepts for data analytics in the FACTLOG Ontology for cognitive API automation. The final solution is presented on two main use cases: using a predefined AI model, and an ontology-managed AI model pipeline (feature vectors and data pipeline setup).

In the final chapter, a short consolidation and interpretation of the results in the light of the project’s main objective is included. More importantly, the KG-based cognitive API makes it possible to use the initial analytical tools developed in FACTLOG in a more generic fashion. The results show how an advanced approach such as ontology-driven process definitions can enable new ways of utilizing analytical technologies for faster deployment, easier scalability and more effective maintainability.

Download

This deliverable reports on the specification and implementation of the State-of-the-art Optimization Methods of the FACTLOG project, reflecting the work performed in the context of the project Task 5.1 Robust Optimization Methods, and the outcomes thereof. Following the analysis of the pilot scenarios relevant to the needs for optimization, in parallel with the elaboration of the different use cases and system requirements, the progressive implementation of the different modules which are dependent on optimization as well as the ones which optimization depends on, this deliverable proceeds with describing the optimization methods as well as the initial version of the optimization toolkit of the FACTLOG ecosystem.

Starting from the toolkit itself, it is designed to be modular, expandable, and capable of being adapted and introduced in different cases, starting from the pilots themselves. It can solve short-, mid- and long-term production optimization problems and, depending on the different pilots (during and beyond the project's lifecycle) and their needs, it can address anything from the provision of optimal production schedules (e.g., BRC) to re-scheduling (e.g., PIA) and re-configuration of the settings of different production units (e.g., TUPRAS), taking into account all process and business constraints. The Optimization-as-a-Service module, named optEngine, consists of different layers (as presented in D1.3 Architecture and Technical Specification) and is responsible for the interconnection of the Optimization Module with the remaining FACTLOG modules (e.g., ECTs), enabling them to initiate an optimization round or receive the round's results.

Besides the development of the optimization engine, this Task had the goal of providing a theoretical approach to solving the different identified optimization problems of the pilots. Starting from the TUPRAS case, relevant to oil refineries and to the LPG production process, the role of optimization was to handle the recovery to on-specs LPG production in the most energy-efficient way. To do so, the optimization module identifies the most energy-efficient combination of operational scenarios for all process units involved in the LPG purification process and proposes the settings combinations to the shopfloor (or ECT), which takes the appropriate decision and action. The TUPRAS case was solved utilizing a MIP model, based on a typical flow and blending modelling approach, that also incorporates a binary decision variable for each operational scenario of each process unit. This approach enables the optimization engine to take into consideration all units involved in the process, something that also pushed the boundaries of research (as current solutions focus on single-unit optimization). Through experimentation we found that the behaviour of our proposed approach, given different time horizons for recovery, is suitable for real field application.
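As a toy illustration of the scenario-selection idea (one binary variable per operational scenario of each unit, minimizing energy subject to a recovery requirement), the sketch below uses PuLP; the units, scenarios and all numbers are invented and bear no relation to the real refinery model.

```python
# Toy MIP sketch: each process unit runs exactly one operational scenario
# (binary variables); minimize total energy subject to a purity requirement.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

units = {  # unit -> {scenario: (energy_cost, purity_gain)}  -- invented numbers
    "debutanizer": {"low": (10, 0.40), "high": (16, 0.70)},
    "splitter":    {"low": (8,  0.30), "high": (12, 0.55)},
}
required_purity_gain = 1.1

prob = LpProblem("lpg_recovery", LpMinimize)
x = {(u, s): LpVariable(f"x_{u}_{s}", cat="Binary") for u in units for s in units[u]}

prob += lpSum(units[u][s][0] * x[u, s] for u in units for s in units[u])  # energy objective
for u in units:                                                           # one scenario per unit
    prob += lpSum(x[u, s] for s in units[u]) == 1
prob += lpSum(units[u][s][1] * x[u, s] for u in units for s in units[u]) >= required_purity_gain

prob.solve()
print({(u, s): int(value(x[u, s])) for u in units for s in units[u]})
```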

Moving on from TUPRAS to the second pilot and the respective problem solved by the optimization module, we have the PIA case. In the PIACENZA case, a main goal, as in the modern textile industry overall, is to increase productivity while reducing production costs. The case has been challenging in terms of optimization, as it has the inherent properties of weaving scheduling (job splitting and sequence-dependent setup times) in parallel with additional setup constraints: the number of setups that can be performed simultaneously on different machines is restricted due to a limited number of setup workers, and the daily setup time is also bounded. The PIACENZA case was solved utilizing a mixed integer linear programming (MILP) formulation that captures the elaborate structure of the weaving process, extended by two combinatorial heuristics that differ in the way they perform job splitting and assignment to machines, in order to handle large real instances. Through experimentation we found that the solutions provide the best policies to balance makespan, number of tardy jobs and total tardiness over weekly instances.

The third case in the FACTLOG project that the Optimization Toolkit deals with is the CONTINENTAL case. This is a discrete automotive part manufacturing environment that is modelled as a 2-stage assembly flow shop with resource constraints. A key challenge is the integration of maintenance planning and scheduling with the scheduling of production orders at the production lines. To that end, an analytics module provides maintenance windows, and the goal is to schedule maintenance activities during periods that will have a minimum impact on the schedule in terms of makespan and tardiness. Another capability provided is dynamic re-scheduling to capture new urgent orders and unscheduled machine breakdowns. A rigorous Constraint Programming formulation is proposed for modelling and solving the problem. Preliminary results on benchmark data sets validate the applicability of the model and demonstrate the efficiency, effectiveness and scalability of the proposed CP approach.
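In the same spirit, the tiny sketch below schedules two production jobs and one maintenance activity on a single line with a constraint-programming solver (OR-Tools CP-SAT); the durations, horizon and maintenance window are invented for illustration only.

```python
# Tiny constraint-programming sketch: schedule two jobs and one maintenance
# activity on one line without overlap, minimizing the makespan.
from ortools.sat.python import cp_model

model = cp_model.CpModel()
horizon = 20
tasks = {"job_A": 5, "job_B": 7, "maintenance": 3}  # name -> duration (invented)

starts, ends, intervals = {}, {}, []
for name, dur in tasks.items():
    starts[name] = model.NewIntVar(0, horizon, f"start_{name}")
    ends[name] = model.NewIntVar(0, horizon, f"end_{name}")
    intervals.append(model.NewIntervalVar(starts[name], dur, ends[name], f"iv_{name}"))

model.AddNoOverlap(intervals)              # one task at a time on the line
model.Add(starts["maintenance"] >= 4)      # maintenance window opens at t=4 (assumed)
makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, [ends[n] for n in tasks])
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for name in tasks:
        print(name, solver.Value(starts[name]), "->", solver.Value(ends[name]))
```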

Lastly, the fourth case in the FACTLOG project that the Optimization Toolkit handles is the BRC steel production case. This is a multistage flowshop with parallel machines at each stage. The main challenge in this particular case was the lack of digitized information. That, combined with the inherent difficulty of needing cranes to unload/load machines, creates important bottlenecks in the production process. The goal of optimization in this case was to find the optimal production schedule in relation to the makespan or the number of tardy jobs. The BRC case was solved utilizing MILP for an extended flexible multistage flowshop problem with machine-dependent setup times. Preliminary experimentation showed that the MIP model can handle instances of medium size quite easily and can provide production policies that balance the criteria of minimum makespan and tardy jobs.

In addition to the optimization toolkit, we provide a novel approach for detecting variations in the process structure (phases) based on processing energy consumption data. These variations define the structure of a process and, as such, are important for the optimization of the process execution, as defined in our Cognitive Factory Model (reported in deliverable D3.1). Sensors for measuring energy consumption are very common in industrial processes. Since it is planned to install additional energy sensors in the BRC and PIACENZA pilots, these two will be used for the validation phase.
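As a simplified stand-in for this idea (not the D3.1 method itself), the sketch below flags candidate phase changes in an energy-consumption signal whenever the short-term mean departs markedly from the longer-term history; window sizes, thresholds and data are invented.

```python
# Simplified sketch of detecting process-phase changes from an energy signal
# by comparing a short recent window against a longer historical window.
import numpy as np

def detect_phase_changes(power, short=10, long=60, threshold=2.0):
    changes = []
    for t in range(long, len(power)):
        recent = power[t - short:t]
        history = power[t - long:t - short]
        # Flag a change when the recent mean drifts several std deviations away.
        if abs(recent.mean() - history.mean()) > threshold * (history.std() + 1e-9):
            changes.append(t)
    return changes

rng = np.random.default_rng(2)
power = np.concatenate([rng.normal(5, 0.2, 200),    # idle phase
                        rng.normal(12, 0.5, 200)])  # processing phase
print(detect_phase_changes(power)[:1])              # first detected change, near index 200
```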

That is, all sections related to optimisation document the research and innovation work on optimization (AUEB, UNIPI), while the last section brings in an indicative enhancement and interplay of the analytics, as presented in D3.1 (NISSA).

Download

This deliverable reports on the progress made in WP5 Robust Optimization Methods relating to the implementation of the state-of-the-art optimization methods of the FACTLOG project. It reflects the work performed in the context of the project Task 5.2 Robust Energy-aware production scheduling and T5.3 Resource-aware production planning, and the outcomes thereof. Following the implementation of the overall FACTLOG system and the overall project evolution, the algorithms and methods, as well as their implementation, have progressively evolved within the current tasks (in line with T5.4 Robust Optimization as a Service); as such, this deliverable proceeds with describing the updates to the optimization methods as well as the updated version of the optimization toolkit implementation.

Starting from the toolkit itself, as it was designed to be modular, expandable, and capable of being adapted and introduced in different cases, it has progressed to be able to handle all pilots' incoming data from the FACTLOG infrastructure and the pilots themselves. It solves the problems presented in D5.1 and follows the respective design and use cases. Optimization is initiated on an ad hoc basis by respective signals received from the FACTLOG ecosystem, and the output is returned accordingly.

Besides the actual development of the algorithms for the optimization engine, progress was made with respect to the proposed optimization approaches, algorithms and tools in the corresponding cases. Starting from the TUPRAS case, relevant to oil refineries and on-specs Liquefied Petroleum Gas (LPG) production, a mathematical programming approach to minimize the energy consumed for planning on-specs LPG production was developed and integrated into the Optimization toolkit. In this case, the problem is formulated as a Mixed Integer Linear Program (MILP) that integrates network flow and blending constraints in order to identify the most energy-efficient combination of configurations for all process units of the LPG purification process, achieving on-specs LPG production by calculating the optimal on-specs recovery plan. The MILP presented in D5.1 is further updated in this deliverable, leading to a crisper presentation of the LPG purification process. Additional work includes the examination of a Data Envelopment Analysis assessment for identifying the dominant operational scenarios and hence reducing the solution space, and the exploration of approaches such as Chance Constrained Programming and Interval Linear Programming to address uncertainty in the level of impurities in the input feed as well as in the input feed rate, and their incorporation in the proposed MILP approach. Lastly, the deliverable presents the concept of state-aware optimization, which also utilizes other technological services of FACTLOG, such as the corresponding Machine Learning and Simulation tools.

Moving on from TUPRAS to the second pilot and the respective problem solved by the optimization module, we have the PIACENZA case. In the PIACENZA case, there is three-fold progress: (a) the weaving scheduling problem, (b) solving the Parallel Machine Scheduling (PMS) problem on unrelated machines with sequence-dependent setup times, job splitting and resource constraints, and (c) the design of effective exact methods. Therefore, in this case we provide novel and effective lower bounds and a three-stage heuristic for the makespan minimization problem identified at the pilot case. Additionally, we have numerically evaluated the algorithms developed on benchmark instances, as well as on experiments based on real datasets. Additional experiments enabled us to provide important findings on the problem parameters as business consultation, thus deriving policies for unexpected events.

The third case in the FACTLOG project that the Optimization Toolkit handles is the BRC steel production case. This is a multistage flowshop with parallel machines at each stage. The main challenge in this particular case was the lack of digitized information and the large-scale size of the problem. That, combined with the inherent difficulty of needing cranes to load/unload machines, creates major bottlenecks in the production process. However, a significant advance was the incorporation of the cranes' movements and the imitation of the process as accurately as possible. The most challenging issue faced was the tracking of the starting and ending points of each job that needed to be moved. The goal of optimization in this case was to find the optimal production schedule in relation to the makespan or the total lateness of tardy jobs. The BRC case was solved utilizing Mixed-Integer Linear Programming (MILP) for an extended flexible multistage flowshop problem with machine-dependent setup times. Nevertheless, to capture the cranes' movement the MILP is transformed into a Mixed Integer Quadratic Problem, which can be linearized in future research. Preliminary experimentation showed that the MIP model can handle instances of medium size quite easily and can provide production policies that balance the criteria of minimum makespan and lateness. Finally, by finding lower bounds (LB) for the decision variables we obtained better computational times.

Lastly, the fourth case in the FACTLOG project that the Optimization Toolkit deals with is the CONTINENTAL case. This is a discrete automotive part manufacturing environment that is modelled as a 2-stage assembly flow shop with resource constraints. A key challenge is the integration of maintenance planning and scheduling with the scheduling of production orders at the production lines. To that end, an analytics module provides maintenance windows, and the goal is to schedule maintenance activities during periods that will have a minimum impact on the schedule in terms of makespan and tardiness. A rigorous Constraint Programming formulation is proposed for modelling and solving the problem. Results on synthetic and real data sets validate the applicability of the model and demonstrate the efficiency, effectiveness and scalability of the proposed Constraint Programming (CP) approach.

Download

The aim of WP6 is to implement the entire FACTLOG architecture and the system that integrates the enhanced cognitive twins and all tools and services over a cloud-based collaboration infrastructure, along the specifications reported in deliverable D1.3 “System Architecture and Technical Specifications” [3]. A key prerequisite in this direction is the development of a mediation middleware among the various components comprising the FACTLOG ecosystem, assigned with the interaction, coordination and orchestration of its components and operations. To address the above, FACTLOG proposes a data collection and integration framework offering the following functionalities:

  • Data acquisition: It provides for the integration of the infrastructure objects and data sources, through a unified solution for accessing information stored in or originating from heterogeneous systems.
  • Messaging and streaming: It facilitates both asynchronous and point-to-point message exchange between system components, while also supporting streaming, enabling on-the-fly and real-time processing of data as they arrive.
  • Digital twins as a single source of truth: Digital twins constitute the sole point of reference regarding the state and behaviour of considered manufacturing entities; in this sense, all related data are associated with the digital twins that represent the corresponding assets and are only retrieved from or through these digital twins, subject to their control.

In this setting, this deliverable presents the first version of the data collection and integration framework, reflecting the work performed in the context of the project Task 6.1 “Multi-modal Data Collection and Integration Framework”, and the outcome thereof. Closely following the analysis of the pilot cases and the associated requirements derived, as well as the development of the rest of FACTLOG components, services and tools, this deliverable describes the constituent modules and main functions of the proposed solution that render it the main integration enabler in FACTLOG platform.

Download

The aim of WP6 is to implement the entire FACTLOG architecture and the system that integrates the enhanced cognitive twins and all tools and services over a cloud-based collaboration infrastructure, along the specifications reported in deliverable D1.3 “System Architecture and Technical Specifications” [3]. A key prerequisite in this direction is the development of a mediation middleware among the various components comprising the FACTLOG ecosystem, assigned with the interaction, coordination and orchestration of its components and operations. To address the above, FACTLOG proposes a data collection and integration framework offering the following functionalities:

  • Data acquisition: It provides for the integration of the infrastructure objects and data sources, through a unified solution for accessing information stored in or originating from heterogeneous systems.
  • Messaging and streaming: It facilitates both asynchronous and point-to-point message exchange between system components, while also supporting streaming, thus enabling on-the-fly and real-time processing of data as they arrive.
  • Digital twins as a single source of truth: Digital twins constitute the sole point of reference regarding the state and behaviour of considered manufacturing entities; in this sense, all related data are associated with the digital twins that represent the corresponding assets and are only retrieved from or through these digital twins, subject to their control.

In this setting, this deliverable presents the final version of the data collection and integration framework, reflecting the work performed in the context of the project Task 6.1 “Multi-modal Data Collection and Integration Framework”, and the outcome thereof. Closely following the analysis of the pilot cases and the associated requirements derived, the development of the rest of FACTLOG components, services and tools, as well as the application of the framework to the pilot cases, this deliverable describes the constituent modules and main functions of the proposed solution that render it the main integration enabler in FACTLOG platform.

Download

FACTLOG is implemented as a sophisticated modular system, which comprises various data and functional components that interact with each other to provide the desired cognitive functionalities. Its goal is to realise the Enhanced Cognitive Twin (ECT) concept, which can be seen as the evolution of the digital twin in the big data era. As such, it provides cognitive capabilities to the digital twin, enabling learning from the vast streams of data that flow through it and thus the continuous modelling of the physical element’s behaviour. In other words, an ECT has all the characteristics of a digital twin, with the addition of artificial intelligence features enabling it to optimize operation as well as continuously test what-if scenarios, paving the way for predictive maintenance and an overall more flexible and efficient production, using the operation data stored in the digital twin throughout its lifecycle. The modularity of the system, as well as the broad scope of the manufacturing domain which the FACTLOG platform aims to support, requires a thoughtful integration plan along with per-module guidelines, in order to ensure that every component fulfils its role seamlessly.

The document explores the integration challenges faced in preparing the interim version of the FACTLOG platform, on a per-component basis, and presents in the form of guidelines the workflow and the configurations that need to be performed on each module in order to initialize a FACTLOG installation.

Download

FACTLOG proposes a generic technology pipeline, able to model all assets as digital twins and offer advanced cognition capabilities in support of demanding manufacturing processes. However, every factory has different operational contexts and needs. Therefore, this offering should be open, configurable per case and supported both by models and services that can be deployed and fit into the different industry needs.

In this direction, this document presents the main aspects of the instantiation of the first FACTLOG integrated prototype into the five pilots, each serving one of the following areas: waste-to-fuel transformation (JEMS)1, oil refineries (TUPRAS), textile industry (PIA), automotive manufacturing (CONT) and steel manufacturing (BRC).

A common approach has been followed for all pilots. A first important aspect concerns the definition of the data model for each of them; indeed, although common concepts exist, the specificities of each particular manufacturing domain have to be specifically addressed at this level. The development of the data model was based on a careful process of use case analysis and data description collection, accompanying the FACTLOG platform development in an agile fashion. The data-driven use case analysis delimited the data requirements from the pilot partners' data sources. Pilot partners contributed descriptions of their data models and sample data, which exemplify the implementation of their business processes from a data perspective. The process has evolved in parallel with the implementation of the services supporting the use cases. At the same time, the physical implementation of the data model of the project relies mainly on the pilot partners' data structures and the digital twins infrastructure developed for the goals of the project. In any case, pilot partners have provided the metadata required for real-time data provision, as well as for bulk extraction of historical data.

As a result of the above, the interaction of the system in each case, both internally and externally, is guided by the common data model defined. The pilots' data sources comply conceptually with that schema, although the provided information may need to be mediated in order to make all data provided by different sources fully aligned. As a result, the data schemas consumed by the cognitive processes are also aligned. This is also reflected in the parameters and the results of the numerous services offered by the system's modules.

The actual interconnection of the FACTLOG cognitive functionalities, i.e., knowledge graphs (KGs), process and simulation modelling (PSM), analytics and optimization, in each pilot instantiation is realized through the exposed services of each of the corresponding modules, converging through the digital twins framework in place and facilitated by the data collection and integration framework. A sufficient collection of services has already been defined, based on the process flows to be executed by the platform. For each service, the input parameters have been concretely defined to ease the utilization of the service by any technical partner in the implementation of any FACTLOG module. An initial version of the FACTLOG front-end has also been devised for each pilot, allowing users to interact with the platform at a basic level.

Last but not least, according to the evaluation methodology prescribed by the project, a set of test scenarios has been defined for each pilot. These closely follow the use cases identified in the early stages of the project, with the aim of being addressed by the FACTLOG system to produce valuable results for the pilots' business models. The setup of these test scenarios will validate the overall approach and will eventually yield the expected values for all identified KPIs.

1 JEMS pilot did not meet its objectives, especially with regards to the integration of the FACTLOG system to its plant since there is not yet an operative plant in Slovenia.

In the first half of the FACTLOG project, the main focus was the implementation of modules and applications to enable digital twinning functionalities within industrial contexts. To make this happen, the requirements, use cases and KPIs defined in WP1 guided the implementation and integration of these modules.

As the whole project is designed over two development cycles, the focus in this intermediate step was on the integration of significant pilot demonstrations, in order to collect initial results that could lead to improvements in subsequent developments.

Furthermore, it was considered necessary to focus this phase on the creation of methodologies that are as shared and standardised as possible in order to

  • Evaluate end-user feedback in a structured way
  • Evaluate the relevance of some KPIs defined at the beginning of the project (or even before its inception) against the concrete developments of the technical work and the progress of the State of the Art

This document reports the status of the pilots’ implementation after the first project cycle, and the methodologies and indicators used for their evaluation.

Being an interim report, this document focuses on the methodological aspects, which are also fundamental for the second cycle in order to have comparable techniques available, and on subjective aspects, such as the relevance of the functions enabled by the platform and aspects of usability of the interface.

To do this, workshops were organised involving the main stakeholders of the pilots to gather their impressions and guide the development of the technologies in the next steps.

This document is closely linked to deliverable D7.5, in which the results of the workshops are reported.

After a first part focused on methodological aspects, which are valid for this first cycle but will also form the basis for the final evaluation cycle, the document presents a brief overview of the situation of each pilot, both in terms of implementation and expected benefits.

The document will be updated with the results of the final evaluation in D7.4, planned for M40.

Download

This is the first version of the FACTLOG validation and impact assessment deliverable. The main plan is to build on the findings of D7.3, in which Key Success Factors (KSFs) were identified and Key Performance Indicators (KPIs) were proposed; the corresponding functionalities of the AI solution are currently in the development stage. These AI functionalities, combined into an integral product, will improve the production/business processes and thus have a positive effect on current industry challenges. As a result, process optimization should translate into better process efficiency and productivity, with direct and indirect financial impact on the overall success of the company. The aim is to present a KPI system that validates the planned financial impact on the overall success of the AI solution users (the FACTLOG pilots)1, using the results of the KPI set as measurable values that show the effectiveness of a company’s business objectives.

The document includes a business framework that prescribes the procedure for determining the KPI system, which is then combined with the cost and sales price of the developed AI solution and the required rate of return on investment that customers would achieve if they bought this AI solution on the market (done in combination with D8.8).

Some of the information needed for the final plan is not yet available and will be added to the next version(s) of this deliverable. Special focus will be given to the extent of the business impact to which the FACTLOG solution contributed towards more effective process re-configuration, better use of resources or reduction of waste, as well as stimulating the firms’ sustainability activities in general.

This document will be regularly updated with new information, details, new potential business collaboration initiatives and with the individual exploitation plans. Content to be adapted is marked in yellow.

Download

This document provides an overview of the initial version of the FACTLOG project website. This includes the structure of the website, as well as some screenshots showing the project’s visual identity and how it is used in the official FACTLOG webpage.

The work related to the website will continue throughout the project’s lifetime, with the publishing of new content along the way. The website will follow up on the activities carried out during the project implementation and will inform our target audience about the results achieved.

The FACTLOG page is available at www.factlog.eu. It is also linked with the Twitter page of the project, which can be found using “@Factlog_EU”.

Download

Dissemination and Communication play an extremely important role in the success of any project, so a clear plan needs to be defined for all the dissemination and communication activities to be carried out over the lifetime of the project.

This document (D8.2 Dissemination and Communication Plan) describes the general plan for disseminating the results of FACTLOG. It gives an overview of the strategy behind the dissemination and communication activities. Furthermore, it provides a roadmap for the upcoming actions, which will be reported in the Dissemination and Communication Activities report, to be released in 3 iterations (M12, M24 and M42, respectively).

The general idea of WP8 is to ensure that the results of FACTLOG reach a wide audience, including the scientific and industrial communities as well as the general public. To achieve this, a common set of dissemination materials and media will be made available, a specific strategy will be implemented and the active involvement of all project participants will be required.

In the first six months of the project, an initial set of communication material was created, such as the document and presentation templates, as well as other material including a poster, flyer, roll-up, banner, bookmark and stickers. The FACTLOG online presence was also completed with the design and development of social media channels on LinkedIn and Twitter, which complement the already launched project website: www.factlog.eu

The consortium also prepared an initial list of potential events to be targeted by FACTLOG and defined the set of KPIs to be fulfilled in order to maximize the impact of the project.

This document summarises the dissemination and communication activities carried out by the FACTLOG partners in the first year of the project. It includes the activities carried out on its social media channels (such as LinkedIn or Twitter) and the events targeted by the consortium for dissemination purposes. Due to the COVID-19 pandemic, several events were cancelled or moved online, which disrupted the initial plans of the consortium. This document also provides a list of publications made by FACTLOG partners in the first 12 months of the project, as well as the list of materials made available to support all communication and dissemination activities. Also included is the status – at month 12 – of the KPIs defined in the previously released D8.2 – Dissemination and Communication Plan.

Download

This is the first version of the FACTLOG business plan. The main plan is to develop a joint business plan in which all interested members of the FACTLOG consortium will be involved. This is why the document already includes the basic business model framework, the product, initial target markets, the pricing model, proposed business models, an initial market analysis and a detailed description of the intended organisation models being considered by the consortium. Some of the information needed for the final plan is not yet available and will be added to the next two versions of this deliverable. The missing data has been marked in yellow.

This document will be regularly updated with new information, details, new potential business collaboration initiatives and with the individual exploitation plans. Updates will be presented in the two subsequent deliverables, V2 in M24 and V3 in M36, at the end of the project.

This document summarises the dissemination and communication activities carried out by the FACTLOG partners in the second year of the project. The report includes the activities carried out by FACTLOG on its social media channels (such as LinkedIn or Twitter) and the events targeted by the consortium for dissemination purposes. Due to the COVID-19 pandemic, several events were cancelled or moved online, which disrupted the initial plans of the consortium. This document also provides a list of publications made by FACTLOG partners in the second 12 months of the project, as well as the list of materials made available to support all communication and dissemination activities. Also included is the current status of the KPIs defined in the previously released D8.2 – Dissemination and Communication Plan.

Download

Based on D8.3, an updated version of the standardisation activities report is presented, building on the relevant standards, including ISO 23247, ISO 42010 and the BFO ontology specification. In this new version, an ontology framework is proposed as input to D4.2 as a basis for developing the knowledge graph models.
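As a minimal, purely hypothetical sketch of how an asset might be expressed as knowledge-graph triples aligned to an upper-level ontology class (the namespaces, class names and asset identifiers below are assumptions, not the ontology framework delivered to D4.2; the example also assumes the third-party rdflib library):

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Placeholder namespaces; the actual FACTLOG ontology IRIs may differ.
FL = Namespace("http://example.org/factlog#")
BFO = Namespace("http://purl.obolibrary.org/obo/")

g = Graph()
g.bind("fl", FL)

# A hypothetical machine asset typed against an illustrative class hierarchy.
g.add((FL.Machine, RDFS.subClassOf, BFO.BFO_0000040))  # BFO "material entity"
g.add((FL["loom-07"], RDF.type, FL.Machine))
g.add((FL["loom-07"], RDFS.label, Literal("Weaving loom 07")))
g.add((FL["loom-07"], FL.locatedInPlant, FL["plant-A"]))

print(g.serialize(format="turtle"))
```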

Download

This deliverable, the first of a set of two, reports on the results of task 8.4 – Industrial Interest Group and Lifelong Learning – and is dedicated to reporting training material and clustering activities. It describes all the training and educational material that will be part of the project. Some of this material is already available, such as the FACTLOG overview video, which gives an overview of the project’s vision, and the workshops developed by the pilots. The planned materials are the responsibility of each of the six assets and their respective entities; that is, each will have to create training and educational content.

It also provides an overview of the workshops carried out by the pilots, more specifically their dates, the participants and other details of each workshop.

The last chapter of the deliverable describes the Clustering and Liaison Activities of the project. FACTLOG joined five other projects – CAPRI, COGNIPLANT, COGNITWIN, HyperCOG and INEVITABLE – and together they created SPIRE-06. “SPIRE-06” is (presently) an informal discussion group of the projects funded under the H2020 DT-SPIRE-06-2019 topic, focusing on digital technologies for improved performance in cognitive production plants. The cluster has already carried out some activities, such as:

  • SPIRE Clustering webinar Organizational meeting
  • Cluster Workshop: SPIRE Industries
  • Workshop: 6P Methodology

As far as liaison is concerned, FACTLOG has been integrated into IoT-Catalogue.com so that all the information coming from the project is available to everyone.

As this is the first deliverable of a total of two, there will be more content, workshops, training and education, which will also be placed on the website, in order to make all the material available to the public. At the moment, the FACTLOG information is not yet publicly available, but during the next few months, the information will be updated and validated by the consortium, so that everything can be made public.

Download

This document is designed to facilitate cooperation and management within the lifecycle of the FACTLOG project, by defining rules and standards for the day-to-day work. The objective is to ensure that all consortium partners have the same point of reference and a common understanding of methods and procedures, with emphasis on the contractual obligations towards the European Commission. If used with discipline, these guidelines will reduce project overhead, alleviate project management for all partners and increase efficiency and quality of the work carried out. It is thus imperative that all consortium partners are aware of this document, understand and use all rules and standards herein specified.

This document is in essence a monitoring loop running throughout the project lifetime, evaluating the quality of work and related deliverables and assessing internal and external risks. It defines, in accordance with the definitions and regulations in the Grant Agreement (and its Annexes) and the Consortium Agreement, strategies and methods to be adopted in order to ensure the quality of the project outcomes and the proper implementation of the risk management procedures. It provides useful insights concerning general guidelines and processes to be followed in terms of documents, software and project quality assurance and control.

This deliverable provides the first version of the Data Management Plan (DMP) of the FACTLOG project. It describes what kind of data will be generated or collected during the project implementation and how these data are then managed and published.

Such information could include the scientific publications issued, white papers published, open-source code generated, mock-up datasets used to support the development process, etc. The list of research data expected during the project consists of open-source software components, original research data and anonymous statistics. These datasets are expected to be collected during the validation and evaluation phase and are therefore subject to change, considering also the definition of the FACTLOG business models and sustainability plans.

The publishing platforms used are the project website, OwnCloud platform, Zenodo for long-term archiving (as suggested by the EC), and GitLab for open-source code. All these platforms can be accessed openly.

Download

This document is intended to inform the Commission about the project progress in the first 6 months. In particular, it includes the main findings and a summary of the work/activities performed in all work packages, the challenges encountered and the issues to be solved (if any).

This document is intended to inform the Commission about the project progress in the 6 months following the first periodic report (i.e. from M13 to M18). In particular, it includes the main findings and a summary of the work/activities performed in all work packages, the challenges encountered and issues (if any), including updates on the costs incurred and the resources used.

This document is intended to inform the Commission about the project progress in the period from M19 to M30. In particular, it includes the main findings and a summary of the work/activities performed in all work packages, the challenges encountered and issues (if any), including updates on the costs incurred and the resources used.