Every factory has different operational contexts and needs. In FACTLOG, we propose a generic technology pipeline to model all assets as digital twins and offer cognition capabilities (the ability to understand, reason and act through optimization). This offering should be open, configurable per case, and supported by models and services that can be deployed to fit different industry needs.

This deliverable bridges our project vision and approach with the real needs of the industry. The overall approach was to work in two inter-related directions: first, to identify “what the offer is”, i.e. to create an operational model of FACTLOG, explain how it works and mention indicative problems and challenges that we can address; second, to talk to the users (industry) to understand their real challenges and needs.

In D1.1, we present the results of those two approaches and describe the overall FACTLOG operational model, in which all its enablers (cognition, analytics, optimization, etc.) are placed in an integrated way. We also detail the user needs and how the FACTLOG operational model addresses them.

The operational model has a generic flow pattern: it starts with the concept of modelling any asset/system/process in the industry as a network of inter-related Digital Twins (DTs). These DTs interact with the physical assets bidirectionally, both collecting and sending data. In the operational phase, data streams are collected from different information sources using a messaging service. Using a reasoning engine (combining data-driven and model-driven approaches), we can identify patterns of behaviour and potential shortfalls. Simulation and forecasting can propagate the behaviour of the system into the near future and assess the impact of an identified anomaly. Finally, using robust optimization methods, we can improve decision-making (planning, scheduling, auto-configuration, etc.).
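The detect-then-project steps of this flow can be illustrated with a minimal, hypothetical sketch (the function names `detect_anomalies` and `forecast` are illustrative, not FACTLOG components): a rolling-window deviation check stands in for the reasoning engine, and a naive extrapolation stands in for simulation/forecasting.

```python
from statistics import mean, stdev

def detect_anomalies(stream, window=5, k=3.0):
    """Flag points deviating more than k sigma from the preceding window
    (a stand-in for the data-driven part of the reasoning engine)."""
    anomalies = []
    for i in range(window, len(stream)):
        ref = stream[i - window:i]
        mu = mean(ref)
        sigma = stdev(ref)
        dev = abs(stream[i] - mu)
        if dev > k * sigma and dev > 0:
            anomalies.append(i)
    return anomalies

def forecast(stream, horizon=3):
    """Naive linear extrapolation of the last two points (a stand-in
    for propagating system behaviour into the near future)."""
    slope = stream[-1] - stream[-2]
    return [stream[-1] + slope * (h + 1) for h in range(horizon)]
```

In a real deployment the detection model would be learned from historical process data and the forecast produced by a process simulation, but the control flow (detect, then project forward, then decide) is the same.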

FACTLOG has five distinct and interesting industrial cases. Most of them share needs for predictive maintenance, anomaly detection and mitigation, energy monitoring, scheduling and optimal machine operating status. We have identified the needs and reference scenarios (information flow and actors in line with the FACTLOG operational model) that will help define the system specifications, boundaries and pilot particularities for deployment. As a next step, we will conduct workshops with external industrial players to confirm the operational model and identify further scenarios, giving us a more comprehensive picture of the market perspectives of our solution. This was planned for the pilot-analysis period, but the COVID-19 outbreak prevented us from organizing such workshops. We expect to hold them in the coming months and to submit an updated version of this deliverable with the findings.

Through a holistic requirements elicitation approach, we expect these reference scenarios to be further detailed in the next step: the definition of the use cases and functional/non-functional requirements.


The main goal of this deliverable is to provide the foundation for the realization of the cognition-driven solutions in the pilots. Since such systems are novel, this deliverable also explains some basic concepts, such as cognition and the cognition process. A metaphor of human cognition, taken from psychology, proves a very suitable basis for developing the concept for an efficient design of cognition-driven industrial systems, which we call the Cognitive Factory Framework (CFF).

The main advantage of the CFF is that it brings together two perspectives. One is the way in which human cognition deals with new information and situations, especially in the case of unknowns (where it is not known how to react based on existing models and past data). The other is the industry-process perspective: how a process behaves under variations (internal or external), i.e. how (un)stable the process performance indicators (KPIs) are in such situations. The main goal of the CFF is to enable industry processes to deal with variations efficiently, based on the analogy to human cognition. More precisely, we define a cognition process consisting of four basic phases:

1. Detect variations

2. Understand root causes of variations

3. Understand the impact of variations

4. Find optimal reaction

We also define the roles of four basic technologies (data analytics, knowledge graphs, process modelling and simulation, and optimization) in these phases, illustrating that the envisioned cognition-driven processing is feasible.

In addition, we provide an in-depth analysis of all pilots regarding the realization of the Cognitive Factory Framework, demonstrating that the CFF is general enough to be applied in various use cases and scenarios.

This deliverable will serve as a guideline for the realization of the specific components in other WPs, which, combined (based on the architecture described in D1.3), will provide the desired cognition-driven solutions in all pilots.


This deliverable reports on the specification of the FACTLOG architecture, reflecting the work performed in the context of Tasks 1.3 “Requirements analysis” and 1.4 “Architecture definition”, and the outcomes thereof. Following the analysis of the pilot scenarios identified, as well as the elaboration of use cases and system requirements, this deliverable proceeds with describing the components and main functions of the FACTLOG ecosystem.

To a great extent, the FACTLOG architecture is centred around the concept of Digital Twins (DTs). In the context of Cyber-Physical Systems (CPS), a digital twin of a production system can make the system easily and quickly reconfigurable: reconfiguration scenarios can be realised and tested in a simulation environment, so recommissioning of the production system takes less time and system availability is higher. FACTLOG extends this concept with the Enhanced Cognitive Twin (ECT), which can be seen as the evolution of the digital twin in the big data era. The ECT provides cognitive capabilities to the digital twin, enabling learning from the vast streams of data that flow through it and thus continuous modelling of the physical element's behaviour. In other words, an ECT has all the characteristics of a digital twin, with the addition of artificial intelligence features that enable it to optimize operation and continuously test what-if scenarios, paving the way for predictive maintenance and an overall more flexible and efficient production using the operation data stored in the digital twin throughout its lifecycle. Furthermore, an intelligent cyber layer expands the digital twin with self-x capabilities such as self-learning or self-healing, facilitating its internal data management as well as its autonomous communication with other digital twins.

To achieve the above, a digital twin platform is put in place as the focal point of access to the operational context of all manufacturing entities (MEs) in FACTLOG, facilitating the realisation of the three main aspects characterising digital twins: synchronisation with the physical asset, active data acquisition from the production environment, and the ability to run simulations through the appropriate interfaces. On this basis, a number of cognitive services cooperate to turn digital twins into ECTs.

The analytics modules hold the critical role of extracting knowledge from the data ingested into FACTLOG. This may take various forms, beginning with the creation of data-driven models based on anomalies and variations detected in past data. They may also perform predictions which, combined with the above models, help identify situations of interest and thus facilitate variation understanding and root-cause establishment in the cognition cycle. Certain analytics tools are also in a position to perform simulations based on industry data, which can in turn be exploited to indicate potential actions to improve the situation (insights).

However, data is not the only source of knowledge in FACTLOG; the solution also incorporates human domain expertise, formally expressed by means of two frameworks: the Knowledge Graph Model (KGM) and the Process & Simulation Model (PSM). The former formalises the more static aspects of FACTLOG deployments, providing a generic ontology representation and description of all FACTLOG mechanisms and tools, as well as of the specific MEs, thereby guiding their consistent representation through digital twins. The latter models the dynamic behaviour of each considered manufacturing process, be it continuous or discrete. Both the KGM and the PSM are invaluable in putting the results generated by other components in context, as well as in verifying and evaluating them, while they themselves are refined through their interactions with other modules, thus exploiting newly acquired knowledge towards the continuous evolution of FACTLOG's cognition capabilities.

FACTLOG also contains a dedicated module that solves well-defined short-, mid- and long-term production optimization problems. Depending on the particular production setting, it can address a variety of optimization issues, ranging from production schedules for approval at the shopfloor to re-scheduling and re-configuration of machines or entire processes, taking a variety of constraints into account. The optimization module uses appropriate analytics outputs as inputs to perform its functions, guided throughout by the KGM and PSM. This is a good example of how new knowledge extracted from the data benefits the cognition process: because the analytics results are directly incorporated into the system and its models, the refined behaviour models allow the digital twin to incrementally improve its behaviour and features, and thus to steadily improve the optimization assistance it provides.
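To make the scheduling class of problems concrete, here is a minimal sketch of one classic approach: the longest-processing-time-first heuristic for assigning jobs to parallel machines so as to keep the makespan low. This is an illustrative textbook heuristic, not the FACTLOG optimization module itself, and the job durations would in practice come from analytics predictions rather than fixed numbers.

```python
import heapq

def schedule_jobs(durations, n_machines):
    """Longest-processing-time-first heuristic: sort jobs by decreasing
    duration, then assign each to the machine that currently finishes
    earliest. Returns (makespan, job -> machine assignment)."""
    machines = [(0, m) for m in range(n_machines)]  # (current load, machine id)
    heapq.heapify(machines)
    assignment = {}
    for job, dur in sorted(enumerate(durations), key=lambda jd: -jd[1]):
        load, m = heapq.heappop(machines)   # least-loaded machine so far
        assignment[job] = m
        heapq.heappush(machines, (load + dur, m))
    makespan = max(load for load, _ in machines)
    return makespan, assignment
```

Real production scheduling adds precedence, setup and resource constraints and is typically handled with mathematical programming or metaheuristics, but the sketch shows the basic input/output shape of such a solver.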

Finally, to provide for effective interaction, coordination and orchestration of FACTLOG components and operations, the architecture includes the Message and Service Bus (MSB), a mediation middleware between the components of the FACTLOG ecosystem. The MSB comprises a messaging and streaming system providing asynchronous and point-to-point message exchange between the system entities, circulation of events, and delivery of data in a real-time or batch-oriented fashion. Furthermore, the MSB supports integration of the production infrastructure objects and data sources by means of appropriate connectors. The MSB also incorporates functionality for orchestrating FACTLOG components and operations as regards the execution of dataflows.

This document describes in detail all the requirements for the analytics system to be set up and used in the FACTLOG project. The requirements are collected and presented on a per-pilot basis, considering the types and volumes of data available for each pilot as well as the target problems. Finally, the design specifications for the analytics system are described, from both the conceptual and the technical standpoint.

Each pilot use case has a set of target problem scenarios it wants to address with technology from the FACTLOG platform. This deliverable presents these scenarios for each pilot and identifies the role of analytics in them. Once each scenario is formulated as an analytics problem, the methodology for addressing it is identified. The data sources and types are also inspected, and it is assessed whether they are appropriate for the planned approach. An aggregated overview of the requirements is given for clarity.

Based on the requirements, a design specification for the analytics system is drafted. First, the analytics system is placed in relation to the other components: its role as a building block of the Cognitive Factory Framework is explained, as well as its interactions with the optimisation system, the knowledge graph and the process models. Then, a set of tools (i.e. analytics libraries and platforms) is identified, along with the methods and approaches that address the requirements collected in the preceding sections of the document.

The deliverable is a comprehensive collection of requirements for the analytics system and the specification of its conceptual and technical design. It is built on the information and insights collected regarding the pilots and the project challenges up to the time of its preparation. The requirements and specifications may evolve as the project progresses and any adaptations will be reported in future deliverables.


This document provides an overview of the initial version of the FACTLOG project website. It covers the structure of the website and includes screenshots showing the project's visual identity and how it is used on the official FACTLOG webpage.

Work on the website will continue throughout the project's lifetime, with new content published along the way. The website will follow up on the activities carried out during the project implementation and inform our target audience about the results achieved.

The FACTLOG page is available online. It is also linked with the project's Twitter page, which can be found at “@Factlog_EU”.


Dissemination and communication play an extremely important role in the success of any project, so a clear plan is needed for all the dissemination and communication activities to be carried out over the lifetime of the project.

This document (D8.2 Dissemination and Communication Plan) describes the general plan for disseminating the results of FACTLOG. It gives an overview of the strategy behind the dissemination and communication activities. Furthermore, it provides a roadmap for the upcoming actions, which will be reported in the Dissemination and Communication Activities report, to be released in three iterations (M12, M24 and M42, respectively).

The general idea of WP8 is to ensure that the results of FACTLOG reach a wide audience, including the scientific and industrial communities as well as the general public. To achieve this, a common set of dissemination materials and media will be made available, a specific strategy will be implemented, and the active involvement of all project participants will be required.

In the first six months of the project, an initial set of communication material was created, such as the document and presentation templates, as well as other materials including a poster, flyer, roll-up, banner, bookmark and stickers. The FACTLOG online presence was also completed with the design and launch of social media channels on LinkedIn and Twitter, complementing the already launched project website.

The consortium also prepared an initial list of potential events to be targeted by FACTLOG and defined the set of KPIs to be fulfilled in order to maximize the impact of the project.

This document reports on the dissemination and communication activities carried out by FACTLOG partners in the first year of the project. These include the activities on its social media channels (such as LinkedIn and Twitter) as well as the events targeted by the consortium for dissemination purposes. Due to the COVID-19 pandemic, several events were cancelled or moved online only, which disrupted the consortium's initial plans. This document also provides a list of publications made by FACTLOG partners in the first 12 months of the project, along with the list of materials made available to support all communication and dissemination activities. Also included is the status, at month 12, of the KPIs defined in the previously released D8.2 Dissemination and Communication Plan.


This is the first version of the FACTLOG business plan. The main aim is to develop a joint business plan involving all interested members of the FACTLOG consortium. For this reason, the document already includes the basic business model framework, the product, initial target markets, the pricing model, proposed business models, an initial market analysis, and a detailed description of the organisation models being considered by the consortium. Some of the information needed for the final plan is not yet available and will be added in the next two versions of this deliverable. The missing data has been marked in yellow.

This document will be regularly updated with new information, details, new potential business collaboration initiatives and the individual exploitation plans. Updates will be presented in two subsequent deliverables: V2 in M24 and V3 in M36, at the end of the project.

This document is designed to facilitate cooperation and management within the lifecycle of the FACTLOG project, by defining rules and standards for the day-to-day work. The objective is to ensure that all consortium partners have the same point of reference and a common understanding of methods and procedures, with emphasis on the contractual obligations towards the European Commission. If used with discipline, these guidelines will reduce project overhead, alleviate project management for all partners and increase efficiency and quality of the work carried out. It is thus imperative that all consortium partners are aware of this document, understand and use all rules and standards herein specified.

This document is in essence a monitoring loop running throughout the project lifetime, evaluating the quality of work and related deliverables and assessing internal and external risks. It defines, in accordance with the definitions and regulations in the Grant Agreement (and its Annexes) and the Consortium Agreement, strategies and methods to be adopted in order to ensure the quality of the project outcomes and the proper implementation of the risk management procedures. It provides useful insights concerning general guidelines and processes to be followed in terms of documents, software and project quality assurance and control.

This deliverable provides the first version of the Data Management Plan (DMP) of the FACTLOG project. It describes what kind of data will be generated or collected during the project implementation and how these data are then managed and published.

Such information includes the scientific publications issued, white papers published, open-source code generated, mock-up datasets used to support the development process, etc. The research data expected during the project consist of open-source software components, original research data and anonymous statistics. These datasets are expected to be collected during the validation and evaluation phase and are therefore subject to change, considering also the definition of the FACTLOG business models and sustainability plans.

The publishing platforms used are the project website, OwnCloud platform, Zenodo for long-term archiving (as suggested by the EC), and GitLab for open-source code. All these platforms can be accessed openly.


This document is intended to inform the Commission about the project's progress in the first six months. In particular, it includes the main findings and a summary of the work and activities performed in all work packages, the challenges encountered and issues to be solved (if any).