Building the First Digital Twin Prototype
Published: March 9, 2023
A call for solutions
It was in July 2014 that I got a call from my former supervisor, asking me to come to the boardroom, where an important meeting was taking place. During this meeting, our division's CFO expressed his concerns about the high spending on shipping expedites, caused directly by capacity constraints in our plants and within our supplier base. In fact, we were often hit by unforeseen events that tied up resources in non-value-adding tasks, hurting our bottom line. Purchasing was also complaining about the lack of visibility, since a global forecast that could enable better planning and sourcing of components across all regions and product lines was not available. We came to a conclusion: it was time to address this issue once and for all.
Having successfully implemented several strategic supply chain projects for our division (inbound process improvements, a B2B Supplier Collaboration Portal rollout covering 500+ suppliers, and the development and implementation of an Inventory Management System spanning 20+ plants, among others), I was asked to lead a project to solve this issue.
Our original idea was to build a digital backbone for 15+ ERPs (100 plants) that would allow us to link our division vertically (from Finished Goods down to Components/Raw Materials) and horizontally (across functional areas: Sales, Supply Chain, Finance, Purchasing, etc.), breaking up the existing silos so that an integrated business planning process could be performed. In other words, this new digital backbone would need to cascade a single consensus 5-year sales forecast through all levels of our bill of materials. This included all inter-company relationships (Tier 1, Tier 2, and Tier 3), so that forecasts for our sub-assemblies and components could be generated independently of their development stage (Serial Manufacturing, Product Development, or Planned Product). We would then be able to aggregate a global component volume forecast and compare it with internal and external available capacity. This end-to-end, detailed simulation model of an actual supply chain digitalizes the complete chain and uses real-time data and snapshots to forecast supply chain dynamics.
The result turned out to be our first version of an end-to-end (E2E) supply chain planning system, which many are now calling a supply chain digital twin. And here’s how we started our implementation.
Building the team
In order to start the project, it was critical to build a team. If we had followed the formal process, it would have taken too long to get started. So we chose people from multiple functions who could help address specific aspects of the project: a project lead from supply chain, someone from sales, and other members from IT, procurement, market planning, finance, etc.
Of course, back in 2014, the idea of E2E supply chain visibility was still quite distant, although for us supply chain practitioners it would eventually become a must-have. Back then, it was not until people understood the complexity within the supply chain that they came to realize this, so it was challenging to get everyone out of their silos and working together. Everyone had their own interests, and everyone was focused on their own separate goals.
In this project, which I will cover in more detail later, our role on the supply chain team was to make sure that every function was fulfilling its tasks. For example, we helped the sales and market planning departments confirm that they were providing a reliable forecast. On the engineering side, we tried to make sure that the bill of materials entered the ERP system as soon as possible. For purchasing, the task was to gather all the different information about capacity and purchasing data.
Each of these activities was important in finding the optimal process; every piece of information was key. We identified and assigned team members to the different functional areas within our project, providing them with a holistic view. This end-to-end view made them aware that everyone was contributing to a single source of truth, instead of building their own silos and creating their own independent solutions.
A detailed roadmap
The prototype took one and a half years (2014–2016) just to identify the fields and data within the phases and building blocks of the project: gathering data, then building and validating the market model based on actual sales, after which long-range forecasts and other parts could be connected (up to 2018). Overall, our roadmap can be classified into five main stages:
1. POC and Global Deployment of the System
Testing and validation of the multi-echelon Bill of Material (BOM) breakdown based on actual sales (Serial Products only): We took the volume of all finished-good products sold during a given month to simulate and test the model's accuracy. Since all KPIs such as Sales and Material Costs were already known, we obtained a rapid validation against available financial figures for a single month (our benchmark). A data warehouse with a web-based front end was also implemented, allowing users from all over the world to access and analyze the results. At the end of this stage, the system was able to handle the breakdown of over 500K BOMs. After a successful POC, we decided to roll the solution out globally within one product line and then expand it to others.
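The core of this stage, the multi-echelon BOM breakdown, can be sketched as a recursive explosion of finished-goods volumes through parent-child relationships. The part numbers and quantities below are hypothetical, and the real system read its BOMs from the ERP tables, but the cascading logic looks roughly like this:

```python
from collections import defaultdict

# Hypothetical single-level BOM: parent -> list of (child, qty per parent).
# The real data came from ERP BOM tables across many plants.
BOM = {
    "VEHICLE_A": [("ASSY_1", 2), ("ASSY_2", 1)],
    "ASSY_1": [("COMP_X", 4), ("COMP_Y", 1)],
    "ASSY_2": [("COMP_X", 2), ("RAW_Z", 3)],
}

def explode(part, qty, demand=None):
    """Recursively cascade a finished-goods volume down the multi-echelon
    BOM, accumulating gross requirements at every level."""
    if demand is None:
        demand = defaultdict(float)
    demand[part] += qty
    for child, qty_per in BOM.get(part, []):
        explode(child, qty * qty_per, demand)
    return demand

# Cascade one month of actual sales (1,000 units) through the BOM.
req = explode("VEHICLE_A", 1000)
# COMP_X is needed via both assemblies: 1000*2*4 + 1000*1*2 = 10,000 units.
```

Because actual sales and costs for the benchmark month were known, totals computed this way could be validated directly against the financial figures.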
2. Creating global component volume forecast
Building on stage 1, we increased the complexity by incorporating a 5-year sales forecast at the finished-good part-number level as input for the multi-echelon BOM breakdown. The biggest challenge was improving the performance of our BOM creation process, since we needed BOMs for all Serial Manufacturing, Product Development, and Planned Products. At the end of this stage, we were able to provide a component volume forecast consisting of 2 years of actuals plus a 5-year forecast, which also incorporated EDI call-offs as part of short-term demand. We ended up with a comprehensive data lake containing over 80 million data points coming out of 80+ manufacturing facilities. This data helped us visualize and understand the complexity of our multi-tier network footprint.
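Conceptually, producing the global component volume forecast is an aggregation of plant-level demand records into one figure per component and period. A minimal sketch, with entirely hypothetical plants and quantities standing in for the 80+ real facilities:

```python
from collections import defaultdict

# Hypothetical demand records: (plant, component, year, quantity).
# In the real system these mixed actuals, EDI call-offs, and the
# 5-year forecast coming out of the BOM breakdown.
records = [
    ("PLANT_DE", "COMP_X", 2016, 5000),
    ("PLANT_MX", "COMP_X", 2016, 3000),
    ("PLANT_DE", "COMP_X", 2017, 5500),
    ("PLANT_CN", "RAW_Z", 2016, 9000),
]

def global_forecast(rows):
    """Aggregate plant-level demand into a global (component, year) volume."""
    total = defaultdict(int)
    for plant, comp, year, qty in rows:
        total[(comp, year)] += qty
    return dict(total)

fc = global_forecast(records)
# COMP_X in 2016 combines the German and Mexican plants: 8,000 units.
```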
3. Balancing Global Demand vs. Global External Supply (outsourced)
Now that the complete global forecast was available, we focused on developing the global capacity footprint for external suppliers. Basically, we put all capacity verification agreements with suppliers into a global database. This information allowed us to cross-reference the capacity of resources (tools and production lines) and link it to components. Finally, we connected these components to our global forecast to see whether we would be able to fulfill demand. In other words, by including external capacity figures, we were able to obtain a constrained forecast showing when supply and demand become unbalanced at suppliers.
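The demand/supply balancing itself reduces to comparing forecast volumes against contracted capacity per component and period and flagging the shortfalls. A sketch with hypothetical figures:

```python
# Hypothetical yearly global demand vs. contracted supplier capacity per
# component; in the real system capacity came from the global database of
# capacity verification agreements.
demand   = {"COMP_X": {2016: 8000, 2017: 12000}}
capacity = {"COMP_X": {2016: 10000, 2017: 10000}}

def capacity_gaps(demand, capacity):
    """Return (component, year, shortfall) for every period where
    forecast demand exceeds the capacity suppliers have confirmed."""
    gaps = []
    for comp, years in demand.items():
        for year, qty in years.items():
            available = capacity.get(comp, {}).get(year, 0)
            if qty > available:
                gaps.append((comp, year, qty - available))
    return gaps

# 2017 is unbalanced: 12,000 demanded vs. 10,000 confirmed -> gap of 2,000.
gaps = capacity_gaps(demand, capacity)
```

Each flagged gap is one of the "unbalanced" exceptions the constrained forecast surfaces, so sourcing can act years before the shortfall materializes.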
4. Global Demand vs. Global Internal Supply (In-house produced)
As in stage 3, this step was built upon ERP data (routings and capacity) to compare our global forecast against line capacity in our facilities. The objective was to identify misalignments between demand and supply on production lines running in our manufacturing facilities, so that these exceptions could be made visible and immediately addressed by planners.
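For internal capacity, routings translate component demand into load on a production line, which is then compared with the line's available hours. The line names, cycle times, and hours below are hypothetical; the real figures came from ERP routing and capacity data:

```python
# Hypothetical routing data: which line a component runs on and the
# hours of line time needed per unit, plus available hours per period.
routings = {"COMP_X": ("LINE_7", 0.05)}   # (production line, hours/unit)
line_hours = {"LINE_7": 500}              # available line hours per period

def line_utilization(demand, routings, line_hours):
    """Load each line with its routed demand and report utilization;
    anything above 1.0 is an exception for the planners."""
    load = {}
    for comp, qty in demand.items():
        line, hrs_per_unit = routings[comp]
        load[line] = load.get(line, 0.0) + qty * hrs_per_unit
    return {line: load.get(line, 0.0) / hours
            for line, hours in line_hours.items()}

# 12,000 units * 0.05 h = 600 h against 500 h available: 120% utilization,
# so LINE_7 would be flagged as overloaded for this period.
util = line_utilization({"COMP_X": 12000}, routings, line_hours)
```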
5. Adding Prices and Costs to the model
Finally, we added pricing and material costs for each SKU to our complex multi-echelon digitized network. This information was pulled out of our multiple ERP systems in local currency, so that the effects of foreign exchange exposure or commodity inflation could be quantified and, if necessary, addressed. Gaining visibility into the cost structure from saleable products down to raw materials was an important feature, which made the tool more attractive to our finance community.
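Keeping costs in local currency is what makes the FX analysis possible: translating the same cost base under two rate sets quantifies the exposure per SKU. A minimal sketch, with hypothetical SKUs, rates, and EUR assumed as the reporting currency:

```python
# Hypothetical SKU material costs in local currency, pulled from regional
# ERPs, and FX rates to the group reporting currency (EUR assumed here).
costs = [("COMP_X", 12.50, "USD"), ("RAW_Z", 80.0, "MXN")]
fx = {"USD": 0.90, "MXN": 0.045, "EUR": 1.0}

def cost_in_group_currency(costs, fx):
    """Translate local-currency material costs into the reporting
    currency so scenarios with different rate sets can be compared."""
    return {sku: round(amount * fx[ccy], 4) for sku, amount, ccy in costs}

base = cost_in_group_currency(costs, fx)
# Re-running with a shifted rate set (e.g. a weaker MXN) and diffing the
# two results gives the FX impact per SKU across the whole network.
```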
Having completed the project, we came to several realizations:
- A tremendous learning curve: Our team profited tremendously from this project. The amount of knowledge generated by building a system to improve E2E supply chain visibility was a massive gain for every one of us. Even though the task of linking OEMs' vehicles to saleable products and all the way down to raw materials was challenging, it was 100% worth it.
- The transcendent potential of the system: Although the system is still in use, we realized that our small team could not incorporate new technologies at the pace required. Improvements such as in-memory databases to reduce calculation time for simulation purposes (top-down/bottom-up), cloud computing, the deployment of IoT devices, and AI/ML integration were undertakings we could not cover to the extent required by our organization.
- The importance of C-suite and stakeholder engagement: It is difficult to break up silos in a company without the full support of C-level leadership. To fully engage every party (all functional areas), we needed to clearly communicate the benefits of the E2E planning approach to each of them, and to fulfill their expectations by delivering and deploying the system on time and in full, maintaining momentum and interest as roadmap milestones were achieved.
My vision for the future of E2E planning
I believe that IBP platforms will add more capabilities and disruptive technologies (e.g., blockchain, ML/AI, edge computing, IoT) to help their customers maximize profit and reduce costs. These companies will become technology partners supporting OEMs and Tier 1-n suppliers by providing an ecosystem where strategic partners can collaborate, reduce waste, and boost their performance. In the future, competition will go beyond company boundaries and instead take place among ecosystems and networks.
Using AI/ML to improve forecasting accuracy, as well as leveraging unstructured data for better prediction, will improve the complete E2E planning process. Moreover, I believe companies will increase their use of edge computing and IoT devices to perform calculations on the spot and get results right away without compromising data security, benefiting from standardization in machine-to-machine (M2M) collaboration based on standard protocols (e.g., OPC UA, which ensures connectivity, interoperability, security, and reliability of industrial automation devices and systems).
Finally, the introduction of augmented/immersive visualization techniques to improve human–machine interfaces (HMI) and using guided analytics to help domain/process experts explore areas that are currently reserved only for data scientists are some of the changes I would expect to arrive soon.