Although most organizations recognize the benefits of AI, only a few succeed in implementing AI-based solutions. This article outlines the steps organizations should take to implement an advanced-analytics-based supply chain project successfully.
I will use the problem of New Product Introduction (NPI) forecasting as an example to demonstrate the lifecycle of a successful AI implementation.
New Product Introduction (NPI) forecasting aims to predict customer demand months, sometimes years, before the product's launch into the market. Achieving high accuracy in NPI forecasts can significantly improve production cycles and have a lasting impact on the organization's bottom line.
Below are some of the critical steps that can help organizations develop better AI solutions for their supply chain management (SCM) problems:
Accurately defining the problem:
The primary reason so many AI and machine learning projects fail is the lack of a well-formed hypothesis. Often, AI engineers and analytics practitioners don't speak the same language as supply chain experts. Spending quality time with their analytics counterparts can help supply chain teams translate their crucial pain points into precise problem statements. In the case of NPI forecasting, the problem statement could be: "Our organization is struggling to accurately forecast total sell-through demand. We want to build an AI model that can accurately forecast demand for the first six months after the product launch."
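A well-formed problem statement should come with a measurable success criterion. A minimal sketch, using mean absolute percentage error (MAPE) as one common accuracy metric for the six-month post-launch window (the unit figures are illustrative, not real data):

```python
# Hypothetical sketch: turning the NPI problem statement into a measurable
# target. Accuracy over the first six months post-launch is scored here with
# mean absolute percentage error (MAPE).

def mape(actual, forecast):
    """Mean absolute percentage error over paired monthly values."""
    assert len(actual) == len(forecast) and len(actual) > 0
    return 100.0 * sum(
        abs(a - f) / a for a, f in zip(actual, forecast)
    ) / len(actual)

# Illustrative sell-through units for the first six months post-launch.
actual_units = [1200, 1500, 1400, 1300, 1250, 1100]
forecast_units = [1000, 1400, 1500, 1200, 1300, 1000]

print(f"6-month MAPE: {mape(actual_units, forecast_units):.1f}%")
```

Choosing the metric up front keeps the supply chain and analytics teams aligned on what "accurately forecast" means.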
Identifying suitable data sources:
Once the problem statement is clearly defined, data engineers, data scientists, and SCM experts should identify the institutional data sources that can help them precisely model sell-through behavior. It is often wise to cast a wide net to capture all the data sources that might help with NPI forecasting. Once the relevant sources are identified, the teams can collectively prioritize the data sets with the best data quality. Problems such as demand and NPI forecasting might also require external vendor data such as competitor pricing, promotions, and product launches.
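In practice, the prioritized internal and vendor sources must be joined into a single modeling table. A minimal sketch, with hypothetical field names and an inner join keyed on month:

```python
# Illustrative sketch (all names and values are hypothetical): combining an
# internal sell-through table with external vendor data such as competitor
# pricing, keyed by month.

internal_sell_through = {
    "2023-01": {"units": 1200, "promo": True},
    "2023-02": {"units": 1500, "promo": False},
}

vendor_competitor_pricing = {
    "2023-01": {"competitor_price": 499.0},
    "2023-02": {"competitor_price": 479.0},
    "2023-03": {"competitor_price": 479.0},  # no matching internal row yet
}

def join_sources(internal, external):
    """Inner-join the two sources on month; drop months missing from either."""
    joined = {}
    for month, row in internal.items():
        if month in external:
            joined[month] = {**row, **external[month]}
    return joined

table = join_sources(internal_sell_through, vendor_competitor_pricing)
print(table)
```

An inner join is a deliberate choice here: months present in only one source are excluded rather than half-filled, which surfaces coverage gaps early.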
Data quality and pre-processing:
Garbage in, garbage out. This is particularly true for AI models: their quality depends, to a large extent, on the quality of the data fed into them. Erroneous, incomplete, and non-representative data are some of the typical challenges organizations face. Gaps in the data are especially troublesome for demand forecasting, and organizations often try to fix the issue by buying data from external vendors. It is essential for companies to invest time in fixing data quality before rushing to modeling.
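One common, simple fix for gaps in a demand series is to carry the last observed value forward. A minimal sketch (the series and its gaps are illustrative):

```python
# Illustrative sketch of one basic pre-processing step: filling gaps in a
# monthly demand series by carrying the last observed value forward.

def forward_fill(series):
    """Replace None gaps with the most recent observed value."""
    filled, last = [], None
    for value in series:
        if value is None:
            value = last
        filled.append(value)
        last = value
    return filled

monthly_demand = [1200, None, 1400, None, None, 1100]
print(forward_fill(monthly_demand))  # -> [1200, 1200, 1400, 1400, 1400, 1100]
```

Forward filling is only one of several imputation strategies; interpolation or model-based imputation may suit a given series better, and a leading gap (no prior value) remains unfilled in this sketch.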
Model development:
Modeling involves carefully choosing the right set of machine learning algorithms. Data scientists experiment with various data sources, transforming them and constructing the features that best explain the variability in the data. Advances in AI technology, especially in cloud computing and deep neural networks, have enabled organizations to develop state-of-the-art solutions relatively cheaply. This means that organizations can now leverage the power of algorithms such as Seq2Seq (sequence-to-sequence) models and auto-encoders to generate forecasts. It is also important to realize that every AI algorithm rests on specific mathematical assumptions, and the data must be prepared to satisfy them.
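One example of preparing data to fit an algorithm's assumptions is constructing lag features: most regressors, and the encoder side of a Seq2Seq model, expect fixed-length input windows rather than a raw, variable-length history. A minimal sketch with an illustrative window size:

```python
# Illustrative sketch: turning a demand history into fixed-length lag-feature
# windows. The window size (n_lags) and the demand values are examples.

def make_lag_features(series, n_lags):
    """Turn a series into (features, target) pairs of the last n_lags values."""
    rows = []
    for i in range(n_lags, len(series)):
        features = series[i - n_lags:i]
        target = series[i]
        rows.append((features, target))
    return rows

demand = [100, 120, 130, 125, 140, 150]
for features, target in make_lag_features(demand, n_lags=3):
    print(features, "->", target)
```

Each row pairs the three most recent observations with the value that followed them, which is the shape most supervised learners expect.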
Explaining a model's performance is as important as developing a good model. SCM leaders often want to use the model to take preemptive actions, so a model with good explainability and slightly lower predictive power should be preferred over an opaque model with higher accuracy.
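To illustrate what explainability buys you: a linear model's forecast decomposes into per-feature contributions a planner can act on. A hypothetical sketch (the coefficients, feature names, and values are invented for illustration, not fitted to any data):

```python
# Hypothetical linear model: each forecast splits into additive, per-feature
# contributions. All numbers and names below are illustrative.

coefficients = {"price_gap": -3.0, "promo_spend": 0.8, "launch_buzz": 2.5}
baseline = 1000.0  # intercept: expected monthly units with all features at 0

def explain_forecast(features):
    """Return the forecast and each feature's additive contribution."""
    contributions = {
        name: coefficients[name] * value for name, value in features.items()
    }
    forecast = baseline + sum(contributions.values())
    return forecast, contributions

forecast, parts = explain_forecast(
    {"price_gap": 20.0, "promo_spend": 150.0, "launch_buzz": 40.0}
)
print(forecast)  # baseline plus the signed contributions
print(parts)
```

A deep network may score better, but it cannot hand a planner a breakdown like this without additional explainability tooling.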
Model deployment and inference:
The end goal of most AI-enabled SCM solutions is to provide seamless inferences. Luckily, all major cloud service providers offer machine learning and AI services that give organizations rapid model-deployment capabilities. This doesn't mean that AI predictions are to be consumed autonomously; on the contrary, several successful projects follow a human-in-the-loop (HITL) approach toward AI. HITL provides a framework for decision-makers to evaluate AI decisions and provide continuous feedback to the machines. Consider a situation in our NPI example where the model cannot pick up on a market-changing event such as Covid-19. Human interpreters can run different scenarios and correct the algorithms if necessary.
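A HITL deployment can be sketched as a routing rule: forecasts with low model confidence, or those coinciding with a flagged market event, go to a planner for review instead of being consumed automatically. The threshold, field names, and labels below are hypothetical:

```python
# Hypothetical human-in-the-loop gate for forecast consumption. The
# confidence threshold and routing labels are illustrative choices.

REVIEW_THRESHOLD = 0.7

def route_forecast(forecast_units, confidence, market_event_flag=False):
    """Return 'auto' to consume directly, 'human_review' otherwise."""
    if market_event_flag or confidence < REVIEW_THRESHOLD:
        return "human_review"
    return "auto"

print(route_forecast(1400, confidence=0.9))   # confident, no event -> auto
print(route_forecast(1400, confidence=0.5))   # low confidence -> review
print(route_forecast(1400, confidence=0.9, market_event_flag=True))
```

The planner's corrections can then be logged as feedback for retraining, closing the loop the HITL framework describes.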
The writer is Analytics and AI leader at Bose Corporation.