ii-AI aims to enable the truly autonomous enterprise and the promises of Industry X.0. The challenge is that every part of the business and its operations involves many trade-offs, and hence all decisions are interconnected. ii-AI goes well beyond good prediction (what AI is mainly known for): it dynamically balances complex trade-offs among predictions and actions, and it does so through a modular and scalable design.
Applications and use cases
In a nutshell, ii's solution is automated, continuous optimization in dynamic and uncertain environments with high financial consequences, delivered by complementary systems that self-learn and improve over time with collected data and gathered experience.
All decisions are interconnected, and optimal balance points are wildly moving targets. We therefore aim to balance many trade-offs, and to do so continuously. This involves many interconnected predictions, optimizations (using those predictions, constraints, and other data), prescriptions, and feedback loops.
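To make that cycle concrete, here is a deliberately toy sketch in Python (all function names are hypothetical, not ii's actual API): predict from the latest data, optimize under current constraints, prescribe and act, then feed the observed outcome back in for the next pass.

```python
import random

def run_cycle(data, predict, optimize, act, observe):
    prediction = predict(data)                 # e.g. a demand or degradation forecast
    prescription = optimize(prediction, data)  # decision under current constraints
    outcome = act(prescription)                # apply the decision in the world
    return data + [observe(outcome)]           # feedback: enrich data for the next pass

# Toy instantiation: track a noisy process by forecasting from recent observations.
data = [10.0]
for _ in range(5):
    data = run_cycle(
        data,
        predict=lambda d: sum(d[-3:]) / len(d[-3:]),  # naive moving-average forecast
        optimize=lambda p, d: p,                      # trivially: prescribe the forecast
        act=lambda x: x + random.gauss(0, 0.5),       # the world responds with noise
        observe=lambda o: o,
    )
print([round(x, 2) for x in data])
```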
Application areas (in manufacturing and supply chain):
- Pre-delivery – Product Creation Value Stream
- Product Delivery – Product Delivery Value Stream
- Post-delivery – Maintenance, Capital Investment, etc.
1) Product creation value stream
For example, auto-predicting compound properties accurately from available data and samples could reduce lab-testing costs and increase lab capacity. A by-product would be the creation of useful, permanent knowledge bases to assist future product design, and so on. Advantages: lower design/development costs and shorter time to market.
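As an illustration of that idea, the following minimal sketch (synthetic data; the features and threshold are hypothetical) trains a surrogate model for a compound property and uses the ensemble's spread to decide which new samples still warrant a physical lab test; a real system would use actual formulation features and calibrated uncertainty estimates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 5))                              # formulation features
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 300)

model = RandomForestRegressor(n_estimators=200).fit(X, y)   # surrogate for lab tests

X_new = rng.uniform(size=(4, 5))                            # untested candidate compounds
per_tree = np.stack([t.predict(X_new) for t in model.estimators_])
pred, spread = per_tree.mean(axis=0), per_tree.std(axis=0)  # prediction + uncertainty proxy

for p, s in zip(pred, spread):
    action = "send to lab" if s > 0.15 else "accept prediction"
    print(f"predicted property {p:6.2f} +/- {s:.2f} -> {action}")
```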
Novel applications around the product design cycle, e.g., generating new designs from patterns in existing products, or producing a design from a set of constraints unlike those of any existing product while remaining similar to others: auto-generating a design that hits the sweet spot between similar and novel!
A whole range of applications using reasoning by similarity for products, components, customers, intermittent demands, etc.
2) Product delivery value stream
A whole range of applications that continuously realize efficiency opportunities in the value streams, such as:
- Dynamic optimization of material sourcing/procurement, weighing many factors, including custom usage profiles.
- Real-time asset health monitoring to increase uptime, replacing prevention-based maintenance practices with a predictive-prescriptive maintenance solution.
- Asset and material tracking to increase throughput and to decrease WIP and the floor space allocated to it.
- Generating dynamically optimized machine-SKU assignments.
- Generating dynamic, unified KPIs for connected operations intelligence.
- Auto-generating optimized S&OP plans, factoring in all variables and their interactions in a timely manner, free of existing static assumptions and modeling constraints.
- Auto-generating optimized master schedules.
- Further optimizing WIP, utilization, lead time, throughput, and fill rates, increasing flow through the whole supply chain.
- Real-time operations management, with all schedules auto-generated dynamically and self-adjusted in real time to maximize profit under any circumstances; this also reduces the headcount needed for operational control and planning.
- Inventory management: determining optimal reorders, including optimal quantities, reordering feasibility, etc., integrated with all relevant data and factors (e.g., inventory on hand, liquidity, trends), resulting in overall higher inventory turnover (see the reorder-point sketch after this list).
- Warehousing (directly related to inventory control): optimizing warehouse operations to maximize revenue per customer by linking demand, supply capacity, and operational network routing with all warehouse processes in real time. Example opportunity areas: loading/unloading sequences, hub optimization, warehouse sequencing, labor optimization, merge/sort logic, supplier distribution, etc.
- Distribution: integrated dynamic optimization across transportation alternatives, routings, etc.
- Marketing channel management and product pricing.
- Revenue management: optimization around the dynamics of cash flow.
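To make the inventory item above concrete, here is a minimal sketch (all numbers illustrative) of the classic EOQ/reorder-point calculation the inventory module would keep re-solving as demand forecasts, lead times, and cost estimates update, instead of fixing these parameters once.

```python
import math

# Inputs the system would refresh continuously (all values illustrative):
annual_demand = 12_000        # units/year, from the demand-prediction module
order_cost = 150.0            # fixed cost per order placed
holding_cost = 2.4            # cost to hold one unit for a year
daily_demand_mean = annual_demand / 365
daily_demand_std = 9.0        # from the demand model's uncertainty estimate
lead_time_days = 12
z = 1.645                     # ~95% cycle service level

# Economic order quantity: sqrt(2 * D * K / h).
eoq = math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Reorder point: expected lead-time demand plus a service-level safety stock.
safety_stock = z * daily_demand_std * math.sqrt(lead_time_days)
reorder_point = daily_demand_mean * lead_time_days + safety_stock

print(f"order quantity ~ {eoq:.0f} units, reorder when stock <= {reorder_point:.0f}")
```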
3) Post-delivery applications
Accurate, dynamic degradation prediction and estimation of assets' remaining useful life provide the basis for many applications, ranging from maintenance to investment in heavy capital assets.
For instance, in maintaining an aging asset fleet, moving from scheduled maintenance to condition-based maintenance (CBM) is a promising approach.
While most CBM systems attempt to predict imminent failure, they are not designed to reveal the probability that an asset will survive an unhealthy condition and continue operating. By moving beyond basic CBM, using superior models, one can predict failure probabilities at different points in the future, which makes it possible to learn and understand a specific asset's possible paths to failure. This allows ii systems to balance risk against cost and to provide an optimal window for replacing or repairing an asset. Management's appetite for risk is another integration point for the system, at the very least in approving the macro direction. Likewise, these predictions allow ii to optimize the repair work itself, thereby helping optimize both capital and operating expenditure.
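To illustrate that risk-versus-cost balance, here is a minimal sketch (the Weibull parameters are hypothetical, standing in for a learned degradation model) of the classic age-replacement policy: pick the replacement age that minimizes expected cost per operating hour, trading failure risk against the cost of replacing early.

```python
import numpy as np

beta, eta = 2.5, 1000.0          # Weibull shape/scale -- stand-in for a learned model
c_planned, c_failure = 1.0, 8.0  # planned vs. unplanned replacement cost (relative)

def survival(t):
    """Probability the asset is still operating at age t."""
    return np.exp(-(t / eta) ** beta)

def cost_rate(T, n=2000):
    """Expected cost per operating hour when replacing at age T (or at failure)."""
    t = np.linspace(0.0, T, n)
    s = survival(t)
    dt = t[1] - t[0]
    expected_uptime = np.sum((s[:-1] + s[1:]) / 2.0) * dt  # trapezoid: E[min(lifetime, T)]
    expected_cost = c_planned * survival(T) + c_failure * (1.0 - survival(T))
    return expected_cost / expected_uptime

ages = np.linspace(50.0, 2000.0, 400)
best = ages[int(np.argmin([cost_rate(T) for T in ages]))]
print(f"optimal replacement age ~ {best:.0f} hours")
# A risk-appetite constraint layers on naturally: e.g. also require the failure
# probability 1 - survival(T) to stay below a management-approved threshold.
```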
Similar predictions could be used to:
- Minimize warranty costs.
- Optimally schedule maintenance activities.
- Optimize crew scheduling and crew routes.
- Optimally assign incoming events/alarms to shift workers.
Other examples include predicting changes in safety behavior and finding anomalies through profiling techniques, text analytics, and so on, to improve crew safety and compliance with regulations.
It is important to note, regarding all the areas above, that 1) each requires many continuous predictions underneath its optimizations, and 2) none should be treated in isolation with a point solution, as the goal is an integrated intelligence! For instance, in PLM-operations integration, the goal would be to integrate knowledge from the product creation value stream all the way to aftermarket services for holistic, autonomous decision-making in operations management.
***
Let's take the example of optimal machine settings on the manufacturing floor. What are the optimal settings? Perhaps it depends on the load. For any particular load, there may be some setting that gives the best performance. Yet that performance may be lower than what a different setting would achieve at a slightly different load. So what load should we use? Isn't the load simply determined by some schedule in our ERP system or the like, which in turn relied on static, linear assumptions? Does that schedule take these machine settings and performance variations into account? And even if it did, wouldn't changing the load affect other metrics, such as Available-to-Promise or Capable-to-Promise, or other metrics that couple to higher-level business variables? Conversely, are there other trade-offs, considering demand segments and the unique constraints or variables within each segment, that could be propagated down to determine the right load, given our prediction modules for both low-level manufacturing-floor variables and business-level dynamic trade-offs and predictions?
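A minimal sketch of the narrow-scope version of this question (synthetic data; the variable ranges are hypothetical): learn performance as a function of (setting, load) from historical floor data, then search for the best setting at each candidate load, so that loads can be compared jointly instead of being taken as given by the schedule.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for historical observations: (setting, load) -> performance.
# In practice these would come from the floor's historian / MES.
X = rng.uniform([0.0, 10.0], [1.0, 100.0], size=(500, 2))  # columns: setting, load
y = -(X[:, 0] - 0.3 - 0.005 * X[:, 1]) ** 2 + 0.01 * X[:, 1] + rng.normal(0, 0.01, 500)

model = GradientBoostingRegressor().fit(X, y)              # learned performance surface

def best_setting(load, settings=np.linspace(0, 1, 101)):
    """Grid-search the setting that maximizes predicted performance at a given load."""
    candidates = np.column_stack([settings, np.full_like(settings, load)])
    preds = model.predict(candidates)
    i = int(np.argmax(preds))
    return settings[i], preds[i]

# The point: the best (setting, load) pair may not sit at the load the ERP
# schedule assumed, so candidate loads should be compared jointly as well.
for load in (40.0, 50.0, 60.0):
    s, p = best_setting(load)
    print(f"load={load:5.1f}  best setting={s:.2f}  predicted performance={p:.3f}")
```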
A key design principle for our ii systems is "universality". This allows for flexibility in scope: the system could be just an ML model or a reinforcement-learning agent that, depending on the data we work with, tells us exactly how settings and loads are related and predicts performance (narrow scope). Or it could be multiple predictions and prescriptions across various areas plus overall optimization, letting different areas take one another into account while dynamically optimizing themselves, rather than replacing that with static, aggregate, average assumptions about one another (broader scope). The other thing we have been working on passionately is how to seamlessly broaden the scope by simply putting two or more narrow-scope ii systems together, and vice versa, i.e., breaking a large, broad-scope ii system into two or more narrower-scope ones, allowing any of those, or any decision point/variable in the middle, to be replaced with something else. These attaching and detaching processes are necessary for deep-analytics reasons, but also to ensure maximum flexibility and maintainability.
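A minimal sketch of the attach/detach idea (all names hypothetical, not ii's actual interfaces): if each narrow-scope module exposes the same small interface over a shared set of decision variables, modules can be composed into a broader-scope system or split apart again, and the producer of any intermediate variable can be swapped out.

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, float]  # shared decision variables / predictions

@dataclass
class Module:
    name: str
    step: Callable[[State], State]  # reads some variables, writes others

    def __call__(self, state: State) -> State:
        return self.step(state)

def attach(*modules: Module) -> Module:
    """Compose narrow-scope modules into one broader-scope module."""
    def run(state: State) -> State:
        for m in modules:
            state = m(state)
        return state
    return Module(name="+".join(m.name for m in modules), step=run)

# Two narrow-scope stand-ins: one predicts performance, one prescribes a load.
predictor = Module("perf_model", lambda s: {**s, "perf": 1.0 - (s["setting"] - 0.4) ** 2})
prescriber = Module("load_opt", lambda s: {**s, "load": 50.0 + 20.0 * s["perf"]})

broad = attach(predictor, prescriber)  # attaching: broaden the scope
print(broad({"setting": 0.35}))
# Detaching is the inverse: run `predictor` alone, or replace `prescriber` with a
# different module that consumes the same intermediate variable "perf".
```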