Tuesday, April 4, 2017

A Single Supply Chain Data Platform For All

The supply chain is a huge network of raw data, storage systems, analysis frameworks, and end users. Traditional supply chain systems are siloed by function - ERP, S&OP, APS, TMS, WMS, MES, etc. Each has its own little world of data, storage, retrieval, and analysis.

Great for the yesteryear of "waterfall" data consumption. But times have changed.

For example, let's focus on a brand (like Apple). Apple does not own a single factory. It relies on a global network of suppliers (TSMC, Samsung - a frenemy) to feed its global network of contract manufacturers (Foxconn). With a global network of suppliers and demanders, a company is now exposed to risks around the world, 24/7. On the supply side, a fire, a strike, geopolitical instability, or war can disrupt a JIT flow of parts. On the demand side, a negative post by an influential blogger can render a demand model obsolete in seconds. A global network requires real-time access to data across the planet. Traditional systems were great at performing a singular task - optimize factory usage, minimize inventory carrying cost, reduce shipping time - but have become rigid in the modern world of social media and globalization. Big Data can help. A lot.


Adopting A Big Data Platform For Supply Chain




How do we transform a traditional supply chain system into a Big Data platform?

Start with raw data. The platform needs to support a variety of sources (IoT devices, social media, suppliers, transportation, inventory, factory), many formats (flat files, CSV, JSON), and many delivery methods (e-mail, FTP, RESTful API, GPRS).

It needs to reduce the need to model the data up front. Traditional relational database design required careful profiling, planning, and modeling. Typical steps included creating an ERD using Crow's Foot notation. Care was taken to follow the normal forms to optimize the database. Profiling was done to find the optimal declaration of datatypes. An EDW had to know its queries in advance to avoid expensive JOINs. Big Data - with its support for unstructured data and schema-on-read techniques - helps greatly with all of this.

Data ingestion needs to be automated - for example, using CI/CD automation tools such as Jenkins to load data into AWS S3.

Data analysis needs to support decision makers with descriptive (what happened in the past), predictive (what might happen in the future), and prescriptive (what should I do) analytics.
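To make the schema-on-read idea concrete, here is a minimal Python sketch (the function name and feed contents are illustrative, not from any real system): a single ingestion function that accepts either CSV or JSON payloads and lands them as plain records, with no ERD or pre-declared datatypes.

```python
import csv
import io
import json

def ingest(raw):
    """Schema-on-read: accept a CSV or JSON payload without a
    predefined data model and normalize it to a list of dicts."""
    raw = raw.strip()
    if raw.startswith(("{", "[")):  # payload looks like JSON
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    # otherwise treat it as CSV with a header row
    return list(csv.DictReader(io.StringIO(raw)))

# Two feeds, two formats, one landing structure - no up-front modeling.
csv_feed = "sku,qty\nA100,5\nB200,12"
json_feed = '[{"sku": "A100", "qty": 5}]'
records = ingest(csv_feed) + ingest(json_feed)
```

Note the trade-off: the CSV path leaves `qty` as a string while the JSON path yields an integer. With schema-on-read, that kind of type reconciliation happens at query/analysis time rather than at load time.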

