
News

Posted about 5 years ago
DDMRP stands for Demand Driven Material Requirements Planning. In the last few years, the popularity of DDMRP has been growing in certain industries, occupying the niche that lean manufacturing or Six Sigma used to occupy. Yet, what can really be expected from DDMRP, and how much novelty does it bring to the table as far as supply chain optimization is concerned? In order to address this question, let’s review DDMRP from a numerical perspective, i.e. looking at DDMRP as a set of numerical recipes [1] intended to deliver a measurable performance optimization of a given supply chain. Indeed, as the benefits put forward by the authors of DDMRP are all quantified targets (e.g. achieve 97-100% on-time fill rate performance [2]), it seems fair to adopt a numerical stance to assess the merits of this approach.

The authors behind DDMRP state that this approach brings four key innovations to supply chain optimization, namely: the decoupling of lead times [3], the net flow equation [4], the decoupled explosions [5], and the relative priority [6].

Jumping to conclusions, the careful review of each of those points - done in greater detail in the following - indicates that there is very little substance to the bold claims of DDMRP. The numerical recipes proposed by DDMRP would not even have been considered state-of-the-art by the end of the 1950s, as the nascent field of operations research had already uncovered arguably more sophisticated and better numerical optimization strategies at the time.

The improvements claimed by DDMRP also start from a wrong baseline: MRPs - just like ERPs - typically do not deliver any numerical optimization capabilities [7]. Their underlying relational database systems are simply unsuitable for carrying any sizeable data crunching workload, even when considering modern computing hardware. Thus, despite the discourse of many enterprise software vendors - operating on the transactional side of the problem - it is incorrect to take MRPs as a baseline as far as supply chain optimization is concerned.
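For readers unfamiliar with the DDMRP vocabulary, the net flow equation and the relative priority discussed above boil down to very simple arithmetic - which is precisely the point being made here. The sketch below is a minimal, hypothetical Python illustration (the class and field names are ours, not DDMRP's or Lokad's) of the net flow position and the buffer-penetration priority as they are commonly stated:

```python
# Illustrative sketch (not Lokad code): the DDMRP "net flow equation"
# as commonly stated by its authors, plus the "relative priority"
# derived from buffer penetration. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class BufferedSku:
    on_hand: float            # stock physically available
    on_order: float           # open supply orders not yet received
    qualified_demand: float   # due sales orders plus qualified spikes
    top_of_green: float       # upper bound of the DDMRP buffer

    def net_flow(self) -> float:
        # Net flow position = on-hand + on-order - qualified demand
        return self.on_hand + self.on_order - self.qualified_demand

    def relative_priority(self) -> float:
        # Buffer penetration: 1.0 = full buffer, 0.0 = empty.
        # Lower values mean the SKU should be replenished first.
        return self.net_flow() / self.top_of_green

skus = {
    "A": BufferedSku(on_hand=40, on_order=20, qualified_demand=35, top_of_green=100),
    "B": BufferedSku(on_hand=10, on_order=50, qualified_demand=15, top_of_green=60),
}

# Rank SKUs by ascending buffer penetration, i.e. most urgent first.
for name, sku in sorted(skus.items(), key=lambda kv: kv[1].relative_priority()):
    print(name, round(sku.net_flow(), 1), round(sku.relative_priority(), 2))
```

Ranking SKUs by ascending buffer penetration reproduces the "relative priority" behavior; there is nothing here beyond a handful of additions and one division.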
Posted about 5 years ago
The first principle of our Quantitative Supply Chain manifesto states that all futures should be considered. Thus, we expanded Envision two years ago to natively work with random variables. This probabilistic algebra is the cornerstone of our way of dealing with uncertain futures.

Then, the second principle states that all feasible decisions should be considered, e.g. the quantities to be purchased from suppliers. Yet, those decisions are not random variables: the quantities associated with those decisions are undecided, not uncertain. Our probabilistic algebra was not sufficient by itself to properly reflect those yet-to-be-made decisions. Thus, last year, we silently and gradually rolled out a complementary algebra: the algebra of zedfuncs.

A zedfunc is a datatype in Envision intended to reflect the economic rewards or losses associated with quantified decisions. The main trick is that a zedfunc does not compute the outcome for one decision, but for all decisions; e.g. all the rewards from triggering a production for 1 unit up to an infinity [1] of units.
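To make this more concrete, here is a deliberately naive Python analogue of the idea - a sketch under our own assumptions, not Envision's actual zedfunc implementation, which relies on a compact internal representation. All names below are hypothetical:

```python
# Illustrative sketch (not Envision): a naive analogue of a "zedfunc",
# i.e. a function from an integer decision (e.g. a quantity to purchase)
# to an economic outcome, defined over ALL candidate quantities at once
# rather than evaluated for a single one.

class Zedfunc:
    def __init__(self, f):
        self.f = f  # maps a quantity (int >= 0) to a reward (float)

    def __call__(self, q: int) -> float:
        return self.f(q)

    def __add__(self, other: "Zedfunc") -> "Zedfunc":
        # Pointwise addition: combine two economic drivers.
        return Zedfunc(lambda q: self.f(q) + other.f(q))

    def argmax(self, max_q: int) -> int:
        # Pick the quantity with the best total outcome.
        return max(range(max_q + 1), key=self.f)

# Example: gross margin of selling up to q units, minus carrying cost.
margin = Zedfunc(lambda q: 12.0 * min(q, 50))   # demand capped at 50 units
carrying = Zedfunc(lambda q: -0.8 * q)          # holding cost per unit

reward = margin + carrying
print(reward.argmax(max_q=200))  # best purchase quantity under this model
```

Once every economic driver is expressed as a function over all candidate quantities, drivers can be summed pointwise and the best decision read off the combined curve - which is the essence of an algebra over such functions.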
Posted over 5 years ago
Two months ago, we rolled out a major new feature for Lokad: our first bit of real-time data exploration. This feature is codenamed dashboard slicing, and it took a complete overhaul of the low-level data processing back-end powering Envision to get it done. With dashboard slices, every dashboard becomes a whole dictionary of dashboard views, which can be explored in real time with a search bar. For example, by slicing a dashboard intended as a product inspector - which gathers in one place all the information about a product, including probabilistic demand and lead time forecasts - it is now possible to switch in real time from one product to the next.

At present, Lokad supports up to 200,000 slices (aka dashboard views) produced for a single dashboard; those slices can be displayed in real time through the selector, which comes with a real-time search feature to facilitate the exploration of the data. Unlike business intelligence (BI) tools, those slices can contain highly complex calculations, not merely slice-and-dice over an OLAP cube.

When it comes to data crunching and reporting, there are typically two camps: online processing and batch processing. Online processing takes a feed of data, and it is typically expected that everything displayed by the system is always fresh: the system is not lagging more than a few minutes, sometimes no more than a few seconds, behind reality. OLAP cubes, and most of the tools referred to as business intelligence, fall into this category. While real-time [1] analytics are highly desirable, not only from a business perspective (fresh data is better than stale data) but also from an end-user perspective (performance is a feature), they also come with stringent limitations. Simply put, it is exceedingly hard to deliver smart analytics [2] in real time. As a result, all online analytical systems come with severe limitations when it comes to the type of analytics that can be carried out by the system.
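The way slicing sidesteps the online-versus-batch trade-off can be illustrated with a short sketch: the expensive analytics run in batch, producing one precomputed view per key, while the real-time part is reduced to a lookup and a search over those keys. This is a hypothetical Python illustration, not Lokad's actual implementation:

```python
# Illustrative sketch: dashboard slicing as a batch-computed dictionary
# of views, explored in real time. Names and structures are hypothetical.

from typing import Dict

def build_slices(products: Dict[str, dict]) -> Dict[str, dict]:
    """Batch phase: run the expensive calculations once per product,
    producing one pre-rendered 'slice' (dashboard view) per key."""
    slices = {}
    for product_id, data in products.items():
        slices[product_id] = {
            "title": data["name"],
            # Stand-ins for heavy analytics, e.g. probabilistic
            # demand and lead time forecasts computed offline.
            "demand_forecast": sum(data["sales"]) / len(data["sales"]),
            "stock_on_hand": data["stock"],
        }
    return slices

def search(slices: Dict[str, dict], query: str) -> list:
    """Real-time phase: the search bar only filters precomputed keys,
    which is why switching between slices feels instantaneous."""
    q = query.lower()
    return [k for k, v in slices.items()
            if q in k.lower() or q in v["title"].lower()]

products = {
    "SKU-001": {"name": "Blue Widget", "sales": [3, 5, 4], "stock": 12},
    "SKU-002": {"name": "Red Widget", "sales": [1, 2, 2], "stock": 7},
}
slices = build_slices(products)   # slow, done in batch
print(search(slices, "widget"))   # fast, done at query time
```

The heavy lifting happens once, offline; the interactive part only ever touches the precomputed dictionary, so the slices can embed arbitrarily complex calculations without compromising the real-time feel.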
Posted over 5 years ago
With the advent of cloud computing a little more than a decade ago, it has become straightforward to acquire computing resources on demand (storage, compute, network) at pretty much any scale, as long as one is willing to pay for it. Yet, the fact that it is straightforward to perform large-scale calculations on the cloud computing platform of your choice does not imply that those calculations will be worth their cost.

At Lokad, we do not charge our clients per GB of storage or per CPU-hour. Instead, the primary driver for our pricing, when opting for our professional services, is the complexity of the supply chain challenge to be addressed in the first place. Naturally, we do factor into our prices the computing resources that we need to serve our clients, but ultimately, every euro that we spend on Microsoft Azure - spending-wise, we did become a “true” enterprise client - is a euro that we cannot spend on R&D or on the Supply Chain Scientist who is taking care of the account.