Neural plasticity is the physical substrate of learning. Neurochemical changes mediate short-term learning, while structural changes (new branches and connections), along with functional changes, establish long-term learning. Contrary to past beliefs, it is now known that these plasticity events continue throughout one’s life. Human beings are life-long learners, and neural plasticity makes continuous learning possible!
Do we have all the counterparts of neural plasticity in Machine Learning today? NO; machine learning (ML) in 2016 harks back to the neuroscience of Ramon y Cajal and the “cable theory” of the early 1900s! In virtually every application of today’s ML, learning is defined as “generalization from past experiences (data)” – a one-shot plasticity event. An ML solution is trained on historical data today and applied to business problems tomorrow, and it works well – great. But what happens a few years from now? The solution still relies on learning from data that is years old, and nothing in this world remains STATIC for years . . .
ML should embrace “life-long neural plasticity” and define learning as “generalization from past experiences AND results of new actions”; then we enter the realm of DYNAMICAL machine learning leading to Continuous Learning which leads to Continuous Improvement.
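To make the contrast concrete, here is a toy sketch (my own illustration in Python, not code from any of the linked material) of what “generalization from past experiences AND results of new actions” looks like: a recursive least-squares estimator that folds every new observation into its current model, instead of freezing after a one-shot training run. The forgetting factor and data values are illustrative assumptions.

```python
# Recursive least-squares (RLS) estimate of a scalar slope a in y ≈ a*x.
# Toy example: the model is refreshed with EVERY new sample, so the
# estimate can track a drifting world instead of a frozen snapshot.

def make_rls(forgetting=0.99):
    """Return an update function that carries (a_hat, P) state in a closure."""
    state = {"a": 0.0, "P": 1000.0}  # large P = low confidence in initial guess

    def update(x, y):
        P, a = state["P"], state["a"]
        k = P * x / (forgetting + x * P * x)   # gain: how much to trust new data
        state["a"] = a + k * (y - a * x)       # correct estimate with innovation
        state["P"] = (P - k * x * P) / forgetting
        return state["a"]

    return update

update = make_rls()
for x, y in [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]:
    a_hat = update(x, y)
# a_hat converges toward the underlying slope (~2)
```

The key point is that each update costs a few arithmetic operations per sample – exactly the kind of in-stream, real-time recursion that batch retraining cannot provide.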
The technical underpinning of Dynamical ML is the following: DYNAMICAL ML = State-space data model + Real-time recursive learning algorithms + In-stream processing. All three components are available today; more at the following links:
- Systems Analytics: Adaptive Machine Learning workbook: ML and Systems Theory merged for the first time; introduces basic and advanced theory, algorithms, MATLAB code and applications; available through Amazon.
- Next Stage in IoT revolution – “Continuous Learning”: General introduction to Dynamical ML within IoT framework.
- Generalized Dynamical Machine Learning: Leads to “hard core” algorithms for Dynamical, Non-linear, In-Stream Analytics.
As you get deeper into Dynamical ML, you will find that the “State-space” concepts arising within Systems Theory are a key topic. It turns out that a majority of STEM graduates are NOT trained in the state-space approach; fortunately, there is a sizable minority of post-graduate engineers with solid training in state-space methods who can execute the development work.
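For readers meeting the formalism for the first time, the discrete-time linear state-space model at the heart of these methods is (standard textbook notation, not anything specific to the linked posts):

```latex
x_{k+1} = A\,x_k + B\,u_k + w_k \quad \text{(state / process equation)}
y_k = C\,x_k + v_k \quad \text{(measurement equation)}
```

Here \(x_k\) is the hidden state, \(u_k\) a known input, \(y_k\) the measurement, and \(w_k, v_k\) the process and measurement noises; under Gaussian noise, the Kalman Filter is the optimal recursive estimator of \(x_k\) for this model.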
Don’t get the wrong impression that the “State-space” approach (and the associated Kalman Filtering) is something exotic and new! It has been around since the 1960s, proved its worth in the Apollo space program and other esoteric applications, and is now part of many everyday applications, including the garden-variety GPS that you and I use every day! Theory, algorithms, software and hardware for the Kalman Filter are commonplace; the three weblinks above discuss Kalman Filter use in Data Science at multiple levels.
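To show there really is nothing exotic under the hood, here is a minimal scalar Kalman Filter – a generic textbook sketch in Python (the noise variances and data are illustrative assumptions, not values from the linked posts):

```python
# Scalar Kalman filter: track a roughly constant hidden level from noisy
# readings. Model: x_k = x_{k-1} + w (variance q), y_k = x_k + v (variance r).

def kalman_step(x, p, y, q=1e-3, r=0.25):
    """One predict + update cycle; returns the new estimate and its variance."""
    # Predict
    p = p + q            # uncertainty grows between measurements
    # Update
    k = p / (p + r)      # Kalman gain: how much to trust the new reading
    x = x + k * (y - x)  # correct the estimate with the innovation
    p = (1 - k) * p      # uncertainty shrinks after the measurement
    return x, p

x, p = 0.0, 1.0          # initial guess and (large) initial variance
for y in [5.2, 4.8, 5.1, 4.9, 5.0, 5.3, 4.7]:
    x, p = kalman_step(x, p, y)
# x converges toward the underlying level (~5.0) as p shrinks
```

Note the same recursive pattern as before: a prediction, an innovation, and a gain-weighted correction – a handful of arithmetic operations per sample, which is why the filter runs comfortably inside a GPS chip.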
I call Data Science that emphasizes Dynamical ML “IoT Data Science”, since IoT provides the right framework for continuous improvement. Real-time data from a network of sensors and devices are processed in the Cloud (“lambda” architecture with appropriate stream-processing technologies such as Apache Flink) using Dynamical ML algorithms. It appears to me that such a technology framework can underpin ALL businesses and industries of the 21st century (more discussion in “IoT as a Metaphor”).
IoT Data Science will engender a new revolution – but I do not believe it will be an industrial revolution of the kind unleashed by the steam engine and electricity; the hallmark of that revolution was increased productivity. It appears to me that the revolution IoT Data Science will create will be more like the “printing press” revolution of the 1400s. The printing press and movable type played a key role in the development of the Renaissance, the Reformation and the Age of Enlightenment. Similarly, IoT Data Science has the power to expand our thinking and change the structure of 21st-century society!