Do neural networks represent a shift in how we write code? Tesla's director of AI, Andrej Karpathy, coined the term "Software 2.0" in a note arguing exactly that: training neural nets and predicting with them involves a new way of thinking about software. The earlier methodology, according to Karpathy, involved writing code, say, in Python, and putting it into production. When training neural nets, however, the developer instead sets up a framework within which the network can properly "learn", and repeatedly feeds examples of the correct answer through it to "teach" it.
Does Deep Learning Represent A New Software Paradigm?
Most AI practitioners do agree that neural nets represent a shift in coding, but it is not a path-breaking transition, and the term "Software 2.0" has been dismissed as another marketing buzzword for neural networks. Here's why: deep learning is not a "silver bullet" for all business problems. Karpathy's presumption is that deep learning can do better than other forms of software and will continue to displace handwritten code. He points to the overarching success of deep learning at a family of tasks that machine learning practitioners once thought required hand-written code, and argues that this success has paved the way for a new paradigm of producing software: Software 2.0.
To a certain extent, neural networks represent a trend towards "teachable machines", which brings us back to the main argument: is deep learning chipping away at other machine learning techniques? Neural networks have made significant progress in fields like computer vision, image classification, language translation and speech recognition, areas where traditional methods were underperforming.
Conventional Programming vs Deep Learning
Karpathy, a highly respected machine learning researcher, is well-known for his excellent publication record and citation count. According to Carlos E Perez, author of Deep Learning Playbook & Artificial Intuition: The Improbable Deep Learning Revolution, enterprises are still in the early stages of deep learning development, and the field is rife with issues that must be addressed before it can evolve into a new kind of computing with the same features as Software 1.0.
- Practitioners argue that in many cases simpler algorithms perform just fine, and even offer key benefits like interpretability, a major advantage over deep learning's black-box problem.
- There is more to software than just writing code. Software 1.0 also involves adding new features, maintenance, finding bugs and bottlenecks, and crucial long-term support. Neural networks do not really address those problems.
- According to machine learning researchers, Software 1.0 is built from bitwise operations (AND, OR, NOT) that can be performed using only NAND gates. Complexity comes from building functions over bytes, floating-point numbers and memory pointers, and from specialising and optimising instructions for specific tasks. Software 2.0 excels at a family of tasks that could not be solved with conventional computing, and that is where it took off.
- However, to advance "Software 2.0", one needs to keep producing massive, clean datasets for networks to train on. The neural network approach has not been extended to use cases where labeled data is scarce.
- Perez believes neural networks are intrinsically ‘intuition machines’ which means the technology is inherently different from software 1.0.
- Do neural networks offer an automated programming framework? No. Neural networks are also programs that require coding: tweak the dataset or change the training parameters and you obtain a different output. Once the error is minimised, you get a solution to your problem. In this case, the trained network is a deterministic function of the hyperparameters, and the output and stopping criteria are well-defined.
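The determinism point in the last bullet can be illustrated with a minimal sketch (pure Python, so it stays self-contained; the toy model and its parameters are illustrative, not from the source): with the dataset, hyperparameters and random seed fixed, two independent training runs produce exactly the same weights, so the trained model really is a deterministic function of its inputs.

```python
import random

def train(data, lr=0.1, epochs=200, seed=42):
    """Fit y = w*x + b by gradient descent; fully determined by data + hyperparameters."""
    rng = random.Random(seed)            # fixed seed: the only source of randomness
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y        # prediction error on one sample
            w -= lr * err * x            # gradient step for the weight
            b -= lr * err                # gradient step for the bias
    return w, b

data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]  # samples of y = 2x + 1
run1 = train(data)
run2 = train(data)
assert run1 == run2   # same dataset + hyperparameters -> bit-identical "program"
print(run1)
```

Changing any hyperparameter (the learning rate, the epoch count, the seed) yields a different pair of weights, which is the sense in which tweaking the training setup "re-programs" the network.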
One Thing Is Clear, Deep Learning Is Indeed The Future Of Software
Even though many deem Karpathy's argument flawed, one thing is certain: by coining a new term for deep learning, "Software 2.0", he has added more teeth to the DL hype, and he has hinted at how firms like Google, Amazon and Uber are working on it and may soon make it the new paradigm of software development.
Meanwhile, senior data scientist Seth Weidman has added a new perspective. According to Weidman, with the wide availability of easy-to-use Python packages such as scikit-learn, data scientists are spending more time integrating data from various sources than explicitly programming the models themselves. Weidman believes that in future, data scientists and machine learning researchers will spend more time setting up environments for teachable machines than on explicit programming; the code for actually generating the models is contained in the libraries, he adds.
For example, Weidman explains that in the age of Software 2.0, modelling tasks will be less about designing custom functions and more about function approximation: after feeding in the right data, developers will deploy off-the-shelf tools to tune the model.
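A hedged sketch of this function-approximation workflow, assuming scikit-learn is installed (the kind of package Weidman refers to; the target function and tuning values below are made up for illustration): the developer never hand-codes the mapping, only supplies input-output pairs and a couple of tuning knobs.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# The "function" is known only through data points, not through a hand-written formula.
X = np.linspace(0, 3, 300).reshape(-1, 1)
y = np.sin(2 * X).ravel() + X.ravel()          # hidden target: sin(2x) + x

# Off-the-shelf approximator: no custom model code, just hyperparameters to tune.
model = DecisionTreeRegressor(max_depth=6, random_state=0)
model.fit(X, y)

mean_abs_err = np.mean(np.abs(model.predict(X) - y))
print(f"mean absolute training error: {mean_abs_err:.3f}")
```

Swapping in a different off-the-shelf regressor or adjusting `max_depth` is the whole of the "programming" here, which is exactly the shift Weidman describes.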
Weidman gives an image classification example: to train an image classifier, the developer loads in the images and then uses off-the-shelf code from a library like Keras to define the model structure. By searching for "image classifier keras", a developer can train a model in fewer than 20 lines of code.
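Weidman's example names Keras; as a self-contained stand-in that does not assume TensorFlow is installed, the same under-20-lines point can be made with scikit-learn's bundled 8x8 digit images. The workflow is identical in spirit: load the data, call an off-the-shelf fit, and let the library supply the modelling code.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load the images: the developer's work is mostly data handling.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Off-the-shelf classifier: the model-generating code lives entirely in the library.
clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A Keras version would look much the same at this level: a few lines to load data, a few to declare a stock model, and one call to train it.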
Given this scenario, Weidman argues that for budding data scientists entering the field, knowing how a model works internally will not be mission-critical. More important will be ensuring data quality and building the right checks around the model, he adds.
The use cases for Software 2.0/deep learning are certainly expanding, and over time more tech companies will offload computations in their software to trained neural nets; this is what would represent the real shift in software development.