A neural network is built around simple linear equations such as Y = WX + B, where W is a matrix of weights. These weights multiply the input X and play a crucial role in how the model predicts. Prediction quality can degrade if a weight is updated incorrectly, and as the network gets deeper, i.e. as more layers (columns of connected nodes) are added, the error magnifies and the results miss the target.
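A minimal NumPy sketch of the idea above; the layer sizes and the perturbation are illustrative assumptions, not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# One layer computes Y = W @ X + B.
W = rng.normal(size=(3, 4))   # weight matrix (3 outputs, 4 inputs)
X = rng.normal(size=(4,))     # input vector
B = np.zeros(3)               # bias
Y = W @ X + B

# Perturb a single weight and compare the predictions.
W_bad = W.copy()
W_bad[0, 0] += 0.5            # a "wrongly updated" weight (assumed size of the error)
Y_bad = W_bad @ X + B

print(np.abs(Y_bad - Y).max())  # the prediction has already shifted after one layer
```

Because each layer feeds the next, a shift like this at one layer becomes part of the input to every subsequent layer, which is why deeper networks can magnify a single bad update.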

There is no denying that building ML algorithms from scratch is a thing of the past. Modern programming platforms offer plenty of options where a single line of code invokes a monstrous algorithm in the background. This works for those who only want a feel for how ML plays out. However, if one is even remotely serious about putting an ML model into production, many issues surface.

Here are a few learning resources for beginners to kickstart their ML journey:

### Essence Of Linear Algebra By 3Blue1Brown

3blue1brown, or 3b1b for those who prefer less of a tongue-twister, centres around presenting math with a visuals-first approach. That is, rather than first deciding on a lesson then putting illustrations to it for the sake of having a video, almost all projects start with a particular visualisation, with the narrative and storyline then revolving around this image.

Check here for more details.

### Linear Algebra By Khan Academy

This course by Khan Academy begins with defining and conceptualising what a vector is (rather than starting with matrices and matrix operations like in a more basic algebra course) and defining some basic operations (like addition, subtraction and scalar multiplication).

Check here for more details.

### Basic Linear Algebra for Deep Learning By Niklas Donges

This blog by Niklas gives an introduction to the most important concepts of Linear Algebra that are used in Machine Learning.

Check here for more details.

### Computational Linear Algebra for Coders By fast.ai

This course is focused on the question: How do we do matrix computations with acceptable speed and acceptable accuracy?

This course is structured with a top-down teaching method, which is different from how most math courses operate. Typically, in a bottom-up approach, you first learn all the separate components you will be using, and then you gradually build them up into more complex structures. The problems with this are that students often lose motivation, don’t have a sense of the “big picture”, and don’t know what they’ll need.

Check here for more details.

### Deep Learning Book By Ian Goodfellow and Yoshua Bengio and Aaron Courville

The Deep Learning textbook is a resource intended to help students and practitioners enter the field of machine learning in general and deep learning in particular. The online version of the book is now complete and will remain available online for free.

Check here for more details.

### Linear Algebra for Machine Learning By AppliedAICourse

The AppliedAICourse attempts to teach students and course participants some of the core ideas in machine learning, data science and AI that help them go from a real-world business problem to a first-cut, working and deployable AI solution.

The videos focus on practical knowledge more than mathematical or theoretical rigour.

Check here for more details.

### Linear Algebra By MIT OpenCourseware

This course covers matrix theory and linear algebra, emphasising topics useful in other disciplines such as physics, economics and social sciences, natural sciences, and engineering. It parallels the combination of theory and applications in Professor Strang’s textbook Introduction to Linear Algebra.

Check here for more details.

### Coding The Matrix By Philip Klein

This course begins with the limitations of eigenvalue analysis and goes on to give an in-depth look at the workings of singular value decomposition and other matrix operations.

Check here for more details.

The field of machine learning is built on some ingenious mathematical and logical hypotheses and tools.

There are other rudimentary topics which can make the life of a typical machine learning engineer easier:

- Law of large numbers
- The geometry of high dimensions
- Random walks in Euclidean space
- Gradient Descent methods
- Graph partitioning
- Bayesian or belief networks
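Of the topics above, gradient descent is the workhorse behind training most models. A minimal sketch on a one-dimensional quadratic loss; the starting point and learning rate are assumptions chosen for illustration:

```python
# Minimise f(w) = (w - 3)^2 by gradient descent; the minimum is at w = 3.
def grad(w):
    """Derivative of f(w) = (w - 3)^2."""
    return 2.0 * (w - 3.0)

w = 0.0        # starting point (assumption)
lr = 0.1       # learning rate (assumption)
for _ in range(100):
    w -= lr * grad(w)   # step against the gradient

print(round(w, 4))  # converges toward 3
```

The same update rule, applied to the weights W of a network with the gradient of its loss, is how the "wrong weight" problem from the opening paragraph gets corrected during training.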