### Why TensorFlow?

You may be wondering why you need to know TensorFlow, and what reasons there are to adopt it. Here is the answer. Since the world is focusing on Artificial Intelligence in one way or another, Deep Learning, as a key component of Artificial Intelligence, is going to take centre stage. Deep Learning does a wonderful job in pattern recognition, especially in the context of images, sound, speech, language and time-series data.

When we talk about Deep Learning, the discussion naturally turns to the best frameworks for Deep Learning development. Fortunately, in November 2015 Google released TensorFlow, a deep learning framework that is used in many Google products, such as Google Search, spam detection, speech recognition, Google Allo, Google Now and Google Photos.

TensorFlow allows both model parallelism and data parallelism, and it provides multiple APIs. The lowest-level API, TensorFlow Core, provides you with complete programming control.

### Tensors

The best way to start enjoying the TensorFlow library is to get comfortable with its basic unit of data: the tensor. A tensor is a mathematical object, a generalization of scalars, vectors and matrices. A multidimensional array is a data structure suitable for representing a tensor.

```python
9                                  # a rank 0 tensor; a scalar with shape []
[1., 2., 3.]                       # a rank 1 tensor; a vector with shape [3]
[[5., 2., 7.], [3., 5., 4.]]       # a rank 2 tensor; a matrix with shape [2, 3]
[[[6., 2., 3.]], [[7., 8., 9.]]]   # a rank 3 tensor with shape [2, 1, 3]
```

### Top 10 Key Points about TensorFlow

The core of a TensorFlow program can be understood in a few simple statements:

- TensorFlow programs are usually structured into a construction phase and an execution phase.
- The computational graph is built in the construction phase.
- The construction phase assembles a graph of nodes (ops/operations) and edges (tensors).
- Every operation (node) takes tensors as input and produces tensors as output. Addition, for example, is an operation that takes two tensors as input and gives one tensor as output.
- The computational graph is run in the execution phase, which uses a session to execute the ops in the graph.
- The simplest op is a constant: it takes no inputs but passes its output to other ops that do computation.
- Other examples of ops are multiplication, addition and subtraction, each of which takes two matrices as input and passes a matrix as output.
- The TensorFlow library has a default graph to which op constructors add nodes.
- To actually evaluate the nodes, we must run the computational graph within a session.
- A session encapsulates the control and state of the TensorFlow runtime.

TensorFlow programs use a tensor data structure to represent all data — only tensors are passed between operations in the computation graph. You can think of a TensorFlow tensor as an n-dimensional array or list. A tensor has a static type, a rank, and a shape.

### Let us start coding in TensorFlow

There are three variable types in TensorFlow: Variable, Placeholder and Constant. Let us play with each of them.

### Variable:-

**Here is the explanation of the above code in simple words.**

1. Import the tensorflow module and call it tf.
2. Create a constant value (x), and assign it the numerical value 12.
3. Create a session for computing the values.
4. Run just the variable x and print out its current value.

To be comfortable, let us have another example

**Here is the explanation of the above code.**

1. Import the tensorflow module and call it tf.
2. Create a constant value called x, and give it the numerical value 12.
3. Create a Variable called y, and define it as the equation 12 + 11.
4. Initialize the variables with tf.global_variables_initializer().
5. Create a session for computing the values.
6. Run the model created in step 4.
7. Run just the variable y and print out its current value.

### Placeholder:-

A **placeholder** is a variable to which we can feed data at a later time; it is designed to accept external inputs. Placeholders can have one or multiple dimensions, meant for storing N-dimensional arrays.

** Here is the explanation of the above code.**

1. Import the tensorflow module and call it tf.
2. Create a placeholder called x of type float.
3. Create a tensor called y that is the operation of multiplying x by 10 and adding 500 to it. Note that no initial value for x is defined.
4. Create a session for computing the values.
5. Define the values of x in the feed_dict so as to run y.
6. Print out its value.

In the following example, we create a 2 by 4 matrix (a 2-D array) for storing some numbers in it. We then use a similar operation to the one before: element-wise multiplying by 10 and adding 1. The first dimension of the placeholder is **None**, which means any number of rows is allowed.

We can thus use a 2-D array in place of a 1-D array. Here is the code.

Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly call a special operation as follows:

It is important to realize that **init** is a handle to the TensorFlow sub-graph that initializes all the global variables. Until we call **sess.run(init)**, the variables are uninitialized.

### Constant:-

There are various kinds of tensors we can create in TensorFlow. You can find more details in the book to be published next month; this article has taken inspiration from the book.

I hope you have enjoyed learning the basics of TensorFlow. Please feel free to comment.
