Posts

Exploring Gini Index in Decision Tree

Where is the Gini Index used?

Before diving into the Gini Index, it is essential to understand decision trees, one of the most commonly used supervised machine learning algorithms thanks to how easy they are to interpret. The algorithm supports both classification and regression problems, and the Gini Index is the split criterion used by Classification and Regression Tree (CART), one of the variants of the decision tree algorithm.

Why is the Gini Index used?

A decision tree makes decisions by splitting the nodes of the tree, which come in three kinds: the root node, decision nodes and leaf nodes. To identify the best split at each node, the Gini Index metric is used.

How is the Gini Index applied?

In scikit-learn, 'gini' is passed as the criterion parameter. It is the default value in scikit-learn's class constructors, the main alternative being 'entropy'.

History of the Gini Index

Gini impurit...
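The split criterion described above can be illustrated with a small sketch. The `gini` helper below is a hypothetical standalone function that computes Gini impurity from class labels, 1 − Σ p_k², not scikit-learn's internal implementation:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a collection of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# A pure node (all one class) has impurity 0; a 50/50 node has impurity 0.5.
print(gini(["a", "a", "a", "a"]))  # 0.0
print(gini(["a", "a", "b", "b"]))  # 0.5
```

In scikit-learn itself, the same criterion is selected with `DecisionTreeClassifier(criterion="gini")`, which is also the default.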

Loss functions in Deep Learning using Keras

In deep learning, a neural network requires an optimizer and a loss function to configure an efficient model. The purpose of a loss function is to compute the quantity that the model should seek to minimize during training. Loss functions are also termed cost functions. They fall into two categories: probabilistic losses and regression losses.

Various algorithms are used to train the neural network. To achieve optimization, the weights are updated using backpropagation, and optimization algorithms reduce the error in the next iteration with the changed weights. The score calculated after each evaluation is called the loss.

Probabilistic losses: these loss functions are used in classification-based models. The most widely used loss functions in this category are:

Binary Cross Entropy: this function calculates the loss of a classification model where the target variable is binary, i.e. 0 or 1.

Categorical Cross Entr...
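As a sketch of what binary cross entropy computes, assuming the standard formula −[y·log(p) + (1−y)·log(1−p)] averaged over samples; the `binary_cross_entropy` function here is an illustrative reimplementation, not the Keras one:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean of -[y*log(p) + (1-y)*log(1-p)], with clipping for numerical stability."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1.0 - p))
    return total / len(y_true)

# Confident correct predictions give a low loss.
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # ≈ 0.105
```

In Keras itself the equivalent is selected with `model.compile(loss="binary_crossentropy", ...)` or `keras.losses.BinaryCrossentropy()`.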

Optimization techniques in Deep Learning

In the deep learning world, the neural network's layers are all connected (input layer, hidden layers and output layer). In forward propagation we obtain the prediction ŷ and calculate the error function. The error function is also called the loss function or cost function. Optimizers are used to reduce the loss function: they update the weights during backpropagation.

Gradient Descent: the foremost optimizer used was gradient descent. It works as follows:

1. Calculate what a small change in each individual weight would do to the loss function
2. Adjust each individual weight based on its gradient
3. Keep doing steps 1 and 2 until the loss function gets as low as possible

During optimization the algorithm can get stuck in a local minimum. To help avoid this, we need to make use of the learning rate.

The learning rate is a variable that multiplies the gradients to scale them, ensuring the weights change at the right pace, not m...
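The three steps above can be sketched as a minimal gradient descent loop on a single weight; `grad`, `w0` and `lr` are illustrative names, not from any particular library:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient: w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # the learning rate lr scales each gradient step
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3); the minimum is at w = 3.
w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_min, 4))  # 3.0
```

A learning rate that is too large overshoots the minimum and can diverge; one that is too small converges very slowly, which is why scaling the gradients at the right pace matters.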