Passion for Robotics,
Centred around deep reinforcement learning

Building complex intelligent systems centred around deep reinforcement learning and deep learning for computer vision, and merging them with other skills such as Kalman filters for sensor fusion, Visual SLAM for robot mapping and navigation, the Robot Operating System (ROS), and embedded programming, best fits the way I want to express myself: through creative and innovative solutions in the field of Robotics.
This post is about the four equations that make backpropagation work in deep neural networks. It offers a different viewpoint on the equations and answers questions about how and why certain terms appear within them. A strong mathematical background in backpropagation is recommended in order to appreciate this post.
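For reference, the four backpropagation equations usually meant by this phrase, in the standard notation (delta for the layer error, z for the weighted input, a for the activation, C for the cost, and sigma for the activation function; which four equations the post has in mind is an assumption here), are:

```latex
\begin{align}
\delta^L &= \nabla_a C \odot \sigma'(z^L) && \text{(error at the output layer)} \\
\delta^l &= \bigl( (w^{l+1})^{T} \delta^{l+1} \bigr) \odot \sigma'(z^l) && \text{(error propagated backwards)} \\
\frac{\partial C}{\partial b^l_j} &= \delta^l_j && \text{(gradient w.r.t.\ biases)} \\
\frac{\partial C}{\partial w^l_{jk}} &= a^{l-1}_k \, \delta^l_j && \text{(gradient w.r.t.\ weights)}
\end{align}
```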
This post is about conserving information after performing a mathematical operation on a set of numbers within a predefined range (we will consider the example of convolution on an image, where pixel intensities range between 0 and 255). In this article we explore the common confusion between mapping and normalization, see the difference between the two, and explain why a two-step normalization process is used to conserve the original information within the predefined range of the given number set. Click here to follow the article with code.
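The two-step process described above can be sketched as follows: first the values are mapped to the unit interval [0, 1], and only then rescaled to the target range 0-255. The function name and the example array are illustrative assumptions, not taken from the post itself.

```python
import numpy as np

def rescale_to_range(x, new_min=0.0, new_max=255.0):
    """Two-step normalization of an array to a predefined range.

    Step 1: min-max normalize the values to [0, 1].
    Step 2: map the unit-interval values to [new_min, new_max].
    """
    x = np.asarray(x, dtype=np.float64)
    old_min, old_max = x.min(), x.max()
    # Step 1: normalize to [0, 1]
    unit = (x - old_min) / (old_max - old_min)
    # Step 2: map to the target range
    return unit * (new_max - new_min) + new_min

# A hypothetical convolution output whose values fall outside 0-255:
out = np.array([[-30.0, 0.0], [128.0, 310.0]])
rescaled = rescale_to_range(out)  # all values now lie within [0, 255]
```

Applying the two steps together, rather than simply clipping, keeps the relative spacing of the original values intact while forcing them back into the displayable pixel range.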