In this section, we delve into the foundational aspects of neural networks, starting with the Perceptron, the simplest artificial neural network, introduced by Frank Rosenblatt in 1958. The Perceptron consists of a single neuron that computes a weighted sum of its inputs and passes it through a step function to produce a binary output. However, it has a significant limitation: it can only solve linearly separable problems. The classic counterexample is XOR, whose two classes cannot be separated by a single straight line, so no choice of weights lets a Perceptron learn it. To overcome this limitation, we introduce Multi-Layer Neural Networks, also known as Multi-Layer Perceptrons (MLPs) or Feedforward Neural Networks. These networks consist of an input layer, one or more hidden layers with non-linear activation functions, and an output layer, enabling them to handle non-linear problems effectively. The key advantage of MLPs, backed by the Universal Approximation Theorem, is that even a single hidden layer with enough units can approximate any continuous function on a compact domain to arbitrary accuracy. This flexibility opens the door to modeling complex patterns across many domains and marks the step from perceptrons to the multi-layer architectures at the heart of deep learning. The sketches below illustrate both ideas.
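To make the Perceptron concrete, here is a minimal sketch of a single neuron trained with the classic perceptron learning rule. The class name, the AND/XOR target vectors, and the hyperparameters (learning rate, epoch count) are illustrative choices for this sketch, not values prescribed by the text. Trained on AND, which is linearly separable, the rule converges; trained on XOR, it never can.

```python
import numpy as np

class Perceptron:
    """A single neuron: weighted sum of inputs followed by a step function."""

    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)  # one weight per input
        self.b = 0.0                 # bias term

    def predict(self, x):
        # Step activation: 1 if the weighted sum is non-negative, else 0.
        return 1 if np.dot(self.w, x) + self.b >= 0 else 0

    def fit(self, X, y, epochs=20, lr=0.1):
        # Perceptron learning rule: nudge weights by the prediction error.
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                self.w += lr * error * xi
                self.b += lr * error

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])  # linearly separable
y_xor = np.array([0, 1, 1, 0])  # not linearly separable

p = Perceptron(n_inputs=2)
p.fit(X, y_and)
print([p.predict(x) for x in X])  # converges to [0, 0, 0, 1]

q = Perceptron(n_inputs=2)
q.fit(X, y_xor)
print([q.predict(x) for x in X])  # no linear boundary fits XOR, so this never matches [0, 1, 1, 0]
```

A one-hidden-layer MLP, by contrast, can learn XOR. The sketch below trains such a network with plain full-batch gradient descent on a mean-squared-error loss; the hidden-layer width, learning rate, iteration count, and random initialization are again illustrative assumptions rather than anything specified above.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units, one sigmoid output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)     # hidden activations, shape (4, 8)
    out = sigmoid(h @ W2 + b2)   # predictions, shape (4, 1)

    # Backward pass (mean-squared-error loss, sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round().ravel())  # typically converges to [0, 1, 1, 0]
```

The only structural change from the Perceptron is the hidden layer with a non-linear activation, which is precisely what the Universal Approximation Theorem identifies as sufficient, given enough hidden units, to approximate continuous functions.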