Fully Connected Layer: The brute force layer of a Machine Learning model
Fully connected layers in a neural network are layers in which every input from one layer is connected to every activation unit of the next layer. In most popular machine learning models, the last few layers are fully connected layers, which compile the features extracted by earlier layers to form the final output. A fully connected layer is typically the second most time-consuming layer, after the convolution layer.
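In code, a fully connected layer is just a matrix-vector multiply followed by an activation function. Here is a minimal NumPy sketch; the layer sizes, random weights, and the choice of sigmoid activation are illustrative assumptions, not something prescribed by the article:

```python
import numpy as np

def fully_connected(x, W, b):
    """One fully connected layer: every input feeds every output unit."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid activation

rng = np.random.default_rng(0)
x = rng.standard_normal(3)       # 3 input features
W = rng.standard_normal((4, 3))  # 4 units, each connected to all 3 inputs
b = rng.standard_normal(4)       # one bias per unit

a = fully_connected(x, W, b)
print(a.shape)  # (4,)
```

Because every output unit depends on every input, the weight matrix has one row per output unit and one column per input, which is why this layer dominates the parameter count in many models.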
The diagram below clarifies the statement.
In the above model:
- The first (input) layer has 3 feature units, and the next hidden layer has 4 activation units.
- The 1’s in each layer are bias units.
- a01, a02, and a03 are the input values to the neural network; they are the features of the training example.
- The 4 activation units of the first hidden layer are connected to all 3 activation units of the second hidden layer; the weights/parameters connect the two layers.
So the activation units would be like this:
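The figure's equations are not reproduced here, but for 3 inputs plus a bias unit feeding 4 hidden units they take the following form. This is a sketch assuming a sigmoid activation g; the theta values are made-up placeholders:

```python
import math

def g(z):
    """Sigmoid activation, a common choice for these diagrams."""
    return 1.0 / (1.0 + math.exp(-z))

a01, a02, a03 = 0.5, -1.0, 2.0  # hypothetical input feature values

# One row of weights per hidden unit; theta[i][0] multiplies the bias unit (1).
theta = [
    [0.1, 0.4, -0.2, 0.3],
    [-0.3, 0.2, 0.5, -0.1],
    [0.2, -0.4, 0.1, 0.6],
    [0.0, 0.3, -0.5, 0.2],
]

# Each activation unit combines the bias unit and all three inputs.
a11 = g(theta[0][0] * 1 + theta[0][1] * a01 + theta[0][2] * a02 + theta[0][3] * a03)
a12 = g(theta[1][0] * 1 + theta[1][1] * a01 + theta[1][2] * a02 + theta[1][3] * a03)
a13 = g(theta[2][0] * 1 + theta[2][1] * a01 + theta[2][2] * a02 + theta[2][3] * a03)
a14 = g(theta[3][0] * 1 + theta[3][1] * a01 + theta[3][2] * a02 + theta[3][3] * a03)
```

Note that every activation unit uses all of the inputs, which is exactly what "fully connected" means.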
Theta00, theta01, etc. are the weights in the above picture. A conventional neural network is made up of only fully connected layers, whereas in a Convolutional Neural Network only the last layer, or the last few layers, are fully connected.
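A conventional network of only fully connected layers can be sketched by stacking the layer above. The sizes below follow the article's example (3 inputs, a 4-unit hidden layer, a 3-unit hidden layer) plus an assumed single output unit; the weights are random placeholders:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# Layer sizes: 3 inputs -> 4 hidden -> 3 hidden -> 1 output (output size assumed).
sizes = [3, 4, 3, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

def forward(x):
    """Forward pass through a network made of only fully connected layers."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a

out = forward(np.array([0.5, -1.0, 2.0]))
print(out.shape)  # (1,)
```

In a Convolutional Neural Network, only the final stage would look like this loop; the earlier stages would be convolution and pooling layers instead.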