How are filters made in a CNN?
First of all, feature maps are the output of the convolution after an activation function (e.g. ReLU or sigmoid) is applied; they are not the matrix that the image is convolved with. That matrix is usually called a filter (or kernel).
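To make the distinction concrete, here is a minimal sketch (assuming PyTorch, with a hand-made edge-detection filter chosen purely for illustration): the filter is the small matrix we convolve with, and the feature map is what comes out after the activation.

```python
import torch
import torch.nn.functional as F

image = torch.rand(1, 1, 8, 8)             # one 8x8 grayscale image (N, C, H, W)

# A hand-crafted 3x3 edge-detection filter, just for illustration.
edge_filter = torch.tensor([[[[-1., -1., -1.],
                              [-1.,  8., -1.],
                              [-1., -1., -1.]]]])

conv_out = F.conv2d(image, edge_filter)    # raw convolution output
feature_map = torch.relu(conv_out)         # feature map = output after the activation

print(feature_map.shape)                   # torch.Size([1, 1, 6, 6])
```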
The magical thing about CNNs is that we don't know in advance what the filters should look like for any given problem. The CNN works out what each filter should look like automatically, through the backpropagation procedure. Without getting heavy into the math of it all, essentially every time a training example (or a batch of examples) goes through the network, the values inside each filter get updated by some small amount. This small amount is determined by the derivative of the loss function with respect to each filter value. As each step of the training procedure is completed, the values inside each filter (if all goes well!) slowly converge towards values that minimise the loss function, thereby producing the best quality predictions.
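Here is a hedged sketch of that update step, again assuming PyTorch and using made-up data and an arbitrary loss purely for illustration; the point is only that `loss.backward()` computes the derivatives and `optimizer.step()` nudges every filter value by a small amount.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=4, kernel_size=3)  # 4 learnable 3x3 filters
optimizer = torch.optim.SGD(conv.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

images = torch.rand(8, 1, 28, 28)          # a dummy batch of 8 images
targets = torch.rand(8, 4, 26, 26)         # dummy targets matching the conv output shape

before = conv.weight.detach().clone()      # snapshot the filters before the update

optimizer.zero_grad()
loss = loss_fn(conv(images), targets)      # forward pass through the filters
loss.backward()                            # derivatives of the loss w.r.t. each filter value
optimizer.step()                           # nudge every filter value by a small amount

print((conv.weight.detach() - before).abs().max())  # the filter values have changed slightly
```

Repeat this over many batches and the filter values gradually settle into whatever patterns (edges, textures, and so on) best reduce the loss for the task at hand.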
For more information on the math, I encourage you to read the chapter on backprop in the free online book Neural Networks and Deep Learning, available here.
A simpler explanation is also provided here.