How to build a simple neural network using TensorFlow
Recipe Objective
How to build a simple neural network using TensorFlow?
To build a simple neural network we need a dataset, and here we are going to use the “fashion-mnist” dataset, which is already available in Keras. The dataset consists of two sets, a training set and a test set: the training set has 60,000 examples while the test set has 10,000. Put differently, we have 70,000 grayscale images of 28×28 size (28 pixels of width and 28 pixels of height), of which 60,000 are for training and 10,000 for testing. Each image shows one of 10 possible clothing types.
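For reference, the label indices 0 to 9 map to clothing types in the order documented for fashion-mnist. A small lookup list (class_names is our own helper name, not part of the dataset) will make the predictions at the end easier to read:

# Label index -> clothing type, in the order documented for fashion-mnist
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']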
Step 1 – Import library
import tensorflow as tf
from tensorflow import keras
Step 2 – Load Dataset
(x_train_data, y_train_data), (x_val_data, y_val_data) = keras.datasets.fashion_mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
32768/29515 [=================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26427392/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
8192/5148 [===============================================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4423680/4422102 [==============================] - 0s 0us/step
Here we load the dataset and split it into a training set and a validation set. x_train_data and y_train_data are used to build our model: x_train_data holds the image pixel data for 60,000 clothing items, while y_train_data holds the corresponding classes, i.e. the clothing type for each image in x_train_data. Similarly, x_val_data and y_val_data are used for testing, or validation, of our model.
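As a quick optional sanity check, printing the array shapes confirms the split described above:

# Expect (60000, 28, 28) (60000,) for training and (10000, 28, 28) (10000,) for validation
print(x_train_data.shape, y_train_data.shape)
print(x_val_data.shape, y_val_data.shape)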
Step 3 – Preprocess and create dataset
def preprocessing_function(x_new, y_new):
    # Scale pixel values from [0, 255] down to [0.0, 1.0]
    x_new = tf.cast(x_new, tf.float32) / 255.0
    # Cast the labels to int64 so tf.one_hot can consume them later
    y_new = tf.cast(y_new, tf.int64)
    return x_new, y_new
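To see the scaling in action, you can run the function on a single image from Step 2 (sample_img and sample_lbl are illustrative names):

# Pixel values should now lie in [0.0, 1.0] instead of [0, 255]
sample_img, sample_lbl = preprocessing_function(x_train_data[0], y_train_data[0])
print(sample_img.numpy().min(), sample_img.numpy().max())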
def func_creating_dataset(xs_data, ys_data, num_classes=10):
    # One-hot encode the labels, e.g. class 3 -> [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
    ys_data = tf.one_hot(ys_data, depth=num_classes)
    # Build a tf.data pipeline: preprocess, shuffle, and batch.
    # The original snippet was truncated after .map(); a shuffle over the
    # full set and a batch size of 128 are assumed here.
    return tf.data.Dataset.from_tensor_slices((xs_data, ys_data)) \
        .map(preprocessing_function) \
        .shuffle(len(ys_data)) \
        .batch(128)
Step 4 – Create the datasets
dataset_training = func_creating_dataset(x_train_data, y_train_data)
dataset_val = func_creating_dataset(x_val_data, y_val_data)
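Pulling a single batch out of the pipeline is an easy way to verify its output. Assuming the batch size of 128 used above, images should come out as (128, 28, 28) floats and labels as one-hot (128, 10) vectors:

# Peek at one batch to confirm the pipeline's output shapes
for images_batch, labels_batch in dataset_training.take(1):
    print(images_batch.shape, labels_batch.shape)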
Step 5 – Build the model
My_model = keras.Sequential([
    keras.layers.Reshape(target_shape=(28 * 28,), input_shape=(28, 28)),
    keras.layers.Dense(units=256, activation='relu'),
    keras.layers.Dense(units=192, activation='relu'),
    keras.layers.Dense(units=128, activation='relu'),
    keras.layers.Dense(units=10, activation='softmax')
])
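Optionally, My_model.summary() prints each layer's output shape and parameter count. The Reshape layer flattens every 28×28 image into a 784-dimensional vector, and the four Dense layers together hold roughly 276K trainable parameters (the first Dense layer alone has 784 * 256 + 256 = 200,960):

# Inspect the architecture and parameter counts
My_model.summary()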
Step 6 – Train the model
My_model.compile(optimizer='adam',
                 # The final Dense layer already applies softmax,
                 # so the loss must use from_logits=False
                 loss=tf.losses.CategoricalCrossentropy(from_logits=False),
                 metrics=['accuracy'])
history = My_model.fit(
    dataset_training.repeat(),
    epochs=10,
    steps_per_epoch=500,
    validation_data=dataset_val.repeat(),
    validation_steps=2
)
Epoch 1/10
500/500 [==============================] - 8s 12ms/step - loss: 0.6743 - accuracy: 0.7643 - val_loss: 0.4322 - val_accuracy: 0.8633
Epoch 2/10
500/500 [==============================] - 6s 11ms/step - loss: 0.3679 - accuracy: 0.8660 - val_loss: 0.3940 - val_accuracy: 0.8555
Epoch 3/10
500/500 [==============================] - 6s 12ms/step - loss: 0.3317 - accuracy: 0.8773 - val_loss: 0.3207 - val_accuracy: 0.9023
Epoch 4/10
500/500 [==============================] - 6s 12ms/step - loss: 0.3024 - accuracy: 0.8877 - val_loss: 0.3989 - val_accuracy: 0.8672
Epoch 5/10
500/500 [==============================] - 6s 12ms/step - loss: 0.2863 - accuracy: 0.8916 - val_loss: 0.3594 - val_accuracy: 0.8750
Epoch 6/10
500/500 [==============================] - 6s 12ms/step - loss: 0.2766 - accuracy: 0.8983 - val_loss: 0.4284 - val_accuracy: 0.8438
Epoch 7/10
500/500 [==============================] - 6s 11ms/step - loss: 0.2571 - accuracy: 0.9025 - val_loss: 0.2932 - val_accuracy: 0.8711
Epoch 8/10
500/500 [==============================] - 6s 12ms/step - loss: 0.2456 - accuracy: 0.9074 - val_loss: 0.1954 - val_accuracy: 0.9297
Epoch 9/10
500/500 [==============================] - 6s 12ms/step - loss: 0.2319 - accuracy: 0.9111 - val_loss: 0.3370 - val_accuracy: 0.8672
Epoch 10/10
500/500 [==============================] - 6s 11ms/step - loss: 0.2229 - accuracy: 0.9162 - val_loss: 0.3033 - val_accuracy: 0.9023
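A minimal sketch for visualizing the curves recorded in history.history (this assumes matplotlib is installed; it is not otherwise required by the recipe):

import matplotlib.pyplot as plt

# Plot training vs. validation accuracy per epoch
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()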
Step 7 – Make predictions
Make_predictions = My_model.predict(dataset_val)
Make_predictions
array([[8.21499402e-09, 1.98965409e-07, 1.17530963e-09, ..., 2.39849047e-04, 3.75632886e-10, 9.99724329e-01], [4.75593419e-11, 1.32912683e-13, 1.02271674e-12, ..., 3.79424998e-11, 1.00000000e+00, 1.16047255e-10], [6.36480451e-08, 1.22093002e-06, 1.70876291e-08, ..., 2.29782687e-04, 4.84631757e-09, 9.99749601e-01], ..., [3.27635678e-13, 7.38353267e-11, 6.17022167e-14, ..., 2.07368976e-06, 5.78794384e-15, 9.99997735e-01], [3.67982114e-15, 5.02441912e-18, 2.04865252e-15, ..., 4.30711580e-18, 1.00000000e+00, 3.65123112e-20], [3.21948218e-10, 9.99999881e-01, 1.29861441e-10, ..., 6.70993209e-12, 9.32399991e-11, 6.58160956e-11]], dtype=float32)
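Each row of the array above is a softmax probability distribution over the 10 classes, so taking the argmax of every row gives the predicted class index. Note that because func_creating_dataset shuffles the data, these rows do not line up with the original order of y_val_data; build an unshuffled dataset if you need that alignment. A short sketch, reusing the class_names list defined at the start:

import numpy as np

# Most probable class index for each validation image
predicted_classes = np.argmax(Make_predictions, axis=1)
print(predicted_classes[:10])
# Map a few indices back to human-readable clothing types
print([class_names[i] for i in predicted_classes[:5]])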