A Comprehensive Guide to Neural Networks in Deep Q-learning

The deep Q-learning algorithm employs a deep neural network to approximate Q-values. It works by feeding the current state into the neural network, which outputs a Q-value for every possible action.

There are many disruptive technologies in artificial intelligence, including self-driving cars, language models, gaming algorithms, and data processing systems. They all have something in common: reinforcement learning (RL). Currently one of the most exciting areas of machine learning, RL is an optimization method that trains a machine to perform a task using experience or feedback from the world around it, improving its performance as it goes. In this article, we explore reinforcement learning with an emphasis on deep Q-learning, a popular method heavily used in RL.

Understanding reinforcement learning

Reinforcement learning is a branch of machine learning that focuses on taking actions: an agent trains itself within an environment through reward and punishment mechanisms.

Based on its observations, the algorithm identifies the best possible action or path to take in a particular situation to gain the most rewards and the fewest punishments. Rewards and punishments thus act as signals for positive and negative behavior.

An agent (or agents) is created that can observe and analyze its surroundings, take actions, and interact with the environment. In contrast to supervised and unsupervised learning, reinforcement learning does not require labeled inputs or outputs.

Nor do its actions need to be explicitly corrected for it to become highly efficient. A Markov decision process (MDP) underlies almost every reinforcement learning problem.

A crucial point to note is that, within an environment, each state depends on the state that preceded it, which in turn depends on the one before that. The present state of the environment therefore summarizes the information gathered from previous states.

Every agent must therefore act and earn rewards from the environment through its actions. In a given state, an MDP lets the agent decide which action maximizes its expected reward.

A policy specifies, for each state, the probability with which each action will be taken at that point in time. The aim of solving the MDP is to find the best policy.

Terminologies in reinforcement learning

  • Agent: Makes all the decisions and learns everything.
  • Environment: The physical environment in which an agent learns and decides what actions to take.
  • Action: A list of actions agents can perform.
  • State: A description of the current environment of the agent.
  • Reward: Whenever the agent takes a specific action, the environment rewards it. The environment usually gives feedback in the form of a scalar value.
  • Policy: The agent prepares a strategy (decision-making) by mapping situations to actions.
  • Value function: The expected total reward the agent collects by starting from a state and following the policy from then on.
  • Model: The agent’s representation of the environment, i.e., the probability distributions over next states (and rewards) for each state-action pair. Model-free agents learn without one.
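To make these terms concrete, here is a minimal sketch of the agent-environment loop in Python. The GridEnvironment class, the random policy, and all the numbers in it are made up purely for illustration; they are not part of any particular library.

```python
# A minimal sketch of the agent-environment loop using the terms above.
# GridEnvironment and the random policy are hypothetical stand-ins, not a real library.
import random

class GridEnvironment:
    """Toy environment: the agent walks along positions 0..4 and is rewarded at the goal."""
    def __init__(self):
        self.state = 0          # current state: the agent's position

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 4 else 0.0   # scalar feedback from the environment
        done = self.state == 4                     # terminal state ends the episode
        return self.state, reward, done

def policy(state):
    """A (poor) policy: pick an action at random, ignoring the state."""
    return random.choice([-1, +1])

env = GridEnvironment()
state, done, total_reward = env.state, False, 0.0
while not done:
    action = policy(state)                  # the agent decides based on its policy
    state, reward, done = env.step(action)  # the environment returns the next state and reward
    total_reward += reward
print("Episode finished with total reward:", total_reward)
```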

What is the Q-learning algorithm?

Q-learning is a reinforcement learning algorithm that determines the best next possible action based on the current state. It may choose actions at random while exploring, but its aim is to maximize the reward it collects.

The Q-learning algorithm is a model-free, off-policy reinforcement learning method that finds an appropriate action according to the agent’s current state. Based on the agent’s position within the environment, it decides what the next action will be.

Given the current state, its goal is to determine the most appropriate course of action. To accomplish this, it can act outside the policy it is learning, for example by exploring at random. Because it learns the value of the optimal policy while following a different behavior policy, it is called off-policy.

Model-free means the agent does not build an explicit model of how the environment responds. Rather than predicting transitions and rewards in advance, it learns through trial and error.

An advertisement recommendation system is a good example of where Q-learning can help. A conventional recommendation system shows ads according to past browsing history or previous purchases: after you buy a mobile phone, for instance, you keep receiving recommendations for new models from different brands. A Q-learning-based system can instead learn, through rewards such as clicks, which recommendations actually pay off over time.

Important terms in Q-learning

  • States: The State S indicates a particular agent’s position in a given environment.
  • Action: Action A describes what the agent takes when it is in a given state.
  • Rewards: Agents will receive positive or negative rewards based on their actions.
  • Episodes: An episode is the sequence of steps that ends when the agent reaches a terminal state, after which it can take no further actions.
  • Q-values: A measure of the quality of an action A performed in a particular state S, written Q(S, A).
  • Temporal difference: A formula that updates the Q-value using the current state and action, the reward received, and the next state.
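As a rough sketch of how these pieces fit together, the temporal-difference update for tabular Q-learning can be written in a few lines of plain Python. The action ids, learning rate, and discount factor below are arbitrary illustrative values:

```python
# A minimal sketch of the tabular Q-learning (temporal-difference) update,
# using a dictionary as the Q-table. The transition (next_state, reward) is assumed
# to come from an interaction loop like the one sketched earlier.
from collections import defaultdict

ACTIONS = [0, 1]            # hypothetical action ids
alpha, gamma = 0.1, 0.99    # learning rate and discount factor

Q = defaultdict(float)      # Q[(state, action)] -> estimated Q-value

def td_update(state, action, reward, next_state, done):
    """Move Q(S, A) toward the temporal-difference target."""
    best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
    td_target = reward + gamma * best_next                   # reward + discounted value of next state
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])

# Example: one observed transition
td_update(state=0, action=1, reward=0.0, next_state=1, done=False)
print(Q[(0, 1)])
```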

What is deep Q-learning?

Deep Q-learning does away with the lookup table by combining Q-learning with a neural network to approximate the optimal Q-value function. The algorithm takes the state as its input and produces the optimal Q-value for every possible action as its output.

The following image illustrates the differences between Q-learning and deep Q-learning:

[Image: Comparison of Q-learning and deep Q-learning]

In deep Q-learning, past experiences are stored in memory and the next action is chosen from the Q-network’s output: the Q-network calculates the Q-values for the current state s_t. A separate target network (another neural network) determines the Q-value for the next state s_t+1, which stabilizes training.

In addition, the target network’s weights are only copied from the Q-network periodically rather than at every iteration, which prevents the training targets from changing abruptly.

The deep Q-learning algorithm relies on both neural networks and Q-learning. Each experience is stored in memory as a tuple of the form <State, Next State, Action, Reward>.

Sampling random batches of previous data increases the stability of neural network training. Deep Q-learning therefore uses another concept to boost the agent’s performance: experience replay, the storage and reuse of past experiences.

The target network uses the replayed experiences to determine the target Q-value, while the Q-network uses them for training. The loss is the squared difference between the target and predicted Q-values, as given below:

Loss = ( r + γ · max_a' Q_target(s', a') − Q(s, a) )²
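As a toy illustration of this loss (plain Python, with made-up numbers rather than real network outputs), the target is the reward plus the discounted best Q-value the target network assigns to the next state, and the loss is the squared gap to the Q-network’s prediction:

```python
# A toy illustration of the loss above with made-up numbers (not a real training step).
gamma = 0.99                       # discount factor

reward = 1.0                       # r, observed after taking the action
next_q_values = [0.2, 0.8, 0.5]    # target network's Q-values for the next state
predicted_q = 1.4                  # Q-network's Q-value for the chosen action

target_q = reward + gamma * max(next_q_values)   # r + gamma * max_a' Q_target(s', a')
loss = (target_q - predicted_q) ** 2             # squared difference
print(target_q, loss)
```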

To understand deep Q-learning better, we need to break it down into these steps:

  1. First, the agent is informed about the environmental state.
  2. The agent computes the Q-values of the possible actions for the provided state.
  3. The agent selects the action with the highest Q-value, i.e., the one expected to bring higher rewards and lower punishments.
  4. The environment then returns the resulting reward and the next state.
  5. The agent keeps track of previous experiences from the action in an experience replay memory.
  6. The agent uses experience replay memory to train the networks.

Steps 2 through 6 are repeated continually as the agent moves through states.
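A minimal sketch of the experience replay memory used in steps 5 and 6 might look like the following; the capacity, batch size, and the dummy transition in the usage example are arbitrary choices, not values prescribed by the algorithm:

```python
# A minimal experience replay memory (steps 5 and 6 above).
# Capacity and batch size are arbitrary illustrative values.
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class ReplayMemory:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)    # old experiences are dropped automatically

    def store(self, *args):
        self.buffer.append(Transition(*args))   # step 5: remember the experience

    def sample(self, batch_size=32):
        return random.sample(list(self.buffer), batch_size)  # step 6: random batch for training

    def __len__(self):
        return len(self.buffer)

# Usage: store a transition after each environment step, then sample once enough are collected.
memory = ReplayMemory()
memory.store([0.0, 1.0], 1, 0.5, [0.1, 0.9], False)
if len(memory) >= 1:
    batch = memory.sample(batch_size=1)
    print(batch[0].reward)
```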

Why ‘deep’ Q-learning?

The Q-learning algorithm builds a Q-table that acts as a cheat sheet for the agent in a simple but quite powerful manner. With it, the agent can look up exactly which action to take.

Consider an environment with 10,000 states and 1,000 actions per state. Wouldn’t the cheat sheet be too long? The table would have 10 million cells, and things would quickly spiral out of control. Worse, a table offers no way to estimate the Q-value of new states from previously explored ones.

Two main problems would arise:

  1. As the number of states increases, the amount of memory required to save and update the table would expand.

  2. It would become impossible to explore every state and build the required Q-table within a reasonable amount of time.

This is where a neural network helps: in deep Q-learning, it approximates the Q-value function. The state is taken as the input, and Q-values for all possible actions are generated as the output.

The following steps are involved in reinforcement learning using deep Q-learning networks (DQNs):

  • Past experiences are stored in a replay memory.
  • The Q-network decides the next action by taking the maximum of its outputs.
  • The loss function is the mean squared error between the target Q-value Q* and the predicted Q-value.

To prevent the network’s own behavior from skewing the distribution of states, actions, rewards, and next states it learns from, deep Q-learning uses experience replay and learns from small random batches. Moreover, the agent doesn’t need to be trained after every single step.
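One way to picture this schedule is the sketch below: every transition is stored, but a training update (deliberately left as a counting stub here) only runs every few steps on a random batch. The buffer layout, batch size, and training interval are illustrative assumptions:

```python
# Sketch of the update schedule implied above: the agent stores every transition,
# but only trains on a small random batch every few steps (values are illustrative).
import random

memory = []                 # replay memory (a plain list here)
BATCH_SIZE, TRAIN_EVERY = 32, 4
updates = 0

def train_on_batch(batch):
    # Placeholder for the gradient step described later; here we just count updates.
    global updates
    updates += 1

for step in range(1, 1001):
    transition = (step % 7, 1, 0.0, (step + 1) % 7, False)   # dummy (s, a, r, s', done)
    memory.append(transition)                                # store after every step
    if step % TRAIN_EVERY == 0 and len(memory) >= BATCH_SIZE:
        batch = random.sample(memory, BATCH_SIZE)            # random, decorrelated batch
        train_on_batch(batch)

print("environment steps:", 1000, "training updates:", updates)
```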

How deep Q-networks function

[Image: How a deep Q-network functions]

A deep Q-network functions via the following steps:

Initialize main and target neural networks

The Q-table implementation is the prime difference between Q-learning and deep Q-learning. In the latter, neural networks replace the regular Q-table.

Instead of mapping each state-action pair to a Q-value, the neural network maps an input state to a set of (action, Q-value) pairs. A distinctive feature of deep Q-learning is that it uses two neural networks in its learning process.

Despite having the same architecture, these networks differ in their weights. Every N steps, the weights of the main network are copied to the target network. Using both networks helps the algorithm learn more effectively and stabilizes the learning process.
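A minimal sketch of this two-network setup in PyTorch could look as follows; the state size, action count, hidden layer width, and copy interval are illustrative assumptions, not values fixed by the algorithm:

```python
# Main and target networks with identical architecture but separately held weights.
# State size (4), action count (2), and the copy interval N are illustrative assumptions.
import copy
import torch
import torch.nn as nn

def build_q_network(state_dim=4, n_actions=2):
    # Input: a state vector. Output: one Q-value per action.
    return nn.Sequential(
        nn.Linear(state_dim, 64),
        nn.ReLU(),
        nn.Linear(64, n_actions),
    )

main_net = build_q_network()
target_net = copy.deepcopy(main_net)   # same architecture, same initial weights
target_net.eval()                      # the target network is not trained directly

N = 100  # copy interval

def maybe_sync(step):
    # Every N steps, overwrite the target network's weights with the main network's.
    if step % N == 0:
        target_net.load_state_dict(main_net.state_dict())
```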

How to map states to (action, Q-value) pairs

Both the main and target neural networks map an input state to an (action, Q-value) pair for every possible action. Each output node (representing an action) holds a floating-point value: that action’s Q-value. Because the output nodes do not represent a probability distribution, they do not need to add up to 1.

For example, the network might output a Q-value of 8 for one action and 5 for the other.
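A tiny snippet makes the point about the output layout, reusing the 8 and 5 from the example: the outputs are plain Q-values, not probabilities, and the agent simply picks the largest one.

```python
# Illustration of the output layout: one floating-point Q-value per action.
# The two action names and the values 8 and 5 are just the example numbers from the text.
q_values = {"action_0": 8.0, "action_1": 5.0}   # not probabilities; they need not sum to 1
best_action = max(q_values, key=q_values.get)   # the highest Q-value wins
print(best_action)                              # -> "action_0"
```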

Choose an action using the Epsilon-Greedy exploration strategy

With the Epsilon-Greedy exploration strategy, an agent selects an action at random with probability epsilon, and otherwise takes its best-known action with probability 1 - epsilon.

What are the best-known actions from a network?

Both the target and main models map input states to output actions, and each output corresponds to the model’s predicted Q-value for that action. The action with the largest predicted Q-value is the best-known action in that state.
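Put together, an Epsilon-Greedy selection step might be sketched like this in PyTorch; the select_action helper and the throwaway network in the usage lines are hypothetical, and epsilon would normally be decayed over the course of training:

```python
# Epsilon-greedy action selection: explore with probability epsilon,
# otherwise take the best-known action (the argmax of the predicted Q-values).
import random
import torch

def select_action(q_net, state, n_actions, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(n_actions)       # explore: random action
    with torch.no_grad():
        q_values = q_net(state.unsqueeze(0))     # predicted Q-values for this state
    return int(q_values.argmax(dim=1).item())    # exploit: best-known action

# Example with a throwaway network (4-dimensional state, 2 actions):
net = torch.nn.Sequential(torch.nn.Linear(4, 2))
print(select_action(net, torch.zeros(4), n_actions=2))
```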

Once the agent chooses an action, it performs it and then updates the main and target networks using the Bellman equation. To carry out these updates, a deep Q-learning agent uses experience replay, which lets it learn from its accumulated knowledge of the environment.

Essentially, every step involves sampling and training on a small batch, for example 4 past experiences. After every 100 such steps, the weights of the main network are copied over to the target network.
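A hedged sketch of one such training step in PyTorch is shown below. The batch of 4 and the 100-step copy interval follow the description above, while the network sizes, learning rate, and the random dummy data standing in for sampled experiences are illustrative assumptions:

```python
# One DQN training step on a batch of 4 transitions (dummy random data for illustration),
# plus the periodic weight copy after every 100 steps.
import copy
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 2, 0.99
main_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = copy.deepcopy(main_net)
optimizer = torch.optim.Adam(main_net.parameters(), lr=1e-3)

def train_step(states, actions, rewards, next_states, dones):
    # Predicted Q-values for the actions that were actually taken
    predicted_q = main_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bellman targets from the target network: r + gamma * max_a' Q_target(s', a')
    with torch.no_grad():
        next_q = target_net(next_states).max(dim=1).values
        target_q = rewards + gamma * next_q * (1.0 - dones)
    loss = nn.functional.mse_loss(predicted_q, target_q)   # squared difference, averaged
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

for step in range(1, 201):
    # In a real agent these would come from the replay memory; here they are random.
    states, next_states = torch.randn(4, state_dim), torch.randn(4, state_dim)
    actions = torch.randint(0, n_actions, (4,))
    rewards, dones = torch.randn(4), torch.zeros(4)
    train_step(states, actions, rewards, next_states, dones)
    if step % 100 == 0:                                     # every 100 steps...
        target_net.load_state_dict(main_net.state_dict())   # ...refresh the target network
```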

Challenges in deep Q-learning

[Image: Challenges in deep Q-learning]

Despite reinforcement learning’s popularity, there are challenges. The following are some of the more common ones that need to be addressed.

Sample efficiency

Finding an optimal and efficient way of learning from a limited number of samples is one of the critical challenges of reinforcement learning.

Sample efficiency describes how well an algorithm makes use of the samples it is given; equivalently, it is the amount of experience an algorithm must accumulate during training before it performs well. RL systems typically need a great deal of experience and time to become efficient. AlphaGo Zero, for example, played millions of games of Go against itself before it surpassed the version that had beaten the world champion.

Reproducibility issues

When trying to replicate DeepMind’s AlphaZero, Facebook researchers said: “When combined with the unavailability of code and models, the result is that the approach is very difficult, if not impossible, to reproduce, study, improve upon, and extend.”

Neural networks are so difficult to understand that even their creators don’t fully comprehend them. They are also becoming increasingly complex, supported by large datasets, powerful computers, and many hours of training.

Recent years have seen growing efforts in AI to resolve the so-called reproducibility crisis, which is fueled in part by researchers selectively reporting algorithm runs or producing idealized results with large amounts of GPU compute.

Performing in real-life scenarios

RL agents typically learn inside an artificial environment. However, whereas a manufactured environment allows agents to fail and learn, a real-life scenario does not.

Real-life environments rarely let an agent observe enough of its surroundings to settle on a winning strategy from its past training data. On top of that, behavior learned in simulation often fails to carry over to the real world, a problem known as the reality gap. Researchers use a variety of techniques to address it, including:

  • Mimicking the desired behavior
  • Simulation-based learning
  • Demonstrating and designing better algorithms
  • Providing rewards and punishments for the agents

Ideally, the agent is trained to favor the correct actions by being rewarded for the right ones and punished for the incorrect ones.

Sparse rewards

There are some limitations to the reward technique discussed earlier. When rewards are sparsely distributed in the environment, the agent may encounter them so rarely that it never learns which situations and actions it should reinforce.

This can also occur when the environment fails to provide rewards on time. For instance, in many cases, green flags are only shown when an agent is close enough to a target.

Offline reinforcement

Whereas in real-time reinforcement learning the agent learns from new experiences as they happen, offline reinforcement learning relies on logged data collected without further interaction with the environment. As a result, AI agents no longer need to be trained through repeated live interaction in order to scale.

The challenge is that if a model trained on an existing dataset behaves differently from the agent that collected the data, the rewards of that new behavior cannot be observed in the logs.

The future of deep Q-learning

Deep learning has proved to be capable of simulating real-world environments. As a result, it has the potential to solve a wide range of problems that machines have had difficulty tackling up until now. Combining deep learning with RL will help us solve these problems much faster.

There are many future applications of reinforcement learning, including:

  • Automated trading of stocks and options
  • Self-driving cars
  • Smart traffic signals
  • Automated robots
  • Automated factories

Conclusion

Reinforcement learning has made significant progress recently but there’s still a long way to go. It will take dedicated researchers, engineers, and data scientists to ensure that RL achieves the same levels of success as deep learning.

Reinforcement learning is fascinating because it can work without labeled data, maximizing rewards based on the decisions it makes. As a result, unprecedented performance is achievable in environments where a desired outcome exists but there is no clear way to accomplish it.