New version of the agent, with 360-degree sensors and speed control.
Old version of the agent, with a 180-degree forward-facing camera.
Overview
I have implemented a reinforcement learning (RL) simulation of a basic self-driving car that navigates a three-lane road, avoiding collisions with other cars, using a Deep Q-Network (DQN). The agent car learns to make decisions (stay, move left, move right) based on a state representation simulating a 360-degree sensor array (earlier versions used a 180-degree forward-facing camera). The simulation uses Pygame for visualisation and PyTorch for the network. Key features include:
• 360-Degree Sensor Simulation: The agent detects the closest car in each lane ahead, behind, and to the side (within 300 pixels), with normalised distances and speeds.
• Speed Control: The agent can speed up or slow down, controlling both its position in the lane and how quickly it moves.
• State Normalisation: Distances are scaled to [0, 1] for more stable DQN learning.
• Side-Collision Penalty: Penalises unsafe lane changes into nearby cars.
• Lane-Switching Cooldown: Limits rapid lane changes for more realistic movement.
• Reduced Spawn Rate: Balances the spawning of other cars so the environment stays navigable.
• Visualisation: Displays the episode number, reward, and sensor lines to detected cars in Pygame.
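The sensor simulation and state normalisation described above could be sketched roughly like this. This is a minimal illustration, not the project's actual code: the function names, the per-lane car lists, and the convention that "ahead" means a smaller y-coordinate are all assumptions; only the 300-pixel range and the [0, 1] scaling come from the feature list.

```python
# Minimal sketch of the 360-degree sensor state. The 300-pixel range and
# [0, 1] normalisation match the README; everything else is illustrative.

SENSOR_RANGE = 300.0  # pixels

def sense_lane(agent_y, cars_in_lane):
    """Return normalised distances to the closest car ahead and behind
    in one lane, each scaled to [0, 1] (1.0 = nothing within range)."""
    # Assumes smaller y = further ahead, as in a typical Pygame layout.
    ahead = [agent_y - y for y in cars_in_lane if 0 < agent_y - y <= SENSOR_RANGE]
    behind = [y - agent_y for y in cars_in_lane if 0 < y - agent_y <= SENSOR_RANGE]
    norm = lambda ds: min(ds) / SENSOR_RANGE if ds else 1.0
    return norm(ahead), norm(behind)

def build_state(agent_y, lanes):
    """Concatenate the per-lane readings into a flat state vector for the DQN."""
    state = []
    for cars in lanes:
        state.extend(sense_lane(agent_y, cars))
    return state
```

Keeping every entry in [0, 1], with 1.0 meaning "nothing detected", gives the network a bounded, consistently scaled input regardless of how far away the other cars actually are.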
This project is designed to run on either CPU or GPU.
Loss Function and Optimiser: Uses MSE loss and the Adam optimiser (lr=0.001).
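A minimal sketch of what the DQN update with MSE loss and Adam (lr=0.001) might look like in PyTorch. The network sizes, state/action dimensions, discount factor, and batch handling here are assumptions for illustration; only the loss function, optimiser, and learning rate come from the README.

```python
import torch
import torch.nn as nn

# Runs on either CPU or GPU, as the README notes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative hyperparameters (not the project's actual values,
# except lr=0.001 from the README).
STATE_DIM, N_ACTIONS, GAMMA = 6, 3, 0.99

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
).to(device)
optimiser = torch.optim.Adam(q_net.parameters(), lr=0.001)
loss_fn = nn.MSELoss()

def train_step(states, actions, rewards, next_states, dones):
    """One gradient step on a batch sampled from a replay buffer."""
    states, next_states = states.to(device), next_states.to(device)
    actions, rewards, dones = actions.to(device), rewards.to(device), dones.to(device)
    # Q-values of the actions actually taken.
    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    # Bootstrapped TD target; no gradient flows through it.
    with torch.no_grad():
        target = rewards + GAMMA * q_net(next_states).max(1).values * (1 - dones)
    loss = loss_fn(q, target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

A production version would typically also use a separate target network and epsilon-greedy exploration, which are omitted here for brevity.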
Improvements
Currently, the model only detects the closest car in each lane and typically waits until the last moment to move out of the way. This occasionally traps the agent between cars and causes a collision, so I plan to improve how far ahead the model plans. To make the environment more challenging, I could also allow the other cars to change speed and switch lanes, simulating more realistic traffic.