Get Started with Convolutional Neural Network (CNN) + DQN | Deep Q-Learning Example
Science & Technology
The Deep Q-Networks (DQN) used in Part 1 (Deep Q-Learning Explained: • Deep Q-Learning/Deep Q... ) were straightforward neural networks with a hidden layer and an output layer. That architecture works for simple environments. However, for complex environments, such as Atari Pong, where the agent learns from the environment visually, we need to modify our DQNs with convolutional layers. In this tutorial, we'll continue with the very simple FrozenLake-v1 4x4 environment; however, we'll modify the inputs so that they are treated as images.
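As a rough illustration of the idea described above, here is a minimal sketch (assuming PyTorch) of a DQN whose input is an image-like state: convolutional layers extract spatial features, then fully connected layers output one Q-value per action. The layer sizes and the FrozenLake state encoding below are illustrative assumptions, not necessarily the video's exact architecture.

```python
import torch
import torch.nn as nn

class CnnDQN(nn.Module):
    """DQN with a convolutional front end for image-like states (sketch)."""

    def __init__(self, in_channels: int, num_actions: int):
        super().__init__()
        # Convolutional feature extractor for the image-like input
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        # LazyLinear infers the flattened feature size on the first forward pass
        self.head = nn.Sequential(
            nn.LazyLinear(64),
            nn.ReLU(),
            nn.Linear(64, num_actions),  # one Q-value per action
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(x))

# FrozenLake 4x4 state encoded as a 1-channel 4x4 "image"
# with the agent's position marked (an assumed encoding).
net = CnnDQN(in_channels=1, num_actions=4)
state = torch.zeros(1, 1, 4, 4)  # batch of one single-channel 4x4 image
state[0, 0, 2, 1] = 1.0          # mark agent position (row 2, col 1)
q_values = net(state)            # shape: (1, 4), one Q-value per action
print(q_values.shape)            # torch.Size([1, 4])
```

Training then proceeds as in a regular DQN (replay buffer, target network); only the network's input format and front-end layers change.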
Buy Me a Coffee: www.buymeacoffee.com/johnnycode
GitHub Repo: github.com/johnnycode8/gym_so...
CNN Explainer: poloclub.github.io/cnn-explai...
Reinforcement Learning Playlist: • Gymnasium (Deep) Reinf...
Comments: 8
Thanks! Very nice to have CNN introduced.
@johnnycode
2 months ago
Glad you’re enjoying the topics!
Can you please do a series on continuous state and action spaces?
Can I use this code for solving CartPole problems using DQN and CNN?
@johnnycode
3 months ago
The CartPole environment doesn't return its state as an image, so you can't pass the state into a CNN. A regular DQN should work, though.
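To illustrate the reply above: CartPole's state is a vector of 4 numbers (cart position and velocity, pole angle and angular velocity), so a plain fully connected DQN fits and no convolutional layers are needed. The layer sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Fully connected Q-network for CartPole's 4-value state vector (sketch)
dqn = nn.Sequential(
    nn.Linear(4, 64),   # 4 state values in
    nn.ReLU(),
    nn.Linear(64, 2),   # 2 actions out: push left / push right
)

state = torch.tensor([[0.0, 0.1, 0.02, -0.1]])  # example CartPole observation
q_values = dqn(state)                            # one Q-value per action
print(q_values.shape)                            # torch.Size([1, 2])
```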
Hello sir, I am working on my college project for a self-driving car, so for the CNN part I am thinking of using YOLO. Can you guide me through the whole process, please 🙏? Also, for a self-driving car, how am I supposed to create the environment in OpenAI Gym?
@johnnycode
10 days ago
I don't know much about self-driving cars specifically. You can watch my series on training Flappy Bird to get some ideas: kzread.info/dash/bejne/k6aGma2znLzZZNo.html
@manaschintawar273
10 days ago
@johnnycode Okay, thank you, sir 😄