# HackyHour Würzburg 35
- When: May 23rd, 2019 at 5:00pm
- Where: Center for Computational and Theoretical Biology (CCTB)
- Info: HackyHour Website
## Topic Suggestions
Add your :+1: to the end of a line you are interested in
- Say it challenge (hacker.org)
- Organizing a fun coding competition (fbctf?)
- Make the computer play computer games (Reinforcement Learning, gym)
- Text Mining / Fact Extraction / Knowledge Graphs / WikiData
- Proteomics
- Natural language processing
## Ideas for another time
- Revisit GANs, maybe with this tutorial
- Quantum computing (for the very curious)
- AutoML (PennAI)
- GNU Guix
## Participants
- Markus :pizza:
- Matthias :sushi:
- Franzi :sunflower: :sushi:
- Michaela
## Cross Links
### Manual solution
```python
import gym

env = gym.make('CartPole-v0')
for i_episode in range(20):
    observation = env.reset()
    last_wiggle = 0
    for t in range(1000):
        env.render()
        # print(observation)
        # observation[2] is the pole angle; action 1 pushes the cart right,
        # action 0 pushes it left.
        if observation[2] > 0.1:    # pole leaning right -> push right
            action = 1
            last_wiggle = 1
        elif observation[2] < -0.1:  # pole leaning left -> push left
            action = 0
            last_wiggle = 0
        else:                        # roughly upright -> alternate pushes
            last_wiggle = 1 - last_wiggle
            action = last_wiggle
        observation, reward, done, info = env.step(action)
        if done:
            print("Episode finished after {} timesteps".format(t + 1))
            break
env.close()
```
### Link to the DQN approach I found
- Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)
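The DQN in the tutorial above approximates Q-values with a neural network; the underlying Q-learning update is easier to see in tabular form. The toy chain MDP below is a made-up illustration (not from the tutorial, and not CartPole): four states in a row, where moving right eventually reaches a terminal state with reward 1.

```python
import random

# Toy chain MDP (invented for illustration): states 0..3, action 1 moves
# right, action 0 moves left (bounded at 0). State 3 is terminal, reward 1.
N_STATES, ACTIONS = 4, (0, 1)
ALPHA, GAMMA = 0.5, 0.9

def step(state, action):
    next_state = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)

for episode in range(500):
    state = 0
    for _ in range(50):  # cap episode length
        # Q-learning is off-policy, so a purely random behavior policy works
        action = random.choice(ACTIONS)
        next_state, reward, done = step(state, action)
        # Core update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state
        if done:
            break

# Greedy policy per non-terminal state; "move right" (1) should win everywhere.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
# -> [1, 1, 1]
```

DQN replaces the Q table with a network trained toward the same target `r + gamma * max Q(s', ·)`, adding an experience replay buffer and a target network for stability.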