I Taught an AI to Play Street Fighter 6 by Watching Me (Behavior Cloning...

In this video, I walk through my entire process of teaching an artificial intelligence to play fighting games by watching my gameplay. I recorded myself playing as Ryu against a level-5 CPU Ken, then used imitation learning with Stable Baselines 3 to train a neural network for 22 epochs to copy my playstyle.
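To make that pipeline concrete, here is a minimal Behavior Cloning sketch using the `imitation` library, which builds on Stable Baselines 3. The environment id, the demo file name, and its array keys are placeholders rather than the repository's actual names, and it assumes a recent version of `imitation`:

```python
# Minimal Behavior Cloning sketch with the `imitation` library (built on
# Stable Baselines 3). Environment id, file name, and array keys are
# placeholders, not the repo's actual names.
import numpy as np
import gymnasium as gym
from imitation.algorithms import bc
from imitation.data.types import Transitions

# Placeholder id: substitute the sdlarch-rl Street Fighter 6 environment here.
env = gym.make("StreetFighter6-RyuVsKen-v0")

# Hypothetical recording format: one observation per frame plus the action I pressed.
demo = np.load("ryu_vs_ken_level5_demos.npz")
obs, acts = demo["observations"], demo["actions"]

transitions = Transitions(
    obs=obs[:-1],                        # the state the human saw
    acts=acts,                           # the action the human took
    infos=np.array([{}] * len(acts)),
    next_obs=obs[1:],                    # the state that followed
    dones=np.zeros(len(acts), dtype=bool),
)

bc_trainer = bc.BC(
    observation_space=env.observation_space,
    action_space=env.action_space,
    demonstrations=transitions,
    rng=np.random.default_rng(0),
)
bc_trainer.train(n_epochs=22)            # the 22 supervised epochs from the video
```

The key point is that this is plain supervised learning: the recorded observations are the inputs and my button presses are the labels, so no reward signal from the game is needed.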

This is a beginner-friendly explanation of machine learning in gaming, but I also dive into the technical details for AI enthusiasts. Whether you’re curious about AI, love Street Fighter, or want to learn about Behavior Cloning, this video breaks it all down.

Code:
https://github.com/paulo101977/sdlarch-rl/tree/master/notebooks

🎯 WHAT YOU’LL LEARN:

  • How Behavior Cloning works (explained simply — see the toy sketch after this list)
  • Why fighting games are perfect for AI research
  • My complete training process with Stable Baselines 3
  • Challenges and limitations of imitation learning
  • Real results: watching the AI play
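For the "explained simply" part: Behavior Cloning treats each recorded frame as an input and the button press I made as the label, then minimizes an ordinary classification loss. Here is a toy PyTorch sketch of that objective; the network size and action count are made up for illustration:

```python
# Toy illustration of the Behavior Cloning objective (not the video's code):
# predict the human's action from the observation and minimize cross-entropy.
import torch
import torch.nn as nn

policy = nn.Sequential(                  # stand-in policy network
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 16),                  # 16 = hypothetical number of discrete actions
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

def bc_step(obs_batch: torch.Tensor, act_batch: torch.Tensor) -> float:
    """One supervised step on a batch of (observation, human action) pairs."""
    logits = policy(obs_batch)           # shape: (batch, n_actions)
    loss = loss_fn(logits, act_batch)    # act_batch: (batch,) integer labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

This is roughly what `bc_trainer.train(...)` above does under the hood, just with SB3's policy networks, batching, and some extra regularization.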

🔧 TECHNICAL DETAILS:

  • Framework: Stable Baselines 3 (Imitation Learning)
  • Game: Street Fighter 6
  • Character: Ryu (Player 1) vs Ken (CPU Level 5)
  • Training: 22 epochs of supervised learning
  • Method: Behavior Cloning from human demonstrations (see the rollout sketch below)
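Given the setup above, "watching the AI play" comes down to rolling the cloned policy out in the environment: reset, ask the policy for an action each frame, and step. The loop below is a generic Gymnasium pattern that reuses `env` and `bc_trainer` from the earlier sketch, not code from the repository:

```python
# Roll the cloned policy out and watch it fight; a standard Gymnasium
# episode loop, reusing `env` and `bc_trainer` from the training sketch.
obs, _ = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action, _ = bc_trainer.policy.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
print(f"Episode reward: {total_reward}")
```

This is also where the limitations of pure imitation learning show up: the policy only ever saw my states, so once Ken pushes it into situations I never recorded, its decisions can degrade.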

submitted by /u/AgeOfEmpires4AOE4