DeepMind Lab
I think I begin practically every post on this blog talking about the pace at which Deep Learning has been progressing over the last few years. While this is really great for the field as a whole, it has lacked a standardised platform for testing how good or bad an algorithm is. The researchers over at DeepMind came up with DeepMind Lab as a solution to this problem, particularly for First Person Shooter (FPS) style gameplay. They published a paper titled DeepMind Lab and also (obviously) released the code on GitHub.
DeepMind Lab is a first-person 3D game platform designed for research and development of general artificial intelligence and machine learning systems. DeepMind Lab can be used to study how autonomous artificial agents may learn complex tasks in large, partially observed, and visually diverse worlds. DeepMind Lab has a simple and flexible API enabling creative task-designs and novel AI-designs to be explored and quickly iterated upon.
Technically, the “Lab” is built on top of the Quake 3 Arena engine. (#Nostalgia). Setting it up is a bit of a pain thanks to random Bazel-related errors, which seems to be a ritual for trying out anything new that comes out of Google/Alphabet (though the solutions were all easily google-able). But once the initial hiccups are over, everything works like a charm.
You can also play the levels as a human if you wish (which is the first thing I tried out), and I strongly recommend you do, because you really get a feel for how useful a playground platform like this can be. You can do this using:
bazel run :game -- --level_script tests/demo_map
You also get a random agent out of the box. To run a level with the random agent, simply do:
bazel run :random_agent
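The random agent also shows off the Python API that the Lab exposes once everything is built. The rough shape of an agent loop looks something like the sketch below — a minimal sketch based on my understanding of the deepmind_lab module; the observation name, config values, and step counts here are just illustrative, so check the bundled agent for the exact details:

import numpy as np
import deepmind_lab

# Create an environment for a level, asking for interlaced RGB observations.
# Note: config values are passed as strings.
env = deepmind_lab.Lab('tests/demo_map', ['RGB_INTERLACED'],
                       config={'width': '80', 'height': '80'})
env.reset()

# The action spec is a list of dicts, one per action dimension
# (look, strafe, move, fire, ...), each with its min/max values.
spec = env.action_spec()

def random_action():
    # Pick a random value for every action dimension.
    return np.array([np.random.randint(a['min'], a['max'] + 1) for a in spec],
                    dtype=np.intc)

total_reward = 0
for _ in range(1000):
    if not env.is_running():
        env.reset()
    obs = env.observations()  # dict, e.g. obs['RGB_INTERLACED'] is a numpy image
    total_reward += env.step(random_action(), num_steps=4)  # repeat the action for 4 frames

As far as I can tell, the bundled random agent does essentially this, just with a few more knobs exposed as command-line flags.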
For a couple of the initial levels, using tried and tested methods seems to work well enough with minimal training (I ran my tests on my personal system, and didn’t really do any multi-day GPU learning brouhaha), and I’m pretty sure that many people will be going back to the iconic Playing Atari with Deep Reinforcement Learning paper (and its variants) to find things to tinker around with.
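For reference, the heart of that DQN approach is just bootstrapping a Bellman target from the reward the Lab hands back on each step. Here is a minimal sketch, assuming a hypothetical q_net that maps an observation to a vector of per-action Q-values (the name and discount factor are just placeholders):

import numpy as np

gamma = 0.99  # discount factor

def dqn_target(reward, next_obs, done, target_q_net):
    # y = r                            if the episode ended
    # y = r + gamma * max_a Q(s', a)   otherwise
    if done:
        return reward
    return reward + gamma * np.max(target_q_net(next_obs))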
I didn’t really explore the Lab as much as I would’ve liked to, but I definitely plan to do so in the near future. Releases like DeepMind Lab and the OpenAI Gym are providing much-needed plug-and-play, task-based exploration areas for Reinforcement Learning, and it’s high time that others start doing similar things (or at least using these as benchmarks and contributing back).
PS Happy New Year! :)
DeepMind Lab by Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, Stig Petersen