
Generalisation over Details: The Unsuitability of Supervised Backpropagation Networks for Tetris




Authors

Ian J. Lewis
School of Engineering and ICT, University of Tasmania, Private Bag 87, Sandy Bay, TAS 7001, Australia
Sebastian L. Beswick
School of Engineering and ICT, University of Tasmania, Private Bag 87, Sandy Bay, TAS 7001, Australia

Abstract


We demonstrate the unsuitability of Artificial Neural Networks (ANNs) for the game of Tetris and show that their great strength, namely their ability to generalize, is the ultimate cause. This work describes a variety of attempts at applying the Supervised Learning approach to Tetris and demonstrates that these approaches resoundingly fail to reach the level of performance of handcrafted Tetris-solving algorithms. We examine the reasons behind this failure and also present some interesting auxiliary results. We show that training a separate network for each Tetris piece tends to outperform training a single network for all pieces; that training with randomly generated rows tends to improve network performance; and that networks trained on smaller board widths and then extended to play on larger boards show no evidence of learning. Ultimately, we demonstrate that ANNs trained via Supervised Learning are ill-suited to Tetris.
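
For readers unfamiliar with the setup, the sketch below illustrates what "applying the Supervised Learning approach to Tetris" typically involves: a backpropagation network maps the current board and piece to a candidate placement, with training targets supplied by a hand-crafted solver acting as the teacher. The board size, action encoding, network dimensions, and all names below are assumptions chosen for illustration; they do not necessarily match the configuration used in the paper.

import numpy as np

# Minimal sketch: a single-hidden-layer network trained by supervised
# backpropagation to choose a placement for the current piece. All sizes
# and encodings are illustrative assumptions, not the paper's setup.
BOARD_W, BOARD_H = 10, 20          # standard Tetris board (assumption)
N_PIECES = 7                       # one-hot code for the current tetromino
N_ACTIONS = BOARD_W * 4            # candidate placements: column x rotation (assumption)
N_HIDDEN = 64

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (BOARD_W * BOARD_H + N_PIECES, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def forward(x):
    """Board-plus-piece vector in, one score per candidate placement out."""
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def train_step(x, target, lr=0.01):
    """One backpropagation step toward the placement a hand-crafted 'teacher'
    algorithm chose for this board (target is a one-hot vector)."""
    global W1, b1, W2, b2
    h, y = forward(x)
    p = np.exp(y - y.max())
    p /= p.sum()                       # softmax over candidate placements
    dy = p - target                    # cross-entropy gradient at the output
    dW2, db2 = np.outer(h, dy), dy
    dz = (W2 @ dy) * (1.0 - h ** 2)    # backpropagate through the tanh hidden layer
    dW1, db1 = np.outer(x, dz), dz
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Example: empty board, piece index 3, teacher recommends placement 12.
x = np.concatenate([np.zeros(BOARD_W * BOARD_H), np.eye(N_PIECES)[3]])
train_step(x, np.eye(N_ACTIONS)[12])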