First-Person Stop-Shooter, Parkinson’s Wearable, Neural Net Mystery, and Fly Fast and Break Things
- Medusa FPS (We Make Money Not Art) — Karolina Sobecka’s Medusa FPS is directly inspired by semi-autonomous and autonomous weapons. In her first-person shooter game, the player uses an AI-assisted gun that guides their hand to aim more effectively and fires when a ‘target’ enters its field of view, which of course seems to wipe out much of the thrill of playing an FPS game. Medusa FPS, however, reverses the usual logic and goals of FPS games: the challenge for the player is to fight against their own in-game character and prevent it from shooting anyone. The player cannot drop the weapon or stop it from firing, but they can obstruct the character’s (and the gun’s) vision.
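The inverted mechanic described above boils down to one rule: the gun fires on its own whenever a target sits inside its field of view, unless the player has blocked its sight. A toy sketch of that rule in Python (every name and the 60° field-of-view value are illustrative assumptions, not details from the game):

```python
import math

def gun_should_fire(gun_pos, facing_deg, target_pos, fov_deg=60.0, obstructed=False):
    """Toy model of the Medusa FPS mechanic: the AI-assisted gun fires
    whenever a target falls inside its field of view, unless the player
    obstructs its vision. All parameters here are assumptions."""
    if obstructed:
        return False
    dx = target_pos[0] - gun_pos[0]
    dy = target_pos[1] - gun_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    # smallest signed difference between the target bearing and the gun's facing
    diff = abs((angle - facing_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2
```

The player's only lever is the `obstructed` flag: they cannot change `facing_deg` or disable the trigger, only block what the gun can see.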
- Microsoft’s Project Emma: A Wearable for Parkinson’s Sufferers — The disease makes it impossible for the wearer to draw straight lines or write legibly. With the wearable on her wrist, however, normal writing and drawing are possible. Remarkably, how it works isn’t 100 percent known. (via Slashdot)
- Understanding Deep Learning Requires Rethinking Generalization (Paper a Day) — ANNs are bloody good at memorising things, even just two-layer ones: “there exists a two-layer neural network with ReLU activations and 2n + d weights that can represent any function on a sample of size n in d dimensions.” You can train them on randomness and they’ll learn to parrot it perfectly, but with no predictive value. And they don’t seem to sweat any harder doing it than when you teach them real patterns and have them predict values they haven’t seen: “In [this] case, there is no longer any relationship between the instances and the class labels. As a result, learning is impossible. Intuition suggests that this impossibility should manifest itself clearly during training, e.g., by training not converging or slowing down substantially. To our surprise, several properties of the training process for multiple standard architectures are largely unaffected by this transformation of the labels.”
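The 2n + d memorisation result is constructive, and small enough to sketch. The idea: project every input onto a random direction w (d weights), place one ReLU unit per sample with biases just below each projected value (n weights), and solve the resulting lower-triangular system for the output weights (n more). A pure-Python sketch, with the arrangement and names my own rather than the paper's:

```python
import random

def relu(t):
    return max(t, 0.0)

def memorize(xs, ys):
    """Build a two-layer ReLU net with 2n + d weights that fits the
    n points (xs, ys) exactly. Assumes the random projection gives
    distinct values z_i = w . x_i (true with probability 1 for
    continuous data)."""
    n, d = len(xs), len(xs[0])
    w = [random.uniform(-1, 1) for _ in range(d)]               # d weights
    z = [sum(wk * xk for wk, xk in zip(w, x)) for x in xs]
    order = sorted(range(n), key=lambda i: z[i])
    zs = [z[i] for i in order]
    ys_sorted = [ys[i] for i in order]
    # bias b_j sits just below z_j, so unit j is active exactly on z_j..z_n
    b = [zs[0] - 1.0] + [(zs[j - 1] + zs[j]) / 2 for j in range(1, n)]  # n weights
    # the activation matrix is lower triangular with a positive diagonal,
    # so the output weights follow by forward substitution
    a = []                                                      # n weights
    for i in range(n):
        acc = sum(a[j] * relu(zs[i] - b[j]) for j in range(i))
        a.append((ys_sorted[i] - acc) / (zs[i] - b[i]))
    def net(x):
        zx = sum(wk * xk for wk, xk in zip(w, x))
        return sum(aj * relu(zx - bj) for aj, bj in zip(a, b))
    return net
```

Counting parameters: w contributes d, the biases n, and the output weights n, matching the 2n + d in the quote. Nothing about the construction cares whether the ys are meaningful labels or pure noise, which is exactly the paper's point.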
- Learning to Fly by Crashing (PDF) — We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. (via IEEE Spectrum)
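One way to read “negative flying data in conjunction with positive data sampled from the same trajectories” is a per-trajectory labelling rule: frames well before impact are ‘safe to fly’ positives, the frames just before the crash are negatives. A minimal sketch of that idea, where the `crash_buffer` cut-off and both function names are assumptions rather than the paper's parameters:

```python
def label_trajectory(frames, crash_buffer=5):
    """Split one crash trajectory into 'safe' positives (early frames)
    and 'about to crash' negatives (the last crash_buffer frames).
    crash_buffer is an assumed cut-off, not a value from the paper."""
    cut = max(len(frames) - crash_buffer, 0)
    return frames[:cut], frames[cut:]

def build_dataset(trajectories, crash_buffer=5):
    """Pool positives and negatives from many crash trajectories into
    (frame, label) pairs for a binary 'keep flying?' classifier."""
    data = []
    for traj in trajectories:
        pos, neg = label_trajectory(traj, crash_buffer)
        data += [(f, 1) for f in pos] + [(f, 0) for f in neg]
    return data
```

Because every crash trajectory yields both kinds of examples, 11,500 crashes produce a large, balanced-enough dataset without any hand-labelling.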
Four short links: 11 May 2017.