Four short links: 2 November 2017
Capsule Neural Networks, Adversarial Objects, Deep Learning Language, and Crowdsourced Pop Star
- Dynamic Routing Between Capsules — new paper from one of the deep learning luminaries, Geoff Hinton. Hacker Noon explains: In this paper the authors posit that human brains have modules called “capsules.” These capsules are particularly good at handling different types of visual stimuli and encoding things like pose (position, size, orientation), deformation, velocity, albedo, hue, texture, etc. The brain must have a mechanism for “routing” low-level visual information to what it believes is the best capsule for handling it.
- Adversarial Objects — Here is a 3D-printed turtle that is classified at every viewpoint as a “rifle” by Google’s InceptionV3 image classifier, whereas the unperturbed turtle is consistently classified as “turtle.”
- DeepNLP 2017 — an applied course from Oxford University focusing on recent advances in analyzing and generating speech and text using recurrent neural networks.
- Virtual Singer Becomes Japanese Mega-Star (Bloomberg) — CG-rendered pop star, singing crowdsourced songs. Crucial to Miku’s success is the ability for devotees to purchase the Yamaha-powered Vocaloid software and write their own songs for the star to sing right back at them. Fans then can upload songs to the web and vie for the honor of having her perform them at “live” gigs, in which the computer-animated Miku takes center stage, surrounded by human guitarists, drummers and pianists. This is fantastic. (via Slashdot)
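The “routing” mechanism in the first item is the core of the capsules paper: each lower-level capsule sends its prediction to the higher-level capsule that agrees with it most. As a rough illustration (not the paper’s full CapsNet; the array shapes, iteration count, and random inputs here are assumptions for the sketch), the routing-by-agreement update can be written in a few lines of NumPy:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Nonlinearity from the paper: shrinks short vectors toward zero
    # and long vectors toward unit length, preserving direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Routing-by-agreement over prediction vectors u_hat with
    shape (num_lower, num_upper, dim): each lower capsule i predicts
    an output u_hat[i, j] for each upper capsule j."""
    num_lower, num_upper, dim = u_hat.shape
    b = np.zeros((num_lower, num_upper))  # routing logits, start uniform
    for _ in range(num_iters):
        # Coupling coefficients: softmax over the upper capsules.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = np.einsum('ij,ijd->jd', c, u_hat)  # weighted sum per upper capsule
        v = squash(s)                          # upper-capsule outputs
        b = b + np.einsum('ijd,jd->ij', u_hat, v)  # reward agreement
    return v

# Hypothetical sizes: 8 lower capsules routing to 4 upper capsules of dim 16.
rng = np.random.default_rng(0)
v = dynamic_routing(rng.standard_normal((8, 4, 16)))
print(v.shape)  # (4, 16)
```

After a few iterations, predictions that agree with an upper capsule’s output get larger coupling coefficients, so information flows to the capsule best suited to handle it, which is the “routing” the blurb describes.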