New DeepMind research applies ‘neuron deletion’ to improve AI models
Artificial intelligence algorithms are highly sophisticated constructs that can contain an eye-watering number of neurons and internal connections. This gives them the power to handle a wide range of complex tasks, but it also makes it tricky for researchers to understand how they reach their conclusions and how the process can be improved.
On Wednesday, Alphabet Inc.’s DeepMind deep learning group detailed a new method that it says can help clear up the picture.
The key to the technique is an approach called neuron deletion. As part of its research, DeepMind created several image classification models and removed some of their artificial neurons in a targeted manner. The group’s AI experts then measured how the changes affected processing.
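The idea can be illustrated with a toy experiment. The sketch below is not DeepMind's code; it is a minimal, hypothetical example of neuron deletion in which a single hidden unit of a tiny random network is zeroed out and the effect on the network's predictions is measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny classifier: 4 inputs -> 8 hidden units -> 3 classes.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, deleted=()):
    """Forward pass; 'deleting' a neuron means zeroing its activation."""
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden layer
    h[:, list(deleted)] = 0.0        # ablate the chosen hidden units
    return (h @ W2).argmax(axis=1)   # predicted class per input

x = rng.normal(size=(100, 4))
baseline = forward(x)

# Measure how often predictions change when each neuron is removed in turn.
for n in range(8):
    agreement = (forward(x, deleted={n}) == baseline).mean()
    print(f"neuron {n}: {agreement:.0%} of predictions unchanged")
```

Neurons whose removal flips many predictions are ones the network leans on heavily; neurons whose removal changes little are redundant or narrowly specialized.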
The technique can provide insight into the inner workings of the individual neurons in an AI. According to the group, researchers have until now succeeded in thoroughly analyzing only “selective neurons” configured to process a specific type of input, which constitute a small part of a typical neural network. Visibility into the other nodes makes it possible to piece together a much more complete picture.
DeepMind analyzed the impact of deleting neurons on its test AI algorithms. The Alphabet subsidiary found that there’s a correlation between how well a model withstands the changes and its ability to “generalize,” or perform accurately on inputs it wasn’t trained on. This enables researchers to find structural problems within their models that may undermine the accuracy of results.
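That robustness can be summarized as a decay curve: delete progressively more neurons at random and track how quickly performance collapses. The following sketch uses made-up data and a random linear model purely to show the shape of such a measurement, not DeepMind's actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 16 hidden features feeding a 2-class linear readout.
W = rng.normal(size=(16, 2))
feats = rng.normal(size=(200, 16))           # hidden activations for 200 inputs
labels = (feats @ W).argmax(axis=1)          # treat clean predictions as ground truth

def accuracy_after_deleting(num_deleted):
    """Accuracy after zeroing a random subset of the hidden features."""
    ablated = feats.copy()
    ablated[:, rng.choice(16, size=num_deleted, replace=False)] = 0.0
    return ((ablated @ W).argmax(axis=1) == labels).mean()

# Robustness curve: performance as more and more neurons are removed.
curve = [accuracy_after_deleting(k) for k in (0, 4, 8, 12)]
print([f"{v:.0%}" for v in curve])
```

A model whose curve stays flat under deletion spreads its computation across many neurons; a curve that drops sharply suggests brittle reliance on a few units, the pattern DeepMind associates with poor generalization.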
Researchers said the technique should be particularly handy for finding cases where an AI “cheats” using the sample data it was trained with. This phenomenon, which the group co-discovered last year with Google Brain and the University of California at Berkeley, essentially involves an algorithm memorizing the training records and using them as a sort of cheat sheet.
The testing method was inspired by techniques employed in the field of neuroscience. Another area that the group has drawn upon to help further its AI research is psychology. Two months ago, DeepMind open-sourced a cognitive assessment tool that lets AI researchers create a controlled virtual environment for evaluating their models.