DeepMind's AlphaFold 2 reveal: Convolutions are out, attention is in

In November, DeepMind, the Google AI lab that created the chess-champion neural network AlphaZero a few years ago, stunned the world once more with a program that cracked the decades-old challenge of protein folding. The software easily defeated all rivals, heralding what one expert called a "watershed moment" in biology.

At the time, the algorithm, known as AlphaFold 2, was described only briefly, in a blog post by DeepMind and in a paper abstract DeepMind supplied for the biennial competition it had entered, the Critical Assessment of Techniques for Protein Structure Prediction (CASP).

Last week, DeepMind finally detailed how it was done, with a blog post, a 16-page summary article in the journal Nature by DeepMind's John Jumper and colleagues, 62 pages of supplementary information, and a code repository on GitHub. Reporting on the new details, Nature's Ewen Callaway described the release as "protein structure coming to the masses."

The architecture of a software program refers to the operations it employs and how they are combined. The original AlphaFold was built around a convolutional neural network, or CNN, the traditional workhorse behind many AI achievements of the past decade, including numerous victories in the ImageNet computer vision contest.

In AlphaFold 2, convolutions are out and graphs are in. Or, more precisely, the combination of graph networks with what is known as attention.

A graph network is a neural network that operates on a graph: a collection of objects, such as people in a social network, assessed in terms of how they are related, for example via friendships. In this case, AlphaFold 2 constructs a graph of how close different amino acids are to one another.
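The combination described above can be sketched in a few lines of NumPy. This is not DeepMind's actual code, just an illustrative toy: residues are nodes in a graph, their pairwise distances serve as edge information, and a single attention step weights each residue's update toward residues that are nearby. All names, shapes, and the exact form of the distance bias are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_residues, d_model = 8, 16
coords = rng.normal(size=(n_residues, 3))          # hypothetical 3-D residue positions
features = rng.normal(size=(n_residues, d_model))  # hypothetical per-residue features

# Graph structure: pairwise Euclidean distances between residues (edge features).
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# One head of dot-product attention, biased by the graph:
# closer residues get larger attention scores.
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

q, k, v = features @ Wq, features @ Wk, features @ Wv
scores = q @ k.T / np.sqrt(d_model) - dist         # subtract distance as a bias
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)     # softmax over neighbours
updated = weights @ v                              # attention-weighted node update

print(updated.shape)  # (8, 16)
```

The design point the toy makes is the one in the text: instead of sliding a fixed convolutional window over the input, every residue can attend to every other residue, with the graph (here, the distance matrix) shaping how strongly.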
