The WatchTower: 16th Edition
Welcome to the captivating world of Artificial Intelligence!
Hello innovators and curious minds of AI Society!
Welcome to the 16th Edition of The WatchTower, your quintessential guide to the riveting world of AI. In this edition, we take a look at the latest edition of Google DeepMind’s AlphaFold and discover how machine learning allows software engineers to tackle problems which were previously out of reach.
📰 Featured in This Edition:
Prompt Engineering Workshop
AlphaFold 3 - Predicting the Structure and Interactions of All of Life’s Molecules
How Machine Learning Extends Software Engineering
🗓 Upcoming Events
Join us for an engaging prompt engineering workshop conducted by Evan Liao from the AISOC Education Team! Learn the art of designing effective prompts to harness the full potential of AI models.
AlphaFold 3 - Predicting the Structure and Interactions of All of Life’s Molecules
In May this year, Google DeepMind introduced AlphaFold 3, the latest iteration in its groundbreaking series of biomolecular prediction models. To appreciate the significance of AlphaFold 3, let’s first explore the motivations behind and successes of AlphaFold and AlphaFold 2.
The Protein Folding Problem
Proteins are essential to life; every function of your body depends on them. Each type of protein is composed of a unique chain of amino acids, which folds to give the protein's 3D structure. Knowing this structure is important for understanding a protein's function, understanding diseases caused by malfunctioning proteins, and designing drugs that interact with proteins. Traditionally, a protein's 3D structure has been determined experimentally using methods such as X-ray crystallography and NMR spectroscopy, with mapping the structure of a single protein often taking a PhD student their entire candidature.
In 1972, Nobel Prize winner Christian Anfinsen speculated that it should be possible to predict these structures purely computationally from the amino acid sequences alone, and this has remained a grand challenge in biology ever since. The task is incredibly difficult due to the astronomical number of possible configurations a protein chain can adopt – estimated to be around 10³⁰⁰, far exceeding the number of atoms in the observable universe.
The Evolution of AlphaFold
AlphaFold, introduced in 2018, approached the problem using convolutional neural networks. While this approach performed relatively well, it was AlphaFold 2, released in 2020, that solved the problem and hence revolutionised the field. By incorporating the transformers and attention mechanisms that were first introduced in 2017 and are powering the current LLM revolution, AlphaFold 2 was able to successfully predict the structure of every protein in the human body and predict hundreds of millions of structures in total. To achieve this with the current experimental methods would have taken hundreds of millions of researcher-years.
In short, the attention mechanism in AlphaFold 2 allowed the model to weigh the importance of each amino acid in the sequence relative to others, similar to how LLMs use attention to understand the relationship between words in a sentence. For a deeper understanding, check out the following videos:
AlphaFold 2's impact was profound, having been cited over 20,000 times and aiding discoveries in fields such as malaria vaccines, cancer treatments, and enzyme design.
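For a concrete feel for the attention mechanism described above, here is a minimal NumPy sketch of scaled dot-product attention. The "residue embeddings" are random placeholders, and AlphaFold 2's real architecture (the Evoformer) is far more elaborate – this only illustrates the core operation it shares with LLMs:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, softmaxes the scores, then mixes the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V, weights

# A toy "sequence" of 4 residues, each embedded in 8 dimensions (random placeholders)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(x, x, x)   # self-attention: Q = K = V
```

Each row of `attn` is how much one residue "attends" to every other residue when computing its updated representation.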
Introducing AlphaFold 3
Unlike its predecessors, which focused solely on protein folding, AlphaFold 3 predicts the structure and interactions of a wide range of biomolecules, including DNA, RNA, ligands, and other chemical modifications. Its accuracy is unprecedented – for interactions of proteins with other molecule types, an improvement of at least 50% over existing methods has been observed, and for some important categories of interaction, prediction accuracy has doubled.
The ability of AlphaFold 3 to generalise predictions across various types of biomolecules results from the model working with atoms rather than amino acid sequences. Another key innovation is its use of diffusion techniques during training, adding noise to datasets of known atomic positions and training the model to reverse this noise. This teaches the model to start with a cloud of atoms and generate an accurate molecular structure, similar to how current AI image generators work.
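DeepMind's actual diffusion module is far more involved, but the core idea – corrupt known atomic positions with noise, then train a model to undo it – can be illustrated with a toy forward process. All names and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "known structure": 3D coordinates of five atoms (placeholder data)
atoms = rng.normal(size=(5, 3))

def noise_step(x, t, rng):
    """Forward diffusion: blend clean coordinates with Gaussian noise.
    t = 0 keeps the structure intact; t = 1 leaves a pure noise cloud."""
    eps = rng.normal(size=x.shape)
    return np.sqrt(1.0 - t) * x + np.sqrt(t) * eps, eps

# Partially corrupt the structure. A denoiser would be trained to predict eps
# from the noisy coordinates, so that at inference time it can start from pure
# noise (t = 1) and walk back, step by step, to a plausible structure.
noisy, eps = noise_step(atoms, t=0.5, rng=rng)
```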
Looking Ahead
The impact of AlphaFold 3 is likely to be profound, and we are only beginning to explore and understand its significance. Some research areas which will likely be accelerated by AlphaFold 3 include drug design, genomics, and synthetic biology. The new AlphaFold server is allowing scientists to make use of AlphaFold 3 regardless of their computational resources or expertise in machine learning, so it won’t be long before we see new discoveries and innovations emerging from this tool, shaping the future of science and medicine.
Published by Jonas Macken, June 10 2024
How Machine Learning Extends Software Engineering
Written by Stephen Elliott
Machine learning methods have led to the discovery of hundreds of thousands of new specialised materials, among other innovations, and saved billions upon billions of hours of human labour. But these achievements alone do not reveal how machine learning enhances traditional software methods. Today, we'll discover how machine learning works, and how its unique design allows us to solve new problems in software engineering.
To understand where machine learning can enhance our traditional methods, it is important to understand that learning machines do not learn or think like us. Suppose we are playing hide and seek with a friend. We know immediately upon glancing at a familiar face, skin tone, or item of clothing that yes, this is my friend, and that is where they are. Everything is intuitive. It comes to the front of our mind without thought.
Now consider a machine with the same task. The machine has no intuition. In a traditional computer program, everything in the machine is precisely defined. Its thinking is but a series of steps. Do A and then B. If C, then you have found the friend. Perform action D. If not C, repeat A and B. It is true that if we have a good understanding of the problem, this structure works very well. It took humans to the moon. It carries hundreds of thousands of people on automated transport every day. It runs the world's financial infrastructure. These tasks are out of human reach, because we are not so good at following instructions; our behaviours are not so precise; and we are slow. But the task of recognising a friend in a train station is fundamentally different to the task of running the network itself.
In a rail network, there is a finite number of stations. There is a definite number of trains. We can precisely state the outcome of an action and precisely interpret any question in a few checks. Is this position free? Will moving my train here cause any problems? Will there be some sort of blockage down there in a couple of hours? (Perhaps this example could be enhanced by machine learning, but suspend your disbelief.) The solution to our automated rail network is a series of steps. It is perfect for a traditional computer program, and perfect for the traditional software engineer.
The trouble comes when we are in an environment that is so chaotic that we cannot define a series of steps to solve the problem. At a most basic level, we do not understand the problem. Then how are we to write an algorithm to solve it? It is our intuition which tells us that it is that person over there who is our friend, not the other chap wearing a red shirt and not the red fire hydrant box. We recognise our friend based on some murky and complicated combination of checks done behind closed doors, inaccessible to us. For that reason, we cannot formalise what is going on. We cannot write a traditional algorithm to solve this problem.
For the sake of example, let's try to do it anyway. First, take the input data. All that the computer has access to in determining whether the friend is in frame is a colour image, perhaps a standard HD 1280x720 pixels. That's 921,600 pixels, each carrying three colour values between 0 and 255. We need three colour values to mix red, green and blue; together, they can create any visible colour. We are left with 921,600 × 3 = 2,764,800 input values, each able to take any of 256 levels. Given our poor understanding of what constitutes a human face, it is a very hard problem to manually discover an algorithm which can recognise a face – let alone in a chaotic, unpredictable environment described by nearly three million numbers. Simple logical operations and for loops are not sufficient for this problem. Traditional algorithms are a no-go.
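For concreteness, the input tally can be checked in a couple of lines, keeping straight the distinction between how many numbers the program receives and how many levels each one can take:

```python
# Counting the raw inputs of a 1280x720 RGB frame
width, height, channels = 1280, 720, 3
pixels = width * height      # 921,600 pixels in the frame
values = pixels * channels   # 2,764,800 numbers, each one of 256 possible levels
print(pixels, values)        # 921600 2764800
```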
The principal innovation of modern thinking machines is that they write the facial recognition algorithm for us. They take some visual data, and some labels saying, “yes, this is the face,” and they learn the nebulous combination of features which describe that face. By allowing us to write a program which finds the correct algorithm for a problem, machine learning opens an entirely new class of problems to computerisation. Whenever the problem is too complex for us to solve by traditional algorithms, we should turn to machine learning.
The role of the software engineer shifts from designing the algorithm which solves the problem, to designing the algorithm which finds the algorithm which solves the problem. The algorithm we build creates a model of the face we are looking for. At first, the model is completely wrong. The engineer designs an algorithm which progressively adjusts the model to become more accurate to the phenomenon it’s aiming to represent – the presence of a face somewhere in the frame. By feeding lots of different images of our friend’s face into the model, with the face in different positions, lighting and angles, the machine must learn a general model of the friend’s face. The machine learns for us: what defines the face of interest in this context? Is it more accurate to attempt to recognise distinctive attributes of the face, or check many iterations of the same feature in different parts of the image? We cannot write down what it is that represents the face here, so our traditional algorithms are lost. But the machine is not.
All the machine learning engineer needs to do is build an algorithm which can learn an accurate enough model of that face. The way these models make decisions is by a large statistical calculation. The learning mechanism is typically some sort of carrot and stick approach, where the model's weights – its knowledge – are iteratively adjusted. Each time the model gets an answer wrong, the weights are nudged away from the settings that produced it, and each time it gets one right, they are nudged a little closer. More complicated problems require different model structures, but this is the essence of neural networks.
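In modern practice that carrot and stick is gradient descent: nudge the weights a little in whichever direction reduces the error, over and over. Here is a toy sketch using logistic regression on synthetic data; a full neural network just stacks many more of these adjustable weights:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for "images of the friend": 2-feature points,
# labelled 1 when their features sum to a positive number
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1         # the weights start out completely wrong

for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # the model's current guesses
    w -= lr * (X.T @ (p - y)) / len(y)   # wrong guesses push the weights toward
    b -= lr * np.mean(p - y)             # settings that produce less error

p = 1 / (1 + np.exp(-(X @ w + b)))       # final predictions after training
accuracy = np.mean((p > 0.5) == (y == 1))
```

After a few hundred nudges the model classifies almost every point correctly, without anyone ever writing down a rule for the boundary.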
Following a simple curiosity, we have shown how machine learning techniques fill a gap left by traditional computer algorithms. We have briefly explored the general structure of modern machine learning models and discussed the role of the software engineer in building modern thinking machines.
Published by Stephen Elliott, June 10 2024
🗣 Sponsors 🗣
Our ambitious projects would not be possible without the support of our GOLD sponsor, UNOVA.
Closing Notes
We welcome any feedback / suggestions for future editions here or email us at [email protected].
Stay curious,
🥫Sauces🥫
Here, you can find all sources used in constructing this edition of WatchTower: