- Traditional chemical heuristics have long been central to the discovery of new materials. In recent years, machine learning approaches have started to replace these heuristics, offering new opportunities for materials science. The two approaches are strongly interconnected.
- Classical chemical heuristics typically rely on less data than machine learning approaches. There are two types of machine learning approach in materials science: one relies on features inspired by classical chemical heuristics; the other relies purely on relationships within the analyzed data.
- The growing amount of data offers an opportunity to test traditional chemical heuristics. In combination with machine learning techniques, it should also be used to develop new, more data-driven chemical heuristics.
Chemical heuristics have been fundamental to the advancement of chemistry and materials science. These heuristics are typically established by scientists using knowledge and creativity to extract patterns from limited datasets. Machine learning offers opportunities to perfect this approach using computers and larger datasets. Here, we discuss the relationships between traditional heuristics and machine learning approaches. We show how traditional rules can be challenged by large-scale statistical assessment and how traditional concepts commonly used as features feed the machine learning techniques. We stress the waste involved in relearning chemical rules and the challenges in terms of data size requirements for purely data-driven approaches. Our view is that heuristic and machine learning approaches are at their best when they work together.
- machine learning
- chemical heuristics
- materials discovery
- materials informatics
The Interplay of Chemical Heuristics and Machine Learning
Data science (see Glossary), artificial intelligence, and machine learning are nowadays present in all fields of science and technology, including chemistry and materials science. The impact of these techniques is expected to be very large, leading to a new path towards scientific discovery (sometimes called the fourth paradigm in science). We will not review here all progress and future directions in the use of machine learning in materials science; we refer interested readers to recent reviews on this topic. Instead, we offer a personal perspective on how these new techniques, heavily relying on sophisticated algorithms and large data sets, compete with, complement, challenge, and/or benefit from more traditional heuristic approaches.
Learning from Data: From Classical Heuristics to Data Relationships
Since the early days of chemistry, scientists have looked for patterns in often limited sets of data. This led to many of today’s widely used chemical heuristics, such as the periodic system of elements, electronegativities, and atomic radii. These heuristic models were built by combining the knowledge and creativity of the scientist with simplified physical pictures. For example, Pettifor introduced a completely new chemical scale that enabled separation of structure types of AB compounds within a 2D map. This scale was based on the size, valence, and electronegativity of the constituent atoms. We will call this traditional approach classical or chemical heuristics throughout this article (Figure 1, Key Figure). Recently, the traditional heuristic approach has started to be replaced by machine learning techniques. This is due not only to improvements in machine learning methods, now widely available through open source software, but also to an ever-increasing amount of data available mainly through high-throughput ab initio computations. These machine learning approaches can be divided into two categories. The first approach uses features often based on chemical heuristics (e.g., atomic radii or electronegativity) as inputs to machine learning techniques that can provide a relationship between these features and material properties. Here, machine learning derives new relationships between traditional heuristic descriptors and properties. For instance, machine learning has been used to find a mathematical relationship between well-known atomic features (e.g., atomic radii and ionization potentials) that discriminates between wurtzite- and rock salt-forming binaries. The second approach relies only on rules derived from relationships within the data, bypassing traditional chemical descriptors. For example, the probability that certain ions replace each other has been derived in this manner.
No physical/chemical feature such as atomic radius was invoked in this study, only the idea that some atoms are more likely than others to replace a given atom. The three different approaches, namely traditional chemical heuristics, the extraction of rules between established chemical features using machine learning, and purely data-driven approaches, are summarized in Figure 1. The chemical heuristics approaches require less data but are inherently biased towards preconceptions that could turn out to be incorrect. Conversely, the purely data-driven approach could require data set sizes that are sometimes not available.
‘Classical’ Chemical Heuristics
Many of the heuristics that are still taught in general chemistry courses date back at least a century. The concept of electronegativity was proposed by Avogadro and Berzelius in ~1809, oxidation states by Wöhler (~1835), and atomic radii by Loschmidt (~1866), and the periodic table of elements dates back 150 years to Mendeleev. Goldschmidt and Pauling derived their rules on the stability of crystal structures almost a century ago. Without any doubt, chemical heuristics have been instrumental to advances in chemistry and materials science over the last 100 years. However, one must be careful not to use these heuristics blindly. Their historical importance should not exclude them from critical assessment. Rahm and colleagues have recently shown that two of our most fundamental chemical heuristics, the periodic table and electronegativities, can change drastically at high pressures. Here, we face the common issue of extrapolation. For example, Li changes its valence and becomes a p-group element at 300 gigapascals (GPa), K and the heavier alkali metals become transition metals, and Na becomes the most electropositive s1 element. Thus, the structure of the periodic table and chemical reactivities will change completely for some elements with rising pressure. Similar changes should be expected for typical oxidation states. Chemical heuristics might have only limited transferability and need to be adapted to allow for the exploration of more extreme conditions. Even under more common conditions, very well-established rules can turn out to be less powerful than expected when evaluated with modern techniques and larger data sets. Hautier and colleagues have recently evaluated the predictive power of the Pauling rules, which connect the coordination environments of a crystal with its stability. For instance, the first Pauling rule links the preferred coordination environment of a cation to the cation–anion radius ratio (Figure 2A).
This analysis on oxides shows that Pauling’s first rule is fulfilled for only 66% of all tested local environments, with important deviations, for instance, in alkali and alkaline-earth chemistries (Figure 2A). Strikingly, and despite their status as a cornerstone of solid-state chemistry, the four other Pauling rules are fulfilled by only 13% of a data set of 5000 oxides. This new assessment could only be performed using modern tools of information technology. First, a diverse set of oxide crystal structures not only had to be determined by crystallographers but also made readily available in databases such as the Inorganic Crystal Structure Database (ICSD), the Open Crystallographic Database, or the Cambridge Structural Database. Without these efforts of the crystallographic community, many of today’s data-driven searches would not be possible. Furthermore, automatic tools for the determination of coordination environments had to be developed and a quantitative assessment of the Pauling rules had to be set up. In today’s world of unprecedented access to information and automation, it is staggering to realize that only several hundred crystal structures had been determined when Pauling proposed his rules and that no computer was available to process the data.
In a similar spirit of challenging established rules, Filip and Giustino have shown that a modified and improved Goldschmidt tolerance factor, which relates the ionic radii of the ions building a perovskite structure to its stability, has strong predictive power (Figure 2B). The original tolerance factor is based on the idea that tightly packed structures of ionic spheres are highly stable. Filip and Giustino improved on it by including further geometric constraints that were missing in Goldschmidt’s definition. This improved tolerance factor can be used to predict stable perovskites with high fidelity (80%). Almost needless to say, the data set of stable perovskite structures available to Filip and Giustino was much larger than the one available to Goldschmidt in 1926. Based on this improved tolerance factor, Filip and Giustino predicted 90 000 hitherto unknown perovskites.
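The classical tolerance factor itself is simple enough to sketch in a few lines. The following is a minimal illustration of the original Goldschmidt definition, not the modified factor of Filip and Giustino; the Shannon-type radii and the stability window used here are assumptions for the example, as the exact bounds vary between studies.

```python
import math

def goldschmidt_t(r_a, r_b, r_x):
    """Classical Goldschmidt tolerance factor for an ABX3 perovskite.

    r_a, r_b, r_x: ionic radii in any consistent unit (e.g., angstroms).
    """
    return (r_a + r_x) / (math.sqrt(2.0) * (r_b + r_x))

def likely_perovskite(t, lower=0.8, upper=1.06):
    # Commonly quoted stability window; the exact bounds vary by study.
    return lower <= t <= upper

# Illustrative Shannon-type radii (angstroms) for SrTiO3, a textbook perovskite:
t = goldschmidt_t(r_a=1.44, r_b=0.605, r_x=1.40)  # close to the ideal t = 1
```

Note that a tolerance factor near 1 is only a heuristic screen, not a guarantee of stability; this is exactly the kind of rule the large-scale reassessments discussed above put to the test.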
Since the initial development of most chemical rules, both the amount of data available and computing processing power have grown substantially. Now is the time for the community to test chemical heuristics more widely and rigorously. This includes assessing their validity in less common conditions (e.g., high pressure and/or temperature) and developing new rules following the spirit of previous heuristics but updated using modern machine learning techniques and larger data sets.
Physically or Chemically Inspired Description of Materials in Machine Learning
Advancing further along the data arrow in Figure 1, another popular approach has been to use concepts from the classical heuristics as features or descriptors for machine learning studies of materials properties (Figure 3). The material is here represented by traditional physical and chemical features such as electronegativities or atomic radii, and machine learning models that link these descriptors to materials properties are trained on large data sets. Returning to the previous example of the Pauling rules, such approaches could, for instance, start from similar descriptors (e.g., local environments or atomic radii) but search through machine learning for relationships that deviate from these empirical rules. Various materials properties, such as the band gap, bulk and shear moduli, and vibrational properties, have successfully been machine-learned in this way. These studies can link composition to properties, including, if necessary, crystal structure features. For instance, researchers have shown that vibrational entropy can be inferred from composition and crystal structure features, including traditional heuristic descriptors such as atomic numbers, radii, positions in the periodic table, Pettifor scales, or electronegativities (Figure 4A).
While materials descriptions based solely on composition were the norm in earlier days, a growing set of methods very elegantly incorporates structural information. For example, graph-based structure representations have shown very promising results. Crystal graph convolutional neural networks, for instance, have allowed learning of different properties (e.g., formation energies or band gaps) across a variety of structure types and compositions. These networks are based on a representation of atoms as nodes and bonds as edges within a graph. Additional information on the atoms and bonds, partially based on chemical heuristics (e.g., position in the periodic table or electronegativities), is included in node and edge feature vectors. On top of this graph, a convolutional neural network is built that allows automatic extraction of the optimal representations of target properties. In a similar way, Chen and colleagues introduced MatErials Graph Networks (MEGNet) to represent crystals and molecules within machine learning models of material properties.
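As a concrete, if toy, picture of such a graph representation, the sketch below encodes rock salt NaCl as two nodes with hand-picked features and performs one fixed message-passing step. The feature layout is purely illustrative, and in real CGCNN/MEGNet models the node and edge updates are learned transformations, not fixed averages.

```python
# Minimal sketch of a crystal-graph input for NaCl, assuming a hypothetical
# feature layout: nodes carry per-atom features (atomic number Z and Pauling
# electronegativity chi), edges carry a bond distance.
graph = {
    "nodes": [
        {"element": "Na", "Z": 11, "chi": 0.93},
        {"element": "Cl", "Z": 17, "chi": 3.16},
    ],
    "edges": [
        (0, 1, {"distance": 2.82}),  # (node_i, node_j, bond features)
    ],
}

def update_nodes(g):
    """One toy message-passing step: augment each node with the mean
    electronegativity of its neighbors. Real graph networks learn this
    aggregation instead of hard-coding an average."""
    neigh = {i: [] for i in range(len(g["nodes"]))}
    for i, j, _ in g["edges"]:
        neigh[i].append(g["nodes"][j]["chi"])
        neigh[j].append(g["nodes"][i]["chi"])
    for i, node in enumerate(g["nodes"]):
        node["neigh_chi"] = sum(neigh[i]) / len(neigh[i]) if neigh[i] else 0.0
    return g

updated = update_nodes(graph)
```

Stacking several such learned updates is what lets these networks build up representations of increasingly large local environments.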
There is an abundance of chemical features to use, and efforts have started to group them in easily accessible codes and databases (matminer). However, not all descriptors are adequate to predict a given property. While prior chemical knowledge can help in choosing descriptors (e.g., the Pauling rules could motivate a focus on atomic radii and local environments to predict material stability), it is also possible to find adequate descriptors from data alone. The approach here is to start with many descriptors or features and select the most relevant ones, a common task implemented in many machine learning methods. Work by Ghiringhelli and colleagues illustrates the importance of selecting the right descriptors by classifying AB binaries into zinc blende and rock salt formers. Using the least absolute shrinkage and selection operator (LASSO), which helps select descriptors from a large pool of typical atomic features, the authors discovered which descriptors would be the most predictive and established a simple relationship to be used for classification (Figure 4B). They found that atomic ionization potentials, electron affinities, and the radii at which the radial probability densities of the valence s or p orbitals reach their maxima were the most effective descriptors in a large pool of features. Figure 4B also illustrates that the relationship between the selected descriptors and the target property is relatively simple. Feature selection can also be efficiently combined with less interpretable feed-forward neural networks on crystal structure graphs; its importance for these networks has been shown to be critical when data sets are small (around hundreds of data points).
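The feature-zeroing behavior that makes LASSO attractive for descriptor selection can be demonstrated on synthetic data. Below is a minimal coordinate-descent LASSO in NumPy (a sketch, not the implementation used by Ghiringhelli and colleagues); with a sufficiently large penalty, the coefficients of irrelevant features collapse exactly to zero, leaving only the predictive ones.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimal LASSO via cyclic coordinate descent with soft-thresholding.

    Minimizes 0.5 * ||y - X w||^2 + alpha * ||w||_1.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Residual with feature j's current contribution added back in.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            z = X[:, j] @ X[:, j]
            # Soft-thresholding: small correlations are set exactly to zero.
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # 5 candidate "atomic" features
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
w = lasso_cd(X, y, alpha=20.0)                    # only features 0 and 1 survive
```

The surviving coefficients are slightly shrunk towards zero (by roughly alpha divided by the feature norm), which is the price LASSO pays for its sparsity.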
Solely Learning from Data
The third machine learning approach is to extract relationships directly from the raw data without the intermediate step of chemical or atomistic features. For instance, one would like to directly link the crystal structure description (positions and chemical elements of the sites) to properties. This strategy is appealing as it does not require a priori knowledge of which features or descriptors would be predictive, but it usually requires much more data (Figure 1).
An interesting example of such a data-driven approach comes from the field of crystal structure prediction. For more than 100 years, chemists have anticipated which ions can substitute for each other based on atomic radii and groups in the periodic table. In recent years, a machine learning model has been developed that quantifies how likely certain ions are to substitute for each other. Essentially, the model builds a probability function providing the likelihood that two ions or elements substitute for each other. No assumption is made about which chemical features (ionic radius, electronegativity, position in the periodic table) drive these substitutions. The likelihood of substitution between two ions is given as a matrix in Figure 5. This algorithm was successfully combined with density functional theory (DFT) to predict new Li-ion battery, luminescent, and ternary nitride materials that were confirmed experimentally. Some of these substitutions were surprising and not easily rationalized on standard chemical grounds. For instance, the substitution model suggested replacing Ba2+ with Sr2+, Re7+ with Al3+, and N3– with O2– to form a new phosphor host material, Sr2LiAlO4, which was successfully synthesized.
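The flavor of such a substitution model can be conveyed with a toy counting scheme: tally how often two species occupy the same site in otherwise identical compounds. The compound list and the counting rule below are illustrative assumptions only; the published model fits a probabilistic model over thousands of known structures rather than raw counts.

```python
from collections import Counter
from itertools import combinations

# Toy "database" of compounds, each written as an ordered tuple of site
# occupants. A real data set would contain thousands of entries.
compounds = [
    ("Sr", "Ti", "O3"), ("Ba", "Ti", "O3"), ("Ca", "Ti", "O3"),
    ("Sr", "Zr", "O3"), ("Ba", "Zr", "O3"),
    ("Li", "Co", "O2"), ("Li", "Ni", "O2"), ("Na", "Co", "O2"),
]

def substitution_counts(db):
    """Count how often two species occupy the same site in otherwise
    identical compounds -- a crude proxy for substitution likelihood."""
    counts = Counter()
    for a, b in combinations(db, 2):
        diff = [(x, y) for x, y in zip(a, b) if x != y]
        if len(diff) == 1:                # compounds differ on exactly one site
            counts[frozenset(diff[0])] += 1
    return counts

counts = substitution_counts(compounds)   # e.g., Sr/Ba co-occur twice here
```

Normalizing such counts into probabilities, as the published model effectively does, turns the table into a ranking of candidate substitutions for proposing new compounds.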
Another field where data-driven models are replacing traditional physical models is interatomic potentials. Traditional force fields such as Lennard–Jones potentials or embedded atom models have strong physical assumptions built in. Recently developed machine learning potentials bypass these assumptions by directly connecting energies and forces to the crystal structure without much functional assumption about their relationships. In a sense, they are purely data-driven with minimal physical assumptions. To fit these potentials, a large set of reference data (e.g., from DFT) is needed. There are several types of machine learning interatomic potentials; they differ in the descriptor used to encode the structure and in the type of regression. Popular ones are the Gaussian approximation potentials by Bartók and Csányi, where a ‘smooth overlap of atomic positions’ (SOAP) descriptor is used, and the neural networks by Behler, where atom-centered symmetry functions are used. The SOAP descriptor encodes atomic geometries through a local expansion of Gaussian-smeared atomic densities. The possibilities offered by these potentials are immense, allowing simulations of phenomena requiring many atoms (e.g., grain boundaries or amorphous phases) and accelerating the search for new crystal structures. They are expected to impact many fields and have already been used in the investigation of battery and potential thermoelectric materials.
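To give a feel for what an atom-centered symmetry function looks like, here is a sketch of the widely used radial (G2-type) Behler function with a cosine cutoff. The parameter values are placeholders; in a real potential they are chosen per element pair and the function values feed a neural network fitted to reference energies.

```python
import math

def cutoff(r, r_c):
    """Behler cosine cutoff: decays smoothly from 1 at r = 0 to 0 at r_c,
    so distant neighbors contribute nothing."""
    return 0.5 * (math.cos(math.pi * r / r_c) + 1.0) if r < r_c else 0.0

def g2(distances, eta, r_s, r_c):
    """Radial G2 symmetry function for one atom, summed over its neighbor
    distances: a Gaussian of width 1/sqrt(eta) centered at r_s, damped by
    the cutoff. The result is invariant to permuting identical neighbors."""
    return sum(math.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_c)
               for r in distances)

# Example: a single neighbor at 1.0 with a flat Gaussian (eta = 0) reduces
# to the cutoff value alone.
value = g2([1.0], eta=0.0, r_s=0.0, r_c=2.0)
```

A full descriptor stacks many such functions with different eta and r_s values (plus angular terms), giving a fixed-length fingerprint of each atomic environment regardless of the number of neighbors.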
While these data-driven approaches are attractive because they do not rely on descriptors that could turn out to be inadequate, we should keep in mind that they may simply relearn well-established chemical rules. An illustrative example comes from a search for water-splitting materials among ABX3 perovskites. Using a genetic algorithm, a search among all perovskites was performed, focusing on stability as well as band gap and band edges. This search was purely data-driven, using no heuristics (Figure 6, blue line, best genetic algorithm). The authors performed another search on the same data set, this time including constraints from typical chemical rules: the oxidation states in a material should sum to zero, only materials with an even number of electrons in the primitive unit cell are considered (to avoid metals), and the classical Goldschmidt tolerance factor is used to rank the materials. This search based on traditional chemical heuristics was as successful as the genetic algorithm (Figure 6, red line, chemical). Obviously, the machine had been ‘reinventing the wheel’ or, more precisely, relearning known rules such as charge balance. Interestingly, it is the combination of data-driven optimization and heuristics that leads to the best performance. Joining the two approaches prevents reinvention of the wheel while capturing relationships that might be unknown from traditional knowledge (Figure 6, green line). One of the earlier mentioned graph-based approaches (MEGNet) also allows combining the purely data-driven approach with heuristic features, and it enables researchers to decide how much traditional heuristic knowledge (from only atomic numbers and simple bonding information to more complex chemical heuristics) to include in the developed models, depending on data availability.
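The charge-balance constraint used in that search is trivial to encode, which is precisely the point: rules this cheap should not have to be relearned by an expensive optimizer. A minimal sketch, assuming a small hand-written oxidation-state table (a real screen would draw on tabulated states for every allowed species, often several per element):

```python
# Assumed single oxidation states for a few illustrative ions.
OX = {"Sr": +2, "Ba": +2, "Ti": +4, "Nb": +5, "Na": +1, "Cl": -1, "O": -2}

def charge_balanced(formula):
    """formula: dict mapping element -> count per formula unit.
    True if the assumed oxidation states sum to zero."""
    return sum(OX[el] * n for el, n in formula.items()) == 0

srtio3 = {"Sr": 1, "Ti": 1, "O": 3}   # 2 + 4 - 6 = 0: passes the filter
```

Applying such filters before a data-driven search shrinks the candidate space at essentially no cost, which is how the combined heuristic-plus-optimization strategy in Figure 6 achieves its advantage.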
Chemical heuristics and machine learning are rooted in the same desire to build models explaining patterns in data. Instead of seeing traditional chemical heuristics and machine learning studies only as competitors, we think they feed on each other. Machine learning approaches can benefit from old chemical heuristics, as they provide natural features to represent materials (see Outstanding Questions). The traditional chemical rules also provide standard benchmarks for machine learning studies. Researchers should be encouraged not only to validate their machine learning models by cross-validation but also to compare their results and performance with traditional rules. In our opinion, machine learning provides true added value only when it convincingly beats traditional approaches. If the outcome of a new neural network is simply to relearn the rule of charge balance, one can question the advance in knowledge achieved by the machine learning tool. Chemical heuristics should also be challenged by machine learning and large-scale statistical studies. The historical significance of a heuristic rule should not preclude its critical assessment. In fact, researchers should remember that many traditional concepts and rules in chemistry were developed on relatively small datasets and might, therefore, be highly biased. The boom of information technology presents an opportunity for chemists to get back to the drawing board and design new rules based on large datasets with the help of machine learning approaches.
Will the intuitive understanding of materials science profit as much from machine learning as it has from traditional chemical heuristics? Traditional chemical heuristics are typically taught in university chemistry courses and are core to our chemical understanding. Ideally, machine learning should contribute to chemical understanding in a similarly lasting way.
Are current materials datasets biased by traditional chemical heuristics? Most inorganic compounds have been discovered through serendipity and traditional chemical heuristics. It is therefore possible that current materials datasets, and consequently any assessments based on them, are biased by the use of traditional chemical heuristics during their discovery.
Should we build frameworks that allow for easy testing against classical chemical heuristics? True scientific progress based on machine learning is only made when those approaches beat classical chemical heuristics, yet there are currently no straightforward ways to compare the two.