
[C-3]

Polymer Informatics

Rampi Ramprasad (Georgia Institute of Technology)

 

Main Talk by Rampi Ramprasad:

The story I'm going to tell is a bit different from the previous two talks. It focuses on polymer discovery, and it has evolved over a period of many years. I'm very thankful for the sponsorship by several federal agencies, as well as industry. I'm even more thankful to the people I've had the opportunity to work with over the years: former and present graduate students and postdocs, some undergraduate students, as well as several collaborators. These are the people who have been very closely involved in the work and in the story that I'm going to tell. Hence, this is really a team effort, and I'm sort of the “messenger”. So, before we dive into the main subject of the presentation, let me share a couple of thoughts.

In materials discovery or materials development, the optimal selection or discovery process is highly non-trivial, mainly because of a couple of challenges. One of them is the need to satisfy conflicting property requirements, and I'm going to show a few examples of that. The second one is, of course, the staggeringly large search space. We are really searching this huge space for very special candidates, depending on the application of interest, and, as I said, here are a few examples drawn from our own polymer work.

Firstly, you can see an example of a capacitor dielectric, which is used these days for electrostatic energy storage in electric cars, hybrid cars and many other applications. The dielectric material you need for this application must have a high band gap and a high dielectric constant. A lot of the time, these two properties are inversely related. Hence, you are trying to maximize, at the same time, two properties that tend to be inversely related to each other.

The example in the middle concerns polymeric materials for battery electrolytes, to make all-solid-state, lightweight, polymer-based batteries. Here again, the polymer electrolyte material has to have a low glass transition temperature and high mechanical strength, and once again, these two properties are unfavourably entangled with each other. The third example concerns polymers for electronics applications, where you want a low band gap and low carrier recombination, and again, they are unfavourably entangled.

So, these are some quick examples, and as I said, a lot of the time in application-driven systems, we are interested in properties that are related in this conflicting manner. Then comes the search space. This is a cartoon of the truly staggeringly rich polymer chemical space. What you see here is the most common polymer of them all, polyethylene, which is just a linear polymer, and these are examples of other linear polymers.

Of course, polymers can also be cyclic, almost cyclic or heterocyclic, and they can be mixtures of aliphatic (linear) units and cyclic aromatic units. Polymers can also have non-carbon-based species in the backbone, such as silicon, germanium or tin atoms (basically going down the periodic table from carbon). They can also have other species, such as platinum, ruthenium, etc., in the backbone. So, the search space is, chemically speaking, extremely rich, and physically or morphologically speaking, it is even more complex.

How do we search this space efficiently? That's the big question. I'm going to start off with an entry point to the story: I'm going to share with you one little nugget, from when we were successful in designing polymeric materials for a particular application, in this case a capacitor dielectric for use in high-energy-density capacitors. What you see here is a picture of a capacitor bank that you would find in an electric or hybrid car, in addition to the battery pack.

These capacitor banks are necessary because they are very fast: they can grab the energy during regenerative braking much faster than the battery. So, the capacitor grabs the energy first, and then gradually releases it to the battery. That's why you need the capacitor bank. The dielectric material used in today's technologies is biaxially-oriented polypropylene. Polypropylene is a very close cousin of polyethylene, and it's made up of just carbon and hydrogen, essentially.

Polypropylene has a very high electrical breakdown field of 700 volts per micron. That's the highest field the material can withstand; above it, the material is irreversibly transformed and breaks down into a conductor. It also has a very low dielectric loss, meaning that pretty much all the electrostatic energy you put in, you can recover. It's also remarkably cheap, because the community has figured out ways of producing this material in mass quantities.

But of course, there is an issue: it has a super low dielectric constant of around 2. The relative permittivity of free space is one, and the dielectric constant of biaxially-oriented polypropylene is of the order of two. This translates to an energy density of about five joules per cc. That's a good number, and an important number for us to keep in mind, because that's the number we are trying to surpass. The maximum energy density you can store in a capacitor with a linear dielectric is proportional to the dielectric constant of the dielectric times the square of the breakdown field, the highest electric field you can subject the material to.
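As a quick sanity check of those numbers, here is a minimal sketch (not from the talk itself) that evaluates the linear-dielectric energy density formula with an assumed relative permittivity of about 2.2 for biaxially-oriented polypropylene:

```python
# A minimal sanity check (not from the talk): evaluate the linear-dielectric
# energy density U = 1/2 * eps0 * eps_r * E_bd^2 with an assumed eps_r for
# biaxially-oriented polypropylene (BOPP).
eps0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 2.2           # assumed relative permittivity of BOPP
E_bd = 700e6          # breakdown field, V/m (700 V per micron)

U = 0.5 * eps0 * eps_r * E_bd**2     # J/m^3
print(U / 1e6)                       # ~4.8 J/cm^3, close to the quoted ~5 J/cc
```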

So, if you want to design something better than polypropylene, you want to find a material that has a higher dielectric constant and, even more preferably, a higher dielectric breakdown field than polypropylene. Now, the dielectric constant is something we can compute using density functional theory calculations, whereas the dielectric breakdown field of real engineering or practical materials is not currently possible to compute ab initio. Therefore, we have traded the dielectric breakdown field computation for a band gap computation, because there is a known empirical correlation between the band gap and the breakdown field: high breakdown field materials tend to have a high band gap.

So, how do we actually surpass biaxially-oriented polypropylene? Well, we would like to screen and search the chemical space of polymers for materials with a high dielectric constant and a large band gap. Hence, we've got a search criterion now, and we have to apply it in our search. Here is the repeat unit of a particular polymer you can imagine, and you can imagine also that these colourful rectangles may be populated by one of many possibilities.

Many years ago, when we started working on this problem, we considered only seven possibilities. Today, we have hundreds of such blocks we could put in. In any case, if all of these blocks are populated with CH2, the most basic unit of them all, then you have polyethylene. But then we allow for all the other possibilities, while still weeding out the cases that are chemical nonsense: for example, you don't want a sequence of adjacent oxygen atoms (oxygen is one of the block possibilities). Hence, we weed out cases like that, and then we are left with a few hundred possibilities, for which we have done high-throughput DFT calculations, such as the ones Chris Wolverton mentioned earlier. The dielectric constant was computed using density functional perturbation theory, and the band gap was calculated using hybrid functionals.
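To make the block-enumeration idea concrete, here is a minimal sketch with an illustrative, hypothetical block set and a single chemistry rule (the actual study used its own blocks and more rules) showing how one might enumerate candidate repeat units and weed out chemically nonsensical sequences:

```python
# A minimal sketch (illustrative 7-block set and one chemistry rule only):
# enumerate 4-block repeat units and weed out sequences with adjacent oxygens.
from itertools import product

blocks = ["CH2", "C6H4", "C4H2S", "CO", "O", "NH", "CS"]   # hypothetical block alphabet

def is_chemically_sensible(seq):
    """Reject sequences with two oxygen blocks next to each other.
    (Periodic wrap-around of the repeat unit is ignored for simplicity.)"""
    return all(not (a == "O" and b == "O") for a, b in zip(seq, seq[1:]))

candidates = [seq for seq in product(blocks, repeat=4) if is_chemically_sensible(seq)]
print(len(candidates), "candidate repeat units to pass to high-throughput DFT")
```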

Looking at the resulting plot of dielectric constant versus band gap, the thing that jumps out immediately is the inverse relationship. Polyethylene has a majestically large band gap, as you can see, but it has a very low dielectric constant of about two or so, as mentioned earlier. Of course, on the other end of the spectrum, you have cases with large dielectric constants but very small band gaps. These are the classic semiconducting polymers: you dope them and you can create electronic devices, but they are not so good for use as a capacitor dielectric for energy storage applications.

So, initially we were disappointed, but then we asked the question: why not focus on this quadrant of the plot? Maybe what we need is not a super-high dielectric constant and a super-high band gap; maybe all we need is a moderate value of the band gap and a moderate value of the dielectric constant, and maybe that will work just fine. So, our target values became a band gap greater than 3 eV and a dielectric constant greater than 4.

We then picked three cases from our dataset and recommended them to our experimental colleagues for synthesis. Amazingly, our experimental colleagues were able to make these three specific recommendations, which turned out to be polyurea, polyimide and polythiourea. Urea is a material that is used to make fertilizers, and so we were proposing to make a polymer out of that starting material: that's polyurea. The dielectric constants and band gaps we computed using DFT compared favourably with the numbers observed experimentally, and so we initially rejoiced. But that was a little premature, because none of these materials were actually practical materials: they were inflexible, in the form of brittle pellets. Hence, we could not make processable, flexible thin films out of any of these three materials.

So, in principle they were good materials, with the property values we had hoped for. But in practice, they were not really that useful, and this is where the ingenuity of my synthesis collaborator and his team of talented students came along. What they were able to do was to start with what we recommended, and then make tweaks and add linkers to our starting recommendation. In this way, they were able to make several dozen polymers that could now be processed into flexible thin films, so that breakdown measurements could be made.

Here are some examples of actual materials that came out of this program. This is the current standard material, biaxially-oriented polypropylene, with a dielectric constant of about 2, a breakdown strength of 700 megavolts per meter, and an energy density of about five joules per cc. And here are three example materials that came out of the lab, in the form of flexible thin films that are actually transparent, meaning the band gap is large enough. You can see that the dielectric constant of these new materials is two to three times that of polypropylene, and the breakdown field is about the same or maybe slightly better than that of polypropylene. This basically translates to materials with an energy density two to three times that of polypropylene.

Some of these materials are going through scale-up right now. This story is actually continuing and taking a different direction as well, which I don't have time to dwell on, and that is to put metals into the backbone of polymers to increase the dielectric constant even more. The case of tin replacing carbon in some locations proved to be especially interesting. These materials are actually being synthesized, and we are also having some success with their processing.

Up to this point, this was purely a DFT-driven story on the computational side, with a lot of iteration with experimental synthesis and processing. As a DFT story, it was very exciting, very accurate and very versatile, but also slow. This is where a compelling push from my experimental collaborators came in very handy. These collaborators would come to my office and say: “well, here is a polymer we can make today. Can you tell me what its dielectric constant is?”. I would then reply: “give me a couple of weeks, because we first have to do structure prediction calculations. This polymer has a big repeat unit, and therefore we need to first figure out what its structure is. Then, after that, we need to do our DFPT calculations to get the dielectric constant, so it'll take us some time”. At this point, the experimental collaborator would ask: “well, you have all this data, and you have computed dielectric constants for several hundred polymers. Why can't you just use some notion of similarity to give me an estimate?”. This was one of the propelling factors that took us in the materials informatics direction.

In fact, we have a fair amount of data that was generated through our own computations. But then we also started to dip into the experimental literature, data collections and databases from collaborators, published articles and handbooks, from which we could collect data for a variety of properties that today, at least, are very hard to access using DFT. One example is the glass transition temperature. This is the temperature above which the polymer is rubbery, and below which the polymer becomes a glass.

Chris Wolverton talked about metallic glasses in his previous talk; the analogue here is polymers that are amorphous. If a polymer is amorphous, then below the glass transition temperature it exists in a glassy state, and that is something that is very hard to access directly using DFT calculations today. Therefore, we combined the experimental as well as computational data to create a set of machine learning-based models for quick and rapid materials property prediction.

Here is my very quick overview of how I think about building predictive models, using data as the starting point. We assume that we have already gathered this data. We are dealing with a whole lot of materials, material one through material N, for which we have some property value, either measured or computed. Then we ask this question: given a new material X, what is the value of that property for it? It turns out that, under some circumstances, we can make such predictions.

The first step is what we call fingerprinting; it is also called representation. How do we numerically represent these textual labels? The fingerprints are numerical vectors that represent each of these cases, and once we do that, and do it well, we have succeeded in fingerprinting. The next step is mapping those fingerprint vectors to the property values of interest. Once we establish that mapping, we have a machine learning model available.

Once that happens successfully, we are in a position to answer the original question. We have a functional form that connects the fingerprint vectors to the property values, so if you want to know the property value for material X, you find the fingerprint, or representation, of material X and feed it to the model. You consequently get your property value out, and you've answered the question.
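Here is a minimal sketch of that workflow, with placeholder fingerprints and property values and a generic off-the-shelf regressor rather than the specific models used in this work:

```python
# A minimal sketch (placeholder fingerprints and property values, generic
# regressor): map fingerprints of N known materials to a property, then
# predict the property of a new material X from its fingerprint.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_known = rng.random((200, 16))      # fingerprint vectors of known materials
y_known = rng.random(200)            # their measured/computed property values

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_known, y_known)

x_new = rng.random((1, 16))          # fingerprint of the new material X
print("predicted property for material X:", model.predict(x_new)[0])
```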

Hence, that's the basic idea. But there are a lot of critical things here. Perhaps the most important aspect is the representation, or fingerprinting, step. We have a lot of options for this: we can take off-the-shelf approaches, but we can also modify things to suit our needs. The fingerprinting step is actually the most interesting step, and also the most critical one. In the case of polymers, we have come up with a way of representing polymers at a variety of length scales: starting from the atomic scale, where we take into account atomic-level connectivity; to the block scale, where we take into account what kind of building blocks you have; all the way to the chain level, i.e. what kind of branches and side-chains you have, what sort of groups are in the side-chains and the main chain, how long the side-chains are, and other considerations of that nature.

Of course, in principle you could do everything from the atomic level, because the atomic-level picture is in principle a complete description of the polymer material. But after a point, that becomes hopeless, because your fingerprint dimensionality will explode on you. So, at some point, you want to truncate your atomic-level description, zoom out of the problem, and look at things that are a little bit bigger in length scale. Then, at that scale, you go up to a point and truncate again, and then you step back and look at things at an even bigger length scale, which is why we call this a hierarchical-length-scale approach. We are also working on the next level in this series.
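As an illustration only, here is a minimal sketch of what such a hierarchical fingerprint could look like in code; the feature functions below are hypothetical placeholders, not the actual descriptors used in this work:

```python
# An illustration only (hypothetical placeholder features): build a hierarchical
# fingerprint by concatenating features computed at the atomic, block, and
# chain length scales.
import numpy as np

def atomic_scale_features(repeat_unit):
    # e.g. counts of selected atomic fragments (placeholder: element counts)
    return np.array([repeat_unit.count("C"), repeat_unit.count("O"),
                     repeat_unit.count("N"), repeat_unit.count("S")], float)

def block_scale_features(repeat_unit):
    # e.g. fractions of predefined building blocks (placeholder block list)
    blocks = ["CH2", "C6H4", "CO", "NH"]
    counts = np.array([repeat_unit.count(b) for b in blocks], float)
    return counts / max(counts.sum(), 1.0)

def chain_scale_features(repeat_unit):
    # e.g. repeat-unit size and a crude branching count (placeholders)
    return np.array([len(repeat_unit), repeat_unit.count("(")], float)

def hierarchical_fingerprint(repeat_unit):
    return np.concatenate([atomic_scale_features(repeat_unit),
                           block_scale_features(repeat_unit),
                           chain_scale_features(repeat_unit)])

print(hierarchical_fingerprint("CH2CH2OC(=O)NH"))   # toy repeat-unit string
```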

I'm going to say very little about the learning aspect itself, at least at this point, since I'll touch on it again towards the end of my presentation. A lot of our initial work was done using Gaussian process regression, and a variant of it called co-Kriging, where we can take data from multiple sources of varying fidelities and accuracies and fuse them together to build a learning model at the highest level of fidelity.

So, this is called co-Kriging. We have also used neural networks, especially in cases where the dataset sizes are big enough. The output of all of this effort has been a variety of property prediction models, ranging from electronic properties such as the band gap, ionization energy, etc., to dielectric and optical properties (the frequency-dependent dielectric constant, for example, a very exciting new development), thermal properties such as the glass transition temperature I mentioned earlier, and others.
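For illustration, here is a minimal sketch on synthetic data of Gaussian process regression together with a very simple two-fidelity scheme in the spirit of co-Kriging, in which a second GP learns the discrepancy between low- and high-fidelity data; the real multi-fidelity machinery is more involved:

```python
# A minimal sketch on synthetic data (a simple two-fidelity scheme in the
# spirit of co-Kriging, not the actual implementation): a GP is trained on
# plentiful low-fidelity data, a second GP learns the discrepancy to the
# scarce high-fidelity data, and predictions fuse the two.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X_lo = rng.uniform(0, 1, (80, 1))                       # low-fidelity inputs
y_lo = np.sin(6 * X_lo[:, 0]) + 0.1 * rng.normal(size=80)
X_hi = rng.uniform(0, 1, (12, 1))                       # scarce high-fidelity inputs
y_hi = np.sin(6 * X_hi[:, 0]) + 0.3 * X_hi[:, 0]        # "true" high-fidelity property

kernel = ConstantKernel() * RBF(length_scale=0.2)
gp_lo = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X_lo, y_lo)

delta = y_hi - gp_lo.predict(X_hi)                      # high-minus-low discrepancy
gp_delta = GaussianProcessRegressor(kernel=kernel, alpha=1e-2).fit(X_hi, delta)

X_test = np.linspace(0, 1, 5).reshape(-1, 1)
print(gp_lo.predict(X_test) + gp_delta.predict(X_test)) # fused high-fidelity estimates
```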

There are other properties as well, including solubility-related ones: we can make a judgment on whether a given solvent will dissolve or precipitate a polymer, or, for example, what the permeability of a given gas through a polymer membrane is, and things of that nature. All of these predictive models are available open access online through a tool called Polymer Genome (polymergenome.org), where you can draw the polymer, type in its name, or type in its SMILES string.

Then you hit the "predict" button, and a variety of results for the electronic, dielectric and other properties of the material under consideration will appear, including a quick, coarse optimization of the structure's atomic coordinates, and so on and so forth. Now, what I have told you so far about Polymer Genome was a great place for us to start, and it has since built up and gathered momentum. We are adding features and bells and whistles to it, and also to surrounding areas. All of this is coalescing into a polymer informatics ecosystem, in which we are playing a role, and in which others in the community are also doing some excellent and exciting work.

So, this general area of polymer informatics is really gathering momentum, I would say. At a very high level, this sort of ecosystem has to have a basic set of components. One of them is the data aspect, which we touched on earlier; then the representation aspect, and again, we touched on that. Then there's the artificial intelligence, or learning, aspect, to build a variety of predictive models, and we touched on that too. Then there is design: we want to design materials that meet certain target property requirements.

The prediction and design aspects together define the prediction pipeline that gets us to materials meeting certain property requirements. Of course, there is the user interface aspect as well, to make the tools easy for people to use. Users of the platform can then get recommendations for polymers to use as part of some application need. That opens up the territory of synthesis planning for such polymer solutions.

How do you make synthesis recommendations for a newly designed polymer? These are things we are working on as well. Of course, if you want to do some computations on those polymers, AI-guided, automated data generation workflows can be set up, and we are working on those at the moment. Ultimately, the data that comes from that aspect feeds back into our first box.

Then, you can keep going through this loop. The hope is that ultimately this sort of iterative ecosystem will lead to a capability whose intelligence progressively improves. So, what I'm going to do from now on, for the next 15 minutes or so, is to give some highlights on each of these boxes. Let's start with the design aspect.

You basically want to design polymers that meet certain property requirements. The easiest thing to do is something called enumeration: you just make a big list of materials that may be of interest to you, make predictions for those, and select the cases that meet your target property requirements. That's called the enumeration-based approach.

This is an example of this approach for the case of polymers for extreme-temperature and high-electric-field applications, where you need a large band gap, a high glass transition temperature and a high dielectric constant. Here are three 2D plots, where we enumerated thousands of polymers and calculated these properties using the models. We make those plots, look at them, and simply select the cases that meet the target requirements.
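In code, the enumeration-based approach is essentially a predict-and-filter loop. Here is a minimal sketch with hypothetical candidate repeat units, a placeholder property predictor, and illustrative thresholds:

```python
# A minimal sketch (hypothetical candidates, a placeholder property predictor,
# and illustrative thresholds): the enumeration-based approach is essentially
# predict-and-filter over a big list of candidate polymers.
def predict_properties(repeat_unit):
    # Placeholder for the trained ML models; returns (band gap [eV], Tg [K], eps_r).
    n = len(repeat_unit)
    return 3.0 + 0.1 * n, 400.0 + 8.0 * n, 2.5 + 0.05 * n

candidates = ["CH2-CH2", "CH2-C6H4-CO-NH", "CH2-O-C6H4-CS-NH"]   # illustrative list

targets = {"band_gap": 3.0, "Tg": 450.0, "eps_r": 3.0}           # illustrative targets

selected = []
for unit in candidates:
    band_gap, Tg, eps_r = predict_properties(unit)
    if band_gap > targets["band_gap"] and Tg > targets["Tg"] and eps_r > targets["eps_r"]:
        selected.append(unit)

print("candidates meeting all targets:", selected)
```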

This is another example for battery electrolytes, and here is yet another for polymer membranes. Depending on the application, your target property needs may be quite different, but the enumeration-based approach is a very quick and easy way of getting to the next level of design. Something even better than the enumeration approach is so-called sequential or active learning. The cases that appear promising can be put through further computations, consisting of actual physics-driven calculations or actual experimental validation. That provides more data, and that can make your model better.

You can keep iterating like this until you find materials that meet your goals. This plot is an example of how we were able to design polymers with a high glass transition temperature; in the interest of time, I'm not going to go into the details. But the sequential or active learning approach, especially in an experimental setting, is actually quite powerful.
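Here is a minimal sketch of such a sequential/active learning loop on synthetic data; the acquisition rule and the "validation" function are generic stand-ins for the DFT or experimental step, not the specific protocol used in this work:

```python
# A minimal sketch on synthetic data (generic loop, not the actual study): a
# sequential/active learning cycle that trains a model, picks the most
# promising unlabelled candidate, "validates" it with a stand-in for DFT or
# experiment, adds the result to the training set, and repeats.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
true_property = lambda X: np.sin(5 * X[:, 0]) + X[:, 1]   # stand-in for DFT/experiment

pool = rng.uniform(0, 1, (500, 2))                        # unlabelled candidate pool
idx = rng.choice(len(pool), 10, replace=False)            # small initial dataset
X_train, y_train = pool[idx], true_property(pool[idx])

for _ in range(20):
    gp = GaussianProcessRegressor().fit(X_train, y_train)
    mean, std = gp.predict(pool, return_std=True)
    best = int(np.argmax(mean + std))                     # simple UCB-like acquisition
    x_new = pool[best:best + 1]
    X_train = np.vstack([X_train, x_new])                 # add the validated candidate
    y_train = np.concatenate([y_train, true_property(x_new)])

print("best property value found so far:", float(y_train.max()))
```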

Let me move on to more sophisticated design algorithms. Let us say we want a polymer with a glass transition temperature greater than some value and a band gap greater than some value; let's say that's our design goal. I'm going to highlight two algorithms we worked on very recently: one based on the genetic algorithm, one of the oldest algorithms of all, and one based on a variational autoencoder.

In the genetic algorithm, things are quite intuitive and simple. In fact, polymers lend themselves really well to this type of exploration. You basically start with two parent polymers. You splice them: you take one piece of one and another piece of the other, tie them together, and thus you create your children. That's basically a crossover operation. Then, every so often, you do a mutation operation, where you take a random block and swap it out for something else, to bring in diversity beyond the existing gene pool. Then, you predict properties for those candidates, and you cycle through. In summary, you have an initial population, you do crossover and mutation, you do property prediction, and then you select the best candidates, as in "survival of the fittest".
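Here is a minimal sketch of that loop with a toy block representation and a toy fitness function standing in for the ML property predictions; it is meant only to make the crossover/mutation/selection cycle concrete:

```python
# A minimal sketch (toy block representation and a toy fitness function in
# place of the ML property predictors): the crossover/mutation/selection
# cycle of a genetic algorithm over polymer repeat units built from blocks.
import random

BLOCKS = ["CH2", "C6H4", "CO", "NH", "O", "CS"]           # illustrative block alphabet

def random_polymer(length=6):
    return [random.choice(BLOCKS) for _ in range(length)]

def crossover(parent_a, parent_b):
    cut = random.randrange(1, len(parent_a))              # splice one piece of each parent
    return parent_a[:cut] + parent_b[cut:]

def mutate(polymer, rate=0.1):
    return [random.choice(BLOCKS) if random.random() < rate else b for b in polymer]

def fitness(polymer):
    # Placeholder score; in practice this would be ML predictions of Tg,
    # band gap, etc. compared against the design targets.
    return polymer.count("C6H4") + polymer.count("CO")

population = [random_polymer() for _ in range(50)]
for _ in range(30):                                       # "survival of the fittest" loop
    children = [mutate(crossover(*random.sample(population, 2))) for _ in range(50)]
    population = sorted(population + children, key=fitness, reverse=True)[:50]

print("best candidate:", population[0], "score:", fitness(population[0]))
```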

You keep the ones that come closest to meeting the property requirements, throw away the rest, and keep cycling. Here is an example of results for the band gap and glass transition temperature. The box we want to be in is where the glass transition temperature is greater than 500 Kelvin and the band gap is greater than 6 eV. There are just a few known polymers in that box, but using the genetic algorithm, we've been able to populate that part of the plot quite richly.

Here is another example, this time a generative algorithm: the syntax-directed variational autoencoder, where you have an encoder piece which is a neural network, and a decoder piece which is another neural network. There is also a latent space, which is actually like a fingerprint representation in an abstract, low-dimensional space that represents our polymers. This latent space is continuous, whereas the polymer material space is actually discrete.

So, this is an algorithm that is quite exciting. It has been used successfully in other domains as well, and therefore we've been trying it within the polymer domain. Once this autoencoder has been trained, you can find the parts of the latent space that correspond to attractive properties and sample points from those regions. The decoder will then decode those points into actual polymers, and you get your desired polymer designs.
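Here is a minimal sketch of a generic variational autoencoder on toy fingerprint vectors (assuming PyTorch); the actual syntax-directed VAE operates on polymer grammars and is considerably more elaborate:

```python
# A minimal sketch (generic VAE on toy fingerprint vectors, not the actual
# syntax-directed VAE): an encoder maps polymers to a continuous latent space,
# and a decoder maps latent points back to polymer representations.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=32, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)            # latent mean
        self.to_logvar = nn.Linear(64, latent_dim)        # latent log-variance
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.decoder(z), mu, logvar

vae = TinyVAE()
x = torch.rand(8, 32)                                     # toy polymer fingerprints
recon, mu, logvar = vae(x)
loss = nn.functional.mse_loss(recon, x) \
       - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # reconstruction + KL
loss.backward()
print(float(loss))
```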

Here are some examples of designs, where we used both the genetic algorithm and the syntax-directed variational autoencoder to design polymers with a high glass transition temperature and a large band gap; the two algorithms worked independently. Interestingly, they produced designs that are quite similar in terms of the motifs the polymer repeat units contain, with saturated rings on both sides containing fluorine atoms.

So, both algorithms were able to design a number of polymers that meet these criteria. We're also exploring a couple of other algorithms, and we are now in the process of validating those designs with our experimental collaborators. That was a little bit about design; now I'm going to go backwards a little bit and talk about the learning algorithm itself. I mentioned earlier that we use Gaussian process regression in some cases and neural networks in others.

Here is a new approach that we considered very recently, which goes under the name of multitask learning. Let's take a look at the correlation matrix for a couple of dozen properties of interest to us: it's a matrix of Pearson correlation coefficients between all pairs of properties. A value of zero basically means that there is no correlation between that pair of properties; a high positive value means they are strongly positively correlated, and a large negative value means they are strongly negatively correlated. You can see that a lot of properties are correlated. Now, when we have datasets for multiple properties, one approach, which we have used in the past, is to develop independent models for each property. But I think a more efficient and scalable thing to do is to bring all the data together, or at least some of it.

You use something like the multitask learning approach, which is really an information-fusion approach, and build models that simultaneously ingest multiple datasets for multiple properties and simultaneously predict multiple properties. We tried it for the entire dataset, and it actually worked better than the single-task, single-property models. But what worked even better was to break the entire dataset up into smaller subsets, with each subset containing strongly correlated properties; in this way we were able to do a remarkable job in terms of predictive accuracy.

Here are a couple of examples of how such multitask learning can be set up. You have a neural network, and you feed in the data corresponding to all the properties, along with the outcomes for the different properties. One way, which works quite well, is to have as many nodes in the output layer as there are properties. But what works even better is the so-called selector approach, where you have a selector vector that tells the network which property you are talking about, and that particular property then gets delivered as the output.
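Here is a minimal sketch of the selector flavour (a generic architecture with made-up dimensions, assuming PyTorch; not the published model): the fingerprint is concatenated with a one-hot selector indicating the requested property, and the network returns a single value for that property:

```python
# A minimal sketch (generic architecture, made-up dimensions): the "selector"
# flavour of multitask learning, where the fingerprint is concatenated with a
# one-hot selector vector indicating which property is requested.
import torch
import torch.nn as nn

n_fingerprint, n_properties = 32, 6

net = nn.Sequential(
    nn.Linear(n_fingerprint + n_properties, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),                                     # one output: the selected property
)

fingerprints = torch.rand(16, n_fingerprint)              # toy polymer fingerprints
prop_ids = torch.randint(0, n_properties, (16,))          # which property each row targets
selector = nn.functional.one_hot(prop_ids, n_properties).float()
targets = torch.rand(16, 1)                               # toy property values

pred = net(torch.cat([fingerprints, selector], dim=1))
loss = nn.functional.mse_loss(pred, targets)
loss.backward()
print(float(loss))
```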

So, this is a paper that was recently published, and it's a very exciting development. I'm going to show one last example now, and again, I'm going to step backwards. We already talked about design, and we talked about a new learning algorithm; now we're going to talk about data. All of this assumes, in fact, that we have sufficient data, but many in the community are facing a data crunch, in the form of a data bottleneck.

Computationally, we can produce a lot of data if we choose the right approach. But regardless, materials science generally falls under the small- or medium-data regime; Matthias Scheffler already talked a lot about that in his previous talk. How do we make the best use of a small-data situation, and how do we increase the dataset size? At least within the polymer area, we're in the process of creating a pipeline that uses natural language processing (NLP) methods to automatically and autonomously extract data, as well as knowledge, from the polymer literature.

Now, there are others in the community who have done remarkable work using NLP for other materials domains, and we are taking baby steps to do the same for the polymer domain. Essentially, the idea is that each text entity, called a token (for example the “polyethylene oxide” or “scanning electron microscopy” tokens), is represented as a word vector.

These vectors are one of the outputs of this NLP approach. You collect your records, you download papers after getting permission from publishers, you clean up their contents, and then you tokenize the resulting text and train your word vectors. Each token is now represented by a numerical vector that is context-sensitive: it sort of knows the context of the phrase and the general contextual meaning of the token. So, this is more than just doing a text search; these are actually language models, and there is some understanding of the contextual information in these word vectors.

So, you take these word vectors, and I'm going to show just a few quick examples of what we have done so far. First of all, we train this framework on the corpus (the main body) of polymer-related publications, which gives us all these word vectors. Take the word vectors corresponding to polymer names, for instance. If we plot a 2D projection of these word vectors, the results are very interesting, because the polymers cluster in terms of applications: one cluster comprises only polymer adhesives, another contains the biodegradable polymers, another the conducting polymers, and so on. You can sort of see what these polymers are, and this helps to automatically extract the information from the literature and sort it out.
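Here is a minimal sketch of that pipeline on a toy corpus (assuming gensim 4.x and scikit-learn; the tokens, corpus and projection method are illustrative placeholders for the real pipeline):

```python
# A minimal sketch (toy corpus, assuming gensim 4.x): train word vectors on
# tokenized polymer text, then project polymer-name vectors to 2D to look for
# application-based clustering.
from gensim.models import Word2Vec
from sklearn.decomposition import PCA

# each "sentence" is a list of tokens extracted from cleaned-up papers (toy examples)
sentences = [
    ["polyethylene", "oxide", "is", "a", "solid", "electrolyte"],
    ["polystyrene", "films", "were", "studied", "by", "scanning_electron_microscopy"],
    ["polypropylene", "is", "used", "as", "a", "capacitor", "dielectric"],
] * 50   # repeat so the toy model has something to train on

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=20)

polymer_names = ["polyethylene", "polystyrene", "polypropylene"]
vectors = [model.wv[name] for name in polymer_names]
coords = PCA(n_components=2).fit_transform(vectors)       # 2D projection for plotting
for name, (x, y) in zip(polymer_names, coords):
    print(name, round(float(x), 3), round(float(y), 3))
```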

Then there are these things called analogies that we can also do. For example, styrene is the starting monomer for polystyrene, which is also abbreviated as PS. As a side note, one of the issues in the polymer community is that the naming convention is not standardized. Although there is a standard naming convention (the IUPAC system), it is cumbersome, and so the community uses a variety of different naming conventions, including abbreviations. We therefore need to do something called normalization, meaning that we need the language model to recognize that these different names refer to the same entity. Anyway, what you see here is called an analogy: styrene is a monomer, polystyrene is the polymer you make out of that monomer, and polystyrene is also called PS. The analogy connects up all of these concepts in the word-vector space.
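Analogies of this kind are typically answered by vector arithmetic on the word vectors. Here is a minimal, self-contained sketch on a toy corpus (a real corpus of polymer papers is needed for meaningful answers):

```python
# A minimal sketch (toy corpus; a real corpus is needed for meaningful results):
# answer "styrene is to polystyrene as ethylene is to ...?" by vector arithmetic.
from gensim.models import Word2Vec

sentences = [
    ["styrene", "polymerizes", "to", "polystyrene"],
    ["ethylene", "polymerizes", "to", "polyethylene"],
    ["polystyrene", "is", "abbreviated", "as", "PS"],
] * 100

model = Word2Vec(sentences, vector_size=32, window=5, min_count=1, epochs=50)
result = model.wv.most_similar(positive=["polystyrene", "ethylene"],
                               negative=["styrene"], topn=3)
for token, similarity in result:
    print(token, round(similarity, 3))   # on a real corpus, "polyethylene" should rank highly
```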

Here is another example that came out of this work: what is the polymer that has been studied the most? To answer this, we look at the tokens corresponding to various polymers and produce a frequency histogram. Polystyrene turns out to be the most studied polymer over the last 20 years or so; our corpus of papers covers roughly the years 2000 to 2020. In those 20 years, polystyrene is the most studied polymer, followed by polyethylene and then polypropylene. Polypropylene is actually the material the story I started out with today was about, and it turns out to be the third most commonly studied polymer. The relative number of occurrences as a function of year also tells us which polymers have become more or less popular with time. We can also study the corresponding applications, i.e. the applications for which polymers have been studied the most: membranes for the separation of complex mixtures of fluids and gases, fibres, energy applications, etc. So, this is the sort of information that can be extracted. But our goal is not just to extract knowledge like this; it is also to extract data to augment our database. Downstream, the data and the knowledge models can help with creating predictive models, and that's the ultimate goal.

I'm going to stop here and wrap up. There are a lot of high-level components that any such ecosystem needs to have. In this particular case, data is very critical: we start off with what we have, and then we need to augment it through high-throughput experimentation, computation, natural language processing, etc., which I just talked about.

I gave a very quick overview of how we hierarchically represent, or fingerprint, polymers. That hierarchical fingerprinting scheme is handcrafted and human-generated, based on our domain knowledge. But we, and others in the community, are also actively working on machine-generated fingerprints, so that fingerprints can be generated automatically as well; that's another exciting domain, which involves graph neural networks. We also talked about the AI methods for prediction, using GPR (Gaussian process regression), multi-fidelity information fusion, deep neural networks, etc., and about enumeration-based design, active learning-based design, and generative and genetic algorithm-based design. In the end, we would like to deploy all of these techniques so that the community can also use them.

Our synthesis colleagues can also take advantage of such techniques: we want to lower the barrier to synthesis and processing, and allow experimentalists to use these tools so that they can actually make these materials. So, we are in the process of doing some exciting work not just on designing polymers, but also on providing some guidance on how one might actually synthesize them. That's a very tough problem, and again, we are taking baby steps in that direction, but it's a very important problem to solve.

Once again, I'm going to finish by thanking the people who actually did all this work, since it's been a team effort. Many people who were in the group in the past have contributed to this, including former students and postdocs who have moved on and flourished elsewhere. There are many who are still in the group, including postdocs, graduate students and undergraduate students, who are working actively and passionately. So, a really big thanks to them. Finally, I would certainly like to thank the federal funding agencies, as well as the industrial sponsors, for their support of all of this work. Last but not least, thank you all for your attention. I'm happy to take some questions at this stage.

 

 

Video

* You can download the full script PDF file and watch the recording on the workshop page.

 

 

 Summary

 Gabriele Mogni 

 Virtual Lab Inc.

 

