tstm_a_2124831_sm2855.xlsx (22.41 kB)

MaterialBERT for natural language processing of materials science texts

Dataset posted on 2022-09-23, 07:40, authored by Michiko Yoshitake, Fumitaka Sato, Hiroyuki Kawano, and Hiroshi Teraoka.

A BERT (Bidirectional Encoder Representations from Transformers) model, which we named “MaterialBERT”, has been generated using scientific papers from a wide range of materials science fields as a corpus. A new vocabulary list for the tokenizer was generated from the materials science corpus. Two BERT models were generated with different tokenizer vocabulary lists: one with the original vocabulary made by Google, and the other with the vocabulary newly made by the authors. Word vectors embedded during pre-training with the two MaterialBERT models reasonably reflect the meanings of material names, both in material-class clustering and in the relationships between base materials and their compounds or derivatives, not only for inorganic materials but also for organic materials and organometallic compounds. Fine-tuning on CoLA (The Corpus of Linguistic Acceptability) starting from the pre-trained MaterialBERT gave a higher score than the original BERT. The two MaterialBERTs could also serve as a starting point for transfer learning of a narrower, domain-specific BERT.
