Google open-sources BERT, a state-of-the-art pretraining technique for natural language processing
Natural language processing (NLP) — the subcategory of artificial intelligence (AI) that spans language translation, sentiment analysis, semantic search, and dozens of other linguistic tasks — is easier said than done. Procuring diverse datasets large enough to train text-parsing AI systems is an ongoing challenge for researchers; modern deep learning models, which mimic the behavior of neurons in the human brain, improve when trained on millions, or even billions, of annotated examples.
One popular solution is pretraining, which refines general-purpose language models trained on unlabeled text to perform specific tasks. Google this week open-sourced its cutting-edge take on the technique — Bidirectional Encoder Representations from Transformers, or BERT — which it claims enables developers to train a “state-of-the-art” NLP model in 30 minutes on a single Cloud TPU (tensor processing unit, Google’s cloud-hosted accelerator hardware) or a few hours on a single graphics processing unit.
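In practice, that workflow amounts to loading a publicly released pretrained model and fine-tuning it briefly on a small labeled dataset for the target task. The sketch below illustrates the pattern; it uses the third-party Hugging Face transformers and PyTorch libraries rather than Google’s TensorFlow release, and the model name, toy data, and hyperparameters are illustrative assumptions, not Google’s published recipe.

```python
# A minimal sketch of the pretrain-then-fine-tune pattern: start from a
# publicly released BERT checkpoint and adapt it to a downstream task
# (here, toy two-class sentiment classification). Libraries, model name,
# data, and hyperparameters are illustrative, not Google's official recipe.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny labeled dataset for the downstream task.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # small learning rate, typical for fine-tuning

model.train()
for _ in range(3):  # a handful of epochs usually suffices when starting from pretrained weights
    optimizer.zero_grad()
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
```

Because the heavy lifting happens during pretraining, the task-specific step is short, which is what makes the 30-minutes-on-a-Cloud-TPU framing plausible.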
The release is available on GitHub and includes pretrained language representation models (in English) and source code built on top of the Mountain View company’s TensorFlow machine learning framework. Additionally, there’s a corresponding notebook on Colab, Google’s free cloud service for AI developers.
As Jacob Devlin and Ming-Wei Chang, research scientists at Google AI, explained, BERT is unique in that it’s both bidirectional, allowing it to access context from both past and future directions, and unsupervised, meaning it can ingest data that’s neither classified nor labeled. That’s as opposed to conventional NLP models such as word2vec and GloVe, which generate a single, context-free word embedding (a mathematical representation of a word) for each word in their vocabularies.
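The practical difference is easiest to see with an ambiguous word such as “bank.” The hedged sketch below compares BERT’s vectors for that word in two different sentences; it relies on the third-party Hugging Face transformers library purely for illustration, and the model name and sentences are assumptions rather than anything in Google’s release.

```python
# Sketch: context-free vs. contextual word vectors. A word2vec- or GloVe-style
# model stores one fixed vector per vocabulary word, so "bank" looks identical
# in every sentence; BERT produces a different vector for each usage because
# it reads the surrounding context in both directions.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def bank_vector(sentence):
    """Return BERT's contextual vector for the token 'bank' in a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # one 768-dim vector per token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index("bank")]

v_river = bank_vector("he sat on the bank of the river")
v_money = bank_vector("she deposited the check at the bank")

# Well below 1.0, because the two usages of "bank" get different vectors.
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```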
BERT learns to model relationships between sentences by pretraining on a task that can be generated from any corpus, Devlin and Chang wrote. It builds on Google’s Transformer, an open source neural network architecture based on a self-attention mechanism that’s optimized for NLP. (In a paper published last year, Google showed that Transformer outperformed conventional models on English to German and English to French translation benchmarks while requiring less computation to train.)
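Concretely, those sentence-level training examples can be generated automatically from running text: take a sentence, pair it either with the sentence that actually follows it or with a random one, and label the pair accordingly. The simplified sketch below shows that generation step; the toy corpus and 50/50 split are illustrative assumptions, and the actual release also pretrains on a second task that masks out individual words for the model to predict.

```python
# A simplified sketch of generating BERT-style sentence-pair pretraining
# examples from any unlabeled corpus: given sentences A and B, the label says
# whether B really follows A in the text. No human annotation is required.
# The corpus below is a toy stand-in.
import random

corpus = [
    "the man went to the store .",
    "he bought a gallon of milk .",
    "penguins are flightless birds .",
    "they live almost exclusively in the southern hemisphere .",
]

def make_sentence_pair_example(corpus, index, rng=random):
    """Build one (sentence_a, sentence_b, is_next) example starting at `index`."""
    sentence_a = corpus[index]
    if rng.random() < 0.5:
        return sentence_a, corpus[index + 1], True  # the genuine next sentence
    # Otherwise pair A with a sentence drawn from elsewhere in the corpus.
    candidates = [s for i, s in enumerate(corpus) if i not in (index, index + 1)]
    return sentence_a, rng.choice(candidates), False

for i in range(len(corpus) - 1):
    a, b, is_next = make_sentence_pair_example(corpus, i)
    print(f"IsNext={is_next}  A: {a!r}  B: {b!r}")
```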
When tested on the Stanford Question Answering Dataset (SQuAD), a reading comprehension dataset comprising questions posed on a set of Wikipedia articles, BERT achieved a 93.2 percent F1 score (a measure of accuracy), besting the previous state-of-the-art and human-level scores of 91.6 percent and 91.2 percent, respectively. And on the General Language Understanding Evaluation (GLUE) benchmark, a collection of resources for training and evaluating NLP systems, it achieved a score of 80.4 percent.
The release of BERT follows on the heels of the debut of Google’s AdaNet, an open source tool for combining machine learning algorithms to achieve better predictive insights, and ActiveQA, a research project that investigates the use of reinforcement learning to train AI agents for question answering.