{"id":806026,"date":"2021-12-20T11:08:11","date_gmt":"2021-12-20T19:08:11","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=806026"},"modified":"2021-12-20T11:08:13","modified_gmt":"2021-12-20T19:08:13","slug":"azure-ai-milestone-microsoft-kear-surpasses-human-performance-on-commonsenseqa-benchmark","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/azure-ai-milestone-microsoft-kear-surpasses-human-performance-on-commonsenseqa-benchmark\/","title":{"rendered":"Azure AI milestone: Microsoft KEAR surpasses human performance on CommonsenseQA benchmark"},"content":{"rendered":"\n
\"An<\/figure>\n\n\n\n

KEAR (Knowledgeable External Attention for commonsense Reasoning), along with recent milestones in computer vision and neural text-to-speech, is part of a larger Azure AI mission to provide relevant, meaningful AI solutions and services that work better for people because they better capture how people learn and work, with improved vision, knowledge understanding, and speech capabilities. At the center of these efforts is XYZ-code, a joint representation of three cognitive attributes: monolingual text (X), audio or visual sensory signals (Y), and multilingual (Z). For more information about these efforts, read the XYZ-code blog post.

Last month, our Azure Cognitive Services team, comprising researchers and engineers with expertise in AI, achieved a groundbreaking milestone by advancing commonsense language understanding. When given a question that requires drawing on prior knowledge and five answer choices, our latest model, KEAR (Knowledgeable External Attention for commonsense Reasoning), performs better than people answering the same question, with human performance calculated as the majority vote among five individuals. KEAR reaches an accuracy of 89.4 percent on the CommonsenseQA leaderboard, compared with 88.9 percent human accuracy. While the CommonsenseQA benchmark is in English, we follow a similar technique for multilingual commonsense reasoning and topped the X-CSR leaderboard.

Although recent large deep learning models trained with big data have made significant breakthroughs in natural language understanding, they still struggle with commonsense knowledge about the world, information that we, as people, have gathered in our day-to-day lives over time. Commonsense knowledge is often absent from task input but is crucial for language understanding. For example, take the question "What is a treat that your dog will enjoy?" To select an answer from the choices salad, petted, affection, bone, and lots of attention, we need to know that dogs generally enjoy food such as bones for a treat. Thus, the best answer would be "bone." Without this external knowledge, even large-scale models may generate incorrect answers. For example, the DeBERTa language model selects "lots of attention," which is not as good an answer as "bone."

On the other hand, expert systems built from many rules and rich domain knowledge but little data have failed to deliver on their promise of AI that understands and reasons more like people do. We revisit the rules-and-knowledge approach and find that deep learning models and knowledge can be organically combined via an external attention mechanism to achieve breakthroughs in AI. With KEAR, we specifically equip language models with commonsense knowledge from a knowledge graph, a dictionary, and publicly available machine learning data.

For the CommonsenseQA task, given a question and five candidate answers, the KEAR model first retrieves related knowledge from a knowledge graph via entity linking, from a dictionary via word matching, and from related QA datasets via text retrieval. The retrieved knowledge is then concatenated with the input question and candidate answer and fed into a language model to produce a score. The candidate answer with the highest score is chosen as the output. The final submission is generated by an ensemble of 39 language models, such as DeBERTa and ELECTRA, with majority voting. In this way, the KEAR model can attend to related external knowledge for effective commonsense understanding.
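The inference loop described above can be sketched in a few lines of Python. This is an illustration of the procedure rather than the released KEAR code: the retriever callables, the ensemble models, and the scoring function are hypothetical stand-ins supplied by the caller.

```python
from collections import Counter

def kear_answer(question, candidates, retrievers, models, score_fn):
    """Pick an answer via external attention plus ensemble majority voting.

    retrievers: callables returning knowledge text for the question
                (e.g., knowledge-graph, dictionary, and QA-data lookups).
    models:     language models in the ensemble (e.g., DeBERTa, ELECTRA).
    score_fn:   score_fn(model, text) -> float; higher means a better candidate.
    """
    # 1. Retrieve external knowledge from every source and concatenate it.
    knowledge = " ".join(r(question, candidates) for r in retrievers)

    votes = []
    for model in models:
        # 2. Concatenate question, candidate answer, and knowledge, then
        #    score each candidate with the language model.
        scores = [score_fn(model, f"{question} {c} {knowledge}") for c in candidates]
        # 3. Each model votes for its highest-scoring candidate.
        votes.append(candidates[scores.index(max(scores))])

    # 4. Majority voting across the ensemble gives the final answer.
    return Counter(votes).most_common(1)[0][0]
```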

For example, for the aforementioned question, "What is a treat that your dog will enjoy?", KEAR retrieves "Dog - desires - petted, affection, bone, lots of attention" from the knowledge graph ConceptNet (note that the choice "salad," offered as one of the five options, doesn't appear in the retrieved results); "Bone: a composite material making up the skeleton of most vertebrates" from the dictionary Wiktionary; and "What do dogs like to eat? bones" from the training data in the CommonsenseQA dataset. After concatenating the retrieved knowledge with the input, KEAR feeds it into the DeBERTa model, which selects the answer "bone."
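Concretely, the concatenated input for the "bone" candidate might look like the following. The separator and exact formatting are assumptions for illustration; only the quoted snippets come from the retrieval example above.

```python
question = "What is a treat that your dog will enjoy?"
candidate = "bone"

# Retrieved knowledge from the example above (ConceptNet, Wiktionary,
# CommonsenseQA training data), joined into one text segment.
knowledge = (
    "Dog desires petted, affection, bone, lots of attention. "
    "Bone: a composite material making up the skeleton of most vertebrates. "
    "What do dogs like to eat? bones"
)

# Text-level concatenation; "[SEP]" is an illustrative separator, and the
# real separator depends on the tokenizer's special tokens.
model_input = f"{question} {candidate} [SEP] {knowledge}"
```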

In applying external attention to multilingual commonsense reasoning, we translate a non-English question into English, retrieve the knowledge from various sources, and translate the knowledge text back into the source language for external attention. The proposed model, Translate-Retrieve-Translate (TRT), achieved first place on both the X-CODAH and X-CSQA datasets of the X-CSR benchmark.
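A minimal sketch of the translate-retrieve-translate flow follows, assuming generic `translate` and `retrieve_knowledge` callables (any machine translation service and the retrieval step above would do); this is not the exact TRT implementation.

```python
def trt_knowledge(question, source_lang, translate, retrieve_knowledge):
    """Gather external knowledge for a non-English question via TRT.

    translate(text, src, tgt) and retrieve_knowledge(question_en) are
    assumed callables, not part of any released code.
    """
    # 1. Translate the question into English.
    question_en = translate(question, src=source_lang, tgt="en")
    # 2. Retrieve knowledge from the English-language sources
    #    (knowledge graph, dictionary, related QA data).
    knowledge_en = retrieve_knowledge(question_en)
    # 3. Translate the retrieved knowledge back into the source language
    #    so the model can attend to it alongside the original question.
    return translate(knowledge_en, src="en", tgt=source_lang)
```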

External attention: The benefits of looking outward

External attention is complementary to self-attention, which has been widely adopted by many of today's AI systems, such as those using Transformers. These systems rely on a large amount of diverse data to achieve impressive AI performance with very large models. This has prompted the recent boom of super-large Transformer models, ranging from BERT with 110 million parameters to GPT-3 with 175 billion parameters. Nevertheless, numerous studies have shown that the corresponding general understanding and generation capabilities of these models are lower than those of people, especially on tasks requiring external knowledge. Moreover, the sheer size of these models poses a challenge for much of the AI community to use, study, and deploy, not to mention the significant carbon footprint created during computation.

\"Figure
Figure 2: External Attention to various knowledge sources\u201d with \u201cThe KEAR model first retrieves relevant knowledge from various sources and then uses a language model to conduct self-attention to the input and external attention to the knowledge.<\/figcaption><\/figure><\/div>\n\n\n\n

While Transformer models process input by looking inward via self-attention, external attention makes a model look outward by providing it with related context and knowledge from various sources, including knowledge graphs, dictionaries, corpora, and other language models' output, and then letting the model conduct both self-attention to the input and external attention to the knowledge. The external information is stored in a symbolic way (for example, in plain text or knowledge graph entries) and thus enables a moderately sized Transformer model to excel in language understanding. Moreover, the text-level concatenation of input and knowledge used by KEAR requires no change to the Transformer model architecture, enabling existing systems to be easily adapted to external attention.
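To make this concrete, here is a minimal sketch of feeding concatenated (question + candidate, knowledge) text pairs to an off-the-shelf multiple-choice head, assuming the Hugging Face transformers library and the public microsoft/deberta-v3-large checkpoint. The input format is illustrative rather than KEAR's exact one, and the freshly initialized classification head would still need fine-tuning before the scores are meaningful.

```python
# Minimal sketch: external attention via text concatenation only, so a
# standard multiple-choice Transformer is used with no architecture change.
# Assumes the Hugging Face transformers library; the input format is
# illustrative, not KEAR's exact one, and the newly initialized
# classification head would still need fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-large")
model = AutoModelForMultipleChoice.from_pretrained("microsoft/deberta-v3-large")

question = "What is a treat that your dog will enjoy?"
choices = ["salad", "petted", "affection", "bone", "lots of attention"]
knowledge = (
    "Dog desires petted, affection, bone, lots of attention. "
    "What do dogs like to eat? bones"
)

# One (question + choice, knowledge) pair per candidate; the retrieved
# knowledge is simply appended as a second text segment.
encoded = tokenizer(
    [f"{question} {c}" for c in choices],
    [knowledge] * len(choices),
    padding=True,
    truncation=True,
    return_tensors="pt",
)
# Multiple-choice models expect tensors of shape (batch, num_choices, seq_len).
batch = {k: v.unsqueeze(0) for k, v in encoded.items()}

with torch.no_grad():
    logits = model(**batch).logits          # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```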

Another benefit of external attention is that one can easily update the knowledge source to change the model's behavior. The latest world knowledge can be fed into the model by updating the knowledge graph with recent online sources. By incorporating explicit world knowledge, the model's decision process also becomes more transparent and explainable. These benefits can greatly facilitate the application of external attention to various natural language processing research projects and products, opening the door for us to better understand the meaning of text, associate it with related knowledge, and generate more accurate output.

For more information on KEAR, check out this Tech Minutes video and our GitHub page, and for our team's latest advancements, visit the Knowledge and Language Team page.
