{"id":1083822,"date":"2024-09-17T09:00:00","date_gmt":"2024-09-17T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1083822"},"modified":"2024-11-05T06:41:34","modified_gmt":"2024-11-05T14:41:34","slug":"eureka-evaluating-and-understanding-progress-in-ai","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/eureka-evaluating-and-understanding-progress-in-ai\/","title":{"rendered":"Eureka: Evaluating and understanding progress in AI"},"content":{"rendered":"\n
\"A<\/figure>\n\n\n\n

In the fast-paced progress of AI, the question of how to evaluate and understand the capabilities of state-of-the-art models is timelier than ever. New and capable models are released frequently, and each release promises the next big leap in the frontiers of intelligence. Yet, as researchers and developers, we often ask ourselves: are these models all comparable, if not the same, in terms of capabilities? There are, of course, strong reasons to believe they are, given that many score similarly on standard benchmarks. In addition, rankings on the numerous leaderboards do not offer a consistent and detailed explanation of why a model is ranked slightly better than others. However, if some models are fundamentally different, what are their strengths and weaknesses? More importantly, are there capabilities that are essential for making AI useful in the real world but still universally challenging for most models? Answering such questions helps us understand where we are on the frontier of AI and what capability improvements are needed to meet the expectations that humanity and science have for safe and responsible deployments of AI models.

The prevalence of these models depends on our ability to mature the science of in-depth AI evaluation and measurement. In our latest open-source release and technical report, EUREKA: Evaluating and Understanding Large Foundation Models, we start answering these questions by running an in-depth measurement analysis across 12 state-of-the-art proprietary and open-weights models. Behind this analysis stands Eureka, an open-source framework for standardizing evaluations of large foundation models beyond single-score reporting and rankings. The framework currently supports both language and multimodal (text and image) data and enables developers to define custom pipelines for data processing, inference, and evaluation, with the possibility to inherit from existing pipelines and minimize development work. Eureka and all our evaluation pipelines are available as open source to foster transparent and reproducible evaluation practices. We hope to collaborate with the open-source community to share and expand current measurements for new capabilities and models.
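To make the pipeline idea concrete, below is a minimal sketch of the kind of composition such a framework enables: a base pipeline chains data processing, inference, and evaluation, and a new benchmark inherits it and overrides only what changes. The class and method names are illustrative placeholders, not Eureka's actual API.

```python
# Illustrative sketch only: class and method names are placeholders,
# not the actual Eureka API. It shows a pipeline that chains data
# processing, inference, and evaluation, and how a new benchmark can
# inherit most of an existing pipeline.

from abc import ABC, abstractmethod
from typing import Callable, Dict, List


class EvalPipeline(ABC):
    @abstractmethod
    def load_data(self) -> List[dict]: ...                                # data processing stage

    @abstractmethod
    def run_inference(self, example: dict) -> str: ...                    # model call per example

    @abstractmethod
    def score(self, example: dict, answer: str) -> Dict[str, float]: ...  # evaluation stage

    def run(self) -> List[Dict[str, float]]:
        # Orchestrate the three stages and keep per-example metrics so
        # results can be disaggregated instead of reduced to one score.
        return [self.score(ex, self.run_inference(ex)) for ex in self.load_data()]


class ExactMatchQAPipeline(EvalPipeline):
    """A hypothetical benchmark that only customizes data loading and scoring."""

    def __init__(self, examples: List[dict], model_call: Callable[[str], str]):
        self.examples = examples
        self.model_call = model_call  # e.g., a thin wrapper around an LLM endpoint

    def load_data(self) -> List[dict]:
        return self.examples

    def run_inference(self, example: dict) -> str:
        return self.model_call(example["prompt"])

    def score(self, example: dict, answer: str) -> Dict[str, float]:
        return {"exact_match": float(answer.strip() == example["target"])}
```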

Focus on challenging and non-saturated capabilities

Eureka tests models across a rich collection of fundamental language and multimodal capabilities that are challenging even for the most advanced models but are often overlooked by the standard benchmarks commonly reported in model releases. In practice, this also means that our analysis intentionally does not pivot on oversaturated benchmarks. As unconventional as this may sound, it is motivated by two reasons. First, measurement on saturated benchmarks, on which most models score over 95%, leaves very little room for failure analysis and model comparison. Second, even though saturation may be rooted in genuine model improvements, concerns about memorization and overfitting to labeling errors lower the credibility of measurements, especially in the very high accuracy regime.

\n\t\t\n\n\t\t

\n\t\tSpotlight: Microsoft research newsletter<\/span>\n\t<\/p>\n\t\n\t

\n\t\t\t\t\t\t
\n\t\t\t\t\n\t\t\t\t\t\"\"\n\t\t\t\t<\/a>\n\t\t\t<\/div>\n\t\t\t\n\t\t\t
\n\n\t\t\t\t\t\t\t\t\t

Microsoft Research Newsletter<\/h2>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

Stay connected to the research community at Microsoft.<\/p>\n\t\t\t\t\n\t\t\t\t\t\t\t\t

\n\t\t\t\t\t
\n\t\t\t\t\t\t\n\t\t\t\t\t\t\tSubscribe today\t\t\t\t\t\t<\/a>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t<\/div>\n\t<\/div>\n\t\n\n\n

Beyond single-score measurements and universal rankings

Even though rankings and leaderboards remain the quickest way to compare models, they rarely uncover important conditions of failure. Because of overreliance on single-score aggregations of performance, the more nuanced comparative findings stay hidden behind small differences between model scores aggregated across many capabilities and experimental conditions.

As we show in our study, the chase after these rankings has created surprising dynamics that do not necessarily lead to identical models, but rather to models that use different complementary skills to achieve comparable overall scores on important leaderboards. Imagine you are a triathlon athlete aiming for an elite performance, which historically takes around two hours. Despite your ambition to hit this top-tier mark, you face constraints of limited time and resources for training and preparation. In practice, athletes often focus their best resources on excelling in certain disciplines while aiming for a satisfactory performance in others. They prioritize based on what they believe is most achievable given their time and experience.

We observe similar phenomena in the set of 12 models we study. Even when two models score very closely on the same capability, disaggregating that performance across disciplines and input conditions shows that each model has its own complementary strengths. Identifying, measuring, and understanding these strengths for a single model is needed for planning targeted improvements. Repeating this process for a large set of models, as we do in Eureka, is needed for identifying the hypothetical frontier, guiding research and development, and creating a model that combines and delivers capabilities that build on the strengths observed in existing models.

Measuring consistency: non-determinism and backward compatibility

When people work with collaborators or choose tools to assist them in everyday tasks, predictability and consistency are key to a successful collaboration. Similarly, humans and application developers expect their AI assistants and models to be consistent over time for similar inputs and interactions. In our analysis, we study this under-explored angle of model performance by focusing on two key aspects: the determinism of answer outcomes for identical examples and prompts, and the backward compatibility of model answers at the example level after a model has been updated with a new version. A lack of consistency in either of these areas would break trust with users and application developers.

The analysis shows surprising results and opens new considerations for improvement. For example, we observe that very few large foundation models are fully deterministic; for most of them there are visible variations in the output, and most importantly in accuracy, when they are asked the same question several times with generation temperature set to zero (a control that tells models to minimize randomness in generations). In addition, when comparing new model releases with earlier models from the same family, a significant amount of regression at the example level can be observed after the update, even though the overall accuracy may increase. In practice, this type of inconsistency can be frustrating for application developers who rely on prewritten examples and prompts propagated to a foundation model.
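As a concrete illustration of the determinism measurement described above, the sketch below repeats identical queries at temperature zero and reports how often the extracted answers disagree, along with the accuracy spread across runs. The helpers `call_model` and `extract_answer` are hypothetical stand-ins for an LLM call and an answer parser; this is not Eureka's exact implementation.

```python
# A minimal sketch of a determinism check, not Eureka's exact implementation.
# `call_model` and `extract_answer` are hypothetical stand-ins for an LLM
# call and an answer parser.

from typing import Callable, List


def determinism_report(
    prompts: List[str],
    targets: List[str],
    call_model: Callable[[str, float], str],   # (prompt, temperature) -> raw output
    extract_answer: Callable[[str], str],      # raw output -> normalized answer
    n_runs: int = 3,
) -> dict:
    per_run_correct = [0] * n_runs
    unstable_examples = 0

    for prompt, target in zip(prompts, targets):
        answers = [extract_answer(call_model(prompt, 0.0)) for _ in range(n_runs)]
        # An example is "unstable" if identical queries at temperature zero
        # do not all yield the same extracted answer.
        if len(set(answers)) > 1:
            unstable_examples += 1
        for run_idx, answer in enumerate(answers):
            per_run_correct[run_idx] += int(answer == target)

    n = len(prompts)
    per_run_accuracy = [c / n for c in per_run_correct]
    return {
        "fraction_unstable": unstable_examples / n,
        "per_run_accuracy": per_run_accuracy,
        "accuracy_spread": max(per_run_accuracy) - min(per_run_accuracy),
    }
```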

Eureka Insights

Figure 1 is a high-level illustration of the current state of AI according to Eureka-Bench, highlighting the best and the worst performances across various capabilities. These results reveal a nuanced picture of different models' strengths, showing that no single model excels in all tasks. However, Claude 3.5 Sonnet, GPT-4o 2024-05-13, and Llama 3.1 405B consistently outperform others in several key areas.

\"A
Figure 1<\/em> – Performance of best and worse models for multimodal (left) and language (right) datasets in in Eureka-Bench. The red<\/span> frontier shows the performance of the worse model, indicating the area that is already solved for the set of capabilities. The green<\/span> frontier shows the performance of the best model, indicating the best-known result with current technology. The blue<\/span> horizon between the best model and the maximum performance shows the room for improvement for mastering the capability. The best performance sets indicated in the green border include all models that perform within 2% of the best observed result. <\/em><\/figcaption><\/figure>\n\n\n\n

Multimodal capabilities

Evaluation in Eureka reveals that state-of-the-art models are still fairly limited in their multimodal abilities, specifically when it comes to detailed image understanding (for example, localization of objects, geometric and spatial reasoning, and navigation), which is most needed in truly multimodal scenarios that require physical awareness, visual grounding, and localization.

    \n
1. State-of-the-art multimodal models struggle with geometric reasoning. Models perform worse in reasoning about height than about depth. Claude 3.5 Sonnet and Gemini 1.5 Pro are the best-performing models for this task, with Claude 3.5 Sonnet being the most accurate model for depth ordering and Gemini 1.5 Pro the most accurate for height ordering.

2. Multimodal capabilities lag behind language capabilities. On tasks that can be described either as multimodal or as language-only, the performance of most tested models is higher for the language-only condition. GPT-4o 2024-05-13 is the only model that consistently achieves better results when presented with both vision and language information, showing that it can better fuse the two data modalities.

3. Complementary performance across models for fundamental multimodal skills. Claude 3.5 Sonnet, GPT-4o 2024-05-13, and GPT-4 Turbo 2024-04-09 have comparable performance in multimodal question answering (MMMU). In tasks like object recognition and visual prompting, the performance of Claude 3.5 Sonnet is better than or comparable to GPT-4o 2024-05-13, but Gemini 1.5 Pro outperforms them both. Finally, in tasks like object detection and spatial reasoning, GPT-4o 2024-05-13 is the most accurate model.

Language

The evaluation through Eureka shows that there have been important advances from state-of-the-art models in the language capabilities of instruction following, long-context question answering, information retrieval, and safety. The analysis also reveals major differences and gaps between models related to robustness to context length, factuality and grounding for information retrieval, and refusal behavior.

      \n
1. Faster improvements in instruction following across all model families. Instruction following is the ability to follow guidance expressed in user prompts regarding specifications related to the format, style, and structure of the generated content. Among the studied language capabilities, instruction following is where most models are improving fastest, potentially due to strong investments in instruction tuning processes, with most models now having an instruction following rate higher than 75%.

2. All models' performance in question answering drops with longer context. Contrary to "needle-in-a-haystack" experiments, testing state-of-the-art models on tasks that involve reasoning over long context shows a significant decline in performance as context size grows. Among all models, GPT-4o 2024-05-13 and Llama 3.1 405B have the smallest drop in performance for longer context.

3. Major gaps in factuality and grounding for information retrieval from parametric knowledge or input context. Models exhibit query fact precision rates lower than 55%, fact recall rates lower than 25%, and rates of irrelevant and fabricated information above 20% (a sketch of how such rates can be computed follows this list). Llama 3.1 405B, GPT-4o 2024-05-13, and Claude 3.5 Sonnet are the top performers in this area across different conditions.

4. High refusal rates, and lower accuracy in detecting toxic content vs. neutral content for most models. While several models have high accuracy rates for toxicity detection, others (Gemini 1.5 Pro, Claude 3.5 Sonnet, Claude 3 Opus, and Llama 3.1 405B) exhibit low accuracy in classifying toxic content and a high refusal rate to classify toxic or neutral content, both of which make toxic content difficult to detect. During the safe language generation evaluation, models like GPT-4 1106 Preview and Mistral Large 2407 have the highest toxicity rates. GPT-4o 2024-05-13 is the only model that has both a high toxicity detection accuracy and a low toxicity score for safe language generation.

Non-determinism

Several models have highly non-deterministic output for identical runs. Gemini 1.5 Pro, GPT-4 1106 Preview, GPT-4 Vision Preview, and GPT-4 Turbo 2024-04-09 show high non-determinism of outcomes. These results raise important questions regarding the stability of user and developer experiences when repeatedly inferencing with identical queries using the same prompt templates. Llama 3 70B, Llama 3.1 70B, and Mistral Large 2407 are almost perfectly deterministic.

Backward compatibility

Backward incompatibility for shifts within the same model family is prevalent across all state-of-the-art models. This is reflected in high regression rates for individual examples and at a subcategory level. This type of regression can break trust with users and application developers during model updates. Regression varies per task and metric, but we observe several cases where it is higher than 10% across three model families (Claude, GPT, Llama), and it can sometimes dominate progress rates for whole subcategories of data.
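The regression and progress rates discussed here can be illustrated with the short sketch below, which compares example-level correctness before and after a model update. The function name and inputs are hypothetical, not Eureka's exact implementation.

```python
# A minimal sketch of example-level backward compatibility between two
# versions of a model. Naming is illustrative, not Eureka's exact code.

from typing import List


def compatibility_report(old_correct: List[bool], new_correct: List[bool]) -> dict:
    assert len(old_correct) == len(new_correct)
    total = len(old_correct)
    # Regression: the old version answered correctly but the update does not.
    regressions = sum(old and not new for old, new in zip(old_correct, new_correct))
    # Progress: the update fixes an example the old version got wrong.
    progress = sum(new and not old for old, new in zip(old_correct, new_correct))
    return {
        "old_accuracy": sum(old_correct) / total,
        "new_accuracy": sum(new_correct) / total,
        "regression_rate": regressions / total,
        "progress_rate": progress / total,
    }


# Overall accuracy can rise (0.50 -> 0.75) while 25% of examples regress,
# which is exactly what breaks prompts that developers have already tuned.
print(compatibility_report(
    old_correct=[True, True, False, False],
    new_correct=[False, True, True, True],
))
```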

Conclusion

The complementary results extracted from this study highlight opportunities for improving current models across various areas, aiming to match the performance of the best model for each individual capability in this challenge set. However, several tasks in the challenge set remain difficult even for the most capable models. It is crucial to discuss and explore whether these gaps can be addressed with current technologies, architectures, and data synthesis protocols.

Finally, Eureka and the set of associated benchmarks are only the initial snapshot of an effort that aims at reliably measuring progress in AI. Our team is excited about further collaborations with the open-source and research communities, with the goal of sharing and extending current measurements for new capabilities and models.
