Downloads
OmniParser
October 2024
OmniParser is a comprehensive method for parsing user interface screenshots into structured and easy-to-understand elements, which significantly enhances the ability of GPT-4V to generate actions that can be accurately grounded in the corresponding regions of the interface.
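As a rough illustration of what "structured elements" means here, the sketch below shows one plausible representation: each detected region becomes a typed element with an ID, text, and bounding box, serialized so a vision-language model can refer to elements by ID rather than raw pixels. The element records and field names are made up for illustration and are not the OmniParser output schema.

```python
from dataclasses import dataclass

@dataclass
class UIElement:
    element_id: int
    kind: str                           # e.g. "button", "icon", "text"
    text: str                           # OCR text or a short description
    bbox: tuple[int, int, int, int]     # (x0, y0, x1, y1) in pixels

def to_prompt(elements: list[UIElement]) -> str:
    """Serialize parsed elements so a vision-language model can act on
    element IDs ("click [0]") instead of guessing raw coordinates."""
    return "\n".join(
        f"[{e.element_id}] {e.kind}: {e.text!r} at {e.bbox}" for e in elements
    )

# Made-up elements standing in for the output of a screenshot parser.
elements = [
    UIElement(0, "button", "Sign in", (842, 40, 920, 72)),
    UIElement(1, "text", "Search or type a URL", (210, 38, 700, 70)),
    UIElement(2, "icon", "settings gear", (930, 40, 962, 72)),
]
print(to_prompt(elements))
```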
Trace
July 2024
Trace is a new AutoDiff-like tool for training AI systems end-to-end with general feedback (like numerical rewards or losses, natural language text, compiler errors, etc.). Trace generalizes the back-propagation algorithm by capturing and propagating an AI system’s execution trace. Trace…
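The following is a minimal conceptual sketch of that idea, not the Trace library's API: a tiny recorded computation graph through which textual feedback, rather than gradients, is pushed backwards along the execution trace.

```python
class Node:
    """A value in the recorded execution trace."""
    def __init__(self, value, op="input", parents=()):
        self.value, self.op, self.parents = value, op, list(parents)
        self.feedback = []

def add(a, b):
    return Node(a.value + b.value, op="add", parents=[a, b])

def scale(a, k):
    return Node(a.value * k, op=f"scale({k})", parents=[a])

def backpropagate(output, feedback):
    """Propagate general (here: textual) feedback from the output back
    through the recorded trace, analogous to reverse-mode autodiff."""
    stack = [(output, feedback)]
    while stack:
        node, fb = stack.pop()
        node.feedback.append(fb)
        for parent in node.parents:
            stack.append((parent, f"{fb} <- via {node.op}"))

# Record a small trace, then push natural-language feedback through it.
x = Node(2.0)
y = scale(add(x, Node(3.0)), k=10)
backpropagate(y, "output too large; reduce it")
print(x.feedback)  # ['output too large; reduce it <- via scale(10) <- via add']
```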
Phi-3
April 2024
Phi-3-Mini-128K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties. The model belongs to…
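A typical way to try the released checkpoint is via Hugging Face transformers; the model ID and the need for trust_remote_code below are assumptions based on the public release, so check the model card for the exact loading recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"  # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # the 3.8B model fits on a single recent GPU
    device_map="auto",
    trust_remote_code=True,       # assumed: the repo may ship custom modeling code
)

messages = [{"role": "user", "content": "Summarize what a 128K context window allows."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```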
KITAB Dataset
February 2024
🕮 KITAB is a challenging dataset and a dynamic data-collection approach for testing the abilities of Large Language Models (LLMs) in answering information retrieval queries with constraint filters. A filtering query with constraints can be of the form “List all books…
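To make the constraint-filter idea concrete, here is a small self-contained sketch (with made-up book records, not KITAB data) of checking a model's answer set against a publication-year constraint:

```python
# Illustrative only: these records are invented, not drawn from KITAB.
ground_truth = [
    {"title": "Book A", "year": 1998},
    {"title": "Book B", "year": 2005},
    {"title": "Book C", "year": 2012},
]

def satisfies(book, year_range):
    """A constraint filter of the kind such queries use,
    e.g. 'published between 2000 and 2010'."""
    lo, hi = year_range
    return lo <= book["year"] <= hi

constraint = (2000, 2010)
expected = {b["title"] for b in ground_truth if satisfies(b, constraint)}

model_answer = {"Book B", "Book D"}                 # pretend LLM output
correct      = model_answer & expected               # satisfies the constraint
hallucinated = model_answer - {b["title"] for b in ground_truth}
missed       = expected - model_answer
print(correct, hallucinated, missed)                 # {'Book B'} {'Book D'} set()
```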
HoloAssist
January 2024
A large-scale egocentric human interaction dataset, where two people collaboratively complete physical manipulation tasks.
Orca-2-13B
January 2024
Orca 2 is a fine-tuned version of LLAMA-2. It is built for research purposes only and provides a single-turn response in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model…
Orca-2-7B
January 2024
Orca 2 is a fine-tuned version of LLAMA-2. It is built for research purposes only and provides a single-turn response in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model…
LLF-Bench
January 2024
LLF-Bench is a benchmark for evaluating learning agents. It provides a diverse collection of interactive learning problems in which the agent receives language feedback instead of rewards (as in RL) or action feedback (as in imitation learning).
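As a sketch of the interaction pattern this implies (not the LLF-Bench package API), the toy environment below returns a natural-language feedback string from step instead of a numeric reward, and a simple agent adapts by reading that text:

```python
class ToyLanguageFeedbackEnv:
    """A toy stand-in for a language-feedback environment: `step` returns a
    natural-language feedback string instead of a numeric reward."""
    def __init__(self, target=7):
        self.target = target

    def reset(self):
        return "Guess the hidden integer between 0 and 10."

    def step(self, action: int):
        done = action == self.target
        if done:
            feedback = "Correct, that's the number."
        elif action < self.target:
            feedback = "Too low; try a larger number."
        else:
            feedback = "Too high; try a smaller number."
        return feedback, done

env = ToyLanguageFeedbackEnv()
instruction = env.reset()
low, high = 0, 10
for _ in range(10):
    guess = (low + high) // 2           # a simple agent that reads the feedback
    feedback, done = env.step(guess)
    if done:
        break
    if "low" in feedback:
        low = guess + 1
    else:
        high = guess - 1
```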
Phi-2
December 2023
Phi-2 is a language model with 2.7 billion parameters. It was trained using the same data sources as Phi-1, augmented with a new data source that consists of various synthetic NLP texts and filtered websites (for safety…