Downloads
Orca-2-13B
January 2024
Orca 2 is a fine-tuned version of LLAMA-2. It is built for research purposes only and provides a single-turn response in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model…
Orca-2-7B
January 2024
Orca 2 is a fine-tuned version of LLAMA-2. It is built for research purposes only and provides a single-turn response in tasks such as reasoning over user-given data, reading comprehension, math problem solving, and text summarization. The model…
WALNUT
June 2022
This repository contains the baseline code for the paper published at NAACL 2022: “WALNUT: A Benchmark on Weakly Supervised Learning for Natural Language Understanding”. A detailed description of the datasets and methods can be found in the manuscript.
KID: Knowledge Infused Decoding
March 2022
Knowledge Infused Decoding (KID) is a decoding algorithm that infuses knowledge (from Wikipedia) into each decoding step of text generation.
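Purely as an illustration of the general idea of biasing each decoding step with retrieved knowledge, the toy sketch below boosts tokens that appear in a retrieved passage before picking the next token. The vocabulary, dummy language model, retrieval function, and boost weight are placeholder assumptions; this is not KID's actual algorithm or API.

```python
import numpy as np

# Toy vocabulary and a fixed random generator for the dummy language model.
VOCAB = ["the", "eiffel", "tower", "is", "in", "paris", "london", "<eos>"]
TOK2ID = {t: i for i, t in enumerate(VOCAB)}
rng = np.random.default_rng(0)

def retrieve_knowledge(prefix):
    # Placeholder for retrieving a Wikipedia passage conditioned on the prefix.
    return "the eiffel tower is in paris"

def dummy_lm_logits(prefix):
    # Placeholder language model: small random logits over the toy vocabulary.
    return rng.normal(scale=0.1, size=len(VOCAB))

def knowledge_infused_decode(max_steps=8, alpha=2.0):
    prefix = []
    for _ in range(max_steps):
        logits = dummy_lm_logits(prefix)
        # Boost tokens that appear in the retrieved knowledge text.
        for tok in set(retrieve_knowledge(prefix).split()):
            if tok in TOK2ID:
                logits[TOK2ID[tok]] += alpha
        # Penalize the previous token so the toy decoder does not just repeat it.
        if prefix:
            logits[TOK2ID[prefix[-1]]] -= alpha
        next_tok = VOCAB[int(np.argmax(logits))]
        if next_tok == "<eos>":
            break
        prefix.append(next_tok)
    return " ".join(prefix)

print(knowledge_infused_decode())
```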
LiST (Lite Self-Training)
October 2021
We present LiST, a new method for efficient fine-tuning of large pre-trained language models (PLMs) in few-shot learning settings. LiST significantly improves over recent methods that adopt prompt fine-tuning, using two key techniques. The first one is the use of…
Meta Self-training for Few-shot Neural Sequence Labeling [Code]
October 2021
This is the implementation of the paper Meta Self-training for Few-shot Neural Sequence Labeling. MetaST is short for meta-learning for self-training.
Multi-source Weak Social Supervision for Fake News Detection (MWSS)
May 2021
This repository contains code for fake news detection with Multi-source Weak Social Supervision (MWSS), published at ECML-PKDD 2020. Social media has greatly enabled people to participate in online activities at an unprecedented rate. However, this unrestricted access also exacerbates the…
Meta Representation Transformation for Low-resource Cross-Lingual Learning [Code]
May 2021
This is the source code release for research published at NAACL 2021. Paper title: MetaXL: Meta Representation Transformation for Low-resource Cross-Lingual Learning. Paper abstract: The combination of multilingual pre-trained representations and cross-lingual transfer learning is one of the most…
Self-training with Weak Supervision [Code]
April 2021
State-of-the-art deep neural networks require large-scale labeled training data that is often either expensive to obtain or not available for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings to…
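As a minimal sketch of this setting only (not the released code or the paper's method), the example below combines toy keyword rules as weak supervision with one round of self-training on unlabeled text; the rules, data, and confidence threshold are made-up assumptions for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy unlabeled corpus for a sentiment-style task (illustrative only).
unlabeled = [
    "great movie, really enjoyed it",
    "terrible plot and awful acting",
    "enjoyed the soundtrack a lot",
    "awful pacing from start to finish",
    "the plot dragged on forever",
]

def weak_label(text):
    # Domain-specific keyword rules standing in for weak supervision sources.
    if any(w in text for w in ("great", "enjoyed")):
        return 1
    if any(w in text for w in ("terrible", "awful")):
        return 0
    return None  # rules abstain

# 1) Apply the rules to obtain an initial weakly labeled training set.
texts = [t for t in unlabeled if weak_label(t) is not None]
labels = [weak_label(t) for t in texts]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# 2) Self-training: pseudo-label the examples the rules abstained on,
#    keep only confident predictions, and retrain the model.
rest = [t for t in unlabeled if weak_label(t) is None]
if rest:
    probs = clf.predict_proba(vec.transform(rest))
    for text, p in zip(rest, probs):
        if p.max() > 0.7:  # arbitrary confidence threshold
            texts.append(text)
            labels.append(int(p.argmax()))
    clf = LogisticRegression().fit(vec.fit_transform(texts), labels)
```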