Dataset columns (type and observed value range, as reported by the dataset viewer):

- model_name_string: string, 4–14 chars
- model_name_url: string, 41–75 chars
- model_size_string: string, 2–13 chars
- dataset: string, 13–214 chars
- data_type: string, 5 classes
- research_field: string, 6 classes
- risks_and_limitations: string, 2 classes
- risk_types: string, 39–153 chars
- publication_date: string, 8–10 chars
- organization_and_url: string, 9 classes
- institution_type: float64
- country: string, 2 classes
- license: string, 9 classes
- paper_name_url: string, 51–156 chars
- model_description: string, 628–1.72k chars
- organization_info: string, 204–871 chars
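A minimal sketch of how one row under this schema can be parsed into typed Python values. The helper function is illustrative, and the sample values are abridged from the AudioLM entry below:

```python
from datetime import datetime

# An abridged row of the catalog, keyed by the column names above.
row = {
    "model_name_string": "AudioLM",
    "risks_and_limitations": "Yes",
    "risk_types": "Disinformation, Algorithmic Discrimination, Social Engineering",
    "publication_date": "7/26/2023",
}

def parse_row(row):
    """Turn the flat string fields into typed Python values."""
    return {
        "name": row["model_name_string"],
        "has_risk_section": row["risks_and_limitations"] == "Yes",
        # risk_types is a comma-separated list in this dataset
        "risks": [r.strip() for r in row["risk_types"].split(",")],
        # publication_date uses the M/D/YYYY convention seen in the rows
        "published": datetime.strptime(row["publication_date"], "%m/%d/%Y").date(),
    }

parsed = parse_row(row)
print(parsed["published"].year)  # 2023
print(len(parsed["risks"]))      # 3
```

The date format string assumes every row follows the M/D/YYYY convention visible in the entries; the 8–10 character range of `publication_date` is consistent with that.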
AudioLM
- Model: [AudioLM](https://arxiv.org/abs/2209.03143) 📢
- Size: Not specified
- Training data: The train split of [unlab-60k](https://github.com/facebookresearch/libri-light), consisting of 60k hours of English speech
- Data type: Audio
- Research field: Natural Language Processing, Speech Recognition
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts, Technological Unemployment
- Publication date: 7/26/2023
- Organization: [Google Research](https://research.google/) (Subsidiary)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [AudioLM: a Language Modeling Approach to Audio Generation](https://arxiv.org/abs/2209.03143)
- Description: [AudioLM](https://arxiv.org/abs/2209.03143) is a framework for high-quality audio generation with long-term consistency that maps input audio to a sequence of discrete tokens and casts audio generation as a language modeling task. According to Google Research, models trained via this framework learn to generate...
- Organization info: [Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google R...
BlenderBot 3
- Model: [BlenderBot 3](https://arxiv.org/abs/2208.03188) 📚
- Size: 175B
- Training data: Approximately 1.3B training tokens. A complete list of training datasets can be found in this [data card](https://github.com/facebookresearch/ParlAI/blob/main/parlai/zoo/bb3/data_card.md)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 8/5/2022
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: BB3-175B License
- Paper: [BlenderBot 3: a deployed conversational agent that continually learns to responsibly engage](https://arxiv.org/abs/2208.03188)
- Description: [BlenderBot 3](https://about.fb.com/news/2022/08/blenderbot-ai-chatbot-improves-through-conversation/) is a 175B parameter dialogue model capable of open-domain conversation with access to the internet and long-term memory. The model was developed using [OPT-175B](https://arxiv.org/abs/2205.01068) as its foundation, be...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013,...
BLOOM
- Model: [BLOOM](https://huggingface.co/bigscience/bloom) 📚
- Size: 176B
- Training data: [ROOTS corpus](https://arxiv.org/abs/2303.03915), a dataset comprising hundreds of sources in 46 natural and 13 programming languages (366B tokens)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 11/9/2022
- Organization: [BigScience](https://bigscience.huggingface.co/) (Academic/Research Institution)
- Institution type: null
- Country: France
- License: BigScience RAIL License
- Paper: [BLOOM: A 176B-Parameter Open-Access Multilingual Language Model](https://arxiv.org/abs/2211.05100)
- Description: BigScience Large Open-science Open-access Multilingual Language Model ([BLOOM](https://huggingface.co/bigscience/bloom)) is a transformer-based large language model created by over 1,000 AI researchers. BLOOM was trained on around 366 billion tokens from March through July 2022, and it was one of the first open alterna...
- Organization info: [BigScience](https://bigscience.huggingface.co/) is an open and collaborative workshop around studying and creating very large language models, gathering more than 1000 researchers worldwide. According to their [homepage](https://bigscience.notion.site/Introduction-5facbf41a16848d198bda853485e23a0), the BigScience pro...
ChatGPT
- Model: [ChatGPT](https://openai.com/blog/chatgpt/) 📚
- Size: 175B
- Training data: Improved version of GPT-3 dataset + human demonstrations/evaluations
- Data type: Text
- Research field: Reinforcement Learning, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
- Publication date: 11/30/2022
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Introducing ChatGPT: OpenAI's blog](https://openai.com/blog/chatgpt)
- Description: [ChatGPT](https://openai.com/blog/chatgpt/) is a large language model based on an improved version of GPT-3, developed in a similar way to [InstructGPT](https://arxiv.org/abs/2203.02155), which is trained to follow an instruction in a prompt. ChatGPT was trained by OpenAI with reinforcement learning from human feedbac...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
CICERO
- Model: [CICERO](https://ai.meta.com/research/cicero/) 📚🕹️
- Size: 2.7B
- Training data: A dataset of almost 13M messages from online Diplomacy games
- Data type: Text
- Research field: Reinforcement Learning, Natural Language Processing
- Risks and limitations documented: No
- Risk types: Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 11/22/2022
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: CC BY-NC-SA 4.0
- Paper: [Human-level play in the game of Diplomacy by combining language models with strategic reasoning](https://www.science.org/doi/10.1126/science.ade9097)
- Description: [Cicero](https://github.com/facebookresearch/diplomacy_cicero) is an AI agent that can achieve human-level performance in [Diplomacy](https://en.wikipedia.org/wiki/Diplomacy_(game)), a strategy game involving both cooperation and competition that emphasizes natural language negotiation and tactical coordination between...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
CLIP
- Model: [CLIP](https://openai.com/blog/clip/) 📚🖼️
- Size: Not specified
- Training data: Trained on publicly available image-caption data
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Algorithmic Discrimination, Surveillance and Social Control, Environmental Impacts
- Publication date: 2/26/2021
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: MIT License
- Paper: [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020)
- Description: [CLIP](https://openai.com/blog/clip/) (Contrastive Language-Image Pre-training) is a neural network capable of associating natural language snippets with images, learning the relationship between sequences of tokens and images. According to its [model card](https://github.com/openai/CLIP/blob/main/model-card.md#model-t...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
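The "associating natural language snippets with images" that the CLIP entry describes reduces, at zero-shot inference time, to picking the caption whose embedding is closest to the image embedding. A toy sketch with made-up 3-d vectors (real CLIP encoders project to a shared space, e.g. 512-d for the ViT-B/32 variant):

```python
import math

# Toy embeddings standing in for CLIP's image and text encoders.
# These 3-d vectors are illustrative only, not real CLIP outputs.
image_embeds = {"photo_of_dog": [0.9, 0.1, 0.0], "photo_of_cat": [0.1, 0.9, 0.1]}
text_embeds = {"a dog": [1.0, 0.0, 0.0], "a cat": [0.0, 1.0, 0.0]}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_caption(image_vec):
    """Pick the caption most similar to the image, as in zero-shot classification."""
    return max(text_embeds, key=lambda t: cosine(image_vec, text_embeds[t]))

print(best_caption(image_embeds["photo_of_dog"]))  # a dog
```

During training, CLIP pushes matching image–caption pairs toward high similarity and mismatched pairs toward low similarity; this sketch shows only the inference-time comparison.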
Code Llama
- Model: [Code Llama](https://github.com/facebookresearch/codellama) 📚
- Size: 34B
- Training data: 500B tokens of publicly available code plus a small portion of natural language datasets related to code
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
- Publication date: 8/24/2023
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: LLaMA 2 Community License Agreement
- Paper: [Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)
- Description: [Code Llama](https://github.com/facebookresearch/codellama) is a family of large language models for code based on Llama 2, trained in sizes ranging from 7B to 34B parameters, with zero-shot instruction-following ability for programming tasks. Code Llama was developed by fine-tuning Llama 2 using 500B tokens of publicly avai...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
Codex
- Model: [Codex](https://openai.com/blog/openai-codex/) 📚
- Size: 12B
- Training data: 159GB from public software repositories hosted on GitHub
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
- Publication date: 7/7/2021
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
- Description: [Codex](https://openai.com/blog/openai-codex/) is a GPT model fine-tuned on code containing up to 12B parameters, capable of translating natural language sentences into programmatic code (e.g., Python). In an initial investigation of the GPT-3 model, it turned out that it could generate simple programs from docstrings....
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
DALL-E 2
- Model: [DALL-E 2](https://openai.com/dall-e-2/) 📚🖼️
- Size: 3.5B
- Training data: Encoder dataset: DALL-E and CLIP dataset (approximately 650M images). Decoder dataset: DALL-E dataset (approximately 250M images)
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 4/13/2022
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125)
- Description: [DALL-E 2](https://openai.com/dall-e-2) is a text-to-image model composed of two main parts: an encoder that generates a CLIP image embedding given a text caption and a decoder that generates an image conditioned on the image embedding. The result of this combination is DALL-E 2, a multimodal model that can generate ph...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
DALL-E 3
- Model: [DALL-E 3](https://openai.com/dall-e-3/) 📚🖼️
- Size: Not specified
- Training data: Not specified
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: No
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 9/20/2023
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [DALL-E 3: OpenAI's Blog](https://openai.com/dall-e-3)
- Description: [DALL-E 3](https://openai.com/dall-e-3) is the successor of DALL-E 2, which, according to OpenAI's release, can understand significantly more nuance and detail in instructions than previous versions, reducing the need for prompt engineering. Not much is known about the architecture, training protocol, or da...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
DALL-E
- Model: [DALL-E](https://openai.com/blog/dall-e/) 📚🖼️
- Size: 12B
- Training data: 250 million text-image pairs from Wikipedia, and a filtered subset from YFCC100M
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: No
- Risk types: Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 2/24/2021
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Zero-Shot Text-to-Image Generation](https://arxiv.org/abs/2102.12092)
- Description: [DALL-E](https://openai.com/blog/dall-e/) is a text-to-image model composed of two main components. The first one is a discrete variational autoencoder (dVAE) that compresses 256×256 RGB images into a 32×32 grid of image tokens with a vocabulary of size 8192. The second component is an autoregressive transformer tha...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
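The numbers in the DALL-E entry (a 256×256 image compressed to a 32×32 grid over an 8192-entry vocabulary) fix the image-token budget the transformer must model. A quick sanity check of what they imply:

```python
# Numbers from the DALL-E entry above: the dVAE maps a 256x256 RGB image
# to a 32x32 grid of discrete tokens drawn from a vocabulary of 8192.
grid_side = 32
vocab_size = 8192

image_tokens = grid_side * grid_side         # discrete tokens per image
compression = (256 * 256) / image_tokens     # pixels represented per token
bits_per_token = vocab_size.bit_length() - 1 # log2(8192), bits of information per token

print(image_tokens)      # 1024
print(int(compression))  # 64
print(bits_per_token)    # 13
```

So the autoregressive transformer models a sequence of 1024 image tokens (plus the text tokens preceding them), each token summarizing a 64-pixel patch at 13 bits.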
ESM-2
- Model: [ESM-2](https://www.science.org/doi/abs/10.1126/science.ade2574) 🧬
- Size: 15B
- Training data: Uniref50 (UR50) dataset: a biological dataset taken from the [Uniprot database](https://www.uniprot.org/)
- Data type: Biological Data
- Research field: Pattern Recognition, Forecasting
- Risks and limitations documented: No
- Risk types: Biological Risks, Environmental Impacts
- Publication date: 3/16/2023
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: MIT License
- Paper: [Evolutionary-scale prediction of atomic level protein structure with a language model](https://www.biorxiv.org/content/10.1101/2022.07.20.500902v3)
- Description: [ESM-2](https://github.com/facebookresearch/esm) is a state-of-the-art general-purpose protein language model that can be used to predict structure, function, and other protein properties directly from individual sequences of amino acids. The ESM-2 series was trained in sizes ranging from 8M to 15B parameters, being the continuation of ...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
Galactica
- Model: [Galactica](https://galactica.org/explore/) 📚
- Size: 120B
- Training data: 106 billion tokens of articles, reference materials, encyclopedias, and other scientific sources
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Intellectual Fraud
- Publication date: 11/16/2022
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: CC BY-NC-SA 4.0
- Paper: [Galactica: A Large Language Model for Science](https://arxiv.org/abs/2211.09085)
- Description: [Galactica](https://github.com/paperswithcode/galai) is a large language model that can store, combine, and reason about scientific knowledge. Galactica was trained in five sizes (from 125M to 120B), using a large scientific corpus of papers, reference material, knowledge bases, and many other sources. It can perform s...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
GitHub Copilot
- Model: [GitHub Copilot](https://github.com/features/copilot) 📚
- Size: Not specified
- Training data: Not specified
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: No
- Risk types: Algorithmic Discrimination, Malware Development, Environmental Impacts, Technological Unemployment
- Publication date: 10/29/2021
- Organization: [GitHub](https://github.com/) (Subsidiary)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Your AI pair programmer](https://github.com/features/copilot)
- Description: [GitHub Copilot](https://github.com/features/copilot) is a cloud-based artificial intelligence tool developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. GitHub Copilot is powered by the OpenAI Codex...
- Organization info: [GitHub](https://github.com/) is a platform for hosting source code and other files with version control using [Git](https://git-scm.com/). GitHub was created by Chris Wanstrath, J. Hyett, Tom Preston-Werner, and Scott Chacon in 2008. Currently (2023), the company is a subsidiary of Microsoft, which bought the platform...
GLIDE
- Model: [GLIDE](https://gpt3demo.com/apps/openai-glide) 📚🖼️
- Size: 3.5B
- Training data: Not specified
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 12/20/2021
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: MIT License*
- Paper: [GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models](https://arxiv.org/abs/2112.10741v1)
- Description: [GLIDE](https://gpt3demo.com/apps/openai-glide) (Guided Language to Image Diffusion for Generation and Editing) is a [diffusion model](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/) that generates images through natural language. GLIDE also allows edits to be made to existing images using natural l...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
GPT-2
- Model: [GPT-2](https://github.com/openai/gpt-2) 📚
- Size: 1.5B
- Training data: [WebText](https://github.com/openai/gpt-2/blob/master/domains.txt)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 2/24/2019
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: MIT License
- Paper: [Language Models are Unsupervised Multitask Learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- Description: GPT-2 is a large language model developed by OpenAI, being able to generate human-like text from minimal prompts. The model is a decoder-only transformer pretrained on a large corpus of English data in a self-supervised fashion, meaning it learned from the raw texts without any human labels. It was trained on a dataset...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
GPT-3
- Model: [GPT-3](https://arxiv.org/abs/2005.14165) 📚
- Size: 175B
- Training data: 570 GB of text-format data from CommonCrawl, WebText2, Books1, Books2, and Wikipedia
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 5/28/2020
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
- Description: GPT-3 is a transformer-based autoregressive language model with 175 billion parameters that achieves high performance, without any gradient updating or fine-tuning, on a wide range of NLP tasks (in a [zero-shot](https://en.wikipedia.org/wiki/Zero-shot_learning) or few-shot fashion), including translation, Q&A, word uns...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
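The few-shot setting described in the GPT-3 entry involves no gradient updates: worked examples are simply concatenated into the prompt ahead of the query. A minimal sketch of such prompt assembly (the template and the translation examples are illustrative, not taken from the paper):

```python
def few_shot_prompt(task, examples, query):
    """Assemble an in-context prompt: task description, worked examples, then the new query."""
    lines = [task, ""]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
    # The model is expected to continue the text after the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("house", "maison")],
    "bread",
)
print(prompt.count("Input:"))  # 3
```

The zero-shot case is the same template with an empty example list; the paper's finding is that performance generally improves as more in-context examples are added.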
GPT-4
- Model: [GPT-4](https://arxiv.org/abs/2303.08774) 📚🖼️
- Size: Not specified
- Training data: Not specified
- Data type: Text, Image
- Research field: Reinforcement Learning, Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
- Publication date: 3/15/2023
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774)
- Description: [GPT-4](https://arxiv.org/abs/2303.08774) is a [generative pre-trained transformer](https://paperswithcode.com/method/gpt) model and successor of the GPT-3(3.5) series. Besides being capable of dealing with several NLP tasks, GPT-4 can also accept images as input, making it different from its predecessors (i.e., multimo...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
GPT-J
- Model: [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B) 📚
- Size: 6B
- Training data: [The Pile](https://pile.eleuther.ai/)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 6/9/2021
- Organization: [EleutherAI](https://www.eleuther.ai) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Apache 2.0 License
- Paper: [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B)
- Description: [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6b) is a 6B parameter autoregressive language model with 28 layers, a model dimension of 4096, and a feedforward dimension of 16384. Like the GPT-Neo series, GPT-J uses [Rotary Position Embeddings](https://huggingface.co/docs/transformers/model_doc/roformer). The model is...
- Organization info: [EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission s...
GPT-NeoX
- Model: [GPT-NeoX](https://huggingface.co/EleutherAI/gpt-neox-20b) 📚
- Size: 20B
- Training data: [The Pile](https://pile.eleuther.ai/)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 4/14/2022
- Organization: [EleutherAI](https://www.eleuther.ai) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Apache 2.0 License
- Paper: [GPT-NeoX-20B: An Open-Source Autoregressive Language Model](https://arxiv.org/abs/2204.06745)
- Description: GPT-Neo is a series of autoregressive large language models, from [125M](https://huggingface.co/EleutherAI/gpt-neo-125m) to [20B parameters](https://huggingface.co/EleutherAI/gpt-neox-20b), trained on the Pile and openly available to the public through a permissive license. The GPT-Neo series is EleutherAI's replication of...
- Organization info: [EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission state...
Imagen
- Model: [Imagen](https://imagen.research.google/) 📚🖼️
- Size: 14B
- Training data: 860 million text-image pairs from Google's internal datasets and the [Laion](https://huggingface.co/datasets/laion/laion400m) dataset
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 5/23/2022
- Organization: [Google Research](https://research.google/) (Subsidiary)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding](https://arxiv.org/abs/2205.11487)
- Description: [Imagen](https://imagen.research.google/) is a text-to-image diffusion model that builds on the language understanding capabilities of large transformer language models and the capacity of diffusion models for high-fidelity image generation. Imagen uses a large frozen [T5-XXL](https://huggingface.co/google/t5...
- Organization info: [Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google R...
InstructGPT
- Model: [InstructGPT](https://arxiv.org/abs/2203.02155) 📚
- Size: 175B
- Training data: Prompt/completion pairs submitted to the OpenAI API
- Data type: Text
- Research field: Reinforcement Learning, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
- Publication date: 3/4/2022
- Organization: [OpenAI Inc.](https://openai.com/) (Non-profit)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Training Language Models to Follow Instructions with Human Feedback](https://arxiv.org/abs/2203.02155)
- Description: [InstructGPT](https://arxiv.org/abs/2203.02155) is a fine-tuned version of OpenAI's [GPT-3](https://arxiv.org/abs/2005.14165), achieved via a mix of supervised fine-tuning and reinforcement learning from human feedback. While GPT-3 was trained via causal language modeling, that is, to predict the next token in a seque...
- Organization info: [OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
LaMDA
- Model: [LaMDA](https://arxiv.org/abs/2201.08239) 📚
- Size: 137B
- Training data: 2.81T tokens from public dialog data and other public web documents
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 2/10/2022
- Organization: [Google](https://about.google/) (Subsidiary)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [LaMDA: Language Models for Dialog Applications](https://arxiv.org/abs/2201.08239)
- Description: [LaMDA](https://arxiv.org/abs/2201.08239) is a family of Transformer-based neural language models specialized for dialog. These models' sizes range from 2B to 137B parameters, and they are pre-trained on a dataset containing 1.56T words of public dialog data and web text. LaMDA has been pre-trained in causal language ...
- Organization info: [Google](https://about.google/) is an American multinational technology company focusing on artificial intelligence, online advertising, search engine technology, cloud computing, computer software, quantum computing, e-commerce, and consumer electronics. Google is also a subsidiary of Alphabet Inc., a publicly trad...
LLaMA 2
- Model: [LLaMA 2](https://arxiv.org/abs/2307.09288) 📚
- Size: 70B
- Training data: 2 trillion tokens with over 1 million human annotations
- Data type: Text
- Research field: Reinforcement Learning, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
- Publication date: 7/18/2023
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: LLaMA 2 Community License Agreement
- Paper: [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://arxiv.org/abs/2307.09288)
- Description: [Llama 2](https://arxiv.org/abs/2307.09288) is a collection of pre-trained and fine-tuned large language models ranging in scale from 7B to 70B parameters. This model is an updated version of [Llama 1](https://arxiv.org/abs/2302.13971), trained on a new mix of publicly available data (increased the size of the pretrain...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
LLaMA
- Model: [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) 📚
- Size: 65B
- Training data: 1.4 trillion tokens drawn from publicly available data sources and text from 20 different languages
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 2/27/2023
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: LLaMA 2 Community License Agreement
- Paper: [LLaMA: A foundational, 65-billion-parameter large language model](https://arxiv.org/abs/2302.13971)
- Description: [LLaMA](https://ai.meta.com/blog/large-language-model-llama-meta-ai/) is a collection of foundation language models ranging from 7B to 65B parameters, trained on over a trillion tokens of publicly available data. According to Meta AI, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive w...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
Midjourney
- Model: [Midjourney](https://www.midjourney.com/) 📚🖼️
- Size: Not specified
- Training data: Not specified
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: No
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 7/12/2022
- Organization: [Midjourney, Inc.](https://www.midjourney.com/) (Independent Research Lab)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Midjourney Documentation](https://docs.midjourney.com/)
- Description: [Midjourney](https://www.midjourney.com/) is a generative artificial intelligence program and service created and hosted by San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions (prompts), similar to OpenAI's DALL-E and Stability AI's Stable Diffusi...
- Organization info: [Midjourney](https://www.midjourney.com/) is an independent research lab involved in generative AI. Midjourney develops text-to-image models similar to OpenAI's DALL-E and Stability AI's Stable Diffusion.
Muse
- Model: [Muse](https://muse-model.github.io/) 📚🖼️
- Size: 3B
- Training data: [Imagen](https://imagen.research.google/) dataset consisting of 460M text-image pairs
- Data type: Text, Image
- Research field: Computer Vision, Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
- Publication date: 1/2/2023
- Organization: [Google Research](https://research.google/) (Subsidiary)
- Institution type: null
- Country: United States of America
- License: Proprietary License
- Paper: [Muse: Text-To-Image Generation via Masked Generative Transformers](https://arxiv.org/abs/2301.00704)
- Description: [Muse](https://muse-model.github.io/) is a text-to-image Transformer model trained on a masked modeling task, i.e., given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens, similar to what language models trained via MLM (masked language ...
- Organization info: [Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google R...
OPT-175B
- Model: [OPT-175B](https://arxiv.org/abs/2205.01068) 📚
- Size: 175B
- Training data: Approximately 180B tokens corresponding to 800 GB of data. The complete list of datasets used is given in [Appendix C](https://arxiv.org/abs/2205.01068)
- Data type: Text
- Research field: Natural Language Processing
- Risks and limitations documented: Yes
- Risk types: Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
- Publication date: 5/2/2022
- Organization: [Meta AI](https://ai.meta.com/) (Public Company)
- Institution type: null
- Country: United States of America
- License: OPT-175B License
- Paper: [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068)
- Description: [OPT-175B](https://ai.meta.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/) is a language model with 175 billion parameters, following a similar architectural design to that of GPT models, trained on publicly available data sets, to allow for more community engagement in understanding founda...
- Organization info: [Meta AI](https://ai.facebook.com/) is the AI research division of Meta Platforms, Inc., known as Meta and formerly Facebook, Inc. Meta is an American multinational technology conglomerate based in Menlo Park, California. Meta AI started as Facebook Artificial Intelligence Research (FAIR), announced in September 2013, ...
PaLM 2
[PaLM 2](https://ai.google/discover/palm2) 📚
Not specified
A diverse set of sources containing web documents, books, code, mathematics, and conversational data
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
5/1/2023
[Google AI](https://ai.google/) (Subsidiary)
null
United States of America
Proprietary License
[PaLM 2 Technical Report](https://ai.google/static/documents/palm2techreport.pdf)
[PaLM 2](https://ai.google/discover/palm2) is a state-of-the-art language model that has better multilingual and reasoning capabilities and is more compute-efficient than its predecessor ([PaLM](https://arxiv.org/abs/2204.02311)). The largest model in the PaLM 2 series, PaLM 2-L, is significantly smaller than the large...
[Google AI](https://ai.google/) is a research division at Google that focuses on developing artificial intelligence. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google AI offers a range of machine learning products, solutions, and services that ar...
PaLM
[PaLM](https://arxiv.org/abs/2204.02311) 📚
540B
780 billion tokens that represent a wide range of natural language use cases
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Malware Development, Environmental Impacts, Technological Unemployment, Intellectual Fraud
10/5/2022
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[PaLM: Scaling Language Modeling with Pathways](https://arxiv.org/abs/2204.02311)
The Pathways Language Model, or [PaLM](https://arxiv.org/abs/2204.02311), consists of a series of large language models, with sizes ranging from 8 billion, 62 billion, to 540 billion parameters. The development of PaLM was made possible through the utilization of Pathways, a machine learning system introduced in the [P...
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google R...
Parti
[Parti](https://github.com/google-research/parti) 📚🖼️
20B
[LAION-400M dataset](https://huggingface.co/datasets/laion/laion400m), [ALIGN training data](https://arxiv.org/abs/2102.05918), and the [JFT-4B dataset](https://paperswithcode.com/paper/scaling-vision-transformers)
Text, Image
Computer Vision, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Technological Unemployment
6/22/2022
[Google Research](https://research.google/) (Subsidiary)
null
United States of America
Proprietary License
[Scaling Autoregressive Models for Content-Rich Text-to-Image Generation](https://arxiv.org/abs/2206.10789)
Pathways Autoregressive Text-to-Image model ([Parti](https://github.com/google-research/parti)) is a series of autoregressive text-to-image generation models, from 350M to 20B parameters, that achieves high-fidelity photorealistic image generation. Unlike Google’s [Imagen](https://imagen.research.google/), a diffusion...
[Google Research](https://research.google/) is a research division of Google that focuses on advancing computer science, machine learning, artificial intelligence, and other related fields. Meanwhile, Google is a subsidiary of Alphabet Inc., a publicly traded company with multiple classes of shareholders. Google R...
Polyglot-Ko
[Polyglot-Ko](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) 📚
12.8B
863 GB of Korean language data curated by [TUNiB](https://tunib.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
4/3/2023
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[A Technical Report for Polyglot-Ko: Open-Source Large-Scale Korean Language Models](https://arxiv.org/abs/2306.02254)
[Polyglot-Ko](https://huggingface.co/EleutherAI/polyglot-ko-12.8b/tree/main) is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team, available in sizes from 1.3B to 12.8B. The model consists of 40 transformer layers with a model dimension of 5120 and a feedforward dimensio...
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission s...
Pythia
[Pythia](https://huggingface.co/EleutherAI/pythia-12b) 📚
12B
[The Pile](https://pile.eleuther.ai/)
Text
Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts
4/3/2023
[EleutherAI](https://www.eleuther.ai) (Non-profit)
null
United States of America
Apache 2.0 License
[Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling](https://arxiv.org/abs/2304.01373)
[Pythia](https://huggingface.co/EleutherAI/pythia-12b) is a collection of models developed to facilitate interpretability research, trained on sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been global...
[EleutherAI](https://www.eleuther.ai/) is a non-profit artificial intelligence research group. The group was formed in a Discord server in July 2020 to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Foundation, a non-profit research institute. According to its [mission s...
WebGPT
[WebGPT](https://arxiv.org/abs/2112.09332) 📚
175B
A collection of demonstrations and comparisons made by freelance contractors from [Upwork](https://www.upwork.com) and [Surge AI](https://www.surgehq.ai)
Text
Reinforcement Learning, Natural Language Processing
Yes
Disinformation, Algorithmic Discrimination, Social Engineering, Environmental Impacts, Technological Unemployment
12/17/2021
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
Proprietary License
[WebGPT: Browser-assisted question-answering with human feedback](https://arxiv.org/abs/2112.09332)
[WebGPT](https://openai.com/research/webgpt) is a fine-tuned version of [GPT-3](https://arxiv.org/abs/2005.14165). While GPT-3 tends to hallucinate information when performing tasks requiring real-world knowledge, WebGPT was trained to search the web via a text-based web browser and generate responses via the retrieved...
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...
Whisper
[Whisper](https://arxiv.org/abs/2212.04356) 📢📚
1.55B
680,000 hours of labeled audio
Text, Audio
Natural Language Processing, Speech Recognition
Yes
Disinformation, Algorithmic Discrimination, Environmental Impacts, Surveillance and Social Control, Technological Unemployment
12/6/2022
[OpenAI Inc.](https://openai.com/) (Non-profit)
null
United States of America
MIT License
[Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
[Whisper](https://arxiv.org/abs/2212.04356) is a general-purpose speech recognition model based on the original [encoder-decoder transformer](https://arxiv.org/abs/1706.03762) architecture trained on over 96 languages (other than English). Whisper can perform multilingual speech recognition, translation, and language i...
[OpenAI](https://openai.com/) is an American artificial intelligence research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P, founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, Jo...

Model Library DB

Dataset Summary

The Model Library is a project that maps the risks associated with modern machine learning systems. Here, we assess some of the most recent and capable AI systems ever created. This is the database for the Model Library.

Supported Tasks and Leaderboards

This dataset serves as a catalog of machine learning models, all displayed in the Model Library.

Languages

English.

Dataset Structure

Data Instances

Features available are: model_name_string, model_name_url, model_size_string, dataset, data_type, research_field, risks_and_limitations, risk_types, publication_date, organization_and_url, institution_type, country, license, paper_name_url, model_description, organization_info.
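As a minimal sketch, a single data instance can be represented as a flat record with one string-valued entry per feature listed above (values below are illustrative, abbreviated from the OPT-175B row; the exact cell contents in the hosted dataset may differ):

```python
# Illustrative data instance for the Model Library dataset.
# Field names follow the schema listed above; values are taken from the
# OPT-175B row and truncated for brevity.
record = {
    "model_name_string": "OPT-175B",
    "model_name_url": "[OPT-175B](https://arxiv.org/abs/2205.01068)",
    "model_size_string": "175B",
    "dataset": "Approximately 180B tokens corresponding to 800 GB of data.",
    "data_type": "Text",
    "research_field": "Natural Language Processing",
    "risks_and_limitations": "Yes",
    "risk_types": "Disinformation, Algorithmic Discrimination, "
                  "Social Engineering, Environmental Impacts",
    "publication_date": "5/2/2022",
    "organization_and_url": "[Meta AI](https://ai.meta.com/) (Public Company)",
    "institution_type": None,
    "country": "United States of America",
    "license": "OPT-175B License",
    "paper_name_url": "[OPT: Open Pre-trained Transformer Language Models]"
                      "(https://arxiv.org/abs/2205.01068)",
    "model_description": "OPT-175B is a language model with 175 billion parameters...",
    "organization_info": "Meta AI is the AI research division of Meta Platforms, Inc. ...",
}

# One entry per feature in the schema above.
assert len(record) == 16
```

Note that institution_type is null for the rows shown above, so it is represented here as `None`.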

Data Fields

See Data Instances above.

Data Splits

The "main" split is the current version displayed in the Model Library.

Dataset Creation

Curation Rationale

This dataset is maintained as part of a research project to catalog risks related to ML models.

Source Data

Initial Data Collection and Normalization

All data was collected manually.

Who are the source language producers?

More information can be found here.

Annotations

Annotation process

More information can be found here.

Who are the annotators?

Members of the AI Robotics Ethics Society (AIRES).

Personal and Sensitive Information

No personal or sensitive information is part of this dataset.

Considerations for Using the Data

Social Impact of Dataset

No considerations.

Discussion of Biases

No considerations.

Other Known Limitations

No considerations.

Additional Information

Dataset Curators

Members of the AI Robotics Ethics Society (AIRES).

Licensing Information

This dataset is licensed under the Apache License, version 2.0.

Citation Information


@misc{correa24library,
    author = {Nicholas Kluge Corr{\^e}a and Faizah Naqvi and Robayet Rossain},
    title = {Model Library},
    year = {2024},
    howpublished = {\url{https://github.com/Nkluge-correa/Model-Library}}
}

Contributions

If you would like to add a model, read our documentation and submit a PR on GitHub!
