
Measure Visual Commonsense Knowledge

ACL SRW 2022 paper

"What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge".

This repo contains the code for the paper.


Where to start?

The repo is segmented into three main parts:

  1. models contains the code needed to obtain the models that have not already been pre-trained and released. These are the BERT baselines trained on visual corpora (bert-clip-bert-train, bert-lxmert-train and bert-lxmert-train-scratch) and CLIP-BERT. This directory also contains the necessary model weights and code for pretraining.
  2. memory_colors contains the code necessary for the Memory Colors evaluation. As long as you have the necessary model weights under models/data/model-weights, it can be run independently of the other directories.
  3. visual_property_norms contains the code necessary for the Visual Property Norms evaluation. As long as you have the necessary model weights under models/data/model-weights, it can be run independently of the other directories.

Both the Memory Colors and Visual Property Norms evaluations depend on pre-trained weights for the models being evaluated. Some of this pre-training needs to be done separately in models.
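Both evaluations probe a language model's visual commonsense with cloze-style queries. As a rough illustration of this style of probing (not the repo's actual evaluation code; the checkpoint name and query below are placeholders), a masked-language-model query with the Hugging Face transformers library could look like:

```python
from transformers import pipeline

# Hypothetical sketch of cloze-style probing for visual commonsense:
# ask a masked language model to fill in a visual property of an object.
# "bert-base-uncased" stands in for any of the evaluated checkpoints
# (e.g. the weights under models/data/model-weights).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

predictions = fill_mask("The colour of a banana is [MASK].")
for p in predictions[:3]:
    # each prediction carries the filled-in token and its probability
    print(p["token_str"], round(p["score"], 3))
```

An evaluation of this kind then compares the model's top-ranked tokens against ground-truth property annotations (memory colors, or CSLB property norms).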

Reference

@inproceedings{hagstrom-johansson-2022-models,
    title = "What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge",
    author = {Hagstr{\"o}m, Lovisa  and
      Johansson, Richard},
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-srw.19",
    pages = "252--261",
    abstract = "There are limitations in learning language from text alone. Therefore, recent focus has been on developing multimodal models. However, few benchmarks exist that can measure what language models learn about language from multimodal training. We hypothesize that training on a visual modality should improve on the visual commonsense knowledge in language models. Therefore, we introduce two evaluation tasks for measuring visual commonsense knowledge in language models (code publicly available at: github.com/lovhag/measure-visual-commonsense-knowledge) and use them to evaluate different multimodal models and unimodal baselines. Primarily, we find that the visual commonsense knowledge is not significantly different between the multimodal models and unimodal baseline models trained on visual text data.",
}

Acknowledgements

This project would not have been possible without the Centre for Speech, Language and the Brain (CSLB) at the University of Cambridge, the Hugging Face library, and the LXMERT repo. We thank you for your work!
