COVID-19 dataset clearinghouse

From Polymath Wiki
* [https://github.com/covid19-data/covid19-data 2019-nCoV Data Processing Pipelines and datasets]   
** Country and state names are normalized to ISO 3166-1 codes.
* [https://coronavirus.1point3acres.com/en COVID-19 in US and Canada]
** [https://coronavirus.1point3acres.com/en/data Data request form]
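To illustrate what the ISO 3166-1 normalization mentioned above involves, here is a minimal sketch using a small hand-maintained lookup table; the names and the `normalize_country` helper are hypothetical, and a real pipeline would use a complete mapping (e.g. the full ISO 3166-1 table) rather than this toy dictionary.

```python
# Toy mapping from raw country-name variants to ISO 3166-1 alpha-2 codes.
# A production pipeline would cover the full ISO 3166-1 table.
ISO_3166_1_ALPHA2 = {
    "United States": "US",
    "United States of America": "US",
    "South Korea": "KR",
    "Republic of Korea": "KR",
}

def normalize_country(name: str) -> str:
    """Map a raw country name to its ISO 3166-1 alpha-2 code.

    Unknown names are passed through unchanged so they can be
    flagged and added to the mapping later.
    """
    return ISO_3166_1_ALPHA2.get(name.strip(), name)
```

Normalizing to a single code lets rows from different sources (which may spell the same country several ways) be joined reliably.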



Revision as of 17:44, 25 March 2020

== Data cleaning proposal ==

Instructions for posting a request for a data set to be cleaned

Ideally, the submission should consist of a single plain-text file that clearly delineates your request (specify what your “cleaned” data set should contain). It should state the desired format in which the cleaned data should be saved (e.g. csv, npy, mat, json), and include a link to a webpage where the raw data can easily be accessed and/or downloaded, with specific instructions for locating the data set on that page.

We do not yet have a platform for these requests, so for now please post them at the above blog post or email tao@math.ucla.edu.

== Data sets ==

== Data cleaning requests ==

We do not yet have a platform to handle queries or submissions for these cleaning requests, so for now please use the comment thread at this blog post.

From Chris Strohmeier (UCLA), Mar 25

The biorxiv_medrxiv file at https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge contains another folder titled biorxiv_medrxiv, which in turn contains hundreds of JSON files. Each file corresponds to a research article at least tangentially related to COVID-19.

We are requesting:

* A tf-idf matrix associated with the subset of the above collection that contains full-text articles (some appear to have only abstracts).
* The rows should correspond to the (e.g. 5000) most commonly used words.
* The columns should correspond to the individual JSON files.
* The cleaned data should be stored as an npy or mat file (or both).
* Finally, there should be a csv or text document (or both) explaining the meaning of the individual rows and columns of the matrix (which word does each row correspond to? Which file does each column correspond to?).

Contact: c.strohmeier@math.ucla.edu

== Miscellaneous links ==

* LitCovid – a curated literature hub for tracking up-to-date scientific information about the 2019 novel coronavirus