COVID-19 dataset clearinghouse: Difference between revisions

** [https://www.covidcaremap.org/maps/us-healthcare-system-capacity/#6.07/40.085/-75.195 Open map data on US health system capacity to care for COVID-19 patients]
* [https://www.kaggle.com/darshan1504/covid19-detection-xray-dataset COVID-19 Detection X-Ray Dataset], Kaggle
* [http://www.panacealab.org/covid19/ Covid-19 Twitter chatter dataset for scientific use], Panacea Lab, Georgia State University


=== Data scrapers and aggregators ===

Revision as of 07:20, 27 March 2020

Data cleaning proposal

Instructions for posting a request for a data set to be cleaned

Ideally, the submission should consist of a single plain text file that clearly states your request (what your “cleaned” data set should contain) and the desired format in which the data should be saved (e.g. csv, npy, mat, json). The file should also contain a link to a webpage where the raw data to be cleaned can easily be accessed and/or downloaded, together with specific instructions for locating the data set on that page.
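
For illustration, a hypothetical request file (the data set, format, and instructions below are invented) might look like this:

  Request: a cleaned time series of daily confirmed case counts by region
  Desired format: csv, one row per region per day, with columns date, region, cases
  Raw data: <link to the webpage hosting the raw data>
  Instructions: the data set is the table labelled "daily reports" on that page; download the most recent file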

We do not yet have a platform for these requests, so for now please post them at the above blog post or email tao@math.ucla.edu.

Data sets

Epidemiology

North America

Other regional data

Genomics and homology

Literature

Other data

Data scrapers and aggregators

Visualizations and summaries

Other lists

Data cleaning requests

We do not yet have a platform to handle queries or submissions for these cleaning requests, so for now please use the comment thread at this blog post.

From Chris Strohmeier (UCLA), Mar 25

The biorxiv_medrxiv file at https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge contains a folder, also titled biorxiv_medrxiv, which in turn contains hundreds of json files. Each file corresponds to a research article at least tangentially related to COVID-19.

We are requesting:

  • A tf-idf matrix associated with the subset of the above collection that contains full-text articles (some appear to have only abstracts).
  • The rows should correspond to the (e.g. 5000) most commonly used words.
  • The columns should correspond to each individual json file.
  • The clean data should be stored as an npy or mat file (or both).
  • Finally, there should be a csv or text document (or both) explaining the meaning of the individual rows and columns of the matrix (what word does each row correspond to? What file does each column correspond to?).

Contact: c.strohmeier@math.ucla.edu
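
For reference, here is a minimal sketch (not part of the original request) of one way to produce such a matrix in Python with scikit-learn. The folder path, the 5000-word cutoff, and the output file names are assumptions taken from the request above; max_features keeps the most frequent words, which is one reasonable reading of "most commonly used words", and get_feature_names_out requires scikit-learn 1.0 or later.

  import csv
  import glob
  import json

  import numpy as np
  from sklearn.feature_extraction.text import TfidfVectorizer

  # Assumed path to the unpacked Kaggle download; adjust to your local copy.
  json_paths = sorted(glob.glob("biorxiv_medrxiv/biorxiv_medrxiv/*.json"))

  texts, kept_paths = [], []
  for path in json_paths:
      with open(path) as f:
          article = json.load(f)
      # Keep only articles that actually contain full text, as requested.
      body = " ".join(block["text"] for block in article.get("body_text", []))
      if body.strip():
          texts.append(body)
          kept_paths.append(path)

  # Vocabulary restricted to the 5000 most frequent words; fit_transform returns a
  # documents-by-terms matrix, so transpose to get rows = words, columns = files.
  vectorizer = TfidfVectorizer(max_features=5000)
  tfidf = vectorizer.fit_transform(texts).T.toarray()

  np.save("cord19_tfidf.npy", tfidf)  # scipy.io.savemat can also write a .mat copy

  # Companion csv: which word each row is, and which json file each column is.
  with open("cord19_tfidf_legend.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["row_index", "word"])
      for i, word in enumerate(vectorizer.get_feature_names_out()):
          writer.writerow([i, word])
      writer.writerow(["column_index", "json_file"])
      for j, path in enumerate(kept_paths):
          writer.writerow([j, path])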

Miscellaneous links