COVID-19 dataset clearinghouse: Difference between revisions

From Polymath Wiki
* [https://github.com/beoutbreakprepared/nCoV2019 Location for summaries and analysis of data related to n-CoV 2019, first reported in Wuhan, China], Outbreak and Pandemic Preparedness team at the Institute for Health Metrics and Evaluation, University of Washington
** A [https://www.healthmap.org/covid-19/ visualization of one of the data sets]
* [https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide Daily data on the geographic distribution of COVID-19 cases worldwide], European Centre for Disease Prevention and Control
* [https://docs.google.com/spreadsheets/d/1jS24DjSPVWa4iuxuD4OAXrE3QeI8c9BC1hSlqr-NMiU/edit#gid=1187587451 Google sheets from DXY.cn]
** [https://docs.google.com/spreadsheets/u/2/d/e/2PACX-1vRwAqp96T9sYYq2-i7Tj0pvTf6XVHjDSMIKBdZHXiCGGdNC0ypEU9NbngS8mxea55JuCFuua1MUeOj5/pubhtml raw data]
** [https://covidtracking.com/api/ API]
* [https://github.com/kgjenkins/covid-19-ny COVID-19 coronavirus cases in New York State]
==== Other regional data ====
* [https://www.covid19india.org/ India COVID-19 tracker]
** [https://docs.google.com/spreadsheets/d/e/2PACX-1vSc_2y5N0I67wDU38DjDh35IZSIS30rQf7_NYZhtYYGU1jJYT6_kDx4YpF-qw0LSlGsBYP8pqM_a1Pd/pubhtml Patient database]


=== Genomics and homology ===

Revision as of 16:01, 26 March 2020

Data cleaning proposal

Instructions for posting a request for a data set to be cleaned

Ideally, the submission should consist of a single plain text file that clearly delineates your request (specify what your “cleaned” data set should contain) and the desired format in which the cleaned data should be saved (e.g. csv, npy, mat, json). The file should also contain a link to a webpage where the raw data to be cleaned can easily be accessed and/or downloaded, with specific instructions for how to locate the data set on that page.
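
For concreteness, a request file might follow a skeleton along these lines; the field labels and placeholders below are illustrative only, not a required schema:

  Request: <what the cleaned data set should contain>
  Desired format: <csv, npy, mat, and/or json>
  Raw data: <link to the webpage where the raw data can be accessed or downloaded>
  Locating the data: <e.g. which download link, tab, or folder on that page>
  Contact: <email address>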

We do not yet have a platform for these requests, so please post them for now at the above blog post or email tao@math.ucla.edu.

Data sets

Epidemiology

North America

Other regional data

Genomics and homology

Other data

Data scrapers

Visualizations and summaries

Other lists

Data cleaning requests

We do not yet have a platform to handle queries or submissions for these cleaning requests, so for now please use the comment thread at this blog post.

From Chris Strohmeier (UCLA), Mar 25

The biorxiv_medrxiv folder at https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge contains another folder, also titled biorxiv_medrxiv, which in turn contains hundreds of json files. Each file corresponds to a research article at least tangentially related to COVID-19.

We are requesting the following (a sketch of one possible implementation appears after the list):

  • A tf-idf matrix associated with the subset of the above collection that contains full-text articles (some appear to have only abstracts).
  • The rows should correspond to the (e.g. 5000) most commonly used words.
  • The columns should correspond to the individual json files.
  • The cleaned data should be stored as an npy or mat file (or both).
  • Finally, there should be a csv or text document (or both) explaining the meaning of the individual rows and columns of the matrix (what word does each row correspond to? what file does each column correspond to?).
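
A minimal sketch of how such a matrix might be built, assuming the Kaggle archive has been unpacked locally and using scikit-learn's TfidfVectorizer; the directory path, the 5000-word cap, and the output file names (tfidf.npy, tfidf_legend.csv) are illustrative choices rather than part of the request:

  # Sketch under the stated assumptions, not an official solution.
  import glob
  import json

  import numpy as np
  from sklearn.feature_extraction.text import TfidfVectorizer

  # Assumed local layout after unpacking the Kaggle archive.
  paths = sorted(glob.glob("biorxiv_medrxiv/biorxiv_medrxiv/*.json"))

  docs, kept = [], []
  for path in paths:
      with open(path) as f:
          record = json.load(f)
      # Keep only articles with actual body text, not just an abstract.
      body = " ".join(chunk["text"] for chunk in record.get("body_text", []))
      if body.strip():
          docs.append(body)
          kept.append(path)

  # scikit-learn produces a documents-by-terms matrix; the request wants
  # words as rows and files as columns, hence the transpose.
  vectorizer = TfidfVectorizer(max_features=5000)
  tfidf = vectorizer.fit_transform(docs)   # shape: (n_files, 5000)
  matrix = tfidf.T.toarray()               # shape: (5000, n_files)

  np.save("tfidf.npy", matrix)

  # Companion csv: which word each row is, and which json file each column is.
  with open("tfidf_legend.csv", "w") as f:
      f.write("axis,index,label\n")
      for i, word in enumerate(vectorizer.get_feature_names_out()):
          f.write(f"row,{i},{word}\n")
      for j, path in enumerate(kept):
          f.write(f"column,{j},{path}\n")

Here max_features=5000 keeps the 5000 terms with the highest frequency across the corpus, which is one natural reading of "most commonly used words"; scipy.io.savemat could be used alongside np.save if a mat file is also wanted.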

Contact: c.strohmeier@math.ucla.edu

Miscellaneous links