COVID-19 dataset clearinghouse
Data cleaning proposal
Instructions for posting a request for a data set to be cleaned
Ideally, the submission should consist of a single plain text file which clearly delineates your request (specify what your “cleaned” data set should contain). This should specify the desired format in which the data should be saved (e.g. csv, npy, mat, json). The text file should also contain a link to a webpage where the raw data to be cleaned can easily be accessed and/or downloaded, with specific instructions for how to locate the data set on that webpage.
We do not yet have a platform for these requests, so please post them for now at the above blog post or email tao@math.ucla.edu.
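For illustration only, a request might look something like the following plain text file (the dataset, URL, and column names here are hypothetical):

    Request: cleaned daily case counts by country
    Desired format: a single csv with columns date, country_iso3166, confirmed_cases
    Raw data: https://example.org/raw-case-reports (the daily report archives are linked under "Downloads" on that page)
    Notes: dates should be in ISO 8601 format; drop rows with no reported date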
Data sets
Epidemiology
- COVID-19 data sets, Kaggle
- Coronavirus Disease (COVID-19) – Statistics and Research, Our World in Data, by Max Roser, Hannah Ritchie and Esteban Ortiz-Ospina
- Novel Coronavirus (COVID-19) Cases, Johns Hopkins University Center for Systems Science and Engineering
- Novel Coronavirus 2019 time series data on cases, sourced and cleaned from the above data set (see the loading sketch after this list)
- 2019-nCoV Data Processing Pipelines and datasets
- Country and state names are normalized with ISO 3166-1 codes.
- Location for summaries and analysis of data related to 2019-nCoV, first reported in Wuhan, China, from the Outbreak and Pandemic Preparedness team at the Institute for Health Metrics and Evaluation, University of Washington
- Daily data on the geographic distribution of COVID-19 cases worldwide, European Centre for Disease Prevention and Control
- Google sheets from DXY.cn
- Contains some patient information (age, gender, etc.)
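As a rough illustration (not part of the original listing), the Johns Hopkins time series above can be loaded with pandas along the following lines. The file path is an assumption based on the repository layout at the time of writing; check the repository for the current file names.

    # Sketch: load the JHU CSSE global confirmed-cases time series with pandas.
    # The URL below is an assumption about the repository layout.
    import pandas as pd

    url = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
           "csse_covid_19_data/csse_covid_19_time_series/"
           "time_series_covid19_confirmed_global.csv")
    confirmed = pd.read_csv(url)

    # One row per province/country; the remaining columns are daily cumulative counts.
    by_country = (confirmed
                  .drop(columns=["Province/State", "Lat", "Long"])
                  .groupby("Country/Region")
                  .sum())
    print(by_country.loc["Italy"].tail())  # last few days of cumulative confirmed cases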
North America
- COVID Tracking Data, from the COVID tracking project
- A daily-updated repository with CSV representations of data from the COVID Tracking API.
- COVID-19 in US and Canada
- COVID tracking project
- COVID-19 coronavirus cases in New York State
Other regional data
Genomics and homology
- GISAID data (Global Initiative on Sharing All Influenza Data)
- Registration is required.
- Nextstrain build for novel coronavirus (nCoV), based on GISAID data
- Coronavirus Genome Sequence, Kaggle
- Repository of Coronavirus Genomes, Kaggle
- Wuhan coronavirus 2019-nCoV protease homology model, National Institutes of Health
Other data
- Aggregated foot traffic data, Safegraph
- Requires executing a non-commercial agreement.
- Sample visualization of Safegraph data
- COVID Care Map
- Open geospatial work to support health systems' capacity (providers, supplies, ventilators, beds, meds) to effectively care for rapidly growing COVID-19 patient needs
- Open map data on US health system capacity to care for COVID-19 patients
- COVID-19 Detection X-Ray Dataset, Kaggle
Data scrapers
Visualizations and summaries
- COVID-19 Coronavirus Pandemic, Worldometer
- Tracking coronavirus: Map, data and timeline, BNO News
- Coronavirus COVID-19 Global Cases, JHU CSSE
- Infection2020
- covy.app
Other lists
- Reddit thread collecting coronavirus datasets
- Review of COVID-19 APIs, Wendell Santos
Data cleaning requests
We do not yet have a platform to handle queries or submissions for these cleaning requests, so for now please use the comment thread at this blog post.
From Chris Strohmeier (UCLA), Mar 25
The biorxiv_medrxiv archive at https://www.kaggle.com/allen-institute-for-ai/CORD-19-research-challenge contains a folder, also titled biorxiv_medrxiv, which in turn contains hundreds of json files. Each file corresponds to a research article at least tangentially related to COVID-19.
We are requesting:
- A tf-idf matrix associated with the subset of the above collection that contains full-text articles (some appear to have only abstracts).
- The rows should correspond to the (e.g. 5000) most commonly used words.
- The columns should correspond to each individual json file.
- The clean data should be stored as an npy or mat file (or both).
- Finally, there should be a csv or text document (or both) explaining the meaning of the individual rows and columns of the matrix (which word does each row correspond to, and which file does each column correspond to?). One possible way to produce this is sketched below.
Contact: c.strohmeier@math.ucla.edu
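One possible way to produce the requested matrix, sketched here for illustration only: it assumes scikit-learn is available, that the json files follow the CORD-19 schema (a "body_text" list of paragraphs, each with a "text" field), and that they have been unpacked into the directory named below.

    import glob
    import json
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Collect the full text of every article that actually has body text.
    files, docs = [], []
    for path in sorted(glob.glob("biorxiv_medrxiv/biorxiv_medrxiv/*.json")):
        with open(path) as f:
            article = json.load(f)
        body = " ".join(p["text"] for p in article.get("body_text", []))
        if body:
            files.append(path)
            docs.append(body)

    vectorizer = TfidfVectorizer(max_features=5000)  # keep the ~5000 most common words
    tfidf = vectorizer.fit_transform(docs)           # shape: documents x words

    # Requested orientation: rows = words, columns = individual json files.
    np.save("tfidf.npy", tfidf.T.toarray())

    # Companion csv explaining what each row and column of the matrix means.
    words = vectorizer.get_feature_names_out()  # get_feature_names() on older scikit-learn
    with open("tfidf_labels.csv", "w") as out:
        out.write("axis,index,label\n")
        for i, w in enumerate(words):
            out.write(f"row,{i},{w}\n")
        for j, name in enumerate(files):
            out.write(f"column,{j},{name}\n")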
Miscellaneous links
- LitCovid - a curated literature hub for tracking up-to-date scientific information about the 2019 novel Coronavirus
- COVID-19 SARS-CoV-2 preprints from medRxiv and bioRxiv
- COVID-19 - official Indian government site