Johns Hopkins University

Jan 22nd 2017

Two of the most inspiring, least understood, and most often derided terms in global health discourse are “Health for All” and “Primary Health Care.” This course explores why primary health care is central to achieving Health for All. It provides examples of how primary health care has been instrumental in approaching this goal in selected populations and how the principles of primary health care can guide future policies and actions.

Average: 5.8 (28 votes)
Jan 16th 2017

Do you want to write powerful, maintainable, and testable front-end applications faster and with less code? Then consider joining this course to gain skills in one of the most popular Single Page Application (SPA) frameworks today, AngularJS. Developed and backed by Google, AngularJS is a very marketable skill to acquire.

No votes yet
Jan 16th 2017

In this course you will learn how to program in R and how to use R for effective data analysis. You will learn how to install and configure the software necessary for a statistical programming environment and describe generic programming-language concepts as they are implemented in a high-level statistical language. The course covers practical issues in statistical computing, including programming in R, reading data into R, accessing R packages, writing R functions, debugging, profiling R code, and organizing and commenting R code. Topics in statistical data analysis will provide working examples.

Average: 5.6 (24 votes)
Jan 16th 2017

An introduction to the statistics behind the most popular genomic data science projects. This is the sixth course in the Genomic Big Data Science Specialization from Johns Hopkins University.

Average: 7.2 (5 votes)
Jan 16th 2017

Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.

No votes yet
Jan 16th 2017

Before you can work with data you have to get some. This course covers the basic ways that data can be obtained: from the web, from APIs, from databases, and from colleagues in various formats. It also covers the basics of data cleaning and how to make data “tidy”; tidy data dramatically speeds up downstream data analysis tasks. The course also covers the components of a complete data set, including raw data, processing instructions, codebooks, and processed data, and the basics needed for collecting, cleaning, and sharing data.
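The course itself teaches these ideas in R; purely as an illustration of what “tidy” means, here is a minimal Python/pandas sketch (the table and column names are hypothetical) that reshapes a wide table so that each row holds exactly one observation:

    # Hypothetical wide table: one column per measurement occasion.
    import pandas as pd

    wide = pd.DataFrame({
        "subject": ["s1", "s2"],
        "weight_before": [80.1, 72.4],
        "weight_after": [78.3, 71.0],
    })

    # Tidy form: one row per (subject, timepoint) observation.
    tidy = wide.melt(id_vars="subject",
                     var_name="timepoint", value_name="weight")
    tidy["timepoint"] = tidy["timepoint"].str.replace("weight_", "", regex=False)
    print(tidy)

In R, packages such as tidyr provide analogous reshaping.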

Average: 5.4 (13 votes)
Jan 16th 2017

Learn to use tools from the Bioconductor project to perform analysis of genomic data. This is the fifth course in the Genomic Big Data Science Specialization from Johns Hopkins University.

Average: 8.3 (3 votes)
Jan 16th 2017

Statistical inference is the process of drawing conclusions about populations or scientific truths from data. There are many modes of performing inference, including statistical modeling, data-oriented strategies, and explicit use of designs and randomization in analyses. Furthermore, there are broad theories (frequentist, Bayesian, likelihood, design-based, …) and numerous complexities (missing data, observed and unobserved confounding, biases) in performing inference. A practitioner can often be left in a debilitating maze of techniques, philosophies, and nuance.

Average: 7.1 (11 votes)
Jan 16th 2017

This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data.

Average: 5.5 (2 votes)
Jan 16th 2017

Over 500,000 people in the United States and over 8 million worldwide die from cancer every year. As people live longer, the incidence of cancer is rising worldwide, and the disease is expected to strike over 20 million people annually by 2030. This open course is designed for people who would like to develop an understanding of cancer and how it is prevented, diagnosed, and treated.

Average: 7.1 (11 votes)
Jan 16th 2017

This course focuses on the concepts and tools behind reporting modern data analyses in a reproducible manner. Reproducible research is the idea that data analyses, and more generally scientific claims, are published with their data and software code so that others may verify the findings and build upon them. The need for reproducibility is increasing dramatically as data analyses become more complex, involving larger datasets and more sophisticated computations. Reproducibility allows people to focus on the actual content of a data analysis, rather than on superficial details reported in a written summary.

Average: 6.7 (3 votes)
Jan 16th 2017

This course covers the essential exploratory techniques for summarizing data. These techniques are typically applied before formal modeling commences and can help inform the development of more complex statistical models. Exploratory techniques are also important for eliminating or sharpening potential hypotheses about the world that can be addressed by the data. We will cover in detail the plotting systems in R as well as some of the basic principles of constructing data graphics. We will also cover some of the common multivariate statistical techniques used to visualize high-dimensional data.

Average: 7.2 (5 votes)
Jan 16th 2017

Learn to use the tools that are available from the Galaxy Project. This is the second course in the Genomic Big Data Science Specialization.

Average: 1 (4 votes)
Jan 16th 2017

Did you ever want to build a web application? Perhaps you even started down that path in a language like Java or C#, only to realize how much “climbing the mountain” you had to do? Maybe you have heard about web services being all the rage, but thought they were too complicated to integrate into your web application. Or maybe you wondered how deploying web applications to the cloud works, but there was too much to set up just to get going.

No votes yet
Jan 16th 2017

A conceptual and interpretive public health approach to some of the most commonly used methods from basic statistics.

No votes yet
Jan 16th 2017

Linear models, as their name implies, relate an outcome to a set of predictors of interest using linear assumptions. Regression models, a subset of linear models, are the most important statistical analysis tool in a data scientist’s toolkit. This course covers regression analysis, least squares, and inference using regression models. Special cases of the regression model, ANOVA and ANCOVA, will be covered as well. Analysis of residuals and variability will be investigated. The course will cover modern thinking on model selection and novel uses of regression models, including scatterplot smoothing.
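The course itself works in R; purely as an illustration of the least-squares idea it teaches, here is a short Python sketch (the simulated data are hypothetical) that fits a straight line by ordinary least squares and inspects the residuals:

    # Simulate y = 2 + 0.5*x + noise, then recover the coefficients.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, size=50)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=50)

    X = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimates
    residuals = y - X @ beta

    print("intercept, slope:", beta)
    print("residual std. dev.:", residuals.std(ddof=2))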

Average: 7.7 (6 votes)
Jan 16th 2017

This course introduces you to the basic biology of modern genomics and the experimental tools that we use to measure it. We'll introduce the Central Dogma of Molecular Biology and cover how next-generation sequencing can be used to measure DNA, RNA, and epigenetic patterns. You'll also get an introduction to the key concepts in computing and data science that you'll need to understand how data from next-generation sequencing experiments are generated and analyzed.

Average: 4 (3 votes)
Jan 16th 2017

You already know how to build a basic web application with the Ruby on Rails framework. Perhaps you have even taken Course 1, "Ruby on Rails: An Introduction" (we highly recommend it), where you relied on external web services to be your “data layer”. But in the back of your mind, you always knew that there would come a time when you would need to roll up your sleeves and learn SQL to interact with your own relational database (RDBMS). There is, however, an easier way to get started: the Active Record Object/Relational Mapping (ORM) framework. In this course, we will use the Ruby language and the Active Record ORM framework to automate interactions with the database and quickly build the application we want.

No votes yet
Jan 16th 2017

A data product is the production output of a statistical analysis. Data products automate complex analysis tasks or use technology to expand the utility of a data-informed model, algorithm, or inference. This course covers the basics of creating data products using Shiny, R packages, and interactive graphics. The course will focus on the statistical fundamentals of creating a data product that can be used to tell a story about data to a mass audience.

Average: 3.3 (12 votes)
Jan 16th 2017

We will learn computational methods -- algorithms and data structures -- for analyzing DNA sequencing data. We will learn a little about DNA, genomics, and how DNA sequencing is used. We will use Python to implement key algorithms and data structures and to analyze real genomes and DNA sequencing datasets.
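As a flavor of the kind of algorithm and Python code involved, here is a minimal sketch of naive exact matching, which reports every offset at which a short read occurs in a reference sequence (the sequences and function name below are illustrative, not taken from the course):

    def naive_match(pattern, text):
        """Return all offsets where pattern occurs exactly in text."""
        occurrences = []
        for i in range(len(text) - len(pattern) + 1):
            if text[i:i + len(pattern)] == pattern:   # compare the substring at offset i
                occurrences.append(i)
        return occurrences

    reference = "ACGTTAGACGTACGT"   # hypothetical reference sequence
    read = "ACGT"                   # hypothetical short read
    print(naive_match(read, reference))

This brute-force approach scans every offset, so it scales poorly; efficient indexing data structures do much better on genome-scale data.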

Average: 7.8 (8 votes)
