PySpark

Aug 15th 2016

Learn how to apply data science techniques using parallel programming in Apache Spark to explore big data.

Jun 15th 2016

Learn the fundamentals and architecture of Apache Spark, the leading cluster-computing framework among professionals. Spark is rapidly becoming the compute engine of choice for big data. Spark programs are more concise and often run 10-100 times faster than Hadoop MapReduce jobs. As companies realize this, Spark developers are becoming increasingly valued.
