Apache Spark for Big Data Analytics and Data Processing

Leverage the power of Apache Spark to perform efficient data processing and analytics on your data in real time

Ratings 4.19 / 5.00

What You Will Learn!

  • Query your structured data using Spark SQL and work with the Dataset API
  • Analyze and process graph structures using Spark’s GraphX module (see the sketch after this list)
  • Train machine learning models with streaming data, and use them for making real-time predictions
  • Implement high-velocity streaming and data processing use cases while working with the Streaming API
  • Dive into MLlib, Spark’s machine learning library of highly scalable algorithms
  • See how SparkR allows you to create and transform RDDs in R
  • See analytical use case implementations using MLlib, GraphX, and Spark Streaming
  • Examine a number of real-world use cases with hands-on projects
  • Build Hadoop and Apache Spark jobs that process data quickly and effectively
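
As a small taste of the GraphX work listed above, here is a minimal sketch in Scala. The users, edges, and the choice of PageRank as the example algorithm are illustrative assumptions, not material taken from the course:

    import org.apache.spark.graphx.{Edge, Graph}
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("GraphXSketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical social graph: (vertex id, user name) pairs and "follows" edges.
    val users   = sc.parallelize(Seq((1L, "alice"), (2L, "bob"), (3L, "carol")))
    val follows = sc.parallelize(Seq(Edge(1L, 2L, "follows"), Edge(2L, 3L, "follows"), Edge(3L, 1L, "follows")))
    val graph   = Graph(users, follows)

    // Rank users with GraphX's built-in PageRank and print each name with its score.
    val ranks = graph.pageRank(tol = 0.001).vertices
    ranks.join(users).collect().foreach { case (_, (rank, name)) => println(f"$name%-6s $rank%.3f") }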

Description

Today’s world witnesses a massive amount of data being generated every day, everywhere. As a result, many organizations are focusing on Big Data frameworks that can process large amounts of data in real time with maximum efficiency. This has led to Apache Spark rapidly gaining popularity in the Big Data market. If you want to get the most out of this trending Big Data framework for all your data processing needs, then go for this Learning Path.

This comprehensive 3-in-1 course focuses on performing data streaming and data analytics with Apache Spark. You will learn to load data from a variety of structured sources such as JSON, Hive, and Parquet using Spark SQL and schema RDDs. You will also build streaming applications and learn best practices for managing high-velocity streams and external data sources. Next, you will explore Spark’s machine learning libraries and GraphX, where you will perform graph processing and analysis. Finally, you will build projects that help you put what you have learned into practice and gain a strong hold on the topic.
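
For example, reading JSON and Parquet sources and querying them through Spark SQL looks roughly like the sketch below (reading Hive tables additionally requires enableHiveSupport()). The file paths, view names, and columns are assumptions for illustration:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("SparkSqlSketch").master("local[*]").getOrCreate()

    // Load two structured sources; Spark infers the schema of each.
    val orders   = spark.read.json("data/orders.json")         // JSON source (assumed path)
    val products = spark.read.parquet("data/products.parquet") // Parquet source (assumed path)

    orders.createOrReplaceTempView("orders")
    products.createOrReplaceTempView("products")

    // Query both sources together with plain SQL via Spark SQL.
    spark.sql(
      """SELECT p.name, COUNT(*) AS purchases
        |FROM orders o JOIN products p ON o.product_id = p.id
        |GROUP BY p.name
        |ORDER BY purchases DESC""".stripMargin).show()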

Contents and Overview

This training program includes 3 complete courses, carefully chosen to give you the most comprehensive training possible.

The first course, Spark Analytics for Real-Time Data Processing, starts off by explaining Spark SQL. You will learn how to use the Spark SQL API and built-in functions with Apache Spark. You will also go through some interactive analysis and look at integrations between Spark and Java, Scala, and Python. Next, you will explore Spark Streaming, StreamingContext, and DStreams. You will learn how Spark Streaming works on top of the Spark core, thus inheriting its features. Finally, you will stream data and learn best practices for managing high-velocity streaming and external data sources.
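
In classic Spark Streaming, a StreamingContext slices a live source into micro-batches, and each batch becomes an RDD inside a DStream. A minimal word-count sketch in Scala follows; the socket source, port, and five-second batch interval are assumptions:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // StreamingContext runs on top of the Spark core and groups input into batches.
    val conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(5)) // 5-second batches (assumed interval)

    // DStream of text lines from a TCP socket, e.g. fed locally with `nc -lk 9999`.
    val lines  = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()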

In the second course, Advanced Analytics and Real-Time Data Processing in Apache Spark, you will leverage the features of various components of the Spark framework to efficiently process, analyze, and visualize your data. You will then learn how to implement high-velocity streaming operations for data processing in order to perform efficient analytics on your real-time data. You will also analyze data using machine learning techniques and graphs. Next, you will learn to solve problems using machine learning techniques and find out about the tools available in the MLlib toolkit. Finally, you will see some useful machine learning algorithms with the help of Spark MLlib and integrate Spark with R.
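
To give a flavour of that MLlib work, the sketch below clusters a small dataset with k-means using the DataFrame-based spark.ml API; the input file, column names, and choice of k are assumptions rather than course material:

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("MLlibSketch").master("local[*]").getOrCreate()

    // Hypothetical CSV of numeric readings; the path and column names are assumptions.
    val raw = spark.read.option("header", "true").option("inferSchema", "true").csv("data/readings.csv")

    // Assemble the numeric columns into the single vector column MLlib expects.
    val features = new VectorAssembler()
      .setInputCols(Array("temperature", "pressure"))
      .setOutputCol("features")
      .transform(raw)

    // Cluster into three groups with k-means and attach a prediction column.
    val model = new KMeans().setK(3).setSeed(42L).fit(features)
    model.transform(features).show()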

The third course, Big Data Analytics Projects with Apache Spark, contains various projects based on real-world examples. The first project finds the top-selling products for an e-commerce business by efficiently joining datasets in the MapReduce paradigm. Next, a Market Basket Analysis will help you identify items likely to be purchased together and find correlations between items in a set of transactions. Moving on, you will learn about probabilistic logistic regression by finding the author of a post. Next, you will build a content-based recommendation system for movies, training a model to predict whether an action will happen. Finally, you will write a MapReduce-style Spark program to calculate mutual friends on a social network.
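
The mutual-friends project follows a classic map/reduce pattern: key every friendship on the ordered user pair, then intersect the two friend lists that arrive for that key. A minimal RDD-based sketch in Scala is shown below; the sample adjacency data is made up for illustration:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("MutualFriends").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical adjacency list: each user with their set of friends.
    val friendships = sc.parallelize(Seq(
      ("A", Set("B", "C", "D")),
      ("B", Set("A", "C")),
      ("C", Set("A", "B", "D")),
      ("D", Set("A", "C"))
    ))

    // Map: for every friendship, emit the ordered user pair as the key and the
    // owner's full friend set as the value.
    val pairs = friendships.flatMap { case (user, friends) =>
      friends.map { f =>
        val key = if (user < f) (user, f) else (f, user)
        (key, friends)
      }
    }

    // Reduce: intersecting the two sets emitted for a pair leaves their mutual friends.
    val mutual = pairs.reduceByKey(_ intersect _)
    mutual.collect().foreach { case ((u, v), shared) => println(s"$u & $v: ${shared.mkString(", ")}") }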

By the end of this course, you will have a sound understanding of the Spark framework, which will help you in analyzing and processing big data in real time.

Meet Your Expert(s):

We have brought together the best work of the following esteemed authors to ensure that your learning journey is smooth:

  • Nishant Garg has over 17 years of software architecture and development experience in various technologies, such as Java Enterprise Edition, SOA, Spring, Hadoop, Hive, Flume, Sqoop, Oozie, Spark, Shark, YARN, Impala, Kafka, Storm, Solr/Lucene, NoSQL databases (such as HBase, Cassandra, and MongoDB), and MPP databases (such as GreenPlum). He received his MS in software systems from the Birla Institute of Technology and Science, Pilani, India, and is currently working as a technical architect for the Big Data R&D Group at Impetus Infotech Pvt. Ltd. Previously, Nishant has enjoyed working with some of the most recognizable names in the IT services and financial industries, employing full software life cycle methodologies such as Agile and Scrum. Nishant has also undertaken many speaking engagements on big data technologies and is the author of Apache Kafka and HBase Essentials (Packt Publishing).

  • Tomasz Lelek is a software engineer and co-founder of InitLearn. He mostly programs in Java and Scala and dedicates his time and effort to getting better at everything. He is currently diving into Big Data technologies. Tomasz is very passionate about everything associated with software development. He has been a speaker at a few conferences in Poland (Confitura and JDD) and at the Krakow Scala User Group, has conducted a live coding session at the Geecon Conference, and was also a speaker at an international event in Dhaka. He is very enthusiastic and loves to share his knowledge.

Who Should Attend!

  • This course is for software engineers, data scientists, big data developers, and big data analysts who are interested in big data processing and data analytics with Apache Spark.

TAKE THIS COURSE

Tags

  • Apache Spark

Subscribers

75

Lectures

90



