Big data vahidamiri-tabriz-13960226-datastack.ir
Big Data: Concepts, Challenges, and Solutions
VAHID AMIRI
VAHIDAMIRY.IR
VAHID.AMIRY@GMAIL.COM
@DATASTACK
Third National Conference on Distributed Computing and Big Data Processing
Tabriz
Ordibehesht 1396 (May 2017)
Big Data
Data Processing
Data Gathering
Data Storing
Big Data Definition
 No single standard definition…
“Big Data” is data whose scale, diversity, and complexity
require new architecture, techniques, algorithms, and
analytics to manage it and extract value and hidden
knowledge from it…
Big Data: the 3 V's (Volume, Velocity, Variety)
Some make it 4 V's
Solution
Big Data
Big Computation
Big Computer
Big Data Solutions
 Hadoop is a software framework for distributed processing of large datasets
across large clusters of computers
 Hadoop implements Google’s MapReduce, using HDFS
 MapReduce divides applications into many small blocks of work.
 HDFS creates multiple replicas of data blocks for reliability, placing them on compute
nodes around the cluster
Hadoop
Spark Stack
 More than just the Elephant in the room
 Over 120 types of NoSQL databases
So many NoSQL options
 Extend the Scope of RDBMS
 Caching
 Master/Slave
 Table Partitioning
 Federated Tables
 Sharding
NoSQL
 Relational database (RDBMS) technology
 Has not fundamentally changed in over 40 years
 Default choice for holding data behind many web apps
 Handling more users means adding a bigger server
RDBMS with Extended Functionality
Vs.
Systems Built from Scratch
with Scalability in Mind
NoSQL Movement
CAP Theorem
 “Of three properties of shared-data systems – data Consistency, system
Availability and tolerance to network Partition – only two can be achieved at
any given moment in time.”
 CA
 Highly-available consistency
 CP
 Enforced consistency
 AP
 Eventual consistency
CAP Theorem
Flavors of NoSQL
 Schema-less
 State (Persistent or Volatile)
 Example:
 Redis
 Amazon DynamoDB
Key / Value Database
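To make the key/value model concrete, here is a minimal sketch in Scala using the Jedis client against the Redis example above; the client library, host, port, and key names are illustrative assumptions, not part of the original slides:

import redis.clients.jedis.Jedis

object KeyValueSketch {
  def main(args: Array[String]): Unit = {
    val jedis = new Jedis("localhost", 6379)  // assumes a local Redis server
    jedis.set("user:1:name", "Vahid")         // store a value under an opaque key
    println(jedis.get("user:1:name"))         // retrieve by key: no schema, no joins
    jedis.close()
  }
}

The database sees only keys and opaque values; any structure inside the value is the application's concern.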
 Wide, sparse column sets
 Schema-light
 Examples:
 Cassandra
 HBase
 BigTable
 GAE HR DS (Google App Engine High Replication Datastore)
Column Database
 Use for data that is
 document-oriented (collections of JSON documents) with semi-structured data
 Encodings include XML, YAML, JSON & BSON
 Binary forms (PDF, Microsoft Office documents: Word, Excel, …)
 Examples: MongoDB, CouchDB
Document Database
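As a hedged illustration of the document model, the sketch below stores and queries a JSON-like document with the official MongoDB Java driver (synchronous API) from Scala; the connection string, database, collection, and field names are assumptions made for this example:

import com.mongodb.client.MongoClients
import org.bson.Document

object DocumentSketch {
  def main(args: Array[String]): Unit = {
    val client = MongoClients.create("mongodb://localhost:27017") // assumes a local MongoDB
    val people = client.getDatabase("demo").getCollection("people")
    // Documents in one collection need not share a schema.
    people.insertOne(new Document("name", "Vahid").append("skills", java.util.Arrays.asList("spark", "hadoop")))
    println(people.find(new Document("name", "Vahid")).first().toJson)
    client.close()
  }
}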
Graph Database
Use for data with
 a lot of many-to-many relationships
 when your primary objective is quickly finding connections, patterns, and relationships between the objects within lots of data
 Examples: Neo4J, FreeBase (Google)
So which type of NoSQL? Back to CAP…
CP = NoSQL/column
Hadoop
BigTable
HBase
MemCacheDB
AP = NoSQL/document or key/value
DynamoDB
CouchDB
Cassandra
Voldemort
CA = SQL/RDBMS
SQL Server / SQL Azure
Oracle
MySQL
About Apache Spark
 Fast and general-purpose cluster computing system
 10x (on disk) to 100x (in-memory) faster than Hadoop MapReduce
 Particularly popular for running iterative machine learning algorithms
 Provides high level APIs in
 Java
 Scala
 Python
 R
 http://spark.apache.org/
Why Spark?
 Most machine learning algorithms are iterative, because each iteration can improve the results
 With a disk-based approach, each iteration's output is written to disk, making it slow
Hadoop execution flow
Spark execution flow
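The difference between the two flows above can be sketched in a few lines of Scala: with cache(), an iterative job reads and parses its input once and then reuses the in-memory RDD on every pass, instead of going back to disk each iteration. The file path and the toy update rule are placeholders, not taken from the slides:

import org.apache.spark.{SparkConf, SparkContext}

object IterativeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("IterativeSketch").setMaster("local[*]"))
    val points = sc.textFile("data/points.txt")          // hypothetical input file
      .map(_.split(",").map(_.toDouble))
      .cache()                                           // keep the parsed data in memory
    var weight = 0.0
    for (_ <- 1 to 10) {                                 // each iteration reuses the cached RDD
      weight += points.map(p => p.sum).sum() / points.count()
    }
    println(s"final weight: $weight")
    sc.stop()
  }
}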
Spark history
A Brief History: MapReduce
 MapReduce use cases showed two major limitations:
 difficulty of programming directly in MR
 performance bottlenecks, or batch not fitting the use cases
A Brief History: Spark
 Some key points about Spark:
 handles batch, interactive, and real-time within a single framework
 native integration with Java, Python, Scala
 programming at a higher level of abstraction
 more general: map/reduce is just one set of supported constructs
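For instance, the classic MapReduce word count collapses to a few lines of Spark code, and the same API then extends to joins, SQL, streaming, and graph operations. This sketch assumes an existing SparkContext sc (as in the standalone examples later in the deck) and a placeholder input path:

val counts = sc.textFile("hdfs:///tmp/input.txt")   // hypothetical path
  .flatMap(_.split("\\s+"))                         // "map" side: emit words
  .map(word => (word, 1))
  .reduceByKey(_ + _)                               // "reduce" side: sum per word
counts.take(10).foreach(println)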
Spark Stack
 Spark SQL
 For SQL and structured data processing
 MLlib
 Machine Learning Algorithms
 GraphX
 Graph Processing
 Spark Streaming
 Stream processing of live data streams
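A tiny hedged example of the Spark SQL component (assuming Spark 2.x and its SparkSession entry point; the data and table name are made up for illustration):

import org.apache.spark.sql.SparkSession

object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("SparkSqlSketch").master("local[*]").getOrCreate()
    import spark.implicits._
    // A small in-memory DataFrame stands in for a real table.
    val people = Seq(("Ali", 29), ("Sara", 35), ("Reza", 41)).toDF("name", "age")
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()
    spark.stop()
  }
}

MLlib, GraphX, and Spark Streaming plug into the same engine, so data can move between components without leaving the cluster.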
Cluster Deployment
 Standalone Deploy Mode
 simplest way to deploy Spark on a private cluster
 Amazon EC2
 EC2 scripts are available
 Very quick to launch a new cluster
 Apache Mesos
 Hadoop YARN
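Whichever cluster manager is used, the application code stays the same; only the master URL changes (usually passed via spark-submit --master, or set on the SparkConf as below). The host names and ports here are placeholders, and the bare "yarn" value assumes Spark 2.x:

import org.apache.spark.SparkConf

val conf = new SparkConf().setAppName("DeploySketch")
  .setMaster("local[*]")                 // standalone deploy mode on a single machine
// .setMaster("spark://master:7077")     // Spark standalone cluster
// .setMaster("mesos://master:5050")     // Apache Mesos
// .setMaster("yarn")                    // Hadoop YARN (reads cluster config from HADOOP_CONF_DIR)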
Which Language Should I Use?
 Standalone programs can be written in any of them, but the interactive console is available only for Python & Scala
 Python developers: can stay with Python for both
 Java developers: consider using Scala for console (to learn the API)
 Performance: Java / Scala will be faster (statically typed), but Python can do well
for numerical work with NumPy
RDD
 Resilient Distributed Datasets (RDD) are the primary abstraction in Spark – a
fault-tolerant collection of elements that can be operated on in parallel
RDD
 Two types of operations on RDDs: transformations and actions
 Transformations are lazy (not computed immediately)
 The transformed RDD gets recomputed when an action is run on it (default)
 However, an RDD can be persisted into storage, in memory or on disk
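A short sketch of this laziness and persistence, reusing a SparkContext sc as in the standalone examples below (the log file path is a placeholder):

val lines  = sc.textFile("data/app.log")              // nothing is read yet
val errors = lines.filter(_.contains("ERROR"))        // transformation: still nothing computed
errors.persist()                                      // keep the result in memory once it is computed
val total    = errors.count()                         // action: triggers the actual work
val timeouts = errors.filter(_.contains("timeout")).count() // reuses the persisted RDD, no re-read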
Transformations
 Transformations create a new dataset from an existing one
 All transformations in Spark are lazy: they do not compute their results right away – instead they remember the transformations applied to some base dataset
 This lets Spark optimize the required calculations and recover from lost data partitions
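A few common transformations, sketched with a SparkContext sc; none of these lines triggers any computation, they only extend the lineage:

val nums    = sc.parallelize(1 to 10)
val squares = nums.map(x => x * x)           // map: one output element per input element
val evens   = squares.filter(_ % 2 == 0)     // filter: keep a subset
val words   = sc.parallelize(Seq("big data", "big compute")).flatMap(_.split(" ")) // flatMap: 0..n outputs per input
val counts  = words.map((_, 1)).reduceByKey(_ + _) // reduceByKey: combine values per key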
Actions
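Actions are the operations that trigger execution and either return a value to the driver program or write data out to storage. A short illustrative sketch, again reusing sc, with a placeholder output path:

val data = sc.parallelize(1 to 100)
println(data.count())                        // count: number of elements
println(data.reduce(_ + _))                  // reduce: aggregate with a function
println(data.take(5).mkString(", "))         // take: first n elements back to the driver
data.saveAsTextFile("out/numbers")           // saveAsTextFile: write the partitions out (hypothetical path)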
Execution Flow
Standalone (Scala)
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // should be some text file on your system
    // Run locally in a single JVM; replace "local" with a cluster master URL to scale out.
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache() // read with 2 partitions and cache in memory
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
Standalone (Java)
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "LOGFILES_ADDRESS"; // path to some text file on your system
    SparkConf conf = new SparkConf().setAppName("Simple Application").setMaster("local");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> logData = sc.textFile(logFile);
    long numAs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("a"); }
    }).count();
    long numBs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("b"); }
    }).count();
    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
  }
}