Deep Dive Into Catalyst: Apache Spark 2.0's Optimizer
Yin Huai, Spark Summit 2016
```sql
SELECT count(*)
FROM (
  SELECT t1.id
  FROM t1 JOIN t2
  WHERE t1.id = t2.id AND t2.id > 50 * 1000
) tmp
```
Write Programs Using RDD API
Solution 1
```scala
val count = t1.cartesian(t2).filter {
  case (id1FromT1, id2FromT2) => id1FromT1 == id2FromT2
}.filter {
  case (id1FromT1, id2FromT2) => id2FromT2 > 50 * 1000
}.map {
  case (id1FromT1, id2FromT2) => id1FromT1
}.count
println("Count: " + count)
```

This computes the join t1 JOIN t2 ON t1.id = t2.id as a full cartesian product, and only afterwards applies the filter t2.id > 50 * 1000.
Solution 2
```scala
val filteredT2 = t2.filter(id1FromT2 => id1FromT2 > 50 * 1000)
val preparedT1 = t1.map(id1FromT1 => (id1FromT1, id1FromT1))
val preparedT2 = filteredT2.map(id1FromT2 => (id1FromT2, id1FromT2))
val count = preparedT1.join(preparedT2).map {
  case (id1FromT1, _) => id1FromT1
}.count
println("Count: " + count)
```

Here the filter t2.id > 50 * 1000 runs before the join t1 JOIN t2 ON t1.id = t2.id, so far fewer rows are joined.
Solution 1 vs. Solution 2
Execution time (s): Solution 1: 53.92, Solution 2: 0.26 (~200x difference).
Write Programs Using RDD API
- Users' functions are black boxes
  - Opaque computation
  - Opaque data types
- Programs built using the RDD API have total control over how to execute every data operation
- Developers therefore have to write efficient programs for different kinds of workloads
Is there an easy way to write efficient programs?
The easiest way to write efficient programs is to not worry about it and get your programs automatically optimized.
How?
- Write programs using high-level programming interfaces
  - Programs describe what data operations are needed, without specifying how to execute those operations
  - High-level programming interfaces: SQL, DataFrame, and Dataset
- Get an optimizer that automatically finds the most efficient plan for the data operations specified in the user's program (see the sketch below)
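As a concrete example, here is a minimal sketch of the opening query written with the DataFrame API. It assumes tables t1 and t2 are already registered; the session setup shown is hypothetical. The program states only what to compute, and Catalyst decides how to execute it:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("catalyst-demo").getOrCreate()

val t1 = spark.table("t1")
val t2 = spark.table("t2")

// Declarative: the join and filter order here is not how Spark will
// necessarily execute them; Catalyst is free to rearrange the plan.
val count = t1.join(t2, t1("id") === t2("id"))
  .filter(t2("id") > 50 * 1000)
  .select(t1("id"))
  .count()
println("Count: " + count)
```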
Catalyst: Apache Spark's Optimizer
How Catalyst Works: An Overview
SQL AST, DataFrame, and Dataset (Java/Scala) programs all become trees, the abstractions of users' programs inside Catalyst: a Query Plan, which transformations turn into an Optimized Query Plan, which code generation turns into RDDs.
Trees: Abstractions of Users’ Programs
```sql
SELECT sum(v)
FROM (
  SELECT
    t1.id,
    1 + 2 + t1.value AS v
  FROM t1 JOIN t2
  WHERE t1.id = t2.id AND t2.id > 50 * 1000
) tmp
```
- Expression: an expression represents a new value, computed based on input values, e.g. `1 + 2 + t1.value`
- Attribute: a column of a dataset (e.g. `t1.id`) or a column generated by a specific data operation (e.g. `v`); a sketch of how such trees arise from the DataFrame API follows
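For instance (assuming a DataFrame with an integer column named value; the names are hypothetical), the Column built below wraps exactly this kind of expression tree:

```scala
import org.apache.spark.sql.functions.{col, lit}

// Builds the expression tree Add(Add(Literal(1), Literal(2)), value)
// and names the result v, mirroring "1 + 2 + t1.value AS v".
val v = (lit(1) + lit(2) + col("value")).as("v")
```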
Query Plan (as a tree):

```
Aggregate(sum(v))
  Project(t1.id, 1 + 2 + t1.value AS v)
    Filter(t1.id = t2.id AND t2.id > 50 * 1000)
      Join
        Scan(t1)
        Scan(t2)
```
Logical Plan
- A Logical Plan describes computation on datasets without defining how to conduct the computation
  - output: a list of attributes generated by this Logical Plan, e.g. [id, v]
  - constraints: a set of invariants about the rows generated by this plan, e.g. t2.id > 50 * 1000
- The query plan tree above is a Logical Plan
Physical Plan
- A Physical Plan describes computation on datasets, with specific definitions of how to conduct the computation
- A Physical Plan is executable
The Physical Plan for the same query (as a tree):

```
Hash-Aggregate(sum(v))
  Project(t1.id, 1 + 2 + t1.value AS v)
    Filter(t1.id = t2.id AND t2.id > 50 * 1000)
      Sort-Merge Join
        Parquet Scan(t1)
        JSON Scan(t2)
```
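These trees can be inspected directly: Dataset.explain(true) prints the parsed, analyzed, optimized, and physical plans. A minimal sketch, assuming the running example's tables are registered and `spark` is a SparkSession:

```scala
val tmp = spark.sql("""
  SELECT sum(v)
  FROM (
    SELECT t1.id, 1 + 2 + t1.value AS v
    FROM t1 JOIN t2
    WHERE t1.id = t2.id AND t2.id > 50 * 1000
  ) tmp
""")
// Prints the plan at every stage, from the unresolved logical plan
// through to the selected physical plan.
tmp.explain(true)
```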
Transformations
- Transformations that do not change the tree type (Transform and Rule Executor)
  - Expression => Expression
  - Logical Plan => Logical Plan
  - Physical Plan => Physical Plan
- Transformations from one kind of tree to another
  - Logical Plan => Physical Plan
Transform

- A function associated with every tree, used to implement a single rule
Before: `1 + 2 + t1.value` is the tree Add(Add(Literal(1), Literal(2)), Attribute(t1.value)), which evaluates 1 + 2 for every row.

After: `3 + t1.value` is the tree Add(Literal(3), Attribute(t1.value)), which evaluates 1 + 2 only once.
Transform
- A transformation is defined as a partial function
- Partial function: a function that is defined for a subset of its possible arguments

```scala
val expression: Expression = ...
expression.transform {
  case Add(Literal(x, IntegerType), Literal(y, IntegerType)) =>
    Literal(x + y)
}
```

The case clause determines whether the partial function is defined for a given input.
Applied to `1 + 2 + t1.value`, i.e. Add(Add(Literal(1), Literal(2)), Attribute(t1.value)), transform visits every node of the tree and checks whether the partial function is defined for it.
The subtree Add(Literal(1), Literal(2)) matches the rule and is replaced, producing `3 + t1.value`, i.e. Add(Literal(3), Attribute(t1.value)). A self-contained toy version of this transform is sketched below.
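A toy version of the transform (simplified classes, not Catalyst's real TreeNode API; Literal here drops the data type for brevity). Children are rewritten before their parent is inspected, so Add(Literal(1), Literal(2)) collapses to Literal(3) in a single pass; Catalyst's real transform walks top-down by default, which gives the same result for this rule:

```scala
sealed trait Expr {
  def children: Seq[Expr]
  def withChildren(newChildren: Seq[Expr]): Expr
  def transformUp(rule: PartialFunction[Expr, Expr]): Expr = {
    // Rewrite children first, then try the rule on this node.
    val afterChildren = withChildren(children.map(_.transformUp(rule)))
    rule.applyOrElse(afterChildren, (e: Expr) => e) // keep node if rule undefined
  }
}
case class Literal(value: Int) extends Expr {
  def children = Nil
  def withChildren(c: Seq[Expr]) = this
}
case class Attribute(name: String) extends Expr {
  def children = Nil
  def withChildren(c: Seq[Expr]) = this
}
case class Add(left: Expr, right: Expr) extends Expr {
  def children = Seq(left, right)
  def withChildren(c: Seq[Expr]) = Add(c(0), c(1))
}

val expr = Add(Add(Literal(1), Literal(2)), Attribute("t1.value"))
val folded = expr.transformUp {
  case Add(Literal(x), Literal(y)) => Literal(x + y)
}
// folded == Add(Literal(3), Attribute("t1.value")), i.e. 3 + t1.value
```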
Combining Multiple Rules
Predicate Pushdown moves the filter on t2 below the join (a toy sketch of the rule follows):

Before:

```
Aggregate(sum(v))
  Project(t1.id, 1 + 2 + t1.value AS v)
    Filter(t1.id = t2.id AND t2.id > 50 * 1000)
      Join
        Scan(t1)
        Scan(t2)
```

After:

```
Aggregate(sum(v))
  Project(t1.id, 1 + 2 + t1.value AS v)
    Join(t1.id = t2.id)
      Scan(t1)
      Filter(t2.id > 50 * 1000)
        Scan(t2)
```
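A hand-written sketch of the pushdown idea (toy classes and names, not Spark's actual rule): each conjunct of a Filter above a Join moves to whichever side covers all the tables it references, and the rest stays on the Join as its condition:

```scala
// A predicate carries the set of tables it references.
case class Predicate(sql: String, references: Set[String])

sealed trait Plan { def tables: Set[String] }
case class Scan(table: String) extends Plan { def tables = Set(table) }
case class Filter(preds: Seq[Predicate], child: Plan) extends Plan {
  def tables = child.tables
}
case class Join(left: Plan, right: Plan, cond: Seq[Predicate]) extends Plan {
  def tables = left.tables ++ right.tables
}

def pushDown(plan: Plan): Plan = plan match {
  case Filter(preds, Join(l, r, cond)) =>
    // Send each conjunct to the side that covers all its references.
    val (toLeft, rest)    = preds.partition(_.references.subsetOf(l.tables))
    val (toRight, onJoin) = rest.partition(_.references.subsetOf(r.tables))
    val newLeft  = if (toLeft.nonEmpty)  Filter(toLeft, l)  else l
    val newRight = if (toRight.nonEmpty) Filter(toRight, r) else r
    Join(newLeft, newRight, cond ++ onJoin)
  case other => other
}

// The running example: a cross join with the WHERE clause as a Filter.
val before = Filter(
  Seq(Predicate("t1.id = t2.id",     Set("t1", "t2")),
      Predicate("t2.id > 50 * 1000", Set("t2"))),
  Join(Scan("t1"), Scan("t2"), Nil))

// pushDown(before) keeps t1.id = t2.id as the join condition and moves
// t2.id > 50 * 1000 below the join, onto the t2 side.
```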
Constant Folding then evaluates constant expressions at planning time: the Project expression becomes t1.id, 3 + t1.value AS v, and the pushed-down filter becomes t2.id > 50000:

```
Aggregate(sum(v))
  Project(t1.id, 3 + t1.value AS v)
    Join(t1.id = t2.id)
      Scan(t1)
      Filter(t2.id > 50000)
        Scan(t2)
```
Column Pruning then inserts Projects directly above the scans, so only the columns that are actually needed are read: Project(t1.id, t1.value) above Scan(t1), and Project(t2.id) above Scan(t2).
Before transformations:

```
Aggregate(sum(v))
  Project(t1.id, 1 + 2 + t1.value AS v)
    Filter(t1.id = t2.id AND t2.id > 50 * 1000)
      Join
        Scan(t1)
        Scan(t2)
```

After transformations:

```
Aggregate(sum(v))
  Project(t1.id, 3 + t1.value AS v)
    Join(t1.id = t2.id)
      Project(t1.id, t1.value)
        Scan(t1)
      Filter(t2.id > 50000)
        Project(t2.id)
          Scan(t2)
```
Combining Multiple Rules: Rule Executor
A Rule Executor transforms a tree into another tree of the same type by applying rules organized in batches (Batch 1, Batch 2, ..., Batch n, each containing Rule 1, Rule 2, ...). Every rule is implemented with Transform. A batch's rules can be applied in one of two ways:

1. Fixed point (re-run the batch until the tree stops changing)
2. Once

A minimal sketch of this mechanism follows.
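The sketch below uses simplified names and types, not Catalyst's real RuleExecutor API; it only illustrates how batches and the two application modes fit together:

```scala
trait Rule[T] { def apply(tree: T): T }

sealed trait ApplyMode
case object Once extends ApplyMode
case class FixedPoint(maxIterations: Int) extends ApplyMode

case class Batch[T](name: String, mode: ApplyMode, rules: Seq[Rule[T]])

def execute[T](tree: T, batches: Seq[Batch[T]]): T =
  batches.foldLeft(tree) { (current, batch) =>
    // One pass: apply every rule in the batch, in order.
    def runOnce(t: T): T = batch.rules.foldLeft(t)((acc, rule) => rule(acc))
    batch.mode match {
      case Once => runOnce(current)
      case FixedPoint(max) =>
        // Re-run the batch until no rule changes the tree,
        // or the iteration limit is reached.
        var t = current
        var changed = true
        var iters = 0
        while (changed && iters < max) {
          val next = runOnce(t)
          changed = next != t
          t = next
          iters += 1
        }
        t
    }
  }
```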
Transformations (continued)

Transform and Rule Executor cover the transformations that keep the tree type; what remains is transforming one kind of tree into another: Logical Plan => Physical Plan.
From Logical Plan to Physical Plan

- A Logical Plan is transformed into a Physical Plan by applying a set of Strategies
- Every Strategy uses pattern matching to convert one kind of tree into another

```scala
object BasicOperators extends Strategy {
  def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
    ...
    case logical.Project(projectList, child) =>
      execution.ProjectExec(projectList, planLater(child)) :: Nil
    case logical.Filter(condition, child) =>
      execution.FilterExec(condition, planLater(child)) :: Nil
    ...
  }
}
```

planLater triggers other Strategies to plan the child.
The full pipeline: a SQL AST, DataFrame, or Dataset (Java/Scala) program becomes an Unresolved Logical Plan → (Analysis, using the Catalog) → Logical Plan → (Logical Optimization) → Optimized Logical Plan → (Physical Planning) → Physical Plans → (Cost Model) → Selected Physical Plan, which is executed as RDDs.
- Analysis (Rule Executor): transforms an Unresolved Logical Plan into a Resolved Logical Plan
  - Unresolved => Resolved: uses the Catalog to find where datasets and columns come from and to determine the types of columns
- Logical Optimization (Rule Executor): transforms a Resolved Logical Plan into an Optimized Logical Plan
- Physical Planning (Strategies + Rule Executor): transforms an Optimized Logical Plan into a Physical Plan
Spark’s Planner
- 1st phase: transforms the Logical Plan into the Physical Plan using Strategies
- 2nd phase: uses a Rule Executor to make the Physical Plan ready for execution
  - Prepare scalar sub-queries
  - Ensure requirements on input rows
  - Apply physical optimizations
Ensure Requirements on Input Rows
Example: a Sort-Merge Join [t1.id = t2.id] requires rows from t1 sorted by t1.id and rows from t2 sorted by t2.id. To satisfy these requirements, the planner inserts Sort [t1.id] above t1 and Sort [t2.id] above t2. But what if t1 has already been sorted by t1.id? Then the Sort above t1 is unnecessary, and only Sort [t2.id] above t2 remains. A simplified sketch of this logic follows.
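The sketch below uses hypothetical types; Spark's real EnsureRequirements rule also handles data partitioning and distribution, not just ordering. The idea: compare what an operator requires of each child with what the child already provides, and insert a Sort only where the requirement is not already met:

```scala
case class SortOrder(column: String)

trait PhysicalPlan {
  def children: Seq[PhysicalPlan]
  def withChildren(newChildren: Seq[PhysicalPlan]): PhysicalPlan
  def outputOrdering: Option[SortOrder]             // what this node produces
  def requiredChildOrdering: Seq[Option[SortOrder]] // what it needs per child
}

case class Sort(order: SortOrder, child: PhysicalPlan) extends PhysicalPlan {
  def children = Seq(child)
  def withChildren(c: Seq[PhysicalPlan]) = copy(child = c.head)
  def outputOrdering = Some(order)
  def requiredChildOrdering = Seq(None)
}

def ensureOrdering(plan: PhysicalPlan): PhysicalPlan = {
  val fixed = plan.children.zip(plan.requiredChildOrdering).map {
    // Requirement already met (e.g. t1 stored sorted by t1.id): no Sort.
    case (child, Some(req)) if child.outputOrdering.contains(req) =>
      ensureOrdering(child)
    // Requirement not met: add a Sort above the (recursively fixed) child.
    case (child, Some(req)) =>
      Sort(req, ensureOrdering(child))
    case (child, None) =>
      ensureOrdering(child)
  }
  plan.withChildren(fixed)
}
```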
Catalyst in Apache Spark
Catalyst sits at the heart of Spark SQL, between Spark Core (RDD) and the high-level interfaces: SQL and DataFrame/Dataset are built on Catalyst, and ML Pipelines, Structured Streaming, GraphFrames, and data sources ({ JSON }, JDBC, and more) build on those. With Spark 2.0, we expect most users to migrate to the high-level APIs (SQL, DataFrame, and Dataset).
Where to Start
- Source code:
  - Trees: TreeNode, Expression, Logical Plan, and Physical Plan
  - Transformations: Analyzer, Optimizer, and Planner
- Check out previous pull requests
- Start writing code using Catalyst
SPARK-12032: ~200 lines of changes (95 lines are for tests); certain workloads can run hundreds of times faster.
SPARK-8992 (pivot table support): ~250 lines of changes (99 lines are for tests).
Try Apache Spark with Databricks

- Try the latest version of Apache Spark and a preview of Spark 2.0

http://databricks.com/try
Thank you. Office hour: 2:45pm – 3:30pm @ Expo Hall