Anthony Fox
Director of Data Science, Commonwealth Computer Research Inc
GeoMesa Founder and Technical Lead
anthony.fox@ccri.com
linkedin.com/in/anthony-fox-ccri
twitter.com/algoriffic
www.ccri.com
GeoMesa on Spark SQL
Extracting Location Intelligence from Data
Satellite AIS
ADS-B
Mobile Apps
Intro to Location Intelligence and GeoMesa
Spatial Data Types, Spatial SQL
Extending Spark Catalyst for Optimized Spatial SQL
Density of activity in San Francisco
Speed profile of San Francisco
Location Intelligence
Easy → Hard:
Show me all the coffee shops in this neighborhood.
How many users commute through this intersection every day?
Which users commute within 250 meters of a coffee shop?
Should I place an ad for coffee on this user's mobile device?
Where should I build my next coffee shop?
What is GeoMesa?
A suite of tools for persisting, querying, analyzing, and streaming spatio-temporal data at scale
Spatial Data Types
Points (PointUDT, MultiPointUDT): Locations, Events, Instantaneous Positions
Lines (LineUDT, MultiLineUDT): Road networks, Voyages, Trips, Trajectories
Polygons (PolygonUDT, MultiPolygonUDT): Administrative Regions, Airspaces
Spatial SQL
SELECT
  activity_id, user_id, geom, dtg
FROM
  activities
WHERE
  st_contains(st_makeBBOX(-78,37,-77,38), geom) AND
  dtg > cast('2017-06-01' as timestamp) AND
  dtg < cast('2017-06-05' as timestamp)

Callouts: st_makeBBOX is a geometry constructor, geom is the spatial column in the schema, and st_contains is a topological predicate.
Sample Spatial UDFs: Geometry Constructors
st_geomFromWKT: create a point, line, or polygon from WKT, e.g. st_geomFromWKT('POINT(-122.40 37.78)')
st_makeLine: create a line from a sequence of points, e.g. st_makeLine(collect_list(geom))
st_makeBBOX: create a bounding box from (left, bottom, right, top), e.g. st_makeBBOX(-123,37,-121,39)
...
Sample Spatial UDFs: Topological Predicates
st_contains: returns true if the second argument geometry is contained within the first, e.g. st_contains(st_geomFromWKT('POLYGON…'), geom)
st_within: returns true if the first argument geometry is entirely within the second, e.g. st_within(geom, st_geomFromWKT('POLYGON…'))
st_dwithin: returns true if the geometries are within a specified distance of each other, e.g. st_dwithin(geom1, geom2, 100)
Sample Spatial UDFs: Processing
st_bufferPoint: create a buffer around a point for distance-within queries, e.g. st_bufferPoint(geom, 10)
st_envelope: extract the envelope of a geometry, e.g. st_envelope(geom)
st_geohash: encode the geometry using a Z-order space-filling curve; useful for grid analysis, e.g. st_geohash(geom, 35)
st_closestpoint: find the point on the target geometry that is closest to the given geometry, e.g. st_closestpoint(geom1, geom2)
st_distanceSpheroid: find the great-circle distance using the WGS84 ellipsoid, e.g. st_distanceSpheroid(geom1, geom2)
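A minimal sketch of composing these UDFs in a Spark session. Assumptions, not shown here: the GeoMesa Spark SQL module is on the classpath so the st_* functions are registered, and an "activities" view with user_id and geom columns already exists.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("spatial-udf-sketch")
  .master("local[*]")  // assumption: a local demo session
  .getOrCreate()

// Constructor + predicate: keep only records inside a bounding box.
val inBox = spark.sql("""
  SELECT user_id, geom
  FROM activities
  WHERE st_contains(st_makeBBOX(-123, 37, -121, 39), geom)
""")

// Constructor + aggregate: build one track per user from its points.
// Note: collect_list does not guarantee ordering; sort within each
// activity first if point order matters.
val tracks = spark.sql("""
  SELECT user_id, st_makeLine(collect_list(geom)) AS track
  FROM activities
  GROUP BY user_id
""")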
Optimizing Spatial SQL
SELECT
  activity_id, user_id, geom, dtg
FROM
  activities
WHERE
  st_contains(st_makeBBOX(-78,37,-77,38), geom) AND
  dtg > cast('2017-06-01' as timestamp) AND
  dtg < cast('2017-06-05' as timestamp)

Goal: only load partitions that have records that intersect the query geometry.
Extending Spark's Catalyst Optimizer
https://databricks.com/blog/2015/04/13/deep-dive-into-spark-sqls-catalyst-optimizer.html
Catalyst exposes hooks to insert optimization rules at various points in the query processing logic.
Extending Spark's Catalyst Optimizer
/**
 * :: Experimental ::
 * A collection of methods that are considered experimental, but can be used to hook into
 * the query planner for advanced functionality.
 *
 * @group basic
 * @since 1.3.0
 */
@Experimental
@transient
@InterfaceStability.Unstable
def experimental: ExperimentalMethods = sparkSession.experimental
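A hedged sketch of using that hook; the pass-through rule here is a stand-in for illustration, not GeoMesa's actual rule (which appears below):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Stand-in rule: a real rule pattern-matches on the plan and returns a
// rewritten tree; this one returns the plan unchanged.
object PassThroughRule extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan
}

val spark = SparkSession.builder()
  .appName("catalyst-hook-sketch")
  .master("local[*]")  // assumption: a local demo session
  .getOrCreate()

// The hook itself: extraOptimizations is a mutable Seq of rules that the
// optimizer runs in addition to its built-in batches.
spark.experimental.extraOptimizations ++= Seq(PassThroughRule)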
SQL optimizations for Spatial Predicates
SELECT
  activity_id, user_id, geom, dtg                    -- relational projection
FROM
  activities                                         -- GeoMesa relation
WHERE
  st_contains(st_makeBBOX(-78,37,-77,38), geom) AND  -- topological predicate on a geometry literal
  dtg > cast('2017-06-01' as timestamp) AND          -- date range predicate
  dtg < cast('2017-06-05' as timestamp)
SQL optimizations for Spatial Predicates
object STContainsRule extends Rule[LogicalPlan] with PredicateHelper {
  override def apply(plan: LogicalPlan): LogicalPlan = {
    plan.transform {
      case filt @ Filter(f, lr @ LogicalRelation(gmRel: GeoMesaRelation, _, _)) =>
        …
        val relation = gmRel.copy(filt = ff.and(gtFilters :+ gmRel.filt))
        lr.copy(expectedOutputAttributes = Some(lr.output),
                relation = relation)
    }
  }
}

The rule intercepts a Filter on a GeoMesa logical relation, extracts the predicates that GeoMesa can handle, creates a new GeoMesa relation with those predicates pushed down into the scan, and returns a modified tree with the new relation and the filter removed. GeoMesa will compute the minimal ranges necessary to cover the query region.
SQL optimizations for Spatial Predicates
The rule rewrites the logical plan in two steps:
  Relational Projection / Filter / GeoMesa Relation
  → Relational Projection / GeoMesa Relation <topo predicate>
  → GeoMesa Relation <topo predicate> <relational projection>
SQL optimizations for Spatial Predicates
After the rewrite, the original query is effectively:

SELECT
  *
FROM
  activities <pushdown filter and projection>

Reduced I/O, reduced network overhead, reduced compute load: faster Location Intelligence answers.
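To check that the pushdown actually fired (a sketch, assuming the rule has been registered on the session as above), compare the plans that Spark prints:

// Prints the parsed, analyzed, optimized, and physical plans. After the
// rule runs, the Filter node should be gone and the predicate should be
// folded into the GeoMesa relation's scan.
spark.sql("""
  SELECT activity_id, user_id, geom, dtg
  FROM activities
  WHERE st_contains(st_makeBBOX(-78,37,-77,38), geom)
""").explain(true)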
Density of Activity in San Francisco

SELECT
  geohash,
  count(geohash) as count
FROM (
  SELECT st_geohash(geom, 35) as geohash
  FROM sf
  WHERE
    st_contains(st_makeBBOX(-122.4194-1, 37.77-1, -122.4194+1, 37.77+1),
                geom)
)
GROUP BY geohash

1. Constrain to San Francisco
2. Snap location to a 35-bit geohash
3. Group by geohash and count records per geohash
Density of Activity in San Francisco
Visualize using Jupyter and Bokeh:

p = figure(title="STRAVA",
           plot_width=900, plot_height=600,
           x_range=x_range, y_range=y_range)
p.add_tile(tonerlines)
p.circle(x=projecteddf['px'],
         y=projecteddf['py'],
         fill_alpha=0.5,
         size=6,
         fill_color=colors,
         line_color=colors)
show(p)
Speed Profile of a Metro Area
Inputs: STRAVA activities
An activity is sampled once per second
Each observation has a location and time
{
"type": "Feature",
"geometry": { "type": "Point", "coordinates": [-122.40736,37.807147] },
"properties": {
"activity_id": "**********************",
"athlete_id": "**********************",
"device_type": 5,
"activity_type": "Ride",
"frame_type": 2,
"commute": false,
"date": "2016-11-02T23:58:03",
"index": 0
},
"id": "6a9bb90497be6f64eae009e6c760389017bc31db:0"
}
Speed Profile of a Metro Area

1. Select all activities within metro area
2. Sort each activity by dtg ascending
3. Window over each set of consecutive samples
4. Create a temporary table

spark.sql("""
SELECT
  activity_id,
  index,
  geom as s,
  lead(geom) OVER (PARTITION BY activity_id ORDER BY dtg ASC) as e,
  dtg as start,
  lead(dtg) OVER (PARTITION BY activity_id ORDER BY dtg ASC) as end
FROM activities
WHERE
  activity_type = 'Ride' AND
  st_contains(
    st_makeBBOX(-122.4194-1, 37.77-1, -122.4194+1, 37.77+1),
    geom)
ORDER BY dtg ASC
""").createOrReplaceTempView("segments")
Speed Profile of a Metro Area

5. Compute the distance between consecutive points
6. Compute the time difference between consecutive points
7. Compute the speed
8. Snap the location to a grid based on a GeoHash
9. Create a temporary table

spark.sql("""
SELECT
  st_geohash(s, 35) as gh,
  st_distanceSpheroid(s, e) /
    cast(cast(end as long) - cast(start as long) as double)
    as meters_per_second
FROM segments
""").createOrReplaceTempView("gridspeeds")
Speed Profile of a Metro Area

10. Group the grid cells
11. For each grid cell, compute the median and standard deviation of the speed
12. Extract the location of the grid cell

SELECT
  st_centroid(st_geomFromGeoHash(gh, 35)) as p,
  percentile_approx(meters_per_second, 0.5) as med_meters_per_second,
  stddev(meters_per_second) as std_dev
FROM gridspeeds
GROUP BY gh
Speed Profile of a Metro Area
Visualize using Jupyter and Bokeh (same snippet as the density visualization above).
Thank You.
geomesa.org
github.com/locationtech/geomesa
twitter.com/algoriffic
linkedin.com/in/anthony-fox-ccri
anthony.fox@ccri.com
www.ccri.com
Indexing Spatio-Temporal Data in Bigtable
• Bigtable clones have a single-dimension, lexicographically sorted index
• What if we concatenated latitude and longitude?
• Fukushima sorts lexicographically near Moscone Center because they have the same latitude

Moscone Center coordinates: 37.7839° N, 122.4012° W
Row Key
37.7839,-122.4012
37.7839,140.4676
Space-filling Curves
2-D Z-order Curve, 2-D Hilbert Curve
Space-filling curve example
Moscone Center coordinates: 37.7839° N, 122.4012° W
Encode coordinates to a 32-bit Z:

1. Scale latitude and longitude to use the 16 available bits each
   scaled_x = (-122.4012 + 180)/360 * 2^16 = 10485
   scaled_y = (37.7839 + 90)/180 * 2^16 = 46524
2. Take the binary representation of the scaled coordinates
   bin_x = 0010100011110101
   bin_y = 1011010110111100
3. Interleave the bits of x and y and convert back to an integer
   bin_z = 01001101100100011110111101110010
   z = 1301409650

The result is a distance-preserving hash.
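A minimal sketch of the same encoding in code; it mirrors the arithmetic above and is an illustration, not GeoMesa's actual Z2 implementation:

// Scale a coordinate into 16 bits (step 1).
def scale(v: Double, min: Double, max: Double): Int =
  ((v - min) / (max - min) * (1 << 16)).toInt

// Interleave the 16 bits of x and y, most significant bits first, with
// the x bit leading each pair (step 3).
def interleave(x: Int, y: Int): Long = {
  var z = 0L
  for (i <- 15 to 0 by -1) {
    z = (z << 1) | ((x >> i) & 1)
    z = (z << 1) | ((y >> i) & 1)
  }
  z
}

val x = scale(-122.4012, -180, 180) // 10485
val y = scale(37.7839, -90, 90)     // 46524
val z = interleave(x, y)            // 1301409650

Because nearby coordinates share high-order bits, a contiguous slice of z values covers a compact region, which is what makes the range scans below possible.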
Space-filling curves linearize a multi-dimensional space: z = 1301409650 is a single position on the Bigtable index [0, 2^32].
Regions translate to range scans on the Bigtable index [0, 2^32]:

scan 'geomesa', {STARTROW => 1301409650, ENDROW => 1301409657}
ADS-B
Provisioning Spatial RDDs
Accumulo:

params = {
    "instanceId": "geomesa",
    "zookeepers": "X.X.X.X",
    "user": "user",
    "password": "******",
    "tableName": "geomesa.strava"
}
(spark
    .read
    .format("geomesa")
    .options(**params)
    .option("geomesa.feature", "activities")
    .load())
HBase and Bigtable:

params = {
    "bigtable.table.name": "geomesa.strava"
}
(spark
    .read
    .format("geomesa")
    .options(**params)
    .option("geomesa.feature", "activities")
    .load())
Flat files:

params = {
    "geomesa.converter": "strava",
    "geomesa.input": "s3://path/to/data/*.json.gz"
}
(spark
    .read
    .format("geomesa")
    .options(**params)
    .option("geomesa.feature", "activities")
    .load())
Speed Profile of a Metro Area: The Dream
Speed Profile of a Metro Area
Inputs: STRAVA activities
Approach:
● Select all activities within metro area
● Sort each activity by dtg ascending
● Window over each set of consecutive samples
● Compute summary statistics of speed
● Group by grid cell
● Visualize