Cyclist safety varies widely across metropolitan areas. In this context, safety means a cyclist can travel from point A to point B with minimal risk of an accident, and it depends largely on bike lane availability, greenway access, one-way roads, and other roadway characteristics. Using geospatial data for road systems, greenways, and parks, this project sought to identify and map the safest available route for a cyclist using ArcGIS Pro and Python.
This project began with the acquisition of spatial datasets describing cycling infrastructure, greenways, and raw road networks within Wake County, North Carolina. An ArcGIS Pro Script Tool was then developed using Python and arcpy geoprocessing tools. The tool prompted the user to provide cycling infrastructure, road, greenway, and desired start/stop location data for their area of analysis. It then reprojected and aligned the infrastructure layers, assigned corresponding 'safety' fields to the unified dataset, and used arcpy functions to create a network dataset from the combined features. Finally, a network analysis layer was generated using ArcGIS Pro's built-in tools to perform route optimization.
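The tool's exact code is not reproduced here, but a minimal sketch of the core arcpy workflow might look like the following. The parameter order, field names, coordinate system, and dataset names are illustrative assumptions rather than the tool's actual values, and the real tool builds its routing cost around the safety field rather than the default length-based cost shown here.

```python
import arcpy

# Script tool parameters supplied through the ArcGIS Pro dialog
# (parameter order here is an assumption, not the tool's actual layout)
roads      = arcpy.GetParameterAsText(0)   # raw road network
bike_infra = arcpy.GetParameterAsText(1)   # bike lanes / cycle tracks
greenways  = arcpy.GetParameterAsText(2)   # greenway centerlines
stops      = arcpy.GetParameterAsText(3)   # user start / stop points
workspace  = arcpy.GetParameterAsText(4)   # output file geodatabase

arcpy.env.workspace = workspace
sr = arcpy.SpatialReference(2264)          # e.g., NC State Plane (US ft)

# Reproject each input so all layers align before merging
aligned = []
for fc in (roads, bike_infra, greenways):
    out_fc = arcpy.CreateUniqueName("aligned", workspace)
    arcpy.management.Project(fc, out_fc, sr)
    aligned.append(out_fc)

# Merge into a single line feature class and tag segments with a safety score
merged = arcpy.management.Merge(aligned, "network_lines")[0]
arcpy.management.AddField(merged, "SAFETY", "SHORT")
# (In the real tool the score reflects the source layer, e.g.
#  1 = greenway, 2 = bike lane, 3 = standard road.)

# Build a network dataset inside a feature dataset, then solve a route
arcpy.management.CreateFeatureDataset(workspace, "bike_net", sr)
arcpy.management.CopyFeatures(merged, f"{workspace}\\bike_net\\edges")
arcpy.na.CreateNetworkDataset(f"{workspace}\\bike_net", "bike_nd", ["edges"], "NO_ELEVATION")
nd_path = f"{workspace}\\bike_net\\bike_nd"
arcpy.na.BuildNetwork(nd_path)

route_lyr = arcpy.na.MakeRouteAnalysisLayer(nd_path, "SafestRoute")[0]
arcpy.na.AddLocations(route_lyr, "Stops", stops)
arcpy.na.Solve(route_lyr)
```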
The Bicycle Route Generator then attempted to build the 'safest' route between the start and stop points provided by the user by prioritizing existing bicycle infrastructure and greenways. The result was a series of route segments returned to the user with summary statistics and an option to export.
User Input Infrastructure Shapefiles
Script Tool Graphical Interface
Script Tool Parameter Table
The script tool produced two outputs. First, a cohesive route and its component segments were generated and added to the map. The routing algorithm prioritized greenways and bike infrastructure over standard roadways, using the categorized inputs of the network dataset to identify the safest possible route without sacrificing significant travel time. Once the optimal route was generated, it was saved as a new feature class and displayed with symbology differentiating segments by infrastructure type (e.g., bike lane, greenway, standard road).
Second, summary statistics were returned to the user. Information regarding overall route length, estimated travel time, and overall 'safety' metrics was reported via the ArcGIS Messages pane.
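As a rough illustration, the summary step could be implemented along these lines; the segment feature class name, the infrastructure-type field, and the 12 mph riding speed are placeholders rather than the tool's actual values.

```python
import arcpy
from collections import defaultdict

# Exported segments of the solved route (name and field are illustrative)
route_segments = "SafestRoute_segments"
totals = defaultdict(float)

# Tally length by infrastructure type (lengths in feet under State Plane ft)
with arcpy.da.SearchCursor(route_segments, ["INFRA_TYPE", "SHAPE@LENGTH"]) as cursor:
    for infra_type, length in cursor:
        totals[infra_type] += length

total_len = sum(totals.values())
miles = total_len / 5280.0
arcpy.AddMessage(f"Total route length: {miles:.2f} mi")
arcpy.AddMessage(f"Estimated ride time: {miles / 12.0 * 60:.0f} min (assuming 12 mph)")
for infra_type, length in sorted(totals.items()):
    arcpy.AddMessage(f"  {infra_type}: {length / total_len:.1%} of route")
```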
"Safest" Route Output
Tool Messages Including Summary Statistics
Video Showcase
This project involved the development of a Python-based GIS tool in ArcGIS Pro that generates an optimized cycling route prioritizing safety. Using input datasets (bicycle infrastructure, greenways, and roads), I created a standardized and connected network, ran a route optimization analysis using ArcPy, and returned both the safest possible route and a summary of its infrastructure composition.
Throughout the project, I encountered real-world challenges related to data variability, spatial projection, and network dataset creation. These tasks pushed me beyond the core material of the course, requiring exploration of ArcGIS Pro’s geoprocessing history and documentation. While the core concept of network analysis was familiar, I had to apply it creatively and programmatically to build a routing system that considered non-standard travel priorities (e.g., safety vs. speed).
Additionally, this experience highlighted the gap between existing mapping tools (like Google Maps) and what safety-conscious cyclists actually need. While Google Maps provides basic cycling directions, it does not consider infrastructure type or risk level in its routing. My project directly addresses this issue by using publicly available data to create safer alternatives.
From this project, I learned that GIS tools can be extended significantly through automation and scripting, and that thoughtful data preprocessing and labeling are critical for successful network analysis. I now understand the value of infrastructure-aware routing and how to build and manipulate custom network datasets using ArcPy. Moving forward, I will explore ways to generalize this model for broader geographic areas and integrate more sophisticated travel attributes such as traffic volume or slope to further enhance the tool’s usefulness for cyclists.
I automated a repeatable workflow to quantify how forest greenness recovered following a major wildfire event in Canada. The goal was to select a burned area from a global fire catalog, build an annual time series of maximum-season NDVI from Landsat, and produce both maps and a recovery chart showing average greenness as a function of years since the fire. The project emphasized end-to-end scripting so the same routine could be applied to any large burned region and rerun reproducibly for monitoring or reporting.
I implemented the workflow as a single, well-documented Earth Engine script that performed five conceptual steps.
First, I selected a target fire from the GlobFire catalog by spatially filtering the fire table to a user-supplied anchor point and a date window, then chose the largest intersecting fire polygon as the burn extent. I added an area attribute in hectares to prioritize larger events and used that polygon as the analysis geometry.
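A minimal sketch of this selection step, written against the Earth Engine Python API (the original script may well live in the JavaScript Code Editor), could look like the following; the anchor coordinates, date window, and property names are assumptions based on the public GlobFire final-perimeter table.

```python
import ee
ee.Initialize()

anchor = ee.Geometry.Point([-111.4, 56.7])   # user-supplied anchor near Fort McMurray
start, end = "2016-01-01", "2016-12-31"       # date window for candidate fires

fires = (ee.FeatureCollection("JRC/GWIS/GlobFire/v2/FinalPerimeters")
         .filterBounds(anchor)
         .filter(ee.Filter.gte("IDate", ee.Date(start).millis()))
         .filter(ee.Filter.lte("IDate", ee.Date(end).millis())))

# Add an area attribute in hectares and keep the largest intersecting perimeter
def add_area_ha(f):
    return f.set("area_ha", f.geometry().area(maxError=1).divide(1e4))

burn = ee.Feature(fires.map(add_area_ha).sort("area_ha", False).first())
burn_geom = burn.geometry()   # analysis geometry used in later steps
```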
Second, I prepared masks to remove non-vegetated pixels from the NDVI computations. I used a global surface-water product to exclude persistent water and applied a scene-level QA filter to remove cloudy pixels. This ensured the NDVI composites were not biased by water or cloud contamination.
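The masking logic might be sketched as follows; the greater-than-50% water-occurrence threshold is an illustrative choice, while the QA_PIXEL bits follow the Landsat Collection 2 convention (bit 3 cloud, bit 4 cloud shadow).

```python
import ee
ee.Initialize()

# Persistent water: mask pixels flagged as water in more than 50% of observations
gsw = ee.Image("JRC/GSW1_4/GlobalSurfaceWater").select("occurrence")
water_mask = gsw.unmask(0).lt(50)

def mask_landsat_sr(img):
    """Mask clouds and cloud shadows via QA_PIXEL bits, then apply the water mask."""
    qa = img.select("QA_PIXEL")
    cloud = qa.bitwiseAnd(1 << 3).neq(0)    # bit 3: cloud
    shadow = qa.bitwiseAnd(1 << 4).neq(0)   # bit 4: cloud shadow
    return img.updateMask(cloud.Not()).updateMask(shadow.Not()).updateMask(water_mask)
```

This function is intended to be mapped over the Landsat collection before compositing, as in the next sketch.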
Third, I generated seasonal NDVI composites for each year in the analysis period. For each year I restricted the Landsat collection to the summer growing season and computed a per-pixel maximum NDVI for that season—this approach captures the peak greenness and reduces the influence of occasional low-quality observations.
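Per-year compositing might then look like this, assuming Landsat 8 Collection 2 Level-2 data and a June–August season window (the sensor mix and window in the original script may differ).

```python
import ee
ee.Initialize()

def annual_max_ndvi(year, geom, mask_fn):
    """Return a per-pixel max-NDVI image for June-August of the given year."""
    season = ee.Filter.date(f"{year}-06-01", f"{year}-09-01")
    col = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
           .filterBounds(geom)
           .filter(season)
           .map(mask_fn))                       # e.g., mask_landsat_sr from above

    def to_ndvi(img):
        # Apply Collection 2 surface-reflectance scaling before computing the index
        sr = img.select(["SR_B5", "SR_B4"]).multiply(0.0000275).add(-0.2)
        return sr.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")

    # Per-pixel maximum over the season captures peak greenness
    return col.map(to_ndvi).max().clip(geom).set("year", year)
```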
Fourth, I summarized NDVI across the burn polygon for each year. The script reduced each annual NDVI composite to a mean value over the burned geometry, producing a compact table of year → mean-max-NDVI values. I stored those as features so they could be charted directly and exported if needed.
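The zonal summary could be sketched as follows, reusing the composites from the previous step (the function names are mine, not the script's).

```python
import ee
ee.Initialize()

def mean_ndvi_feature(ndvi_img, geom, year):
    """Reduce one annual max-NDVI composite to its mean over the burn polygon."""
    stats = ndvi_img.reduceRegion(
        reducer=ee.Reducer.mean(),
        geometry=geom,
        scale=30,
        maxPixels=1e9,
    )
    return ee.Feature(None, {"year": year, "mean_max_ndvi": stats.get("NDVI")})

# Example usage, assuming burn_geom, mask_landsat_sr, and annual_max_ndvi
# from the earlier sketches:
# table = ee.FeatureCollection([
#     mean_ndvi_feature(annual_max_ndvi(y, burn_geom, mask_landsat_sr), burn_geom, y)
#     for y in range(2016, 2024)
# ])
```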
Finally, I produced visual outputs and a chart: two example annual NDVI maps (first and last year of the series) were added as layers to the map, and I created a time-series line chart of mean annual max-NDVI to visualize recovery dynamics. The entire routine was parameterized so the study point, year range, season window, and cloud/water thresholds could be changed without rewriting the code.
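In the Code Editor this chart would typically come from ui.Chart; a rough Python-API equivalent is to pull the small summary table client-side and plot it locally, for example with matplotlib (used here purely for illustration).

```python
import matplotlib.pyplot as plt

# Assuming `table` is the ee.FeatureCollection built in the previous sketch
records = table.getInfo()["features"]
pairs = sorted((f["properties"]["year"], f["properties"]["mean_max_ndvi"]) for f in records)
years = [y for y, _ in pairs]
ndvi = [v for _, v in pairs]

plt.plot(years, ndvi, marker="o")
plt.xlabel("Year")
plt.ylabel("Mean max-NDVI within burn")
plt.title("Post-fire greenness recovery")
plt.show()
```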
2016 Fort McMurray Fire - Burn Extent
2016 Fort McMurray - Max NDVI
2023 Fort McMurray - Max NDVI
Applying the script to the Fort McMurray 2016 burn, I extracted an annual sequence of maximum-season NDVI from 2016 through 2023 and calculated mean NDVI within the burn polygon for each year. The NDVI maps showed minimal greenness immediately after the event and a gradual increase over subsequent summers. The time-series chart clearly displayed an upward trend in average greenness over the eight-year span, indicating progressive vegetation recovery across most of the burned area. Visual comparison between the first- and last-year NDVI layers highlighted spatial heterogeneity in recovery, with some patches showing rapid regrowth while steeper or wetter locations remained slower to green up.
The scripted approach produced reproducible numeric outputs (per-year mean NDVI values) and geospatial layers suitable for web visualization or export. Because the routine used a per-season maximum composite, the results were robust to individual cloudy scenes and captured peak growing-season conditions rather than transient decreases.
NDVI Recovery Animation (2013 - 2023)
NDVI Recovery Time Series
This exercise demonstrated several programming-first competencies relevant to geospatial monitoring: parameterized scripting, robust pre-processing, temporal compositing, and automated charting. By building the analysis in Earth Engine I eliminated local data management and gained scalable processing for large burned footprints. Key lessons included the importance of conservative masking (clouds + water) to avoid biased NDVI, the value of seasonal max composites for ecological interpretation, and the need to inspect spatial patterns (not just summary statistics) because mean values can mask heterogeneous recovery.
For future work I would incorporate additional indicators (e.g., NBR or tasseled-cap greenness) to complement NDVI, add uncertainty bounds by reporting pixel-level variability within the burn, and make the script export annual rasters and CSV summaries automatically to Google Drive or Cloud Storage for downstream reporting. I also documented the code and parameter choices so the same routine can be re-used for other large Canadian fires or integrated into an automated monitoring pipeline.