Pamlico County Coastal Resilience:
Multi-hazard Long-term Planning
Analyzing Risk and Vulnerability to Sea Level Rise
Fall 2025
North Carolina’s coastal communities face an existential threat from accelerating sea level rise (SLR). As saltwater intrudes inland, high-value ecosystems such as freshwater forests are dying off and converting into marshes. If these marshes cannot migrate inland due to topographic or anthropogenic barriers, they eventually drown, resulting in the loss of vital storm protection and the release of stored "Blue Carbon." For Pamlico County, a partner in the NC Resilient Coastal Communities Program (RCCP), the challenge was moving from general risk awareness to actionable, site-specific planning.
While the Sea Level Affecting Marshes Model (SLAMM) provided robust, high-resolution scientific predictions of this habitat migration, the data existed as thousands of inaccessible, unorganized ASCII raster files. This created a state of "data paralysis" where decision-makers could not easily access the insights needed to justify resilience funding. Planners needed to answer specific questions: Which parcels will transition from forest to marsh by 2050? How much carbon sequestration capacity will be lost under a 1.5-meter rise scenario? The objective of this Capstone project was to architect a full-stack geospatial solution that transformed this raw scientific data into a dynamic, web-based decision support tool, empowering planners to visualize future landscapes and quantify specific ecological losses.
Jurisdictional Boundary of Pamlico County, North Carolina
To operationalize this massive dataset, I engineered a full-stack geospatial solution that required advanced proficiency across programming, database management, modeling, and web services. The workflow was divided into three distinct phases: automated data ingestion, server architecture design, and analytical application development. All phases were developed in Jupyter Notebooks or Python-based script tools to ensure repeatability and scalability.
Automated Data Engineering
The initial barrier was data volume and inconsistency. The model output consisted of thousands of raw files with complex naming conventions and "noise" (pixel values outside the classification schema). To address this, I applied programming competencies to develop a custom Python ETL (Extract, Transform, Load) pipeline using the ArcPy and os libraries. I implemented a "Bouncer" algorithm that iterated through directory structures, applying Remap functions to filter out erroneous pixel values and ensure downstream data integrity. The script automated the conversion of heavy ASCII files into optimized, compressed GeoTIFFs, calculating statistics and building attribute tables in the process. Using string manipulation, the script parsed complex filenames to extract temporal metadata (Year, SLR Scenario) and injected these as queryable attributes directly into the dataset schema. This automation replaced weeks of manual data entry with a repeatable script that runs in under an hour.
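The core loop of that pipeline can be sketched as follows. This is a minimal illustration rather than the production script: the directory paths, the <scenario>_<year>.asc naming convention, and the valid class range (1-26) are placeholder assumptions, and the cleaning step uses SetNull as a stand-in for the project's Remap logic.

    import os
    import re
    import arcpy
    from arcpy.sa import SetNull

    arcpy.CheckOutExtension("Spatial")
    arcpy.env.overwriteOutput = True
    arcpy.env.compression = "LZW"                        # write compressed GeoTIFFs

    SRC_DIR = r"C:\slamm\ascii_output"                   # placeholder paths
    OUT_DIR = r"C:\slamm\geotiff"

    # Hypothetical naming convention: <scenario>_<year>.asc, e.g. 1p5m_2050.asc
    NAME_RE = re.compile(r"^(?P<scenario>[A-Za-z0-9\.]+)_(?P<year>\d{4})\.asc$", re.I)

    for root, _, files in os.walk(SRC_DIR):
        for name in files:
            match = NAME_RE.match(name)
            if not match:
                continue                                 # "Bouncer" pass 1: reject off-convention files
            ascii_path = os.path.join(root, name)
            tif_path = os.path.join(OUT_DIR, name[:-4] + ".tif")

            # Extract: convert the raw ASCII grid to an integer raster
            arcpy.conversion.ASCIIToRaster(ascii_path, "in_memory/raw_grid", "INTEGER")

            # Transform ("Bouncer" pass 2): null out pixels outside the class schema
            cleaned = SetNull("in_memory/raw_grid", "in_memory/raw_grid",
                              "VALUE < 1 OR VALUE > 26")
            cleaned.save(tif_path)

            # Load: statistics and an attribute table make the raster service-ready;
            # the Year/Scenario parsed here become queryable attributes downstream
            arcpy.management.CalculateStatistics(tif_path)
            arcpy.management.BuildRasterAttributeTable(tif_path, "Overwrite")
            arcpy.AddMessage(f"{name}: Year {match['year']}, Scenario {match['scenario']}")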
The Split-Service Server Architecture
A major technical challenge was rendering over 70 high-resolution raster layers over the web without introducing unacceptable latency. Standard web map services would have been too slow for this volume of data. I applied database management principles to architect a Split-Service Database Environment within ArcGIS Enterprise. I compiled the processed rasters into a Mosaic Dataset and published it as a dynamic Image Service, applying server-side Raster Functions to handle symbology rendering. This delivered lightweight images to the client for instant visualization. Simultaneously, I maintained a separate, losslessly compressed version of the data for backend analysis, while vector township boundaries were served via a lightweight Map Service. This architecture allowed the application to balance the heavy computational load of the imagery with the speed required for a modern web user experience.
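The mosaic side of this architecture can be condensed into the sketch below. The geodatabase path, spatial reference, field names, raster naming convention, and server connection file are assumptions, and the staging and upload of the drafted service definition are omitted.

    import arcpy

    GDB = r"C:\slamm\slamm_services.gdb"                 # placeholder geodatabase
    MOSAIC = f"{GDB}\\SLAMM_Mosaic"
    SR = arcpy.SpatialReference(32119)                   # NAD83 / North Carolina (meters); placeholder

    # Compile the processed GeoTIFFs into a single mosaic dataset
    arcpy.management.CreateMosaicDataset(GDB, "SLAMM_Mosaic", SR,
                                          num_bands=1, pixel_type="8_BIT_UNSIGNED")
    arcpy.management.AddRastersToMosaicDataset(MOSAIC, "Raster Dataset",
                                               r"C:\slamm\geotiff",
                                               update_overviews="UPDATE_OVERVIEWS")

    # Queryable temporal/scenario attributes, derived from the preserved raster names
    arcpy.management.AddField(MOSAIC, "Year", "SHORT")
    arcpy.management.AddField(MOSAIC, "Scenario", "TEXT", field_length=20)
    arcpy.management.CalculateField(MOSAIC, "Year",
                                    "int(!Name!.split('_')[-1])", "PYTHON3")
    arcpy.management.CalculateField(MOSAIC, "Scenario",
                                    "!Name!.split('_')[0]", "PYTHON3")

    # Draft the dynamic Image Service (staging and upload to ArcGIS Server omitted)
    arcpy.CreateImageSDDraft(MOSAIC, r"C:\slamm\SLAMM_SLR.sddraft", "SLAMM_SLR",
                             "ARCGIS_SERVER", r"C:\slamm\server.ags", False)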
The Analytical Engine
Visualizing the data was not enough; the client needed to quantify the impact. I focused on modeling and analytics by developing a custom Geoprocessing Service using Python to serve as the project's analytical engine. Rather than processing the entire county dataset for every query, the tool utilizes an in-memory "Virtual Clip." It accepts a user-defined polygon (Area of Interest) and filters the backend Mosaic to a single temporal slice. The script then performs a Tabulate Area analysis and cross-references the raw pixel counts against a decoupled lookup table of carbon coefficients (derived from literature). Finally, I programmed the tool to dynamically query the database using updateParameters, ensuring that the dropdown menus for "Year" and "Scenario" automatically update if new model data is added to the database.
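The execute logic of that engine can be approximated as follows. The mosaic path, field names, and the carbon lookup CSV are placeholders; the sketch assumes a metric spatial reference and a single AOI polygon, and the updateParameters dropdown logic is omitted for brevity.

    import csv
    import arcpy
    from arcpy.sa import TabulateArea

    MOSAIC = r"C:\slamm\slamm_services.gdb\SLAMM_Mosaic"     # placeholder
    CARBON_CSV = r"C:\slamm\carbon_coefficients.csv"         # columns: class_code, t_c_per_ha

    def summarize_aoi(aoi_fc, year, scenario):
        """Tabulate land-cover area and carbon potential for a single AOI polygon."""
        arcpy.CheckOutExtension("Spatial")
        arcpy.env.extent = arcpy.Describe(aoi_fc).extent     # "virtual clip": limit processing to the AOI

        # Filter the backend mosaic to one temporal/scenario slice
        where = f"Year = {int(year)} AND Scenario = '{scenario}'"
        arcpy.management.MakeMosaicLayer(MOSAIC, "slice_lyr", where)

        # Cross-tabulate class areas (square meters, assuming a metric spatial reference)
        out_tab = "in_memory/tabulate"
        TabulateArea(aoi_fc, "OBJECTID", "slice_lyr", "Value", out_tab)

        # Decoupled carbon coefficients: class code -> metric tons C per hectare
        with open(CARBON_CSV) as f:
            coeffs = {int(r["class_code"]): float(r["t_c_per_ha"]) for r in csv.DictReader(f)}

        results = {}
        area_fields = [f.name for f in arcpy.ListFields(out_tab) if f.name.startswith("VALUE_")]
        with arcpy.da.SearchCursor(out_tab, area_fields) as cursor:
            for row in cursor:                               # one row per zone; assumes a single AOI polygon
                for field, area_m2 in zip(area_fields, row):
                    code = int(field.replace("VALUE_", ""))
                    hectares = area_m2 / 10_000.0
                    results[code] = {
                        "acres": hectares * 2.47105,
                        "metric_tons_c": hectares * coeffs.get(code, 0.0),
                    }
        return results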
Web Services Deployment
The final delivery mechanism was a zero-footprint web application built in ArcGIS Experience Builder. I configured the application to consume these custom services, implementing dynamic filtering that acts as a time-slider for the Image Service. I embedded the custom analytical tool directly into the user interface, allowing non-technical users to draw a shape on the map and receive a detailed statistical breakdown of land cover change and carbon potential in seconds.
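Conceptually, the filter the application applies is an attribute query pushed down to the Image Service. The request below illustrates that idea directly against the service's REST endpoint; the service URL, extent, field names, and scenario label are placeholders rather than the deployed configuration.

    import json
    import requests

    SERVICE = "https://example.com/arcgis/rest/services/SLAMM_SLR/ImageServer"   # placeholder URL

    params = {
        "f": "json",
        "bbox": "-77.2,34.9,-76.4,35.4",                  # rough Pamlico County extent (illustrative)
        "bboxSR": 4326,
        "size": "800,600",
        "format": "png",
        # Attribute-based mosaic rule: the server assembles only the requested slice
        "mosaicRule": json.dumps({
            "mosaicMethod": "esriMosaicAttribute",
            "sortField": "Year",
            "where": "Year = 2050 AND Scenario = '1p5m'",
        }),
    }
    response = requests.get(f"{SERVICE}/exportImage", params=params, timeout=30)
    print(response.json().get("href"))                    # URL of the rendered image for that slice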
Jupyter Notebook / Python Script Collection for Scalability
Service Architecture for Web Application
The Web Application: Democratizing Access
The most visible result is the zero-footprint web dashboard built in ArcGIS Experience Builder. This application serves as the public face of the project, allowing non-technical stakeholders to interact with complex scientific data without needing GIS software. By utilizing the Split-Service architecture, the application renders high-resolution (5-meter) rasters instantly. Users can scrub through 75 years of future scenarios (2025 - 2100) with sub-second load times, transforming static maps into a fluid visualization of landscape change. The interface democratizes the data, empowering town managers to present live, data-driven arguments for resilience funding directly to their councils using nothing more than a laptop and an internet connection.
Dynamic Land Cover Time-Series Chart
Dynamic Carbon Storage Time-Series Chart
Application Display with Filterable Years and Scenarios - Instant Visualizations
The Analytical Engine: Precision at Scale
Beyond visualization, the project delivered a robust analytical tool capable of quantifying impact at the parcel level. The custom Geoprocessing Service successfully bridges the gap between "Big Data" and "Local Action." The tool outputs precise metrics, calculating exactly how many acres of "Dry Land" will be lost and how many acres of "Salt Marsh" will be gained for any user-defined polygon. By integrating carbon coefficients, the tool translates land cover change into ecosystem service value (Metric Tons of Carbon). This allows planners to identify "Carbon Positive" conservation targets, areas where marsh migration will actually increase carbon sequestration, providing a novel metric for grant applications. Unlike generalized web maps, this engine processes the native 5-meter resolution data, ensuring that the statistics provided are scientifically defensible for planning purposes.
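Building on the summarize_aoi sketch above, the "Carbon Positive" screening reduces to a net-change calculation between two temporal slices. The helper below is illustrative only; the AOI path, class codes, and coefficients follow the hypothetical lookup table, not published values.

    # Assumes summarize_aoi() from the geoprocessing sketch above
    def net_carbon_change(baseline, future):
        """Net change in stored carbon (metric tons); positive = sequestration gain."""
        net = 0.0
        for code in set(baseline) | set(future):
            before = baseline.get(code, {}).get("metric_tons_c", 0.0)
            after = future.get(code, {}).get("metric_tons_c", 0.0)
            net += after - before
        return net

    # A parcel is a "Carbon Positive" target when future marsh gains outweigh forest losses
    aoi = r"C:\slamm\aoi.gdb\candidate_parcel"            # hypothetical AOI feature class
    baseline = summarize_aoi(aoi, 2025, "1p5m")
    future = summarize_aoi(aoi, 2100, "1p5m")
    print("Net carbon change (t C):", round(net_carbon_change(baseline, future), 1))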
Custom Geoprocessing Tool: Town Statistics for 2025 Initial Conditions
Custom Geoprocessing Tool: Town Statistics 2100 w/ Max SLR
Scalability & Sustainability
A less visible but critical result is the Automated Data Engineering Pipeline. By moving from manual processing to a scripted Python workflow, the project delivered a sustainable system rather than a one-off map. The pipeline reduced the time required to ingest new model data from weeks to less than an hour. As the SLAMM model is updated with new elevation data or accretion rates, the client can simply re-run the script to update the entire web application, ensuring the tool remains relevant for years to come.
In executing this project, I transitioned from the role of a GIS Analyst to that of a Solutions Architect. I was responsible for the entire application lifecycle, beginning with translating the client’s abstract needs ("we need to see the risk") into concrete technical requirements. From there, the work involved writing robust Python code to handle dirty data, designing a database schema that supported temporal filtering, and configuring a user interface accessible to non-technical planning staff.
The most significant challenge was System Interoperability: making Python scripts, Enterprise databases, and Web Services function as a cohesive unit. I learned that high-resolution scientific data is heavy and often "breaks" standard web maps. Overcoming this required deep dives into ArcGIS Server capabilities, specifically regarding Image Services and server-side processing. I had to experiment with different raster compression methods (LZ77 vs. LERC) to find the sweet spot between data fidelity and load speeds. Additionally, handling the scientific uncertainty of carbon coefficients taught me the value of decoupled architectures; by storing the math in external lookup tables rather than hard-coding it into the raster, I built a system that can evolve as the science improves.
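A rough version of that compression experiment looks like the harness below. The source raster, output paths, LERC error tolerance (0.01), and the use of Cloud Raster Format for the LERC trial are assumptions made purely to illustrate the timing and file-size comparison.

    import os
    import time
    import arcpy

    def output_size_mb(path):
        # CRF outputs are directories; sum their contents for a fair size comparison
        if os.path.isdir(path):
            total = sum(os.path.getsize(os.path.join(r, f))
                        for r, _, names in os.walk(path) for f in names)
        else:
            total = os.path.getsize(path)
        return total / 1e6

    SRC = r"C:\slamm\geotiff\example_2050.tif"            # placeholder input raster
    trials = {
        "LZ77": r"C:\slamm\bench\lz77_copy.tif",          # lossless
        "LERC 0.01": r"C:\slamm\bench\lerc_copy.crf",     # lossy, assumed 0.01 max-error tolerance
    }

    for compression, out_path in trials.items():
        arcpy.env.compression = compression               # compression is an environment setting
        start = time.time()
        arcpy.management.CopyRaster(SRC, out_path)
        print(f"{compression:>10}: {time.time() - start:5.1f} s, "
              f"{output_size_mb(out_path):6.1f} MB")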
This experience demonstrated that accessibility is the ultimate goal of analysis. The most sophisticated model in the world is useless if it remains locked in a folder of text files. By bridging the gap between raw code and a public dashboard, I learned that the primary value of a Geospatial Professional is translation, turning complex spatial data into the clear, defensible intelligence that communities need to survive. I now possess the confidence to build full-stack solutions that not only analyze the world but empower others to change it.