Computer Vision for the Built Environment
Using street-level imagery and deep learning to quantify built environment features relevant to pedestrian safety and public health outcomes.
Computer vision offers a way to study the built environment at a scale that would be impractical with traditional field audits alone. In this project, we use Google Street View imagery and machine learning to measure visible neighborhood and building features, including walkability, street safety, physical disorder, urban development, and passive design characteristics. The broader goal is to understand how these everyday environmental features relate to outcomes such as chronic disease, traffic injury risk, and household energy burden. By turning street-level imagery into structured data, this work bridges computer vision with public health, urban planning, and sustainable design.
Technically, the work follows a common pattern: large-scale Street View image sampling, manual annotation of key visual indicators, deep learning models for feature detection, and tract-level modeling of downstream outcomes. We use convolutional networks and related learning frameworks to identify features such as sidewalks, streetlights, greenness, road construction, housing form, and passive building elements, then aggregate those measurements for statistical analysis. Different projects adapt this pipeline to different questions, including improving indicator detection with multi-task learning, modeling longitudinal neighborhood change, predicting collision risk, and linking passive housing design to energy burden. The unifying idea is to make the built environment measurable in a scalable, reproducible, and policy-relevant way.
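The final aggregation step can be sketched in miniature. The example below shows one plausible way per-image detection scores might be rolled up into tract-level indicator prevalences (the share of a tract's images in which an indicator is detected). The tract IDs, indicator names, scores, threshold, and function names here are all illustrative assumptions, not the project's actual data or pipeline.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-image model outputs: (tract_id, indicator, confidence score
# in [0, 1]) for visual indicators such as sidewalks and streetlights.
detections = [
    ("36061000100", "sidewalk", 0.92),
    ("36061000100", "sidewalk", 0.35),
    ("36061000100", "streetlight", 0.88),
    ("36061000200", "sidewalk", 0.10),
    ("36061000200", "streetlight", 0.75),
]

def aggregate_to_tract(records, threshold=0.5):
    """Aggregate per-image detection scores into tract-level indicator
    prevalences: the fraction of images in each tract whose score for an
    indicator meets `threshold`."""
    by_tract = defaultdict(lambda: defaultdict(list))
    for tract, indicator, score in records:
        by_tract[tract][indicator].append(score >= threshold)
    return {
        tract: {ind: mean(flags) for ind, flags in inds.items()}
        for tract, inds in by_tract.items()
    }

tract_features = aggregate_to_tract(detections)
# tract_features["36061000100"]["sidewalk"] -> 0.5 (1 of 2 images detected)
```

Tract-level tables like `tract_features` are what feed the downstream statistical models; in practice the aggregation would also carry image counts and sampling weights, which are omitted here for brevity.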