Flooding in Port Arthur during Hurricane Harvey. (Wikimedia/SC National Guard)

Floodplain Maps Are Outdated. This Scientist Wants to Change That.

More than half of FEMA’s flood maps rely on decades-old data. Now, a group of Texas researchers is tackling the problem with a $3 million grant and crowdsourced data.


Talk to any scientist long enough, and eventually they’ll bring up an old aphorism: all models are wrong, but some are useful. Even with better data, and more sophisticated tools to collect it, there’s no truly perfect way to capture the dynamic world that we live in.

Two years ago, Texans learned that truth the hard way when Hurricane Harvey hit the Texas coast. Harvey was classified as a 500-year storm. But neighborhoods that FEMA’s flood maps never predicted would flood—even in a storm of that size—experienced historic, devastating flooding.

Texas National Guard soldiers arrive in Houston to aid residents in heavily flooded areas during Hurricane Harvey. (Texas Army National Guard/1st Lt. Zachary West)

FEMA’s maps calculate the expected risks of a given area to keep people from building in dangerous zones, and to inform residents and business owners if an existing property is in a flood zone. But an analysis by Bloomberg found that many of these maps rely on 40-year-old models based on outdated weather and storm data, and fail to account for changes in land use, like new developments and roads. (Climate change has also altered the strength and speed of hurricanes.) Just weeks after Harvey, the Department of Homeland Security issued a report finding that less than half of FEMA’s maps accurately portrayed current-day flood risks.

In an effort to rectify the problem, FEMA and the Texas General Land Office are partnering on a $3 million, two-year effort to create a new type of floodplain map. Sam Brody, the project’s principal investigator and the director of Texas A&M Galveston’s Center for Texas Beaches and Shores, spoke with the Observer about how to build better and more useful models by incorporating more than just traditional scientific data.

Texas Observer: What are some of the issues with FEMA maps that you’ve encountered, and how will this project go about fixing them? 

Sam Brody: I’ve spent a lot of my career criticizing the FEMA floodplain maps, not just [during] Harvey. Historically, in Houston, over half of the flood-insurance-based impacts were outside of the boundaries [of the mapped floodplain]. In places like Clear Lake, we found that even if you’re a quarter of a mile outside of that boundary, you still have a high probability of flooding. That’s problematic because it’s not capturing enough of the risk and impact, particularly in developed, urban areas, where boundaries are harder to delineate and move a lot. You put in a Walmart parking lot, and it will literally change the floodplain, but the map is only updated every 10 years.

I had maybe my hundredth presentation in Washington, D.C., about these FEMA maps. I’m very critical, but scientifically so, like, ‘Here’s the problem.’ And they said, ‘You’re great at criticizing us, how about some solutions?’ So now, we have a two-year, $3-million study, to better measure, map, and communicate flood risk and impacts in these urban areas.

What does that process look like? 

We’re integrating data. We’re using traditional [hydrological] models, but also crowdsourcing data, and [insurance] claims data, to delineate areas of predicted impact.

Then we go into a community, and, rather than announce and defend, we go in and say, ‘OK, this is my best effort, tell me what’s wrong with it, make it better.’ And that reaction tends to be so much more positive than ‘We’re just announcing that this is what we’re modeling, and if you want to see it, fine.’ The people who really know what’s going on are the residents in the area, and the communities that are dealing with these hardships, time and time again.

It takes time to do this interactive process—it’s harder, you [need more] money. And the whole state can’t do this. That’s why we’re doing selected communities, and the first one is Greenspoint in Houston. We’re not replacing the regulatory floodplain. This is just to augment and complement what’s already in place.

How do you make sure that when you’re going to a community meeting, you can communicate these complex and confusing models to a lay person effectively? 

It’s a big challenge. That’s why there’s usually one modeling expert there. The other people are communications experts, resiliency experts, people who have experience talking about these models to everyday people. To me, some of the major solutions to addressing disaster impacts aren’t just a better model or a big engineering project. It’s communicating risks to people—if we really want to make a difference, it’s not just building a better risk map, it’s how do you communicate that [risk] effectively?

Do you think most academics and policy makers are on the same page about that? 

I think the tradition of science is to be a dispassionate observer, the all-knowing expert. But I think that what’s changed with the times is that more and more scholars and practitioners are realizing that we need to engage local stakeholders throughout the process. No one knows the regularity and impact of flooding better than someone who’s living in these vulnerable areas, and incorporating their knowledge and experience in an iterative way is essential.

You’ve also been involved with a new initiative called The Institute for a Disaster Resilient Texas, which would bring together state agencies, university research programs, and others to collaborate on research about disaster relief, planning, and mitigation. It was created through a bill in the last legislative session, but it remains unfunded for now. Tell me about the goals of the Institute. 

My group did all the data analytics for Rebuild Texas’ Eye of the Storm Report [after Harvey], and it was this big coordination effort [between universities and government agencies]. We were at a meeting presenting our results, and a state official said, ‘Why aren’t we doing this all the time?’ That inspired me to create a proposal for an organization that is grounded in analytics, creating tools to help decision-makers and community members be better prepared for flooding and other disasters.

For me, it was a wake-up call. My research is on [building better models], but I don’t have the capacity to then bring that to communities, decision-makers, companies, and individual residents. So the Institute would help make those connections throughout the university systems—not just A&M but UT, Rice, and others. My hope is that it’s not just another institute that’s unfunded and remains an idea on paper.
