The Road to Domino & DNA: Q&A with Nicole Hu and Shabaz Patel

By Casey O’Brien, Marketing team, One Concern

March 1, 2022 · 5 min read
[Image: One Concern Domino allows users to precisely map their risk exposure across their portfolio]

The U.S. broke new records last year for catastrophic weather events:

  • The National Oceanic and Atmospheric Administration found the U.S. suffered 20 separate billion-dollar disasters.
  • The latest yearly data shows climate-related disasters caused $343 billion in global economic losses last year, 62 percent of which were uninsured.

Climate change is increasing the severity and frequency of disasters. It is also widening the protection gap, making it harder for organizations and communities of all sizes to cover the burden of uninsured economic losses. To address this and help strengthen mitigation and risk-management strategies, One Concern has developed two innovative new tools, One Concern Domino™ and One Concern DNA™: interactive climate risk-intelligence software that uses curated data to help the financial services industry analyze and prepare for the impact of climate-related natural disasters.

To explain the product innovation, One Concern’s Casey O’Brien, Marketing Associate, sat down with Nicole Hu, Co-founder and CTO, and Shabaz Patel, Director of Data Science, to discuss the company’s journey to developing its groundbreaking technology.

Casey O’Brien: What makes One Concern Domino and DNA unique and different from other climate modeling software?

Nicole Hu: Domino is the first U.S.-wide product offering that lets users pinpoint either location-level risk from a specific hazard, such as earthquake, flood, or other climate risk, or broader dependency-level indirect risks that ripple out from an initial impact. No one else is offering a product at this scale.

Part of the novelty is the type of data we’ve been able to create, collect, and aggregate in a single place. Bringing all of that data together in a unified, structured way, so that it is understandable and actionable for customers, was a huge undertaking.

The second piece is that we are doing predictive modeling of location- and asset-specific impacts, baselined against the RCP 4.5 climate scenario. I’m confident there is no other map at this scale that covers all these different types of impacts across all these different lenses.

Lastly, the offering itself is novel: we provide two different ways to access our information. If you only want the underlying data, or want to quickly produce reports from it, our DNA product delivers that data very efficiently.

We’re also able to customize analysis. Our in-house expert data scientists can take that data and interpret it in a way that makes sense for a specific organization, based on numerous factors, for use in yearly planning for climate risk mitigation, finance, investments, and more. The fact that customers get not just the built solution but constant access to that expertise lets us continually deliver important insights into climate change impacts.

We also have Domino, our advanced visualization product. It’s easy to use and straightforward: users can quickly search the map, access data on millions of buildings across the U.S., and get that information within seconds to inform critical decision-making.

Casey O’Brien: What technological investments made this possible?

Shabaz Patel: We invested in gathering data and developing models that generate downtime statistics for facilities under different climate scenarios. Where real data is incomplete, we synthetically generate some of it to make our models more complete. For example, when we talk about power infrastructure, substations are publicly listed; most people have that information. We take it to the next level by mapping substations to specific buildings, generating synthetic distribution lines with network algorithms. That lets us quantify impacts at the asset level, including along the distribution routes that supply power to those buildings. Beyond data, we have invested in developing fragility and recovery models for these dependency infrastructures. Large-scale computation is therefore one of our key differentiators; large-scale dependency analysis isn’t really possible with other currently available solutions.
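To make the idea concrete, here is a minimal sketch of what mapping substations to buildings with synthetic distribution lines could look like. The coordinates, names, and the minimum-spanning-tree heuristic are illustrative assumptions, not One Concern’s actual network algorithm.

```python
# Illustrative sketch (not One Concern's method): approximate unobserved
# distribution lines by assigning each building to its nearest substation,
# then spanning each substation's buildings with a minimum spanning tree.
import math
import networkx as nx

substations = {"sub_A": (0.0, 0.0), "sub_B": (10.0, 10.0)}  # publicly listed
buildings = {"bldg_1": (1.0, 2.0), "bldg_2": (2.0, 1.0), "bldg_3": (9.0, 8.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# 1. Map each building to its nearest substation.
served_by = {
    b: min(substations, key=lambda s: dist(substations[s], xy))
    for b, xy in buildings.items()
}

# 2. For each substation, build a distance-weighted complete graph over the
#    substation and its buildings, and keep the minimum spanning tree as a
#    proxy for the unobserved distribution lines.
grid = nx.Graph()
for s in substations:
    nodes = [(s, substations[s])] + [
        (b, buildings[b]) for b, owner in served_by.items() if owner == s
    ]
    local = nx.Graph()
    for i, (u, u_xy) in enumerate(nodes):
        for v, v_xy in nodes[i + 1:]:
            local.add_edge(u, v, weight=dist(u_xy, v_xy))
    grid.update(nx.minimum_spanning_tree(local))

# If sub_A fails, every building connected to it in the synthetic grid
# loses power.
print(sorted(nx.node_connected_component(grid, "sub_A") - {"sub_A"}))
```

Once a synthetic grid like this exists, a dependency analysis can propagate a substation outage to every asset it serves, which is the kind of ripple effect Domino quantifies.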

Casey O’Brien: Are we hoping to expand our peril and dependency modeling capabilities and parameters in the future?

Nicole Hu: That is definitely something we want to do. We are continuously updating our suite of resilience solutions and exploring new dependencies over time, such as mobile telecommunications and water, so that we have the most comprehensive solution on the market.

Casey O’Brien: How are we ensuring accuracy in our models and minimizing bias?

Nicole Hu: Accuracy is vital to us. We’re constantly incorporating the latest data, from the best available flood maps, climate maps, and so forth, to keep our models reliable and up to date.

One Concern’s methodologies and calculation workflows are regularly subjected to validation exercises with advice from Technical Working Groups (TWGs), panels of external peer reviewers who are experts in the relevant research areas. These validation efforts apply historical data and comparisons against published research to ensure a high level of accuracy and robustness. One Concern is also active in the broader research community focused on resilience modeling and regularly publishes its work at conferences.

Shabaz Patel: For example, for our seismic model we collected many datasets from different earthquakes. Through careful review, we found that our damage outputs were missing information for certain sets of input parameters, which could imbalance the datasets and hurt generalizability. We solved this by generating synthetic datasets, using physics models and machine learning techniques to fill the gaps based on the available information.

Our fragility functions are physics-based, and we use them to generate additional training data for our machine learning models. That gives us an unbiased dataset covering all possible input parameters for model training, and it acts as an initial model from which the learning can be transferred to other countries.
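As a simplified, hypothetical illustration of that pipeline: a lognormal fragility curve gives the probability of damage at a given ground-motion intensity, and sampling labels from it can fill under-represented regions of the training data. The parameter values and intensity ranges below are assumptions for the sketch, not One Concern’s calibrated models.

```python
# Hypothetical sketch: balance a damage dataset by sampling synthetic labels
# from a physics-based (lognormal) fragility curve where records are sparse.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def fragility(im, theta=0.45, beta=0.6):
    """P(damage | intensity measure im), lognormal fragility curve."""
    return norm.cdf(np.log(im / theta) / beta)

# Real post-earthquake surveys cluster at moderate intensities; the
# high-intensity range is under-represented in the training data.
observed_im = rng.uniform(0.1, 0.5, size=500)
missing_im = np.linspace(0.5, 1.5, 200)

# Fill the gap with binary damage labels sampled from the physics curve.
synthetic_damage = rng.random(missing_im.size) < fragility(missing_im)

im_all = np.concatenate([observed_im, missing_im])
print(f"{im_all.size} training samples; "
      f"{synthetic_damage.mean():.0%} of synthetic samples labeled damaged")
```

A classifier trained on the combined set then sees every intensity range, which is the unbiased-dataset idea Shabaz describes.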

Casey O’Brien: This is an exciting time for One Concern, and I know we are all looking forward to the company’s continued growth. What are your thoughts about the skill sets we are looking for as we expand our technical team?

Shabaz Patel: We always look for data scientists familiar with hazard science, climate science, fragility and recovery, and portfolio modeling. We also value machine learning skills to generate digital twins and improve resilience modeling. Our data scientists must prioritize communication, curiosity, teamwork, and empathy.

Because data scientists work with many different stakeholders, connecting the product’s value proposition to data science metrics is essential. In most cases, users are not accustomed to using probabilistic predictions in their daily work, so it’s crucial to communicate model uncertainties clearly, helping them interpret predictions and combine our product with their own judgment to make the right decisions. In addition, curiosity and proactiveness help us innovate on problems traditionally left unsolved because of well-known challenges in the field.

Nicole Hu: Since we operate on trillions of data points across the globe, we are looking for engineers and data scientists who have worked on large-scale data systems. Engineers who get excited about solving novel and challenging scaling problems thrive here. We also look for team players: intellectually curious people with a strong track record of project execution.


One Concern

We’re advancing science and technology to build global resilience and make disasters less disastrous.