A Q&A With Dr. Yi Liu, Data Scientist, One Concern
To gain more insight into the Japan flood forecast pipeline and its behind-the-scenes story, we interviewed Dr. Yi Liu, a data scientist on the flood team here at One Concern who has played an important role in our model development and testing.
Q: Can you briefly introduce yourself and your background?
A: My background is in local- and large-scale coastal modeling and coastal hazard assessment under a changing climate. I earned my Ph.D. in civil engineering at Virginia Tech, where I focused on developing a fast, reliable storm surge forecast model and investigating how FEMA flood insurance maps can be affected by sea-level rise. After graduating, I was selected for the J. Philip Keillor Wisconsin Coastal Management-Sea Grant Fellowship to update the Coastal Processes Manual, a Wisconsin Sea Grant publication widely used as guidance by coastal engineers and planners in the Great Lakes area. Before joining One Concern, I also worked on multiple projects related to living shorelines, marsh restoration, and climate change at Environmental Science Associates, an engineering consulting firm.
Q: What is your role and responsibility in the Japan flood pipeline?
A: My main contributions are in the coastal and inundation components. For example, I’ve been leading the research and technical approach for setting up coastal flood alert mechanisms and incorporating coastal levees into the inundation model. I also share responsibility with team members for coastal and inundation model validation, preparing technical reports, and related work.
Q: Can you describe one challenge you faced and overcame for the product?
A: During model development, one major challenge the flood team faced was representing flood-defense structures (e.g., levees) scalably with scarce data. We conducted extensive research, weighed the pros and cons of different methods, and used machine learning techniques to develop a scalable, accurate way to represent riverine and coastal levees in the inundation model.
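The interview doesn’t spell out the method, but to illustrate the general idea, one way to fill gaps in scarce levee data is to fit a model that infers unsurveyed crest heights from terrain attributes. The sketch below is purely hypothetical (the feature names, values, and linear form are illustrative assumptions, not One Concern’s actual approach):

```python
import numpy as np

# Hypothetical training data: levee segments with surveyed crest heights.
# Features per segment: [ground elevation (m), channel width (m), drainage area (km^2)]
X = np.array([[3.2, 45.0, 120.0],
              [2.8, 60.0, 300.0],
              [4.1, 30.0,  80.0],
              [3.5, 55.0, 250.0]])
y = np.array([5.0, 5.6, 5.4, 5.5])  # surveyed crest heights (m)

# Fit a linear model with an intercept term via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_crest(features):
    """Estimate the crest height (m) of an unsurveyed levee segment."""
    return np.append(features, 1.0) @ coef

estimate = predict_crest(np.array([3.0, 50.0, 200.0]))
```

In practice a production system would use richer features and a nonlinear learner, but the workflow (train on surveyed segments, predict where surveys are missing) is the same.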
Q: Did you encounter any challenge specifically related to the large scale of the model, considering it covers the whole nation of Japan?
A: Definitely. One specific challenge arose when trying to rapidly visualize the inundation model results, which contain millions of data points at an hourly interval over several days. I developed a Jupyter notebook that lets us view gigabytes of data in seconds. It has been widely used by team members during inundation model calibration and validation.
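The notebook’s internals aren’t described in the interview, but a common trick for viewing gigabytes of gridded output in seconds is to downsample each hourly field before plotting. A minimal numpy sketch of that idea (the function name and block-maximum pooling choice are illustrative assumptions, not One Concern’s code):

```python
import numpy as np

def downsample_max(grid, factor):
    """Shrink a 2-D inundation-depth grid by block-maximum pooling.

    Taking the block maximum preserves flood peaks, which is what
    matters most when scanning model output visually.
    """
    ny, nx = grid.shape
    ny_t, nx_t = ny - ny % factor, nx - nx % factor  # trim to a multiple of factor
    blocks = grid[:ny_t, :nx_t].reshape(ny_t // factor, factor,
                                        nx_t // factor, factor)
    return blocks.max(axis=(1, 3))

# Example: a 4000 x 4000 hourly depth field (~128 MB as float64)
# shrinks to 100 x 100, small enough to render near-instantly.
depths = np.random.rand(4000, 4000)
small = downsample_max(depths, 40)
```

Rendering the reduced array instead of the full field keeps an interactive notebook responsive even when looping over many hourly time steps.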
Q: Have you been communicating these technical details with domain experts and the public?
A: Yes, continuously. For example, we published the Typhoon Haishen validation and presented a poster on the Kumamoto City flood pipeline at the 2020 American Geophysical Union Fall Meeting. This year, a colleague and I attended the NOAA SCHISM Workshop and Bootcamp, alongside top scientists from around the world, to showcase our Japan flood pipeline.
Q: Any non-technical aspect that you found important during the product development process?
A: I found the collaboration among software engineers, data scientists, and the product team essential for successful product delivery. I have been volunteering as Scrum Master in an agile team made up of both engineers and data scientists. Fortunately, my previous interactions with stakeholders from academia, industry, and government agencies have been helpful in this coordination. Good communication across teams ensures we stay aligned on product details.
Q: What do you envision down the road?
A: The live compound-flood forecast pipeline for Japan is a great starting point for building a more resilient living environment with sustained enthusiasm and cutting-edge technology. I am thrilled about the upcoming nationwide deployment of the pipeline in Japan, and about future global expansion as well.