Views

Resilience data sniffing and honesty testing

by Assoc. Prof. Dimitrios Vamvatsikos

 

Performance became resilience, models became digital twins, but despite the seasonal migration of buzzwords, data remains king. A model without data is an empty shell, and getting the right numbers to fill it, where resilience is concerned, is a quest worthy of the Arthurian legends. The breadth and depth of hazard, damage, losses, downtime, recovery, social and economic aspects is such that a researcher will often come out with little to show for the time, sweat, and blood spent. It is an ugly picture, but it is often an unavoidable one when faced with maze-like agencies, missing documents, broken links, and government employees contemplating retirement. Many a disillusioned PhD student has considered abandoning the degree for a career in the pita-souvlaki-wrapping business when faced with such immovable granite walls. How is a mentor to come to their rescue? I need not go out on a limb here to claim that several of us have fantasized about going John Rambo (ok, or Harry Potter/Hermione Granger for milder personalities out there) to blast/charm our way out of this. Thankfully for society, calmer options prevail.

 

A model without data is an empty shell, and getting the right numbers to fill it, where resilience is concerned, is a quest worthy of the Arthurian legends.

 

Sometimes we call it expert opinion elicitation, sometimes informed extrapolation, but quite often we should man/woman up and straight up call it sniffing. I am not referring to the act undertaken by the noble nose of a trained bloodhound that will sniff out the last shred of evidence to solve a missing-data case. I am talking about the less desirable action of raising one's hand to the face, taking a deep breath, sniffing one's well-manicured fingers, and declaring "a=2.14515" with the proper pomp. Regardless of how you do it, it has an irresistible allure and the end result is the same: a human interjects and introduces a subjective estimate where no data is (willing to be) available. How do we make sure such "data" does not detrimentally impact the reliability of our predictions? Natural catastrophe modelers have always grappled with this conundrum, and the solution has always been one of testing, calibrating, and improving. Perhaps we, as academic researchers, can take a hint from their work, as it tends to be held to high standards by paying customers. That is not to say that there have not been some spectacular prediction failures, to the embarrassment of well-respected professionals (a couple of US storms come to mind). We should at least acknowledge such deficiencies in our models and, as Richard Feynman famously quipped, make sure we are not fooling ourselves, or, if I may add, our audience, by making grand predictions about life, the universe and everything down to the fourth decimal digit. After all, the answer to that question is 42, and that is a fact.
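To make "testing and calibrating" a little more concrete, here is a minimal Python sketch (my own illustration, not anything from the column) of one common honesty check, loosely in the spirit of Cooke-style seed questions: ask each expert for 90% credible intervals on quantities whose true values are later observed, then count how often reality actually lands inside those intervals. All experts, numbers, and error assumptions below are hypothetical.

```python
import numpy as np

# A toy "honesty test": a well-calibrated 90% credible interval should
# capture the realized value about 90% of the time across seed questions.
rng = np.random.default_rng(42)  # the answer, after all

def hit_rate(lower, upper, truth):
    """Fraction of realized values captured by the stated intervals."""
    lower, upper, truth = map(np.asarray, (lower, upper, truth))
    return float(np.mean((truth >= lower) & (truth <= upper)))

n = 200
truth = rng.normal(100.0, 10.0, size=n)  # values observed after the fact

# Both experts guess with the same error (std = 10), but they wrap very
# different interval widths around their point estimates.
guess_a = truth + rng.normal(0.0, 10.0, size=n)  # honest expert
guess_b = truth + rng.normal(0.0, 10.0, size=n)  # overconfident "sniffer"

half_a = 1.645 * 10.0  # width consistent with the claimed 90% coverage
half_b = 3.0           # "a=2.14515" levels of false precision

print(f"honest expert hit rate:        "
      f"{hit_rate(guess_a - half_a, guess_a + half_a, truth):.2f}")
print(f"overconfident expert hit rate: "
      f"{hit_rate(guess_b - half_b, guess_b + half_b, truth):.2f}")
# Roughly 0.90 vs 0.24: intervals that claim 90% coverage but capture far
# fewer realizations flag numbers that deserve heavy down-weighting.
```

Formal elicitation protocols dress this idea up with proper scoring rules and performance-based expert weights, but the principle is the same: subjective estimates earn their place in the model only after being tested against something observable.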

 

Dimitrios Vamvatsikos - Short bio

Having an intense interest in math and physics, Dr. Vamvatsikos enrolled in NTU Athens and graduated with a Diploma in Civil Engineering (1997) and a desire to broaden his horizons. He then moved to California and Stanford University, where he studied geomechanics (MSc 1998) and probabilistic methods, hazard estimation, and the seismic performance of structures (PhD 2002). In January 2011, he joined the faculty of the Metal Structures Laboratory at the National Technical University of Athens, specializing in the static and dynamic analysis of steel structures. His research vision focuses on integrating structural modeling, computational techniques, probabilistic concepts, and experimental results into a coherent framework for the performance evaluation of structures. Specifically, he is highly interested in the estimation of seismic hazard; the modeling, analysis, and design of steel and monumental structures; the nonlinear static and dynamic analysis of structures; and the seismic performance of buildings and bridges, along with the associated influence of uncertainties.