Fascinated with the intersection of AI, innovation, and intuitive explanations, especially in research-focused roles and labs.
I'm Bernease Herman, a data scientist at the University of Washington eScience Institute.
I am passionate about translating highly technical information* into easily understandable forms. It is ethically imperative that the world's increasingly complex systems be understood by all they affect, regardless of technical training. To that end, my current research area is interpretable machine learning, where we seek to algorithmically generate human-understandable explanations of black-box models.
I'm also involved in the entrepreneurship community. I enjoy pondering commercialization opportunities, fault-tolerant machine learning UX, and designing the "final stretch" innovations that make ML solutions viable in the real world.
Through my work with the UW eScience Institute's Data Science for Social Good (DSSG) and Winter Incubator programs, I've worked on problems ranging from data collection strategies for autonomous marine vehicles to predicting inequity in Seattle urban data. — eScience website
* e.g. machine learning models, concepts/definitions, and source code