Full Name
MAJ Chris Krueger, USA
Job Title
PhD Student, Designing Trustworthy AI Systems Fellow
Company
The George Washington University
Speaker Bio
Major Christine (Chris) Krueger commissioned as an Active Duty Army officer in 2008 following their graduation from the United States Military Academy (USMA) with a degree in Engineering Management. They served 8 years as an aviation officer and UH-60 pilot, including one deployment to Afghanistan. After earning master's degrees in Project Management from Drexel University and in Operations Research from Northeastern University, they transitioned to serve as an Operations Research and Systems Analyst (ORSA). Their assignments as an ORSA included Assistant Professor of Systems Engineering at USMA and Force Strategy Analyst at the Center for Army Analysis (CAA). While at CAA, they were recognized for their innovative analysis during the development of the Army's new force generation model. They were awarded the Military Operations Research Society's (MORS) annual Hughes Award for the outstanding junior analyst in DoD and DHS, as well as the MORS Rist Prize for the most impactful DoD research.
They are currently pursuing a PhD in Systems Engineering at George Washington University, where they are also a fellow in the NSF Designing Trustworthy AI Systems fellowship. Their current research has two major focuses. The first is how to visualize and communicate AI model behavior; more specifically, the research seeks to help non-AI experts understand the failure modes of specific AI models and thereby better assess the associated risks. The second is how to structure the role of the human in, on, or over the loop depending on the risk posed by the environment and the impacts the system can have. They are also interested in how previous disruptive technologies, such as aviation, can serve as a model for structuring AI regulation and deployment.
Speaking At