Engineering and Scientific Progress
My dissertation, Engineering Progress in Science (University of Cincinnati, 2025), advances what I call the operational account of scientific progress. Philosophers of science have long measured progress in terms of theory change or increasing truthlikeness, yet these accounts struggle to explain how progress persists across incommensurable paradigms. I argue that these difficulties arise from an overly narrow conception of scientific cognition as exclusively intellectual. By contrast, the sciences also depend on forms of operative cognition (embodied manipulation, design, and iterative construction) that are irreducible to propositional reasoning. Drawing on the maker’s knowledge tradition, I develop the notion of a ‘scientific work’ as a practical, non-intellectual complement to the more familiar notion of scientific theory. Much as philosophers of science are already said to analyze the logical ‘anatomy’ of scientific theory, I propose a way of doing philosophy of science that examines the cognitive ‘anatomy’ of scientific works.
I thereby develop a more inclusive picture of progress, one that captures the epistemic achievements intrinsic to engineering and technological development, especially those now underway in artificial intelligence. These developments, conceived of as scientific works, include the construction of scientific models, instrument design, calibration routines, and forms of experimental embodiment that allow investigators to intervene on their natural targets of study in ever more complex and revelatory ways. Further, by grounding scientific progress in the practical organization of activity, I reframe traditional worries in philosophy of science such as incommensurability: even when entire conceptual frameworks change, operative capacities often carry forward.
Since completing the dissertation, I have increasingly focused on the conceptual foundations of artificial intelligence. The rapid adoption of large language models in recent years has imported a host of philosophical terms, such as reasoning, attention, alignment, and ground truth, into engineering discourse. My current research analyzes how these concepts are deployed, and often distorted, when transposed into computational contexts, and how their computational uses in turn migrate back into philosophical discourse.
For example, my paper “AI Attention is Not Higher-Order Thought” (in progress) examines the claim that the “attention” mechanisms used in transformer architectures vindicate higher-order theories of consciousness. I show that the mathematical function of attention in language models (re-weighting token vectors by relevance) bears only a superficial resemblance to the intentional or metacognitive phenomena described in philosophy of mind. Clarifying these conceptual conflations serves both analytic and practical ends: it prevents category mistakes that mislead public discourse, and it grounds interdisciplinary dialogue between philosophers and engineers on firmer logical footing.
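To make the contrast concrete, the following is a minimal sketch in Python/NumPy of the scaled dot-product attention computation standard in transformers (Vaswani et al., 2017); the function and variable names are illustrative rather than drawn from any particular codebase.

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each output row is a relevance-weighted average of the value rows:
    # softmax(Q K^T / sqrt(d_k)) V.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # re-weighted token vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                          # three toy token vectors
print(scaled_dot_product_attention(X, X, X).shape)   # (3, 4): self-attention output

Nothing in this computation represents a state directed at another mental state; it is linear algebra plus a normalization, which is precisely the gap between the engineering and philosophical uses of “attention” that the paper traces.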
This approach, the rigorous analysis of philosophical vocabulary as it migrates into AI, illustrates my broader project of bringing the tools of analytic philosophy to bear on the epistemic structure of engineering practice itself. Rather than asking whether machines can think, I examine how engineers engineer thinking, and what this reveals about cognition as both a natural and an artificial phenomenon. My training in both philosophy and robotics engineering (M.Eng., 2024) enables me to engage these issues with technical precision.
Public Engagement with Science
A third strand of my research explores how public engagement with science functions as a cognitive process in its own right. I argue that outreach and participatory design are not merely ethical add-ons but epistemically generative: they reorganize the distributed cognition of scientific communities. My article “Leveraging Participatory Sense-Making and Public Engagement with Science for AI Democratization” (Studies in History and Philosophy of Science, 2025) develops this thesis by integrating enactive sense-making theory with models of civic participation. When non-experts engage with scientists around AI development, they co-constitute the normative space in which questions of alignment and value are settled. Public engagement, on this view, is a site of epistemic progress, where new operational coherences between technical systems and social goals are forged.
This work extends to my ongoing collaboration with Bowdoin’s Hastings Initiative for AI and Humanity, where I design programs that integrate philosophical reflection with hands-on experimentation with large language models. Together with a team of student AI ambassadors, I am developing modules that teach conceptual analysis, moral reasoning, and model-building as interlocking practices. As we develop a critical AI curriculum for Bowdoin students, we are also writing a research paper articulating the changes, real and perceived, that the current AI moment has brought to liberal arts campuses. Our current working hypothesis is that these changes are more often overestimated than not.