
Reclaiming Jurisdiction: How AI Lets Us Take Back Our Tools

higher-ed · AI · strategy · faculty

Introduction

In The Universal Hammer, I wrote about the danger of applying generic business frameworks to complex, mission-driven institutions. When we prioritize standardized efficiency over deep domain expertise, we often hollow out the very value we are trying to create.

Nowhere is this dynamic more visible than in the software we use to run our universities. If you have spent more than a week in higher education, you are likely familiar with the creeping dread of the “enterprise solution.” These massive platforms (whether they are learning management systems, assessment portals, or student success dashboards) are almost always sold as engines of efficiency.

But what actually happens is a quiet surrender of professional control. When a standardized platform cannot accommodate the nuance of a specific discipline, it is the faculty member who is expected to compromise. The pedagogy bends to fit the software.

Over the last few months, my relationship with technology has shifted significantly, but not because I have suddenly become a software engineer. Instead, by using AI to build my own localized tools, I’ve realized that generative AI offers something far more important than mere productivity. It is a mechanism for domain experts to reclaim control over their own work.

The Battle for Jurisdiction

To understand what is actually happening when we adopt these technologies, it is helpful to look at the work of sociologist Andrew Abbott. In his landmark 1988 book, The System of Professions, Abbott argues that professions are not static entities; they exist in an “ecology,” constantly competing with one another for jurisdiction. Jurisdiction is the socially and legally recognized right to control a specific area of work, diagnosis, and treatment.

Professions defend their jurisdiction by claiming “abstract knowledge,” which is a deep, theoretical understanding of their field that outsiders lack.

For decades, the faculty’s jurisdiction over teaching and assessment was absolute. But the digital revolution created a system disturbance. As universities digitized, jurisdiction began to quietly shift away from the academic expert and toward IT departments and third-party educational software vendors.

Consider the inherent clash between the subject matter we teach and the software we use to teach it. Entrepreneurship and strategy are, by definition, iterative, messy, and non-linear. Yet, the enterprise platforms we rely on are often built like digital assembly lines.

Your domain expertise allows you to see the nuance in a student's strategic pivot or the complex dynamics of a startup pitch. But enterprise software rarely accommodates nuance. It demands standardized data. It frequently forces you to compress rich, qualitative expert judgment into a generic five-point dropdown menu or a rigid rubric simply because that is how the software is built. When you are forced to flatten your expertise (when you have to teach and evaluate an agile subject inside a rigid structure), you inevitably alter your pedagogy to keep the system happy. A quiet shift in jurisdiction occurs. You are no longer designing the optimal learning environment for your field. You are renting space in a generic, prefabricated box.

Building Without a Blueprint

This is exactly where AI changes the equation. It allows the professional to bypass the vendor and reclaim that lost jurisdiction.

I say this not as an AI guru or a master coder. My use of these tools is entirely unglamorous and frankly not that impressive to anyone with a computer science background. But it has been highly effective for my specific needs.

Recently, some colleagues at Miami University and the University of Cincinnati were researching the value of pairwise evaluations versus standard rubrics for judging startup pitches. They had been experimenting with a free commercial tool to facilitate this, but it required an enormous amount of setup on the back end and was incredibly clunky for students to actually use. Instead of accepting that poor user experience as an unavoidable constraint, I decided to try building my own app.

Using Claude Code, I built a custom application I call PitchCompare. Through highly iterative prompting, I was able to generate the underlying logic, the instructional infographics, and the help pages. I used the exact same approach to completely rebuild my personal academic website.

Was it a seamless, magical experience? Absolutely not. It was messy and frustrating, and the AI frequently hallucinated or broke its own code. But I didn’t need to be a software developer to fix it. I just needed patience and my domain expertise. The AI provided the technical execution, but I held the abstract knowledge required to audit the outputs and ensure the tool actually served the pedagogical mission.

What makes this capability so powerful is that the solution doesn’t end with my own syllabus. Because the tool is custom-built and unburdened by enterprise licensing, other professors — including the colleagues whose research inspired it — can use PitchCompare in their own classrooms. Furthermore, the platform isn’t just an assessment tool. It has the potential to double as an instrument for data collection, allowing us to research how factors like an evaluator’s background characteristics might influence the evaluation process. We didn’t just bypass a clunky software vendor. We built a dedicated research engine.
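
To make the idea concrete, here is a minimal sketch of how a pairwise-judging tool can assign comparisons and turn individual judgments into a ranking. This is not PitchCompare's actual code; the function names (assign_pairs, rank_by_win_rate) and the simple win-rate scoring are illustrative assumptions about one way such a tool could work.

```python
# A minimal sketch of pairwise-comparison judging (illustrative only,
# not PitchCompare's implementation). Each judgment records which of two
# pitches an evaluator preferred; pitches are then ranked by win rate.

from collections import defaultdict
from itertools import combinations
import random


def assign_pairs(pitches, per_evaluator=5, seed=0):
    """Give an evaluator a random subset of pitch pairs to compare."""
    rng = random.Random(seed)
    pairs = list(combinations(pitches, 2))
    rng.shuffle(pairs)
    return pairs[:per_evaluator]


def rank_by_win_rate(judgments):
    """Rank pitches from a list of (winner, loser) judgments."""
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in judgments:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    # Win rate = wins / total comparisons the pitch appeared in.
    return sorted(appearances, key=lambda p: wins[p] / appearances[p], reverse=True)


if __name__ == "__main__":
    pitches = ["Team A", "Team B", "Team C", "Team D"]
    print(assign_pairs(pitches))
    sample_judgments = [("Team B", "Team A"), ("Team B", "Team C"),
                        ("Team A", "Team C"), ("Team D", "Team B")]
    print(rank_by_win_rate(sample_judgments))
```

The logic itself is not the hard part; the value lies in being able to shape it around your own pedagogy and research questions rather than around a vendor's defaults.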

The Auditing Tax and the Value of the Core

This highlights a profound misunderstanding in how we currently talk about AI in education. There is a persistent fear that AI will do the thinking for us, rendering foundational knowledge obsolete.

While no one can predict what these models will be capable of in a decade, right now, the reality of working with AI is very different. Using today's tools requires paying what I refer to as an "auditing tax." If you ask an algorithm to build a strategic framework or write a script, it will do so with immense confidence. But if you do not possess the deep, disciplinary knowledge required to verify that output (e.g., to spot the logical leap or the misaligned incentive), you will confidently deploy a terrible product.

This is another reason why the integration of disciplines — for example, combining the critical inquiry of the liberal arts with the tactical execution of business — is more vital than ever. We must teach students how to possess the intellectual depth required to be critical editors of AI, not just passive consumers of its output.

Conclusion: Rebuilding the House

Strategy, as I have argued before, is about fit. It is about matching the right mindset to the right mission.

We do not need to become full-time developers to survive the next decade of higher education. But we do need to stop letting generic, lowest-common-denominator software dictate the boundaries of our profession. When placed in the hands of professionals who actually understand the terrain, AI is the ultimate anti-silo tool. If enterprise software is the universal hammer, AI is the artisan’s chisel. It is a precise instrument that finally allows the expert to shape the work exactly as the discipline demands.