Workforce Perspectives and System-Level Synthesis
This phase expands the study beyond leadership perspectives to incorporate the lived experience of Extension professionals across roles and program areas. Conducted during the Joint Council of Extension Professionals (JCEP) national conference, this phase captures how AI adoption is being experienced in practice, including operational realities, capacity constraints, and emerging ethical considerations. The findings below represent a system-level synthesis of these perspectives, highlighting both reinforcing patterns and critical tensions across the Land-grant system.
Overarching Theme 1: Institutional Governance Scaffolding
The most pervasive finding across the data is a state of institutional suspension characterized by fragmented information and an absence of clear departmental structure. Extension professionals report navigating a landscape of vague guidance and conflicting organizational silos, where legal and technical requirements remain largely inaccessible to those on the ground.
This governance vacuum creates significant professional anxiety, as staff are cautioned about the risks of technology without being provided the necessary roadmaps or structural support to manage those risks safely. There is a resounding demand for a transition from individual, ad hoc experimentation to a coordinated, top-down organizational commitment.
This includes the development of formal policies, clearly defined parameters for safe use, and a national pooling of resources to address the current state of policy fragmentation.
Overarching Theme 2: Intellectual Sovereignty
The integration of AI has triggered an existential tension regarding the value of the Extension professional as an expert. There is a growing divide between the superficial efficiency offered by automated tools and the deep cognitive labor required to maintain scholarly integrity.
Faculty express concern that an institutional emphasis on efficiency and output may shift their role from original knowledge creation to that of an information verifier. In this emerging paradigm, educators report spending increasing amounts of time auditing and fact-checking AI-generated content, a shift that risks devaluing the unique thought processes and research-based expertise that define Land-grant academics.
To protect the integrity of the system, there is a strong call for institutional guardrails that preserve the visibility of human and science-based origins of knowledge, ensuring that critical thinking and scholarly contribution remain central.
Overarching Theme 3: Stewardship of Public Trust
At the core of the Extension mission is a human-to-human feedback loop that professionals fear is being disrupted by algorithmic mediation. This theme emphasizes the educator’s role as a curator and guardian of public trust, responsible for protecting communities from unvetted, biased, or incorrect information.
This responsibility is especially pronounced in youth programming, including 4-H, where concerns center on the potential for technology to replace human relationships or compromise privacy. Extension professionals increasingly view themselves as a frontline defense, helping the public identify misinformation, recognize red flags, and navigate digital content responsibly.
Trust is not perceived as an inherent feature of AI systems, but rather as a function of the educator’s transparency, academic rigor, and accountability. This reinforces the need for university-verified, research-based resources to ensure Extension remains a trusted source of evidence.
Overarching Theme 4: Professional Development and Capacity Building
A significant barrier to organizational readiness is a severe limitation in professional capacity, combined with a lack of clarity around what constitutes AI proficiency. Many professionals perceive AI adoption as an additional responsibility layered onto an already full workload, without corresponding adjustments in time, expectations, or support.
This capacity strain is compounded by technical barriers, including the cost of tools and cybersecurity restrictions that limit access and experimentation. Additionally, because AI readiness is a moving and often undefined target, many in the workforce feel they are perpetually falling behind.
To address this, there is a strong call for a transition from passive awareness to structured, mandatory professional development. This includes clearly defined competencies, dedicated time for training within the workweek, and institutional support that aligns expectations with available capacity.
Overarching Theme 5: Societal Ethics and the Extension Mission
A unique and critical dimension of the findings is the ethical tension between AI adoption and the Extension mission of resource stewardship. Many professionals identify a contradiction between promoting sustainability and adopting technologies that require significant energy and water consumption through large-scale data infrastructure.
This environmental impact is not viewed as a secondary concern, but as a central moral conflict. Extension professionals express concern that adoption, if driven primarily by external or corporate pressures, may conflict with the values and responsibilities they are tasked with upholding.
This creates a perception of systemic inconsistency, where educators may feel compelled to adopt technologies that could negatively impact the very communities they serve. Addressing this tension will require greater transparency, consideration of environmental impact in tool selection, and a commitment to intentional, minimal, and purpose-driven use of AI.
Points of Convergence and Divergence Between High-Level (Leadership) and Ground-Level (Extension Professionals) AI Perspectives
The comparison between leadership-level findings and workforce-level experiences reveals a system that is broadly aligned on long-term goals, but divided in how AI adoption is currently being experienced in practice. While leadership perspectives emphasize strategy, infrastructure, and future potential, Extension professionals describe a present-state experience characterized by uncertainty, constraint, and ethical tension.
Points of Convergence
The areas of alignment indicate a shared understanding of the foundational requirements for a successful AI transition:
- Demand for Institutional Guidance: Both leadership and workforce participants identify a critical gap in formal structure, policy, and direction. There is mutual agreement that moving from ad hoc experimentation to coordinated institutional strategy is essential.
- Maintaining Public Trust: Across both groups, trust is recognized as the system’s most critical asset. There is consensus that trust is grounded not in the technology itself, but in the transparency and accountability of the human educators who validate and deliver information.
- Workforce Readiness Barriers: Both perspectives highlight the need for coordinated, national training programs. There is agreement that current self-directed learning approaches are insufficient and that tiered, structured professional development is necessary.
- Human-Centric Ethics: There is shared agreement that AI should augment, not replace, human expertise. Both groups support the development of a human-centered framework that ensures institutional values remain central to AI adoption.
Points of Divergence
The differences between leadership and workforce perspectives reveal a policy–implementation gap, where strategic vision has not yet translated into operational clarity:
- Administrative Efficiency vs. Verification Labor: Leadership views AI as a tool for improving efficiency and productivity. In contrast, professionals describe a “productivity paradox,” where time savings are offset by the need to verify and fact-check AI-generated content, often resulting in increased workload.
- Strategic Opportunity vs. Resigned Acceptance: While leadership sentiment is largely optimistic, workforce perspectives reflect a more cautious or resigned stance, with AI adoption often experienced as a source of pressure, uncertainty, or ethical concern rather than opportunity.
- Environmental-Mission Gap: A significant divergence emerges around environmental impact. Extension professionals identify a contradiction between AI’s resource demands and Extension’s role in promoting sustainability, a concern that is less prominent in leadership-level findings.
- Top-Down Strategy vs. Capacity Constraints: Leadership emphasizes training, infrastructure, and strategic investment. In contrast, professionals highlight a capacity crisis, where AI adoption is perceived as an additional burden without sufficient time, support, or adjustment to existing responsibilities.