2026 National AI Report

Extension AI

Aaron Weibe, Ph.D.
Director of Technology Services & Communications
Extension Foundation
aaronweibe@extension.org

Dhruti Patel, Ed.D.
Senior Agent, Family & Consumer Sciences
University of Maryland Extension 
dhrutip@umd.edu

David Warren
Senior Director of Integrated Digital Strategies
Oklahoma State University & Extension Foundation
davidwarren@extension.org

Mark Locklear
Digital Systems & IT Operations Manager
Extension Foundation
marklocklear@extension.org

The Extension Foundation extends its sincere gratitude to the leaders, partners, and facilitators whose dedication made this work possible.

Dr. Damona Doye
Associate Vice President, Oklahoma Cooperative Extension Service, Oklahoma State University

Dr. Alton Thompson
Executive Director, Association of 1890 Research and Extension Administrators (AERA), based at North Carolina A&T State University

Dr. Beverly Coberly
Chief Executive Officer, Extension Foundation

Bill Hoffman and the Extension Committee on Organization and Policy (ECOP)

David Warren
Oklahoma State University & Extension Foundation

Dr. Aaron Weibe
Extension Foundation

Dr. Dhruti Patel
University of Maryland Extension

Mark Locklear
Extension Foundation

Ashley Griffin
Chief Operating Officer, Extension Foundation and the New Technologies for Ag Extension (NTAE) program

University of New Hampshire Extension

Extension Foundation Staff and AI Advisory Board Members
For serving as facilitators and moderators during the focus group sessions, ensuring productive and forward-focused dialogue throughout Parts II and III of the Convening.

agInnovation and Extension Directors and Administrators
For their time, thoughtful engagement, and contributions to shaping the national conversation on AI readiness, strategy, and implementation across the Land-grant system.

Regional Extension and agInnovation Directors and Administrators

USDA National Institute of Food and Agriculture


This report is supported in part by New Technologies for Ag Extension (funding opportunity no. USDA-NIFA-OP-010186), grant no. 2023-41595-41325 from the USDA National Institute of Food and Agriculture. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the view of the U.S. Department of Agriculture.

This report discusses the current AI landscape across Cooperative Extension and agInnovation in the Land-grant university system. Through a sequential, mixed-methods systems study, it comprehensively investigates the views of leaders in Cooperative Extension and agInnovation, followed by the ground-level realities of Extension professionals engaged in community efforts. The initial study report, published in late 2025, focused on Directors and Administrators in Cooperative Extension and agInnovation, capturing system-level perspectives on governance, infrastructure, workforce development, and strategic priorities for AI adoption.

In February 2026, the study was expanded to include direct engagement with Extension professionals across roles and program areas during the Joint Council of Extension Professionals (JCEP) national conference in Savannah, Georgia. This additional phase introduced critical workforce-level insights, capturing how AI adoption is being experienced in practice by agents, specialists, and educators working on the ground.

The inclusion of these perspectives revealed important areas of alignment and key gaps between leadership strategy and operational reality. While leaders emphasized coordination, infrastructure, and long-term opportunity, Extension professionals highlighted immediate challenges related to capacity, policy clarity, ethical considerations, and day-to-day implementation.

As a result, this report has been updated to reflect both the 2025 leadership findings and the 2026 workforce insights. Together, these combined perspectives provide a more complete, system-wide understanding of AI readiness across the Land-grant system.

Introduction

Artificial intelligence (AI) represents a defining technological shift that is rapidly influencing how Land-grant Universities fulfill their public missions of research, education, and community service. Across both Cooperative Extension and agInnovation (Research) systems, AI holds the potential to enhance how science is generated, translated, and applied, improving accuracy, efficiency, and availability, while also introducing new responsibilities related to governance, workforce transformation, data safety, transparency, and ethical use.

Recognizing this pivotal moment, the Extension Foundation (EXF), in collaboration with the University of New Hampshire Extension and supported through the New Technologies for Ag Extension (NTAE) cooperative agreement with USDA’s National Institute of Food and Agriculture (NIFA), led a national effort to understand the evolving landscape of AI within both Extension and Research components of the Land-grant system. Contributing members included the Extension Foundation, Oklahoma State University Extension, and University of Maryland Extension.

This effort built upon current EXF initiatives that had already begun exploring practical AI applications for the system, including tools such as ExtensionBot and the MERLIN data management platform, both of which demonstrate early-stage models of AI-enabled information discovery and decision support.

Purpose

The purpose of this study was to move beyond early exploration and toward a coordinated, system-wide framework for AI integration across Extension and agInnovation networks. Through a combination of survey research, facilitated convenings, and expanded field-based engagement, the study examined institutional readiness, workforce development needs, policy structures, and both leadership and practitioner perspectives on responsible AI use across both missions.

Overarching Priorities

Leaders across the Land-grant system identified three overarching priorities that define how Extension and Research should guide AI adoption nationally:

  1. Establish clear standards for transparency, accountability, and human oversight in research, education, and outreach applications.
  2. Develop consistent training, practice guidelines, and leadership structures to support responsible, effective use of AI tools.
  3. Align shared data systems, policies, and frameworks to enable consistent, sustainable adoption across institutions.

These priorities establish the foundation for the next phase of AI leadership across the Land-grant system. However, findings from the expanded study indicate that successful implementation will require not only a coordinated leadership strategy but also clear AI guidance and policy, direct alignment with day-to-day realities, capacity-building efforts, and attention to the ethical considerations experienced by Extension professionals on the ground.

Implementation Methodology

To explore these priorities in depth, the study was originally organized in four sequential phases. It began with a national AI Landscape Assessment to establish a baseline understanding of readiness and current use across institutions. This was followed by three AI Convenings, one virtual and two in person, that engaged both Extension and agInnovation Directors and Administrators in identifying priorities, validating findings, and developing actionable implementation strategies.

In early 2026, the study expanded to include a fifth phase, incorporating direct engagement with Extension professionals across program areas and roles through in-person data collection at the JCEP national conference. This additional phase provided a critical comparative lens, enabling analysis between leadership-level strategy and on-the-ground experience.

The following sections outline the timeline of activities, the methods used, and the findings that emerged through this coordinated process.

What this Study Delivers

  • System snapshot: a national baseline of AI awareness, use cases, and readiness across CES and agInnovation.
  • Leadership consensus: validated themes and priorities derived from structured convenings.
  • Workforce insight: new findings capturing the lived experience, constraints, and ethical tensions faced by Extension professionals.
  • Implementation direction: practical steps for workforce development, governance, and shared infrastructure.
  • Next Steps: align policy and attribution practices, launch tiered AI training, and formalize governance with clear roles and review processes.
  • Future Plans: translate priorities into funded pilots and shared services that can scale across the Land-grant network.

Timeline of Activities

The study took place over a concentrated four-month period from June through September 2025. Each phase was designed to build upon the previous one, progressing from environmental scan to structured dialogue to consensus on leadership priorities and implementation strategies. The process intentionally combined the perspectives of Extension and agInnovation leadership to ensure alignment across both missions.

This timeline reflects a deliberate design. The initial landscape assessment provided a system-wide view of readiness, the virtual convening established a shared understanding of current perspectives, and the two in-person convenings allowed leaders to establish priorities and co-create strategies for ethical governance, workforce training, and long-term infrastructure development. The sequence positioned the Land-grant system to move from awareness to coordinated priorities and future plans around AI adoption.

In early 2026, the study was expanded through an additional phase of engagement with Extension professionals at the Joint Council of Extension Professionals (JCEP) national conference. This phase extended the original timeline by incorporating workforce-level perspectives, enabling a more comprehensive understanding of AI adoption across roles, program areas, and institutional contexts.

The expanded phase provided a critical complement to the original design, allowing for comparison between leadership-driven strategy and the lived experience of Extension professionals, and strengthening the overall validity and applicability of the study’s findings.

  • Phase 1: Quantitative landscape survey with Extension and agInnovation leaders
  • Phase 2: Qualitative virtual focus groups
  • Phase 3: Qualitative AI Convening focused on identifying key priorities (in person)
  • Phase 4: Qualitative AI Convening focused on identifying key strategies (in person)
  • Phase 5: Qualitative workforce focus group engagement during JCEP (in person)

Methods

This study used a sequential, four-phase mixed-methods systems research methodology to capture the current state of AI perspectives, priorities, and implementation strategies among Cooperative Extension and agInnovation (Research) leaders within the Land-grant system. In early 2026, the study was expanded to include a fifth phase incorporating perspectives from Extension professionals across the broader workforce, allowing for a more comprehensive, system-wide understanding of AI adoption.

The process intentionally engaged Directors and Administrators across 1862, 1890, and 1994 institutions, representing the full research and outreach missions of the system. The expanded phase additionally engaged Extension professionals across roles, including agents, educators, and specialists, broadening the dataset beyond leadership perspectives. However, the 1994 representatives were not present for discussion during the original in-person conversations and are only reflected in Phase 1 and Phase 2.

Quantitative Assessment (AI Landscape Survey)

The study began with a national AI Landscape Assessment (June–July 2025) designed to measure institutional readiness, current use cases, governance and policy, and leadership perspectives regarding AI adoption. The structured survey was distributed across all 112 Land-grant Universities, capturing responses from Extension and Research professionals. Forty-seven respondents from 29 states participated in this initial survey. Quantitative results established a baseline of widespread enthusiasm for AI alongside notable gaps in policy, training, and coordination.

Qualitative Exploration (Virtual Focus Groups)

The second phase took place in July 2025 as a series of simultaneous, regional breakout sessions: 11 small, structured focus groups facilitated by members of the Extension Foundation staff and the national AI Advisory Board. More than 100 participants representing 41 universities took part in these 90-minute discussions. Following a brief overview of survey results, each group was guided by a single framing question, “Where are we now with AI?”, which was explored through related sub-questions addressing current awareness and ongoing initiatives; perceived challenges and risks; and emerging opportunities for Extension and Research. These regional focus groups provided the first qualitative layer of data, illustrating both the promise and fragmentation of AI activity across the Land-grant system. Phase 1 and Phase 2 data were triangulated to draw themes for the prioritization activity.

Brainstorming & Prioritization (In-Person Workshop)

Phase 3, held in September 2025, focused on collective prioritization through two sequential 45-minute activities facilitated by members of the Extension Foundation and partner institutions.

Activity 1, Brainstorming: The purpose of this activity was twofold:

  1. to build upon the triangulated themes from the AI landscape assessment and virtual focus groups by generating a wide array of strategies and emerging ideas, and
  2. to refine those ideas into a defined set of leadership priorities.

Participants were provided with a synthesis of findings from Phase 2 and asked to generate as many ideas in small groups as possible for how Cooperative Extension and agInnovation could advance AI policy, strategy, and application. Prompts focused on three key areas derived from the initial report:

  1. Culture, Ethics, and Public Trust
  2. Capacity, Infrastructure, and Policy
  3. Strategic Readiness and Alignment

Each group worked collaboratively to submit their ideas through a shared digital form, creating an extensive set of strategies and concepts across all categories.

Activity 2, Prioritization: Immediately following the brainstorming session, participants reviewed the full set of submitted ideas and deliberated to identify their top three priorities within each category. Using a structured worksheet and shared online database, groups collectively determined the nine most significant priorities to guide future implementation.

Implementation of Priorities (In-Person Workshop)

Between Phases 3 and 4, the facilitation and analysis team used ChatGPT and Google Gemini tools to assist in synthesizing the data collected from brainstorming and prioritization sessions. These tools helped consolidate hundreds of qualitative responses into thematic clusters by identifying redundancy, similarity, and co-occurring concepts across the datasets. The AI-assisted synthesis was then reviewed manually to ensure validity, accuracy, and interpretive rigor.
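The redundancy-and-similarity grouping described above can be illustrated with a minimal, standard-library-only sketch. This is not the study's actual pipeline; the function name, similarity threshold, and sample responses are hypothetical, and real AI-assisted synthesis (e.g., with ChatGPT or Gemini) is far more nuanced than string similarity.

```python
from difflib import SequenceMatcher

def cluster_responses(responses, threshold=0.5):
    """Greedy one-pass grouping: each response joins the first cluster
    whose seed text is sufficiently similar; otherwise it starts a new
    cluster. Illustrates redundancy/similarity detection only."""
    clusters = []  # each cluster is a list of texts; clusters[i][0] is its seed
    for text in responses:
        for cluster in clusters:
            if SequenceMatcher(None, text.lower(), cluster[0].lower()).ratio() >= threshold:
                cluster.append(text)
                break
        else:
            # no sufficiently similar seed found: start a new cluster
            clusters.append([text])
    return clusters

# Hypothetical focus-group responses, not actual study data
responses = [
    "We need more AI training for staff",
    "we need more AI training for our staff",
    "Data privacy policies are unclear",
    "Policies on data privacy remain unclear",
]
grouped = cluster_responses(responses)
print(len(grouped))  # the four responses collapse into 2 thematic clusters
```

As in the study, any automated grouping like this would still require manual review, since surface similarity can both merge distinct ideas and miss paraphrased duplicates.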

Participants then self-selected into breakout groups according to their area of greatest interest. Each group was tasked with developing strategies that could move these priorities from concept to practice, focusing on both immediate and long-term actions.

Facilitators guided each group through two key prompts: What strategies and resources are needed to implement these priorities? What barriers exist, and how can they be overcome?

To ensure applied outcomes, participants were encouraged to consider institutional structures, funding opportunities, partnerships, and mechanisms for accountability. Discussions were documented through shared digital forms, with each group’s scribe capturing proposed actions, obstacles, and enabling conditions.

The emerging themes represent the integrated leadership vision for AI within the Land-grant system, connecting ethical alignment, human oversight, workforce development, and shared infrastructure as mutually reinforcing priorities for future action.

During Phase 4, participants reconvened in person to review the top nine priorities, three under each of the key focus areas, and to develop strategies for action. In partially facilitated breakout discussions, participants explored the specific resources and partnerships needed for implementation; barriers that might impede success; and strategies to overcome those barriers and operationalize their ideas within institutional and system-wide contexts.

In-Person Engagement, JCEP National Conference

The fifth phase of the study was conducted in February 2026 during the Joint Council of Extension Professionals (JCEP) national conference in Savannah, Georgia. This phase expanded the research beyond leadership perspectives by directly engaging Extension professionals from across program areas and career stages.

Participants contributed through structured activities designed to capture both individual and collective perspectives on AI adoption in their daily work. These activities generated qualitative data reflecting real-world application, perceived risks, ethical concerns, and operational constraints associated with AI integration.

The purpose of this phase was to validate, challenge, and extend the findings from earlier phases by introducing a workforce-level lens. While Phases 1–4 focused primarily on institutional readiness, leadership priorities, and strategic direction, Phase 5 captured the lived experience of implementation, including the practical realities of workload, training capacity, trust, and mission alignment.

Data collected during this phase were synthesized alongside prior findings to identify areas of convergence and divergence between leadership-level strategy and on-the-ground experience. A comparative analysis enabled the identification of systemic tensions, capacity gaps, and emerging ethical considerations that may not have been fully visible in earlier phases.

Analytical Approach

Qualitative data from Phases 2 through 4 were analyzed using thematic analysis by Dr. Dhruti Patel, supported by AI-assisted narrative synthesis, with a thorough review of coding accuracy, redundancy, data coherence, bias, and ethics. The consolidated analysis of Phases 1–3 placed participants’ collective input into two overarching themes: Culture, Ethics, and Public Trust; and Capacity, Infrastructure, and Policy.

Qualitative data from Phase 5 were analyzed using the same thematic approach, with additional emphasis on a systems-theoretical lens to uncover convergent and divergent patterns between the leadership perspective (Phases 1–4) and the Extension professional perspective (Phase 5). The combined insights from leadership and workforce perspectives capture the organizational, ethical, and operational dynamics of AI adoption across the Land-grant system.

Findings

AI Landscape Assessment

AI–related Strategic Planning and Institution Stakeholder Engagement

  • When surveyed about their institution’s leadership’s general outlook on Artificial Intelligence, a resounding 73% of respondents characterized it as either enthusiastic or cautiously optimistic. This indicates a prevailing positive sentiment towards AI’s potential within Extension and agInnovation.
  • The survey revealed a high level of decentralization and an absence of formal, institution-wide strategies. Only 17% of respondents reported collaborative, cross-unit efforts regarding AI initiatives. Furthermore, 32% noted that some units are moving ahead independently, suggesting a fragmented approach. In 18% of cases, leadership in AI is emerging organically from faculty rather than being driven by top-down directives.

This shows that while enthusiasm for AI exists, comprehensive, system-wide strategies are still mostly lacking.

When efforts are scattered, it can cause duplication of work, inconsistent methods, and missed chances for collaborative growth across the larger Extension and agInnovation systems. This highlights the need for a clear, central vision to effectively integrate new technologies within large organizations like ours.

Influential Stakeholders and Bottom-Up Dynamics

  • Our investigation reveals that peer institutions and networks are identified as the most significant stakeholder group influencing AI strategy within institutions, explicitly named by 25% of respondents. This suggests that observing and collaborating with similar organizations plays a crucial role in shaping strategic direction.
  • The survey data also highlights a prominent bottom-up dynamic in AI adoption. 47% of respondents affirmed significant faculty autonomy in choosing AI technologies, indicating that individual educators are often empowered to explore and implement AI tools as they see fit. Furthermore, 43% reported a growing interest among faculty in integrating AI into their teaching practices.
  • 40% reported that AI is a stated strategic priority. The most commonly reported implementation strategy is training that targets faculty, staff, and students.
  • This combination suggests a scenario where interest and experimentation are growing naturally among faculty members. While external peer influence shapes broader strategy, the internal motivation for AI assimilation is largely driven by individual faculty initiative and enthusiasm. This dynamic can lead to rapid grassroots innovation, though it also highlights the need for some coordination to maximize overall impact.

Survey Demographics

Our quantitative investigation garnered 47 responses from Extension and agInnovation professionals across 29 states, representing a varied cross-section of roles. A significant majority, 61% of those surveyed, held leadership positions, including Deans, Assistant Deans, and Directors.

Funding Models and Areas of AI Involvement

  • When examining funding for AI initiatives, a notable degree of uncertainty was reported by respondents. Only 26% reported relying on external grant funding, while 14% cited internal institution resources. A significant knowledge gap was reflected regarding the financial framework for AI initiatives. Specifically, 27% of respondents were unaware of how AI projects are funded within their institutions.
  • Regarding the main areas of AI involvement, extension, teaching, and learning were the most frequently reported strategic areas, cited by 53% of respondents. This high percentage probably reflects the professional focus of the survey participants, as most held leadership positions.
  • Engagement from other critical areas was reported less frequently:
    • Research: 36%
    • Business Operations: 33%
    • Data Analytics: 32%

This distribution indicates that while AI is actively being considered and integrated into core programmatic areas, there may be untapped potential for greater involvement from research, operational, and data-focused units.

A more robust, cross-disciplinary engagement could lead to more comprehensive and sustainably funded AI strategies.

AI Policies and Procedures Adoption by Leaders

  • Surveyed leaders expressed widespread uncertainty concerning their institutions’ preparedness regarding AI policies and procedures. A significant 52% were unsure about the overall direction of their institution’s AI-related policies, indicating a lack of clarity regarding the strategic direction or even the existence of such frameworks.
  • The data indicates a significant need for greater clarity regarding institutional cybersecurity. A total of 53% of respondents were uncertain whether current policies adequately address the risks associated with AI. Furthermore, 30% did not know which institution policies, if any, had been updated in response to AI, suggesting a disconnect between policy development and awareness among leadership.
  • While some policies may exist, their perceived effectiveness is also a concern. 14% of leaders believed existing AI-related policy was ineffective, and 13% viewed it as overly restrictive.

This dual perception indicates that even when policies exist, they might not effectively fulfill their purpose or could hinder innovation.

Overall, these figures highlight a significant gap in the development, communication, and perceived effectiveness of AI governance within institutions.

Ethics, Privacy, and Restructuring

  • Concerns regarding the ethical implications of Artificial Intelligence are prominent among respondents. The top AI-related concerns identified include ethical governance, algorithmic partiality (defined as fairness or bias in AI algorithms), and data security. These areas represent critical challenges that institutions must address as AI assimilation progresses.
  • The survey indicates that while these significant concerns about AI exist, organizational restructuring to address this evolving landscape has been limited. This is reflected in the lack of new roles or redefined responsibilities among institutions.
    • 94% of respondents reported no new leadership positions specifically designated for AI oversight or strategy.
    • 91% indicated no new staff or faculty roles created to manage or develop AI initiatives.
    • 87% confirmed no restructuring of existing leadership responsibilities to formally incorporate AI-related duties.

This stark contrast between high levels of ethical and privacy concerns and a minimal organizational response suggests that while the risks are recognized, institutions have yet to formalize the necessary structural changes to effectively manage these challenges and guide AI adoption strategically.

The absence of designated roles may impede the proactive development and implementation of robust ethical frameworks and best practices for AI within Extension and agInnovation.

Risks and Opportunities with Stakeholders in the Next Two Years

  • The analysis of the survey measure on stakeholder perceptions revealed a positive trend regarding improved ease of access and program efficiency stemming from the future impacts of AI. Our data shows a positive outlook on AI’s potential to enhance access to programs overall, with 44% of respondents expecting such improvements. Notably, 41% foresee AI tools specifically increasing access for individuals with impairments, highlighting an understanding of AI’s role in improving access to Extension and agInnovation efforts.
  • 64% reported a strong positive indication regarding the use of AI tools for program reporting. This suggests a recognized opportunity to streamline administrative tasks and improve data management within Extension and agInnovation.
  • While several opportunities have been identified, uncertainty remains about AI’s ability to ease operational burdens. 36% of respondents were unsure if AI tools would effectively cut down workload, emphasizing the need for clearer demonstrations or proof of efficiency improvements in this area. These results point to both promising possibilities for AI use and areas where more clarity and development are necessary to fully harness AI’s advantages within the Extension and agInnovation systems.

Quantitative Survey Inference

  • A pervasive enthusiasm and acceptance of AI is evident across the surveyed leadership. However, this positive sentiment is juxtaposed with numerous logistical and procedural unknowns concerning the comprehensive diffusion and assimilation of AI technologies throughout institutions.
  • The data further indicates that current AI initiatives are characterized by decentralized efforts, with limited evidence of system-wide adoption in institutions. This suggests that while individual units or faculty may be exploring AI, a unified strategic approach is not yet widely established.
  • Overall, the identified AI priorities and critical needs underscore that AI literacy is a key concern among faculty and staff. There is a clear demand for enhanced understanding and skills related to AI to effectively leverage its potential.
  • Finally, the data strongly suggests that the most effective approach to providing AI support is through peer networks. This implies that fostering collaborative learning environments and facilitating knowledge exchange among Extension and agInnovation professionals could be a highly impactful strategy.
  • Consequently, more targeted efforts in this area may be warranted to enhance AI capabilities across the system.

Qualitative Investigation of the AI Landscape with Extension and agInnovation Leaders

Following the dissemination of the quantitative survey, a qualitative focus group investigation was conducted during the first, virtual AI Convening with Extension and agInnovation leaders and administrators. One hundred and six individuals representing 41 universities were in attendance.

The following themes emerged from the focus groups.

  1. Uneven Preparedness and Organic Adoption
    Most institutions are in the nascent stages of AI assimilation. Efforts are frequently driven by individual faculty, team, or unit initiatives rather than being guided by a comprehensive institution-wide strategy. While a limited number of universities have begun formalizing their AI strategies, the majority are still in the exploratory phase, seeking entry points or conducting internal assessments to understand current AI usage.
  2. Lack of Training and Organizational Infrastructure
    A recurring concern among participants was the conspicuous absence of systematic training and guidance for AI tools. Current AI adoption predominantly relies on “self-taught” methods or informal peer-to-peer knowledge sharing, with few institutions offering structured professional development programs or dedicated support mechanisms. This observation aligns with findings from the quantitative survey, which indicated a reliance on peer networks and professional associations for AI literacy support among faculty and staff.
  3. Fear, Resistance, and Misunderstanding
    A segment of staff perceives AI as a potential threat, particularly concerning job security or the erosion of traditional face-to-face models. These apprehensions can manifest as resistance to or avoidance of AI technologies.
  4. Fragmented Policy and Ethical Concerns
    Institutions largely lack clear, overarching policies or definitive guidance regarding the appropriate and responsible use of AI. Specific areas of concern include compliance with regulations such as FERPA, intellectual property rights, questions of AI authorship, and broader ethical boundaries for AI applications.
  5. System-wide Constraints: Capacity, Resources, and Knowledge Silos
    Intrinsic challenges significantly impede progress in AI assimilation. These include budgetary constraints, a shortage of staff proficient in AI, and the adverse effects of siloed knowledge, which collectively hinder cohesive advancement.
  6. Strong Interest in Shared Resources and Communities of Practice
    There was a broad consensus among participants regarding the critical need for shared resources. This includes standardized training and onboarding materials, clear policy templates, easy-to-access case studies illustrating responsible and effective AI utilization, and the establishment of cross-state learning communities or collaborative working groups to foster shared learning and best practices.

Summary of Qualitative Findings

  • The qualitative investigation with Extension and agInnovation leaders reveals a complex, yet promising, landscape for AI assimilation. While a prevailing enthusiasm for AI exists, its practical adoption across the system is largely decentralized and organic, driven more by individual or unit-level initiative than by top-down strategic planning. This grassroots interest is a testament to the adaptability of Extension and agInnovation professionals, despite some significant intrinsic vulnerabilities.
  • A critical finding is the pervasive lack of formal training and robust organizational infrastructure to support AI adoption. This void forces reliance on self-taught methods and peer sharing, which, while indicative of strong internal networks, cannot substitute for structured professional development and clear guidance from institutions. Furthermore, the presence of fear, resistance, and misunderstanding surrounding AI, particularly concerning job security, underscores an urgent need for transparent communication and educational initiatives to demystify AI’s role in augmenting, rather than replacing, human expertise.
  • Compounding these challenges are fragmented policies, significant ethical concerns (e.g., data privacy, algorithmic bias), and system-wide impediments such as budget constraints, limited AI-literate staff, and knowledge silos. These factors collectively hinder a cohesive and ethically sound approach to AI.
  • Despite these hurdles, the consistent demand for shared resources and vibrant communities of practice presents a substantial opportunity. The strong desire for standardized training, policy templates, case studies, and collaborative learning environments indicates a readiness among leaders to engage in collective action.

Implications and Opportunities for Extension and agInnovation

  • Strategic Alignment Imperative:
    The decentralized nature of AI adoption highlights the critical need for a centralized, system-wide AI strategy. This strategy has the potential to leverage existing organic efforts while providing overarching guidance, resources, and policy frameworks to ensure consistent, ethical, and practical adoption.
  • Prioritization of AI Literacy and Training:
    Addressing the deficiency in training is paramount. Developing and delivering systematic professional development programs—potentially leveraging existing peer networks—would be crucial for building AI literacy, mitigating fear, and empowering faculty and staff.
  • Proactive Policy Development:
    There is an immediate need to develop clear, comprehensive policies addressing AI use, ethical considerations, data privacy, and intellectual property. These policies must be communicated effectively across all units to foster responsible innovation.

Triangulation of Quantitative and Qualitative Findings on Current AI Integration in Extension and agInnovation

Brainstorming and Prioritization

Following the exploratory discussions of Phase 2, the study advanced into a structured prioritization process designed to convert qualitative insights into clear, actionable focus areas for the Land-grant system.

Culture, Ethics, and Public Trust

  1. Be transparent and build public trust: Strengthen public confidence through openness about how AI is used in research and outreach.
  2. Ethical governance and safeguards: Develop clear frameworks to address misinformation, bias, and data integrity.
  3. Human-centric education: Use AI to handle routine tasks so faculty and agents can focus on high-impact, trust-based programming.

Capacity, Infrastructure, and Policy

  1. Train people: Provide targeted, practical AI training for faculty, researchers, Extension staff, and communities.
  2. Set clear rules: Establish shared policies for safe and responsible AI use across teaching, research, and Extension.
  3. Shared systems, resources, and infrastructure: Strengthen broadband, platforms, and shared data resources to support AI work in labs and in the field.

Strategic Readiness and Alignment

  1. Future-ready workforce: Build comprehensive AI training and workforce development plans, including reskilling for those affected by automation.
  2. Establishment of use cases and best practices: Define and disseminate examples of how AI can be applied effectively and responsibly across the Land-grant mission areas.
  3. National strategy and collaboration: Pursue coordinated national initiatives and partnerships that blend trusted human expertise with AI tools to deliver reliable results.

Thematic Synthesis

Thematic analysis of Phases 2–4 revealed three themes: Culture, Ethics, and Public Trust; Capacity, Infrastructure, and Policy; and Strategic Readiness and Alignment. However, because the elements of Strategic Readiness and Alignment were already fully contained within the other two, we consolidated it and present the findings under the two overarching themes that most clearly reflect participants’ priorities. This consolidation does not remove content; rather, it redistributes those elements under the two broader themes where they more logically belong. These themes synthesize hundreds of individual responses gathered through focus groups, prioritization activities, and implementation discussions, forming a comprehensive view of where Extension and agInnovation leaders believe coordinated action is most urgently needed.

Theme 1: Culture, Ethics, and Public Trust

Leaders across the Land-grant system consistently emphasized that AI adoption must remain human-centric, grounded in ethical principles, transparent practice, and public accountability. Participants warned that technical progress without safeguards could erode both credibility and community confidence.

Subtheme 1. Ethics and Human-Centrism
The most widely discussed imperative was to ensure that AI strengthens rather than replaces human expertise. Participants proposed the development of a formal “human-centric” code of conduct led jointly by Extension and agInnovation Directors and Deans. This policy framework would codify expectations for human oversight in all AI-supported research, teaching, and outreach activities. Such a framework would serve as a reference point for how AI tools are used, evaluated, and communicated, ensuring that institutional values and community trust remain central to technological innovation.

Subtheme 2. Attribution and Transparency
Leaders identified a strong need for consistent, verifiable attribution of AI contributions in publications and outputs. Respondents recommended standardized attribution guidelines that clearly distinguish where and how AI was used, differentiating between research papers and Extension materials. They also urged expansion of open-access publishing so AI systems can train on transparent, peer-reviewed data. This aligns with federal open-access mandates and reinforces integrity by making AI-assisted work traceable and auditable.

Subtheme 3. Risk Prevention
Participants cautioned that unchecked AI adoption could inadvertently displace critical thinking and essential human skills. To prevent this, institutions should frame AI explicitly as a tool that augments human capacity. Continuous professional development and reflective training were identified as the best safeguards against dependence on automated outputs.

Subtheme 4. Public Trust
Trust was recognized as the ultimate determinant of success. Leaders called for systematic efforts to measure stakeholder trust and to design AI applications that respect the comfort levels and expectations of end users. Transparent communication, demonstrated accountability, and community involvement in AI design were cited as key to sustaining public confidence in both Research and Extension programs.

Theme 2: Capacity, Infrastructure, and Policy

The second theme addresses the structural conditions necessary to scale AI responsibly: a capable workforce, collaborative frameworks, and coordinated policy governance. Participants repeatedly emphasized that without shared infrastructure and systematic training, adoption would remain fragmented and unsustainable.

Subtheme 1. AI Workforce Readiness
Training emerged as the single most dominant priority. Leaders proposed a national, tiered AI training program led by Extension, featuring beginner-to-advanced pathways for faculty, researchers, agents, and community audiences.
Key components include:

  • foundational AI literacy and ethics;
  • advanced modules such as prompt engineering and data validation;
  • “train-the-trainer” models using computer-science expertise; and
  • clear certification pathways verifying competency and responsible use.

Participants stressed that training must demonstrate tangible efficiency gains and be embedded within institutional strategic plans to reach beyond early adopters.

Subtheme 2. Collaborative Framework
No single university can meet AI demands alone. Leaders urged the creation of system-wide alliances through the Extension Committee on Organization and Policy (ECOP) and other partners to coordinate resources and prevent duplication. Examples included joint funding proposals, shared centers of excellence, and cross-state learning communities. The Extension Foundation was identified as a potential platform for distributing resources, webinars, and best-practice repositories that benefit all Land-grant types (1862, 1890, and 1994).

Subtheme 3. Policies and Best Practices
Governance must evolve in parallel with technology. Participants called for institution-wide policies clarifying acceptable AI use in research, teaching, and Extension outputs.
Recommendations included:

  • creation of multidisciplinary task forces involving IT, compliance, communications, and academic leadership;
  • standing national committees through APLU or scientific societies to update policies as technology evolves; and
  • guidance framed around “dos and don’ts” emphasizing risk mitigation rather than rigid restriction.

Subtheme 4. Infrastructure, Resource Allocation, and Funding
Sustainable AI integration depends on access to physical and financial resources. Leaders underscored the need for shared data repositories, compute capacity, and broadband access, as well as new funding mechanisms to offset training and infrastructure costs. They highlighted public-private partnerships and coordinated grant strategies as essential for building durable AI infrastructure across all institutions, including those with limited internal capacity.

Summary

Across both themes, participants voiced a consistent conclusion: policy must lead technology. Human oversight, ethical governance, and coordinated capacity building are prerequisites for AI to achieve its full potential in research and community engagement. AI should not be treated as an isolated innovation but as a system-wide transformation requiring collaboration, investment, and trust.

Workforce Perspectives and System-Level Synthesis

This phase expands the study beyond leadership perspectives to incorporate the lived experience of Extension professionals across roles and program areas. Conducted during the Joint Council of Extension Professionals (JCEP) national conference, this phase captures how AI adoption is being experienced in practice, including operational realities, capacity constraints, and emerging ethical considerations. The findings below represent a system-level synthesis of these perspectives, highlighting both reinforcing patterns and critical tensions across the Land-grant system.

Overarching Theme 1: Institutional Governance Scaffolding

The most pervasive finding across all data is a state of institutional suspension characterized by fragmented information and an absence of clear departmental structure. Extension professionals report navigating a landscape of vague guidance and conflicting organizational silos, where legal and technical requirements remain largely inaccessible to those on the ground.

This governance vacuum creates significant professional anxiety, as staff are cautioned about the risks of technology without being provided the necessary roadmaps or structural support to manage those risks safely. There is a resounding demand for a transition from individual, ad hoc experimentation to a coordinated, top-down organizational commitment.
This includes the development of formal policies, clearly defined parameters for safe use, and a national pooling of resources to address the current state of policy fragmentation.

Overarching Theme 2: Intellectual Sovereignty

The integration of AI has triggered an existential tension regarding the value of the Extension professional as an expert. There is a growing divide between the superficial efficiency offered by automated tools and the deep cognitive labor required to maintain scholarly integrity.

Faculty express concern that an institutional emphasis on efficiency and output may shift their role from original knowledge creation to that of an information verifier. In this emerging paradigm, educators report spending increasing amounts of time auditing and fact-checking AI-generated content, a shift that risks devaluing the unique thought processes and research-based expertise that define Land-grant academics.

To protect the integrity of the system, there is a strong call for institutional guardrails that preserve the visibility of human and science-based origins of knowledge, ensuring that critical thinking and scholarly contribution remain central.

Overarching Theme 3: Stewardship of Public Trust

At the core of the Extension mission is a human-to-human feedback loop that professionals fear is being disrupted by algorithmic mediation. This theme emphasizes the educator’s role as a curator and guardian of public trust, responsible for protecting communities from non-vetted, biased, or incorrect information.

This responsibility is especially pronounced in youth programming, including 4-H, where concerns center on the potential for technology to replace human relationships or compromise privacy. Extension professionals increasingly view themselves as a frontline defense, helping the public identify misinformation, recognize red flags, and navigate digital content responsibly.

Trust is not perceived as an inherent feature of AI systems, but rather as a function of the educator’s transparency, academic rigor, and accountability. This reinforces the need for university-verified, research-based resources to ensure Extension remains a trusted source of evidence.

Overarching Theme 4: Professional Development and Capacity Building

A significant barrier to organizational readiness is a severe limitation in professional capacity, combined with a lack of clarity around what constitutes AI proficiency. Many professionals perceive AI adoption as an additional responsibility layered onto an already full workload, without corresponding adjustments in time, expectations, or support.

This capacity strain is compounded by technical barriers, including the cost of tools and cybersecurity restrictions that limit access and experimentation. Additionally, because AI readiness is a moving and often undefined target, many in the workforce feel they are falling behind.

To address this, there is a strong call for a transition from passive awareness to structured, mandatory professional development. This includes clearly defined competencies, dedicated time for training within the workweek, and institutional support that aligns expectations with available capacity.

Overarching Theme 5: Societal Ethics and the Extension Mission

A unique and critical dimension of the findings is the ethical tension between AI adoption and the Extension mission of resource stewardship. Many professionals identify a contradiction between promoting sustainability and adopting technologies that require significant energy and water consumption through large-scale data infrastructure.

This environmental impact is not viewed as a secondary concern, but as a central moral conflict. Extension professionals express concern that adoption, if driven primarily by external or corporate pressures, may conflict with the values and responsibilities they are tasked with upholding.

This creates a perception of systemic inconsistency, where educators may feel compelled to adopt technologies that could negatively impact the very communities they serve. Addressing this tension will require greater transparency, consideration of environmental impact in tool selection, and a commitment to intentional, minimal, and purpose-driven use of AI.

Points of Convergence and Divergence Between High-Level (Leadership) and Ground-Level (Extension Professionals) AI Perspectives

The comparison between leadership-level findings and workforce-level experiences reveals a system that is broadly aligned on long-term goals, but divided in how AI adoption is currently being experienced in practice. While leadership perspectives emphasize strategy, infrastructure, and future potential, Extension professionals describe a present-state experience characterized by uncertainty, constraint, and ethical tension.

Points of Convergence

The areas of alignment indicate a shared understanding of the foundational requirements for a successful AI transition:

  • Demand for Institutional Guidance: Both leadership and workforce participants identify a critical gap in formal structure, policy, and direction. There is mutual agreement that moving from ad hoc experimentation to coordinated institutional strategy is essential.
  • Maintaining Public Trust: Across both groups, trust is recognized as the system’s most critical asset. There is consensus that trust is grounded not in the technology itself, but in the transparency and accountability of the human educators who validate and deliver information.
  • Workforce Readiness Barriers: Both perspectives highlight the need for coordinated, national training programs. There is agreement that current self-directed learning approaches are insufficient and that tiered, structured professional development is necessary.
  • Human-Centric Ethics: There is shared agreement that AI should augment, not replace, human expertise. Both groups support the development of a human-centered framework that ensures institutional values remain central to AI adoption.

Points of Divergence

The differences between leadership and workforce perspectives reveal a policy–implementation gap, where strategic vision has not yet translated into operational clarity:

  • Administrative Efficiency vs. Verification Labor: Leadership views AI as a tool for improving efficiency and productivity. In contrast, professionals describe a “productivity paradox,” where time savings are offset by the need to verify and fact-check AI-generated content, often resulting in increased workload.
  • Strategic Opportunity vs. Resigned Acceptance: While leadership sentiment is largely optimistic, workforce perspectives reflect a more cautious or resigned stance, with AI adoption often experienced as a source of pressure, uncertainty, or ethical concern rather than opportunity.
  • Environmental-Mission Gap: A significant divergence emerges around environmental impact. Extension professionals identify a contradiction between AI’s resource demands and Extension’s role in promoting sustainability, a concern that is less prominent in leadership-level findings.
  • Top-Down Strategy vs. Capacity Constraints: Leadership emphasizes training, infrastructure, and strategic investment. In contrast, professionals highlight a capacity crisis, where AI adoption is perceived as an additional burden without sufficient time, support, or adjustment to existing responsibilities.

Visual Thematics of the Points of Convergence and Divergence

Top and Ground Level AI Preparedness and Policy Data of the AI Convening (2025-2026)

The graph displays the four points of convergence: demand for institutional guidance, maintaining public trust, workforce readiness barriers, and human-centric ethics.

Discussion and Implications

The AI Convening demonstrated that both Cooperative Extension and agInnovation leaders recognize AI as a defining factor in the future of research, education, and outreach across the Land-grant system. Participants viewed AI not as a single tool, but as an ecosystem-level transformation affecting how information is created, validated, shared, and applied. The discussions revealed broad enthusiasm and optimism about AI’s potential, while subsequent workforce-level findings reveal a more complex operational reality defined by uncertainty, capacity strain, and ethical tension. The system’s success will depend on coordinated readiness, ethical governance, and investment in people and infrastructure. The convening sessions offered a clear message: both Extension and agInnovation are poised to take further steps in shaping responsible, mission-driven uses of AI, but doing so will require shared standards, structured collaboration, and a national vision that bridges leadership strategy with on-the-ground implementation.

Emerging Readiness and Distributed Innovation

AI experimentation is already occurring across the Land-grant system. Faculty, researchers, and educators are independently piloting AI tools for literature synthesis, data analysis, stakeholder communication, and decision support. This decentralized experimentation has generated valuable insights, yet leaders acknowledged that progress remains uneven and largely uncoordinated. Workforce findings further indicate that this experimentation is often occurring without sufficient guidance, creating variability in practice and uncertainty around acceptable use.

Participants agreed that the next phase must focus on structured implementation. Both Extension and agInnovation institutions need frameworks that align experimentation with organizational priorities, create consistency across outputs, and promote cross-state learning and sharing. The challenge ahead is to connect local innovation with national coordination, enabling knowledge to scale without losing institutional flexibility while reducing the current state of fragmentation and policy ambiguity.

Workforce Development as the Central Bottleneck

A strong consensus emerged that workforce readiness is the most significant barrier to AI adoption. Both Extension and agInnovation rely on a workforce that blends subject-matter expertise with community engagement and applied research. Yet most professionals have not received structured training in AI literacy, ethics, or application. Workforce perspectives further reveal that AI is frequently perceived not as an opportunity, but as an additional burden layered onto already constrained workloads.

Participants emphasized that a tiered approach to workforce development is needed. Basic digital literacy must be coupled with advanced technical and ethical competencies for specialists, researchers, and educators. This training should not only focus on how to use AI tools, but also on how to critically evaluate AI-generated information, validate data, and communicate transparently with stakeholders. Importantly, readiness cannot be achieved without dedicated time, institutional support, and alignment between expectations and capacity. Without these conditions, training efforts risk limited adoption and uneven impact.

Human-Centered Ethics and Responsible Governance

Both Extension and agInnovation leaders reaffirmed that people must remain at the center of AI integration. Participants were clear that technology should augment human expertise, not replace it. This human-centric approach preserves the credibility, trust, and relational foundation that have defined the Land-grant mission for more than a century.

Leaders identified a strong need for shared ethical frameworks, including policies on attribution, data privacy, intellectual property, and algorithmic fairness. A framework developed collaboratively across the system could help ensure that AI tools are used transparently and in alignment with Land-grant values. Workforce findings reinforce this need, highlighting concerns around “intellectual sovereignty,” where professionals fear a shift from knowledge creation to verification of AI-generated content. Addressing this requires governance structures that preserve the visibility of human expertise, authorship, and scientific rigor.

Additionally, trust emerged as a central theme across both datasets. Trust is not viewed as an inherent feature of AI systems, but as a function of the transparency, accountability, and academic integrity of Extension professionals. Ensuring that AI-enabled outputs remain grounded in university-verified, research-based content will be essential to maintaining Extension’s role as a trusted public resource.

Infrastructure, Policy, and Data Readiness

Infrastructure disparities emerged as a major concern. While some institutions have advanced data systems and internal AI policies, others are still developing the foundational digital and policy environments necessary for integration. Participants noted that AI readiness requires not only technical infrastructure such as broadband, storage, and computing capacity, but also structured, available, and validated data.

Shared data repositories, consistent metadata standards, and aligned policies on availability and governance were identified as key enablers for cross-institutional collaboration. Additionally, with concerns around data privacy and data security, institutions may consider training or educational best practices for those building and using AI applications. These investments would allow Extension and agInnovation to more effectively connect research outputs with educational and applied use cases, creating a unified ecosystem of reliable, AI-ready information.

Workforce findings further emphasize that gaps in policy clarity and access to approved tools contribute to a broader sense of institutional ambiguity, where professionals are expected to manage risk without clear guidance. Addressing these gaps will be essential for enabling consistent and confident adoption across the system.

Implications for the Land-Grant System

The findings from the convening underscore that AI readiness represents both a challenge and an opportunity for Cooperative Extension and agInnovation. The decentralized nature of the Land-grant system remains a double-edged sword: it encourages innovation and adaptation, but can also lead to fragmentation if not intentionally aligned. The addition of workforce perspectives makes clear that this fragmentation is already being felt at the operational level, where professionals are navigating unclear expectations and uneven support structures.

To sustain national leadership in research and community engagement, Extension and agInnovation must advance in tandem. AI has the potential to accelerate the translation of research into practice, improve decision-making, and expand access to knowledge for all communities. Achieving this potential will depend on shared investment in training, infrastructure, policy development, and data stewardship.

At the same time, the system must reconcile emerging ethical tensions, including the environmental impact of AI technologies and their alignment with Extension’s mission of sustainability and resource stewardship. Addressing this dimension will require intentional decision-making around tool selection, usage, and long-term impact.

Ultimately, the path forward is not defined by adoption alone, but by intentional, values-aligned integration. The Land-grant system is uniquely positioned to lead in this space, not only by implementing AI, but by shaping how it is applied in the service of public good, scientific integrity, and community trust.

Without this investment, the Land-grant system risks widening disparities between early adopters and those lacking access to training or technical support.

Strategic Recommendations

The AI Convening emphasized that Extension and agInnovation leaders should translate ideas into actionable strategies for building AI readiness across the Land-grant system. The recommendations below represent the most prominent themes identified through the prioritization and implementation sessions. They reflect a shared vision among participants for creating a coordinated, ethical, and sustainable approach to Artificial Intelligence in research, education, and outreach. Findings from workforce-level engagement further reinforce that this vision must be operationalized in ways that are accessible, supported, and realistic for professionals across all roles. This also means observing, acknowledging, and addressing the perceptions and fears of the workforce.

Strengthen Capacity, Infrastructure, and Policy

Key Recommendations:

  • Form multidisciplinary committees to lead institutional AI efforts, drawing on education, policy, technical expertise, and external stakeholders.
  • Establish clear institutional and system-level policies on AI use, data governance, privacy, and attribution, ensuring consistency across the Land-grant network. These policies should be translated into practical guidance that is easy to access and actionable for professionals at all levels.
  • Develop shared data systems and platforms to reduce duplication, ensure data integrity, and support collaboration across institutions.
  • Launch comprehensive, tiered training programs (e.g., administrator-, educator-, and staff-level trainings) to strengthen AI literacy among faculty, researchers, Extension professionals, and community educators.
  • Provide capacity-building training and other educational efforts on AI literacy within communities and stakeholders.
  • Help build advocacy structures (committees, initiatives, and marketing) that communicate scientific, unbiased, and relevant information on AI-related impacts (environment, energy, etc.).
  • Advocate for sustained investment in broadband and digital infrastructure to enable participation across all institutions, including 1862, 1890, and 1994 universities.

Advance Strategic Readiness and Alignment

Key Recommendations:

  • Establish a national, interdisciplinary AI preparedness framework for Cooperative Extension and agInnovation that defines shared goals, success measures, and accountability.
  • Create cross-institutional working groups or communities of practice to pilot AI use cases, evaluate best practices, and share implementation models.
  • Integrate national-level AI workforce development efforts into existing professional development pipelines and academic programs, preparing the next generation of Extension educators and researchers. These efforts should also account for current workforce capacity constraints and include dedicated time and support for participation.

Promote Ethical Governance, Transparency, and Public Trust

Key Recommendations:

  • Develop academic integrity best practices that define responsible and ethical use of AI in research, teaching, and Extension, grounded in keeping a human in the loop.
  • Engage communities and stakeholders directly in conversations about AI to build understanding, confidence, and trust in how technologies are applied.
  • Implement quality assurance, regular audits (internal and external), red teaming, and validation processes for AI-generated content to prevent errors, bias, and misinformation.
  • Require clear attribution policies/guidance to ensure transparency when using AI-generated information in research publications, communications, and educational materials.

Foster Collaboration and Learning with Resource Alignment

Key Recommendations:

  • Use collective structures such as ECOP to maintain momentum, share findings, and support national coordination.
  • Establish multi-institutional partnerships that include universities, federal agencies, and private-sector collaborators to share resources, training, and expertise, and partner on grants/funding.
  • Seek federal and competitive grant opportunities to support shared AI infrastructure and system-wide training programs.
  • Encourage state and institutional investment in digital modernization, ensuring Extension and agInnovation remain equipped to operate in a data-driven environment. This includes prioritizing access to tools and resources across institutions and roles.

Implications and Future Considerations

The findings from this study highlight both the momentum and the complexity of advancing AI across Cooperative Extension and agInnovation. While enthusiasm is strong, meaningful progress will depend not only on national coordination and leadership, but on how institutions interpret and operationalize these priorities within their own contexts.

Workforce findings further emphasize that successful adoption will require alignment between strategy and the day-to-day realities of Extension professionals. As institutions move forward, the following questions can be used to further guide local exploration, decision-making, and plans for implementation.

Governance and Policy

  1. How is AI currently governed within your institution, and where are the gaps in clarity, consistency, or accessibility?
  2. What structures are needed to move from informal, decentralized experimentation to a cohesive, systemic strategy?
  3. What multidisciplinary committees and individuals can help develop and translate practical guidance for Extension professionals?

Workforce Readiness

  1. What does “AI readiness” mean within your institution, and how is it defined and measured across roles?
  2. How are Extension professionals expected to build AI competency within existing workloads?

Data and Infrastructure

  1. Do professionals have access to approved, trusted AI tools and validated data sources?
  2. How are data systems structured to support AI integration across research, Extension, and education?
  3. What technical, financial, or policy-related barriers limit consistent adoption?

Ethics and Public Trust

  1. How is your institution ensuring that AI use aligns with the highest standards of academic integrity, ethics, equity, access, public trust, and transparency for Extension professionals?
  2. What safeguards are in place to maintain human oversight and protect the integrity of research-based knowledge?
  3. How are emerging concerns, such as intellectual sovereignty and environmental impact, being addressed in decision-making?

Collaboration and Resource Alignment

  1. How is your institution leveraging cross-institutional initiatives and communities of practice to share resources and minimize duplication across state and national partnerships?
  2. How is your institution collaborating with regional partners to develop scalable AI literacy frameworks that move toward meaningful community empowerment and digital resilience?
  3. How can collaboration reduce individual burden while accelerating system-wide progress?

Looking Ahead

Advancing AI across the Land-grant system will require more than adoption; it will require intentional, values-aligned integration at every level. While national coordination efforts, such as the AI Program Action Team, will provide structure and direction, the success of this work ultimately depends on how institutions and professionals engage with these questions locally.

Extension Foundation Insights

As the Land-grant system advances toward a coordinated national strategy for Artificial Intelligence, the Extension Foundation continues to play a facilitative role in supporting Cooperative Extension and agInnovation with tools, infrastructure, and expertise that help translate system priorities into implementation. Findings from both leadership and workforce engagement reinforce that this translation, from strategy to practice, is the critical gap the system must now address. The insights and priorities emerging from this study align closely with several active NIFA-funded initiatives currently stewarded by the Foundation, particularly those advancing data readiness, workforce training, and AI-informed content infrastructure.

Building Capacity and Data Infrastructure

Through the New Technologies for Ag Extension (NTAE) program, the Extension Foundation has developed a portfolio of scalable, interoperable technologies designed to strengthen institutional capacity for AI adoption. Central to this work is MERLIN (Machine-driven Extension Research and Learning Innovation Network), a structured data platform that organizes, validates, and standardizes research and educational content for Extension and agInnovation.

MERLIN enables Land-grant universities to prepare their data ecosystems for AI-driven applications by creating structured, shareable, and regularly updated datasets. This ensures that research outputs, fact sheets, and publications remain available, verifiable, and ready for integration with current and emerging AI systems.

This approach directly addresses workforce-identified challenges related to fragmented data, inconsistent access to approved resources, and uncertainty around trusted content sources. By standardizing and validating information at scale, MERLIN reduces ambiguity and supports more consistent, confident use of AI across roles and institutions.

These efforts directly align with the priorities identified by convening participants, including shared infrastructure, increased availability, and consistent governance for data and content.

Advancing AI Readiness and Workforce Development

Complementing MERLIN is ExtensionBot, the Foundation’s national AI platform that makes validated Extension content discoverable and interactive. ExtensionBot allows institutional partners to customize chat-based interfaces powered by their own data sourced from MERLIN, enabling trusted, real-time responses to stakeholder inquiries.

This approach supports the convening’s calls for workforce development and strategic alignment by providing a tangible, ethical framework for AI adoption. It also directly responds to workforce concerns around verification burden and public trust by ensuring that AI-generated responses are grounded in curated, research-based content rather than unverified external sources.

Through these systems, institutions can explore AI use cases, strengthen internal capacity, and demonstrate transparent AI engagement with the public, while ensuring that all outputs remain grounded in peer-reviewed, research-based information.

To further support the national direction recommended in the convening, the Foundation is positioned to collaborate with ECOP, the proposed AI Program Action Team, and networks such as NDEET. These collaborations can help organize, coordinate, and nationalize training offerings that draw on the expertise of Land-grant content specialists, ensuring broad, consistent access to high-quality professional development across the system. Importantly, these efforts can help move the system from passive awareness to structured, competency-based training that aligns with real workforce capacity and expectations.

Embedding a Human-Centered and Ethical Framework

A central outcome of the AI Convening was the systemwide agreement that AI must remain human-centered: a tool to amplify, not replace, the expertise of researchers, educators, and Extension professionals. The Foundation’s tools embody this principle.

ExtensionBot requires human validation and content review before public release, while MERLIN’s structure promotes transparency in authorship, attribution, and version control. These features ensure that AI integration enhances, rather than compromises, the trust and credibility that define the Land-grant mission.

This model also directly addresses workforce concerns related to intellectual sovereignty and the preservation of expert identity, ensuring that human knowledge creation, interpretation, and accountability remain central in AI-assisted environments.

As the system continues to evolve, the Foundation is positioned not only to provide tools, but to help shape a national model for responsible, transparent, and mission-aligned AI implementation that balances innovation with public trust, and efficiency with scientific integrity.

Conclusion

From Consensus to Coordinated Action

Artificial intelligence is a defining shift in our world. This study confirms that the Land-grant system is ready to lead the way in navigating this new technology across its research, education, and community engagement missions, but doing so will require bridging the gap between leadership vision and workforce reality. If we align our policies, training, and data infrastructure around a shared national strategy, and ensure those strategies are operationalized across all roles and institutions, the system is positioned to lead.

Through a national survey and a series of leadership convenings, this work has moved the system beyond scattered, isolated experiments. With the addition of workforce perspectives, it also reveals the operational challenges, capacity constraints, and ethical tensions that must be addressed to move forward effectively. We now have a unified vision for moving forward.

The consensus from leaders across Cooperative Extension and agInnovation, representing 1862, 1890, and 1994 institutions, is more than just an observation. It is a clear mandate for action, built on three core priorities:

  1. People: The single biggest hurdle isn’t technology; it is workforce readiness. A coordinated, national training program is the most urgent priority. This must include defined competencies, dedicated time for learning, and alignment between expectations and capacity.
  2. Trust: Leaders agreed that public trust is the system’s most critical asset. We must protect it with human-centric policies, transparency, and clear standards for attribution. As many participants affirmed, policy must lead technology. Trust is not inherent in AI systems; it is sustained through the integrity, accountability, and expertise of Extension professionals.
  3. Coordination: To prevent fragmentation, the system needs shared data, common platforms, and aligned policies. This is the only way to support collaboration and scale these efforts effectively. Coordination must also reduce duplication and ambiguity, enabling professionals to adopt AI with clarity and confidence.

This study marks a turning point from individual exploration to an intentional, coordinated strategy. The challenge leaders identified is not technical but organizational: how to harness local innovation within a single, national framework.

The recommendations in this report provide the blueprint.

The next phase of this work will require sustained leadership, shared accountability, and continued alignment between strategy and implementation.

The essential first step is to create the National AI Program Action Team (PAT). With support from partners like the Extension Foundation, this team will put the plan into action. This group, led by ECOP and agInnovation, can provide the leadership needed to align training, develop policy, and coordinate infrastructure investment. It will also play a critical role in ensuring that national strategies translate into practical, usable frameworks that support professionals across the system.