A Practical Playbook for Moving Faculty from Fear to Flourishing with AI
Sarah Gibson, Laura Morrow, Jennifer Shewmaker and Adam Wilson | Lipscomb University
Purpose of This Playbook
This playbook translates change management principles into a clear, actionable guide for supporting faculty through AI adoption. It's designed to normalize faculty emotions, provide a shared language for transformation, offer concrete coaching strategies at each developmental stage, and connect AI integration to your institution's core mission and values.
This is not a technical guide.
It is a change-management and formation guide that recognizes the deeply human dimensions of technological change in higher education.
The Big Idea:
Fear → Flourishing
AI adoption stalls when institutions treat it as a tool problem instead of a human problem.
The Fear to Flourishing™ Framework recognizes that faculty move through predictable emotional and cognitive stages before they can use AI well, ethically, and creatively. This journey is not linear, but understanding these stages allows leaders to provide the right support at the right time.
Flourishing happens when fear is named rather than dismissed, when agency is restored to faculty members, and when institutional values lead before tools are deployed. This framework shifts the conversation from compliance to transformation, from resistance to possibility.
Fear is Named
Not dismissed or minimized
Agency is Restored
Faculty lead their own change
Values Lead
Before any tools are introduced
*Gibson, S. (forthcoming). Fear to Flourishing: Guiding Institutions Through AI Transformation. Manuscript in preparation.
Last updated: March 6, 2026.
Leading Faithfully in a Shifting Landscape
By Jennifer Shewmaker
The Challenge: A Perfect Storm
Higher education faces unprecedented challenges: rapid AI disruption, evolving public perceptions of value, and growing demand for professional studies over liberal arts. This creates a risk of reactive policies driven by fear.
  • AI Disruption: Rapid technological change challenging traditional assessment and instruction.
  • Value Perception: Evolving public perceptions of higher education's relevance.
  • Professional Demand: Growing market pressure for professional studies over liberal arts.
  • The Risk: Reactive policies driven by fear rather than proactive leadership rooted in mission.
"How can Christian universities lead with clarity and conviction in the face of AI disruption?"
Anchored in Mission: Our Approach
At Lipscomb, we chose not to ban or ignore AI. Instead, we developed a strategy deeply rooted in our Christian identity. Our goal is to equip students and faculty to view this technological shift through a missional lens, fostering a deeper sense of calling. We are moving our community from a posture of fear to one of flourishing.
1
Ethical Integration
Connecting institutional mission and Christian values to emerging AI strategy.
2
Faculty Empowerment
Focusing on holistic development and resilience in a changing world.
3
Student Formation
Designing scalable, values-based engagement initiatives that support professional growth.
Leadership for the AI Era: Commitments and Practices
Effective leadership in the age of AI requires a thoughtful blend of principle and pragmatism. Institutions must cultivate specific commitments and adopt key practices to guide their communities from apprehension to innovation.
Leadership Commitments
Commitment to Presence
Leaders must be accessible, fostering open dialogue and addressing faculty concerns directly.
Commitment to Agency
Empower the community, valuing their voice and expertise to build ownership and trust.
Commitment to Transparency
Provide consistent, clear information about AI changes, their relevance, and alignment with institutional mission.
Commitment to Values
Explicitly link AI integration to the institution's core mission and foundational values.
Leadership Practices
Naming Reality with Compassion
Acknowledge both the challenges and opportunities of AI with empathy and understanding.
Fostering Agency & Voice
Actively create opportunities for faculty, staff, and students to shape AI strategy and implementation.
Building Predictable Structures for Trust
Establish regular updates, listening sessions, clear guidelines, and a culture of continuous learning.

The Fear to Flourishing™ Framework
The Fear to Flourishing Framework is a layered playbook that moves both individuals and institutions from AI anxiety to confident, values-aligned practice. This framework, with additional pieces, will be available in Sarah Gibson's book: From Fear to Flourishing: Guiding Institutions Through AI Adoption.
Fear to Flourishing™ Bots
Fear to Flourishing Bots are guided AI assistants built around the Fear to Flourishing framework. They help individuals and teams move from fear and uncertainty about AI toward curiosity, experimentation, and confident integration. Through structured conversations, prompts, and practical exercises, the bots support users in developing the judgment, creativity, and agency needed to thrive in an AI-enabled world.
*Please note: non-paid users are talking to a model that is less sophisticated and nuanced in its responses. You will still need to create a free BoodleBox account to chat.
The Individual Roadmap: Faculty Experience
Individuals do not move in a straight line through change. But they do move through seven predictable mile-markers, each characterized by distinct emotions, needs, and coaching opportunities.
We've seen people move through these markers in mere hours or remain in them for months. How quickly someone progresses is often less about technical skill and more about their underlying mindset toward growth, learning, and change.
Understanding where a faculty member or student is on this journey allows leaders and educators to provide precisely the right support at the right moment.
The following mile-markers are indications of restrictive thinking, in which change is experienced as something being done to me, triggering a need to protect control, competence, and identity. The focus is on risk avoidance and loss of control.
Initial Understanding
First exposure, basic understanding
Feels like: Uncertainty, confusion
You might hear: "I don't really know what AI does" or "Is this just cheating?"
Coaching move: Show AI in action as a thinking partner, not a shortcut. Help them see it, then name it clearly.
Ready to advance when: They can explain AI in one sentence.
Grief
Mourning what feels lost or replaced
Feels like: Loss, anger, identity threat
You might hear: "The craft of writing is dying" or "What's left for humans?"
Coaching move: Create space to lament. Tell stories of faculty whose work deepened with AI. Name what AI cannot replace.

This is a critical step. Without help here, faculty linger in quiet resistance.
Day of Reckoning
Realization that AI changes things
Feels like: Panic, urgency
You might hear: "Half my assignments are obsolete" or "We need to ban this."
Coaching move: Co-build an AI-aware rubric. Redesign one assignment together with concrete, practical solutions.
Ready to advance when: They pilot something new in their classroom.
This signals a shift from restrictive thinking to transformative thinking, in which change is reframed as something I can engage with intentionally, restoring agency and opening space for learning and innovation.
Flip the Script
Reframing AI as an opportunity, not a threat
Feels like: Tentative hope
You might hear: "What if this could actually help?" or "This could actually free up time for the deeper conversations I've always wanted to have in class."
Core shift: AI becomes an amplifier of human judgment, not a replacement for it.
Coaching move: Brainstorm ways AI could solve their specific teaching challenges.
Ready to advance when: They name one personal use case.
Curiosity
"I wonder if I can…" moments spark exploration
Feels like: Playfulness, experimentation
You might hear: "I wonder if AI could...?" or "I'm curious whether AI could help me differentiate assignments for different skill levels."
Coaching move: Provide prompt libraries, sandbox environments, and peer buddies for low-risk exploration.
Ready to advance when: They share a small success story with colleagues.
Integration & Sharing
AI becomes part of daily practice and tools passed to others
Feels like: Pride, confidence
You might hear: "Every week, I have AI help me brainstorm three different ways to explain difficult concepts." or "I can't imagine going back to creating rubrics without AI assistance. It saves me hours."
Coaching move: Create show-and-tell roundtables and peer mentoring opportunities.
Signal of success: Other faculty cite their example as inspiration.
Honing
Refining skills, building mastery
Feels like: Craft mindset, continuous improvement
You might hear: "I'm now teaching my students prompt engineering as a core competency in my discipline." or "I'm experimenting with different AI tools to see which ones best support metacognitive reflection."
Coaching move: Offer research-informed professional development, conference opportunities, and advanced benchmarks.
Outcome: Faculty set their own growth goals and success metrics.
The speed at which individuals move through these mile-markers is often determined by their default mindset toward growth, which means support strategies, not pressure, determine momentum.
Using Knoster’s Model to Diagnose AI Fear
AI-related resistance is often mislabeled as unwillingness, negativity, or lack of skill. In reality, what we are seeing is usually a missing component in the change environment.
Knoster's model for managing complex change offers a helpful diagnostic lens. It suggests that successful change requires five elements to be present at the same time: vision, skills, incentives (motivation), resources, and an action plan. When any one of these elements is missing, predictable forms of resistance emerge. Applied to AI adoption, the model helps leaders diagnose fear accurately instead of reacting to it emotionally.
Knoster, T. J. (1991). Presentation on managing complex change. Pennsylvania State University.
AI fear is rarely about the technology itself. It is almost always a signal that one of the conditions for healthy change has not been met. Using Knoster's model shifts the leadership question from:
“Why are people resisting AI?”
to:
“What is missing in the system that is producing fear?”
This reframing allows leaders to respond with targeted support instead of pressure, restoring agency and accelerating movement from fear toward flourishing. When leaders diagnose fear correctly, they stop trying to fix people and start fixing conditions. That is when real momentum begins.
The Institutional Roadmap
While individual faculty move through emotional stages, institutions must follow a strategic sequence adapted from Rogers' Diffusion of Innovations theory. This roadmap ensures that infrastructure, culture, and support systems develop in the right order to sustain widespread adoption.
Launching the Vision
Initial Task Force
A diverse, empowered group researches AI and writes the playbook, grounding the institutional approach in principles.
AI Event
Publicly reframe AI through institutional values and vision. A visible campus event reframes AI from a threat to an opportunity and recruits the next wave of champions, setting the tone for the institutional approach.
Formalizing Structure
Academic Integrity Policy
Establish guardrails that guide rather than ban. These formal guidelines turn ad hoc usage into institutionally sanctioned practice.
Just-in-Time Training
Provide personalized support aligned to faculty readiness stages. Bite-sized workshops and office hours meet people where they are and solve real problems.
Scaling & Sustaining
Growth & Spread
Enable peer-led adoption through faculty champions. Proven practices from the early and late majority are scaled across departments, and peer showcases drive adoption.
Universal Adoption
Select and implement one trusted institutional platform. Once AI use is the norm, a universal tool prevents momentum from fading and ensures equitable access for all.
Re-Root
Regularly revisit values, policies, and tools to maintain continuous alignment as the institution settles into its post-AI identity.

Critical Insight: Order matters. Implementing policy before building trust with people creates lasting resistance. Lead with values and relationships, then introduce structure.
The 5 P's of Sustainable AI Adoption
Sustainable AI adoption requires five interconnected elements working in harmony. When these pieces align in the correct sequence, institutions build lasting capacity for ethical, effective AI integration that serves their mission.
Principles
Why are we using AI?
  • Mission-aligned values
  • Clear risk tolerance
  • Ethical frameworks
Watch out for: Ethics statements no one revisits or applies to real decisions.
People
Who must be equipped and supported?
  • Faculty, staff, students
  • IT and administration
  • Stage-aligned coaching
Watch out for: One-size-fits-all workshops that ignore readiness levels.
Policies
What are the guardrails (not handcuffs)?
  • Clear usage guidelines
  • Attribution standards
  • Data privacy tiers
  • Regular review cycles
Watch out for: Blanket bans and policy drift without updates.
Processes
How does AI fit daily work?
  • Review and human sign-off
  • Pilot → feedback → scale
  • Integration into existing systems
Watch out for: Flashy pilots with no integration plan or sustainability.
Platforms & Partners
Which tools and partners do we trust?
  • Security and transparency
  • Interoperability with existing systems
  • Vendor stability
Watch out for: Shiny objects, closed systems, and vendor lock-in.
How the pieces interlock:
1. Principles set the moral North Star.
2. People embody and iterate those principles.
3. Policies codify them so everyone plays the same game.
4. Processes turn policy into habit, revealing bottlenecks to fix.
5. Platforms/Partners supply the tech backbone, but only after the first four P's are solid.

Lipscomb’s AI Story: Choosing Courage Over Fear
Lipscomb University's journey to becoming an AI-forward campus exemplifies a proactive, values-driven approach, transforming initial uncertainty into a framework for human flourishing. This timeline highlights key milestones that cemented AI as a sustained, faculty-led institutional priority.
Late 2023: Initial Uncertainty
ChatGPT emerges, sparking widespread anxiety across higher education. At Lipscomb, as at most institutions, campus conversations quickly turned to worries about cheating, authorship, and loss of control.
Jan 2024: Leadership's Vision
Laura Morrow, Senior Director of the Center for Teaching and Learning, addressed the moment at a faculty meeting by reframing AI-related academic integrity concerns within the broader reality that cheating existed long before AI. She reminded the community that "since the dawn of time, students have found ways to cheat," and yet institutions have always adapted. While generative AI represented a real disruption, Lipscomb should not respond with fear. Her message centered on walking alongside instructors with practical resources, shared understanding, and steady guidance.
Spring 2024: AI Task Force Forms
A diverse AI Task Force launches, bringing together faculty, staff, administrators, legal counsel, IT, and academic leadership. At the first meeting, members voiced a shared sentiment: We may not fully understand this yet, and it may feel unsettling, but we must not speak about AI negatively. The group desired that Lipscomb's response be grounded in discernment, optimism, and the belief that this technology could be shaped in service of human flourishing.
This posture carried through the institution's full approach.
June 2024: Guiding Commitments
The AI Task Force released its findings, emphasizing several core commitments: AI literacy for faculty, staff, and students; ethical use grounded in Lipscomb's Christian mission; guardrails rather than bans; and the integration of AI into teaching, research, and operations as a support for human judgment, not a replacement for it.
The report highlighted the importance of clear guidelines for academic integrity, data privacy, research transparency, and ongoing professional development, while affirming that good pedagogy addresses most AI-related concerns.
Aug 2024: Shifting Mindsets
In August 2024, national scholar José Bowen speaks at the opening faculty session, and every faculty member receives a copy of his book Teaching with AI. The conversation begins to shift from "How do we stop this?" to "How do we teach well in this new reality?"
Fall 2024: Clarity Over Fear
Throughout Fall 2024, the Center for Teaching and Learning hosted a series of AI and Academic Integrity sessions designed to reframe the narrative around "AI cheating." Seven sessions were initially planned. Only two were needed. Faculty quickly moved from fear toward clarity, with a growing consensus: We understand the issue, now let's learn how to use AI to improve education.
Jan 2025: Sustained Priority
In January 2025, Lipscomb named its first AI Faculty Fellow (Sarah Gibson), signaling that this work was not a temporary initiative but a sustained institutional priority.
Feb 2025: Faculty Ownership
Faculty advocate for a universal, trusted AI platform, a pivotal turning point that marks true cultural ownership and grassroots demand rather than mere compliance.
Laura Morrow explains to leadership that "this is the moment when everyone gets an email address. Imagine if we only gave email to 10 people at the institution, we would never understand the impact email would have on our institution."
July 2025: Universal Adoption
On July 10, 2025, Lipscomb adopts BoodleBox for all faculty, staff, and students. This decision makes Lipscomb the first independent college or university to achieve universal AI platform adoption, and notably, the first to deploy tools as a result of a grassroots, faculty-driven movement rather than a top-down administrative mandate.
Fall 2025: Integrating AI into Everything
By Fall 2025, communities of practice and an AI pedagogy series allow faculty to dive deeper, share their work, and learn from one another. AI is fully integrated into professional development through the CTL, not as a standalone topic, but as part of conversations about teaching, assessment, and learning design. The guiding belief became clear: AI is not a separate issue. Good pedagogy addresses most AI concerns, and AI must be integrated into how we teach and assess.
2026: Future-Ready Skills
In 2026, Lipscomb, in collaboration with the AAC&U AI Institute, launches a small-group initiative focused on helping faculty map the skills students need for a post-AI world, further reinforcing that the goal is not technological adoption for its own sake but preparation for meaningful, ethical, human-centered work.
Lipscomb's AI story is a story of values-led leadership, faculty trust, and the conviction that when fear is named and agency is restored, flourishing follows.
Lipscomb's Guiding Principles for AI Use
These principles demonstrate how an institution can ground AI adoption in its distinctive mission and values. Your institution's principles will differ, but the model remains: let your deepest commitments guide your technological choices.
At Lipscomb University, our use of artificial intelligence is guided by principles rooted in our Core Values.
Love God
Grounded in our Christian identity and guided by ethical principles, the integration of AI tools may empower us to fulfill our divine calling within God's kingdom.
Seek to Learn
As a scholarly Christian community, we approach the use of AI with informed curiosity guided by prayerful discernment. We honor the intrinsic value of the continual journey of learning and critical thinking.
Serve Others
As a Christ-centered community, we prioritize humanity above technology in all our endeavors. AI is a tool by which we seek to enrich the lives of those we serve, reflecting our commitment to human flourishing.
Embrace Collaboration
We will embrace collaboration both within and beyond the Lipscomb community to innovate, experiment, critique, and dialogue as we use AI.
Respect All
Our responsible use of AI protects and affirms the dignity of all individuals as image-bearers of God.
Deliver Our Best
We will leverage AI as a powerful tool, encouraging innovation that contributes to an abundant life and glorifies God.
Pursue Joy
We aim to integrate AI in a manner that enriches the human experience.
Create Solutions
We will provide clear, transparent, and well-documented guidelines for the use of AI to allow our community to explore and benefit from emerging technologies safely and effectively.

Student Formation: A Lesson Learned
By Laura Morrow and Sarah Gibson
Student formation is one of the central areas of focus in Lipscomb's approach to AI. From the outset, the guiding question was not simply how students might use AI, but how the institution could form students who exercise judgment, responsibility, and discernment in an AI-saturated world.
In the earliest stages of Lipscomb's AI engagement, students were not brought into the conversation in a broad or formal way, and that decision was intentional. At the time, the institutional landscape was uncertain. Policies were still forming, tools were rapidly evolving, and higher education conversations were dominated by fear-driven narratives around cheating and loss of control. Lipscomb did not want to make promises it could not yet keep or set expectations before faculty had clarity and confidence.
Instead, the institution chose to begin with faculty formation and pedagogical grounding, believing that clear, aligned faculty leadership would ultimately create a healthier learning environment for students. As a result, students who left campus in the spring of 2025 returned in the fall to an AI-forward campus.
Knowing what we know now, engaging student leaders earlier in the process would have strengthened trust, surfaced valuable perspectives, built buy-in, and helped shape clearer norms sooner. Student leaders are often already navigating AI thoughtfully in their academic, professional, and personal lives, and they bring insights that can sharpen institutional assumptions about use, ethics, and expectations.
This learning has reinforced an important principle: while broad student engagement may require institutional readiness, student leadership engagement can and should begin earlier. When students are invited into the conversation as partners rather than subjects of policy, formation deepens and shared responsibility grows.
Observations of Student Reactions
Positive
Accelerated Learning
Students leverage AI for organization, accelerating Bloom's Taxonomy levels, and generating cutting-edge research ideas. They often use soft skills for managing learning spaces.
Neutral
"Just Another Tool"
Many students perceive AI as another technology to adapt to, stating, "Why is this such a big deal to you? We learn new technology every day."
Opposition
Concerns & Confusion
Resistance stems from environmental impacts, ethical concerns (e.g., "Mark of the Beast"), fear of job loss, mental health worries, and parental concerns about education value.
Flourishing Patterns: Students vs. Faculty
Student fear-to-flourishing patterns diverge significantly from faculty patterns. Micro approaches (one-on-one, small groups) prove more effective. Student resistance often dissipates more quickly as AI becomes a non-negotiable employer expectation.
Guidance must be practical and explicit, outlining acceptable and unacceptable uses. Elevating two-way trust between students and faculty is essential to maintain the perceived value of the degree and institutional competency.
Key Lessons for Student AI Integration
  • Universal adoption with equal opportunity is vital.
  • Robust classroom tools can enhance assignments and assessments.
  • Data safety and security are critical for risk management.
  • Faculty attitudes profoundly influence student attitudes.
  • Faculty should model AI use, foster curiosity (e.g., "Generation Bot"), and map AI across the curriculum.
  • Establish common AI language through positive, realistic conversations, and gauge student attitudes regularly.
  • Understand ethical implications: Apply the "Fingerprint Test" (does it still look like me?) and "Transportation Test" (is this the right power level?).
  • Determine when AI use is non-negotiable and implement program-level policies.

AI Literacy vs. Preparing for the AI-Native World
By Sarah Gibson
While AI literacy provides an essential entry point, true readiness for the future demands a deeper, more human-centric approach that transcends tool-specific knowledge.
AI literacy teaches people how to use AI.
Preparing for the AI-native world teaches people how to be human when AI is everywhere.
AI Literacy
What most institutions think they are doing
AI literacy focuses on understanding the tool.
It emphasizes:
  • What AI is and how it works
  • Basic capabilities and limitations
  • Ethical use, citation, and compliance
  • Learning how to use specific tools
AI Literacy is typically:
  • Short-term
  • Tool-dependent
  • Reactive to new technologies
  • Framed as a skill set to be mastered
While useful for initial engagement, AI literacy alone quickly becomes outdated as technology evolves. In addition, as students engage with AI, the question of AI literacy will rapidly dissolve.
Preparing for the AI-Native World
What actually prepares students and faculty for the future
Preparing for an AI-native world focuses on developing the human capacities needed to live and work alongside AI.
It emphasizes:
  • Judgment, discernment, and ethical decision-making
  • Knowing when to leverage AI and when to prioritize human input
  • Designing work that demands human reasoning, creativity, and responsibility
  • Adapting thinking, assessment, and pedagogy for an AI-saturated environment
  • Understanding shifts in power, agency, and trust with AI's presence
Preparing for the AI-native world is:
  • Long-term
  • Values-driven
  • Tool-agnostic and resilient
  • Integrated across disciplines and learning experiences
This is not about mastering a technology.
It is about forming people who can think, choose, and lead wisely in a world where AI is everywhere.
Instead of asking "Are our students AI literate?" we should be asking a deeper question:
In a post-AI world, where AI is simply part of how work gets done, much like Google, what human skills do we want to protect, develop, and intentionally instill in our students?
This shift reframes the conversation from technical competence to human formation, emphasizing judgment, ethical reasoning, creativity, discernment, and responsibility as the enduring outcomes of education.

Preparing for the AI-Native Student
By Sarah Gibson and Emily Gibson, 5th Grader at Lipscomb Academy
From Digital Natives to AI Natives
Technology doesn't just change what we do; it fundamentally changes who we are. As new generations emerge, their relationship with technology redefines intelligence and authority.
Digital Natives (Millennials & Gen Z)
These generations adapted to technology as an enhancement or extension. They learned to navigate information and use search engines as thinking tools. Authority shifted from professor to platform, but students were primarily consumers of information.
  • Millennials: Learned email in school, saw social media emerge.
  • Gen Z: No memory of life without smartphones, social identity formed online.
Intelligence shift: "I know how to find it."
AI Natives (Gen Alpha & Gen Beta)
This marks a profound shift. These generations are growing up with AI embedded in their cognitive formation and daily lives, moving beyond mere consumption to active creation. If Gen Alpha grows with AI, Gen Beta will grow inside AI systems.
  • Gen Alpha: Smartphones as default, algorithmic feeds before literacy, generative AI emerging during cognitive formation.
  • Gen Beta (projected): Born into a world where AI is assumed, educational systems redesigned for AI integration.
Intelligence shift: "I can build it."
Reimagining the Professor's Role in a Post-AI World
The shift from students as "searchers" to "builders" demands a new role for educators. We must evolve from partners to architects of formation, shaping what gets built rather than competing with the builders.
Architect of Learning
Design learning experiences AI cannot replicate, personalizing education to cultivate a growth ecosystem.
Curator of Complexity
Introduce intellectual friction and nuance to problems that AI might flatten, challenging students meaningfully.
Ethical Frame-Setter / Discernment Coach
Teach students when not to use AI, fostering discernment and mitigating creativity loss.
Builder of Human Judgment / Identity Former
Ground AI integration in human judgment and formation, helping students integrate intelligence without outsourcing self.
The architects do not compete with the builders. We shape the learning ecosystem that students and AI use together.
The Post-AI Classroom
A post-AI classroom embraces intentionality and critical engagement, focusing on human formation rather than technological control.
It IS a classroom that:
  • Uses AI intentionally
  • Audits AI outputs
  • Compares human vs. machine reasoning
  • Builds tools ethically
  • Reflects on dependence
It is NOT a classroom that:
  • Bans AI
  • Replaces teachers with AI
  • Is designed with the assumption that AI doesn't exist.
Instead of: "Don't use ChatGPT." We say: "Show me how you used it. And why."
The Charge: Form the Builders
Partner → Architect
When Gen Alpha becomes the builder, professors must become architects.
Pedagogy → Anthropology
With Gen Alpha, we are adjusting anthropology, reconsidering what it means to know.
Information → Formation
This is no longer about information; it is about human formation in an AI-saturated world.

Final Takeaway
Connect
institutional mission and Christian values to emerging AI and technology strategy.
Design
scalable, values-based faculty and student engagement initiatives that support human flourishing.
Apply
practical models of governance and pedagogy that strengthen resilience and clarity in times of disruption.
Iteration is Key
to adapting and responding to community culture and ongoing AI development.
AI adoption succeeds when institutions treat fear as valuable data rather than an obstacle to overcome, lead with values before introducing tools, and build trust systematically before rolling out technologies.
Flourishing is not accidental. It is designed.
This playbook provides the framework, but your institutional context will shape the details. The key is to remain steadfast in your commitment to support faculty as whole people navigating profound change, not simply as end-users learning new software.
When we honor the human dimensions of technological change, we create conditions where faculty don't just adopt AI—they thrive with it, using it to amplify their best work while staying grounded in their deepest professional values.

Want to Go Further?
These books offer deeper frameworks for supporting faculty development, building hope-filled learning communities, and implementing practical pedagogical changes that stick.
Teaching with AI
Bowen & Watson
Helps educators understand AI and its disruption by offering practical, classroom-tested strategies that treat AI as a thinking partner rather than a shortcut.
Great place for educators to start.
Small Teaching
James Lang
Supports AI adoption by showing how small, intentional changes in practice can lead to meaningful improvement without overwhelming faculty or students.
Hope Circuits
Jessica Riddell
Grounds AI in broader higher ed conversations, helping leaders think critically about how disruption shapes human values, agency, and flourishing.
The best book about AI that never mentions AI.
From Fear to Flourishing
Sarah Gibson (forthcoming)
A guidebook that frames AI adoption as a formation journey, giving leaders strategies to move faculty from fear and resistance toward discernment, confidence, and human-centered integration.
Links
Past Presentations

Additional Handouts