AI Summer School - Implementing Gen AI at your company

Your weekly guide to getting the most out of generative AI tools

Welcome to Gen AI Summer School

We’re spending the summer teaching you the essentials you need to succeed in an AI-forward world.

Here’s the plan:

Implementing Gen AI at your company

Last week we discussed implementing generative AI in your job, drawing on JobsGPT, a custom GPT that identifies which tasks associated with a given job can potentially be accelerated when performed in tandem with AI. In particular, we identified Quick Wins, Definite Don’ts, and In-Between Cases: three categories of tasks, divided according to how readily we can and ought to leverage AI to help perform them.

This approach is part of a more general strategic process that we have developed to guide the application of generative AI in your organization. Here’s the broad picture of the nine-step process, which we will break down into individual components below. Note first that the process has three layers, Foundation, Framework, and Future, each of which merits explanation.

Foundation

The first three steps of our nine-step process depend on a foundational understanding of generative AI tools: what they can do well, what they can’t do well, what costs are associated with them, and which tasks in our organization could potentially be enhanced by performing them in tandem with AI.

The first step is to understand the generative AI tools that are currently available to help support our work. Those tools include large language models, AI image generation tools, tools for generating and editing audio and video, vibe coding platforms, and AI-powered productivity tools. This step can certainly be a moving target, as new tools are regularly released, updated, and improved. The lion’s share of the work the Innovation Profs do in our workshops and presentations involves this very step: informing folks of what tools are out there and how to use them effectively.

The next step is to determine the limitations of these tools as well as their associated costs. Limitations here include both limitations in performance, i.e., tasks which they do not perform at an adequate level, and what we might call limitations in alignment, i.e., challenges faced by generative AI tools pertaining to alignment with certain ethical and legal standards (you can read all about these standards in our discussion from two weeks back). The costs here are monetary: What does it cost to use these tools? Will individual accounts suffice or should our organization subscribe to these tools via a team or enterprise package?

Once we have a broad understanding of the current state of generative AI tools as well as their associated limitations and costs, we can enumerate target tasks that are potential candidates for automation. At this point of the process, we’ve drawn no conclusions about how appropriate it would be to automate any of these tasks—we’re just setting the table for the next three steps of the process, to which we now turn.

Framework

The next layer of steps in our process is based on a human-centered framework according to which, for a given task, we should always put the human question, “How much human involvement should be included in performing this task?” before the AI question, “How much AI are we going to use to perform this task?” Here I highlight “should” and “are” to draw attention to the fact that, in our view, the first question is prescriptive, involving broader responsibilities, ethical commitments, and other values, while the second question is merely descriptive, only to be answered in light of how we’ve answered the first question.

To inform how we answer these two questions, the first task is to define what we refer to as “motivating values.” These values can be either internal (e.g., efficiency, productivity, fostering innovation and creativity, contributing to employee development, cost-effectiveness, aligning with ethical standards and responsibility, ensuring data privacy and security) or external (e.g., meeting customer expectations, improving market positioning, ensuring regulatory compliance, promoting social responsibility, increasing stakeholder trust). According to a different reckoning, these values can be either AI-phobic, not directly compatible with higher degrees of automation (e.g., human-centeredness, transparency, accountability, craftsmanship and authenticity, environmental stewardship, promoting fairness and mitigating bias, preserving jobs), or AI-friendly, more compatible with higher degrees of automation (e.g., efficiency, productivity, personalization, customer-centricity, competitive edge, cost-effectiveness).

Having defined these motivating values, we ask, for each task, “How much human involvement should we require?” Drawing on our motivating values, we might answer this question even at the crude level of “None,” “Very little,” “Some,” or “A high degree.” For any task receiving the answer “None” or “Very little,” we can then ask, “Can AI perform this task adequately?” And that’s it. No other questions need to be asked at this stage of the process.

After completing the previous step, we can identify two immediate categories of tasks, Quick Wins and Definite Don’ts:

  • A Quick Win is a task that requires little to no human involvement and that AI can perform adequately.

  • A Definite Don’t is a task for which a high degree of human involvement is required.

What about those tasks for which only some human involvement is required? Or those tasks for which little to no human involvement is required but which cannot be adequately performed by AI? These questions lead us to our final three steps.
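To make the decision procedure concrete, here is a minimal sketch of the categorization logic in Python. The function name, the enum, and the “Not Yet” label for tasks AI can’t yet handle are our own illustrative inventions, not part of the framework’s official tooling; note how the human question is answered first, and the AI question is only asked when human involvement is minimal.

```python
from enum import Enum

class HumanInvolvement(Enum):
    """Crude levels of required human involvement for a task."""
    NONE = 0
    VERY_LITTLE = 1
    SOME = 2
    HIGH = 3

def categorize_task(required_involvement: HumanInvolvement,
                    ai_performs_adequately: bool = False) -> str:
    """Classify a task per the human-first framework.

    The prescriptive human question ("how much involvement *should*
    we require?") is answered before the descriptive AI question
    ("can AI perform this adequately?").
    """
    if required_involvement is HumanInvolvement.HIGH:
        return "Definite Don't"          # keep humans in charge
    if required_involvement is HumanInvolvement.SOME:
        return "In-Between Case"          # test incrementally
    # Little to no human involvement required: now ask the AI question.
    if ai_performs_adequately:
        return "Quick Win"                # automate now
    return "Not Yet"                      # revisit as tools improve
```

For example, `categorize_task(HumanInvolvement.NONE, ai_performs_adequately=True)` yields a Quick Win, while a task requiring a high degree of human involvement is a Definite Don’t no matter how capable the AI is.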

Future

In the last layer of our process, we proceed with an eye towards the future.

An In-Between Case is a task for which some degree of human involvement is required to carry it out effectively and responsibly. Just how much AI are we willing to use here? This is something we need to test out before coming to any definite conclusions. Here, a more nuanced, incremental strategy seems prudent.

Next, having assessed the merits of leveraging AI in support of each of our target tasks, we can ask: Are there things we can do now that we could not do before generative AI? Imagine what you would do if technology had no boundaries. 

  • What have been the biggest bottlenecks or inefficiencies in our organization that AI can help us solve?

  • What customer experiences could we transform if we had unlimited content generation or data analysis capabilities?

  • What creative or operational tasks were previously unscalable but could now be boosted with AI?

Finally, we rinse and repeat. We may need to reevaluate the decisions we’ve made in light of tool improvements, including:

  • Higher level of accuracy

  • Streamlining content creation across platforms and modalities

  • Better ability to handle edge cases

  • New applications of generative AI to different types of content

  • Higher degree of control / personalization

  • Lessened environmental impact

Similarly, we may need to reevaluate in light of a potential change in values:

  • Increased acceptance of LLM authorship

  • Allowance of data to be used for training models

  • Acknowledgement of workforce shifts

  • AI use becoming more commonplace and less alien

  • Generational shifts in attitude towards AI use

Summing up

By following this layered, values-driven process, organizations can move beyond ad-hoc experimentation with generative AI and instead adopt a clear, repeatable strategy for its integration. The goal is not simply to automate for automation’s sake, but to align AI adoption with the human priorities, ethical commitments, and business objectives that define your organization’s identity. As tools evolve and cultural attitudes shift, this framework ensures that AI implementation remains adaptable, principled, and purpose-driven, positioning your company to capture quick wins, avoid costly missteps, and continually identify new opportunities that balance innovation with responsibility.

Upcoming Gen AI Events

Free Back to School Gen AI Update: Join us as we kick off the new school year with a free virtual overview of recent developments in generative artificial intelligence. The event is open to the public, so sharpen your pencils and join the fun.

When: Monday, Aug. 25, 12-12:45 p.m. Central time
Where: Virtual event on Zoom
Sign up here.

Gen AI Boot Camp: Join us in-person at Drake University or virtually for our next Generative AI Boot Camp. This one-day workshop will get you up to speed and using generative AI tools effectively in your work.

When: Friday, Sept. 12, 9 a.m. to 3 p.m.
Where: Drake University or on Zoom
Sign up here.