
AI tools in practice

Brick Maier

In my recent experience developing content for the Healthy Home Evaluator certification course, I’ve gained some insight into the practical applications and limitations of AI tools in professional workflows. Over three months, I built a three-hour course with 23 separate lectures, 45 quiz questions, and ten scenario-based assessments, all with the help of AI and a subject matter expert in the loop for review. Any use of AI for content creation in a training capacity must have an expert in the loop to review, revise, and refine.

While these tools have significantly accelerated certain aspects of content creation, the reality of working with AI is more complex than the “one-click solution” often portrayed in AI hype marketing.

My workflow involved a tool chain of three to four different AI platforms. We chose Anthropic’s Claude as our main platform, primarily for its deep reasoning capability and its privacy-first model. We used Claude for book analysis (with permission from the copyright holder) and slide content creation, the Claude API for PDF processing, ElevenLabs for audio processing, and various tools for image generation and prompt engineering.
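As a rough illustration, the PDF-processing step of a tool chain like this can be scripted against the Anthropic API. This is a minimal sketch, not my production pipeline: the model id, prompt, and file paths are placeholder assumptions, and it assumes the `anthropic` Python SDK is installed with an `ANTHROPIC_API_KEY` in the environment.

```python
import base64


def encode_pdf(path: str) -> str:
    """Read a PDF from disk and base64-encode it for the Claude document API."""
    with open(path, "rb") as f:
        return base64.standard_b64encode(f.read()).decode("ascii")


def summarize_pdf(path: str, prompt: str) -> str:
    """Send a PDF plus an instruction to Claude and return the text reply.

    Sketch only: model name is a placeholder; requires network access
    and an ANTHROPIC_API_KEY environment variable.
    """
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-7-sonnet-latest",  # placeholder model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": encode_pdf(path),
                    },
                },
                {"type": "text", "text": prompt},
            ],
        }],
    )
    return message.content[0].text
```

In a multi-platform chain like the one described above, the string returned here would then be handed off to the next tool (for example, a text-to-speech service) rather than used directly.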

Rather than a streamlined process, this resulted in what felt like a Rube Goldberg machine – functional but complex, with content flowing between platforms before final local processing on my computer. This process, while complicated, does produce better content faster.

This complexity raises important questions about scalability. While the system effectively speeds up content production, its intricacy makes it challenging to integrate into existing platforms or transfer to other users without significant simplification.

Through conversations with professionals in several renewable energy fields, from workforce development to utility-scale design engineering, I discovered two distinct use cases that highlight the strengths and limitations of current AI tools. The first involved grant writers seeking to analyze and iterate on PowerPoint presentations – a use case well-suited to AI’s ability to process and suggest improvements to existing content. The second case, however, revealed crucial limitations: an engineering firm attempting to use AI to analyze government documents and bylaws for complex permitting requirements for utility-scale solar. They encountered significant issues with ‘hallucinations’, where AI tools confidently presented incorrect or non-existent information.

These experiences point to a fundamental insight: AI tools, particularly large language models, face challenges with information reliability and verification. They excel in scenarios where creative interpretation and synthesis are valuable, such as content creation and general analysis. However, they can become a liability in situations requiring strict accuracy, such as engineering specifications or regulatory compliance, where any fabricated or misinterpreted information could lead to serious errors. The challenge isn’t just about accuracy rates – it’s about the resources required to verify the AI’s output against source documents.

This understanding suggests that successful implementation of AI tools requires careful consideration of the task at hand. Where exact, verified information is required, traditional methods may still be more reliable. However, for tasks involving content creation, summarization, or general analysis where some interpretation is acceptable, AI tools can significantly enhance productivity – provided users build appropriate verification steps into their workflows.
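One concrete form such a verification step can take is an automated check that every direct quotation the model attributes to a source document actually appears there verbatim. The sketch below is an illustrative assumption about workflow design, not a method from the course: the function names and the convention that quotes appear in double quotation marks are hypothetical.

```python
import re


def extract_quotes(model_output: str) -> list[str]:
    """Pull double-quoted spans out of a model's answer."""
    return re.findall(r'"([^"]+)"', model_output)


def unverified_quotes(model_output: str, source_text: str) -> list[str]:
    """Return quotes that do NOT appear verbatim in the source document.

    Whitespace and case are normalized so line-wrapping differences
    don't cause false alarms; anything returned needs human review.
    """
    normalized_source = " ".join(source_text.split()).lower()
    return [
        q for q in extract_quotes(model_output)
        if " ".join(q.split()).lower() not in normalized_source
    ]
```

For example, if the source bylaw says permits must be filed within 30 days and the model also "quotes" a fee waiver that is not in the document, only the fabricated quote is flagged for review; it is a cheap first filter, not a substitute for the expert-in-the-loop review described above.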

What I learned is that while AI tools can improve efficiency and quality, they require thoughtful integration and an understanding of their limitations to be truly effective in professional settings.

I have developed a practical two-hour workshop that addresses these limitations with secure workflows that help avoid hallucinations. It covers practical applications of AI in instructional design and content creation using the latest Claude 3.7 Sonnet. Attendees will learn to leverage AI for summarizing technical knowledge, organizing learning materials, preparing presentation decks, and building prototype widgets, with a focus on maintaining human expertise throughout the process. You can learn more below. Reach out if you have any questions!

Written by Brick Maier

With an M.Ed. from the University of Notre Dame and seven years as a Sr. Learning Experience Designer at Amazon, Brick brings a wealth of expertise to HeatSpring. As the clean energy industry evolves, Brick will focus on strengthening relationships with L&D leadership at current and prospective HeatSpring Teams customers and optimizing course creation processes. His multifaceted experience in education, media production, and learning experience design is available to support your clean energy initiatives.
