Using Third-Party Tools with AI Inside
About the Episode
In Part 3 of our four-part AI mini-series, Jane Urban and Nathan Trueblood, Chief Product Officer at Enkrypt AI, tackle the complex reality of purchasing AI solutions. While contracts and questionnaires are the industry standard for vetting vendors, this episode reveals why they are no longer enough.
From “over-eager interns” to chatbots that accidentally share illicit recipes, Jane and Nathan explore the reputational risks that hide in third-party code. They discuss why “Red Teaming” (penetration testing for AI) is the new mandatory step for buying software and how to balance trust with rigorous verification.
Highlights from the Episode
- Why standard vendor surveys fail to catch AI “hallucinations” and behavioral risks
- How a bot can be technically secure against hackers yet still destroy your reputation with toxic answers
- Why you must ethically hack third-party tools to find cracks before your customers do
- A real-world look at how a finance bot was tricked into giving illegal advice
- Why you should manage AI tools exactly like an inexperienced, over-eager intern
Tune In for the Next Episode
Stay tuned for the final episode of this mini-series, where we discuss the “Wild West” of External AI. What happens when your proprietary data meets the public internet?
About the Authors
Jane Urban
Chief Data & Analytics Officer
Nathan Trueblood
Chief Product Officer, Enkrypt AI