Episode 17

Published on:

2nd Jul 2025

The AI Readiness Project: Turning Ideas Into Action with Tara Bonhorst

This week on The AI Readiness Project, hosts Anne Murphy and Kyle Shannon welcome Tara Bonhorst, founder of Do That Dave, a company dedicated to helping aspiring entrepreneurs bring their ideas to life. With a background in learning design, Tara is on a mission to make AI tools approachable and practical, empowering creators to move from inspiration to real-world outcomes.

𝗪𝗵𝘆 𝗧𝘂𝗻𝗲 𝗜𝗻?

  • Hear how Tara uses AI in unexpected ways — including how she took her cat to the Grand Canyon!
  • Learn strategies for making AI accessible and meaningful, especially for those just starting out.
  • Discover resources like the Future Product Playground course that helps turn future-focused ideas into concrete steps.
  • Gain insight from Tara’s journey supporting creators and building tools that help bring ideas to life.

𝗢𝘂𝗿 𝗚𝘂𝗲𝘀𝘁:

Tara Bonhorst is the founder of Do That Dave, where she guides entrepreneurs from spark to execution. With a background in learning design, Tara is passionate about helping others explore AI in ways that are creative, practical, and approachable. Connect with Tara on LinkedIn or visit dothatdave.com for more.

Transcript

0:04

Forget trying to keep up with AI. It's moving too fast. It's time to think differently about it. Welcome to the AI

0:11

Readiness Project hosted by Kyle Shannon and Anne Murphy. They're here to help you build the mindset to thrive in an

0:18

AI-driven world and prepare for what's next. [Music]

0:27

Oh, Anne Murphy. Hello. Kyle Shannon. How am I looking? Am I

0:34

looking like a disturbingly old guy at the club? [Laughter]

0:43

At the club. Um,

0:49

how are you doing with your AI readiness these days this week? I'm doing good.

0:56

I'm doing good. I uh I oscillate wildly

1:01

from feeling clueless to feeling like I have a clue. And I'm I'm in a process of

1:07

starting to feel like I have a clue, which I think is a very bad place to be.

1:16

Why do you why do you think so? Because I think when

1:24

about the time you think you have AI figured out, something changes and you don't. I There's something about there's

1:31

something about being in an ongoing

1:37

adaptive adaptability mode where just you're going to have to adapt to new stuff all the time. I think when you

1:42

start to feel like, ah, I think I got my head around this, I think you risk trying to lock it down and have it be

1:49

that forever. So, so that's why I think it's a bad thing. How are you doing? What's going on with you?

1:58

Well, you know, I I said before the show, I'm not exactly sure what I'm talking about

2:03

today because we're doing we're like in the where the

2:09

rubber meets the road phase of the work that we're doing in the AI space. I'll

2:16

give you a couple of examples. You know, we're going to talk about today the AI readiness program and that it has

2:22

quietly dropped. It has it is soft launched. It is soft launched as we say.

2:30

Then we... Oh, Rachel. Good. We've got our friends in the house, Steo and

2:37

Reggie. Nice. Um so, you know, so the AI readiness program is now available to

2:44

people. Meanwhile, we have just today we had our first

2:50

um, cohort. We launched our first cohort of the Certified AI Educator program in She Leads AI. So we

2:59

now have our first yeah our first group of six women going through this and

3:09

we and the thing that's so cool about it is that we have an internship. So they will not only be learning how to educate

3:17

like through an adult learning learning and development, you know, framework where you're like officially doing

3:23

things the right way for people to learn things. But then they get to try that out on um our audience of young college

3:33

age women in poor countries who are wanting to learn AI. So our educators

3:40

get to try it out. Meanwhile, I got into this thing today where I need to say to

3:47

somebody, who are the, um, consultants in the She Leads AI consulting

3:53

agency? So, I was like looking around. I'm like, well, here here's who they are. Like all of these things are we've

4:01

got the conference like I was going to say all these things are happening. Yeah. And we've got the conference. So,

4:07

like here we are. All of this stuff is coming to fruition. And I feel like this

4:12

is what people on their AI journey will experience at some point where you just

4:17

go there's so much there there now. There's such a center of gravity. Whether it's

4:24

because you've decided you're Joy and now you are an AI filmmaker, right? Or

4:31

you know you're you and you're like, you know what I'm gonna do? I'm gonna show up on TikTok every single day for

4:36

forever or whatever. however it manifests. Like bunches of us are

4:42

getting into this era now where we know what we're doing. We know what we have to offer the world and we're making it

4:49

available to the world. And like that is just very very it's a very heady thing.

4:54

It's a head. Okay. So So I'm gonna I want to let you off the hook a little bit if that's okay.

5:00

So please do. So you you said you like weren't really

5:07

sure how ready you were for AI. I think you're not in a sort of as we define, you know, AI readiness, it's this

5:13

curious sort of exploratory, adventurous kind of headspace, right?

5:20

Yeah. What you just described, you actually can't be in that headspace right now. Right. What you just described is, oh crap, I've got to be

5:26

the expert right here. I've got to like I've got to show as a leader who who knows what they're talking about, right?

5:32

And and as you said, put put the rubber to the road. So, I think I think the fact that you're feeling not too

5:38

connected to that open curious thing is actually a really good sign of where you

5:44

are, right? You you need to be there as a leader and whether you make the right decision or not that time will tell. But

5:52

you have to be there having made a clear decision. Here's who we are. Here's where we're going. Here's the consultants. here's how this course is

5:58

going to go. Here's how the AI readiness is going to roll out. Right? So I give I

6:05

feel like you're in a really good place, but it's not where where you and I normally are when we talk about this AI

6:11

readiness. It's not where Yeah. Like because it is what it is. I need to be able to say this is how we teach

6:19

prompting. This is not how we teach prompting. Yeah. Exactly. This is how we

6:25

do use case-based education. Yeah. And is it the right thing for all time?

6:30

Probably not. Will you change it a week from now? Maybe. But right now, you have to be definitive. So, I think that's I

6:37

think I think I think you're in a very healthy place. I I applaud you. I mean, damn. Some stuff going on.

6:44

I think I am. And and I and I guess what I what I'm grateful for is that

6:50

at some point in watching watching the um learning lab I you said something

6:58

about developing a point of view. I think sometimes I think our brains like

7:04

mix together but there was something about developing a point of view. Yeah. Yeah. Yeah.

7:10

And I started thinking about how there are so many leaders out there like especially like um industry leaders

7:17

where developing a point of view is a very high stakes risky situation. No

7:22

matter what the topic is when they develop a point of view that's when they're sticking their neck out, right?

7:29

And now we're asking people to develop a point of view around AI, right? And it

7:35

you kind of have to like Yeah. I I don't I don't know that it's developing a point of view about AI. I

7:43

think it's developing a point of view to be able to be successful with AI. I

7:48

think if if AI is going to amplify you and you don't have a point of view, then it's just going to amplify lots of ideas

7:54

that are not focused. But but I think it was you that told me that, you know, when I was talking about

8:01

this idea of have a point of view, have a creative point of view, that there's a lot of people in this world that have

8:06

never been asked to have a point of view or to your point, if they have one, it

8:12

feels dangerous, right? I may work in a corporate culture where having a point of view is like risking your job, right?

8:18

Risking your livelihood. Um, and yet when you have a suite of

8:25

tools that can allow you to literally do anything, how do you choose what to do?

8:31

And and I I think that only comes from, well, what do you want,

8:36

right? What do you want? What does good look like? Who's it for? What are you trying to accomplish? And it's you're

8:42

the only one that can look at the output of what this AI does and go, is that

8:48

good or not? Right? And so like by definition, you have to have a a point of view or you're just going to be

8:53

putting out crap or you're gonna or you're going to ignore what it put out and just put it in the world and it's

8:59

going to be riddled with mistakes and then that's going to come back on you as well.

9:05

Totally. Totally. You're right. I think to be good at to be good at AI, you have

9:12

to have a you have to have a point of view. And you can get by for a little

9:18

while without it. Um but eventually, you know, you have to be able to articulate

9:24

over Thanksgiving dinner. Somebody's going to ask you, you know,

9:29

Yep. Why is it good? What's it good at? What's, you know, all that sort of stuff. Yeah. I think inherently we we

9:35

all have some sort of an opinion on something. When we generate something, we go, "Oh, that's good or that's not good." But having a point of view is

9:42

really thinking about upfront. I think I'll talk about this a little bit in what to focus on this week. You know,

9:48

before you sit down to do something with AI, have you thought about what you actually want to do or are you just

9:55

sitting down to crank something out? And you know, there's there's a place for that, just playing and just seeing where

10:02

it goes. But there's also if you want to accomplish something then I think having a point of view and and refining that

10:07

point of view right like this is why I talk about Rick Rubin so much but I also think about like Gordon Ramsay right the

10:14

one thing you know about Gordon Ramsay is he's got a clear and definite point of view that if something doesn't taste

10:20

good to him he literally spits it out right and he tells you what he feels

10:26

with no filter right and I think that I think that could be

10:31

That strong a point of view I think is sometimes makes people a little bit fearful. Well, I don't want to be a

10:37

jerk, right? But having a point of view is important. And if you have a point of view, that by

10:44

definition means that there will be some people that agree with your point of view and some people that don't agree

10:50

with your point of view. And and a lot of people don't necessarily want to be in that uncomfortableness,

10:56

you know. Well, yeah. And I'll give an example. So

11:01

recently, one of the things that's been happening to us is the question about AI and the

11:09

environment has been on everybody's minds in all of our presentations. And for a little while, I was kind of going

11:17

to let that slide. like I was going to just stay silent on that topic because

11:26

it's... you're threading quite the needle once you open up that

11:31

conversation like you're threading a needle with the people in the room for sure. You are in a situation where you

11:37

can lose all of your credibility in their you know from their perspective or

11:43

where you can change some people's minds or where you can sound like you're just polyiana.

11:48

So that's been coming up more and more for people and so we're having this robust conversation today about how do

11:54

we handle when you're in the moment and you let's say somebody has hired you to give a presentation

12:01

which is a deliverable and you've got somebody who's needling on the you know

12:06

any of any objection around AI but particularly the environmental one because it's a hard question to answer.

12:13

It's like actually difficult. you have to know stuff or you have to have really good

12:18

talking points. And so some of us are coming up with our point of view on that like right now.

12:25

Right now like while we're talking or right after we're talking, we're making notes to ourselves and we're doing research and we're coming up with what

12:32

are our bullet points for next time. Yeah. Yeah. I I I'll tell you my I I'm glad that

12:38

you're going there. I am not going there. I'm I'm not taking much time to to to put much energy into that

12:46

ironically because here here's here's my thought on it is what is the purpose of saying AI

12:56

takes up too much energy? If the purpose of it is to say, hey, I I really want to

13:01

do something about that. I love AI and I want to find a way to make it more efficient, then great. But I think most

13:07

people bring that up as an argument why they're staying on the sidelines. It takes up too much energy, so I'm not

13:14

going to do it. Right? It's one of those tropes that's really easy to say: it takes 10 times more energy than a

13:20

Google search. Okay. Yeah, it does. It also, you know, makes you, you know, 50 times more efficient at your job, which,

13:27

you know, may counter counteract that. But here's what I can promise you. The

13:32

AI models are getting more and more efficient. as they get more efficient, a lot of them are going to start running

13:38

on edge devices like phones where they're not going to take up so much energy and and also the AI is going to

13:44

start to solve things like, you know, fusion and and you know, we're going to

13:50

find more efficient ways to generate energy. So, I feel like in in the long-term scheme of things, it will be

13:57

solved, right? because if we want to have if we want to have these toys to play with, if we want to have these

14:02

tools, we're going to have to solve the energy thing. Um, so so my only point would be to the degree that you're using

14:10

it takes too much energy as an excuse not to get into AI, I don't I I no

14:15

longer accept it. Like if you want to if you if you if you want to go into AI and and uh and find a

14:24

way to make it more efficient, more power to you. Go do it. let's let's go to solar, let's go to wind power,

14:29

whatever it might be, great. Um, but if it's just an excuse to stay on the sidelines, then it just for me, it just

14:35

becomes another trope of why people are avoiding dealing with this. Yeah. And

14:41

it's listen, at the core of AI readiness is being is being ready. Your your sync

14:48

just went way off, so I think our our timing there's a there's a skip. Yeah.

14:54

So, it is way off. Yeah. Hi, sorry about that. Um, didn't mean didn't mean to cut

14:59

you off, but um but but being AI ready means being engaged with what's going on with AI. So, um, so with that, let me

15:07

let's let's uh Yeah.

15:14

Okay. So, one thing uh two things actually. So on just so just to like

15:19

give you or and the listeners a little nugget when people because I agree with

15:24

you that a lot of times it's just one of a series of examples of reasons to stay

15:30

on the sidelines, and also it's a pretty unintellectual argument because it's a false dichotomy.

15:38

You're it's not either or. So, if we're going to kind of go the route of being a

15:43

little bit um nonsensical, right, I'll recommend two things. One is if you're

15:51

concerned about the amount of energy that each chat GPT, you know, thread is

15:57

taking, learn how to prompt better so you're not going back and forth. So, I put it back on them.

16:04

That's good. And then the other one is kind of like buying carbon offsets. For

16:10

every hour you save by using chat GPT, spend an hour doing climate um justice

16:17

advocacy. There you go. Perfect. Love it. It's great. Beautiful. All right,

16:22

you win. Uh let's see. Oh, that's not it. Okay, one thing to pay attention to this week. Okay.

16:29

Um, so, so here's a thing I think would be valuable for people to pay attention to

16:36

this week. Chains. And here's what I mean by chains.

16:43

Um, one of the things that has struck me in the past

16:50

couple of weeks is that when someone says, "What's the

16:56

best tool for XYZ?" It's almost an impossible question to answer because the tools are

17:02

really complicated. A lot of tools are multimodal and have all these different features that they do.

17:09

Um, and I also feel like saying things like, you know, what's the

17:15

best tool for making, you know, this kind of output

17:21

kind of reduces how you make stuff with AI down to a

17:29

really simple um, it's just like a tool like it it perpetuates the idea that you

17:35

just push a button with AI and out comes some work, right? So, one of the tropes that people say is, "Well, AI is for

17:40

lazy people. You just push a button out and out comes the thing." So, when I talk about chains, what I'm talking about is the process, the workflow. What

17:48

are the chain of events that have to happen if you say, "Oo, I want to make one of those Yeti videos, right, where

17:55

there's all, you know, the yetis sort of walking through the woods talking about whatever product they're talking about.

18:00

I want to do one of those." Well, what is the chain of events? What are the chain of tools? What are

18:06

the chain of activities? What are the jobs that you need to do to be able to produce one of those videos? Well, the

18:12

first thing you have to do is you have to have concept, right? So, step one in the chain is have a concept. Now, is

18:18

that a concept you came up with? Are you just copying someone else's or you going to go to chat GPT and brainstorm it? Are

18:24

you going to go to a whiteboard and brainstorm it? And then second thing might be work with chat GPT to come up

18:29

with a campaign, right? Maybe you want to talk about you want to have yetis talking about Yeti coolers, right? Get

18:35

it? It's Yeti and Yeti, right? And how did you come up with that idea? Maybe that's a collaboration with ChatGPT. And

18:41

then now once you have a concept, now you can come up with individual ad ideas and maybe you have ChatGPT write those.

18:48

Then you have to come up with a voice for the character if you want to control the voice. Or maybe you're just going to use Veo 3 and just have it automatically do

18:55

it. So there everything that we do in AI is some chain of events, some chain of

19:01

jobs. And some of those involve AI tools and a lot of those just involve human

19:08

input, right? I want to do this. ChatGPT gave me a result. I don't like the result. Make it better. There's a lot of

19:14

back and forth. I'm going to go make an image of what my Yeti is going to look like. No, I don't like that. No, I don't like that. No, I don't like that. Back

19:20

to point of view. So, my my thing to pay attention to this week is when someone

19:26

asks you, "What tool did you use for XYZ?" Don't just answer the tool. If I just

19:32

say to you, "Oh, I used Hedra to make that or I used V3 to make that." You're just like, "Oh, okay." And then you

19:38

don't have a Veo 3 subscription and you don't know how to do it. But if I say to you, oh, to make that thing, well, first

19:44

I had the concept and then I kind of did this brainstorming thing with chat GPT and then I went to the whiteboard and

19:49

then I went to ElevenLabs and I designed a voice, and then... there's something about unpacking that process

19:56

that lets the person asking the question know that there's way more to this AI stuff than just pushing a button. It's

20:04

there's a lot of craft in it, right? Even if what you're just doing is

20:09

a recipe book, right? It's I got to come up with the ingredients and then I come up with the tone and and and I started

20:16

with my grandma's recipes, but some of them were bad and so we changed some of them and like there's there's a chain of

20:22

jobs that has to happen before you get to some output. Some things with AI are

20:27

much more simple, but most things that I see that are of any value have been some chain of events, some chain of jobs that

20:34

are strung together. So that's my thing to pay attention to this week. So, I'm curious as to your thoughts of if I am

20:41

blowing smoke up my own butt. No, I Okay. Yeah. So, Reggie in the

20:48

comments just said exactly what I was saying, which is this is a craft. So,

20:53

there's all these people out there saying AI is ruining our craft, right? AI just with AI. Yes, a craft. Yes, it

21:03

is a freaking craft. Don't take that away from me. It is a craft. I am a com

21:10

I am a creator. And you know what? Yesterday when uh we were in the AI

21:15

Salon Presents, we had, um, Pearson

21:20

Marks from

21:26

JellyPod, who was on the AI Salon Presents last night, and he was showing us Jelly

21:32

Pod, which is, it's like an AI podcast creation tool. Yeah. And what was

21:39

really cool was he was feeling sheepish about doing a demo because a lot of times demos are can kind of be have the

21:47

connotation of being salesy. Well, Leah Fasten, who's your co- um co-founder of

21:53

the AI Salon, she said, "Wait a second. Time out. You've created this. This is

22:00

your art." Yeah. This is your art. Do us the honor and the privilege of showing

22:07

it to us. like and and by the way what an act of vulnerability too when we show

22:12

people our our babies our artwork and I just that stuck in my head and I was

22:18

like this why can't we claim in the sea of people on LinkedIn etc saying you

22:24

know, AI this, AI that, AI is taking your jobs, AI is ruining all the crafts, blah blah blah, why don't we get to own that

22:32

AI itself is, yeah, a way to be creators.

22:37

The fact the fact that you know that that when when you sit down and to

22:43

think, "Hey, I need to create a uh an ad for LinkedIn."

22:48

You sit down and you're like, "Okay, I need to do this this." You you kind of know the seven things you're going to

22:54

do, right? How did you get to that? Well, you got to that through trial and error. And you got and you got it to a point some of those chains of jobs that

23:01

you put together are known, right? like like if you were to if you were to document it, you could probably say,

23:06

okay, for this kind of output, I use this tool, then I use that, then I use that, then I use like you could write it down. There's other things where you're

23:12

just like, I don't really know. I kind of know where I'm going. And you kind of meander down this path of the of these

23:18

things. And at some point, if you get it right, you're like, oh, I want to remember that. So, what what I would

23:24

encourage people to do for the next week or so is start noticing your chains of

23:30

jobs. Start noticing those chains of events. start documenting them, right?

23:35

Start documenting them and then start talking about them because I think that that idea of revealing the craft of AI

23:43

is really important. And I think it especially for like the doom the doomers

23:49

that are like, "Oh, AI is just push a button. It's just lazy." No, it's only lazy if you treat it lazily. If you

23:55

treat it like a craft, yeah, then it's not lazy. It is, it is an art.

24:03

So, here's here's a fun challenge. What if we all committed to when the next time somebody says, "What did you use to

24:10

make that?" We we take like a minute to take a step

24:15

back and say, "Yeah, I will tell you the tool that I ultimately used, but here was my thought process going into tell

24:21

you how I got there." Yeah. Or let me tell you how I got there. Or it you know what? It was a lot of tools and some of

24:28

them weren't even AI. You know what? I I used a whiteboard. I used whiteboard technology for this. It's from like

24:35

1962. [Laughter]

24:41

Yeah. And then to maybe even also to articulate that there's a category of

24:46

tools that I used. I chose from, you know, maybe you're like, you know what I did? I used a tool that makes videos,

24:54

right? There's 15 of them. or I tried three tools that make videos and two of

25:00

them failed. This was the one I went with, but I've done other projects where I went with one of the other ones.

25:05

Right? Like again, that's that idea of it's not a singular thing. If if we

25:10

reduce it down to the tool did the work, it removes us out of the process

25:16

and we're critical to the process. Even when we have agents and we're saying, "Hey agents, go off and do this stuff."

25:23

We're the ones that say what to do. We're the ones that curate what's comes comes back. We're the ones that choose

25:28

what to put in the world. So all of that is some chain of events. So that's my

25:34

that's my big my word for the week is chain. My activity for the week is start paying

25:40

attention to the the craft of what you do and start talking about it. Yeah, I

25:46

like it. I think that's good stuff, Kyle. Thank you. Thank you. All right, let's uh I'm excited to get Tara up

25:52

here. So, let me let me just talk a little bit about the AI salon and then you can talk about she leads and then we'll talk about the AI readiness

25:58

training program. Um, so if you have not joined the AI salon,

26:04

what are you doing with your life? What are you doing with your life? Go to thesalon.ai, click on join our community

26:10

and join our community. And if you haven't introduced yourself to the community, what are you doing with your life? introduce yourself to the

26:17

community. And then if you're in the community and you're like, "Yeah, I'm just kind of"

26:23

then you should really think about joining the mastermind, which is this um more focused

26:29

um way to step up your AI game within the AI salon. Um so the AI salon is free. The mastermind is

26:35

subscription-based. Uh and you can really step up and it's it's a really great group of people that are stepping in there. It's just getting started. I'm

26:42

really excited about it. So, go join the AI salon. Join the AI salon. Um, so, and also if

26:51

you're a woman in AI or wanting to get into AI, join She Leads AI. So, what we

26:56

have is an AI academy for education. We

27:02

have a consulting agency. We have a paid community. And we have something that's evolving into like kind of a think tank.

27:09

We're not exactly sure what it is, but it's all of our like nerd brains wanting

27:14

to nerd with more people who are similarly nerdy. Um, and one of the ways

27:21

to figure to find out if we're a good fit for you is to attend social

27:26

Saturdays. So, this is an evergreen opportunity. You are fully invited any

27:33

Saturday of the year. We are on Zoom until

27:38

noon. You can get a Zoom link by just going to sheleadsai.ai.

27:43

There's a little registration button. You get the Zoom link and it's good for any Saturday ever. It's free and we

27:51

would love to have you. Yeah, come join. The other thing, a little known fact, uh I run the mansplaining division of She

27:57

Leads AI. I'm really excited about that. Um Yeah. So, yeah, it's a it's a it's a

28:03

lesser attended little corner of the community, but uh but you know, I'm I'm

28:09

there to answer any questions you may or may not have that I can't answer properly. Exactly. Exactly. Or I can

28:16

restate what you really meant. I'm happy. Yeah. Um so,

28:22

Um, AI readiness: the AI readiness training program has soft

28:27

launched. We officially launch in, I think, a week. Um, but if you go to areyoureadyforai.com,

28:33

this is the um the training that we put together based on uh the the uh

28:39

presentations from AI Festivus, and we looked at what were the commonalities across all of those and

28:45

how could we break that down into um a training this is per request of the of

28:50

the participants of of AI Festivus and it is now live uh Vicky Baptiste uh put

28:56

together the training and it's really quite remarkable and very thorough and you get a lot uh for for what you're

29:05

paying here. You get a lot. So, go check it out. Um it is well worth your time. Um yeah, so there's that. Thoughts on

29:12

thoughts on are you ready for AI? You excited? Well, I think that I, you know, so I was going back through some of our

29:19

speakers and the their presentations yesterday and

29:25

how enduring their

29:30

recommendations and and thoughts and perspectives and the things that they're

29:35

up to. Like, I know that AI changes at the speed of light. I get that. I'll

29:40

give you that. But what's in the AI readiness program like like what's in

29:46

Tara's program that we're going to talk about in a minute, it's fundamentals. It's like who you are and how you relate

29:53

to AI. It's like how you think and it it's not like you have to learn exactly

30:00

every button in some app that you've never heard of and when it goes away you're you won't even know because you

30:06

just it's like not that important, right? And so the the content there is

30:13

going to help you think broadly about AI and you're going to know, you know, the

30:19

next thing that comes down the pike. You're going to be like, "Oh, this is how I thought about, you know, making

30:26

videos before, but now I can or how I made images before and here's how making

30:32

videos is different, right?" Like how I approached it. So, it'll allow you to

30:38

create a a way to like organize your thoughts around AI. Yep. It's a it's a it's a shift in mindset. Um and uh and

30:46

really really really exciting. So, please go check that out. So, with that, why don't you tell the good folks about

30:51

Tara and then we'll bring her up here, but while she's backstage, you know, say all sorts of nice things about us. I'll

30:56

say all sorts of things. Yeah, exactly. Um, so

31:02

one of the things I really appreciate about Tara is that she is a learning and

31:08

development expert. So when she creates something, it's made the right way for our brains to actually learn. Nice. You

31:17

know, I mean, Kyle eventually teaches us stuff. He gets around to it. It's like if you

31:23

hang around Yeah. you'll learn something. Whereas Tara's

31:29

like, if I do X, your brain does Y, right? So, she knows how to teach us

31:35

stuff. And she took that skill set and turned that turned and created AI

31:41

educational content that is the only ondemand content that I would ever

31:48

recommend to anyone other than the AI readiness program because but that's

31:53

different, right? If someone was like, I need to know how to use AI, I would only say go to Tara. Go to Tara's programs.

32:01

And she's also a delight. She is so generous to the AI community and such a

32:08

fan and such a supporter. And she's also hilarious. And her recent content series has been a

32:16

joy to follow along with. Tara, welcome. Hi, Tara. Welcome, welcome, welcome.

32:22

Hey. Oh, thank you so much. That is, you know, such high praise from you, Ann.

32:28

And yeah, I'm so happy to be here. Uh,

32:34

and yeah, I'll just I'll tell you a little bit about Do That Dave first. Great. You know, a lot of people ask

32:40

about the name. In the sci-fi classic film and novel 2001: A Space Odyssey,

32:47

a rogue AI system named HAL 9000

32:53

very famously declines to follow the commands of their human crew.

33:00

you know, there's, you know, out there in space and uh they're trying to uh get inside, one

33:08

of the astronauts is trying to get back inside the spaceship and says, "Open the pod bay doors, Hal." And Hal says, "I'm

33:14

sorry, Dave. I'm afraid I can't do that." So, you know, it's a little cheeky, but you know, I launched this to

33:22

help people discover AI regardless of their level of technical expertise and

33:28

learn how to use it in their work, but also to understand how the technology works so they can control it instead of

33:36

letting it control them, you know. So, we offer interactive self-paced courses

33:42

in basic AI literacy and skills. uh we do live hands-on workshops so people can

33:49

actually learn by doing instead of just listening because like you know Ann was

33:55

saying that's one of those principles of learning design where you know listening and remembering is very much the lowest

34:03

order of learning objectives that you can have. And what you really want to

34:09

get to is these higher-order objectives like application, analysis, you know,

34:15

thinking about how to use these tools, you know, and building these chains of

34:21

processes that we can, you know, actually get to our outcomes and not

34:26

we're not just using AI for AI's sake, right? We're trying to get something out of it. Do something. So do that, Dave.

34:35

That's a little bit of where that comes from. And uh yeah, one of my favorite uh

34:41

one of my, one of our, my favorite course at Do That Dave is the one that I presented during AI Festivus, which is

34:49

called the Future Product Playground. And this is a creative exercise, really,

34:57

where we show people how to brainstorm product ideas of the future using chat

35:04

GPT or Claude or their AI friend of choice and then you know come up with

35:11

something that's going to come out 50 or even like a hundred years from now that seems like sci-fi right now but you know

35:18

is probably not going to be for very long. And then we actually just really take this to ridiculous lengths and have

35:25

people do a whole go to market strategy for it. So they do their business plan for their future product and then do a

35:32

pitch deck because you got to go out and get some VC money to make your your future product and we do creating a

35:40

product landing page, creating collateral like videos,

35:47

uh you know social media stuff. So, it's a way to really introduce people to a really broad variety of different tools

35:55

and different use cases, but in a way that's not, you know, just going through

36:01

a checklist. It's like all anchored around this, you know, fun concept. And

36:08

it we we've really seen people just kind of realize how easy it is also to

36:17

take some of these ideas that maybe used to seem really out there and really impossible but can you know you can do

36:26

this stuff now. You know, it's not, you know, a lot of people, I think, get hung

36:33

up on this idea of like, well, I I wouldn't know how to start a business or I wouldn't know how to launch an

36:38

organization or, you know, a big even just a big project, but it's really, you

36:44

know, you can now it's so much easier than it used to be because you've got

36:50

all these tools to help you. So, we've seen a lot of I've seen a lot of students just kind of have that moment

36:57

where, you know, they go from thinking, "Wow, I could never do that," to, "Well, I

37:03

can. Totally can. So, you you mentioned something, Tara, that

37:09

um I'm I'm curious if there's been evolution there. You're talking about sort of use cases, like I want to make a

37:15

homepage, you know, I want to make a website. What's your philosophy and has it changed on, you know, did you teach

37:22

here's ChatGPT 101, here's tool 101, versus here's a use case? You know,

37:28

do you

37:33

teach the tool, do you teach the use case, do you teach the chain of events? Like where where's your head these days? And has that evolved?

37:41

Yeah, I always try to stay tool agnostic with everything I do. I think you mentioned that earlier, you know,

37:48

which is a great way to think about it. Talk about the class of tools that you're using, not like particular one

37:55

because they're all a little a little different. Sometimes it's going to depend on, you know, what you're looking

38:02

for and what you're trying to accomplish. I myself, you know, switch back between back and forth between chat

38:08

GPT and Claude and Gemini just depending on what it is I'm trying to do. And I do

38:16

try to give those tips as I go along for what's worked for me. But at the end of

38:22

the day, it's really going to depend on, you know, what your style is and what

38:27

you're looking for. So yes, really try to teach that skill of like thinking about which tool you want to use too.

38:34

Thinking about what you want to achieve with it instead of like yeah we're going

38:39

to use ChatGPT for this and we're going to use Gamma for this. I mean, I have to pick one to demo it and so I use my

38:46

favorites but also try to give people you know thoughts on how they can

38:53

work through that decision-making process for themselves. So,

38:59

Very cool. You're muted, Anne. Oh, man, I'm on mute.

39:05

How did that happen? Um I Sorry, I got distracted by the chat.

39:13

People are talking about whether guys can sneak into She Leads AI or not. It would be awfully

39:19

difficult because we are a cameras on community. So, you would need to you would get kicked out just by not having

39:26

your camera on. Um, anyway, what I love about the playground is that

39:36

it it really addresses this thing that Kyle was talking about about the chains.

39:41

You're teaching people to think so much more than use the tools. The tools are

39:47

just like this. They're the they're like

39:53

You just went mute again. And

40:01

we have a gremlin. I can hear I just unmuted you or No, I can't. Oh, am I am

40:06

I back? You're back. Okay. Yeah, my something is happening. If something if

40:12

something happens even more, I'll come out. I'll go out and come back in. But it's like the tools are just like stops

40:19

along the way. And I love how if you get somebody to think about something, you

40:25

know, kind of kind of kooky, crazy, like unthinkable that could happen in the future, you give them the gift of not

40:33

having to worry about how because our re in our real everyday life, all we ever

40:39

have is how. We're constantly stuck in how. How am I going to do that thing, right? And it's so boring and pedantic, right? You

40:46

can't spend time with your head in the clouds because you're constantly having to be practical. But what you're letting

40:52

people do is play and dream. And you're like, "Don't worry about the how. We'll figure that out. It's going to be one of

40:59

these type of tools and one of those type of tools, but in between you're going to do all this stuff. You're going

41:04

to maybe journal. What are all the different things that you could create?" Like I specifically remember in first

41:11

grade they asked us to come up with an invention and I came up with an invention that my

41:20

uh substitute teacher by the way told me was impossible. And I was like yeah

41:27

that's cool. It's not may maybe it's I mean in first grade I was like whatever but you know yeah it's not it may not

41:33

seem like it's possible but who cares about that right? Like, and that's what you're letting people do. They're like,

41:39

"Well, okay, we'll figure that out." I just think it's awesome. Yeah. I I wish I could find that teacher and just slap

41:46

them. I'm not a violent person, but like, how dare you say, you know, quash

41:51

people's dreams. But, you know, and I think that's a lot of what's happening right now with AI adoption

41:59

in enterprise level. It's just everyone's thinking about how they can

42:04

use these tools to do what they're doing right now. And that's great. That's a great place to start there. Obviously,

42:11

in a business, you have to be focused on the ROI. But there's this opportunity

42:17

right now to think, you know, think about, well, what's even possible? What

42:22

is there's things that we haven't even dreamed of yet that, you know, and you

42:27

mentioned like Kyle, you mentioned fusion. Maybe there's even something

42:32

completely beyond what we've imagined yet. So, and

42:38

certainly within a business context, right? How we do what we do as a business may radically alter over time

42:47

and you know, is that change going to come from within or is it going to come from, you know, is it going to bowl you

42:52

over from the outside? And I, you know, I fear that a lot of the companies that just live in this sort of efficiency use

42:58

the tool as a, you know, as an efficiency tool model are going to get blindsided. And those companies that

43:04

figure out how to train their people to think critically and and do exactly what you're saying, they're going to be the

43:09

ones that that discover new new way of doing things from within. And I think that's ultimately, you know, that's

43:15

that's how the winners will emerge over the next 10 years. I think, you know, I

43:20

agree. I don't think it's going to be these big players that we that we know right now. It's going to be

43:27

organizations that are more agile and more creative and you know,

43:34

not the Titanic, which is a lot of, you know, these big ones headed straight for the

43:39

iceberg. Well, we're out here kind of like in our little boats. Can I ask you a question on on audience? So, you've

43:46

how how long have you been doing do that, Dave? How long have you been doing AI education, whatever it's been called?

43:53

I think it's been about, you know, a year and a half. A year and a half. I'm just What's the Have you seen a shift in

44:00

your students in in in in the learners? Have you seen a shift in either how they're showing up or or how they're

44:07

learning or or what you're teaching them? What do what are you seeing um in

44:12

in your in your you know, audience base? Yeah. I mean, I don't know if it's it,

44:19

you know, I always definitely try to cater to the absolute beginners. And I

44:25

think the misconception right now is that there aren't very many of those left. And you know, we're inside our AI

44:33

bubble, right? We're all very much in it all the time, every day. And it's kind

44:39

of inconceivable that there's people who haven't even heard of it or tried it. But I know for a fact that there are

44:45

plenty because I meet them all the time. Yeah. You know, my my uncle uh you know

44:51

from he was just visiting me this past uh

44:57

couple weeks ago and he's like so what is this AI thing? I don't even get it.

45:03

Like is this and he's like is this is AI the one making all these horrible

45:08

comments on Facebook? Yeah. Well, it could be. Yeah. I mean,

45:14

unfortunately, some of those are real people, but a lot of them are Yeah, exactly.

45:20

It's like and and he's like, you know, he's been, you know, he's worked in the

45:25

corporate world for for decades and he's semi-retired now, but still it's like

45:32

it's not a given that everybody knows this yet. And I'm I'm in Silicon Valley,

45:39

too. I live in San Francisco, so we're even more of a bubble. We're even worse thinking that everybody knows it. And I

45:45

go to these events around town and I talk to people and I'm like, "Yeah, there's people who haven't heard of it." And they're like, "No." Wow. It could

45:52

be. Yes, it could. It could be. That's amazing. Yeah. All right. So,

46:00

let's tee up our three questions here. Yeah. Jump in. So Tara, one of the questions that we

46:05

ask all of our guests to answer is when did you know that it was time to go

46:12

allin on AI and what happened next?

46:18

Uh you know it was pretty shortly after I think chat GPT 3.5 was first released

46:26

to the public there in November of 2022. That

46:33

was at the time I was doing a lot of freelance work as a learning designer and you know my business partner at the

46:40

time kind of came to me with this and like have you heard of this and I was like oh it'll never be better than me

46:47

and it's still and I started to play around with it and

46:52

I was like oh I guess I was like oh I could use this to do this you know I

46:58

used it to do the parts of the stuff that I always hate doing as a learning designer. Like I like coming up with

47:04

concepts on my own and writing a lot of the content, but then I get to the end I'm like, "Oh crap, I got to make quiz

47:10

questions. I got to do a course overview. This is the stuff where I'm like, I don't got time for this." So,

47:17

you know, I really started to see that potential and just really see it as not

47:22

like a replacement for every anybody. Like I never thought in the beginning

47:29

the first you know the initial iterations of the technology I never thought it was going to be replacing

47:34

anybody. Now I'm not so sure but as it's advanced but you know at the time I was

47:39

like but this can be you know this isn't going to replace me but this can

47:44

multiply my effect. I can do more and I can do more of the things that I've

47:51

always wanted to do. So I really just and and you know I've I've seen I've

47:57

done a lot of work in the past developing programs for

48:02

um you know people in marginalized communities helping you know build like

48:08

technical assistance programs for people who want to start their own businesses was what I was doing a lot of at the

48:14

time and I was like you know this and I just thought this is going to leave those people behind just like everything

48:20

else has every other advance. and every other, you know, the disparities in

48:26

educational outcomes are so stark that this is going

48:32

to just make the gap even wider than ever. So, that's really where I where I

48:37

got in and what I've been doing ever since is just trying to make sure that

48:44

everybody has access to this. Yeah, that's amazing. That's great. Um, okay.

48:52

So, so let me let me ask you question number two that we asked our guests and it's it's kind of I I'm interested in

48:59

this one just based on what you just said. So, given your your specialty, your focus and your your worldview, what

49:07

are some trends or what is a trend in AI that you're following and why? It's my

49:13

favorite question. Oh man. I mean,

49:20

everybody's looking at agents right now, right? And uh I just I've been following

49:27

this very much from the sidelines. It's not something that I do a lot in my

49:35

teaching practice because it takes more to get to that point

49:42

where you're ready for that than people realize. You don't discover AI and then the next day you're

49:49

building your own agent. It's so much more goes into it than that. You have to know how to work with these models. You

49:57

have to know how they behave. You have to know how to get consistent results. And like again, you know, theme of the

50:03

day, you have to have your workflows documented. You need to know the chain

50:08

of events that needs to happen to get to what you want. And that's not even an AI

50:14

problem, right? That's a workflow, that's an ops pro, you know, problem that these workflows are, you know, tend

50:21

to be like especially creative workflows, they tend to be illdefined,

50:27

live in somebody's head exclusively or, you know, their institutional knowledge

50:32

that aren't necessarily like easily teased out. And that's a big part of,

50:38

you know, what you have to do before you even get there. I think it's kind of funny and I also think it's hilarious

50:44

the way, you know, people are slapping this agent label on things that are not.

50:50

Yeah. It's like natural foods, right? When everyone was like, "Oh, yeah. It's

50:56

it's a label that means literally nothing right now." Yeah. Exactly. Yeah. Yeah. Now with now

51:05

with Agentic, that is not like that's an automation. And this is not a system that has agency and that can make

51:11

decisions and that can take independent actions and act you know so that it's

51:16

going to take a long time I think for that to shake out. So but I just kind of sit and watch this trend and kind of

51:22

like, yeah. What's, what

51:27

will be the tipping point for you where you feel like uh like someone's actually cracked it? So, I assume you've played

51:33

with like Manis or Gen Spark or those things that have agentic elements in them. Um, you know, do you think those

51:40

things are there? Like what what's going to be what's it going to look like? What what's going to show up in the world

51:45

where you go, "Ah, now now we've shifted gears."

51:51

Yeah. I think it's going to come from, yeah, getting the workflows actually

51:57

documented and seeing how

52:04

these models and these systems, these agentic systems can actually pass off these different tasks in a way that

52:11

makes sense. I'm not sure I know exactly what that looks like or how we're going to know. Yeah, that was like it's just,

52:20

you know, maybe we're going to call customer service one day and

52:26

an agent an agent is actually going to solve our problem. Right. in a way that makes sense and that's

52:33

gonna be like that was actually an agent that was

52:39

but it's I think it's it's quite a ways off and you know I that's something I do a lot in my workshops right now I've

52:46

been working with some teams that are just they've been using AI but want to

52:53

go a little deeper and be more strategic and I'm trying to get them set up for

52:58

this occasion. And I like to say, like, GPTs or, like,

53:03

custom AI assistants, that's like the gateway drug to agents. You see what they're

53:09

doing. You observe how these tools behave so that you can set yourself up

53:15

to use, you know, hand stuff off in a way that you don't have to be there.

53:22

Yeah. That's really and that's the thing I guess if if you're wondering like where's the tipping point, it's when

53:28

they act right when you're not there. Oh, that's good. That's really good.

53:34

That's really good. I love that.

53:40

That's great. I'm with you on the, you know, when when

53:46

we're working with people and they're at the stage where they're ready to make custom GPTs, the brilliant part and why

53:51

I think currently like we should still have the goal of getting to the phase

53:58

where we know or we're interested enough and we know how to make custom GPTs

54:03

because if that's your goal, you now know how to prepare for agents. you know

54:09

how to prepare for different workflows. You know how to prepare better for the even very very basic stuff because the

54:16

the the very beginning of it which is so boring I want like it's so pedestrian is

54:23

you have to have documents that you put it you know you have to have some documentation some files that matter

54:30

that like are current and are relevant and you have to Kyle and I were looking

54:36

for a file yesterday on his live for 17 minutes these poor humans were sitting around watching us talk about a

54:43

file that we thought maybe sort of kind of existed. This was you know great content

54:49

obviously and you know if if you can't if you can't locate that because of the

54:56

way you run your business then you can't benefit you can't do context

55:03

engineering. Now, we're trying to call it engineering, by the way, which is just like so annoying. Um, why can't we

55:10

just say context? Just like why couldn't we just say prompting? Why do we have to say engineering on the end of it? So

55:16

that tech bros are the only people who supposedly can do the thing. But anyway,

55:22

I believe that teaching people some of these goals gets them ready for the bigger stuff.

55:28

All right, question. I heard that for the first time. Oh, sorry. I heard that term for the first time the other day,

55:34

context engineering. And I was like, Lord, you mean you have your Google Drive

55:40

organized? Good job. Yeah, exactly. Yeah, exactly. I mean, I don't

55:47

you know how to talk. Okay, congrats for that. Okay, so what So here's our third

55:54

question. Our third and last. We've always We've got three. This is three of three. Um what does AI readiness mean to

56:00

you? And what would you advise somebody who's just getting into AI?

56:07

Ready? Yeah. I mean, it's hard to be ready for this because, you know, we don't even know what we're ready for.

56:13

Like, you know, we're not even sure this is going to end. It's probably not. Um, but

56:21

I think you know the part for really being ready for anything is this like if

56:29

you want to call it context engineering or whatever, but like knowing what you have, knowing where your knowledge

56:35

lives, knowing what knowledge you need for what tasks,

56:42

having your workflows documented, you know, really thinking about, well,

56:48

if I were to ask, you know, a human to do this

56:54

task, how would I do that? What, you know, what conversation would I have to have? What would they need to have? And

57:01

that's really, you know, yeah, knowing who you are, what you do, and what you're all about is the best way to be ready. From

57:08

there you can just start start tinkering with it and you know have that context

57:15

ready you know I always uh I have what I call my AI go bag which has got like

57:21

like my all my you know the prompts that I've used the instructions that I've

57:26

created for different GPTs and assistant applications, and all the context, like

57:33

my greatest hits that go into pretty much everything that I do and you know

57:38

having that ready I think that's that's how I stay ready. You got to keep track

57:43

of this stuff. Yeah. And I, you know, I take it from somebody who's had to go back and dig a lot of this stuff back up

57:50

after the fact. Easier to do it. It's easier to do it as you go along. Save those best prompts. Have your documents

57:58

that you use all the time in one spot. You know, don't ever sink effort into a

58:05

particular tool without making sure you documented what you did with that tool

58:10

and what the outcomes were and you know how you could replicate it somewhere

58:16

else because that's great. Some of these tools are just not going to be around. Yeah. Even six months from now. So never

58:23

never you know trap your work into any one platform. I think that's how you

58:29

stay ready. Stay nimble and just, you know, be ready to move all the time.

58:35

Love it. Love it. Stay ready. Stay ready. Questions.

58:41

Yeah, those are fun questions, aren't they? Awesome. So, let's um Oh, wait. I'm

58:48

getting I'm getting an echo. I don't know who it's coming from. Um but but thank you for that. That was really

58:54

awesome. Um uh if you if you have not done it yet, go check out do

58:59

thatdave.com. That's, uh, that's Tara's website. Um, and anything else you want to say to the

59:05

good people about what they should go uh check out on your side?

59:10

Uh yeah, I mean if you have never tried AI before, we have a course called basic

59:16

training. It's always free and it's really just gets right down to like how

59:22

do you like opening up chat GPT creating an account like takes it all the way

59:28

back to the very beginning and just like guides people step by step through their first conversation and then also like

59:36

what do you do next? What like how do you share these outputs? How do you save these outputs? How do you verify these

59:43

outputs? most important part, you know, and and so that's a great place for

59:49

anybody who's really just at the very beginning. Otherwise, you know, we've got uh the course that we offer through

59:57

uh, She Leads AI Academy, which is our Explore Generative AI course, and that


really digs deeper into how these tools work, but with with an applied bent.


Like, you want to know how they work so you can know how to use them best. It's not just it's it's not super technical.


We're going to show you what you need to know to get the most out of them. Beautiful. Beautiful. Great. Well, Tara,


thank you so much for joining us. Thank you. It was so much fun. You guys have the most fun. It's so fun to have you on


the show. So, great. Thank you. Good luck with everything. Thanks for


everything. Bye. Bye. Thank you.


[Music]



About the Podcast

AI Readiness Project
Forget trying to keep up with AI, it's moving too fast. It's time to think differently about it.
The AI Readiness Project is a weekly show co-hosted by Anne Murphy of She Leads AI and Kyle Shannon of The AI Salon, exploring how individuals and organizations are implementing AI in their business, community, and personal life.

Each episode offers a candid, behind-the-scenes look at how real people are experimenting with artificial intelligence—what’s actually working, what’s not, and what’s changing fast.

You’ll hear from nonprofit leaders, small business owners, educators, creatives, and technologists—people building AI into their day-to-day decisions, not just dreaming about the future.

If you're figuring out how to bring AI into your own work or team, this show gives you real examples, lessons learned, and thoughtful conversations that meet you where you are.

• Conversations grounded in practice, not just theory
• Lessons from people leading AI projects across sectors
• Honest talk about risks, routines, wins, and surprises

New episodes every week.

About your host


Anne Murphy