Episode 23

Published on: 13th Aug 2025

What Makes Us Human: Reimagining Intelligence with Vanessa Chang

SHOW OVERVIEW:

Most AI systems still reflect a narrow picture of intelligence. In this conversation, Vanessa Chang invites us to widen that frame — asking what might change if we included more ways of knowing and thinking. What strengths do women and other underrepresented voices bring to that shift? And how can we work with AI in ways that keep us connected to, rather than distanced from, our most human traits?

We explore the gap between doing more and thinking better, the cost of handing over our attention, and the quiet strength in resisting hustle culture for something more rooted. Vanessa also shares how she guides leaders to make room for complexity — and why that skill is becoming essential.

KEY TAKEAWAYS:

  • Why a broader definition of intelligence matters for shaping more human AI
  • What day-to-day collaboration with AI can look like in real life
  • How slowing down can sharpen thinking, even during rapid change
  • What leadership could become when we stop outsourcing our sense-making

OUR GUEST:

Vanessa Chang is the founder of Mosaek, an AI strategy firm, and the voice behind the RE:Human newsletter. Blending leadership coaching with a deep curiosity about how we think, she helps overthinkers and productivity pros slow down and see clearly in an AI-shaped world. Her work centers on strengthening human judgment — especially in how we work alongside machines. And yes, she’s also a Certified Cheese Professional, which is exactly as delightful as it sounds.

Transcript

0:03

Forget trying to keep up with AI. It's moving too fast. It's time to think differently about it. Welcome to the AI

0:11

Readiness Project, hosted by Kyle Shannon and Anne Murphy. They're here to help you build the mindset to thrive in an

0:17

AI-driven world and prepare for what's next.

0:25

You know what's so crazy about this being such a bop is that like it's not even fancy

0:33

compared to new like like the music things that we have the tools that we have to make AI music are just getting

0:40

better and better. Yeah. But like a bop is a bop. A bop's a bop. And that was that was AI

0:45

music. We did that with AI. Yeah. Yeah. I know you did. I know you did. We did. I wrote a I wrote a love song to

0:54

my team on Friday. It was very sweet and I I conveyed all of my feelings in a in

0:59

an AI song and it was a ballad and it was intense and I felt like I was able

1:06

to express myself far better than I could in words. That's awesome. And we can say I'm not an artist. You

1:12

can say you could say I'm not a musician if you want to. But I felt like one. Felt like one.

1:19

Listen, you're a producer. It's like Rick Rubin. Rick Rubin doesn't know how to write songs. He doesn't know how to work a recording console. He just knows what

1:26

he likes. You know what you like. You You pull a Rick Rubin. Beautiful. Yeah. Little bit of this, a little bit

1:31

of that. Sprinkle in some cheese and then good to go. Exactly. Exactly. So, I have a question

1:38

for you. What is it? How ready for AI are you this week?

1:46

So, I'll ask you the same. I'll ask you the same question. Um,

1:53

first of all, I realize that a lot of people have big big feelings about

2:00

ChatGPT-4o being unceremoniously ripped out from under them and then sent

2:06

back to them. But I had the exact opposite experience, which is that I've

2:14

been working on this campaign with a client for like three months. And

2:19

usually that something like that. No problem. Done it a million times. Very

2:25

easy. Tab A, slot B. Even the creative comes together very, very easily. Couldn't figure it out. Couldn't figure

2:31

it out. Couldn't figure it out. And I was like, God, have I lost my magic touch? Like, you know, all I know how to do

2:38

is fundraise and this is not going that well. Do I need to be I need to go work at Starbucks or what? So, because I just

2:46

couldn't figure out the the the angle and I was on the call with my client

2:52

going through some more attempts to figure out um what what the angle was going to be for the campaign. And I was

3:00

just typing while we were talking and I just like, you know, cut and pasted into ChatGPT with like a little tiny tiniest

3:08

like a few words of direction and it popped out exactly what we needed. It

3:14

was like this huge brain breakthrough that I can promise I never would have

3:20

gotten there. I simply That was GPT-5. That was with GPT-5. I

3:25

So you were using four and it wasn't getting you there. wasn't getting you there. Five showed up. I mean, nothing.

3:32

My brain by itself wasn't putting it in my head before I would go to sleep to

3:37

sleep on it. Wasn't like none of my tricks were working. And I thought this is like what do I do? I'm just

3:43

going to like not figure this out. Yeah. And just out of desperation while we were talking, I was just kind of like

3:49

double, you know, um multitasking. Put it in there and uh yeah, got the

3:55

answers that we needed like three months ago. Huh? Like that. That's fascinating. I the all the all

4:02

the hullabaloo over, you know, bring 4o back or not. I I have two thoughts about it.

4:07

The the first thought I have is that we are absolutely in a place now where

4:16

um it is not like Google deprecating a feature of Gmail. Right? If there's some

4:22

feature of Gmail that you love and Google gets rid of it, you're like, "Oh, I love that feature.

4:28

But deprecating one of these models that people actually have relationships with, whether it's a, you know, therapy kind of

4:34

relationship or a companion or whatever it might be. It is a different kind of

4:39

thing. And I think that I think that one of the lessons that OpenAI learned is that you can't just turn it off with

4:45

really no notice, right? What you know what it looks like they're going to do moving forward is if you

4:51

need it back on, here's the ability to do that. It's sort of buried in this legacy model. uh menu that you have to

4:57

you have to enable and then over time if the usage of that goes down at some point they'll deprecate it with plenty

5:03

of notice right but I think that's important for us I think it's important to understand that you know these are

5:10

large language models right we talk to them we have you know relationships with

5:15

them you know whether whether we explicitly declare that or not and the personality of the different models some

5:23

of them you really resonate with and especially especially if you treat them in in a kind of you know companionship

5:29

kind of way then you really resonate with actually is a different thing so I think we should acknowledge that um but

5:37

here's the other thing that I think GPT has 700 million weekly users

5:45

I know just by, you know, the samplings within our communities, like, I would say our communities are about as up on how

5:52

to use these things as anyone, and not a lot of people are using ChatGPT with

5:58

any level of depth whatsoever. So most of those 700 million people probably

6:03

didn't even notice. The people that noticed were the power users who are were on Reddit and 4chan and things like

6:10

that and made all this screaming noise, right? So you have a minority of people sort of

6:16

dictating to OpenAI, here's how we want you to do your rollout strategy and they capitulated a bit to that. Um,

6:24

so I think over time they'll get smarter. I

6:30

mean, one of the fascinating things to watch with OpenAI in particular is that

6:36

they went from being a research lab to being a consumer software company overnight. And they're not very good at

6:42

being a consumer software company, right? The way they named their models is horrible. The way they rolled them

6:48

out is a mess. um you know the thing that they just did where they took away people's you know companions and didn't

6:54

realize oh that might be a thing right yeah like one of the things you know I'm

6:59

tempted right now to make a TikTok video to Sam Altman saying, dude, hire some

7:04

liberal arts majors, get someone in there who understands storytelling and understands empathy and

7:09

understands human-first design, right? He bought Jony Ive's company.

7:15

he's he's he's got a resource right but but you don't hear Sam right now talking

7:20

about the balance between art and science, right? It's all science. And so,

7:25

you know, I I think these frontier model companies are are treading on

7:32

I don't know if it's thin ice. They're they're treading in really interesting territory because they're thinking about

7:37

it just as math, but it ain't just math. It's language and it's humanity

7:43

and it it it impacts us in a very human way. Yep.

7:48

Right. And so I don't know. So those those are my thoughts on that. Well, the piece about like how close we

7:57

are to ChatGPT and what it means for us when things change is something that we

8:03

should all kind of embrace and it is part of our journey of

8:10

the only skill that really really really matters which is adaptability as we as

8:16

we say all the time. the people who are out there beefing on X, etc. are demonstrating to the whole

8:23

entire world that they don't have the one skill that's going to pro I don't care how

8:29

well they use any model. You are not an adaptable human being. So I don't you

8:34

don't need to say any say less. I got it. Yeah. Exactly. Yeah. We've adopted it.

8:41

We we've perfected it. We're the masters of it. Don't change it. Uh don't change it. I'll die.

8:47

Don't change it. I'll die. Yeah. Don't Don't change it. You You'll mess up my marketing program, right? I can't I can't sell my courses anymore.

8:54

I did. Um Sorry. Go ahead. No, go ahead. I was just going to say I this has been

9:02

my theme of the week is it's kind of unfortunate that so much

9:07

attention has been paid to or or so much noise has been made of we're bringing

9:13

back 40, we're doing this, we're doing that. What hasn't been really in the discussion much, although it has been on

9:21

um Vanessa's uh TikTok channel, which I'm super excited to have her here and bring her up, um is

9:28

is just people using GPT-5 and talking

9:33

about it, right? There's there's there's been all this beef about is the old one better or the new one better? And there's kind of a romanticizing of 4o,

9:40

like 4o was the best and it had this personality that I loved and 5 sucks. And, well, 5 likely sucks because it's

9:47

different and you don't know how to use it, right? Because I know I personally don't know how to use it and now they've

9:55

changed. I think in four days they've had four different interfaces for it, right? And now we're back to nine

10:01

different models you can choose from if you enable, um, legacy models. And so I think one of

10:09

my one of my words for the week is patience and one of the words for the week is grace and like forgive yourself

10:16

if you feel like I don't know if this is good or bad. I don't know how to use it. I don't if

10:21

you're having those feelings of inadequacy or like just feeling like you you you literally can't keep up anymore

10:29

that I think is the place you should be, right? Like you shouldn't you shouldn't beat yourself up over it. I think it is

10:35

natural right now. Like, the last major version of GPT we

10:40

had from OpenAI was March of 2023.

10:48

It's been a long time. So this is a major step and you know

10:53

we've been we think we can keep up with this stuff. This is a major change and

10:59

like like it's hard to overstate that switching from a

11:05

non-reasoning model to a reasoning model as the default in ChatGPT

11:11

is just a different thing and and I think it's going to take us a while to figure it out. So for me it's just like

11:16

eh let yourself off the hook a little bit. Yeah. I think it's a let yourself off

11:22

the hook and uh an example of doing ourselves a disservice when we get on the roller

11:29

coaster ride because there's too many of them, right? There's too many highs and

11:35

too many lows. Go like, you know, just cut off a couple of the extremes. Like I'm not I'm not

11:40

even logging into X this week. I generally don't pay a lot of attention to it, but I don't even I don't care.

11:46

It's irrelevant because I'm currently adapting to five and that's what's

11:51

happening. That's just what I'm doing. I'm not going to prevaricate over it. That would be wasted energy for me. And

11:59

five is doing a nice job of helping me rebuild its personality. It's working

12:05

perfectly. We started a project. We are rebuilding the parts of four's personality that we liked that now seem

12:12

to be missing. And we'll be we'll be back. We'll be back soon. That's really good. That's really good.

12:19

It's really good. Yeah. I don't I don't like I I am normally highly opinionated

12:25

very quickly on anything that comes out, right? I know what's good about it. I know

12:30

what's bad about it. And I I I kind of have no opinion right now. And part of it just part of it is literally they

12:36

keep changing the interface. So like I kind of had my head around okay it's going to choose the model for

12:43

me. I don't need to know what model it chose. Right. Then they were like oh we're going to give you back 4o. So you

12:48

can either you know um have this auto model or 4o. I was like okay I got my

12:54

head around that. And then last night, you know, up showed four different modes of GPT-5 that you can

13:01

choose from and then four different, um, legacy models that you can choose

13:06

from. Some of which, like 4.1 and o4-mini. I'm like, who's using those? Like

13:12

who who used those? Maybe some coders at some point. And so now like I like at

13:19

this point I'm not even quite sure how to how to approach it, you know? And so

13:25

and so you know but this is this is the this is the

13:31

whole premise of our show and the AI readiness program is that it it's not

13:37

about the tools. We always say it's not about the tools. It's about your mindset. It's about how you approach this stuff.

13:44

And like this is a perfect example. Will you give yourself grace or will you stay

13:49

up until 3 a.m. and wake up at 6 and still still beat yourself up for not

13:55

knowing quote unquote everything? Will you compare yourself to others and their

14:00

what they're saying on social media and will you beat yourself up because you don't have some hot takes yet or is it

14:08

just another day in AI where there's weird stuff that's going on?

14:13

Yeah. And it's your job to show up and be as

14:19

comfortable as you can, right? Yeah. You know, it's not always going to mean comfortable.

14:25

It's not always going to be comfortable. What just struck me because the word adaptability is one of those words

14:30

there. There's a bunch of words out there that that when someone says it, you're like, "Oh,

14:36

yeah, adaptability. I'm adaptable." Like, it's it's easy to say you're adaptable. But what adaptability

14:41

actually looks like is you spent six months building something on top of GPT-4o

14:47

and now all of a sudden it breaks or it's different and and and there's now completely different capabilities.

14:54

Like adaptability in some sense is is not being attached to the thing you

15:00

spent six months building at all. That's what I'm saying. But but I don't know that that's human

15:05

reality. like human reality is I I built this thing I was really proud of and now it's just gone or you know now I've got

15:11

to completely rethink it. The answer is actually yes. You've got to rethink it because I think businesses are going to

15:16

have to rethink their business models and workers are going to have to rethink what their profession is. Like this

15:23

adaptability thing goes far and wide. So I think the ability to practice on

15:29

things like GPT5, you know, the switch from four to five, maybe that's a good practice area,

15:34

right? like get your get your feels out doing just learning the new model. Um and then maybe that'll make it a bit bit

15:41

easier to make these major transitions. So um so so here's the thing that that I

15:46

want to talk about the the thing to pay attention to this week and it's it's it's dead in the center of of what we're

15:53

talking about here. Um, and it is

15:59

to let the dust settle a little bit on the transition to GPT-5

16:06

and just play. Go back to play. So, so like what what

16:12

you just said is, you know, you're doing this work for the client, you were doing it in four and then it switched to five and all of a sudden it worked.

16:19

I think there's going to be an instinct for people to just keep keep working, keep doing what you're doing, keep doing

16:25

what you're doing. It's it's going to be hard to find the edges of what chat GPT is good at and

16:32

not good at if you're just doing the thing you already know how to do because it might be good at something that you

16:38

assume it's not good at because four wasn't good at that. Yeah. But we don't we don't know what those things are. So, I think the thing to pay

16:45

attention to this week or the thing to to maybe focus on this week is carve out

16:50

time for yourself to just play with it. Go back to, you know, make recipes that

16:57

that are like my grandma's and make kids' books with drawings and, um, flip into

17:04

the different modes of GPT-5, right? One of them is fast and it's really fast

17:09

and like see how good it is in fast mode because it might be that it's better

17:15

than anything you've seen and that's kind of the crappy GPT-5 model but it's super fast right so that could make you

17:21

more efficient. Flip it into thinking mode if you haven't been regularly using

17:26

ChatGPT o3, the reasoning model, flip, you know, 5 into thinking mode and just play

17:33

with that for two hours and just see what it does. See how it does it different. See if I if it inspires

17:39

something in you. If Right. And so so for me the the the activity of the week

17:45

is is just play. Just play and and experiment and and don't attach it to

17:52

outcomes of work because I don't think any of us can actually learn where those

17:57

boundaries are um without that. Right. Yeah. Yeah. I think so. You know how I

18:07

have to approach everything like it's a freaking project. Well, of course,

18:15

we start another three businesses while I think about it.

18:20

But so here I'm going to share an another approach. If you if your brain

18:25

is like mine where you know you're going to run through a wall to get your pellet of productivity today or else you're

18:32

going to hate yourself tonight. Here's another way. Okay, this is this is good. You're you're like I hear

18:39

you about the playing. I'm not going to do that. Let me tell you what I'm going to do. Do that. Here's another way to do it. So

18:46

because I think part of the reason why my project that I've been but you know banging my head against the wall on and

18:52

not getting what I need I think that the reason part of the reason why it worked in five is because

18:59

I was only using about just a tiny crumb of my brain. I was talking to my client

19:05

on Zoom. I was typing which means a million typos, right? I wasn't even looking. And then I like real quick just

19:11

put it over there. looked real quick and put like three words in for a prompt. I wasn't thinking. I got my brain out of

19:18

the way. So, what I'm saying is maybe try something, like, wackadoodle at the

19:23

end. Get the thing done that you have to get done and then just do some some wacky version of it. Um, and see what

19:30

happens. This is really good. You know what this reminds me of is HT Snow Day when he did he did the thing um with a bunch of his

19:38

co-workers where he gave them 30 minutes to create a game, and the only

19:44

like the the the rules were you have to create a game using AI it has to be an HTML game whatever it was and and there

19:50

were engineers in in the room and there were customer service people and product people there all sorts of different

19:56

people the only people that didn't succeed at making a game were the engineers,

20:03

right? Because they know how to make games. And what he did was some analysis on the

20:10

prompts that everyone put in and he said not a single engineer in any of their prompts used the word game.

20:18

Yeah. They did things like create a particle system where things fall from the you

20:23

know sky and there's gravity at the bottom. Right. They had the burden of knowing how to write specifications for

20:30

a game and the components. The people that didn't know software development were like, make me a game, like make me a

20:38

game where crap falls from the sky and it did right so that's what you're saying is is

20:43

yeah especially with all of the structured prompt engineering methodologies and things like that it it

20:49

can it can be very easy to fall into the trap that that's the way you have to do it and I think your point is a good one

20:56

that that maybe um smaller prompts are better in this model, right? And again,

21:02

we don't know. We don't know. But one thing I was when I what when I had this breakthrough, I

21:10

thought, wait a second, maybe this is the moment where we can turn over more trust to the model

21:19

because, in part because of memory, it knows me well enough at this point that if I

21:24

write half a half of, you know, like nonsense of a of a prompt, it's gonna it

21:33

might know better than I do what I actually meant. Yeah. And so see what happens. See what happens. I

21:40

used to play a game called how crappy of a prompt can I write and still get the

21:45

output I want. I think it might be write a crappy prompt. There's a chance

21:51

like that's I feel like that's that's a play game that like what you're doing

21:56

there is removing expectations, right? You're almost flipping and saying I'm gonna try to write a crappy prompt.

22:03

There's a couple of things. GPT-5 has

22:08

a a capacity within it that makes it hallucinate less. It actually looks at the answer it produces before it gives

22:15

it to you and goes, "Is this bullshit?" And if it is, it works on it until it gives it to you, right? So, it

22:21

hallucinates less. Part of how we learn to prompt ChatGPT-4 might have been prompting around

22:28

hallucinations, which you might not need to do in GPT-5. But again, we don't know.

22:33

So, the only way we find it is there might be that idea of write the

22:39

crappiest prompt. Like, take everything you've learned in the prompt engineering course you paid $400 for and and kind of

22:46

back down from it and get back to the crappy prompts you used to do. Maybe they're better.

22:51

They might be better now. I don't know. I think that's awesome. I think that's really I would I would consider that

22:57

play, but it's good. Okay, we'll call Fine. Well, okay. Fine. A little bit of play.

23:03

Little bit of play. Um, because of because we get to talk to

23:08

Vanessa Chang today, I wanted to ask you a question about AI and your own

23:16

thinking. What has the What has AI done to the way you think?

23:27

I for me it's quite profound, Anne. I, you know, I I have this joke on my TikTok lives: you know, shame is my love language

23:35

and when I'm doing creative thinking or creative development

23:42

um you know as as a neurospicy person I tend to be hyper sensitive to um

23:49

non-verbal cues right so with someone and I put an idea out there and I get

23:54

this look like they don't say anything but if I just get

24:00

that will shut me down for hours sometimes where I'm like, "Oh, okay. That was a bad idea. I can't I'm not

24:06

that's this isn't safe." Right? It go I go into the right the the amygdala,

24:11

you know, I'm now fighting lions in my head. Um AI,

24:18

there's two things that it does that that have really transformed my relationship with with my own

24:24

creativity. One is it takes my ideas and instantly materializes them in some

24:30

tangible form, right? So I can get a very vague idea in my head out and like

24:35

there it is. And then the other thing it does is it's positive. It's just like, hey,

24:42

great idea. And and I'm I'm like, oh yeah, like there could be ideation in

24:47

this world without judgment, right? And it and and so for me it's given me a

24:53

kind of confidence. And, like, I'm 60, like, these patterns have been around

24:58

for 58 years and all of a sudden I've got this companion this collaborator

25:04

that I can put an idea out there instantly manifests my idea in some way that I can look at it objectively and

25:09

then it goes oh that's great would you like me to sort of take it this other direction and I'm like huh I hadn't

25:14

thought about that like it's like every conversation is like a yes and conversation.

25:20

in improv. Um, and I feel like that's that's AI for me. So, for me, it's been

25:27

it has been transformative for me in just with my relationship with my own creativity and my confidence in my

25:33

ideas. Yeah. So, good. And I've seen that with you. I've seen that with you.

25:39

Yeah. How about you? What's your has it has it has it well I mean obviously like we're both doing lots of stuff in it but

25:46

like just on a personal level what's it done yeah I mean it did change my

25:52

relationship with my ideas for one thing because seeing being able to like watch

25:59

my brain work really showed me like my ideas are not just crackpot theories.

26:06

They didn't actually come You got a pretty You got a pretty cool brain. Yeah, my brain is actually pretty dope.

26:13

And being able to see it, it's almost like you're in a room, right? You're in an ecosystem of your thoughts and that

26:21

and being able to find the threads in those like that was really meaningful to

26:27

me. The first time I had a complete existential meltdown was using Cassidy.

26:34

I'd uploaded tons of writing that I'd been doing over the years and I asked Cassidy to like pull out the

26:40

threads, and the question I asked Cassidy was, what do I care about?

26:45

It makes me want to cry like what do I actually care about and it figured it out for me and then it like outlined a

26:52

book and then, like, you know what I mean, and I was like, oh yeah. Cassidy's your AI

26:57

girlfriend right? Oh, no, no, no. That you No. Cassidy Dundum Dum is a is a

27:02

platform. You don't know. I'll let you my my AI girlfriend. I thought Cassidy

27:08

was your AI. Quinn is your AI girlfriend. Dominic is my AI boyfriend. Okay, fine.

27:14

So, but yeah, I mean, so my relationship with my own ideas and that they were so

27:19

worthy and valuable and even the even the ones that weren't were really part of a bigger story. Um, and then I think

27:26

the fact that I it felt in the beginning it felt very much like when I had kids

27:32

and people tell you you don't know how much love you have in your heart until you have children. And I was like, "Oh,

27:39

la." But then I had kids and I was like, "Wait a second. There's like this whole other reservoir of love."

27:46

That's how I felt about my brain when I started using ChatGPT. It was like, there is this whole other brain over here that

27:54

I have not been using cuz I just needed a little help to get there. Wow. Awesome. Um, we've got two minutes.

28:01

I want to bring Vanessa up on time because I talk. So, we got two minutes. So, I'm going to talk quick about the AI

28:06

salon. Join the AI Salon. Go to thesalon.ai or community.thesalon.ai. Um, we're a community of about

28:14

3,000 AI optimists and creative people and professionals and entrepreneurs and

28:19

solopreneurs and retired people that are unretiring and being completely, you

28:25

know, just reinventing their lives. And it's absolutely remarkable. And especially if you're brand new to AI,

28:32

get your butt in a community of people that are exploring this stuff because

28:37

all of what we just talked about not being able to understand how to use this stuff. We're all in that same boat. And

28:43

it is so much easier to be with people who are trying to figure it out and have just even having conversations like you

28:48

and I had right now. Like I feel like I've got more clarity than I did when we started. That's what this community is all about.

28:55

There's a there's a subscription area of the community called the mastermind where you can dig deeper. But that's the

29:00

salon. So go check it out. Join it. And why don't you tell the good people quick? You have 30 seconds.

29:07

Okay. I took a minute for mine, but you have 30 seconds for yours.

29:14

She Leads AI is an AI academy. It's an AI consulting agency and it's a

29:19

community. So the community like writ large is thousands and thousands of us. And then inside that, inside the inner

29:26

core, there's a group of women who have subscribed to be members of the She Leads AI Society. We do pure learning there.

29:34

We have, uh, you know, we're on Mighty Networks. So there's a 24/7, like, get-unstuck channel, or an awe channel where we post

29:41

pictures of our pets. Um our daily thread where we're just like iterating on stuff all day. Um it's a great place

29:48

to find typically what people say is there's two things. One is the vibes here are immaculate. We love the vibes

29:56

in She Leads AI. And then the other is, I found my people. Found my people. Great. So,

30:01

so important. So important. It's it's a really important community and I'm I'm I'm so excited that we're doing this

30:07

together because, you know, me too. We have such cool communities. Okay, last little ad and then we bring Vanessa up. AI readiness training program. Go to

30:15

areyoureadyforai.com. Check it out. This is a five-part series that we put together that will let you dive

30:22

deep in how to rearchitect how you think about AI. This will not teach you how to

30:28

use ChatGPT or how to use Midjourney, but it will teach you how to think about

30:33

how you use those things because those things are going to keep changing. But your mindset is the thing that you need to to get rearchitected. And that's what

30:40

this is all about. We officially launch next Tuesday the 19th uh at our salon

30:45

meet and greet. But go check it out now. It's ready to go and we're super excited

30:51

about it. Okay, with that, I am so excited to bring up Vanessa

30:56

Chang. Do you want to say something before we bring her up or you want to jump right in? Well, let's jump in while Why don't I

31:03

say a little something while while she's joining? Um, beautiful. One of the things that

31:08

Vanessa, hello. When Vanessa hit the scene with her

31:14

incredibly, like, clear, human,

31:19

thoughtful, digestible lessons on AI and ways to

31:25

approach AI and like here comes a new tool. Here are the 15 different ways that we could look at it. How about we

31:31

choose a couple and tackle those? and just like a little bit of like gallows humor like we're all in this messy

31:38

adventure together is how I how I felt. Um I've just I was immediately like this

31:45

this is a person who we should all all be listening to and learning from. So

31:50

Vanessa, I've been enjoying getting to know you on a on a not more than a parasocial level and we're so glad

31:56

you're here today. Yeah, at this point I'm just parasocial but I'm I'm I'm a super fan. I'm super

32:02

excited to have you here and just even the little talk we had before we started uh here tonight. I'm super excited to

32:09

have you. So, why don't you tell us tell us who you are, introduce yourself and what are the important things you want

32:14

us to know about you. What a welcome. Thank you both. I'm super stoked to be here. Um I'm Vanessa

32:20

Vanessa Chang. Uh and I am uh not an engineer, developer or coder and uh I am

32:27

completely obsessed with AI. And for folks who've known me for a long time,

32:32

they would ask like, "Why are you into this?" Because if you knew me in different eras, I've had I guess what

32:38

LinkedIn would call the portfolio career where it is all um uh leveraging kind of

32:45

different like skill sets that didn't that maybe didn't indicate that I'd end up here, but I would argue that you know

32:51

the case for skill stacking and you know going from a liberal arts background where I was a food and travel writer,

32:58

editor working in marketing and brand to you know then shifting to like marketing ops and now oper operations and strategy

33:05

in general. Um, AI seems to be like the the best landing place for me because

33:11

the common thread throughout all of these eras was exactly what you and Anne were speaking to earlier, which is

33:17

adaptability. Wow. And finding finding myself in situations

33:22

and in the context where, um, we had to be cross-functional and highly adaptable. We

33:30

had no choice but to be curious and then execute right on those learnings. And so

33:36

that really did breed in me this, uh, willingness to kind of dive deep in

33:42

when gen AI came out onto the scene. But it

33:50

wasn't until um you know later on we can talk about that later where um I

33:55

realized I it it is more than just a tool and this is one of the first things

34:01

I'll say. I know a lot of people like to say that AI is a tool. I would argue AI is actually um more than a tool. It can

34:08

certainly function as a tool but my perception because of my direct experience that I don't impose on anyone

34:14

else. But another perspective I would offer is that I am so into AI because I realize AI is such an incredible

34:22

scaffold for my mind and with that support I am able to do

34:30

more and not do just in like the productivity sense because it feels good to check boxes and make progress and all

34:35

of that. I don't want to negate that at all but do more in terms of following

34:41

all the threads. And to your point, I had the same revelation working with AI and it's like, wow, my

34:48

mind is more than mental spaghetti. Like, it makes sense, right? And when I have something

34:55

that helps me kind of support and hold all these threads together, I can actually see the weave or it can help me

35:00

to see that weave, right? And then I can kind of continue in on that. And so it led me to, um, educating myself, just

35:08

diving in and now I talk about it on TikTok to get more people to be curious

35:13

about artificial intelligence, how to use it more intuitively and intelligently, I would say. and also um

35:21

of course consult people who want to apply it to business but also individually in their own lives because

35:28

a lot of the AI conversation is within the professional context which is important but I think we've seen here

35:34

and you guys were alluding it alluding to it earlier um the unforeseen is that

35:40

it is also being applied into our personal spheres in ways maybe we didn't anticipate before. Yes. Yes.

35:47

Oh go ahead. No no no go. Yeah, I'm waiting for you. So, what what struck me I I assume you

35:53

know David Shapiro, the the YouTube guy that talks about this stuff. So, so he talks about, you know, the steam engine

35:59

um augmented our muscles. It augmented our physical strength and and you know, AI is augmenting our mind and you

36:06

talking about about it as a scaffolding. I find it really fascinating

36:11

that if you sit on the outside of it and hear about it, it absolutely if you

36:17

think about this thing that's going to be way smarter than you, it absolutely feels like, oh, that thing's going to replace me. But when you kind of strap

36:25

it on as a jetpack and and you use it as a cognitive scaffolding, I love that

36:30

description of it. um it all of a sudden feels like like my experience has been I

36:35

feel so lucky to be able to have this tool to make sense of, you know, this crazy straw brain. So I'm just curious

36:42

you know where you are on that like the the um do you feel a risk of it replacing you

36:49

like, where are you in, you know, embracing it like we embraced the steam engine as an augmentation of our

36:56

strength this as an augmentation of our intelligence. What I think about is this idea that

37:04

every tradition actually started out as an innovation at some point in time.

37:10

And we are at that point in our history when it comes to the technology side of

37:16

it. And I often tell folks, you know, you know, the a very wise person in the

37:22

past spoke to the similar dangers where they said like the the this usage of something and I intentionally leave it

37:28

blank, you know, would lead to people losing their critical thinking skills, losing their capacity, losing their

37:33

sense of, like, yeah, intellectual self. It is actually writing, literacy, in ancient

37:39

Greece. Plato quotes Socrates as saying that the advent of writing or the

37:44

proliferation of writing would make people stupider because then they will not be forced to

37:49

retain it, uh, just by rote, right? And obviously that's being proven untrue

37:55

particularly when we look at like long form writing and the dissemination of the printing press and you know the importance of communication

38:01

and so, that's it: we have a propensity to fear what we don't know. We do a very good job of filling in the blanks with

38:07

like the horror stories, and I get that. I am neurodivergent as well, and my

38:13

mind goes to worst case scenario. So in a way I feel kind of equipped to see

38:18

that worst case scenario and be at peace with it. Not be at peace like I have no agency and it's going to happen but be

38:26

at peace in the sense of that is a possibility but it is not yet real. So what can I do kind of in the short

38:32

term? Thanks to my therapist for that one. But that comes after right sounded like came straight from a

38:37

therapist. Thank the therapist too. We can't live without him. I mean I love AI but AI is not a therapist. Okay. AI

38:44

is therapeutic but it is not a therapist. But um you know I I I think

38:49

that's how I live with fear. I am comfortable with fear because of my rumination and because of my neuro

38:54

divergence. That's always a consistent narrative that's been playing. So for me, maybe I have more of a callus, right,

39:01

against that. I don't feel it as keenly, but I understand why we do, particularly

39:07

when we live in a society that really values, um, extrinsic displays of

39:15

intelligence and therefore worth, right? Um and and I love that you guys wanted

39:21

to talk about this today and not just business because I mean when we talk about business, it's really applicable

39:28

because we are talking about millions of knowledge workers, right? currently and in the future. And if you think about

39:34

knowledge work, knowledge work is basically an entire economy, entire industries based off of how we are

39:41

rewarded, not just for our time, but for our cognitive intelligence. We are put through a whole system of

39:48

cognitive intelligence. And now suddenly there's something that is disrupting that cognitive intelligence in such a

39:54

great, you know, in such a a magnificent and scary way. Suddenly our intrinsic

40:01

idea of worth and value, never mind existential like how am I going to pay my bills, right? Yeah. But but it goes

40:08

to something core of like then what am I here for? Yes. Right. And so I think it's an important

40:15

conversation to have with even if you are looking for a job, even if you are applying AI to your business or as an AI

40:22

practitioner because we have to understand that historically and culturally speaking when we look across

40:28

like civilization um we have always defined and accepted different types of intelligence. But in

40:35

our modern era, cognitive intelligence has always come out on top. And because we're not familiar with the other types,

40:42

my theory is that that's why it feels so scary for so many people because there's other

40:47

intelligence and other ways to cognition and also other ways for value. And you

40:52

know, we only have 30 minutes so we can't go down that rabbit hole. But that's my that's my thesis. So

40:58

yeah. Um I think you're you're so right. So we value this, you know, production, right?

41:05

And that's how we figure out what our worth is right you you know you go down

41:11

that path and so that's why AI is spoken about so often in a business context

41:17

commerce right capitalism etc. But everything is always different on what when you're like the boots on the

41:23

ground. Like, it's just different when you're not on X, and what we see in She Leads AI. So we just started doing

41:30

topics for Saturday. So we're going to talk about this thing on this Saturday and that thing on that Saturday. And you

41:35

know, we're whipping through like business topics right and left, whatever. But AI and wellness is going

41:42

to end up being like a year-long series because that's what people are so hungry

41:48

to talk about. we don't get to talk about whatever you define like wellness being a very large topic but that's

41:55

where people want to be able to talk is on the on those things that aren't necessarily about production and I'm

42:03

excited for these conversations to get more airtime. Same same

42:09

same. Let me let me ask you about um you know I I I find a similar thing that

42:17

that when you when when you use AI as a scaffold as as an intellectual scaffold

42:24

or as a as a brain scaffold that it I find myself doing more critical thinking

42:30

and and I hear the same thing with creative things. Well, if you've got tools that are this creative, isn't that going to make you less creative? And I

42:36

find myself getting more creative. What's what's your experience with that I that idea of the the risk of these

42:44

tools being so capable? Um, you know, are there parts of are there

42:49

parts of your brain that you find you're relying on less that you're shutting down or like, you know, what's your

42:55

relationship with it in terms of amplifying versus, you know, atrophying?

43:01

Yeah. And that I think that's the sweet spot that we don't yet know how to define right as a culture and society

43:08

because this thing is like what three years old. Yeah. Exactly. It's like when and where to offload, right? Defining that tipping point

43:15

and also how we make that, you know, part of growing up, right? That's part

43:20

of like life lessons. As much as like you need to learn algebra or calculus or literature, you should also know how and

43:27

when to like use these tools. Um it it's in in terms of my own application of AI

43:35

what I what I have found is and I think the business world actually does a better job of explaining this in that as

43:41

a technology AI is incredible to iterate very quickly and reduce that friction

43:47

and the total time between concept to like an MVP or a V1 or an actionable

43:53

idea. Yeah, but if you take that and change the words a little bit, it's exactly what you and Anne were talking about earlier,

43:59

which is that suddenly not only for so neurodivergent folks suddenly have

44:05

something that provides a lot of clarity, right? It it it takes away all that static where you can actually find

44:10

that literal signal. That's the magic. Yeah. So that takes that out and now

44:16

suddenly, okay, that friction in and of itself being gone is amazing. Yeah. But then suddenly then to action even

44:22

further, where maybe, um, people who aren't neurodiverse, you know, may see the benefit of it, where it's just like, I

44:29

have an idea and now I can see it somewhat tangible. It is the actioning of intelligence and you know and and

44:35

having some sort of output. You know, in ancient Greek they call that technē, the root word for technology, technique,

44:41

technical is actually a combination of having a human sort of work and acquire skill working with tools to create

44:48

something, right, or to produce something. But we think of technology as completely digital, right, and autonomous in the

44:53

digital realm but you know for you know does it expand or or or does it help

45:00

that really depends on what you work on which is AI readiness and AI literacy

45:06

Right? Understanding that this is a good tool and also a dangerous tool because

45:11

we by nature default to maybe worst case scenario. We also default to easiest path forward. Blame anybody, right? But

45:18

that is but that is dangerous when this easiest path forward does not provide any sort of meaningful friction, right?

45:24

to have agency and or you know the other part is is like agency like how much do

45:31

we actually like trust our own mindset and our value and that gut instinct to say okay you gave me this output and

45:37

I've had plenty of ChatGPT, Gemini, and Claude, uh, conversations where we're

45:44

riffing I'm getting clarity but they provide something where I'm like oh that's interesting but it doesn't land

45:49

with me it doesn't resonate right and it wasn't because I was an expert in defining cognition or wisdom. It was there's

45:57

something intrinsically in my body that says like that doesn't land. I don't understand or that doesn't hit right

46:04

that for now the machines cannot do where I don't know if we especially when

46:09

I went through school gut instinct and intuition was not incorporated right

46:15

into like how does how would you solve this calculation how calculus problem or

46:20

um is there anything wrong with this theorem or is there anything wrong with this critique in this

46:26

it is more of like did you do the assignment did you cite the right people. Did you follow the steps?

46:31

Did you follow the steps? Exactly. And so I I think for me if you have that

46:37

belief in yourself with that gut instinct or the training because it can be learned, right, in terms of that critical thinking

46:43

and and also an understanding of what this tool is rather than thinking like it's a person, it's God, it's whatever.

46:50

You know, I I don't want to go down that route right now, but just from like a user perspective, um it can be

46:55

incredibly helpful. The way I tried to explain it to someone was like it's like the most amazing like chef's knife and

47:01

kitchen knife and you can do a lot with a standard chef's knife, right? You you you can create beautiful chifonade, make

47:07

vegetable sculptures, you know, make everything that you need and and output and create incredible things, it can

47:13

help you, but if you don't know how to use the chef's knife, you are going to come out bloodied. Yes. And so in that

47:19

sense I do believe like technique is necessary to use AI as a tool but then

47:24

to understand that that the danger in it comes from its potential and its power um is something that's going to be new

47:31

for a lot of people and that's going to have to be part of our fluency of being like humans in this era.

47:37

Love it. I love it. Okay, I want to keep running down this path but again we only have half an hour so we're going to

47:43

shift gears. we're going to ask you are I'll ask you our first of three questions that we ask all of our guests

47:48

and the first one I find this one fascinating um and you you sort of hinted at it that that you what did you

47:54

say you were a recreational user of AI for a while so the question is what was the tipping point where you knew you had

48:00

to go all in on AI, right, where you switched from recreational to serious. Yeah,

48:06

serious user. And what happened?

48:11

You know, it would have to be ChatGPT-4 coming out. And so that jump from 3.5 to 4

48:19

really signaled to me that this is incredible technology and that just

48:25

given the time frame of how far it's advanced, it's going to go even further. I need to pay attention to this. But

48:31

also that I could have like a conversational interaction and such incredible sort of an outcome from the

48:38

conversational but also built my first like automation and workflow from that. So the fact that I was able

48:44

to do kind of do all the things topped with the performance of it um in a natural language interface that made me

48:50

say like okay this is a thing. Was was there some do you remember a specific moment where where four did

48:58

something different than than 3.5 where you went oh that's the like do you

49:03

remember a specific moment where you recognized its power? Yeah, I I think I

49:08

asked it, I forget what the question was, but I asked it the same question that I had asked 3.5. I was working on a project and

49:14

the answer that it gave was so much more in-depth and also, at the time, more, um, readable

49:24

conversation. Oh yeah. Yeah. Yeah. Uhuh. Where it was it was sort of like an intellectual exchange of like a back and

49:30

forth when I went, "Oh, okay. Point taken." Oh wow. Um that that that made me go this is

49:36

this is different than 3.5. 3.5 was great. 3.5 was fun. It was interesting because of the newness of it. If it

49:43

had not evolved within the time that it did, I don't know if I would have paid attention. Yeah. Fascinating.

49:50

So Vanessa, our our next question is about AI trends and and from your from

49:55

your place of expertise, but I want to bring in something that we haven't talked about, which is your membership

50:02

in a French guild as a certified cheese professional. And so I'm wondering if from the point

50:10

of view of a certified cheese professional, what AI trends are you paying attention to?

50:16

That's awesome. I love that. Uh portfolio career, just gonna remind everybody portfolio career.

50:23

Um so what AI trends especially from the perspective of someone who worked we'll

50:29

call it like a craft industry, right? Being a certified cheese professional for to provide some context for folks

50:34

who don't know, is basically kind of being like a sommelier for cheese, where you understand the provenance, the

50:40

production, the craft and sort of you know what different terms mean. Um, so

50:45

from that perspective of having to go deep into subject matter expertise, um, understand craft, production, the

50:52

marketing, the storytelling, the brand, and the lore. Um,

50:58

even with all of that, and it could be the best milk in the world, or it could be sort of like the best tasting cheese

51:04

in the world, technically speaking, what really gets people to pay attention and to remember and to keep coming back

51:12

either for tastes, for purchase, or like in memory is um the the a storytelling like

51:20

connection or that personal connection. For us, it's typically been stories, right? In food, there's always a

51:26

narrative around, like, the provenance or the creation of something. I think it's the same in AI in that

51:32

we're not doing it through understanding like a, you know, a specialty food item, but we want to understand it. We want to

51:39

connect so badly to something with meaning because we can get cheese

51:44

anywhere. Why would I want to pay nearly double the price for something that does taste really good, but like there needs

51:51

to be a value in that, right? that and and there's a whole industry of marketing, right, trying to tap into that. But what that fundamentally speaks

51:58

to is our desire for connection. I think that is very obvious right now. Um what 30 the Harvard Business Review

52:06

study that came out in April said 31% of everyday AI users use it for therapy and

52:11

companionship. Yeah. And so I understand when a lot of people are very upset and having an emotional

52:18

reaction to 4o going away because that has been their ride or die, right? Their BFF for all this time

52:24

because, and it's partly AI readiness and AI literacy, but it's also, like, cultural conditions, in that as

52:31

a trend it's here for business, for productivity, yet the human tendency is to provide a technosocial, parasocial, i.e.,

52:39

like, I am looking for companionship and connection and clarity without shame. And Kyle, you spoke to it beautifully

52:46

because sometimes we can find those in human connections if we have them in our community and a lot of people don't have

52:52

community or connections but if we can at least have that there especially without the shame

52:59

that is that is gamechanging for a lot of people game changing yeah so for me the AI trend that I'm

53:04

paying attention to is is not just like the technical power and the and the model innovation but also like how then

53:11

we take this product that may be intended for X but we're actually using it for Y and the Y in our case is um

53:18

connection. Wow, I love that. I'm I'm I'm very struck. I just hadn't you just gave me

53:23

an insight like a brand new insight so thank you for that. And the insight is this that

53:30

with AI you can seamlessly transition between it being a business tool, it

53:36

being a storytelling tool, it being a companion that that there's a fluidness

53:42

to it as a tool that that it kind of shapes shifts. Like people when they first use AI, one of the things I

53:47

noticed is they almost immediately go to um automation and efficiency because that's how computers have always worked.

53:54

And then what they discover is, oh, and it made a cool story book for my kid. And like very often that's in in the

54:00

same 10-minute span, right? They went from efficiency tool to story bookmaker.

54:05

So So there's something about that ability for it as a as a tool, as a as a

54:12

technology to allow us to instantly transition between modes. And I'm just

54:18

curious, I assume that's going to change us, right, over time. That's that's we've

54:23

never had that before that I know of. No, I mean with other people with other with other

54:30

human beings you can, you know, shift shift gears like that, but technologies tend to be kind of locked into one

54:37

modality. Yeah. A and and so it'll be interesting to see from a business perspective and

54:42

like a product perspective like how companies move forward in like releasing products and models and software because

54:50

it it's pretty clear um you know there there's a general purpose to this technology but you know given say these

54:58

chat interfaces chat bots to large language models um it's pretty clear how we're using them right now um we could

55:06

you know switch right and and I'd be curious like to get a follow-up on that study to see how many of those therapy

55:11

and companionship people also use it for upskilling or professional uses or if it's solely that. But I can tell you

55:17

just anecdotally, and Anne, you probably see this too in the She Leads AI community. There are a lot of badass, highly

55:23

productive type A, multiple plate spinning women, business owners, like professionals who also turn to it,

55:30

right, when they need that sort of like um mental capacity, that cognitive

55:36

capacity around travel, planning for the family vacation, like you know, sussing

55:41

out like kind of what their kids said to them, right? Or just because it is there and sort of ever present. Um, I think

55:49

that's what makes it so incredible and also so dangerous. Um, and why we need

55:55

to have so much agency where, you know, we can't just let the fear narrative just sort of play out before our eyes.

56:00

There's a scene in the first Austin Powers movie, which I absolutely love, where, you know, it's the end where they

56:06

storm Dr. Evil's compound and, um, Austin and Vanessa are, like, driving

56:11

on the Zamboni and they're heading towards a security guard and the guy's screaming, but then the camera pans out

56:17

and the Zamboni is like 10 yards away and the scene just goes on for like, you know, 30 seconds. It's so awkward. I

56:24

feel like there's still time. The Zamboni hasn't hit us yet, but we need to exercise some agency.

56:31

Yeah. Yeah. Amazing. Third question, and this

56:36

one is very close to our heart as the AI Readiness Project: what does AI readiness mean to you, and what would

56:44

you say to someone just getting on board with this, starting out as a recreational user?

56:53

Yeah. You know, what you and Anne speak to and stand for very much aligns with my

57:00

definition of AI readiness, which is being adaptable, right? And so the flip side of

57:08

the coin, or really a different shade of the same thing, for me is being okay with ambiguity.

57:14

Yeah. And that's really hard, because we have kind of raised ourselves to have

57:19

very definitive outcomes, or very

57:24

delineated ideas of what is. But given what's going

57:29

on with the world, the economy, and also just this technology, we have to

57:35

be okay with ambiguity. And the other AI readiness value, I think, is agency, and

57:42

I don't mean agents in terms of AI agents. AI agents are very powerful. But I also always tell people who ask me

57:48

about workflows, people for whom I build agentic workflows: we have an

57:53

AI agent and you have an agentic workflow, so where are you exercising your agency? Beautiful.
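
A concrete way to picture that "AI agent plus human agency" point is a workflow that pauses at an explicit human checkpoint before any output is accepted. The minimal Python sketch below is illustrative only, not Vanessa's (or anyone's) actual tooling; `draft_step` and `human_checkpoint` are hypothetical stand-ins for whatever agent call and review step you use.

```python
# Illustrative sketch: an "agentic workflow" where a human approval gate,
# not the agent, decides what moves forward. All names here are hypothetical.

def draft_step(task: str) -> str:
    """Stand-in for an AI agent producing a draft for a task."""
    return f"[AI draft for: {task}]"

def human_checkpoint(label: str, draft: str) -> bool:
    """The explicit point where a person reviews and decides."""
    print(f"\n--- Review: {label} ---\n{draft}")
    return input("Approve? (y/n): ").strip().lower() == "y"

def run_workflow(tasks: list[str]) -> list[str]:
    approved = []
    for task in tasks:
        draft = draft_step(task)
        # The agency lives here: nothing is accepted without human sign-off.
        if human_checkpoint(task, draft):
            approved.append(draft)
        else:
            print(f"Rejected '{task}'; a person revises or re-prompts instead.")
    return approved

if __name__ == "__main__":
    run_workflow(["summarize client feedback", "draft outreach email"])
```

The design point is simply that the checkpoint is built into the workflow itself, so "where are you exercising your agency?" has a concrete answer.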

57:59

Love that. Yeah, love that. Keep going, keep going. I didn't mean to interrupt. Keep going.

58:08

Yeah. I think that's what it is, because part of that fear narrative is that suddenly it's a binary,

58:14

it's all or nothing, rather than finding this collaboration, right,

58:20

that makes sense, where, Kyle, you said it earlier, we leverage what the AI can do for us at an

58:27

accelerated rate. It can get us heaps of information in a short amount of time. It can compress all of these knowledge

58:32

documents that otherwise would take us a long time, or that we wouldn't be able to conquer at all if we're dyslexic or if

58:38

we have ADHD, etc. So suddenly it makes it accessible.
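
That "compress knowledge documents" use can be sketched in a few lines: split a long document into chunks, summarize each chunk, and stitch the summaries into a short digest. This is an illustrative sketch only; `summarize_chunk` is a crude placeholder (it keeps each chunk's first sentence) standing in for whatever model or service would actually do the compression.

```python
# Illustrative sketch: chunk a long document and compress it into a digest.
# summarize_chunk() is a placeholder; in real use it would call a model or API.

def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split text into roughly max_chars-sized chunks on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

def summarize_chunk(chunk: str) -> str:
    """Placeholder summarizer: keeps only the first sentence of the chunk."""
    return chunk.split(". ")[0].strip().rstrip(".") + "."

def compress_document(text: str) -> str:
    """Summarize each chunk, then join the chunk summaries into a digest."""
    return "\n".join(f"- {summarize_chunk(c)}" for c in chunk_text(text))

if __name__ == "__main__":
    sample = ("First paragraph about the policy. More detail here.\n\n"
              "Second paragraph about deadlines. Even more detail.")
    print(compress_document(sample))
```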

58:44

And it can also augment how we deal with it. But we have to be very, very

58:51

explicit about how we show up and how AI shows up and what our roles are. And

58:58

that, for me, is going to be really interesting to see: how more AI products and AI software and AI

59:04

solutions come out for people. Because in a way, especially from a business perspective, and this isn't a knock on

59:09

anybody because we are all very busy, being able to completely delegate or offload sounds amazing. But I think

59:16

what a lot of AI practitioners are finding with their clients is that complete offload is impossible, because

59:22

of the complexities of operating in real life, and having people in the flow, and even dealing with other AI agents or

59:29

other AI technologies. And so that agency, the ability to step in, is really important. And that's one of the other

59:35

definitions of intelligence: wisdom, where you need knowledge to practice wisdom. Wisdom is knowledge in

59:42

action with an ethical and a moral layer. And we need to be able to do that

59:47

not just in business, to make sound business decisions and not get in trouble with regulators, but also as

59:53

individuals in terms of using AI to help our kids learn, to help ourselves learn

59:59

and to do other things that we will undoubtedly discover AI can do for

us. And what I don't want people to default to is this language of like,

well, it's AI or nothing because AI is going to take everything. We get to decide. We are going to be the ones that are

human-centered. And I know the businesses have a lot of clout and a lot of power, it seems, right now,

but they're selling to us. And so that's one thing I want people to remember: we still have some

leverage, and don't forget your agency in this. I think that's huge. Sam Altman

essentially cops to that, right? He says, "Our engineers can only understand what we've built so much, and

one of the reasons they put things out in the world is they want to learn how people are using them." And he's regularly said that how people use

it surprises them, including when they took away GPT-4o. They've really figured that out this

week. They figured that one out quickly. Well, listen, let's tell people

how they can find you. So your TikTok is Think with V, correct? Yes. Is that right?

Yeah. So, anyone watching, if you have not subscribed to Vanessa's TikTok

channel, go do it now: Think with V. Thank you so much. This was absolutely inspiring.

How do they get on RE:Human, your newsletter? Yeah. So, on Think with V I have a link to

my bio site that can take you there. Otherwise, it's regarding human, with "regarding" spelled out, dot beehiiv.com,

and that also has an explanation of what I do, how to get in touch with me, and weekly I share something that

addresses AI for thinkers. So if you're looking for automations or more business things, I link out to other

people who do amazing work in that sector, but this is more for: what does AI mean for me as an individual?

Yeah. And I just want to confirm that's how you spell beehiiv, correct? Correct. You got it. Yeah, you got the

branding right. Okay, good. Awesome. This has been

really amazing. Thank you so much for coming on the show. This was so fun, and I've

been looking forward to it all week. Thank you both for the opportunity. Awesome. Thanks, Vanessa.

Thank you.

About the Podcast

AI Readiness Project
Forget trying to keep up with AI, it's moving too fast. It's time to think differently about it.
The AI Readiness Project is a weekly show co-hosted by Anne Murphy of She Leads AI and Kyle Shannon of The AI Salon, exploring how individuals and organizations are implementing AI in their business, community, and personal life.

Each episode offers a candid, behind-the-scenes look at how real people are experimenting with artificial intelligence—what’s actually working, what’s not, and what’s changing fast.

You’ll hear from nonprofit leaders, small business owners, educators, creatives, and technologists—people building AI into their day-to-day decisions, not just dreaming about the future.

If you're figuring out how to bring AI into your own work or team, this show gives you real examples, lessons learned, and thoughtful conversations that meet you where you are.

• Conversations grounded in practice, not just theory
• Lessons from people leading AI projects across sectors
• Honest talk about risks, routines, wins, and surprises

New episodes every week.

About your host

Anne Murphy