Why Voice AI Is Ready for Prime Time written by John Jantsch read more at Duct Tape Marketing
Episode Overview
Voice agents are rapidly evolving from novelty tools into core revenue infrastructure. Instead of functioning as glorified talking FAQs, today’s AI voice systems can serve as qualifiers, schedulers, concierges, onboarding guides, retention reps, and upsell assistants.
In this episode of the Duct Tape Marketing Podcast, John Jantsch interviews Ryan Mrha, founder of Yodify, a platform that enables creators and brands to stay personal at scale through AI-powered voice and text agents trained on their content libraries.
Mrha explains why purpose-built voice agents outperform generic AI tools, how multi-layered LLM orchestration reduces hallucinations, and where businesses can safely begin experimenting with voice AI. The conversation explores the future of buyer behavior, the role of AI in modern sales processes, ethical transparency considerations, and practical implementation strategies for agencies and creators alike.
If you’re curious about where voice AI fits in your marketing, sales, or customer experience strategy, this episode delivers both vision and practical guidance.
About Ryan Mrha
Ryan Mrha is the founder of Yodify, a platform that helps creators and brands maintain personal engagement at scale. Yodify allows followers to call or text an AI agent that speaks in the creator’s own voice, grounded in their existing content library.
By combining voice cloning, multi-layer LLM orchestration, and structured prompt engineering, Mrha focuses on building purpose-driven AI agents that feel authentic, aligned with brand voice, and capable of performing specific business roles.
He is also involved in launching Methodiq, a platform focused on AI-powered facilitation experiences.
Key Takeaways
1. Voice Agents Are Moving from Novelty to Revenue Infrastructure
Businesses should stop thinking of voice AI as a talking FAQ and start treating it as a role within the organization, such as a business development rep, onboarding assistant, or scheduler.
2. Generic AI Tools Deliver Poor Results Without Role Design
Simply uploading a knowledge base and prompting “act like John” produces inconsistent outcomes. Effective voice agents require:
- Defined job descriptions
- Multiple orchestrated LLM layers
- Targeted prompts for specific states or roles
- Structured knowledge access
3. Multi-LLM Architecture Reduces Hallucinations
Instead of relying on a single large prompt, Yodify breaks tasks into targeted LLM calls, such as orchestration, action execution, and response generation. This improves accuracy and reduces hallucination risk.
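The routing pattern described above can be sketched in a few lines. This is a hypothetical illustration, not Yodify's actual implementation: `call_llm` is a stub standing in for any real model API, and the role names and prompts are invented for the example. The point is the shape — a narrow orchestration call picks a role, and a second call with a role-specific prompt generates the reply.

```python
def call_llm(system_prompt: str, user_message: str) -> str:
    """Stub for a real LLM API call; returns canned replies for illustration."""
    if "Classify" in system_prompt:
        # Orchestration layer: its only job is to pick a route.
        return "scheduling" if "appointment" in user_message.lower() else "faq"
    # Response layer: echo which narrow role handled the message.
    return f"[{system_prompt.splitlines()[0]}] reply to: {user_message}"

# Each role gets its own targeted prompt instead of one catch-all persona.
ROLE_PROMPTS = {
    "scheduling": "You are the scheduling assistant.\nOnly book or change appointments.",
    "faq": "You are the FAQ assistant.\nAnswer only from the provided knowledge base.",
}

def handle_turn(user_message: str) -> str:
    # Layer 1: classify the request into a known role.
    route = call_llm("Classify the request as 'scheduling' or 'faq'.", user_message)
    # Layer 2: a narrow, role-specific prompt generates the actual response.
    return call_llm(ROLE_PROMPTS[route], user_message)

print(handle_turn("Can I get an appointment on Tuesday?"))
```

Because each call has one small job, a bad answer is easier to trace to a specific prompt, which is the accuracy benefit Mrha describes.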
4. Buyer Behavior Is Changing
Modern buyers prefer to:
- Conduct independent research
- Avoid early-stage sales conversations
- Engage only when close to making a decision
Voice agents can provide 24/7 answers without hard selling, aligning perfectly with this shift in buyer psychology.
5. Transparency May Become a Competitive Advantage
There is still tension around whether users feel “duped” when speaking to AI. However, proactively positioning a voice agent as an “AI advisor” may enhance trust and acceptance.
6. Start Small with Clear Use Cases
The best way to implement voice AI is through a focused, low-risk pilot:
- A receptionist agent
- Appointment scheduling
- A simple qualification call flow
- A basic single-prompt LLM test
Start narrow. Prove ROI. Then expand.
7. Voice AI Is Especially Valuable for Creators
As creators scale, personal interaction becomes impossible. Voice agents allow fans to text or call an AI trained on the creator’s content, maintaining connection while scaling engagement.
Great Moments from the Episode
- 00:03 Voice Agents as Revenue Infrastructure: John frames the shift from novelty AI to functional, role-based AI agents.
- 01:12 What a Voice Agent Actually Is: Ryan explains how voice agents combine LLM responses with text-to-speech tools.
- 02:23 Why “Just Upload Everything” Fails: Discussion of why dumping a content library into an LLM produces poor results without structured orchestration.
- 03:42 Role-Based AI vs Emotional AI: Clarifying that effective agents are built around business roles such as sales, support, and concierge, not emotional states.
- 07:11 AI in the Modern Buyer’s Journey: Exploring how voice agents can replace early-stage sales calls.
- 10:18 Do Customers Feel Duped? The ethical and experiential implications of AI transparency.
- 12:08 Building a Purpose-Built Agent: Ryan outlines how projects begin with small, focused use cases.
- 13:30 The AI Receptionist Use Case: Why simple use cases like scheduling can deliver immediate value.
- 18:54 Safe Pilot for a Marketing Agency: How agencies can test AI voice agents without major risk.
Memorable Quotes
- “Voice agents are moving from novelty to revenue infrastructure.” – John Jantsch
- “If you’re very specific about what you want the LLM to do, you’re going to get much better results. It can’t do too much at once.” – Ryan Mrha
- “People don’t want to be sold. They just want to ask their questions.” – Ryan Mrha
- “There’s no point in building something your customers don’t want.” – Ryan Mrha
Episode Transcript
John Jantsch (00:03.032)
So voice agents are moving from novelty to revenue infrastructure. And that is, if you stop treating them like talking FAQs and start treating them like a role: maybe qualifier, scheduler, concierge, onboarding guide, retention rep, upsell assistant. That’s what we’re going to talk about today.
Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Ryan Mrha. He is the founder of Yodify. Yodify helps creators and brands stay personal at scale by letting followers call and text an AI that speaks in the creator’s own voice, grounded in their content library. So Ryan, welcome to the show.
Ryan Mrha (00:47.59)
Thanks for having me.
John Jantsch (00:48.686)
Did I say, you know, I asked you how to pronounce your last name, but then did I pronounce Y-O-D-I-F-Y right? Okay. Awesome. So we’re talking about voice AI. So let’s kind of set the table. There’s a lot out there, you know, there’s IVRs, there’s LLM-powered chatbots. I mean, so, so what’s a voice agent?
Ryan Mrha (00:53.748)
Yes, it is Yodify.
Ryan Mrha (01:12.166)
Yeah, so a voice agent, or I mean, most agents are just interacting with an LLM. A voice agent is essentially just an LLM that knows it’s supposed to respond in a way that’s like natural speech. And then you use another tool to have it actually read that text out loud as it’s coming. Yeah.
John Jantsch (01:18.37)
Mm-hmm.
John Jantsch (01:36.366)
So typically, like if I had a library, if I wanted somebody to be able to answer questions about my business or my service, I would just give it everything I could. And then hope when somebody asked a question, it would access the right thing in giving a response. I mean, is that as simple as it comes?
Ryan Mrha (01:52.634)
Yeah, I mean, essentially that’s what it is. So you want to build a knowledge base, but there’s kind of two components to it. So one is, let’s say all of the episodes that you’ve ever done, and we could take all that text and we could feed that to the LLM that it could use for context. But the other piece is that we also have to make the agent feel like you and act like you in different points that you interact.
John Jantsch (02:11.98)
Mm-hmm.
John Jantsch (02:23.566)
So you mean literally you, like it would sound like they were talking to John Jantsch. Yeah.
Ryan Mrha (02:27.83)
Well, yeah. So we do also clone the voice. So we could take a lot of your audio and use that to clone your voice. But the thing that we’ve been finding is that a lot of people will say, here’s a prompt: hey, you’re an LLM, be John Jantsch, and here’s all of his episodes. And they’re typically getting pretty poor results with that, because you, as say a podcast host, you have a lot of different states.
John Jantsch (02:32.814)
Okay. Yeah.
Ryan Mrha (02:55.37)
Sometimes you may be, I don’t know, explaining something and sometimes you may be asking a question or pushing back on something. And so what we try to do is we try to have a few different LLMs that an agent can call on and can be different versions of you and have different access to pieces of knowledge that you may need at a certain time. So that way it sounds like you, it feels like you, it responds like you.
John Jantsch (03:25.376)
And would it be as simple to say, you know, when I hear you describe that, I’m like, this is when John’s feeling kind of sad and this is when John’s having a good, really good day and happy, or is it really more, this is John in his sales hat and this is John in his customer service hat.
Ryan Mrha (03:32.592)
Yeah
Ryan Mrha (03:42.032)
Yeah, exactly. It’s going to be the latter. And that’s what’s going to make it feel like you’re actually speaking to a person compared to, you know, just the LLM. Because what a lot of people are used to is speaking with an LLM over a chat window, you know, ChatGPT or something like that. And that hides a lot of the sort of mistakes. But when you start talking with it, you realize, you know... very quickly. Yeah.
John Jantsch (03:43.256)
Yeah.
John Jantsch (04:08.706)
Yeah, yeah, it butchers my name, you know, for example. But yeah, and I think, so where do you think we are in the world today? You know, at one point people were like, I hate those things, or gosh, I’m talking to a robot. But I get the sense that now, first off as the technology has gotten better, but also as more and more people have had good experiences...
Ryan Mrha (04:13.681)
Mine as well.
John Jantsch (04:37.612)
Do you feel like the acceptance is to where it’s like, I know I’m talking to AI and I don’t care?
Ryan Mrha (04:43.299)
Yeah, I think people... I mean, I really think 2026, 2027 are going to be the years of like real voice agents. I think people have been interacting in these chat functions for a while now, and they’re going to want to start having a more real experience. And kind of like I was describing how we build these agents, it’s going to have to be a little bit more tailored to the experience that the user is looking for. As for where we’re at in it,
John Jantsch (04:50.392)
Yes.
Ryan Mrha (05:12.291)
I think we’re still actually quite early. A lot of people are not even using any voice agent, for example.
John Jantsch (05:24.28)
So one of the things, I think I picked this up off of your website: you talk about a voice agent that critically thinks. How is that happening? I mean, again, when I hear that, I hear like, you know, they’re actually making decisions. You know, they’re not just accessing stuff and predicting what you want to hear.
Ryan Mrha (05:34.874)
Yeah, so.
Ryan Mrha (05:47.377)
Yeah, so without giving away too much of the secret sauce, we use like multiple levels of LLMs, right? And within those, there’s different instructions. Like one may just be orchestrating, and another one may be doing an action. Another one may be calling a different LLM to give it a response. So we break up all of those tasks so that each
LLM call is very targeted. And that’s kind of the mistake that we’re seeing a lot of businesses fall into right now: they buy a cool AI tool, it looks great in the demo, and then they get their hands on it and they’re like, this isn’t working for me. It’s because they’re using a very general package, and the way the LLMs work is, if you’re very specific about what you want, you’re going to get much better results, but it can’t do too much at once.
John Jantsch (06:44.686)
Sure. Yeah, you can’t just brain dump the entire organization’s knowledge base in there and hope it finds what you’re looking for. So I’m curious about this, because the way people are buying today is really changing, right? I mean, they’re doing a lot more research. They don’t want to do a sales call. They want to get almost all the way to the point of deciding and then have like a consultation, you know? And so…
Ryan Mrha (06:51.524)
Unfortunately, yeah. Right.
Ryan Mrha (07:09.873)
Definitely.
John Jantsch (07:11.634)
I have a theory that AI agents are going to play a role in that, because people will actually offer them: not ready to talk to a human? Talk to the AI voice agent; they can answer all your questions, and they’re not going to hard sell you. I mean, they’re not going to… Do you feel like there’s a point in the buyer’s journey where that’s actually going to be seen as a value add as opposed to a convenience?
Ryan Mrha (07:19.858)
Mm-hmm.
Ryan Mrha (07:35.155)
I love that you brought this up, because we were actually planning on doing this. Yeah. You know, just like when you go to a website now and a little chat thing comes up and it’s like, hey, maybe I can answer a few questions. Yeah. The technology is there to take it that much further. And the reality, especially in software and technology, is that a lot of the sales and procurement process is just about making sure that you get the legal documents passed back and forth.
John Jantsch (07:38.339)
Yeah.
John Jantsch (08:03.79)
Mm-hmm.
Ryan Mrha (08:04.666)
I think that we’re going to see a lot more of those roles focus on that piece and then the answering questions and explaining the product. People don’t want to be sold. They just want to ask their questions. They want to get to experience it. So in some ways, AI is kind of perfect for that.
John Jantsch (08:23.532)
Yeah. And they can hang up, right? I mean, it’s like, I’m not getting the answer I want, I’m just going to hang up. You know, I’m not going to be rude to a person maybe, but I can just hang up on this. So.
Ryan Mrha (08:28.069)
Yeah.
Ryan Mrha (08:35.45)
And on top of that, you can do that at three in the morning as well, right? Like, you don’t have to be waiting for that call next week, and they’re busy, or we’ve got to go to this conference. You know, it’s instant.
John Jantsch (08:38.112)
Yeah, right,
John Jantsch (08:45.806)
So let’s talk that through. I think you also use the term purpose-built. Let’s walk through the framework of giving a voice agent a job description, and then maybe let’s explore what the limitations are. So let’s go with a typical kind of business development agent. Somebody buys a low-cost product on your website and you want to upsell them to the higher cost. You know, can a voice agent reach out, or is that really more of a, we’re going to train that person to be able to
answer anybody’s questions that they might have about what’s next.
Ryan Mrha (09:20.476)
So there are full tools available already that have this. We’ve experimented a lot with one of them for building some of our agents, just because of the functionality that they come with, where they can already call, they can lead the conversation. They’ll have, if you can imagine, like a timeline, and then along that timeline you have different prompts. And when the agent…
John Jantsch (09:36.173)
Mm-hmm.
Ryan Mrha (09:46.535)
gets to a certain criterion and meets it, it goes to the next prompt. And so these tools are very cool. You can have a conversation with it and feel like you’re speaking with a person, and you can get very advanced with it. It can remember your name or your ticket number and reuse them later, and go update the database when it’s done. And on top of that, you can use it a thousand times in the same second instead of
John Jantsch (09:50.86)
Mm-hmm.
John Jantsch (10:05.389)
Yeah.
Ryan Mrha (10:15.666)
just like an individual.
John Jantsch (10:18.158)
So, are we at a point where some people are feeling duped? Like, you know, where it’s like, I thought I was talking to a human, and even if they got the result they wanted, they still felt sort of deceived.
Ryan Mrha (10:36.506)
I was on a call the other day and I was trying to ask the person, like, are you an AI agent? And I think they felt offended, because maybe they weren’t, but I’m still not convinced they weren’t, you know. Because there are certain tells that, if you speak with these all the time, you’re like, okay, there’s a delay here, and the accent is changing a little bit, and things like that. So yeah, I think people...
John Jantsch (10:42.798)
reasons.
John Jantsch (10:48.641)
Yeah.
John Jantsch (10:59.469)
Yeah.
Ryan Mrha (11:03.984)
I think people don’t want to feel that they’re talking to an agent yet, but I do think that’s going to change.
John Jantsch (11:09.762)
Well, and do you think we’re at a point where, and I’m not saying disclose it
because it’s an ethical thing, but just to disclose it because people want to, it’s a transparency thing. It’s like, hey, talk to our AI advisor, they have all the answers for you. So, I mean, it’s right up front: even though it feels like a conversation, I know it’s not. I mean, do you think that’s kind of the crossroads right now?
Ryan Mrha (11:30.492)
Yeah.
Ryan Mrha (11:34.897)
I don’t know. I’m one of those people that, you know, when it’s like, do you want to share your data? I’m like, yes, take all my data and customize my experience and things like that. But I could imagine there are a lot of people who want to be very private. So yeah, I think that’s going to be a hurdle that we have to face. And it is going to be a deciding factor in how people decide to do business with certain companies. You know, it should at least be on the website.
John Jantsch (11:40.974)
Yeah, yeah, yeah, yeah.
John Jantsch (11:59.148)
Yeah,
I forgot to tell you when we booked this interview, I do need your social security number.
Ryan Mrha (12:06.642)
No problem.
John Jantsch (12:08.91)
Okay. Now, so walk me through it. If I came to you and said, Ryan, I need this business development agent, how does the process go? What do you need from me? How do we put guardrails on it? I mean, how does the process work?
Ryan Mrha (12:25.872)
Yeah, so we’re always going to start with a single small use case and try to nail that down, and then kind of build things on top of it. Also, for me, it’s very much about matching to a brand and brand voice, and making sure that it’s consistent with the experience you want your users to have. We build a lot more agents that are in the
John Jantsch (12:44.27)
Hmm.
Ryan Mrha (12:53.776)
Like we have a big one for facilitation. So maybe it’s not trying to sell you something, but you still want it to feel like a full facilitator. So what that looks like is breaking down what makes a good facilitator, and then building all those different pieces, putting them together, matching it to your brand, and letting you use it in your company.
John Jantsch (13:09.591)
Mm-hmm.
John Jantsch (13:20.952)
Let’s just go with a really, really basic receptionist. I mean, is that a use for this, or is that almost too basic?
Ryan Mrha (13:30.489)
No, I think basic is good. Yeah, you could definitely have an agent receive a call quickly and book an appointment with you. Kind of like what you talked about, or asked about, are people going to feel kind of duped by it? I think there are a lot of scenarios where people are actually going to appreciate it more. And maybe it takes some time to get there, but I mean, if you can offer me a product at a lower cost because I speak to an AI agent, then, like...
John Jantsch (13:47.522)
This is
John Jantsch (13:51.671)
Sure. Sure.
Ryan Mrha (14:00.476)
great, you
John Jantsch (14:01.474)
Well, and I think for a lot of routine things that people want to do... I know personally, things like, you know, once a year I go get contacts, and I just want to be able to go on there and schedule an appointment. I don’t want to call somebody to do that. And so I think there are a lot of things like that that are going to be AI-enabled, that people are going to actually want and appreciate. Because, as you said, it’s three o’clock in the morning and I want to do that. Right?
Ryan Mrha (14:10.875)
Mm-hmm.
Ryan Mrha (14:14.193)
Mm-hmm.
Ryan Mrha (14:23.762)
Definitely.
Ryan Mrha (14:28.25)
Yeah, yeah, exactly. It does change the game. And it can also be a hybrid approach where, you know, press zero if you want to speak to a person, but...
John Jantsch (14:39.468)
I know one of the fears that people sometimes have is that, you know, the AI agent’s going to hallucinate. It’s going to be wrong. It’s going to actually say something that is maybe counter to the brand. There are probably some instances where you should never use this, which would be one thing, but how do you also put the guardrails on?
Ryan Mrha (15:03.91)
Yeah, so we do put guardrails in the prompts, but I’m a big fan of the Gemini models because of that. Even though maybe they’re a little bit less fun to talk to, they definitely hallucinate less. So that’s probably the biggest step you can take. But it’s also just about being specific. If you give the agent the right context of what it’s trying to do, then it doesn’t have to go fill in the blanks itself. So a lot of it
John Jantsch (15:14.316)
Yeah.
Ryan Mrha (15:33.82)
comes out in testing. We’ll find, okay, why did it come up with that? And then we’ll go back, we’ll revisit the prompts and find out, maybe we overemphasized this, or didn’t give it clarity on what to do here. One thing you can also do is just give it a document in your knowledge base, kind of a map of where it can find things, and if it doesn’t find something, here are some ways you can respond.
John Jantsch (15:50.604)
Mm-hmm.
John Jantsch (15:57.198)
So if you’re using Gemini, then could you put a lot of these sources in, like, a NotebookLM or something, and then be able to tap it, make that be its library?
Ryan Mrha (16:08.338)
Connect directly to NotebookLM? I have not tried that. I do love NotebookLM. Do you use it a lot?
John Jantsch (16:11.651)
Yeah.
John Jantsch (16:14.968)
Yeah. Well, Gemini, yeah, Gemini does connect directly to NotebookLM as a source now. Yeah, yeah. So I’ve been shortcutting training, because I’ll build the NotebookLMs with 300 documents in them, and then, you know, just be able to say, source these three. So it kind of gives you the best of both worlds. Your model is voice and phone number, right? Voice and phone call.
Ryan Mrha (16:21.039)
okay. Yeah.
Ryan Mrha (16:42.318)
Yes, so the Yodify model is phone. We can text it. We can also deploy it within the web app, just like the service we’re using here.
John Jantsch (16:56.27)
But there is no avatar. There’s no video component to it. It’s just voice. Yeah.
Ryan Mrha (16:59.538)
No. The way we see it is that a lot of people are going to want to be able to have conversations with the creators that they follow. So, you know, maybe when you were a bit of a smaller creator, you could interact with all of the different fans and respond to every comment. And then as you get bigger, it becomes more and more difficult. But that doesn’t mean people still don’t want to communicate. So we can do that with sort of
John Jantsch (17:14.113)
Mm-hmm.
Ryan Mrha (17:26.95)
them being able to just text you directly and have conversations, and, hey, I’m going through this, what’s your take on it? And yeah, it’s not the real thing, but it is, you know, still valuable for a lot of people.
John Jantsch (17:38.83)
So where do you feel like you fit in the category? Is ElevenLabs a competitor, or are they just tangentially related? I mean, where do you fit in the category? Yeah, okay.
Ryan Mrha (17:53.587)
We use ElevenLabs. Yeah, they provide voices. They do a lot of great stuff. We combine the different pieces, the different tools that these producers are making, and try to bring them to market. I think there are a lot of cool tools out there, but people haven’t...
John Jantsch (18:01.197)
Yeah.
Ryan Mrha (18:20.316)
figured out really like great use cases that are going to enhance people’s lives. So we’re trying to, you know, meet them there.
John Jantsch (18:22.796)
Yes.
John Jantsch (18:26.446)
Yeah. Yeah. I kind of laugh at some of the tools. It’s like, well, okay, it’s cool it can do that, but like, why? Where, how would you use that? So, if somebody’s listening and they’re like, hey, I want to try this out next month, what’s the... let me give you a concrete example. I have a marketing agency, so you can use that as the example. What would be the smallest, kind of safest experiment that you think a marketing agency could
Ryan Mrha (18:34.69)
Yeah, yeah, exactly.
Ryan Mrha (18:43.378)
Yeah.
John Jantsch (18:54.84)
could do that would still provide ROI, either in marketing or for their clients or just even in efficiencies in the business.
Ryan Mrha (19:04.301)
you mean sort of to prototype themselves?
John Jantsch (19:06.22)
Yeah, yeah, to kind of give it a test, like a pilot.
Ryan Mrha (19:10.458)
Yeah, I would say, I mean, ChatGPT has these, I think they’re called GPTs. I think that’s a nice way to test something. Yeah. You can upload a few files and talk with it and be like, is this interesting for us? Definitely have a few customers try it, because there’s no point in building something that your customers don’t want. And then, yeah, if you’re getting a lot of good reactions, then you can, you know, engage us, or we can point you in the right direction
John Jantsch (19:17.634)
Yeah, custom GPTs, yeah. Right.
Ryan Mrha (19:40.262)
to somebody that would.
John Jantsch (19:41.176)
Well, I guess I was asking specifically about Yodify. Like, if somebody came to you and said, we heard the show and we want to do a pilot, but we want to start really small, is there a place where you would say, hey, this is a small, safe experiment that I think you’ll get some value from?
Ryan Mrha (19:50.15)
Yeah.
Ryan Mrha (19:59.729)
Yeah. So what we would do is we would probably do like a single-prompt LLM. So very, very basic, which is basically what I told you we don’t do, but it’s kind of the starting thing that you can play around with. We’d have a single prompt, we’d upload a few of your files, and then we would let you call it. Maybe we do a very quick and dirty voice clone, and we’ll say, okay, is this interesting for you? Maybe show it to a few of your customers, get some feedback. And then...
John Jantsch (20:07.982)
Yeah.
Ryan Mrha (20:28.454)
Yeah, we have different ways we can price it. We like to be an additional revenue stream for creators. But yeah, it could be an ad agency. We can build all kinds of agents. But for our creators, we try to be an additional revenue stream. So maybe they already have a paid tier, and they can incorporate it in there and add $0.02 on or something like that.
John Jantsch (20:50.99)
Gotcha. Okay. Well, again, appreciate you taking a few moments to stop by the Duct Tape Marketing Podcast. Is there someplace you’d invite people to connect with you and learn more about Yodify?
Ryan Mrha (21:00.57)
Yeah, so LinkedIn is my main social media. So you can find me on LinkedIn, Ryan Mrha. Yeah, we have Yodify.com. And then that’s actually a brand that belongs to another, bigger project, Methodiq, which is actually going to be launching the beta version here. So if you’re interested in checking out AI facilitation, it would be awesome to get some beta users.
John Jantsch (21:24.366)
Awesome. Again, appreciate you stopping by, and hopefully maybe we’ll run into you one of these days out there on the road.
Ryan Mrha (21:30.685)
Sounds great, thanks for having me.
John Jantsch (21:32.066)
Thanks, Ryan.