
From Plato to Prompting: AI Fluency with Gastón Tourn

16th Oct 2025

This is the first article in Pivotal’s AI Fluency Series, following the launch of our AI Fluency Hiring Frameworks. Over the next few weeks, we’ll release a set of articles with marketing and talent leaders about how AI is really changing work.

To kick things off, we sat down with Gastón Tourn, Chief Growth Officer at Oddbox, to discuss what AI fluency really means (and doesn’t mean), and why curiosity, productivity and judgment are the skills that truly stand the test of time.


To set the scene, at Pivotal we’ve been developing practical frameworks for assessing AI capability in the roles we recruit for across marketing, media, sales and creative. With that in mind, how do you define ‘AI fluency’ for the roles you hire?

Gastón: It’s an interesting question, because in a way AI technology should be so simple to use that you shouldn’t even need to train yourself in it. It’s almost an oxymoron to say “AI fluency,” because it should be so easy to use. The technical complexity ought to be low; if not, why are we even using it?

You don’t really need to learn to prompt in a deep way. The recommendations are so obvious: provide some context and give a clear task – it’s not that complex. For me the more important elements when interviewing aren’t AI specifics. AI is the hot technology now; in five or ten years it’ll be something different. Technologies change all the time. Five years ago everyone was talking about the metaverse; ten years ago it was machine learning, and at that point machine learning was AI. Now it’s just another technology. The most important skills are curiosity and productivity.

You want someone curious enough to try new things, and you want someone proactively looking to be more efficient without compromising quality – that’s judgment. Sometimes using AI makes sense; sometimes it doesn’t, because the productivity gain isn’t there or because it would detract from the quality you expect.


There’s a wider debate about juniors and tools like ChatGPT: some people say early-career professionals shouldn’t use it, so that they learn the basics. What’s your take?

Gastón: I wouldn’t advise anyone not to use ChatGPT. I’d advise everyone to use all technologies – but always ask yourself why. Computers don’t think. Computers will never think. With AI we know what they’re doing: it’s statistical prediction of tokens – which token should go next based on patterns and predictability. That’s not how humans think. Humans ask, “Is this the right time to say this? Is this the right time to go out with this new campaign?” Only human beings can do that, and that’s judgment.

I don’t penalise someone for using ChatGPT in applications – I’ve never penalised anyone for that; I actually encourage it. What I penalise is using it without thinking: copy-pasting what comes out, even leaving placeholders like “(Hiring Manager)”. That doesn’t tell me it’s bad because they used AI; it tells me they didn’t even bother to check, and they’re thinking less than a machine because they aren’t questioning what they’re putting out there. Use all the technologies, but your role as a professional is always to ask why you’re using this and whether it makes sense.


How do you actually assess that kind of thinking in interviews?

Gastón: I don’t assess AI fluency directly. I assess curiosity, human judgment, and productivity. We don’t ask candidates to prove their AI fluency. To be honest, I haven’t seen a single application where people aren’t using AI. What I do instead is set case studies or tasks where I can assess those three criteria.

The tasks are nuanced. There’s no clear yes or no. I let the candidate take a path and then I ask why they took that approach. You can tell who really thought it through versus who copied and then can’t explain the why. If they can’t explain the why, I wouldn’t call it a failure of AI; it’s a lack of curiosity and professionalism. People cheated and copied before AI as well; the issue is not questioning whether what you’re doing is right for the task.


So, you’re almost designing tasks that AI can’t trivially answer, to see how people think.

Gastón: Exactly. I already assume most people are using AI, and I think it’s great to use new technologies to be faster and more efficient. The question is how we use them. If you read the history, it’s fascinating. When writing emerged as a new technology, Plato was very negative about it. He said we would forget how to speak properly and our rhetoric would become poor; that our memory would be distorted because we’d rely on writing rather than memorising. We read that now and it’s almost hilarious. Writing is a technology too. A lot of current concerns about AI remind me of that. I think we’ll use the technology to be more productive and open new opportunities, but human judgment is going to be very critical to understand what makes sense and what doesn’t, based on the context.


Measuring ROI from AI is tricky. Everyone claims huge productivity gains, but few can prove them. How do you think about it?

Gastón: It’s hard to quantify and say, “I’m 15% more efficient thanks to AI.” I’ve seen studies saying companies are “49% more efficient,” and my reaction is… how do they measure efficiency? What does ‘more efficient’ even mean? I’d be curious to understand how other companies do it, but so far I’ve found the technology for measuring AI efficiency in the workplace is weak. I haven’t found anything where I’d say, “This is good enough.”

What we’re doing instead is questioning whether we need new roles. Before opening a new role we ask: can we manage this workload using AI? Shopify became quite famous for asking, when they open a new role, why can’t AI do it? There are things AI cannot do, but the question is useful. Beyond that, I haven’t found a reliable way to measure efficiency gains, although they’re definitely there; it’s just hard to quantify.


What’s the best way for leaders to encourage AI adoption without overwhelming people?

Gastón: The worst thing is a top-down mandate: “Everyone must use AI.” That’s not really productive. Two strategies work better for me:

First, peer learning. Connect people with others who are experimenting successfully. If, for example, Bloom & Wild is using AI effectively for copy, it’s much more productive for a copywriter to see that and learn from it than for me to say, “Use AI.” It becomes an exchange.

Second, push for outcomes, not tools. I’ll say, “We need to double the output of X.” You might want to use AI for it – up to you – but the output needs to double. That encourages people to think about how to be more productive; AI can be one of the solutions. Leave the how to the team. They’re grown-ups and professionals; they know how to use technologies.


Final question. For leaders trying to navigate all this, what’s the one thing you’d leave them with?

Gastón: Stay curious and keep experimenting. Don’t make AI a scary thing – try it, explore it, and question it. But don’t get obsessed with the shiny new tools; we sometimes love them so much we forget why we’re using them. Always ask: what problem are you solving, and how is it making your work better or faster?


Conversations like this are exactly why we created Pivotal’s AI Fluency Hiring Frameworks – practical tools to help identify and develop the kind of curiosity, judgment, and adaptability Gastón highlights.

You can explore and download the frameworks here to see how companies can hire for AI confidence across marketing, creative, and growth roles.
