Fluency is a moving target: AI Fluency with Skip Fidura
28th Oct 2025
In the third feature of Pivotal’s AI Series, Skip Fidura, Chief Marketing Officer at Leadscale Group, talks about hiring for real-world AI capability, why process matters more than single outputs, and the trade-offs leaders need to weigh for early-career talent.
We’ve been exploring how teams define and hire for AI fluency. How does that idea land with you?
Skip: I’ve never thought of it in terms of fluency. The notion of an AI hiring framework is interesting, but the challenge for a hiring manager is that things are changing so quickly. You have to be on the cutting edge of what is going on in AI so you can assess whether the person is also at the cutting edge, and that is always a moving target.
If the bar keeps moving, what does ‘good’ look like right now?
Skip: Right now, AI fluency is: I use it, I am comfortable using it, and I know the places where it can go wrong. I am comfortable enough that if I see a new tool, I will get stuck in and see if it is better than what I am using. To use an analogy, AI today is like the BBC’s Race Across the World: you’ve been dropped in a country, you don’t have all your usual tools, and you still have to get from point A to point B. You need to be resourceful, learn quickly, and keep moving.

How do you see adoption and impact across organisations right now?
Skip: There is a lot of research out there, and some of it is from consultancies selling services, so take it with a grain of salt. Enterprise businesses are not seeing the payback they thought they would. In small and medium enterprises, adoption is not happening in an organised way; many are waiting for best practices. But by the time you document a best practice, it is tied to models that are already out of date, or tools that are no longer the best tools out there.
When you are hiring, would you ever set a prompt-writing task?
Skip: Depending on the role, yes, you could design an exercise where they have to write a prompt. But I would not over-index on prompt craft. Do I want somebody who writes great prompts, or somebody who thinks through the problem? I land on thinking through the problem. I would pick a real problem from the job and give them the time to work it with AI. Ask them to explain what they did, why they did it, what failed, and what they changed. Did they look for hallucinations? Did they test prompts, check links, or try more than one model? Show me your steps and your judgment. That tells me more than a polished final answer.
How do you look for the mindset you described?
Skip: Using AI, you can put the same prompt in twice and get different answers. Rather than mark the output, I want to see the process. Give them a problem and ask them to solve it with AI. What matters is their thinking and methodology, how they are processing and interacting with the AI. If the AI returns links, do they check those links? Do they test alternatives? Are they verifying and refining rather than accepting the first answer?
Are there earlier signals you pay attention to, before interview stage?
Skip: We added a filter at the application stage. If your cover letter is obviously AI-written, that is not how we want people using AI. When you apply for a job, you are branding yourself. If it feels synthetic, can I trust what you have handed me? I have empathy for juniors under volume pressure; it might be the 200th role they have applied for. But I draw a line by seniority. I can teach a junior how to combine efficiency with craft. A senior person should know better.
How do you think about results: ROI versus efficiency?
Skip: Calculating the ROI of marketing is a challenge at the best of times. I do not look at AI on an ROI basis; I think of it in terms of efficiency. We time-track. I can pull data on idea-to-publish time, research time, revision rounds. We look at where AI shortens phases of the work. That is measurable. Then there is quality. Leaders have to ask: do we need four junior people, or can AI do the work of one junior person? That has real implications.

Let’s talk about those implications for early-career talent.
Skip: The knowledge I have is based on the mistakes I have made. If AI takes more of the junior work, we reduce the number of opportunities for junior people to make mistakes and learn. If a junior hands me something and it is not right, and I explain why it cannot go out and ask them to redo it, they learn. If they are thinking, “the AI did most of this; I just handed it over,” they miss the chance to build ownership.
Is there anything that shows how AI can affect ownership and learning in practice?
Skip: Yes. I saw a study with three groups. One group had AI write the piece. Another generated the ideas and then had AI write from those ideas. A third wrote everything themselves using only Google for research. The more emotionally invested groups did a much better job recognising their own work. If AI has done all the work and you only proofread and submit, you are not invested, and you are not going to learn from mistakes that were not yours.
Where does this leave managers who are trying to keep quality high while adopting AI responsibly?
Skip: Focus on the checks and balances. Make sure people are verifying facts, checking sources, and testing alternatives. If the tool produces links, go and check them. If it proposes a plan, ask how they validated it. The safeguard is not a single rule; it is a habit of thinking that you build into the way the team works.
And inside the team, how do you encourage good practice without stifling adoption?
Skip: Create space to experiment and share. Ask people to show what they tried, what worked, and what broke. Make it normal to say, “this was faster but worse quality,” or “this saved us two hours and passed review.” If you normalise that kind of discussion, you raise quality and keep the pace.
Looking ahead, what would you tell a marketer who feels overwhelmed by the pace of change?
Skip: Start where you are. Pick one workflow that takes time and see if AI can make it faster without hurting quality. Learn the failure modes in that one area. Then add another. You do not have to master everything at once. What matters is that you are moving, testing, and learning in a way you can trust.

Final thought: amid hype and fatigue, what does a practical middle ground look like?
Skip: A lot of what is out there is either “AI is going to be the best thing ever” or “doom and gloom.” The practical middle ground is: AI is here. This is how you should use it. This is how you should integrate it into your business.
If you’re ready to make AI capability a consistent, testable part of hiring and development, Pivotal’s AI Fluency Hiring Frameworks turn these principles into practical rubrics for marketing, creative and growth, covering mindset, strategic acumen and builder skills, with interview prompts and calibration guidance you can use tomorrow. Download them here.