In this episode, we talk about why AI solutions should not be purchased and onboarded in the same way as other IT systems. We discuss what it means to “recruit” an AI to your team and how that differs from bringing in a traditional system. Among other things, we explore what types of tasks an AI can perform and how ways of working change in this new paradigm.
Want to dive deeper? Learn more about AI adoption in our library.
Transcript of the episode
Recruiting an AI to the Team
The podcast is in Swedish. This transcript is translated from Swedish to English.
Syntra is a podcast from Fiwe about the interplay between people, systems, and data — and how new technological possibilities are changing the way we work.
In this first episode, we address a question many organizations are currently grappling with: how do you “recruit” an AI to your team? What does that mean in practice — and why is it a better way of thinking than saying, “We’re purchasing an AI system”?
Participants
- Sebastian Mildgrim – Technical Advisor, Fiwe
- Glenn Svanberg – Developer (Innovation & Experimentation), Fiwe
- Mitra Javadzadeh – Head of Business Development, Fiwe
Sebastian: I’ve been at Fiwe for 11 years and held many different roles. Today, I help customers understand which solutions can address their challenges – both in advisory roles and on-site, strengthening their ability to use data more effectively. Much of that work falls within what we call Data Governance.
Glenn: I’m a developer with a passion for innovation – creating new things, testing emerging technologies, and exploring what lies beyond the “next hill.” I drive initiatives where we experiment and build to stay at the forefront.
Mitra: I work with business development. We help clients through transformation and the paradigm shift that occurs when AI becomes part of everyday operations.
Why is the podcast called Syntra?
The name Syntra refers to transferring information in a structured way between different entities – for example, between systems and people, or between machines and humans – in a way that makes the information understandable.
Syntra is the podcast for those who want to move their function forward and understand how the interplay between technology and users affects ways of working, processes, and results.
Episode Theme: “Recruiting an AI to the Team”
What do we mean by recruiting an AI?
Sebastian: Traditionally, you purchase an IT system for a very specific purpose. It does “one thing” according to a clear specification. But AI is far more adaptable – and capable of performing significantly more tasks than a traditional system.
That’s why we want to challenge the idea of “buying an AI system” and instead think: we’re recruiting a colleague.
That means onboarding, clear responsibilities, defined boundaries, feedback loops, and a way of working where humans remain accountable.
AI is not deterministic
Glenn: A major difference is that traditional IT systems are typically deterministic. Once implemented, they behave the same way every time.
AI, however, is probabilistic. Ask the same question ten times and you may get different answers — even with identical input.
That means you can’t treat AI as a finished implementation. It becomes more like an onboarding process where you work side by side with the AI, guide it, and improve it over time.
Is AI the same thing as automation?
Mitra: At its core, you’re still using a machine to perform tasks automatically. The difference is that AI can handle greater complexity and unstructured input, and produce less rigid outputs.
Sebastian: And the requirements change completely. In traditional automation, responsibility largely lies in correct specification and building a clear “track” for the process to follow. With AI, we must rethink responsibility and quality – because outputs can vary and sometimes be wrong in ways that aren’t immediately obvious.
Humans must stay involved – and accountable
When AI is integrated into workflows, humans must remain more involved than in traditional automation.
- You cannot give AI full responsibility.
- You must review outputs – and have criteria for what is “good enough.”
- The human role shifts from doing everything manually to orchestrating, guiding, and quality-assuring.
It’s a clear mindset shift: From doer → manager.
What tasks are best suited for AI?
Glenn: AI works particularly well for repetitive tasks where the outcome can be clearly described and evaluated.
Examples:
- Generating text at scale (with clear tone and framework)
- Classifying products
- Structuring and reformatting information
- Suggesting content based on defined rules
The key is being able to provide meaningful feedback on the output.
Sebastian: Think of it like working with a colleague. Saying “this isn’t good” doesn’t help – whether it’s a person or an AI. You need to explain what was wrong and why. For example:
“This text risks greenwashing. Avoid that by using concrete facts, measurable data, and neutral language.”
AI as knowledge support
Sebastian: Another clear strength is helping us navigate large volumes of information.
For example: customer service. I don’t believe AI should replace human interaction – but it can be a powerful support tool, helping service teams quickly find the right information in FAQs, past cases, and internal data sources – in the correct context.
But there’s an important prerequisite: structured, well-governed data. Order and clarity are the foundation for AI to be helpful and trustworthy.
“AI knows nothing” – yet often sounds convincing
Glenn: AI can sound extremely confident and reasonable – even when it’s wrong. That’s why context and instructions are crucial.
AI doesn’t “learn” like a human in each individual run. You must provide:
- The right information
- The right boundaries
- The right instructions
- The right feedback model
The better you are at giving AI precisely the right context – neither too little nor too much – the higher the quality and often the lower the cost per execution.
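As a rough illustration of "neither too little nor too much," here is a hypothetical prompt-assembly helper (the function name, boundary wording, and character budget are all illustrative, not from the episode): it adds pre-ranked context snippets until a budget is reached, since excess context raises cost per call and can dilute quality.

```python
def build_prompt(task: str, context_snippets: list[str], max_chars: int = 2000) -> str:
    """Assemble instructions, boundaries, and only as much context as fits
    within a fixed budget. Snippets are assumed to be pre-ranked by relevance."""
    parts = [
        "Instructions: " + task,
        "Boundaries: answer only from the context below; say 'unknown' otherwise.",
    ]
    used = sum(len(p) for p in parts)
    for snippet in context_snippets:
        if used + len(snippet) > max_chars:
            break  # too much context raises cost and can drown the signal
        parts.append("Context: " + snippet)
        used += len(snippet)
    return "\n".join(parts)
```

The budget here stands in for whatever limit applies in practice (token limits, cost targets, or simply attention): the right information and boundaries go in, and everything beyond the budget stays out.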
Responsibility: input and output
AI cannot be accountable. Humans must always take responsibility for:
- Input – what data and instructions the AI receives
- Output – what is published, sent, used, or forms the basis of decisions
Ultimately, accountability always lies with the organization.
Scaling requires new ways to build trust
Glenn: AI can generate 10,000 texts quickly. But then the question becomes: can we review 10,000 texts?
No. Instead, we must work with methods that build trust without reviewing everything:
- Sampling and spot checks
- Validation rules
- Clear boundaries and policies
- Programmatic controls where possible
The goal is to reach a level of quality and safety that is practically reasonable – and comparable to how humans also make mistakes at times.
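The methods above can be sketched in a few lines of Python. The validation rules and sample size below are made-up placeholders; the point is the combination: cheap programmatic checks run on everything, while human review is limited to a random sample.

```python
import random

def validate(text: str) -> list[str]:
    """Programmatic checks applied to every generated text.
    The rules here are placeholder examples."""
    problems = []
    if len(text) > 300:
        problems.append("too long")
    banned = {"guaranteed", "miracle"}  # e.g. claims that risk overpromising
    if any(word in text.lower() for word in banned):
        problems.append("banned phrasing")
    return problems

def spot_check(texts: list[str], sample_size: int, seed: int = 0) -> list[int]:
    """Pick a random sample of indices for human review,
    instead of reading all 10,000 texts."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(len(texts)), min(sample_size, len(texts))))

texts = [
    "A durable jacket made of 80% recycled polyester.",
    "A miracle cream, guaranteed results!",
]
# Every text passes the rules; only a sample gets human eyes.
failures = {i: p for i, t in enumerate(texts) if (p := validate(t))}
to_review = spot_check(texts, sample_size=1)
```

Validation rules catch the predictable failure modes at zero marginal cost; the spot check builds statistical confidence in everything the rules cannot express.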
Onboarding: start with business value
Mitra: It’s easy to get caught up in hype. But everything must be grounded in business value. “Better texts” may sound good — but does it matter to the customer or the business? If no one reads them, it may not be the right place to start.
Sebastian: A useful exercise is writing a “job description” for the AI:
- What should it do?
- What output should it deliver?
- What boundaries apply?
- What data does it need?
This forces clarity — and makes it easier to understand what supporting structures are required.
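One way to make Sebastian's exercise concrete is a small "job description" record. The field names and the example values below are illustrative, not a standard schema; they simply answer the four questions plus the human ownership discussed earlier.

```python
from dataclasses import dataclass, field

@dataclass
class AIJobDescription:
    # Illustrative fields mirroring the four questions above.
    task: str                      # What should it do?
    expected_output: str           # What output should it deliver?
    boundaries: list[str] = field(default_factory=list)    # What boundaries apply?
    required_data: list[str] = field(default_factory=list)  # What data does it need?
    human_owner: str = ""          # Who is accountable for input and output?

# A hypothetical example: recruiting an AI to classify products.
product_classifier = AIJobDescription(
    task="Classify incoming products into the existing category tree",
    expected_output="Exactly one category ID per product",
    boundaries=["Never invent new categories", "Flag low-confidence items for review"],
    required_data=["Category tree", "Product descriptions"],
    human_owner="E-commerce content team",
)
```

Writing the description down in this form also exposes the supporting structures needed: where the required data lives, and who owns the feedback loop.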
Practical tips for successfully “recruiting an AI”
If you want to get started:
- Choose a clearly defined, repetitive task
- Ensure you understand both input and expected output
- Make sure the right data is available
- Establish a way to provide concrete quality feedback
- Assign human ownership of the process (input + output) and long-term governance
Closing
Thank you for listening to Syntra – a podcast from Fiwe.
Do you have feedback or suggestions for future topics? Feel free to reach out. We’ll continue releasing new episodes about the intersection of people, technology, and data.
