Podcast
2026-03-26

Can you trust your data?

Glenn Svanberg, Mitra Javadzadeh, Martin Vollrathson

About the episode

What happens when information does not add up? In this episode, we explore the business impact, clarify the difference between data quality and information quality, and explain how you can start automating your validation process with the help of AI.

Want to dive deeper? Learn more about information quality and AI in our library.

In this episode, you’ll learn:

The difference between data quality and information quality
How poor information quality affects trust, brand perception and conversion
Why work on information quality is almost always reactive and what that costs
How trust is built, and why someone must always own the information
How AI can validate information holistically and automatically identify sources of error
Where to start when you want to work proactively with information quality

Voices in this episode

Glenn Svanberg
AI Developer & Innovation Advocate
Mitra Javadzadeh
Head of Business Development
Martin Vollrathson
Developer

Listen to the episode

Transcript of the episode

Introduction

Mitra:

A warm welcome to Syntra and our second episode! Today, we are going to talk about information quality, trust, responsibility, and data puzzles. How do these pieces fit together? We are going to explore that in depth together with Glenn Svanberg, who you met in our first episode. Great to have you back!

Glenn:

Great to be back. This is going to be really exciting!

Mitra:

And today we are also joined by Martin Vollrathson. Who are you, Martin?

Martin:

Well, who am I? I am a software developer here at Fiwe, with a background in e-commerce and product data going back 20 years. So I hope I have something useful to contribute.

Mitra:

I am quite sure you do. And for those of you who have not listened before, this is the podcast for people who are driving their function forward and want to understand how the interplay between systems and users shapes the way we work. We share insights into how new technological possibilities are transforming the way we work and interact with technology. With that said, let us start with the question: what do we actually mean by information quality?

What is the difference between data quality and information quality?

Glenn:

It is a really interesting term. But before we answer that question, I want to talk about the difference between data and information. We have been talking about data quality for a long time, and when I think about data, I think about a single data point. It is one piece of an information puzzle. It is only when you put several data points together that you actually create meaning and communicate something.

To use a concrete example, data quality is about having a colour registered for a product. That is very important. If the colour field is filled in for 100 percent of your products, then you have good data quality. But information quality is something more. Is it the right colour for that specific product? An image might show white, while the field says “green t-shirt”. In that case, you may have 100 percent data quality, but almost no information quality.
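The distinction Glenn describes can be made concrete with a small sketch. The field names and sample records below are hypothetical illustrations, and the consistency check uses the product title as a stand-in for the image, since comparing against an actual image would require a vision model:

```python
def data_quality(products, field):
    """Data quality as described here: the share of products where the field is filled in."""
    filled = sum(1 for p in products if p.get(field))
    return filled / len(products)

def information_quality_issues(products):
    """Information quality: flag products whose colour field contradicts the title."""
    issues = []
    for p in products:
        colour = (p.get("colour") or "").lower()
        title = (p.get("title") or "").lower()
        if colour and colour not in title:
            issues.append(p["id"])
    return issues

products = [
    {"id": 1, "title": "Green t-shirt", "colour": "green"},
    {"id": 2, "title": "White t-shirt", "colour": "green"},  # contradiction
]

print(data_quality(products, "colour"))      # 1.0 -> "perfect" data quality
print(information_quality_issues(products))  # [2] -> but an information-quality error
```

The point of the sketch is exactly the gap in the t-shirt example: the colour field is filled in for every product, so the completeness metric reports 100 percent, while a cross-field check still finds a contradiction.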

Martin:

I have a good example I found online, a wall tile. If you read the technical attributes, it says “placement: wall or floor” and “surface: matt”. But when you look at the image, the tile really looks glossy. And if you read the description, it turns out it is only recommended for walls, because wall tiles are more fragile than floor tiles. So there are obvious contradictions in the published information.

Glenn:

You can find that kind of example almost everywhere. I think the problem has been ignored for a long time because there has not really been a way to manage it. We are constantly adding more and more data. We connect new sources and bring in external data flows. But we have not had the time to review every field, and still we put it all together into an information puzzle in the end.

Martin:

Exactly. When you had 1,000 or 10,000 products, it was manageable. Someone could keep track of what information was actually being published. But with 100,000 products, 500 fields per product, and pressure for faster time to market, how are you supposed to know whether every product is correct?

Glenn:

That is when you end up just trusting it and pushing the information out. And then you find examples like this everywhere.

What are the consequences of poor information quality?

Glenn:

The consequences are quite serious. First, the customer may not feel confident enough to complete the purchase if the information is unclear or contradictory. If I do not know whether I am getting a red or a white t-shirt, because the image and text say different things, I may move on to another seller. But it also damages your brand. What does it say about your control and professionalism if you publish incorrect information?

Martin:

And there are direct financial consequences as well. Returns increase when the delivery does not match the description. I think we place too much of the review responsibility on the end customer today. Someone who wants to buy something is aware, at least subconsciously, that the information may be wrong. So when they find a product they want, they start checking other retailers just to make sure the information is actually correct. If we can remove that responsibility from the customer and deliver better information, that has to be a competitive advantage.

Glenn:

Absolutely. When I buy something, I choose the seller I trust most, the one with the best information. And I come back again and again, because I know what I am actually buying. That is a clear sales argument.

Martin:

And that trust affects everything else as well. If you can trust the information being published, you automatically start to expect that you can also trust the logistics and the returns process.

What does trust have to do with information quality?

Mitra:

You mentioned how this affects the brand and trust in a company. If we dig deeper into the concept of trust, how is it connected to information quality?

Glenn:

You can trust a company, a brand, or a person, and those things are often linked. I trust a person at a company, and therefore I trust the company. Externally, what matters is the customer’s perception of your company. But internally, there is always someone taking responsibility and making the company worthy of that trust. For someone to take responsibility, they have to trust their internal processes and tools. Trust runs through several levels.

Martin:

We talked earlier about whether you should state that a text was created with the help of AI. I think what you should actually publish alongside your information is the validation framework. In other words, explain which criteria the information has been validated against. That way, the customer can judge whether the process seems trustworthy. I think it creates more confidence to show how the information has been reviewed, regardless of whether it was AI-generated or produced by a human.

Mitra:

Trust is a fascinating concept. I trust you completely, Glenn, and you too, Martin. If you say you had meatballs for lunch, I do not need to verify it. I was not in the lunchroom, but I still believe you. So why do I not trust a system in the same way?

Martin:

Exactly. And that is a very good question. I may actually have had meatballs the day before and mixed up the days. In that case, my process for producing the information was flawed, not my intention to lie.

Glenn:

Trust is built over time. You trust Martin because he is usually honest about that sort of thing. He has built that trust. I think you need to approach systems and processes in the same way. You can grow into trust if you verify continuously and see that the process works over time. But the difference between systems and people is interesting here. Part of why we trust people more is that a system cannot take responsibility. There is always a human being who is the final accountable party, and we cannot hand that over to machines.

What does work on information quality look like today?

Mitra:

Are there good processes in place today? What does it actually look like out there in companies?

Martin:

I think it is a very reactive process. You get feedback from customers through returns or direct questions. But working proactively with it, no, I do not think that is common. The holistic evaluation is missing.

Glenn:

I have not found a place where people are actually working with information quality in a systematic and proactive way. Companies have worked on data quality for a long time, and they know whether the fields are filled in. But information quality, whether the information is actually correct in context, is always reactive. And often the feedback comes at the very end, when the information has already reached the customer. By then, you have already lost trust, and the path back to correcting the issue is long, through endless email chains from customer to supplier. An enormous amount of time is lost simply because there is no systematic process in place.

Martin:

We are missing that built-in feedback loop. The one we drive ourselves, without depending on customer reactions.

Glenn:

It simply is not there. And once you have published a product range, incorrect information can sit there for a year before someone happens to notice it.

Martin:

And the worst part is the feedback we never get, the customer who quietly goes somewhere else.

How do you start working actively with information quality?

Mitra:

If we look at companies that now want to start working more actively with information quality, not just adding more data but also making sure it is the right data, where do you begin?

Glenn:

Start now. It can feel intimidating to open that box and actually measure whether the information is correct, because you will find errors. But that box does not get any smaller if you wait. It feels backwards to wait until you have added three more data sources. It is better to look at a narrow product range now. For example:
• Your core assortment
• The top 10 percent best-selling products

What level of information quality do you have there?

Martin:

And where in the process should you begin? It is probably unavoidable that you start with the information that is already published. That gives you a clear view of the current situation. Over time, you can work your way further back in the chain, earlier and earlier in the process. It is like spilling milk on the table. The milk runs down onto the floor, and you are tempted to start wiping the floor because that is where the visible problem is. But if you only wipe the floor, more milk will keep running down from the table.

Mitra:

That is a great analogy. And that is exactly it, the error is discovered at the very end of the chain, when the customer is about to make a purchase. It takes a long time just to figure out where the source of the problem actually is. But are there tools that can support this work?

Glenn:

This is a very new area. We have not had the capabilities before. It was really with the ChatGPT moment in 2023 that it started to become possible to do this kind of validation. Before that, it was a huge undertaking. I can see tools beginning to emerge, but no one has really taken on the challenge and solved it fully. Martin is working on a solution for exactly this.

Martin:

Exactly. We are working on an application where you feed in product data, initially the information already published on the e-commerce site. Then AI automatically carries out a holistic evaluation for each product:
• Is the information contradictory?
• Are the dimensions and units reasonable for this category?
• Is there something that logically cannot be correct?

The tool can also automatically cluster the products and reveal patterns: “All products made of this material seem to have similar issues in their descriptions.” “All products from this supplier show a similar type of deviation.” In that way, you can find the source of the problem and fix it early, even if you discovered the issue late.
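The clustering step Martin describes can be sketched in a few lines. This is not the actual tool, just an illustration under assumed data: once some validation check has flagged products, grouping the flags by a shared attribute such as supplier or material surfaces the likely source of the error:

```python
from collections import Counter

# Hypothetical flagged products from an upstream validation step.
flagged = [
    {"id": 101, "supplier": "Acme", "material": "ceramic"},
    {"id": 102, "supplier": "Acme", "material": "ceramic"},
    {"id": 103, "supplier": "Nordic", "material": "steel"},
]

def error_sources(flagged_products, attribute, threshold=2):
    """Return attribute values that account for at least `threshold` flagged products."""
    counts = Counter(p[attribute] for p in flagged_products)
    return {value: n for value, n in counts.items() if n >= threshold}

print(error_sources(flagged, "supplier"))  # {'Acme': 2} -> one supplier dominates the errors
```

Even this naive grouping shows the idea: instead of correcting 100,000 products one by one, you trace the deviations back to the supplier or data feed that produced them and fix the problem at its source.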

Mitra:

Can it feel intimidating when a tool suddenly reveals errors you had not seen before?

Martin:

Absolutely. When I test it, it is easy to get the result that everything is wrong. But that depends on how fine-meshed a net you use. If you focus on the most important things first, you might find the eight percent of products with urgent issues and fix those first. Then you can go deeper step by step. The important thing is not to begin with the feeling that nothing can be fixed.

Glenn:

And you are quite vulnerable if you have been responsible for a category for a long time. There is professional pride in the work. Being told that 100 percent of your products contain errors is not motivating. It is a daunting world to step into.

Summary

Mitra:

Thirty minutes have now passed, and a huge thank you to both of you for joining today. It has been incredibly interesting, and we probably could have kept going for a long time.

Glenn:

Thank you so much. It was great fun.

Martin:

Thank you, thank you!

Mitra:

If you are interested in hearing more about validation and how to improve information quality, there is an episode from AI Sweden featuring Validio as one of the guests. The founder takes part, and it is a very good episode that I can recommend. If you have thoughts or suggestions for topics you would like us to cover in the podcast, you are more than welcome to get in touch. Otherwise, thank you for listening, and we hope to see you again soon.

You have been listening to Syntra, a podcast from Fiwe.


Ready to take the next step together with your data?

We help you transform data into information and communication that truly makes a difference – for your workflows, decision-making and product offering.