Research gets filed not because teams are negligent, but because it is delivered in a format that cannot be interrogated. The knowledge exists. It is just not reachable at the moment it would be most useful.
Across creative agencies, engineering firms, product teams and public bodies, the same thing keeps happening: significant investment goes into understanding an audience, a customer base or a problem context, and then that knowledge plays almost no part in the decisions that follow.
The research itself is rarely the problem; most of it is excellent. The issue is structural: how research gets delivered, stored and accessed, and the mismatch between the format knowledge is produced in and the format it needs to be in to actually shape decisions.
The same story, every project
The sequence is familiar enough. Research is commissioned, fieldwork conducted, findings written up. There's a meeting where key themes get discussed, the brief gets shaped, and then the project begins in earnest.
At this point the research tends to stop being active. It becomes a PDF in a shared drive, a slide deck in a project folder, a set of conclusions people half-remember. The team doing the day-to-day design, development or production work rarely has easy access to the underlying data. They rely on what stuck from the kick-off meeting, what made it into the brief, and what senior colleagues carry in their heads.
It is not laziness, and it is not negligence. It is a rational response to an access problem. The research exists in a format that simply cannot be interrogated during a live project. You cannot ask a PDF how a specific audience segment feels about a specific design choice, or check a slide deck against a creative direction that emerged in a session last Tuesday. The knowledge is there, but it is not reachable at the moments it would matter most.
Why it gets expensive
Once the research stops being consulted, the gap compounds. Individual decisions get made without reference to the evidence that would have informed them. Over time they accumulate into a direction that may have drifted considerably from what the research indicated the audience actually needed. By the time misalignment surfaces (usually at client review or stakeholder sign-off), the rework is expensive and the schedule is tight.
Industry data on creative projects suggests campaigns typically require around five rework rounds before approval, with a significant proportion of creative spend going on revision that could have been avoided had audience needs been kept active throughout.[1] That is a structural waste of both research investment and project budget, not a marginal efficiency loss.
There is also a second-order cost that is harder to measure: the slow erosion of trust in research as a discipline. When research does not visibly influence outcomes, organisations gradually invest less in it. The next brief is written with a lighter evidence base. The next research programme is shorter and cheaper. Over time, the institutional knowledge that good research should build never accumulates, because each project starts again from scratch.
Summaries and workshops help, but they don't solve it
Several standard interventions are used to address this. Research summaries and "insight packs" distil findings into shorter documents intended to be more usable. Research champions are appointed to keep findings visible. Workshops at project start embed key themes before work begins. Regular check-ins against the brief are scheduled in.
These all help at the margin, but none of them address the underlying problem. Teams do not need a shorter summary. They need to be able to ask a specific question and get a grounded answer at the moment that question arises. Research summaries cannot do this. The insight is still locked in a format that cannot respond to the unpredictable, contextual questions that come up throughout a live creative or strategic process.
Research champions help with salience but create a bottleneck: they are only present in meetings, not in the design session at 4pm on a Thursday when a decision is actually being made. Workshops front-load knowledge but cannot address the ongoing need for access throughout a project that may last months.
What actually needs to be different
What needs to change is straightforward to describe, even if it has historically been difficult to achieve without the right tooling: research needs to be continuously queryable, throughout a project, at the moment questions arise.
That means knowledge encoded in research documents needs to be accessible in a conversational format, so that a designer, strategist or product manager can ask "what does the research say about how this segment responds to this kind of message?" and get a grounded, source-linked answer in real time. The answer needs to be traceable: not "the research suggests something like this" but "section 3.2 of the Q3 audience study found this, and the brand tracker from Wave 4 corroborates it." And that access needs to be available to the whole team, not just the senior researcher who commissioned the work.
A confidence-scoring requirement follows directly from this. When research is being used in real time by practitioners who are not research specialists, they need to be able to tell the difference between a strongly grounded answer and one that is extrapolating beyond the evidence. Without that signal, the risk is substituting misplaced confidence in AI outputs for the informed scepticism that research professionals bring to their own findings.
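The output contract this implies can be sketched in a few lines. This is a toy illustration, not a production retrieval pipeline: the `SourcePassage` and `GroundedAnswer` structures and the keyword-matching `answer_query` function are hypothetical stand-ins for what a real system would do with embeddings and a language model. The point is the shape of the answer: every response carries its sources and an explicit confidence label, so a non-specialist can see at a glance how well grounded it is.

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    document: str   # e.g. "Q3 audience study"
    section: str    # e.g. "3.2"
    text: str

@dataclass
class GroundedAnswer:
    answer: str
    sources: list     # the passages the answer is drawn from
    confidence: str   # "grounded", "partially grounded" or "extrapolated"

def answer_query(query_terms, corpus):
    """Toy retrieval: match query terms against passages and label the
    answer by how well the evidence supports it. A real system would
    match on meaning, not keywords; only the output contract matters here."""
    hits = [p for p in corpus
            if any(t in p.text.lower() for t in query_terms)]
    if not hits:
        return GroundedAnswer("No direct evidence found.", [], "extrapolated")
    label = "grounded" if len(hits) >= 2 else "partially grounded"
    cites = "; ".join(f"{p.document}, section {p.section}" for p in hits)
    return GroundedAnswer(f"Supported by {cites}.", hits, label)

# Hypothetical corpus of research passages:
corpus = [
    SourcePassage("Q3 audience study", "3.2",
                  "Younger segments respond well to direct messaging."),
    SourcePassage("Brand tracker, Wave 4", "2.1",
                  "Direct messaging lifted recall in the 18-24 segment."),
]
result = answer_query(["direct messaging"], corpus)
```

Note that the confidence label is part of the returned object, not an afterthought: the practitioner sees "partially grounded" before they see the answer text, which is what makes the scepticism signal usable in practice.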
Three things shift when research stays active
In practice, keeping research queryable throughout a project changes three things.
First, misalignment is caught early. When creative or design decisions can be quickly checked against audience evidence throughout a project, not just at the start, drift is caught in days rather than discovered at review after weeks of work in the wrong direction. Facilitator agents monitoring the digital thread of decisions can flag when work is diverging from the evidence base before the divergence becomes expensive.
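A minimal sketch of the kind of check such an agent might run follows. The decision-log shape and the tag scheme here are assumptions for illustration, not a real product API: a production agent would compare the meaning of a decision against the evidence, whereas this version only shows the control flow of flagging decisions that cite no research at all.

```python
def flag_ungrounded(decision_log, evidence_tags):
    """Return decisions whose recorded rationale cites none of the
    tags present in the research evidence base (hypothetical schema)."""
    return [d for d in decision_log
            if not set(d.get("evidence", [])) & evidence_tags]

# Hypothetical decision log from a live project:
log = [
    {"decision": "Shift tone to humour", "evidence": ["seg-18-24", "q3-study"]},
    {"decision": "Drop the loyalty message", "evidence": []},
]
flagged = flag_ungrounded(log, {"seg-18-24", "q3-study", "wave-4"})
```

The value of even this crude check is timing: it runs continuously against the decision log, so an ungrounded decision surfaces the day it is made rather than at client review.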
Second, decisions become defensible. When every output can be traced to the specific research passages that informed it, stakeholders asking "why did you take this direction?" get an evidenced answer rather than "that's what felt right." This is particularly valuable in organisations where decisions need to be approved by people who were not part of the day-to-day process: clients, boards, regulators, funders.
Third, knowledge compounds across projects. Traditionally, each project draws on the research commissioned for it and then archives it. When audience knowledge is maintained in a persistent, queryable system, what is learned in one project informs the next. The second campaign benefits from the first. The third product launch builds on what the first two taught the team about this specific audience. Institutional knowledge accumulates rather than evaporating between engagements.
This isn't just a creative industry problem
The research-gets-filed problem is not unique to creative industries. It surfaces wherever organisations invest in understanding a problem, an audience or a domain, and then make decisions about it without adequately drawing on that understanding. Engineering firms commission technical assessments that inform the project specification but not the day-to-day engineering decisions. Public bodies conduct consultation processes that shape the policy framework but not the implementation choices. Professional services firms develop institutional expertise that accumulates in senior people's heads rather than being systematically accessible to the whole team.
In each case, the form in which knowledge is produced (reports, presentations, documents) is not the form in which it needs to be present to influence the decisions that follow. Closing that gap is not primarily a question of research quality, or of team discipline. It is a question of the infrastructure through which knowledge is made accessible in professional practice. That infrastructure can now be built. The question for most organisations is whether they will bother to build it.
- [1] BetterBriefs, WFA and IPA (2025). The BetterIdeas Project. Global survey of 1,034 creative industry professionals across 54 countries examining the relationship between brief quality, rework cycles and creative outcomes.