
How do we make internal company knowledge searchable via Teams?

At your company the answers are somewhere — in SharePoint, in old emails, in the wiki, in the folder of the colleague who is on holiday in two weeks. But people still ask in WhatsApp because it’s faster. This page describes how your distributed company knowledge becomes a searchable answer source in Teams — with cited original sources, not with a hallucinating black box.

Do you have this situation?

Why solve this now instead of postponing

How it would look at your company

Step 1 — Sift and sort the knowledge inventory (week 1–2)

Before we index anything, we clarify together: what sits where, what is official, what is outdated, and who is and isn't allowed to see what. That's the uncomfortable phase, but it's also the most important one. An AI search over a poorly sorted data foundation delivers bad answers, no matter how good the model is.

Stack: SharePoint Admin Center, Microsoft Graph, permission audit. Result: a list of sources to be included in the search, plus a list of sources that need cleanup beforehand.
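The triage result of this step can be made concrete with a simple rule of thumb: only sources that have a named owner and a recent update go straight into the index; everything else needs cleanup first. A minimal sketch, where the `Source` fields and the two-year staleness cutoff are illustrative assumptions, not part of the actual process:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical source record; in practice this data would come from a
# SharePoint Admin Center export or a Microsoft Graph sites/drives query.
@dataclass
class Source:
    name: str
    official: bool        # is there a named owner maintaining it?
    last_modified: date

def triage(sources: list[Source], today: date, stale_after_days: int = 730):
    """Split candidate sources into 'index now' and 'clean up first'."""
    cutoff = today - timedelta(days=stale_after_days)
    to_index, to_clean = [], []
    for s in sources:
        # Only official, recently maintained sources become searchable;
        # everything else goes back to an owner for a decision.
        (to_index if s.official and s.last_modified >= cutoff else to_clean).append(s.name)
    return to_index, to_clean

sources = [
    Source("HR policies", True, date(2025, 3, 1)),
    Source("Old project wiki", False, date(2019, 6, 1)),
]
print(triage(sources, today=date(2025, 6, 1)))
```

The cutoff is deliberately crude; the point is that the inclusion rule is written down and discussable, not hidden in someone's head.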

Step 2 — Set up the RAG architecture (week 3–5)

The pattern is called Retrieval-Augmented Generation (RAG): for each question, the system searches your company knowledge, fetches the most relevant passages, and a language model formulates an answer from them, with a reference to the original source so the person asking can read on. Not “the AI made it up”, but “this answer comes from this specific SharePoint document, last updated on X”.

Stack: Azure AI Search as the index, Microsoft Graph API as access to SharePoint and OneDrive, Azure OpenAI Service or Foundry for the language model, all in your own Azure tenant — your data does not leave your Microsoft environment.
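The step between retrieval and generation can be sketched as a pure function: the retrieved passages are numbered, tagged with their source, and wrapped in an instruction that forces the model to cite or decline. The passage format and prompt wording here are illustrative assumptions; in the real pipeline the passages come from an Azure AI Search query and the prompt is sent to an Azure OpenAI chat completion.

```python
def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    """Assemble a prompt that makes the model cite the retrieved sources."""
    context = "\n\n".join(
        f"[{i + 1}] {p['text']}\n(Source: {p['source']}, last updated {p['updated']})"
        for i, p in enumerate(passages)
    )
    return (
        "Answer ONLY from the numbered passages below and cite them as [n]. "
        "If the passages do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What is the complaints procedure above 5,000 euros?",
    [{"text": "Complaints above 5,000 euros go to the claims board.",
      "source": "SharePoint/QM/complaints.docx", "updated": "2025-01-10"}],
)
print(prompt)
```

Keeping this assembly step explicit is what makes the answers auditable: every claim in the output can be traced back to a numbered passage.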

Step 3 — Teams bot as the entry point (week 5–6)

Employees don’t ask in a new app they have to install. They ask where they are anyway — in Teams. We build a bot you can address like any other person: “What’s the procedure again for complaints over 5,000 euros?” An answer comes within two to five seconds, with a link to the original source. If the AI has no confident answer, it says so — instead of guessing.

Stack: Microsoft Bot Framework, Teams app manifest, optionally Power Platform for simple connections.
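The wiring itself uses the Bot Framework SDK; what matters for the user experience is the shape of the reply, answer plus source link plus freshness note. A sketch of that reply as an Adaptive Card payload, the card format Teams renders (the example values are made up):

```python
def format_bot_reply(answer: str, source_url: str, updated: str) -> dict:
    """Build an Adaptive Card payload: answer text plus a link to the source."""
    return {
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
            {"type": "TextBlock", "text": answer, "wrap": True},
            {"type": "TextBlock", "text": f"Last updated: {updated}",
             "isSubtle": True, "wrap": True},
        ],
        "actions": [
            # The link back to the original document is the whole point:
            # the bot answers, but the source stays one click away.
            {"type": "Action.OpenUrl", "title": "Open original source",
             "url": source_url},
        ],
    }

card = format_bot_reply(
    "Complaints above 5,000 euros go to the claims board.",
    "https://contoso.sharepoint.com/QM/complaints.docx",
    "2025-01-10",
)
print(card["actions"][0]["title"])
```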

Step 4 — Respect permissions (week 4–6, in parallel)

This is the point where many AI projects fail: the search may only give answers from documents the person asking is actually allowed to see. Whoever has no access to the management folder should not get answers from it, not even in summarized form. We set up the search so that it honours your SharePoint permissions instead of bypassing them.

Stack: Microsoft Graph with delegated permissions, Azure AI Search with Security Trimming.
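In the security-trimming pattern, each indexed document carries the group IDs that may read it, and every query is filtered against the groups of the person asking. A minimal sketch of the filter construction, assuming a `group_ids` collection field in the index (the group IDs are made up; in practice they come from Microsoft Graph with delegated permissions):

```python
def security_filter(user_group_ids: list[str], field: str = "group_ids") -> str:
    """Build the OData filter used for security trimming in Azure AI Search:
    only documents whose group-ID field overlaps the asking user's
    Entra ID groups are returned."""
    ids = ",".join(user_group_ids)
    return f"{field}/any(g: search.in(g, '{ids}'))"

# These IDs are placeholders; a real bot would resolve them per user
# from Microsoft Graph at query time.
flt = security_filter(["aaa-111", "bbb-222"])
print(flt)
```

The crucial design choice is that the trimming happens in the search query itself, before any passage ever reaches the language model, so a summarized leak is impossible by construction.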

Step 5 — Trial, feedback, expansion (week 6–10)

We start with a trial group of 10–20 people from two or three departments. They use the bot for three to four weeks, give feedback, flag bad answers. From this we adjust source selection, prompts and answer format. Only once the answers are useful in 80 percent of cases do we roll out further.
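The 80-percent threshold above is a gate, not a feeling, so it helps to track it explicitly from the trial group's feedback. A trivial sketch of that gate (the ratings are invented):

```python
def ready_to_roll_out(feedback: list[bool], threshold: float = 0.8) -> bool:
    """True once the share of answers rated useful reaches the threshold."""
    return bool(feedback) and sum(feedback) / len(feedback) >= threshold

ratings = [True] * 17 + [False] * 3   # 17 of 20 trial answers rated useful
print(ready_to_roll_out(ratings))
```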

What you should look out for along the way

What realistically changes afterwards

What you contribute

Risks & when it does NOT fit

How the conversation starts

A 30-minute initial conversation, free of charge, by video or phone. What we clarify: Where does your knowledge predominantly sit today (SharePoint, wiki, tickets, mails)? Which questions come up again and again at your company? Have you already tested Copilot, or deliberately not? What is the data-protection and co-determination situation at your company? From this it emerges whether a custom RAG build, Copilot or a smaller solution is the right path.

Remote response is immediate during service hours; an initial conversation is typically set up within 3 to 5 working days (honestly: as a solo operation, that depends on my calendar).

Book an initial conversation

Frequently asked questions

Doesn’t the AI still hallucinate? Hallucinations mostly arise when a language model answers “from its head”, without a source. In the RAG pattern, the answer is formulated from your specific documents, with a reference to the source. With a good implementation, the AI says “I find nothing on this in your sources” instead of guessing. Hallucinations can never be ruled out completely, but the risk drops considerably.

Will Microsoft or OpenAI then see our company data? When you use Azure OpenAI in your own Azure tenant, Microsoft’s enterprise terms apply: your data is not used to train models and stays in the Azure region you choose. That is not the same as an employee’s private ChatGPT account, and that is exactly why the in-house variant is more straightforward to set up in data-protection terms.

What does ongoing operation cost? The cost driver is not the bot itself but the model invocations and the search index in Azure. For an 80-person company with moderate usage, ongoing Azure consumption typically lands in the low three- to four-figure range per month, strongly dependent on model choice and usage intensity. We show you before the rollout how to monitor and cap that yourself.
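The arithmetic behind that range is simple enough to do yourself. A back-of-the-envelope sketch, where all prices and volumes are deliberately made-up placeholders (look up current Azure pricing before relying on any number):

```python
def monthly_azure_cost(queries_per_day: float, tokens_per_query: int,
                       price_per_1k_tokens: float, index_fixed: float) -> float:
    """Rough monthly consumption: model calls plus a fixed search-index cost.
    All inputs are placeholders, NOT real Azure prices."""
    model_cost = queries_per_day * 30 * tokens_per_query / 1000 * price_per_1k_tokens
    return model_cost + index_fixed

# Illustrative numbers only: 200 queries/day, 3,000 tokens per query,
# a fictional token price, and a fictional fixed index tier.
print(round(monthly_azure_cost(200, 3000, 0.01, 250), 2))
```

Plugging in your real usage and the current price sheet turns the vague “three- to four-figure range” into a number you can budget and cap.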

Can we extend this later to other sources — tickets, ERP, CRM? Yes, and that’s one of the reasons to choose an in-house build over pure Copilot. Via connectors or custom adapters, sources outside SharePoint can also be integrated. But that only makes sense once the first area runs stably — otherwise you build complexity before you see the first value.