At your company the answers exist somewhere: in SharePoint, in old emails, in the wiki, in the folder of a colleague who goes on holiday in two weeks. But people still ask in WhatsApp because it's faster. This page describes how your scattered company knowledge becomes a searchable answer source in Teams, with cited original sources instead of a hallucinating black box.
Do you have this situation?
- Employees ask subject-matter questions in WhatsApp groups because they know someone will answer there quickly — and not because the official documentation is bad, but because it’s unfindable.
- Onboarding new colleagues consists largely of "Just ask Frank", and Frank gives the same explanation for the seventh time because it isn't retrievable anywhere central.
- There is a wiki, a SharePoint, a network-drive folder and a Confluence instance someone once set up, and each of them holds fragments of the truth: some current, some from 2019.
- When someone asks "What's the process again for complaints over 5,000 euros?", first someone searches internally (in vain), then the phone calls start, and in the end the answer maybe lands in a mail that no one can find again.
- Sales leadership wants a chatbot that “finally gives answers”. IT leadership wants no further data graveyard. Management wants the knowledge not to leave the company when employees leave.
Why solve this now instead of postponing
- Knowledge is leaving the company. Older colleagues retire, younger ones change employer on average every 3–5 years. What is not centrally stored and findable is gone when they leave.
- AI tools have arrived in the Mittelstand, usually uncontrolled. Employees have long been copying company documents into private ChatGPT accounts because they need answers quickly. An orderly internal solution is the answer to a reality that is already happening.
- Microsoft 365 already provides the building blocks. If you work in Microsoft 365 anyway, the technical distance to a Teams bot with search across your SharePoint is not huge, but it is not zero either. At the latest at the licence renewal, or when the question "Do we need Copilot?" comes up, the discussion will happen anyway. Better to have it once, cleanly.
How it would look at your company
Step 1 — Sift and sort the knowledge inventory (week 1–2)
Before we index anything, we clarify together: what lives where, what is official, what is outdated, and who is allowed to see what. That is the uncomfortable phase, but it is also the most important one. An AI search over a poorly curated data basis delivers bad answers, no matter how good the model is.
Stack: SharePoint Admin Center, Microsoft Graph, permission audit. Result: a list of sources to be included in the search, plus a list of sources that need cleanup beforehand.
Step 2 — Set up the RAG architecture (week 3–5)
The pattern is called Retrieval-Augmented Generation (RAG): for each question the system searches your company knowledge, fetches the most relevant passages, and a language model formulates an answer from them, with a reference to the original source so the person asking can read on. Not "the AI made it up", but "this answer comes from this specific SharePoint document, last updated on X".
Stack: Azure AI Search as the index, Microsoft Graph API as access to SharePoint and OneDrive, Azure OpenAI Service or Foundry for the language model, all in your own Azure tenant — your data does not leave your Microsoft environment.
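The pattern above can be sketched in a few lines. This is a deliberately minimal stand-in, not the production stack: an in-memory keyword scorer plays the role of Azure AI Search, and the final model call is replaced by a placeholder; all function names and documents are illustrative.

```python
# Minimal RAG sketch: retrieve passages, then assemble an answer with
# source citations. The in-memory keyword scorer stands in for the search
# index; a real setup would query Azure AI Search and call Azure OpenAI.

DOCUMENTS = [
    {"id": "sp-001", "title": "Complaints process", "updated": "2024-03-12",
     "text": "Complaints over 5,000 euros must be approved by the department head."},
    {"id": "sp-002", "title": "Travel policy", "updated": "2023-11-01",
     "text": "Travel expenses are submitted via the finance portal."},
]

def retrieve(question: str, docs: list[dict], top_k: int = 3) -> list[dict]:
    """Score documents by naive keyword overlap and return the best matches."""
    terms = set(question.lower().split())
    scored = []
    for doc in docs:
        overlap = len(terms & set(doc["text"].lower().split()))
        if overlap > 0:
            scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def answer(question: str, docs: list[dict]) -> dict:
    """Return an answer grounded in retrieved passages, or an explicit 'not found'."""
    hits = retrieve(question, docs)
    if not hits:
        return {"text": "I find nothing on this in your sources.", "sources": []}
    # A real implementation would hand the passages to the language model here
    # and ask it to formulate an answer strictly from them.
    best = hits[0]
    return {
        "text": best["text"],
        "sources": [{"id": d["id"], "title": d["title"], "updated": d["updated"]}
                    for d in hits],
    }

result = answer("What is the process for complaints over 5,000 euros?", DOCUMENTS)
print(result["sources"][0]["id"])  # cited document: sp-001
```

The essential property is visible even in this toy version: every answer carries the IDs and last-update dates of the documents it came from, and an empty retrieval produces an honest "nothing found" instead of a guess.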
Step 3 — Teams bot as the entry point (week 5–6)
Employees don't ask in a new app they have to install. They ask where they already are: in Teams. We build a bot you address like any colleague: "What's the procedure again for complaints over 5,000 euros?" The answer comes within two to five seconds, with a link to the original source. If the AI has no confident answer, it says so instead of guessing.
Stack: Microsoft Bot Framework, Teams app manifest, optionally Power Platform for simple connections.
Step 4 — Respect permissions (week 4–6, in parallel)
This is the point where many AI projects fail: the search may only answer from documents the person asking is actually allowed to see. Whoever has no access to the management folder must not get answers from it, not even in summarized form. We set up the search so that it honours your SharePoint permissions instead of bypassing them.
Stack: Microsoft Graph with delegated permissions, Azure AI Search with Security Trimming.
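Security trimming boils down to one rule: filter search results against the asking user's group memberships before any passage reaches the language model. A simplified in-memory version (group names and the filter logic are illustrative; in Azure AI Search this is typically done with a filterable field of allowed group IDs, matched against the user's groups from Microsoft Graph):

```python
# Sketch of security trimming: each indexed document carries the IDs of
# the groups allowed to read it; results are filtered against the user's
# groups BEFORE any text is handed to the language model.

INDEX = [
    {"id": "doc-1", "text": "Public holiday calendar 2025",
     "group_ids": {"all-staff"}},
    {"id": "doc-2", "text": "Management bonus scheme",
     "group_ids": {"management"}},
]

def trimmed_search(query: str, user_groups: set[str]) -> list[dict]:
    """Return only documents the user may see (filter first, then rank)."""
    visible = [d for d in INDEX if d["group_ids"] & user_groups]
    # ...relevance ranking against `query` would happen here...
    return visible

# An employee in 'all-staff' never sees management-only content,
# no matter what they ask:
hits = trimmed_search("bonus", {"all-staff"})
print([d["id"] for d in hits])  # ['doc-1']
```

The order matters: filtering after generation would mean the model has already seen the restricted text, and a summary of it could leak into the answer.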
Step 5 — Trial, feedback, expansion (week 6–10)
We start with a trial group of 10–20 people from two or three departments. They use the bot for three to four weeks, give feedback and flag bad answers. Based on that we adjust source selection, prompts and answer format. Only when the answers are useful in 80 percent of cases do we roll out further.
What you should look out for along the way
- If someone sells you an AI search without first checking your SharePoint permissions: caution. That is exactly the mistake that leads to management bonuses suddenly popping up as a search result for everyone. That is not a model problem; it is a permission problem.
- Ask for citations in the answers. A serious internal AI search always shows which source an answer comes from. A solution that just spits out text without reference is not verifiable — and therefore not trustworthy.
- Clarify where the data ends up. An "on-prem LLM" sounds safe but is rarely realistic to operate in the Mittelstand. Azure OpenAI in your own Azure tenant in an EU region is usually the pragmatic compromise: your data stays in your Microsoft environment and is not used to train models.
- Check whether it really has to be a custom bot — or whether Copilot is enough. Microsoft Copilot for Microsoft 365 can do much of what is described here out of the box. If your knowledge inventory sits cleanly in SharePoint and your permissions are right, Copilot is often the more honest answer than a custom build. But if you have to integrate specific sources outside M365 (ticketing systems, ERP, industry wikis), a custom bot becomes interesting.
What realistically changes afterwards
- Employees find answers to recurring subject-matter questions in Teams, instead of in WhatsApp groups or from colleagues already under stress.
- Onboarding gets easier: new people can ask without feeling “stupid”, and get answers with references to the original source — which they can then read on themselves.
- Knowledge that previously lived only in individual heads gets documented step by step, because it becomes visible where the AI finds no answer, and exactly there the incentive to write things down emerges.
- The uncontrolled use of private ChatGPT accounts with company data decreases, because there is a more convenient and legitimate alternative.
- Management gets an honest view of which topics are most often asked — and thereby an indicator of where there are process or documentation gaps.
What you contribute
- Access: an admin in your Azure and Microsoft 365 tenant who grants us targeted permissions. We work with service principals, not with permanent personal admin accounts.
- Stakeholder time: one subject-matter-experienced person per relevant department who can assess which sources are authoritative and which are outdated — typically 2–3 hours per department in the sifting phase, then occasionally for trial feedback.
- Data Protection Officer and works council: an AI solution that can analyse employee questions needs a clean agreement. We provide the technical description, you bring it into your codetermination and data protection processes.
- Trial group: 10–20 people who are willing to give honest feedback for three to four weeks — and not just report “works” or “doesn’t work”.
Risks & when it does NOT fit
- If your SharePoint is sprawling and permissions are unclear, this is the wrong first step. First clean up, then make searchable. Otherwise you build a fast search over chaotic data, and the chaos only becomes findable faster, not smaller.
- If the expectation is that an AI replaces process knowledge written down nowhere. A RAG search can only find what exists. If your relevant knowledge only lives in heads, the first task is documentation, not AI.
- If the data protection framework cannot be clarified. In sectors with special confidentiality requirements (tax firms, medical practice networks, critical infrastructure), the question of what an AI may see, and in which region, is not trivial. That needs to be settled up front, not during the trial.
- If you intend to roll out Copilot soon anyway. Then the honest recommendation is sometimes: Copilot first, custom build only when Copilot demonstrably isn’t enough. We don’t recommend Copilot reflexively, but we also don’t reflexively recommend against it.
How the conversation starts
A 30-minute initial conversation, free of charge, by video or phone. What we clarify: Where does your knowledge predominantly sit today (SharePoint, wiki, tickets, mails)? Which questions come up again and again at your company? Have you tested Copilot, or deliberately not? What is the data protection and codetermination situation? From that it becomes clear whether a custom RAG setup, Copilot or a smaller solution is the right path.
Remote response immediately during service hours; an initial conversation is typically set up within 3–5 working days. In all honesty: that depends on my calendar, since I run this solo.
Frequently asked questions
Doesn't the AI still hallucinate? Hallucinations mostly arise when a language model answers from memory, without a source. In the RAG pattern the answer is formulated from your specific documents, with a reference to the source. With a good implementation the AI says "I find nothing on this in your sources" instead of guessing. Ruling it out completely is never possible, but the risk drops considerably.
Will Microsoft or OpenAI then see our company data? When using Azure OpenAI in your own Azure tenant, the Microsoft enterprise terms apply: your data is not used for training models and stays in the chosen Azure region. That is not the same as an employee's private ChatGPT account, and exactly that is why the in-house variant is more straightforward to set up in data protection terms.
What does ongoing operation cost? The cost driver is not the bot itself but the model invocations and the search index in Azure. For an 80-person company with moderate usage, ongoing Azure consumption typically moves in the low three- to four-figure range per month, strongly dependent on model choice and usage intensity. Before the rollout we show you how to monitor and cap that yourself.
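To get a feel for the order of magnitude, a back-of-the-envelope estimate. Every figure here is an illustrative assumption, not a current price; per-token rates, token counts and usage vary by model, contract and region, so check the current Azure pricing pages before planning with real numbers.

```python
# Back-of-the-envelope monthly cost estimate for ongoing operation.
# ALL figures are illustrative assumptions, not current Azure prices.

EMPLOYEES = 80
QUESTIONS_PER_PERSON_PER_DAY = 3       # assumed moderate usage
WORKDAYS_PER_MONTH = 21
INPUT_TOKENS_PER_QUESTION = 3_000      # question plus retrieved passages
OUTPUT_TOKENS_PER_ANSWER = 400
PRICE_PER_1K_INPUT = 0.003             # assumed EUR per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.012            # assumed EUR per 1,000 output tokens
SEARCH_INDEX_MONTHLY = 250.0           # assumed EUR for the search index tier

questions = EMPLOYEES * QUESTIONS_PER_PERSON_PER_DAY * WORKDAYS_PER_MONTH
model_cost = questions * (
    INPUT_TOKENS_PER_QUESTION / 1000 * PRICE_PER_1K_INPUT
    + OUTPUT_TOKENS_PER_ANSWER / 1000 * PRICE_PER_1K_OUTPUT
)
total = model_cost + SEARCH_INDEX_MONTHLY
print(f"{questions} questions/month, ~{total:.0f} EUR/month (model + index)")
```

Under these assumptions the result lands in the low three-figure range, consistent with the estimate above; a larger model, longer contexts or heavier usage move it upward quickly, which is why monitoring and spending caps belong in the setup from day one.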
Can we extend this later to other sources — tickets, ERP, CRM? Yes, and that’s one of the reasons to choose an in-house build over pure Copilot. Via connectors or custom adapters, sources outside SharePoint can also be integrated. But that only makes sense once the first area runs stably — otherwise you build complexity before you see the first value.