AI is everywhere — on every conference stage, on every sales slide, in every second LinkedIn post. And at the same time you sit in your own company and see: apart from a handful of employees using ChatGPT for their emails, little has happened in daily work. We build AI and automation solutions for mid-market companies (German Mittelstand) that genuinely reduce the load in day-to-day operations — and we say openly where AI is not the right answer and a 50-line workflow does the job better.
Does this sound familiar?
- Your management asks: “Why aren’t we using Copilot? Others already do.” — and nobody quite knows what an honest answer would be.
- A few months ago you bought Copilot licenses because they were offered with the Microsoft renewal. Three months later, nobody uses it regularly, and reporting shows the money simply draining away.
- The IT service desk gets 40 tickets a day, and it feels like 70 percent are the same frustrations: password resets, printer drivers and access requests. Your first line works through these and gets to nothing else.
- Your employees ask subject-matter questions in WhatsApp groups, in emails to colleagues, or worse: in ChatGPT — because the company knowledge in SharePoint, in the DMS and on network drives cannot be found.
- Marketing experiments with ChatGPT, sales with Claude, HR with some CV-screening AI — and nobody at the company knows which company data ends up where.
Why this happens
In the last two years AI has moved from a research question to a sales push. Every Microsoft partner brings up Copilot, every consultant has an AI roadmap in their portfolio, and at the same time the expectation from management, boards and family shareholders rises: “We have to do something.” What rarely happens is the sober question of where AI actually takes work off people’s hands in concrete operations — and where it remains an expensive toy that sits in a drawer after three months.
On top of that: most Mittelstand companies don’t have their data in the state modern AI tools need. Copilot can only be as good as the SharePoint permissions it sits on top of — and in many companies these permissions have grown over years and no one wants to touch them. If Copilot is then “successfully” introduced, it suddenly sees personnel data, payroll or board minutes it shouldn’t see. That’s not an AI problem, it’s a data foundations problem — but it only becomes visible through the AI.
And finally: AI is not the same as automation. A lot of what is sold today as an “AI project” is at its core a workflow problem — a recurring procedure that was never cleanly mapped, and for which now a large language model is supposed to be harnessed because it sounds more impressive than “Power Automate”. We deliberately separate the two worlds: AI where language and ambiguity are in play, classical automation where the process is actually clear and just never consistently implemented.
What this is concretely about
Microsoft 365 Copilot — when it really makes sense
Before we book a Copilot license, we check two things with you: what your SharePoint permissions look like, and which data in your tenant is actually classified. If permissions have grown wild, Copilot sees things it shouldn’t — and the question “How high was management’s last bonus?” suddenly becomes answerable for every person in the company. First get the data basis in order, then Copilot. And even then not for everyone, but for the roles in which text work, research and summarization occur daily — sales, marketing, management, back office with high mail volume.
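A minimal sketch of what that permissions check can look like in practice, assuming a Microsoft Graph access token with Sites.Read.All and a known drive id (both placeholders here): it walks the items in a document library and flags sharing links that reach the whole organisation or anonymous users, which is exactly the material Copilot would surface to anyone who asks.

```python
# Sketch only: audit SharePoint sharing scopes via Microsoft Graph.
# Assumptions: a valid access token (acquisition via MSAL omitted) and
# a known drive id; both values below are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}   # placeholder token
DRIVE_ID = "<drive-id>"                                # placeholder drive

def overshared_items(drive_id: str):
    """Yield items whose sharing links reach the whole org or anonymous users."""
    items = requests.get(f"{GRAPH}/drives/{drive_id}/root/children",
                         headers=HEADERS).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")  # "anonymous" / "organization" / "users"
            if scope in ("anonymous", "organization"):
                yield item["name"], scope

for name, scope in overshared_items(DRIVE_ID):
    print(f"overshared: {name} ({scope} link)")
```

In a real engagement this runs recursively over the relevant libraries and feeds a remediation list; the point here is only that the check is mechanical and can happen before any license is bought.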
Internal knowledge search
Employees ask a subject-matter question in Teams, and the answer comes from the company’s knowledge — from SharePoint, from the DMS, from the wiki, with cited sources. Technically that builds on Azure AI Search and a classic RAG pattern (Retrieval-Augmented Generation): the AI doesn’t generate an answer from thin air, but first finds the matching documents in your inventory and formulates the answer along those sources. How you notice it’s due: new employees need three weeks to learn where which document lives, while the old hands carry the knowledge in their heads.
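As a rough illustration of the pattern, here is a minimal sketch, assuming an Azure AI Search index with “title” and “content” fields and an Azure OpenAI chat deployment; all endpoints, keys and names below are placeholders. Retrieve the matching documents first, then let the model answer strictly from them and cite the titles.

```python
# Sketch only: RAG over an Azure AI Search index.
# Assumptions: an index named "company-knowledge" with "title" and
# "content" fields, plus an Azure OpenAI chat deployment; all endpoints,
# keys and names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient("https://<search>.search.windows.net",
                      "company-knowledge",
                      AzureKeyCredential("<search-key>"))
llm = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                  api_key="<aoai-key>", api_version="2024-02-01")

def answer(question: str) -> str:
    # 1. Retrieve the most relevant documents from the company's own inventory
    hits = search.search(question, top=5)
    sources = "\n\n".join(f"[{hit['title']}]\n{hit['content']}" for hit in hits)
    # 2. Answer strictly from those sources, citing the titles
    reply = llm.chat.completions.create(
        model="<chat-deployment>",  # deployment name, placeholder
        messages=[
            {"role": "system",
             "content": "Answer only from the sources below and cite their titles. "
                        "If the sources do not contain the answer, say so.\n\n" + sources},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content
```

The production version adds permission trimming on the search side and document chunking, but the shape stays the same: retrieve, then generate along the retrieved sources.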
Workflow automation (Power Automate / n8n)
The less glamorous but usually more rewarding part. Recurring procedures that today run via mail distribution lists, Excel sheets and shouting become defined workflows: quote dispatch with automatic filing in the DMS, order confirmation with feedback to sales, onboarding of a new employee with license assignment, group membership and device preparation. We use what fits the situation — Power Automate, if you are in the Microsoft world anyway, n8n for more open scenarios or when you want to stay independent.
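To make “defined workflow” concrete, here is a minimal sketch of two onboarding steps written out in Python against Microsoft Graph rather than as a Power Automate or n8n flow, so the logic is visible; the token, group id and SKU id are placeholders, and a real flow would add error handling and approvals.

```python
# Sketch only: onboarding steps a Power Automate or n8n flow would run,
# spelled out against Microsoft Graph. Assumptions: an app token with
# Group.ReadWrite.All and User.ReadWrite.All; all ids are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>",
           "Content-Type": "application/json"}

def onboard(user_id: str, group_id: str, sku_id: str) -> None:
    # 1. Group membership: add the new employee to the department group
    requests.post(f"{GRAPH}/groups/{group_id}/members/$ref", headers=HEADERS,
                  json={"@odata.id": f"{GRAPH}/directoryObjects/{user_id}"})
    # 2. License assignment: give the user their M365 license
    requests.post(f"{GRAPH}/users/{user_id}/assignLicense", headers=HEADERS,
                  json={"addLicenses": [{"skuId": sku_id}], "removeLicenses": []})

onboard("<user-id>", "<sales-group-id>", "<m365-sku-id>")
```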
Ticket triage & classification
Incoming service-desk tickets get pre-classified (category, urgency, likely resolution path), briefly summarized and routed to the right place. For recurring standard questions — password, VPN, printer — the system suggests a resolution path that the responsible person only needs to confirm. The human stays in the loop. How you notice it’s due: when 70 percent of your first-line workload is the same five topics and nobody has time left for the genuinely interesting tickets.
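A minimal sketch of what that pre-classification can look like, assuming an Azure OpenAI deployment and an example category list (endpoint, key and deployment name are placeholders); the model only produces a proposal, and nothing is routed or resolved until a person confirms it.

```python
# Sketch only: LLM-assisted ticket triage with a human in the loop.
# Assumptions: an Azure OpenAI chat deployment; endpoint, key and
# deployment name are placeholders, the category list is an example.
import json
from openai import AzureOpenAI

llm = AzureOpenAI(azure_endpoint="https://<aoai>.openai.azure.com",
                  api_key="<aoai-key>", api_version="2024-02-01")

CATEGORIES = ["password", "vpn", "printer", "access-request", "other"]

def triage(ticket_text: str) -> dict:
    reply = llm.chat.completions.create(
        model="<chat-deployment>",  # deployment name, placeholder
        messages=[
            {"role": "system",
             "content": "Classify the IT ticket. Respond with JSON only, keys: "
                        f"category (one of {CATEGORIES}), urgency (low/medium/high), "
                        "summary, suggested_resolution."},
            {"role": "user", "content": ticket_text},
        ],
    )
    proposal = json.loads(reply.choices[0].message.content)
    # Human in the loop: the proposal waits for confirmation, it is never applied directly
    proposal["status"] = "awaiting_human_confirmation"
    return proposal
```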
Governance — what AI may see, what not
The invisible but decisive part. Data classification (what is public, internal, confidential, strictly confidential), Sensitivity Labels in M365, prompt filters and an audit trail for AI usage. Plus clear ground rules for employees: what may go into ChatGPT, what may not, which internal tools are available. That’s the answer to the question your data protection officer will ask you anyway in the coming months.
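As an illustration of those ground rules in code form, a minimal sketch of a prompt gate with an audit trail, assuming content headed for an AI tool already carries a classification label; the label names and log file are made up for this example, and a real setup would hook into Sensitivity Labels and a proper logging backend.

```python
# Sketch only: a prompt gate with an audit trail. Assumptions: content
# headed for an AI tool already carries classification labels; the label
# names and the log file are illustrative, not a product.
import datetime
import json

BLOCKED_LABELS = {"confidential", "strictly confidential"}
AUDIT_LOG = "ai-usage-audit.jsonl"

def allow_prompt(user: str, tool: str, labels: list[str]) -> bool:
    """Return True if the prompt may be sent; log the decision either way."""
    blocked = bool(BLOCKED_LABELS & {label.lower() for label in labels})
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "labels": labels,
            "blocked": blocked,
        }) + "\n")
    return not blocked

# Example: internal material may go to the tenant-bound tool, confidential may not
print(allow_prompt("j.mueller", "copilot", ["internal"]))           # True
print(allow_prompt("j.mueller", "chatgpt-free", ["confidential"]))  # False
```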
Where AI doesn’t help today — the honest answer
AI is not a panacea, and we often say in the initial conversation: “This goes better today with a 50-line Power Automate flow than with an LLM.” For example:
- Structured data extraction from always-identical forms — if the form looks the same every day, Power Automate with Form Recognizer rules is more stable, cheaper and more predictable than any LLM (see the sketch after this list).
- Closing tickets without a human in the loop — risky. An AI that independently grants access or resets passwords is a security gap you can’t explain in the audit. The human should make the decision.
- Complex legal or regulatory evaluations — AI as assistance for researching and structuring yes, AI as the answer-giver on liability-relevant questions no.
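For the first point, a minimal sketch of the extraction side with Azure Document Intelligence (formerly Form Recognizer), using the prebuilt invoice model as a stand-in; for your own always-identical form you would train a custom model, and the endpoint and key below are placeholders.

```python
# Sketch only: structured extraction from a fixed form with Azure
# Document Intelligence (formerly Form Recognizer). Assumptions: the
# prebuilt invoice model as a stand-in; endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient("https://<docintel>.cognitiveservices.azure.com",
                                AzureKeyCredential("<key>"))

with open("invoice.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-invoice", document=f)
result = poller.result()

for doc in result.documents:
    vendor = doc.fields.get("VendorName")
    total = doc.fields.get("InvoiceTotal")
    print("vendor:", vendor.value if vendor else None)
    print("total:", total.value if total else None)
```

The output is deterministic and auditable, which is exactly why a rules-plus-extraction pipeline beats an LLM when the input never varies.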
A sentence we often say: if your sales team writes 50 quotes per week and 90 percent of them differ only in quantities and prices, you don’t need AI — you need a clean template and a Power Automate flow. AI makes the difference where language really varies, where documents look different, where questions are ambiguous. There it is a lever. With fixed patterns it is the expensive detour.
What you should look out for — even if you don’t go with us
- Ask for the concrete use case before anyone talks licenses. Whoever starts with “Copilot costs per month …” instead of “In which role at your company would this take work off someone’s hands every day?” is selling a license, not a solution.
- Ask to see the data classification before you roll out a knowledge AI. If nobody can say which documents are confidential and which are not, the AI ends up seeing everything. That isn’t the AI’s fault, but it is your problem.
- Ask about the rollback plan. If an automation flow sends order confirmations and runs into an infinite loop at three in the morning, your sales team will want to know the next day how you stop it. Whoever has no answer has no plan.
- Be suspicious of ROI promises with concrete numbers. Whoever promises “300 % productivity uplift from Copilot” has no figure they can substantiate — the market is simply too young for that, and the measurements that exist are all vendor-funded. A serious answer is: “We define up front how we measure, and after three months we look honestly.”
- Clarify early where the data ends up. Microsoft Copilot stays in the tenant. The free version of ChatGPT does not. Private Claude use with company documents is a data protection incident. Whoever doesn’t differentiate between these is conflating things that must be kept apart.
- Pilot small before you roll out broadly. One department, six to eight weeks, clear success criteria — then decide. Whoever buys Copilot for the whole company before anyone has really used it is burning money.
When this is now due
- Your management is actively asking about Copilot or AI, and you want a well-founded answer instead of a gut reaction.
- Growth has stalled at a plateau that looks like a headcount problem but is actually a productivity problem — recurring procedures eat the capacity needed for the actual business.
- Hiring for the first line or the back office has been unsuccessful for months — the labour market simply isn’t producing the people, and you need a different kind of leverage.
- A concrete data protection concern has come up: employees use ChatGPT with customer data, or generated text whose origin nobody knows suddenly appears in marketing materials.
- Microsoft license renewal is up, Copilot is offered, and you want an honest decision basis instead of sales pressure.
- NIS-2 preparation is running, and the question “How do you handle AI tools?” will appear in the questionnaire.
How we work
Phase 1 — Initial conversation & use-case inventory
30 minutes initial conversation, then a structured look at the recurring procedures at your company: what happens daily, what happens weekly, where does frustration accumulate, where is time lost. Deliverable: a use-case list, sorted into “AI is worth it”, “classic automation is worth it”, and “nothing is worth it because the process has to be clarified first”.
Phase 2 — Pilot in one department
We pick a use case with you that is manageable, measurable, and whose success will be visible. A six-to-eight-week pilot in one department, with clear success criteria defined up front. Deliverable: a running use case, an honest evaluation (“what worked, what didn’t”), and a decision basis for “roll out further or do it differently”.
Phase 3 — Roll out or back to the start
If the pilot holds up, we roll out step by step — per department, per use case, with training for users and inclusion in governance. If it doesn’t, we say so openly and either find a better use case or say plainly that AI currently isn’t the right lever at your company.
Phase 4 — Operation & ongoing adaptation
AI models change, licenses change, workflows change. Optionally we accompany ongoing operations in a quarterly rhythm: what’s new at Microsoft, which new use cases have emerged, what isn’t running as planned. Deliverable: an AI and automation portfolio that grows with you instead of rusting away.
What you can expect from us — and what not
What you get:
- Direct contact with the founder as your fixed point of contact — no ticket carousel, no rotating account managers.
- An honest use-case evaluation before anyone buys a license or builds a flow.
- Pilot phases with defined success criteria, instead of “let’s see”.
- Documentation that a successor understands — not spaghetti workflows that only we can maintain.
- Recommendations that may also work against our own revenue — if the right lever is a simple Power Automate flow, we build exactly that.
What we deliberately don’t do:
- ROI promises with concrete percentage figures. The market doesn’t support them, and we won’t commit to gut-feel numbers that come back at you in front of the supervisory board.
- AI as an end in itself. If the use case runs more cheaply and more stably with classical automation, we do that.
- Full automation of critical decisions without a human in the loop. Closing tickets, releasing contracts, triggering payments — the human stays.
Where we also say no:
- If you want to introduce Copilot “because everyone has it” and the data foundation can’t support it — then first clean up SharePoint and permissions, and then we talk again.
- If the honest answer is: “This isn’t an AI use case, it’s a process that was never cleanly defined.” Then we talk about the process, not the model.
- If what you actually need is training employees in the safe use of existing AI tools rather than an in-house build. That, too, is a valid answer.
How it starts
- 30 minutes initial conversation, free of charge, non-binding, by video or phone.
- What we clarify: where noticeable, recurring effort arises at your company today, and which tools are already in place.
- Optionally useful in advance, but not required: current Microsoft license packages, workflow/automation tools in use, a rough idea of which department would be the most likely pilot candidate.
- Engagement models: a one-time pilot project, ongoing support in a quarterly rhythm, or a hybrid — we work out what suits you in the conversation.
Frequently asked questions
Do we really need Microsoft 365 Copilot? That depends on two things: on your roles — who works daily with text, research and summaries — and on your data classification. If SharePoint permissions are clean and there are roles in the company doing a lot of text work, Copilot can be a real lever. If the data is unsorted, you are buying a security risk on top. We check that beforehand.
What does an AI project cost? That depends on three drivers: how many use cases are in the pilot, how clean the data basis already is (or whether it has to be tidied up first), and how many employees are trained at the end. In the initial conversation we give an honest range — blanket figures without a look into your tenant would not be serious.
How do we prevent the AI from sending company data to OpenAI? Through the choice of tool and through clear ground rules. Microsoft Copilot stays in your own tenant, Azure OpenAI Service likewise. The free version of ChatGPT does not — there, prompts go into training by default. We integrate the tools so that company data stays where it belongs, and we define with you what may go into which tool.
Can we also use AI without Microsoft? Yes. Azure OpenAI is one option, Anthropic Claude via AWS Bedrock another, local models (Llama, Mistral) on your own hardware a third. We are not ideologically fixed on Microsoft — we use what fits the situation. Microsoft is the pragmatic path for many mid-market companies because M365 is in the house anyway; it isn’t mandatory.
Who is liable if the AI says something wrong? In case of doubt, the company that uses the answer. That’s why our architectures are built so that the human stays in the loop — the AI proposes, the human decides. That isn’t a brake, it’s risk management. For knowledge searches with source citations, the AI is a research tool, not the answer-giver.
What do we do when our employees use ChatGPT privately for company work? First: don’t react moralistically — they do it because the internal tool is missing or too cumbersome. Second: provide clear ground rules and an officially sanctioned internal tool, so the reflex no longer points at ChatGPT. Third: training on what may go into which tool. Bans without an alternative do not work.
Related topics
- Use Case: Internal knowledge search with AI — making company knowledge findable again
- Use Case: Automating ticket triage — relieving first line without abolishing the human
Looking more for a clean M365 tenant as a foundation? Services overview