
AI & Automation: where AI really helps in the Mittelstand, and where it doesn't

AI-supported automation for mid-market companies in the Lower Rhine region — pragmatic, with measurable benefit, without buzzword bingo. We build what actually takes load off.

AI is everywhere — on every conference stage, on every sales slide, in every second LinkedIn post. And at the same time you sit in your company and see: apart from a handful of employees using ChatGPT for their emails, little has happened in daily work. We build AI and automation solutions for mid-market companies (German Mittelstand) that really take load off in day-to-day operations — and we openly say where AI is not the right answer and a 50-line workflow does the job better.

Does this sound familiar?

Why this happens

In the last two years, AI has moved from a research question to a sales push. Every Microsoft partner brings up Copilot, every consultant has an AI roadmap in their portfolio, and at the same time the expectation from management, boards and family shareholders grows: “We have to do something.” What rarely happens is the sober question of where AI actually takes work off your hands in concrete operations, and where it remains an expensive toy that sits in a drawer after three months.

On top of that: most Mittelstand companies don’t have their data in the state modern AI tools need. Copilot can only be as good as the SharePoint permissions it sits on top of — and in many companies these permissions have grown over years and no one wants to touch them. If Copilot is then “successfully” introduced, it suddenly sees personnel data, payroll or board minutes it shouldn’t see. That’s not an AI problem, it’s a data foundations problem — but it only becomes visible through the AI.

And finally: AI is not the same as automation. A lot of what is sold today as an “AI project” is at its core a workflow problem — a recurring procedure that was never cleanly mapped, and for which now a large language model is supposed to be harnessed because it sounds more impressive than “Power Automate”. We deliberately separate the two worlds: AI where language and ambiguity are in play, classical automation where the process is actually clear and just never consistently implemented.

What this is concretely about

Microsoft 365 Copilot — when it really makes sense

Before we book a Copilot license, we check two things with you: what your SharePoint permissions look like, and which data in your tenant is actually classified. If permissions have grown wild, Copilot sees things it shouldn’t — and the question “How high was management’s last bonus?” suddenly becomes answerable for every person in the company. First get the data basis in order, then Copilot. And even then not for everyone, but for the roles in which text work, research and summarization occur daily — sales, marketing, management, back office with high mail volume.

Knowledge search across your own data (RAG)

Employees ask a subject-matter question in Teams, and the answer comes from the company’s own knowledge: from SharePoint, from the DMS, from the wiki, with cited sources. Technically this builds on Azure AI Search and a classic RAG pattern (Retrieval-Augmented Generation): the AI doesn’t generate an answer out of thin air, but first finds the matching documents in your inventory and formulates the answer along those sources. How you know it’s needed: new employees take three weeks to learn where which document lives, while the old hands carry that knowledge in their heads.
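
To make the RAG idea concrete, here is a minimal sketch of the retrieve-then-answer pattern. The in-memory index and keyword scoring stand in for Azure AI Search; in production, the answer step would call an LLM prompted only with the retrieved passages. All document names and contents are illustrative.

```python
import re

# Toy document store standing in for SharePoint / DMS content.
DOCS = {
    "vacation-policy.docx": "Employees get 30 days of vacation per year. Requests go through HR.",
    "vpn-howto.pdf": "To connect to the VPN, install the client and sign in with your M365 account.",
    "expense-rules.docx": "Expenses above 500 EUR need approval by the department head.",
}

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the question."""
    words = _tokens(question)
    scored = sorted(
        ((len(words & _tokens(text)), name, text) for name, text in DOCS.items()),
        reverse=True,
    )
    return [(name, text) for score, name, text in scored[:top_k] if score > 0]

def answer(question: str) -> str:
    """Compose an answer that cites its sources instead of free-generating."""
    hits = retrieve(question)
    if not hits:
        return "No matching documents found."
    sources = ", ".join(name for name, _ in hits)
    context = " ".join(text for _, text in hits)
    # In a real system, an LLM call would go here, prompted with `context` only.
    return f"{context} (Sources: {sources})"

print(answer("How do I connect to the VPN?"))
```

The key property, even in this toy version: the answer is grounded in retrieved documents and names its sources, so a reader can verify it.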

Workflow automation (Power Automate / n8n)

The less glamorous but usually more rewarding part. Recurring procedures that today run via mail distribution lists, Excel sheets and shouting across the office become defined workflows: quote dispatch with automatic filing in the DMS, order confirmation with feedback to sales, onboarding of a new employee with license assignment, group membership and device preparation. We use what fits the situation: Power Automate if you are in the Microsoft world anyway, n8n for more open scenarios or when you want to stay independent.
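
The onboarding example can be sketched as a fixed sequence of named steps; this is exactly what a tool like Power Automate or n8n orchestrates. The class, step names and group names below are illustrative, not a real integration.

```python
from dataclasses import dataclass, field

@dataclass
class OnboardingFlow:
    """Illustrative onboarding workflow: ordered steps, no branching."""
    employee: str
    log: list[str] = field(default_factory=list)

    def assign_license(self) -> None:
        self.log.append(f"license M365 E3 assigned to {self.employee}")

    def add_to_groups(self) -> None:
        for group in ("All Staff", "Sales"):
            self.log.append(f"{self.employee} added to group {group}")

    def order_device(self) -> None:
        self.log.append(f"notebook ordered for {self.employee}")

    def run(self) -> list[str]:
        # A fixed, repeatable order: exactly the kind of process that
        # needs a workflow engine, not a language model.
        self.assign_license()
        self.add_to_groups()
        self.order_device()
        return self.log

for line in OnboardingFlow("j.mueller").run():
    print(line)
```

Note there is nothing "intelligent" here, and that is the point: when the process is this clear, classical automation wins on cost, reliability and auditability.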

Ticket triage & classification

Incoming service-desk tickets are pre-classified (category, urgency, likely resolution path), briefly summarized and routed to the right place. For recurring standard questions (password, VPN, printer) the system suggests a resolution path that the responsible person only needs to confirm. The human stays in the loop. How you know it’s needed: when 70 percent of your first line consists of the same five topics and nobody has time left for the genuinely interesting tickets.
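
A minimal sketch of the triage idea, assuming simple keyword rules (a real system would typically use an LLM classifier). Categories, urgencies and suggested fixes are illustrative; note that every result is flagged for human confirmation, keeping the person in the loop.

```python
# Illustrative rule table: keyword -> (category, urgency, suggested fix).
RULES = {
    "password": ("Account", "high", "Send self-service reset link"),
    "vpn": ("Network", "medium", "Point to VPN client how-to"),
    "printer": ("Hardware", "low", "Restart print spooler, re-add printer"),
}

def triage(ticket_text: str) -> dict:
    """Pre-classify a ticket; a human must confirm before anything runs."""
    text = ticket_text.lower()
    for keyword, (category, urgency, suggestion) in RULES.items():
        if keyword in text:
            return {
                "category": category,
                "urgency": urgency,
                "suggested_fix": suggestion,
                "needs_confirmation": True,
            }
    # Anything the rules don't cover goes to a human untouched.
    return {
        "category": "Unclassified",
        "urgency": "medium",
        "suggested_fix": None,
        "needs_confirmation": True,
    }

print(triage("I forgot my password again"))
```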

Governance — what AI may see, what not

The invisible but decisive part. Data classification (what is public, internal, confidential, strictly confidential), Sensitivity Labels in M365, prompt filters and an audit trail for AI usage. Plus clear ground rules for employees: what may go into ChatGPT, what may not, which internal tools are available. That’s the answer to the question your data protection officer will ask you anyway in the coming months.
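
The prompt-filter and audit-trail idea can be sketched in a few lines. The blocked patterns below are illustrative stand-ins; in M365 this role is played by Sensitivity Labels and DLP policies rather than hand-written regexes. The important property: every request is logged, whether it was allowed or not.

```python
import re
from datetime import datetime, timezone

# Illustrative patterns for content that must not leave the company.
BLOCKED_PATTERNS = [
    r"\bpayroll\b",
    r"\bstrictly confidential\b",
]

AUDIT_LOG: list[dict] = []

def check_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be sent; always write an audit entry."""
    allowed = not any(
        re.search(pattern, prompt, re.IGNORECASE) for pattern in BLOCKED_PATTERNS
    )
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "allowed": allowed,
    })
    return allowed

print(check_prompt("m.schmidt", "Summarize the Q3 sales notes"))       # True
print(check_prompt("m.schmidt", "List payroll figures for the board")) # False
```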

Where AI doesn’t help today — the honest answer

AI is not a panacea, and we often say in the initial conversation: “This goes better today with a 50-line Power Automate flow than with an LLM.” For example:

If your sales team writes 50 quotes per week and 90 percent of them differ only in quantities and prices, you don’t need AI. You need a clean template and a Power Automate flow. AI makes the difference where language really varies, where documents look different, where questions are ambiguous. There it is a lever; with fixed patterns it is an expensive detour.
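
The template point in code: when 90 percent of quotes differ only in a few fields, plain string templating covers them without any AI. Field names and the layout are illustrative.

```python
# A "clean template": every variable part is an explicit, named field.
QUOTE_TEMPLATE = (
    "Quote {number} for {customer}\n"
    "{qty} x {item} at {unit_price:.2f} EUR = {total:.2f} EUR"
)

def render_quote(number: str, customer: str, item: str,
                 qty: int, unit_price: float) -> str:
    """Fill the template; the only 'logic' is computing the total."""
    return QUOTE_TEMPLATE.format(
        number=number, customer=customer, item=item,
        qty=qty, unit_price=unit_price, total=qty * unit_price,
    )

print(render_quote("Q-2024-117", "Example GmbH", "Service hour", 10, 95.0))
```

Deterministic, auditable, and cheap: for fixed patterns like this, a template beats a language model on every axis that matters.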

What you should look out for — even if you don’t go with us

When this is now due

How we work

Phase 1 — Initial conversation & use-case inventory

A 30-minute initial conversation, then a structured look at the recurring procedures in your company: what happens daily, what happens weekly, where does frustration accumulate, where is time lost. Deliverable: a use-case list, sorted into “AI is worth it”, “classic automation is worth it” and “nothing is worth it yet, because the process has to be clarified first”.

Phase 2 — Pilot in one department

Together we pick a use case that is manageable, measurable and visible when it succeeds. A six-to-eight-week pilot in one department, with clear success criteria defined up front. Deliverable: a running use case, an honest evaluation (“what worked, what didn’t”), and a basis for deciding whether to roll out further or take a different path.

Phase 3 — Roll out or back to the start

If the pilot holds up, we roll out step by step: per department, per use case, with user training and inclusion in governance. If the pilot doesn’t hold up, we say so openly, and we either find a better use case or tell you frankly that AI currently isn’t the right lever in your company.

Phase 4 — Operation & ongoing adaptation

AI models change, licenses change, workflows change. Optionally, we accompany ongoing operations in a quarterly rhythm: what’s new at Microsoft, which new use cases have emerged, what isn’t running as planned. Deliverable: an AI and automation portfolio that grows with you instead of rusting away.

What you can expect from us — and what not

What you get:

What we deliberately don’t do:

Where we also say no:

How it starts

Book an initial conversation

Frequently asked questions

Do we really need Microsoft 365 Copilot? That depends on two things: on your roles (who works daily with text, research and summaries) and on your data classification. If SharePoint permissions are clean and there are roles in the company doing a lot of text work, Copilot can be a real lever. If the data is unsorted, you are buying a security risk on top. We check that with you beforehand.

What does an AI project cost? That depends on three drivers: how many use cases are in the pilot, how clean the data basis already is (or whether it has to be tidied up first), and how many employees are trained at the end. In the initial conversation we give you an honest range; flat-rate figures without a look into your tenant would not be serious.

How do we prevent the AI from sending company data to OpenAI? Through the choice of tool and through clear ground rules. Microsoft Copilot stays within your own tenant, as does Azure OpenAI Service. The free variant of ChatGPT does not: by default, prompts there may be used for training. We integrate the tools so that company data stays where it belongs, and we define with you what may go into which tool.

Can we also use AI without Microsoft? Yes. Azure OpenAI is one option, Anthropic’s Claude via AWS Bedrock another, and local models (Llama, Mistral) on your own hardware a third. We are not ideologically tied to Microsoft; we use what fits the situation. Microsoft is the pragmatic path for many mid-market companies because M365 is already in the house, but it isn’t mandatory.

Who is liable if the AI says something wrong? In case of doubt, the company that acts on the answer. That’s why our architectures keep the human in the loop: the AI proposes, the human decides. That isn’t a brake, it’s risk management. For knowledge searches with source citations, the AI is a research tool, not the final word.

What do we do when our employees use ChatGPT privately for company work? First, don’t react moralistically; they do it because an internal tool is missing or too cumbersome. Second, provide clear ground rules and an officially sanctioned internal tool, so the reflex no longer points to ChatGPT. Third, training on what may go into which tool. Bans without an alternative do not work.

Looking more for a clean M365 tenant as a foundation? Services overview