How does LandiKI differ from ChatGPT or Gemini?
Our solution is based on open-source LLMs operated entirely on-premises or in a private cloud. Unlike public services, it gives you full control over data flows, model customization and integrations. Communication with the model never leaves the company network, a decisive advantage for data protection, IP protection and compliance.
Which model do you use, and how flexible is it?
Our LLM instance runs on Ollama, a lean, containerized platform for deploying large language models locally. As the front end we use Open WebUI, an intuitive, modular user interface with multi-user support, role management and integrated RAG components. This combination gives us a production-ready environment that is highly flexible and easy to extend. We are not limited to a single model: different LLMs can be individually integrated, configured and tested to find the best fit for specific requirements.
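To illustrate how easily models can be compared, here is a minimal sketch that sends the same prompt to two locally served models through Ollama's REST API. It assumes Ollama's default endpoint on localhost:11434; the model names are only examples, and any model pulled into the instance works the same way.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def ask(model: str, prompt: str) -> str:
    """Send a single prompt to a locally served model and return its reply."""
    response = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

# Compare candidate models side by side; swap in whatever models
# have been pulled into the local Ollama instance.
for model in ("llama3.1", "mistral"):
    print(model, "->", ask(model, "Summarize our vacation policy in one sentence."))
```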
How is company data integrated?
We use a RAG architecture to enrich prompts with context. Embeddings are generated by transformer models such as BGE or Instructor-XL, served through an OpenAI-compatible API, and are automatically fed from internal sources (PDFs, Confluence, Markdown, emails, etc.). Access is via customized RAG plugins within Open WebUI or via your own middleware.
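The following is a minimal sketch of the retrieve-then-prompt flow, not our production pipeline: it assumes an OpenAI-compatible /v1/embeddings route on the local instance (Ollama exposes one in its compatibility layer), uses `bge-m3` as an example embedding model, and the two in-memory sample documents stand in for what the ingestion pipeline would normally deliver.

```python
import requests

EMBED_URL = "http://localhost:11434/v1/embeddings"  # OpenAI-compatible route (assumed local setup)
EMBED_MODEL = "bge-m3"                              # example embedding model

def embed(text: str) -> list[float]:
    """Embed a text via the OpenAI-compatible embeddings endpoint."""
    r = requests.post(EMBED_URL, json={"model": EMBED_MODEL, "input": text}, timeout=60)
    r.raise_for_status()
    return r.json()["data"][0]["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# In production these documents come from the ingestion pipeline
# (PDFs, Confluence, Markdown, emails, ...).
docs = [
    "Vacation requests are submitted through the HR portal.",
    "VPN access requires a hardware token.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(query_vec, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved passages are prepended to the prompt before it reaches the LLM.
question = "How do I request vacation?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```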
How quickly is LandiKI ready for use?
Thanks to Ollama and Open WebUI, the first use cases can go live within a few days. The setup is quick to stand up and scales with your needs, especially for teams with DevOps experience. On request, we provide a ready-made instance including configuration, data connection and rollout support.
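As a concrete example of how little is needed to verify a fresh deployment, here is a small readiness check against Ollama's /api/tags route, which lists the models available on the instance. The URL assumes the default local port and would be adjusted to your deployment.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # default Ollama port; adjust to your deployment

def smoke_test() -> None:
    """Verify the local instance is reachable and report the installed models."""
    r = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    r.raise_for_status()
    models = [m["name"] for m in r.json().get("models", [])]
    if not models:
        print("Server is up, but no models are pulled yet (run `ollama pull <model>`).")
    else:
        print("Ready. Available models:", ", ".join(models))

if __name__ == "__main__":
    smoke_test()
```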
How secure is the setup in production?
Very secure. Our AI runs entirely inside your network, with no connection to the cloud or to external servers. Access is protected via user accounts, so only authorized persons or teams can work with the AI. This is a major advantage for sensitive data and for industries with strict data protection requirements.