
What Is Generative AI?

Generative AI is a form of artificial intelligence trained on very large datasets to autonomously generate content such as text, images, music and, increasingly, video. Technically, it is based on Large Language Models (LLMs), which statistically model language and context and use probabilities to decide, step by step, which word or element comes next.

A simple example illustrates this principle: most German speakers complete the children's song line “Alle meine Entchen …” (“All my little ducklings …”) with “schwimmen auf dem See” (“swim on the lake”). This continuation has the highest probability, and this is exactly how generative AI works. On this basis, entirely new possibilities emerge for making predictions, creating content, and largely automating customer dialogues.
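The principle behind this example can be sketched in a few lines. The probability table below is invented purely for illustration; a real LLM computes such probabilities over its entire vocabulary at every step, but the selection logic (here: greedy decoding, always taking the most likely continuation) is the same idea.

```python
# Hypothetical probabilities a model might assign to continuations of the
# prompt "Alle meine Entchen ..." -- the numbers are made up for illustration.
continuations = {
    "schwimmen auf dem See": 0.92,   # "swim on the lake" (the expected line)
    "fliegen in die Stadt": 0.05,    # "fly into the city"
    "tanzen auf dem Dach": 0.03,     # "dance on the roof"
}

def most_likely(candidates: dict[str, float]) -> str:
    """Greedy decoding: return the candidate with the highest probability."""
    return max(candidates, key=candidates.get)

print("Alle meine Entchen ...", most_likely(continuations))
# -> Alle meine Entchen ... schwimmen auf dem See
```

Real systems often sample from this distribution instead of always taking the maximum, which is why the same prompt can yield different answers on repeated runs.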

When Machines Communicate Like Humans

The quality of modern AI systems is making it increasingly difficult to distinguish between human and machine communication. Today’s models would likely pass the Turing Test, first proposed in 1950—although its relevance and validity are now widely debated.

This development fuels enormous expectations in the business world. According to a study by IW Consult, the potential contribution of AI to Germany’s gross value added amounts to around €320 billion. Unsurprisingly, AI has become a top priority on the agendas of German CEOs.

The AI Gap: Why Expectations and Reality Diverge

Despite these high expectations, only about 17% of German companies currently use AI productively—and even then, many use cases fail to deliver sustainable business value. This discrepancy is often referred to as the “AI gap.”

One key reason lies in the nature of LLMs themselves: they are trained on publicly available data and do not contain sensitive or company-specific information. Occasionally using ChatGPT may be helpful, but it does not create real value in an enterprise context. In addition, LLMs only possess knowledge up to a defined training cut-off date and are unaware of current events or internal company data.

AI and Compliance – An Underestimated Success Factor

With the EU AI Act, a first regulatory framework for the use of AI has been established. This raises critical questions for companies:

How can personal data be protected? Are sources transparently traceable? Do employees receive only the access they are authorized for? And is entered data potentially reused as training data without consent?

Without clear answers to these questions, productive and responsible use of generative AI in enterprises is hardly feasible.

Truth, Determinism, and the Risk of Hallucinations

AI shares one thing with humans: it makes mistakes. Particularly problematic are so-called hallucinations—factually incorrect statements that are presented in a highly convincing way. While this may be acceptable in creative contexts, correct and traceable answers are essential in enterprise environments.

A widely reported case illustrates these risks clearly: a car dealer’s chatbot made incorrect offers and even recommended competitors’ vehicles. Such examples underline the need for human oversight and robust governance.

Technical and Economic Barriers

High-performing AI applications require specialized infrastructure. The costs are significant: training GPT-4 is estimated at around USD 100 million, while operating ChatGPT reportedly costs several million US dollars per day. For further context, see “ChatGPT and generative AI are booming, but the costs can be extraordinary”.

This leads to a central question for enterprises: how can generative AI be used with manageable and predictable costs?

Language Quality and Cultural Context

Most widely used LLMs are developed by U.S.-based companies and are trained predominantly on English-language content. German texts make up only a very small fraction of the training data. For German-speaking companies, this raises the question of how to ensure high-quality language processing in German.

Volatility of Technological Development

In 2023 alone, 29 new LLMs emerged, many of them open source. The speed of this evolution makes long-term technology decisions challenging and calls for flexible, future-proof architectures.

Generative AI with Liongate: A Pragmatic Approach

Liongate deliberately follows a middle path: instead of relying on insecure standard tools or pursuing extremely expensive in-house developments, we enrich powerful existing LLMs with company-specific data.

We primarily rely on open-source models operated in a dedicated private cloud environment. Sensitive data remains protected, compliance requirements are met, and genuine business value is created.
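One common pattern for enriching a general-purpose LLM with company-specific data is retrieval-augmented generation (RAG): relevant internal documents are retrieved first and then placed into the model's prompt as grounding context. The sketch below uses naive keyword overlap as the retrieval score and invented function names and documents; production systems typically use vector embeddings instead, but the flow is the same.

```python
# Minimal RAG sketch: retrieve relevant internal documents, then build a
# grounded prompt for the LLM. All names and data here are illustrative.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{ctx}\nQuestion: {query}"

# Hypothetical internal documents that a public LLM could never know about.
docs = [
    "Vacation requests must be approved by the team lead.",
    "The cafeteria opens at 11:30 on weekdays.",
    "Expense reports are due by the fifth of each month.",
]

query = "When are expense reports due?"
print(build_prompt(query, retrieve(query, docs)))
```

Because the model answers from retrieved context rather than its frozen training data, this pattern also mitigates the cut-off-date and hallucination problems discussed above, and the sensitive documents themselves never leave the operator's environment.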

The Liongate AI Stack: AI + Trust + Data

The Liongate AI Stack combines four core components: a secure private cloud, the most suitable Large Language Model with strong German language support, intelligent integration of enterprise data, and a “privacy by design” compliant setup.

This enables companies to adopt generative AI in an economical, secure, and scalable way—with minimal entry barriers and maximum freedom for innovation.

Conclusion: The Economically Sound Path to Generative AI

In practice, the most sustainable approach lies between generic AI tools such as ChatGPT—which are inexpensive but unsuitable for enterprise use—and fully self-developed LLMs, which are secure but largely unaffordable. The intelligent use of existing open-source models, enriched with enterprise knowledge, represents the golden middle ground—and this is precisely where Liongate positions itself.
