plugnpl.ai – the concept behind the AI

The basic idea is to ensure maximum flexibility when integrating any data sources, AI models, and features, and to make them usable right away: plug in and get started – plugnpl.ai.

The plugnpl.ai concept and the requirements of state-of-the-art software development form the foundation of the AI platform developed by just experts.

Requirements

The plugnpl.ai concept is based on three core principles: flexibility, speed, and agility.

To fulfill these principles, we have incorporated the following requirements of state-of-the-art software development in the development of plugnpl.ai:

Flexibility

As soon as new services and features become available, they can be integrated directly into your infrastructure components. This avoids technical complications and enables rapid further development.

Flexibility is what sets our platform apart. Individual features or specific functions can be integrated with minimal effort—for example, you can connect your ticketing system to the platform or use pretrained LLMs to detect potential quality issues in production. As long as you can imagine it, we can make it happen—there are no limits!

In the plugnpl.ai interface, we integrate your logo and your brand colors according to your specifications, ensuring the solution fits seamlessly into your corporate identity and systems. Features such as the integration of a company profile and industry-specific prompt properties can also be seamlessly incorporated.

Whether you have 10 users or 10,000 users per day, the platform’s performance remains consistently high.

Speed

AI enables people to work more efficiently. As a foundation for this, all applications are immediately available and respond without noticeable delay.

Stateless components are elements that do not store or modify internal data. They operate solely on the information passed to them and always deliver the same result for the same inputs. As a result, they are easier to understand and test, and they scale better, since any instance can handle any request.
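The idea of a stateless component can be sketched in a few lines of Python. The function name and payload shape are illustrative, not part of the platform:

```python
def translate_request(payload: dict) -> dict:
    """A stateless handler: no internal state is read or written.

    The result depends only on the input, so two identical calls
    always return identical results -- which makes the component
    easy to test and safe to scale horizontally.
    """
    text = payload["text"]
    target = payload.get("target_language", "en")
    # Illustrative transformation; a real service would call a model here.
    return {"text": text, "target_language": target, "status": "processed"}

# Identical input -> identical output, no matter how often it is called.
a = translate_request({"text": "Hallo Welt", "target_language": "en"})
b = translate_request({"text": "Hallo Welt", "target_language": "en"})
assert a == b
```

Because no call leaves anything behind, any copy of the service can answer any request, which is exactly what makes horizontal scaling straightforward.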

Developers should be able to focus on development and not waste time on manual integration and deployment tasks, complex configurations, or network issues. Automation, standardization, and high integrability reduce these tasks to an absolute minimum.

Waiting a long time for your response to be fully generated by the LLM? Not on the plugnpl.ai platform – responses arrive without delay and stay synchronized across devices and sessions. Want to send a request from your phone and your laptop at the same time? No problem – your requests are processed simultaneously!
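Handling simultaneous requests from several devices can be illustrated with Python's asyncio. The handler below is a stand-in for the real platform API, and the sleep merely simulates model latency:

```python
import asyncio

async def handle_request(device: str, prompt: str) -> str:
    # Stand-in for a real LLM call; each request is served independently.
    await asyncio.sleep(0.01)  # simulated model latency
    return f"{device}: answer to {prompt!r}"

async def main() -> list:
    # Requests from phone and laptop run concurrently,
    # not queued one behind the other.
    return await asyncio.gather(
        handle_request("phone", "summarize the report"),
        handle_request("laptop", "summarize the report"),
    )

results = asyncio.run(main())
print(results)
```

`asyncio.gather` returns the results in the order the tasks were passed in, even though both requests were in flight at the same time.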

Agility

The core application is regularly maintained and centrally updated. Essential updates are developed centrally and can be rolled out to different environments and organizations.

Each company can develop its own AI-based components, which can be easily integrated into the platform. This allows plugnpl.ai to be individually customized to meet specific needs. These custom developments can be shared and further enhanced within the plugnpl.ai community by other developers.

Security is our top priority. The software has been designed with security and integration in mind from the very beginning, at every level. Secure code development and continuous, integrated authentication—both interactive and non-interactive—are fundamental aspects of our approach.

Do you know what the next big AI trends will be? No? Neither do we! But we’re ready to integrate them into our platform—whatever they may be.

The operating model of the Supporthub is tailored to your organization and can be adapted to any need and scale—from your own code running on your infrastructure to a streamlined SaaS model.

Automate anything that needs to be done more than once. The platform takes care of repetitive tasks or process steps.

Design principles

plugnpl.ai was developed according to several fundamental design principles. The focus is on efficiency, flexibility, stability, and security. The following principles are particularly important:

Multi Layer Architecture

In a multi-layered software architecture, the system consists of various layers, each with dedicated responsibilities. Each of these layers can be individually customized by adding, for example, customer-specific modules without affecting the functionality of the entire system.

When the technological environment changes, only the affected layer needs to be updated, while the other layers remain untouched. This means the system stays flexible and adaptable, even as requirements or technologies evolve. Individual layers and functions can be integrated as needed: Want a different frontend? Simply connect directly to the integration layer. Have your own LLM? It can be integrated into the LLM layer.
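One way to picture a swappable layer is a narrow interface that different backends implement. The names below are illustrative, not the platform's actual layer contracts:

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Contract of the LLM layer: any backend with this shape plugs in."""
    def complete(self, prompt: str) -> str: ...

class EchoBackend:
    """Toy stand-in for a self-hosted or third-party model."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def answer(backend: LLMBackend, prompt: str) -> str:
    # The layers above depend only on the interface, so swapping the
    # backend never forces changes elsewhere in the system.
    return backend.complete(prompt)

print(answer(EchoBackend(), "hello"))
```

Replacing `EchoBackend` with a connector for any other model leaves `answer` and everything built on top of it untouched, which is the point of keeping the layers decoupled.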

Operating Model

We offer you the option to use our software either as SaaS (Software-as-a-Service) or to host the platform in your own Azure subscription. With the SaaS option, we take care of all infrastructure and maintenance, so you can access the software without having to host it yourself. If you choose to host the platform in your own Azure subscription, you have full control over the installation and management of the software in your own cloud environment. Regardless of which option you choose, all users can receive regular updates to ensure that the software is always up to date and delivers optimal performance.

All modular units

All components have been designed as stateless, standalone modules to enable customization. Stateless means that the services do not store data and operate independently. The modules are small, separate units that can be easily adapted or separated. Everything is parameterized, allowing for flexible configurations to respond to different requirements.

Entra ID integrated

Lightning-fast and secure login via Entra ID (formerly Azure Active Directory) enables integrated Single Sign-On (SSO), including multi-factor authentication if configured.  

Graph API integrated

plugnpl.ai recognizes your Entra user—profile picture, display name, and group memberships are synchronized. A native display of Teams, including channels and much more, is easily possible in the user context through integration with the Microsoft Graph API.

API contracts

To design robust and future-proof API contracts, clear and stable interfaces are defined that can be easily extended. This enables seamless integration of new features and enhancements in both the backend and frontend without affecting existing implementations. In addition, detailed documentation and versioning systems are in place to ensure smooth ongoing development and the expansion of API usage in various areas.
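A minimal sketch of a backward-compatible contract extension, using hypothetical response types rather than the platform's real API models:

```python
from dataclasses import dataclass, field

@dataclass
class ChatResponseV1:
    """Original contract: the fields existing clients rely on."""
    answer: str

@dataclass
class ChatResponseV2(ChatResponseV1):
    """Extended contract: new fields get defaults, so existing callers
    that never set them keep working unchanged."""
    sources: list = field(default_factory=list)

def render_for_v1(resp: ChatResponseV2) -> dict:
    # A versioned endpoint can project the newer model back onto the
    # older contract instead of breaking existing integrations.
    return {"answer": resp.answer}

resp = ChatResponseV2(answer="42", sources=["kb://faq"])
print(render_for_v1(resp))
```

Extending only with optional fields, and projecting new models onto old contracts per API version, is what lets backend and frontend evolve without disturbing existing implementations.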

Floating platform

If part of the platform is solved more easily and/or better by one of the hyperscalers, that’s no problem: we use and integrate that innovation instead of trying to outperform the leading innovators. Our focus is on providing the best solutions for our customers, regardless of who developed them. With this approach, we can always leverage the most advanced technologies and continuously improve our offering.

All Infrastructure as Code (IaC)

All infrastructure components are defined as "Infrastructure as Code" (IaC) from the very beginning to enable effortless deployment and scaling. This way, every change to the infrastructure can be implemented in an automated, consistent, and transparent manner.

All Zero Trust

We implement zero-trust design principles from the very first line of code. Zero Trust assumes that no one is trusted—whether inside or outside the network. Every request is checked and authenticated before access is granted.
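The zero-trust rule "verify every request, trust no caller" can be sketched with a signed token that is checked on each call. The HMAC secret here is purely illustrative; in practice verification is delegated to an identity provider such as Entra ID:

```python
import hashlib
import hmac

SECRET = b"demo-secret"  # illustrative only; real systems use an IdP

def sign(user: str) -> str:
    """Issue a token bound to the user (stand-in for a real IdP token)."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def handle(user: str, token: str) -> str:
    # Zero trust: the token is verified on *every* request --
    # no caller is trusted because of where the call comes from.
    if not hmac.compare_digest(token, sign(user)):
        raise PermissionError("request rejected")
    return f"ok: {user}"

print(handle("alice", sign("alice")))  # authenticated request passes
```

Note the constant-time comparison via `hmac.compare_digest`, which avoids leaking information about the token through timing differences.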

Pay per Usage

We develop all system components so that they only incur costs when used. This reduces fixed costs and enables flexible, efficient spending.

Multi Layer Architecture

The software architecture of plugnpl.ai is a multi-layer architecture. This means that the plugnpl.ai software consists of various layers to handle specific tasks and to create an efficient, flexible, and easily maintainable infrastructure.

Each layer serves a specific purpose. Maintenance and further development are simplified, and testability is improved, resulting in more reliable software. This separation of layers enables a high degree of scalability and flexibility—individual layers can be replaced or scaled without the need to rework the entire system.

The multi-layer architecture brings structure and order to the software design. It promotes clear interfaces between components, leading to more efficient development, easier maintenance, and higher software quality.

This layer refers to the people interacting with the system. It includes:

  • Comprehensive Training Concept: A comprehensive training program that ensures all users are well-prepared to use the systems effectively.
  • Seamless SSO with Entra ID / Graph Integration: A seamless Single Sign-On (SSO) solution that simplifies access to the system and ensures users only need to log in once to access all resources.

This layer contains the applications and services that end users interact with directly. These include:

  • plugnpl.ai Web App: A web application offering various support functions such as web search, translation, or rephrasing.
  • Customer Specific ERP Integration: Integration with specific ERP systems to optimize business processes.
  • Power Automate Integration: Automation of workflows and processes through integration with Power Automate.
  • Teams Client Integration: Integration with Microsoft Teams to enhance collaboration and communication, as well as to utilize data and information from Teams.
  • Edge Integration: Enables applications to communicate directly with edge network devices, such as sensors or machines, and process data in real time. This leads to faster responses and more efficient operations, as data does not need to be sent through the entire network first.

This layer ensures that different systems can communicate seamlessly with each other. It includes:

  • plugnpl.ai API: An API that provides access to the functions of the Supporthub.
  • “Feature XY” API Scope: Specific API scopes that enable customization and extension of functionalities.

This layer is responsible for processing and analyzing data. It includes:

  • Optimized ChatGPT Connect: Optimized connections to ChatGPT to provide powerful AI capabilities.
  • Generic LLM Connect: Connections to generic large language models (LLMs) for various use cases.
  • Analytics Tool for SQL: Tools for analyzing SQL data.
  • Websearch: Web search functionalities for connecting to the latest information.
  • Sales Agent: AI-driven sales agents that support the sales process.

This layer includes any large language models (LLMs) that can be used for various tasks:

  • ChatGPT on Azure: Deployment of ChatGPT on the Azure platform.
  • ChatGPT on OpenAI: Use of ChatGPT via the OpenAI platform.
  • Self-Hosted, Self-Trained LLM: Your own hosted and trained LLMs.
  • Any 3rd-Party LLM API: Use of third-party LLMs such as groq (Llama 3) or Google Gemini.

This layer encompasses the company’s specific data structures.

This multi-layer architecture makes it possible to efficiently manage different tasks and respond flexibly to changing requirements. Each layer is specialized for certain functions and contributes to making the overall system robust, scalable, and easy to maintain.

Operating Model

Our operating model offers flexible solutions that can be easily adapted to your individual needs and scalability requirements. It includes multiple migration paths and infrastructure options specifically tailored to different scenarios.

as a SaaS (Software as a Service)

  • Own database, our Entra ID (B2C/B2B)
    Included services:

    • Entra Identity
    • Static Apps
    • Chat Service
    • Dedicated SQL database
  • Own environment, your Entra ID
    Included services:

    • Entra Identity
    • Static Apps
    • Chat Service
    • Dedicated SQL database

or your infrastructure

  • Own database, our Entra ID (B2C/B2B)
    Included services:

    • Entra Identity
    • Static Apps
    • Chat Service
    • Dedicated SQL database
  • Own repositories, dedicated customer branch (dev/stage/prod)
    Included services:

    • Entra Identity
    • Static Apps
    • Chat Service
    • Dedicated SQL database
Our model facilitates migration and offers seamless integration with your existing infrastructure.

Whether you prefer a shared environment or a dedicated environment, we have the right solution for you.

Continuous integration and deployment: CI/CD

We have established a comprehensive continuous integration and deployment process that enables lightning-fast deployments with zero downtime and supports the development of custom solutions. Our process is divided into two main areas:

  • Development environment
  • Customer environment

Development environment

  • Your developers write the code and check it into Azure Repos.
  • The code goes through a series of tests:

    • Linting
    • Unit tests
    • End-to-end tests
    • PR review
  • The reviewed code is further processed:

    • Deployment to the staging environment
    • Acceptance tests
    • Deployment to the production environment
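The gate sequence above can be sketched as a fail-fast pipeline. The stage names mirror the list; the checks themselves are placeholders for the real tools:

```python
from typing import Callable

def run_pipeline(stages: list) -> list:
    """Run stages in order; stop at the first failure (fail fast)."""
    passed = []
    for name, check in stages:
        if not check():
            raise RuntimeError(f"stage failed: {name}")
        passed.append(name)
    return passed

# Placeholder checks standing in for the real linters and test suites.
stages = [
    ("lint", lambda: True),
    ("unit tests", lambda: True),
    ("end-to-end tests", lambda: True),
    ("PR review", lambda: True),
]
print(run_pipeline(stages))
```

Stopping at the first failing gate keeps broken code out of staging and production and gives developers feedback at the earliest possible stage.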

Customer environment

  • You check your code into the customer repository.
  • The code also goes through tests (similar to those in the development environment).
  • The code is deployed to different environments:

    • Dev: Chat service, static apps, infrastructure via IaC/Terraform
    • Stage: Chat service, static apps, infrastructure via IaC/Terraform
    • Prod: Chat service, static apps, infrastructure via IaC/Terraform
    • Prod Customer: Chat service, static apps, infrastructure via IaC/Terraform

DEVELOPMENT PARTNERSHIP

Let’s develop together

Become an active part of the plugnpl.ai community and benefit from the collaborative further development of the Supporthub. Connect with other companies, share development costs, and find inspiration and new ideas.