Using Artificial Intelligence with Enterprise Data

September 8, 2023
AI and Enterprise Data

 


Get the latest updates on artificial intelligence via my weekly newsletter The Artificially Intelligent Enterprise.


This article was originally published on The Artificially Intelligent Enterprise.

As the capabilities of large language models (LLMs) like ChatGPT continue to evolve, demand is growing for AI personalization that goes beyond general knowledge. Enterprises need LLMs that understand and handle proprietary information specific to their workflows. Beyond what large general-purpose models like Google Bard and GPT-4 offer, I think the ability to fine-tune foundation models with specific enterprise data is probably the biggest need today.

I am very interested in using pre-trained models to create a fleet of task workers that handle common chores for me: research, calendaring, data entry, demand generation, and more. That takes task-oriented AI like ChatGPT and makes it goal-oriented (you can see what I had to say about Autonomous AI Agents, which are goal-oriented AI agents, back in June).

So what does this involve? It means taking your data and using a model to analyze it and make inferences from it. I hope I don't sound like a broken record, but this is the next big leap for many of us who already use AI in our work lives.

Pre-Trained Models 

Pre-trained models are becoming very good and are widely available. These models have been trained on vast amounts of data, often sourced from large-scale datasets, before being made available for public or commercial use. The primary advantage of using a pre-trained model is that it allows developers and researchers to leverage the knowledge captured during its initial training phase, thereby bypassing the need to start from scratch. This saves significant computational resources and time and often results in better performance, especially when the available data for a new task is limited.

Some of the most popular models that can be fine-tuned:

  • GPT-3.5: Developed by OpenAI, GPT-3.5 is a state-of-the-art language model known for its refined transformer architecture. It can understand and generate human-like text, making it versatile for various applications, including website creation, SEO optimization, and content generation. In August, OpenAI announced the ability to fine-tune its GPT-3.5 Turbo model.
  • Llama 2: The next generation of Meta AI’s open-source large language models. Llama 2 pre-trained models are trained on 2 trillion tokens. The family includes variants like Llama Chat and Code Llama, which supports multiple programming languages. Llama 2 outperforms other open-source language models on various benchmarks.
  • Falcon: Developed by the Technology Innovation Institute, Falcon is an open-source model known for its high-quality dataset and advanced architecture. It has recently surpassed Llama on the Hugging Face Open LLM Leaderboard.

Once you have chosen a pre-trained model, the next step is to fine-tune it with your data. 

Fine-Tuning and Embeddings

Fine-tuning is a technique that teaches LLMs enterprise-specific concepts without the need to include them in every prompt. Adjusting the parameters of a foundation model incorporates specific enterprise knowledge while retaining general knowledge. The model can then generate inferences that benefit from the enterprise knowledge gained during fine-tuning. However, the quality and volume of the training data used for fine-tuning greatly affect the performance of the model.

Another concept crucial to leveraging LLMs for enterprise data is embeddings: numerical vectors that represent text, such that similar text produces similar vectors. Applying embeddings to website text, documents, or an entire corpus makes it possible to identify similar chunks of text. Including the most similar chunks in the model’s context enables it to answer user prompts effectively. These embeddings are stored in vector databases like Pinecone or Chroma.
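The retrieval flow can be sketched in a few lines of Python. The vectors below are hand-made stand-ins so the example is self-contained; in practice they would come from an embedding model, and the lookup would run against a vector database like Pinecone or Chroma rather than a dictionary.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" of text chunks (real ones have ~1,500 dims).
chunks = {
    "Our refund policy allows returns within 30 days.": [0.9, 0.1, 0.2],
    "The cafeteria serves lunch from noon to 2 p.m.":   [0.1, 0.8, 0.3],
}
query_vector = [0.85, 0.15, 0.25]  # embedding of "How do refunds work?"

# Pick the stored chunk most similar to the query and include it in the prompt.
best_chunk = max(chunks, key=lambda c: cosine_similarity(chunks[c], query_vector))
prompt = f"Answer using this context:\n{best_chunk}\n\nQuestion: How do refunds work?"
```

The model never needs to have been trained on the refund policy; it only needs the most relevant chunk placed in its context window at query time.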

Reinforcement Learning from Human Feedback (RLHF) combines reinforcement learning with human preference feedback. This approach to AI personalization leverages human input to create a more efficient and personalized experience for users. By incorporating real-time, actionable feedback, RLHF allows AI systems to rapidly iterate and adjust to changing customer needs and preferences.

Model Lock-In and Data Leakage

One concern when using LLMs for enterprise data is model lock-in. Some foundation model providers may not offer the option to fine-tune their models, leading to limited flexibility and dependence on a specific model. Enterprises should carefully evaluate the terms and conditions of foundation models to ensure their data is not used for future model training without their consent.

Data leakage is another concern, as some SaaS providers may use user inputs for future model training. Large language models can encode specific user queries or responses, potentially exposing proprietary code or sensitive information to competitors. Enterprises should explore running LLMs within their own virtual private clouds or consider prompt engineering as an alternative approach to mitigate these risks.


OpaquePrompts, a Privacy Layer for LLMs

OpaquePrompts serves as a privacy layer around LLMs, enabling you to hide personal and sensitive data from large language model (LLM) providers. It pre-processes LLM inputs to hide sensitive data in your prompts from the provider, then post-processes LLM responses to replace the sanitized tokens with the original sensitive information. It’s open source, or you can sign up for early access to their hosted service.
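The pre-process/post-process round trip it describes can be illustrated with a toy sanitizer. To be clear, this is not the OpaquePrompts API, just a minimal sketch of the idea: sensitive values are swapped for placeholder tokens before the prompt leaves your network, and swapped back in the model’s response.

```python
import re

def sanitize(prompt):
    """Replace email addresses with placeholder tokens before the prompt is sent."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    clean = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", repl, prompt)
    return clean, mapping

def desanitize(response, mapping):
    """Restore the original values in the LLM's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

clean, mapping = sanitize("Email alice@example.com about the renewal.")
# clean == "Email <PII_0> about the renewal." -- only this leaves your network
restored = desanitize("Drafted a note to <PII_0>.", mapping)
# restored == "Drafted a note to alice@example.com."
```

A production privacy layer would detect many more entity types (names, phone numbers, account IDs) with NLP models rather than a single regex, but the token round trip works the same way.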

It was just announced this week, and I like the company behind it. I get nothing from this mention other than helping open source developers with adoption of the project.


Prompt Engineering in Lieu of Fine-Tuning or Retraining

Instead of fine-tuning or retraining the entire model, which can be resource-intensive, prompt engineering offers a more efficient way to shape the model’s outputs. By iteratively refining prompts, one can achieve desired results without extensive retraining.

The primary purpose of prompt engineering is to guide the model’s behavior. One can steer the model to produce desired outputs by carefully crafting prompts. This is particularly important for models that have been trained on vast amounts of data and can generate a wide range of responses.


CreativeLive

I have some exciting news: I have been working with CreativeLive on two courses for business users who want to learn more about improving their productivity with artificial intelligence. If you’d like to take the courses, you can sign up at the URLs below.


A well-engineered prompt can make the model’s output more reliable. For instance, instead of asking the model, “Tell me about X,” a more specific prompt like “Provide a concise summary of X” can yield more focused and relevant results.

Different applications may require different types of responses. For instance, a chatbot might need short, direct answers, while a content generation tool might require longer, more detailed responses. Prompt engineering allows for the customization of outputs based on the application’s needs.
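Both points can be captured in a small prompt-template helper. A hedged sketch follows; the template wording is purely illustrative, not a recommendation from any provider’s documentation.

```python
def build_prompt(topic, style="concise_summary"):
    """Wrap a topic in a task-specific instruction instead of a vague 'Tell me about X'."""
    templates = {
        # A summarizer wants focused, bounded output.
        "concise_summary": f"Provide a concise summary of {topic} in 3 sentences.",
        # A chatbot wants short, direct answers.
        "chatbot_answer":  f"Answer in one short, direct sentence: what is {topic}?",
        # A content tool wants longer, structured output.
        "long_form":       f"Write a detailed, well-structured article about {topic}.",
    }
    return templates[style]

print(build_prompt("vector databases"))
# Provide a concise summary of vector databases in 3 sentences.
```

The same topic yields very different model behavior depending on which template wraps it, which is exactly the customization lever prompt engineering provides without touching model weights.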

Prompt engineering allows for iterative development. Developers can test various prompts, analyze the outputs, and refine the prompts based on feedback. This iterative process can lead to more accurate and reliable model outputs over time.

Training large models is expensive in terms of computational resources. Fine-tuning and RLHF can also be costly, while prompt engineering provides a cost-effective way to adapt the model’s behavior without additional training.

Personalization of AI is the Future

The application of LLMs to enterprise data is the next level of AI personalization that users will increasingly demand. Concepts like fine-tuning, embeddings, and prompt engineering are key to leveraging LLMs effectively for enterprise use cases. Nonetheless, concerns regarding model lock-in and data leakage should be carefully addressed to ensure system safety and protect proprietary information. By staying updated with future model updates and advancements, enterprises can harness the full potential of LLMs for personalized AI experiences.

Tip of the Week: Fine-Tuning Your Personal LLM

I don’t expect that most individuals will train their own LLMs, but some of us may. That’s why I wanted to provide an example of how to do this for technical users and then provide some no-code examples for the less technically inclined.

How to Fine-Tune ChatGPT (gpt-3.5-turbo) Using the OpenAI API in Python

  1. Prepare Your Data:
    • Store data in a plain text file with each line as a JSON (*.jsonl file).
    • Format should include system, user, and assistant messages. For instance:

      { "messages": [
          {"role": "system", "content": "You are an assistant that occasionally misspells words"},
          {"role": "user", "content": "Tell me a story."},
          {"role": "assistant", "content": "One day a student went to schoool."} ] }
  2. Upload Your Files to OpenAI:
    • Install the OpenAI library and set your API key.
    • Upload your data file to OpenAI with the purpose set to ‘fine-tune’.
    • Retrieve the file ID to check for any mistakes in your JSONL file.
  3. Create a Fine-Tuning Job:
    • Use the FineTuningJob.create method, specifying your file ID and the model ‘gpt-3.5-turbo’.
    • This process can take time, depending on your data size.
    • Check the status of the job using the job ID. The finished_at field will be null if the job is not yet complete.
  4. Use Your Fine-Tuned Model:
    • Test your model by comparing it to the non-fine-tuned GPT-3.5 Turbo.
    • Use the model ID retrieved from the previous step to test your fine-tuned model.
    • You can also test your model in the OpenAI Playground.
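The steps above can be sketched in Python. The file-writing part of this sketch runs locally; the OpenAI calls appear as comments because they need an API key, and they follow the pre-1.0 openai library interface (FineTuningJob.create) that the steps refer to.

```python
import json

# Step 1: prepare the training data as a .jsonl file, one example per line.
examples = [
    {"messages": [
        {"role": "system", "content": "You are an assistant that occasionally misspells words"},
        {"role": "user", "content": "Tell me a story."},
        {"role": "assistant", "content": "One day a student went to schoool."},
    ]},
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Steps 2-4 use the OpenAI API (requires OPENAI_API_KEY to be set):
#   import openai
#   upload = openai.File.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = openai.FineTuningJob.create(training_file=upload.id, model="gpt-3.5-turbo")
#   status = openai.FineTuningJob.retrieve(job.id)  # finished_at is None until done
#   # Once complete, pass the fine-tuned model ID to ChatCompletion calls.
```

In practice you would include many examples rather than one; OpenAI’s guidance is that more high-quality examples generally produce a better fine-tune.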

The process is detailed and requires a good understanding of Python and the OpenAI API. Another good option is using a data labeling platform, such as the open source Label Studio, to fine-tune your AI.

Several no-code platforms are available for those who aren’t technically inclined or simply want a quicker solution. Here are two popular options I’ve been playing around with: Chatbase.co and Dante-AI.

Chatbase.co:

  • Getting Started: Sign up on Chatbase.co and create a new bot.
  • Design: Use the platform’s intuitive interface to design your chatbot’s flow. You can create decision trees, set up FAQs, and more.
  • Training: Chatbase allows you to train your bot using sample conversations. Simply input various scenarios, and the platform will learn from them.
  • Deployment: Once satisfied, deploy your bot on your desired platform, be it a website, app, or social media.

Dante-AI:

  • Getting Started: Register on Dante-AI and choose a template that fits your needs.
  • Customization: Dante-AI offers a drag-and-drop interface, making it easy to customize your bot’s flow and responses.
  • Training: The platform uses AI to understand the context of conversations. Feed it with sample dialogues, and it will learn to respond appropriately.
  • Integration: Dante-AI supports integration with various platforms, allowing you to deploy your bot wherever you need it.

Fine-Tuned Chatbots for Improved Productivity

Fine-tuning a personal chatbot can be as technical or as straightforward as you want it to be. Whether diving deep into neural networks or using a no-code platform, the key is understanding your bot’s audience and training it accordingly. With the right approach, you can create a chatbot that serves its purpose and offers a personalized experience for its users.

What I Read this Week

What I Listened to this Week

https://open.spotify.com/embed/episode/3PubMsr90WZ0zZBrWdqRtZ

AI Tools I am Evaluating

  • Photo AI – Create beautiful AI photos without using a camera.
  • AlpacaML – AI Tools Built For Artists.
  • Ideogram – Ideogram enables you to turn your creative ideas into delightful images in a matter of seconds. It’s free and has no limits, and it can render text!

Midjourney Prompt for Header Image

For every issue of the Artificially Intelligent Enterprise, I include the Midjourney prompt I used to create that edition’s header image.

Unveiling AI Personalization Potential – A thought-provoking digital artwork that unveils the future of AI personalization by leveraging enterprise data. The artwork portrays an AI system analyzing data streams and generating tailored content for users. The backdrop features a harmonious blend of AI elements and corporate settings, highlighting the synergy between technology and business operations. This artwork employs a harmonious color palette and subtle gradients, creating an atmosphere of innovative collaboration. Post-processing enhances the visual effects and dynamic elements, resulting in an artwork that captivates viewers and sparks discussions about the future of personalized AI experiences. Crafted by the imaginative digital artist, Lucas Roberts, this artwork has been praised for its visionary portrayal of AI personalization in enterprise contexts. --s 1000 --ar 16:9