
Hands-On Generative AI with Real-World Applications


Hands-on Generative AI (GenAI) means actively engaging with AI capabilities to generate new content across various media such as text, images, audio, and video. Generative models, such as Large Language Models (LLMs), learn the underlying probability distribution of their training data, which enables them to generate new samples from that learned distribution.

A breakthrough came in 2017, when researchers at Google Brain published the paper Attention Is All You Need. This paper laid the foundation for modern LLMs by introducing the Transformer, a new neural network architecture based on the attention mechanism. Attention allows models to learn and understand context in long-form text far better than previous architectures could.

LLMs, such as OpenAI’s GPT models (used in ChatGPT), Google’s PaLM (used in Bard), Meta’s LLaMA, and Anthropic’s Claude, take natural language text as input and generate natural language text as output. They are very large generative neural networks trained on tokens drawn from an extensive body of publicly available text (e.g. books, articles, Wikipedia, software manual pages, GitHub repositories).

Before delving deeper into LLM applications, it’s essential to understand the concept of prompts and the art of prompt engineering.


What is a Prompt?

A prompt is a short text or instruction provided by the user to the LLM to obtain a relevant and meaningful response. For example: “Compose a catchy tweet announcing our upcoming product launch event” or “Translate our website’s homepage content from English to Spanish.”

What is Prompt Engineering?

Prompt engineering is the process of carefully designing and crafting prompts to interact with LLMs effectively. The goal of prompt engineering is to guide the LLMs to produce accurate and relevant outputs for specific tasks or applications. 

A prompt consists of the following components (a short code sketch combining them follows this list):

  • Instruction: Task to be performed – e.g., Generate a chatbot response to assist users in finding a restaurant recommendation.
  • Context: Additional information that can help in getting a better response. This includes, but is not limited to, providing definitions of the domain-specific terms used in the prompt so that the model can understand the terminology involved.
  • Input data: Input to find a response for – e.g. Can you suggest a good Italian restaurant nearby? 
  • Output Indicator: Desired format of the output – e.g. “The response will include the restaurant’s name, cuisine type, rating, and distance from the user’s location”. A sample response could be: “Sure! I recommend trying ‘Pasta Paradise.’ It’s an excellent Italian restaurant with a rating of 4.8 stars. It’s located about 0.5 miles from your current location.”
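
A minimal sketch of how these four components can be combined into a single prompt for the restaurant example above (the exact component strings are illustrative assumptions, not part of the original example):

# Illustrative only: the component strings below are assumptions for demonstration.
instruction = "Generate a chatbot response to assist users in finding a restaurant recommendation."
context = "Ratings are on a 5-star scale; distances are measured from the user's current location."
input_data = "Can you suggest a good Italian restaurant nearby?"
output_indicator = ("The response will include the restaurant's name, cuisine type, "
                    "rating, and distance from the user's location.")

# Combine the four components into one prompt string.
prompt = f"{instruction}\n\nContext: {context}\n\nUser input: {input_data}\n\nOutput format: {output_indicator}"
print(prompt)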

Important points to consider when writing good prompts:

  • Write clear instructions, specifying details such as output format, output length, and the role the model should play.
  • Provide reference text to help the model answer the query.
  • Split complex tasks into simpler subtasks.
  • Provide context for the task to be performed.
  • Instead of telling the model what not to do, it is better to specify what to do in the prompt.

Prompt Engineering Techniques

Prompt engineering is an involved process: prompts need to be carefully crafted and are specific to the problem statement. Prompts for giving restaurant recommendations will differ substantially from prompts for a QnA/FAQ application. Below are a few techniques frequently used to engineer specific, well-functioning prompts; a short sketch showing how such prompts are sent to an LLM follows the examples.

  • Few-shot prompting: Provide the model with a small number of examples of the target task, each consisting of an input and the desired output in the desired format, so the model can infer what to do.

Example: a prompt to get the sentiment of the last text:

Target task: Sentiment detection
Input: text/sentence
Output: Sentiment
Text: Lawrence bounces all over the stage, dancing, mopping his face and generally displaying the wacky talent that brought him fame in the first place.
Sentiment: positive
Text: Despite all evidence to the contrary, this clunker has somehow managed to pose as an actual feature movie, gets hyped on tv and purports to amuse small children and ostensible adults.
Sentiment: negative
Text: I'll bet the video game is a lot more fun than the film.
Sentiment: ???
  • Chain-of-Thought (CoT) Prompting: CoT prompting generates a sequence of short sentences that describe the reasoning step by step, eventually leading to the final answer. Common variants include:
    • Zero-shot CoT
    • Few-shot CoT
    • Tree of Thoughts

Example (few-shot CoT): a prompt to get the answer to question 3, where the first two questions are provided with worked-out answers:

Question 1: Jack is a soccer player. He needs to buy two pairs of socks and a pair of soccer shoes. Each pair of socks cost $9.50, and the shoes cost $92. Jack has $40. How much more money does Jack need?
Answer 1: The total cost of two pairs of socks is $9.50 x 2 = $<<9.5*2=19>>19.
The total cost of the socks and the shoes is $19 + $92 = $<<19+92=111>>111.
Jack needs $111 - $40 = $<<111-40=71>>71 more.
So, the answer is 71.
Question 2: Tom and Elizabeth have a competition to climb a hill. Elizabeth takes 30 minutes to climb the hill. Tom takes four times as long as Elizabeth does to climb the hill. How many hours does it take Tom to climb up the hill?
Answer 2: It takes Tom 30*4 = <<30*4=120>>120 minutes to climb the hill.
It takes Tom 120/60 = <<120/60=2>>2 hours to climb the hill.
So the answer is 2.

Question 3: Marty has 100 centimeters of ribbon that he must cut into 4 equal parts. Each of the cut parts must be divided into 5 equal parts. How long will each final cut be?
Answer 3: ???
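
As mentioned above, here is a rough sketch of how a few-shot or CoT prompt such as the examples above can be sent to an LLM. It uses the same openai chat API that appears in the applications below; the prompt text itself is a placeholder to be replaced with one of the example prompts.

import openai

openai.api_key = "<api-key>"

# Placeholder: replace with one of the example prompts above (e.g. the sentiment few-shot prompt).
few_shot_prompt = "..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep the output deterministic for classification/reasoning tasks
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)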

Real-world applications of Generative AI

Having explored the fundamental concepts of LLMs and the crucial aspects of prompts and prompt engineering, it’s time to dive into the exciting realm of real-world applications of LLMs.

Application #1 – Document query chatbot

Problem statement – An insurance company wants to develop an AI-powered chatbot for instant query resolution. The relevant information is mainly available in multiple complex documents.

Solution approach – Load a set of available documents onto the server as the chatbot’s knowledge base. These documents are then divided into smaller chunks for the purpose of generating embeddings, which are stored in a vector database. When a user asks a question, the chatbot uses these embeddings to find similar chunks in the database and retrieves the relevant information. The chatbot then constructs a prompt, including the retrieved chunks as context, and instructs the LLM to answer the user’s question.

Code

Import Python libraries 

import os
import openai
import faiss
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.document_loaders import PyPDFLoader

Load the PDF document that acts as the knowledge base for answering the user’s questions

def load_document(path):
    loader = PyPDFLoader(path)
    document = loader.load()
    return document

path = r"path/to/pdf/document"
document = load_document(path)

After loading the document, it needs to be split into smaller chunks so that embeddings can be created for every chunk.

def split_document(document, chunk_size=500, chunk_overlap=20):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    docs = text_splitter.split_documents(document)
    return docs
 
docs = split_document(document)
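
As a quick, optional sanity check (not part of the original walkthrough), the number of chunks and a preview of the first chunk can be printed:

print(f"Number of chunks: {len(docs)}")   # each chunk is a LangChain Document
print(docs[0].page_content[:200])         # preview the first 200 characters of the first chunk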

The chunks above are converted into embeddings and then stored in a vector database (FAISS in this example)

embeddings = OpenAIEmbeddings(model = 'text-embedding-ada-002', openai_api_key="<api-key>") # loading the openAI embedding model
db = FAISS.from_documents(docs, embeddings)
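
Optionally, the FAISS index can be persisted to disk and reloaded later so that embeddings do not have to be recomputed on every run; the folder name below is an assumption:

db.save_local("faiss_index")                        # persist the index to a local folder
# db = FAISS.load_local("faiss_index", embeddings)  # reload it in a later session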

Code snippet to match the user’s question against the chunks stored in the vector database and retrieve the relevant pieces of information

def get_similiar_docs(query, k=2, score=False): 
    if score:
        similar_docs = db.similarity_search_with_score(query, k=k)
    else:
        similar_docs = db.similarity_search(query, k=k)
    return similar_docs
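
The retrieved chunks can be inspected before they are passed to the LLM; the query below is illustrative:

similar_docs = get_similiar_docs("How many days of cancellation period is there?")
for doc in similar_docs:
    print(doc.page_content[:200])  # preview each matched chunk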

The code snippet below is responsible for answering the user’s question. Chunks that are similar to the user’s query are retrieved from the vector database and added as context in the prompt.

model_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=model_name, openai_api_key="<api-key>")
 
chain = load_qa_chain(llm, chain_type="stuff")
 
def get_answer(query):
    similar_docs = get_similiar_docs(query)
    answer = chain.run(input_documents=similar_docs, question=query)
    return answer

Now let’s see the chatbot in action.

query = "How many days of cancellation period is there?"
answer = get_answer(query)
print(answer)

Output: The cancellation period is 30 days from the date of receipt of the policy.

query = "How much air ambulance expenses are covered?"
answer = get_answer(query)
print(answer) 

Output: Air ambulance expenses are covered up to Rs.2,50,000 per hospitalization, not exceeding Rs.5,00,000 per policy period.

Application #2 – Sentiment and Intent Extraction

Problem statement – In today’s data-driven business landscape, understanding customer sentiment and intent from reviews is paramount. Let’s look at basic sentiment and intent extraction from customer reviews/feedback.

Solution approach – This solution processes reviews in a batch and returns the output for each review. For each review, a prompt is generated and then passed to the LLM for sentiment and intent extraction.

Code

import openai
import os
from tqdm import tqdm
openai.api_key = "<api-key>"

Creates a formatted prompt for analyzing customer reviews

def generate_prompt(review):
    prompt = f'''
    Analyze customer reviews to determine sentiment (positive, negative, neutral) 
    and intent (e.g., praise, complaint, inquiry) and the topic of discussion. 
    REVIEW: {review}
    Output Format Example:
    [['negative', 'complaint', 'product quality'], ...]
    '''
    return prompt
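
The generated prompt for a single review can be inspected as follows (the review text is an illustrative assumption):

sample_review = "The product stopped working after two days."  # illustrative review
print(generate_prompt(sample_review))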

The analyze_reviews function is designed to extract insights from a collection of customer reviews using OpenAI’s GPT-3.5 model.

def analyze_reviews(reviews, model = "gpt-3.5-turbo", max_tokens = 1024, temp = 0):
    responses = []
    for review in tqdm(reviews):
        prompt = generate_prompt(review)
        response = openai.ChatCompletion.create(
          model=model,
          max_tokens = max_tokens,
          n = 1,
          stop = None,
          temperature = temp,
          messages=[
            {"role": "system", "content": "You are an assistant that extracts insights from customer reviews."},
            {"role": "user", "content": prompt}
          ],
          timeout = 10
        )
        r = response.choices[0].message.content
        responses.append(r)
 
    return responses

Let’s process a batch of reviews (sample_reviews is a list of customer review strings):

analyze_reviews(sample_reviews)
Output: ["[['positive', 'praise', 'product'], ['positive', 'praise', 'user-friendliness'], ['positive', 'praise', 'product quality']]",
 "[['negative', 'complaint', 'product quality']]"]

Conclusion

We’ve explored the art of prompt engineering, a crucial skill in harnessing the power of LLMs for specific tasks. From document query chatbots that resolve queries over complex documents to sentiment and intent extraction for data-driven insights, LLMs are changing how we interact with text data.

Beyond these applications, LLMs find utility across diverse areas, including content generation, language translation, text-to-code, and even scientific research. As these models continue to evolve and improve, careful prompt engineering and innovative applications will allow LLMs to reshape the way we communicate and solve problems in countless domains.


Agam Dogra
