Large Language Models (LLM): Complete Guide in 2024

Everything you need to know about LLMs


Introduction

Ever scratched your head, amazed at how Google or Alexa seemed to ‘get’ you? Or have you found yourself reading a computer-generated essay that sounds eerily human? You’re not alone. It’s time to pull back the curtain and reveal the secret: Large Language Models, or LLMs.

What are these, you ask? Think of LLMs as hidden wizards. They power our digital chats, understand our muddled phrases, and even write like us. They’re transforming our lives, making science fiction a reality.

This guide covers all things LLM. We’ll explore what they can do, what they can’t do, and where they’re used. And we’ll examine how they impact us all, in plain and simple language.

So, let’s start our exciting journey into LLMs.

Who is this Guide for?

This extensive guide is for:

  • All you entrepreneurs and solopreneurs who are crunching massive amounts of data regularly
  • AI and machine learning professionals who are getting started with process optimization techniques
  • Project managers who intend to implement a quicker time-to-market for their AI modules or AI-driven products
  • And tech enthusiasts who like to get into the details of the layers involved in AI processes.

What are Large Language Models?

Large Language Models (LLMs) are advanced artificial intelligence (AI) systems designed to process, understand, and generate human-like text. They’re based on deep learning techniques and trained on massive datasets, usually containing billions of words from diverse sources like websites, books, and articles. This extensive training enables LLMs to grasp the nuances of language, grammar, context, and even some aspects of general knowledge.

Some popular LLMs, like OpenAI’s GPT-3, employ a type of neural network called a transformer, which allows them to handle complex language tasks with remarkable proficiency. These models can perform a wide range of tasks, such as:

  • Answering questions
  • Summarizing text
  • Translating languages
  • Generating content
  • Even engaging in interactive conversations with users

As LLMs continue to evolve, they hold great potential for enhancing and automating various applications across industries, from customer service and content creation to education and research. However, they also raise ethical and societal concerns, such as biased behavior or misuse, which need to be addressed as technology advances.
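
For a hands-on feel of the “generating content” task, here is a minimal sketch that asks a small, freely available model (GPT-2, a much smaller predecessor of GPT-3) to continue a sentence, assuming the Hugging Face transformers library is installed:

```python
from transformers import pipeline

# Load a small, freely available text-generation model (GPT-2) for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are transforming customer service because"
outputs = generator(prompt, max_length=40, num_return_sequences=1)

print(outputs[0]["generated_text"])
```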


Essential Factors in Constructing an LLM Data Corpus

You must build a comprehensive data corpus to successfully train language models. This process involves gathering vast amounts of data and ensuring its high quality and relevance. Let’s look at the key aspects that significantly influence the development of an effective data library for language model training.

  1. Prioritize Data Quality Alongside Quantity

    A large dataset is fundamental for training language models. Yet, there’s a lot of significance attached to data quality. Models trained on extensive but poorly structured data may yield inaccurate outcomes.

    Conversely, smaller, meticulously curated datasets often lead to superior performance. This reality shows the importance of a balanced approach to data collection. Data that is representative, diverse, and pertinent to the model’s intended scope requires diligent selection, cleaning, and organization.

  2. Select Appropriate Data Sources

    The choice of data sources should align with the model’s specific application goals.

    • Models that generate dialogue benefit from sources like conversations and interviews.
    • Models focusing on code generation will benefit from well-documented code repositories.
    • Literary works and scripts offer a wealth of training material for those targeting creative writing.

    You must include data that spans the intended languages and topics. It helps you tailor the model to perform effectively within its designated domain.

  3. Use Synthetic Data Generation

    Enhancing your dataset with synthetic data can fill gaps and extend its range. You can use data augmentation, text generation models, and rule-based generation to create artificial data that reflects real-world patterns. This strategy broadens the diversity of the training set to enhance the model’s resilience and help reduce biases.

    Make sure you verify the synthetic data’s quality so that it contributes positively to the model’s ability to understand and generate language within its target domain.

  4. Implement Automated Data Collection

    Automating the data collection process facilitates the consistent integration of fresh, relevant data. This approach streamlines data acquisition, boosts scalability, and promotes reproducibility.

    You can efficiently collect varied datasets by using web scraping tools, APIs, and data ingestion frameworks. These tools can be fine-tuned to focus on high-quality, relevant data, optimizing the training material for the model. You must continuously monitor these automated systems to maintain their accuracy and ethical integrity.
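
Purely as an illustration of automated collection, the sketch below fetches a page and keeps its longer paragraphs as candidate training text. It assumes the requests and BeautifulSoup (bs4) libraries are installed, the URL is a placeholder, and a real pipeline would add robots.txt checks, rate limiting, deduplication, and stronger quality filters:

```python
import requests
from bs4 import BeautifulSoup

def collect_paragraphs(url: str) -> list[str]:
    """Fetch a page and return its paragraph texts as candidate training data."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Keep only reasonably long paragraphs as a crude quality filter.
    return [p.get_text(strip=True) for p in soup.find_all("p")
            if len(p.get_text(strip=True)) > 80]

# Hypothetical source URL; swap in vetted sources relevant to your model's domain.
corpus = collect_paragraphs("https://example.com/article")
print(len(corpus), "paragraphs collected")
```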

Popular Examples of Large Language Models

Here are a few prominent examples of LLMs used widely in different industry verticals:

[Figure: Examples of popular large language models. Image source: Towards Data Science]

Understanding the Building Blocks of Large Language Models (LLMs)

To fully comprehend the capabilities and workings of LLMs, it’s important to familiarize ourselves with some key concepts. These include:

Word Embedding

This refers to the practice of translating words into a numerical format that AI models can interpret. In essence, word embedding is the AI's language. Each word is represented as a high-dimensional vector that encapsulates its semantic meaning based on its context in the training data. These vectors allow the AI to understand relationships and similarities between words, enhancing the model's comprehension and performance.
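
As a toy illustration, the sketch below uses made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions) and cosine similarity to show how related words end up with similar vectors:

```python
import numpy as np

# Toy 3-dimensional embeddings; real models learn vectors with hundreds of dimensions.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```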

Attention Mechanisms

These sophisticated components help the AI model prioritize certain elements within the input text over others when generating an output. For example, in a sentence filled with various sentiments, an attention mechanism might give higher weight to the sentiment-bearing words. This strategy enables the AI to generate more contextually accurate and nuanced responses.
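
Here is a minimal NumPy sketch of the scaled dot-product attention used in most LLMs: each position’s output becomes a weighted mix of all value vectors, with the weights derived from query-key similarity (the matrices here are random stand-ins for learned projections):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 per query
    return weights @ V, weights         # each output is a weighted mix of values

# Toy sequence of 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # rows show how much each token attends to every other token
```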

Transformers

Transformers represent an advanced type of neural network architecture employed extensively in LLM research. What sets transformers apart is their self-attention mechanism. This mechanism allows the model to weigh and consider all parts of the input data simultaneously, rather than in sequential order. The result is an improvement in handling long-range dependencies in the text, a common challenge in natural language processing tasks.
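
For a feel of what this looks like in code, here is a brief sketch using PyTorch’s built-in transformer encoder (assuming PyTorch is installed); every token in the sequence is processed in parallel and can attend to every other token:

```python
import torch
import torch.nn as nn

# One encoder layer with 512-dimensional embeddings and 8 self-attention heads.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

# A batch of 2 "sentences", each 10 tokens long, already converted to embeddings.
token_embeddings = torch.randn(2, 10, 512)
contextualized = encoder(token_embeddings)  # each token now reflects its full context
print(contextualized.shape)                 # torch.Size([2, 10, 512])
```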

Fine-Tuning

Even the most advanced LLMs require some tailoring to excel in specific tasks or domains. This is where fine-tuning comes in. After a model is initially trained on a large dataset, it can be further refined, or 'fine-tuned' on a smaller, more specific dataset. This process allows the model to adapt its generalized language understanding abilities to a more specialized task or context.
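
Here is a hedged sketch of what fine-tuning can look like in practice, assuming the Hugging Face transformers and datasets libraries are installed; the choice of model (DistilBERT), dataset (IMDB movie reviews), and hyperparameters is illustrative only:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# A pre-trained general-purpose model, about to be specialized for sentiment analysis.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small, task-specific dataset (IMDB movie reviews labeled positive/negative).
dataset = load_dataset("imdb")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-sentiment", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=dataset["test"].select(range(500)),
    tokenizer=tokenizer,
)
trainer.train()  # adapts the general model to the sentiment task
```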

Prompt Engineering

Input prompts serve as the starting point for LLMs to generate outputs. Crafting these prompts effectively, a practice known as prompt engineering, can greatly influence the quality of the model's responses. It's a blend of art and science that requires a keen understanding of how the model interprets prompts and generates responses.
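
Prompt engineering is easiest to see side by side. The two prompts below are illustrative only; the engineered version spells out a role, constraints, and an example of the desired output format:

```python
# A vague prompt: the model must guess the format, length, and audience.
basic_prompt = "Summarize this customer review."

# An engineered prompt: role, task, constraints, and a worked example are spelled out.
engineered_prompt = """You are a support analyst. Summarize the customer review below
in exactly two sentences, then label the overall sentiment as Positive, Negative, or Mixed.

Example:
Review: "Setup took hours and support never replied, but the product itself works well."
Summary: The customer struggled with setup and received no support response. The product
itself performs well once running.
Sentiment: Mixed

Review: "{review_text}"
"""
```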

Bias

As LLMs learn from the data they're trained on, any bias present in this data can infiltrate the model's behavior. This could manifest as discriminatory or unfair tendencies in the model's outputs. Addressing and mitigating these biases is a significant challenge in the field of AI and a crucial aspect of developing ethically sound LLMs.

Interpretability

Given the complexity of LLMs, understanding why they make certain decisions or generate specific outputs can be challenging. This characteristic, known as interpretability, is a key area of ongoing research. Enhancing interpretability not only aids in troubleshooting and model refinement, but it also bolsters trust and transparency in AI systems.

How are LLM models trained?

Training large language models (LLMs) is quite a feat that involves several crucial steps. Here’s a simplified, step-by-step rundown of the process:


  1. Gathering Text Data: Training an LLM starts with the collection of a vast amount of text data. This data can come from books, websites, articles, or social media platforms. The aim is to capture the rich diversity of human language.
  2. Cleaning Up the Data: The raw text data is then tidied up in a process called preprocessing. This includes tasks like removing unwanted characters, breaking down the text into smaller parts called tokens, and getting it all into a format the model can work with.
  3. Splitting the Data: Next, the clean data is split into two sets. One set, the training data, will be used to train the model. The other set, the validation data, will be used later to test the model’s performance.
  4. Setting up the Model: The structure of the LLM, known as the architecture, is then defined. This involves selecting the type of neural network and deciding on various parameters, such as the number of layers and hidden units within the network.
  5. Training the Model: The actual training now begins. The LLM model learns by looking at the training data, making predictions based on what it has learned so far, and then adjusting its internal parameters to reduce the difference between its predictions and the actual data.
  6. Checking the Model: The LLM model’s learning is checked using the validation data. This helps to see how well the model is performing and to tweak the model’s settings for better performance.
  7. Using the Model: After training and evaluation, the LLM model is ready for use. It can now be integrated into applications or systems where it will generate text based on new inputs it’s given.
  8. Improving the Model: Finally, there’s always room for improvement. The LLM model can be further refined over time, using updated data or adjusting settings based on feedback and real-world usage.

Remember, this process requires significant computational resources, such as powerful processing units and large storage, as well as specialized knowledge in machine learning. That’s why it’s usually done by dedicated research organizations or companies with access to the necessary infrastructure and expertise.
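
To ground steps 2 through 6 above, here is a deliberately tiny, self-contained sketch that trains a character-level next-token predictor with PyTorch (assuming PyTorch is installed). It uses a small recurrent network instead of a transformer purely to keep the example short; real LLM training follows the same loop at a vastly larger scale, on the kind of infrastructure just described:

```python
import torch
import torch.nn as nn

# Steps 1-2: gather and preprocess a toy corpus (character-level tokens keep this tiny).
text = "large language models learn to predict the next token " * 200
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

# Step 3: split into training and validation sets.
split = int(0.9 * len(data))
train_data, val_data = data[:split], data[split:]

# Step 4: define a (very) small model: embedding -> LSTM -> projection onto the vocabulary.
class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        hidden, _ = self.rnn(self.embed(x))
        return self.head(hidden)

model = TinyLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Step 5: training loop: predict the next character everywhere, nudge weights to reduce error.
block_size = 32
for step in range(200):
    starts = torch.randint(0, len(train_data) - block_size - 1, (16,))
    x = torch.stack([train_data[s:s + block_size] for s in starts])
    y = torch.stack([train_data[s + 1:s + block_size + 1] for s in starts])
    loss = loss_fn(model(x).reshape(-1, len(vocab)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Step 6: check performance on the held-out validation split.
with torch.no_grad():
    xv, yv = val_data[:-1].unsqueeze(0), val_data[1:].unsqueeze(0)
    val_loss = loss_fn(model(xv).reshape(-1, len(vocab)), yv.reshape(-1))
print("validation loss:", val_loss.item())
```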

Does the LLM Rely on Supervised or Unsupervised Learning?

Large language models are usually trained using a method called self-supervised learning, a close cousin of supervised learning. In simple terms, this means they learn from examples that show them the correct answers, except that those answers come from the text itself.

Imagine you’re teaching a child words by showing them pictures. You show them a picture of a cat and say “cat,” and they learn to associate that picture with the word. That’s how supervised learning works: the model is given lots of inputs (the “pictures”) and the corresponding outputs (the “words”), and it learns to match them up. For an LLM, the input is a stretch of text and the “correct answer” is simply the word that comes next, so the labels come from the text itself rather than from human annotators.

So, if you feed an LLM a sentence, it tries to predict the next word or phrase based on what it has learned from the examples. This way, it learns how to generate text that makes sense and fits the context.

That said, sometimes LLMs also use a bit of unsupervised learning. This is like letting the child explore a room full of different toys and learn about them on their own. The model looks at unlabeled data, learning patterns, and structures without being told the “right” answers.

Supervised learning employs data that’s been labeled with inputs and outputs, in contrast to unsupervised learning, which doesn’t use labeled output data.

In a nutshell, LLMs are mainly pre-trained with this self-supervised, next-word-prediction setup, often followed by supervised fine-tuning on labeled examples, and they can also use unsupervised learning to enhance their capabilities, such as for exploratory analysis and dimensionality reduction.
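
A few lines of Python make the idea concrete: the “labels” for next-word prediction are simply the words that already follow each prefix in ordinary, unlabeled text:

```python
sentence = "the cat sat on the mat".split()

# Each prefix of the sentence becomes an input; the word that follows it becomes the target.
training_pairs = [(" ".join(sentence[:i]), sentence[i]) for i in range(1, len(sentence))]

for context, target in training_pairs:
    print(f"{context!r} -> {target!r}")
# 'the' -> 'cat'
# 'the cat' -> 'sat'
# ... and so on, with no human labeling required.
```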

What is the Data Volume (In GB) Necessary To Train A Large Language Model?


Training a large language model isn’t a one-size-fits-all process, especially when it comes to the data needed. It depends on a bunch of things:

  • The model’s design.
  • The task it needs to perform.
  • The type of data you’re using.
  • How well you want it to perform.

That said, training LLMs usually requires a massive amount of text data. But how massive are we talking about? Well, think way beyond gigabytes (GB). We’re usually looking at terabytes (TB) or even petabytes (PB) of data.

Consider GPT-3, one of the biggest LLMs around. It was trained on roughly 570 GB of filtered text data. Smaller LLMs might need less – perhaps 10-20 GB, or even around 1 GB – but that’s still a lot of text.


But it’s not just about the size of the data. Quality matters too. The data needs to be clean and varied to help the model learn effectively. And you can’t forget about other key pieces of the puzzle, like the computing power you need, the algorithms you use for training, and the hardware setup you have. All these factors play a big part in training an LLM.

The Rise of Large Language Models: Why They Matter

LLMs are no longer just a concept or an experiment. They’re increasingly playing a critical role in our digital landscape. But why is this happening? What makes these LLMs so important? Let’s delve into some key factors.


  1. Mastery in Mimicking Human Text

    LLMs have transformed the way we handle language-based tasks. Built using robust machine learning algorithms, these models are equipped with the ability to understand the nuances of human language, including context, emotion, and even sarcasm, to some extent. This capability to mimic human language isn’t a mere novelty; it has significant implications.

    LLMs’ advanced text generation abilities can enhance everything from content creation to customer service interactions.

    Imagine being able to ask a digital assistant a complex question and getting an answer that not only makes sense, but is also coherent, relevant, and delivered in a conversational tone. That’s what LLMs are enabling. They’re fueling a more intuitive and engaging human-machine interaction, enriching user experiences, and democratizing access to information.

  2. Affordable Computing Power

    The rise of LLMs would not have been possible without parallel developments in the field of computing. More specifically, the democratization of computational resources has played a significant role in the evolution and adoption of LLMs.

    Cloud-based platforms are offering unprecedented access to high-performance computing resources. This way, even small-scale organizations and independent researchers can train sophisticated machine learning models.

    Moreover, improvements in processing units (like GPUs and TPUs), combined with the rise of distributed computing, have made it feasible to train models with billions of parameters. This increased accessibility of computing power is enabling the growth and success of LLMs, leading to more innovation and applications in the field.

  3. Shifting Consumer Preferences

    Consumers today don’t just want answers; they want engaging and relatable interactions. As more people grow up using digital technology, the need for technology that feels more natural and human-like keeps increasing.

    LLMs offer an unmatched opportunity to meet these expectations. By generating human-like text, these models can create engaging and dynamic digital experiences, which can increase user satisfaction and loyalty. Whether it’s AI chatbots providing customer service or voice assistants delivering news updates, LLMs are ushering in an era of AI that understands us better.

  4. The Unstructured Data Goldmine

    Unstructured data, such as emails, social media posts, and customer reviews, is a treasure trove of insights. It’s estimated that over 80% of enterprise data is unstructured and growing at a rate of 55% per year. This data is a goldmine for businesses if leveraged properly.

    LLMs come into play here, with their ability to process and make sense of such data at scale. They can handle tasks like sentiment analysis, text classification, information extraction, and more, thereby providing valuable insights.

    Whether it’s identifying trends from social media posts or gauging customer sentiment from reviews, LLMs are helping businesses navigate the large amount of unstructured data and make data-driven decisions.

  5. The Expanding NLP Market

    The potential of LLMs is reflected in the rapidly growing market for natural language processing (NLP). Analysts project the NLP market to expand from $11 billion in 2020 to over $35 billion by 2026. But it’s not just the market size that’s expanding. The models themselves are growing too, both in size and in the number of parameters they handle, underscoring their increasing complexity and capacity.

Popular Use Cases of Large Language Models

Here are some of the top and most prevalent use cases of LLM:


  1. Generating Natural Language Text: Large Language Models (LLMs) combine the power of artificial intelligence and computational linguistics to autonomously produce texts in natural language. They can cater to diverse user needs such as penning articles, crafting songs, or engaging in conversations with users.
  2. Translation through Machines: LLMs can be effectively employed to translate text between many pairs of languages. These models use deep learning architectures, such as transformers, to comprehend the linguistic structure of both source and target languages, thereby facilitating the translation of the source text into the desired language.
  3. Crafting Original Content: LLMs have opened up avenues for machines to generate cohesive and logical content. This content can be used to create blog posts, articles, and other types of content. The models draw on their deep-learning training to format and structure the content in a novel and user-friendly manner.
  4. Analysing Sentiments: One intriguing application of Large Language Models is sentiment analysis. In this, the model is trained to recognize and categorize the emotional states and sentiments present in annotated text. The software can identify emotions such as positivity, negativity, neutrality, and other intricate sentiments. This can provide valuable insights into customer feedback and views about various products and services (a quick code sketch follows this list).
  5. Understanding, Summarizing, and Classifying Text: LLMs establish a viable structure for AI software to interpret the text and its context. By instructing the model to understand and scrutinize vast amounts of data, LLMs enable AI models to comprehend, summarize, and even categorize text in diverse forms and patterns.
  6. Answering Questions: Large Language Models equip Question Answering (QA) systems with the capability to accurately perceive and respond to a user’s natural language query. Popular examples of this use case include ChatGPT and BERT, which examine the context of a query and sift through a vast collection of texts to deliver relevant responses to user questions.
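
As noted in the sentiment analysis item above, here is a minimal sketch of that use case, assuming the Hugging Face transformers library is installed (the default pipeline downloads a small pre-trained sentiment model on first use):

```python
from transformers import pipeline

# Downloads a small pre-trained sentiment model on first use.
sentiment = pipeline("sentiment-analysis")

reviews = [
    "The onboarding was painless and support replied within minutes.",
    "Two weeks in and the app still crashes every time I export a report.",
]

for review, result in zip(reviews, sentiment(reviews)):
    print(f"{result['label']} ({result['score']:.2f}): {review}")
```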

Integrating Security and Compliance into LLM Data Strategies

Embedding robust security and compliance measures within LLM data collection and processing frameworks can help you ensure the transparent, safe, and ethical use of data. This approach encompasses several key actions:

  • Implement Robust Encryption: Safeguard data at rest and in transit using strong encryption methods. This step protects information from unauthorized access and breaches; a short example follows below.
  • Establish Access Controls and Authentication: Set up systems to verify user identities and restrict access to data. It’ll ensure that only authorized personnel can interact with sensitive information.
  • Integrate Logging and Monitoring Systems: Deploy systems to track data usage and identify potential security threats. This proactive monitoring aids in maintaining the integrity and safety of the data ecosystem.
  • Adhere to Compliance Standards: Follow relevant regulations such as GDPR, HIPAA, and PCI DSS, which govern data security and privacy. Regular audits and checks verify compliance, ensuring practices meet industry-specific legal and ethical standards.
  • Set Ethical Data Use Guidelines: Develop and enforce policies that dictate the fair, transparent, and accountable use of data. These guidelines help maintain stakeholder trust and support a secure training environment for LLMs.

These actions collectively strengthen the data management practices for LLM training, building a foundation of trust and security that benefits all stakeholders involved.
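
A short example of the first action, encrypting data at rest, is sketched below. It assumes the Python cryptography package is installed; in production, the key would live in a dedicated secrets manager rather than in code:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and store it in a secrets manager (never hard-code it).
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer support transcript collected for LLM training"
encrypted = fernet.encrypt(record)     # safe to persist to disk or object storage
restored = fernet.decrypt(encrypted)   # only holders of the key can read the data

assert restored == record
print(encrypted[:40])
```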

Fine-tuning a Large Language Model

Fine-tuning a large language model involves a meticulous annotation process. Shaip, with its expertise in this field, can significantly aid this endeavor. Here are some annotation methods used to train models like ChatGPT:


Part-of-Speech (POS) Tagging

Words in sentences are tagged with their grammatical function, such as verbs, nouns, adjectives, etc. This process assists the model in comprehending the grammar and the linkages between words.


Named Entity Recognition (NER)

Named entities like organizations, locations, and people within a sentence are marked. This exercise aids the model in interpreting the semantic meanings of words and phrases and provides more precise responses.


Sentiment Analysis

Text data is assigned sentiment labels like positive, neutral, or negative, helping the model grasp the emotional undertone of sentences. It is particularly useful in responding to queries involving emotions and opinions.


Coreference Resolution

Instances where the same entity is referred to in different parts of a text are identified and resolved. This step helps the model understand the context of the sentence, thus leading to coherent responses.


Text Classification

Text data is categorized into predefined groups like product reviews or news articles. This assists the model in discerning the genre or topic of the text, generating more pertinent responses.
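
For a feel of what POS tagging and NER annotations look like in practice, here is a short sketch using the open-source spaCy library and its small English model (assuming both are installed; in a production pipeline such machine-generated annotations are typically reviewed by human annotators):

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Shaip annotated 30,000 clinical documents for a hospital network in Boston.")

# Part-of-Speech tags: the grammatical role of every token.
print([(token.text, token.pos_) for token in doc])

# Named entities: people, organizations, locations, quantities, and so on.
print([(ent.text, ent.label_) for ent in doc.ents])
```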

Shaip can gather training data through web crawling from various sectors like banking, insurance, retail, and telecom. We can provide text annotation (NER, sentiment analysis, etc.), facilitate multilingual LLM (translation), and assist in taxonomy creation, extraction/prompt engineering.

Shaip has an extensive repository of off-the-shelf datasets. Our medical data catalog boasts a broad collection of de-identified, secure, and quality data suitable for AI initiatives, machine learning models, and natural language processing.

Similarly, our speech data catalog is a treasure trove of high-quality data perfect for voice recognition products, enabling efficient training of AI/ML models. We also have an impressive computer vision data catalog with a wide range of image and video data for various applications.

We even offer open datasets in a modifiable and convenient form, free of charge, for use in your AI and ML projects. This vast AI data library empowers you to develop your AI and ML models more efficiently and accurately.

Shaip’s Data Collection and Annotation Process

When it comes to data collection and annotation, Shaip follows a streamlined workflow. Here’s what the data collection process looks like:

Identification of Source Websites

Initially, websites are pinpointed using selected sources and keywords relevant to the data required.

Web Scraping

Once the relevant websites are identified, Shaip utilizes its proprietary tool to scrape data from these sites.

Text Preprocessing

The collected data undergoes initial processing, which includes sentence splitting and parsing, making it suitable for further steps.

Annotation

The preprocessed data is annotated for Named Entity Extraction. This process involves identifying and labeling important elements within the text, like names of people, organizations, locations, etc.

Relationship Extraction

In the final step, the types of relationships between the identified entities are determined and annotated accordingly. This helps in understanding the semantic connections between different components of the text.

Shaip’s Offering

Shaip offers a wide range of services to help organizations manage, analyze, and make the most of their data.

Data Web-Scraping

One key service offered by Shaip is data scraping. This involves the extraction of data from domain-specific URLs. By utilizing automated tools and techniques, Shaip can quickly and efficiently scrape large volumes of data from various sources, such as websites, product manuals, technical documentation, online forums, online reviews, customer service data, and industry regulatory documents. This process can be invaluable for businesses gathering relevant, domain-specific data at scale.


Machine Translation

Develop models using extensive multilingual datasets paired with corresponding transcriptions for translating text across various languages. This process helps dismantle linguistic obstacles and promotes the accessibility of information.


Taxonomy Extraction & Creation

Shaip can help with taxonomy extraction and creation. This involves classifying and categorizing data into a structured format that reflects the relationships between different data points. This can be particularly useful for businesses in organizing their data, making it more accessible and easier to analyze. For instance, in an e-commerce business, product data might be categorized based on product type, brand, price, etc., making it easier for customers to navigate the product catalog.


Data Collection

Our data collection services provide critical real-world or synthetic data necessary for training generative AI algorithms and improving the accuracy and effectiveness of your models. The data is unbiased and ethically and responsibly sourced, with data privacy and security kept in mind.


Question & Answering

Question answering (QA) is a subfield of natural language processing focused on automatically answering questions in human language. QA systems are trained on extensive text and code, enabling them to handle various types of questions, including factual, definitional, and opinion-based ones. Domain knowledge is crucial for developing QA models tailored to specific fields like customer support, healthcare, or supply chain. However, generative QA approaches allow models to generate text without domain knowledge, relying solely on context.

Our team of specialists can meticulously study comprehensive documents or manuals to generate Question-Answer pairs, facilitating the creation of Generative AI for businesses. This approach can effectively tackle user inquiries by mining pertinent information from an extensive corpus. Our certified experts ensure the production of top-quality Q&A pairs that span across diverse topics and domains.


Text Summarization

Our specialists are capable of distilling comprehensive conversations or lengthy dialogues, delivering succinct and insightful summaries from extensive text data.


Text Generation

Train models using a broad dataset of text in diverse styles, like news articles, fiction, and poetry. These models can then generate various types of content, including news pieces, blog entries, or social media posts, offering a cost-effective and time-saving solution for content creation.


Speech Recognition

Develop models capable of comprehending spoken language for various applications. This includes voice-activated assistants, dictation software, and real-time translation tools. The process involves utilizing a comprehensive dataset comprised of audio recordings of spoken language, paired with their corresponding transcripts.


Product Recommendations

Develop models using extensive datasets of customer buying histories, including labels that point out the products customers are inclined to purchase. The goal is to provide precise suggestions to customers, thereby boosting sales and enhancing customer satisfaction.


Image Captioning

Revolutionize your image interpretation process with our state-of-the-art, AI-driven Image Captioning service. We infuse vitality into pictures by producing accurate and contextually meaningful descriptions. This paves the way for innovative engagement and interaction possibilities with your visual content for your audience.


Training Text-to-Speech Services

We provide an extensive dataset comprised of human speech audio recordings, ideal for training AI models. These models are capable of generating natural and engaging voices for your applications, thus delivering a distinctive and immersive sound experience for your users.


Our diverse data catalog is designed to cater to numerous Generative AI Use Cases

Off-the-Shelf Medical Data Catalog & Licensing:

  • 5M+ Records and physician audio files in 31 specialties
  • 2M+ Medical images in radiology & other specialties (MRIs, CTs, USGs, XRs)
  • 30k+ clinical text docs with value-added entities and relationship annotation

Off-the-Shelf Speech Data Catalog & Licensing:

  • 40k+ hours of speech data (50+ languages/100+ dialects)
  • 55+ topics covered
  • Sampling rate – 8/16/44/48 kHz
  • Audio type – spontaneous, scripted, monologue, wake-up words
  • Fully transcribed audio datasets in multiple languages for human-human conversation, human-bot, human-agent call center conversation, monologues, speeches, podcasts, etc.

Image and Video Data Catalog & Licensing:

  • Food/ Document Image Collection
  • Home Security Video Collection
  • Facial Image/Video collection
  • Invoices, PO, Receipts Document Collection for OCR
  • Image Collection for Vehicle Damage Detection 
  • Vehicle License Plate Image Collection
  • Car Interior Image Collection
  • Image Collection with Car Driver in Focus
  • Fashion-related Image Collection


Frequently Asked Questions (FAQ)

ML is a subset of AI that focuses on algorithms and models that enable machines to learn from data. DL is a subfield of ML that utilizes artificial neural networks with multiple layers to learn complex patterns in data. Large language models (LLMs) are a subset of deep learning and share common ground with generative AI, as both are components of the broader field of deep learning.

Large language models, or LLMs, are expansive and versatile language models that are initially pre-trained on extensive text data to grasp the fundamental aspects of language. They are then fine-tuned for specific applications or tasks, allowing them to be adapted and optimized for particular purposes.

Firstly, large language models possess the capability to handle a wide range of tasks due to their extensive training with massive amounts of data and billions of parameters.

Secondly, these models exhibit adaptability as they can be fine-tuned with minimal specific field training data.

Lastly, the performance of LLMs shows continuous improvement when additional data and parameters are incorporated, enhancing their effectiveness over time.

Prompt design involves creating a prompt tailored to the specific task, such as specifying the desired output language in a translation task. Prompt engineering, on the other hand, focuses on optimizing performance by incorporating domain knowledge, providing output examples, or using effective keywords. Prompt design is a general concept, while prompt engineering is a specialized approach. While prompt design is essential for all systems, prompt engineering becomes crucial for systems requiring high accuracy or performance.

There are three types of large language models. Each type requires a different approach to prompting.

  • Generic language models predict the next word based on the language in the training data.
  • Instruction tuned models are trained to predict a response to the instructions given in the input.
  • Dialogue tuned models are trained to have a dialogue-like conversation by generating the next response.