Why Does ChatGPT Forget the Conversation?

The world of chatbots and artificial intelligence continues to evolve, and with that comes a series of intricate processes to understand. One such process is how language models like ChatGPT retain and process chat memories. This article will delve into the way Large Language Models (LLMs) manage memory, focusing in particular on token limitations.

Contextual Memory in Chat Conversations

When interacting with ChatGPT, the chatbot retains memories only from the current chat session; it does not carry information over from one session to the next. Instead, it keeps track of the ongoing conversation to maintain context.

How Token Limits Work

Each conversation is made up of a series of tokens. In language processing, tokens are chunks of text: a token is often a whole word, but longer or rarer words are split into several sub-word pieces, and punctuation usually counts as well. A common rule of thumb for English is that a token averages about four characters, or roughly three-quarters of a word.
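The four-characters-per-token rule of thumb can be turned into a quick estimator. This is only an approximation; real systems use a trained sub-word tokenizer (such as OpenAI's BPE tokenizers) whose exact splits differ from any character count.

```python
# Rough token estimate using the common "~4 characters per token"
# rule of thumb for English text. A real tokenizer splits text into
# learned sub-word pieces, so treat this as a ballpark figure only.

def estimate_tokens(text: str) -> int:
    """Approximate the number of tokens in a piece of English text."""
    return max(1, len(text) // 4)

print(estimate_tokens("How does photosynthesis work?"))  # prints 7
```

This kind of estimate is handy for judging how close a conversation is getting to its limit without running a real tokenizer.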

To illustrate, let’s consider an example:

Suppose you initiate a chat with the question, “How does photosynthesis work?”

ChatGPT responds with an explanation.

Later, you follow up with, “And what’s its significance?”

You might assume that you’re only sending your subsequent question to ChatGPT. However, the backend system views it as:

[meta data]

User: How does photosynthesis work?

Assistant: [Explanation of photosynthesis]

User: And what’s its significance?
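The transcript above can be sketched as a list of role-tagged messages, modeled on the OpenAI Chat Completions message format. The key point is that every turn re-sends the entire history, not just the newest question.

```python
# A sketch of how the full conversation is re-sent on every turn,
# using the OpenAI-style role/content message format. The exact
# metadata a real backend attaches is an assumption here.

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How does photosynthesis work?"},
    {"role": "assistant", "content": "[Explanation of photosynthesis]"},
]

def send_message(history: list, new_message: str) -> list:
    """Append the new user turn; the whole list is what the model sees."""
    history.append({"role": "user", "content": new_message})
    return history

payload = send_message(history, "And what's its significance?")
print(len(payload))  # the model receives all 4 messages, not just 1
```

Because the whole list travels with each request, the token cost of a turn grows with the length of the conversation so far.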

As the conversation proceeds, tokens accumulate. Once the total reaches a certain threshold, known as the token limit or context window, earlier parts of the conversation may be discarded to make room for new messages.

How Token Limitation Can Cause the Chatbot to “Forget” the Context

Every LLM has a predefined token limit. Once this limit is reached, the model can’t process more tokens unless some are removed. Consequently, the earliest parts of the chat will be the first to be excluded, creating a sense that the chatbot has “forgotten” those parts.
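The pruning described above can be sketched as a sliding window: when the running total exceeds the limit, the oldest messages are dropped first. The token counter below reuses the rough four-characters-per-token heuristic; production systems count real tokens and often pin the system prompt so it is never dropped.

```python
# Minimal sketch of context-window pruning: when the conversation
# exceeds the token limit, drop the oldest messages first.

def count_tokens(message: dict) -> int:
    """Heuristic token count (~4 characters per token)."""
    return max(1, len(message["content"]) // 4)

def prune_history(history: list, token_limit: int) -> list:
    """Drop messages from the start until the total fits the limit."""
    pruned = list(history)
    while pruned and sum(count_tokens(m) for m in pruned) > token_limit:
        pruned.pop(0)  # earliest turns are "forgotten" first
    return pruned

history = [
    {"role": "user", "content": "How does photosynthesis work?"},
    {"role": "assistant", "content": "Photosynthesis converts light..." * 20},
    {"role": "user", "content": "And what's its significance?"},
]

trimmed = prune_history(history, token_limit=50)
# The long explanation and the original question no longer fit,
# so only the most recent user turn survives.
```

This is exactly why a later reference to "photosynthesis" can draw a blank: the turns that defined it have been popped off the front of the window.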

For example, after a lengthy conversation about various topics, if you were to reference “photosynthesis” again, ChatGPT might not recall the specifics of the initial discussion about it if that part has been pruned due to token limits.

The token limit varies depending on the specific model you're interacting with. For instance, the paid tiers of ChatGPT provide access to models with larger context windows than the free version.

How to Get the Chatbot to Remember More of the Conversation Despite Token Limitations

If you’re involved in a long and context-heavy chat with ChatGPT and are concerned about losing key details, there are strategies you can employ.  Periodically asking ChatGPT to summarize the ongoing conversation, or highlighting specific details you want to retain, can help. When the bot responds with the summary or acknowledgment, that information is effectively “refreshed” in the conversation, ensuring it remains in the chat’s active memory for longer.
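The summarization strategy above can be sketched as a compaction step: older turns are collapsed into a single summary message so the key facts stay inside the window. The summarize() function here is a hypothetical stand-in; in practice you would ask the model itself to produce the summary.

```python
# Sketch of the "refresh by summarizing" strategy: replace older turns
# with one summary message so key details survive window pruning.

def summarize(messages: list) -> str:
    # Hypothetical stand-in: a real system would ask the LLM to do this.
    topics = ", ".join(m["content"][:30] for m in messages if m["role"] == "user")
    return f"Summary of earlier discussion: {topics}"

def compact_history(history: list, keep_last: int = 2) -> list:
    """Collapse all but the most recent turns into one summary message."""
    if len(history) <= keep_last:
        return history
    older, recent = history[:-keep_last], history[-keep_last:]
    return [{"role": "assistant", "content": summarize(older)}] + recent

history = [
    {"role": "user", "content": "How does photosynthesis work?"},
    {"role": "assistant", "content": "Plants convert light into energy."},
    {"role": "user", "content": "What pigments are involved?"},
    {"role": "assistant", "content": "Chlorophyll a and b, mainly."},
]

compacted = compact_history(history)  # 4 messages become 3
```

The summary costs far fewer tokens than the turns it replaces, which is what keeps the important details in the active window for longer.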

Moving Forward

As development of AI chatbots continues, there is no doubt that token limits will keep increasing. At the same time, developers are feverishly working on ways to get their chatbots to store and retrieve more information while using fewer tokens per interaction. This will allow chatbots not only to remember more information for deeper interactions, but also to produce responses faster using less compute power.
