A context window is the working memory of an AI model: how much information it can hold onto while generating a response to your prompt. Learn more about why context windows matter and how popular AI models compare.
The context window of an AI model is a way of measuring how much information the model can remember, working similarly to humans’ short-term memory. If you went to the grocery store to pick up a few items, how many items could you successfully remember to buy without writing a shopping list? When you’re working with an AI model, a context window is the amount of information you can give the machine to provide you with an accurate response to your prompt.
If you’ve ever worked with a large language model, you may have noticed that within a long conversation, the model can forget things you discussed earlier. This can be frustrating when you’re working on a longer project. A larger context window means the AI can retain more information as you work, allowing you to write more complex prompts, analyze longer texts, perform multi-prompt tasks, receive more nuanced answers, and more.
Learn more about how AI models use context windows to provide more accurate and nuanced responses and how to measure context windows and compare the context windows of some popular AI models.
If an AI model thinks similarly to how a human thinks, then a context window is the AI’s working memory, allowing the model to store information temporarily to use for the task at hand. For example, you could ask an AI model to read a news article and summarize the main points. The AI model would need to be able to remember your prompt, all of the content of the news article, and its own response to deliver a cohesive answer to your request.
This is a light job for many of the AI models available on the market today. With a large enough context window, you could ask an AI model to summarize a whole book, a series of books, or even a library of books. Beyond summarizing, larger context windows allow AI models to give more accurate, complex, and nuanced responses to your prompts.
With larger context windows, researchers have shown that AI is capable of completing complex, nuanced tasks. For example, Google researchers gave their Google Gemini 1.5 Pro model the only grammar manual in existence for Kalamang, a critically endangered language with fewer than 200 speakers, as context for a translation task. The grammar book was compiled between 2015 and 2019 in an effort to document the language [1]. Although the model had no prior training in translating between English and Kalamang, it performed translation tasks at a skill level similar to that of a human working from the same grammar book [2].
Magic AI recently announced that its still-in-development model supports a 100 million token context window. For perspective, that would equal roughly 750 novels or about 10 million lines of code [3]. While the applications of an AI with these capabilities will become clearer once the model is widely available, Magic AI is focusing its development on AI that specializes in generating code using your existing code, documentation, and programming libraries as context, including libraries not publicly available online.
It’s easier to understand how revolutionary a 100 million token context window would be if you first understand how to measure a context window and what a token is. Tokens are the units of text a language model processes; a single token can be a character, part of a word, a whole word, or a short phrase. It’s tempting to measure context windows in words, but breaking language into tokens offers a major advantage: a model needs less computational power to process a sequence of tokens than raw text, making tokenization a more efficient approach for large language models.
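To make tokens concrete, here is a minimal sketch using OpenAI’s open-source tiktoken library, chosen purely for illustration; Gemini, Claude, and other models use their own tokenizers, so their counts will differ. It encodes a sentence and compares the number of tokens to the number of words.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of tiktoken's built-in encodings; other models'
# tokenizers will split the same text differently.
encoding = tiktoken.get_encoding("cl100k_base")

sentence = "A context window is the working memory of an AI model."
tokens = encoding.encode(sentence)

print("Words: ", len(sentence.split()))        # word count of the sentence
print("Tokens:", len(tokens))                  # usually close to, but not equal to, the word count
print([encoding.decode([t]) for t in tokens])  # each token is a character, word piece, or whole word
```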
Although the number of characters in any given token varies, you can do some quick math to estimate how many tokens a text document will represent. As a rule of thumb, assume a document will have about 30 percent more tokens than words, keeping in mind that the actual number could be noticeably smaller or larger depending on the type of document and the tokenization process you use.
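As a back-of-the-envelope check, you can turn that rule of thumb into a few lines of code. This sketch assumes roughly 1.3 tokens per English word (the 30 percent figure above); it is an estimate only, and running a real tokenizer on your document will give different numbers.

```python
def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Rough token estimate: assume about 30 percent more tokens than words.

    Actual counts depend on the tokenizer and the kind of text; code,
    non-English languages, and unusual words tend to use more tokens.
    """
    word_count = len(text.split())
    return round(word_count * tokens_per_word)


# Example: an 80,000-word novel works out to roughly 104,000 tokens.
print(estimate_tokens("word " * 80_000))
```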
Now that you have a better understanding of how researchers measure context windows with tokens and why context windows are important, you may be curious to know how your go-to AI model compares to its peers. Explore how context windows differ between Gemini, Claude, and ChatGPT, then compare against the claims of Magic AI’s development team.
Magic AI (still in development): 100 million tokens
Google Gemini 1.5 Pro: 1 million tokens (up to 10 million in testing)
Anthropic Claude Pro: 200,000 tokens
OpenAI GPT-4 Turbo (used in ChatGPT): 128,000 tokens
As you can see, a 100 million token context window would be roughly 100 times larger than the largest context window generally available today, and 10 times larger than the 10 million tokens Google has reported testing with Gemini 1.5 Pro.
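To put those limits in perspective, here is a small sketch that checks whether a document of a given estimated size fits in each context window listed above. The window sizes are hard-coded from the figures reported in this article and may change; the token estimate reuses the rough 1.3-tokens-per-word rule from earlier.

```python
# Context window sizes as reported in this article (approximate and subject to change).
CONTEXT_WINDOWS = {
    "Magic AI (in development)": 100_000_000,
    "Google Gemini 1.5 Pro": 1_000_000,
    "Anthropic Claude Pro": 200_000,
    "OpenAI GPT-4 Turbo": 128_000,
}


def fits_in_window(estimated_tokens: int) -> None:
    """Print whether a document of the given size fits in each model's window."""
    for model, limit in CONTEXT_WINDOWS.items():
        verdict = "fits" if estimated_tokens <= limit else "too large"
        print(f"{model}: {verdict} ({estimated_tokens:,} of {limit:,} tokens)")


# An 80,000-word novel at ~1.3 tokens per word is roughly 104,000 tokens.
fits_in_window(104_000)
```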
If you want a career working directly with generative AI, whether creating AI-powered content or expanding the capabilities and context windows of AI models, you might consider becoming an AI researcher, data scientist, or AI developer. Explore these roles below, including their average salaries and job outlooks in the United States.
AI researcher
Average annual salary in the US (Glassdoor): $99,352 [1]
Job outlook (projected growth from 2023 to 2033): 26 percent [2]
As an AI researcher, you will work on projects that advance AI capabilities, such as finding ways to increase the context windows of AI models or working on projects that use AI tech to solve real-world problems. You will work with a team to conduct research, report your findings to stakeholders, publish results, and push the limits of this technology to new heights.
Data scientist
Average annual salary in the US (Glassdoor): $118,075 [3]
Job outlook (projected growth from 2023 to 2033): 36 percent [4]
As a data scientist, you will work with your company or organization to find meaningful insights from data. You will determine what data you need to answer questions, collect data, store and clean data, and then interact with and analyze the data to find patterns. In this role, you will report your findings to senior stakeholders and make recommendations about how they can use your insight in actionable ways.
AI developer
Average annual salary in the US (Glassdoor): $110,972 [5]
Job outlook (projected growth from 2023 to 2033): 17 percent [6]
As an AI developer, you will work to create software applications using AI technology. In this role, you may work with a team of developers to design and create AI solutions. You will implement these solutions and provide training to team members about how to use your software.
A context window is a metric that measures how much data an AI model can remember before it starts forgetting things you’ve told it. Learn more about working with generative artificial intelligence on Coursera. For example, you can enroll in the AI For Business Specialization from the University of Pennsylvania to learn skills in artificial intelligence, machine learning, and fraud prevention.
Language Science Press. “A Grammar of Kalamang, https://langsci-press.org/catalog/book/344.” Accessed January 30, 2025.
Google. “What Is A Long Context Window?, https://blog.google/technology/ai/long-context-window-ai-models.” Accessed January 30, 2025.
Magic AI. “100M Token Context Windows, https://magic.dev/blog/100m-token-context-windows.” Accessed January 30, 2025.
Glassdoor. “Salary: AI Researcher in the United States, https://www.glassdoor.com/Salaries/ai-researcher-salary-SRCH_KO0,13.htm.” Accessed January 30, 2025.
US Bureau of Labor Statistics. “Computer and Information Research Scientists: Occupational Outlook Handbook, https://www.bls.gov/ooh/computer-and-information-technology/computer-and-information-research-scientists.htm.” Accessed January 30, 2025.
Glassdoor. “Salary: Data Scientist in the United States, https://www.glassdoor.com/Salaries/data-scientist-salary-SRCH_KO0,14.htm.” Accessed January 30, 2025.
US Bureau of Labor Statistics. “Data Scientists: Occupational Outlook Handbook, https://www.bls.gov/ooh/math/data-scientists.htm.” Accessed January 30, 2025.
This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.