Understanding Tokens and Overcoming their Limitations in LLMs

Hardik Rathod
8 min read · Jan 29, 2024

Token limits are one of the most frequently discussed limitations of LLMs in almost any conversation about them. This is because they directly determine how much information the LLM can absorb to answer a specific question: the more information (and the more specifics) you provide, the better the response will be. Before diving into ways to overcome the token limit, let's first briefly cover the limitations of LLMs. Large Language Models have two types of limitations:

  1. Conceptual Limitations: These can be thought of as soft limitations. Examples include domain knowledge, languages known, and the training data cutoff. In an ideal situation, these limitations can be overcome given more data, more resources, or both.
  2. Technological Limitations: These are hard limitations. Examples include input and output token limits, memory limits, and the number of trainable parameters. They cannot be overcome without changing the architecture of the LLM, and even then they can never be fully eliminated. If your current LLM has an input limit of 1k tokens, you can build a larger LLM that raises the limit to 8k, 36k, or even 100k tokens, but my point is that some limit will always exist.
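To see why the input token limit matters in practice, here is a minimal sketch of how you might budget context to fit under a model's limit. It uses a rough character-based heuristic (roughly 4 characters per token for English text); the function names and the heuristic are my own illustration, not from the article, and a real tokenizer (e.g. OpenAI's tiktoken) should be used for exact counts.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Use an actual tokenizer for exact counts in production.
    return max(1, len(text) // 4)


def fit_to_limit(chunks: list[str], token_limit: int) -> list[str]:
    # Greedily keep context chunks until the estimated token budget is spent.
    kept, used = [], 0
    for chunk in chunks:
        cost = estimate_tokens(chunk)
        if used + cost > token_limit:
            break  # the next chunk would exceed the model's input limit
        kept.append(chunk)
        used += cost
    return kept
```

With a 1k-token model you would call `fit_to_limit(chunks, 1000)` and inevitably drop context once the budget runs out, which is exactly the hard limitation described above.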
