Extending GPT-3’s context window infinitely by storing context in GPT-3 itself or in secondary layers

Let’s give GPT-3 what it needs to power an AGI. Feedback is welcome.

Erwin Mayer
4 min read · May 23, 2021

Context

OpenAI’s GPT-3 is one of the best (and most underrated) things that happened to mankind in 2020. It showed that an AI can surpass the zero-shot and few-shot learning abilities of most humans on a huge range of general tasks, both logical and creative, and pass the duck test for human intelligence (a.k.a. the Turing test) with flying colors.

The implications are profound, offering a fresh perspective on both what it means to be intelligent and the notion of agency. Indeed, what are we, if not machines predicting and merely obeying our next most likely action/thought based on the context window of our life and everything we’ve been exposed to, building our own model of the world and acting upon it?

GPT-3 has also built its own model of the world, essentially of how things (symbols, words and abstract concepts) relate to each other, in a way more comprehensive and accurate than any symbolic AI researcher or enthusiast (myself humbly included) could have hoped for, or remotely anticipated.

We have this amazing model, which proves human knowledge can be encoded in a huge neural network, yet today we can only use it when the context and the expected response together fit within 2048 tokens (roughly 1,500 words of English). Most importantly, we cannot easily and efficiently teach it new information the way we can teach a human intelligence (which is really the Holy Grail for AGI). Fine-tuning and retraining the whole model open useful possibilities, but are far from the kind of one-shot feedback loops humans excel at.
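As a rough illustration of what fits in that window, here is a minimal sketch (assuming the Hugging Face transformers library) that counts tokens with the GPT-2 tokenizer, whose BPE vocabulary GPT-3 reuses:

```python
# Minimal sketch: how much English text fits in GPT-3's 2048-token window.
# Assumes the Hugging Face transformers library is installed.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "GPT-3 has built its own model of how symbols, words and concepts relate."
n_tokens = len(tokenizer.encode(text))
n_words = len(text.split())
print(f"{n_words} words -> {n_tokens} tokens")
# English prose averages roughly 0.75 words per token, so the 2048-token
# window holds on the order of 1,500 words of prompt plus response combined.
```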

Proposal

What about training a meta-model that would learn where and how to best adjust GPT-3’s parameters so it can “learn” new knowledge, preserved, contextualized and linked to the huge swath of knowledge already in the network?
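I am not aware of an off-the-shelf implementation of this, but the core mechanic can be sketched in PyTorch. Everything below is a hypothetical toy: EditorNet is an invented name, a single linear map stands in for one GPT-3 layer, and the objective is a single input/output pair rather than real text. The point is only to show a meta-model learning to emit a low-rank weight delta that makes the frozen layer satisfy a new fact:

```python
# Toy sketch of the meta-model idea: an "editor" network learns to produce
# a rank-1 update (u v^T) for one frozen weight matrix, so the edited layer
# maps a new input x to a desired output y. Hypothetical, not an existing API.
import torch
import torch.nn as nn

d = 64  # hidden size of the toy stand-in layer

class EditorNet(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.to_u = nn.Linear(d, d)
        self.to_v = nn.Linear(d, d)

    def forward(self, fact_encoding):
        u = self.to_u(fact_encoding)   # (d,)
        v = self.to_v(fact_encoding)   # (d,)
        return torch.outer(u, v)       # (d, d) low-rank delta for W

W = torch.randn(d, d)                  # frozen weights standing in for GPT-3
editor = EditorNet(d)
opt = torch.optim.Adam(editor.parameters(), lr=1e-3)

x, y = torch.randn(d), torch.randn(d)  # the "fact": map x to y in one shot
fact_encoding = x                      # in practice, an embedding of the text

for _ in range(200):
    delta = editor(fact_encoding)
    loss = nn.functional.mse_loss((W + delta) @ x, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Trained over many such facts, the editor would learn where and how to place its edits; here it merely solves one.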

For more modularity, what if, given something we want GPT-3 to learn (an input) and GPT-3 itself, we could find how to modify the parameters of a secondary layer/model so that the combination of the two produces an output consistent with the input provided?
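Since GPT-3’s weights are not publicly accessible, a sketch has to use a stand-in; below, GPT-2 plays that role. The Adapter class and the fact string are hypothetical, but the structure is the proposal’s: freeze the base model entirely and train only a small secondary layer until the combination reproduces the input:

```python
# Sketch: frozen base model + trainable secondary layer. GPT-2 stands in
# for GPT-3; the Adapter bottleneck and the fact are hypothetical examples.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False            # the core model stays untouched

class Adapter(nn.Module):
    """Small residual bottleneck applied to the frozen backbone's states."""
    def __init__(self, d, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(d, bottleneck)
        self.up = nn.Linear(bottleneck, d)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

adapter = Adapter(model.config.n_embd)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)

fact = "Erwin's dog is named Pixel."   # hypothetical knowledge to store
ids = tokenizer(fact, return_tensors="pt").input_ids

for _ in range(100):
    hidden = model.transformer(ids).last_hidden_state  # frozen backbone
    logits = model.lm_head(adapter(hidden))            # only adapter learns
    loss = nn.functional.cross_entropy(                # next-token loss
        logits[:, :-1].reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The base model’s knowledge is untouched by construction; only the handful of adapter parameters carry the new information.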

Both the reward function and the evaluation benchmark could simply prompt the model itself to verify that the combined network can reliably and robustly recall what it has been taught (integrated knowledge, not raw data memorization).
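Concretely, the check could probe the combined model with paraphrases rather than the raw training string. A minimal sketch, where generate_answer is a hypothetical helper wrapping the combined model:

```python
# Sketch of self-verification as a reward: the taught fact must survive
# rephrasing, not just verbatim repetition. `generate_answer` is hypothetical.
probes = [
    "What is the name of Erwin's dog?",
    "Erwin's dog is called",
    "Tell me about Erwin's pet.",
]
expected = "Pixel"

def recall_score(generate_answer) -> float:
    """Fraction of paraphrased probes answered with the taught fact."""
    hits = sum(expected.lower() in generate_answer(p).lower() for p in probes)
    return hits / len(probes)          # 1.0 = robust recall; usable as reward
```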

Context itself could therefore be taught, expanding GPT-3’s capabilities indefinitely beyond 2048 tokens and giving it the ability to build its own memory and evolving model of the world, one shot at a time, the way humans do (at worst, with the same defects and limitations we have).

Contextual layers could be deeply personal, taught with all the stimuli we experience ourselves. Language is a good start: the same way we could teach a human locked in a room with only a console about our life, by journaling what we hear, read, say and think, we could generate an AI that knows almost everything about us and could become the ultimate personal assistant, or an extension of self. The same mechanism could also leverage layers at an organizational level (e.g. company knowledge), stacked on the core, common, mankind-level probabilistic knowledge we are willing to trust (e.g. GPT-3, the same way we trust Wikipedia).
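Continuing the earlier sketch (and reusing its hypothetical Adapter instances for the organizational and personal layers), stacking these layers is mechanically simple; one possible arrangement:

```python
# Sketch: a frozen shared core with organizational and personal layers
# composed on top. Reuses the hypothetical Adapter from the earlier sketch.
import torch.nn as nn

class LayeredModel(nn.Module):
    def __init__(self, core, org_adapter, personal_adapter):
        super().__init__()
        self.core = core                   # mankind-level knowledge (GPT-3)
        self.org = org_adapter             # e.g. company knowledge
        self.personal = personal_adapter   # one person's lived context

    def forward(self, ids):
        h = self.core.transformer(ids).last_hidden_state
        h = self.org(h)                    # most general contextual layer
        h = self.personal(h)               # most specific layer last
        return self.core.lm_head(h)        # shared language-model head
```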

Conclusion

What I propose here is the simplest mechanism that lets an AI find a way to modify only a small part of its existing parameters to learn a specific input, without retraining the model or changing its architecture. I believe this is a key step towards a more general AI able to learn, improve and contextualize its knowledge, and to leverage it in a robust and scalable way.

I plan to work on this myself, but it is such a big, important, challenging and potentially risky endeavor that I feel compelled to share it with the AI community, in the hope of receiving feedback and help from people with a better understanding of the technicalities and the skills needed to refine the idea and eventually implement it.

If you like the idea, and want to discuss, collaborate, or help, please comment here or reach out to me.

I’d like to thank GPT-3 for writing this conclusion (all the possibilities it suggested, based solely on what I had written before, were spot on).

[Image: Infinity skyscrapers. Source: joelfilip on Unsplash]
