
AI is useless without context

May 8, 2023

During my career in artificial intelligence I have been through developing, improving, applying, and fine-tuning AI algorithms many, many times. At a certain point it became clear to me that the algorithms alone will never be able to solve your problem or use case outside a lab setting.

The reason? Context. AI models put to work in the real world have no way to relate to all the possibilities, across all dimensions, of a real-world setting.

So I started to work on context for AI. First with explicit modeling of context using rules (the if-this-then-that kind of thing). That did not work too well (too much work, I would say). So we aimed at describing the world and offering that as context. From the early 2000s I worked on Knowledge Graphs and their standards (and I still love them). They enabled modeling knowledge, but also flexibility through logical reasoning and inference, finding inconsistencies in our world, and much more. But they are not the final or only answer either (as nothing is, I guess). So when we started to work with deep learning, we thought part of the quest was solved. But that did not really work either. In real-world scenarios the AI models we built failed hopelessly at unexpected and unwanted moments. Why? They failed on context. Again.
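To make the knowledge-graph idea concrete, here is a minimal sketch (not any actual system mentioned above): a graph stored as (subject, predicate, object) triples, with two hypothetical inference rules applied until a fixed point is reached, so that new facts can be derived from the ones explicitly modeled.

```python
# Illustrative sketch: a tiny knowledge graph as triples, plus a naive
# forward-chaining inference loop. All names and rules are made up for
# illustration; real systems use standards like RDF/OWL and reasoners.
triples = {
    ("Cat", "subclass_of", "Mammal"),
    ("Mammal", "subclass_of", "Animal"),
    ("Felix", "instance_of", "Cat"),
}

def infer(triples):
    """Apply inference rules repeatedly until no new facts appear."""
    facts = set(triples)
    while True:
        new = set()
        # Rule 1: subclass_of is transitive.
        for a, p1, b in facts:
            for c, p2, d in facts:
                if p1 == p2 == "subclass_of" and b == c:
                    new.add((a, "subclass_of", d))
        # Rule 2: an instance of a class is an instance of its superclasses.
        for x, p1, k in facts:
            for k2, p2, s in facts:
                if p1 == "instance_of" and p2 == "subclass_of" and k == k2:
                    new.add((x, "instance_of", s))
        if new <= facts:  # fixed point reached
            return facts
        facts |= new

closed = infer(triples)
print(("Felix", "instance_of", "Animal") in closed)  # True
```

The point is the flexibility described above: nobody ever asserted that Felix is an Animal, yet the reasoner derives it, and the same machinery can flag inconsistencies when derived facts clash with asserted ones.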

And then came ChatGPT. Featuring a kind of model we had seen (failing) before, turning racist after only a few hours in the real world. But now with a wrapper that actually made it work… much better! And more reliably. Still not perfect, but hey, given the previous attempts: a great improvement!

And what was the trick, why did it work this time? The layer that OpenAI added was a stroke of genius: a context layer, able to interpret what was happening, able to stop unwanted outcomes to a large extent, and thus enabling the AI model to work in the real world.
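A toy sketch of that idea, assuming nothing about OpenAI's actual implementation: a context layer wraps the raw model, interpreting the request before it goes in and checking the output before it goes out. All names and the policy list here are hypothetical.

```python
# Illustrative sketch of a context layer wrapping a raw model.
# BLOCKED_TOPICS and raw_model are stand-ins, not a real API or policy.
BLOCKED_TOPICS = {"violence", "hate"}  # hypothetical policy list

def raw_model(prompt: str) -> str:
    """Stand-in for the underlying language model."""
    return f"model answer to: {prompt}"

def context_layer(prompt: str) -> str:
    # Pre-check: interpret what is being asked before calling the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    answer = raw_model(prompt)
    # Post-check: stop unwanted outcomes before they reach the user.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "I can't share that answer."
    return answer

print(context_layer("how do cats sleep?"))
```

Real context layers are of course far more sophisticated (learned classifiers, system prompts, moderation models), but the architectural point is the same: the model itself is unchanged; the layer around it is what makes it usable in the real world.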

We are not there yet, and this alone is not enough. But all the great work done in recent years, on graph tech, on deep learning, on transformer models and, not least, this first actually working context layer, makes me very optimistic that we can look ahead with confidence and trust. There is still a lot of work to do, but the basics for a great future with AI seem to be falling into place.

The next thing to add to the equation? Let's rock and allow these models to use their context awareness to solve the parts that language models cannot do: the knowledge parts: factuality, causality, planning, maths, physics, etc. The first approaches have already popped up; I cannot wait to see more integration of it all!

Read this article on Medium.

Meet the author

Robert Engels

Vice President, CTIO Capgemini I&D North and Central Europe | Head of Generative AI Lab
Robert is an innovation lead and a thought leader in several sectors and regions, and holds the position of Chief Technology Officer for Northern and Central Europe in our Insights & Data Global Business Line. Based in Norway, he is a known lecturer, public speaker, and panel moderator. Robert holds a PhD in artificial intelligence from the Technical University of Karlsruhe (KIT), Germany.