Quantum Helps AI Models to ‘Understand’
Quantinuum framework looks to quantum computing to help teach AI to understand concepts
AI systems that can understand the world, instead of merely predicting the next word or line of code, are the dream of many AI researchers.
A group of quantum computing scientists has developed a new approach that brings that dream a step closer: a framework that enables machines to learn concepts the way humans do.
A new paper from the team at Quantinuum describes the framework, which lets AI systems learn concepts like shape and color. The machine can not only look at an image and recognize it but also understand the meaning of the object it depicts.
They developed the Compositional Quantum Framework, which is designed to structure and learn concepts automatically from data through both classical and quantum computing approaches.
The framework draws on a branch of mathematics called category theory, which comes with a graphical calculus for representing objects and morphisms: objects are drawn as labeled wires and morphisms as boxes connecting those wires, giving a visual, intuitive picture of complex operations.
In simple terms, the researchers merged insights from quantum computing with concepts from cognitive science, creating a framework whose mathematical structure allows an AI system to visualize an action.
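To make the wires-and-boxes picture concrete, here is a minimal, self-contained Python sketch of sequential composition in a graphical calculus. It illustrates the general idea only; the class names (Wire, Box) and the image-to-label pipeline are assumptions made for this example, not Quantinuum's code.

```python
from __future__ import annotations
from dataclasses import dataclass

# Objects are labeled wires; morphisms are boxes from input wires to
# output wires. Composing boxes plugs outputs into matching inputs.

@dataclass(frozen=True)
class Wire:
    label: str  # e.g. "image", "shape", "label"

@dataclass(frozen=True)
class Box:
    name: str
    dom: tuple[Wire, ...]  # input wires (domain)
    cod: tuple[Wire, ...]  # output wires (codomain)

    def __rshift__(self, other: Box) -> Box:
        # Sequential composition f >> g: only allowed when the output
        # wires of f match the input wires of g.
        if self.cod != other.dom:
            raise TypeError(f"cannot compose {self.name} >> {other.name}")
        return Box(f"{self.name} >> {other.name}", self.dom, other.cod)

image, shape, label = Wire("image"), Wire("shape"), Wire("label")

extract_shape = Box("extract_shape", (image,), (shape,))
classify = Box("classify", (shape,), (label,))

pipeline = extract_shape >> classify  # a diagram: image -> shape -> label
print(pipeline.name)  # extract_shape >> classify
```

Open-source libraries such as DisCoPy, which grew out of this research community, implement a much richer version of these diagrams.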
Quantinuum applied the concept to image recognition, demonstrating that concepts like shape, color, size and position can be taught to machines that are trained on images of shapes.
Quantinuum’s framework breaks concepts down into simpler parts so the system can see how they relate and interact with each other – like a detailed map of sorts.
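As a rough illustration of that map, the sketch below composes a concept like "small red square" from independent parts. The function names and the dictionary-based object representation are hypothetical, chosen for this example rather than taken from the paper.

```python
# Hypothetical sketch: a composite concept built from simpler parts.
# Each part is a predicate over an object's attributes; composing parts
# yields a new concept whose structure stays visible, not a black box.

def red(obj):    return obj["color"] == "red"
def square(obj): return obj["shape"] == "square"
def small(obj):  return obj["size"] < 0.5

def compose(*parts):
    """A composite concept holds exactly when all of its parts hold."""
    return lambda obj: all(part(obj) for part in parts)

small_red_square = compose(small, red, square)

obj = {"shape": "square", "color": "red", "size": 0.3, "position": (0.2, 0.7)}
print(small_red_square(obj))  # True: the whole follows from its parts
```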
By improving a machine’s ability to understand an action or concept, the team at Quantinuum hopes the research will contribute to AI systems that not only predict but also understand.
Beyond the black box
Top minds in the AI field want to push past generative AI to create more powerful systems. Meta’s Yann LeCun recently gave a speech in which he argued that generative AI should be abandoned in favor of systems that understand the world around them.
The research team at Quantinuum also wants to achieve this goal, but with accountability in mind. The researchers argue that current large language models are essentially black boxes whose underlying workings users cannot examine.
“In the current environment with accountability and transparency being talked about in artificial intelligence, we have a body of research that really matters, and which will fundamentally affect the next generation of AI systems. This will happen sooner than many anticipate,” said Ilyas Khan, Quantinuum’s founder.
Quantinuum, while primarily a quantum computing company, has a deep history of AI-related research. This latest effort focuses on the interpretability of AI systems, which the firm hopes will aid safety efforts.
“AI has the power to cause serious harm alongside immense good. It is critical that users understand why a system is making the decisions it does. When we read and hear about ‘safety concerns’ with AI systems, interpretability and accountability are key issues,” a company blog post reads.
Quantinuum’s framework can run on both classical computers and quantum machines, though the paper says the latter are a more natural fit for its category-theoretic structure.
It is still early days for the Compositional Quantum Framework, with the team behind it saying “substantial further work” is needed to demonstrate that it can be applied in areas such as AI agents.
This article was first published on Enter Quantum’s sister site AI Business.