Current AI systems do not exhibit active intelligence; instead, they are large-scale cognition models.
Summary
This thesis puts forward a framework in which computation, cognition, and intelligence are defined as distinct yet interdependent operations. First, computation is conceptualized as a series of logical operations that enable the transfer and transformation of information, in line with the formal theories of Turing and Shannon. Building on this computational layer, cognition is introduced as a cost-minimization process. Here, "cost" denotes the gap between a system's outputs and a reference standard of truth; cognition thus continuously works to narrow that gap, improving its approximation of truth.
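The cost-minimization view of cognition can be sketched in a few lines. This is an illustrative toy only: the squared-error cost, the learning rate, and the scalar "truth" value are stand-in assumptions, not definitions from the thesis.

```python
# Toy model of "cognition as cost minimization": each cognitive iteration
# shrinks the gap (cost) between the system's output and a truth value.

def cognition_step(estimate: float, truth: float, lr: float = 0.1) -> float:
    """One cognitive iteration: nudge the estimate to reduce the squared-error cost."""
    cost_gradient = 2.0 * (estimate - truth)  # derivative of (estimate - truth)**2
    return estimate - lr * cost_gradient

def cognize(estimate: float, truth: float, iterations: int = 50) -> float:
    """Repeated iterations drive the output toward the truth value."""
    for _ in range(iterations):
        estimate = cognition_step(estimate, truth)
    return estimate

print(cognize(0.0, 3.0))  # converges toward 3.0
```

Each step multiplies the remaining error by a constant factor below one, so the approximation of truth improves monotonically, which is the defining behavior the paragraph attributes to cognition.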
Above cognition lies intelligence, formulated as a value-gain function. Intelligence takes the outputs of a cognition model, evaluates their value relative to the input state, and seeks to maximize this value through further iterations of cognition. In other words, while cognition aims at reducing error, intelligence aims at increasing value. Crucially, the thesis posits that intelligence is not merely an emergent property of large cognitive systems; rather, it is constructive: a deliberate process of selecting actions or judgments that improve outcomes. Even simple living cells, though they possess only minimal cognition models, display active intelligence under this definition.
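The intelligence layer described above can be sketched as a selection loop sitting on top of a cognition model. The noisy cognition model and the particular value function below are illustrative assumptions; the thesis itself only specifies the structure: generate cognitive outputs, score their value against the input state, and keep only value-increasing ones.

```python
import random

def cognition(state: float) -> float:
    """Toy cognition model: a noisy transformation of the current state."""
    return state + random.gauss(0.0, 0.5)

def value(output: float, input_state: float) -> float:
    """Toy value-gain function: higher value the closer the output is to a goal
    derived from the input state (here, twice the input)."""
    return -abs(output - 2.0 * input_state)

def intelligence(input_state: float, rounds: int = 20, samples: int = 8) -> float:
    """Intelligence layer: sample several cognitive outputs per round,
    select the highest-value one, and accept it only if value increases."""
    best = input_state
    for _ in range(rounds):
        candidates = [cognition(best) for _ in range(samples)]
        top = max(candidates, key=lambda c: value(c, input_state))
        if value(top, input_state) > value(best, input_state):
            best = top  # constructive step: keep only value-increasing outputs
    return best

random.seed(0)
print(intelligence(1.0))  # stochastic climb toward the high-value region
```

Note the division of labor: `cognition` merely produces candidate outputs, while `intelligence` performs the evaluative selection, mirroring the claim that intelligence is constructive rather than emergent.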
Applying these principles to current AI, the thesis argues that contemporary large-scale models (such as language or image-generation systems) mainly exhibit sophisticated cognition. Their reinforcement learning (RL) components or "reasoning" modules approximate intelligence but still rely on external value assessments. In everyday use, such as iteratively refining an AI-generated image prompt, humans serve as the intelligence layer, performing the value-based selection among multiple cognitive outputs. These systems therefore do not replace human intelligence; they extend human cognition.
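The human-in-the-loop pattern in the prompt-refinement example can be made concrete by factoring the value assessment out as an external chooser. Everything here is a hypothetical stand-in: `generate_candidates` substitutes for a real image- or text-generation call, and the `choose` callable substitutes for the human's judgment.

```python
from typing import Callable, List

def generate_candidates(prompt: str) -> List[str]:
    """Hypothetical cognition step: produce several output variants for a prompt."""
    return [f"{prompt} (variant {i})" for i in range(3)]

def refine(prompt: str, choose: Callable[[List[str]], int], rounds: int = 2) -> str:
    """The external chooser (the human) is the intelligence layer:
    the model generates, the human performs the value-based selection."""
    current = prompt
    for _ in range(rounds):
        candidates = generate_candidates(current)
        current = candidates[choose(candidates)]
    return current

# A scripted "human" that always prefers the last variant:
result = refine("a red bicycle", lambda cs: len(cs) - 1)
```

Because the value judgment lives entirely in `choose`, the model never replaces the human's intelligence; it only multiplies the cognitive outputs the human can select among, which is the extension-of-cognition claim in miniature.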