
Part 2. Generative AI: It Could Redefine “The Theory of Knowledge”


Credit: DALL-E 2

“The contemplation in natural science of a wider domain than the actual leads to a far better understanding of the actual.” — Sir Arthur Eddington

In Part 1 of this series, I said, “I will refrain from saying that Generative Artificial Intelligence (AI) is the basis for a third school of epistemology.” A recent article I read explored the challenges of Generative AI from the perspective of metaphysics. My resolve against declaring a new school of epistemology is weakening, and this article explains why. Neural Networks and Deep Learning algorithms, combined with synthetic data, sit near the current frontier of AI. These tools are becoming increasingly effective at providing new insights into natural systems. Equally important, they are particularly well suited to a world we increasingly understand in terms of quantum physics, complexity science and systems thinking. This ability to further our understanding of science and natural systems is what excites me about AI, and about Generative AI in particular.

Modern science was heavily influenced by Descartes, who saw science as macroscopic, top-down, mechanistic and deterministic. This thinking created a very empirical metaphysics, or theory of reality. Descartes’ approach can be described as “substance metaphysics”: reality is substance, and substance has mass and shape. This approach naturally leads to an epistemology (theory of knowledge) consistent with an observable world. This metaphysics and epistemology worked well to launch the Industrial Revolution. Near the end of the 19th century, a series of mathematicians and physicists began the original work that led to quantum physics.

Quantum physics is the study of the behavior and interactions of matter and energy at the atomic and subatomic level. It is a branch of physics that deals with phenomena at very small scales. In quantum mechanics, particles such as electrons, protons, neutrons, and photons behave in unexpected ways [1]. A quantum system exists as a collection of different possibilities, described by probabilities, until it is measured. Effectively, the scale at which we observe and describe reality changed from something you could hold in your hand to something invisible. Because we cannot see sub-atomic or atomic particles, or their equivalents in wave form, we have no instinctive understanding of quantum physics, which explains why so few people understand the science and fewer still agree on it.


Quantum physics provides three valuable concepts for better understanding reality: 1) the component nature of reality, 2) the combination of components in emergent ways, and 3) measurement replacing deterministic causality as the explanation of phenomena. The component nature of reality is a hierarchical, self-organizing, bottom-up building-block approach that creates atoms, molecules, organs, animals and humans. The components can also be combined “synthetically” to create manmade results that have utility (value). When we examine the natural and synthetic combinations we recognize “systems” and the process of emergence (complexity science), wherein a result cannot be predicted from the components alone. “Quantum physics shows that there is an emergent level of reality at larger scales; …objects do not have definite and fixed characteristics (uncertainty principle), and that objects are not separate from each other but that their identity is determined by internal structures as well as by their place in space and time.” [2] To simplify, think of quantum mechanics, complexity science and systems thinking as an integrated body of thought (the metaphysics), still developing but consistent across different approaches and objectives. [3] To model these non-equilibrium, non-linear phenomena of both natural and manmade systems at all scales, we needed new tools. Such modeling does not lend itself to traditional data fitting or early machine learning algorithms. Deep Learning, Neural Networks and their variations are perhaps the best tools we have today for understanding this 3-part metaphysics.
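To make the idea of emergence a little more concrete, here is a minimal sketch of an elementary cellular automaton. It is my own illustration in Python, not something taken from the cited sources: every cell follows the same trivial local rule, yet the global pattern that unfolds is far richer than anything visible in the rule itself.

```python
# Illustrative only: an elementary cellular automaton (Rule 110).
# Each cell updates from a trivial local rule, yet the global pattern
# that emerges is far more complex than the rule suggests.

RULE = 110
WIDTH, STEPS = 64, 32

def step(cells, rule=RULE):
    """Update every cell from its left/self/right neighborhood."""
    nxt = []
    for i in range(len(cells)):
        left = cells[(i - 1) % len(cells)]
        right = cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (cells[i] << 1) | right  # value 0..7
        nxt.append((rule >> neighborhood) & 1)                # look up the rule bit
    return nxt

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single "on" cell as the seed

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```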

Turing Award winner Yoshua Bengio describes deep learning as:

…methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features. Automatically learning features at multiple levels of abstraction allow a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features. [4]
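As a concrete illustration of “composition of lower-level features,” the sketch below stacks a few layers so that each one builds its features from the output of the layer beneath it. It assumes PyTorch is available and is a toy example, not Bengio’s architecture.

```python
# Illustrative only (assumes PyTorch): a tiny feed-forward stack in which
# each layer composes the features produced by the layer beneath it.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # low-level features from the raw input
    nn.Linear(256, 64), nn.ReLU(),    # mid-level features built from low-level ones
    nn.Linear(64, 10),                # high-level abstraction mapped to the output
)

x = torch.randn(1, 784)               # stand-in for one flattened 28x28 image
print(model(x).shape)                 # torch.Size([1, 10])
```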

This power of abstraction to produce and document the novel is in fact the power of all mathematics. Professor Hannah Fry explains this point well.

Mathematics is about abstracting away from reality, not about replicating it. And it offers real value in the process. By allowing yourself to view the world from an abstract perspective, you create a language that is uniquely able to capture and describe the patterns and mechanisms that would otherwise remain hidden. And, as any scientist or engineer of the past 200 years will tell you, understanding these patterns is the first step toward being able to exploit them. [5]

This ability to abstract has always been a fundamental and uniquely human capacity for learning and knowledge creation. Therein lies the theoretical threat that AI could take over and make humans subordinate, or worse. While this is an important ethical and existential issue for the 21st century, I am more optimistic about, and more interested in, the ability of advanced algorithms to develop knowledge of physical, natural and manmade systems at a deeper level of understanding. This potential is made clear by MacArthur Genius awardee Daphne Koller:

We were able to identify — using ML (machine learning) — patterns [within the liver tissue] that correspond with known genetic drivers of disease. Human pathologists couldn’t see those patterns because they don’t even know what to look for. We found that if we let ML loose on the samples — if we let machines have an unfettered, unanchored look at the data — they’re able to identify disease-causing and disease-modifying associations that a human just can’t see.

The venture capital firm a16z perhaps makes the point more simply:

Our best tools for understanding and manipulating biology have historically been observation and experimentation. Then, we developed the ability to “read” and “write” in biology. And now, the convergence of biology with computation and engineering has finally unlocked our ability to “execute” in biology. In other words: biology is becoming programmable. [6]

This ability to “read” and “write” synthetic data is what we now call Generative AI. The consultants at Gartner describe Generative AI as “AI techniques that learn a representation of artifacts from the data and use it to generate brand-new, completely original artifacts that preserve a likeness to original data.” This generation of brand-new data, and the related abstractions, is what so excites Koller. Gartner (and many others) believe Generative AI can be applied “in a wide range of industries, including life sciences, healthcare, manufacturing, material science, media, entertainment, automotive, aerospace, defense and energy.” [7]
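Stripped to its bare bones, that definition amounts to: learn a representation of the data, then sample brand-new points that preserve a likeness to it. The toy example below, which fits a simple Gaussian with NumPy, is my own deliberately minimal stand-in for real generative models.

```python
# Toy illustration of "learn a representation, then generate new artifacts":
# fit a Gaussian to observed data and sample synthetic points from it.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(loc=[2.0, -1.0], scale=[0.5, 1.5], size=(1000, 2))  # "observed" data

# Learn a representation: the empirical mean and covariance.
mu, cov = real.mean(axis=0), np.cov(real, rowvar=False)

# Generate brand-new data points that preserve a likeness to the original.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print("real mean:     ", real.mean(axis=0).round(2))
print("synthetic mean:", synthetic.mean(axis=0).round(2))
```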

The advantage of an approach like Koller’s is that the knowledge of the scientist or investigator is not the limiting factor it can be in data fitting. If we were to develop a simple map of knowledge creation, “humans’ abilities for abstraction and analogy are at the root of many of our most important cognitive capacities.” [8] Mapping these relationships, we can show how knowledge is derived:

Pattern Recognition > Analogy > Abstraction > Novel Insight > Knowledge

To me, this map looks very similar to how Generative AI behaves. The legendary mathematician David Hilbert (1900) said, “Mathematics is the foundation of all exact knowledge of natural phenomena”. Renowned computer science professor Jeannette Wing expands on Hilbert’s thought:

Computational Thinking focuses on the process of abstraction — choosing the right abstractions, operating in terms of multiple layers of abstraction simultaneously and defining the relationships between the layers. [9]

This approach not only explains the current state of AI but also mirrors the complex 3-part metaphysics of quantum mechanics, complexity science and systems thinking. Complexity science explains the hierarchical system, systems thinking applies it to the totality of a problem, and quantum mechanics provides the foundational layers upon which all synthesis and abstraction begin. In some ways it looks to me like advances in areas such as Generative AI have been aided by our expanded scientific knowledge of natural phenomena over the last sixty years. As scientific knowledge advanced, it demonstrated the power of AI to validate new science. To me this looks like autocatalysis, wherein the product of a process is a critical input to that same process. This kind of self-reinforcing model has been incredibly successful in the past, as evidenced by Google’s PageRank algorithm, which values a page according to the number and importance of the pages that link to it, so the output of the ranking feeds back into the ranking itself (a sketch follows below).
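That self-referential flavor can be sketched with a standard PageRank-style power iteration. The tiny link graph below is hypothetical and the code is an illustration of the general idea, not Google’s implementation.

```python
# Illustrative PageRank-style power iteration: a page's score is built from
# the scores of the pages that link to it, so the output feeds the input.
import numpy as np

# links[i] = pages that page i links to (a tiny hypothetical web).
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, damping = 4, 0.85

rank = np.full(n, 1.0 / n)
for _ in range(50):
    new = np.full(n, (1.0 - damping) / n)
    for page, outgoing in links.items():
        for target in outgoing:
            new[target] += damping * rank[page] / len(outgoing)
    rank = new

print(rank.round(3))  # higher score = more (and more important) inbound links
```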

E.O. Wilson, the legendary Harvard biologist, wrote in 1999:

“We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.”

My hope is that the computational power of tools such as Generative AI will be equally successful in unraveling the issues embedded in the social sciences. Such success, I think, reduces the likelihood that we humans lose control of AI. As AI enhances our ability to abstract insight and eventually derive knowledge, we redefine the nature of intelligence. More importantly, for the first time in human history we may have the computational tools to advance new solutions to social inequities and the impending environmental apocalypse. AI such as Generative AI will hopefully fulfill the visionary mission statement of DeepMind — “Solve intelligence and use it to solve everything else”.

In Part 3 of this series, I plan to discuss a new approach to creativity and opportunity identification that is required by the advances in AI.

Note: This optimistic view of AI was perhaps first presented by the polymath John von Neumann in the 1950s. As Mirowski explains von Neumann’s vision:

“Von Neumann nurtured a vision of explanation that was much more willfully mechanical and hierarchical. In this “teleological” world, simple micro-level, rule-governed structures would interact in mechanical, possibly random, manners; and out of their interaction would arise some higher-level regularities that would generate behaviors more complex than anything which could be explained at the level of micro entities. The complexity of these emergent macro properties would be formally characterized by their information-processing capacities relative to their environments. The higher level structures would be thought of as “organisms,” which in turn would interact with each other to produce even higher-level structures called “organizations.”…[Von Neumann’s would be a mathematics that] truly applied to the Natural and the Social, the live and the life-like, uniformly and impartially. This theory would apply indifferently and without prejudice to molecules, brains, computers and organizations.”

(Mirowski 2002, 144, 147) (Bhattacharya 2022)

[1] ChatGPT, OpenAI

[2] Quantum and systems theory in world society: Not brothers and sisters but relatives still?

[3] Statistical Physics and Complexity

[4] Yoshua Bengio (2009), Learning Deep Architectures for AI

[5] Hannah Fry, The Mathematics of Love: Patterns, Proofs, and the Search for the Ultimate Equation

[6] Programming Biology: Read (r), Write (w), Execute (x) (a16z)

[7] 5 Impactful Technologies From the Gartner Emerging Technologies and Trends Impact Radar for 2022 (Gartner)

[8] Abstraction and Analogy-Making in Artificial Intelligence (arXiv)

[9] Microsoft Research Asia Faculty Summit 2012

This article was originally published on Medium and re-published to TOPBOTS with permission from the author.

