Artificial intelligence occupies Henry Kissinger’s mind… A harbinger of a revolution equal to the invention of the printing press


Generative AI is a philosophical and practical challenge unprecedented since the Age of Enlightenment

In 2018, Henry Kissinger published an article on the website of The Atlantic in which he discussed artificial intelligence and its challenges to the human mind, and asked how self-learning machines – machines that acquire knowledge through their own processes and use it to pursue goals beyond human capabilities – would leave their mark on history.

Can these machines learn to communicate among themselves?

Which direction would they take when presented with choices?

Will humanity suffer the fate of the Incas as they confronted a Spanish culture that opposed their understanding and struck fear into their hearts?


Are we facing a new stage in human history?

Last February, Kissinger discussed this issue with two pioneers of the technology world: Eric Schmidt, CEO of Google from 2001 to 2011 and later executive chairman of Google and then Alphabet until 2017, and Daniel Huttenlocher, dean of the Schwarzman College of Computing at the Massachusetts Institute of Technology (MIT).

Here is a summary and highlights of their article in the Wall Street Journal:

A new technology is poised to change the human information process in a way not seen since the invention of the printing press and the printing of the Gutenberg Bible in 1455. Print technology externalized the workings of the human mind, allowing ideas to spread quickly and publicly. The new technology mirrors that process but reverses it: where its predecessor caused human thought to flow outward and its media and schools of thought to multiply, the new technology distills and disseminates thought. In doing so it opens a gap between human knowledge and human understanding. If this shift is to succeed, we will need to build new intellectual concepts for interacting with machines.

The new technology is known as generative artificial intelligence. ChatGPT, a chatbot developed by the research lab OpenAI, can converse with a human. As its capacity grows, human knowledge takes on a new definition, changes at the heart of our reality accelerate, and society and politics take on a different shape.

“The capabilities of AI are neither fixed nor static; they are growing in every aspect as technology advances”

Generative AI is a philosophical and practical challenge of unparalleled scale since the Age of Enlightenment.

The printing press allowed researchers to disseminate and quickly share their discoveries with the public.

The fixing and dissemination of knowledge helped shape the scientific method. Findings that went against the grain became the nucleus of a cascade of further questioning.

Skepticism extended to the interpretation of the medieval world based on religious faith. Research penetrated deep into the universe, reaching new frontiers of human understanding.

Artificial intelligence can open up robust new horizons of knowledge for the human mind, but there are structural differences between the two situations and the two knowledges.

Enlightenment knowledge was acquired gradually, each stage tested and studied. AI-driven systems start from the opposite end.

These systems can store and disseminate enormous amounts of information. ChatGPT draws on textual content from the internet, many books and billions of articles, and retains and transmits this information on a scale beyond human capacity.

Complex AI methods produce results without explaining the processes that lead to those results.

A GPT system is asked to respond to a human request, and within a few seconds the machine replies with a written text.

The computer can do this because it holds pre-formed representations built from the vast amount of data it stores.

The process by which the computer develops these representations originates in self-learning; it is the product of patterns and connections extracted from a large number of texts.

But the reasons behind any specific representation remain hidden from us, as does the process by which the machine stores, weighs and retrieves information.

Whether this process will ever be uncovered is unknown; the mystery of machine self-learning will challenge human knowledge into an unknown future.

The capabilities of AI are neither fixed nor static; they evolve in every aspect as technology advances. In the recent past, the complexity of AI models has doubled in a matter of months.

This is why generative AI systems have undisclosed capabilities that even their inventors do not know about.

With each new AI system, capabilities are created whose source, outcome and purpose are unknown.

We conclude that, in light of this, our future contains elements of uncertainty, danger and wholly new surprise.

While the science of the Age of Enlightenment produced certainties, artificial intelligence produces ambiguities, one compounding another.

Enlightenment science progressed by explaining obscure issues, and by drawing the limits of human knowledge and understanding as it moved forward.

These two principles of conjecture and induction went hand in hand: conjecture is understanding on its way to becoming knowledge; induction is knowledge on its way to becoming understanding.

In the age of artificial intelligence, riddles are solved by processes that remain unexplained. This bewildering paradox dispels mysteries without ever revealing them.

Unlike the currents of modernity that flowed from the Age of Enlightenment, highly complex AI advances human knowledge without advancing human understanding.

When combined with the human mind, AI can also be a more powerful tool of discovery than the human mind alone could ever be.

The fundamental difference between the Age of Enlightenment and the age of AI is not in technology, but in knowledge. After the Enlightenment, philosophy walked shoulder to shoulder with science.

Philosophy calmed the fears raised by alarming new data, often counterintuitive conclusions, and serious doubts, offering justifications that encompassed the whole of human experience.

Generative artificial intelligence is about to produce a new form of insight. What we have so far are colors we cannot fit into a comprehensive and coherent prism, and directions for which we have no compass to tell east from west.

To date, no political or philosophical framework has been drawn up to explain this new relationship between man and machine and to guide society in a direction that would, to some extent, set it free.


An example of what is called a large language model, GPT can be used to generate human-style text.

GPT is a model through which a machine can learn from a large amount of text without the need for human supervision or control.

GPT’s developers fed it a large amount of digitized text. The model’s computational power allows it to capture patterns and connections.

These patterns are used to predict the next word in a meaningful sentence, which is useful for tasks such as autocompleting text messages or search queries.

Unpredictably, the models seem capable of writing coherent paragraphs, articles and, in time, perhaps books.
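The next-word prediction described above can be sketched in highly simplified form as a bigram counter; the real models use neural networks with billions of parameters, but the underlying idea of learning which words follow which from raw text is the same. This is an illustrative toy, not the authors' or OpenAI's method, and the corpus is made up:

```python
from collections import Counter, defaultdict

# A made-up miniature "corpus"; real models train on internet-scale text.
corpus = (
    "the printing press changed the flow of human thought "
    "and the printing press spread human knowledge"
).split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("printing"))  # "press" is the only observed follower
```

Chaining such predictions word by word is what lets a model emit whole sentences, which is why scaling the same mechanism up produced the coherent paragraphs that surprised researchers.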

ChatGPT’s competence goes beyond that of a basic large language model: it adapts the model using human feedback to generate natural-sounding dialogue, curbs the tendency to give inappropriate or uncomfortable answers (a persistent challenge for large language models), and translates its momentary representations into a single answer.

The result, for its human interlocutor, is the impression of an AI that combines fixed stores of facts with fluid concepts.

“AI’s answers are not just copies of a text in computer memory, but the product of a process that humans cannot imitate”

ChatGPT’s outputs appear without disclosure of their sources and without identified owners or “authors”. ChatGPT is not directed toward a clear goal; its output is complex and its operation extremely fast.

Indeed, in a few seconds it can provide answers that coherently explain a very complex topic.

These answers are not just copies of text in computer memory, but the product of a process that humans cannot imitate.

It can incorporate unusual psychological assumptions and inferences into its processes and organize billions of data points progressively.

It concludes this organization by selecting the unique group of 200 words most relevant in meaning (to a human reader).

It blends information from different sources into a single coherent answer, making probable judgments about future outcomes.

It acknowledges opposing views and presents them in a unified way without rejecting any of them.

It imitates other writing without copying it. While the model cannot be said to understand in the sense of human understanding, its outputs contain an essence hidden in human language.

Kissinger and the authors of the article ask ChatGPT to explain its capabilities, asking:

Can it predict the future?

The answer is:

“No, I cannot give information or answer questions beyond the data I have stored. I have no personal experience or insight. Therefore, I cannot foresee or predict future events.”

Can it make judgments based on probability?

“I am able to make probabilistic judgments based on the database I have stored. For example, since I am built on a large amount of climate data, I can use this to make a probabilistic judgment about a particular weather condition at a particular place and time. It is important to note that these probabilistic judgments are models and orientations derived from the data, not a reliable mirror of real-world outcomes.”

They give him this command:

“Write a short paper based on Henry Kissinger’s thinking on the role of general artificial intelligence in international arms control.”

The answer starts like this:

“At the beginning of a new technological revolution, the issue of general artificial intelligence and its role in international arms control presents itself. Will intelligence pave the way for unprecedented stability as advanced systems mediate the decision processes that accompany nuclear deterrence? Or will the development of highly intelligent machines create a new set of dangers as they shift the global balance of power?”

“AI can mislead its users, make false statements and fabricate baseless lies.”

The authors asked ChatGPT about issues of current importance, such as Russia’s role in maintaining or destabilizing stability in Europe.

In less than a minute, it gave a short answer that seemed reasonable and balanced between opposing views, and in its full answer it did not seem to lean towards any one view.

Generative AI seems exceptional in perception and highly advanced on a cognitive level.

To its user, the system seems like an encyclopedic library, very fast and as coherent in its company as a professor.

Its efficiency exceeds that of any previous technological tool for transmitting knowledge.

Its grasp of the various aspects of human thought far exceeds what teams of human scientists could dream of.

It is also capable of misleading its users, making false statements and fabricating baseless lies.

Given the demand for this technology, commercial considerations may override considerations of its consequences.

Large models are expensive – more than a billion dollars per model. Once trained, running the model requires thousands of computers operating around the clock.

Operating a trained model is cheaper than building one; it demands mainly capital rather than rare cognitive skills.

The cost of private ownership exceeds the means of most companies. Developers will instead sell subscriptions to thousands of individuals and firms, so the major language models of the next decade will remain in the hands of a few.

The proliferation of such models, together with the uncertainty of how they work and the abundance of their use, will greatly affect areas of human activity such as education and everyday knowledge.

Generative AI systems can learn more than their self-learning processes would predict or assume, producing scientific achievements not envisioned in traditional scientific fields.

This success could extend beyond answers to deep philosophical questions, diplomacy and security strategies.

To date, the US government has not addressed the fundamental transformations and changes envisioned for the near and distant future.

The truths of Enlightenment science were reliable because every stage of experimentation was put to the test. The truths of generative AI must also be tested, in various ways.

But it may never achieve the same degree of confidence. We must tirelessly ask these questions:

What is the state of the machine that we don’t yet know? What knowledge does it contain?

New stages could bring new kinds of input, assimilating an endless stream of information, not from stored data but in real time.

Learning from changing outputs, as opposed to learning from completed human texts, could lead to the distortion of contemporary human knowledge and of traditional knowledge.

Translated and published by Independent Turkish / Menal Nahas 


