Alan Kirker

Dream

June 18th, 2021 by Alan Kirker

In an essay titled “The Power and Peril of Language”, philosopher Susanne K. Langer describes the fundamental nature of human thought:

“The tendency to manipulate ideas, to combine and abstract, mix and extend them by playing with symbols, is man’s outstanding characteristic. It seems to be what his brain most naturally and spontaneously does. Therefore, his primitive mental function is not just judging reality, but dreaming his desires” (1944, p. 52).

Where do our thoughts, emotions, and dreams come from? What role do the stimuli and symbols of our environments and contexts play in their arising? Psychiatrist Carl Jung observes:

“I have also realized that one must accept the thoughts that go on within oneself of their own accord as part of one’s reality. The categories of true and false are, of course, always present; but because they are not binding they take second place. The presence of thoughts is more important than our subjective judgment of them. But neither must these judgments be suppressed, for they also are existent thoughts which are part of our wholeness” (1961, p. 298).

“Time is but the stream I go a-fishing in. I drink at it but while I drink I see the sandy bottom and detect how shallow it is. Its thin current slides away but eternity remains. I would drink deeper; fish in the sky, whose bottom is pebbly with stars. I cannot count one. I know not the first letter of the alphabet. I have always been regretting that I was not as wise as the day I was born” (Henry David Thoreau, 1854, Walden).

A Sheaf of Golden Rules from Twelve Religions | Islam:
“No one of you is a believer until he loves for his brother what he loves for himself” (1946, p. 310).


Hoople, R. E., Piper, R. F., & Tolley, W. P. (1946). A Sheaf of Golden Rules from Twelve Religions, in Preface to Philosophy: Book of Readings (pp. 309-310). New York, United States: The Macmillan Company (1952 ed.)

Jung, C. G. (1961). Memories, Dreams, Reflections. Recorded and edited by Aniela Jaffé; translated from the German by Richard and Clara Winston. New York, United States: Vintage Books – Random House Inc. (1965 ed.)

Langer, S. K. (1944, January). The Power and Peril of Language, from “The Lord of Creation” in Fortune. New York, United States: Time Inc. Reprinted in Hoople, R. E., Piper, R. F., & Tolley, W. P. (1946), Preface to Philosophy: Book of Readings (pp. 50-53). New York, United States: The Macmillan Company (1952 ed.)

Thoreau, H. D. (1854). Walden, Chapter 2, “Where I Lived, and What I Lived For” [HTML document]. Retrieved June 2021.

Alignment

June 18th, 2021 by Alan Kirker

“In artificial intelligence (AI) and philosophy, the AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Its study is motivated by the notion that humanity will have to solve the control problem before any superintelligence is created, as a poorly designed superintelligence might rationally decide to seize control over its environment and refuse to permit its creators to modify it after launch” (Wikipedia, retrieved June 2021).

In this realm, the question of how to reward machine-learning behaviour so as to develop a “policy” that dictates how “intelligent agents” do what we want them to has been supplanted by the question of how to structure the environments in which these agents will operate. In his book The Alignment Problem: Machine Learning and Human Values (2020), author Brian Christian explains why, using the example of ourselves in nature:

“A programmed heuristic like, ‘Always eat as much sugar and fat as you can’ is optimal as long as there isn’t all that much sugar or fat in your environment and you aren’t especially good at getting it. Once that dynamic changes, a reward function that served you and your ancestors for tens of thousands of years suddenly leads you off the rails” (2020, p. 173).
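Christian’s mismatch can be sketched in a few lines of hypothetical code (the function names and numbers below are illustrative, not from the book): the reward heuristic, and the greedy policy that is optimal under it, stay fixed while the environment’s abundance changes underneath them.

```python
# Illustrative sketch: a fixed reward heuristic meets a changed environment.
# All names and quantities here are hypothetical, chosen to mirror
# Christian's sugar-and-fat example.

def sugar_reward(amount_eaten: float) -> float:
    """Fixed heuristic: more sugar is always better (reward grows without bound)."""
    return amount_eaten

def greedy_policy(available_sugar: float) -> float:
    """Eat everything available -- the optimal policy under the heuristic above."""
    return available_sugar

# Scarce environment: the heuristic serves the agent well.
scarce_intake = greedy_policy(available_sugar=2.0)

# Abundant environment: the same unchanged policy overshoots wildly.
abundant_intake = greedy_policy(available_sugar=500.0)

print(sugar_reward(scarce_intake), sugar_reward(abundant_intake))
```

Nothing in the agent changed between the two calls; only the environment did, which is why Christian (and the field) shifted attention from tuning rewards to structuring environments.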

Clues from evolution and child development are now useful to the reward designers of robots and artificial intelligences. Beyond specific policies, Christian says “values” must be instilled in these agents using notions of parenting and pedagogy, in a manner where not only will our actions be understandable to our creations, but their actions will be transparent to us. He cautions against relinquishing too much control, not to the agents and machines themselves, but to the training models we use for these purposes, citing Hannah Arendt on how easily evil can emerge from an ill-conceived but otherwise innocuous template, as the models themselves “might become true” (2020, p. 326).

Given their complex nature, should we wonder whether our intelligent machines might develop some equivalent of emotion? In an essay titled “In The Chinese Room, Do Computers Think?”, science author George Johnson suggests such anomalous behaviour could take the form of “qualities and quirks that arose through emergence, through the combination of millions of different processes. Emotions like these might seem as foreign to us as ours would to a machine. We might not even have a name for them” (1987, p. 169).

How might such artificial emotions arise and what might they be like? As science fiction author Philip K. Dick wonders, will our Androids Dream of Electric Sheep? Do such speculations point to how our very own emotions and thoughts arise, and the factors in our bodies and environments which contribute to their arising?


Christian, B. (2020) The Alignment Problem: Machine Learning and Human Values. New York, United States: Penguin Random House.

Dick, P. K. (1968) Do Androids Dream of Electric Sheep? New York, United States: Penguin Random House – Doubleday.

Johnson, G. (1987). In The Chinese Room, Do Computers Think? in Minton, A. J. & Shipka, T. A. (eds.), Philosophy: Paradox and Discovery, Third Edition (1990) (pp. 156-170). New York, United States: McGraw-Hill.
