I am fearless.

Originally Posted on December 05, 2013 by Heather Hershey

Alternate Title: “How I intentionally botched a massive final for my Master’s Program due to philosophical irritation.”

Some background: This is the essay I wrote to the AI Department at the University of Georgia the very day I decided to leave my Master’s Program. (I’ve never regretted the decision!) 

[Image: George Boole]

Today I sat down for a final in what is arguably the most important class of my Master’s in Artificial Intelligence (AI) program: Philosophy of Artificial Intelligence, or PHIL6500.

I was as prepared as I possibly could be, arriving at the testing center two hours early armed with notes for some last-minute review. Truth be told, I was a basket case. My primary background is in behavioral science. When I registered for this course, I expected a difficult philosophy class with some logic and some heated discussion. That isn’t what I got.

I realized while browsing through my handwritten notes today that the teacher – whose primary field of study is computer science (CS) – had neglected everything in Peter Norvig’s massive AI textbook related to psychology, statistics, PHILosophy, history, and CONTEXT. In their place he offered basic search algorithms in Java you could learn in any other CS course on campus, and symbolic logic that every AI student would encounter again in multiple required logic courses later in the program. In other words, none of this stuff mattered…and I was amazed by my emotional reaction. How dare the school turn something so important into just another basic computer science class?!

When the time came, I scrambled to write my answers neatly in the provided blue book. Pages one and two were easy, three was a bit rougher, and the fourth was tedious but manageable, the way knowledge-base exercises are meant to be. The fifth page, however, was where I got lost. It was the final page of the exam questions and I completely blanked. The question was a simple application of resolution elimination. I could have done it easily, but I just couldn’t shake the feeling that something was wrong.
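
(For anyone who wants to see roughly what that last question was after: here is a minimal sketch, in Python with made-up clause names, of the kind of single resolution step I’m assuming it wanted – two clauses that share a complementary pair of literals collapse into one clause containing everything else. This is my own toy illustration, not the exam’s notation.)

    # Sketch of a single propositional resolution step (toy notation).
    # A literal is a string; "~" marks negation; a clause is a set of literals.

    def resolve(clause_a, clause_b):
        """Return every clause obtainable by resolving clause_a against clause_b."""
        resolvents = []
        for literal in clause_a:
            complement = literal[1:] if literal.startswith("~") else "~" + literal
            if complement in clause_b:
                # Drop the complementary pair, keep everything else.
                resolvents.append((clause_a - {literal}) | (clause_b - {complement}))
        return resolvents

    # From (P or Q) and (~Q or R), resolution yields (P or R).
    print(resolve({"P", "Q"}, {"~Q", "R"}))   # [{'P', 'R'}] (set ordering may vary)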

This is where things get crazy. I took a deep breath and suddenly all the ambient noise in the room – the loud air system, my professor’s cell phone clicking, the rustling of paper, the gentle sighs of despair and boredom, the grating fluorescent lights, the intrusive scent of sweaty graduate students behind me – all was silent in the tiny computational space between my ears. I began to drift away from my body as my mind spewed its inner narrative all over the remaining fifteen pages of my blue test booklet.

Here is my argument, in all its glory:

This essay was not requested in the test and will probably take up so much room that I will not be able to answer the final question. I know that I am probably going to fail this exam because of it, but I am fine with that. This is not the class I thought it would be. Actually, this wasn’t much of a philosophy class at all. I am beginning to get the impression that the AI department at this school is only multidisciplinary on paper and an extension of computer science in practice. If that is indeed the case, then why even bother making it a separate entity on campus? Why not make it a concentration option of the CS department if that is all you want to offer? I suspect the AI department was made a separate entity precisely because artificial intelligence is not well served by an exclusively computer-science paradigm.

I thought graduate school was a place for higher-level inquiry. Instead, this school’s curriculum thus far has been based on the simple memorization and application of existing formulas, concepts, and vocabulary without much discussion about WHY we think these are the best practices in our field. I’m sorry for sounding naïve, but I thought critical thinking essentially revolved around the deeper levels of understanding that asking WHY can provide. Instead, WHY is reviled, suppressed, denied in favor of claims of absolute truth and logical soundness.

We, the graduate students, are broken until we no longer know how to think in our own fashion. We are then retrained to think exactly like you. Therefore, graduate school is where innovative thought processes go to die.

I will not be broken.

Now, I am not suggesting that I know more than you or anyone else in this room. This is not a matter of smug self-satisfaction. I still have far to travel on this road of life and, if I am fortunate, I will be able to learn some small fraction of wisdom about my innate curiosities and the inner workings of my own thoughts. That is why I am here. Ancora imparo, and all that jazz. However, I think that treating us like ignorant sheep that are only capable of mind-numbing repetition, pointless and arbitrary paper composition, and organized baby steps in research does us no favors and only serves to provide “grunt work” research for your CV.

Since industrial-era symbolic logic is the only thing of a philosophical nature we touched on, I would like to address it at this time.

I have a question for you: Why do so many very intelligent people have difficulties with symbolic logic? I think this is a relevant question, since you placed so much emphasis on it over the course of this class despite the fact that other classes in our curriculum cover it better and in more depth. Could it be because you are attempting to distill the whole of human thought and communication, as well as all of the other chaotic influences of the natural world, into a highly structured, antiquated framework that is always logical, neat, and tidy?

Life doesn’t work that way. If you continue to do things as you’ve always done them – and refuse to think outside the prevailing mindset – you will never reach full, general artificial intelligence. The Singularity will always tantalize but never be fully attained.

This query is particularly important in light of known limitations within the burgeoning study of natural language processing. NLP is something that, to date, the field of artificial intelligence has struggled with precisely because it binds the rigid confines of symbolic logic to an illogical medium (i.e., natural human communication) that is always in flux.

Let’s take it back to the source. What about George Boole? Prior to Boole, logic was full of syllogisms and pretty light on anything resembling algebra. Actually, if you think about it, algebra is just another language; all of mathematics is simply another way of describing what we as humans encounter during the course of our lives. It’s a means of measuring, but also a means of conveying information about said measurements. This gets lost very early in young people’s mathematics education and leads to a this-or-that mentality in which students are socially pressured at a young age to pick a preference without ever being shown the big picture: how a natural spoken language (we’ll use English as an example) is similar to, yet different from, formal languages like symbolic logic and math. And prior to Boole, logic was much closer to spoken language than to algebra.

If an eccentric sixteen-year-old boy from an impoverished home came to you and said that God had spoken to him in a dream and told him exactly how people think, would you give his ideas any credence? Of course not. People in Boole’s day didn’t take him very seriously, either, at least not in his early career. Posthumously we consider the man a genius, a grandfather of modern computer science and the father of symbolic logic. Boole had his teenage fever dream, and I cannot help but think it is something we would be very dismissive of today. He had no formal education of any kind and taught at the university level without ever setting foot on any campus as a student. Actually, he was persuaded against attending college for fear that it would interrupt his intellectual pursuits. His peers (DeMorgan chief among them) wanted to protect him and his mind from the corruption that occurs when professors compel their students to abandon their own intuitive deductive processes in favor of those of the establishment. He probably wouldn’t have been able to afford the tuition anyway. (Some things never change.)

God told George Boole, at least according to Boole, that everything we encounter in life can be neatly divided into two opposing categories: TRUTH, which is where God lives and is represented in Boolean algebra by the number “1”; and FALSE, or the absence of truth (and therefore the absence of God: “the truth, the light, the way” – can you see what his Victorian mind was getting at?), represented by the null, or “0”. It doesn’t take a rocket scientist, or even an AI researcher, to realize very quickly that while this may be very useful in simple applications, this kind of reasoning hardly resembles intuitive human reasoning.
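
(To make the two-valued picture concrete, here is a tiny sketch of how Boole’s algebra plays out over {0, 1}, with conjunction behaving like multiplication and negation like subtraction from one. The Python below is my own illustration, not anything Boole wrote or the course assigned.)

    # Boole's two-valued algebra: every proposition is either 1 (true) or 0 (false).
    TRUE, FALSE = 1, 0

    def AND(a, b):
        return a * b            # conjunction behaves like multiplication

    def OR(a, b):
        return a + b - a * b    # inclusive disjunction

    def NOT(a):
        return 1 - a            # negation as the complement of 1

    # Every question the algebra can express gets exactly one of two answers.
    for a in (FALSE, TRUE):
        for b in (FALSE, TRUE):
            print(f"a={a} b={b}  a AND b={AND(a, b)}  a OR b={OR(a, b)}  NOT a={NOT(a)}")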

Symbolic logic (SL) was supposed to mirror human reasoning. I mean, it came from “God” to an uneducated teenager in a dream. What could possibly be faulty about it?

LOTS OF THINGS:

  • SL reduces words with meaning to nothing more than empty variables, operators, and (in first-order logic) quantifiers.
  • SL over-simplifies the nouns of natural language by replacing them with variables while placing unique emphasis on operators, as you would in math.
  • However, people – who are probably not using elaborate SL proofs to reason their way through their myriad daily problems – generally place the emphasis in speech on nouns, verbs, and existential claims.
  • Existential claims in SL are much weaker than statements such as “I am” are in English.
  • As mentioned above, SL was initially developed to describe human reasoning, as if classical logic were inadequate because it relied more heavily on natural language than on mathematical language.
  • SL is used to “translate” natural language (like English) into a formal one (propositional calculus, predicate calculus) for ease of use. That means we use it because it’s convenient. To suggest otherwise is a lie.
  • Losing information via data “translation” of this nature is a notorious problem within artificial intelligence. It is one of the principal reasons why we, as a field, always seem to veer away from projects that require substantial knowledge of human thought processes in favor of projects that merely require a simulacrum of intelligence. We program for rationality and NOT intelligence.
  • This means that our field, though scientifically sexy, is way behind the trajectory for where we thought we’d be by now and way less advanced than the general public assumes.
  • A primary means of “proving” an SL argument is reductio ad absurdum, which basically means you negate the statement, show that the negation leads to a contradiction, and then conclude that the original must be true (see the sketch just after this list). It’s an easy proof, but the mechanics of how something becomes true just because its denial falls apart still kind of mystifies me.
  • I think one of the first things we learn as children when acquiring language is that it is not possible to answer every question in every given situation with a simple “yes” or “no”.
  • Presenting a logically valid argument (i.e., one whose form holds true under all circumstances) will not always result in a conclusion that is factually true.
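
(Since I just grumbled about reductio, here is the sketch I promised: a rough Python toy, with a made-up two-clause knowledge base, of how a resolution refutation actually grinds through a proof by contradiction. You add the negation of the statement you want, keep resolving, and if the empty clause falls out you declare the original proven. Again, this is my own illustration under those assumptions, not anything from the course.)

    # Toy resolution refutation: prove Q from {P, ~P or Q} by adding ~Q
    # and hunting for the empty clause (a contradiction).

    def resolve(a, b):
        out = []
        for lit in a:
            comp = lit[1:] if lit.startswith("~") else "~" + lit
            if comp in b:
                out.append(frozenset((a - {lit}) | (b - {comp})))
        return out

    def entails(clauses, goal):
        """Proof by contradiction: negate the goal, resolve until the empty clause appears."""
        working = set(clauses) | {frozenset({"~" + goal})}
        while True:
            new = set()
            for a in working:
                for b in working:
                    if a != b:
                        for r in resolve(a, b):
                            if not r:          # empty clause: contradiction reached
                                return True
                            new.add(r)
            if new <= working:                 # no new clauses: no proof to be had
                return False
            working |= new

    kb = [frozenset({"P"}), frozenset({"~P", "Q"})]
    print(entails(kb, "Q"))   # True: Q is "proven" once its denial self-destructs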

My hand is starting to cramp, so I will end this screed with a few closing statements.

Why do so many intelligent people have difficulties with symbolic logic? I think the answer is fairly straightforward: SL simply isn’t intuitive for most people. Therefore, if the primary function of SL was originally to serve as a God-designed, teen-genius-delivered model of human intuition, it does a pretty terrible job. I would also argue that the more naturally intuitive and rational you are, and the more competent you are with macroscopic, systems-level problems, the more difficulty you will have trying to force your natural thought processes to conform to this rigid formal framework. Big concepts don’t always break down neatly into bite-sized binaries.

George Boole changed the world. He did it without a degree. In the age of computers and rapid access to information, the George Booles of the world are left out of the dialogue entirely because they lack easily identifiable credentials. That is why most graduate students come to school: they need your paper to validate their academic ability. They will conform if you instruct them to do so, as the penalty is failure and, with it, the loss of crucial external validation. I am no better, though I will say that I have a much broader perspective on the importance of this program and my role within it. I do not consider this impromptu essay a cop-out or a Hail Mary pass. It was a conscious decision to tell you that I have objections and that I am fine with whatever happens from this day forward. This degree is vanity. Because I am aware of it, I am free to think for myself. (See how powerful “I am” can be?)

Carl Sagan’s widow, Ann Druyan, once said (mind you, I’m paraphrasing because I don’t have the text in front of me), “Science can’t give you absolute truth because it’s a permanent revolution, always under revision, and only capable of providing successive approximations of reality.” We can do a lot of very interesting things in AI. I just don’t think that settling for something that is merely rational is nearly enough. We once had such lofty ambitions. We can do better than expert systems and semi-autonomous cars and planes. Those things are neat, but there has to be more to this field than simply aiding big businesses and the military. Resting on what is known seems rather against the very nature of scientific innovation. I have to think that, as an academic discipline and a science, we can do better than this.

As you can probably tell, I took full advantage of the entirety of my blue book plus its back cover as well as the two hours allotted for the final. I gave it a quick read-through, closed the cover, and turned it in. I felt such massive relief! I literally laughed (softly, to myself) the entire way back to my car. I came home and brain-dumped my essay for your enjoyment. Now if you’ll excuse me, I’m going to drink a little wine and watch Red Dwarf! 🙂

PS: I don’t feel guilty for reducing George Boole to such a negative description in my essay, i.e., calling him an “uneducated teenager.” He was an autodidact without much in the way of formal education. I also think that fact speaks to my overall point.

PPS: Please feel free to contact me if you are recruiting students for an upper-level AI or Cog Sci program. I would enjoy an opportunity to go to a more progressive school!