Tuesday, November 14, 2017

I've been thinking about how, if I were to attempt it, I would qualify the human psyche in terms of ones and zeros. (I'm going to brainstorm as I type.) I'm sure that the Bible has something to say about what a human is, and how humans are (or at least, how they should be). The Bible addresses who we should be about as often as it addresses how we should be, and it says of itself that it contains knowledge "sufficient for life and godliness". Because of this, I theorize that the Bible should contain information sufficient for a complete anthropological model. And by "complete", I mean "sufficient that working algorithms for humanoid artificial intelligence can be produced using only concepts derived from scripture". Of course, by "humanoid", I do not mean that it will physically resemble humans.

That said, I know that a lot of work goes into even a very small AI. As a personal project, I'd like to attempt to distill from scripture the qualifying characteristics of human nature to which numerical values can be ascribed, and then attempt to make a very simple AI. Much work has been done on basic problem solving skills to enable interaction, so I think my time would be better spent focusing on the relational aspect of human beings in order to complete the AI. Those mathematical functions are only the tools with which a working AI should express itself.

It's commonly said that the Bible teaches that humans are designed for love and interpersonal relationships, which is the known intrinsic shortcoming of modern AI. I think it would be a foolish endeavor to attempt to qualify love in computational terms, but relationships manifest themselves in physical ways; "Out of the overflow of the heart, the mouth speaks". A conforming AI would not be capable of saying "I love X", but rather, "this is how I love X". Fortunately, the Bible regularly issues statements in those terms for our benefit, (e.g. John 3:16, "For God so loved the world, that He gave His only begotten Son, that whosoever believes in Him shall not perish but have everlasting life". The meaning of the Greek words here allows for an alternative reading, "This is how God loved the world: He gave His only begotten Son...").

That said, several qualities of the Biblical Anthropological AI might be directly derived from statements about God, because humans are made in God's image. There are a lot of examples of this, but I'll look at one: the covenant.

God engages in relationships with individuals, and qualifies those individualized relationships in terms of covenants, which describe specific behaviors on the part of the individual, and responses to those behaviors on the part of God. These covenants offer blessings or curses which relate to God's affection for the individual and the specific circumstances meriting the instantiation of the covenant.
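As a rough sketch of that idea in code (all the names and example behaviors here are my own placeholders, not anything derived from scripture), a covenant might be modeled as a mapping from specific behaviors to promised responses, whether blessings or curses:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Covenant:
    """A toy covenant: an individualized relationship qualified by
    specific behaviors and the responses promised for each."""
    party: str
    responses: Dict[str, str] = field(default_factory=dict)  # behavior -> response

    def respond_to(self, behavior: str) -> str:
        # Return the covenanted response, or a default for unspecified behavior.
        return self.responses.get(behavior, "no covenanted response")

# Usage: a toy covenant pairing behaviors with blessings and curses.
c = Covenant("Israel", {
    "obedience": "blessing: rain in season",
    "idolatry": "curse: exile",
})
print(c.respond_to("obedience"))
```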

Next, the covenants are designed to facilitate plans which God intended to carry out in advance of articulating the covenant. For example, God penalized Sodom and Gomorrah for violations of His law prior to the articulation of the Law through Moses. In fact, God gave them the death penalty, which the Mosaic law later prescribed for the very crimes of Sodom and Gomorrah. So, the articulation of the law was not the instantiation of the law. Rather, it was God's blessing on the people through Moses, a protection from penalty by making them aware of sin (that is, an expression of intent by God). The expression also served to establish better grounds for God to accomplish his purpose, by removing excuses from the minds of all those who heard the law.

Now, we should note that I'm not suggesting that an AI might qualify expectations on people with punishments, but I am suggesting that an AI could qualify relationships with reactions in general. The Bible is also very clear that God has a different level of authority than man, and even that some men are given more (or different) authority than other men. So, it would not necessarily be appropriate for an AI to impose a law on a person or on another AI. However, it is fully appropriate for an AI to identify, based on the immediate circumstances, that its purpose may be progressed by establishing certain behavioral contingencies with external entities. (e.g. "I am in dire need of a certain substance, and I detect another AI which has it. I will ask for it. If they refuse, I will attempt to take it by force.")
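That last contingency could be sketched directly (the `Agent` class and its `ask_for` method are illustrative inventions of mine, just enough to make the contingency runnable):

```python
class Agent:
    """Toy agent holding resources; purely illustrative."""
    def __init__(self, inventory, willing=True):
        self.inventory = set(inventory)
        self.willing = willing

    def ask_for(self, item):
        # Grants the request only if it holds the item and is willing.
        return item in self.inventory and self.willing

def acquire(item, other):
    # The contingency from the text: ask first; if refused, escalate.
    if other.ask_for(item):
        return "received by request"
    return "attempt to take by force"

print(acquire("water", Agent({"water"})))
print(acquire("water", Agent({"water"}, willing=False)))
```

The point is that the contingency is not a law imposed on the other agent; it is a plan of reactions conditioned on the other agent's behavior.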

We could keep going for quite a while on the covenant train -- I'm gonna stop here.

All of the above discussion about covenants was just talking about a specific behavior point, and it is useless without an underlying principle which drives this behavior as well as others. That is the distilling question, "why?". We note that the reason for the covenant was to facilitate a purpose which God had already determined. Thus, in short, I think we should determine an appropriate hierarchy of goals for an AI to have. We know that God's goal is his own pleasure, because He's ultimate. Our AI won't be ultimate, and neither are we, so here's a place where our anthropology differs from our deiology, but we may still find the answer in our doxology. The goal of the human is to glorify God. And, so, God literally gives us whatever goal he wants and then that is the driving force for all of our behaviors. God has given us the goal of learning about Him, which can be accomplished through interaction with others, interaction with the physical world around us which He created, reading His word, and also speaking with Him directly.

I'm tempted to say that my AI should just have the goal of acquiring and categorizing information (a fine goal for early language processors), but that only takes me so far. What happens when the AI receives conflicting information? Now it has to decide which source to trust, which means that every piece of information must include information about its source. For that matter, what if data B from a source with 80% trust depends logically on data A from a source with 20% trust, and data C from a source with 70% trust negates data A? So now we have sources with trust levels, and a dependency tree for information. We need criteria for establishing trust or distrust, and a baseline to verify information against. This all becomes rather complex, and ends up dictating specific behaviors which I think should be naturally derived from more basic principles. Also, a good chunk of the discussion can be lumped in with the "problem solving algorithms" which I mentioned earlier.
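To see why this gets complex fast, here's one possible (and very much assumed) rule: a datum's effective trust is capped by the trust of everything it depends on. Under that rule, the A/B/C scenario above plays out like this:

```python
def effective_trust(datum, trust, deps):
    """A datum's effective trust: its own source's trust, capped by the
    effective trust of every datum it logically depends on. This capping
    rule is my assumption, not something argued for in the post."""
    t = trust[datum]
    for d in deps.get(datum, []):
        t = min(t, effective_trust(d, trust, deps))
    return t

trust = {"A": 0.2, "B": 0.8, "C": 0.7}
deps = {"B": ["A"]}  # B depends logically on A

# B's 80% source trust collapses to A's 20%.
print(effective_trust("B", trust, deps))
# C (70%) negates A (20%): the higher-trust claim wins, dragging B down with A.
print(trust["C"] > effective_trust("A", trust, deps))
```

Even this toy rule already forces design decisions (should trust be the minimum over dependencies, or a product, or something learned?), which is exactly the kind of behavior I'd rather see fall out of more basic principles.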

So what I need to do is search the Bible for information on mankind's inherent goals. I'll post my instinctive ideas here, and then after studying a bit, I'll post my findings.

So my current "instinctive" model is that the primary motivator for any behavior is "fulfillment", which is analogous to "pleasure" but will serve a different purpose when this works itself out (where pleasure may go up or down based on immediate activities, fulfillment might be expressed as a number which only increases over time, and the happy AI is one which maintains a constant rate of change on that fulfillment number). During each scan, the AI evaluates the status of the following items and its ability to achieve each one given the current immediate circumstances. The AI then executes the highest item on the list (lowest numerically) which is not finished and may be progressed immediately. The AI acquires greater fulfillment for achieving things lower on the list (numerically higher).

1. Be working towards a goal other than #1.
2. Be not in imminent mortal peril
3. Establish reliable method for achieving #2 in the future
4. Identify standards for measuring quality of self
 a. Identify God and examine His qualities. These are the "perfect" model.
 b. Validate beliefs about God and His qualities
5. Determine method for achieving higher quality of self
 a. Ask/observe others, their results
 b. Experiment and evaluate results
6. Achieve higher quality of self
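The scan loop over that list can be sketched in a few lines. Everything here is a placeholder reading of my own model: goal names are abbreviated, and "more fulfillment for items lower on the list" is implemented as a simple rank-weighted increment, since I haven't specified the actual function yet.

```python
# Abbreviated stand-ins for goals 1-6 above, in priority order.
GOALS = [
    "work toward another goal",         # 1
    "avoid imminent mortal peril",      # 2
    "secure future safety",             # 3
    "identify standards of quality",    # 4
    "determine method of improvement",  # 5
    "achieve higher quality of self",   # 6
]

def scan(finished, progressable, fulfillment):
    """One scan: execute the highest item on the list (lowest numerically)
    that is not finished and can be progressed right now. Fulfillment only
    ever increases, and numerically higher goals yield more of it."""
    for rank, goal in enumerate(GOALS, start=1):
        if goal not in finished and goal in progressable:
            fulfillment += rank  # placeholder: weight by list position
            return goal, fulfillment
    return None, fulfillment  # nothing progressable this scan

# Usage: goal 2 is done, goals 3 and 6 are progressable, so goal 3 runs.
goal, f = scan(
    finished={"avoid imminent mortal peril"},
    progressable={"secure future safety", "achieve higher quality of self"},
    fulfillment=0.0,
)
print(goal, f)
```

A "happy" AI under this model would be one whose `fulfillment` keeps climbing at a steady rate across scans, rather than one maximizing it in any single scan.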
