Chowon and I are sick today. We're doing whatever seems best to keep it short so that we can get past it before Thanksgiving, but she is still going to work.
As I was considering the base learning algorithms necessary to produce a working AI, I realized that in order for such an AI to perceive that others are the same type as itself (assuming I have algorithms in place for such detection), it has to perceive itself. Also, I can't give it full information about itself or about others; it should be able to deduce that others are like itself by sensing whatever qualities I make available and making a comparison. At first this seemed overwhelming, because of all the detailed sensory information a person needs just to identify his own parts, and then to compare those parts with another's in order to determine that they are the same kind, all without even being able to see the backside of his own body. However, I'm thinking that if I come up with a simplified set of external qualities for any such artificial entity to sense, and then make all external qualities always available to the self and to any other entities in "proximity", then such a comparison might be within reach.
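To make that concrete for myself, here's a quick Java sketch of the comparison. Everything in it is a placeholder I made up for illustration: the quality names (shape, color, limbCount) and the matching threshold are guesses, not decisions.

```java
// A rough sketch: every entity exposes the same small set of external
// qualities, and "sameness" is just how many of those qualities match.
// The quality names here are placeholders, not final design choices.
public class QualitySketch {

    // The qualities any entity in "proximity" can sense -- including itself.
    record ExternalQualities(String shape, String color, int limbCount) {

        // Count how many qualities two entities share.
        int matches(ExternalQualities other) {
            int score = 0;
            if (shape.equals(other.shape)) score++;
            if (color.equals(other.color)) score++;
            if (limbCount == other.limbCount) score++;
            return score;
        }
    }

    public static void main(String[] args) {
        ExternalQualities self  = new ExternalQualities("biped", "gray", 4);
        ExternalQualities other = new ExternalQualities("biped", "gray", 4);
        ExternalQualities rock  = new ExternalQualities("lump",  "gray", 0);

        // "Same kind" if enough qualities line up; the threshold is arbitrary.
        int threshold = 2;
        System.out.println("other is my kind: " + (self.matches(other) >= threshold));
        System.out.println("rock is my kind:  " + (self.matches(rock)  >= threshold));
    }
}
```

The point is that the entity gets no privileged channel here; it senses its own qualities the same way it senses anyone else's, and just counts matches.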
I do not have the resources (neither the stuff nor the time) to give my AI an infinite capacity to learn, so I will have to allocate a fixed amount of space (like a fixed-size array or linked list) for memories. I think I'm going to start working on an object to contain everything related to an individual memory, and then, until further notice, work with a three-stage model for memory storage (there's a rough code sketch after the list):
1. Immediate memories will persist for a certain number of actions unless they are refreshed by one of those actions. Every time an immediate memory is refreshed before it expires, its timeout will increase exponentially.
2. An immediate memory whose timeout reaches a certain threshold, or which has a high enough emotional score, will become a day memory, available for immediate scanning at any time for the rest of the day. There are two storage locations for day memories: one for today and one for yesterday. When today's day-memory bank becomes full, the entity will require sleep. Yesterday's memories will carry a number telling how many consecutive days that memory has been put into day memory.
3. During sleep, today's day memories will be compared with yesterday's; anything which is repeated often enough, or which has a very high emotional score, will be passed into long-term memory. Long-term memories include information about frequency of access and most recent access, and will be sorted by those numbers. If frequency gets too low, or the last access is too old, the memory will be purged when long-term storage gets full. Long-term memory will be divided into multiple sections (which I haven't really nailed down yet, but I'm currently thinking about the following): muscle memory for actions and expected reactions, emotional memory to affect "moral" calculations, interpersonal memory to affect things like trust, mundane memory for things like the running priority list and the properties of objects for use in problem solving, and maybe some others as necessary.
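To pin all three stages down, here's a first pass at the memory object and the stage transitions in Java. Every concrete number in it (the starting timeout, the doubling rule, the capacities, the thresholds) is a placeholder guess I expect to tune or replace:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// First pass at the three-stage memory model. Every number here
// (capacities, thresholds, the doubling rule) is a placeholder.
public class MemoryModel {

    static class Memory {
        String content;
        double emotionalScore;  // drives fast-track promotion
        int timeout;            // immediate stage: actions left before expiry
        int consecutiveDays;    // day stage: days in a row it reappeared
        int accessCount;        // long-term: frequency of access
        long lastAccess;        // long-term: most recent access (action tick)

        Memory(String content, double emotionalScore) {
            this.content = content;
            this.emotionalScore = emotionalScore;
            this.timeout = 4;   // starting lifespan, in actions
        }
    }

    static final int DAY_CAPACITY = 32;        // full bank forces sleep
    static final int LONG_TERM_CAPACITY = 256;
    static final int PROMOTE_TIMEOUT = 64;     // immediate -> day threshold
    static final double STRONG_EMOTION = 0.8;  // emotional fast track
    static final int REPEAT_DAYS = 3;          // day -> long-term threshold

    List<Memory> immediate = new ArrayList<>();
    List<Memory> today = new ArrayList<>();
    List<Memory> yesterday = new ArrayList<>();
    List<Memory> longTerm = new ArrayList<>();

    // Stage 1: a refresh doubles the timeout (exponential growth), and a
    // big enough timeout or emotional score promotes to day memory.
    void refresh(Memory m) {
        m.timeout *= 2;
        if (m.timeout >= PROMOTE_TIMEOUT || m.emotionalScore >= STRONG_EMOTION) {
            immediate.remove(m);
            today.add(m);       // Stage 2: now a day memory
        }
    }

    // Called once per action: unrefreshed immediate memories tick away,
    // and a full day-memory bank forces sleep.
    void tick() {
        immediate.removeIf(m -> --m.timeout <= 0);
        if (today.size() >= DAY_CAPACITY) sleep();
    }

    // Stage 3: sleep compares today's memories with yesterday's and
    // consolidates the repeated or emotionally strong ones.
    void sleep() {
        for (Memory m : today) {
            Memory prior = yesterday.stream()
                    .filter(y -> y.content.equals(m.content))
                    .findFirst().orElse(null);
            m.consecutiveDays = (prior == null) ? 1 : prior.consecutiveDays + 1;
            if (m.consecutiveDays >= REPEAT_DAYS || m.emotionalScore >= STRONG_EMOTION) {
                longTerm.add(m);
            }
        }
        // When long-term is full, purge the least-used, stalest memories.
        if (longTerm.size() > LONG_TERM_CAPACITY) {
            longTerm.sort(Comparator.comparingInt((Memory m) -> m.accessCount)
                    .thenComparingLong(m -> m.lastAccess));
            longTerm.subList(0, longTerm.size() - LONG_TERM_CAPACITY).clear();
        }
        yesterday = today;
        today = new ArrayList<>();
    }
}
```

The refresh/tick split matches the model above: refreshing is what the entity's own actions do, ticking is what happens to everything it ignores, and sleep is forced by a full day-memory bank rather than by a clock.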
Lastly, I'm currently thinking that I will compose each entity of two parts: the body, which handles senses and status information and contains all memories, and the spirit, which will interact with the body to make decisions. This way I can treat the spirit as if it's receiving sensory information about an external entity (the body), and then have it receive the same information about other external entities, effectively treating both as external to itself, which will hopefully make it easier for me to conceive of a way to simulate something similar to empathy. This distinction will also make it easier for me to make changes to the decision-making process, by keeping that logic separate from the body itself. This doesn't actually fit the Biblical model, which splits us into three parts (mind, body, spirit), but I haven't worked out a way to make that separation useful at the relatively low level of complexity I'll be dealing with. Besides that, if I'm honest, I don't fully understand the nature or capabilities of my spirit, and as a Christian I'm not 100% sure that I'm even allowed to experiment in order to find out. (It is a great irony, I think, that Christians, who are given as a gift the best and most direct access to the spiritual realm, are restricted in the ways by which they are allowed to utilize that access.) In fact, maybe the separation I made earlier should have been "mind and body" rather than "spirit and body", and then the third component, "spirit", can be regarded as the means of direct communication between myself and the AI, because I'll be developing a very distinct and bounded little "world" for the AI to sense and interact with normally.
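Here's the shape of that body/spirit split as I'm currently imagining it, sketched in Java. The Perceivable interface, the status fields, and the decide() method are all stand-ins for whatever the real senses and decision logic end up being:

```java
import java.util.List;

// Sketch of the body/spirit split: the spirit only ever sees bodies
// through one Perceivable interface, so its own body and other entities'
// bodies arrive through exactly the same channel.
public class BodySpiritSketch {

    // Everything a spirit can sense about any body, its own included.
    interface Perceivable {
        String statusReport();   // placeholder for senses and status info
    }

    static class Body implements Perceivable {
        final String name;
        int energy = 100;        // stand-in for status information

        Body(String name) { this.name = name; }

        public String statusReport() {
            return name + " (energy " + energy + ")";
        }
    }

    static class Spirit {
        final Body ownBody;      // the spirit knows which body is "mine"...

        Spirit(Body ownBody) { this.ownBody = ownBody; }

        // ...but perceives it through the same interface as everyone else.
        void decide(List<Perceivable> inProximity) {
            for (Perceivable p : inProximity) {
                String tag = (p == ownBody) ? "self " : "other";
                System.out.println(tag + ": " + p.statusReport());
            }
        }
    }

    public static void main(String[] args) {
        Body mine = new Body("A");
        Body theirs = new Body("B");
        new Spirit(mine).decide(List.of(mine, theirs));
    }
}
```

The point of the sketch is that decide() never gets privileged access to its own body's internals; "self" is just a tag on one of the bodies coming through the same channel, which is the hook I'm hoping empathy can hang on.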
It recently struck me that I don't know anyone who is into this kind of thing. Chowon is willing to listen if I catch her at a good time, but my moods don't always coincide with good timing. And with regard to the specific Biblical AI idea, I know literally nobody who I think would be both interested in helping me with it and knowledgeable about the kind of coding I'm doing. (Besides that, I'm ashamed to admit it, but I've never messed with collaborative coding tools, and it feels like a hassle to learn them.) This realization was immediately discouraging, but after a while I decided it could be a challenge rather than a discouragement. If there is really nobody in the world who cares to do the specific thing I'm interested in doing, then it might very well be my duty to make sure it gets done, for the sake of the human race! Even if it is only regarded as a piece of eccentric art, it may benefit someone somewhere by sparking an even better idea in that person's mind, one which would not be sparked under any other circumstances, because nobody else is going to do the thing that only I am passionate about.
I recently read some articles by Christians on "the Christian view of AI", which argued that it is an atheistic notion to aspire to produce an AI similar to or better than human intelligence. Naturally, we will never be able to reproduce a soul in a machine, but I see nothing wrong with the aspiration to produce an AI that surpasses our own temporal capabilities. I see articles comparing high hopes for AI with the high hopes behind the Tower of Babel; as I understand it, the traditional view of the primary moral issue with the Tower of Babel was that man was trying to exalt himself as God, or as a being on par with God, while simultaneously ignoring God's command to spread out over the earth. I can understand how atheists might flap their gums carelessly when talking about our hopes for the development of an amazing AI, but I don't see the act itself as a moral danger. Rather, the intentions behind the actions of non-Christians are the only potentially questionable moral aspect of this subject. That said, I certainly hope that AI achieves heights beyond our wildest imaginations, and I expect that we will achieve as much, given the necessary time and peace.
Now, all that said, I do see a moral dilemma in the teachings of Christians on subjects the Bible doesn't speak to directly. If the Bible does not prescribe any doctrine or law to forbid or limit the growth of an idea or technology, then neither should we. We know that the Bible lays down principles which are applicable everywhere, but with regard to things like this, I think Christians should be extremely careful not to put words in God's mouth when they teach for or against a thing. We should fear the words of God in Ezekiel 13, so that we never say "thus says the Lord" when God hasn't said anything. Furthermore, shouldn't Christians, who know the truth and have a basis for right reason, be on the cutting edge of technology? If we really are the only ones with a consistent foundation for making sense of the world, then we should be able to build on that foundation to reach heights far above what atheists can even dream about!
I think Christians have a reputation for sticking their heads in the sand, but it wasn't always that way. Oxford, Princeton, Yale, Harvard, and a good chunk of the other famous old universities were started by Christians (Protestants, mind you). And now they are practically factories pumping out closed-minded anti-Christians. What happened to us? (I almost feel like I need to justify myself by taking some shots at the theory of evolution, but this post is long enough, I've done that before, and there is always another angle someone will get mad at me for neglecting. Maybe another day.)
"I love you."
Friday, November 17, 2017