I have discovered how to easily create a universal knowledge machine similar to Deep Thought from The Hitchhiker's Guide to the Galaxy. To use the machine, you just type in any question and it gives you an answer. You can teach this kind of machine any subject matter, and it will fluently answer new questions on that subject. I've decided to give out the blueprints for free to accelerate research into longevity, human enhancement, and immortality.
The process for creating a universal knowledge machine is pretty simple. You train a recurrent neural network to answer general knowledge questions, and it automatically forms its own model of how the world works so that it can answer novel questions. You can download a pre-made (but not yet trained) question-answering system here: https://github.com/daniel-kukiela/nmt-chatbot. To use it, you have to train it by feeding it general questions and correct answers on various subjects, like the following (a sketch of how to turn such pairs into training files comes after the examples):
Q: What is mathematics?
A: Mathematics is the study of numbers.
Q: How do I inflate a basketball?
A: To inflate a basketball, you must acquire an air pump, insert the air pump into the basketball, and inject air by depressing the air pump.
Q: How do I make a sandwich?
A: First, get two slices of bread. Next, put sandwich meat on one slice of bread. After that, you can put cheese, lettuce, and tomato on top of the sandwich meat on the bread. Next, put mayonnaise and/or mustard on the second slice of bread. Then, put the second slice of bread on top of the other slice.
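To give a concrete picture of what the training data ends up looking like, here is a rough Python sketch that writes question/answer pairs into two parallel text files, one line per question with the matching answer on the same line of the other file, which is the usual format for seq2seq chatbots. The file names train.from and train.to are my assumption, not necessarily what the linked repository uses; check the nmt-chatbot README for the exact paths and preprocessing steps it expects.

# A minimal sketch of preparing question/answer pairs for a seq2seq chatbot.
# NOTE: the file names train.from / train.to are an assumption; consult the
# nmt-chatbot README for the real data layout.

qa_pairs = [
    ("What is mathematics?",
     "Mathematics is the study of numbers."),
    ("How do I inflate a basketball?",
     "To inflate a basketball, you must acquire an air pump, insert the air pump "
     "into the basketball, and inject air by depressing the air pump."),
]

# Seq2seq training data is normally stored as two parallel files: questions in a
# "source" file and the matching answers on the same line numbers of a "target" file.
with open("train.from", "w", encoding="utf-8") as src, open("train.to", "w", encoding="utf-8") as tgt:
    for question, answer in qa_pairs:
        src.write(question.strip() + "\n")
        tgt.write(answer.strip() + "\n")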
Training on these example pairs is what teaches the chatbot about the outside world.
After training the chatbot, you can ask it new questions it has never seen before, as long as they are related to the information it learned during training.
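To give a feel for what "asking a new question" does under the hood, here is a short Python sketch of greedy decoding with a trained encoder-decoder model. The functions encode_question and predict_next_word are hypothetical stand-ins for whatever interfaces the trained model actually exposes; they are not functions from the linked repository.

# A minimal sketch of answering a new question with a trained encoder-decoder RNN.
# encode_question and predict_next_word are hypothetical placeholders, not part of nmt-chatbot.

def answer(question, encode_question, predict_next_word, max_len=50):
    # Summarize the question into the recurrent network's hidden state.
    state = encode_question(question)
    words = ["<s>"]  # start-of-sentence token
    for _ in range(max_len):
        # Predict the most likely next word given the previous word and the state.
        next_word, state = predict_next_word(words[-1], state)
        if next_word == "</s>":  # stop at the end-of-sentence token
            break
        words.append(next_word)
    return " ".join(words[1:])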
Theoretically, you could make the question-answering system arbitrarily intelligent given enough computing resources and example data, so you could in principle create something like Deep Thought using a highly scaled-up version of this technique. Specifically, the system can learn more if you increase the number of layers and the width of the recurrent neural network, since this increases the number of weights, and therefore the amount of information/knowledge the question-answering system can store in them. The downside is that larger recurrent neural networks take longer to train.
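As a back-of-the-envelope illustration (my own arithmetic, not something from the repository), here is how the number of weights in a stacked LSTM grows as you make it deeper and wider:

# Rough weight-count formula for a stacked LSTM: each layer has 4 gates, and each
# gate has input weights, recurrent weights, and a bias vector.

def lstm_layer_params(input_size, hidden_size):
    return 4 * (input_size * hidden_size + hidden_size * hidden_size + hidden_size)

def stacked_lstm_params(input_size, hidden_size, num_layers):
    total = lstm_layer_params(input_size, hidden_size)  # first layer reads the input
    total += (num_layers - 1) * lstm_layer_params(hidden_size, hidden_size)  # deeper layers read the layer below
    return total

print(stacked_lstm_params(512, 512, 2))    # about 4.2 million weights
print(stacked_lstm_params(512, 1024, 4))   # about 31.5 million weights: deeper and wider stores more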
I kindly ask for Bitcoin donations in exchange for sharing this discovery: 16jJxukmDQL3jB6dSaGExN9YfjMNBcujRS