Shravya Reddy Konda
Department of Computer Science
University of Maryland, College Park
Email: shravyak@cs.umd.edu
Abstract
This paper evaluates the performance of symbolic and neural learning algorithms on several kinds of datasets. Experimental results indicate that, in the absence of noise, the two families of methods performed comparably in most cases. For datasets containing only symbolic attributes, neural learning methods outperformed symbolic methods when noise was present. For datasets with mixed attributes (some numeric, some nominal), however, recent versions of the symbolic learning algorithms performed better once noise was introduced.
1. Introduction

The problem most often addressed by both neural network and symbolic learning systems is the inductive acquisition of concepts from examples [1]. This problem can be stated briefly as follows: given descriptions of a set of examples, each labeled as belonging to a particular class, determine a procedure for correctly assigning new examples to these classes. In the neural network literature, this problem is frequently referred to as supervised or associative learning. For supervised learning, both symbolic and neural methods require the same input data: a set of classified examples represented as feature vectors. The performance of both types of learning systems is evaluated by testing how accurately they classify new examples. Symbolic learning algorithms have been tested on problems ranging from soybean disease diagnosis [2] to classifying chess end games [3]. Neural learning algorithms have been tested on problems ranging from converting text to speech [4] to evaluating moves in backgammon [5]. The goal of this paper is to experimentally compare symbolic and neural learning algorithms on a variety of datasets, with and without noise.
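The supervised-learning setting described above can be illustrated with a minimal sketch: labeled feature vectors go in, and a procedure for classifying new examples comes out. The 1-nearest-neighbour rule used here is purely illustrative (the learners actually compared in this paper are symbolic and neural algorithms, e.g. decision-tree induction and backpropagation); the function and variable names are this sketch's own, not from the paper.

```python
import math

def train(examples):
    """'Training' for 1-NN simply stores the labeled feature vectors."""
    return list(examples)

def classify(model, x):
    """Assign x the class of its nearest stored example (Euclidean distance)."""
    nearest = min(model, key=lambda ex: math.dist(ex[0], x))
    return nearest[1]

# Toy dataset of classified examples: (feature vector, class label)
data = [([0.0, 0.0], "neg"), ([0.1, 0.2], "neg"),
        ([1.0, 1.0], "pos"), ([0.9, 0.8], "pos")]

model = train(data)
print(classify(model, [0.95, 0.9]))  # a new example near the "pos" cluster
```

Any learner fitting this interface, symbolic or neural, is evaluated the same way: hold out some labeled examples and measure how accurately `classify` predicts their labels.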
References

1. Mooney, R., Shavlik, J., & Towell, G. (1991): Symbolic and Neural Learning Algorithms: An Experimental Comparison, in Machine Learning, 6, pp. 111-143.
2. Michalski, R.S., & Chilausky, R.L. (1980): Learning by Being Told and Learning from Examples: An Experimental Comparison of Two Methods of Knowledge Acquisition in the Context of Developing an Expert System for Soybean Disease Diagnosis, in Policy Analysis and Information Systems, 4, pp. 125-160.
3. Quinlan, J.R. (1983): Learning Efficient Classification Procedures and Their Application to Chess End Games, in R.S. Michalski, J.G. Carbonell, & T.M. Mitchell (Eds.), Machine Learning: An Artificial Intelligence Approach (Vol. 1). Palo Alto, CA: Tioga.
4. Sejnowski, T.J., & Rosenberg, C. (1987): Parallel Networks that Learn to Pronounce English Text, in Complex Systems, 1, pp. 145-168.
5. Tesauro, G., & Sejnowski, T.J. (1989): A Parallel Network that Learns to Play Backgammon, in Artificial Intelligence, 39, pp. 357-390.
6. Quinlan, J.R. (1986): Induction of Decision Trees, in Machine Learning, 1(1), pp. 81-106.
7. Quinlan, J.R. (1993): C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.
8. Rumelhart, D., Hinton, G., & Williams, R.J. (1986): Learning Internal Representations by Error Propagation, in Parallel Distributed Processing, Vol. 1 (D. Rumelhart & J. McClelland, Eds.). Cambridge, MA: MIT Press.
9. Fisher, D.H., & McKusick, K.B. (1989): An Empirical Comparison of ID3 and Backpropagation, in Proc. of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, MI, August 20-25, pp. 788-793.
10. Mooney, R., Shavlik, J., Towell, G., & Gove, A. (1989): An Experimental Comparison of Symbolic and Connectionist Learning Algorithms, in Proc. of the Eleventh International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, MI, August 20-25, pp. 775-780.
11. McClelland, J., & Rumelhart, D. (1988): Explorations in Parallel Distributed Processing. Cambridge, MA: MIT Press.