Cover Page

Classification of Jewish Law Articles According to the Ethnic Group of their Writers Using Stems

Yaakov HaCohen-Kerner 1, Zvi Boger 2, Hananya Beck 1, Elchai Yehudai 1

1 Department of Computer Science, Jerusalem College of Technology (Machon Lev), 21 Havaad Haleumi St., P.O.B. 16031, 91160 Jerusalem, Israel
2 OPTIMAL Industrial Neural Systems Ltd., 54 Rambam St., Be'er Sheva 84243, Israel

kerner@jct.ac.il, zvi@peeron.com, {hananya, yehuday}@jct.ac.il

Email (college): kerner@jct.ac.il
Phone (college): (972)-2-6751287
Phone (secretary): (972)-2-6751294
Fax (secretary): (972)-2-6751046

Topic area: text classification, artificial neural networks

Classification of Jewish Law Articles According to the Ethnic Group of their Writers Using Stems

Yaakov HaCohen-Kerner 1, Zvi Boger 2, Hananya Beck 1, Elchai Yehudai 1

1 Department of Computer Science, Jerusalem College of Technology (Machon Lev), 21 Havaad Haleumi St., P.O.B. 16031, 91160 Jerusalem, Israel
2 OPTIMAL Industrial Neural Systems Ltd., 54 Rambam St., Be'er Sheva 84243, Israel

kerner@jct.ac.il, zvi@peeron.com, {hananya, yehuday}@jct.ac.il

Abstract

In this study, we deal with texts in languages (Hebrew and Aramaic) that have been little studied; moreover, Semitic language processing in general is of great interest today. In particular, we investigate how to classify Jewish law articles written in Hebrew-Aramaic according to the ethnic group of their authors. The classification is done using only stems of words, excluding very frequent and very rare stems. The motivation is to investigate the cultural differences in writing between Ashkenazi authors and Sephardi authors. After extracting the stems of the words in each article, the most frequent (>95%) and the least frequent (<5%) stems were removed. Using the remaining 480 stems as inputs to an artificial neural network (ANN) model, the classification result, 85% correct on the validation examples, is reasonable, considering that the stemming software is not perfectly accurate. Discarding the 340 less relevant stems and retraining with 140 stems gave practically the same error rate. It seems that classification based on stems only may be suitable for classifying other texts as well. It will be interesting to check whether stylistic classification can also be used for other tasks of ethnic classification, e.g., among various groups of Muslims who write in Arabic.

1 Introduction

Text classification (TC) is the supervised learning task of assigning natural language text documents to one or more predefined classes (also called categories) according to their content. "Supervised" in this definition means that all the documents in the training set are pre-assigned a class before the training process starts. The beginning of research in TC can be identified with Maron's work on probabilistic text classification [24]. TC is applied in many tasks, such as clustering, document indexing, document filtering, information retrieval (IR), information extraction (IE), word sense disambiguation (WSD), text filtering, and text mining [18, 29]. Current-day TC presents challenges due to the large number of features present in the text set, their dependencies, and the large number of training documents.

One of the machine learning (ML) methods employed for text classification is the artificial neural network (ANN) technique [26]. This method was found superior to some other ML techniques in [11]. ANN modeling was recently employed to predict the importance of a literature abstract to researchers downloading references, using the stemmed English words in the abstract as inputs to the ANN [21]. It was therefore interesting to see whether this method can also be applied to Hebrew-Aramaic texts.

In our research, we design and apply a model that classifies responsa (letters written in response to legal questions) according to the ethnic group of their writers. Our corpus is a collection of responsa written in Hebrew-Aramaic by a number of rabbinic scholars, who are authorities in Jewish law. Our plan is to check whether we can succeed in such a task using only stems of words, excluding very frequent and very rare stems. We have built an artificial neural network (ANN) for implementing this task.
The motivation is to investigate the cultural differences in writing between Ashkenazi authors and Sephardi authors.

The structure of this paper is as follows. First we describe the basics of Hebrew-Aramaic word structure relevant to the task of TC; then a brief introduction to ANN modeling is presented, with a more detailed view of the particular large-scale ANN algorithms that we used for the TC task; the data set we used is then presented, along with the pre-processing technique we employed. Results of the ANN classification are shown, and future avenues of research conclude the paper.

2 The Hebrew and the Aramaic Languages

2.1 The Hebrew Language

Hebrew is a Semitic language, written from right to left. Hebrew texts present special problems: (1) function words tend to be conflated into word affixes, decreasing the number of function words but increasing the number of morphological features that can be exploited, and (2) Hebrew morphology is rich (more details are given below). Hebrew words in general, and Hebrew verbs in particular, are based on three (sometimes four) basic letters, which create the word's stem.

The stem of a Hebrew verb is called p'l 1,2 (פעל, "verb"). The first letter of the stem, p (פ), is called pe hapoal; the second letter, ' (ע), is called ayin hapoal; and the third letter, l (ל), is called lamed hapoal. The names of the letters are especially important for the verbs' declensions according to the suitable verb types. Besides the word's stem, there are other components that create the word's declensions, e.g., conjugations, verb types, subject, prepositions, belonging, object, and terminal letters. In Hebrew, it is impossible to find the declensions of a certain stem without an exact morphological analysis based on these features.

The English language is richer in its vocabulary than Hebrew: English has about 40,000 stems while Hebrew has only about 3,500, and the number of lexical entries in the English dictionary is 150,000 compared with only 35,000 in the Hebrew dictionary [9]. However, the Hebrew language is richer in its morphological forms [9]: Hebrew has 70,000,000 valid (inflected) forms while English has only 1,000,000. For example, the single Hebrew word vkhsykhvhv (וכשיכוהו) is translated into the following sequence of six English words: "and when they will hit him". In comparison to the Hebrew verb, which undergoes a few changes, the English verb stays the same. In Hebrew, there are up to seven thousand declensions for one stem, while in English there are only a few. For example, the English word "eat" has only four declensions (eats, eating, eaten and ate), whereas the relevant Hebrew stem khl (אכל, "eat") has thousands of declensions. Ten of them are presented below: (1) khlty (אכלתי, "I ate"), (2) khlt (אכלת, "you ate"), (3) khlnv (אכלנו, "we ate"), (4) khvl (אוכל, "he eats"), (5) khvlym (אוכלים, "they eat"), (6) tkhl (תאכל, "she will eat"), (7) l'khvl (לאכל, "to eat"), (8) khltyv (אכלתיו, "I ate it"), (9) v'khlty (ואכלתי, "and I ate") and (10) ks'khlt (כשאכלת, "when you ate").

1 The Hebrew transliteration table used in this paper is taken from the web site of the Princeton university library: http://www.loc.gov/catdir/cpso/romanization/hebrew.pdf.
2 In this section, each Hebrew word is presented in three forms: (1) a transliteration of the Hebrew letters, written in italics; (2) the Hebrew letters; and (3) its translation into English, in quotes.
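This morphological richness is why simple affix stripping cannot replace a real morphological analysis. As a toy illustration (the tiny prefix/suffix lists below are simplified assumptions for this example, not the lexicon of any actual stemmer), the following Python sketch enumerates three-letter stem candidates by stripping known affixes:

```python
# Toy illustration: naive affix stripping on Hebrew surface forms.
# The affix lists are a small, simplified subset chosen for this example only.
PREFIXES = ["", "ו", "ה", "ל", "כש", "וכש", "ש"]   # and-, the-, to-, when-, and-when-, that-
SUFFIXES = ["", "תי", "נו", "ים", "הו", "ת"]        # person/number/object endings

def candidate_stems(word):
    """All three-letter cores obtained by stripping one prefix and one suffix."""
    candidates = set()
    for p in PREFIXES:
        for s in SUFFIXES:
            if word.startswith(p) and word.endswith(s):
                core = word[len(p):len(word) - len(s)]
                if len(core) == 3:
                    candidates.add(core)
    return candidates

print(candidate_stems("ואכלתי"))    # {'אכל'} -- stripping happens to succeed here
print(candidate_stems("וכשיכוהו"))  # {'יכו'} -- wrong: the true stem נכה loses a
                                    # letter in this inflected form, so stripping
                                    # alone cannot recover it
```

The second call shows the failure mode described above: when a stem letter changes or drops in an inflected form, only an exact morphological analysis can recover the stem.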
2.2 The Aramaic Language

Aramaic is another Semitic language. The term "Aramaic" is derived from Aram, the fifth son of Shem, the firstborn of Noah [Gen. 10:22]. It is particularly closely related to Hebrew, and was written in a variety of alphabetic scripts (what is usually called "Hebrew" script is actually an Aramaic script). Aramaic was the language of Semitic peoples throughout the ancient Near East. It has been spoken for at least three thousand years, and is still spoken today in its many dialects, especially among the Chaldeans and Assyrians [33]. In the Bible, there are large sections of Aramaic text in the books of Daniel and Ezra, and odd words in other books. Aramaic has influenced Hebrew (as French has influenced English) in words, phrases and grammar.

Although Aramaic and Hebrew have much in common, there are several major differences between them. The main difference in grammar is that while Hebrew uses aspects and word order to create tenses, Aramaic uses tense forms. Another important difference is that in many words there are several types of changes in one particular letter. For instance: (1) in some cases a Hebrew prefix is replaced in Aramaic by a suffix (e.g., the Hebrew prefix ה is changed into the Aramaic suffix א); (2) the Hebrew plural noun suffixes ים and ות are changed into ין and נא in Aramaic; and (3) the word "which/that", which is integrated as the prefix ש in Hebrew, is changed into ד in Aramaic.

3 Previous Stylistic Classification of Hebrew-Aramaic Texts

CHAT, a system for the stylistic classification of Hebrew-Aramaic texts, is presented in [22, 23, 27]. CHAT presents applications of several TC tasks to Hebrew-Aramaic texts:

1. Which of a set of known authors is the most likely author of a given document of unknown provenance?
2. Were two given corpora written/edited by the same author or not?
3. Which of a set of documents preceded which, and did some influence others?
4. From which version (manuscript) of a document is a given fragment taken?

CHAT uses as features only single words, prefixes and suffixes. The system uses simple ML methods such as Winnow and Perceptron, and its datasets contain a few hundred documents. CHAT does not investigate the classification of responsa according to the ethnic group of their authors. Classification of Biblical documents has been done by Radai [30-32]; however, he did not implement any ML method.

4 A Brief Introduction to Artificial Neural Network Modeling

ANN modeling is done by learning from examples. An ANN is a network of simple (sigmoid, for example) mathematical neurons connected by adjustable

weighted links. The most commonly used ANN architecture is the feedforward two-layer ANN, in which neurons are placed in one hidden layer between the data inputs and the neurons of the output layer, and information flows only from the inputs to the hidden neurons and from them to the output neurons. Training examples are presented as inputs to the ANN, which uses a teacher to train the model. An error is defined as the difference between the model outputs and the known teacher outputs. Error back-propagation algorithms adjust the initial random-valued connection weights to decrease this error, by repeated presentations of the input vectors [3, 36, 38, 39]. Once the ANN is trained and verified by presenting inputs not used in the training, it is used to predict the outputs of new inputs presented to it.

There are several obstacles in applying an ANN to systems containing a large number of inputs and outputs. Most ANN training algorithms need thousands of repeated presentations ("epochs") of the inputs to achieve small modeling errors, and large ANNs tend to get stuck in local minima during training. An efficient set of training algorithms, developed by Guterman and Boger [5, 17], can easily train large-scale ANN models, as it pre-computes non-random initial connection weights from a manipulation of the training data, avoiding or escaping local minima during the training. The ANN architecture used by the Guterman-Boger algorithm is the most common one: fully connected, forward only, with one hidden layer and sigmoid activation functions. The Guterman-Boger algorithm has been used successfully to train ANN models with hundreds to thousands of inputs and outputs [4, 7, 8, 16].

In real-life models, not all inputs influence the model outputs to the same degree. One knowledge extraction technique is to rank the inputs according to their relevance to the ANN prediction accuracy. This is done by calculating the relative contribution of each input to the variance of the hidden neurons' inputs when the training set is presented to the trained ANN model. A low relative contribution means that either the variance of the input is small, or that the ANN training has assigned low connection weights from that input to all hidden neurons [5]. The detailed derivation of the input relevance calculation is given in [8]. The least relevant inputs may be discarded and the ANN re-trained with the reduced input set, which usually gives better prediction accuracy. The explanations for this possible improvement are: (a) elimination of noise or conflicting data in the non-relevant inputs; and (b) reduction of the number of connection weights in the ANN, which improves the ratio of the number of examples to the number of connection weights, thus reducing the chance of over-fitting a small number of examples to a model with many parameters ("overtraining").
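To make the input-relevance ranking concrete, the following numpy sketch scores each input by its contribution to the variance of the hidden neurons' net inputs. This is one plausible reading of the description above and in [5, 8], not the authors' exact derivation; cross-input covariances are ignored as a simplifying assumption.

```python
import numpy as np

def input_relevance(W_hidden, X):
    """Relative relevance of each input of a trained two-layer ANN.

    W_hidden : (n_hidden, n_inputs) input-to-hidden weight matrix
    X        : (n_examples, n_inputs) training set

    An input scores low when its own variance is small or when training
    assigned it low weights to all hidden neurons, matching the qualitative
    description in the text (covariance terms are ignored in this sketch).
    """
    contrib = (W_hidden ** 2) * X.var(axis=0)   # per (hidden, input) contribution
    relevance = contrib.sum(axis=0)             # sum over hidden neurons
    return relevance / relevance.sum()          # normalize to relative shares
```

The inputs with the smallest scores are the candidates for discarding before retraining with the reduced input set.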
5 The Application of ANN to Text Classification

The idea of matching the capabilities of ANN modeling to information retrieval is not new, and many papers deal with it. Most of them use the unsupervised self-organizing map (SOM) technique for grouping similar examples into clusters [19]: text clusters are formed based on the similarity of keywords in the texts, and once trained, the ANN classifies new documents as belonging to one of these clusters [26, 34, 35, 40]. Recent reviews discuss ANNs along with other soft tools for Web mining applications [28] and text classification [37]. Several text classification algorithms were compared in [11], and ANN modeling was found to be superior. ANN modeling has also been used to predict the importance of an e-mail message, or the relevance of a downloaded paper abstract to a researcher [9, 21].

The ability of the ANN to model non-linear, non-obvious relationships can be applied to matching the textual features (the inputs to the ANN) to the user relevance rating (the ANN outputs). When statistical methods are applied to such modeling, subjective selections of the number of terms and of the form of the model equations must be made; no such assumptions are needed in ANN modeling.

In order to use a classification mechanism such as an ANN for document filtering, an appropriate document representation is required. In our case we used a binary vector representation of terms to represent the documents.

6 The Proposed Model

The proposed model is composed of the following eight steps (a sketch of the core of this pipeline is given below):

(1) Build a data set composed of various Jewish law articles.
(2) For each article, transform each word (excluding stop-list words) into its estimated stem using a stem-learning program.
(3) Represent each document as a vector of its stems.
(4) Discard stems whose frequency is in the bottom 5% or the top 5% (i.e., keep stems with frequency between 5% and 95%).
(5) Apply the ANN to these stems.
(6) Analyze the trained ANN model to identify the more relevant stems.
(7) Reduce the stem set.
(8) Re-apply the ANN to the reduced set of stems.

At step (2) we applied a program that proposes an estimated stem for any given word (without its context) written either in Hebrew or in Aramaic [14, 15]. This program, based on Winnow (a simple ML method), identifies the correct stem in about 80% of the words. It produces only stems made up of three letters; that is, it does not find the correct stems for words whose stems contain more than three letters.
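Steps (2)-(4) are straightforward to express in code. Below is a minimal Python sketch under the following assumptions: the stem() callable stands in for the Winnow-based stemmer of [14, 15] (treated as a black box here), and the 5%/95% cut-offs are read as document-frequency thresholds, which is our interpretation of the paper's wording.

```python
def build_vectors(docs, stem):
    """Sketch of steps (2)-(4): stem each tokenized article (stop-list words
    already removed), filter stems by document frequency, and build the
    binary stem-presence vectors used as ANN inputs."""
    stemmed = [{stem(w) for w in doc} for doc in docs]   # step (2): stems per doc

    # Document frequency: in how many articles each stem occurs.
    df = {}
    for stems in stemmed:
        for s in stems:
            df[s] = df.get(s, 0) + 1

    # Step (4): keep only stems occurring in 5%-95% of the documents.
    n = len(docs)
    vocab = sorted(s for s, c in df.items() if 0.05 * n <= c <= 0.95 * n)

    # Step (3): one binary vector per document; 1 marks a present stem.
    return vocab, [[1 if s in stems else 0 for s in vocab] for stems in stemmed]
```

On the paper's corpus this filtering reduced the 887 distinct stems to the 480 used as ANN inputs (see Section 7).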

7 Experimental Results

The dataset employed contained 1,000 responsa collected from 20 different rabbinic books: 500 written by Sephardi rabbis and 500 written by Ashkenazi rabbis. Although this data set is relatively small, it is important to point out that these responsa are hard to obtain, because they are usually not available online. The responsa were downloaded from the Global Jewish Database (the Responsa Project 3) at Bar-Ilan University.

The total number of words in all the files was 2,278,683. After removing stop-list words, abbreviations and one-letter words, the total number of words was 1,043,550. These words were transformed into 887 different three-letter stems using the stemming program mentioned above. For the ANN modeling, stems with frequency below 5% or above 95% were discarded, and the rest were used to form a binary vector, where 1 signifies the presence of a stem in the text. The number of different legal stems with frequency between 5% and 95% was 480; the number of removed stems was 407.

An ANN model was then trained with the term-presence vector as input, five hidden neurons, and a single binary output: the target for a document is 1 if the author is Sephardi and 0 if the author is Ashkenazi. The data was partitioned by random selection into a training set of 701 examples and a validation set of 299 examples not used in the training. The ANN was trained with the Guterman-Boger set of algorithms described in the earlier sections, and the trained model was then analyzed to identify the more relevant inputs, which were used to train another, smaller ANN.

The ANN model, using a 480-input, 5-hidden-neuron, 1-output architecture, gave zero errors on the training set and 15.4% errors on the validation set. Analysis of the trained ANN model identified 140 stems as the more relevant ones. Retraining an ANN with these inputs gave a slightly better error rate, 15.0%. While the difference is not statistically significant, a linguistic analysis of the more relevant reduced set may yield interesting results.

3 http://www.biu.ac.il/icji/responsa/index.html
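To make the shape of this experiment concrete, here is a hedged end-to-end sketch. scikit-learn's MLPClassifier serves as a generic back-propagation stand-in for the Guterman-Boger training algorithm (which is not publicly packaged), the stem vectors are random placeholders rather than the actual corpus, and the pruning step reuses the weight/variance heuristic sketched in Section 4 as a rough proxy for the paper's relevance analysis.

```python
# Reproduction sketch of the experiment's shape (NOT the original code).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(1000, 480)).astype(float)  # placeholder stem vectors
y = np.repeat([1, 0], 500)           # 1 = Sephardi, 0 = Ashkenazi (paper's coding)

# 701 training / 299 validation examples, as in the paper.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=299, random_state=0)

def fit_and_score(X_tr, y_tr, X_va, y_va):
    """Train a 5-hidden-neuron sigmoid MLP (a stand-in trainer) and
    return the model and its validation accuracy."""
    net = MLPClassifier(hidden_layer_sizes=(5,), activation="logistic",
                        max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    return net, net.score(X_va, y_va)

net, acc_480 = fit_and_score(X_tr, y_tr, X_va, y_va)

# Prune to the 140 highest-relevance stems (variance-contribution proxy,
# not the exact analysis of [5, 8]) and retrain on the reduced input set.
relevance = ((net.coefs_[0].T ** 2) * X_tr.var(axis=0)).sum(axis=0)
keep = np.argsort(relevance)[::-1][:140]
_, acc_140 = fit_and_score(X_tr[:, keep], y_tr, X_va[:, keep], y_va)
print(f"validation accuracy: 480 stems {acc_480:.3f}, 140 stems {acc_140:.3f}")
```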
These 140 stems appear to be the most significant for classifying Jewish law articles according to the ethnic group of their writers, since they have different frequencies for the two ethnic groups. In contrast, the 340 removed stems have about the same frequencies for the two ethnic groups.

Among the 140 stems found to support the classification task, we find a few Aramaic stems that are more common in the usage of Ashkenazi Jews, e.g.: (1) כוי (a special kind of animal of uncertain status), which stands as a stem for the word כוותיהו ("as him"), and (2) פקע ("expire / become invalid"), which stands as a stem for the word אפקעינן. Examples of stems that are more common in the usage of Sephardi Jews are: (1) מור ("myrrh"), which stands as a stem for the word מרן (a pen name of one of the most important Sephardi rabbis), and (2) צדק ("justice"), which stands as a stem for the word צדיק ("saintly person").

Among the 340 removed stems, on the one hand we find rather frequent stems such as: (1) למד ("learn"), which stands as a stem for the family of words related to the Hebrew word למד ("learn"), and (2) דבר ("talk"), which stands as a stem for the family of words related to the Hebrew word דבר ("talk"). On the other hand we find non-frequent stems such as: (1) נגח ("gore"), which stands as a stem for the family of words related to the Hebrew word נגח ("gore"), and (2) נגנ ("play music"), which stands as a stem for the family of words related to the Hebrew word נגנ ("play music").

The 85% correct classification result is reasonable but not excellent. A possible explanation for this finding is that classification based on stems depends on the ability of the stemming program to correctly represent the words.

8 Conclusions and Future Work

Stem-based classification has been found rather successful for the ethnic classification of responsa written in Hebrew-Aramaic. This method may be useful in other languages and applications. Future directions for research are: (1) conducting more experiments using additional Hebrew-Aramaic documents from additional domains; (2) checking whether stem-based classification can also be used for other tasks of ethnic classification, e.g., among various groups of Muslims who write in Arabic (since Arabic is also a Semitic language that is written from right to left and whose words are also based on stems); (3) comparing our research to the same classification task using other popular ML methods, e.g., SVM, Naïve Bayes, C4.5, logistic regression and log-linear models; and (4) comparing our research to the same classification task based on more complex features, such as words and/or linguistic features.

Concerning research on additional ethnic groups, there are many potential directions. For example: (1) Which baseline methods are good for which classification tasks? (2) What are the specific reasons for methods to perform better or worse on different classification tasks? (3) What are the guidelines for choosing the correct methods for a certain classification task?

9 References

1. Argamon-Engelson, S., Koppel, M., Avneri, G.: Style-Based Text Categorization: What Newspaper Am I Reading? In: Proc. of the AAAI Workshop on Learning for Text Categorization (1998) 1-4
2. Baayen, H., van Halteren, H., Tweedie, F.: Outside the Cave of Shadows: Using Syntactic Annotation to Enhance Authorship Attribution. Literary and Linguistic Computing, 11 (1996)
3. Bishop, C.M.: Neural Networks for Pattern Recognition. Clarendon Press (1995)
4. Boger, Z.: Application of Neural Networks to Water and Wastewater Treatment Plant Operation. Transactions of the Instrument Society of America, 31 (1) (1992) 25-33
5. Boger, Z., Guterman, H.: Knowledge Extraction from Artificial Neural Networks Models. In: Proc. of the IEEE Intl. Conference on Systems, Man and Cybernetics (SMC'97), Orlando, Florida (1997) 3030-3035
6. Boger, Z., Kuflik, T., Shoval, P., Shapira, B.: Automatic Keyword Identification by Artificial Neural Networks Compared to Manual Identification by Users of Filtering Systems. Information Processing & Management, 37 (2) (2001) 187-198
7. Boger, Z.: Who is Afraid of the Big Bad ANN? In: Proc. of the International Joint Conference on Neural Networks (IJCNN'02), Hawaii (2002) 2000-2005
8. Boger, Z.: Selection of Quasi-Optimal Inputs in Chemometrics Modeling by Artificial Neural Network Analysis. Analytica Chimica Acta, 490 (1-2) (2003) 31-40
9. Choueka, Y., Conley, E.S., Dagan, I.: A Comprehensive Bilingual Word Alignment System: Application to Disparate Languages - Hebrew and English. In: Veronis, J. (ed.), Parallel Text Processing, Kluwer Academic Publishers (2000) 69-96
10. Clack, C., Farringdon, J., Lidwell, P., Yu, T.: Autonomous Document Classification for Business. In: Proc. of the 1st International Conference on Autonomous Agents, Marina del Rey, CA (1997) 201-208
11. Corrêa, R.F., Ludermir, T.B.: Automatic Text Categorization: Case Study. In: Proc. of the VII Brazilian Symposium on Neural Networks (SBRN'02) (2002)
12. Cortes, C., Vapnik, V.: Support-Vector Networks. Machine Learning, 20 (1995) 273-297
13. de Vel, O., Anderson, A., Corney, M., Mohay, G.: Mining E-Mail Content for Author Identification Forensics. SIGMOD Record, 30 (4) (2001) 55-64
14. Daya, E., Roth, D., Wintner, S.: Learning Hebrew Roots: Machine Learning with Linguistic Constraints. In: Proc. of EMNLP'04, Barcelona (2004)
15. Daya, E.: Learning to Identify Semitic Roots. Master's Thesis, University of Haifa, Israel (2005)
16. Greenberg, S., Guterman, H.: Neural Networks Classifiers for Automatic Real-World Image Recognition. Applied Optics, 35 (1996) 4598-4609
17. Guterman, H.: Application of Principal Component Analysis to the Design of Neural Networks. Neural, Parallel and Scientific Computing, 2 (1994) 43-54
18. Knight, K.: Mining Online Text. Communications of the ACM, 42 (11) (1999) 58-61
19. Kohonen, T.: Exploration of Very Large Databases by Self-Organizing Maps. In: Proc. of the IEEE International Conference on Neural Networks, 1 (1997) PL1-6
20. Kuflik, T.: Methods for Definition of Content-Based and Rule-Based User Profiles in Information Filtering Systems. Ph.D. Dissertation, Ben-Gurion University of the Negev (2003)
21. Kuflik, T., Boger, Z., Shoval, P.: Filtering Search Results Using an Optimal Set of Terms Identified by an Artificial Neural Network. Information Processing & Management (in press) (2006)
22. Koppel, M., Mughaz, D., Schler, J.: Text Categorization for Authorship Verification. In: Proc. of the 8th Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL (2004)
23. Koppel, M., Mughaz, D., Akiva, N.: New Methods for Attribution of Rabbinic Literature. Hebrew Linguistics: A Journal for Hebrew Descriptive, Computational and Applied Linguistics, Bar-Ilan University Press, 57 (2006) v-xviii
24. Maron, M.: Automatic Indexing: An Experimental Inquiry. Journal of the ACM, 8 (3) (1961) 404-417
25. Melamed, E.Z.: Aramaic-Hebrew-English Dictionary. Feldheim, ISBN 1-58330-776-1 (2005)
26. Merkl, D., Rauber, A.: Document Classification with Unsupervised Artificial Neural Networks. In: Crestani, F., Pasi, G. (eds.), Soft Computing in Information Retrieval: Techniques and Applications, Physica Verlag, Heidelberg, 50 (2000) 102-121
27. Mughaz, D.: Classification of Hebrew Texts According to Style. M.Sc. Thesis (in Hebrew), Bar-Ilan University, Ramat-Gan, Israel (2003)
28. Pal, S.K., Talwar, V., Mitra, P.: Web Mining in Soft Computing Framework: Relevance, State of the Art and Future Directions. IEEE Transactions on Neural Networks, 13 (5) (2002) 1163-1177
29. Pazienza, M.T. (ed.): Information Extraction. Lecture Notes in Computer Science, Vol. 1299. Springer, Heidelberg, Germany (1997)
30. Radai, Y.: Hamikra hamemuchshav: Hesegim, Bikoret umishalot (in Hebrew). Balshanut Ivrit, 13 (1978) 92-99
31. Radai, Y.: Od al Hamikra hamemuchshav (in Hebrew). Balshanut Ivrit, 15 (1979) 58-59
32. Radai, Y.: Mikra umachshev: Divrei Idkun (in Hebrew). Balshanut Ivrit, 19 (1982) 47-52
33. Rosenthal, F.: Aramaic Studies During the Past Thirty Years. The Journal of Near Eastern Studies, Chicago (1978) 81-82
34. Ruiz, M.E., Srinivasan, P.: Hierarchical Neural Networks for Text Categorization. In: Proc. of the 22nd Intl. Conference on Research and Development in Information Retrieval (1999) 281-282
35. Ruiz, M.E., Srinivasan, P.: Hierarchical Text Categorization Using Neural Networks. Information Retrieval, 5 (1) (2002) 87-118
36. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning Representations by Back-Propagating Errors. Nature, 323 (1986) 533-536
37. Sebastiani, F.: Machine Learning in Automated Text Categorization. ACM Computing Surveys, 34 (1) (2002) 1-47
38. Werbos, P.: Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. Ph.D. Dissertation, Committee on Applied Mathematics, Harvard University (1974)
39. Werbos, P.: The Roots of Back-Propagation: From Ordered Derivatives to Neural Networks to Political Forecasting. John Wiley and Sons (1993)
40. Wermter, S.: Neural Network Agents for Learning Semantic Text Classification. Information Retrieval, 3 (2000) 87-103