WhitespaceAnalyzer: simply splits on whitespace. It does not lowercase characters, does not support Chinese, and keeps punctuation such as the dash in the original text; whatever sits between spaces becomes a minimal token.

SimpleAnalyzer: more capable than WhitespaceAnalyzer. It lowercases all characters and treats every non-letter character as a token boundary. It keeps stop words and does not support Chinese.

StopAnalyzer: goes beyond SimpleAnalyzer by additionally removing stop words. It does not support Chinese.

StandardAnalyzer: handles English the same way as StopAnalyzer, but keeps words of the form XY&Z intact and preserves email addresses. It supports Chinese by splitting it into single characters.

These four analyzers can be illustrated with an example.

Input string: XY&Z mail is - xyz@sohu.com

===== WhitespaceAnalyzer =====
Method: split on whitespace
XY&Z mail is - xyz@sohu.com

===== SimpleAnalyzer =====
Method: split on whitespace and every other non-letter character
xy z mail is xyz sohu com

===== StopAnalyzer =====
Method: split on whitespace and non-letter characters, then drop stop words (is, are, in, on, the, and other words that carry no real meaning)
xy z mail xyz sohu com

===== StandardAnalyzer =====
Method: mixed segmentation, including stop-word removal; supports Chinese
xy&z mail xyz@sohu.com
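This comparison is easy to reproduce. Below is a minimal sketch, assuming the Lucene 1.4-era API used by the code later in this article (TokenStream.next() returning a Token, and Token.termText()):

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

public class FourAnalyzerDemo {
  public static void main(String[] args) throws IOException {
    String input = "XY&Z mail is - xyz@sohu.com";
    // The four built-in analyzers discussed above, in the same order.
    Analyzer[] analyzers = {
        new WhitespaceAnalyzer(), new SimpleAnalyzer(),
        new StopAnalyzer(), new StandardAnalyzer()};
    for (int i = 0; i < analyzers.length; i++) {
      System.out.println("===== " + analyzers[i].getClass().getName() + " =====");
      TokenStream ts = analyzers[i].tokenStream("dummy", new StringReader(input));
      for (Token t = ts.next(); t != null; t = ts.next()) {
        System.out.print(t.termText() + " ");
      }
      System.out.println();
    }
  }
}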
ChineseAnalyzer: comes from the Lucene sandbox. Its behavior is similar to StandardAnalyzer (single-character segmentation of Chinese); its drawback is that it does not support mixed Chinese/English segmentation.

CJKAnalyzer: written by chedong. For English it behaves the same as StandardAnalyzer, but for Chinese it uses bigram (two-character) segmentation and cannot filter out punctuation.

TjuChineseAnalyzer: our own custom analyzer, and the most capable of the three. For Chinese segmentation it calls the Java interface of ICTCLAS, so in Chinese its quality matches ICTCLAS. For English it follows Lucene's StopAnalyzer: it removes stop words, is case-insensitive, and filters out all kinds of punctuation.
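To see the unigram/bigram difference concretely, both sandbox analyzers can be dropped into the same token-printing loop as above. A minimal sketch, assuming the sandbox package names of that era (org.apache.lucene.analysis.cn and org.apache.lucene.analysis.cjk):

import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.cjk.CJKAnalyzer;
import org.apache.lucene.analysis.cn.ChineseAnalyzer;

public class SandboxAnalyzerDemo {
  public static void main(String[] args) throws Exception {
    String input = "我愛天津大學(xué)";
    // ChineseAnalyzer emits one token per Han character;
    // CJKAnalyzer emits overlapping two-character (bigram) tokens.
    Analyzer[] analyzers = {new ChineseAnalyzer(), new CJKAnalyzer()};
    for (int i = 0; i < analyzers.length; i++) {
      System.out.println("===== " + analyzers[i].getClass().getName() + " =====");
      TokenStream ts = analyzers[i].tokenStream("dummy", new StringReader(input));
      for (Token t = ts.next(); t != null; t = ts.next()) {
        System.out.print(t.termText() + " ");
      }
      System.out.println();
    }
  }
}

If the sandbox jars are on the classpath, the first loop should print the six single characters and the second the overlapping bigrams (我愛 愛天 天津 津大 大學(xué)).

That covers what each Analyzer does. Now we should learn to write one ourselves: how do you DIY your own analyzer?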
How to DIY an Analyzer

Let's write an Analyzer with the following requirements:

(1) It handles both Chinese and English: single-character segmentation for Chinese, whitespace segmentation for English.
(2) The English part is lowercased.
(3) It filters with a user-supplied StopWords list; if none is supplied, a default list is used.
(4) The English part is stemmed with the Porter stemming (P-stemming) algorithm.

The code is as follows:

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Set;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardTokenizer;

public final class DiyAnalyzer extends Analyzer {
  private Set stopWords;

  public static final String[] CHINESE_ENGLISH_STOP_WORDS = {
      "a", "an", "and", "are", "as", "at", "be", "but", "by",
      "for", "if", "in", "into", "is", "it",
      "no", "not", "of", "on", "or", "s", "such",
      "t", "that", "the", "their", "then", "there", "these",
      "they", "this", "to", "was", "will", "with",
      "我", "我們"};

  public DiyAnalyzer() {
    this.stopWords = StopFilter.makeStopSet(CHINESE_ENGLISH_STOP_WORDS);
  }

  public DiyAnalyzer(String[] stopWordList) {
    this.stopWords = StopFilter.makeStopSet(stopWordList);
  }

  public TokenStream tokenStream(String fieldName, Reader reader) {
    // StandardTokenizer splits English on word boundaries and Chinese into
    // single characters; the filters then lowercase, remove stop words,
    // and apply Porter stemming.
    TokenStream result = new StandardTokenizer(reader);
    result = new LowerCaseFilter(result);
    result = new StopFilter(result, stopWords);
    result = new PorterStemFilter(result);
    return result;
  }

  public static void main(String[] args) {
    // Note: StandardTokenizer does not seem to recognize the English
    // sentence-ending period.
    String string = "我愛中國,我愛天津大學(xué)!I love China!Tianjin is a City";
    Analyzer analyzer = new DiyAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    try {
      while ((token = ts.next()) != null) {
        System.out.println(token.toString());
      }
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
  }
}

The output looks like this:

Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,1,2,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中,2,3,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(國,3,4,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,6,7,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天,7,8,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(津,8,9,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(大,9,10,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(學(xué),10,11,<CJK>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(i,12,13,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(love,14,18,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(china,19,24,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(tianjin,25,32,<ALPHANUM>,1)
Token's (termText,startOffset,endOffset,type,positionIncrement) is:(citi,39,43,<ALPHANUM>,1)

That completes this simple but quite capable analyzer. Now let's try to write an even more powerful one.

How to DIY a more powerful Analyzer

Suppose you have a dictionary and have written a segmentation method based on forward or backward maximum matching, and you want to use it in Lucene. That is simple: just wrap it as a Lucene TokenStream (see the sketch at the end of this article). Below I demonstrate with the Java interface of ICTCLAS, written by the Chinese Academy of Sciences. You can download a free version of the interface from their website; if you have the money, you can buy the full version.

After ICTCLAS segments a paragraph, the Java output separates the words with two spaces. Too easy: we can simply extend Lucene's WhitespaceTokenizer. So TjuChineseTokenizer looks like this:

public class TjuChineseTokenizer extends WhitespaceTokenizer {
  public TjuChineseTokenizer(Reader readerInput) {
    super(readerInput);
  }
}

And TjuChineseAnalyzer looks like this:
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Set;
import org.apache.lucene.analysis.*;
// ICTCLAS, FileIO and StopWords are the author's own helper classes;
// their package imports are omitted here.

public final class TjuChineseAnalyzer extends Analyzer {
  private Set stopWords;

  /** An array containing some common English words that are not usually
   *  useful for searching. */
  /*
  public static final String[] CHINESE_ENGLISH_STOP_WORDS = {
      "a", "an", "and", "are", "as", "at", "be", "but", "by",
      "for", "if", "in", "into", "is", "it",
      "no", "not", "of", "on", "or", "s", "such",
      "t", "that", "the", "their", "then", "there", "these",
      "they", "this", "to", "was", "will", "with",
      "我", "我們"};
  */

  /** Builds an analyzer with a default Chinese/English stop-word list. */
  public TjuChineseAnalyzer() {
    stopWords = StopFilter.makeStopSet(StopWords.SMART_CHINESE_ENGLISH_STOP_WORDS);
  }

  /** Builds an analyzer which removes the words in the provided array. */
  public TjuChineseAnalyzer(String[] stopWords) {
    this.stopWords = StopFilter.makeStopSet(stopWords);
  }

  /** Segments with ICTCLAS, then lowercases, removes stop words, and stems. */
  public TokenStream tokenStream(String fieldName, Reader reader) {
    try {
      ICTCLAS splitWord = new ICTCLAS();
      String inputString = FileIO.readerToString(reader);
      // ICTCLAS inserts spaces between the segmented words.
      String resultString = splitWord.paragraphProcess(inputString);
      System.out.println(resultString);
      TokenStream result = new TjuChineseTokenizer(new StringReader(resultString));
      result = new LowerCaseFilter(result);
      // Filter with the stop-word list.
      result = new StopFilter(result, stopWords);
      // Apply Porter stemming to the English part.
      result = new PorterStemFilter(result);
      return result;
    } catch (IOException e) {
      System.out.println("Conversion failed");
      return null;
    }
  }

  public static void main(String[] args) {
    String string = "我愛中國人民";
    Analyzer analyzer = new TjuChineseAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    System.out.println("Tokens:");
    try {
      int n = 0;
      while ((token = ts.next()) != null) {
        System.out.println((n++) + "->" + token.toString());
      }
    } catch (IOException ioe) {
      ioe.printStackTrace();
    }
  }
}

Its output looks like this:

0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(愛,3,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中國,6,8,word,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(人民,10,12,word,1)

OK, after this walkthrough you should know the Lucene analysis package fairly well. Of course, if you want to understand it more deeply, you should still read the source code carefully; the source explains everything!
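As promised above, here is a minimal sketch of wrapping a dictionary-based forward-maximum-matching segmenter as a Lucene Tokenizer. This is my own illustration, not code from the original article; it assumes the same Lucene 1.4-era API (a Tokenizer subclass exposing public Token next()) and a caller-supplied dictionary Set:

import java.io.IOException;
import java.io.Reader;
import java.util.Set;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.Tokenizer;

public class MaxMatchTokenizer extends Tokenizer {
  private final String text;    // the whole input, read up front for simplicity
  private final Set dictionary; // caller-supplied word list
  private final int maxWordLen; // length of the longest dictionary word
  private int offset = 0;       // current position in the input

  public MaxMatchTokenizer(Reader input, Set dictionary, int maxWordLen)
      throws IOException {
    super(input);
    this.dictionary = dictionary;
    this.maxWordLen = maxWordLen;
    StringBuffer sb = new StringBuffer();
    int c;
    while ((c = input.read()) != -1) {
      sb.append((char) c);
    }
    this.text = sb.toString();
  }

  public Token next() throws IOException {
    // Skip whitespace between tokens.
    while (offset < text.length() && Character.isWhitespace(text.charAt(offset))) {
      offset++;
    }
    if (offset >= text.length()) {
      return null; // end of stream
    }
    // Forward maximum matching: try the longest window first, then shrink
    // it until a dictionary word matches.
    int end = Math.min(offset + maxWordLen, text.length());
    for (int i = end; i > offset + 1; i--) {
      String candidate = text.substring(offset, i);
      if (dictionary.contains(candidate)) {
        Token t = new Token(candidate, offset, i);
        offset = i;
        return t;
      }
    }
    // No dictionary word starts here; emit a single character.
    Token t = new Token(text.substring(offset, offset + 1), offset, offset + 1);
    offset++;
    return t;
  }
}

Wrapped this way, your segmenter chains with LowerCaseFilter, StopFilter, and PorterStemFilter exactly like the analyzers above.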