

C# Lucene.Net.Analysis.Analyzer.ReusableTokenStream Method Code Examples

This article collects typical usage examples of the C# method Lucene.Net.Analysis.Analyzer.ReusableTokenStream. If you are wondering what this method does or how to call it in practice, the curated example below may help. You can also explore further usage examples of the enclosing class, Lucene.Net.Analysis.Analyzer.


The following shows 1 code example of the Lucene.Net.Analysis.Analyzer.ReusableTokenStream method, sorted by popularity by default.
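Before the full example, a minimal sketch of the typical calling pattern may be useful. This is an illustrative assumption, not code from the source: the field name "content", the sample text, and the choice of StandardAnalyzer are hypothetical, and the sketch assumes the Token-based API of the same Lucene.Net 2.x era as the example below (where TokenStream.Next(Token) and Token.TermBuffer() exist).

```csharp
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;

class ReusableTokenStreamSketch
{
    static void Main()
    {
        // Hypothetical analyzer choice; any Analyzer subclass is used the same way.
        Analyzer analyzer = new StandardAnalyzer();

        // ReusableTokenStream hands back a cached, re-initialized TokenStream
        // for this thread, avoiding a fresh TokenStream allocation per field.
        TokenStream stream = analyzer.ReusableTokenStream(
            "content", new StringReader("Hello Lucene.Net world"));

        // Iterate tokens with the reusable-Token overload of Next,
        // the same pattern InvertField uses below.
        Token token = new Token();
        while ((token = stream.Next(token)) != null)
        {
            Console.WriteLine(new string(token.TermBuffer(), 0, token.TermLength()));
        }
        stream.Close();
    }
}
```

The point of ReusableTokenStream over TokenStream is allocation reuse: an indexer that inverts many fields can fetch the same underlying stream repeatedly instead of constructing a new analysis chain each time.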

Example 1: InvertField

/* Invert one occurrence of one field in the document */
public void InvertField(Fieldable field, Analyzer analyzer, int maxFieldLength)
{
    if (length > 0)
        position += analyzer.GetPositionIncrementGap(fieldInfo.name);

    if (!field.IsTokenized())
    {
        // un-tokenized field: index the raw string value as a single token
        System.String stringValue = field.StringValue();
        int valueLength = stringValue.Length;
        Token token = localToken;
        token.Clear();
        char[] termBuffer = token.TermBuffer();
        if (termBuffer.Length < valueLength)
            termBuffer = token.ResizeTermBuffer(valueLength);
        DocumentsWriter.GetCharsFromString(stringValue, 0, valueLength, termBuffer, 0);
        token.SetTermLength(valueLength);
        token.SetStartOffset(offset);
        token.SetEndOffset(offset + stringValue.Length);
        AddPosition(token);
        offset += stringValue.Length;
        length++;
    }
    else
    {
        // tokenized field
        TokenStream stream;
        TokenStream streamValue = field.TokenStreamValue();

        if (streamValue != null)
            stream = streamValue;
        else
        {
            // the field does not have a TokenStream,
            // so we have to obtain one from the analyzer
            System.IO.TextReader reader; // find or make Reader
            System.IO.TextReader readerValue = field.ReaderValue();

            if (readerValue != null)
                reader = readerValue;
            else
            {
                System.String stringValue = field.StringValue();
                if (stringValue == null)
                    throw new System.ArgumentException("field must have either TokenStream, String or Reader value");
                Enclosing_Instance.stringReader.Init(stringValue);
                reader = Enclosing_Instance.stringReader;
            }

            // Tokenize field and add to postingTable
            stream = analyzer.ReusableTokenStream(fieldInfo.name, reader);
        }

        // reset the TokenStream to the first token
        stream.Reset();

        try
        {
            offsetEnd = offset - 1;
            for (; ; )
            {
                Token token = stream.Next(localToken);
                if (token == null)
                    break;
                position += (token.GetPositionIncrement() - 1);
                AddPosition(token);
                if (++length >= maxFieldLength)
                {
                    if (Enclosing_Instance.Enclosing_Instance.infoStream != null)
                        Enclosing_Instance.Enclosing_Instance.infoStream.WriteLine("maxFieldLength " + maxFieldLength + " reached for field " + fieldInfo.name + ", ignoring following tokens");
                    break;
                }
            }
            offset = offsetEnd + 1;
        }
        finally
        {
            stream.Close();
        }
    }

    boost *= field.GetBoost();
}
Developer: vikasraz; Project: indexsearchutils; Lines of code: 85; Source file: DocumentsWriter.cs


Note: the Lucene.Net.Analysis.Analyzer.ReusableTokenStream examples in this article were collected from open-source code and documentation platforms such as GitHub and MSDocs. Copyright of the code snippets remains with their original authors; consult each project's license before distributing or using them.