SearchModelFactory.ClassicTokenizer(String, Nullable<Int32>) Method

Definition

Grammar-based tokenizer that is suitable for processing most European-language documents. This tokenizer is implemented using Apache Lucene. This factory method creates a new ClassicTokenizer instance, primarily for mocking in tests.

public static Azure.Search.Documents.Indexes.Models.ClassicTokenizer ClassicTokenizer(string name = default, int? maxTokenLength = default);
static member ClassicTokenizer : string * Nullable<int> -> Azure.Search.Documents.Indexes.Models.ClassicTokenizer
Public Shared Function ClassicTokenizer (Optional name As String = Nothing, Optional maxTokenLength As Nullable(Of Integer) = Nothing) As ClassicTokenizer

Parameters

name
String

The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it can start and end only with alphanumeric characters; and it is limited to 128 characters.

maxTokenLength
Nullable<Int32>

The maximum token length. Default is 255. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters.

Returns

A new ClassicTokenizer instance for mocking.
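A minimal sketch of how this factory method might be used in a unit test to construct a `ClassicTokenizer` without calling the service. The tokenizer name `"my-classic-tokenizer"` is a hypothetical example value; the properties read at the end assume the standard `Name` and `MaxTokenLength` members of `ClassicTokenizer`.

```csharp
using System;
using Azure.Search.Documents.Indexes.Models;

// Build a ClassicTokenizer model instance for mocking purposes.
// "my-classic-tokenizer" is a hypothetical name that satisfies the
// naming rules (alphanumeric start/end, dashes allowed, <= 128 chars).
ClassicTokenizer tokenizer = SearchModelFactory.ClassicTokenizer(
    name: "my-classic-tokenizer",
    maxTokenLength: 300); // 300 is the maximum supported token length

// The mocked instance can then be inspected or returned from a test double.
Console.WriteLine(tokenizer.Name);
Console.WriteLine(tokenizer.MaxTokenLength);
```

Because both parameters are optional, `SearchModelFactory.ClassicTokenizer()` with no arguments is also valid; `maxTokenLength` then remains unset and the service default of 255 applies.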

Applies to