SearchModelFactory.EdgeNGramTokenizer Method

Definition

Creates a new EdgeNGramTokenizer instance for mocking. The tokenizer tokenizes the input from an edge into n-grams of the given size(s) and is implemented using Apache Lucene.

C#

public static Azure.Search.Documents.Indexes.Models.EdgeNGramTokenizer EdgeNGramTokenizer(string name = default, int? minGram = default, int? maxGram = default, System.Collections.Generic.IEnumerable<Azure.Search.Documents.Indexes.Models.TokenCharacterKind> tokenChars = default);

F#

static member EdgeNGramTokenizer : string * Nullable<int> * Nullable<int> * seq<Azure.Search.Documents.Indexes.Models.TokenCharacterKind> -> Azure.Search.Documents.Indexes.Models.EdgeNGramTokenizer

Visual Basic

Public Shared Function EdgeNGramTokenizer (Optional name As String = Nothing, Optional minGram As Nullable(Of Integer) = Nothing, Optional maxGram As Nullable(Of Integer) = Nothing, Optional tokenChars As IEnumerable(Of TokenCharacterKind) = Nothing) As EdgeNGramTokenizer

Parameters

name
String

The name of the tokenizer. It must only contain letters, digits, spaces, dashes or underscores, can only start and end with alphanumeric characters, and is limited to 128 characters.

minGram
Nullable<Int32>

The minimum n-gram length. Default is 1. Maximum is 300. Must be less than the value of maxGram.

maxGram
Nullable<Int32>

The maximum n-gram length. Default is 2. Maximum is 300.

tokenChars
IEnumerable<TokenCharacterKind>

Character classes to keep in the tokens.

Returns

A new EdgeNGramTokenizer instance for mocking.

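Examples

The following sketch shows one way the factory method might be used to build an EdgeNGramTokenizer model for a unit test, without calling the search service. The tokenizer name and gram lengths are illustrative values, not defaults taken from this reference.

using Azure.Search.Documents.Indexes.Models;

// Create a populated model instance for use as a test double.
EdgeNGramTokenizer tokenizer = SearchModelFactory.EdgeNGramTokenizer(
    name: "edge-ngram-test",   // illustrative; must follow the naming rules above
    minGram: 2,                // must be less than maxGram
    maxGram: 10,               // maximum allowed value is 300
    tokenChars: new[] { TokenCharacterKind.Letter, TokenCharacterKind.Digit });

// The returned instance exposes the supplied values, so test code that
// consumes an EdgeNGramTokenizer can assert against them, for example
// verifying that tokenizer.MinGram is 2.

Because the factory returns a fully populated model without a service call, it is useful for mocking responses from Azure.Search.Documents in unit tests.
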
Applies to