MicrosoftLanguageTokenizer interface
Divides text using language-specific rules.
- Extends: LexicalTokenizer
Properties
| Property | Description |
| --- | --- |
| isSearchTokenizer | A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false. |
| language | The language to use. The default is English. |
| maxTokenLength | The maximum token length. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300, and then each of those tokens is split based on the maximum token length that was set. Default is 255. |
| odatatype | A URI fragment specifying the type of tokenizer. |
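The properties above can be combined into a tokenizer definition for a search index. The sketch below uses a local interface that mirrors the fields documented here (the real type ships in the @azure/search-documents package), and the tokenizer name and language are made-up examples:

```typescript
// Local mirror of the documented interface, for illustration only;
// the real MicrosoftLanguageTokenizer type comes from @azure/search-documents.
interface MicrosoftLanguageTokenizer {
  odatatype: "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer";
  name: string;
  isSearchTokenizer?: boolean;
  language?: string; // MicrosoftTokenizerLanguage is a string union in the SDK
  maxTokenLength?: number;
}

// A tokenizer that divides French text, capping tokens at 300 characters.
const frenchTokenizer: MicrosoftLanguageTokenizer = {
  odatatype: "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer",
  name: "my-french-tokenizer", // hypothetical name
  language: "french",
  maxTokenLength: 300,
  isSearchTokenizer: false, // used at indexing time (the default)
};
```

The name must follow the constraints listed under the inherited `name` property: alphanumeric start and end, at most 128 characters.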
Inherited Properties
| Property | Description |
| --- | --- |
| name | The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it can start and end only with alphanumeric characters; and it is limited to 128 characters. |
Property Details
isSearchTokenizer
A value indicating how the tokenizer is used. Set to true if used as the search tokenizer, set to false if used as the indexing tokenizer. Default is false.
isSearchTokenizer?: boolean
Property Value
boolean
language
The language to use. The default is English.
language?: MicrosoftTokenizerLanguage
Property Value
MicrosoftTokenizerLanguage
maxTokenLength
The maximum token length. Tokens longer than the maximum length are split. The maximum token length that can be used is 300 characters. Tokens longer than 300 characters are first split into tokens of length 300, and then each of those tokens is split based on the maximum token length that was set. Default is 255.
maxTokenLength?: number
Property Value
number
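The two-stage splitting rule described above can be illustrated with a short sketch. This is not the service's implementation, only a model of the documented behavior: tokens are first cut into 300-character pieces, and each piece is then cut to `maxTokenLength`:

```typescript
// Illustrative model of the documented splitting rule, not the service's code.
function splitToken(token: string, maxTokenLength: number): string[] {
  const hardCap = 300; // documented upper bound on token length

  // Cut a string into consecutive pieces of at most n characters.
  const chunk = (s: string, n: number): string[] => {
    const out: string[] = [];
    for (let i = 0; i < s.length; i += n) out.push(s.slice(i, i + n));
    return out;
  };

  // First split at 300 characters, then apply maxTokenLength to each piece.
  return chunk(token, hardCap).flatMap((piece) => chunk(piece, maxTokenLength));
}
```

For example, with the default `maxTokenLength` of 255, a 700-character token is first cut into pieces of 300, 300, and 100 characters, and those become pieces of 255, 45, 255, 45, and 100 characters.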
odatatype
A URI fragment specifying the type of tokenizer.
odatatype: "#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"
Property Value
"#Microsoft.Azure.Search.MicrosoftLanguageTokenizer"
Inherited Property Details
name
The name of the tokenizer. It must contain only letters, digits, spaces, dashes, or underscores; it can start and end only with alphanumeric characters; and it is limited to 128 characters.
name: string
Property Value
string
Inherited From LexicalTokenizer.name