SearchModelFactory.ScalarQuantizationCompression Method

Definition

Initializes a new instance of ScalarQuantizationCompression, which contains configuration options specific to the scalar quantization compression method used during indexing and querying. This factory method is intended for mocking.

public static Azure.Search.Documents.Indexes.Models.ScalarQuantizationCompression ScalarQuantizationCompression(string compressionName = default, Azure.Search.Documents.Indexes.Models.RescoringOptions rescoringOptions = default, int? truncationDimension = default, Azure.Search.Documents.Indexes.Models.ScalarQuantizationParameters parameters = default);
static member ScalarQuantizationCompression : string * Azure.Search.Documents.Indexes.Models.RescoringOptions * Nullable<int> * Azure.Search.Documents.Indexes.Models.ScalarQuantizationParameters -> Azure.Search.Documents.Indexes.Models.ScalarQuantizationCompression
Public Shared Function ScalarQuantizationCompression (Optional compressionName As String = Nothing, Optional rescoringOptions As RescoringOptions = Nothing, Optional truncationDimension As Nullable(Of Integer) = Nothing, Optional parameters As ScalarQuantizationParameters = Nothing) As ScalarQuantizationCompression

Parameters

compressionName
String

The name to associate with this particular configuration.

rescoringOptions
RescoringOptions

Contains the options for rescoring.

truncationDimension
Nullable<Int32>

The number of dimensions to truncate the vectors to. Truncating the vectors reduces their size and the amount of data that needs to be transferred during search, which can reduce storage costs and improve search performance at the expense of recall. It should only be used for embeddings trained with Matryoshka Representation Learning (MRL), such as OpenAI text-embedding-3-large and text-embedding-3-small. The default value is null, which means no truncation.

parameters
ScalarQuantizationParameters

Contains the parameters specific to Scalar Quantization.

Returns

A new ScalarQuantizationCompression instance for mocking.
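Because this factory bypasses the service, it is useful for stubbing index models in unit tests. The sketch below shows one way it might be used; the configuration name and truncation dimension are illustrative values, not SDK defaults, and the `ScalarQuantizationParameters` initializer assumes the settable `QuantizedDataType` property exposed by the SDK.

```csharp
using Azure.Search.Documents.Indexes.Models;
using Azure.Search.Documents.Models;

// Build a ScalarQuantizationCompression for a test double without
// round-tripping through the Azure AI Search service.
ScalarQuantizationCompression compression =
    SearchModelFactory.ScalarQuantizationCompression(
        compressionName: "my-scalar-quantization", // illustrative name
        rescoringOptions: null,                    // no rescoring in this stub
        truncationDimension: 1024,                 // only valid for MRL-trained embeddings
        parameters: new ScalarQuantizationParameters
        {
            QuantizedDataType = VectorSearchCompressionTarget.Int8
        });
```

A test can then hand `compression` to code that consumes a vector search configuration, verifying behavior without network calls.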

Applies to