Excel Query

    Note: This component should not be used to load very large (>100 MB) Excel files. It is advised that such files be converted to CSV and loaded using a Storage Load component instead.

    This component can load data stored in an Office Open XML Excel sheet into a table. This stages the data, so the table is reloaded each time. You may then use transformations to enrich and manage the data in permanent tables.

    By default, data types are guessed by looking at the cell formatting, not the cell contents. This is controlled by the Connection Option "type detection scheme", which can be set to ColumnFormat (the default, which examines the cell formatting), RowScan (which scans 15 rows of data and guesses the data type based on the data values), or None (treat everything as text). A second connection parameter, "row scan depth", controls how many rows to scan when determining column types. None is often a sensible choice if you intend to parse the values later anyway, or if the types in a single column are mixed.
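
    For example, to scan the first 50 data rows instead of relying on cell formatting, the two Connection Options above could be set as follows (a sketch only; check the Excel Data Model for the exact parameter names your driver version expects):

        type detection scheme = RowScan
        row scan depth = 50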

    The component offers both a Basic and Advanced mode (see below) for generating the Excel query.
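
    For instance, an Advanced-mode query against a worksheet named Sheet1 with no header row might look like the following (a sketch only; the sheet name and values are illustrative):

        SELECT A, B, C FROM Sheet1 WHERE C > '100'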

    Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option 'Recreate Target Table' to 'Off' will prevent both recreation and truncation. Do not modify the target table structure manually.


    Properties

    Snowflake Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Basic/Advanced Mode | Choice | Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
    Advanced: This mode requires you to write an SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option.
    Storage Type | Choice | Select between Amazon S3, Azure Blob and Google Cloud Storage as the host of your Excel file.
    Storage URL | Text | Enter the full path where your .xlsx file can be located. Only Office Open XML (.xlsx) files are supported. Storage containers are explorable here if your credentials include access to resources of the selected Storage Type.
    Contains Header Row | Choice | Yes - The first row of data is the column names.
    No - The first row of data is just data. Columns will be named A, B, C...
    Cell Range | Text | By default, the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported; for example, A5:E* would consider columns A-E and rows 5 onwards.
    Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are explained in the Excel Data Model. They are usually not required, as sensible defaults are assumed.
    Value | A value for the given Parameter.
    Data Source | Choice | Select a data source. Each sheet in the workbook is exposed as a table.
    Data Selection | Choice | Select one or more columns to return from the query. These may be A, B, C... or the first row may be used as a header to provide column names. See the "Header" connection option.
    Data Source Filter | Input Column | The available input columns vary depending upon the Data Source.
    Qualifier | Is: Compares the column to the value using the comparator.
    Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
    Comparator | Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
    'Equal To' can match exact strings and numeric values, while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The 'Null' operator matches only null values, ignoring whatever the value is set to.
    Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
    Value | The value to be compared.
    Combine Filters | Select | Select whether to use the defined filters in combination with one another according to either And or Or.
    SQL Query | Text | An SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option. This property is only available in Advanced mode.
    Limit | Number | By default, all rows are returned, but you can use this to limit the number of rows loaded.
    Type | Select | Choose between using a standard table or an external table.
    External: The data will be put into an S3 bucket and referenced by an external table.
    Standard: The data will be staged on an S3 bucket before being loaded into a table. This is the default setting.
    Primary Keys | Select | Select one or more columns to be designated as the table's primary key.
    Warehouse | Select | Choose a Snowflake warehouse that will run the load.
    Database | Select | Choose a database to create the new table in.
    Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article.
    Target Table | String | Provide a new table name.
    Warning: This table will be recreated on each run of the job, and will drop any existing table of the same name.
    Stage | Select | Select a managed stage. The special value, [Custom], will create a stage "on the fly" for use solely within this component. Selecting [Custom] provides all the properties typically seen in the Manage Stages dialog for your input.
    If you select a managed stage that has already been configured in Manage Stages, the additional properties are not provided, as they have already been configured.
    Manage Stages can be found by clicking the Environments panel in the lower-left, then right-clicking an environment. To learn more, read Manage Stages.
    Stage Platform | Select | Select a staging setting.
    Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
    Existing Amazon S3 Location: Activates the S3 Staging Area property, allowing users to specify a custom staging area on Amazon S3. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
    Existing Azure Blob Storage Location: Activates the Storage Account and Blob Container properties, allowing users to specify a custom staging location on Azure. The Stage Authentication property is also activated, letting users select a method of authenticating the data staging.
    Existing Google Cloud Storage Location: Activates the GCS Staging Area property, allowing users to specify a custom staging area within Google Cloud Storage.
    Stage Authentication | Select | Select an authentication method for data staging.
    Credentials: Uses the credentials configured in the Matillion ETL environment. If no credentials have been configured, an error will occur.
    Storage Integration: Use a Snowflake storage integration to authenticate data staging. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of allowed or blocked storage locations. To learn more, read Create Storage Integration.
    Storage Integration | Select | Select a Snowflake storage integration from the dropdown list. Storage integrations are required to permit Snowflake to read data from and write to your cloud storage location (Amazon S3, Microsoft Azure, Google Cloud Storage) and must be set up in advance of selection.
    To learn more about setting up a storage integration for use in Matillion ETL, read Storage Integration Setup Guide.
    This property is only available when Stage Authentication is set to Storage Integration.
    S3 Staging Area | Select | Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed again after the load completes; they are not kept.
    Use Accelerated Endpoint | Boolean | When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
    • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Please consult Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
    • Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available at the AWS documentation.
    • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
    • Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis, and may return a validation message that reads OK - Bucket could not be validated. You may also encounter cases where, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
    • The default setting is False.
    Storage Account | Select | Select a storage account with your desired blob container to be used for staging the data. For more information, read Storage account overview.
    Blob Container | Select | Select a Blob container to be used for staging the data. For more information, read Introduction to Azure Blob storage.
    GCS Staging Area | Select | The URL and path of the target Google Storage bucket to be used for staging the queried data. For more information, read Creating storage buckets.
    Encryption | Select | Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
    None: No encryption.
    SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
    SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
    KMS Key ID | Select | The ID of the KMS encryption key you have chosen to use in the Encryption property.
    Load Options | Multiple Select | Clean Staged Files: Destroy staged files after loading data. Default is On.
    String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is Off.
    Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
    File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
    Trim String Columns: Remove leading and trailing characters from a string column. Default is On.
    Compression Type: Set the compression type to either gzip or None. The default is gzip.
    Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
    New Table Name | String | Specify the name of the new table to be created.
    This property is only available when Type is set to External.
    Stage Database | Select | Specify the stage database. The special value, [Environment Default], will use the database defined in the environment.
    This property is only available when Type is set to External.
    Stage Schema | Select | Specify the stage schema. The special value, [Environment Default], will use the schema defined in the environment.
    This property is only available when Type is set to External.
    Stage | Select | Select a stage.
    This property is only available when Type is set to External.
    Auto Debug | Select | Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
    Debug Level | Select | The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
    1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
    2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
    3: Will additionally log the body of the request and the response.
    4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
    5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.

    Redshift Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Basic/Advanced Mode | Choice | Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
    Advanced: This mode requires you to write an SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option.
    Storage Type | Choice | Select between Amazon S3, Azure Blob and Google Cloud Storage as the host of your Excel file.
    Storage URL | Text | Enter the full path where your .xlsx file can be located. Only Office Open XML (.xlsx) files are supported. Storage containers are explorable here if your credentials include access to resources of the selected Storage Type.
    Contains Header Row | Choice | Yes - The first row of data is the column names.
    No - The first row of data is just data. Columns will be named A, B, C...
    Cell Range | Text | By default, the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported; for example, A5:E* would consider columns A-E and rows 5 onwards.
    Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are explained in the Excel data model. Manual setup is not usually required, since sensible defaults are assumed.
    Value | A value for the given Parameter.
    SQL Query | String | Input an SQL-like query, written according to the Excel data model.
    This property is only available when Basic/Advanced Mode is set to Advanced.
    Data Source | Select | Select a data source.
    Data Selection | Dual Listbox | Select one or more columns from the chosen data source to return from the query.
    Data Source Filter | Input Column | Select an input column for your filter. The available input columns vary depending upon the data source.
    Qualifier | Is: Compares the column to the value using the comparator.
    Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
    Comparator | Select the comparator. Choose one of "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", or "Like".
    "Equal to" can match exact strings and numeric values, while other comparators such as "Greater than" and "Less than" will work only with numerics. The "Like" comparator allows the wildcard character % to be used at the start and end of a string value to match a column. The "Null" comparator matches only null values, ignoring whatever the value is set to.
    Note: Not all data sources support all comparators, meaning that, often, only a subset of the above comparators will be available for selection.
    Value | Specify the value to be compared.
    Combine Filters | Select | Use the defined filters in combination with one another according to either And or Or.
    Limit | Integer | Set a numeric value to limit the number of rows that can be loaded. Fetching a large number of results will use multiple API calls. These calls are rate-limited by the provider, so fetching a very large number may result in errors.
    Type | Select | Choose between using a standard table or an external table.
    Standard: The data will be staged on an S3 bucket before being loaded into a table.
    External: The data will be put into an S3 bucket and referenced by an external table.
    Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, read Schemas.
    Note: An external schema is required if the Type property is set to "External".
    Target Table | String | Provide a new table name.
    Warning: This table will be recreated and will drop any existing table of the same name.
    Location | S3 Bucket | Select an S3 bucket path that will be used to store the data. Once the data is on an S3 bucket, it can be referenced by an external table.
    This property is only available when the Type property is set to "External".
    S3 Staging Area | Select | Select an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. Read Manage Credentials for details on setting up access. The temporary objects created in this bucket will be removed again after the load completes; they are not kept.
    Use Accelerated Endpoint | Boolean | When True, data will be loaded via the s3-accelerate endpoint. Please consider the following information:
    • Enabling acceleration can enhance the speed at which data is transferred to the chosen S3 bucket. However, enhanced speed is not always guaranteed. Please consult Amazon S3 Transfer Acceleration Speed Comparison to compare S3 Direct versus S3 Accelerated Transfer speeds.
    • Users must manually set the acceleration configuration of an existing bucket. To learn more, see PutBucketAccelerateConfiguration in the API Reference, available at the AWS documentation.
    • This property is only available if the selected S3 bucket has Amazon S3 Transfer Acceleration enabled. For more information, including how to enable this feature, read Getting started with Amazon S3 Transfer Acceleration.
    • Cases may arise where Matillion ETL cannot determine whether the chosen S3 bucket has Amazon S3 Transfer Acceleration enabled. In these cases, Matillion ETL will reveal this property for user input on a "just in case" basis, and may return a validation message that reads OK - Bucket could not be validated. You may also encounter cases where, if you do not have permission to get the status of the acceleration configuration (namely, the GetAccelerateConfiguration permission), Matillion ETL will again show this property "just in case".
    • The default setting is False.
    Distribution Style | Select | All: Copy rows to all nodes in the Redshift cluster.
    Auto: (Default) Allow Redshift to manage your distribution style.
    Even: Distribute rows around the Redshift cluster evenly.
    Key: Distribute rows around the Redshift cluster according to the value of a key column.
    Table distribution is critical to good performance. See the Distribution styles documentation for more information.
    Sort Key | Multiple Select | This is optional, and lets users specify one or more columns from the input that should be set as the table's sort key.
    Sort keys are critical to good performance. Read Working with sort keys for more information.
    Sort Key Options | Select | Decide whether the sort key is of a compound or interleaved variety. Read Working with sort keys for more information.
    Primary Key | Multiple Select | Select one or more columns to be designated as the table's primary key.
    Load Options | Multiple Select | Comp Update: Apply automatic compression to the target table. Default is On.
    Stat Update: Automatically update statistics when filling a table. Default is On. In this case, it is updating the statistics of the target table.
    Clean S3 Objects: Automatically remove UUID-based objects on the S3 bucket. Default is On. Effectively, users decide here whether to keep the staged data in the S3 bucket or not.
    String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. Default is On.
    Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the existing table will be used. Default is On.
    File Prefix: Give staged file names a prefix of your choice. When this Load Option is selected, users should set their preferred prefix in the text field.
    Compression Type: Set the compression type to either gzip or None. The default is gzip.
    Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
    Encryption | Select | Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
    None: No encryption.
    SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
    SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
    KMS Key ID | Select | The ID of the KMS encryption key you have chosen to use in the Encryption property.
    Auto Debug | Select | Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
    Debug Level | Select | The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
    1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
    2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
    3: Will additionally log the body of the request and the response.
    4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
    5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

    BigQuery Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Basic/Advanced Mode | Choice | Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
    Advanced: This mode requires you to write an SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option.
    Storage Type | Choice | Select between Amazon S3, Azure Blob and Google Cloud Storage as the host of your Excel file.
    Storage URL | Text | Enter the full path where your .xlsx file can be located. Only Office Open XML (.xlsx) files are supported. Storage containers are explorable here if your credentials include access to resources of the selected Storage Type.
    Contains Header Row | Choice | Yes - The first row of data is the column names.
    No - The first row of data is just data. Columns will be named A, B, C...
    Cell Range | Text | By default, the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported; for example, A5:E* would consider columns A-E and rows 5 onwards.
    Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are explained in the Excel Data Model. They are usually not required, as sensible defaults are assumed.
    Value | A value for the given Parameter.
    Data Source | Choice | Select a data source. Each sheet in the workbook is exposed as a table.
    Data Selection | Choice | Select one or more columns to return from the query. These may be A, B, C... or the first row may be used as a header to provide column names. See the "Header" connection option.
    Data Source Filter | Input Column | The available input columns vary depending upon the Data Source.
    Qualifier | Is: Compares the column to the value using the comparator.
    Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
    Comparator | Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
    'Equal To' can match exact strings and numeric values, while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The 'Null' operator matches only null values, ignoring whatever the value is set to.
    Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
    Value | The value to be compared.
    Combine Filters | Select | Select whether to use the defined filters in combination with one another according to either And or Or.
    SQL Query | Text | An SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option. This property is only available in Advanced mode.
    Limit | Number | By default, all rows are returned, but you can use this to limit the number of rows loaded.
    Table Type | Select | Select whether the table is Native (the default in BigQuery) or an external table.
    Project | Select | The target BigQuery project to load data into. The special value, [Environment Default], will use the project defined in the Matillion ETL environment.
    Dataset | Select | The target BigQuery dataset to load data into. The special value, [Environment Default], will use the dataset defined in the Matillion ETL environment.
    Target Table | String | A name for the table.
    Warning: This table will be recreated and will drop any existing table of the same name.
    Only available when the table type is Native.
    New Target Table | String | A name for the new external table.
    Only available when the table type is External.
    Cloud Storage Staging Area | Cloud Storage Bucket | The URL and path of the target Google Cloud Storage bucket to be used for staging the queried data.
    Only available when the table type is Native.
    Location | Cloud Storage Bucket | The URL and path of the target Google Cloud Storage bucket.
    Only available when the table type is External.
    Load Options | Multiple Select | Clean Cloud Storage Files: Destroy staged files on Cloud Storage after loading data. Default is On.
    Cloud Storage File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
    Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. Default is On.
    Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
    Only available when the table type is Native.
    Auto Debug | Select | Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
    Debug Level | Select | The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
    1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
    2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
    3: Will additionally log the body of the request and the response.
    4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
    5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

    Synapse Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Basic/Advanced Mode | Choice | Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
    Advanced: This mode requires you to write an SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option.
    Storage Type | Choice | Select between Amazon S3, Azure Blob and Google Cloud Storage as the host of your Excel file.
    Storage URL | Text | Enter the full path where your .xlsx file can be located. Only Office Open XML (.xlsx) files are supported. Storage containers are explorable here if your credentials include access to resources of the selected Storage Type.
    Contains Header Row | Choice | Yes - The first row of data is the column names.
    No - The first row of data is just data. Columns will be named A, B, C...
    Cell Range | Text | By default, the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported; for example, A5:E* would consider columns A-E and rows 5 onwards.
    Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are explained in the Excel Data Model. They are usually not required, as sensible defaults are assumed.
    Value | A value for the given Parameter.
    Data Source | Choice | Select a data source. Each sheet in the workbook is exposed as a table.
    Data Selection | Choice | Select one or more columns to return from the query. These may be A, B, C... or the first row may be used as a header to provide column names. See the "Header" connection option.
    Data Source Filter | Input Column | The available input columns vary depending upon the Data Source.
    Qualifier | Is: Compares the column to the value using the comparator.
    Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
    Comparator | Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
    'Equal To' can match exact strings and numeric values, while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The 'Null' operator matches only null values, ignoring whatever the value is set to.
    Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
    Value | The value to be compared.
    Combine Filters | Select | Select whether to use the defined filters in combination with one another according to either And or Or.
    SQL Query | Text | An SQL-like query. The worksheets become table names, and the columns are either A, B, C... or use the first row as column names, depending on the setting of the "Header" connection option. This property is only available in Advanced mode.
    Limit | Number | By default, all rows are returned, but you can use this to limit the number of rows loaded.
    Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on schemas, please see the Azure Synapse documentation.
    Table | String | Provide a new table name.
    Warning: This table will be recreated on each run of the job, and will drop any existing table of the same name.
    Storage Account | Select | Select an Azure storage account with your desired blob container to be used for staging the data. Please visit the Azure documentation for help creating an Azure storage account.
    Blob Container | Select | Select a blob container to be used for staging the data. The blob containers available for selection depend on the chosen storage account.
    Load Options | Multiple Select | Configure this Orchestration Job's load options. These load options will apply each time the job runs. Sensible defaults are assumed.
    Clean Staged Files: Destroy staged files after loading data. Default is On.
    String Null is Null: Converts any strings equal to "null" into a null value. This load option is case-sensitive and only works with entirely lower-case strings. Default is Off.
    Recreate Target Table: Choose whether the component recreates its target table before the data load. If set to Off, the existing table will be used instead. Default is On.
    File Prefix: Give staged file names a prefix of your choice. The default setting is an empty field.
    Compression Type: Set the compression type to either gzip or None. The default is gzip.
    Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
    Distribution Style | Select | Select the distribution style:
    Hash: This setting assigns each row to one distribution by hashing the value stored in the distribution_column_name. The algorithm is deterministic, meaning it always hashes the same value to the same distribution. The distribution column should be defined as NOT NULL, because all rows that have NULL are assigned to the same distribution.
    Replicate: This setting stores one copy of the table on each Compute node. For SQL Data Warehouse, the table is stored on a distribution database on each Compute node. For Parallel Data Warehouse, the table is stored in an SQL Server filegroup that spans the Compute node. This behaviour is the default for Parallel Data Warehouse.
    Round Robin: Distributes the rows evenly in a round-robin fashion. This is the default behaviour.
    For more information, please read this article.
    Distribution Column | Select | Select the column to act as the distribution column. This property is only available when the Distribution Style property is set to "Hash".
    Index Type | Select | Select the table indexing type. Options include:
    Clustered: A clustered index may outperform a clustered columnstore table when a single row needs to be retrieved quickly. The disadvantage to using a clustered index is that the only queries that benefit are the ones that use a highly selective filter on the clustered index column. Choosing this option prompts the Index Column Grid property.
    Clustered Column Store: This is the default setting. Clustered columnstore tables offer both the highest level of data compression and the best overall query performance, especially for large tables. Choosing this option prompts the Index Column Order property.
    Heap: Users may find that using a heap table is faster for temporarily landing data in Synapse SQL pool. This is because loads to heaps are faster than to index tables, and in some cases, the subsequent read can be done from cache. When a user is loading data only to stage it before running additional transformations, loading the table to a heap table is much faster than loading the data to a clustered columnstore table.
    For more information, please consult the Azure Synapse documentation.
    Index Column Grid | Name | The name of each column.
    Sort | Assign a sort orientation of either ascending (Asc) or descending (Desc).
    Index Column Order | Multiple Select | Select the columns in the order to be indexed.
    Partition Key | Select | Select the table's partition key. Table partitions determine how rows are grouped and stored within a distribution.
    For more information on table partitions, please refer to this article.
    Auto Debug | Select | Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
    Debug Level | Select | The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
    1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
    2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
    3: Will additionally log the body of the request and the response.
    4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
    5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.

    Delta Lake Properties

    Property | Setting | Description
    Name | String | A human-readable name for the component.
    Basic/Advanced Mode | Select | Basic: This mode will build a query using settings from the Data Source, Data Selection, and Data Source Filter properties. In most cases, this mode will be sufficient.
    Advanced: This mode requires you to write an SQL-like query. The available fields and their descriptions are documented in the Excel Data Model.
    Storage Type | Choice | Select between Amazon S3, Azure Blob and Google Cloud Storage as the host of your Excel file.
    Storage URL | Text | Enter the full path where your .xlsx file can be located. Only Office Open XML (.xlsx) files are supported. Storage containers are explorable here if your credentials include access to resources of the selected Storage Type.
    Contains Header Row | Choice | Yes - The first row of data is the column names.
    No - The first row of data is just data. Columns will be named A, B, C...
    Cell Range | Text | By default, the whole worksheet is considered. However, you may optionally specify a range of cells instead. For example, A5:E100 would only consider rows 5-100 in columns A-E. Wildcards (*) are also supported; for example, A5:E* would consider columns A-E and rows 5 onwards.
    Connection Options | Parameter | A JDBC parameter supported by the database driver. The available parameters are explained in the Excel Data Model. Manual setup is not usually required, since sensible defaults are assumed.
    Value | The corresponding parameter value.
    SQL Query | SQL | Manually write the component's query in an SQL-like fashion. This property is only available when Basic/Advanced Mode is set to "Advanced".
    Data Source | Select | Select a data source.
    Data Selection | Select | Select one or more columns from the chosen data source to return from the query.
    Data Source Filter | Input Column | The available input columns vary depending upon the data source.
    Qualifier | Is: Compares the column to the value using the comparator.
    Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
    Comparator | Choose a method of comparing the column to the value. Possible comparators include: "Equal to", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null".
    "Equal to" can match exact strings and numeric values, while other comparators, such as "Greater than", will work only with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The "Null" operator matches only null values, ignoring whatever the value is set to.
    Not all data sources support all comparators, so it is likely that only a subset of the above comparators will be available to choose from.
    Value | The value to be compared.
    Combine Filters | Select | Use the defined data source filters in combination with one another according to either And or Or.
    Limit | Integer | Set a numeric value to limit the number of rows that can be loaded.
    Catalog | Select | Select a Databricks Unity Catalog. The special value, [Environment Default], will use the catalog specified in the Matillion ETL environment setup. Selecting a catalog will determine which databases are available in the next parameter.
    Database | Select | Select the Delta Lake database. The special value, [Environment Default], will use the database specified in the Matillion ETL environment setup.
    Table | String | Specify the new table name.
    S3 Staging Area | Select | (AWS only) Select an S3 bucket for staging.
    Storage Account | Select | (Azure only) Select an Azure Blob Storage account. An Azure storage account contains all of your Azure Storage data objects: blobs, files, queues, tables, and disks. For more information, read Storage account overview.
    Blob Container | Select | (Azure only) A Blob Storage location. The available blob containers will depend on the selected storage account.
    Encryption | Select | (AWS only) Specify how files are encrypted inside the S3 bucket.
    None: No encryption.
    SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
    SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
    KMS Key ID | Select | (AWS only) The ID of the KMS encryption key you have chosen to use in the Encryption property.
    Load Options | Multiple Select | Clean Staged Files: Destroy staged files after loading data. The default is On.
    String Null is Null: Converts any strings equal to "null" into a null value. This is case-sensitive and only works with entirely lower-case strings. The default is Off.
    Recreate Target Table: Choose whether the component recreates its target table before the data load. If Off, the component will use an existing table or create one if it does not exist. The default is On.
    File Prefix: Give staged file names a prefix of your choice. The default is an empty field (no prefix).
    Compression Type: Set the compression type to either gzip or None. The default is gzip.
    Use Grid Variable: Check this checkbox to use a grid variable. This box is unchecked by default.
    Auto Debug | Select | Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
    Debug Level | Select | The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
    1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
    2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.
    3: Will additionally log the body of the request and the response.
    4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
    5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.

    Variable Exports

    This component makes the following values available to export into variables:

    Source | Description
    Time Taken To Stage | The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.
    Time Taken To Load | The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.

    Strategy

    Download the files from storage to a temporary area on the Matillion instance. Query the sheet and stream the results into objects on storage. Recreate or truncate the target table as necessary, and then use a COPY command to load the storage objects into the table. Finally, clean up the temporary storage objects.
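
    On Snowflake, for example, the load step is broadly equivalent to the following (a sketch only; the table name, stage name, and prefix are assumptions, and the component creates and removes these objects for you):

        COPY INTO excel_example
        FROM @my_stage/prefix/
        FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP);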

    Horizontal Spreadsheets

    When the spreadsheet being queried is laid out horizontally (i.e. where column names are arranged vertically in the first column), users can set the Orientation parameter in Matillion ETL's Connection Options to a value of Horizontal. This will instruct Matillion ETL to read each column as a row in the table.

    The Orientation parameter can be used in conjunction with the Header parameter. If header columns are not set and the spreadsheet's orientation is horizontal, column names will be R1, R2, R3, and so on, and can be queried like this:

    SELECT R1, R2, R3 FROM Sheet1 WHERE R2 > '50'


    Example 1

    In this example, the Excel Query component is used to create a table populated with sales data from an Excel file (.xlsx). The table data is then passed through a simple filter. Bringing data into a table requires an orchestration job, while filtering that data requires a Transformation job.

    The orchestration job requires three components: Start, Excel Query, and Transform Data. Start requires no parameterization, and Transform Data should simply be given the name of the Transformation job.

    The Excel Query component must be given the path of an existing .xlsx file and the name of the table to write the data to. If the table does not exist, Excel Query will create it. If the table does exist, Excel Query will overwrite it.

    In this example, an .xlsx file is taken from an S3 bucket using the Excel Query component, set up as below. Since we want to grab all of the data, we needn't alter the 'Cell Range', 'Data Source Filter', and 'Limit' properties. The data is written to a table named 'excel_example', which can be done immediately by right-clicking the component and selecting 'Run Component'. This example is particularly apt when you wish to import Excel data to a table with no serious alteration of the source material.

    Ensure each property has 'OK' status before continuing. After this component has run, a table named 'excel_example' will exist in the specified staging area and can be used in transformation jobs. In this example, the excel_example table data is loaded using the Table Input component. Selecting the 'Sample' tab for the Table Input component allows the user to 'Retrieve' rows and a total row count, and we can see from the sample that the data has been read in correctly. Note that the columns match the names found in the 'Data Selection' property of the Excel Query component.

    The Filter component is used to find only the data where Jane is the sales rep. Editing the 'Filter Conditions' property of the Filter component allows a new filter to be added, in this case one to check that sales_rep_name is equal to 'Jane'.

    Finally, the Filter component's Sample tab can be viewed to ensure the table data is being filtered correctly. As expected, the sample shows only rows where Jane is the sales rep.



    Example 2

    In the previous example, a Filter component was used to take a subsection of data from a table of Excel data. In this example, we see how the Excel Query component can be used to do this directly, without need of a Transformation job.

    Editing the 'Data Source Filter' property of the Excel Query component will bring up a filter similar to that of the Filter component. In the same way, we filter only for values where the sales rep name is Jane. Note that the component (or job) must be rerun for this new data to overwrite the old data and provide a sample.

    An inspection of the resulting table (using a Table Input component to sample the data) shows that the filter has been applied successfully and only rows containing transactions by Jane are imported.

    Finally, we decide that we don't need the 'value' column at all and we'd like Excel Query to omit it. This can be done through the Excel Query component's 'Cell Range' property. In this case, we want all rows and all columns except the 'value' column. Thus the cell range:

     A*:D* 

    is useful, using the wildcard value for rows. Column E is then lost. Upon entering this Cell Range, the component may report an error in the Data Selection property, as it expects the 'value' column that we have now omitted. Editing the Data Selection property can fix this error. The same error may appear if you attempt to reuse the same Table Input component to view a sample, as it may expect a column that no longer exists, but it can be remedied in the same way.

    Again, inspecting the sample data for this table confirms the success of our new Excel Query properties.


