Database Query Component
Run an SQL Query on an accessible database and copy the result to a table, via storage.
This component is for data staging - getting data into a table in order to perform further processing and transformations on it. The target table should be considered temporary, as it will either be truncated or recreated each time the component runs.
Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option 'Recreate Target Table' to 'Off' will prevent both recreation and truncation. Do not modify the target table structure manually.
Incremental loads with this component can be built using its Incremental Load Generator.
|Name||Text||The descriptive name for the component.|
|Basic/Advanced Mode||Choice||Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced: This mode requires you to write an SQL-like query to call data from your chosen database.|
Microsoft SQL Server: see their website for more details.
Oracle: see their website for more details.
Sybase ASE: see their website for more details.
PostgreSQL: see their website for more details.
MySQL: see their website for more details.
IBM DB2: see their website for more details.
IBM DB2 for i: see their website for more details.
Netezza: see their website for more details.
Teradata: see their website for more details.
Note: For some databases, you must first provide a JDBC driver as not all drivers can be distributed with Matillion ETL. See this article for instructions on managing drivers.
|Connection URL||Text||This is the database JDBC URL used to connect. The format of the URL varies considerably; however, a default 'template' is offered once you have chosen a database type. Replace any special tags in the URL template with real values. Although many parameters and options can be added to the end of the URL, it is generally easier to add them via the Connection Options property documented below.|
|Username||Text||This is your database connection username.|
|Password||Text||This is your database connection password. The password is masked so it can be set, but not read. Users have the option to store their password inside the component, but we highly recommend using the Password Manager option.|
|Data Source Filter||Input Column||The available input columns vary depending upon the Data Source.|
|Qualifier||Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
|Comparator||Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
'Equal To' can match exact strings and numeric values while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators; it is likely that only a subset of the above comparators will be available to choose from.|
|Value||The value to be compared.|
|Combine Filters||Choice||And - Multiple filters must ALL be true for a row to be returned.
Or - Any one of the filters must be true for a row to be returned.
|Limit||Number||Limits the number of rows that are loaded from the data source.|
|SQL Query||Text||This is an SQL query, written in the dialect of the target database. It can be as simple as select * from tablename, but it should remain a simple select query (see the example query after the properties table below). (Property only available in 'Advanced' Mode)|
|Warehouse||Select||Choose a Snowflake warehouse that will run the load.|
|Database||Select||Choose a database to create the new table in.|
|Schema||Select||Select the table schema. The special value, [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.|
Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
|Storage Account||Select||(Azure Only) Select a Storage Account with your desired Blob Container to be used for staging the data.|
|Blob Container||Select||(Azure Only) Select a Blob Container to be used for staging the data.|
(AWS Only) Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this reveals additional properties for specifying a custom staging area on S3.
|S3 Staging Area||Text||(AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.|
|Connection Options||Parameter||A JDBC parameter supported by the database driver. The available parameters are determined automatically from the driver and may change from version to version. They are usually not required, since sensible defaults are assumed.|
|Value||A value for the given Parameter. Parameters and their allowed values are somewhat database-specific. The documentation for your database may help, or if you upload your own JDBC driver, consult the documentation that was provided with it. Contact support if you think you require an advanced JDBC option.|
|Concurrency||Integer||The number of S3 files to create. This helps when loading into Amazon Redshift, as the files are loaded in parallel. In addition, Matillion ETL for Redshift will be able to upload parts of these files in parallel.
Note: The maximum concurrency is 8 times the number of processors on your cloud instance. For example, an instance with 2 processors has a maximum concurrency of 16.|
|Distribution Style||Select||Auto: (Default) Allow Redshift to manage your distribution style.
Even: Distributes rows around the Redshift cluster evenly.
All: Copies rows to all nodes in the Redshift cluster.
Key: Distributes rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance. See the Amazon Redshift documentation for more information.
|Table Distribution Key||Select||This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on.|
|Table Sort Key||Select||This is optional, and specifies the columns from the input that should be
set as the table's sort-key.
Sort-keys are critical to good performance - see the Amazon Redshift documentation for more information.
|Sort Key Options||Select||Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.|
|Project||Text||The target BigQuery project to load data into.|
|Dataset||Text||The target BigQuery dataset to load data into.|
|Cloud Storage Staging Area||Text||The URL and path of the target Google Storage bucket to be used for staging the queried data.|
|Encryption||Select||(AWS Only) Decide how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data using Amazon S3-managed encryption keys.
|KMS Key ID||Select||(AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.|
|Primary Key||Select Multiple||Select one or more columns to be designated as Primary Keys for the table.|
|Load Options||Multiple Selection||
Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.|
|Load Options||Multiple Select||Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
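As an illustration of the SQL Query property above, the Basic-mode settings (Data Source, Data Selection, Data Source Filter, Limit) correspond roughly to a hand-written Advanced-mode query such as the sketch below. The table and column names are hypothetical, and the LIMIT syntax shown is MySQL/PostgreSQL style; other databases use TOP or ROWNUM instead.

```sql
-- Hypothetical source table and columns, for illustration only.
-- Data Selection -> the column list; Data Source Filter -> the WHERE clause;
-- Combine Filters 'And' -> both conditions must hold; Limit -> caps the rows staged.
SELECT id,
       email_address,
       bounce_type,
       sent_at
FROM   ses_bounce
WHERE  bounce_type = 'Permanent'
  AND  sent_at IS NOT NULL
LIMIT  1000;
```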
This component makes the following values available to export into variables:
|Time Taken To Stage||The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.|
|Time Taken To Load||The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.|
Connect to the target database and issue the query. Stream the results into objects on S3. Then create or truncate the target table and issue a COPY command to load the S3 objects into the table. Finally, clean up the temporary S3 objects.
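For a Snowflake target, the steps above are roughly equivalent to the following sketch. The table definition, stage name, and file format are assumptions made for illustration; the component creates and manages the real staging objects and COPY statement itself.

```sql
-- 1. Recreate (or truncate) the target table, depending on the load options.
CREATE OR REPLACE TABLE rds_ses_bounce (
    id            NUMBER,
    email_address VARCHAR,
    bounce_type   VARCHAR,
    sent_at       TIMESTAMP
);

-- 2. Load the staged query results from S3 (shown here via a hypothetical external stage).
COPY INTO rds_ses_bounce
FROM @staging_stage/rds_ses_bounce/
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');

-- 3. The temporary staged objects are then removed (the 'Clean S3 Objects' load option).
```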
In this example, we connect to a source database that contains a table of records indicating that an email sent via SES has been rejected (bounced). The Database Query component is used to load data from the database into a table.
In the Database Query component properties, we supply the URL to connect to our database and relevant credentials where they are needed. Data is selected for loading using an SQL Query. In this case, we take all data using a "select * ..." query.
When running, the results of the query are copied to rds_ses_bounce, which is reloaded each time the component runs. To confirm that our data is in the table (and to perform Transformations), we can use the Table Input component in a Transformation job. Using the Sample tab, we can take a quick look at the data to confirm the load was successful.
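As a quick check of the load, a Table Input component (or any SQL client connected to the target warehouse) can confirm the row count. The query below assumes the target table name used in this example.

```sql
-- Sanity check: confirm rows arrived in the staged table.
SELECT COUNT(*) AS row_count
FROM   rds_ses_bounce;
```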