API Query

API Query is a generic query component to read data from JSON- and XML-based APIs. An API is described in a "Profile" definition, which is a collection of XML files describing the API and mapping it to tables, rows and columns.

For this component to be useful, the user must have API access enabled. To do this, an administrator must check the API box next to the user's name in the 'User Configuration' section of the Admin Menu. If you would like to connect to a custom API, please contact Matillion Support for further details.

Note that although the queries from this component are exposed in an SQL-like language, the exact semantics can be surprising. There are some special pseudo columns which can be part of a query filter, but are not returned as data. This, along with many Value Formatters, is fully described in the REST Data Model.
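
For illustration only, a filter on a pseudo column might look like the sketch below. Both the data source name 'SampleData' and the pseudo column name 'PageSize' are hypothetical; the real names are defined per profile and documented in the REST Data Model.

-- Sketch only: 'SampleData' and 'PageSize' are hypothetical names.
-- The pseudo column influences how the API is called but is not returned as data.
SELECT *
FROM "SampleData"
WHERE PageSize = '100'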

The functions available to use in the pseudo-SQL inside the API Query component are listed and explained in the API Query Functions documentation.

Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option 'Recreate Target Table' to 'Off' will prevent both recreation and truncation. Do not modify the target table structure manually.


Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic - This mode will build a Query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced - This mode will require you to write an SQL-like query which is translated into one or more API calls. The data model depends upon the Profile chosen. Columns used must be defined by an rsb:check tag in the API Profile.
Profile Choice A profile maps an SQL-like language to an available API. A Sample API is always available; additional profiles can be added to your Matillion ETL instance.
Data Source Choice Select a data source, for example Likes.
Data Selection Choice Select one or more columns to return from the query.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Not - Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
'Equal To' can match exact strings and numeric values while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators, thus it is likely only a subset of the above comparators will be available to choose from.
Value The value to be compared.
SQL Query Text This is an SQL-like query, written according to the Profile definition. (Property only available in 'Advanced' Mode)
Limit Number Set a limit on the number of rows to fetch. Fetching a large number of results may use multiple API calls. These calls can be rate-limited by the provider, so fetching a very large number may result in errors.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are determined automatically from the driver, and may change from version to version.
They are usually not required as sensible defaults are assumed.
Value A value for the given Parameter. The parameters and allowed values depend upon the selected profile.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Storage Account Select (Azure Only) Select a Storage Account with your desired Blob Container to be used for staging the data.
Blob Container Select (Azure Only) Select a Blob Container to be used for staging the data.
Staging Select (AWS Only) Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this exposes additional properties that allow the user to specify a custom staging area on S3.
S3 Staging Area Text (AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
Database Select Choose a database to create the new table in.
Schema Select Select the table schema. The special value, [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Target Table Text Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Table Distribution Style Select Even - the default option, distribute rows around the Redshift Cluster evenly.
All - copy rows to all nodes in the Redshift Cluster.
Key - distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance - see the Amazon Redshift documentation for more information.
Table Distribution Key Select This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on.
Table Sort Key Select This is optional, and specifies the columns from the input that should be set as the table's sort key.
Sort-keys are critical to good performance - see the Amazon Redshift documentation for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.
Project Text The target BigQuery project to load data into.
Dataset Text The target BigQuery dataset to load data into.
Cloud Storage Staging Area Text The URL and path of the target Google Storage bucket to be used for staging the queried data.
Encryption Select (AWS Only) Decide on how the files are encrypted inside the S3 bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data using S3-managed encryption keys.
KMS Key ID Select (AWS Only) The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.
Load Options Multiple Selection Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.
Load Options Multiple Select Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.
1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.
2: Will log everything included in Level 1, cache queries, and additional information about the request, if applicable.
3: Will additionally log the body of the request and the response.
4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.
5: Will additionally log communication with the data source and additional details that may be helpful in troubleshooting problems. This includes interface commands.

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the data source and upload it to storage.
Time Taken To Load The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from storage.


Strategy

Connect to the data source via the API profile and issue the query. Stream the results into objects on S3. Then create or truncate the target table and issue a COPY command to load the S3 objects into the table. Finally, clean up the temporary S3 objects.
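
The warehouse-side steps might look roughly like the sketch below (Redshift syntax; the table, bucket and IAM role names are placeholders, and the component generates the real statements for you):

-- Sketch only: all names and the role ARN are placeholders.
TRUNCATE TABLE task_table;  -- or CREATE TABLE ... if the structure has changed
COPY task_table
FROM 's3://my-staging-bucket/matillion-tmp/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
FORMAT AS CSV;
-- The temporary objects under the staging prefix are then deleted.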


Advanced Mode

Running the component in 'Basic' mode allows the user to build a query using the Data Source, Data Selection and Data Source Filter properties. In 'Advanced' mode, the user can write their own queries (see the example sketch after the list below). Note that any available column can be used in the WHERE clause, but it will still be ignored (without presenting an error) if the column is not explicitly dealt with in the RSD. For information on writing your own API Profiles, please refer to the Creating API Profiles documentation. In the case of the default Matillion API, the defined columns are:

  • id
  • groupName
  • projectName
  • running
  • since

Note: id cannot be combined with groupName or projectName. id also does not change according to whether since or running are used.
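
For illustration, a query entered in Advanced Mode against the default Matillion API might look like the sketch below. The data source name 'Run History Details' is taken from the example later on this page; the filter values are hypothetical.

-- Sketch only: the filter values are placeholders.
SELECT *
FROM "Run History Details"
WHERE groupName = 'MyGroup'
  AND since = '2017-01-01'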

The functions available to use in Advanced Mode are listed and explained in the API Query Functions documentation.

It is necessary to enclose string literal parameters in single quotes in this component. For example, to use the DATEPART function:

DATEPART(datepart, date [,integer_datefirst])

The 'datepart' is the part of the date we want returned and has several valid values with abbreviations such as weekday (dw) or hour (hh). These abbreviations can be passed as string literals and require single quotes. So this function could be used as below:

DATEPART('dw', current_date())
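
As a further sketch, the function could appear inside a SELECT list in Advanced Mode. The column name 'startTime' is an assumption here, and whether a given column can be passed to DATEPART depends on the profile's data model.

-- Sketch only: 'startTime' is assumed to be exposed as a date/time column.
SELECT id, DATEPART('dw', startTime)
FROM "Run History Details"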


Example

In this example, the API Query component is used to import Matillion Task information into a table.

The API Query Component is set up to use the table 'task_table' that has been created in the Create/Replace Table Component. The component is set up to use the Matillion API included in Matillion ETL and the Data Source is specified as 'Run History Details' (details can be found through Project → Manage API Profiles → Matillion API).

This component uses an API call equivalent to the one below:

http://10.12.1.28/rest/v0/tasks?projectName=DelMe&running=false

This API would return the Task History for the specified instance in JSON form, such as the snippet below:

      {
        "type": "VALIDATE_ORCHESTRATION",
        "jobID": 1057,
        "jobName": "Regression Pack",
        "jobRevision": 2,
        "jobTimestamp": 1485246350127,
        "componentID": 1062,
        "componentName": "Orc/Trans Regression suite",
        "state": "SUCCESS",
        "rowCount": -1,
        "startTime": 1485246445284,
        "endTime": 1485246445313,
        "message": ""
      },
      {
        "type": "VALIDATE_ORCHESTRATION",
        "jobID": 3511,
        "jobName": "Run Tests",
        "jobRevision": 2,
        "jobTimestamp": 1485246351156,
        "componentID": 3540,
        "componentName": "Start 0",
        "state": "SUCCESS",
        "rowCount": -1,
        "startTime": 1485246445327,
        "endTime": 1485246445338,
        "message": ""
      },

The JSON is not particularly easy to read when reviewing a large number of jobs by eye, nor is it directly available in the database. The API Query component takes this JSON and, according to the specified API profile, reformats it into a table and makes that table available. The API profiles can be found through Project → Manage API Profiles.

As indicated by our component properties, we are using the Matillion API and the 'Run History Details' RSD contained within. This RSD defines the conversion of the JSON data into a table format. For more information on API Profiles, see the API documentation.
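
In Advanced Mode, an equivalent query for the API call shown above might look like the sketch below; the exact column list depends on the Data Selection, and the filter values mirror the earlier call.

-- Sketch only: equivalent pseudo-SQL for the API call shown above.
SELECT *
FROM "Run History Details"
WHERE projectName = 'DelMe'
  AND running = 'false'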

Running this Orchestration job will create a table and populate it with task data from this instance of Matillion. Below is a sample of the resulting table.


Videos