Create External Table
Create a table that references data stored in an external storage system, such as Google Cloud Storage.
For full information on working with tables on Google Cloud Platform, see the official documentation.
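An external table definition like the one this component produces corresponds, roughly, to a BigQuery `CREATE EXTERNAL TABLE` DDL statement. The sketch below builds such a statement as a string; the project, dataset, table, and bucket names are invented for illustration, and the exact DDL the component emits may differ.

```python
# Hypothetical sketch: the DDL an external-table definition roughly
# corresponds to. All names (project, dataset, table, bucket) are invented.

def external_table_ddl(project, dataset, table, uris, fmt="CSV"):
    """Return a CREATE EXTERNAL TABLE statement for the given source files."""
    uri_list = ", ".join(f"'{u}'" for u in uris)
    return (
        f"CREATE EXTERNAL TABLE `{project}.{dataset}.{table}`\n"
        f"OPTIONS (format = '{fmt}', uris = [{uri_list}])"
    )

ddl = external_table_ddl("my-project", "analytics", "jira_ext",
                         ["gs://my-bucket/jira/*.csv"])
```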
|Name||Text||The descriptive name for the component. This is automatically determined from the table name when the Table Name property is first set.|
|Project||Text||Enter the name of the Google Cloud Platform Project that the table belongs to.|
|Dataset||Text||Enter the name of the Google Cloud Platform Dataset that the table belongs to.|
|New Table Name||Text||The name of the new external table to be created.|
|Table Metadata||Column Name||The name of the new column|
|Data Type||Select||For more information on available BigQuery data types, please refer to the Google Cloud documentation.
String: This type can hold any kind of data, subject to a maximum size.
Integer: This type is suitable for whole numbers (no decimals).
Float: This type is suitable for numeric types, with or without decimals.
Boolean: This type is suitable for data that is either 'true' or 'false'.
Date: A formatted date object without a time. See the official GCP documentation.
Time: A formatted time object without a date. See the official GCP documentation.
DateTime: A formatted timestamp containing both date and time that is easily readable by the user. See the official GCP documentation.
Timestamp: This type is a timestamp left unformatted (exists as Unix/Epoch time).|
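The component's type names line up with BigQuery's standard SQL type names roughly as follows. This mapping is a reference sketch, not an authoritative list; consult the GCP documentation for the full set of types.

```python
# Rough mapping from the component's type names to BigQuery standard SQL
# type names (a sketch; see the GCP documentation for the full list).
TYPE_MAP = {
    "String": "STRING",
    "Integer": "INT64",
    "Float": "FLOAT64",
    "Boolean": "BOOL",
    "Date": "DATE",
    "Time": "TIME",
    "DateTime": "DATETIME",
    "Timestamp": "TIMESTAMP",
}
```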
|Mode||Select||The field mode. Default is 'NULLABLE'.
NULLABLE: Field allows null values.
REQUIRED: Field does not accept null values.
REPEATED: Field can accept multiple values.|
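A column's name, data type, and mode together form one entry of a BigQuery JSON schema. A minimal sketch, with the default mode applied when none is given (field names are invented):

```python
def schema_field(name, field_type, mode="NULLABLE"):
    """Build one entry of a BigQuery JSON schema; mode defaults to NULLABLE."""
    assert mode in ("NULLABLE", "REQUIRED", "REPEATED")
    return {"name": name, "type": field_type, "mode": mode}

# Hypothetical columns for illustration.
fields = [
    schema_field("issue_id", "INT64", "REQUIRED"),
    schema_field("labels", "STRING", "REPEATED"),
    schema_field("summary", "STRING"),  # NULLABLE by default
]
```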
|Create/Replace||Select||Create: The default option. Creates a new table. This will generate an error if a table with the same name already exists, but will never destroy existing data.
Create if not exists: This will only create a new table if a table of the same name does not already exist. It will not destroy existing data. If the schema of the existing table does not match the schema defined in this component, no attempt is made to fix or correct it, which could lead to errors later in the job if you did not expect a table to exist, or expected it to have the schema defined in this component.
Replace: This drops any existing table of the same name and then creates a new table. This guarantees that after the component succeeds, the table matches the schema defined in this component; however, any existing data in the table will be lost. Note: Since other database objects may depend upon this table, drop ... cascade is used, which may actually remove many other database objects.|
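The three options imply different DDL shapes. The sketch below is an assumption about the general form of the generated statements, not the component's exact output; the table name is a placeholder.

```python
def create_statement(mode, table):
    """Sketch of the DDL implied by each Create/Replace option (assumed shape)."""
    if mode == "Create":
        return f"CREATE EXTERNAL TABLE `{table}` ..."
    if mode == "Create if not exists":
        return f"CREATE EXTERNAL TABLE IF NOT EXISTS `{table}` ..."
    if mode == "Replace":
        # The existing table is dropped first, so any existing data is lost.
        return f"DROP TABLE IF EXISTS `{table}`; CREATE EXTERNAL TABLE `{table}` ..."
    raise ValueError(f"unknown mode: {mode}")
```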
|Google Storage URL Location||Select||The URL of the Google Storage bucket to get the files from. This follows the format gs://bucket-name/location, where location is optional.|
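The expected URL shape can be checked mechanically. A minimal sketch, assuming bucket names use lowercase letters, digits, dots, dashes, and underscores (the real bucket-naming rules are stricter; see the GCP documentation):

```python
import re

# Sketch: validate a URL of the form gs://bucket-name/location,
# where the location part is optional.
GS_URL = re.compile(r"^gs://([a-z0-9][a-z0-9._-]*)(/.*)?$")

def is_gs_url(url):
    """Return True if the string looks like a gs:// storage URL."""
    return bool(GS_URL.match(url))
```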
|Compression||Select||Whether the input file is compressed in GZIP format or not compressed at all.|
|File Format||Select||Cloud Datastore Backup
JSON (New line delimited): This requires an additional "JSON Format" property.|
|Number of Errors Allowed||Text||The maximum number of individual parsing errors that can occur before the whole load fails. Errors up to this number will be substituted with null values. This value defaults to 0.|
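The threshold behavior described above can be stated as a one-line rule: the load fails only once the error count exceeds the allowed maximum. A sketch of that rule:

```python
def load_fails(error_count, max_errors_allowed=0):
    """A load fails once parsing errors exceed the allowed maximum
    (default 0); errors at or below it become null values instead."""
    return error_count > max_errors_allowed
```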
|Ignore Unknown Values||Select||Yes: Accept rows that contain values that do not match the schema. Unknown values are ignored; for CSV files, extra values at the end of a line are also ignored.
No: Omit any rows with invalid values.|
|Delimiter||Select||The delimiter that separates columns. The default is a Comma. A [TAB] character can be specified as "\t".|
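The two-character string "\t" in the Delimiter property stands for a single tab character. A small sketch of that interpretation applied to a line split (the sample line is invented):

```python
def resolve_delimiter(spec):
    """Interpret the Delimiter property; the literal string "\\t" means a tab."""
    return "\t" if spec == "\\t" else spec

# Splitting a tab-separated line using the "\t" specification.
row = "a\tb\tc".split(resolve_delimiter("\\t"))
```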
|CSV Quoter||Text||Specifies the character to be used as the quote character when using the CSV option.|
|Encoding||Select||The encoding the data is in. This defaults to UTF-8.|
|Header Rows To Skip||Text||The number of rows at the top of the file to ignore - defaults to 0.|
|Allow quoted newlines||Select||Yes: Allow a CSV value to contain a newline character when the value is encased in quotation marks.
No: A newline character, regardless of quotation marks, is always considered a new row.|
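Python's csv module illustrates the 'Yes' behavior: when quoting is honored, a newline inside quotation marks stays within a single value rather than starting a new row. The sample data here is invented.

```python
import csv
import io

# Two logical rows, one of which contains a quoted embedded newline.
data = 'id,comment\n1,"line one\nline two"\n'
rows = list(csv.reader(io.StringIO(data)))
# With quoting honored, this parses as 2 rows, not 3.
```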
|Allow Jagged Rows||Select||Yes: Missing values are treated as 'null' but accepted.
No: Rows with missing data are treated as bad records. Note: A bad record will count toward the 'Number of Errors Allowed' count.|
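The jagged-row rule above can be sketched as a small padding function: with jagged rows allowed, short rows are padded with nulls; otherwise they are rejected as bad records (which count toward the error limit).

```python
def pad_row(row, width, allow_jagged):
    """Pad a short row with None when jagged rows are allowed;
    otherwise return None to mark it as a bad record."""
    if len(row) == width:
        return row
    if allow_jagged and len(row) < width:
        return row + [None] * (width - len(row))
    return None  # bad record; counts toward the error limit
```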
In this example, we will be referencing data that is held on a GCP bucket. Buckets can be viewed, managed and created through the GCP console along with the data they hold.
To begin, we use the data staging component 'Jira Query' to load our desired data into a table, then we unload it (as a CSV) to a specific place on a storage bucket where it will be held. Since we only want to query this large data set very occasionally, it is prudent to avoid staging it to a BQ table, so we use an external table instead. Thus, after unloading the data, we attach the External Table component before any transformations take place. The job is shown below.
The External Table component is used to create the external table that will reference our data. Its properties are shown below. Since the data was unloaded in CSV format, we prepare the External Table component to expect the same. It may be necessary to make use of the 'Ignore Unknown Values' and 'Number of Errors Allowed' properties if your CSV file is unusually formatted.
In the 'Table Metadata' property, we have provided column names and data types so that the component can correctly assess the data found in the CSV.
Now this data can be queried and sampled using a Transformation job. Sampling the data provides a quick test to ensure the external table has been created correctly and is referencing the data.
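Such a sanity-check sample can be as simple as a LIMIT query against the new external table. A sketch, with hypothetical project, dataset, and table names:

```python
def sample_query(project, dataset, table, n=10):
    """Build a quick sampling query against the external table."""
    return f"SELECT * FROM `{project}.{dataset}.{table}` LIMIT {n}"

query = sample_query("my-project", "analytics", "jira_ext")
```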