Cloud Dataprep by Trifacta is an intelligent data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis. We will explore the Cloud Dataprep UI to build a data transformation pipeline that runs at a scheduled interval and outputs results into BigQuery.
The dataset you'll use is an ecommerce dataset that has millions of Google Analytics session records for the Google Merchandise Store loaded into BigQuery.
• Connect BigQuery datasets to Cloud Dataprep
• Explore dataset quality with Cloud Dataprep
• Create a data transformation pipeline with Cloud Dataprep
• Schedule transformation jobs that output to BigQuery
We need BigQuery as an endpoint for dataset ingestion to the pipeline and as a destination for the output when the pipeline is completed.
Creating a BigQuery Dataset
Create a new BigQuery dataset to receive the output table of your new pipeline: in the BigQuery console, click Create dataset.
• For Dataset ID, type ecommerce.
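If you prefer DDL to the console UI, the same dataset can also be created with a single statement in the BigQuery editor. This is an optional sketch, not a required lab step:
#standardSQL
-- optional alternative to the console: create the dataset with DDL
CREATE SCHEMA IF NOT EXISTS ecommerce;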
Creating the source table
Run the following query in the BigQuery editor to copy one day of raw session data into your new dataset:
#standardSQL
CREATE OR REPLACE TABLE ecommerce.all_sessions_raw_dataprep
OPTIONS(
  description="Raw data from analyst team to ingest into Cloud Dataprep"
) AS
SELECT * FROM `data-to-insights.ecommerce.all_sessions_raw`
WHERE date = '20170801'; # limiting to one day of data (~56k rows)
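To sanity-check the copy, you can count the rows in the new table; roughly 56,000 are expected for that single day. This quick query is an optional check, not a required lab step:
#standardSQL
-- optional check: confirm the one-day extract landed in your project
SELECT COUNT(*) AS row_count
FROM ecommerce.all_sessions_raw_dataprep;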
Open Cloud Dataprep
Connecting BigQuery data to Cloud Dataprep
Connect Cloud Dataprep to your BigQuery data source. On the Cloud Dataprep page:
Click Create Flow in the top-right corner.
In the Create Flow dialog, specify these details:
• For Flow Name, type Ecommerce Analytics Pipeline
• For Flow Description, type Revenue reporting table
Click Create
Click Import & Add Datasets.
Click BigQuery.
Click the Create dataset icon next to the all_sessions_raw_dataprep table.
Click Import & Add to Flow.
Exploring ecommerce data fields with a UI
In the right pane, click Add new Recipe.
Click Edit Recipe.
Cloud Dataprep loads a sample of your dataset into the Transformer view. This process might take a few seconds.
The grey bar under totalTransactionRevenue represents missing values for that field. This means that many sessions in this sample did not generate revenue. Later, we will filter out these values so that our final table only has customer transactions and associated revenue.
Note the maximum timeOnSite (in seconds), maximum pageviews, and maximum sessionQualityDim for the data sample (you can also verify these against the full table with the query after this list):
• Maximum Time On Site: 5,561 seconds (or 92 minutes)
• Maximum Pageviews: 155 pages
• Maximum Session Quality Dimension: 97
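Because Dataprep computes these figures from a sample, the exact numbers may differ slightly from the full table. If you want to check them directly in BigQuery, a query along these lines should work, assuming these columns are stored as integers in the source table (cast them first if they are strings):
-- optional check: compute the maxima over the full table rather than a sample
SELECT
  MAX(timeOnSite) AS max_time_on_site,
  MAX(pageviews) AS max_pageviews,
  MAX(sessionQualityDim) AS max_session_quality_dim
FROM ecommerce.all_sessions_raw_dataprep;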
A red bar indicates mismatched values. While sampling data, Cloud Dataprep attempts to automatically identify the type of each column. If you do not see a red bar for the productSKU column, Cloud Dataprep correctly identified the column's type (String). If you do see a red bar, Cloud Dataprep found enough numeric values in its sample to decide (incorrectly) that the type should be Integer, and it flagged the non-integer values it also found as mismatched. In fact, productSKU is not always an integer (a correct value might be "GGOEGOCD078399", for example), so in this case Cloud Dataprep misidentified the column type: it should be String, not Integer.
Cleaning the data
Clean the data by deleting unused columns, eliminating duplicates, creating calculated fields, and filtering out unwanted rows.
Converting the productSKU column data type
To ensure that the productSKU column type is a string data type, open the menu to the right of the productSKU column, then click Change type > String.
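For orientation only, the equivalent transformation in BigQuery SQL is a simple cast; you do not need to run this, it just shows what the Dataprep step does:
-- illustrative only: force productSKU to be treated as a string
SELECT CAST(productSKU AS STRING) AS productSKU
FROM ecommerce.all_sessions_raw_dataprep;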
Deleting unused columns
We will delete the itemQuantity and itemRevenue columns, as they contain only NULL values and are not useful.
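In SQL terms, this step amounts to selecting everything except those two columns; a sketch of the equivalent in BigQuery, purely for reference:
-- illustrative only: keep every column except the two all-NULL ones
SELECT * EXCEPT (itemQuantity, itemRevenue)
FROM ecommerce.all_sessions_raw_dataprep;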
Deduplicating rows
Your team has informed you that the source dataset may contain duplicate session values. Let's remove these with a new deduplicate step.
- Click the Filter rows icon in the toolbar, then click Remove duplicate rows.
Click Add
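The Remove duplicate rows step behaves like a SELECT DISTINCT over whole rows. A rough BigQuery equivalent, for orientation only:
-- illustrative only: keep one copy of each fully identical row
-- (assumes the table has no ARRAY or STRUCT columns, which DISTINCT cannot group)
SELECT DISTINCT *
FROM ecommerce.all_sessions_raw_dataprep;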
Review the recipe
Filtering out sessions without revenue
Your team has asked you to create a table of all user sessions that bought at least one item from the website. Filter out user sessions with NULL revenue.
Under the totalTransactionRevenue column, click the grey Missing values bar. All rows with a missing value for totalTransactionRevenue are now highlighted in red.
In the Suggestions panel, in Delete rows, click Add.
This step filters your dataset to only include transactions with revenue (where totalTransactionRevenue is not NULL).
Filtering sessions for PAGE views
The dataset contains sessions of different types, for example PAGE (for page views) or EVENT (for triggered events like "viewed product categories" or "added to cart"). To avoid double counting session pageviews, add a filter to only include page view related hits.
In the histogram below the type column, click the bar for PAGE. All rows with the type PAGE are now highlighted in green.
In the Suggestions panel, in Keep rows, click Add.
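Together, the last two recipe steps amount to a single WHERE clause. A sketch of the combined filter in BigQuery, purely illustrative:
-- illustrative only: the combined effect of the two filter steps
SELECT *
FROM ecommerce.all_sessions_raw_dataprep
WHERE totalTransactionRevenue IS NOT NULL
  AND type = 'PAGE';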
Enriching the data
• visitId: an identifier for this session. This is part of the value usually stored as the utmb cookie. This is only unique to the user. For a completely unique ID, you should use a combination of fullVisitorId and visitId.
As we see, visitId is not unique across all users. We will need to create a unique identifier.
Creating a new column for a unique session ID
As you discovered, the dataset has no single column for a unique visitor session. Create a unique ID for each session by concatenating the fullVisitorId and visitId fields.
Click on the Merge columns icon in the toolbar.
For Columns, select fullVisitorId and visitId.
For Separator type a single hyphen character: -
For the New column name, type unique_session_id.
Click Add.
The unique_session_id is now a combination of the fullVisitorId and visitId. We will explore in a later lab whether each row in this dataset is at the unique session level (one row per user session) or something even more granular.
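The Merge columns step is the same idea as concatenating the two fields with a hyphen in SQL. A sketch of the equivalent expression, for reference only:
-- illustrative only: concatenate the two ids with a hyphen
-- (visitId is cast to STRING in case it is stored as an integer)
SELECT
  CONCAT(fullVisitorId, '-', CAST(visitId AS STRING)) AS unique_session_id
FROM ecommerce.all_sessions_raw_dataprep;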
Creating a case statement for the ecommerce action type
As you saw earlier, values in the eCommerceAction_type column are integers that map to actual ecommerce actions performed in that session. For example, 3 = "Add to Cart" or 5 = "Check out." This mapping will not be immediately apparent to our end users so let's create a calculated field that brings in the value name.
Click on the Conditions icon in the toolbar, then click Case on single column.
For Column to evaluate, specify eCommerceAction_type.
Next to Cases (1), click Add 8 times for a total of 9 cases.
Value to compare | New value
0 | 'Unknown'
1 | 'Click through of product lists'
2 | 'Product detail views'
3 | 'Add product(s) to cart'
4 | 'Remove product(s) from cart'
5 | 'Check out'
6 | 'Completed purchase'
7 | 'Refund of purchase'
8 | 'Checkout options'
For New column name, type eCommerceAction_label. Leave the other fields at their default values.
Click Add.
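The same mapping can be expressed as a SQL CASE expression, in case you want to compare the recipe step against a query. This mirrors the table above; the cast to STRING is an assumption to keep the comparison consistent, since the column's stored type can vary between string and integer:
-- illustrative only: label each ecommerce action type with a readable name
SELECT
  eCommerceAction_type,
  CASE CAST(eCommerceAction_type AS STRING)
    WHEN '0' THEN 'Unknown'
    WHEN '1' THEN 'Click through of product lists'
    WHEN '2' THEN 'Product detail views'
    WHEN '3' THEN 'Add product(s) to cart'
    WHEN '4' THEN 'Remove product(s) from cart'
    WHEN '5' THEN 'Check out'
    WHEN '6' THEN 'Completed purchase'
    WHEN '7' THEN 'Refund of purchase'
    WHEN '8' THEN 'Checkout options'
  END AS eCommerceAction_label
FROM ecommerce.all_sessions_raw_dataprep;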
Adjusting values in the totalTransactionRevenue column
As mentioned in the schema, the totalTransactionRevenue column contains values passed to Analytics multiplied by 10^6 (e.g., 2.40 would be given as 2400000). You will now divide the contents of that column by 10^6 to recover the original values.
Open the menu to the right of the totalTransactionRevenue column, then select Calculate > Custom formula.
For Formula, type: DIVIDE(totalTransactionRevenue,1000000) and for New column name, type: totalTransactionRevenue1. Notice the preview for the transformation:
Click Add.
To convert the new totalTransactionRevenue1 column's type to a decimal data type, open the menu to the right of the totalTransactionRevenue1 column, then click Change type > Decimal.
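The divide-and-retype steps together correspond to a single expression in SQL. A rough equivalent, assuming totalTransactionRevenue holds an integer number of micros (the cast covers the case where it is stored as a string):
-- illustrative only: convert revenue from micros to currency units as a decimal
SELECT
  SAFE_DIVIDE(CAST(totalTransactionRevenue AS INT64), 1000000) AS totalTransactionRevenue1
FROM ecommerce.all_sessions_raw_dataprep;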
Review the full list of steps in your recipe:
Running and scheduling Cloud Dataprep jobs to BigQuery
Click Run Job
Hover over the Publishing Action that was created and click Edit.
Select BigQuery as a data sink in the left bar
Select your existing ecommerce dataset
Select Create new Table
For Table Name, type revenue_reporting
For options, select Truncate the table every run
Click Update
Review the settings, then click Run Job
Once your Cloud Dataprep job has completed (this takes 10-15 minutes), refresh your BigQuery page and confirm that the output table revenue_reporting exists.
You will know your revenue reporting table is ready when the below query successfully executes:
-- generate a report showing the most recent transactions
SELECT
  date,
  unique_session_id,
  fullVisitorId,
  totalTransactionRevenue1,
  -- push the entire checkout basket into an array
  ARRAY_AGG(DISTINCT v2ProductName) AS products_bought
FROM
  ecommerce.revenue_reporting
-- only include products that were actually bought
WHERE productQuantity > 0
GROUP BY 1,2,3,4
ORDER BY date DESC
Creating a scheduled pipeline job
Even if your pipeline is still running, you can schedule its execution in the next step so the job is re-run automatically on a regular basis to account for newer data.
Note: You can navigate and perform other operations while jobs are running.
You will now schedule a recurrent job execution. Click the Flows icon on the left of the screen.
On the right of your Ecommerce Analytics Pipeline flow click the More icon (...), then click Schedule Flow.
In the Add Schedule dialog:
For Frequency, select Weekly.
For day of week, select Saturday and unselect Sunday.
For time, enter 3:00 and select AM.
Click Save.
The job is now scheduled to run every Saturday at 3:00 AM.
IMPORTANT: You will not be able to view your scheduled flows until you set up a scheduled output destination.
- In your flow, click the output node as shown below:
Under Scheduled Destinations, click Add
In the Scheduled Publishing settings page click Add Publishing Action
Specify an output destination in BigQuery like the one you created previously.
Monitoring jobs
Click the Jobs icon on the left of the screen.
You will see the list of jobs. Wait until your job is marked as Completed.