
Often, it is convenient to run a data movement pipeline in response to an event. One of the most common scenarios is triggering a pipeline run in response to the addition or deletion of blobs in a monitored storage account. Azure Data Factory supports this functionality.
In this recipe, we shall create an event-based trigger that will invoke a pipeline whenever new backup files are added to a monitored folder. The pipeline will move backup files to another folder.
Before starting, create a new container in your storage account to hold the backup files; call it backups. Also, make sure that the Microsoft.EventGrid resource provider is registered with your subscription.
First, we create the pipeline that will be triggered when a new blob is created:
Create a new pipeline and name it pl_orchestration_recipe_7_trigger.
In this pipeline, configure a Filter activity and name it Filter for Backup. In the Settings tab, change Condition to @endswith(item().name, '.backup'):
Figure 2.36: Configuring the Filter for Backup activity
Next, update the ForEach activity: in the Settings tab, set Items to @activity('Filter For Backup').output.Value:
Figure 2.37: Updating the ForEach activity
Inside the ForEach activity, configure a Copy activity and name it Copy from Data to Backup. In the Source tab, select CsvData (the parameterized dataset created in the first recipe) as the source dataset and enter @item().name in the Filename field. In the Sink tab, select the Backups dataset.
Next, add a Delete activity (Delete1). Configure it in the following way.
In the Source tab, specify Source Dataset as CsvData. In the Filename field, enter @item().name.
In the Logging Settings tab, uncheck the Enable Logging checkbox.
NOTE
In this tutorial, we do not need to keep track of the files we deleted. However, in a production environment, you will want to evaluate your requirements very carefully: it might be necessary to set up a logging store and enable logging for your Delete activity.
Figure 2.38: The ForEach activity canvas and configurations for the Delete activity
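At this point, the inner workings of the pipeline are in place. If you open the pipeline's JSON code view, the ForEach activity and its inner activities should look roughly like the following sketch. The activity name ForEach1, the dataset parameter name filename, and the DelimitedText source and sink types are assumptions based on the parameterized CSV dataset from the first recipe; the JSON that ADF generates for you may differ in such details.

{
    "name": "ForEach1",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@activity('Filter For Backup').output.Value",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "Copy from Data to Backup",
                "type": "Copy",
                "inputs": [
                    {
                        "referenceName": "CsvData",
                        "type": "DatasetReference",
                        "parameters": { "filename": "@item().name" }
                    }
                ],
                "outputs": [
                    { "referenceName": "Backups", "type": "DatasetReference" }
                ],
                "typeProperties": {
                    "source": { "type": "DelimitedTextSource" },
                    "sink": { "type": "DelimitedTextSink" }
                }
            },
            {
                "name": "Delete1",
                "type": "Delete",
                "dependsOn": [
                    {
                        "activity": "Copy from Data to Backup",
                        "dependencyConditions": [ "Succeeded" ]
                    }
                ],
                "typeProperties": {
                    "dataset": {
                        "referenceName": "CsvData",
                        "type": "DatasetReference",
                        "parameters": { "filename": "@item().name" }
                    },
                    "enableLogging": false
                }
            }
        ]
    }
}

Note that the Delete activity points at the same parameterized CsvData dataset as the Copy source, so both the copy and the delete operate on the file the ForEach loop is currently processing.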
Next, create the trigger. Configure a new storage event trigger named trigger_blob_added: it should monitor the data container in your storage account and fire whenever a blob whose name ends with .backup is created:
Figure 2.39: Trigger configuration
After you select Continue, you will see the Data Preview blade. Click OK to finish creating the trigger.
We have created a pipeline and a trigger, but we did not assign the trigger to the pipeline. Let’s do so now.
Open the pl_orchestration_recipe_7 pipeline. Click the Add Trigger button and select the New/Edit option.
In the Add trigger blade, select the newly created trigger_blob_added trigger. Review the configurations in the Edit trigger and Data preview blades, and hit OK to assign the trigger to the pipeline:
Figure 2.40: Assigning a trigger to the pipeline
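For reference, once the trigger is attached to the pipeline, its JSON definition (visible in the trigger's code view) looks roughly like the sketch below. The subscription, resource group, and storage account segments of the scope property are placeholders for your own values; the path filters reflect the choices we made in the trigger configuration (the data container and the .backup suffix).

{
    "name": "trigger_blob_added",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/data/blobs/",
            "blobPathEndsWith": ".backup",
            "ignoreEmptyBlobs": true,
            "scope": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
            "events": [ "Microsoft.Storage.BlobCreated" ]
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "pl_orchestration_recipe_7",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}

The Blob created event appears as Microsoft.Storage.BlobCreated; a trigger that also reacts to deletions would list Microsoft.Storage.BlobDeleted in the events array.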
Publish your changes. To see the trigger in action, run the pl_orchestration_recipe_1 pipeline. That should create the backup files in the data container. The trigger we designed will invoke the pl_orchestration_recipe_7 pipeline and move the files from the data container to the backups container.
Under the hood, Azure Data Factory uses a service called Event Grid to detect changes in blob storage (that is why we had to register the Microsoft.EventGrid provider before starting with the recipe). Event Grid is a Microsoft service that allows you to send events from a source to a destination. Right now, only blob addition and deletion events are integrated.
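To give a feel for what ADF reacts to, here is an abridged, illustrative example of the kind of event Event Grid raises when a blob is created; the account name, file name, and timestamps are made up. The container and blob path embedded in the subject field are what the trigger's path filters are matched against.

{
    "topic": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
    "subject": "/blobServices/default/containers/data/blobs/flights.backup",
    "eventType": "Microsoft.Storage.BlobCreated",
    "eventTime": "2023-06-01T09:30:00Z",
    "data": {
        "api": "PutBlob",
        "contentType": "text/csv",
        "contentLength": 524288,
        "blobType": "BlockBlob",
        "url": "https://<storage-account>.blob.core.windows.net/data/flights.backup"
    }
}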
The trigger configuration options offer us fine-grained control over which files we want to monitor. In the recipe, we specified that the pipeline should be triggered when a new file with the .backup extension is created in the data container in our storage account. We can monitor the following, for example:
Files created within a particular folder (for example, airlines/).
.backup files within any container: to accomplish this, select all containers in the container field and leave .backup in the blob name ends with field.
To find out other ways to configure the trigger to monitor files in a way that fulfills your business needs, please refer to the documentation listed in the See also section.
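As a quick illustration, these choices surface in the trigger's JSON as the blobPathBeginsWith and blobPathEndsWith properties. Monitoring only the airlines/ folder of the data container would roughly produce the first fragment, while filtering on the .backup extension produces the second; the exact values your configuration generates may differ.

    "blobPathBeginsWith": "/data/blobs/airlines/"

    "blobPathEndsWith": ".backup"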
In the recipe, we worked with event triggers. The types of events that ADF supports are currently limited to blob creation and deletion; however, this selection may be expanded in the future. If you need to have your pipeline triggered by another type of event, the way to do it is by creating and configuring another Azure service (for example, a function app) to monitor your events and start a pipeline run when an event of interest happens. You will learn more about ADF integration with other services in Chapter 7, Extending Azure Data Factory with Logic Apps and Azure Functions.
ADF also offers two other kinds of triggers: a scheduled trigger and a tumbling window trigger.
A scheduled trigger invokes the pipeline at regular intervals. ADF offers rich configuration options: apart from the recurrence interval (every N minutes, hours, days, weeks, or months), you can configure start and end dates and more granular controls, such as the hour and minute of the run for a daily trigger, the days of the week for a weekly trigger, and the day(s) of the month for a monthly trigger.
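For example, a scheduled trigger that runs a pipeline at 06:30 UTC every Monday and Friday might be defined roughly as follows; the trigger name and the choice of pipeline are only illustrations.

{
    "name": "trigger_weekly_schedule",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Week",
                "interval": 1,
                "startTime": "2023-06-01T00:00:00Z",
                "timeZone": "UTC",
                "schedule": {
                    "hours": [ 6 ],
                    "minutes": [ 30 ],
                    "weekDays": [ "Monday", "Friday" ]
                }
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "pl_orchestration_recipe_1",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}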
A tumbling window trigger bears many similarities to the scheduled trigger (it will invoke the pipeline at regular intervals), but it has several features that make it well suited to collecting and processing historical data:
For instance, the start and end of each window are exposed to the invoked pipeline through the trigger().outputs.WindowStartTime and trigger().outputs.WindowEndTime system variables, so every run can process exactly the slice of data that falls within its window.
A tumbling window trigger also offers the ability to specify a dependency between pipelines. This feature allows users to design complex workflows that reuse existing pipelines.
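A sketch of what such a trigger's definition might look like, assuming a daily window and a hypothetical pipeline named pl_process_window that accepts windowStart and windowEnd parameters:

{
    "name": "trigger_tumbling_daily",
    "properties": {
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 24,
            "startTime": "2023-01-01T00:00:00Z",
            "delay": "00:00:00",
            "maxConcurrency": 1,
            "retryPolicy": {
                "count": 2,
                "intervalInSeconds": 30
            }
        },
        "pipeline": {
            "pipelineReference": {
                "referenceName": "pl_process_window",
                "type": "PipelineReference"
            },
            "parameters": {
                "windowStart": "@trigger().outputs.windowStartTime",
                "windowEnd": "@trigger().outputs.windowEndTime"
            }
        }
    }
}

Notice that the definition contains a single pipeline element rather than a pipelines list, which reflects the one-to-one relationship described next.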
Both event-based and scheduled triggers have a many-to-many relationship with pipelines: one trigger may be assigned to many pipelines, and a pipeline may have more than one trigger. A tumbling window trigger is pipeline-specific: it may only be assigned to one pipeline, and a pipeline may only have one tumbling window trigger.
To learn more about all three types of ADF triggers, start here: