Let's look at parameterized linked services in Azure Data Factory. Within Azure Data Factory, it is possible to parameterize a linked service so that you can pass dynamic values through to it at runtime. A typical use case is connecting to several different databases that sit on the same SQL server, in which case you might parameterize the database name in the linked service definition. The benefit of doing so is that you don't have to create a separate linked service for each database on that server. It is also possible to parameterize other properties of the linked service, such as the user name.

If you decide to parameterize linked services in Azure Data Factory, you can do so in the Azure Data Factory user interface, in the Azure portal, or through your chosen programming interface. When you author the linked service through the user interface, Data Factory provides built-in parameterization for some of the connectors: Amazon Redshift, Azure Cosmos DB (SQL API), Azure Database for MySQL, Azure SQL Database, Azure Synapse Analytics (formerly SQL DW), MySQL, Oracle, SQL Server, Generic HTTP, and Generic REST. To locate the parameterization options, navigate to the creation/edit blade of the linked service.

If you cannot use the built-in parameterization because you're using a different type of connector, you can still edit the JSON through the user interface. At the bottom of the linked service creation/edit blade, expand Advanced, check the Specify dynamic contents in JSON format checkbox, and then specify the linked service JSON payload. You can also access the JSON editor in a different way: after you create a linked service without parameterization, go to the management hub, find the specific linked service under Linked services, and click the Code button to edit the JSON.

Setting global parameters in an Azure Data Factory pipeline allows you to use these constants in pipeline expressions. A use case for global parameters is when you have multiple pipelines in which the parameter names and values are identical. If you use a continuous integration and deployment process with Azure Data Factory, the global parameters can be overridden, if you wish, for each and every environment that you have created.

To create a global parameter, go to the Global parameters tab in the management hub. Select New to open the creation side navigation. There, enter a name, select a data type, and specify the value of your parameter. After a global parameter is created, you can edit it by clicking the parameter's name. To alter multiple parameters at once, select Edit all.

Global parameters in an Azure Data Factory pipeline are mostly referenced in pipeline expressions. For example, if a pipeline references a resource such as a dataset or data flow, you can pass the global parameter value down through that resource's parameters. The reference syntax for global parameters in Azure Data Factory is pipeline().globalParameters.<parameterName>.
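To make these ideas concrete, here is a minimal sketch of a parameterized Azure SQL Database linked service of the kind described above. The server name, parameter name, and connection string details are illustrative placeholders rather than values from this module, and a real definition would also include authentication settings.

```json
{
    "name": "AzureSqlDatabase_Parameterized",
    "properties": {
        "type": "AzureSqlDatabase",
        "parameters": {
            "DBName": {
                "type": "String"
            }
        },
        "typeProperties": {
            "connectionString": "Server=tcp:myserver.database.windows.net,1433;Initial Catalog=@{linkedService().DBName};"
        }
    }
}
```

A dataset or activity that uses this linked service supplies a value for DBName at runtime, and that value can itself come from a pipeline expression, for example one that reads a global parameter with @pipeline().globalParameters.DefaultDBName (assuming a global parameter with that name has been created).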
When you integrate global parameters into a pipeline using continuous integration and continuous deployment with Azure Data Factory, you have two ways to implement the process: include the global parameters in the Azure Resource Manager template, or deploy them via a PowerShell script. In most CI/CD practices, it is beneficial to include global parameters in the Azure Resource Manager template. This approach is recommended because of the native integration with CI/CD: the global parameters are added as parameters in the Azure Resource Manager template, which makes it straightforward to change their values across the different environments you work in. To enable global parameters in an Azure Resource Manager template, navigate to the management hub. Be aware that once you add global parameters to an Azure Resource Manager template, it adds a factory-level setting that can override other settings, such as the Git configuration. A use case for deploying global parameters through a PowerShell script instead is when you already have such settings enabled in an elevated environment like UAT or production.

Within Azure Data Factory you can use mapping data flows, and these too can take parameters. The parameter values are set by the calling pipeline through the Execute Data Flow activity. There are three options for setting the values in the data flow activity expressions: use the pipeline control flow expression language to set a dynamic value, use the data flow expression language to set a dynamic value, or use either expression language to set a static literal value. By parameterizing mapping data flows, you make your data flows generalized, flexible, and reusable.

To add parameters to your data flow, click the blank portion of the data flow canvas to see the general properties. In the settings pane, you will see a tab called Parameters. Select New to generate a new parameter. For each parameter, assign a name, select a type, and optionally set a default value.

If you have created a data flow with parameters, you can execute it from a pipeline using the Execute Data Flow activity. Once you have added the activity to the pipeline canvas, you'll find the data flow parameters in the activity's Parameters tab. When assigning parameter values, you can use either the pipeline expression language or the data flow expression language, which is based on Spark data types. You can also combine the two, that is, pipeline and data flow expression parameters.
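As a companion sketch, this is roughly what an Execute Data Flow activity with a parameter assignment can look like in the pipeline JSON. The data flow name, parameter name, and value below are illustrative assumptions, not taken from this module, and the exact payload your factory generates may differ.

```json
{
    "name": "RunParameterizedDataFlow",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "MyParameterizedDataFlow",
            "type": "DataFlowReference",
            "parameters": {
                "outputFolder": {
                    "value": "'@{pipeline().globalParameters.EnvironmentName}/output'",
                    "type": "Expression"
                }
            }
        }
    }
}
```

In this sketch the parameter value is set with the pipeline expression language (wrapped in single quotes because it is a string), while inside the data flow itself the parameter would be referenced as $outputFolder using the data flow expression language.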