Azure Data Factory V2 – Incremental loading with configuration stored in a table – Complete solution, step by step.

This article is obsolete. A lot has changed since 2018 – both the documentation and ADF itself now cover most of the key information – so I recommend referring to the official sources, such as the “Delta copy” control-table ADF template: https://docs.microsoft.com/en-us/azure/data-factory/solution-template-delta-copy-with-control-table

This post explains things that are difficult to find even in English. That’s why I will break my rule and will not write it in my native language! For the Polish version, Google Translate is your friend :>

Introduction


Loading data using Azure Data Factory v2 is really simple. Just drop a Copy activity into your pipeline, choose the source and sink tables, configure a few properties and that’s it – done with just a few clicks!

But what if you have dozens or hundreds of tables to copy? Are you gonna do it for every object?

Fortunately, you do not have to do this! All you need is dynamic parameters and a few simple tricks 🙂

Also, this will give you the option of creating incremental feeds, so that – on the next run – only newly added data will be transferred.

Mappings

Before we start diving into details, let’s demystify some basic ADFv2 mapping principles.

  • The Copy activity doesn’t need to have column mappings defined at all,
  • it can map them dynamically using its own mechanism, which retrieves source and destination (sink) metadata,
  • if you use PolyBase, it maps by column order (1st column from the source to 1st column at the destination, etc.),
  • if you do not use PolyBase, it maps columns by their names – but watch out, the matching is case sensitive!
  • So all you have to do is keep the same structure and data types on the destination (sink) tables as in the source database.

Bear in mind that if your columns differ between source and destination, you will have to provide custom mappings. This tutorial doesn’t show how to do that, but it is possible: use the “Get Metadata” activity to retrieve the column specification from the source, then parse it and pass it as a JSON structure into the mapping’s dynamic input. You can read about mappings in the official documentation: https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping

String interpolation – the key to success

My entire solution is based on one cool feature called string interpolation. It is part of the built-in expression engine and allows you to inject any value from a JSON object, or any expression, directly into a string input, without any concatenation functions or operators. It’s fast and easy. Just wrap your expression between  @{ ... }  and it will always be returned as a string.

Below is a screenshot from the official documentation that clarifies how this feature works:

Read more about JSON expressions at https://docs.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#expressions
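For example, the watermark query used later in this article can be written in a dynamic content box in either of these two equivalent ways – the interpolated version is far easier to read:

With string interpolation:
SELECT MAX(@{item().WatermarkColumn}) as maxd FROM @{item().SRC_tab}

The same query built with concat():
@concat('SELECT MAX(', item().WatermarkColumn, ') as maxd FROM ', item().SRC_tab)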

 

So what are we going to do? :>


Good question 😉

In my example, I will show you how to transfer data incrementally from Oracle and PostgreSQL tables into Azure SQL Database.

All of this using a configuration stored in a table which, in short, keeps the Copy activity settings needed to achieve our goal 🙂

Adding new definitions to the config will also automatically enable the transfer for them, without any need to modify the Azure Data Factory pipelines.

So you can transfer as many tables as you want, in one pipeline, at once. Triggering with one click 🙂

 

Every process needs a diagram :>

 

 

Basically, we will do:

  1. Get the configuration from our config table in Azure SQL Database using a Lookup activity, then pass it to Filter activities to split the configs for Oracle and PostgreSQL.
  2. In a ForEach activity created for each type of database, we will build simple logic that retrieves the maximum update date from every table.
  3. Then we will dynamically prepare the expressions for the SOURCE and SINK properties of the Copy activity. MAX UPDATEDATE, retrieved above, and the previous WATERMARK DATE, retrieved from the config, will set our boundaries in the WHERE clause. Every detail, like the table name or the column list, will be passed into the query using string interpolation, directly from the JSON expression. The sink destination will also be parametrized.
  4. Azure Data Factory can now execute queries evaluated dynamically from JSON expressions, and it will run them in parallel to speed up the data transfer.
  5. Every successfully transferred portion of incremental data for a given table has to be marked as done. We can do this by saving MAX UPDATEDATE in the configuration, so that the next incremental load knows what to take and what to skip. We will use the Stored Procedure activity for this.
This example simplifies the process as much as possible. Remember that in your own solution you have to implement logic for every unsuccessful operation. You can achieve that using an On Failure control flow with some activities (chosen depending on your needs) and timeout/retry options set individually for every activity in your pipeline.

 

About sources

I will use PostgreSQL 10 and Oracle 11 XE installed on Ubuntu 18.04 inside a VirtualBox machine.

In Oracle, tables and data were generated from the EMP/DEPT samples delivered with the XE version.

In PostgreSQL – from the DVD rental sample database: http://www.postgresqltutorial.com/postgresql-sample-database/

 

I simply chose the three largest tables from each database. You can find them in the configuration shown below.

 

Every database is accessible from my self-hosted Integration Runtime. I will show an example of how to add a server to Linked Services, but I will skip configuring the Integration Runtime itself. You can read about creating a self-hosted IR here: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime.

 

About configuration

In my Azure SQL Database I have created a simple configuration table:

Id is just an identity value; SRC_name is the type of the source server (ORA or PG).

The SRC_tab and DST_tab columns map the source and destination objects. Cols defines the selected columns; WatermarkColumn and WatermarkValue store the incremental metadata.

And finally, Enabled just enables a particular configuration (table data import).

As Andy rightly noted in a comment below this post, you can also use “Cols” to implement SQL logic, like functions, aliases etc. The value from this column is written directly into the query (more precisely – concatenated between the SELECT and FROM clauses), so you can use it according to your needs.

 

This is how it looks with initial configuration:

Create script:
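The original script is not embedded here, so below is a minimal sketch that matches the columns used throughout this article. The data types and lengths are my assumptions – adjust them to your own sources.

CREATE SCHEMA [load];
GO

CREATE TABLE [load].[cfg]
(
    Id              INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SRC_name        VARCHAR(10)   NOT NULL, -- source type: 'ORA' or 'PG'
    SRC_tab         NVARCHAR(200) NOT NULL, -- source table name
    DST_tab         NVARCHAR(200) NOT NULL, -- destination (sink) table name
    Cols            NVARCHAR(MAX) NOT NULL, -- column list injected between SELECT and FROM
    WatermarkColumn NVARCHAR(200) NOT NULL, -- column used to detect new rows
    WatermarkValue  DATETIME2     NOT NULL, -- watermark of the last successful load
    Enabled         BIT           NOT NULL DEFAULT 1 -- 1 = table takes part in the load
);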

 

EDIT 19.10.2018

Microsoft announced that now you can also parametrize linked services (connections)!

https://azure.microsoft.com/en-us/blog/parameterize-connections-to-your-data-stores-in-azure-data-factory/

Let’s get started (finally :P)


Preparations!

Go to your Azure Data Factory portal @ https://adf.azure.com/

Select the Author button with the pencil icon:

 

Creating server connections (Linked Services)

We can’t do anything without defining Linked Services, which are just connections to your servers (on-premises and cloud).

  1. Go to Connections and click New.
  2. Find your database type, select it and click Continue.
  3. Provide all the required data, like server IP/host, port, SID (Oracle needs this), login and password.
  4. You can test the connection to check if everything is OK. Click Finish to save your connection definition.
Some types of servers, such as PostgreSQL or MySQL, require separate .NET drivers. Check your server type in Microsoft Docs and look at the Prerequisites section to match your scenario.

I have created three connections. Here are their names and server types:

 

Creating datasets

Creating linked services is just telling ADF what the connection settings are (like connection strings).

Datasets, on the other hand, point directly to database objects.

BUT they can be parametrized, so you can create just ONE dataset and reuse it, passing different parameters to get data from multiple tables within the same source database 🙂

Source datasets

Source datasets don’t need any parameters. We will later use built-in query parametrization to pass object names.

  1. Go to Datasets, click + and choose a new dataset.
  2. Choose your dataset type, for example Oracle.
  3. Rename it as you like. We will use the name: “ORA”
  4. Set the proper Linked service option, just like this for the Oracle database: 
  5. And that’s it! No need to set anything else. Just repeat these steps for every source database that you have.

In my example, I’ve created two source datasets, ORA and PG

As you can see, we also need to create a third dataset. It will work as a source too, BUT also as a parametrizable sink (destination), so creating it is a little different from the others.

Sink dataset

Sinking data needs one extra parameter, which will store the destination table name.

  1. Create the dataset just like in the previous example and choose your destination type. In my case, it will be Azure SQL Database.
  2. Go to Parameters and declare one String parameter called “TableName”. Set the value to anything you like – it’s just a dummy value; ADF doesn’t like empty parameters, so we have to set a default.
  3. Now go to Connection and set Table as dynamic content. This will be tricky :). Just click “Select…”, don’t choose any value, just click somewhere in the empty space. The magic option “Add dynamic content” now appears! You have to click it or hit Alt+P. 
  4. The “Add Dynamic Content” window is now visible. Type “@dataset().TableName” or just click “TableName” in the “Parameters” section below “Functions”.
  5. The table name is now parameterized and looks like this (a rough JSON sketch of the whole dataset follows below): 
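For reference, this is roughly how the dataset JSON looks with the parameter in place. The dataset and linked service names are illustrative, and the exact JSON generated by the UI may differ slightly:

{
    "name": "ASQL_Sink",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "AzureSqlDatabase_LS",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "TableName": {
                "type": "String",
                "defaultValue": "dummy"
            }
        },
        "typeProperties": {
            "tableName": {
                "value": "@dataset().TableName",
                "type": "Expression"
            }
        }
    }
}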

 

Parametrizable PIPELINE with dynamic data loading.


Ok, our connections are defined. Now it’s time to copy data :>

 

Creating pipeline

  1. Go to your ADF, click the PLUS symbol near the search box on the left and choose “Pipeline“: 
  2. Rename it. I will use “LOAD DELTA“.
  3. Go to Parameters and create a new String parameter called ConfigTable. Set the value to our configuration table name: load.cfg . This simply parametrizes your configuration source, so that in the future it would be possible to load a completely different set of sources by changing only one parameter :>
  4. In case you missed it, SAVE your work by clicking “Save All” if you’re using Git, or “Publish All” if not ;]

 

Creating Lookup – GET CFG

First, we have to get configuration. We will use Lookup activity to retrieve it from the database.

Bear in mind that the Lookup activity has some limits. Currently, the maximum number of rows it can return is 5000, with a size of up to 2 MB. Also, the maximum duration for a Lookup activity before timeout is one hour. Check the documentation for the latest info and updates.
    1. Drag and drop a Lookup activity into your pipeline.
    2. Rename it. This is important – we will use this name later in our solution. I will use the value “GET CFG“.
    3. In “Settings” choose the parametrized Azure SQL dataset we created earlier.
    4. Don’t worry that TableName is set to the dummy value :> Just set “Use query” to “Query“, click “Add dynamic content” and type a query like the sketch after this list.
    5. Untick “First row only“ – we need all rows, not just the first. It should all look like this:
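The exact query is not reproduced from the original screenshot; something along these lines does the job, assuming the Enabled flag described earlier is used to switch configurations on and off:

SELECT * FROM @{pipeline().parameters.ConfigTable} WHERE Enabled = 1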

 

Creating Filters – ORA CFG & PG CFG

Now we have to split the configs for Oracle and PostgreSQL. We will use Filter activities on the rows retrieved by the “GET CFG” lookup.

  1. Drag and drop the Filter activity twice.
  2. Rename the first block to “ORA CFG“, the second to “PG CFG“.
  3. Now go to “ORA CFG“, then “Settings“.
  4. In Items, click Add dynamic content and type:  @activity('GET CFG').output.value . As you have probably guessed, this points directly to the GET CFG output rows 🙂
  5. In Condition, click Add dynamic content and type: @equals(item().SRC_name,'ORA') . We have to match the rows with the Oracle settings. We know that there is a column in the config table called “SRC_name“, so we can use it to filter out all rows except those with the value ‘ORA’ 🙂 .
  6. Do the same with the “PG CFG“ filter activity. Of course, change the value in the condition to ‘PG’.

It should look like this:

Creating ForEach – FOR EACH ORA & FOR EACH PG

Now it’s time to iterate over each row filtered in separate containers (ORA CFG and PG CFG).

  1. Drag and drop two ForEach blocks and rename them “FOR EACH ORA” and “FOR EACH PG“. Connect each one to the proper filter activity, just like in this example:  
  2. Click “FOR EACH ORA“, go to “Settings“, in Items click Add dynamic content and type:  @activity('ORA CFG').output.value . We are telling the ForEach that it has to iterate over the results returned by “ORA CFG”. They are stored in a JSON array.
  3. Do the same in FOR EACH PG. Type:  @activity('PG CFG').output.value
  4. For now, you can edit the Activities and add only a “Wait” activity to debug your pipeline. I will skip this part. Just remember to delete the Wait block at the end of your tests.

 

Inside ForEach – GET MAX ORA -> COPY ORA -> UPDATE WATERMARK ORA

Place these blocks into FOR EACH ORA. Just go there, click “Activities” and edit the inner activities.

Every row that the ForEach activity iterates over is accessible using @item() .

And every column in that row can be reached just by using  @item().ColumnName .

Remember that you can surround every expression with @{ }  brackets to use string interpolation. Then you can concatenate it with other strings and expressions just like that:  Value of the parameter WatermarkColumn is: @{item().WatermarkColumn}

 

GET MAX ORA

  1. Go to “GET MAX ORA“, then Settings.
  2. Choose your source dataset “ORA“, set Use query to “Query” and click Add dynamic content.
  3. Type  SELECT MAX(@{item().WatermarkColumn}) as maxd FROM @{item().SRC_tab} . This will get the maximum date in your watermark column. We will use it as the RIGHT BOUNDARY of the delta slice.
  4. Check that First row only is turned on.

It should look like this:

 

COPY ORA

Now the most important part :> The Copy activity with a lot of parametrized things… So pay attention – it’s not that hard to understand, but every detail matters.

Source

  1. In the source settings, set Source Dataset to ORA and Use query to Query.
  2. Below the Query input, click Add dynamic content and paste a query like the sketch below:
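The exact query from the original screenshot is not reproduced here, so below is a minimal sketch of the dynamic content, based on the explanation that follows. The TO_DATE format masks are an assumption – adjust them to whatever string representations your WatermarkValue column and the GET MAX ORA output actually produce (switch to TO_TIMESTAMP if fractional seconds appear):

SELECT @{item().Cols}
FROM @{item().SRC_tab}
WHERE @{item().WatermarkColumn} > TO_DATE('@{item().WatermarkValue}', 'YYYY-MM-DD"T"HH24:MI:SS')
  AND @{item().WatermarkColumn} <= TO_DATE('@{activity('GET MAX ORA').output.firstRow.MAXD}', 'YYYY-MM-DD"T"HH24:MI:SS')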

Now, this needs some explanation 🙂

 

 

  • The ORA CFG output has all the columns and their values from our config.
  • We will use SRC_tab as the table name, Cols as the columns for the SELECT query, WatermarkColumn as the name of the LastChange DateTime column and WatermarkValue as the LEFT BOUNDARY (greater than, >).
  • The GET MAX ORA output stores the date of the most recently updated row in the source table, so we use it as the RIGHT BOUNDARY (less than or equal, <=).
  • And the tricky thing: Oracle doesn’t support implicit conversion from a string with an ISO 8601 date, so we need to convert it explicitly with the TO_DATE function.

So the source is a query from ORA dataset:

 

Sink

Sink is our destination. Here we will set parametrized table name and truncate query.

  1. Select the sink dataset we created earlier (the parametrized Azure SQL one).
  2. Parametrize TableName as dynamic content with the value:  @{item().DST_tab}
  3. Also, do the same with the Pre-copy script and put there:  TRUNCATE TABLE @{item().DST_tab}
As De jan properly noticed in the comments, you are not obliged to use any Pre-copy script here. You can leave this box empty or run other commands, like partition switching.

It should look like this:

 

Mappings and Settings

Everything else can be left at the defaults. You don’t have to parametrize mappings if you just copy data between tables that have the same structure.

Of course, you can create them dynamically if you want, but it is good practice to transfer data 1:1 – both structure and values – from source to staging.

 

UPDATE WATERMARK ORA

Now we have to confirm that the load has finished and then update the previous watermark value with the new one.

We will use a stored procedure. The code is simple:
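The procedure script is not shown inline here, so below is a minimal sketch consistent with the parameters used in ADF a few steps further down. The schema, procedure name and data types are my assumptions:

CREATE PROCEDURE [load].[usp_UpdateWatermark]
    @Id INT,
    @NewWatermark DATETIME2
AS
BEGIN
    SET NOCOUNT ON;

    -- overwrite the previous watermark for this configuration entry
    UPDATE [load].[cfg]
    SET WatermarkValue = @NewWatermark
    WHERE Id = @Id;
END;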

Create it on your Azure SQL database. Then use it in ADF:

  1. Drop a Stored Procedure activity into the project, connect the success constraint from COPY ORA into it. Rename it “UPDATE WATERMARK ORA” and view its properties.
  2. In SQL Account set the linked service pointing to your Azure SQL Database (the one holding the config table).
  3. Now go to “Stored Procedure”, select our procedure name and click “Import parameter”.
  4. Now we have to pass values for the procedure parameters, and we will parametrize them as well. Id should be  @{item().id}  and NewWatermark has to be:  @{activity('GET MAX ORA').output.firstRow.MAXD} .

 

And basically, that’s all! This logic should copy rows from all Oracle tables defined in the configuration.

We can now test it. This can be done with “Debug” or just by triggering pipeline run.

If everything is working fine, we can just copy/paste all content from “FOR EACH ORA” into “FOR EACH PG“.

Just remember to properly rename all activities to reflect the new source/destination names (PG). Also, all parameters and SELECT queries have to be redefined. Luckily, PostgreSQL supports ISO dates out of the box, so no TO_DATE-style conversion is needed.
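For reference, here is a hedged sketch of what the PostgreSQL source query can look like (activity names follow the convention above; note that PostgreSQL folds the unquoted maxd alias to lower case, so check the actual property name in your GET MAX PG output):

SELECT @{item().Cols}
FROM @{item().SRC_tab}
WHERE @{item().WatermarkColumn} > '@{item().WatermarkValue}'
  AND @{item().WatermarkColumn} <= '@{activity('GET MAX PG').output.firstRow.maxd}'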

Source code


Here are all the components in JSON. You can use them to copy/paste the logic directly inside the ADF V2 code editor or save them as files in a Git repository.

Below is the source code for the pipeline only. Everything else can be downloaded as a zip file from “Download all” at the bottom of this article.

Pipeline

 

Download all

IncrementalCopy_ADFv2.zip

 

130 thoughts on “Azure Data Factory V2 – Incremental loading with configuration stored in a table – Complete solution, step by step.”

  1. Hi Michal, you are absolutely correct – fileName is declared as a pipeline parameter and filled with a value at the sink destination. The folder path can be simplified:

    "folderPath": {
    "value": "@concat('/dev-raw-data-zone/oracle_erp_full_tables/', formatDateTime(pipeline().parameters.windowStart, 'yyyy/MM/dd'))",
    "type": "Expression"
    }
    Want to thank you again – your example is the most complete, understandable and comprehensive learning guideline I was able to find online.
    Bruce.

  2. How is it that you can have TRUNCATE TABLE in your pipeline? Your case is not appending new records to a table that holds everything, but just copying new records to a staging table, right?
    Simply removing the TRUNCATE TABLE statement would append records. It would work for a system that never updates records. If records are modified and they come over again, there would be a need to do some merging or overwriting.

    1. @De jan
      Thank you for pointing that out 🙂

      Of course, it always depends on your needs.
      This example shows that you can use a “Pre-copy script” before the load starts.
      In other cases, this field could be empty or could, for example, run partition switching or even partition truncation (yes, since MSSQL 2016 we can truncate partitions).

      ETL staging is a huge topic. I have worked with many scenarios in different projects, so maybe it was too obvious to me that there is a choice 🙂
      I have added the relevant information under the “pre-script” example. Thanks!

  3. Hello Michał,

    First of all, thank you very much for such a detailed guide. It’s been a great help for me while trying to implement an incremental load solution from Azure SQL Database to Azure SQL Data Warehouse.

    However, if it’s not too much trouble, I’d appreciate some help in the COPY ORA section, step 3: “Also, do the same with Pre-copy script and put there: TRUNCATE TABLE @{item().DST_tab}”.

    Could you please elaborate on why this TRUNCATE TABLE is needed? As it stands, my baseline version of the target table is being truncated and once the pipeline ends it remains empty.

    I’m struggling to understand what the role of this TRUNCATE command is.

    Thanks in advance.

    Regards,
    Pedro

  4. Hello Michal

    Thanks for sharing the post, it’s really nice,
    Can you please suggest me something about:

    I have a csv file in blob storage that is updated daily, with the name of the file changing (e.g. Products.25-03-2018, and the data changes every day). I created a data factory to take the csv file from blob storage to an Azure SQL database, but I cannot figure out how to resolve this – maybe with dynamic content?
    If you can suggest something to do, that would be great!

    Second, in your example above, if the source and the sink are the same Azure SQL database, how will it work?

    Thanks in advance

    Fraz

    1. Hi Fraz!
      So, if I understand it correctly, you have a problem with dynamic parametrization of a source file.
      And the pattern of the filename is always: Products.DD-MM-YYYY.csv

      If this is a case where you just need to take Products.25-03-2018.csv on the day 25.03.2018, and the pipeline will be run for that particular day – then it’s a matter of using two (or three) functions for dynamic concatenation in your source file expression.

      Look at the documentation here:
      https://docs.microsoft.com/en-us/azure/data-factory/control-flow-expression-language-functions#date-functions

      1. THE CURRENT DATE
      You can always return the current date and time in UTC format using: utcnow().
      Now the question is – do your file names use a date taken from the UTC timestamp, or is it in another timezone? And if this timezone has daylight saving time, you should also consider that in your expression. Depending on the answers, you can use the addhours() function, because it looks like Data Factory expressions do not have any timezone-aware date functions (yet?)

      2. FORMATTING TO YOUR NEEDS
      Then, after using utcnow() and anything else that will give you the proper date, you can format it to a string using a custom format.
      Look at the formatDateTime() function – you can use it with a custom format declaration taken directly from the .NET platform. It’s described here: https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings

      So finally it should look something like this:
      formatDateTime(utcnow(), 'dd-MM-yyyy')

      3. CONCATENATE STRING TO MATCH YOUR FILES:
      So in the final stage, it should be concatenated with everything required to identify your file. Here we can use another function, but of course you can also use the string interpolation explained in the article 🙂

      concat('Products.', formatDateTime(utcnow(), 'dd-MM-yyyy'), '.csv')

      The answer to your question about whether it matters which db is the source and which is the sink … Well, in general – no, but of course I’m using some special functions like TO_DATE which do not exist in SQL Server, so you have to replace anything that applies to Oracle or PG with the proper equivalent available in the MS database 😉 That’s all, everything else should work just like in the example above ;]

      Regards,
      m.

      1. Hi Michal
        Thank you very much for the detailed explanation, I am going to try it today and if there is anything I could not do, I may bother you once more,

        Thanks again, appreciate your support!
        Fraz

  5. My files are in an ADLS Gen2 raw area. I want to copy a few columns from individual files to an ADLS Gen2 STAGE area, and I want to achieve it dynamically. I just have one copy activity. I want the file mappings to be stored in a SQL table, read them and copy only the required columns for individual files as per the mapping. Any help please?

    If I understand correctly, you have to define the exact column names of individual files on data lake storage to utilize the schema/column mapping option of ADF. That way it can’t be dynamic. The only way seems to be with Databricks.

    1. Hi Sanjeet.
      This requires a different approach.
      The SQL commands are fortunately flexible, so you can construct a query as you like. A column list can be just a definition stored as a query-string fragment, so we can easily keep it as text in the database.

      With semi-structured files like CSV it is a different story. Unfortunately, we do not use SQL to query them.

      Nevertheless, you need to dig deeper into the topic of schema mappings:
      https://docs.microsoft.com/bs-latn-ba/azure/data-factory/copy-activity-schema-and-type-mapping
      With some definitions stored in your database, you need to dynamically build a "mappings": [] section. TIP: if your database supports JSON formatting, maybe you should try to convert your query output to JSON before you retrieve the mappings from the config table. At least that’s what I would do if I had to submit JSON to ADF 🙂
      Then you can pass this mapping definition as a dynamic parameter in your copy activity:
      http://sql.pawlikowski.pro/wp-content/uploads/2019/05/mapping_add_dynamic_content.png
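      Just to illustrate the shape of that section, here is a rough sketch with made-up column names (the structure follows the schema mapping documentation linked above):

      "translator": {
          "type": "TabularTranslator",
          "mappings": [
              { "source": { "name": "cust_id" },   "sink": { "name": "CustomerId" } },
              { "source": { "name": "cust_name" }, "sink": { "name": "CustomerName" } }
          ]
      }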

    2. Just one more thing.
      Remember that you can use Get Metadata Activity to dig into the specific file and check the structure.
      https://docs.microsoft.com/bs-latn-ba/azure/data-factory/control-flow-get-metadata-activity
      I didn’t test it, but it looks like ADLS Gen2 is supported and you can get the structure and even columnCount. This is something that can also be used to get things dynamically.
      I can imagine a scenario where I use it to retrieve the structure and then parse it (with ADF expressions, or maybe in my SQL Server database with some additional JSON parsing commands) 🙂

  6. Dear Michal,

    Thanks, this article is really helpful.

    Could you please advise whether this will also take care of updated columns? I.e., if any value has been updated in the source database, will it be updated in the destination?

    Regards,
    Ashish

    1. Ashish,
      Hmm, if I understood you correctly, the answer is no.
      This mechanism uses a watermark column to detect the change. It’s a simple one: if the value has changed since the last load – take the entire row.

      Updates on the source – that’s just a different issue and I do not describe it in my article, even though it is one of the most common cases these days.
      So if you update one column, for example COUNTRY_NAME, without marking the change in the watermark column, which is UPDATE_DATE, the change is not going to be tracked and no update will be sent to the destination system.

      1. Dear Michal,

        Many thanks for your prompt response.

        Apologies, I wasn’t clear in my query. My original question was: ‘If a column is updated in a table along with its watermark column, would that update the target table as well?’.

        For example, if ‘COUNTRY_NAME’ has been updated along with the watermark column, which is ‘UPDATE_DATE’, would that be updated in the target database?

        Thanks ,
        Ashish


          1. Ok, I think your question is related to the fact that you are confusing a stage load with replication 🙂 With my example you copy the entire row from the source table to the destination staging table. A table record on the destination is never updated – it is always inserted as a new row. This is part of a process that is most common in data warehousing, where you first load all rows with all column values into an empty table, and then, according to your needs, you merge or refresh partitions into your core tables, updating old rows with new ones.

            1. Thank you so much Michal, this is the most complete article I have found on the internet for Azure 🙂

              To maintain our database, we will add one more activity to our flow, which is to delete and insert updated rows from the staging table into the target data system 🙂

            Thanks,
            Ashish

            1. That should be fine, as long as it fits your scale and time window.

              You see, for bigger systems (over dozens of terabytes of storage, hundreds or even dozens of gigabytes of data to load), DELETE operations cost too much for the engine and generate bloat in your transaction logs.
              In Azure Data Warehouse (or Parallel Data Warehouse, APS systems, or any other MPP system) it’s better to divide the load into partitions and create them once again by UNIONing old and new data, switching the partitions and dropping the old one. A good example is the usage of CTAS (CREATE TABLE AS SELECT) mentioned in the best practices here: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-best-practices#minimize-transaction-sizes

              You can try to build it in your data warehouse as well, but of course only if the scale is (or will be) that big.

              That sounds more complicated to implement, but in the end it is cheaper than the DELETE process (BULK INSERT is always faster, and DELETE also requires mechanisms like ghost record cleanup in SQL Server or the VACUUM process over dead tuples in PostgreSQL). But this is another topic 🙂
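              Just to give you an idea, the CTAS pattern from that link boils down to something like this – table names, columns and distribution options are made up for illustration:

              -- rebuild the table from the surviving rows plus the new delta, instead of running DELETE
              CREATE TABLE dbo.FactSales_new
              WITH (DISTRIBUTION = HASH(AccountId), CLUSTERED COLUMNSTORE INDEX)
              AS
              SELECT * FROM dbo.FactSales WHERE LoadDate < '2018-01-01'
              UNION ALL
              SELECT * FROM stg.FactSales_delta;

              -- swap the tables and drop the old one
              RENAME OBJECT dbo.FactSales TO FactSales_old;
              RENAME OBJECT dbo.FactSales_new TO FactSales;
              DROP TABLE dbo.FactSales_old;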

  7. Hi Michal,

    Great Article!!!

    I have a question about the timestamp column. Suppose we have thousands of tables that do not have a timestamp column, and we do not want to alter those tables to add such a column either.

    How would I fetch the delta values for thousands of tables which don’t have a timestamp value?

    Please suggest

    1. That is a really tricky question, because the answer is of course: “it depends”.

      The first and most obvious scenario is to use a trigger on the table and create logic that stores the updated row identifier and a timestamp somewhere else (in a prepared structure).
      But again, if we are talking about a critical transactional system with the requirement of keeping INSERTs/UPDATEs as short as possible – triggers are the ugliest and most inefficient way to do it. But sometimes there is no other way.

      Anyway, if we are talking about SQL Server as a source, there are some other possible solutions, less CPU- and IO-intensive than triggers: Change Data Capture or Change Tracking. You can check them in the documentation, but first read all the requirements and limitations – there are always some points to consider.

  8. Dear Michal,

    We have implemented this solution and it is working as expected – thank you.

    We need a little help on the SQL procedure side to update the timestamp in the watermark table. We have created a parameterised proc and are passing the table name as well in a variable; we had to create dynamic SQL to achieve this.

    This is giving us the error ‘Conversion failed when converting date and/or time from character string’. We are using the below syntax in our procedure:

    set @SQL = 'update' + @tablename + 'set watermarkvalue =' + @Newwatermarkvalue + 'where id =' + @id;

    Execute sp_executesql @SQL

    Please can you advise on any workaround for this?

    Regards,
    Ashish

    1. Hi Ashish,

      first of all – what stands behind your parameters? What types are they and what values do they have?
      Your error message indicates that you have a problem with conversion to a date, but without any values I cannot suggest any solution.

      Below is just a simple example, but whether it is helpful or not depends on your scenario.
      It uses temporary objects and rolls back at the end, so you can run it anywhere, anytime.


      SET NOCOUNT ON;
      BEGIN TRAN;

      CREATE TABLE #test_watermark
      (
          id             INT      DEFAULT 1,
          watermarkvalue DATETIME DEFAULT GETDATE()
      );
      GO

      INSERT INTO #test_watermark
      DEFAULT VALUES;
      GO

      CREATE PROC #up_watermark
      (
          @tablename         sysname,
          @Newwatermarkvalue NVARCHAR(100),
          @id                NCHAR(1)
      )
      AS
      BEGIN
          DECLARE @SQL NVARCHAR(MAX);
          SET @SQL = N'UPDATE ' + @tablename + N' SET watermarkvalue = ' + @Newwatermarkvalue + N' where id = ' + @id;
          PRINT @SQL;
          EXEC sp_executesql @SQL;
      END;
      GO

      SELECT *
      FROM #test_watermark;

      EXEC #up_watermark @tablename = '#test_watermark',        -- sysname
                         @Newwatermarkvalue = '''2010-01-01''', -- nvarchar(100)
                         @id = '1';                             -- nchar(1)

      SELECT *
      FROM #test_watermark;

      ROLLBACK;

      1. Hi Michal,

        Thank you.

        I have created the below procedure and can see from ADF that the below values are getting passed.

        create PROCEDURE sp_UpdateWatermark
        (
            @id nvarchar(10),
            @NewWatermark datetime,
            @TableName NVARCHAR(200)
        )
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @Sql NVARCHAR(500)
            SET @Sql = 'Update ' + @TableName + 'set WatermarkValue=' + convert(varchar(200), @NewWatermark, 120) + 'Where id =' + @id;
            EXECUTE sp_executesql @Sql
        END

        --------------- values passed from ADF ----------
        "id": {
            "value": "4",
            "type": "String"
        },
        "NewWatermark": {
            "value": "2019-08-01T12:37:43Z",
            "type": "DateTime"
        },
        "TableName": {
            "value": "CFG.DW_ACCOUNT_CHARGE_DTL_HIST_V",
            "type": "String"
        }

        1. Hi Ashish,
          ok, now I get it. So the problem is the Zulu time passed from ADF expressions, and you want to save the new value in the config table. That is strange – I remember that my solution worked out of the box in Azure SQL Database and no conversion was needed. Nevertheless, you can pass the date with Zulu time as a string, then convert it to datetime (style 127) and back to a string (style 120).
          SELECT '''' + CONVERT(NVARCHAR(100), CONVERT(datetime, '2019-08-01T12:37:43Z', 127), 120) + ''''

  9. Hi Michal,

    thank you. It’s me again 🙂 apologies for so many queries.

    We have now copied the data into our staging tables. You have already suggested the CTAS option to maintain the data warehouse. Is it achievable to write that in ADF itself?

    We want to update and insert into dimension tables in the same pipeline.

    Regards,
    Ashish

  10. Hi Michal,

    Please ignore my question above. I have created a parameterised procedure to achieve this in the ADF pipeline 🙂

    Thanks,
    Ashish
