Loading many columns in FDMEE


Need to load several detailed values for the same POV and account?
Very often we need to load detailed amounts instead of a single total value, so that the composition of the total is visible: here AA Balance and GP Balance instead of TL Balance.



How to do that?

The solution is to load both values in the standard way, i.e. as one line in FDMEE, and then duplicate that line, swapping in the second amount.
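To make the idea concrete, here is a small, purely illustrative Python sketch of what "duplicating the line while switching values" means for one imported row. The field names (C4GP, UD6) anticipate the FDMEE setup described in the steps below:

```python
# Illustrative only: one imported row becomes two rows in the staging
# table -- the duplicate carries the second amount and a "C4GP" tag
# (field names anticipate the FDMEE setup described in the steps below).
row = {"ACCOUNT": "1020", "ENTITY": "00765",
       "AMOUNT": 50000.00,      # AA_Balance, the standard amount
       "UD4": "C4",             # custom field used to tell lines apart
       "UD6": "1000.00"}        # GP_Balance, captured via a LOOKUP dimension

dup = dict(row)
dup["AMOUNT"] = float(dup["UD6"])   # the second value becomes the amount
dup["UD4"] = "C4GP"                 # tag the duplicated line
dup["UD6"] = ""                     # lookup column no longer needed

rows = [row, dup]
print([r["AMOUNT"] for r in rows])  # -> [50000.0, 1000.0]
```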

Let’s go further.

Step 1:

In “Target application”, in addition to the standard dimensions, we add a dimension of type “LOOKUP” to hold the second value, associated with a UDx data table column, here UD6:


Step 2:

Consider the following source file:

Entity;BU;Acc;Sub;Acc Desc;AA_Balance;GP_Balance;TL_Balance
00765;765000002000;1020;1;First Curr Balance;50000.00;1000.00;51000.00
00730;730000002000;1020;11;Second Curr Balance;30000.00;5000.00;35000.00
00765;765000002000;1020;11;First DKK Balance;75600.00;789.00;76389.00
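As a quick sanity check on the layout (outside FDMEE), the file parses with a “;” delimiter and TL_Balance is simply the sum of the two detailed columns:

```python
import csv
from io import StringIO

# The sample extract from above; AA_Balance will feed the standard
# amount and GP_Balance the UD6 lookup column.
sample = """Entity;BU;Acc;Sub;Acc Desc;AA_Balance;GP_Balance;TL_Balance
00765;765000002000;1020;1;First Curr Balance;50000.00;1000.00;51000.00
00730;730000002000;1020;11;Second Curr Balance;30000.00;5000.00;35000.00
00765;765000002000;1020;11;First DKK Balance;75600.00;789.00;76389.00"""

rows = list(csv.DictReader(StringIO(sample), delimiter=";"))
for r in rows:
    # every line: TL_Balance = AA_Balance + GP_Balance
    assert float(r["AA_Balance"]) + float(r["GP_Balance"]) == float(r["TL_Balance"])
print(len(rows))  # -> 3
```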

In “Import format”, in addition to the “standard” amount (here AA_Balance), add a new line to retrieve GP_Balance:


Step 3:

To prevent the lines from being summed, we have to specify a custom field (C4, for example).

During the import process FDMEE uses a temporary data table: TDATASEG_T. We need to create a script on the “AftImport” event.
This script will duplicate the lines to be imported and rearrange the data:

In the first part of the query we specify the fields to insert. Then, in the SELECT, we put the UDx chosen above (here UD6) in place of AMOUNT and AMOUNTX. Because we want to be able to identify the duplicated lines, we replace the value of C4 (UD4) with “C4GP”. Finally, UD6 and UD6X, no longer needed, are set to an empty value.
This works fine with SQL Server. With Oracle Database, DATAKEY is mandatory, so the beginning of the SELECT query looks like:

   query = query + " select DATAKEY + (select count(DATAKEY) from FDM.TDATASEG_T where LOADID= ? ), LOADID …
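Putting the pieces together, here is a condensed, hypothetical Jython/Python sketch of the AftImport logic. The column list is heavily abbreviated for readability; a real script must enumerate every mandatory TDATASEG_T column, and the WHERE clause assumed here only duplicates lines that actually carry a second value:

```python
# Hypothetical, abbreviated sketch of the duplicate-and-transpose query.
# A production script must list every mandatory TDATASEG_T column.
def build_duplicate_query(table="TDATASEG_T"):
    cols = ("LOADID, PARTITIONKEY, CATKEY, PERIODKEY, "
            "ACCOUNT, ENTITY, UD4, UD6, UD6X, AMOUNT, AMOUNTX")
    select = ("SELECT LOADID, PARTITIONKEY, CATKEY, PERIODKEY, "
              "ACCOUNT, ENTITY, "
              "'C4GP', "  # overwrite C4 (UD4) to tag the duplicated line
              "'', '', "  # UD6/UD6X are no longer needed
              # the second amount, stored in UD6, becomes AMOUNT/AMOUNTX
              "CASE WHEN UD6 = '' THEN 0 ELSE CAST(UD6 AS DECIMAL(29,12)) END, "
              "CASE WHEN UD6 = '' THEN 0 ELSE CAST(UD6 AS DECIMAL(29,12)) END "
              "FROM " + table + " WHERE UD6 <> '' AND LOADID = ?")
    return "INSERT INTO " + table + " (" + cols + ") " + select

query = build_duplicate_query()
# Inside the FDMEE event script this would then be executed as:
# fdmAPI.executeDML(query, [fdmContext["LOADID"]], True)
```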

Let’s see the result

Note: be careful if you are using “100,00” number formatting instead of “100.00”. For the amount you can use the Fill=EuroToUS directive in the import format to convert the numbers. UDx dimensions are nvarchar values, so they accept the string “100,00”, but you will get an error when it is converted to decimal. To avoid that, either put a script in the import format for the second value to convert it to a number:


def GP(strField, strRecord):
    if strField == "":
        return strField
    return strField.replace(",", ".")

or modify the query by replacing:

case when UD6 = '' THEN 0 else cast(UD6 as decimal(29,12)) end

with:

case when UD6 = '' THEN 0 else cast(REPLACE(UD6, ',' , '.') as decimal(29,12)) end
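The effect of the REPLACE is easy to reproduce outside FDMEE; the sketch below uses SQLite as a stand-in for SQL Server (SQLite silently truncates the malformed value instead of raising, but the REPLACE fix is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
raw, fixed = conn.execute(
    "SELECT CAST('100,50' AS REAL), "            # comma-formatted value
    "CAST(REPLACE('100,50', ',', '.') AS REAL)"  # converted before the cast
).fetchone()
print(raw, fixed)  # -> 100.0 100.5  (SQLite truncates; SQL Server would error)
```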





  1. Good post,
    I have one doubt: after the line “This script will duplicate lines to be imported and rearrange the data :” you provide a script image, so I want to know, do I need to write the whole script exactly as shown in the image?

    • Hi Ashima,

      I'm happy you are interested in this topic.

      I prefer to put in all the fields even if some are certainly not mandatory.
      The data model is not mine and I prefer to avoid side effects.
      The second reason is that, by putting the insert and select queries on separate lines in a text editor, I can really see the field transposition.
      Please find below the queries for SQL Server and the specificity for Oracle:
      query = "INSERT INTO [dbo].[TDATASEG_T] ([LOADID] ,[PARTITIONKEY] ,[CATKEY] ,[PERIODKEY] ,[DATAVIEW] ,[CURKEY] ,[CALCACCTTYPE] ,[CHANGESIGN] ,[JOURNALID] ,[AMOUNT] ,[AMOUNTX] ,[DESC1] ,[DESC2] ,[ACCOUNT] ,[ACCOUNTX] ,[ACCOUNTR] ,[ACCOUNTF] ,[ENTITY] ,[ENTITYX] ,[ENTITYR] ,[ENTITYF] ,[ICP] ,[ICPX] ,[ICPR] ,[ICPF] ,[UD1] ,[UD1X] ,[UD1R] ,[UD1F] ,[UD2] ,[UD2X] ,[UD2R] ,[UD2F] ,[UD3] ,[UD3X] ,[UD3R] ,[UD3F] ,[UD4] ,[UD4X] ,[UD4R] ,[UD4F] ,[UD5] ,[UD5X] ,[UD5R] ,[UD5F] ,[UD6] ,[UD6X] ,[UD6R] ,[UD6F] ,[UD7] ,[UD7X] ,[UD7R] ,[UD7F] ,[UD8] ,[UD8X] ,[UD8R] ,[UD8F] ,[ARCHIVEID] ,[HASMEMOITEM] ,[STATICDATAKEY] ,[UD9] ,[UD9X] ,[UD9R] ,[UD9F] ,[UD10] ,[UD10X] ,[UD10R] ,[UD10F] ,[UD11] ,[UD11X] ,[UD11R] ,[UD11F] ,[UD12] ,[UD12X] ,[UD12R] ,[UD12F] ,[UD13] ,[UD13X] ,[UD13R] ,[UD13F] ,[UD14] ,[UD14X] ,[UD14R] ,[UD14F] ,[UD15] ,[UD15X] ,[UD15R] ,[UD15F] ,[UD16] ,[UD16X] ,[UD16R] ,[UD16F] ,[UD17] ,[UD17X] ,[UD17R] ,[UD17F] ,[UD18] ,[UD18X] ,[UD18R] ,[UD18F] ,[UD19] ,[UD19X] ,[UD19R] ,[UD19F] ,[UD20] ,[UD20X] ,[UD20R] ,[UD20F] ,[ATTR1] ,[ATTR2] ,[ATTR3] ,[ATTR4] ,[ATTR5] ,[ATTR6] ,[ATTR7] ,[ATTR8] ,[ATTR9] ,[ATTR10] ,[ATTR11] ,[ATTR12] ,[ATTR13] ,[ATTR14] ,[VALID_FLAG] ,[CODE_COMBINATION_ID] ,[SOURCE_SYSTEM_NAME] ,[TARGET_APPLICATION_TYPE] ,[TARGET_APPLICATION_NAME] ,[PLAN_NAME] ,[LEDGER] ,[LEDGER_TYPE] ,[YEAR] ,[PERIOD] ,[SCENARIO] ,[CURRENCY] ,[CURRENCY_OBJECT_ID] ,[GL_PERIOD_YEAR] ,[GL_PERIOD_NUM] ,[AMOUNT_YTD] ,[AMOUNT_PTD] ,[RULE_ID] ,[SOURCE_LEDGER_ID] ,[STAT_BALANCE_FLAG]) "
      query = query + " select [LOADID] ,[PARTITIONKEY] ,[CATKEY] ,[PERIODKEY] ,[DATAVIEW] ,[CURKEY] ,[CALCACCTTYPE] ,[CHANGESIGN] ,[JOURNALID] ,case when UD6 = '' THEN 0 else cast([UD6] as decimal(29,12)) end ,case when UD6 = '' THEN 0 else cast([UD6] as decimal(29,12)) end ,[DESC1] ,[DESC2] ,[ACCOUNT] ,[ACCOUNTX] ,[ACCOUNTR] ,[ACCOUNTF] ,[ENTITY] ,[ENTITYX] ,[ENTITYR] ,[ENTITYF] ,[ICP] ,[ICPX] ,[ICPR] ,[ICPF] ,[UD1] ,[UD1X] ,[UD1R] ,[UD1F] ,[UD2] ,[UD2X] ,[UD2R] ,[UD2F] ,[UD3] ,[UD3X] ,[UD3R] ,[UD3F] ,'C4GP' ,[UD4X] ,[UD4R] ,[UD4F] ,[UD5] ,[UD5X] ,[UD5R] ,[UD5F] ,'' ,'' ,[UD6R] ,[UD6F] ,[UD7] ,[UD7X] ,[UD7R] ,[UD7F] ,[UD8] ,[UD8X] ,[UD8R] ,[UD8F] ,[ARCHIVEID] ,[HASMEMOITEM] ,[STATICDATAKEY] ,[UD9] ,[UD9X] ,[UD9R] ,[UD9F] ,[UD10] ,[UD10X] ,[UD10R] ,[UD10F] ,[UD11] ,[UD11X] ,[UD11R] ,[UD11F] ,[UD12] ,[UD12X] ,[UD12R] ,[UD12F] ,[UD13] ,[UD13X] ,[UD13R] ,[UD13F] ,[UD14] ,[UD14X] ,[UD14R] ,[UD14F] ,[UD15] ,[UD15X] ,[UD15R] ,[UD15F] ,[UD16] ,[UD16X] ,[UD16R] ,[UD16F] ,[UD17] ,[UD17X] ,[UD17R] ,[UD17F] ,[UD18] ,[UD18X] ,[UD18R] ,[UD18F] ,[UD19] ,[UD19X] ,[UD19R] ,[UD19F] ,[UD20] ,[UD20X] ,[UD20R] ,[UD20F] ,[ATTR1] ,[ATTR2] ,[ATTR3] ,[ATTR4] ,[ATTR5] ,[ATTR6] ,[ATTR7] ,[ATTR8] ,[ATTR9] ,[ATTR10] ,[ATTR11] ,[ATTR12] ,'Duplicate line' ,[ATTR14] ,[VALID_FLAG] ,[CODE_COMBINATION_ID] ,[SOURCE_SYSTEM_NAME] ,[TARGET_APPLICATION_TYPE] ,[TARGET_APPLICATION_NAME] ,[PLAN_NAME] ,[LEDGER] ,[LEDGER_TYPE] ,[YEAR] ,[PERIOD] ,[SCENARIO] ,[CURRENCY] ,[CURRENCY_OBJECT_ID] ,[GL_PERIOD_YEAR] ,[GL_PERIOD_NUM] ,[AMOUNT_YTD] ,[AMOUNT_PTD] ,[RULE_ID] ,[SOURCE_LEDGER_ID] ,[STAT_BALANCE_FLAG] from [dbo].[TDATASEG_T] where UD6 <> '' and LOADID = ? "

      params = [fdmContext["LOADID"]]
      try:
          fdmAPI.executeDML(query, params, True)
          fdmAPI.logInfo("execution query " + str(fdmContext["RULEID"]) + query)
      except Exception, e:
          fdmAPI.logInfo("error query: " + str(e))

      query = query + " select DATAKEY + (select count(DATAKEY) from FDM.TDATASEG_T where LOADID= ? ), LOADID …

      Hope this will be helpful for you.

      Best regards.


      • Thanks Patrick. I would like to know how I can sum up two identical lines, and if they sum to zero, have them ignored by the validation/export. For example, we have a line in the file that creates an invalid intersection. An offset entry is entered in the ERP, so now we have two lines in the load file, one for 1,000 and one for -1,000. As these two lines sum to zero, I want to sum them up after Import to avoid another intersection error, in this case for two lines. Is this possible? Thanks so much!

  2. Hi Patrick,
    Great post.
    I had a similar query regarding the data load mappings. I’m trying to create a recon ID using an SQL script as follows:

    CASE WHEN UD1 LIKE '%' THEN UD9X||'-'||UD10X||'-'||UD11X END

    But for some reason, the recon ID target values have only a “-” rather than the expected outcome. Could you possibly suggest how we can address this or if I’m missing something?

    Please let me know. Thank you.

    • Hi Monik,
      Check in which order your dimension mappings are processed.
      If UD1 (I suppose you are trying to map UD1) is processed before UD9 and UD10 (the standard behaviour), UD9X and UD10X (the mapped values) are not yet available at that time.
      You can change the processing order by changing the sequence number in the target application.


  3. Hi Patrick,

    Hope you are doing well.

    I have a situation where I have to do a calculation based on amounts, but the problem is that these are 3 different lines. 2 columns are the same in the 3 lines. Let me explain the scenario below:
    LINE3 = VAT
    In the above 3 lines there are two columns, called Reference ID and Type, which are the same for the 3 lines.

    The calculation is COMMISSION *100/SALES

    Is there a way to do the above calculation? Can you please help me.

    • Hi Anubhav,

      As I understand your problem, in an AftProcMap event, for example, you can do something like:
      queryupdate = "update t set t.amountx = zcom.commission*100/zsal.sales from tdataseg t inner join (select amountx as commission from tdataseg where loadid = ? and here your criteria to identify the commission line) zcom on here your join keys inner join (select amountx as sales from tdataseg where loadid = ? and here your criteria to identify the sales line) zsal on here your join keys"
      params = [fdmContext["LOADID"], fdmContext["LOADID"]]
      fdmAPI.executeDML(queryupdate, params, True)
      and the same thing for VAT*100/COMMISSION.
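      A self-contained illustration of that idea (using SQLite and made-up column names, not the real FDMEE schema) could look like:

```python
import sqlite3

# Made-up mini-schema: three lines share a Reference ID; the ratio
# COMMISSION*100/SALES is written back onto the COMMISSION line.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tdataseg (refid TEXT, linetype TEXT, amountx REAL)")
conn.executemany("INSERT INTO tdataseg VALUES (?, ?, ?)",
                 [("R1", "SALES", 200.0),
                  ("R1", "COMMISSION", 50.0),
                  ("R1", "VAT", 10.0)])

# A correlated subquery picks the SALES amount of the same Reference ID
conn.execute("""
    UPDATE tdataseg
       SET amountx = amountx * 100.0 / (
            SELECT s.amountx FROM tdataseg s
             WHERE s.refid = tdataseg.refid AND s.linetype = 'SALES')
     WHERE linetype = 'COMMISSION'
""")
result = conn.execute(
    "SELECT amountx FROM tdataseg WHERE linetype = 'COMMISSION'").fetchone()[0]
print(result)  # -> 25.0
```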

      Hope this helps you.

