Multirow task execution

  • 4 February 2019
  • 36 replies
  • 2258 views

Userlevel 5
Badge +15


⚠️ In the future this feature will be configurable through the metamodel and will probably use table valued parameters instead of XML. We'll be able to automatically upgrade the model, but you'll need to change the code yourself!

 

Gradually, our codebase gets cluttered more and more with your originally suggested solution. Can you give us an update on when you expect the future will arrive on this matter?

What we've currently done is use the same structure everywhere: creating a temporary table, which should be replaceable with the actual solution. We use @multi_row as input and @record as the actual table throughout.

On each template we apply it (almost all tasks), we do something like this:

declare @record table (project_id id not null
, part_id id not null)

insert into @record (project_id
, part_id)
select x.value('(project_id)[1]', 'id')
, x.value('(part_id)[1]', 'id')
from @multi_row.nodes('rows/row') as t(x)

I hope that by doing so we will only need to remove the code above and can keep using the @record table.
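For reference, a sketch of the XML shape this parsing assumes; the values below are made up, and int stands in for the id alias type:

code:
declare @multi_row xml = N'<rows>
  <row><project_id>1</project_id><part_id>10</part_id></row>
  <row><project_id>1</project_id><part_id>11</part_id></row>
</rows>'

-- Same shredding as above, with int in place of the id alias type
select x.value('(project_id)[1]', 'int') as project_id
, x.value('(part_id)[1]', 'int') as part_id
from @multi_row.nodes('rows/row') as t(x)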

 

@Ricky This is on our backlog, but we have not yet planned a date for this.

It is, however, trivial to locate and rewrite the select (and delete) from the XML to use a table valued parameter. And from a functional point of view, it doesn't make any difference.

And how should we deal with unit tests on multirow tasks? Currently I have a long input string (<rows><row><item>1</item></row><row><item>2</item></row></rows>), but that would have to be converted to some other data structure that allows multiple rows at once? That seems less trivial than find and replace.

Userlevel 7
Badge +11

@Ricky This is on our backlog, but we have not yet planned a date for this.

It is, however, trivial to locate and rewrite the select (and delete) from the XML to use a table valued parameter. And from a functional point of view, it doesn't make any difference.
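For illustration, such a rewrite could look roughly like this; the table type and the procedure name here are hypothetical, not the actual metamodel output:

code:
-- Hypothetical user-defined table type replacing the XML parameter
create type multi_row_table as table (
  project_id int not null
, part_id int not null
)
go

-- The task then receives the rows as a table valued parameter,
-- so the XML shredding disappears entirely
create procedure task_example
  @multi_row multi_row_table readonly
as
begin
  select project_id
  , part_id
  from @multi_row
end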

Userlevel 3
Badge +8


⚠️ In the future this feature will be configurable through the metamodel and will probably use table valued parameters instead of XML. We'll be able to automatically upgrade the model, but you'll need to change the code yourself!

 

Gradually, our codebase gets cluttered more and more with your originally suggested solution. Can you give us an update on when you expect the future will arrive on this matter?

Userlevel 5
Badge +5

Does this feature also work for reports?

 

Reports currently don't support this. We should think about this in a design and investigate whether report implementations are able to process this kind of structured data.

I'd suggest parking this question in a separate post/idea to think about alternatives and to investigate the need for this.

Link to the parked idea:

https://community.thinkwisesoftware.com/ideas-18/multirow-report-execution-1048#post3001

Userlevel 5
Badge +20

Does this feature also work for reports? 

I need to make a report with an output based on selected rows in a grid.

 

Does Multirow task execution still work from version 2020.1?

Userlevel 4
Badge +13
I'm trying something similar with the copy file process action to copy a batch of files by obtaining "from file" and "to file" values from a select query. I put all those rows in a table variable and then loop through these rows one by one. But this is the part that eludes me:

The HTTP connector should be called while the parameters have a value.


What happens now is that the loop finishes and ends up outputting a single set of null values. How can I insert a SQL "while" loop into a process flow so that it outputs parameters set by set into the process variables?
Userlevel 3
Badge +3
I don't think looping in process flows ever even crossed my mind. This could be a very nice solution.
One thing though: I would personally prefer keeping a pointer (int) to the current row, which would keep the input intact.

I had to fiddle around a bit, but here's an example of using a pointer:
code:
-- @xml is the task's XML input parameter
declare @pointer int = 1
declare @row_count int = @xml.value('count(/rows/row)', 'int')

while @pointer <= @row_count
begin
select row.value('(./param1)[1]', 'int')
, row.value('(./param2)[1]', 'varchar(10)')
from @xml.nodes('(/rows/row[sql:variable("@pointer")])') as t2(row)

set @pointer += 1
end
Userlevel 7
Badge +11
Hi Pim, good question.

A possible solution would be to create a process flow in which you repeatedly store the values of the first row in process flow parameters, delete that row from the XML, and then call the HTTP connector using these parameters.

The SQL code for the task or process procedure to assign the parameters would look something like this:
code:
-- Store the values of the first row
select
@param1 = row.value('./param1[1]', 'int'),
@param2 = row.value('./param2[1]', 'varchar(10)')
from @xml_input.nodes('/rows/row[1]') as t2(row)

-- Delete the first row
set @xml_input.modify('delete /rows/row[1]')

The HTTP connector should be called while the parameters have a value.
Also make sure you use a different name for the XML input parameter of the task than the MultiselectParameterID.
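For illustration, the "call while the parameters have a value" check could be driven by a flag like this; @continue_loop is a hypothetical output parameter used for the process flow decision:

code:
declare @continue_loop bit

-- Keep calling the HTTP connector as long as rows remain in the XML
if @xml_input.exist('/rows/row') = 1
  set @continue_loop = 1
else
  set @continue_loop = 0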
Userlevel 3
Badge +3
Sounds great, but I can't quite see yet whether this would solve our issue, where the process in question contains an HTTP connector. The API we're talking to only allows a single item to be requested at a time, which means the HTTP connector needs to be run for every item in the selection, which (if I read correctly) is still not possible.
Userlevel 7
Badge +5
This feature works very well in conjunction with drag-drop. When you drag multiple rows, you can have a multirow task execute once with all the dragged rows.
