Hi René,
The Web GUI does not support bulk imports in a set-based manner. I think there are a few things you can do to speed up the process, but if the order of magnitude is 50k records, then I think our natively supported features will always remain slow. So, if you want native support for fast, bulk imports, then this would indeed be a feature request.
With that said, I think you can actually implement this feature fairly easily yourself. You can for instance create a task with a file upload parameter to accept a CSV file and write it to a location where the database can access it. Then, with a process flow, automatically call another task which performs a bulk insert statement, using the uploaded CSV file.
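To sketch the idea (untested, and the path, table and CSV layout below are made up for illustration), the second task could run something like this, assuming the first task wrote the file to a location the database server can reach:

```sql
-- Sketch: load the uploaded CSV into a staging table in one set-based statement.
-- The path, table name and CSV layout are assumptions for illustration.
BULK INSERT staging.import_rows
FROM 'C:\imports\upload.csv'
WITH (
    FIRSTROW = 2,           -- skip the header row
    FIELDTERMINATOR = ';',  -- match the CSV's delimiter
    ROWTERMINATOR = '\n',
    TABLOCK                 -- allows minimally logged, faster loads
);
```

From the staging table you can then validate the rows and move them into the actual tables.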
If you don't want the database server to have access to the file system, then you can probably still achieve a reasonable level of performance by sending the CSV data itself to the task (by using a Read File connector in the process flow), parsing it to a tabular structure and using the select into statement.
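As a rough sketch of that variant (again with hypothetical names, and assuming a simple delimited layout without quoted fields), the task could parse the uploaded text set-based in T-SQL:

```sql
-- Sketch: @csv_data holds the file content handed to the task by the Read File connector.
-- Assumes two columns separated by ';' and no embedded delimiters or quotes.
DECLARE @csv_data nvarchar(max);

INSERT INTO staging.import_rows (code, description)
SELECT
    LEFT(r.value, CHARINDEX(';', r.value) - 1),
    SUBSTRING(r.value, CHARINDEX(';', r.value) + 1, LEN(r.value))
FROM STRING_SPLIT(REPLACE(@csv_data, CHAR(13), ''), CHAR(10)) AS r  -- split on line breaks
WHERE LEN(r.value) > 0
  AND CHARINDEX(';', r.value) > 0;  -- skip blank or malformed lines
```

Note that STRING_SPLIT requires SQL Server 2016 or later.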
Hi,
The problem with developing a solution ourselves is that it's too complex for the end user. The simplicity of a generic import is easier for an end user to understand. Next to CSV, Excel files should also be supported. Besides that, I'm bound to an Azure platform, so file handling isn't the best solution. Building a custom import for generic formats (Excel/CSV) is not the optimal use of the Thinkwise platform, I think.
Regarding the feature request, it should be either bulk import or not having a timeout (which is the big major issue, solved by an asynchronous import that does not time out and is transactional (!)). Two completely different approaches; I'm not sure yet which is the best. Any thoughts on that?
Hi René,
I think that both asynchronous imports and transactional imports are nice feature requests. An idea has already been posted for asynchronous imports; you can vote on it here. I recommend posting an idea for transactional imports if you would like to see this feature.
While asynchrony will solve the timeout issue that you are facing, I don't think it's a complete solution to the problem, which is that imports are simply quite slow when the number of records is very high (i.e. >10k). I think there's a trade-off here between applying application logic to each record and importing records very quickly. Neither option is a one-size-fits-all solution, so it might be worthwhile to submit an idea to support both and make it configurable.
It is already possible to avoid the timeout by the way. You can set the executionTimeout option in the web.config of the Web GUI. It is not ideal, since this is an application-wide setting and it won't prevent the Web GUI from being blocked for a long time, but it should allow the import to finish.
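For reference, that setting goes in the httpRuntime element and is specified in seconds (the 600 below is just an example value, not a recommendation):

```xml
<!-- web.config of the Web GUI: raise the request timeout (in seconds). -->
<configuration>
  <system.web>
    <httpRuntime executionTimeout="600" />
  </system.web>
</configuration>
```

Keep in mind that ASP.NET only enforces executionTimeout when debug is set to false in the compilation element.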
In case asynchronous import should be built into the Web GUI itself:
You should think about the load an action like this will cause on your Azure environment, especially when multiple users run it at the same time. These will be very costly moments if your Azure environment manages to scale up fast enough.
Regards, Erik