
I tried to import an Excel file using the standard import feature. The file contained approximately 8,000 rows. The functionality on the table (layouts, defaults, etc.) is minimal, yet it still took around 30 minutes to import the entire file. We are using platform version 2025.1.

This is not the performance we want/need. It works out to roughly 0.22 seconds per row (1,800 seconds for 8,000 rows), or a little over 4 rows per second.

What can I do to increase the performance?

If you want to perform large-scale imports, it's probably best to use a staging table without any defaults, layouts, contexts or other logic: just a plain full insert.

The import performs an insert for each row separately (much like a cursor), which means the defaults and contexts are also triggered for every single row. I did a simple test: importing 100 rows took 1 second without any logic, but with a default that does nothing at all the runtime went up to 8 seconds. Extrapolated to 8,000 rows that is already more than 10 minutes, and you can imagine what happens with a default that contains actual logic.
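As a rough illustration of the staging-table idea, here is a minimal sketch in Python, outside the platform. The server name, connection string, file name and the dbo.staging_orders table with its columns are all assumptions for the example; the point is simply that the Excel rows go into a plain table in one batched insert instead of 8,000 individual inserts that each fire defaults and contexts.

```python
# Minimal sketch: bulk-load an Excel sheet into a plain staging table.
# Assumptions: SQL Server reached via pyodbc, a staging table dbo.staging_orders
# with no defaults/triggers/contexts, and an Excel file with a header row.
import openpyxl
import pyodbc

wb = openpyxl.load_workbook("orders.xlsx", read_only=True)
ws = wb.active
rows = list(ws.iter_rows(min_row=2, values_only=True))  # skip the header row

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.fast_executemany = True  # send parameters in batches instead of row by row

cur.executemany(
    "INSERT INTO dbo.staging_orders (order_no, customer, amount) VALUES (?, ?, ?)",
    rows,
)
conn.commit()
```

Because the staging table has no logic attached, the insert speed is then limited only by the database itself rather than by the per-row defaults.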

I see a couple of opportunities to improve the performance of this process; I will discuss them with the development team.

So my advice:
1. Create a staging table without logic.
2. Run a process to check the data against the defaults/references etc.
3. Create a task to import the successful rows, and add some logging for the rows that cause an error. (A rough sketch of these steps follows below.)
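A rough sketch of steps 2 and 3, again outside the platform and with invented table names (dbo.staging_orders, dbo.staging_orders_errors, dbo.customer, dbo.[order] are all hypothetical): one set-based check against a reference table, one insert for the rows that pass, and an error log for the rest.

```python
# Minimal sketch of the check/import/log step, assuming the same staging table
# as above and a hypothetical error-log table dbo.staging_orders_errors.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Log the rows whose customer does not exist in the reference table.
cur.execute("""
    INSERT INTO dbo.staging_orders_errors (order_no, customer, error_message)
    SELECT s.order_no, s.customer, 'Unknown customer'
    FROM dbo.staging_orders s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.customer c WHERE c.customer = s.customer)
""")

# Import the remaining rows into the target table in one set-based insert.
cur.execute("""
    INSERT INTO dbo.[order] (order_no, customer, amount)
    SELECT s.order_no, s.customer, s.amount
    FROM dbo.staging_orders s
    WHERE EXISTS (SELECT 1 FROM dbo.customer c WHERE c.customer = s.customer)
""")
conn.commit()
```

The same checks and inserts could of course live in a stored procedure or a platform task; the sketch only shows that validation and import are done set-based rather than row by row.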


Thanks for your answer, Erwin. I will see if I can do that.

