
In 2014 we started offering our standard Thinkwise application, JP-Bouwmanagement, for the construction industry as a Windows application. Every customer uses the same version of the application.

Last year we created a web server (Microsoft Windows Server 2016 Standard, x64, quad-core, 32 GB) to host our application for customers who wanted to switch from Windows to Web. SQL Server 2017 runs on a separate server (Microsoft Windows Server 2016 Standard, x64, quad-core, 16 GB).

We are now using one installation of the Thinkwise Web GUI. Each customer website in IIS points to the same Web GUI folder. Of course each website has its own Application Pool User for permissions.
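For readers wanting to reproduce this kind of setup, a sketch of it with the IIS `appcmd` tool could look like the following. The pool name, site name, host name, user account, and physical path are all hypothetical placeholders; the general pattern is one site and one application pool per customer, all pointing at the same Web GUI folder.

```shell
REM Create a dedicated application pool for one customer
%windir%\system32\inetsrv\appcmd add apppool /name:"Customer1Pool"

REM Run the pool under its own Windows account (placeholder credentials)
%windir%\system32\inetsrv\appcmd set apppool "Customer1Pool" ^
    /processModel.identityType:SpecificUser ^
    /processModel.userName:"DOMAIN\customer1svc" ^
    /processModel.password:"REPLACE_ME"

REM Create the customer site, pointing at the shared Web GUI folder
%windir%\system32\inetsrv\appcmd add site /name:"Customer1" ^
    /bindings:http/*:80:customer1.example.com ^
    /physicalPath:"C:\inetpub\ThinkwiseWebGUI"

REM Attach the site's root application to the customer's pool
%windir%\system32\inetsrv\appcmd set app "Customer1/" /applicationPool:"Customer1Pool"
```

Because each site has its own application pool identity, file and database permissions can be granted per customer even though the physical folder is shared.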

Performance is okay, but it could be better. Currently about 25 customers are using the Web GUI to use our application. Each customer has their own database.

Is this setup on IIS best practice? Or is it better to create a Web GUI folder per customer?

Can anyone advise us on how we can improve performance?

Hi Johan,

This is a very open question you are asking.

First of all, you could inspect the overall health of both the web server and the database server. The network to these servers could also play a large role in the whole picture.

Could you pinpoint specific locations or actions in the application that are particularly slow?

In general, it is better to avoid deeply nested detail levels or data sets with many columns in the grid. Cubes and reports are single-user actions that can put particular pressure on the web server.

Overall, your deployment plan doesn't sound too bad. The way you use IIS application pools (possibly one per customer) determines whether everyone works in the same web server process or not. You could experiment with that. Note that by default, application pools go to sleep when idle, so the next requests will take a little longer.
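If the idle shutdown turns out to be the cause of slow first requests, one option is to keep the pool warm. A sketch with `appcmd`, assuming a hypothetical pool name "Customer1Pool" (the `startMode` setting requires IIS 8 or later):

```shell
REM Disable the idle timeout so the worker process is not shut down when idle
%windir%\system32\inetsrv\appcmd set apppool "Customer1Pool" /processModel.idleTimeout:"00:00:00"

REM Start the worker process immediately instead of on the first request
%windir%\system32\inetsrv\appcmd set apppool "Customer1Pool" /startMode:AlwaysRunning
```

The trade-off is memory: with 25 always-running pools, the worker processes hold on to their memory even when a customer is not active, so this is worth measuring before rolling it out everywhere.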

Could you please be a little more specific?

Regards,
Erik


Hi Erik,

I know it is a very general question, but I was especially interested in the setup for IIS. I understand from your comment that we did not set this up the wrong way.

The performance problem (although it is not that bad, it can be better) does not always show, so it is hard to pinpoint it to specific locations or actions. Sometimes just moving your cursor from one column to another takes more than a second. I did review my default and layout procedures but did not find any problem. In other cases, opening a subject takes a while. Where possible, I introduced prefilters to limit the amount of data that is loaded when opening a subject, and I also minimized the number of records per page.

I am sure that we can improve things by using a different edition of SQL Server. At the moment we are still using the Express edition, which I know has limitations on memory and the number of processor cores it will use. The problem with SQL Server is that it is getting very expensive in the way we are using it. But I'll start a new question about this subject (because this issue is about IIS).
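To confirm what the Express edition is actually giving you, you could query the server properties and resource counts with `sqlcmd`. A sketch, assuming a hypothetical server name `SQLSERVER01` (use your own instance name and authentication):

```shell
REM Show which edition and version is installed
sqlcmd -S SQLSERVER01 -Q "SELECT SERVERPROPERTY('Edition') AS Edition, SERVERPROPERTY('ProductVersion') AS Version;"

REM Show how many CPUs and how much physical memory SQL Server can see
sqlcmd -S SQLSERVER01 -Q "SELECT cpu_count, physical_memory_kb / 1024 AS memory_mb FROM sys.dm_os_sys_info;"
```

If the edition reports "Express Edition", the engine caps buffer pool memory and the number of cores it will use regardless of the hardware, which would explain why a well-provisioned database server still underperforms.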

But I am glad to hear that our deployment plan for IIS is not too bad :-)

Thanks for your response.

Sounds like the performance peaks are somewhat random. You could monitor your hardware and network to find out which part of your web environment is most heavily loaded at a peak moment.
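For random peaks, logging performance counters over a longer period is usually more useful than watching Task Manager. A sketch using the built-in Windows `typeperf` tool on the web server; the sample interval, count, and output file are placeholder choices:

```shell
REM Sample CPU and available memory every 5 seconds, 720 samples (one hour), to a CSV file
typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 5 -sc 720 -o webserver_perf.csv
```

Reviewing the CSV afterwards lets you line up the slow moments reported by customers with CPU or memory pressure on the server, instead of trying to catch a peak live.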

E.g. you don't want any of the servers to run at 75% or even 100% CPU for a longer time, because requests will queue up and the load will keep building. When it takes too long, IIS will eventually restart your application to free resources.

I think a lot of answers will be given when you get a clear overview of the health of your environment.

I hope this offers you enough information to stabilise your environment even more.