Django template render speed issue for large queryset

Hello,

I created this topic on Stack Overflow and completely forgot about it. Unfortunately, even after all this time there has been no response, so I hope coming to the source will yield a fruitful result. I have copied the context from the original post. As a new user I can only post 2 links, but you can view the post by appending /questions/76501238/django-template-render-speed-issue-for-large-queryset to the Stack Overflow URL.

Just to note that the application is still running great and is not slow by any means; I am just trying to be proactive in case the issue highlighted below creeps up in the future.

---
I have completed a web application for work and was doing various tests. One of the tests was for an "exaggerated" number of items on a given page and how long it would take to load (the page is basically a dashboard for a helpdesk system). My results are quite bad, with the test page taking some 20 to 25 seconds for 1200 items on a standard-spec system and 3 to 10 seconds on a very high-spec system.

I don't ever expect the active calls to reach that number on the dashboard, but I am looking at a worst-case scenario. I have also tried pagination, but I am probably implementing it wrong, as going to the next page seems to rerun the queries and takes as long as displaying all 1200 items, irrespective of whether I am displaying 1 item or 10.

As the project is quite large and contains sensitive information, I can't really provide the code here for reproducible results, but I created another basic project that highlights what I am experiencing. It can be downloaded here:

[Slow render - Google Drive](https://demo project)

This code is not that bad, as not many attributes are referenced in the template (short.html) for loop, which only references 5 fields as opposed to the main project's 30 or so.

The view:

    from django.shortcuts import render

    from .models import Shorter

    def shorts(req):
        all_shorts = Shorter.objects.all()
        return render(req, 'shortner/short.html', {'all_shorts': all_shorts})

The model:

    from django.db import models

    class Shorter(models.Model):
        short_num = models.CharField(max_length=8, primary_key=True)
        short_suf = models.CharField(max_length=8)
        short_ou = models.CharField(max_length=256)
        short_url = models.CharField(max_length=128)
        short_desc = models.CharField(max_length=512, null=True, blank=True, default='blah, blah, blah')

        def __str__(self):
            return self.short_ou + ' is shortened to ' + self.short_url

The HTML template:

    {% for short in all_shorts %}
          <div class="accordion-item" style="border-width: thick; border-color: darkblue;">  
            <h2 class="accordion-header" id="heading_{{ short.short_num }}">
              <button class="accordion-button collapsed" type="button" data-bs-toggle="collapse" data-bs-target="#collapse_{{ short.short_num }}" aria-expanded="false" aria-controls="collapse_{{ short.short_num }}">

                  <span class="row">
                  <span class="col"><strong>Number: </strong>{{ short.short_num }}</span>
                  <span class="col"><strong>Suffix: </strong>{{ short.short_suf }}</span>
                  <span class="col"><strong>Original URL: </strong>{{ short.short_ou }}</span>
                  <span class="col"><strong>Shortened URL: </strong>{{ short.short_url }}</span>
                  <span class="w-100"></span>
                      <br>
                      <hr>
                  <span class="w-100"></span>

                  <span class="col"><strong>Description: </strong>{{ short.short_desc }}</span>
                  </span>

              </button>
            </h2>
          </div>
{% endfor %}

I definitely suspect this template for loop, as 1 query runs in under 1 ms according to the Django Debug Toolbar, yet the page takes, in this example, 5 seconds to fully render (using the standard/weak PC, as most users have those).

The main project is currently live via IIS and is extremely fast, but then again there is not much data just yet. To avoid any possible issues, and assuming I am correct about the issue being with the template for loop, is there anything that I can do to try and speed up the template render?

---

First, be aware that Django Debug Toolbar does add some not-insignificant overhead to your page generation. If you're doing benchmarking, you need to do it without DDT involved.

Second, before you think about needing to improve performance, you should have an idea of what your critical thresholds are. You mention some extremes, but you really haven’t identified when you may need to make some changes.

Finally, you want to make sure you understand exactly where the slowdowns are occurring. It is known that the Django template engine is not the speediest tool possible.

Pagination is one important and viable option. If you’re not seeing a benefit from it, then you are likely doing something wrong.

There are some other options that you have that could reduce the time. You could pre-render and cache some portions of this, depending upon how often this data changes. You could also try generating parts of the html yourself without using the Django rendering engine.

But it does all start with knowing and understanding where the bottlenecks exist.

Thank you Ken.

Yes, the slowness is there even without DDT.

In terms of the threshold, I foresee issues starting a year from now, assuming the application is still in use at that time and usage remains consistent. I am basing this off an average of 20 calls being logged through the system per week. So far there are 150 calls in the two months it has been up, as employees are slowly becoming acquainted with it. So 20 × 52 weeks = 1040, and 1040 plus the current 150 gives roughly the 1200 records referenced in this discussion.

I did also read up a bit on the templating engine, where they raised similarities between it and Jinja2, which is apparently exceptionally fast. I never managed to get it working in Django, though, so I stuck to the standard DTL.
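For what it's worth, the step that is easy to miss when wiring up Jinja2 is that it goes in as an additional engine in the `TEMPLATES` setting, with its own template directory. A sketch based on the settings documentation; `BASE_DIR` is the usual generated settings variable, and `myproject.jinja2.environment` is a placeholder for an environment factory you would write yourself:

```python
# settings.py (sketch, not a drop-in)
TEMPLATES = [
    {
        # Jinja2 engine, tried first; its templates live in BASE_DIR / "jinja2"
        "BACKEND": "django.template.backends.jinja2.Jinja2",
        "DIRS": [BASE_DIR / "jinja2"],
        "APP_DIRS": True,
        "OPTIONS": {
            # Placeholder: dotted path to a callable returning a jinja2.Environment
            "environment": "myproject.jinja2.environment",
        },
    },
    {
        # Keep the DTL engine too, so the admin and existing templates keep working
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {"context_processors": []},
    },
]
```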

My hopes were indeed pinned on pagination, which I read is a lazy process, but after a couple hundred records it seems to suffer, which makes no sense to me. In the live system during testing with 1200 records, it would take as long to render a paginated page of 1 item as it would all 1200 items on the same page. I do suspect, as you say, that I have done something wrong there and will have to dig deep.

The caching option is one I would like to try, and I am assuming you mean server-side. Could you elaborate a bit more on this or point me to an example if possible?

Another thought with regards to the bottleneck was that the application DB is still SQLite. I am trying to keep it as portable as possible, so I stuck with SQLite for that reason. I have 3 other apps running without issue, also using SQLite, but they are not as DB-heavy as this one. From your experience, is it necessary to switch from SQLite to a more enterprise DB like PostgreSQL or MySQL even if the DB is so small? The SQLite DB was only around 10 MB for the 1200 records with the issue. For additional context, the DB in the demo linked is only 644 KB but has 3844 records in the main table.

See the docs at: Django’s cache framework | Django documentation | Django
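If it helps as a starting point, the part of that framework most relevant here is probably template fragment caching. A minimal sketch, assuming the default cache backend; the 300-second timeout and the `shorts_list` key are arbitrary placeholders:

```
{% load cache %}
{% cache 300 shorts_list %}
  {% for short in all_shorts %}
    ... the accordion-item markup ...
  {% endfor %}
{% endcache %}
```

After the first request, subsequent requests reuse the cached fragment until the timeout expires, so the per-row template work is skipped entirely while the data is unchanged.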

The easiest way to check this is to run your view without rendering the full queryset - or even just try running the query in the Django shell. Separate the time taken for the query from the time required to render. What is DDT showing you in terms of where the time is being spent?

Thanks for the link. It looks complicated, but I will work through it and come back for assistance if I get stuck.

I ran the process on the test server that currently has 185 records. It yields 6 ms.

The tests I had originally done with 1200 records also yield around the same sub-10 ms result, but the page renders in 10 to 30 seconds depending on which server I run the code on.

The Time tab in DDT provides more information; also try enabling the Profiling panel and seeing what it measures.

OK. The Time tab is not clickable, but I think the Profiling option may lead me in the right direction, as there is a lot more presented in it time-wise.

I will only be able to run it on the primary test server with 1200 records tomorrow to see the results and whether I can identify where things are being slowed down. Failing that, I will try the caching option in the coming weeks. Hopefully a solution will arise from one of them.

Thank you again Ken, I should have started here 3 months ago instead of on Stack Overflow.