GSoC Proposal Feedback - Django Benchmarking Project

I have uploaded the first draft of my GSoC proposal -

GSoC proposal

Please review it and suggest any changes that should be made.

Thank you very much


Hi @deepakdinesh1123 — I think that’s a nice proposal. One thing I really like is that there are different goals, so there are base milestones and then some that might be stretch goals depending on progress. :+1:


@carltongibson Thank you very much for providing your thoughts. I will continue working on the proposal and make any changes if necessary.

@smithdc1 Sorry for disturbing you. Can you please review my proposal and suggest any changes that should be made?

regards,

Deepak

Hi @deepakdinesh1123

I’m sorry, I do not currently have time to review this in detail. I’ve had a quick skim and I have the following thoughts / observations.

  • Likely the way the benchmarks are written in ASV is not best practice. I think there are currently django.setup() (or similar) steps in each file. Can that be simplified?
  • I wonder if we’ve got benchmarks in the right place, and at the right granularity. We should focus on the core request-response cycle, as that is what gets “hit” the most. Do we have good coverage here? Even a small improvement in this part would be beneficial.
  • How about running with different Python versions? Python is getting faster, and this could be evidence to folks that upgrading Python is the best performance boost in town. (Easier said than done to implement, maybe.)
  • More specifically, what is the proposal for running regularly? Every commit, or some other timeframe? On a related note, maybe we’re concerned about the performance impact of a specific PR; is there an option to run the benchmarks in that one case, ahead of it being merged? Currently we have “buildbot, test selenium” (or similar), or maybe we could mark a PR with a GitHub flag to get it to run?

Thanks again for your effort here. I’d also echo C’s comments: it’s good that the proposal has different goals and milestones.


@smithdc1 Thank you very much. I reviewed all the points, and here are my thoughts:

  1. The django.setup() call can be moved to a utils file, similar to what is currently done in djangobench (a sketch of this is included after this list).

  2. To provide better granularity, I will rearrange the benchmarks into the following structure:

    Benchmarks
    ├── model_benchmarks
    │   ├── model_create
    │   │   └── benchmark.py
    │   └── model_save
    │       └── benchmark.py
    ├── form_benchmarks
    │   ├── form_validate
    │   └── form_create
    └── …

  3. I was able to set up a simple benchmark for the request-response cycle (a rough sketch follows this list); I will create a pull request tomorrow.

  4. I was able to run the benchmarks with Python 3.10, 3.9, and 3.8, but versions lower than this started throwing different errors, and a lot of changes would be needed to support them, so for now I can add only the versions for which the benchmarks work (see the asv.conf.json sketch after this list).

  5. @carltongibson had mentioned in a comment that running the benchmarks periodically would be a good way to go, as running them on every commit might be overkill to begin with, so I had thought of implementing both and then choosing one or both based on the results (a workflow sketch follows this list).

  6. I tried to get the benchmarks to run when a comment is made on a PR (I tried to make it work as a command), but the GitHub API only supports retrieval of review comments, so I was not able to implement it in Probot or Actions.
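
Regarding point 1, here is a minimal sketch of what the shared setup helper could look like. The file name, the settings module path, and the bench_setup() name are placeholders for illustration, not the actual djangobench layout:

    # utils.py - shared setup for the ASV benchmark modules (sketch)
    import os

    import django


    def bench_setup():
        # Configure Django once here so each benchmark file only calls
        # bench_setup() instead of repeating django.setup() itself.
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")  # assumed settings module
        django.setup()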
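
For point 3, this is roughly the shape of the request-response benchmark; the view, URLconf, and class names are placeholders, and it assumes the benchmark settings point ROOT_URLCONF at this module. The real version will be in the pull request:

    # benchmarks/request_response/benchmark.py (sketch)
    from django.http import HttpResponse
    from django.test import Client
    from django.urls import path

    from utils import bench_setup  # hypothetical shared helper shown above


    def index(request):
        return HttpResponse("Hello, benchmark!")


    # Assumes ROOT_URLCONF in the benchmark settings points at this module.
    urlpatterns = [path("", index)]


    class RequestResponseCycle:
        def setup(self):
            bench_setup()
            self.client = Client()

        def time_full_request_response(self):
            # Exercises URL resolution, middleware, the view, and
            # response handling in one go.
            self.client.get("/")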
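
For point 4, the working Python versions can be pinned through the pythons key in asv.conf.json. A trimmed sketch, with the other values only indicative:

    {
        "version": 1,
        "project": "django",
        "repo": "https://github.com/django/django.git",
        "environment_type": "virtualenv",
        "pythons": ["3.8", "3.9", "3.10"]
    }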
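
For point 5, one way to cover both options is a single GitHub Actions workflow with a per-commit trigger and a cron schedule, keeping one or both depending on the results. A minimal sketch; the file name, branch, schedule, and asv invocation are assumptions rather than a final decision:

    # .github/workflows/benchmarks.yml (sketch)
    name: benchmarks
    on:
      push:
        branches: [main]      # per-commit option
      schedule:
        - cron: "0 3 * * *"   # periodic option: daily at 03:00 UTC

    jobs:
      asv:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v3
          - uses: actions/setup-python@v4
            with:
              python-version: "3.10"
          - run: pip install asv virtualenv
          - run: asv machine --yes
          - run: asv run --quick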

Good work. :+1:

Looking forward to seeing your proposal submitted.