GSoC Proposal Feedback - Django Benchmarking Project

@smithdc1 Thank you very much. I reviewed all the points, and here are my thoughts:

  1. The django.setup() call can be moved to a utils file, similar to what djangobench currently does; a sketch of such a helper is below.
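
A minimal sketch, assuming a hypothetical utils.py with a setup_django() helper; the settings values are placeholders, not the actual djangobench code:

```python
# utils.py -- hypothetical shared setup helper; settings values are placeholders
import django
from django.conf import settings


def setup_django():
    """Configure minimal settings and initialise Django once per process."""
    if not settings.configured:
        settings.configure(
            DEBUG=False,
            DATABASES={
                "default": {
                    "ENGINE": "django.db.backends.sqlite3",
                    "NAME": ":memory:",
                }
            },
            INSTALLED_APPS=[
                "django.contrib.contenttypes",
                "django.contrib.auth",
            ],
        )
        django.setup()
```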

  2. To provide granularity, I will rearrange the benchmarks into the following structure (an illustrative benchmark.py is sketched after the tree):

    Benchmarks
    ├── model_benchmarks
    │   ├── model_create
    │   │   └── benchmark.py
    │   ├── model_save
    │   │   └── benchmark.py
    │   └── …
    ├── form_benchmarks
    │   ├── form_validate
    │   │   └── benchmark.py
    │   ├── form_create
    │   │   └── benchmark.py
    │   └── …
    └── …
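
As an illustration of that layout, each benchmark.py could expose a single entry point. The function name, model, and timing approach below are hypothetical, not a decided interface:

```python
# model_benchmarks/model_create/benchmark.py -- hypothetical contents
import time

from utils import setup_django  # the shared helper from point 1

setup_django()

# Model imports must come after setup; auth.User is just a convenient stand-in.
from django.contrib.auth.models import User


def benchmark(iterations=1000):
    """Time instantiation of model objects (no database round trips)."""
    start = time.perf_counter()
    for i in range(iterations):
        User(username="user%d" % i)
    return time.perf_counter() - start
```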

  3. I was able to set up a simple benchmark for the request-response cycle; I will create a pull request tomorrow.
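
For context, a rough sketch of the shape such a benchmark could take; this is not the PR code, and the view, URLconf trick, and iteration count are stand-ins:

```python
# Sketch: time full GET requests through Django's test client.
import time

import django
from django.conf import settings

if not settings.configured:
    settings.configure(ALLOWED_HOSTS=["testserver"], ROOT_URLCONF=__name__)
    django.setup()

from django.http import HttpResponse
from django.test import Client
from django.urls import path


def index(request):
    return HttpResponse("ok")


# Using this module as its own URLconf keeps the sketch self-contained.
urlpatterns = [path("", index)]


def benchmark(iterations=1000):
    """Run `iterations` requests through the handler; return elapsed seconds."""
    client = Client()
    start = time.perf_counter()
    for _ in range(iterations):
        client.get("/")
    return time.perf_counter() - start


if __name__ == "__main__":
    print("%.3fs" % benchmark())
```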

  4. I was able to run the benchmarks with Python 3.10, 3.9, and 3.8, but versions lower than this started throwing various errors, and supporting them would require a lot of changes. For now, I can document the versions on which the benchmarks work.
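
If we document a version floor, the runner could also fail fast on unsupported interpreters. A hedged sketch, assuming a runner entry point; the 3.8 floor is illustrative, not decided:

```python
# Guard at the top of the benchmark runner; MIN_PYTHON is illustrative.
import sys

MIN_PYTHON = (3, 8)

if sys.version_info < MIN_PYTHON:
    sys.exit(
        "Benchmarks require Python %d.%d or newer; found %d.%d."
        % (*MIN_PYTHON, *sys.version_info[:2])
    )
```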

  5. @carltongibson had mentioned in a comment that running the benchmarks periodically would be a good way to go, as running them on every commit might be overkill to begin with. I had thought of implementing both and then choosing one (or keeping both) based on the results.

  6. I tried to get the benchmarks to run when a comment is made on a PR (i.e., as a comment command), but the GitHub API only supports retrieval of review comments, so I was not able to implement this with Probot or GitHub Actions.