I have uploaded the first draft of my GSoC proposal -
Please review it and suggest any changes that need to be made.
Thank you very much
Hi @deepakdinesh1123 — I think that’s a nice proposal. One thing I really like is that there are different goals, so there are base milestones and then some that might be stretch goals depending on progress.
@carltongibson Thank you very much for providing your thoughts. I will continue working on the proposal and make any changes if necessary.
@smithdc1 Sorry to disturb you. Could you please review my proposal and suggest any changes that need to be made?
Regards,
Deepak
I’m sorry, I do not currently have time to review this in detail. I’ve had a quick skim and I have the following thoughts / observations.
There are django.setup() (or similar) steps in each file. Can that be simplified?
Thanks again for your effort here. I’d also echo Carlton’s comment that it is good the proposal has different goals and milestones.
@smithdc1 Thank you very much. I reviewed all the points, and here are my thoughts:
The django.setup() call can be moved to a utils file, similar to what is currently done in djangobench.
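A minimal sketch of what that shared module might look like, assuming a hypothetical utils.py and a bench_setup() helper (the names and the benchmarks.settings module are illustrative, not settled):

# utils.py: hypothetical shared helper so individual benchmark files
# no longer repeat the django.setup() boilerplate.
import os

import django


def bench_setup(settings_module="benchmarks.settings"):
    # Point Django at the (assumed) benchmark settings module, then
    # initialise the app registry; intended to be called once per process.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_module)
    django.setup()

Each benchmark file would then just call bench_setup() at the top instead of carrying its own setup code.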
To provide more granularity for the benchmarks, I will rearrange them into the following structure:
Benchmarks
├── model_benchmarks
│   ├── model_create
│   │   └── benchmark.py
│   └── model_save
│       └── benchmark.py
├── form_benchmarks
│   ├── form_validate
│   │   └── benchmark.py
│   ├── form_create
│   │   └── benchmark.py
│   └── …
└── …
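As an illustration of the layout, a benchmark.py under model_create might look roughly like this; the Book model and the timeit-based harness are assumptions for the sketch, not a settled design:

# model_benchmarks/model_create/benchmark.py (hypothetical example)
from timeit import timeit

from utils import bench_setup

bench_setup()  # shared setup from the proposed utils module

from benchmarks.models import Book  # assumed example model


def benchmark():
    # Measure the cost of constructing (not saving) model instances.
    return timeit(lambda: Book(title="x"), number=10_000)


if __name__ == "__main__":
    print(f"model_create: {benchmark():.4f}s for 10,000 instantiations")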
I was able to set up a simple benchmark for the request-response cycle; I will create a pull request tomorrow.
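For context, a minimal version of such a benchmark could use Django’s test Client against a trivial view; the root URL, and the assumption that the benchmark settings accept the client’s default "testserver" host, are placeholders here rather than the actual PR code:

# request_benchmarks/request_response/benchmark.py (hypothetical sketch)
from timeit import timeit

from utils import bench_setup

bench_setup()

from django.test import Client


def benchmark():
    client = Client()
    # Time full request-response cycles through the handler stack,
    # assuming the URLconf maps "/" to a simple view.
    return timeit(lambda: client.get("/"), number=1_000)


if __name__ == "__main__":
    print(f"request_response: {benchmark():.4f}s for 1,000 GET requests")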
I was able to run the benchmarks with Python 3.10, 3.9, and 3.8, but versions lower than these started throwing various errors, and a lot of changes would be needed to support them, so for now I can add only the versions for which the benchmarks work.
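One way to express that constraint is a runner that only invokes the interpreters the suite is known to work on, skipping any that are not installed. A rough sketch, where the interpreter list and the benchmarks.run entry point are assumptions:

# run_all.py (hypothetical multi-interpreter runner)
import shutil
import subprocess

# Versions the suite currently runs on; older ones raised assorted errors.
SUPPORTED = ["python3.8", "python3.9", "python3.10"]

for exe in SUPPORTED:
    if shutil.which(exe) is None:
        print(f"skipping {exe}: not installed")
        continue
    # Assumed entry point that discovers and runs every benchmark.py.
    subprocess.run([exe, "-m", "benchmarks.run"], check=True)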
@carltongibson had mentioned in a comment that running the benchmarks periodically would be a good way to go, since running them on every commit might be overkill to begin with, so I thought of implementing both approaches and choosing one or both based on the results.
I tried to get the benchmarks to run when a comment is made on a PR (I tried to make it work as a command), but the GitHub API only supports retrieval of review comments, so I was not able to implement it in Probot or Actions.
Good work.
Looking forward to seeing your proposal submitted.