The setup script tries to touch the file tmp/restart.txt, but it does not check whether the tmp directory already exists, and the script breaks when it does.
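A possible fix, sketched here with a hypothetical helper (the real setup script's structure is an assumption): create the directory idempotently with FileUtils.mkdir_p, which succeeds whether or not tmp already exists, before touching the file.

```ruby
require 'fileutils'

# Hypothetical helper mirroring the setup step: ensure tmp/ exists
# (mkdir_p is a no-op when it already does), then touch restart.txt.
def touch_restart(root)
  tmp = File.join(root, 'tmp')
  FileUtils.mkdir_p(tmp)                 # safe whether or not tmp/ exists
  path = File.join(tmp, 'restart.txt')
  FileUtils.touch(path)
  path
end

require 'tmpdir'
Dir.mktmpdir do |root|
  touch_restart(root)
  touch_restart(root) # running the setup twice is also safe
end
```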
It is broken because it calls metric_configuration.kalibro_ranges.each, which yields an empty list. The all method must be called before each so that the existing ranges are actually checked.
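A self-contained mock of the pattern (the real KalibroClient proxy's behavior is an assumption here): the records exist on the server side, but each iterates only over what has been fetched, so iterating before calling all sees nothing.

```ruby
# Stand-in for a lazy association proxy: nothing is fetched until `all`
# is called, so `each` on a fresh proxy yields an empty collection.
class RangesProxy
  include Enumerable

  def initialize(remote_records)
    @remote = remote_records # what the server holds
    @loaded = []             # nothing fetched yet
  end

  # Fetches the records, analogous to calling `all` before iterating.
  def all
    @loaded = @remote.dup
  end

  def each(&block)
    @loaded.each(&block)
  end
end

proxy = RangesProxy.new([:range_a, :range_b])
proxy.to_a # => [] -- the bug: `each` yields nothing before `all`
proxy.all
proxy.to_a # => [:range_a, :range_b] -- ranges are now visible
```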
How can we decide the thresholds for the ranges of the LOC metric on Radon?
Using the same values as Analizo does not seem right. @marcheing and I have found that the file with the highest LOC value in the Python 3 standard library is somewhere between 4 and 5 thousand lines.
Any ideas?
Can't we redefine this validation? In some cases, a weight equal to zero may be useful.
For example, Radon's blank lines metric has no meaning by itself; it only means something when compared to the total number of lines. Therefore, assigning a range to it may be confusing.
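A hedged sketch of the relaxed rule (the class and method names are illustrative, not the app's actual code): accept zero while still rejecting negative weights. In Rails terms this would be something like `validates :weight, numericality: { greater_than_or_equal_to: 0 }`.

```ruby
# Illustrative stub: the old rule was effectively `weight > 0`; allowing
# zero lets a metric such as Radon's blank-lines count stay configured
# without contributing to the grade.
class MetricConfigurationStub
  attr_reader :weight

  def initialize(weight)
    @weight = weight
  end

  def valid_weight?
    weight.is_a?(Numeric) && weight >= 0
  end
end

MetricConfigurationStub.new(0).valid_weight?  # => true (zero now allowed)
MetricConfigurationStub.new(-1).valid_weight? # => false
```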
Currently the JSON serialization converts from Float to String, but the opposite conversion is never done. As a result, infinities are sometimes converted to 0, and comparisons behave strangely instead of working correctly.
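This demonstrates the lossy round trip: Float#to_s keeps "Infinity", but the naive String#to_f conversion silently yields 0.0, which matches the symptom above. The parse_float helper is a hypothetical sketch of the missing String-to-Float direction, not existing code.

```ruby
# The round trip that loses information:
value = Float::INFINITY
serialized = value.to_s    # => "Infinity"
restored = serialized.to_f # => 0.0 -- String#to_f cannot parse "Infinity"

# A safe deserialization would special-case the infinities:
def parse_float(str)
  case str
  when 'Infinity'  then Float::INFINITY
  when '-Infinity' then -Float::INFINITY
  else Float(str)
  end
end

parse_float('Infinity') # => Infinity
parse_float('3.14')     # => 3.14
```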
Since we allow metric configuration updates, it would be a good idea to persist metric configuration snapshots.
That would avoid inconsistencies when processing a repository.
For example, if one changes the script of a compound metric configuration, the metric's meaning would differ between snapshots taken before and after the change.
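A minimal sketch of the snapshot idea (the struct and its fields are hypothetical, not the processor's schema): when a processing run starts, copy the mutable configuration into a frozen record, so later edits cannot change what a past run meant.

```ruby
# Immutable copy of a metric configuration at processing time.
MetricConfigurationSnapshot = Struct.new(:metric_name, :weight, :script) do
  def self.take(config)
    new(config[:metric_name], config[:weight], config[:script]).freeze
  end
end

config = { metric_name: 'loc', weight: 1.0, script: 'return loc;' }
snapshot = MetricConfigurationSnapshot.take(config)

config[:script] = 'return loc * 2;' # a later edit to the live config
snapshot.script                     # still 'return loc;'
```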
The meaningful way to aggregate the Lines of Code metric is by sum. But it is hard to produce meaningful ranges for this aggregation form: either the values are too loose for the METHOD and CLASS granularities but correct for SOFTWARE, or the range interval is too small for the SOFTWARE granularity but correct for CLASS and METHOD.
In order to address this, we need to provide ranges that may vary according to the granularity.
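One way to sketch this is to key the ranges by granularity, so the same metric gets different thresholds per level. The thresholds below are made up for illustration, not proposed values.

```ruby
# Granularity-dependent ranges for LOC (illustrative thresholds only).
LOC_RANGES = {
  method:   { good: 0...20,      regular: 20...50,           bad: 50...Float::INFINITY },
  class:    { good: 0...200,     regular: 200...500,         bad: 500...Float::INFINITY },
  software: { good: 0...100_000, regular: 100_000...500_000, bad: 500_000...Float::INFINITY }
}.freeze

# Returns the label whose range covers the value, for a given granularity.
def loc_label(granularity, value)
  LOC_RANGES.fetch(granularity).find { |_, range| range.cover?(value) }&.first
end

loc_label(:method, 10) # => :good
loc_label(:class, 300) # => :regular
```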
Currently, we only calculate the relation between a given metric and all the metric configurations.
We could present even more useful information if we had more statistics on the data we have stored. In this direction, we could aim to answer the following questions:
What are the most used metrics?
For each metric used, what are the most used ranges?
What are the differences and similarities between the metrics and ranges used across the supported programming languages?
Which types of metrics (security, structural, hotspot, etc.) are most used for the supported programming languages?
Currently, some places seem to properly create and use the languages field in Metrics, while others do not. We should decide what to do with it and make sure everything is uniform.
There are some metrics that are only meant to be inputs to compound metrics. They have no meaning by themselves and should not affect the project grade.
So it would be useful to have a special case of MetricConfiguration with no ranges, so that KalibroProcessor does not try to interpret it and incorporate it into the grade.
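A hedged sketch of how the grade aggregation could skip such configurations (the hash shape, field names, and weights are illustrative, not the processor's API): only configurations that have ranges take part in the weighted average.

```ruby
# Configurations: 'blank_lines' exists only as input to a compound
# metric, so it has no ranges and no grade of its own.
configs = [
  { metric: 'loc',         weight: 1.0, grade: 8.0, ranges: [:some_range] },
  { metric: 'blank_lines', weight: 1.0, grade: nil, ranges: [] }
]

# Skip range-less configurations when computing the weighted grade.
graded = configs.select { |c| c[:ranges].any? }
grade = graded.sum { |c| c[:grade] * c[:weight] } / graded.sum { |c| c[:weight] }
grade # => 8.0 -- 'blank_lines' did not affect the result
```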
The packaged distributions are using the default Rails server, WEBrick, which is known to be unable to handle large loads of requests. Rails 5 already features Puma as the default web server.
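Assuming the packaged apps manage dependencies via Bundler, a minimal Gemfile change would pin Puma explicitly instead of relying on the default server:

```ruby
# Gemfile: serve requests with Puma instead of WEBrick
gem 'puma'
```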
Which PostgreSQL version is indicated for the development environment: 9.1 or 9.3? I ask because the Prezento README indicates version 9.3.