
[WL] QC rating


2006-04-12 02:56:27 AM
I know it is an anonymous system, but at the same time it is not.
When someone puts a low rating on a report, it should be a requirement to
provide an explanation for the low rating - this could simply go into the
comment field, without the author being able to remove it if only the rating
is modified.
This would help authors improve their reports, and make the people who rate
them more responsible for their ratings...
 
 

Re:[WL] QC rating

Quote
When someone puts a low rating on a report, it should be a requirement to
provide an explanation for the low rating
Is your post by any chance triggered by the low ratings you got for your
report # 27191? While I haven't rated this report personally - yet - I
probably would have rated it between 2 and 3 too, for the following reasons:
- assuming you know about the nature of floating point numbers, it should be
quite clear that TDateTime is not the perfect type to store timestamps in and
compare them to the millisecond; if every millisecond counts, consider
storing the date as a TDateTime and the time in milliseconds as an
Integer.
- even if you ignore the point above, in your sample you are converting a
floating point number to a string and then back, assuming that it wouldn't
lose any precision.
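The second point can be illustrated with a minimal Python sketch (Python's float is the same IEEE-754 double as Delphi's TDateTime; the 10-digit format here is only a stand-in for whatever digit limit the actual conversion routine uses):

```python
# A TDateTime-style value: days since the epoch, fractional part = time of day.
original = 38807.39583333333

# Formatting with a limited number of significant digits (as a FloatToStr-style
# routine might) and parsing the string back does not recover the same double:
text = f"{original:.10g}"      # keep only 10 significant digits
roundtrip = float(text)

print(text)                    # 38807.39583
print(roundtrip == original)   # False - precision was lost in the string
# The error, expressed in milliseconds of the day:
print(abs(roundtrip - original) * 24 * 3600 * 1000)
```

The error here is a few hundred milliseconds, purely from the string conversion.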
Besides that, I find it rather difficult to get the point of the
report. You show code and its resulting output in the description
and then repeat that in the Steps. Nowhere is it mentioned what you would
expect, what you actually get, and what kind of consequences you are facing
because of the difference.
This is not meant to offend you. It's just my 2 cents.
Cheers
 

Re:[WL] QC rating

"Sebastian Modersohn" <XXXX@XXXXX.COM> wrote in message
Quote
>When someone puts a low rating on a report, it should be a requirement to
>provide an explanation for the low rating

Is your post by any chance triggered by the low ratings you got for your
report # 27191 ? While I haven't rated this report personally - yet - I
probably would have rated it between 2 and 3 too, for the following
reasons:
Yes and no - it is just a general observation.
Quote
- assuming you know about the nature of floating point numbers, it should
be quite clear that TDateTime is not the perfect type to store timestamps in
and compare them to the millisecond; if every millisecond counts, consider
storing the date as a TDateTime and the time in milliseconds as an
Integer.
TDateTime is a type which represents date and time down to the
millisecond;
any conversions from/to it should not change the value which is stored.
A float value IS commonly used to avoid locale format dependency and to not
introduce a UTC convention, especially when sent via XML.
A float is a float on every machine, while a DateTime as a string differs
based on the locale, unless someone uses something like '2005-01-31
10:23:34.212'.
Please notice that it is not about the time in milliseconds but about HOW
this value is stored.
The value is off because of the float's presentation as a string.
In my example,
38807.3958333333 is not the same as 38807.39583333333 (+ one 3 after
the point) or 38807.3958(3),
for that matter.
So, IF I assign 38807.39583 to the float, it cannot just become 38807.395833.
To be precise, TDateTime = type Double; so it is especially important that
ANY conversion from/to TDateTime (Double) preserves the stored value without
introducing or reducing the number of digits where possible.
Any function which manipulates a TDateTime should act the same way as a
conversion.
I hope I am clearer now.
I also added an additional test case to show what exactly is happening.
It is easy to reproduce.
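The distinction drawn above can be checked directly - a small Python sketch (Python floats are IEEE-754 doubles, like TDateTime):

```python
# The two decimal strings differ only in one trailing digit, yet they parse
# to two distinct doubles - so the number of digits a conversion routine
# emits determines which value a reader gets back.
a = float("38807.3958333333")     # 10 digits after the point
b = float("38807.39583333333")    # one more trailing 3

print(a == b)                     # False: different doubles
# The gap matters once the double is read as a TDateTime and compared
# to the millisecond:
print(abs(a - b) * 24 * 3600 * 1000, "ms")
```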
Quote
- even if you ignore the point above, in your sample you are converting a
floating point number to a string and then back, assuming that it wouldn't
lose any precision.
I am not ignoring your point, but in real life one does not always dictate
how data is stored or presented. In our case it is VS.Net/XML and D32/XML
packets. Why should I do an additional conversion when DateTimeToStr and
FloatToStr, or accessing the TDateTime/Double value directly, should produce
the same result no matter how the data is massaged in the middle?
Quote
Besides that, I find it rather difficult to get the point of the
report. You show code and its resulting output in the description
and then repeat that in the Steps. Nowhere is it mentioned what you would
expect, what you actually get, and what kind of consequences you are facing
because of the difference.
I disagree; if you look at the comment after the result of the call, you
can see that it points to the problem.
Of course you need to read the code to see the difference in accessing the
date.
Quote
This is not meant to offend you. It's just my 2 cents.
None taken. And this is exactly my point in this post - if someone
doesn't see the point of a report, then talk about it instead of just
rejecting it outright. Yes, one would need to read the report and follow the
steps, but that does not mean that the report is off completely.
I've modified the report a little; see if it is better now.
PS. There is a reason for any report; it might not be so clear to
everybody who reads it, and authors (if they care about it) will go the extra
mile to keep the report open, IF the people who spend time setting a rating
spend time on supporting their rating. Not all of us are native English
speakers, but that does not make the result of our work worse or better.
Believe me, I am trying to put my thoughts as clearly as possible, but at the
same time you need to understand the differences in how ideas are presented
in different languages.
 


Re:[WL] QC rating

Quote
TDateTime is a type which represents date and time down to the
millisecond;
any conversions from/to it should not change the value which is stored.
Hm. I tend to disagree. Knowing that a TDateTime is stored as a float
(a Double, as you pointed out), you should also be aware that it "inherits"
the precision problems which apply to floating point numbers. Maybe this
isn't documented well enough, but that is a different issue.
Quote
A float value IS commonly used to avoid locale format dependency and to not
introduce a UTC convention, especially when sent via XML.
A float is a float on every machine, while a DateTime as a string differs
based on the locale, unless someone uses something like '2005-01-31
10:23:34.212'.
That's why I suggested using a TDateTime for the Date portion and an Integer
for the time in milliseconds. AFAICS this doesn't introduce any locale
dependencies.
Quote
Please notice that it is not about the time in milliseconds but about HOW
this value is stored.
The value is off because of the float's presentation as a string.
Exactly. TDateTime has been a double for a *very* long time. There's no way
to change that without breaking a lot of code. I mean, really *lots* of
code. I've seen INI files where a timestamp has been saved as a float string,
etc.
Quote
I hope I am clearer now.
I also added an additional test case to show what exactly is happening.
It is easy to reproduce.
I had a look at the updated report. I'm sorry, but this doesn't make the
report clearer. It adds more examples. What I would have preferred is a
simple format like expected behaviour and actual behaviour. Quite a few
reports employ this, and IMHO that makes it easier to grasp what the actual
problem is.
Quote
I am not ignoring your point, but in real life one does not always dictate
how data is stored or presented. In our case it is VS.Net/XML and D32/XML
packets.
Sure. No doubt about that. Out of curiosity: If you reimport that XML with
.NET does it show the same problems? I would expect so.
Quote
I disagree; if you look at the comment after the result of the call, you
can see that it points to the problem.
Of course you need to read the code to see the difference in accessing the
date.
I didn't say that it was impossible. I said that it was comparatively
difficult. Adding a few lines about the expected and the actual behaviour,
as I mentioned above, would make the report easier to understand.
Quote
And this is exactly my point in this post - if someone doesn't see the
point of a report, then talk about it instead of just rejecting it outright.
Yes, one would need to read the report and follow the steps, but that does
not mean that the report is off completely.
I wouldn't rate a report if I had the feeling that I didn't understand what
it was about. That doesn't mean that I wouldn't give it a rather low rating
because it is *hard* to understand.
Quote
I've modified the report a little; see if it is better now.
To be honest, I don't think it is much better now. See my comments above
about expected and actual behaviour. Just an additional point: you claim
that the conversion could get you "off" by hours if not more. I can't see
how this can happen. If you've got a test case I'm more than willing to be
proven wrong.
Quote
PS. There is a reason for any report; it might not be so clear to
everybody who reads it, and authors (if they care about it) will go the
extra mile to keep the report open, IF the people who spend time setting a
rating spend time on supporting their rating. Not all of us are native
English speakers, but that does not make the result of our work worse or
better. Believe me, I am trying to put my thoughts as clearly as possible,
but at the same time you need to understand the differences in how ideas
are presented in different languages.
Point easily taken. I'm not a native English speaker either, so I know
exactly what you are talking about. One way to get around that would be to
post a potential report in the newsgroups and then wait for the input of
others. If there is any interest in the subject, you should get enough
feedback to prepare a very good report. I realize that this means even more
effort is needed to report a bug.
Cheers
 

Re:[WL] QC rating

Quote
>TDateTime is a type which represents date and time down to the
>millisecond;
>any conversions from/to it should not change the value which is stored.

Hm. I tend to disagree. Knowing that a TDateTime is stored as a float
(a Double, as you pointed out), you should also be aware that it "inherits"
the precision problems which apply to floating point numbers. Maybe this
isn't documented well enough, but that is a different issue.
Precision problems do not appear until you start to add/multiply/etc.
I would agree with you if the situation were one where an E^n adjustment had
to be made, but not here.
Quote
>A float value IS commonly used to avoid locale format dependency and to not
>introduce a UTC convention, especially when sent via XML.
>A float is a float on every machine, while a DateTime as a string differs
>based on the locale, unless someone uses something like '2005-01-31
>10:23:34.212'.

That's why I suggested using a TDateTime for the Date portion and an
Integer for the time in milliseconds. AFAICS this doesn't introduce any
locale dependencies.
I am not in a position to change the existing logic when regular code should
work just fine.
Quote
>Please notice that it is not about the time in milliseconds but about HOW
>this value is stored.
>The value is off because of the float's presentation as a string.

Exactly. TDateTime has been a double for a *very* long time. There's no
way to change that without breaking a lot of code. I mean, really *lots*
of code. I've seen INI files where a timestamp has been saved as a float
string, etc.
If you are familiar with MS SQL Server, there is a difference between the
DECIMAL/FLOAT/MONEY types - one is more precise than another.
It is the same here. A Double could be precise enough for a DateTime; it is
the current implementation which should recognize this.
Quote
>I hope I am clearer now.
>I also added an additional test case to show what exactly is happening.
>It is easy to reproduce.

I had a look at the updated report. I'm sorry, but this doesn't make the
report clearer. It adds more examples. What I would have preferred is a
simple format like expected behaviour and actual behaviour. Quite a few
reports employ this, and IMHO that makes it easier to grasp what the actual
problem is.
I modified it one more time to point out the exact place... at least I
tried...
Quote
>I am not ignoring your point, but in real life one does not always dictate
>how data is stored or presented. In our case it is VS.Net/XML and D32/XML
>packets.

Sure. No doubt about that. Out of curiosity: If you reimport that XML with
.NET does it show the same problems? I would expect so.
No, it doesn't. But I will try exactly the same code tomorrow in C#.
Quote
>I disagree; if you look at the comment after the result of the call, you
>can see that it points to the problem.
>Of course you need to read the code to see the difference in accessing the
>date.

I didn't say that it was impossible. I said that it was comparatively
difficult. Adding a few lines about the expected and the actual behaviour,
as I mentioned above, would make the report easier to understand.

>And this is exactly my point in this post - if someone doesn't see the
>point of a report, then talk about it instead of just rejecting it outright.
>Yes, one would need to read the report and follow the steps, but that does
>not mean that the report is off completely.

I wouldn't rate a report if I had the feeling that I didn't understand what
it was about. That doesn't mean that I wouldn't give it a rather low rating
because it is *hard* to understand.

>I've modified the report a little; see if it is better now.

To be honest, I don't think it is much better now. See my comments above
about expected and actual behaviour. Just an additional point: you claim
that the conversion could get you "off" by hours if not more. I can't see
how this can happen. If you've got a test case I'm more than willing to be
proven wrong.
If you are on the verge between the first half hour and the next, then one
millisecond will throw you off.
 

Re:[WL] QC rating

"Serge Dosyukov (Dragon Soft)" <serge [AT] dragonsoftru [DoT] com> wrote in
message news: XXXX@XXXXX.COM ...
Quote

>>TDateTime is a type which represents date and time down to the
>>millisecond;
>>any conversions from/to it should not change the value which is stored.
>
>Hm. I tend to disagree. Knowing that a TDateTime is stored as a float
>(a Double, as you pointed out), you should also be aware that it "inherits"
>the precision problems which apply to floating point numbers. Maybe this
>isn't documented well enough, but that is a different issue.

Precision problems do not appear until you start to add/multiply/etc.
I would agree with you if the situation were one where an E^n adjustment
had to be made, but not here.

That just makes it show up more often.
You can have problems on the integer level too.
Store a few odd integers in floats.
Then do a trunc and assign to an integer.
roughly half of them will be off by one.
This is just theory, but I don't have time to try
it tonight. I can create a test project tomorrow
if you care.
--
Thanks,
Brad.
 

Re:[WL] QC rating

Serge,
I have added my comments to
qc.borland.com/wc/qcmain.aspx
I think that you have just discovered some problems with
floating point numbers that many of us have long since
learned to live with.
Regards, JohnH
 

Re:[WL] QC rating

"Brad White" <bwhite at inebraska.com> wrote
Quote
Store a few odd integers in floats.
Then do a trunc and assign to an integer.
roughly half of them will be off by one.
This is just theory, but I don't have time to try
it tonight. I can create a test project tomorrow
if you care.
Brad,
I bet that your theory is wrong. I think that plus or minus integers
up to about 2^<number of bits in mantissa> will be stored without
any error. For type Extended, that is about 19-20 decimal
digits.
Regards, JohnH
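John's bound can be checked empirically - a quick Python sketch (Python's float is a 53-bit-significand double; Delphi's Extended has a 64-bit significand, hence his larger 19-20 digit figure):

```python
# Every integer with magnitude up to 2**53 is exactly representable in a
# double, odd or even, so trunc-and-assign cannot be off by one there.
exact = all(int(float(n)) == n for n in range(1, 200_001, 2))  # odd integers
print(exact)                      # True

# Past 2**53 the gap between consecutive doubles exceeds 1, and odd
# integers really do get rounded:
big = 2**53 + 1
print(float(big) == big)          # False: 2**53 + 1 is not representable
```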
 

Re:[WL] QC rating

Quote
>Precision problems do not appear until you start to add/multiply/etc.
>I would agree with you if the situation were one where an E^n adjustment
>had to be made, but not here.
>

That just makes it show up more often.
You can have problems on the integer level too.

Store a few odd integers in floats.
Then do a trunc and assign to an integer.
roughly half of them will be off by one.
It is funny you've mentioned the Trunc function - it is the one used in the
Span logic; maybe that is what produces the off-by-one?
Quote
This is just theory, but I don't have time to try
it tonight. I can create a test project tomorrow
if you care.
Thank you, that would be great.
I can then extend the case.
 

Re:[WL] QC rating

Thank you for your comment; when more information appears (from Brad) I will
extend the project.
Yes, I know it is a "legacy" problem, and we live with such problems... but
that is exactly my point - why should we? I hate workarounds, and I will
never agree with someone who says that "since it is a known problem and it
has a workaround, let's just leave it alone". If it is a problem (no matter
how old it is) and it is reproducible, then record it, open it and let it
flow through the pipe. If people think it is important, then they will rate
it high, and it will have a chance to be fixed...
Thank you, guys, for all your help.
Quote
I have added my comments to
qc.borland.com/wc/qcmain.aspx

I think that you have just discovered some problems with
floating point numbers that many of us have long since
learned to live with.
 

Re:[WL] QC rating

"Serge Dosyukov (Dragon Soft)" wrote
Quote
Yes, I know it is a "legacy" problem, and we live with such problems... but
that is exactly my point - why should we? ...
I can suggest two "fixes":
(1) For values that must be input, displayed, and/or communicated
with a limited number of digits, make sure that the limited-digit values
and the program's variables and data storage mechanisms can hold
the complete range of allowed values *exactly*.
Or
(2) Make sure that the values, in their transformations into and out
of the data storage mechanisms, can travel end-to-end exactly,
without corruption, even though the in-between storage values
are not exactly correct.
--JohnH
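Fix (2) is attainable for doubles: 17 significant decimal digits always round-trip an IEEE-754 double exactly, even though the decimal string is not the value's exact expansion. A Python sketch of the idea:

```python
import random

# With 17 significant digits, float -> string -> float is lossless for
# every double - the string survives end-to-end without corruption.
random.seed(1)
for _ in range(1000):
    x = random.random() * 50000        # a TDateTime-sized double
    text = f"{x:.17g}"                 # 17 significant digits
    assert float(text) == x            # round-trips exactly
print("all round-trips exact")
```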
 

Re:[WL] QC rating

Correction:
I have added my comments to
qc.borland.com/wc/qcmain.aspx
Also see
qc.borland.com/wc/qcmain.aspx
qc.borland.com/wc/qcmain.aspx
--JohnH
 

Re:[WL] QC rating

Quote
>>I am not ignoring your point, but in real life one does not always
>>dictate how data is stored or presented. In our case it is VS.Net/XML
>>and D32/XML packets.
>
>Sure. No doubt about that. Out of curiosity: If you reimport that XML
>with .NET does it show the same problems? I would expect so.

No, it doesn't. But I will try exactly the same code tomorrow in C#.
Would you please confirm this? As John Herbster has mentioned in a comment
on your QC report, the biggest loss of precision happens when the float
gets converted into a string. Since that part would come from your C# app,
it would be quite interesting to see if it's happening there too.
Quote
If you are on the verge between the first half hour and the next, then one
millisecond will throw you off.
Only when you start to round or compare floating point numbers. Use
Math.CompareValue() and either use the default Epsilon or one that suits
your needs.
HTH
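In Python terms, the suggested epsilon comparison looks roughly like this (math.isclose standing in for Delphi's Math.CompareValue; the half-millisecond tolerance is an arbitrary choice for illustration, not a Delphi default):

```python
import math

a = 38807.39583333333
b = a + 3e-11          # a few ULPs away, e.g. after a string round-trip

print(a == b)          # False: exact equality fails on the noise
# Compare with a tolerance of half a millisecond, expressed in days
# (the natural unit of a TDateTime):
half_ms = 0.5 / (24 * 3600 * 1000)
print(math.isclose(a, b, rel_tol=0.0, abs_tol=half_ms))   # True
```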
 

Re:[WL] QC rating

Quote
Yes, I know it is a "legacy" problem, and we live with such problems... but
that is exactly my point - why should we?
Because you can't fix it? You are experiencing problems which are intrinsic
to floating point numbers. The only way to get around that would be to use
something other than a float to represent a date/timestamp, and I don't
think that is what you are suggesting.
Quote
I hate workarounds, and I will never agree with someone who says that
"since it is a known problem and it has a workaround, let's just leave it
alone". If it is a problem (no matter how old it is) and it is
reproducible, then record it, open it and let it flow through the pipe.
I agree, but only if there is a chance that this might actually get fixed.
And I don't consider redefining TDateTime as a completely different type to
be a fix.
Quote
If people think it is important, then they will rate it high, and it will
have a chance to be fixed...
They would have to vote for it. Ratings are supposed to measure the quality
of the report, i.e. is it clearly written, has it got steps, a testcase, is
it reproducible, etc.
Regards
 

Re:[WL] QC rating

Quote
Because you can't fix it? You are experiencing problems which are
intrinsic to floating point numbers. The only way to get around that would
be to use something other than a float to represent a date/timestamp,
and I don't think that is what you are suggesting.
The simplest solution I've found is to add 0.00000000002 to the float value.
This adds an "error" to the value, but it brings all of them to the same
number of significant digits, which happens to match what the functions
produce when rounding.
Quote
>I hate workarounds, and I will never agree with someone who says that
>"since it is a known problem and it has a workaround, let's just leave it
>alone". If it is a problem (no matter how old it is) and it is
>reproducible, then record it, open it and let it flow through the pipe.

I agree, but only if there is a chance that this might actually get fixed.
And I don't consider redefining TDateTime as a completely different type
to be a fix.
See the "fix" above.
Quote
>If people think it is important, then they will rate it high, and it will
>have a chance to be fixed...

They would have to vote for it. Ratings are supposed to measure the
quality of the report, i.e. is it clearly written, has it got steps, a
testcase, is it reproducible, etc.
I usually rely on the rating when deciding which reports to open.
I would expect to rely on votes when an error has to be fixed or a feature
implemented.