Kjell Rilbe <kjell.ri...@upec.se> wrote:
>The real question remains unanswered: Do the TUpdateSQL queries ever get
>unprepared? If so, where? If not, shouldn't they? Is this issue
>documented anywhere?
My answer is that I don't know. Of course they should. The prepare
step allocates resources in a relational database and the engine has
to be told that the resources are no longer needed or else they just
build up and cumulatively slow the session. All such resources are
temporary and can always be flushed by shutting the engine down and
restarting it, but that's not a desirable solution, no matter how
easy.
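To put that concretely, the discipline on a plain TQuery looks like
this (a minimal sketch; MyQuery is a placeholder name):

  MyQuery.Prepare;     // allocates statement resources on the server
  try
    MyQuery.ExecSQL;   // the prepared statement can be reused
  finally
    MyQuery.UnPrepare; // tells the engine the resources can be freed
  end;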
All I can tell you now is that I haven't seen the problem yet. But you
can bet that if I see it, I'll dig into the VCL code until I find the
answer and then I'll find a way to fix it.
>...
>> 1. You _can_ attach more than one UpdateSQL to a dataset. You do that
>> by leaving the Dataset.UpdateObject blank and using the
>> TUpdateSQL.Dataset property instead. You have to do this in code
>> because that property is not published. Point more than one TUpdateSQL
>> to a dataset and the TDataset.ApplyUpdates fires both. (But remember
>> to unlink before you destroy the form that the TUpdateSQLs are on.)
>This is simply not true. If I leave the UpdateObject property blank
>(nil), I can't open the dataset as live, without attaching an
>OnUpdateRecord handler. If I do attach an OnUpdateRecord handler, no
>UpdateSQL will be triggered even if UpdateObject *is* assigned! This is
>because the handler is supposed to do that instead.
Couple of things here. If by "live" you mean RequestLive = True then I
have to say that RequestLive never needs to be true in cached updates.
I leave it false by default.
And what I said is true. I do it. Suppose you have the query MyQuery
and cached updates is set to true. You may optionally leave the
UpdateObject blank. Instead, before you open the query, do this:
MyUpdateSQL.Dataset := MyQuery;
This works. And you may have as many UpdateSQL objects as you like
pointing to MyQuery. I often have more than one.
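For example (a sketch; UpdateSQL1 and UpdateSQL2 are placeholder
names for two TUpdateSQL components on the form):

  // Attach both update objects in code, leaving
  // MyQuery.UpdateObject blank, before opening:
  UpdateSQL1.DataSet := MyQuery;
  UpdateSQL2.DataSet := MyQuery;
  MyQuery.Open;

  // ...and unlink them before the form that owns them is destroyed:
  UpdateSQL1.DataSet := nil;
  UpdateSQL2.DataSet := nil;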
Finally, OnUpdateRecord is not required, even though it is used (by me
anyway) more often than not. Try this in a test app: One grid, one
query with datasource. Make the query for one table. Set to cached
updates. (So far, no code, right?) Once you get that running, you'll
find that edits work but don't save. Then put this one line:
MyQuery.ApplyUpdates;
under a button or in the TForm1.OnCloseQuery event.
You will find (to your surprise, I think) that this works. Your
changes are saved. This is the irreducible minimum. All else is an
elaboration.
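In code, the whole thing comes down to something like this (a sketch;
the names are placeholders):

  procedure TForm1.Button1Click(Sender: TObject);
  begin
    MyQuery.ApplyUpdates; // flush the cached changes to the server
  end;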
>On the other hand, if I assign UpdateObject with one of the UpdateSQLs I
>need, and in code set all the UpdateSQLs' DataSet to point to the query,
>only the one in the UpdateObject property is triggered.
If you use the UpdateSQL.Dataset property, then the
TQuery.UpdateObject must be blank. There's a conflict otherwise.
>Trust me, I just tried it! There is no way a TQuery can automatically
>trigger more than one TUpdateSQL. You have to code an OnUpdateRecord
>handler to do that. If you don't believe me, you can go ahead and check
>the source for TUpdateSQL.SetDataSet and
>TBDEDataSet.CachedUpdateCallBack. SetDataSet does not inform the TQuery
>that it has been attached to the TUpdateSQL, so there's no way the
>TQuery can trigger the TUpdateSQL. Also, CachedUpdateCallBack calls the
>OnUpdateRecord handler if assigned, otherwise calls UpdateObject.Apply
>if UpdateObject is assigned. There's no code there to cycle through
>multiple update objects. Nada.
You may be right about this. I've never tried it anyway. That is, when
I have more than one UpdateSQL component pointing to a query, there
are business reasons not to want it to be automatic. The
OnUpdateRecord declaration allows for only one Dataset and one
UpdateKind. A modification to one dataset may be accompanied by an
insert to the other, so I prefer to handle the management manually in
OnUpdateRecord, updating both UpdateSQL components from there.
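A sketch of what I mean (the component names and the
modify-plus-insert rule are invented for illustration):

  procedure TForm1.MyQueryUpdateRecord(DataSet: TDataSet;
    UpdateKind: TUpdateKind; var UpdateAction: TUpdateAction);
  begin
    // Drive both update objects by hand. Here a modification to
    // the first table is accompanied by an insert to the second.
    UpdateSQL1.Apply(UpdateKind);
    if UpdateKind = ukModify then
      UpdateSQL2.Apply(ukInsert);
    UpdateAction := uaApplied; // tell the cache we handled the record
  end;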
>...
>I don't understand your point here. Why would I want to manually prepare
>the UpdateSQL queries? The TUpdateSQL.ExecSQL already does that! That
>was the problem!
As I said, I'm not sure what good I got from it. My point was that it
was possible to manipulate the UpdateSQL queries. (In my case I was
scanning and updating a very large file and wanted to keep the local
cache size down. So I iterated the ApplyUpdates/CommitUpdates every 10
records. I wanted the UpdateSQL.Query.Prepared to survive that so I
gave the manual approach a try. Not sure if it did anything but I got
the performance I wanted, i.e. about 1 second per record.)
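Roughly this, in the dataset's AfterPost event (a sketch; FPending
and the 10-record threshold are my own inventions):

  procedure TForm1.MyQueryAfterPost(DataSet: TDataSet);
  begin
    Inc(FPending);
    if FPending mod 10 = 0 then
    begin
      MyQuery.ApplyUpdates;  // send the batch to the server
      MyQuery.CommitUpdates; // clear the local cache
    end;
  end;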
>I checked the source for both. No call to UnPrepare or anything
>that could lead to a call to UnPrepare. All it does is call a BDE api
>function with a phase1 or phase2 commit command respectively. OK, so
>perhaps there's a callback from the BDE? Wrong again. There is one,
>TBDEDataSet.CachedUpdateCallBack. But that does nothing but call
>OnUpdateRecord or UpdateObject.Apply (whichever is assigned).
Interesting. I checked the documentation I have for
DbiEndDelayedUpdates and there's no help there either. Thanks for the
heads up.
If you really need to know, you can check the
UpdateSQL.Query[uk].Prepared after CommitUpdates.
OTOH, it may be a good idea to leave it alone til the form is
destroyed. But I wouldn't depend on the destructor doing the right
thing with the db engine. If I find this to be a problem, I will
explicitly unprepare those in the destructor just because it's good db
management to do so.
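Something like this in the form's OnDestroy (a sketch, using the
Query[uk] property mentioned above):

  var
    uk: TUpdateKind;
  begin
    for uk := Low(TUpdateKind) to High(TUpdateKind) do
      if MyUpdateSQL.SQL[uk].Count > 0 then
        if MyUpdateSQL.Query[uk].Prepared then
          MyUpdateSQL.Query[uk].UnPrepare;
  end;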
>...
>This way, if the ApplyUpdates call is done in a transaction, as you
>strongly recommend, the OnUpdateRecord does not start another
>transaction (which is impossible as far as I know, at least with some
>servers). This is fine, because atomicity is guaranteed for the
>OnUpdateRecord work by the already present transaction. If, however, the
>ApplyUpdates call is not done in a transaction, the OnUpdateRecord
>*must* guarantee atomicity on its own accord, and it will! Anything
>wrong with this?
I'm not sure how you are using "atomicity." I just wonder why in
blazes anyone would want to run one transaction for every record in a
feature that is designed to run many records in a transaction. At
best, you defeat the purpose of cached updates and, in defeating
something as complicated as this, there's got to be trouble. (I do not
go where angels fear to tread.)
>I do note that the docs for the BDE api function DbiApplyDelayedUpdates
>for applying delayed (cached) updates says that it should be called in a
>transaction. But I assume this is because the entire set of updated
>records must be rolled back if an error occurs, because there is no way
>of marking only part of the cache as committed. In my case, I know that
>there is always exactly one record that's modified, so that problem is
>null and void, thus no reason to put a transaction around the
>ApplyUpdates call. Right?
And so, IMHO, there's no reason to use cached updates, except when
you have one joined query and need to modify more than one table.
But you should be using cached updates to capture a set of records
that need to be updated en masse or not at all. You should want the
rollback so that you can make corrections or blow the changes away. If
you're not designing in that way, then turn cached updates off and
write the code to use another query to do the updates.
In other words, I think you should stop fighting it. Go with the
design or do something else.
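By "another query" I mean a plain parameterized update (a sketch; the
table, field, and component names are invented):

  UpdQuery.SQL.Text :=
    'update CUSTOMER set NAME = :NAME where CUST_ID = :ID';
  UpdQuery.ParamByName('NAME').AsString := NameEdit.Text;
  UpdQuery.ParamByName('ID').AsInteger := CurrentID;
  UpdQuery.ExecSQL;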
>...Like I said, I don't want caching! I'm forced to
>use it to be able to have a live dataset, that's all. To make it appear
>like a "normal" non-cached live dataset, I'd like to ApplyUpdates after
>every post (and delete of course).
Then put ApplyUpdates in a transaction block and solve your problem.
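The pattern looks like this (a sketch; Database1 stands for the
TDatabase the query runs against):

  Database1.StartTransaction;
  try
    MyQuery.ApplyUpdates;  // phase 1: post the cached changes
    Database1.Commit;
  except
    Database1.Rollback;
    raise;
  end;
  MyQuery.CommitUpdates;   // phase 2: clear the cache on success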
>> If you do all that and still get a deadlock, then it's something else
>> in your code. One useful trick is to try to reproduce your problem in
>> a test app, outside the context of your main app.
>The deadlock is some kind of limitation in MSSQL/ODBC. I get the
>deadlock with VB/Access as well, so I can rule out BDE and Delphi. Not
>sure whether it's ODBC or MSSQL itself though. I've bumped into the same
>deadlock problem several times now, and it's very annoying. I can only
>recommend using another server. ...and make sure no queries remain
>prepared unless I really want them to. Hence this discussion.
At my client site today, we did some concurrency testing on our new
system. We put two people on the same receiving record and had them
both make changes. We got the expected deadlock and that was because
(and _only_ because) we had two people contending for the same
dataset.
But the good news was that, doing cached updates my way, the deadlock
was only temporary. When the data was saved by the people in the right
order, it was possible to resolve the deadlock and get both changes
saved. I haven't figured out the exact sequence yet, but I know now
that I can turn that deadlock from a fatal error into a resolvable
notification.
Your deadlock comes because you have more than one query (read cursor)
looking at the same data and you're trying to update out of order. I
have no idea how you did that, but I'd bet a meal that it's in your
code.
Good luck.
Phil Cain
--