
D5 \ dBase 7.5 performance problems

Hi,

     How do I increase my performance using dBase 7.5 and Delphi 5?
When I use Delphi to access dBase tables, I can insert between 4,000 and
13,000 records per minute, depending on the key structure.  However,
if I use dBase directly I can get upwards of 60,000+ inserts a minute.  I
have found similar discrepancies in deletes and finds.  I am using
TTables.  Any ideas?

Thanks,

Mike Thrapp
Software Consulting Associates

 

Re:D5 \ dBase 7.5 performance problems


|      How do I increase my performance using dBase 7.5 and Delphi 5?
| When I use Delphi to access dBase tables, I can insert between 4,000 and
| 13,000 records per minute, depending on the key structure.  However,
| if I use dBase directly I can get upwards of 60,000+ inserts a minute.  I
| have found similar discrepancies in deletes and finds.  I am using
| TTables.  Any ideas?

You're using TTables - is that local or network? I'm guessing you are doing
record-by-record inserts / deletes. How wide is the table (bytes per
record)?

What performance are you looking for? How much coding are you prepared
to do to get that performance?

There are several things you can do depending on your answers.

Garry Kernan

Re:D5 \ dBase 7.5 performance problems


Garry,

     Thanks for the response.  At this point I have the data local, but I
will be deploying on a network.  I have also tested on a network (Novell).
In terms of what performance I am looking for - the answer is the fastest
I can get, and I am willing to do significant coding for it if necessary.

Thanks,
Mike


Re:D5 \ dBase 7.5 performance problems


Mike,

|      Thanks for the response.  At this point I have the data local, but I
| will be deploying on a network.  I have also tested on a network (Novell).
| In terms of what performance I am looking for - the answer is the fastest
| I can get and I am willing to do significant coding for it if necessary.

OK.
1) For record-by-record work, instantiate TField references rather than
calling FieldByName for each insert. Also, you should use AsInteger, AsString,
etc. rather than .Value.
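As a minimal Delphi sketch of point 1 (the table and field names are my own
invention, not from the thread): resolve the TField references once, outside
the loop, and write through the typed accessors:

```pascal
var
  fID, fName: TField;
  i: Integer;
begin
  // Resolve the fields once; FieldByName does a string lookup on every call.
  fID   := Table1.FieldByName('ID');
  fName := Table1.FieldByName('NAME');
  Table1.DisableControls;   // stop attached controls repainting per insert
  try
    for i := 1 to 10000 do
    begin
      Table1.Append;
      fID.AsInteger  := i;                      // typed accessor, no Variant
      fName.AsString := 'Rec ' + IntToStr(i);   // conversion through .Value
      Table1.Post;
    end;
  finally
    Table1.EnableControls;
  end;
end;
```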

2) For better performance, fill exclusively opened local tables and then
use batch moves across the network. This can be particularly beneficial
on large tables. The reason is you give the BDE a chance to update the
tables and then update the indexes as a block. Record-by-record
appends / deletes force the BDE to update the index after each operation.
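Point 2 sketched with TBatchMove (component and table names are assumed for
illustration, not taken from the thread):

```pascal
var
  Batch: TBatchMove;
begin
  // LocalTable was filled on a local drive with Exclusive := True;
  // NetTable points at the shared table on the server.
  Batch := TBatchMove.Create(nil);
  try
    Batch.Source      := LocalTable;
    Batch.Destination := NetTable;
    Batch.Mode        := batAppend;  // append the local rows to the network table
    Batch.Execute;                   // one bulk operation instead of N posts
  finally
    Batch.Free;
  end;
end;
```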

3) For best performance, use block writes for inserts. This means you need
to obtain a memory buffer and fill it yourself. Check out:

   TTable.RecordSize;

  Check( DbiPutField( hCursor, iField, pRecBuf, pSrc ) );
  Check( DbiWriteBlock( dBASETable.Handle, recs, pBuf ) );
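Fleshing that out a little (a hedged sketch only: the BLOCK size, the field
number, and the single Longint field are assumptions, and error handling is
reduced to Check): fill a buffer of whole records, then hand the block to the
BDE in one call:

```pascal
const
  BLOCK = 100;                 // records per physical write; tune to taste
var
  Props: CURProps;
  RecSize: Word;
  pBuf, pRec: PChar;
  i: Integer;
  Count: Longint;
  Id: Longint;
begin
  // Ask the BDE for the physical record buffer size.
  Check( DbiGetCursorProps( dBASETable.Handle, Props ) );
  RecSize := Props.iRecBufSize;
  GetMem( pBuf, RecSize * BLOCK );
  try
    for i := 0 to BLOCK - 1 do
    begin
      pRec := pBuf + i * RecSize;                     // i-th slot in the buffer
      Check( DbiInitRecord( dBASETable.Handle, pRec ) );
      Id := i + 1;
      Check( DbiPutField( dBASETable.Handle, 1, pRec, @Id ) );  // field #1
    end;
    Count := BLOCK;
    Check( DbiWriteBlock( dBASETable.Handle, Count, pBuf ) );   // one append
  finally
    FreeMem( pBuf );
  end;
end;
```

The index still has to absorb the new keys, but the BDE gets to do it for the
whole block rather than once per record.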

On a 450 MHz PIII with local Paradox tables and block writes I can achieve
good speed.

I needed to convert an 84-byte-per-record GPS table containing 5.7 million
records (875 megabytes including indexes) to a 42-byte-per-record table.

My strategy was:
1 - create the table with a primary key (Paradox stores data in primary key order)
2 - perform the conversion
3 - add secondary indexes.

The whole job took 11.5 minutes, which works out to about 500,000 records per minute.
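That strategy might look like this in outline (DstTable and the index names
and fields are hypothetical; the conversion loop itself is elided):

```pascal
begin
  // Step 1: the target table was created with only its primary key,
  // so the fill maintains a single index.

  // Step 2: perform the conversion (record loop or block writes)...

  // Step 3: add the secondary indexes in one pass each, after the data is in.
  DstTable.Close;              // AddIndex wants exclusive access
  DstTable.Exclusive := True;
  DstTable.Open;
  DstTable.AddIndex( 'byName', 'NAME',  [ixCaseInsensitive] );
  DstTable.AddIndex( 'byDate', 'STAMP', [] );
end;
```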

Garry Kernan

Re:D5 \ dBase 7.5 performance problems


Garry,

     Thanks - I will try this.

Mike

