How about creating import tables in the summarized base? These tables would
have the same structure as the "real" tables. Every time you update the
summarized database, you start by emptying the import tables (actually they
should already be empty, since you empty them at the end of each update).
Then you bring the new data into the import tables, and after that you can
update the "real" tables with a few SQL commands. In the end you empty the
import tables again.
You could even do this by having the import tables in a separate database,
or - if the SQL commands don't get too complicated - you could do the
update directly without any import tables.
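The staging pattern above can be sketched with a few statements. This is a minimal illustration using SQLite from Python rather than Access/Jet (whose UPDATE-with-join syntax differs), and the table names (`summary`, `summary_import`) and columns are hypothetical stand-ins for your real schema:

```python
import sqlite3

# Hypothetical schema: a summary table keyed on `id` with a numeric
# `total` column, plus an import (staging) table of identical structure.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE summary (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE summary_import (id INTEGER PRIMARY KEY, total REAL);
    INSERT INTO summary VALUES (1, 10.0), (2, 20.0);
""")

# Step 1: the import table starts empty; load this month's data into it.
con.executemany("INSERT INTO summary_import VALUES (?, ?)",
                [(2, 5.0), (3, 7.0)])

# Step 2: update existing rows - here adding the imported figures to the
# existing field values, one of the cases mentioned in the question.
con.execute("""
    UPDATE summary
    SET total = total + (SELECT total FROM summary_import i
                         WHERE i.id = summary.id)
    WHERE id IN (SELECT id FROM summary_import)
""")

# Step 3: insert the rows that do not exist in the real table yet.
con.execute("""
    INSERT INTO summary
    SELECT * FROM summary_import
    WHERE id NOT IN (SELECT id FROM summary)
""")

# Step 4: empty the import table again, ready for the next run.
con.execute("DELETE FROM summary_import")

print(sorted(con.execute("SELECT id, total FROM summary")))
# [(1, 10.0), (2, 25.0), (3, 7.0)]
```

Because the update and insert are set-based SQL statements rather than a per-row recordset loop, the engine can do the matching in bulk, which is the main performance win of this approach.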
>> I have designed an Access based application which holds
>> summarized information from a larger database (many Gb).
>> The summary base will be between 500 Mb and 1 Gb, so
>> Access seems a good choice as the application will be
>> deployed to several disconnected computers. They will
>> only use it for reading, no updates.
>> Every month the database needs to be updated with new,
>> updated and summarized information in the various tables.
>> Now, I am wondering how I can best perform these data
>> imports. For the tests for database size I just used the
>> import function that is part of Access (file menu
>> import). However, when it comes to the real imports I
>> need to check for existing records in the target base and
>> update these, sometimes replacing data, sometimes adding
>> figures to existing field values. The intrinsic import
>> function of Access does not seem capable of handling this.
>> This seems to leave me with ADO or DAO through VB, or is
>> there any other loader utility or method that I could use?
>> Also, if I use ADO, which would be the better way to go
>> about it?
>> Consider that my data file will have several millions of
>> rows. For each row I will need to search for an existing
>> record and update it, if it exists. If it doesn't the
>> record needs to be added. I have only just started
>> working with ADO, and mostly with recordsets. However,
>> the process of creating a new record set (to query for an
>> existing record) for each row in my datafile does not
>> seem very effective performance wise.
>> Does anyone know of a good method?
>> With kind regards,