We currently have 17 tables, containing about 115,000 rows in total, statically linked via SQL Source Control. This is just a small fraction of the tables that we would ultimately like to link. When I click on the "Commit Changes" tab in SSMS, it takes about 5 minutes before the change list shows up.
This is currently tolerable under optimal conditions. Under sub-optimal conditions (e.g. connecting remotely over VPN), I run into timeout issues.
Looking forward, I am not sure that we will be able to link all of the lookup tables that we would ultimately like to manage. We have dozens of such tables, and I'm afraid that viewing the change list would take over an hour, which would not be acceptable.
Are there ways to minimize the time it takes to determine the change list when you are versioning the data in many large tables? We use these linked tables to generate the update scripts (via SQL Data Compare) as changes are promoted from Dev to QA to Staging to Production.
Some possible approaches I have been considering:
1) Link the tables, generate the scripts, copy the scripts to a separate folder, and then unlink the tables.
Obviously, this is a clunky solution because you have to repeat the whole process every time a row in one of those tables is inserted, updated, or deleted.
2) Don't link the tables at all. Just use the SQL Data Compare command-line executable and hand-pick the tables you want to deploy via the /Include switch (a sketch follows below). I am afraid of doing this because it would be very easy to have test data accidentally make it to production, since developers wouldn't be manually committing changes to source control.
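For illustration, a minimal sketch of what that deployment call might look like. The server names, database names, table name, and output path here are all hypothetical, and the exact switch syntax should be verified against the SQL Data Compare command-line documentation for your version:

    REM Sketch only: server/database names, table name, and paths are hypothetical.
    REM Compares the data of one hand-picked lookup table between Dev and QA and
    REM writes the sync script to a file for review instead of deploying directly.
    sqldatacompare /Server1:DevServer /Database1:MyDb ^
                   /Server2:QAServer  /Database2:MyDb ^
                   /Include:Table:\[LookupTable\] ^
                   /ScriptFile:"C:\DeployScripts\LookupTable_Dev_to_QA.sql"

Writing to a script file rather than syncing directly would at least leave a review step in place of the source-control commit, but it still depends on someone getting the /Include list right every time.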
3) In SSMS, you could create two Registered Database connections to the same database (using different logins, for example). One connection would link just the schema to one folder in source control (e.g. http://mysvnserver/myDbRepository/trunk/SchemaScripts). The other connection would link the data to a second folder (http://mysvnserver/myDbRepository/trunk/DataScripts), filtering out all objects except the tables involved (with no indexes, no foreign key constraints, and no defaults) and linking just the data. This seems like a viable solution, but it is clunky and awkward to say the least, and it also means the schema of those tables is checked into source control in two places. A sketch of the resulting layout follows.
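To make that concrete, the repository would end up looking something like this (the folder names come from the URLs above; which connection links which folder is my assumption about how the setup would be arranged):

    http://mysvnserver/myDbRepository/trunk/
        SchemaScripts/   <-- full schema only, linked via the first registered connection
        DataScripts/     <-- data for the lookup tables only (no indexes, FKs,
                             or defaults), linked via the second connection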
Is there a more elegant solution for versioning monstrous tables?