Best practice to populate static data using a Visual Studio 2010 database project?

How do you populate your database with static, source-controlled data using a Visual Studio database project? I have tried all three strategies below, finding each one progressively better than the last. I am using, but not completely satisfied with, strategy 3. Do you have another alternative?

  1. Place insert scripts in the "Data Generation Plans" folder. Reference the scripts in the "Script.PostDeployment.sql" file to include them in the deployment process.

    -- advantage: straightforward
    -- drawback: very slow
    -- drawback: subsequent deploys must first delete the static data or check for its non-existence => inefficient

  2. Insert the data into the database the first time using whatever method is most convenient (e.g. could be the SSMS edit table feature). Extract that data using the bcp command line utility to create a bunch of data files and add them to your project. Create a script referenced in the "Scripts.PostDeployment.sql" file that executes a "bulk insert" statement for each data file.

    -- advantage: much faster than insert statements
    -- advantage: can leverage the SSMS edit table feature
    -- drawback: each bulk insert statement requires a fully qualified path to its data file, so if the data files are located on my machine at "C:\Projects\Dev\Source\foo.dat" then the remote dev machine must also have them at that exact location or the bulk insert statement fails
    -- drawback: existing static data must be deleted before the bulk insert statements run on subsequent deploys

  3. Create temporary tables during deployment to hold the static data and use the SQL MERGE statement to synchronize them with the target tables. See either of these blog posts.

    -- advantage: SQL MERGE seems to have the perfect semantics for this problem
    -- drawback: the logic for this strategy is repeated in each file
    -- drawback: table definitions are repeated as temporary tables in the SQL MERGE files
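For reference, the extract and reload steps of strategy 2 look roughly like this (the server, database, table, and file paths below are illustrative):

    -- Extract: run once from the command line (names are illustrative):
    --   bcp MyDatabase.dbo.Color out "C:\Projects\Dev\Source\Color.dat" -n -T -S MyServer
    -- Reload: in the post-deployment script:
    DELETE FROM [dbo].[Color];  -- clear existing static data first
    BULK INSERT [dbo].[Color]
    FROM 'C:\Projects\Dev\Source\Color.dat'
    WITH (DATAFILETYPE = 'native');  -- matches bcp's -n (native) flag

Note that the FROM clause is exactly where the fully qualified file name problem shows up: the path is resolved on the server running the deployment.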

Is there a superior alternative strategy? I gave up on strategy 1 because it was too slow. I dislike strategy 2 due to the fully qualified file name issue. I am satisfied but not thrilled by strategy 3. Is there a best practice?
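For comparison, here is roughly what the MERGE pattern of strategy 3 looks like in a post-deployment file (the [dbo].[Color] table and its columns are hypothetical, and a table variable stands in for the temporary table):

    DECLARE @Color TABLE ([Id] int NOT NULL, [Name] nvarchar(50) NOT NULL);
    INSERT INTO @Color ([Id], [Name]) VALUES (1, N'Red'), (2, N'Green'), (3, N'Blue');

    MERGE [dbo].[Color] AS Target
    USING @Color AS Source ON Target.[Id] = Source.[Id]
    WHEN MATCHED THEN UPDATE SET Target.[Name] = Source.[Name]
    WHEN NOT MATCHED BY TARGET THEN INSERT ([Id], [Name]) VALUES (Source.[Id], Source.[Name])
    WHEN NOT MATCHED BY SOURCE THEN DELETE;

This illustrates both the advantage (one idempotent statement handles inserts, updates, and deletes) and the drawback (the table definition is repeated for every static table).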


In your insert.sql script, you can put a GUID in the [__RefactorLog] table (a system table used during deployment) and check whether that GUID exists before inserting your data, like this:

    -- create a GUID with Tools > Create GUID in VS2010
    :setvar SOMEID "784B2FC9-2B1E-5798-8478-24EE856E62AE"

    IF NOT EXISTS (SELECT [OperationKey] FROM [dbo].[__RefactorLog] WHERE [OperationKey] = '$(SOMEID)')
    BEGIN
        -- insert your static data here

        INSERT INTO [dbo].[__RefactorLog] ([OperationKey]) VALUES ('$(SOMEID)')
    END


Then the data is inserted only if it does not already exist, or whenever you want to force a re-insert (by changing the GUID).

This is how I solved this problem in case anyone else finds this useful...

The strategy is to set a sqlcmd variable before building the database project. The variable holds the absolute path to the build folder, which the post-deployment script can reference. It is then a simple matter to use that variable in the deployment script for any additional files or resources you might need. The advantage of this strategy is that all paths are relative to the project file instead of requiring a hard-coded shared path.

Create a new SQL command variable named $(MSBuildProjectDirectory). Its value will get overridden by the build script.

Create an MSBuild script that sets the SQL command variable and builds the database project.

<Project ToolsVersion="4.0" DefaultTargets="BuildDatabase" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
    <PropertyGroup>
        <DeploymentConnectionString>Data Source=$(DatabaseServer)%3BIntegrated Security=True%3BPooling=False</DeploymentConnectionString>
    </PropertyGroup>
    <Target Name="BuildDatabase">
        <!-- Sets the project path variable so that the post-deployment script can determine
             the location of the bulk insert csv files. The .sqlcmdvars file name and XPath
             below are illustrative; point them at your own database project. -->
        <XmlUpdate XmlFileName="DatabaseProjectName\Properties\Database.sqlcmdvars"
                   XPath="//SqlCmdVariable[Include='MSBuildProjectDirectory']/Value"
                   Value="$(MSBuildProjectDirectory)\DatabaseProjectName" />
        <MSBuild Projects="DatabaseProjectName\DatabaseProjectName.dbproj" Targets="Build">
            <Output TaskParameter="TargetOutputs" ItemName="SqlFiles"/>
        </MSBuild>
    </Target>
</Project>

Update your post deployment script as follows...

BULK INSERT [dbo].[TableName] FROM '$(MSBuildProjectDirectory)\Scripts\Post-Deployment\Data\YourDataFile.csv'

You can use the schema output from the database project to update the target database. There is a command-line tool to run the deployment on another machine that does not have the VS2010 IDE installed.

So your data will stay the same, unless you have dropped any columns.

We haven't rolled our VS 2010 db project into production yet, but for our in-house project we load the production database into the target database and build/deploy to it during the dev/test phase. That said, I understand that probably will not work for you, Tim, if you have multiple prod databases and static data that goes out to each. But it can be done for single-prod-db shops like ours.
