How To Grab Windows SIDs with PowerShell

There are times when you just need those SIDs, no matter which door you knock on.
For those fateful moments, you can use the snippet below to grab the SID of a given target user.

# this is the target user name, in DOMAIN\UserName form
$username = "SomeDomain\SomeUserName";

# get the account object
$account = New-Object System.Security.Principal.NTAccount $username;

# then get the sid value from it
$sid = $account.Translate([System.Security.Principal.SecurityIdentifier]).Value;

# or just do it all at once
$sid = (New-Object System.Security.Principal.NTAccount $username).Translate([System.Security.Principal.SecurityIdentifier]).Value;
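
Should you ever need to go the other way, the same Translate mechanism works in reverse. A quick sketch, using the well-known Local System SID (S-1-5-18) as a stand-in for a real value:

```powershell
# resolve a SID back to an account name;
# S-1-5-18 is the well-known Local System SID, used here just for illustration
$sid = New-Object System.Security.Principal.SecurityIdentifier "S-1-5-18";
$account = $sid.Translate([System.Security.Principal.NTAccount]).Value;
```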

Index: PowerShell HowTo’s


How To Generate MERGE statements with SQLCMD

While it is usually a good idea to steer clear of the injection-prone fun of dynamic SQL, there are some cases where it does help you out a bit.
One such example is generating code in your SSDT post deployment scripts.

Configuring the bit of code below and including it in your SSDT post deployment script will let you pull data in automatically from another database on the same server (say, a common data harness for the team) right after deploying your SSDT project onto a new or existing personal development database. The use of a fully configured MERGE statement means the entire table is synced, covering INSERTs, UPDATEs, DELETEs and even proper handling of NULL columns. And because the code is generated dynamically, it is tolerant of design changes in the underlying tables.

The code is also SQLCMD friendly; feel free to adjust the variable names to whatever best fits your own project.

:setvar SourceDatabaseName MySourceDatabase
:setvar SourceSchemaName dbo
:setvar SourceTableName MySourceTable

:setvar TargetDatabaseName MyTargetDatabase
:setvar TargetSchemaName dbo
:setvar TargetTableName MyTargetTable

:setvar KeyColumn DATE_KEY

DECLARE @NL NCHAR(2) = NCHAR(13) + NCHAR(10);
DECLARE @TB NCHAR(1) = NCHAR(9);
DECLARE @TB2 NCHAR(2) = REPLICATE(@TB, 2);
DECLARE @TB3 NCHAR(3) = REPLICATE(@TB, 3);

DECLARE @SQL NVARCHAR(MAX) = '';

/* declare the source block */
SET @SQL += 'WITH Source AS' + @NL + '(' + @NL + @TB + 'SELECT';

/* list the columns from the target table */
SELECT
	@SQL += @NL + @TB2 + QUOTENAME(C.name) + ','
FROM
	[$(TargetDatabaseName)].sys.all_columns AS C
WHERE
	C.object_id = OBJECT_ID('[$(TargetSchemaName)].[$(TargetTableName)]')
;
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1);

/* add the rest */
SET @SQL += @NL + @TB + 'FROM' + @NL + @TB2 + '[$(SourceDatabaseName)].[$(SourceSchemaName)].[$(SourceTableName)]' + @NL + ')';

/* add the merge */
SET @SQL +=
	@NL + 'MERGE INTO [$(TargetDatabaseName)].[$(TargetSchemaName)].[$(TargetTableName)] AS T' +
	@NL + 'USING Source AS S' +
	@NL + 'ON S.[$(KeyColumn)] = T.[$(KeyColumn)]' +
	@NL + 'WHEN NOT MATCHED BY TARGET THEN' +
	@NL + 'INSERT' +
	@NL + '(';

/* list the columns to be inserted into */
SELECT
	@SQL += @NL + @TB + QUOTENAME(C.name) + ','
FROM
	[$(TargetDatabaseName)].sys.all_columns AS C
WHERE
	C.object_id = OBJECT_ID('[$(TargetSchemaName)].[$(TargetTableName)]')
;
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1);

/* close and add the values section */
SET @SQL +=
	@NL + ')' +
	@NL + 'VALUES' +
	@NL + '(';

/* list the columns to be sourced */
SELECT
	@SQL += @NL + @TB + QUOTENAME(C.name) + ','
FROM
	[$(TargetDatabaseName)].sys.all_columns AS C
WHERE
	C.object_id = OBJECT_ID('[$(TargetSchemaName)].[$(TargetTableName)]')
;
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1);

/* close and add the intermission */
SET @SQL +=
	@NL + ')' +
	@NL + 'WHEN NOT MATCHED BY SOURCE THEN' +
	@NL + @TB + 'DELETE' +
	@NL + 'WHEN MATCHED AND NOT' +
	@NL + '(';

/* list the columns to be compared */
SELECT
	@SQL +=
		@NL + @TB + '(' +
		@NL + @TB2 + 'T.' + QUOTENAME(C.name) + ' IS NULL AND S.' + QUOTENAME(C.name) + ' IS NULL OR' +
		@NL + @TB2 + '(T.' + QUOTENAME(C.name) + ' IS NOT NULL AND S.' + QUOTENAME(C.name) + ' IS NOT NULL AND T.' + QUOTENAME(C.name) + ' = S.' + QUOTENAME(C.name) + ')' +
		@NL + @TB + ')' +
		@NL + @TB + 'AND'
FROM
	[$(TargetDatabaseName)].sys.all_columns AS C
WHERE
	C.object_id = OBJECT_ID('[$(TargetSchemaName)].[$(TargetTableName)]')
;
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 6);

/* close and add the update */
SET @SQL +=
	@NL + ')' +
	@NL + 'THEN' +
	@NL + @TB + 'UPDATE' +
	@NL + @TB2 + 'SET';

SELECT
	@SQL +=
		@NL + @TB3 + QUOTENAME(C.name) + ' = S.' + QUOTENAME(C.name) + ','
FROM
	[$(TargetDatabaseName)].sys.all_columns AS C
WHERE
	C.object_id = OBJECT_ID('[$(TargetSchemaName)].[$(TargetTableName)]')
	AND C.name <> '$(KeyColumn)'
;
SET @SQL = SUBSTRING(@SQL, 1, LEN(@SQL) - 1);

/* done, let's not forget to close */
SET @SQL += @NL + ';'

/* change this into an sp_executesql once you're feeling brave */
SELECT @SQL;
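
To make the output concrete, here is roughly what the generated statement looks like when run against a hypothetical two-column table holding [DATE_KEY] and [AMOUNT] (the [AMOUNT] column is invented for illustration; the database and table names come from the :setvar defaults above):

```sql
WITH Source AS
(
	SELECT
		[DATE_KEY],
		[AMOUNT]
	FROM
		[MySourceDatabase].[dbo].[MySourceTable]
)
MERGE INTO [MyTargetDatabase].[dbo].[MyTargetTable] AS T
USING Source AS S
ON S.[DATE_KEY] = T.[DATE_KEY]
WHEN NOT MATCHED BY TARGET THEN
INSERT
(
	[DATE_KEY],
	[AMOUNT]
)
VALUES
(
	[DATE_KEY],
	[AMOUNT]
)
WHEN NOT MATCHED BY SOURCE THEN
	DELETE
WHEN MATCHED AND NOT
(
	(
		T.[DATE_KEY] IS NULL AND S.[DATE_KEY] IS NULL OR
		(T.[DATE_KEY] IS NOT NULL AND S.[DATE_KEY] IS NOT NULL AND T.[DATE_KEY] = S.[DATE_KEY])
	)
	AND
	(
		T.[AMOUNT] IS NULL AND S.[AMOUNT] IS NULL OR
		(T.[AMOUNT] IS NOT NULL AND S.[AMOUNT] IS NOT NULL AND T.[AMOUNT] = S.[AMOUNT])
	)
)
THEN
	UPDATE
		SET
			[AMOUNT] = S.[AMOUNT]
;
```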

Some upgrades can definitely be added to this code, namely:

  • Supporting multiple keys for comparison (or using the entire set of columns if the table has no keys).
  • Being able to fetch data from another server (perhaps through a linked server or OPENQUERY).
  • Wrapping this into a cleaner Stored Procedure. Considering this relies on SQLCMD, that might be a good challenge.
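
As a hedged sketch of the first point, the ON clause could be generated from a comma-separated :setvar value instead of a single key column. This assumes SQL Server 2016+ for STRING_SPLIT, and the KeyColumns variable name is invented for the example:

```sql
:setvar KeyColumns "DATE_KEY,REGION_KEY"

DECLARE @ON NVARCHAR(MAX) = '';

/* build one equality predicate per key column */
SELECT
	@ON += 'S.' + QUOTENAME(LTRIM(value)) + ' = T.' + QUOTENAME(LTRIM(value)) + ' AND '
FROM
	STRING_SPLIT('$(KeyColumns)', ',')
;

/* trim the trailing AND (LEN ignores the trailing space) */
SET @ON = SUBSTRING(@ON, 1, LEN(@ON) - 4);
```

The resulting @ON string would then be spliced into the MERGE in place of the single-key ON line.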

If you do end up adding this stuff, please comment below and I’ll add in those changes.

How To Create A Numbers Table In SSDT

Do you need to quickly add a numbers table to your SSDT Database Project? Here’s how.

Add a new table object to your database project, like so:

CREATE TABLE [dbo].[Numbers]
(
	Number INT NOT NULL,
	CONSTRAINT PK_Numbers
		PRIMARY KEY CLUSTERED (Number)
		WITH (FILLFACTOR = 100, ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF, DATA_COMPRESSION = ROW)
)

Note the details:

  • FILLFACTOR = 100: This table will be effectively read-only, so we can make use of all the pages to minimize reads.
  • ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF: With a read-only table there is no need for the overhead of row and page level locking, so we can avoid these altogether. Queries will then escalate to table locking without additional help.
  • DATA_COMPRESSION = ROW: This helps a bit in minimizing the footprint of the table on disk. ROW level is used instead of PAGE because all the values in a page are unique, so there is little benefit to be had from dictionary compression algorithms.

Create a SQLCMD variable in your project named $(NumbersTableRowCount), or something else of your choosing, and assign it the count of numbers you want your table to hold. Be sensible with this; you can always double up the rows in your queries. I tend to use one million to make spoofing data easier, but that will be overkill for most systems.

Add the code below to your Post-Deployment Script:

PRINT 'Populating The Numbers Table';
GO
DECLARE @Count INT = CAST('$(NumbersTableRowCount)' AS INT);
IF (SELECT COUNT(*) FROM [dbo].[Numbers]) <> @Count
BEGIN
	WITH Numbers AS
	(
		SELECT
			1 AS Number
		WHERE
			1 <= @Count
		UNION ALL
		SELECT
			Number + 1 AS Number
		FROM
			Numbers
		WHERE
			Number < @Count
	)
	MERGE INTO [dbo].[Numbers] AS T
	USING Numbers AS S
	ON S.Number = T.Number
	WHEN NOT MATCHED BY TARGET THEN
		INSERT (Number)
		VALUES (Number)
	WHEN NOT MATCHED BY SOURCE THEN
		DELETE
	OPTION
		(MAXRECURSION 0)
	;
END
GO
ALTER INDEX PK_Numbers ON [dbo].[Numbers] REBUILD;
GO

Granted, the use of MERGE and a recursive CTE there might be far from the most efficient way to populate the table, but they do demonstrate two important principles in a database project.

  • No dependencies (if possible). Using a CTE means we’re not depending on other tables that might or might not be there in the future (like the commonly used spt_values), so the code becomes more portable between different versions of SQL Server.
  • Using a MERGE statement instead of an assortment of other statements abides by the principle of declarative programming. The Numbers CTE block defines what you want your table contents to look like, and the MERGE then merely enforces this against whatever the current state of the database is.

And you’re good to go! A Numbers Table is especially useful for running algorithms or spoofing data, like so:

WITH Source AS
(
	SELECT
		ROW_KEY = N.Number,
		ROW_NAME = 'Spoofed Row ' + CAST(N.Number AS VARCHAR(100))
	FROM
		dbo.Numbers AS N
	WHERE
		N.Number < 100000
)
MERGE INTO dbo.TARGET_TABLE AS T
USING Source AS S
	ON S.ROW_KEY = T.ROW_KEY
WHEN NOT MATCHED BY TARGET THEN
	INSERT (ROW_KEY, ROW_NAME)
	VALUES (ROW_KEY, ROW_NAME)
;

How To Create A New Folder In The SSIS Catalog In T-SQL

The snippet below will create the given project folder in your target SSIS Catalog Database, but only if it does not exist yet, making the script safe for use in an incremental deployment process.

DECLARE @FolderName SYSNAME = 'MyFolder2';
DECLARE @FolderDescription NVARCHAR(1024) = 'MyFolder Description';
DECLARE @FolderID BIGINT = (SELECT folder_id FROM catalog.folders WHERE name = @FolderName);
IF @FolderID IS NULL
BEGIN
	EXEC catalog.create_folder @folder_name = @FolderName, @folder_id = @FolderID OUTPUT;
	EXEC catalog.set_folder_description @folder_name = @FolderName, @folder_description = @FolderDescription;
END

This is how you use it:

1) Set the @FolderName variable to the name you want your folder to have.
2) Set the @FolderDescription variable to something useful for your project.
3) Run the snippet against your target SSISDB database.

Note that this will work only from SQL Server 2012 onwards.

How To Import SQLPS While Keeping The Current PowerShell Provider

One of the very few annoyances of using the SQL PowerShell provider becomes apparent when you import its module: your current location, probably a working folder somewhere in your file system, gets replaced by the default location of the new provider.

To avoid this behaviour, you can include a function like the one below in your scripts.
This will hold on to your current location, import the module and then reset your location to what it was before.

function Import-SQLPS
{
    $Current = Get-Location
    Import-Module SQLPS -DisableNameChecking
    Set-Location $Current
}
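
Usage is then a drop-in replacement for the plain import; the working folder below is just for illustration:

```powershell
Set-Location C:\SomeWorkingFolder
Import-SQLPS
Get-Location    # still C:\SomeWorkingFolder, not the SQLSERVER:\ provider
```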


How To RegEx Replace Multiple Strings In Text Files With Powershell

What’s the only thing better than replacing multiple strings in text files using PowerShell? Why, doing it with regular expressions, of course! Here’s how.

Following from a previous post on doing a mass file replace with a lookup, the slightly changed snippet below will let you do it using the power of regular expressions to transform the values you replace along the way.

Param (
    [String]$List = "TransformList.csv",
    [String]$Files = ".\Files\*.txt"
)
$ReplacementList = Import-Csv $List;
Get-ChildItem $Files |
ForEach-Object {
    $Content = Get-Content -Path $_.FullName;
    foreach ($ReplacementItem in $ReplacementList)
    {
        $Content = $Content -IReplace $ReplacementItem.OldValue, $ReplacementItem.NewValue;
    }
    Set-Content -Path $_.FullName -Value $Content
}

Save this snippet as FindTransform-InFilesUsingLookup.ps1 in a folder of your choice.
You can run this script either by moving your files to the default locations, as noted by the $List and $Files parameters in the code, or by running it as below:

.\FindTransform-InFilesUsingLookup.ps1 .\TransformList.csv .\Files\*.txt

Let’s give this a go. First of all, create a couple of files in a Files subfolder (for laziness’ sake) next to the script:

StuffToReplace1.txt:

The quick brown fox jumps over the lazy dog over and over again

StuffToReplace2.txt:

The funky dancing duck jumps over the lazy cat over and over again

Now create a list of transformations you wish to do. To keep it simple, I’ll only create one here. Our transformation will identify the subject and object in each text (fox and dog in the first one, duck and cat in the second one), and then switch them around.

Let’s take the text below and save it as TransformList.csv in the same folder as the script:

OldValue,NewValue
(?<TheSubject>\w+) jumps over the lazy (?<TheObject>\w+),${TheObject} jumps over the lazy ${TheSubject}

Now execute the command below to see the results:

.\FindTransform-InFilesUsingLookup.ps1 .\TransformList.csv .\Files\*.txt

And this is what we get in the affected files:

StuffToReplace1.txt:

The quick brown dog jumps over the lazy fox over and over again

StuffToReplace2.txt:

The funky dancing cat jumps over the lazy duck over and over again

Now this is what we call an inversion of roles – powered by PowerShell.


How To Replace Multiple Strings In Text Files In PowerShell

Ever had to do an unusually high number of find-replacements on an even higher number of files? Remember despairing at the ticking clock while you ached through them one by one? We both know the pain, then. Let’s avoid it in the future with some PowerShell sugar.

The little script below will run a mass find-replace on all targeted files, using a lookup CSV file to decide which strings to find and what to replace them with.

The Script

See the snippet below? Save it as Replace-InFilesUsingList.ps1, or something else you prefer.

Param (
    [String]$List = "ReplacementList.csv",
    [String]$Files = ".\Files\*.*"
)
$ReplacementList = Import-Csv $List;
Get-ChildItem $Files |
ForEach-Object {
    $Content = Get-Content -Path $_.FullName;
    foreach ($ReplacementItem in $ReplacementList)
    {
        $Content = $Content.Replace($ReplacementItem.OldValue, $ReplacementItem.NewValue)
    }
    Set-Content -Path $_.FullName -Value $Content
}

The Lookup File

So how does the snippet above know exactly what to find and replace? That’s where the lookup file comes in. As an example, save the text below as MyLookupFile.csv in the same folder where you saved the Replace-InFilesUsingList.ps1 script file.

OldValue,NewValue
MyFirstValueToFind,MyFirstValueToReplaceWith
MySecondValueToFind,MySecondValueToReplaceWith
MyThirdValueToFind,MyThirdValueToReplaceWith

You might have noticed there are no spaces or other odd characters in the file, but that’s only to keep this example simple. It is a CSV file after all, so you can put in whatever you wish, provided the values are separated by commas, as the name implies.

The Test Files

Make sure you know the files you want to target…

So you don’t end up mangling files you don’t really want to touch!
For this example, let’s create a collection of files to go along with the lookup file.
We’ll save the snippets below in their respective files under the Files subfolder, just under where you saved the Replace-InFilesUsingList.ps1 script file.

FileToTarget1.txt

Somewhere herein lies MyFirstValueToFind just waiting to be found, not to mention MySecondValueToFind as well.

FileToTarget2.txt

Somewhere herein lies MySecondValueToFind just waiting to be found, not to mention MyThirdValueToFind as well.

FileToTarget3.txt

Somewhere herein lies MyFirstValueToFind just waiting to be found, not to mention MyThirdValueToFind as well.

FileNotToTarget.txt

Somewhere herein the lot of MyFirstValueToFind, MySecondValueToFind and MyThirdValueToFind are hiding in secret, hoping not to be found at all.

This covers some combinations of values in the files, just to be sure we can see different replacements being made. Note that the folder isn’t strictly required, it just helps you make sure you don’t touch anything else by accident.

The Test

Let’s give it a run then. Open up a PowerShell window, navigate to the folder where you put your script file and run the command below.

.\Replace-InFilesUsingList.ps1 -List ".\MyLookupFile.csv" -Files ".\Files\FileToTarget*.txt"

Notice what’s happening here: we’re specifically telling the script to look only at the files we want, using a file name mask.

Let’s check out what happened to the files then, one at a time.

FileToTarget1.txt

Somewhere herein lies MyFirstValueToReplaceWith just waiting to be found, not to mention MySecondValueToReplaceWith as well.

Success!

FileToTarget2.txt

Somewhere herein lies MySecondValueToReplaceWith just waiting to be found, not to mention MyThirdValueToReplaceWith as well.

Success as well!

FileToTarget3.txt

Somewhere herein lies MyFirstValueToReplaceWith just waiting to be found, not to mention MyThirdValueToReplaceWith as well.

We’re on a roll here!

FileNotToTarget.txt

Somewhere herein the lot of MyFirstValueToFind, MySecondValueToFind and MyThirdValueToFind are hiding in secret, hoping not to be found at all.

And it looks like our outlier file survived the find-replace onslaught. Cool.

Up Next

Continue reading to see how to do all this using regular expressions too.