Category Archives: How-to

ADDSCHK – Quick check on domain size

Sometimes while looking at somebody else’s AD DS environment you may want to know some basics about it – things such as the total number of users, or in which OU a specific server is hiding. What surprises me is how frequently people tell you that they don’t have the right consoles on this server (while they are just one PoSh line away from all they need), or that they are not sure they have permissions (which they usually do have). If you are lucky you just spend some time waiting for the person to switch over to some other machine or directly to a DC (yes, to a DC, just because the ADUC console lives there 🙂 ), and in other cases you will be dragged through multiple redirects and additions of people to the call, only to end up explaining to the final person in that chain the exact steps needed to get your questions answered (which you were perfectly able to do without switching servers and involving other people in the first place).

Unless you already got the point, it is preferable and faster just to do yourself a favor: comfortably stay on the server where you are working, issue Install-WindowsFeature RSAT-AD-PowerShell to solve the missing-tools problem in 20 seconds, and then use PoSh to get your questions answered. Here is a sample PS function, which I named similarly to CHKDSK (a thing of which I have very fond memories ever since I used it to help a classmate repair his HDD back in the days of 1-2 GB hard drives and Windows 95) – ADDSCHK:
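The original function listing is not reproduced here; a minimal sketch of such a function, using cmdlets from the ActiveDirectory module (the exact set of properties you report back is up to you), could look like this:

function ADDSCHK {
    # Requires the ActiveDirectory module (Install-WindowsFeature RSAT-AD-PowerShell)
    Import-Module ActiveDirectory

    $domain = Get-ADDomain
    [pscustomobject]@{
        Domain       = $domain.DNSRoot
        DomainMode   = $domain.DomainMode
        Users        = (Get-ADUser -Filter * | Measure-Object).Count
        Computers    = (Get-ADComputer -Filter * | Measure-Object).Count
        Groups       = (Get-ADGroup -Filter * | Measure-Object).Count
        # OU/DN of the server you are currently working on
        ThisServerDN = (Get-ADComputer -Identity $env:COMPUTERNAME).DistinguishedName
    }
}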

In a world where an increasing number of people do not hone their “I can do this in N ways” skills (and sometimes not even “I understand how it works”), you are frequently better off speaking PoSh with the infrastructure directly than with those who are entrusted to keep it up and running 🙂


K2 – How to identify process name by process instance ID

Somehow I kept forgetting this thing frequently enough to expend some effort to write this 🙂

At times when you are troubleshooting something in K2 you need to identify a process having only its process instance ID, and frequently knowledge of the solution and its workflows is the missing part (the developer is away on vacation or, in the worst case scenario, nobody even knows if there was a developer in the first place 🙂 ). As a sample scenario, think of troubleshooting a failed process escalation or a process instance stuck in the Running state.

Let’s look at this in more detail. For a failed escalation you will definitely have an error in the K2 host server log and an entry in the K2 Server.Async table – that will give you the ProcInstID value, and your next steps are: A) find out which process this instance belongs to and B) find out the status of this instance. Finding (B), at least if your process is in an error state, is easy, as it is supposed to be listed in the Error Profiles View, where you can retry the error and also see the Process Instance ID and process name.

But in case your instance is not listed in the Error Profiles View, or let’s say you are going step by step before jumping into Error Profiles, you still have 2 options to get the process name from a process instance ID:

(1) Using Workflow Reporting SmartObjects. You can use the Process Instance SmartObject (Workflow Reports > Workflow General > Process Instance) to get the list of process instances – you just feed the ProcInstID to it to get back the ProcessSetID:

Process Instance SmO Get List

The Process Set ID in turn can be fed to the Process Overview SmartObject (Workflow Reports > Workflow General > Process Overview), which will give you the process name:

Process Overview SmO Get List

(2) Querying the K2 database (in case you are already in SSMS and too lazy to switch over to the K2 Server/Tester Tool 🙂 ). Here is a SQL query you need to run:

SQL Query – Get Process Name by Process Instance ID
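The query itself is only shown as a screenshot in the original post; a sketch of the idea, assuming a consolidated K2 database where process instances and process sets live in tables along the lines of [Server].[ProcInst] and [Server].[ProcSet] (schema and table names differ between K2 versions, so treat these as placeholders), could be:

DECLARE @ProcInstID INT = 1234;  -- the process instance ID you are investigating

SELECT  pi.ID     AS ProcInstID,
        pi.Status,
        ps.Name   AS ProcessName
FROM    [Server].[ProcInst] AS pi
        INNER JOIN [Server].[ProcSet] AS ps ON ps.ID = pi.ProcSetID
WHERE   pi.ID = @ProcInstID;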


Unable to run ConnectionStringEditor.exe – “Did you run the tool from the K2HostServer directory?”

Sometimes you may get quite strange errors with simple solutions and silly reasons. Here is an example – you get a complaint that K2 ConnectionStringEditor.exe cannot be started with the following error message:

And that may really confuse you, especially when you are sure that it is a clean, new and shiny, correctly performed installation 🙂 You start wondering what config is required and why on earth it has disappeared, when there is seemingly nobody around who tends to kill configs during log clean-up. But what you need here, before going into panic mode, is a second look at a larger screenshot… like this one:

So the error is actually caused by an attempt to run the tool from search results instead of locating it in its default location (“C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\ConnectionStringEditor.exe”).

Another quite typical error with this utility is an attempt to run it without elevation on a server which has UAC enabled (which I assume should be the case on any production Windows server, though you may still see it disabled in some environments).


MSTSC unable to connect with “CredSSP encryption oracle remediation” message

Just ran into this “CredSSP encryption oracle remediation” error message while trying to connect to Windows Server 2016 from a Windows 10 client today. This issue is explained in the MSFT KB article – “CredSSP encryption oracle remediation” error when RDP to a Windows VM in Azure – and it all boils down to specific updates missing either on the server or on the client. In my case I was sure that my client box was fully updated, and it was an un-patched server outside of my control which was the culprit. For this scenario the workaround is to set the Encryption Oracle Remediation policy to Enabled, and then change Protection Level to Vulnerable. The fastest way to do it on a standalone box is to add a registry key, which we can do using the REG ADD command:
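The exact command is not reproduced above; the value the KB article documents is AllowEncryptionOracle, where 2 corresponds to the Vulnerable protection level (run this from an elevated command prompt):

REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters" /v AllowEncryptionOracle /t REG_DWORD /d 2 /f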

For the workaround that applies in the un-patched client connecting to a patched server scenario, refer to the aforementioned MSFT KB article.


How to enable TDE (SQL 2017/Azure SQL Database)

TDE is a SQL Server feature which encrypts your data at rest, i.e. your database files. When TDE is enabled encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and decrypted when read into memory. TDE does not increase the size of the encrypted database. Here is TDE architecture schema from MSFT documentation:

Transparent Data Encryption Architecture

This blog post explains how to enable Transparent Data Encryption (TDE) for a SQL database (on-premises/Azure).

Scenario 1. On-premises SQL Server 2017 (this will also work for SQL Server in an Azure VM). You can use the following SQL script to enable TDE:
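The script itself is not embedded here; a sketch along the lines the surrounding text describes (the certificate name TDECert is a placeholder, ‘K2’ is the sample target database, and the IF clauses skip objects that already exist) could be:

USE master;

-- Create a database master key in master if it does not exist yet
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE name = '##MS_DatabaseMasterKey##')
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password here>';

-- Create a server certificate to protect the database encryption key
IF NOT EXISTS (SELECT * FROM sys.certificates WHERE name = 'TDECert')
    CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE K2;

-- Create the database encryption key and turn encryption on
IF NOT EXISTS (SELECT * FROM sys.dm_database_encryption_keys WHERE database_id = DB_ID('K2'))
    CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;

ALTER DATABASE K2 SET ENCRYPTION ON;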

Be sure to replace ‘K2’ with your target database name and adjust the password value. The script uses IF clauses to avoid creating things which already exist (these are missing in the sample script you can find in the MSFT documentation). Once TDE is enabled you can confirm this in the database properties using the SSMS GUI:

Scenario 2. Azure SQL Database. The script mentioned above won’t work here. The easiest/default approach to enable TDE for Azure SQL Database is to do so from the Azure Portal:

This approach is called service-managed transparent data encryption, and by default the database encryption key is protected by a built-in server certificate. All newly created SQL databases are encrypted by default using service-managed transparent data encryption.

The other approach is called Bring Your Own Key (BYOK) and requires the use of Azure Key Vault.

TDE can also be managed with PowerShell, Transact-SQL and the REST API. PowerShell contains a number of cmdlets for that:
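The cmdlet list is not shown above; at the time of writing these lived in the AzureRM module (in the current Az module the AzureRm prefix becomes Az), and the resource group, server and database names below are placeholders:

# Enable TDE on a database
Set-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "MyRG" -ServerName "myserver" -DatabaseName "K2" -State Enabled

# Check the current TDE state of the database
Get-AzureRmSqlDatabaseTransparentDataEncryption -ResourceGroupName "MyRG" -ServerName "myserver" -DatabaseName "K2"

# Check the progress of the encryption scan
Get-AzureRmSqlDatabaseTransparentDataEncryptionActivity -ResourceGroupName "MyRG" -ServerName "myserver" -DatabaseName "K2"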

 

And in T-SQL you can use the ALTER DATABASE (Azure SQL Database) SET ENCRYPTION ON/OFF command (which encrypts or decrypts a database) and two dynamic management views:

  • sys.dm_database_encryption_keys which returns information about the encryption state of a database and its associated database encryption keys
  • sys.dm_pdw_nodes_database_encryption_keys which returns information about the encryption state of each data warehouse node and its associated database encryption keys

Once TDE has been enabled there are also options to check whether it is enabled or not using T-SQL:
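The original queries are not shown above; two common checks based on the views mentioned earlier (an encryption_state of 3 means the database is encrypted) are:

-- Encryption state per database
SELECT DB_NAME(database_id) AS DatabaseName, encryption_state, key_algorithm, key_length
FROM sys.dm_database_encryption_keys;

-- Or simply the is_encrypted flag
SELECT name, is_encrypted FROM sys.databases;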

For further information refer to official MSFT documentation:

Transparent Data Encryption (TDE)

The SQL Server Security Blog on TDE with FAQ

How to: Connect to Azure SQL from Visual Studio

This specific day has already been described by some people as GDPRmageddon, based on the amount of emails people are receiving from every company they have ever dealt with at one point or another about their policy updates and so on. I’m not going to talk about that today; instead I decided to write this tiny post on how to connect to Azure SQL from Visual Studio.

Actually, the short answer to this is just to fire off Visual Studio and select View > SQL Server Object Explorer; the rest is just a “follow the wizard” thing, and it is all documented by MSFT of course. But I’m quite well aware of the widespread allergy to official documentation (no matter how good it is), and I also wanted to try this myself for the very first time recently, and this is how this post came about.

QUESTION: How to connect to Azure SQL from Visual Studio?

ANSWER:

1) First things first. Assuming you don’t have Visual Studio installed on your machine, you should download and install it. There is a free version named Visual Studio Community which can be downloaded here: https://www.visualstudio.com/vs/community

2) Once the tiny web installer is downloaded, run it and click Continue to kick off the installation process:

The tiny installer will start fetching data from the internet before installing components on your machine:

3) Once the data is downloaded you can opt for the default installation, which will require 597 MB of disk space:

4) Once installation is complete Visual Studio will suggest that you sign up or sign in to use additional services, but you can skip that by clicking the “Not now, maybe later” link:

5) Of course you must select the Dark theme, otherwise nothing will work, and click on the Start Visual Studio button 🙂

Of course other themes work too, but you would look suspicious in the developer crowd 🙂

6) Once Visual Studio opens you are supposed to go to View > SQL Server Object Explorer, and… And if you followed the previous steps exactly and selected the default Visual Studio installation configuration, SQL Server Object Explorer won’t be available in the View menu:

7) To address this, re-run the Visual Studio installer, open the Individual components tab and mark the SQL Server Data Tools component – it will automatically select a bunch of dependencies, upping the disk space requirement from 597 MB to 2.52 GB:

8) Once the installation is complete, re-run Visual Studio and select SQL Server Object Explorer from the View menu:

9) Once it opens, right-click the SQL Server node in the SQL Server Object Explorer tree and select the “Add SQL Server…” option:

10) This will start the Connect wizard, where you can specify your Azure SQL server name and credentials (yes, even for an Azure SQL database you are connecting through a “SQL server”, which is a logical entity you must have to connect to your Azure SQL database(s) and which should not be confused with SQL Server in an Azure VM 🙂 ):

 

11) Both SSMS and Visual Studio are smart enough to detect when your server name and credentials are good but you don’t have a firewall rule created to allow the connection, and if you provide your Azure credentials these applications will be able to create the required firewall rule for you:

To provide credentials you are supposed to click on “Add an account…” under Azure account, as shown above. It will open the standard Microsoft credentials prompt dialog:

After you have provided valid credentials you should be able to click OK in the previous window to create the firewall rule:

If you navigate to the Azure portal you will be able to see your public IP added to the Azure SQL server firewall rules there.
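As an aside, the same firewall rule can also be created from Azure PowerShell if you prefer scripting over the wizard – a sketch using the AzureRM module of that era (resource group, server, rule name and IP address are placeholders):

New-AzureRmSqlServerFirewallRule -ResourceGroupName "MyRG" -ServerName "myserver" -FirewallRuleName "MyClientIP" -StartIpAddress "203.0.113.10" -EndIpAddress "203.0.113.10"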

12) And with valid credentials and a firewall rule in place you will be able to connect, browse through your databases and objects, and execute SQL queries more or less in the same way as in SSMS:

I hope you are now clear on how to connect to Azure SQL from Visual Studio 🙂 But in case you still have any questions, just leave them in the comments.


Simple walkthrough: Using K2 Database Consolidation Tool

The purpose of this blog post is to outline the K2 database consolidation process using the K2 Database Consolidation Tool.

When may you need it? For older K2 deployments, where the initial installer used to create 14 separate databases instead of the one “K2” database we expect to see with current K2 versions. Even after upgrades to newer versions such environments carry on with these 14 databases, and only starting from K2 4.7 is database consolidation enforced – you cannot upgrade until you consolidate your databases into one. So you can still see non-consolidated K2 databases in environments which run any version of K2 up to and including 4.6.11.

To perform consolidation of these 14 K2 databases into one you need to obtain the appropriate version of the K2 Database Consolidation Tool from K2 support. Below are the basic steps you need to follow when consolidating K2 databases using this tool.

1) First we need to check the collation of your existing K2 databases. This is necessary because the consolidation tool won’t handle conversions from one locale to another and consolidation will fail. You can run this script to see the collation of your non-consolidated K2 DBs:
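The script itself is not embedded above; a minimal sketch, assuming the default K2-prefixed database names, could be:

-- Collation of all K2 databases on the instance
SELECT name, collation_name
FROM sys.databases
WHERE name LIKE 'K2%'
ORDER BY name;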

As you can see on the screenshot below, the output of this script shows that my non-consolidated databases have the Ukrainian_100_CI_AS collation:

2) Make sure that your target SQL Server instance has the same collation as your databases, either via the GUI:

or script:
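The script is not shown above; checking the instance-level collation is a one-liner:

SELECT SERVERPROPERTY('Collation') AS ServerCollation;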

and copy your non-consolidated databases to the target server which will be hosting the consolidated database (unless it is the same server that was hosting them initially).

3) Obtain the K2 Database Consolidation Tool from K2 support, extract it on the SQL server which hosts your K2 databases and launch SourceCode.Database.Consolidator.exe. Once you start it you will be presented with the following UI:

4) It will detect your non-consolidated K2 DBs (<No Instance> in the Instance drop-down means that you are connecting to the default, not a named, SQL Server instance), and here you need to select your target DB – just select <New Database>, specify “Create Database Name” (I’m using the default name used by the K2 installer, which is K2) and click Create:

5) Once you click Create, the K2 database will be created with the same collation as your SQL Server instance (your target DB will contain all the required tables and structure but no data) and the Start button becomes available so that you can start the consolidation process:

6) Before clicking Start make sure your K2 service is stopped. Despite the fact that we just created our target “K2” database, we still get a warning that all data in the target DB will be truncated, and we have to click Yes to start the consolidation process:

Once you have confirmed this you will have to wait for a while until consolidation completes (at the bottom of the tool window, in its “status line”, you will see the current operations being performed during the database consolidation process). The time necessary to complete this process depends heavily on your server performance and the volume of data in your source databases.

In some scenarios (e.g. the source and destination collations have different locale IDs, or you moved the source databases to another SQL server without re-creating their master key) the consolidation process may fail, leaving your non-consolidated databases in a read-only state:

In such a scenario you need to review the consolidation log to identify and address the errors, and once done, switch your source databases back to RW mode (as explained here), delete your target database and start again from step (3). Note that when consolidation completes successfully the source non-consolidated databases also stay in read-only mode.

If consolidation completes without errors you will get a message confirming this and also informing you that the ReconfigureServer.ps1 script has been created:

You can also click on the Log Directory link, which will open the consolidation log file location – as usual, open the log and make sure that neither ‘Logged Warning’ nor ‘Logged Error’ can be found anywhere in it beyond the Legend section at the beginning.

7) From the directory which contains the K2 Database Consolidation Tool, take the ReconfigureServer.ps1 script and copy it over to your K2 server. This script fires off the K2 blackpearl Setup Manager while instructing it to connect to your new consolidated DB:

Here is the script code which you can copy/paste:

Once you run this script on the K2 server it will start the K2 Setup Manager, where you need to go through all pages of the “Configure K2 blackpearl” process:

On the database configuration step of the wizard you will see that, thanks to the PS script, we are already targeting our new consolidated DB:

Once the reconfiguration process completes (without errors or warnings) you can start testing how your K2 environment behaves after the consolidation process.


SQL script to attach detached non-consolidated K2 DBs

I keep playing with SQL and non-consolidated K2 DBs. In a previous post I covered bringing “these 14” back online; now I realized that another case where SSMS requires way too many clicks is attaching “these 14” back (let’s say after you rebuild your SQL instance system DBs to change the instance collation).

A quick Google search led me to a relevant question on dba.stackexchange.com, from which I took a script that generates CREATE DATABASE FOR ATTACH statements for all existing user databases. Next, having my 14 non-consolidated K2 DBs, I generated the following script to attach them back in bulk:
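The generated script is not reproduced above; it is simply one CREATE DATABASE … FOR ATTACH statement per database, along these lines (database names and file paths below are placeholders – use the ones generated for your environment):

CREATE DATABASE [K2Server] ON
    (FILENAME = N'D:\SQLData\K2Server.mdf'),
    (FILENAME = N'D:\SQLLogs\K2Server_log.ldf')
FOR ATTACH;

CREATE DATABASE [K2SmartBox] ON
    (FILENAME = N'D:\SQLData\K2SmartBox.mdf'),
    (FILENAME = N'D:\SQLLogs\K2SmartBox_log.ldf')
FOR ATTACH;

-- ...and so on for the remaining non-consolidated databases.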

You can either run the “CREATE DATABASE FOR ATTACH for all existing user databases” generator script while your K2 databases are still attached, or, if they are already detached, just replace the paths in the script listed above and execute the modified script to attach them quickly.


SQL Script to switch all currently RO databases to RW mode

I was doing some testing of the K2 database consolidation process, which required me to re-run the consolidation more than once to retry it. Unfortunately the K2 Database Consolidation Tool leaves all databases in read-only mode if something fails during the consolidation process. If you remember, K2 used to have 14 separate databases before the consolidated DB was introduced (see picture below).

Typing 14 statements manually to bring all these databases back to read-write mode is a bit time consuming, so I came up with the following script:
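The script is not embedded above; a minimal sketch of the idea – generate an ALTER DATABASE statement for every database that is currently read-only – could be:

SELECT 'ALTER DATABASE [' + name + '] SET READ_WRITE WITH ROLLBACK IMMEDIATE;'
FROM sys.databases
WHERE is_read_only = 1;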

Essentially it selects all databases currently in RO state and outputs a bunch of statements to bring all of them back to RW state:

Just copy-paste this script’s output into a new query window in SSMS and press F5 🙂

It may be useful for you once in a while (and if not for this specific use case, then as an example of generating repetitive statements that embed SELECT results).


Configuring Windows Server 2016 Core Domain Controller

In Windows Server 2016 you no longer have the option to switch back and forth between Core and GUI installations, hence you cannot install and configure AD DS the lazy way (using the full GUI) and then convert the server to Core. That is something I discovered the hard way a long time ago – so I already keep separate VHDX templates for Server 2016 Core and full GUI VMs.

But it has been quite a while since I last played with Server Core, so when I started provisioning my new Server 2016 Core domain controller VMs today I realized that I need to recall quite a few commands to fully install AD DS on Server Core. I was about to create a blog post listing the essential commands, but then found a very well written blog post on TechNet covering exactly that: Chad’s Quick Notes – Installing a Domain Controller with Server 2016 Core. So I’m just sharing it here instead of writing the same myself 🙂
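For quick reference, the core of it boils down to a couple of cmdlets – a minimal sketch (the domain name is a placeholder, and the linked post covers naming, networking and the other prerequisites):

# Install the AD DS role
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote the server to a domain controller in an existing domain...
Install-ADDSDomainController -DomainName "contoso.local" -InstallDns -Credential (Get-Credential)

# ...or create a brand new forest instead
# Install-ADDSForest -DomainName "contoso.local" -InstallDns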
