Unable to create new/edit existing Oracle Service Instance after changing K2 installation path

Recently I bumped into a problem which was super obvious in retrospect, yet took me some time to untangle. A K2 environment was upgraded from 4.6.11 to 4.7, and the K2 installation path was changed in the process (drive letter). After the upgrade completed without warnings or errors, we did some more testing and found that one of the forms, which used an Oracle Service Instance-based SmartObject, started to throw an error similar to this one:

Could not load file or assembly – SourceCode.SmartObjects.Services.Oracle.dll

Essentially it was very clear from the error message that the Oracle service instance kept looking for the related assembly in the old installation location (wrong drive letter). We switched to the SmartObjects Services Tool only to see that there we were unable to edit or create a new service instance of this service type. At this point I looked at old cases mentioning a similar error message, and a surprisingly large number of them proposed workarounds and things not quite related to the root cause. We spent some time addressing a missing prerequisite for this service type – 64-bit Oracle Data Access Components (ODAC) version 2.121.2.0 or higher, which is mentioned as such in the 4.7 user guide (_) – and checking some related settings and so on.

But next I paid attention to the fact that the environment had two service types for Oracle: one of them was working, while the other one was not. I then dropped the assembly mentioned in the error message into the old installation location and restarted the K2 service – this fixed the first Oracle service instance, but broke the other one: it started to say that the assembly SourceCode.SmartObjects.Services.Oracle.dll had already been loaded from another location. This brought my focus back to the real problem – somehow one of the Oracle service types had not been updated by K2 Setup Manager to use the new installation path. Probably it was somehow “custom” and was skipped by the installer because of that. Anyhow, my next step was finding where this path is defined. As soon as I confirmed that I cannot see/edit the Service Type definition XML from the SmartObjects Services Tool, I switched to the K2 database to check it there.

Necessary word of warning: back up your K2 database before attempting any direct manipulations in it, and make sure you understand what you are doing before you start 🙂

Service type definitions live in the [SmartBroker].[ServiceType] table, so I located the “problematic” service type to check on its XML, which is stored in the ServiceTypeXML column. Here is a sample query to quickly search for a service type definition based on its Display Name:
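Something along these lines should work; only the [SmartBroker].[ServiceType] table and its ServiceTypeXML column are taken from the text above, while the display name value is purely illustrative and the exact XML shape may differ between K2 versions:

```sql
-- Hedged sketch: search service type definitions by a display name fragment.
-- 'Oracle Service' is a made-up example - substitute your service type's display name.
SELECT *
FROM   [SmartBroker].[ServiceType]
WHERE  CAST([ServiceTypeXML] AS NVARCHAR(MAX)) LIKE N'%Oracle Service%';
```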

That will return the XML column value, which you can click to view as formatted XML. Here is an example of how it looks:

Service Type XML

As you can easily see, the service type definition contains the assembly path parameter in its XML. So now it is only a question of updating it with the correct value. Here is a sample script to do that:
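An UPDATE along the following lines does the trick (again: back up the database first). Only the table and column names come from the post; the old/new paths are illustrative examples:

```sql
-- Hedged sketch: point the assembly path in the definition XML at the new drive letter.
-- The paths below are examples - substitute your actual old/new locations, and make sure
-- the WHERE clause only matches the one problematic service type before running this.
UPDATE [SmartBroker].[ServiceType]
SET    [ServiceTypeXML] = REPLACE(CAST([ServiceTypeXML] AS NVARCHAR(MAX)),
                                  N'D:\Program Files (x86)\K2 blackpearl',
                                  N'C:\Program Files (x86)\K2 blackpearl')
WHERE  CAST([ServiceTypeXML] AS NVARCHAR(MAX)) LIKE N'%D:\Program Files (x86)\K2 blackpearl%';
```

Restart the K2 service afterwards so that the updated assembly path is picked up.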

That will iron out the problem with the misbehaving service type. I don’t think this can be a very frequent problem, as normally the installer updates all the assembly path definitions with the new path. But, especially if you have some custom service type, you may want to scan your service type definitions for any vestiges of the old installation path. Here is a sample script which will display all service type definitions which contain an old drive letter reference (my example uses “D:\%” as the search criterion):
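A minimal sketch of such a scan, assuming the same table and column as above:

```sql
-- Hedged sketch: list every service type definition still referencing drive D:.
-- Adjust the pattern to whatever your old installation path looked like.
SELECT *
FROM   [SmartBroker].[ServiceType]
WHERE  CAST([ServiceTypeXML] AS NVARCHAR(MAX)) LIKE N'%D:\%';
```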

I hope that this blog post helps someone who bumps into a similar error in K2, and if not, then maybe you can make use of the SQL script samples which filter based on values within XML columns.


ADDSCHK – Quick check on domain size

Sometimes while looking at somebody else’s AD DS environment you may want to know some basics about it – things such as the total number of users, or in which OU a specific server is hiding. What surprises me a lot is how frequently you see people telling you that they don’t have the right consoles on this server (while they are just one PoSh line away from all they need), or that they are not sure if they have permissions (which they usually do have). If you are lucky, you just spend some time waiting for a person to switch over to some other machine or directly to a DC (yes, to a DC, just because the ADUC console lives there 🙂 ); in other cases you will be dragged through multiple redirects/additions of people to the call, only to end up explaining to the final person in that chain the exact steps to be performed to get your questions answered (which you were perfectly able to do without switching servers and involving other people in the first place).

Unless you already have it, it is preferable and faster just to do yourself a favor: comfortably stay on the server where you are working, issue Install-WindowsFeature RSAT-AD-PowerShell to solve the missing tools problem in 20 seconds, and then use PoSh to get your questions answered. Here is a sample PS function, which I named similarly to CHKDSK (a thing of which I have very fond memories ever since I used it to help my classmate repair his HDD back in the times of 1–2 GB hard drives and Windows 95) – ADDSCHK:
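A minimal sketch of such a function follows; the exact set of properties is my own choice, and for very large domains you may prefer counting via Measure-Object or an indexed search rather than materializing every object:

```powershell
# Hedged sketch of an ADDSCHK-style helper: basic AD DS stats in one call.
# Requires the ActiveDirectory module (Install-WindowsFeature RSAT-AD-PowerShell).
function ADDSCHK {
    Import-Module ActiveDirectory -ErrorAction Stop

    $domain = Get-ADDomain
    [pscustomobject]@{
        Domain       = $domain.DNSRoot
        DomainMode   = $domain.DomainMode
        Users        = @(Get-ADUser -Filter *).Count      # total user accounts
        Computers    = @(Get-ADComputer -Filter *).Count  # total computer accounts
        Groups       = @(Get-ADGroup -Filter *).Count     # total groups
        ThisServerDN = (Get-ADComputer $env:COMPUTERNAME).DistinguishedName  # which OU am I in?
    }
}
```

Just run `ADDSCHK` and you get the domain name, object counts and the DN (and hence the OU) of the server you are sitting on – no consoles, no DC hopping.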

In a world where an increasing number of people do not hone their “I can do this in N ways” skills (and sometimes even “I understand how it works” too), you are frequently better off speaking PoSh with the infrastructure directly than with those who are entrusted to keep it up and running 🙂


StarWind rPerf – cross-platform RDMA performance testing utility

Normally, I write “technical how to” type articles, but this one will be more of a product review/introduction (though, I think even with this format we can go into technical details 😊). Relatively recently, StarWind released a free tool which allows you to measure latency and bandwidth of RDMA connections (pay attention to the conjunction “and” here) and to do this in heterogeneous environments (meaning that you can measure Windows – Linux RDMA connection bandwidth and latency). This utility is called rPerf and can be downloaded for free from the StarWind website. To download it, you will need to fill in a little form with some of your data, but that’s not much to pay for a good tool, right?

I will allow myself to write a little bit on what RDMA is, so that we are clear on what we are going to measure with this utility 😊 (though this technology is a huge topic in its own right which calls for a lot of reading to fully understand). Next, we will touch a little bit on what rPerf can do for you and, even more briefly, how to use it (just because it is straightforward and easy).

What is RDMA? RDMA, or Remote Direct Memory Access, is a technology which enables direct access from the memory of one computer to that of another, bypassing the OS data buffers of both computers (meaning it all happens at the hardware level through device drivers). That type of access gives you high-throughput, low-latency networking, which is something you really need for massively parallel computing clusters. RDMA-enabled data transfers do not add extra load on CPUs, caches or context switches, allowing your data transfers to continue in parallel with other system tasks. A practical use case is Hyper-V live migration: there is a YouTube video from Mellanox demonstrating a comparison of live migration performance with RDMA vs. TCP (and it shows an impressive 29 seconds vs. 2 hours result).

RDMA read and write requests are delivered directly to the network, allowing for fast message transfer and reduced latency, but this also introduces the problems typical of one-sided communication, where the target node is not notified about the completion of the request (you may want to read up more on this to really understand this technology).

How can you get it? RDMA implementations require you to have both hardware (NIC) and software (API and driver) support, and currently different varieties of RDMA implementations exist: Virtual Interface Architecture, RoCE (RDMA over Converged Ethernet), InfiniBand, Omni-Path and iWARP.

All in all, you will most likely find RDMA capability in high-end servers (you need to make sure that you have a NIC supporting RDMA, something from Broadcom, Cavium or Mellanox Technologies) and in HPC-type Microsoft Azure VMs (H16r, H16mr, A8 and A9, and some of the N-series sizes with “r” in their name too).

What can you do with rPerf? You can measure RDMA link performance between RDMA-enabled hosts. The rPerf tool is a CLI utility which has to be run on both machines: one of them running as a server and the other as a client. On the machine which you run as a client you specify the number of read iterations, buffer size and queue depth to start testing, and once the test completes you get throughput in MiB/s and kIOPS along with latency information in microseconds (minimum/maximum/average).

I’ve already mentioned that one of the strong points of this tool is its ability to work cross-platform. OS-wise it supports Windows 7/Server 2012 or newer, CentOS 7 and Ubuntu. A Windows-based OS must have Network Direct Provider v1 and lossless RDMA configured. Keep in mind that the latest drivers from the NIC manufacturer are recommended, as standard Windows drivers don’t have ND API support. In the case of a Linux-based OS, you will need the latest network drivers with RDMA and RoCE support.

All the command switches you need are well documented in the technical paper dedicated to this tool on the StarWind site, so I won’t be dwelling on that; I would say that the best thing is to try this tool in your RDMA-enabled environments.

Having real numbers comes in really handy in a few scenarios: when you set up your cluster and need to determine which mix of technologies gives you the best latency; when you need to verify whether your setup meets the requirements of your workload or the application demand outlined by an application vendor; or (and this is the most frequently forgotten thing) when you need to establish baseline performance numbers for your environment, to be able to compare against them once your setup receives higher load or when service consumers report degraded performance. With rPerf, you can solve at least one part of writing your performance baseline documentation. Having some firm numbers for RDMA connection performance also serves well for verifying/auditing RDMA connection performance in any other scenario, and with rPerf you can do it with one simple cross-platform tool.


K2 – How to identify process name by process instance ID

Somehow I kept forgetting this thing frequently enough to expend some effort on writing it down 🙂

At times when you are troubleshooting something in K2 you need to identify a process having only its process instance ID, and frequently knowledge of the solution and its workflows is the missing part (the developer is away on vacation or, in the worst-case scenario, nobody even knows if there was a developer in the first place 🙂 ). As a sample scenario, you can think of troubleshooting a failed process escalation, or a process instance which is stuck in the Running state.

Let’s look at this in more detail. For a failed escalation you will definitely have an error in the K2 host server log and an entry in the K2 Server.Async table – that will give you the ProcInstID value, and your next steps are: A) find out which process this instance belongs to, and B) find the status of this instance. Finding (B), at least if your process is in an error state, is easy, as it is supposed to be listed in the Error Profiles view, where you can retry the error and also see the Process Instance ID and process name.

But in case your instance is not listed in the Error Profiles view, or let’s say you are going step by step before jumping into Error Profiles, you still have two options to get the process name from a process instance ID:

(1) Using Workflow Reporting SmartObjects. You can use the Process Instance SmartObject (Workflow Reports > Workflow General > Process Instance) to get a list of process instances – you just feed the ProcInstID to it to get back the ProcessSetID:

Process Instance SmO Get List

The Process Set ID in turn can be fed to the Process Overview SmartObject (Workflow Reports > Workflow General > Process Overview), which will give you the process name:

Process Overview SmO Get List

(2) Querying the K2 database (in case you are already in SSMS and too lazy to switch over to K2 Server/Tester Tool 🙂 ). Here is a SQL query you need to run:

SQL Query – Get Process Name by Process Instance ID
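The query looks roughly as follows; the table and column names are my recollection of the K2 database schema and may differ between versions, so verify them against your database first:

```sql
-- Hedged sketch: resolve a process name from a process instance ID.
-- Table/column names are assumptions about the K2 schema - check yours before running.
DECLARE @ProcInstID INT = 12345;  -- your process instance ID here

SELECT ps.[Name] AS ProcessName
FROM   [Server].[ProcInst] AS pi
       JOIN [Server].[Proc]    AS p  ON p.[ID]  = pi.[ProcID]
       JOIN [Server].[ProcSet] AS ps ON ps.[ID] = p.[ProcSetID]
WHERE  pi.[ID] = @ProcInstID;
```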


GA of K2 5.2 – time to try new version

Today, 17.10.2018, K2 5.2 went into the GA stage, meaning news about the release was sent to all clients and partners, and starting from now we can download this new and shiny version from the K2 portal. So it is a perfect time to do a little review. Without further ado, let me start.

You can download the 5.2 installer from the K2 portal. And provided you have a test VM with the current version of K2, the update manager will get you to the new version within 30 minutes or so. Once the installer completes extraction of files, you are presented with a splash screen:

The splash screen provides you with essential information (.NET 4.6.1 requirement, where to run and so on) and allows you to kick off the installation process (conservative people like me can still locate Setup.exe and run it from the Installation folder).

If an existing installation is detected, K2 Update Manager gets you upgraded in just a few steps:

In case you run with multiple security labels you will immediately notice the improved label selection UI, which no longer looks like something from the past and is fully aligned with the modern K2 UI design:

Additionally, you will notice an increased number of available OAuth resource types:

My favorite under-the-hood improvement, which is a really huge thing, is the completely rebuilt identity cache and sync architecture, which was brought into the on-prem product from its cloud version (“battle-tested in the cloud”, if I may employ Microsoft-speak). At this stage all the internal infrastructure of the new Sync Engine is already here in 5.2 RTM, yet it is disabled – stay tuned for official news on when this feature will go live for all customers. At the initial stage K2 will work with selected customers to assist them in enabling and transitioning to the new Sync Engine. But like I said, you can already see that the underlying infrastructure for the new Sync Engine is here in the 5.2 release. In case you are familiar with the back-end/underlying tables, you can tell that the number of Identity tables has increased:

And Identity.Identity table has been expanded too:

Long story short, with all these changes and the new sync engine enabled, your identity cache sync speed will be greatly improved and, for example, even your URM Get Users SmO call against an Azure AD label can be served from cache without querying AAD.

There are more improvements and new features, and I will try to cover them in greater detail a bit later.

Additional resources / next steps:

Download K2 5.2

5.2 Release Notes

5.2 User Guide

5.2 Installation and Configuration Guide aka ICG

K2 5.2 Developer Reference aka DevRef

K2 Five (5.2) Fix Packs


K2 and AAD manual integration configuration – errors and solutions

Even after doing 3–5 installations which leverage manual integration between K2 and Azure Active Directory, I keep bumping into errors which at times take a disproportionately large amount of time to decipher, and to pinpoint that tiny/silly error in configuration settings which prevents your setup from working. So I decided to collate them all into a “symptom–solution” list and keep it in one place – i.e. in this blog post.

AADSTS50011: The reply url specified in the request does not match the reply urls configured for the application


AADSTS50011

This most likely means that the Token Endpoint Reply URL is not specified in your AAD app properties. That URL should look as follows: https://{K2SiteURL}/identity/token/oauth/2, and you need to make sure that it is added in your app Settings. You do that in Azure Portal > Azure Active Directory > App Registrations > %Your_K2_App_Name% > Settings > Reply URLs.

Add the required URL, be sure to wait at least something like 30 seconds after applying this change, and then try logging in again.

AADSTS700016: Application with identifier ‘%APP_URL%’ was not found in the directory ‘%AZURE_DIRECTORY_ID%’.

AADSTS700016

This error message means either a mismatch between the identifierUris listed in your app manifest file and the URL reported in the error message, or the absence of these URLs in the app manifest. Your K2 AAD app manifest file should contain your Runtime and Designer URLs, as shown on the screenshot below:

One thing to remember here is that when you edit the App ID URI from the AAD app properties and put an updated value there, it overwrites the identifierUris list in your app manifest – it removes your Designer and Runtime URLs from there (and anything else listed there) and puts the updated App ID URI value there, which will give you the AADSTS700016 error.

Claim mapping configuration cannot be found for this claim. Claim information: Name=”

This was the one on which I wasted hours of troubleshooting time, triple-checking all my configuration and asking each and everyone to help me spot what was wrong with my setup, only to discover that I had tried to log on with my Azure tenant admin account, which was listed in the AAD users list as an account with the “Microsoft Account” source, whereas it is necessary to use a user created in AAD (one of those listed in the AAD Users list with “Azure Active Directory” specified in the Source column). I’m not 100% sure if this problem can be better handled on the K2 side to present a more actionable/clear error message – the one we have here nudges you in the direction of checking identity claim mapping, while in this scenario the problem is completely different.

I will be extending this list with other error messages as I encounter them.


Unable to run ConnectionStringEditor.exe – “Did you run the tool from the K2HostServer directory?”

Sometimes you may get quite strange errors with simple solutions and silly reasons. Here is an example – you get a complaint that K2 ConnectionStringEditor.exe cannot be started with the following error message:

And that may really confuse you, especially when you are sure that it is a clean, new and shiny, correctly performed installation 🙂 You start wondering what config is required and why on earth it has disappeared, when seemingly there are no people nearby with a tendency to kill configs during log clean-up. But what you need here, before going into panic mode, is a second look at a larger screenshot, maybe… like this one:

So the error is actually caused by an attempt to run the tool from search results instead of locating it in its default location (“C:\Program Files (x86)\K2 blackpearl\Host Server\Bin\ConnectionStringEditor.exe”).

Another quite typical error with this utility is an attempt to run it without elevation on a server which has UAC enabled (I assume that should be any production Windows server, but you may still see it disabled in some environments).


K2 Configuration Analysis – Could not create certificate: Keyset does not exist

I was doing K2 5.1 installation recently and bumped into the following error reported by K2 installer:

The following was recorded in installer trace:

>> Certificates.FindCertificate: Find cert start
>> Certificates.FindCertificate: Find certificate: D9F8C4EF74BD33B6C9FCA421E68222E77B1A3315 in LocalMachine – My
>> Certificates.FindCertificate: Found 1 certificates
>> Certificates.ExportCertToString: Exporting…
>> EnsureCertificate.Execute: Logged Error: Could not create certificate: System.Security.Cryptography.CryptographicException: Keyset does not exist

at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
at System.Security.Cryptography.X509Certificates.X509Utils._ExportCertificatesToBlob(SafeCertStoreHandle safeCertStoreHandle, X509ContentType contentType, IntPtr password)
at System.Security.Cryptography.X509Certificates.X509Certificate.ExportHelper(X509ContentType contentType, Object password)
at SourceCode.Install.Security.Certificates.ExportCertToString(X509Certificate2 cert, String password)
at SourceCode.Install.Package.Actions.Security.EnsureCertificate.Execute(Target target)
>> EnsureCertificate.Execute: End EnsureCertificate, success: False
>> Target.Execute: !Completed: Server – EnsureCertificate, Success: False

It was really strange, as everything was working fine and seemingly all required certificates were in place. I tried to check the “C:\ProgramData\Microsoft\Crypto\RSA” folder permissions, but they were seemingly OK, and updating/editing them didn’t resolve the K2 installer error for me. Though, as I later found out, I was looking in the right direction. I next rebooted the server a few times and eventually lost RDP connectivity to it, with the following error message displayed by the MSTSC client:

At this point it was taken over by a Windows server admin, who resolved the MSTSC error after stumbling on a blog post at the Ask the Performance Team Blog and correcting the RSA folder permissions. As I understand it, the final solution was… …drum roll… ensuring/fixing correct permissions on the “C:\ProgramData\Microsoft\Crypto\RSA” folder. Specifically, the NT AUTHORITY\NETWORK SERVICE account didn’t have rights on the “C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys” folder. Once permissions were adjusted, it fixed both the MSTSC connectivity and the K2 installer error mentioned above.
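For the record, granting the missing rights can be done from an elevated command prompt with something along these lines (the exact rights required may vary per environment; Read is the conservative starting point):

```shell
:: Hedged example: give NETWORK SERVICE read access to the MachineKeys folder.
:: Run elevated; adjust the rights code (e.g. (R) vs (F)) to what your case needs.
icacls "C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys" /grant "NT AUTHORITY\NETWORK SERVICE:(R)"
```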

My takeaways from this case and the aforementioned blog post are the following:

– For the RDP problem there was a correlating server-side event with ID 36870: A fatal error occurred when attempting to access the TLS server credential private key. The error code returned from the cryptographic module is 0x8009030D. The internal error state is 10001.

– In general, if I see this error next time, I will run Procmon while reproducing the error/running the installer, with the expectation that it will show an “Access Denied” error while trying to access some of the machine keys, e.g.:

“C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\f686aace6942fb7f7ceb231212eef4a4”

Once we have the access denied error spotted, it should be possible to correct permissions with certainty that the problem is indeed caused by them. As we had the certificate thumbprint mentioned in the K2 installer trace, I assume there should be a corresponding access denied error related to accessing this specific machine key, which can be caught with the help of the Procmon utility.


Microsoft Azure 70-473 Design and Implement Cloud Data Platform Solutions – Design and Implement Security

Earlier I tried to compile a list of Microsoft documentation and other resources relevant for 70-473 exam preparation, but I quickly realized that the exam scope makes such a list too huge and unwieldy. I have now decided that I would rather split it into smaller resource lists following specific exam sections. Below you can see the list of reading resources relevant for the Design and Implement Security section of the 70-473 exam. You will mainly find links to Microsoft documentation in this list.

Here you can see the Design and Implement Security exam section topics as described on the official exam page:

  • Design and implement SQL Server Database security
    • Configure firewalls; manage logins, users, and roles; assign permissions; configure auditing; configure Transparent Database Encryption (TDE); configure row-level security; configure data encryption; configure data masking; configure Always Encrypted
  • Design and implement Azure SQL Database security
    • Configure firewalls; manage logins, users, and roles; assign permissions; configure auditing; configure row-level security; configure data encryption; configure data masking; configure Always Encrypted, configure Automatic Threat Detection

I tried to structure the list of links below based on these sub-objectives.

Configure Firewalls

Azure SQL Database and SQL Data Warehouse firewall rules

sp_set_database_firewall_rule (Azure SQL Database)

Azure SQL Database: Firewall security

Configure a Windows Firewall for Database Engine Access

Configure a Server to Listen on a Specific TCP Port
Configure the Windows Firewall to Allow SQL Server Access

TCP/IP Properties (IP Addresses Tab)

SQL Server: Frequently Used Ports

Security Considerations for SQL Server in Azure Virtual Machines

Manage logins, users and roles

Server and Database Roles in SQL Server

Managing Users, Roles, and Logins

Getting Started with Database Engine

Database-Level Roles

Server-Level Roles

CREATE CREDENTIAL (Transact-SQL)

SQL Server Separation of Duties (Word document download)

Assign Permissions

Getting Started with Database Engine Permissions

New Permissions in SQL Server 2014: IMPERSONATE ANY LOGIN, SELECT ALL USER SECURABLES, CONNECT ANY DATABASE and the old CONTROL SERVER

GRANT Server Permissions (Transact-SQL)

SQL Server Best Practices – Implementation of Database Object Schemas
+ see also: What are some best practices for using schemas in SQL Server?

Azure SQL Database and SQL Data Warehouse access control

Principals (Database Engine)

Configure Auditing

Get started with SQL database auditing

Set-AzureRmSqlServerAuditingPolicy

Use-AzureRmSqlServerAuditingPolicy

Configure Transparent Database Encryption (TDE)

Transparent Data Encryption (TDE)

Transparent data encryption for SQL Database and Data Warehouse 

Enable TDE on SQL Server Using EKM

ALTER DATABASE (Azure SQL Database)

Configure Row-Level Security (RLS)

SQL Server 2016 : Implement Row Level Security using Predicate Function and Security Policy

Row-Level Security

SQL Server Security Blog – Row-Level Security block predicates are generally available on Azure SQL Database

CREATE SECURITY POLICY (Transact-SQL)

Configure Data Encryption

SQL Server 2016 New Features: Security and Encryption

Encrypt a Column of Data (column/cell level encryption)

Extensible Key Management Using Azure Key Vault (SQL Server)

CREATE ASYMMETRIC KEY (Transact-SQL)

SQL Server Certificates and Asymmetric Keys
OPEN SYMMETRIC KEY (Transact-SQL)

Get started with Azure Key Vault
About keys, secrets, and certificates

Configure Data Masking

SQL Database dynamic data masking

ALTER TABLE (Transact-SQL)

Use Dynamic Data Masking to obfuscate your sensitive data

Configure Always Encrypted

Always Encrypted (Database Engine)

Always Encrypted (client development)

Develop using Always Encrypted with .NET Framework Data Provider

Always Encrypted: Protect sensitive data in SQL Database and store your encryption keys in Azure Key Vault

Microsoft Azure SQL Database provides unparalleled data security in the cloud with Always Encrypted

Configure Always Encrypted using SQL Server Management Studio

SqlConnection.ConnectionString Property
Use .NET (C#) with Visual Studio to connect and query an Azure SQL database

Configure Automatic Threat Detection

Use PowerShell to configure SQL Database auditing and threat detection

Azure SQL Database Threat Detection

Azure Security Center Documentation

Other/General

Controlling and granting database access to SQL Database and SQL Data Warehouse – Secure your Azure SQL Database

Azure Cosmos DB: SQL API getting started tutorial

Get started with Azure Table storage and the Azure Cosmos DB Table API using .NET

ADO.NET Overview

Securing your SQL Database

Azure Storage replication

Collation and Unicode Support

AzureRM.Sql

Management Data Warehouse

SQL Server Profiler

Monitoring SQL Server Performance

Monitor Resource Usage (System Monitor)

Next time I will try to compile a similar list for the Design and implement high availability, disaster recovery, and scalability section of the exam.


MSTSC unable to connect with “CredSSP encryption oracle remediation” message

I just ran into this “CredSSP encryption oracle remediation” error message while trying to connect to a Windows Server 2016 box from a Windows 10 client today. This issue is explained in the MSFT KB article “CredSSP encryption oracle remediation” error when RDP to a Windows VM in Azure, and it all boils down to specific updates missing either on the server or on the client. In my case I was sure that my client box was fully updated, and it was an un-patched server outside of my control which was the culprit. For this scenario the workaround is to set the Encryption Oracle Remediation policy to Enabled, and then change the Protection Level to Vulnerable. The fastest way to do it on a standalone box is to add the corresponding registry value, which we can do using the REG ADD command:
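The registry equivalent of that policy setting looks like this (run from an elevated command prompt, then reconnect; remember to revert once the server is patched, as value 2 is the “Vulnerable” level):

```shell
:: Set the CredSSP Encryption Oracle Remediation policy to "Vulnerable" (value 2)
:: so a patched client can still RDP to an un-patched server.
REG ADD "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters" /v AllowEncryptionOracle /t REG_DWORD /d 2 /f
```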

For the workaround for the scenario of an un-patched client connecting to a patched server, refer to the aforementioned MSFT KB article.
