Credits: https://www.developertoarchitect.com/downloads/worksheets.html
Recently I was working on a solution where there was a requirement to implement an audit trail feature. An audit trail is a security-relevant chronological record, set of records, and/or destination and source of records that provides documentary evidence of the sequence of activities that have, at any time, affected a specific operation, procedure, event, or device.
After going through the available options and comparing the pros and cons of each, we settled on a SQL Server feature called Temporal Tables.
What are Temporal Tables?
Temporal tables (also known as system-versioned temporal tables) are a database feature that brings built-in support for providing information about data stored in the table at any point in time, rather than only the data that is correct at the current moment in time. This database feature was made available from SQL Server 2016 onwards.
A system-versioned temporal table is a type of user table designed to keep a full history of data changes, allowing easy point-in-time analysis. This type of temporal table is referred to as a system-versioned temporal table because the period of validity for each row is managed by the system (that is, the database engine).
Every temporal table has two explicitly defined columns, each with a datetime2 data type. These columns are referred to as period columns. These period columns are used exclusively by the system to record the period of validity for each row, whenever a row is modified. The main table that stores current data is referred to as the current table, or simply as the temporal table.
In addition to these period columns, a temporal table also contains a reference to another table with a mirrored schema, called the history table. The system uses the history table to automatically store the previous version of the row each time a row in the temporal table gets updated or deleted. During temporal table creation, users can specify an existing history table (which must be schema compliant) or let the system create a default history table.
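As a minimal sketch, a system-versioned temporal table can be created and queried as follows. The Employee table, its columns, and the query date are hypothetical; the period columns and the SYSTEM_VERSIONING option are the parts that make the table temporal.

```sql
-- Hypothetical table: the two period columns and the SYSTEM_TIME period
-- are what make the table system-versioned.
CREATE TABLE dbo.Employee
(
    EmployeeId INT NOT NULL PRIMARY KEY CLUSTERED,
    Name       NVARCHAR(100) NOT NULL,
    Salary     DECIMAL(10, 2) NOT NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));

-- Point-in-time query: returns the rows as they were at the given instant,
-- transparently combining the current table and the history table.
SELECT *
FROM dbo.Employee
FOR SYSTEM_TIME AS OF '2022-01-01T00:00:00';
```

The database engine maintains ValidFrom and ValidTo automatically; application code never writes to them.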
If you are planning to make use of Temporal Tables in .NET Core, this feature has been accessible since EF Core 6.0.
Recently I was required to include some functionality which consists of compression and decompression of data. To accomplish this task I made use of the GZipStream object.
Initially, I was using the following code for compression:
using var gs = new GZipStream(mso, CompressionMode.Compress);
msi.CopyTo(gs);
When running this code, no compressed data was being produced. After changing the code to the version below, compressed data was produced correctly.
using (var gs = new GZipStream(mso, CompressionMode.Compress))
{
    msi.CopyTo(gs);
}
The two snippets look functionally identical, but they behave differently. With the using declaration (first snippet), the GZipStream is only disposed at the end of the enclosing scope, so if the output stream is read before that point, the final compressed block is still buffered inside the GZipStream and the output appears empty. With the using block (second snippet), disposal happens at the closing brace, which flushes the remaining compressed bytes into the output stream before it is read.
Following the above, I ran some additional tests using CompressionMode.Decompress. With both code samples shown above, the issue wasn't reproduced during decompression: CopyTo reads the decompressed stream to the end, so there is no pending buffered output waiting on disposal.
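A self-contained round-trip sketch of the working pattern (the stream variables and method names here are my own):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GZipRoundTrip
{
    // Compress: the GZipStream must be disposed before the output stream's
    // contents are read, otherwise the final compressed block is still
    // buffered and the result is empty/incomplete.
    public static byte[] Compress(byte[] data)
    {
        using var mso = new MemoryStream();
        using (var gs = new GZipStream(mso, CompressionMode.Compress))
        {
            gs.Write(data, 0, data.Length);
        } // Dispose flushes the remaining compressed bytes into mso.
        return mso.ToArray(); // ToArray is valid even on a closed MemoryStream.
    }

    public static byte[] Decompress(byte[] data)
    {
        using var msi = new MemoryStream(data);
        using var gs = new GZipStream(msi, CompressionMode.Decompress);
        using var mso = new MemoryStream();
        gs.CopyTo(mso); // CopyTo reads to end, so buffering is not an issue here.
        return mso.ToArray();
    }

    static void Main()
    {
        var original = Encoding.UTF8.GetBytes("hello gzip");
        var roundTripped = Decompress(Compress(original));
        Console.WriteLine(Encoding.UTF8.GetString(roundTripped)); // prints "hello gzip"
    }
}
```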
Recently we were required to do a bulk delete of data that exists in Cosmos DB. This was required as a one-time process for the purpose of cleaning up orphaned data caused by legacy functionality from another system.
To provide an overview, the current structure for recording data in Cosmos DB is set up as shown below. In our instance, the partition_key is the car_color.
Unfortunately, Cosmos DB doesn't provide out-of-the-box functionality to accommodate this requirement. To accomplish this task, one can make use of the following stored procedure, which needs to be added to the container from which the data is to be deleted.
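A sketch of such a stored procedure, based on the bulk-delete sample pattern from the Azure Cosmos DB documentation (the sproc and query names are illustrative):

```javascript
// Bulk-delete stored procedure. Runs inside the partition it is executed
// against; deletes matching documents page by page until none remain or
// the server signals it is out of time (accepted === false).
function bulkDeleteSproc(query) {
    var container = getContext().getCollection();
    var response = getContext().getResponse();
    var responseBody = { deleted: 0, continuation: true };

    tryQueryAndDelete();

    function tryQueryAndDelete() {
        var accepted = container.queryDocuments(container.getSelfLink(), query, {},
            function (err, documents) {
                if (err) throw err;
                if (documents.length > 0) {
                    tryDelete(documents);
                } else {
                    // No more matching documents: we are done.
                    responseBody.continuation = false;
                    response.setBody(responseBody);
                }
            });
        // Not accepted: return what was deleted so far; the caller can
        // re-run the sproc to continue.
        if (!accepted) response.setBody(responseBody);
    }

    function tryDelete(documents) {
        if (documents.length > 0) {
            var accepted = container.deleteDocument(documents[0]._self, {},
                function (err) {
                    if (err) throw err;
                    responseBody.deleted++;
                    documents.shift();
                    tryDelete(documents);
                });
            if (!accepted) response.setBody(responseBody);
        } else {
            tryQueryAndDelete();
        }
    }
}
```

Because a stored procedure has a bounded execution time, the response indicates whether a continuation run is needed; the caller should re-execute until continuation is false.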
When executing this stored procedure, apart from the query parameter, one also needs to supply the partition_key. Hence, only data within that specific partition would be deleted, not data across the whole container. Should the user need to delete data from different partitions, the stored procedure must be executed multiple times, each time supplying the related partition_key.
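Executing the stored procedure from the .NET SDK could look like the sketch below (using the Microsoft.Azure.Cosmos package; the database, container, sproc name, query, and partition key value are all assumptions):

```csharp
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

// Assumed names: "mydb" database, "cars" container,
// sproc registered as "bulkDeleteSproc".
var client = new CosmosClient("<connection-string>");
var container = client.GetContainer("mydb", "cars");

// Hypothetical query selecting the orphaned documents to delete.
var query = "SELECT c._self FROM c WHERE c.is_orphaned = true";

// The partition key scopes the delete: only documents whose car_color is
// "red" are considered. Repeat per partition key value as needed.
var result = await container.Scripts.ExecuteStoredProcedureAsync<dynamic>(
    "bulkDeleteSproc",
    new PartitionKey("red"),
    new object[] { query });
```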
Recently I was required to do some analysis of some of the APIs offered within the Azure Cognitive Services suite. Initially, the APIs to be analyzed were those listed below; however, one of them was later dropped for reasons that will be explained later.
Recently I faced a requirement where, from a method, I needed to kick off an independent background task. Below is a snippet.
The challenge encountered was that the background task needed to make use of objects that were injected into the class's constructor via dependency injection.
In particular, the background task needed to use the mediator object. However, upon running this code, I started getting an ObjectDisposedException: "Cannot access a disposed object." This happens because the dependency injection scope that created those objects (typically the scope of the originating web request) is disposed once the request completes, while the background task is still running.
To overcome this problem, what was required was to create a scope within the background job itself. Hence we made use of IServiceScopeFactory, which helps in instantiating the required service within the scope of the background task.
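A sketch of the pattern (the class and command names are hypothetical; IServiceScopeFactory comes from Microsoft.Extensions.DependencyInjection and IMediator from MediatR):

```csharp
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.DependencyInjection;

public class OrderService
{
    private readonly IServiceScopeFactory _scopeFactory;

    // Inject the scope factory (a singleton) instead of the scoped
    // services the background task will need.
    public OrderService(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void QueueBackgroundWork()
    {
        _ = Task.Run(async () =>
        {
            // Create a scope owned by the background task, so its services
            // are not disposed when the originating request completes.
            using var scope = _scopeFactory.CreateScope();
            var mediator = scope.ServiceProvider.GetRequiredService<IMediator>();
            await mediator.Send(new ProcessOrderCommand()); // hypothetical command
        });
    }
}
```

The key point is that the scope (and therefore the mediator) is created and disposed inside the background task itself, independently of the request that started it.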
When retrieving data from ElasticSearch, one approach is via the following code.
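One such approach, sketched with the NEST client (the index name and Document type are assumptions):

```csharp
using System;
using Nest;

var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DefaultIndex("documents");
var client = new ElasticClient(settings);

// Match-all search: by default ElasticSearch returns the full _source of
// every matching document.
var response = client.Search<Document>(s => s
    .Query(q => q.MatchAll()));

var documents = response.Documents;
```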
Should the document structure be something similar to the following, upon running the above search request the response would include all persisted data and map it to all the provided properties.
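For illustration, assume a Document class along these lines (matching the Id, Name and Tags properties referred to below):

```csharp
using System.Collections.Generic;

public class Document
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<string> Tags { get; set; }
}
```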
Optimizing retrieval by excluding unwanted properties
There might be instances where we need only specific data within this structure. One can simply remove the undesired properties from the Document class. Though this would present the user with only the wanted set of properties, performance-wise it wouldn't bring any benefit, because ElasticSearch would still return all the persisted data; the exclusion would only happen while mapping the result to our structure.
To improve performance by avoiding bulky, unnecessary network traffic from the ElasticSearch server, we can optimize the search request to exclude unwanted properties. This can be achieved via the following code.
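With NEST, such source filtering can be sketched as follows (the Document class and its Tags property are assumptions):

```csharp
var response = client.Search<Document>(s => s
    .Query(q => q.MatchAll())
    .Source(sf => sf
        // Tell ElasticSearch not to send the tags field back at all,
        // rather than dropping it client-side after the fact.
        .Excludes(e => e.Field(f => f.Tags))));
```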
Using the above search request, we would be instructing ElasticSearch to exclude the tags property from the response. Hence, the result sent by ElasticSearch would consist only of the properties Id and Name.
Optimizing retrieval by including wanted properties
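The inverse approach, again sketched with NEST, lists only the wanted properties:

```csharp
var response = client.Search<Document>(s => s
    .Query(q => q.MatchAll())
    .Source(sf => sf
        // Only the name field is returned by ElasticSearch.
        .Includes(i => i.Field(f => f.Name))));
```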
Using the above search request, we would be instructing ElasticSearch to include only the Name property. All other properties would be left at their default values (null for reference types).