https://mazeez.dev/Muhammad Azeez20232023-02-03T19:19:38Zhttps://mazeez.dev/assets/images/cover.jpgA blog about software engineering and beyond!https://mazeez.dev/posts/chat-gpt-csharp-bindingsGenerating C# bindings for native libraries by using ChatGPT2022-12-17T00:00:00Z<p>One of the most boring and tedious tasks in wrapping native libraries in C# is writing the DllImport functions.
Fortunately, Large Language Models like ChatGPT have become powerful enough to generate C# bindings from a C header file.
Or at the very least, they give you a good foundation that you can tweak and improve upon.</p>
<blockquote class="blockquote">
<p>This post is part of the C# advent calendar 2022, for other articles in this collection see <a href="https://csadvent.christmas/">C# Advent Calendar 2022</a>.</p>
</blockquote>
<p>Here is an example where I ask ChatGPT to give me a C# binding for <a href="https://extism.org/">Extism</a>'s SDK:</p>
<p>My prompt:</p>
<pre><code>Convert this header file to C# DllImport functions
typedef int32_t ExtismPlugin;
typedef uint64_t ExtismSize;
ExtismPlugin extism_plugin_register(const uint8_t *wasm, ExtismSize wasm_size, bool with_wasi);
bool extism_plugin_update(ExtismPlugin index,
const uint8_t *wasm,
ExtismSize wasm_size,
bool with_wasi);
bool extism_plugin_config(ExtismPlugin plugin, const uint8_t *json, ExtismSize json_size);
bool extism_function_exists(ExtismPlugin plugin, const char *func_name);
int32_t extism_call(ExtismPlugin plugin_id,
const char *func_name,
const uint8_t *data,
ExtismSize data_len);
const char *extism_error(ExtismPlugin plugin);
ExtismSize extism_output_length(ExtismPlugin plugin);
void extism_output_get(ExtismPlugin plugin, uint8_t *buf, ExtismSize len);
bool extism_log_file(const char *filename, const char *log_level);
const char *extism_version();
</code></pre>
<p>ChatGPT's response:</p>
<pre><code>using System;
using System.Runtime.InteropServices;
public static class ExtismNative
{
// The DllImport attribute specifies the library to import and the function to call
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern int extism_plugin_register(byte[] wasm, ulong wasm_size, bool with_wasi);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern bool extism_plugin_update(int index, byte[] wasm, ulong wasm_size, bool with_wasi);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern bool extism_plugin_config(int plugin, byte[] json, ulong json_size);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern bool extism_function_exists(int plugin, string func_name);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern int extism_call(int plugin_id, string func_name, byte[] data, ulong data_len);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr extism_error(int plugin);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern ulong extism_output_length(int plugin);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern void extism_output_get(int plugin, byte[] buf, ulong len);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern bool extism_log_file(string filename, string log_level);
[DllImport("extism", CallingConvention = CallingConvention.Cdecl)]
public static extern IntPtr extism_version();
}
</code></pre>
<p>Please note that it generates a slightly different response every time you ask it, but the general idea is the same. While it's not perfect,
it's definitely better than starting from scratch. I took ChatGPT's response and tweaked it a bit to write
<a href="https://github.com/extism/extism/blob/dd0c5757da43b6d12e8e51fcad9f8f0409280b2b/dotnet/src/Extism.Sdk/LibExtism.cs">this handcrafted version</a>.</p>
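<p>One detail worth double-checking in generated bindings is how C strings are marshalled. The header declares <code>extism_error</code> and <code>extism_version</code> as returning <code>const char *</code>, and ChatGPT correctly maps them to <code>IntPtr</code>: if they were declared as returning <code>string</code>, the marshaller would try to free memory owned by the native library. As a rough sketch (the <code>GetVersion</code> wrapper below is a hypothetical helper, not part of the real SDK), you can convert the pointer manually:</p>
<pre><code class="language-cs">// Hypothetical convenience wrapper around the raw binding.
// Marshal.PtrToStringUTF8 copies the native UTF-8 string into a
// managed string without taking ownership of the native buffer.
public static string? GetVersion()
{
    var ptr = ExtismNative.extism_version();
    return Marshal.PtrToStringUTF8(ptr);
}
</code></pre>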
https://mazeez.dev/posts/working-remotely-from-iraqWorking remotely from Iraq as a Software Developer2022-10-08T00:00:00Z<p><strong>Disclaimer:</strong> I am writing about my personal experience here. Your mileage may vary.</p>
<p>I have been working remotely for a European company since the beginning of the year. I have been through a lot of ups and downs since then, and I want to document some of the insights I have learned.</p>
<p>Working remotely is getting more and more common in Iraq's tech community. I have several friends who work for companies in Europe and other places. Most of the prerequisites of working remotely are available in Iraq and Kurdistan, even though they are not as convenient as one might hope, so I think more software developers should consider it as an option.</p>
<h2 id="finding-a-remote-job">Finding a remote job</h2>
<p>The first step of working remotely is to find a job opportunity that's a good fit for you. Things that would help you get a job are:</p>
<ul>
<li>The most important requirement is being good at what you do, whether you're a software developer, a DevOps engineer, a QA engineer, a product owner, a designer, etc. Being able to show how you would benefit your potential employers helps you a lot. But don't let your imposter syndrome stop you from achieving your goals. Never filter yourself out of opportunities in life.</li>
<li>English. Your English should be good enough to conduct your day-to-day tasks without problems. Communication is really important in remote jobs and is sometimes more difficult than in on-site jobs, so having a good level of English is very helpful.</li>
<li>Try to participate in online communities, contribute to open source projects, and make friends online. I have gotten all of my jobs (including my current one) through friends. While LinkedIn is known for professional networking, don't ignore Twitter and GitHub.</li>
</ul>
<p>When looking for job opportunities, try to focus on Europe, because US companies are much more hesitant about hiring from Iraq and have complicated tax requirements. Also, Europe's time zone is very close to Iraq's, so you don't have to sacrifice your social life. But take that as a guideline, not a rule; there are always exceptions. I know people who have joined companies from other parts of the world too.</p>
<p>Where to find a remote job:</p>
<ul>
<li><a href="https://angel.co/">https://angel.co/</a></li>
<li><a href="https://weworkremotely.com/">https://weworkremotely.com/</a></li>
<li><a href="https://remoteok.com/">https://remoteok.com/</a></li>
</ul>
<p>Go search through the above websites and see which tech stacks and roles have the most openings. Maybe by changing your tech stack you can increase your chances of getting a job.</p>
<p>During the interview process, it's good to consider these points:</p>
<ul>
<li>Working hours. Do they want you to follow a specific working schedule? Or can you work flexibly? Or maybe something in between. I asked my current company to work on Sundays and take Fridays off so that I can spend Fridays with my family. The point is, you can talk to them about your preferences and try to find a good common ground.</li>
<li>Are they willing to go through some hoops to pay you? A lot of the services that companies use to pay their employees don't work for Iraq. So they have to be willing to make an exception for you.</li>
<li>Management's attitude towards working remotely.</li>
<li>Optional: Visa Sponsorship. If you're interested, some companies are willing to sponsor you for a work visa in order to go and live in their country.</li>
</ul>
<p>In terms of employment, you'll probably be hired as a "subcontractor", which means you won't become an official employee, for tax reasons: making you an employee would complicate their taxes and they would have to open an office in Iraq. I am okay with that, even though it means you don't get some of the benefits like health insurance. But make sure there is a contract between you and your potential employer, because you'll need it for opening a bank account and it also lowers your chance of getting scammed.</p>
<h2 id="getting-paid">Getting Paid</h2>
<p>Unfortunately most payment services (PayPal, Wise, etc) don't work for Iraq. These are some of the available options:</p>
<ol>
<li>Bank transfer: The most reliable option in my experience.</li>
<li>Money transfer services like Western Union: They have limits on the transfer amount and, based on what I have heard, you get flagged if you use them regularly.</li>
<li>Cryptocurrencies: I don't like them, so I haven't tried them.</li>
</ol>
<p>I personally use bank transfer, so I am going to focus on that here. From experience, the reliable options in Erbil are:</p>
<ul>
<li><a href="https://www.bbacbank.com">https://www.bbacbank.com</a>: Doesn't open personal accounts anymore</li>
<li><a href="https://www.byblosbank.com">https://www.byblosbank.com</a></li>
<li><a href="https://nbi.iq">https://nbi.iq</a>: I had a very negative experience with it and they didn't open an account for me in the end. But a friend of mine uses it for receiving his salary and it seems to work for him. If you did decide to go with them visit their 60m branch, not their 100m branch.</li>
<li><a href="https://fib.iq">https://fib.iq</a>: Special thanks to <a href="https://www.linkedin.com/in/akamfoad/">Akam Foad</a> for confirming that it also can be used for international transfers.</li>
</ul>
<p>Opening a bank account requires:</p>
<ol>
<li>A support letter from your employer. A contract between you and your employer that includes your name, your passport number, and your salary also works. But because working remotely is still in its early days, some bank employees might have difficulty understanding your use case.</li>
<li>A passport or National ID</li>
<li>A deposit. This varies from bank to bank. Usually it's somewhere between 200 and 1,200 USD.</li>
<li>One or more photos.</li>
<li>A ton of signatures.</li>
<li>And other documents depending on the bank.</li>
</ol>
<p>It can take a couple of hours or a couple of days, depending on the situation and the bank. You usually get a debit card (and optionally a credit card) that you can use to withdraw money from ATMs. Withdrawing money from the same bank's ATM is free. Ask your bank about the limits of your debit card. It can range from 1000 USD/day to 5000 USD/day.</p>
<p>After opening the account you will get a SWIFT code (the bank's unique identifier in the global banking system) and an IBAN (your account's unique identifier). Ask your bank for transfer instructions. They contain the list of intermediary banks that your employer's bank can send the money through. International money transfer is very similar to computer network routing, where each bank is a router. Unfortunately, most Iraqi banks are not very well known internationally, so you have to provide the specific intermediary banks to make sure your transfers succeed. Also, make sure you open your account in USD and receive money in USD. Euro transfers are slower and more expensive, and most banks here don't support them.</p>
<p>From my experience, your first transfer takes some time (a couple of weeks to a month). And sometimes the bank asks you to provide them with an invoice for the transaction. I use a simple Google Sheet (that I got from a friend) to create the invoices. I don't stamp them. Some banks require you to sign and scan the invoices when you send them. After the first transfer, subsequent transfers should take less than a week.</p>
<p>The transfer fee depends on the sending bank, the receiving bank, and the intermediary banks. My current employer pays the transfer fee so this might be something you can ask your potential employer.</p>
<h2 id="finding-a-space-to-work-in">Finding a space to work in</h2>
<p>You have a lot of options when it comes to working remotely:</p>
<ul>
<li>Working from home: You can invest in building a home office for yourself. I tried this option first and it didn't work out for me. After a couple of months I was gaining weight and my social life was next to zero.</li>
<li>Co-working spaces: There are several co-working spaces in Iraq. They provide you with a desk, Wi-Fi, and a common area, and you pay them monthly or daily. You can go work there for a day and see if it suits you. My problem was that I couldn't do my meetings there, which was a deal breaker, especially because in the beginning you'll have a lot of onboarding meetings.</li>
<li>Renting an office: I am not sure if individuals can rent an office in Iraq, because they ask you for your company papers or require you to be a member of certain syndicates (lawyers, for example).</li>
<li>Cafes: I haven't tried this option.</li>
</ul>
<p>Overall, you can experiment with different options and see which one fits your needs and preferences best.</p>
<p>I wish you luck in your journey and don't hesitate to reach out if you have any questions. However, when asking questions please follow these simple guidelines:</p>
<ul>
<li><a href="https://nohello.net/">https://nohello.net/</a></li>
<li>Be specific and give me some context. Each person's situation is different.</li>
</ul>
<p>I wish you best of luck!</p>
https://mazeez.dev/posts/periodic-backups-potsgresql-s3-cronAutomatic periodic backups from PostgreSQL to S3 using Cron2022-01-29T00:00:00Z<p>If you're managing your own databases, you'll need to make sure your database is backed up properly. A good option is to store your backups in an S3-compatible object storage.</p>
<ol>
<li>Install <code>s3cmd</code>:</li>
</ol>
<pre><code>sudo apt install s3cmd
</code></pre>
<ol start="2">
<li>Configure it:</li>
</ol>
<pre><code>s3cmd --configure
</code></pre>
<p>It will ask you several questions; consult your S3 provider's docs for more information.</p>
<ol start="3">
<li>Write a bash script to create a PostgreSQL dump and upload it to S3:</li>
</ol>
<pre><code class="language-bash">DIR=$(dirname "${BASH_SOURCE[0]}")
DB_NAME=my_db
# Note: the braces around DB_NAME are required; without them bash would
# look for a variable called DB_NAME_ and silently expand it to nothing
DUMP_PATH="$DIR/${DB_NAME}_$(date +"%Y-%m-%d@%H-%M").dump"
# Dump the database
pg_dump --encoding utf8 --format c --compress 9 --file "$DUMP_PATH" "$DB_NAME"
# Remove the leading ./ from the path to get the S3 object key
DUMP_KEY=$(echo "$DUMP_PATH" | cut -c 3-)
# Upload the dump file to S3
s3cmd put "$DUMP_PATH" "s3://$DB_NAME-db-backups/$DB_NAME/$DUMP_KEY"
# Remove the dump file from disk
rm "$DUMP_PATH"
</code></pre>
<p>For more information about taking PostgreSQL backups, checkout <a href="https://mazeez.dev/posts/backup-and-restore-in-postgres">my previous post</a>.</p>
<ol start="4">
<li>Write a crontab job to run your script periodically:</li>
</ol>
<pre><code>crontab -e
</code></pre>
<pre><code>0 */3 * * * path/to/your/script/job.sh
</code></pre>
<p><strong>Note:</strong> This expression means the job will run every 3 hours (at minute 0 of every third hour). You can change it to whatever you want.</p>
https://mazeez.dev/posts/textarea-bidiSupporting Bi-directional text in Html TextArea2022-01-13T00:00:00Z<p>A <code><textarea></code> is an HTML element used to capture multiline user input. By default, its direction is either <code>right to left</code> or <code>left to right</code>. But what if we want each paragraph to have its own direction? This is very useful when the text is a mix of multiple languages, for example Kurdish and English.</p>
<p>I asked the question on Twitter and my good friend <a href="https://twitter.com/AkamFoad">Akam Foad</a> came to the rescue:</p>
<blockquote class="twitter-tweet"><p lang="en" dir="ltr">have you tested applying `unicode-bidi: plaintext` on textarea?<a href="https://t.co/UC8A0ZKYKj">https://t.co/UC8A0ZKYKj</a></p>— Akam Foad (@AkamFoad) <a href="https://twitter.com/AkamFoad/status/1481557755248918531?ref_src=twsrc%5Etfw">January 13, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<p>It turns out that you can easily support this by specifying <code>unicode-bidi: plaintext</code> in the styles of the <code><textarea></code>:</p>
<pre><code class="language-html"><textarea style="unicode-bidi:plaintext"></textarea>
</code></pre>
<p>And this is the result:</p>
<script async src="//jsfiddle.net/mhmd_azeez/egzpcovr/2/embed/html,result/"></script>
<p>Without <code>unicode-bidi:plaintext</code>:</p>
<script async src="//jsfiddle.net/mhmd_azeez/hmL9s6a7/embed/html,result/"></script>
<p><strong>Update:</strong> Setting <code>dir="auto"</code> attribute on the <code><textarea></code> has the same effect:</p>
<pre><code class="language-html"><textarea dir="auto"></textarea>
</code></pre>
https://mazeez.dev/posts/why-google-oauth-profile-picture-returns-403Why public google user content images return 4032022-01-09T00:00:00Z<p>When using Google as your OIDC provider, you can ask for the <code>picture</code> claim, which contains the user's profile picture. It's usually a URL like this:</p>
<pre><code>https://lh3.googleusercontent.com/erjNVzk6nPUaUZuOTg2ObT12EzWWIokbuRdyuTkxRGR1nXQ5vhYk34twIt05FmaBNt7_yB3J
</code></pre>
<p>I wanted to show the user's profile picture in an <code><img></code> tag, but Google was responding with 403. I searched around for an answer and stumbled upon <a href="https://stackoverflow.com/a/61042200/7003797">this stackoverflow answer</a>, which had the solution:</p>
<pre><code><img src="https://lh3.googleusercontent.com/erjNVzk6nPUaUZuOTg2ObT12EzWWIokbuRdyuTkxRGR1nXQ5vhYk34twIt05FmaBNt7_yB3J" referrerpolicy="no-referrer">
</code></pre>
<p>By setting the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-referrerpolicy"><code>referrerpolicy</code></a> attribute to <code>no-referrer</code>, the browser will not send the <code>referrer</code> header and this seems to solve the issue.</p>
https://mazeez.dev/posts/auth-in-integration-testsMocking Authentication and Authorization in ASP.NET Core Integration Tests2021-12-12T00:00:00Z<p>ASP.NET Core makes writing integration tests very easy and even fun. One aspect that might be a bit tough to figure out is authentication and authorization. We might want to run integration tests under different users and different roles.</p>
<p>To get started, let's assume we have an endpoint like this:</p>
<pre><code class="language-cs">app.MapGet("hi", (HttpContext httpContext) =>
{
var userId = httpContext.User?.Claims?.FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier)?.Value;
return $"Hello #{userId}";
}).RequireAuthorization();
</code></pre>
<p>It's a very simple endpoint. It gets the currently logged in user's ID and says hello to them.</p>
<p>To make it possible to mock auth, we have to register a custom <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.authentication.authenticationhandler-1?view=aspnetcore-6.0"><code>AuthenticationHandler</code></a>.</p>
<p>Here is a simple implementation of a mock Authentication Handler:</p>
<pre><code class="language-cs">public class TestAuthHandlerOptions : AuthenticationSchemeOptions
{
public string DefaultUserId { get; set; } = null!;
}
public class TestAuthHandler : AuthenticationHandler<TestAuthHandlerOptions>
{
public const string UserId = "UserId";
public const string AuthenticationScheme = "Test";
private readonly string _defaultUserId;
public TestAuthHandler(
IOptionsMonitor<TestAuthHandlerOptions> options,
ILoggerFactory logger,
UrlEncoder encoder,
ISystemClock clock) : base(options, logger, encoder, clock)
{
_defaultUserId = options.CurrentValue.DefaultUserId;
}
protected override Task<AuthenticateResult> HandleAuthenticateAsync()
{
var claims = new List<Claim> { new Claim(ClaimTypes.Name, "Test user") };
// Extract User ID from the request headers if it exists,
// otherwise use the default User ID from the options.
if (Context.Request.Headers.TryGetValue(UserId, out var userId))
{
claims.Add(new Claim(ClaimTypes.NameIdentifier, userId[0]));
}
else
{
claims.Add(new Claim(ClaimTypes.NameIdentifier, _defaultUserId));
}
// TODO: Add as many claims as you need here
var identity = new ClaimsIdentity(claims, AuthenticationScheme);
var principal = new ClaimsPrincipal(identity);
var ticket = new AuthenticationTicket(principal, AuthenticationScheme);
var result = AuthenticateResult.Success(ticket);
return Task.FromResult(result);
}
}
</code></pre>
<p>The basic idea is this: by default, every request is authenticated with the user ID provided in the <code>TestAuthHandlerOptions</code>. If a test wants to send a request on behalf of a different user, it can do so by sending the user ID in the <code>UserId</code> header of the HTTP request.</p>
<p>We also need to create a custom WebApplicationFactory that takes advantage of our mock Authentication Handler:</p>
<pre><code class="language-cs">public class WebAppFactory : WebApplicationFactory<Program>
{
public string DefaultUserId { get; set; } = "1";
protected override void ConfigureWebHost(IWebHostBuilder builder)
{
builder.ConfigureTestServices(services =>
{
services.Configure<TestAuthHandlerOptions>(options => options.DefaultUserId = DefaultUserId);
services.AddAuthentication(TestAuthHandler.AuthenticationScheme)
.AddScheme<TestAuthHandlerOptions, TestAuthHandler>(TestAuthHandler.AuthenticationScheme, options => { });
});
}
}
</code></pre>
<p>We have defined a <code>DefaultUserId</code> property on the factory so that the individual test fixtures can specify their own default user ID.</p>
<p>And we can use the mock authentication in the test cases like this:</p>
<pre><code class="language-cs">public class SimpleTest : IClassFixture<WebAppFactory>
{
private HttpClient _httpClient;
public SimpleTest(WebAppFactory factory)
{
factory.DefaultUserId = "5";
_httpClient = factory.CreateClient();
_httpClient.BaseAddress = new Uri("https://localhost/");
// Use our mock Auth scheme
_httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Test");
}
[Fact]
public async Task SayHiToNumber5()
{
_httpClient.DefaultRequestHeaders.Remove(TestAuthHandler.UserId);
var response = await _httpClient.GetStringAsync("hi");
Assert.Equal("Hello #5", response);
}
[Fact]
public async Task SayHiToNumber1()
{
_httpClient.DefaultRequestHeaders.Add(TestAuthHandler.UserId, "1");
var response = await _httpClient.GetStringAsync("hi");
Assert.Equal("Hello #1", response);
}
[Fact]
public async Task SayHiToNumber3()
{
_httpClient.DefaultRequestHeaders.Add(TestAuthHandler.UserId, "3");
var response = await _httpClient.GetStringAsync("hi");
Assert.Equal("Hello #3", response);
}
}
</code></pre>
<p>And that's it! With a few lines of code, you now have a flexible mock authentication scheme that you can use in your tests. You can also customize it to match your needs.</p>
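<p>As one possible customization (a sketch, not part of the sample project), the handler could support roles in the same way it supports user IDs: read a hypothetical <code>UserRoles</code> header and turn each value into a role claim, so individual tests can exercise role-based authorization:</p>
<pre><code class="language-cs">// Hypothetical addition to TestAuthHandler.HandleAuthenticateAsync:
// read a comma-separated "UserRoles" header and add a role claim per entry.
if (Context.Request.Headers.TryGetValue("UserRoles", out var roles))
{
    foreach (var role in roles[0].Split(',', StringSplitOptions.RemoveEmptyEntries))
    {
        claims.Add(new Claim(ClaimTypes.Role, role.Trim()));
    }
}
</code></pre>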
<p>You can download the source code on <a href="https://github.com/mhmd-azeez/IntegrationTestAuth">GitHub</a>.</p>
https://mazeez.dev/posts/email-snapshot-testingTesting Email Templates in ASP.NET Core2021-12-09T00:00:00Z<blockquote class="blockquote">
<p>This post is my annual contribution to the 2021 <a href="https://www.csadvent.christmas/">C# Advent Calendar</a>. Please check out all the great posts from our wonderful community!</p>
</blockquote>
<p>Many systems require sending emails to notify users, and testing these notifications manually is a pain, so it's one of the best use cases for integration testing. First, let's create a strongly typed model for our <code>Welcome</code> email:</p>
<pre><code class="language-cs">public class Welcome
{
public string FullName { get; set; }
}
</code></pre>
<p>And we create a Razor template for the email in <code>EmailTemplates/Welcome.cshtml</code>:</p>
<pre><code class="language-html">@model EmailSnapshotTesting.EmailTemplates.Welcome
@{
Layout = "~/EmailTemplates/_Layout.cshtml";
}
<h1>Welcome @Model.FullName</h1>
<p>Welcome to our wonderful service!</p>
</code></pre>
<p>And this is how the layout is going to look in <code>EmailTemplates/_Layout.cshtml</code>:</p>
<pre><code class="language-html"><!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width" />
</head>
<body>
<div>
@RenderBody()
</div>
</body>
</html>
</code></pre>
<p>And then we create a service to send emails:</p>
<pre><code class="language-cs">public class MailerService : IMailerService
{
private readonly IEmailRenderer _renderer;
private readonly IMailPostman _postman;
public MailerService(
IEmailRenderer renderer,
IMailPostman postman)
{
_renderer = renderer;
_postman = postman;
}
public async Task SendWelcomeEmail(string address, Welcome welcome)
{
await SendEmail($"Welcome {welcome.FullName}!", address, welcome);
}
public async Task SendEmail<T>(string subject, string address, T model)
{
var html = await _renderer.Render(model);
await _postman.SendEmail(new Message
{
Subject = subject,
Address = address,
HtmlBody = html
});
}
}
</code></pre>
<p>The <code>MailerService</code> needs an <code>IEmailRenderer</code> to get HTML content from the strongly typed model and an <code>IMailPostman</code> to send the emails.</p>
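<p>The post doesn't show these two interfaces, but from how they are used above they might look something like this (a sketch inferred from the calls in <code>MailerService</code>; the actual definitions in the sample project may differ slightly):</p>
<pre><code class="language-cs">// Renders a strongly typed model into an HTML string.
public interface IEmailRenderer
{
    Task<string> Render<T>(T model);
}

// Delivers a rendered email (e.g. via SMTP or a third-party API).
public interface IMailPostman
{
    Task SendEmail(Message message);
}

// The payload handed to the postman.
public class Message
{
    public string Subject { get; set; }
    public string Address { get; set; }
    public string HtmlBody { get; set; }
}
</code></pre>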
<p>Here is an implementation of <code>IEmailRenderer</code> that renders the Razor template we specified above:</p>
<pre><code class="language-cs">using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Abstractions;
using Microsoft.AspNetCore.Mvc.ModelBinding;
using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.Mvc.Rendering;
using Microsoft.AspNetCore.Mvc.ViewFeatures;
namespace EmailSnapshotTesting.Services;
// https://stackoverflow.com/a/49275145
// https://ppolyzos.com/2016/09/09/asp-net-core-render-view-to-string/
public class RazorEmailRenderer : IEmailRenderer
{
private readonly IRazorViewEngine _razorViewEngine;
private readonly ITempDataProvider _tempDataProvider;
private readonly IServiceProvider _serviceProvider;
public RazorEmailRenderer(
IRazorViewEngine razorViewEngine,
ITempDataProvider tempDataProvider,
IServiceProvider serviceProvider)
{
_razorViewEngine = razorViewEngine;
_tempDataProvider = tempDataProvider;
_serviceProvider = serviceProvider;
}
public async Task<string> Render<T>(T model)
{
// Note: You can also support multiple languages by separating each locale into a folder
var viewPath = $"~/EmailTemplates/{typeof(T).Name}.cshtml";
var result = _razorViewEngine.GetView(null, viewPath, true);
if (result.Success != true)
{
var searchedLocations = string.Join("\n", result.SearchedLocations);
throw new InvalidOperationException($"Could not find this view: {viewPath}. Searched locations:\n{searchedLocations}");
}
var view = result.View;
var httpContext = new DefaultHttpContext();
httpContext.RequestServices = _serviceProvider;
var actionContext = new ActionContext(
httpContext,
httpContext.GetRouteData(),
new ActionDescriptor()
);
using (var writer = new StringWriter())
{
var viewDataDict = new ViewDataDictionary(
new EmptyModelMetadataProvider(),
new ModelStateDictionary());
viewDataDict.Model = model;
var viewContext = new ViewContext(
actionContext,
view,
viewDataDict,
new TempDataDictionary(
httpContext.HttpContext,
_tempDataProvider
),
writer,
new HtmlHelperOptions { }
);
await view.RenderAsync(viewContext);
return writer.ToString();
}
}
}
</code></pre>
<p>Now let's create a fake implementation of the <code>IMailPostman</code> for the integration tests:</p>
<pre><code class="language-cs">public class FakePostman : IMailPostman
{
public Task SendEmail(Message message)
{
LastMessage = message;
return Task.CompletedTask;
}
public Message LastMessage { get; set; }
}
</code></pre>
<p>Let's now register all of our services:</p>
<pre><code class="language-cs">builder.Services.AddScoped<IMailerService, MailerService>();
builder.Services.AddScoped<IEmailRenderer, RazorEmailRenderer>();
// In your project, you have to register a real postman in your app
// and swap it our with this fake postman in the integration tests
// by creating a custom WebApplicationFactory. For more info see:
// https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests?view=aspnetcore-6.0#customize-webapplicationfactory
builder.Services.AddScoped<IMailPostman, FakePostman>();
</code></pre>
<p>We create a test project called <code>IntegrationTests</code> using xUnit, and inside the test project we create a folder called <code>Snapshots</code> to store the expected HTML results.</p>
<p>Then we can create our snapshot tests:</p>
<pre><code class="language-cs">public class EmailTests : IClassFixture<WebApplicationFactory<Program>>
{
private readonly IEmailRenderer _renderer;
private readonly string _folderPath;
public EmailTests(WebApplicationFactory<Program> factory)
{
// Get the path for the snapshots folder
var environment = factory.Services.GetRequiredService<IWebHostEnvironment>();
_folderPath = Path.Combine(environment.ContentRootPath, "../IntegrationTests/Snapshots");
var scope = factory.Services.CreateScope();
_renderer = scope.ServiceProvider.GetRequiredService<IEmailRenderer>();
}
[Fact]
public async Task CanSendWelcomeEmail()
{
var postman = new FakePostman();
var mailService = new MailerService(_renderer, postman);
await mailService.SendWelcomeEmail("person@example.com", new Welcome
{
FullName = "Example Person"
});
Assert.Equal("person@example.com", postman.LastMessage.Address);
Assert.Equal("Welcome Example Person!", postman.LastMessage.Subject);
await SaveToFile("Welcome.actual.html", postman.LastMessage.HtmlBody);
var expectedBody = await File.ReadAllTextAsync(Path.Combine(_folderPath, "Welcome.expected.html"));
Assert.Equal(Sanitize(postman.LastMessage.HtmlBody), Sanitize(expectedBody));
}
private string Sanitize(string text)
{
return text
.Replace("\r\n", "\n")
.Replace('\r', '\n');
}
private async Task SaveToFile(string name, string content)
{
var fullPath = Path.Combine(_folderPath, name);
Directory.CreateDirectory(Path.GetDirectoryName(fullPath));
await File.WriteAllTextAsync(fullPath, content);
}
}
</code></pre>
<p>The first time you run <code>CanSendWelcomeEmail</code>, it's going to fail because <code>IntegrationTests/Snapshots/Welcome.expected.html</code> doesn't exist. But it has created <code>IntegrationTests/Snapshots/Welcome.actual.html</code>. So go ahead and take a look at it, it should be something like this:</p>
<pre><code class="language-html"><!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width" />
</head>
<body>
<div>
<h1>Welcome Example Person</h1>
<p>Welcome to our wonderful service!</p>
</div>
</body>
</html>
</code></pre>
<p>You can test out the HTML using something like <a href="https://putsmail.com/">PutsMail</a> or <a href="https://testi.at/">Testi@</a>. If you like the result, rename the file to <code>IntegrationTests/Snapshots/Welcome.expected.html</code>.</p>
<p>Because we don't want git to track the actual results, you'll have to add this line to your .gitignore file:</p>
<pre><code>*.actual.html
</code></pre>
<p>Now you have snapshot tests for your email templates. Whenever you change them, you can easily see the results without having to manually click through the UI to send the emails. This makes your feedback loop much faster.</p>
<p>You can download the source code on <a href="https://github.com/mhmd-azeez/EmailSnapshotTesting">GitHub</a>.</p>
<p>This post is my annual contribution to the 2021 <a href="https://www.csadvent.christmas/">C# Advent Calendar</a>. Please check out all the great posts from our wonderful community!</p>https://mazeez.dev/posts/asp-net-core-api-auth0Securing ASP.NET Core APIs using Auth02021-10-09T00:00:00Z<h1 id="introduction">Introduction</h1>
<p>Modern applications are complex and take many different forms: Web apps, mobile apps, desktop apps, CLI apps, other APIs, Bots, IoT apps, and so on. In this blog post we discuss what it takes for your API to let any app communicate with it securely via the OAuth 2.0 and OpenID Connect protocols.</p>
<p>By utilizing these standards, you can have a single Identity Provider protecting all of your apps, allowing your users to sign into all of them using the same account and providing a seamless SSO (Single Sign-On and Single Sign-Out) experience. Another great benefit is that you can easily allow 3rd-party apps to connect to your APIs without compromising security.</p>
<h1 id="oauth">OAuth</h1>
<h2 id="overview-and-history-of-oauth">Overview and history of OAuth</h2>
<p>OAuth is an open standard for authorization.</p>
<h2 id="different-actors-in-oauth">Different actors in OAuth</h2>
<h2 id="whats-jwt">What's JWT</h2>
<h1 id="openid-connect">OpenID Connect</h1>
<ol>
<li>Overview and history of OIDC</li>
<li>Access Token vs Id Token</li>
</ol>
<h1 id="authentication-flows">Authentication Flows</h1>
<ol>
<li>Authorization Code Flow</li>
<li>Authorization Code Flow + PKCE</li>
<li>Resource Owner Password Flow</li>
<li>Client Credentials Flow</li>
</ol>
<h1 id="choosing-an-openid-connect-provider">Choosing an OpenID Connect Provider</h1>
<ol>
<li>Implementing your own OpenID Connect Provider</li>
<li>Using an open source OpenID Connect Provider</li>
<li>Using a cloud-based OpenID Connect Provider</li>
</ol>
<h1 id="integrating-asp.net-core-api-with-auth0">Integrating ASP.NET Core API with Auth0</h1>
<ol>
<li>Overview of Auth0 and mapping concepts with OIDC</li>
<li>Integrate your API with Auth0</li>
<li>Test your API Authentication using Swagger</li>
<li>Where should we store user information</li>
<li>Using Auth0's API to onboard users</li>
<li>Role based authorization using Auth0</li>
<li>Permission based authorization using Auth0</li>
<li>Service to service communication</li>
</ol>
<h1 id="bonus-custom-authorization-implementation">Bonus: Custom Authorization Implementation</h1>
<ol>
<li>Role based authorization</li>
<li>Permission based authorization</li>
</ol>
<h1 id="related-resources">Related resources</h1>
<ol>
<li><a href="https://app.pluralsight.com/library/courses/securing-aspnet-core-3-oauth2-openid-connect">Kevin Dockx - Securing ASP.NET Core 3 with OAuth2 and OpenID Connect</a></li>
<li><a href="https://app.pluralsight.com/library/courses/securing-microservices-asp-dot-net-core">Kevin Dockx - Securing Microservices in ASP.NET Core</a></li>
<li><a href="https://developer.okta.com/blog/2017/06/21/what-the-heck-is-oauth">What the Heck is OAuth?</a></li>
</ol>
https://mazeez.dev/posts/beware-of-http-redirectsBeware of HTTP Redirects!2021-09-08T00:00:00Z<p>Today, I spent an hour debugging why an HTTP call was getting a 401 response. I was setting the Authorization header properly and the token was valid. This was the call I was making:</p>
<pre><code class="language-csharp">var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "...");
var response = await httpClient.GetAsync("api/endpoint?parameter=true");
</code></pre>
<p>When I inspected the <code>Request</code> property of the <code>HttpResponse</code>, I saw that there was no <code>Authorization</code> header. That was the first clue: for some reason, somewhere in the HTTP pipeline, the header was not being forwarded properly.</p>
<p>What made it worse was the fact that I was using a custom <code>HttpMessageHandler</code> to get and renew access tokens. So it made my debugging more difficult because for a while I was thinking that there was a bug in the custom <code>HttpMessageHandler</code>.</p>
<p>A few searches led me to <a href="https://stackoverflow.com/a/68418735/7003797">this Stack Overflow answer</a>. It turned out the API was redirecting the HTTP call, and when an HTTP call gets redirected, the <code>Authorization</code> header is removed, as <a href="https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclienthandler.allowautoredirect?view=net-5.0#remarks">explained by the official docs</a>. This behavior is consistent with <a href="https://curl.se/">curl</a>.</p>
<p>After realizing the issue was caused by redirection, I wanted to know where the call was being redirected to. So I disabled automatic redirection on the handler:</p>
<pre><code class="language-csharp">var handler = new HttpClientHandler
{
AllowAutoRedirect = false
};
var httpClient = new HttpClient(handler);
</code></pre>
<p>I inspected the <code>Location</code> header, and the call was being redirected to "api/endpoint/?parameter=true". Can you spot the difference from the original URL? Let me make it easier for you:</p>
<pre><code>BAD: api/endpoint?parameter=true
GOOD: api/endpoint/?parameter=true
</code></pre>
<p>This just makes me appreciate ASP.NET Core's router. It's much more forgiving and has sane defaults.</p>
<p>The sad part is, this is not the first time I have been bitten by this behavior of HttpClient. I remember I got into a similar issue a few years ago.</p>
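<p>If you do actually need the call to succeed across the redirect (and you trust the redirect target), one option is to keep <code>AllowAutoRedirect</code> disabled and follow the redirect yourself. This is just a sketch of the idea, not production-ready code:</p>
<pre><code class="language-csharp">var handler = new HttpClientHandler { AllowAutoRedirect = false };
var httpClient = new HttpClient(handler);
httpClient.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "...");

var response = await httpClient.GetAsync("api/endpoint?parameter=true");
if ((int)response.StatusCode >= 300 && (int)response.StatusCode < 400)
{
    // The Location header tells us where the call was redirected to.
    // Re-issuing the request ourselves keeps the Authorization header.
    var location = response.Headers.Location;
    response = await httpClient.GetAsync(location);
}
</code></pre>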
https://mazeez.dev/posts/quick-edit-modeWhy my console app gets stuck2021-08-09T00:00:00Z<p>In my current project, we have a console app that runs in the background and sends data to a frontend application. The app works great, but sometimes it stops working and we need to press a key for it to continue. It turned out it was because of the Windows Console's <code>QuickEdit Mode</code> feature: <a href="https://stackoverflow.com/a/30517482">when a user clicks on the console window, it pauses the app's execution to allow the user to select text</a>.</p>
<p>Fortunately, you can easily disable Quick Edit for your app:</p>
<pre><code class="language-csharp">// http://msdn.microsoft.com/en-us/library/ms686033(VS.85).aspx
[DllImport("kernel32.dll")]
public static extern bool SetConsoleMode(IntPtr hConsoleHandle, uint dwMode);

[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr GetStdHandle(int nStdHandle);

private const int STD_INPUT_HANDLE = -10;
private const uint ENABLE_EXTENDED_FLAGS = 0x0080;

private static void DisableQuickEditMode()
{
    // Quick Edit mode freezes the app to let users select text.
    // We don't want that. We want the app to run smoothly in the background.
    // - https://stackoverflow.com/q/4453692
    // - https://stackoverflow.com/a/4453779
    // - https://stackoverflow.com/a/30517482
    // Note: SetConsoleMode expects the console *input* handle obtained
    // via GetStdHandle, not a window handle.
    IntPtr handle = GetStdHandle(STD_INPUT_HANDLE);

    // Setting ENABLE_EXTENDED_FLAGS without ENABLE_QUICK_EDIT_MODE
    // turns Quick Edit off.
    SetConsoleMode(handle, ENABLE_EXTENDED_FLAGS);
}
public static void Main(string[] args)
{
DisableQuickEditMode();
// Do stuff
}
</code></pre>
https://mazeez.dev/posts/asp-net-core-api-checklistASP.NET Core API Checklist2021-06-13T00:00:00Z<p>Building modern APIs requires a lot of things to make them reliable, observable, and scalable. In no particular order, here are some that help you build better APIs:</p>
<blockquote class="blockquote">
<p><strong>Note:</strong> If you have any other points in mind, you can send me <a href="https://github.com/mhmd-azeez/website/blob/master/input/posts/asp-net-core-api-checklist.md">a PR here</a>.</p>
</blockquote>
<h2 id="healthchecks">1. Healthchecks</h2>
<p>Healthchecks are important for making sure we know when anything happens to our APIs. We can set up dashboards to monitor them and set up alerting to let us know when one of the APIs is unhealthy. They are also important when deploying your apps to Kubernetes: Kubernetes can monitor the healthchecks of your APIs and automatically kill unhealthy instances and create new instances to take their place.</p>
<p>There are two kinds of healthchecks:</p>
<ul>
<li><p><strong>Liveness:</strong> indicates if your API has crashed and must be restarted.</p>
</li>
<li><p><strong>Readiness:</strong> indicates if your API has been initialized and is ready to process requests. When launching a new instance of an API, it might need some time to initialize dependencies and load data before it's ready.</p>
</li>
</ul>
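<p>A minimal setup could look something like the sketch below; the endpoint paths and the tag-based split between liveness and readiness are my own convention, adjust them to taste:</p>
<pre><code class="language-csharp">// In Startup.ConfigureServices:
services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy(), tags: new[] { "live" });

// In Startup.Configure:
app.UseEndpoints(endpoints =>
{
    // Liveness: only the basic "self" check
    endpoints.MapHealthChecks("/health/live", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("live")
    });

    // Readiness: run all registered checks (database, message bus, etc.)
    endpoints.MapHealthChecks("/health/ready");
});
</code></pre>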
<h3 id="more-information">More Information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks">MS Docs - Health checks in ASP.NET Core</a></li>
<li><a href="https://www.youtube.com/watch?v=Kbfto6Y2xdw">IAmTimCorey - Intro to Health Checks in .NET Core</a></li>
</ul>
<h2 id="logging">2. Logging</h2>
<p>Logging provides valuable information when trying to debug unexpected behavior. But too much logging can significantly slow down our APIs. For that reason we can set the logging level to <code>Warning</code> in production and only lower it when we need to.</p>
<p>By default ASP.NET Core provides an abstraction layer for logging that supports <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/?view=aspnetcore-5.0#log-message-template">Structured Logging</a>.</p>
<p>A very popular logging library that many people use with ASP.NET Core is <a href="https://serilog.net/">Serilog</a>. Serilog has more <a href="https://github.com/serilog/serilog/wiki/Provided-Sinks">sinks</a> than the default ASP.NET Core logging abstraction and can easily be integrated with ASP.NET Core.</p>
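<p>For example, wiring up Serilog with the <code>Serilog.AspNetCore</code> package looks roughly like this (the sinks and settings here are just a starting point):</p>
<pre><code class="language-csharp">public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseSerilog((context, configuration) =>
        {
            configuration
                .ReadFrom.Configuration(context.Configuration) // levels/sinks from appsettings.json
                .Enrich.FromLogContext()
                .WriteTo.Console();
        })
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
</code></pre>
<p>Reading the logger configuration from <code>appsettings.json</code> lets you raise or lower the logging level in production without redeploying.</p>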
<h3 id="more-information-1">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging">MS Docs - Logging in .NET Core and ASP.NET Core</a></li>
<li><a href="https://www.youtube.com/watch?v=_iryZxv8Rxw">IAmTimCorey - C# Logging with Serilog and Seq - Structured Logging Made Easy</a></li>
</ul>
<h2 id="observability">3. Observability</h2>
<p>This includes a few things:</p>
<ul>
<li>Performance monitoring (P99, P95, P50 latencies)</li>
<li>Metrics: specific counters you or your business care about</li>
<li>Tracing: being able to see the entire lifecycle of each request, from frontend to API to data source</li>
</ul>
<p><a href="https://opentelemetry.io/">OpenTelemetry</a> is an open standard for doing all of the above and <a href="https://devblogs.microsoft.com/aspnet/observability-asp-net-core-apps/">ASP.NET Core supports it</a>. The good news is, if you use OpenTelemetry, there is a rich ecosystem of tools and services that you can integrate with.</p>
<p>All of the major cloud providers have services that you can use to view the captured data.</p>
<h2 id="error-reporting">4. Error reporting</h2>
<p>There are tools specifically for capturing, storing, and showing exceptions that have been raised in your APIs. They group the exceptions by their type and location and show how many times they have occurred. Some tools include:</p>
<ul>
<li><a href="https://sentry.io">Sentry</a></li>
<li><a href="https://rollbar.com/">Rollbar</a></li>
<li><a href="https://raygun.com/">Raygun</a></li>
</ul>
<h2 id="status-endpoint">5. Status Endpoint</h2>
<p>It's also good to have a status endpoint in your API that shows the name of the API, its version, and when it was started. This can be used to create a dashboard showing all of the different services and their versions. Something like this:</p>
<pre><code class="language-csharp">using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using System;
using System.Diagnostics;
namespace MyCoolApi.Controllers
{
public class StatusResponse
{
public string Name { get; set; }
public string Version { get; set; }
public DateTime StartTime { get; set; }
public string Host { get; set; }
}
[ApiController]
[Route("status")]
public class StatusController : ControllerBase
{
[HttpGet]
public StatusResponse Get()
{
var version = typeof(Startup).Assembly.GetName().Version;
return new StatusResponse
{
Name = "my-cool-api",
Version = $"{version.Major}.{version.Minor}.{version.Build}",
StartTime = Process.GetCurrentProcess().StartTime,
Host = Environment.MachineName
};
}
}
}
</code></pre>
<h2 id="http-resiliency">6. Http Resiliency</h2>
<p>Although it's generally preferred for your APIs to communicate with other APIs using asynchronous messaging, sometimes you need to call other APIs using HTTP calls.</p>
<p>We need to bake in some level of resiliency by automatically retrying transient failures. This can easily be done by using something like <a href="https://github.com/App-vNext/Polly">Polly</a>.</p>
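<p>For example, with the <code>Microsoft.Extensions.Http.Polly</code> package you can attach a retry policy with exponential backoff to a named <code>HttpClient</code>. The client name, base address, retry count, and delays below are placeholders:</p>
<pre><code class="language-csharp">services.AddHttpClient("backend", client =>
    {
        client.BaseAddress = new Uri("https://backend.example.com");
    })
    .AddPolicyHandler(HttpPolicyExtensions
        // Handles 5xx responses, 408 Request Timeout, and HttpRequestException
        .HandleTransientHttpError()
        // Wait 2, 4, then 8 seconds between retries
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));
</code></pre>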
<h3 id="more-information-2">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly">Implement HTTP call retries with exponential backoff with IHttpClientFactory and Polly policies</a></li>
<li><a href="https://app.pluralsight.com/library/courses/polly-fault-tolerant-web-service-requests">Bryan Hogan - Fault Tolerant Web Service Requests with Polly</a></li>
</ul>
<h2 id="statelessness-and-containerization">7. Statelessness and Containerization</h2>
<p>Containers are a great way to make sure your APIs can be easily scaled out and deployed to multiple environments in a repeatable manner. However, to get the most out of containerization, you should make sure your APIs are stateless.</p>
<p>Being stateless means that they don't hold any critical data in memory. For caching you can use a centralized caching technology like <a href="https://redis.io/">Redis</a> instead. This way you can start as many instances as you need without worrying about having stale cached data or data duplication.</p>
<p>You must also be careful about background jobs: make sure different instances don't process the same background job multiple times. And for message queues, you have to implement the <a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers">Competing Consumers</a> pattern, which some message buses support natively.</p>
<h3 id="more-information-3">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/dotnet/core/docker/build-container">MS Docs - Tutorial: Containerize a .NET Core app</a></li>
<li><a href="https://www.hangfire.io/">Hangfire</a></li>
<li><a href="https://www.quartz-scheduler.net/">Quartz.NET</a></li>
<li><a href="https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers">MS Docs - Competing Consumers</a></li>
</ul>
<h2 id="openapi-spec-swagger">9. OpenAPI Spec / Swagger</h2>
<p>Documenting your APIs is very important. Swagger integrates with ASP.NET Core, automatically finds all of the routes (controller actions), and shows them in a beautiful dashboard that you can customize.</p>
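<p>With <a href="https://github.com/domaindrivendev/Swashbuckle.AspNetCore">Swashbuckle</a>, the basic setup is just a few lines; the title and version strings below are placeholders:</p>
<pre><code class="language-csharp">// In Startup.ConfigureServices:
services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "My Cool API", Version = "v1" });
});

// In Startup.Configure:
app.UseSwagger();
app.UseSwaggerUI(options =>
{
    options.SwaggerEndpoint("/swagger/v1/swagger.json", "My Cool API v1");
});
</code></pre>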
<h3 id="more-information-4">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/tutorials/web-api-help-pages-using-swagger?view=aspnetcore-5.0">MS Docs - ASP.NET Core web API documentation with Swagger / OpenAPI</a></li>
<li><a href="https://www.pluralsight.com/courses/aspdotnet-core-api-openapi-swagger">Kevin Dockx - Documenting an ASP.NET Core API with OpenAPI / Swagger</a></li>
</ul>
<h2 id="configuration-and-options">10. Configuration and Options</h2>
<p>ASP.NET Core has an extensible configuration mechanism. It can pull configuration from JSON files, environment variables, and command-line arguments, and you can also provide custom configuration sources. It provides ways to fetch the configuration in a type-safe manner, as well as an easy mechanism to <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options?view=aspnetcore-5.0#options-validation">validate configuration sections</a>.</p>
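<p>A sketch of the options pattern, assuming a hypothetical <code>Smtp</code> section in <code>appsettings.json</code>:</p>
<pre><code class="language-csharp">public class SmtpOptions
{
    public string Host { get; set; }
    public int Port { get; set; }
}

// In Startup.ConfigureServices: bind the "Smtp" section to SmtpOptions
services.Configure<SmtpOptions>(Configuration.GetSection("Smtp"));

// Then inject it anywhere in a type-safe manner:
public class MailerService
{
    private readonly SmtpOptions _options;

    public MailerService(IOptions<SmtpOptions> options)
    {
        _options = options.Value;
    }
}
</code></pre>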
<h3 id="more-information-5">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options">MS Docs - Options Pattern in ASP.NET Core</a></li>
<li><a href="https://www.pluralsight.com/courses/dotnet-core-aspnet-core-configuration-options/">Steve Gordon - Using Configuration and Options in .NET Core and ASP.NET Core Apps</a></li>
</ul>
<h2 id="integration-and-unit-tests">11. Integration and Unit Tests</h2>
<p>ASP.NET Core has made it easy to write unit tests by making the whole framework DI-friendly. It has also made integration tests easy via <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.mvc.testing.webapplicationfactory-1"><code>WebApplicationFactory</code></a>. Having automated tests saves a lot of time and makes your APIs more robust. When writing integration tests, try to use the same database technology that you use in production: if you're using Postgres in production, don't use SQLite or in-memory DB providers for integration tests.</p>
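<p>A minimal integration test with <code>WebApplicationFactory</code> (from the <code>Microsoft.AspNetCore.Mvc.Testing</code> package) looks roughly like this, assuming xUnit and a <code>Startup</code> class:</p>
<pre><code class="language-csharp">public class ApiTests : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _client;

    public ApiTests(WebApplicationFactory<Startup> factory)
    {
        // Spins up the whole app in-memory, middleware and all
        _client = factory.CreateClient();
    }

    [Fact]
    public async Task StatusEndpointReturnsSuccess()
    {
        var response = await _client.GetAsync("/status");
        response.EnsureSuccessStatusCode();
    }
}
</code></pre>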
<h3 id="more-information-6">More information:</h3>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/test/integration-tests">MS Docs - Integration tests in ASP.NET Core</a></li>
<li><a href="https://www.pluralsight.com/courses/integration-testing-asp-dot-net-core-applications-best-practices">Steve Gordon - Integration Testing ASP.NET Core Applications: Best Practices</a></li>
</ul>
<h2 id="build-beautiful-rest-apis">12. Build beautiful REST APIs</h2>
<p>If you're building REST APIs, there are some conventions that make your APIs more pleasant and intuitive to use.</p>
<h3 id="more-information-7">More information:</h3>
<ul>
<li><a href="https://stackoverflow.blog/2020/03/02/best-practices-for-rest-api-design/">Stackoverflow Blog - Best practices for REST API design</a></li>
<li><a href="https://martinfowler.com/articles/richardsonMaturityModel.html">Martin Fowler - Richardson Maturity Model</a></li>
</ul>
<h2 id="authentication-and-authorization">13. Authentication and Authorization</h2>
<p>Authentication is the process of identifying a user, and authorization is knowing and enforcing what each user can and can't do. The most popular standard for authentication is OpenID Connect, which is an authentication layer on top of OAuth 2.0.</p>
<p>There are some popular Identity Providers that you can easily integrate with your API:</p>
<ul>
<li><a href="https://auth0.com/">Auth0</a></li>
<li><a href="https://www.okta.com/">Okta</a></li>
<li><a href="https://fusionauth.io/">FusionAuth</a></li>
</ul>
<p>And there are some open source Identity and Access Management servers that you can run on-prem:</p>
<ul>
<li><a href="https://www.keycloak.org/">Keycloak</a></li>
<li><a href="https://gluu.org/">Gluu</a></li>
</ul>
<p>And there are some libraries that you can use to build your own OIDC server:</p>
<ul>
<li><a href="https://duendesoftware.com/">IdentityServer</a></li>
<li><a href="https://github.com/openiddict/openiddict-core">OpenIddict</a></li>
</ul>
<h3 id="more-information-8">More information</h3>
<ul>
<li><a href="https://app.pluralsight.com/library/courses/securing-aspnet-core-3-oauth2-openid-connect">Kevin Dockx - Securing ASP.NET Core 3 with OAuth2 and OpenID Connect</a></li>
<li><a href="https://app.pluralsight.com/library/courses/securing-microservices-asp-dot-net-core">Kevin Dockx - Securing Microservices in ASP.NET Core</a></li>
</ul>
<h2 id="security">14. Security</h2>
<h3 id="cors">14.1 CORS</h3>
<p>Cross-Origin Resource Sharing (CORS) allows frontends to call your API even if they are not on the same domain as the API. CORS is disabled by default in ASP.NET Core.</p>
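<p>Enabling it for a specific frontend origin looks something like this; the policy name and origin are placeholders:</p>
<pre><code class="language-csharp">// In Startup.ConfigureServices:
services.AddCors(options =>
{
    options.AddPolicy("frontend", policy =>
    {
        policy.WithOrigins("https://app.example.com")
              .AllowAnyHeader()
              .AllowAnyMethod();
    });
});

// In Startup.Configure (before UseAuthorization):
app.UseCors("frontend");
</code></pre>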
<h4 id="more-information-9">More information</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/security/cors">MS Docs - Enable Cross-Origin Requests (CORS) in ASP.NET Core</a></li>
</ul>
<h3 id="https-enforcing">14.2 HTTPS Enforcing</h3>
<p>For this there are two scenarios:</p>
<ul>
<li>You're using Kestrel on the edge: then you have to make sure it only listens to and responds over HTTPS.</li>
<li>You've put ASP.NET Core behind a reverse proxy: then you generally terminate HTTPS on the proxy, and it's the proxy's job to enforce HTTPS.</li>
</ul>
<h4 id="more-information-10">More Information</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/security/enforcing-ssl">MS Docs - Enforce HTTPS in ASP.NET Core</a></li>
</ul>
<h3 id="cross-site-scripting-xss">14.3 Cross-Site Scripting (XSS)</h3>
<p>Cross-Site Scripting (XSS) is a security vulnerability which enables an attacker to place client side scripts (usually JavaScript) into web pages. You can prevent it by sanitizing inputs from the user.</p>
<h4 id="more-information-11">More Information</h4>
<ul>
<li><a href="https://docs.microsoft.com/en-us/aspnet/core/security/cross-site-scripting">Prevent Cross-Site Scripting (XSS) in ASP.NET Core</a></li>
</ul>
<h2 id="api-versioning">15. API Versioning</h2>
<p>Versioning your APIs allows you to maintain backward compatibility when making breaking changes. You can maintain multiple versions at the same time and deprecate old versions over time.</p>
<h3 id="more-information-12">More Information</h3>
<ul>
<li><a href="https://exceptionnotfound.net/overview-of-api-versioning-in-asp-net-core-3-0/">Overview of API Versioning in ASP.NET Core 3.0+</a></li>
</ul>
<hr />
<p>Updates:</p>
<ul>
<li>Fixed ordering</li>
<li>Added 13 (Auth) and 14 (Security). Special thanks to <a href="https://www.reddit.com/user/Matti-Koopa">Matti-Koopa</a>.</li>
<li>Added 15 (API Versioning)</li>
</ul>
https://mazeez.dev/posts/pg-trgm-similarity-search-and-fast-likeString similarity search and fast LIKE operator using pg_trgm2021-05-12T00:00:00Z<p>SQL supports wildcard search on strings using the <code>LIKE</code> operator, which accepts the <code>%</code> and <code>_</code> wildcards. The problem with <code>LIKE</code> is that it's not very fast if you have a lot of rows and the query is <a href="https://en.wikipedia.org/wiki/Sargable">non-sargable</a>. And in some cases you need to provide fuzzy search capabilities where the results don't have to exactly match the query.</p>
<p>PostgreSQL has the <a href="https://www.postgresql.org/docs/9.6/pgtrgm.html"><code>pg_trgm</code> extension</a> that solves both problems:</p>
<ul>
<li>It has <code>gin</code> and <code>gist</code> indexes for speeding up <code>LIKE</code> and other string operators</li>
<li>It has <code>similarity</code> function and <code>%</code> operator for string similarity search using trigrams.</li>
</ul>
<p>Let's assume we have this table:</p>
<pre><code class="language-sql">CREATE TABLE persons (
id int4 NOT NULL GENERATED ALWAYS AS IDENTITY,
forenames varchar(100) NOT NULL,
surname varchar(100) NOT NULL,
forenames_normalized varchar(100) NOT NULL,
surname_normalized varchar(100) NOT NULL,
CONSTRAINT persons_pk PRIMARY KEY (id)
);
</code></pre>
<p><strong>Note:</strong> Normalized columns are lowercase versions of the normal columns and special characters are removed. You can also remove character accents. This is to make the search experience better for the user as they don't have to type in the exact case and punctuations.</p>
<p>I inserted 10M rows of fake data generated by <a href="https://github.com/bchavez/Bogus">Bogus</a> into the table. You can <a href="http://github.com/mhmd-azeez/PgTrgm">download the dump here</a>.</p>
<p>If we run a <code>LIKE</code> query on it:</p>
<pre><code class="language-sql">select * from persons p
where surname_normalized like '%tche%' and forenames_normalized like '%nde%'
</code></pre>
<p>On my laptop it takes PostgreSQL about a second to return the results:</p>
<pre><code class="language-sql">Gather (cost=1000.00..142174.75 rows=10 width=30) (actual time=9.719..639.460 rows=75 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on persons p (cost=0.00..141173.75 rows=4 width=30) (actual time=3.425..605.240 rows=25 loops=3)
Filter: (((surname_normalized)::text ~~ '%tche%'::text) AND ((forenames_normalized)::text ~~ '%nde%'::text))
Rows Removed by Filter: 3333308
Planning Time: 0.097 ms
Execution Time: 639.494 ms
</code></pre>
<p>It seems like all of the rows in the table are scanned. To speed things up, first we need to enable the <code>pg_trgm</code> extension on the database:</p>
<pre><code class="language-sql">create extension if not exists pg_trgm;
</code></pre>
<p>Then we can use the <code>gin</code> index on the normalized columns:</p>
<pre><code class="language-sql">create index if not exists idx_gin_persons_on_names on persons using gin (forenames_normalized gin_trgm_ops, surname_normalized gin_trgm_ops)
</code></pre>
<p><strong>Note:</strong> <code>gin</code> index and <code>gin_trgm_ops</code> operator are part of <code>pg_trgm</code>.</p>
<p>Adding the <code>gin</code> index took about a minute on my laptop for 10M rows.</p>
<p>Now let's see if the results have improved:</p>
<pre><code class="language-sql">Bitmap Heap Scan on persons p (cost=54.20..3692.46 rows=995 width=30) (actual time=4.011..4.066 rows=75 loops=1)
Recheck Cond: (((forenames_normalized)::text ~~ '%nde%'::text) AND ((surname_normalized)::text ~~ '%tche%'::text))
Heap Blocks: exact=75
-> Bitmap Index Scan on idx_gin_persons_on_names (cost=0.00..53.95 rows=995 width=0) (actual time=3.999..3.999 rows=75 loops=1)
Index Cond: (((forenames_normalized)::text ~~ '%nde%'::text) AND ((surname_normalized)::text ~~ '%tche%'::text))
Planning Time: 0.092 ms
Execution Time: 4.120 ms
</code></pre>
<p>Instead of <code>639.494 ms</code>, the execution time is now only <code>4.1 ms</code>! That's because instead of sequentially scanning all of the rows in the table, it scanned the <code>gin</code> index.</p>
<p>Great, now let's take a look at how to do fuzzy search:</p>
<p>Let's say we are trying to find someone with forename(s) of <code>anderson</code> and surname of <code>mitchell</code>, but the search terms are misspelled as <code>andersen</code> and <code>mitchel</code>:</p>
<pre><code class="language-sql">select id, forenames, surname, ((similarity('mitchel', surname_normalized) + similarity('andersen', forenames_normalized)) / 2) as score from persons p
order by score desc
limit 10
</code></pre>
<p>This query takes about 58 seconds to complete. The <code>similarity</code> function is expensive, so we want to call it on as few rows as possible. For that, we can use the similarity operator (<code>%</code>) to filter out the rows that are below a certain threshold. By default the threshold is 30% similarity (<code>0.3</code>), but you can change it using <code>set_limit</code>. Now let's use it:</p>
<pre><code class="language-sql">select id, forenames, surname, ((similarity('mitchel', surname_normalized) + similarity('andersen', forenames_normalized)) / 2) as score from persons p
where forenames_normalized % 'andersen' and surname_normalized % 'mitchel'
order by score desc
limit 10
</code></pre>
<p>Now it takes about <code>100ms</code> on my laptop. A huge improvement over 58 seconds :)</p>
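<p>If the default 30% threshold is too lenient or too strict for your data, you can tweak what the <code>%</code> operator considers a match; the 45% value here is just an example, and <code>set_limit</code> only affects the current session:</p>
<pre><code class="language-sql">select set_limit(0.45); -- require at least 45% similarity
select show_limit();    -- check the current threshold
</code></pre>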
<h2 id="edge-cases">Edge Cases</h2>
<p><code>pg_trgm</code> uses tri-grams for indexing. It means that each string is broken into all possible 3 letter components. For example <code>mitchel</code>'s trigrams are: <code>mit</code>,<code>itc</code>,<code>tch</code>,<code>che</code>,<code>hel</code> and <code>michelle</code>'s trigrams are: <code>mic</code>,<code>ich</code>,<code>che</code>,<code>hel</code>,<code>ell</code>,<code>lle</code>. They share 2 trigrams so the similarity of <code>mitchel</code> with <code>michelle</code> is 30%.</p>
<p>This approach is not useful for words that are shorter than 3 letters, as you can't form any trigrams from them. So this query:</p>
<pre><code class="language-sql">select * from persons p
where surname_normalized like '%he%' and forenames_normalized like '%de%'
</code></pre>
<p>Takes the same amount of time on both the indexed table and the non-indexed table because PostgreSQL does sequential scan for both of them:</p>
<pre><code class="language-sql">Gather (cost=1000.00..147095.90 rows=49229 width=30) (actual time=1.169..655.329 rows=21216 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Seq Scan on persons p (cost=0.00..141173.00 rows=20512 width=30) (actual time=0.397..583.521 rows=7072 loops=3)
Filter: (((surname_normalized)::text ~~ '%he%'::text) AND ((forenames_normalized)::text ~~ '%de%'::text))
Rows Removed by Filter: 3326261
Planning Time: 0.105 ms
Execution Time: 655.974 ms
</code></pre>
<p>There can be cases where the index makes things slower, so please test it for your own use case and weigh the trade-offs. Also keep in mind that <a href="https://iamsafts.com/posts/postgres-gin-performance/">inserts and updates take longer with the index</a>.</p>
<h2 id="benchmarks">Benchmarks</h2>
<p>I wrote some very simple benchmarks using <a href="https://github.com/dotnet/BenchmarkDotNet">BenchmarkDotNet</a>, and here are the results:</p>
<pre><code>// * Summary *
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19041.928 (2004/?/20H1)
Intel Core i7-8550U CPU 1.80GHz (Kaby Lake R), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=5.0.201
[Host] : .NET Core 5.0.4 (CoreCLR 5.0.421.11614, CoreFX 5.0.421.11614), X64 RyuJIT
DefaultJob : .NET Core 5.0.4 (CoreCLR 5.0.421.11614, CoreFX 5.0.421.11614), X64 RyuJIT
| Method | Mean | Error | StdDev | Median |
|---------------------:|-------------:|-----------:|-----------:|-----------:|
| LikeOnGinIndex | 5.398 ms | 0.7167 ms | 2.113 ms | 4.170 ms |
| Like | 1,035.140 ms | 55.0098 ms | 158.716 ms | 991.495 ms |
| SimilarityOnGinIndex | 137.339 ms | 14.7610 ms | 43.523 ms | 114.342 ms |
</code></pre>
<p><strong>Note</strong>: Please download the database dump and code on <a href="http://github.com/mhmd-azeez/PgTrgm">GitHub</a>.</p>
https://mazeez.dev/posts/backup-and-restore-in-postgresBacking up and restoring databases in Postgres2021-04-01T00:00:00Z<p>To get a dump of a database you can use <code>pg_dump</code>, or <code>pg_dumpall</code> for dumping an entire cluster. It <a href="https://www.postgresql.org/docs/9.1/app-pgdump.html">supports 4 formats</a>:</p>
<table class="table">
<thead>
<tr>
<th>Format</th>
<th>Description</th>
<th>Restore via</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>plain</code></td>
<td>Output a plain-text SQL script file (the default).</td>
<td><code>psql</code></td>
</tr>
<tr>
<td><code>custom</code></td>
<td>Output a custom-format archive suitable for input into pg_restore. Together with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during restore. This format is also compressed by default.</td>
<td><code>pg_restore</code></td>
</tr>
<tr>
<td><code>directory</code></td>
<td>Output a directory-format archive suitable for input into pg_restore. This will create a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore can read. A directory format archive can be manipulated with standard Unix tools; for example, files in an uncompressed archive can be compressed with the gzip tool. This format is compressed by default.</td>
<td><code>pg_restore</code></td>
</tr>
<tr>
<td><code>tar</code></td>
<td>Output a <code>tar</code>-format archive suitable for input into pg_restore. The tar format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression. Also, when using tar format the relative order of table data items cannot be changed during restore.</td>
<td><code>pg_restore</code></td>
</tr>
</tbody>
</table>
<h3 id="how-to-backup-a-database">How to back up a database</h3>
<p>To create a dump of <code>sample-db</code> in <code>custom</code> format and save it to <code>sample-db.dump</code>:</p>
<pre><code class="language-bash">pg_dump -U postgres --encoding utf8 -F c -f sample-db.dump sample-db
</code></pre>
<p>To create a dump of <code>sample-db</code> in <code>plain</code> format and save it to <code>sample-db.sql</code>:</p>
<pre><code class="language-bash">pg_dump -U postgres --encoding utf8 -F p -f sample-db.sql sample-db
</code></pre>
<h3 id="how-to-restore-a-database-dump">How to restore a database dump</h3>
<p>First create an empty database to restore the dump to.</p>
<pre><code class="language-bash"># We use template0 because it's empty and doesn't conflict with the schemas and tables in the dump.
createdb -U postgres restored-db --template=template0
</code></pre>
<p>Restore <code>custom</code>, <code>directory</code>, and <code>tar</code> format dumps using <code>pg_restore</code>:</p>
<pre><code class="language-bash">pg_restore -U postgres -d restored-db < ./sample-db.dump
</code></pre>
<p>Restore <code>plain</code> format dumps using <code>psql</code>:</p>
<pre><code class="language-bash">psql -U postgres -d restored-db < ./sample-db.sql
</code></pre>
<blockquote class="blockquote">
<p><strong>Note:</strong> In PowerShell the <code><</code> operator isn't supported, so on Windows you'll have to use <code>cmd</code> (or skip redirection entirely: <code>pg_restore</code> accepts the dump file as an argument, and <code>psql</code> accepts it via <code>-f</code>).</p>
</blockquote>
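<p>If you're not sure which format a dump file is in, there's a quick check: custom-format archives produced by <code>pg_dump -F c</code> begin with the magic string <code>PGDMP</code>, while plain-format dumps are ordinary SQL text. A minimal sketch of picking the right restore tool (the dump files below are fabricated stand-ins, so it runs without a live PostgreSQL server):</p>

```shell
# Fake stand-ins for a custom-format and a plain-format dump.
workdir=$(mktemp -d)
printf 'PGDMP\001' > "$workdir/sample-db.dump"                        # custom-format magic header
printf -- '-- PostgreSQL database dump\n' > "$workdir/sample-db.sql"  # plain SQL text

# Pick the restore tool based on the first five bytes of the file.
restore_tool() {
  if [ "$(head -c 5 "$1")" = "PGDMP" ]; then
    echo pg_restore   # custom-format archive
  else
    echo psql         # looks like a plain SQL script
  fi
}

restore_tool "$workdir/sample-db.dump"   # pg_restore
restore_tool "$workdir/sample-db.sql"    # psql
```

This only distinguishes custom archives from plain dumps; directory-format dumps are directories, and tar archives have their own header, so treat it as a heuristic.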
<h3 id="errors-you-might-come-across">Errors you might come across</h3>
<ol>
<li>Corrupted dumps</li>
</ol>
<pre><code>pg_restore: [archiver] found unexpected block id (x) when reading data -- expected y
</code></pre>
<pre><code>pg_restore: error unrecognized data block type
</code></pre>
<p>This might mean the dump is corrupted. One possible reason is the database contained Unicode data and the dump was not encoded in utf8. Use <code>--encoding utf8</code> when running <code>pg_dump</code> to fix that.</p>
<ol start="2">
<li>Restoring <code>plain</code> format dumps using <code>pg_restore</code>:</li>
</ol>
<pre><code>pg_restore: [archiver] did not find magic string in file header
</code></pre>
<pre><code>pg_restore: [archiver] input file does not appear to be a valid archive
</code></pre>
<p>This happens if you run <code>pg_restore</code> on a <code>plain</code> format dump. Use <code>psql</code> to restore it instead.</p>
<p>If you have any other tips/tricks, please write them down in the comments!</p>
https://mazeez.dev/posts/why-i-love-powershellWhy I love Powershell as a scripting language2021-03-14T00:00:00Z<p>Every once in a while, I have to write a script to automate a task. Maybe it's part of a CI/CD pipeline, or it's part of my dev workflow. My favorite language for writing scripts is PowerShell. Here is why:</p>
<h1 id="powershell-has-an-object-pipeline">1. Powershell has an object pipeline</h1>
<p>When you pass data (or pipe it) from one command to the next, it is treated as an object, not as a string, which means it can carry different properties and retains them through the pipeline.</p>
<p>This is very powerful and unlocks all kinds of composability scenarios. Commands don't have to be single-purpose, and they don't have to provide different switches to get different outputs: they give you everything, and you use only the properties you need.</p>
<h1 id="you-have-the-full-power-of.net">2. You have the full power of .NET</h1>
<p>Powershell is a first-class programming language on .NET. You can do almost everything with Powershell that you can do with C# or F#. <a href="https://devblogs.microsoft.com/scripting/create-a-simple-graphical-interface-for-a-powershell-script/">You can even create GUIs if you want to</a>. This means that you have access to the coherent and well-designed Base Class Library of .NET as well as the entire gallery of <a href="https://www.nuget.org/">NuGet</a> packages. It also has <a href="https://www.powershellgallery.com/packages">a lot of modules of its own</a>.</p>
<h1 id="you-can-use-selectwheregroupby">3. You can use Select/Where/GroupBy</h1>
<p>Because Powershell is built on top of .NET, it has access to .NET's version of map/filter/reduce. They make scripts much more pleasant to write and read. However, the names are a bit different from JavaScript and other languages:</p>
<table class="table">
<thead>
<tr>
<th>JS Name</th>
<th>Powershell</th>
</tr>
</thead>
<tbody>
<tr>
<td>Filter</td>
<td>Where</td>
</tr>
<tr>
<td>Map</td>
<td>Select</td>
</tr>
<tr>
<td>Reduce</td>
<td>Measure / ForEach</td>
</tr>
<tr>
<td>GroupBy</td>
<td>Group</td>
</tr>
</tbody>
</table>
<h1 id="its-now-cross-platform-and-open-source">4. It's now cross-platform and open-source</h1>
<p><a href="https://github.com/PowerShell/PowerShell">Powershell Core</a> is a cross-platform and open-source version of Powershell built on top of .NET Core maintained by Microsoft.</p>
<h1 id="working-with-data-is-easy">5. Working with Data is easy</h1>
<p>You can easily work with <a href="https://adamtheautomator.com/powershell-excel/">Excel</a>, <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/select-xml">XML</a> and <a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/convertfrom-json">JSON</a> data which makes working with APIs much easier.</p>
<h1 id="some-cool-examples">Some cool examples</h1>
<p>Here are some cool examples that demonstrate the points above:</p>
<h2 id="get-top-3-largest-files-in-a-folder">Get top 3 largest files in a folder</h2>
<pre><code class="language-powershell">Get-Childitem 'C:\Windows\System32' |
Where Length -gt (10MB) | # Only files that are greater than (-gt) 10 MB (MB is a constant in PS!)
Sort -Descending -Property Length | # Sort files by their length descending
Select -First 3 Name, Length # Only select name and length properties (projection)
</code></pre>
<h3 id="output">Output</h3>
<pre><code>Name Length
---- ------
MRT.exe 131005360
nvcompiler.dll 40444864
WindowsCodecsRaw.dll 32612880
</code></pre>
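<p>For contrast, a rough Unix-shell version of the same query passes plain text between stages, so there is no <code>Length</code> property to filter or project on downstream. A sketch over a scratch folder instead of <code>System32</code>:</p>

```shell
# Build a scratch folder with files of known sizes.
dir=$(mktemp -d)
head -c 300 /dev/zero > "$dir/big.bin"
head -c 200 /dev/zero > "$dir/mid.bin"
head -c 100 /dev/zero > "$dir/small.bin"

# Top 3 largest files: `ls -S` sorts by size (largest first), but what
# flows to `head` is just lines of text, not objects with properties.
ls -S "$dir" | head -n 3
```

Everything after <code>ls</code> would have to re-parse names and sizes out of text, which is exactly the friction the object pipeline removes.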
<h2 id="calling-a-rest-api">Calling a REST API</h2>
<p>In this example we call a JSON REST API and print a property</p>
<pre><code class="language-powershell">$uri = 'https://cat-fact.herokuapp.com/facts/random' # random cat fact API
$fact = Invoke-RestMethod -Uri $uri
Write-Host $fact.text
</code></pre>
<h3 id="output-1">Output:</h3>
<pre><code>Cats make about 100 different sounds. Dogs make only about 10.
</code></pre>
<h2 id="export-all-process-information-as-an-excel-sheet">Export all process information as an excel sheet</h2>
<pre><code class="language-powershell">Get-Process | Select-Object Company, Name, Handles | Export-Excel
</code></pre>
<h3 id="result">Result:</h3>
<table class="table">
<thead>
<tr>
<th>Company</th>
<th>Name</th>
<th>Handles</th>
</tr>
</thead>
<tbody>
<tr>
<td>Microsoft Corporation</td>
<td>Calculator</td>
<td>537</td>
</tr>
<tr>
<td>Google LLC</td>
<td>chrome</td>
<td>449</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
<blockquote class="blockquote">
<p>Note: You'll have to install <a href="https://github.com/dfinke/ImportExcel">ImportExcel</a> module for this example to work.</p>
</blockquote>
<h2 id="importing-data-from-excel">Importing data from excel</h2>
<p>Consider this excel sheet:</p>
<table class="table">
<thead>
<tr>
<th><strong>City</strong></th>
<th><strong>Population</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Erbil</td>
<td>1500000</td>
</tr>
<tr>
<td>Sulaymani</td>
<td>739182</td>
</tr>
<tr>
<td>Duhok</td>
<td>1293000</td>
</tr>
<tr>
<td>Kirkuk</td>
<td>1598000</td>
</tr>
<tr>
<td>Halabja</td>
<td>245700</td>
</tr>
</tbody>
</table>
<pre><code class="language-powershell"># Get the average population of cities in Kurdistan:
Import-Excel 'F:\cities.xlsx' | Measure -Average -Property population | Select -Property Average
</code></pre>
<blockquote class="blockquote">
<p>Note: You'll have to install <a href="https://github.com/dfinke/ImportExcel">ImportExcel</a> module for this example to work.</p>
<p>Note 2: Data is from Wikipedia.</p>
</blockquote>
<h3 id="ouput">Output:</h3>
<pre><code>1075176.4
</code></pre>
<p>While my preferred scripting language is Powershell, I strongly believe everyone should use whatever tools/languages they are productive in. There is no best language. Everyone has different preferences and that's okay.</p>
<p>If you're using Powershell on Windows, I suggest you read <a href="https://www.hanselman.com/blog/taking-your-powershell-prompt-to-the-next-level-with-windows-terminal-and-oh-my-posh-3">this post</a> to make your experience even better.</p>
https://mazeez.dev/posts/background-job-scheduling-using-hangfireBackground Job Scheduling using Hangfire2020-12-05T00:00:00Z<blockquote class="blockquote">
<p><strong>Note:</strong> This post is part of C# <a href="https://www.csadvent.christmas/">Advent Calendar 2020</a>.</p>
</blockquote>
<p>Sometimes you have tasks that would take too much time to do in the request-response model, for example sending multiple emails. So you have to send the emails asynchronously (i.e. in the background) and return a response before the jobs are done. Or maybe you want to do some task periodically, for example generating a resource-intensive report at midnight or sending monthly invoices to customers. In short, you want to be able to schedule jobs, track their progress, and see their results.</p>
<p>You can of course use an in-memory model and use ASP.NET Core's <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services">IHostedService</a>. But then if the application restarts, all of the scheduled jobs would be gone. So you need to persist the jobs so that they are not lost. And maybe you want to have multiple workers (consumers) processing the background jobs, and the workers might be on different machines. And what about retrying jobs when they fail?</p>
<p><strong><a href="https://www.hangfire.io/">Hangfire</a></strong> is a library that helps you do all of that and more very easily. It's very popular and well tested. It's also customizable and supports different kinds of storage mechanisms. And it supports different styles and techniques of background job processing.</p>
<img src="../assets/images/posts/background-job-scheduling-using-hangfire/dashboard.jpg" width="800">
<h2 id="how-to-use-hangfire">How to use Hangfire</h2>
<p>We are going to host hangfire in an ASP.NET Core app and use SQLite for storage. You can also use MSSQL, PostgreSQL, MySQL and other database engines and host it in a console app. The <a href="https://docs.hangfire.io/en/latest/getting-started/aspnet-core-applications.html">official guide</a> is very good but here are the steps:</p>
<ol>
<li><p>Add these Nuget packages*:</p>
<pre><code class="language-xml"><PackageReference Include="Hangfire.Core" Version="1.7.18" />
<PackageReference Include="Hangfire.AspNetCore" Version="1.7.18" />
<PackageReference Include="Hangfire.Storage.SQLite" Version="0.2.4" />
</code></pre>
</li>
<li><p>Add Hangfire to Dependency Container:</p>
<pre><code class="language-csharp">// Add Hangfire services.
services.AddHangfire(configuration => configuration
.SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
.UseSimpleAssemblyNameTypeSerializer()
.UseRecommendedSerializerSettings()
.UseSQLiteStorage());
// Add the processing server as IHostedService
services.AddHangfireServer();
</code></pre>
</li>
<li><p>Define Hangfire Dashboard route:</p>
<pre><code class="language-csharp">app.UseEndpoints(endpoints =>
{
endpoints.MapRazorPages();
endpoints.MapHangfireDashboard();
});
</code></pre>
</li>
</ol>
<p>Now you can run the app and go to <code>/hangfire</code> and see it. But there are no jobs yet.</p>
<h2 id="enqueuing-jobs">Enqueuing Jobs</h2>
<p>If you want to enqueue a job in a fire-and-forget fashion (i.e. you don't want to wait for the result and you don't care much about when exactly it's going to happen), you do something like this:</p>
<pre><code>backgroundJobs.Enqueue(() => Console.WriteLine("Hello world from Hangfire!"));
</code></pre>
<p>What if you wanted to do something that requires some dependencies? For example, you need a connection to the database or access to configuration. Well, then you can create a class for the job and get your dependencies through Dependency Injection:</p>
<pre><code class="language-csharp">public class SendEmailsJob
{
public SendEmailsJob(IConfiguration configuration)
{
// You can ask for configuration or any other
// dependency the job might need via Dependency Injection
}
[JobDisplayName("Send {0} emails")]
[AutomaticRetry(Attempts = 3)]
public async Task Execute(int count)
{
for (int i = 0; i < count; i++)
{
await Task.Delay(1000);
}
}
}
</code></pre>
<p>As you can see we have defined a method called <code>Execute</code> which accepts a parameter. We have also decorated it with some attributes to control how it's displayed in the Hangfire dashboard or how many times Hangfire would automatically retry the job if it fails.</p>
<p>And this is how you'd enqueue the job:</p>
<pre><code class="language-csharp">_backgroundJobClient.Enqueue<SendEmailsJob>(job => job.Execute(5));
</code></pre>
<h2 id="scheduling-jobs">Scheduling Jobs</h2>
<p>If you want a job to be executed periodically on a defined schedule, you can write something like this:</p>
<pre><code class="language-csharp">Hangfire.RecurringJob.AddOrUpdate<SendEmailsJob>(job => job.Execute(10), cronExpression: "*/5 * * * *");
</code></pre>
<p><strong>Note:</strong> <code>RecurringJob</code> is a static class.</p>
<p>The job can be a simple expression or a class. And you define the schedule using a Cron Expression. You can construct Cron expressions through sites like <a href="https://crontab.cronhub.io/">this</a> and <a href="https://crontab.guru/">this</a>.</p>
<img src="../assets/images/posts/background-job-scheduling-using-hangfire/job.jpg" width="800">
<h2 id="extensions">Extensions</h2>
<p>Hangfire has a lot of extensions, you can check them out <a href="https://www.hangfire.io/extensions.html">here</a>. But my favorite is <a href="https://github.com/pieceofsummer/Hangfire.Console">Hangfire.Console</a>, which lets you output real-time logs :)</p>
<img src="https://github.com/pieceofsummer/Hangfire.Console/raw/master/dashboard.png" width="600" />
<p><strong>Note:</strong> The console is not real-time if you use SQLite as the storage engine.</p>
<h2 id="source-code">Source code</h2>
<p>I have put a sample application <a href="https://github.com/mhmd-azeez/HangfireDemo">here</a> which contains the code from this article and the rest of the ASP.NET Core app.</p>
https://mazeez.dev/posts/csproj-include-folders-recursivelyHow to include folders as link recursively in csproj files2020-10-13T00:00:00Z<p>Suppose we have a folder called <code>dependencies</code> with this structure:</p>
<pre><code> - dependencies
- child-folder
- file.txt
- file2.txt
- file3.txt
</code></pre>
<p>If you want to add the entire folder as link and preserve the directory structure, you can use this inside your <code>.csproj</code> file:</p>
<pre><code><ItemGroup>
<None Include="..\dependencies\**\*">
<Link>dependencies\%(RecursiveDir)/%(FileName)%(Extension)</Link>
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
</code></pre>
<p>This will include the <code>dependencies</code> folder inside your project along with all of its child folders and files.</p>
https://mazeez.dev/posts/github-party-trickGitHub Party Trick2020-08-25T00:00:00Z<p>GitHub associates commits with people via email addresses. Each commit carries an author name and email address. So when you push a repo to GitHub, it looks for a user with that email address and associates the commit with that user. This allows some cool tricks!</p>
<p>You can commit as your favorite programmer in your repositories! <a href="https://github.com/encrypt0r/trick/graphs/contributors">Example</a>:</p>
<p><img src="../assets/images/posts/github-party-trick/contributors.png" class="img-fluid" alt="List of contributors contains Rhyan Dhall, Rich Harris, and Linus Torvalds" /></p>
<p>In a repository, change your email address to their email address:</p>
<pre><code>git config user.email "email@example.com"
</code></pre>
<p>Optional: Change your name to their name too!</p>
<pre><code>git config user.name "Famous Person"
</code></pre>
<p>Now when you commit and push to GitHub, it associates the commits with their account!</p>
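<p>The whole trick fits in a few lines; here's a self-contained sketch against a throwaway repository (the identity below is obviously made up):</p>

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Per-repository identity override -- note there is no --global flag,
# so only commits made in this repo are affected.
git config user.email "famous@example.com"
git config user.name "Famous Person"

echo hello > file.txt
git add file.txt
git commit -q -m "demo commit"

# The commit now carries the impersonated identity.
git log -1 --format='%an <%ae>'
```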
<p>Note: The commits don't show up on their profile page. GitHub <a href="https://docs.github.com/en/github/setting-up-and-managing-your-github-profile/why-are-my-contributions-not-showing-up-on-my-profile#commits">has this to say</a> about showing commits on a user's profile:</p>
<blockquote class="blockquote">
<p>Commits will appear on your contributions graph if they meet <strong>all</strong> of the following conditions:</p>
<ul>
<li>The email address used for the commits is associated with your GitHub account.</li>
<li>The commits were made in a standalone repository, not a fork.</li>
<li>The commits were made:
<ul>
<li>In the repository's default branch (usually <code>master</code>)</li>
<li>In the <code>gh-pages</code> branch (for repositories with project sites)</li>
</ul>
</li>
</ul>
<p>For more information on project sites, see "<a href="https://docs.github.com/en/github/working-with-github-pages/about-github-pages#types-of-github-pages-sites">About GitHub Pages</a>."</p>
<p>In addition, <strong>at least one</strong> of the following must be true:</p>
<ul>
<li>You are a collaborator on the repository or are a member of the organization that owns the repository.</li>
<li>You have forked the repository.</li>
<li>You have opened a pull request or issue in the repository.</li>
<li>You have starred the repository.</li>
</ul>
</blockquote>
<p><strong>DISCLAIMER:</strong></p>
<p><img src="https://s.yimg.com/ny/api/res/1.2/27_UvTRiSb4a5C42zgkIeQ--%7EA/YXBwaWQ9aGlnaGxhbmRlcjtzbT0xO3c9NTAwO2g9MjAw/http://media.zenfs.com/en/homerun/feed_manager_auto_publish_494/3da2941d2b6e5249f73bed9bd44fdbf3" class="img-fluid" alt="Identity theft is not a joke~" /></p>
<p>While we used this as a fun trick, it can be used for nefarious reasons. One way to work around this issue is to <a href="https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work">cryptographically sign your commits</a>, so that people can be sure that you really made them.</p>
https://mazeez.dev/posts/csharp-source-generatorsUse C# Source Generators to make all of your methods async!2020-05-05T00:00:00Z<p>C# Source Generators is a C# 9 feature that lets you generate source code during build. For more information read the <a href="https://devblogs.microsoft.com/dotnet/introducing-c-source-generators/">announcement blog post</a> on .NET blog.</p>
<blockquote class="blockquote">
<p>NOTE: If it's not clear, this post is satire. Don't use any of the code or suggestions in production.</p>
</blockquote>
<p>Have you ever had problems with app speed? Well, the easiest way to speed up your app is to make your methods async. And the easiest way of making your methods async is by using <code>Task.Run</code>.</p>
<p>For example, if we had a method like this:</p>
<pre><code class="language-csharp">static void PrintNumber(int number)
{
Console.WriteLine(number);
}
</code></pre>
<p>Let's assume that <code>PrintNumber</code> is slow and we want to speed it up, what do we do? We stick it in Task.Run and voila! all of our problems are solved!</p>
<pre><code class="language-csharp">static Task PrintNumberAsync(int number)
{
return Task.Run(() => PrintNumber(number));
}
</code></pre>
<p>Okay, how are C# source generators useful here? Well, we can write a source generator that makes all of your methods async just by applying a simple attribute!</p>
<pre><code class="language-csharp">partial class Program
{
static async Task Main(string[] args)
{
await PrintNumberAsync(42);
}
[Asyncify]
static void PrintNumber(int number)
{
Console.WriteLine(number);
}
}
</code></pre>
<p>Notice how <code>Main</code> calls <code>PrintNumberAsync</code> without us needing to define it? That's because there is a source generator that generates the async version of any method that's decorated with the <code>Asyncify</code> attribute.</p>
<p>The source generator is very simple but a little verbose, so I don't include the source code here. But it's available on <a href="https://github.com/encrypt0r/FunWithSourceGenerators">GitHub</a>.</p>
<p>The basic idea is that our source generator asks for any method that has our specific attribute (<code>Asyncify</code>) applied to it.</p>
<p>It then groups the methods by their class, for each class it creates another partial class that contains the async versions for the specified methods.</p>
<p>Here is how we generate each method:</p>
<pre><code class="language-csharp">private void ProcessMethod(StringBuilder source, IMethodSymbol methodSymbol)
{
// SayHello => SayHelloAsync
string asyncMethodName = $"{methodSymbol.Name}Async";
var staticModifier = methodSymbol.IsStatic ? "static" : string.Empty;
// void => Task, bool => Task<bool>
var asyncReturnType = methodSymbol.ReturnType.Name == "Void" ?
"Task" :
$"Task<{methodSymbol.ReturnType.Name}>";
// int number, string name
var parameters = string.Join(",", methodSymbol.Parameters.Select(p => $"{p.Type} {p.Name}"));
// number, name
var arguments = string.Join(",", methodSymbol.Parameters.Select(p => p.Name));
source.Append($@"
public {staticModifier} {asyncReturnType} {asyncMethodName}({parameters})
{{
return Task.Run(() => {methodSymbol.Name}({arguments}));
}}
");
}
</code></pre>
<p>Because C# Source Generators are part of the build, we get a lot of metadata (like <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.codeanalysis.imethodsymbol?view=roslyn-dotnet">IMethodSymbol</a>) about all of the classes, methods, fields, etc. It's like reflection, but at compile time. If you have experience with Analyzers, this will feel very similar because they are both based on Roslyn. But unlike Analyzers, C# Source Generators emit code using strings, not syntax tree deltas. To be honest, I like the string approach. It's a lot more friendly. However, it can get hard to maintain for complicated scenarios.</p>
<p>To make our source generator even more useful, we can modify it so that we can apply the <code>Asyncify</code> attribute on a class and all of the methods of the class get asyncified! I'll leave that as an exercise.</p>
https://mazeez.dev/posts/update-sideloaded-uwp-2Enabling automatic updates for sideloaded UWP apps: Multiple update channels2020-04-25T00:00:00Z<p>In a <a href="https://mazeez.dev/posts/update-sideloaded-uwp">previous</a> post we talked about getting more control over the update process in sideloaded UWP apps. And in <a href="https://mazeez.dev/posts/uwp-devops">another post</a> we talked about setting up CI/CD pipelines for UWP apps. This time we will talk about how to create multiple update channels, i.e. alpha, beta, stable, for our UWP app.</p>
<p>The idea is very simple, and although I am going to use Azure Blob Storage, nothing stops you from using a normal web host.</p>
<h2 id="build-pipeline">Build Pipeline</h2>
<p>All of the channels should use the same build pipeline. The pipeline builds and signs the UWP. I have talked about setting up a build pipeline for UWP apps <a href="https://mazeez.dev/posts/uwp-devops">here</a>. You don't need to change much.</p>
<h2 id="release-pipeline">Release Pipeline</h2>
<p>All of the magic happens in the release pipeline. We can have multiple update channels by creating multiple containers in Azure Blob Storage (or using different prefixes, whichever you like more) for each channel.</p>
<p><img src="../assets/images/posts/update-sideloaded-uwp-2/channels.png" class="img-fluid" alt="channels" /></p>
<blockquote class="blockquote">
<p>Note: When creating the storage account, set the performance tier to <code>Standard</code> (<code>Premium</code> only supports <code>Page Blob</code>s) and when creating the containers set the container access level to <code>Blob</code>.</p>
</blockquote>
<p>Now for each channel, we create a stage in the release pipeline</p>
<p><img src="../assets/images/posts/update-sideloaded-uwp-2/release-stages.png" class="img-fluid" alt="release-stages" /></p>
<p>Between each step we can have various approval processes or gates, Azure Pipelines makes these kinds of things incredibly easy.</p>
<p>The stages are very similar, here is what each stage has to do:</p>
<p><img src="../assets/images/posts/update-sideloaded-uwp-2/stage.png" class="img-fluid" alt="stage" /></p>
<h3 id="change-index.html-and.installer-urls">1. Change index.html and .appinstaller URLs</h3>
<p>Because we now have multiple channels, each channel will have its own URL. So we have to change <code>index.html</code> and the <code>.appinstaller</code> file to reflect each channel's URL. This can be done easily via a simple PowerShell script.</p>
<pre><code class="language-powershell">Write-Host "Changing urls for beta channel..."
$oldUrl = "Old URL HERE"
$newUrl = "https://YOUR-STORAGE-ACCOUNT-HERE.blob.core.windows.net/beta"
$folder = "$(System.DefaultWorkingDirectory)/BUILD-ARTIFACT-NAME/drop/AppxPackages"
$htmlPath = "$folder/index.html"
$installerPath = "$folder/APP-NAME.appinstaller"
((Get-Content -path $htmlPath -Raw) -replace $oldUrl,$newUrl) | Set-Content -Path $htmlPath
((Get-Content -path $installerPath -Raw) -replace $oldUrl,$newUrl) | Set-Content -Path $installerPath
</code></pre>
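<p>If you'd rather do the same rewrite outside a pipeline task, a plain-shell equivalent with <code>sed</code> looks like this (the URLs and the file are placeholder stand-ins, not real channel endpoints):</p>

```shell
# A stand-in index.html containing the old channel URL.
workdir=$(mktemp -d)
oldUrl="https://old.example.com/stable"
newUrl="https://mystorageaccount.blob.core.windows.net/beta"
printf '<a href="%s/App.appinstaller">Install</a>\n' "$oldUrl" > "$workdir/index.html"

# Replace every occurrence of the old URL with the beta channel URL.
# `|` is used as the sed delimiter because the URLs contain slashes.
sed -i.bak "s|$oldUrl|$newUrl|g" "$workdir/index.html"

grep -c "$newUrl" "$workdir/index.html"   # 1
```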
<h3 id="delete-old-versions-to-reduce-cost">2. Delete old versions to reduce cost</h3>
<p>Although Azure Blob Storage is very cheap, UWP apps can become very large, ours is about 300 MB when compiled in <code>Release</code> configuration for both <code>x64</code> and <code>x86</code> architectures. So in order to make sure we don't store too much data, we will delete the older versions and only leave the last N versions.</p>
<p>For this we can use the <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureCLIV2/Readme.md">Azure CLI</a> task. It's similar to a Powershell script, but you're already logged in to your azure subscription. In the script part we can use something like this:</p>
<blockquote class="blockquote">
<p>Note: Azure CLI added support for PowerShell scripts in v2.</p>
</blockquote>
<pre><code class="language-powershell">$container = "beta"
$accountName = "STORAGE-ACCOUNT-NAME"
# Get List of blobs in the container and deserialize the list
$blobs = az storage blob list -c $container --prefix APP-NAME-HERE_ --account-name $accountName | convertFrom-json
# Create a HashTable so that we can store the name of the directory and its creation time
$dict = @{}
Foreach($blob in $blobs)
{
$versionName = $blob | Select-Object -ExpandProperty name
$versionName = $versionName.split("/")[0]
$date = $blob | Select-Object -ExpandProperty properties | Select-Object -ExpandProperty creationTime
$date = [datetime]$date
$dict[$versionName] = $date
}
# Sort the directories by their creation time and skip the top 2 (we don't want to remove last 2 version)
$ordered = $dict.GetEnumerator() | Sort-Object -Property Value -Descending | Select-Object -Skip 2
Write-Host Deleting obsolete versions...
Foreach($blob in $ordered)
{
# APP-NAME_1.0.1.0_Test => APP-NAME_1.0.1.0_Test/*
$dir = $blob.Key + "/*"
az storage blob delete-batch -s $container --pattern $dir --account-name $accountName
Write-Host $blob.Key " - " $blob.Value
}
Write-Host Done
</code></pre>
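<p>The keep-only-the-last-N idea is independent of Azure; here is a local-filesystem sketch of the same rotation logic (the Azure CLI script above is what you'd actually run against a storage account):</p>

```shell
# Delete all but the N most recently modified entries in a folder.
keep_last_n() {
  dir=$1; n=$2
  ls -t "$dir" | tail -n +"$((n + 1))" | while read -r name; do
    rm -rf "$dir/$name"
  done
}

workdir=$(mktemp -d)
for v in 1.0.0 1.0.1 1.0.2 1.0.3; do
  mkdir "$workdir/App_$v"
  sleep 1   # distinct modification times so `ls -t` orders reliably
done

keep_last_n "$workdir" 2
ls "$workdir"   # only App_1.0.2 and App_1.0.3 remain
```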
<h3 id="copy-the-build-artifact-to-channel-container">3. Copy the build artifact to channel container</h3>
<p>We can use the amazingly fast <a href="https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureFileCopyV1/README.md">AzureBlob File Copy</a> task to upload the build artifact to Azure Blob Storage. Using it is straightforward: you give it a folder to upload, the name of the container, and the subscription, and it uploads the specified files in no time.</p>
<h3 id="optional-upload-the-uwp-symbols-to-appcenter">4. (Optional) Upload the UWP symbols to AppCenter</h3>
<p>This only needs to be done in the first stage. In order for AppCenter crashes to give you detailed stack traces, it needs the symbols to translate memory addresses to function and class names. You can find more <a href="https://docs.microsoft.com/en-us/appcenter/diagnostics/windows-support#symbolication">details here</a>.</p>
<p>I have used the <a href="https://github.com/microsoft/appcenter-cli">appcenter cli</a> to upload the symbols.</p>
<pre><code class="language-powershell"># AppCenter UWP symbolication
# https://docs.microsoft.com/en-us/appcenter/diagnostics/windows-support#symbolication
# Install appcenter cli
# https://github.com/microsoft/appcenter-cli
npm install -g appcenter-cli
# Login to AppCenter
appcenter login --token $env:AppCenterToken --quiet
# Get the path of the symbol files (64 bit and 32 bit)
$symbolFiles = Get-ChildItem *.appxsym -Recurse -Force | Select-Object -ExpandProperty FullName
# Upload the symbols
Foreach($symbolFile in $symbolFiles)
{
Write-Host "Uploading $symbolFile..."
appcenter crashes upload-symbols --app ORGANIZATION-NAME/APP-NAME --appxsym $symbolFile
}
</code></pre>
<blockquote class="blockquote">
<p>Note: in the current version, appcenter cli seems to use some deprecated libraries. So I have set <code>Fail on Standard Error</code> to false on the Powershell task so that the task doesn't fail.</p>
<p>Note2: I am using Azure Pipelines secrets to pass the AppCenter token to appcenter cli. For that, you have to explicitly map the secret to an environment variable in the <code>Environment Variables</code> part of the Powershell task.</p>
</blockquote>
<p>Congratulations! Now you've got yourself a decent multichannel auto-update process. All that remains is to teach the UWP app to look at different <code>.appinstaller</code> files based on which update channel it's listening on.</p>
<p>In a <a href="https://mazeez.dev/posts/update-sideloaded-uwp">previous</a> post we talked about getting more control over the update process in sideloaded UWP apps. And in <a href="https://mazeez.dev/posts/uwp-devops">another post</a> we talked about setting up CI/CD pipelines for UWP apps. This time we will talk about how to create multiple update channels (e.g. alpha, beta, stable) for our UWP app.</p>https://mazeez.dev/posts/code-signing-certificate-iraqGetting a code signing certificate for a company in Iraq2020-04-16T00:00:00Z<p>If your company is based in one of the developed countries, chances are you won't have much of a problem. It would take a week at most to get a code signing certificate. However, if your company is based in a country like Iraq, which doesn't have an up-to-date online-accessible list of its companies, then you are going to have a harder time. Fortunately, this guide makes the process much easier and faster. The documents might vary by country, but the certificate authority's support team will help you through the process.</p>
<p>Before we start I want to define some terms, because many people (like myself when I started out) are not familiar with the territory. Nate McMaster explains them very well in his <a href="https://natemcmaster.com/blog/2018/07/02/code-signing/">blog post</a>:</p>
<blockquote class="blockquote">
<ul>
<li><p><a href="https://en.wikipedia.org/wiki/Code_signing"><em><strong>Code signing</strong></em></a> means applying a digital signature to the executable binaries (for example <code>McMaster.Extensions.CommandLineUtils.dll</code>). This signature confirms the authenticity and integrity of the files.</p>
</li>
<li><p><em><strong>Authenticity</strong></em> proves the files came from me, Nathan McMaster, and not someone pretending to be me.</p>
</li>
<li><p><em><strong>Integrity</strong></em> also proves the files have not been altered by anyone since I made them.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Public_key_certificate">A <em><strong>certificate</strong></em></a> contains public information about me and <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">a public key</a>. Anyone can see my certificate, but only I can produce a signature with it because I keep secret the private key, which matches with the public key in the certificate. Anyone can create a certificate for free on their own, but Windows apps won’t treat this as “trusted” unless you get a certificate from a CA.</p>
</li>
<li><p><a href="https://en.wikipedia.org/wiki/Certificate_authority">A <em><strong>certificate authority</strong></em> (CA)</a> is an entity that issues certificates. In my case, I worked with <a href="https://digicert.com/">DigiCert</a> to get a certificate. This certificate, unlike a self-created cert, contains additional information which proves DigiCert gave me the certificate.</p>
</li>
</ul>
</blockquote>
<p>What are the benefits of code signing certificates? Well, I had the same question. And I got two good reasons:</p>
<?# Twitter 1210259160908075008 /?>
<?# Twitter 1210259181854449672 /?>
<p>And code signing certificates come in two flavors: <em>Standard Code Signing Certificates</em> and <em>Extended Validation (EV) Code Signing Certificates</em>. <em>EV Code Signing Certificates</em> are more expensive and harder to get, but they provide instant trust with Microsoft SmartScreen. They also require either a hardware USB token or a <a href="https://en.wikipedia.org/wiki/Hardware_security_module"><em>Hardware Security Module</em></a> to sign software. You can also store them in something like Azure Key Vault. In my limited experience, the standard one is enough and is less of a headache to deal with.</p>
<p>It took us more than 5 months to get a code signing certificate, partly because we had no experience with the whole process and partly because our company is based in Iraq. One of our mistakes was that we asked for an EV code signing certificate at first, which made the validation process much harder.</p>
<h3 id="get-a-db-number">1. Get a <a href="https://en.wikipedia.org/wiki/Data_Universal_Numbering_System">D&B</a> Number</h3>
<p>The official website of D&B does not open in Iraq and, to my knowledge, D&B doesn't have a branch in Iraq. Fortunately, you can go to <a href="http://upik.de/en">http://upik.de/en</a> and get a D&B number. It makes your life much easier, as all of the CAs I have talked to consider them a trusted source. They will ask you a few questions and will require documents to prove your company is legitimate. We sent them:</p>
<ul>
<li>Company establishment certificate</li>
<li>Decision of approval for a company establishment.</li>
</ul>
<p>Because these documents were in Kurdish (our company is based in the Kurdistan region of Iraq), we translated both documents, scanned them, and sent both the translated and the original versions.</p>
<p>The validation process can take up to 30 days. However, ours took about a week.</p>
<h3 id="find-a-ca-that-can-work-with-companies-in-iraq">2. Find a CA that can work with companies in Iraq</h3>
<p>Not all CAs can issue certificates for companies in Iraq; they say it's because of sanctions and the like. But some companies have branches in the Middle East and will happily issue a certificate for you. Talk with their sales or customer support to confirm that.</p>
<p>Also make sure the CA is partnered with whatever operating system vendor you care about. Here is <a href="https://docs.microsoft.com/en-us/security/trusted-root/participants-list">the list of CAs partnered with Microsoft</a>.</p>
<h3 id="go-through-the-validation-process">3. Go through the validation process</h3>
<p>In the validation process, they will ask you some questions about your company's name, address, and field of work. Make sure to write the full legal name when they ask for the company name.</p>
<p>They will also ask you for documents to prove the answers you've sent them. We sent these documents:</p>
<ul>
<li>Decision of approval for a company establishment.</li>
<li>Our lease contract</li>
</ul>
<p>Because you already have a D&B number, things should go smoothly. The CAs are usually helpful and try their best to help you through the process.</p>
<h2 id="faq">FAQ</h2>
<p>Before starting the process I had these questions:</p>
<ol>
<li><strong>Should I get the EV or the standard code signing certificate?</strong>
To be honest the standard code signing certificate seems enough. We signed our app using a standard code signing certificate and Windows didn't complain even on the first download of the app.</li>
<li><strong>Can I sign multiple apps with the same certificate?</strong>
Yes! The certificate is for your company, not a specific app, so you can sign as many apps as you need. Be careful, though: if your certificate gets into the hands of malicious people, they might sign malware with it and the CA will be forced to revoke the certificate.</li>
<li><strong>Can an individual get a code signing certificate?</strong>
Yes! Although I am not familiar with the process.</li>
</ol>
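<p>To illustrate the second answer, here is a sketch of signing two different packages with the same certificate using <code>signtool</code>, which ships with the Windows SDK. The file names and the password variable are placeholders, not from our actual setup:</p>
<pre><code class="language-powershell"># Placeholder paths and password variable; the same PFX certificate
# signs any number of apps. The /tr and /td switches add a SHA-256
# timestamp so signatures remain valid after the certificate expires.
signtool sign /f company-cert.pfx /p $env:CertPassword `
    /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 AppOne.msix
signtool sign /f company-cert.pfx /p $env:CertPassword `
    /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 AppTwo.msix
</code></pre>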