I'm a huge fan of .NET Core global tools. I've done a podcast on Global Tools. Just like Node and other platforms have global tools that can be easily and quickly installed and then used in build scripts, CI/CD (Continuous Integration/Continuous Deployment) systems, or just as general command line utilities, .NET Global Tools are easily made (by you!) and distributed via NuGet.
If you've got the .NET SDK installed you can try out a global tool just like this.
dotnet tool install -g dotnetsay
Then run the example by typing "dotnetsay" - it's fun.
Stepping back a moment, you may be familiar with PowerShell. It's a scripting language and a command line shell like Bash or DOS or the Windows Command Prompt. You may think of PowerShell as a tool for maintaining and managing Windows Servers.
However, in recent years PowerShell has gone cross-platform and runs most anywhere. It's lightweight and has .NET Core at its, ahem, core. You can use PowerShell to script systems on any platform, and if you're a .NET developer, the team has made installing and immediately using PowerShell in scripts a one-liner - which is genius. It's PowerShell as a .NET Global Tool.
Here's an example output from my system running Ubuntu. I just "dotnet tool install --global PowerShell."
$ dotnet --version
2.1.502
$ dotnet tool install --global PowerShell
You can invoke the tool using the following command: pwsh
Tool 'powershell' (version '6.2.2') was successfully installed.
$ pwsh
PowerShell 6.1.1
https://aka.ms/pscore6-docs
Type 'help' to get help.

PS /mnt/c/Users/Scott/Desktop> exit
Here I've checked that I have .NET 2.x or above, then I install PowerShell. I can run scripts or I can drop into the interactive shell. Note the PS prompt and my current directory above.
In fact, PowerShell is so useful as a scripting language when combined with .NET Core that PowerShell has been included as a global tool within the .NET Core 3.0 Preview Docker images since Preview 4. This means you can use PowerShell lines/scripts inside Docker images.
FROM mcr.microsoft.com/dotnet/core/sdk:3.0
RUN pwsh -c Get-Date
RUN pwsh -c "Get-Module -ListAvailable | Select-Object -Property Name, Path"
Being able to easily install PowerShell as a global tool means you can count on it in your scripts, CI/CD systems, or Docker containers. It's also nice to be able to use existing PowerShell scripts cross-platform.
I'm impressed with this idea - installing PowerShell itself as a .NET Global Tool. Very clever and useful.
There's some interesting stuff quietly happening in the "Console App" world within open source .NET Core right now. Within the https://github.com/dotnet/command-line-api repository are three packages:
System.CommandLine.Experimental
System.CommandLine.DragonFruit
System.CommandLine.Rendering
These are interesting experiments and directions that are exploring how to make Console apps easier to write, more compelling, and more useful.
The one I am the most infatuated with is DragonFruit.
Historically Console apps in classic C look like this:
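int main(int argc, char *argv[])
{
    /* parsing argv is entirely up to you */
    return 0;
}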
That first argument argc is the count of the number of arguments you've passed in, and argv is an array of pointers to 'strings,' essentially. The actual parsing of the command line arguments and the semantic meaning of the args you've decided on are totally on you.
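Then in C# it became the familiar:

static void Main(string[] args)
{
    // parsing and interpreting args is still entirely up to you
}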
It's a pretty straight conceptual port from C to C#, right? It's an array of strings. Argc is gone because you can just use args.Length.
If you want to make an app that does a bunch of different stuff, you've got a lot of string parsing before you get to DO the actual stuff your app is supposed to do. In my experience, a simple console app with real proper command line arg validation can end up with half the code parsing crap and half doing stuff.
myapp.com someCommand --param:value --verbose
The larger question - one that DragonFruit tries to answer - is why doesn't .NET do the boring stuff for you in an easy and idiomatic way?
From their docs: what if you could declare a strongly-typed Main method? That was the question that led to the creation of the experimental app model called "DragonFruit", which allows you to create an entry point with multiple parameters of various types and default values, like this:
static void Main(int intOption = 42, bool boolOption = false, FileInfo fileOption = null)
{
Console.WriteLine($"The value of intOption is: {intOption}");
Console.WriteLine($"The value of boolOption is: {boolOption}");
Console.WriteLine($"The value of fileOption is: {fileOption?.FullName ?? "null"}");
}
In this concept, the Main method - the entry point - is an interface that can be used to infer options and apply defaults.
using System;

namespace DragonFruit
{
    class Program
    {
        /// <summary>
        /// DragonFruit simple example program
        /// </summary>
        /// <param name="verbose">Show verbose output</param>
        /// <param name="flavor">Which flavor to use</param>
        /// <param name="count">How many smoothies?</param>
        static int Main(
            bool verbose,
            string flavor = "chocolate",
            int count = 1)
        {
            if (verbose)
            {
                Console.WriteLine("Running in verbose mode");
            }
            Console.WriteLine($"Creating {count} banana {(count == 1 ? "smoothie" : "smoothies")} with {flavor}");
            return 0;
        }
    }
}
I can run it like this:
> dotnet run --flavor Vanilla --count 3
Creating 3 banana smoothies with Vanilla
The way DragonFruit does this is super clever. During the build process, DragonFruit changes this public strongly typed Main to a private one (so it's not seen from the outside - .NET won't consider it an entry point), then generates a traditional Main to take its place. You'll never see it, as it lives in the compiled/generated artifact.
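Here's a rough sketch of the kind of shim that ends up in the build output. To be clear, the names below are invented for illustration - this is not DragonFruit's actual generated code:

// Hypothetical sketch only; the real generated wrapper differs in detail.
static int Main(string[] args)
{
    // The generated wrapper parses args, binds them to the parameters of
    // the author's (now private) strongly typed Main, applies the declared
    // default values, and then invokes it.
    return GeneratedCommandLine.InvokeStronglyTypedMain(args);
}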
I really like this idea and I'd love to see it taken further! Have you used DragonFruit on a project? Or are you using another command line argument parser?
As I've mentioned lately, I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.
I'm breaking a single machine into a series of small sites BUT I want to still maintain ALL my existing URLs (for good or bad) and the most important one is hanselman.com/blog/ that I now want to point to hanselmanblog.azurewebsites.net.
That means that Azure Front Door will be receiving all the traffic - it's the Front Door! - and then forwarding it on to the Azure Web App.
It's worth considering a few things. Front Door MAY be overkill for what I'm doing because I have a small, modest site. Right now I've got several backends, but they aren't yet globally distributed. If I had a system with lots of regions and lots of App Services all over the world AND a lot of static content, Front Door would be a perfect fit. Right now I have just a few App Services (Backends in this context) and I'm using Front Door primarily to manage the hanselman.com top level domain and manage traffic with URL routing.
On the plus side, that might mean Azure Front Door is exactly what I needed. It was super easy to set up Front Door as there's a visual Front Door Designer. It took less than 20 minutes to get it all routed, and SSL certs took just a few hours more. You can see below that I associated staging.hanselman.com with two Backend Pools. This UI in the Azure Portal is (IMHO) far easier than the Azure Application Gateway's. Additionally, Front Door is Global while App Gateway is Regional. If you were a massive global site, you might put Azure Front Door in, ahem, front, and Azure App Gateway behind it, regionally.
Again, a little overkill as my Backend Pools are pools of one, but it gives me room to grow. I could easily balance traffic globally in the future.
CONFUSION: In the past with my little startup I've used Azure Traffic Manager to route traffic to several App Services hosted all over the globe. When I heard of Front Door I was confused, but it seems Traffic Manager is mostly global DNS load balancing for any network traffic, while Front Door is Layer 7 load balancing for HTTP traffic that uses a variety of factors to route traffic. Azure Front Door can also act as a CDN and cache all your content. There's lots of detail on Front Door's routing architecture details and traffic routing methods. Azure Front Door is definitely the most sophisticated and comprehensive system for fronting all my traffic. I'm still learning what's the right size app for it and I'm not sure a blog is the ideal example app.
Here's how I set up /blog to hit one Backend Pool. I have it accepting both HTTP and HTTPS. Originally I had a few extra Front Door rules, one for HTTP and one for HTTPS, and I set the HTTP one to redirect to HTTPS. However, Front Door charges 3 cents an hour for each of the first 5 routing rules (then about a penny an hour for each rule after 5), and I don't (personally) think I should pay for what I consider "best practice" rules: forcing HTTPS (an internet standard these days) and URL canonicalization with a trailing slash after paths. That means /blog should 301 to /blog/, etc. These are simple prescriptive things that everyone should be doing. If I were putting a legacy app behind a Front Door, then this power and flexibility in path control would be a boon that I'd be happy to pay for. But in these cases I may be able to have that redirection work done lower down in the app itself and save money every month. I'll update this post if the pricing changes.
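If I do push those redirects down into the app, a minimal sketch in an ASP.NET Core 2.2 Startup's Configure method might look like this (the exact rules here are my assumption about what I'd need, not code from my actual site):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Force HTTPS inside the app instead of paying for a Front Door rule
    app.UseHttpsRedirection();

    // Canonicalize /blog to /blog/ with a 301
    // (the rewrite regex matches the path without its leading slash)
    app.UseRewriter(new RewriteOptions()
        .AddRedirect("^blog$", "blog/", statusCode: 301));

    // ...the rest of the pipeline
}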
After I set up Azure Front Door I noticed my staging blog was getting hit every few seconds, all day, forever. I realized there are health checks, and since there are 80+ Azure Front Door locations all checking the health of my app, it was adding up to a lot of traffic. For a large app, you need these health checks to make sure traffic fails over and you really know if your app is healthy. For my blog, less so.
There are a few ways to tell Front Door to chill. First, I don't need Azure Front Door doing GET requests on /. I can instead ask it to check something lighter weight. With ASP.NET Core 2.2 it's as easy as adding Health Checks. It's much easier, it's less traffic, and you can make the health check as comprehensive as you want.
app.UseHealthChecks("/healthcheck");
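For completeness, a minimal sketch: that middleware line only works if the health check services are registered in ConfigureServices first. This is the standard ASP.NET Core 2.2 pattern, nothing Front Door specific:

public void ConfigureServices(IServiceCollection services)
{
    // Registers the services that app.UseHealthChecks("/healthcheck") relies on
    services.AddHealthChecks();
}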
Next, I turned the Interval WAY up so it wouldn't bug me every few seconds.
These two small changes made a huge difference in my traffic as I didn't have so much extra "pinging."
After setting up Azure Front Door, I also turned on Custom Domain HTTPS and pointed staging to it. It was very easy to set up and was included in the cost.
NOTE: I will have to at some point manage the Apex/Naked domain issue where hanselman.com and www.hanselman.com both resolve to my website. It seems this can be handled by either CNAME flattening or DNS chasing and I need to check with my DNS provider to see if this is supported. I suspect I can do it with an ALIAS record. Barring that, Azure also offers an Azure DNS hosting service.
There is another option I haven't explored yet called Azure Application Gateway that I may test out and see if it's cheaper for what I need. I primarily need SSL cert management and URL routing.
I'm continuing to explore as I build out this migration plan. Let me know your thoughts in the comments.
I'm quietly moving my Website from a physical machine to a number of Cloud Services hosted in Azure. This is an attempt to not just modernize the system - no reason to change things just to change them - but to take advantage of a number of benefits that a straight web host sometimes doesn't have. I want to have multiple microsites (the main page, the podcast, the blog, etc) with regular backups, CI/CD pipeline (check in code, go straight to staging), production swaps, a global CDN for content, etc.
I'm also moving from an ASP.NET 4 (was ASP.NET 2 until recently) site to ASP.NET Core 2.x LTS and changing my URL structure. I am aiming to save money but I'm not doing this as a "spend basically nothing" project. Yes, I could convert my site to a static HTML generated blog using any number of great static site generators, or even a Headless CMS. Yes, I could host it in Azure Storage fronted by a CMS, or even as a series of Azure Functions. But I have 17 years of content in DasBlog, I like DasBlog, it's being actively updated to .NET Core, and it's a fun app. I also have custom Razor sites in the form of my podcast site, and they work great with a great workflow. I want to find a balance of cost effectiveness, features, ease of use, and reliability. What I have now is a sinking feeling like my site is gonna die tomorrow and I'm not ready to deal with it. So, there you go.
Currently my sites live on a real machine with real folders, fronted by IIS on a Windows Server. There's an app (an IIS Application, to be clear) living at \, so hanselman.com/ hits /, which is likely c:\inetpub\wwwroot, full stop.
For historical reasons, when you hit hanselman.com/blog/ you're hitting the /blog IIS Application which could be at d:\whatever but may be at c:\inetpub\wwwroot\blog or even at c:\blog. Who knows. The Application and ASP.NET within it knows that the site is at hanselman.com/blog.
That's important, since I may write a URL like ~/about when writing code. If I'm in the hanselman.com/blog app, then ~/about means hanselman.com/blog/about. If I write /about, that means hanselman.com/about. So the ~ is a shorthand for "starting at this App's base URL." This is great and useful and makes link generation super easy, but it only works if your app knows what its server-side base URL is.
To be clear, we are talking about the reality of the generated URL that's sent to and from the browser, not about any physical reality on the disk or server or app.
I've moved my world to three Azure App Services called hanselminutes, hanselman, and hanselmanblog. They have names like http://hanselman.azurewebsites.net for example.
ASIDE: You'll note that hitting hanselman.azurewebsites.net will hit an app that looks stopped. I don't want that site to serve traffic from there; I want it to be served from http://hanselman.com, right? Specifically only from Azure Front Door, which I'll talk about in another post soon. So I'll use the Access Restrictions and Software Based Networking in Azure to deny all traffic to that site, except traffic from Azure - in this case, from the Azure Front Door Reverse Proxy I'll be using.
That looks like this in this Access Restrictions part of the Azure Portal.
Since the hanselman.com app will point to hanselman.azurewebsites.net (or one of its staging slots) there's no issue with URL generation. If I say / I mean /, the root of the site. If I generate a URL like "~/about" I'll get hanselman.com/about, right?
Is part of the path being removed, or is a path being added?
In the case of DasBlog, we have a configuration setting so that the app knows where it LOOKS like it is, from the Browser URL's perspective.
My blog is at /blog so I add that in some middleware in my Startup.cs. Certainly YOU don't need to have this in config - do whatever works for you as long as context.Request.PathBase is set as the app should see it. I set this very early in my pipeline.
That if statement is there because most folks don't install their blog at /blog, so it doesn't add the middleware.
//if you've configured it at /blog or /whatever, set that pathbase so ~ will generate correctly
Uri rootUri = new Uri(dasBlogSettings.SiteConfiguration.Root);
string path = rootUri.AbsolutePath;

//Deal with path base and proxies that change the request path
if (path != "/")
{
    app.Use((context, next) =>
    {
        context.Request.PathBase = new PathString(path);
        return next.Invoke();
    });
}
Sometimes you want the OPPOSITE of this. That would mean I wanted, perhaps, hanselman.com to point to hanselman.azurewebsites.net/blog/. In that case I'd do this in my Startup.cs's Configure method:
app.UsePathBase("/blog");
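UsePathBase works by splitting the matched prefix off Request.Path and moving it into Request.PathBase on every request, so link generation behaves as if the app were rooted at /blog even though it's actually answering at /.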
Be aware that if you're hosting ASP.NET Core apps behind Nginx or Apache or really anything, you'll also want ASP.NET Core to respect X-Forwarded-For and the other standard X-Forwarded headers. You'll also likely want the app to refuse to speak to anyone who isn't on a certain list of proxies or configured URLs.
I configure these in Startup.cs's ConfigureServices from a semicolon delimited list in my config, but you can do this in a number of ways.
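Here's a sketch of what that registration can look like, assuming a semicolon-delimited config value I'm calling "KnownProxies" purely for illustration:

using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<ForwardedHeadersOptions>(options =>
    {
        // Respect the scheme and client IP that the reverse proxy forwards along
        options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;

        // Only trust proxies listed in config ("KnownProxies" is a made-up key for this sketch)
        foreach (string proxy in Configuration["KnownProxies"].Split(';'))
        {
            options.KnownProxies.Add(IPAddress.Parse(proxy));
        }
    });
}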
Since Azure Front Door adds these headers as it forwards traffic, from my app's point of view it "just works" once I've added that above and then this in Configure()
app.UseForwardedHeaders();
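One ordering note: UseForwardedHeaders should run early in Configure(), before any middleware that depends on the request's scheme or remote IP, or those components will see the proxy's values rather than the original client's.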
There seems to be some confusion on hosting behind a reverse proxy in a few GitHub Issues. I'd like to see my scenario ( /foo -> / ) be a single line of code, as we see that the other scenario ( / -> /foo ) is a single line.
Have you had any issues with URL generation when hosting your Apps behind a reverse proxy?
I'm doing a quiet backend migration/update to my family of sites. If I do it right, there will be minimal disruption. Even though I'm a one person show (plus Mandy my podcast editor) the Cloud lets me manage digital assets like I'm a whole company with an IT department. Sure, I could FTP files directly into production, but why do that when I've got free/cheap stuff like Azure DevOps.
As I'm one person, I do want to keep costs down whenever possible, and I've said so in my "Penny Pinching in the Cloud" series. However, I do pay for value, so I will pay for things I feel are useful, provide me with a quality product, and/or save me time and hassle. For example, Azure Pipelines gives one free Microsoft-hosted CI/CD pipeline and 1 hosted job with 1800 minutes a month. This should be more than enough for my setup.
Additionally, sometimes I'll shift costs in order to both save money and improve performance as when I moved my Podcast's image hosting over to an Azure CDN in 2013. Fast forward 6 years and since I'm doing this larger migration, I wanted to see how easy it would be to migrate even more of my static assets to a CDN.
Make a Storage Account and ensure that its Access Level is Public if you intend folks to be able to read from it over HTTP GETs.
Within the Azure Portal you can make a CDN Profile that uses Microsoft, Akamai, or Verizon for the actual Content Distribution Network. I picked Microsoft's because I have experience with Akamai and Verizon (they were solid) and I wanted to see how the new Microsoft one does. It claims to put content within 50ms of 60 countries.
You'll have a CDN endpoint host name that points to an Origin. That Origin is the origin of your content. The Origin can be an existing place you have stuff, so the CDN will basically check there first, cache things, then serve the content. It can be an existing Web App or, in my case, Storage.
I didn't want to make things too complex, so I have a single Storage Account with a few containers. Then I mapped custom endpoints for both my blog AND my podcast, but they share the same storage. I also took advantage of the free SSL Certs so images.hanselman.com and images.hanselminutes.com are both SSL. Gotta get those "A" grades from https://securityheaders.com right?
I've settled on a basic scheme where anything that was "/images/foo.png" is now "https://images.hanselman.com/blog/foo.png" or /main/ or /podcast/ depending on who is using it. This made search and replace reliable and easy.
It's truly a less-than-an-hour operation to enable a CDN on an existing site. While I've chosen to use Azure Storage as my backing store, you can just use your existing site's /images folder and change your markup to pull from the CDN's URL. I recommend you make a simple cdn.yourdomain.com or images.yourdomain.com; it can be enabled with a CNAME in less than an hour. Having your images at another URL via a subdomain CNAME also allows for even more download parallelism from the browser.
This is all in my Staging so you won't see it quite yet until I flip the switch on the whole migration.
Exploring future possibilities for dynamic image content
I haven't yet moved all my blog content images (a few gigs, but many thousands of images) to a CDN as I don't want to change my existing publishing workflow with Open Live Writer. I'm considering a flow that keeps the images uploading to the Web App but then uses the Custom Origin Path option in the Azure CDN so that images get picked up from the Web App and then served by the CDN. In order to manage 17 years of images, though, I'd need to catch the existing /blog/content/binary style URLs.
As I see it, there are a few options to get images from a GET of /blog/content/binary to be served by a CDN:
Dynamically Rewrite the URLs on the way OUT (as the HTML is generated)
CPU expensive, but ya it's cached in my WebApp
Rewriting Middleware to redirect (301?) requests to the new images location (a sketch of this option follows the list)
Easiest option, but costs everyone a 301 on all image GETs
Programmatically change the stored markup (basically loop over my "database," search and replace, AND ensure future images use this new URL)
A hassle, but a one time hassle
Not sure about future images; I might have to change my publishing flow AND run the process on new posts occasionally.
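For the middleware option, here's a minimal sketch, assuming the CDN mirrors the old /blog/content/binary path under https://images.hanselman.com/blog/ (that mapping is my assumption for illustration):

app.Use(async (context, next) =>
{
    // Permanently redirect old image URLs to the CDN host
    if (context.Request.Path.StartsWithSegments("/blog/content/binary", out var rest))
    {
        context.Response.Redirect("https://images.hanselman.com/blog" + rest, permanent: true); // 301
        return;
    }
    await next();
});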
What are your thoughts on if I should go all the way and manage EVERY blog image vs the low hanging fruit of shared static assets? Worth it, or overkill?
The learning process continues!
About Scott
Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.