While working with ASP.NET Framework we sometimes need to get the physical path to a folder on the filesystem. The most common way to do this is using Server.MapPath("~/relative-folder"). I've researched best practices around this several times, just to forget the details a couple of months later, so in this post I'll outline my findings and share some of my own best practices.

So when building a web app there are mainly two "contexts" in which I need to get file system information: my actual application code and some of my unit tests. Most of the time I strive to mock out I/O from my unit tests, but in some scenarios I also need to perform actual testing against the filesystem to be more confident that my tests are not giving me false positives.

Physical file paths in web applications

The "goto" standard back in the days was to always use HttpContext.Current.Server.MapPath() which would "translate" a relative path into a full file system path. BUT. This object is request-bound, meaning that it will only exist inside the context of a web request. If we run inside a background job with something like Hangfire or Quartz this object will not exist. That's why I always recommend using HostingEnvironment.MapPath(path) that will work in both request-context and in background jobs.

I also wanted to know if and how these might differ from one another so I created this table to see how Server.MapPath() behaves.

Code Returns
HttpContext.Current.Server.MapPath("") D:\Dev\TestApp
HttpContext.Current.Server.MapPath("/") D:\Dev\TestApp\
HttpContext.Current.Server.MapPath("~/") D:\Dev\TestApp\
HttpContext.Current.Server.MapPath("App_Plugins") D:\Dev\TestApp\App_Plugins
HttpContext.Current.Server.MapPath("/App_Plugins") D:\Dev\TestApp\App_Plugins
HttpContext.Current.Server.MapPath("~/App_Plugins") D:\Dev\TestApp\App_Plugins
HttpContext.Current.Server.MapPath("App_Plugins/") D:\Dev\TestApp\App_Plugins\
HttpContext.Current.Server.MapPath("/App_Plugins/") D:\Dev\TestApp\App_Plugins\
HttpContext.Current.Server.MapPath("~/App_Plugins/") D:\Dev\TestApp\App_Plugins\


Note that it does not matter whether the relative path starts with "/", "~/", or just the folder name. Also note that any trailing slash in the relative path will be reflected as a trailing slash in the file system path.

Doing the same thing with HostingEnvironment.MapPath() reveals some differences.

Code Returns
HostingEnvironment.MapPath("") Throws exception
HostingEnvironment.MapPath("/") D:\Dev\TestApp\
HostingEnvironment.MapPath("~/") D:\Dev\TestApp\
HostingEnvironment.MapPath("App_Plugins") Throws exception
HostingEnvironment.MapPath("/App_Plugins") D:\Dev\TestApp\App_Plugins
HostingEnvironment.MapPath("~/App_Plugins") D:\Dev\TestApp\App_Plugins
HostingEnvironment.MapPath("App_Plugins/") Throws exception
HostingEnvironment.MapPath("/App_Plugins/") D:\Dev\TestApp\App_Plugins\
HostingEnvironment.MapPath("~/App_Plugins/") D:\Dev\TestApp\App_Plugins\


Note here that the relative path must start with either "/" or "~/"; otherwise the method will throw.

Overall conclusions and recommendations

  • Always use HostingEnvironment.MapPath().
  • A folder path is indicated by a trailing slash; otherwise it's a file path. Consider always using a trailing slash for folders.
  • Always make sure that the relative path passed to MapPath() starts with a slash.
  • Be aware that the method will respect and include any trailing slash from the relative path into the physical path.
  • Avoid using things like AppDomain.CurrentDomain.BaseDirectory to build paths, as any virtual directories configured in IIS will not be respected with this approach.
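To bake these rules into the code, the MapPath call can be wrapped in a small guard. This is just a sketch (the PathHelper name is mine) that normalizes the input according to the tables above:

```csharp
using System.Web.Hosting;

public static class PathHelper
{
    // Hypothetical guard around HostingEnvironment.MapPath():
    // it throws unless the relative path starts with "/" or "~/".
    public static string MapPath(string relativePath)
    {
        if (string.IsNullOrEmpty(relativePath))
            relativePath = "/";
        else if (!relativePath.StartsWith("/") && !relativePath.StartsWith("~"))
            relativePath = "/" + relativePath;

        return HostingEnvironment.MapPath(relativePath);
    }
}
```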

 

Physical Paths in unit tests

This one is a little harder as we want our unit tests to be "self-contained" and not dependent on any magic path on the developer's filesystem or a build server. Inside a unit test, or any .NET app, you can always find the path to the executing program with AppDomain.CurrentDomain.BaseDirectory; in the case of a unit test this returns something like d:\Dev\TestApp\My.UnitTest\Bin\Debug. One might be tempted to traverse the path with ..\..\ to get to the project root, but this only works if the folders follow this exact naming. I would argue that there is a better way:

Create a folder called "MockFileSystem" inside your test project; this will act as the "root" of your application, similar to what you would get from HostingEnvironment.MapPath("/"). Inside this folder we can replicate the relevant files and store them inside our test project. It's important that we set the "Build Action" for each item to "Content" and choose the "Copy if newer" option. This way the folder structure and files will be copied to the application's output folder.

Have your application code depend on an abstraction of the MapPath() method; in my case this is an interface like this:

internal interface IFileSystemHelper
{
    string MapPath(string path);    
}

The implementation inside the web project would look like this:

internal sealed class FileSystemHelper : IFileSystemHelper
{
    public string MapPath(string path)
    {
        return HostingEnvironment.MapPath(path);        
    } 
}

And in my unit test project:

internal class MockFileSystemHelper : IFileSystemHelper
{
    public string MapPath(string path)
    {
        string baseDir = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MockFileSystem");

        // Normalize "~/App_Plugins/" into "App_Plugins\" so Path.Combine works
        path = path.TrimStart('~').Replace("/", "\\").TrimStart('\\');

        return Path.Combine(baseDir, path);
    }
}


To avoid the "issue" with some relative folders having trailing slashes and some not we could have our implementations strip any trailing slash from the returned path to be sure that we always get a full path without any trailing slash. Something like this:

public string MapPath(string path)
{
    return HostingEnvironment.MapPath(path).TrimEnd('\\');        
}

 

 

I was working on an ASP.NET project the other day where we use a runtime cache (a.k.a. application cache) that lives for the duration of the application lifetime. We use this to store some frequently used data, and we update the cache when something changes.

The cache implementation is not state of the art and I figured I’ll share some learnings and pitfalls that I’ve fallen into over the years.

Mutable objects in the runtime cache

First of all: a mutable object, in contrast to an immutable object, is an object that can change its state (i.e. properties on the object can change value without a new object being created). Since an immutable object can't alter its state, we need to create a new instance of the object if we need to change any values. In .NET, a standard class with get/set properties is mutable, while DateTime, TimeSpan, and many others are immutable.
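A quick illustration of the difference (the Customer class is just a made-up example):

```csharp
using System;

// Immutable: DateTime never changes; "modifying" it produces a new value.
var date = new DateTime(2021, 1, 1);
var later = date.AddDays(1); // 'date' is still 2021-01-01

// Mutable: state changes on the existing instance,
// and every holder of the reference sees the change.
var customer = new Customer { Name = "Acme" };
customer.Name = "Acme Ltd";

public class Customer
{
    public string Name { get; set; }
}
```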

Years ago, one of the biggest gotchas for me with the MemoryCache in .NET was that it actually stores objects. Not serialized objects, but real objects in memory, and it only passes the reference to any consumer.

This is of course great for performance, but it also means that one has to be very careful about how these objects are used. We could deep clone the object when fetching it from the cache to avoid many of the issues I’m going to point out here but in our case, we used the “vanilla memory cache” in .NET.

Since the objects are mutable, we can easily change the state, i.e. change a property or add an item to a list – we just need to remember that the next time this object is accessed the new values will be there, and the old values are gone.


Updating values

Have a look at this code sample:

public class SomeService {

    public bool UpdateCustomer(CustomerViewModel vm)
    {
        // Getting the value from the CustomerService, which is wrapped in a 
        // caching-decorator that uses the .NET MemoryCache.
        var customer = _customerService.GetCustomer(vm.CustomerId);

        customer.Name = vm.Name;
        customer.City = vm.City;
        customer.MaxOrderAmount = vm.MaxOrderAmount;

        var saveResult = _customerService.Save(customer);

        if(saveResult.HasValidationErrors)
            return ValidationError(saveResult);

        return Success();

    }
}

As you can see, we’re applying the changes from the view model into the Customer model and then saving it with the CustomerService which will validate the Customer before saving it. Let’s say that there is a validation error, the service will set HasValidationErrors to true and we’ll return the issues to the view.

BUT! This code contains a nasty bug. Since the GetCustomer()-method returns an object from the cache, the changes we make to the object (setting the values from the view model) will be persisted in the cache no matter if the validation is successful or not. This is all very logical and makes sense but it’s a big “gotcha” in terms of how caching works.

Another thing that has happened to me over the years: I was reading an object from the cache that had related entities (think customers with a List<Order>). I wanted to pass a Customer together with only paid orders to another service so I modified the order-property on the Customer like so:

customer.Orders = customer.Orders.Where(x => x.Paid).ToList();

This felt great and the service that I called could use the customer-object from the cache. The only problem is that the underlying collection of orders is modified and the next time I read the Customer from the cache only the paid orders will be in the collection.
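A safer variant is to project into a new list instead of reassigning the property on the cached instance – a sketch, where PlaceOrders() stands in for whatever service call comes next:

```csharp
// Leave customer.Orders alone; build a separate list and pass that along.
var paidOrders = customer.Orders.Where(x => x.Paid).ToList();
_orderService.PlaceOrders(customer, paidOrders); // hypothetical service call
```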

Threading and runtime cache

Most of the time the in-memory runtime cache would be shared inside the application, since I’m mostly doing ASP.NET this would be all threads used by the webserver to process requests.

Here we need to keep in mind that while one thread is reading the cache, getting a reference to an object, another thread might be in the process of updating values on the same object. It might even be in the middle of that update, and depending on the implementation the object might be in an invalid state (one property has been updated but not the other), causing errors on the read side since the values do not make sense together.

Solutions?

Going forward I can see a couple of things that would make it harder to “do it wrong”.

  • Always use un-cached business objects when modifying state (i.e. the method above should not read from the cache). This way we can safely apply changes to mutable objects and validate like in the sample above.
  • Cache a “read-only”-representation of the underlying business object. This representation could be a CustomerReadOnly-class with private setters for all the properties. This way the consuming code can’t change the state by mistake.
  • Use C# 9 record types; they are immutable by default, so it's impossible to change the state of the cached object. If changes are needed, a new instance of the record has to be created – which will not impact the cached object. This way, any "implicit" changes to the cache are impossible.
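A minimal sketch of the record approach (the Customer record is hypothetical): a positional record gets init-only properties, so the only way to "change" it is the with-expression, which leaves the cached instance untouched.

```csharp
var cached = new Customer("Acme", "Stockholm", 1000m);

// 'with' copies the record; the instance in the cache is not modified.
var updated = cached with { City = "Gothenburg" };

public record Customer(string Name, string City, decimal MaxOrderAmount);
```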

There is a lot more to this subject but I figured I’ll post this as a starting point.

Ever since I started using JetBrains Rider I've become a huge fan of the IDE. If you have ever used ReSharper for Visual Studio - Rider is ReSharper - but fast =D

Rider supports JavaScript/TypeScript debugging with Chrome. Using this we can set breakpoints in our code and have Rider show variables etc. when the client app executes. There are several ways to do this, but we mostly develop on a local IIS instance, which means that our dev environments usually have a custom domain, e.g. enkelmedia.se.local or something like that.

Configure JS/TS build

Before we configure Rider we need to make sure that our JavaScript code can be debugged. Most of the time we use some tool to transpile, minify, and process the source code into a JavaScript bundle. For the debugger to work we must make sure to include source maps in our build. When using Webpack this is configured like this:

module.exports = {
    ...
    devtool: 'inline-source-map',
    ...
}

One way to double-check that this works is to put a "console.log" in one of the TypeScript files and make sure that the console output in Chrome shows the .ts file as the source of the console.log statement.

Configure Rider

We need to set up a custom "Debug Configuration" for JavaScript/TypeScript debugging. Go to "Run | Edit Configurations" and add a new "JavaScript Debug" configuration; in the "URL" field, paste the URL of the application we want to debug, e.g. "http://enkelmedia.se.local".

To use this configuration, in the upper right corner choose the configuration from the dropdown and click the debug-icon (the bug). This will start a fresh instance of Google Chrome with the debugger attached.

More information and documentation:

https://www.jetbrains.com/help/rider/Configuring_JavaScript_Debugger.html

Today I held my talk at the yearly Umbraco conference CodeGarden. Due to the pandemic, this year's conference was all digital, which turned out to be really good. I would like to thank everyone involved in making this a great experience; it's almost as great as the IRL experience.

So my talk was on the subject “10 things every Umbraco-developer should know” and I’ll try to create a short summary here. If you want to go into details, go ahead and download the slides.


1. Properties

A “Document Type” in Umbraco has “Properties”; these use “Data Types”, and a “Data Type” is a “Property Editor” with optional configuration.
A “Property Editor” can be used by any number of “Data Types”, and a “Data Type” can be used on any number of “Document Types”.

One can alter the behavior of the “Property Editor” using “Data Type”-configuration.

2. Property Value Converters

A Property Value Converter is a class that knows how to convert the stored value into something useful for the front end.
If we use the “Multi Node Tree Picker”, the data stored in the database would be like this:

umb://document/ee82cba3a0e740639ae13026a4f72a3d,umb://document/9c1c3a2c72b045e2a3c33c068164a018,umb://document/3cce2545e3ac44ecbf55a52cc5965db3

The “Property Value Converter” knows how to take this data and create a list of IPublishedContent-items from the cache to use on the front end.

When you build your own Property Editors, don’t forget to create a Property Value Converter.
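As a rough sketch of what that looks like in Umbraco 8 (the "My.PropertyEditor" alias and the conversion logic are made up for illustration), a converter can derive from PropertyValueConverterBase:

```csharp
using System;
using Umbraco.Core.Models.PublishedContent;
using Umbraco.Core.PropertyEditors;

public class MyEditorValueConverter : PropertyValueConverterBase
{
    // Tell Umbraco which Property Editor this converter applies to
    public override bool IsConverter(IPublishedPropertyType propertyType)
        => propertyType.EditorAlias == "My.PropertyEditor"; // placeholder alias

    public override Type GetPropertyValueType(IPublishedPropertyType propertyType)
        => typeof(string);

    // Turn the stored value into something useful for the front end
    public override object ConvertIntermediateToObject(IPublishedElement owner,
        IPublishedPropertyType propertyType, PropertyCacheLevel referenceCacheLevel,
        object inter, bool preview)
        => inter?.ToString();
}
```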

Also, do try Callum Whyte’s package Super Value Converters.

3. Database vs Cache

When you “Save” content in Umbraco it is saved to the database; when you “Save and Publish” it is stored in the database and also published to the cache.

You should make sure that the front end of your website only uses the Cache to present data.

Do: Use IPublishedContent, UmbracoHelper, and UmbracoContext.ContentCache

Avoid: Using IContent, ContentService, MediaService, and other services.

Custom services, repositories, and background threads: in your custom code, the best approach to fetch data from the cache is to inject IUmbracoContextFactory into your service.

Example:

public class BlogService : IBlogService
{
    private readonly IUmbracoContextFactory _contextFactory;
    private readonly IScopeProvider _scopeProvider;

    public BlogService(
        IUmbracoContextFactory contextFactory,
        IScopeProvider scopeProvider)
    {
        _contextFactory = contextFactory;
        _scopeProvider = scopeProvider;
    }

    public List<BlogPost> GetPosts(string category)
    {
        using (var scope = _scopeProvider.CreateScope(autoComplete: true))
        {
            using (var ctx = _contextFactory.EnsureUmbracoContext())
            {
                var blogContainer = ctx.UmbracoContext.ContentCache.GetById(1123);

                var posts = blogContainer.Children.ToList();

                return BlogPostMapper.Map(posts);
            }
        }
    }
}

 

More details on what is happening here can be found in this thread on the our.umbraco.com forum.

4. Content Versions

All content in Umbraco is versioned. Every time you Save or Publish something in Umbraco a new version is created.

On the "Info"-Content App on a Content-node you can see old versions and Rollback to them if you need to.

You should avoid storing data that changes a lot (volatile data) as content in Umbraco. E.g. if you are running an import every 30 minutes, this creates 48 versions per day; in a year that is 17,520 versions.

Solution? Use the UnVersion-package that automatically cleans old versions.

5. Models Builder

Provides strongly-typed models for the front end of your Umbraco website.

Before Models Builder, we had to get a property like this: @Model.GetPropertyValue("myProperty"), but with Models Builder we can write @Model.MyProperty to get the value in a strongly typed way.

In V8, Models Builder is configured in web.config, and we tend to configure it like this:

 

<add key="Umbraco.ModelsBuilder.Enable" value="true" />
<add key="Umbraco.ModelsBuilder.ModelsMode" value="LiveAppData" />
<add key="Umbraco.ModelsBuilder.ModelsNamespace" value="MySite.Web.Models.Cms" />
<add key="Umbraco.ModelsBuilder.ModelsDirectory" value="~/../MySite.Web/Models/Cms" />
<add key="Umbraco.ModelsBuilder.AcceptUnsafeModelsDirectory" value="true" />

 

The model classes generated by Models Builder are just wrappers around IPublishedContent; you can create a new instance of a typed model and pass in the IPublishedContent:

IPublishedContent content = Umbraco.Content(4211);
var blogModel = new BlogPage(content);
var title = blogModel.PageTitle;

And since Models Builder makes the Umbraco cache return actual instances of the model classes, you can just cast the IPublishedContent:

IPublishedContent content = Umbraco.Content(4211);
var blogModel = content as BlogPage;
var title = blogModel.PageTitle;

When using Compositions, Models Builder will create C# interfaces for them, and you can check whether an instance implements such an interface (and hence uses the Composition):

IPublishedContent content = Umbraco.Content(4211);
string heroImageUrl = null;
if (content is IHero hero)
{
    heroImageUrl = hero.HeroImage.Url;
}

 

6. Debugging

Use the Log Viewer in the Settings section to view the logs and entries created by the website.

Also, these files can be found on disk in /App_Data/Logs

Use a tool like Compact Log Viewer to watch the logs outside of the backoffice.

Also, you can write to the log from your custom code:

using Umbraco.Core.Logging;

public class MyThing : IMyThing
{
    private readonly ILogger _logger;
    
    public MyThing(ILogger logger)
    {
        _logger = logger;
    }
    
    public void DoSomething(string value)
    {
        _logger.Info<MyThing>("My thing executed DoSomething()");

    }
}

You can also measure the performance of certain blocks in your code using the IProfilingLogger:

using Umbraco.Core.Logging;

public class MyThing : IMyThing
{
    
    private readonly IProfilingLogger _profLog;

    public MyThing(IProfilingLogger profLog)
    {
        _profLog = profLog;
    }
    
    public void DoSomething(string value)
    {
        using (_profLog.TraceDuration<MyThing>("Starting work","Done with work"))
        {
            Thread.Sleep(250);
        }
    }
}

 

7. Lucene / Examine

Examine is the built-in "search engine" in Umbraco; it uses Lucene.NET to index Content, Media, and Members.

It provides fast free-text search and is great for filtering large data sets, e.g. a product or article filter.

Some takeaways:

  • The Umbraco 8-cache (NuCache) is a lot faster than the V7-cache
  • Prefer NuCache if you're working with fewer than 500 content nodes and few filtering options
  • Use Examine/Lucene to filter larger data sets with many filtering options
  • Do test what's best for you

A more detailed presentation around this can be found here.
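For reference, a typical Examine query in Umbraco 8 looks something like this. This is a sketch against the Examine 1.x API; "ExternalIndex" is the default front-end index and "jacket" is just a sample search term:

```csharp
using Examine;

if (ExamineManager.Instance.TryGetIndex("ExternalIndex", out var index))
{
    var results = index.GetSearcher()
        .CreateQuery("content")
        .Field("nodeName", "jacket") // sample term
        .Execute();

    foreach (var result in results)
    {
        // result.Id can be mapped back to IPublishedContent via the cache
    }
}
```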

8. Inversion of Control / Dependency Injection

The concept of injecting dependencies into your classes is a good practice and it becomes more important with Umbraco 9 that runs in .NET 5 where DI is a first-class citizen.

LifeTimes in Umbraco 8 (LightInject)

  • Transient: A new instance every time
  • Singleton: Same instance for the application lifetime
  • Scope: New instance for every "Scope" in the DI-container. In Umbraco, this is for every web request.
  • Request: New instance for every request to the container. This is similar to Transient. Avoid this.
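In Umbraco 8, registrations with these lifetimes are typically done in a composer. A sketch (IMyService/MyService are hypothetical types):

```csharp
using Umbraco.Core;
using Umbraco.Core.Composing;

public class MyComposer : IUserComposer
{
    public void Compose(Composition composition)
    {
        // Lifetime.Transient is the default if no lifetime is given
        composition.Register<IMyService, MyService>(Lifetime.Singleton);
    }
}
```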

9. NPoco

The "Micro ORM" used by Umbraco Core, used to read, update and delete data in the database. You can think of this as a "Lightweight and fast Entity Framework". You are free to use EF in your own code if you want to but you can also use NPoco:

public class MovieRepository : IMovieRepository
{
    private readonly IScopeProvider _scopeProvider;

    public MovieRepository(IScopeProvider scopeProvider)
    {
        _scopeProvider = scopeProvider;
    }

    public bool Save(Movie movie)
    {
        return true;
    }

    public List<Movie> GetByYear(int year)
    {
        using (var scope = _scopeProvider.CreateScope(autoComplete:true))
        {
            var dtos = scope.Database.Fetch<MovieDto>(
                scope.SqlContext.Sql().SelectAll()
                    .From<MovieDto>()
                    .Where<MovieDto>(x=>x.Year == year));

            return MovieMapper.Map(dtos);
            
        }
    }
}

The Scopes created by IScopeProvider can be nested, but: don't forget to "Complete" your scopes.

10. AngularJS hacks for the backoffice.

You can use AngularJS's $httpProvider.interceptors to intercept requests/responses to/from the Umbraco backoffice APIs.

This is useful if you would like to alter the response coming back from the API in any way. One example is to remove the default "Consent"-field that is included when creating a new form in Umbraco Forms.

angular.module('umbraco.services').config([
   '$httpProvider', function ($httpProvider) {
       $httpProvider.interceptors.push(['$q','$injector', function ($q, $injector) {
           return {
               
               'response': function(response) {

                   // Overrides the response from the API endpoint that creates the Forms.
                   // The controller is hardcoded to always append the "data consent" field
                   // as the last field in the collection, so we can remove it with pop().

                   // Does the returned content match the endpoint for GetScaffoldWithWorkflows?
                   if (response.config.url.indexOf('backoffice/UmbracoForms/Form/GetScaffoldWithWorkflows?template=') > -1) {
                       response.data.pages[0].fieldSets[0].containers[0].fields.pop();
                   }

                   return response;
               }
               
           };
       }]);

   }]);

You need to include this JavaScript in the backoffice using a package.manifest file.

When starting up a website on a fresh install of IIS I sometimes get this error:

HTTP Error 500.19

There is not much information beyond the error message, which says: The requested page cannot be accessed because the related configuration data for the page is invalid.

The first thing I check is the permission settings for the folders; after this, one can try to remove elements from web.config to figure out what inside web.config is considered "invalid". Since I work a lot with Umbraco, MOST of the time the problem is the <rewrite>-element. Without the right components installed on the server, these elements are unknown to IIS and considered invalid.

What to do?

If you have the <rewrite>-element with <rules> configured in web.config, make sure that you have installed "URL Rewrite". This is my favorite method:

  • Download and install the "Web Platform Installer" from Microsoft's website.
  • Run Web Platform Installer, click on "Product" and search for "URL Rewrite".
  • Click on the "Add"-button in the Install-column and follow the instructions.

 

When running unit tests over "complex" data, e.g. an HTML, XML, or JSON file, it's sometimes good to keep this data in its own file rather than inline in the C# code.

One example is from one of our utility projects for Umbraco where we're parsing the grid to remove any empty p-tags from the end of a Rich Text Editor. To really know that this works, and keeps working, we've created unit tests for different kinds of grid input.

 

First off, we need to include the files in the project and then set their "Build Action" to "Embedded Resource"; right-click on the file and choose "Properties" to see these options.

 

After this we can read the content of the files like this:

var content = new AssemblyTestData<MyUnitTestClass>(".Files.").ReadString("test-data.json");

Here's the code for the AssemblyTestData-class:

/// <summary>
/// Utility to read content of embedded assembly resources
/// </summary>
/// <typeparam name="T">The calling type, used to get the resource namespace</typeparam>
public class AssemblyTestData<T>
{
    private readonly string _additionalNameSpace;

    /// <summary>
    /// 
    /// </summary>
    /// <param name="additionalNameSpace">If the files to read are in another namespace than the calling class, add it here, e.g. ".Files."</param>
    public AssemblyTestData(string additionalNameSpace = "")
    {
        _additionalNameSpace = additionalNameSpace;
    }

    public string ReadString(string filename)
    {
        var bytes = ReadBytes(filename);
        
        return Encoding.UTF8.GetString(bytes)
            .Trim(new char[] { '\uFEFF', '\u200B' }); // Removes BOM/zero-width chars

    }

    public byte[] ReadBytes(string filename)
    {
        var type = typeof(T);
        var assembly = type.Assembly;
        var stream = assembly.GetManifestResourceStream(type.Namespace + _additionalNameSpace + filename);

        using (var memoryStream = new MemoryStream())
        {
            stream.CopyTo(memoryStream);
            return memoryStream.ToArray();
        }
    }
}

Happy testing!

 

Umbraco CMS ships with the great MiniProfiler, both in Umbraco 7 and 8.

I'm not going to repeat everything from the documentation, but today, when I wanted to see some profiling for a backoffice API controller I'm working on, I found that it's really easy to show the profiler logs by going to

 

https://mysite.com/mini-profiler-resources/results-index

I recently played around with Microsoft's new "Windows Terminal", a great new tool for working with different command-line tools in Windows.


 

My default setup for the Windows Terminal settings looks like this:

"defaults":
{
    
    // Put settings here that you want to apply to all profiles.
    "colorScheme": "One Half Dark", //"Tango Dark",
    "fontFace": "Cascadia Code PL",

    "useAcrylic" : true,
    "acrylicOpacity" : 0.9,

    "startingDirectory": "." //add this

},

 

Also, if you want Git Bash as one of the options in the dropdown for shells, just add this to the "list" property in the Windows Terminal settings:

 {
    "guid" : "{14ad203f-52cc-4110-90d6-d96e0f41b64d}",
    "name" : "Git Bash",
    "historySize" : 9001,
    "commandline" : "C:/Program Files/Git/usr/bin/bash.exe --login",
    "icon" : "C:/Program Files/Git/mingw32/share/git/git-for-windows.ico",
    
    "useAcrylic" : true,
    "acrylicOpacity" : 0.9,
    
    "padding" : "0, 0, 0, 0",
    "snapOnInput" : true

}

Note: The path to git might be different, sometimes something like c:/program files (x86)/

 

I've found myself thinking a lot about a good naming strategy, or naming convention, for website projects that work with data in several different formats. Before I start to outline my current ideas (they might change over time) I would like to set the stage for the project.

Let's say we have a website project with all of these "features":

  • A rich domain model with domain entities
  • A database to store the state of the domain model, including repositories
  • An MVC front end
  • A web API aimed at the front-end website (not 3rd-party integrations)
  • A public web API for 3rd-party integrations

Now comes the challenge: all of these different touch points/end points into the system often need to represent the same thing/entity. We've been taught that one should not use the domain entity as the view model in an MVC view; we should have a dedicated type that acts as the view model. The problem is that each of these end points probably needs a dedicated type, which presents us with a problem that I've had a really hard time finding a perfect solution for:

How should we name all these dedicated types/classes?

First of all, these special representations of the core rich domain model entities are all DTOs (data transfer objects), but since we probably need these DTOs to look different depending on the context we need to name them in a smart and understandable way. The goal is that a developer should be able to understand what "type of DTO" the code is working with just by looking at the type name.

So let's start with some ideas, and let's say that the core domain model type that we're working with here is called Car.

MVC-view model

This one is quite simple, a very common practice is to suffix the core entity with ViewModel.

Idea: CarViewModel.

Database mapping DTO

In most of my solutions I use a repository that takes in the domain model and stores its state; the repository also returns instances of domain entities. During this process we need to map the domain entity to a model that is suitable for the storage we're using, let's say a database. So most of the time I would represent the database table as a DTO.

Idea: CarDto

Model for website frontend APIs

Here we're talking about features on the website that make async JavaScript requests to the backend: a filter, a search feature, auto-suggest, or whatever. These are "API models", but they are used in a context where they will probably end up as some kind of "view model" when rendered on the website. I'm quite sure that this kind of model is different from the model used in public APIs for 3rd parties, so I would like a naming convention that makes it clear that they are used only for the website.

Idea: CarFrontendModel

Model for public API for 3rd parties

Here we're talking about model types for an external public web API. If we're implementing an API that is RESTful, these models could be called a Resource or an ApiModel. I like the idea of calling it a "Resource", since a RESTful API could/should have navigation properties etc. So in this case the CarResource might have links to a BrandResource or a DealerResource. I'm very hesitant about this one, but one has to come up with something.

Idea: CarResource
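Put side by side, the convention sketches out like this (all type names are the hypothetical ones proposed above):

```csharp
public class Car { /* rich domain entity */ }

public class CarViewModel { /* MVC view model */ }
public class CarDto { /* database mapping */ }
public class CarFrontendModel { /* website front-end API */ }
public class CarResource { /* public 3rd-party API */ }
```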


I had two main purposes in writing this blog post: the first is to document and share my ideas, and the second is to get input and/or feedback. If you have other thoughts around this, please share them in the comments!

Note: I might (and almost hope that I will) change my conclusions above based on more experience and feedback.

 

 

When working with Umbraco Forms there are some scenarios when you want to extend the functionality to perform something custom. Every time a Form is submitted a new Record is created for this Form, this Record is stored in the database and also passed to all Workflows that are configured on the form.

In our case we wanted to implement a honeypot to avoid some of the SPAM that comes in through the forms, so we wanted to be able to remove a record from a custom WorkflowType. I found some solutions for Umbraco 7, but none of these worked on Umbraco 8, so I got my hands dirty and started to implement this myself.

I did not find a way to remove the Record from within the workflow's Execute() method, since everything I tried caused exceptions. I managed to solve it by firing off a Task that runs some time after the Record has been created.

Here's the code that we used:

public class DeleteWorkflow : WorkflowType
{
    public DeleteWorkflow()
    {
        this.Id = new Guid("466BAB6D-ECF1-4BE8-B0E7-6C6ACC495565");
        this.Name = "Delete Record";
        this.Description = "Deletes the record from the Database";
        this.Icon = "icon-delete";
    }
    

    public override WorkflowExecutionStatus Execute(Record record, RecordEventArgs e)
    {
        Task.Run(() => DeleteRecordWithDelay(record.UniqueId.ToString(),record.Form.ToString()));

        return WorkflowExecutionStatus.Completed;
    }


    public override List<Exception> ValidateSettings()
    {
        return new List<Exception>();
    }

    
    public static async Task DeleteRecordWithDelay(string recordId,string formId)
    {
        await Task.Delay(5000);

        try
        {
        
            IRecordService recordService = DependencyResolver.Current.GetService<IRecordService>();
            IRecordStorage recordStorage = DependencyResolver.Current.GetService<IRecordStorage>();
            IFormService formService = DependencyResolver.Current.GetService<IFormService>();
        
            var form = formService.GetForm(Guid.Parse(formId));
            var record = recordStorage.GetRecordByUniqueId(Guid.Parse(recordId), form);
        

            recordService.Delete(record, form);
        }
        catch (Exception e)
        {

            var exception = e.Message;
            throw;
        }
    }
}