Blog http://www.diplo.co.uk/blog/ The blog of Dan 'Diplo' Booth where I muse about all sorts of rubbish from web development, Umbraco and programming to politics, media and culture. Come on in, the water is lovely! Copyright Dan 'Diplo' Booth en-GB AI Art with Stable Diffusion There has been a quiet revolution in AI generated art. Here I explore how we can use deep-learning AI systems to generate images from text prompts to create brand new art forms that are only limited by the bounds of your own imagination. https://www.diplo.co.uk/blog/music-film-tv/ai-art-with-stable-diffusion/ http://www.diplo.co.uk/5072.aspx Tue, 11 Oct 2022 00:00:00 GMT You might not have noticed, but there has been a quiet revolution in AI generated art. With the rise of DALL-E, Midjourney, Imagen and now Stable Diffusion we have seen deep-learning AI systems that can generate images simply from text prompts, analogous to the way AI systems like GPT-3 have revolutionised text-to-text generation.

In simple terms, this means you type a short, descriptive sentence like "wealthy cat wearing a top hat painted by Rembrandt" and the AI system then generates images that are variations on that theme. You can see an example I generated myself using that exact prompt:

Rembrandt Cat

This is known as "text-to-image" or just "txt2img". You input a description and the AI model generates images based on that input.

Not so long ago these systems were only accessible to computer scientists and researchers working at the tech giants, like OpenAI or Google. They then started to become more widely available, but required expensive subscriptions to either APIs or web-based front-ends that wrapped the API. So whilst they became accessible to more people, they still tended to be used mostly by tech "geeks" or the more dedicated professional digital artists.

But this all changed in 2022 when Stability AI released their own deep-learning AI text-to-image model called Stable Diffusion. (The name references the diffusion process these image generators use to take noise and refine it until it resolves into a "stable" image.) The big difference is that Stable Diffusion is open source - anyone can download and run it on a home PC system (albeit one that requires a powerful GPU with lots of vRAM). This has effectively democratised the generation of images using AI and opened it up to a much wider audience, as you can see on the Stability AI Discord channel.

Whilst it's not super-simple to get running (you need Git, Python and an understanding of command-line tools), there are many ways of accessing it, including GUIs and commercial online versions such as DreamStudio. You can also access it via the simple and free web interface on Hugging Face. To run it at home you also need a powerful GPU - preferably a recent Nvidia RTX card with 8GB of vRAM.
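
If you'd rather script it than use a web UI, the hosted Hugging Face Inference API can be called from pretty much any language. Here's a rough sketch in C# (since that's what I spend most of my time in) - the model id, endpoint format and response handling are my assumptions, so check the Hugging Face documentation, and you'll need a (free) API token:

// Rough sketch of calling a hosted Stable Diffusion model via the Hugging Face
// Inference API - the model id and endpoint format are assumptions, check the docs
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class Txt2ImgDemo
{
    static async Task Main()
    {
        const string model = "stabilityai/stable-diffusion-2"; // hypothetical model choice

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "YOUR_HF_API_TOKEN");

            var payload = "{\"inputs\": \"wealthy cat wearing a top hat painted by Rembrandt\"}";
            var response = await client.PostAsync(
                $"https://api-inference.huggingface.co/models/{model}",
                new StringContent(payload, Encoding.UTF8, "application/json"));

            response.EnsureSuccessStatusCode();

            // The API returns the generated image as raw bytes
            var imageBytes = await response.Content.ReadAsByteArrayAsync();
            File.WriteAllBytes("rembrandt-cat.png", imageBytes);
            Console.WriteLine("Image saved.");
        }
    }
}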

What Can it Do?

Basically, whatever you can dream up, it can generate - with some caveats. AI models need training and Stable Diffusion itself was trained on pairs of images and captions taken from LAION-5B, a publicly available dataset that utilises images "scraped" from the internet. This obviously influences what it can "dream" up and there are whole debates about the ethics of this. It's also dependent on how it interprets your text input - natural language parsing is a whole field in itself. Then there's also a random factor, as you usually provide a random "seed" when rendering your image(s). On top of this there is a whole art to tailoring your prompts to generate what you require - you can't just type in anything you want, as you will tend to get a very literal result. In fact there are whole tools dedicated to generating prompts, such as Magic Prompt. There is also now a dedicated Stable Diffusion search engine in the form of Lexica.

So, enough boring text - here are a few examples of things I've generated using Stable Diffusion. (You can view many more in the AI Art gallery on this very site.)

Want to see how Scarlett Johansson might look in a futuristic sci-fi Barbarella / Blade Runner crossover?

Scarlett Johansson

Or what if you want to blend the disparate aesthetics of punk rock with Art Nouveau?

Punk Art Nouveau

And talking of punk, you can also do something more photo-realistic, by manipulating real images and combining them with fantasy elements to create "really angry punk rockers":

Or something more pleasing, as I dreamt up a "girl in a red coat walking through the forest in Autumn with dappled sunlight in style of Studio Ghibli"

Or for fun, imagining what you would get if you combined David Bowie's album covers with those of Iron Maiden...

Bowie Maiden

Or perhaps a rumination on death and decay, creating a virtual collage of things you might find decaying on a forest floor...

Or the same concept but using magical things you might find on a beach (or an installation at the Tate Modern):

Beach

Or a beautiful witch with thorns and roses in their hair...

Witch

Or take the iridescent colours of a peacock feather and apply them to an exotic virtual model...

Peacock

If you want to view more, then please head over to my AI Art gallery.

]]>
God Mode Updates for Umbraco 10 New features for God Mode for Umbraco 10, including viewing dependency injected services; listing Content Finders and URL providers; improved partial detection and more! https://www.diplo.co.uk/blog/web-development/god-mode-updates-for-umbraco-10/ http://www.diplo.co.uk/2257.aspx Wed, 10 Aug 2022 00:00:00 GMT With the move to .NET 6 (.NET Core) that Umbraco 10 brought, I've had an excuse to work on some new features for my God Mode Umbraco package. If you've no idea what I'm talking about then check out my post about bringing it to .NET Core with Umbraco 9. Working with the .NET 6 codebase is a lot nicer, so it's given me a bit more incentive to add new stuff. Incidentally, if you have any ideas of what you'd like to see in the package, then let me know.

What's New?

I'll summarise the main new features in God Mode 10.2.0.

  • The ability to view all the services injected into the IoC container, along with their implementations and lifetime (scope). For instance, if you register IMyService in a composer with the concrete implementation MyFabService, it will be listed and you can see whether it's registered as Singleton, Scoped or Transient (see the sketch after this list). So you can always see which implementations of services are being used when you use dependency injection. This covers all services - your own, Umbraco's and any added by 3rd parties.
  • The option to list all Content Finders - handy if you have created your own custom ones
  • A list of all URL Providers that the site uses (again, handy if you are using custom ones...)
  • More diagnostics - including a new Umbraco Infrastructure setting that lists things like registered Background Tasks, Sections, Dashboards and Middleware with their implementing type. Very geeky!
  • Much improved Partial view parsing when detecting what partials are being used - so it doesn't matter if the partial is added async or via a Tag Helper, it should still be found.
  • Likewise, better View Component detection
  • There have also been some minor UI updates and styling fixes.
  • This version also contains the new Copy DataType feature I added in the previous release. This allows you to easily copy any Umbraco datatype with a single right click. This is handy for when you want to clone a particularly complex datatype and keep all the configuration. I use this with the Block List editor a lot!
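
To make that concrete, here's a minimal, hypothetical composer (IMyService and MyFabService are just the illustrative names used in the list above) showing the kind of registration God Mode will list:

using Microsoft.Extensions.DependencyInjection;
using Umbraco.Cms.Core.Composing;
using Umbraco.Cms.Core.DependencyInjection;

public interface IMyService { }
public class MyFabService : IMyService { }

public class MyComposer : IComposer
{
    public void Compose(IUmbracoBuilder builder)
    {
        // God Mode would show this as: IMyService -> MyFabService (Scoped)
        builder.Services.AddScoped<IMyService, MyFabService>();
    }
}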

Service Browser Example

Here's an example of the service browser in action, letting you see what services are registered. In this example I've typed "ICache" in the search box to limit the results to all interfaces that start with ICache. You can then see all the implementations and their lifetime.

IOC Service Browser

The 10.2.0 release is available from all good NuGet repos right now! And, of course, it's free and open source.

]]>
Diplo Translator for Umbraco Diplo Translator is a free Umbraco 10 package that uses an AI translation service to automatically translate all empty Dictionary items in your Umbraco site in one button click. It provides a lightning-fast way to translate all your dictionary content using AI translation. https://www.diplo.co.uk/blog/web-development/diplo-translator-for-umbraco/ http://www.diplo.co.uk/2256.aspx Tue, 05 Jul 2022 00:00:00 GMT If you ever build a multilingual site for Umbraco then you'll know about the Dictionary. It's the place in Umbraco where you can create a "kind-of" multilingual key/value store for all the odd words and phrases you might want translating.

It's not really changed much across many incarnations of Umbraco and, whilst it works, it can be a little awkward to populate. You have to edit each item individually and provide translations for every language. I recently built a site that has 27 different languages and around 150 dictionary keys - that means a translator needs to provide over 4,000 different translations and then an editor needs to populate them all. It's a long-winded, tedious task and can be quite error-prone, too. So this got me thinking... There has to be a better way!

The Better Way

I'd played around with Microsoft Azure Cognitive Services for various tasks before - such as providing text analysis, generating summaries and sentiment analysis. But I also knew Microsoft offered their own AI-powered translation service, Microsoft Translator. Now, there used to be a time when automated translation services were pretty poor - they just couldn't handle the nuances of "real" speech and fell flat. But a lot has changed, and now you'd be hard pressed to notice the difference between an AI translation and one from a professional human translator. So my thought was...

"What if we could use an AI translation service to automatically translate all the empty Dictionary items in Umbraco?"

Luckily, Umbraco is very extensible and I knew it was something I could build, so I built a quick proof of concept and then set about building a package. I decided to build the package for the (at the time) spanking new Umbraco 10 - running on .NET 6. Mainly because I wanted to play around with the latest Umbraco, but also to help me learn more about .NET Core.

Introducing... Diplo.Translator

"This is a package for Umbraco 10 CMS that adds a Translate option to the Umbraco Dictionary within the Translation tree. This option can be used to automatically translate all the empty dictionary items in the tree from the selected language using an AI-based translation service. By default this is Microsoft Translator. In future other providers may be supported."

Usage

It's pretty simple to use. After installation and configuration you just select the language you want to translate from (which defaults to your default language) and then click the "Translate" button. It will then go through every dictionary item that doesn't have a translation and translate it using the Microsoft Translator API. You can see an example of the dialog below:

Umbraco screenshot of Translator menu  

The nice thing about Microsoft Translator is that it's easy to set up if you already have an Azure Portal account. And you can just use the free tier, which gives you a very generous "2M chars of any combination of standard translation and custom training free per month". Given that most dictionary values are quite short, it won't cost most people anything to use.
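
For the curious, a call to the Translator (v3) REST API boils down to something like the following sketch - the package wraps this kind of request for each empty dictionary value (the key and region below are placeholders from your own Azure resource):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TranslateDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post,
                "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=fr");

            request.Headers.Add("Ocp-Apim-Subscription-Key", "YOUR_TRANSLATOR_KEY");
            request.Headers.Add("Ocp-Apim-Subscription-Region", "YOUR_AZURE_REGION");
            request.Content = new StringContent("[{ \"Text\": \"Read more\" }]", Encoding.UTF8, "application/json");

            var response = await client.SendAsync(request);

            // Returns JSON along the lines of: [{"translations":[{"text":"...","to":"fr"}]}]
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}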

Video Demo

I built a quick demo and created a video from it. It's of an older build, but it should give you an idea of how it works and how quickly it can churn through translations:

Where can I get it?

Like all my packages it is free to use and open-source. The easiest way to use it is to install from NuGet:

NuGet: www.nuget.org/packages/Diplo.Translator/

dotnet add package Diplo.Translator

Our Umbraco: https://our.umbraco.com/packages/backoffice-extensions/diplo-translator/

Source Code: https://github.com/DanDiplo/Diplo.Translator

]]>
Diplo Audit Log Viewer for Umbraco 10 Diplo Audit Log Viewer for Umbraco 10 CMS allows you to easily view and search the Content changes and Audit data that is stored in your Umbraco site's umbracoLog and umbracoAudit tables. It creates a custom tree within the Settings section that lets you view the contents of both those tables and presents the results in a filterable, sortable and searchable paginated list. https://www.diplo.co.uk/blog/web-development/diplo-audit-log-viewer-for-umbraco-10/ http://www.diplo.co.uk/2253.aspx Mon, 27 Jun 2022 00:00:00 GMT This is just a quick note to say I've updated my Diplo Audit Log Viewer package for Umbraco 10 (and .NET 6)...

Functionally it's virtually identical to the v8 version, which you can read all about in my previous post - Diplo Audit Log Viewer for Umbraco 8. So if you want to know more, then please read that post. But to quickly recap:

Content Audit Logs

The Content Audit Logs tree allows you to view, filter, search and paginate through all the umbracoLog table entries. This is the log of all changes to content that is made in Umbraco.

You can filter by:

  • The type of content change eg. Save, Publish, Delete etc.
  • The user who performed the change
  • The content node that was changed (using a handy page picker)
  • Date range
  • You can also perform a free-text search of the log contents

Where a change affected a content node then the Id of the node is shown along with its content type (eg. Document, Media, Member) and clicking the Id of the node will take you directly to edit the content. This works for content, members, media, document types, data types etc.

Content Log Umbraco v10

Audit Trail Logs

The Audit Trail Logs tree allows you to view, filter, search and paginate through all the umbracoAudit table entries. This is the log of all audit events, such as log-ins, changes of password or where a User changes the actual document types or member types etc.

You can filter by:

  • The type of event eg. umbraco/user/sign-in/login
  • The user who performed the action
  • The user who the action was performed upon (if relevant)
  • Date range
  • You can also perform a free-text search of the log contents

Audit Log Umbraco v10

Source Code

Like all my packages, this one is free and open source - you can find the latest source here: https://github.com/DanDiplo/Umbraco.AuditLogViewer

]]>
Diplo Media Download for Umbraco This is an Umbraco 8 package that allows you to download files from the Umbraco Media Library as a zip archive. You can download both files and folders. When downloading a folder it can include any nested folders and preserves the correct paths (so in theory you could download your entire media library in one zip archive file!). https://www.diplo.co.uk/blog/web-development/diplo-media-download-for-umbraco/ http://www.diplo.co.uk/2248.aspx Mon, 04 Apr 2022 00:00:00 GMT Today I release a new package for Umbraco that allows you to easily download files from the Umbraco media library in one click. This is an Umbraco 8 package that allows you to download files from the Umbraco Media Library as a zip archive. You can download both files and folders. When downloading a folder it can include any nested folders and preserves the correct paths (so in theory you could download your entire media library in one zip archive file!).

It came about when I wanted to migrate files from an existing site to a new site. Whilst you could go to the server and copy or FTP the contents of the /Media/ folder in Umbraco, you'd find one big folder full of single-level folders with obscure names like 0bkdaww3. The nested structure of the Media library isn't represented on disk - it's preserved in the database instead. So there's no simple way of downloading a file (or the entire folder) and preserving the nested structure you see in the Media tree.

That's where this package comes in - it adds a new Download action to the Media library menu that lets you download either a single file or the current folder as a zip archive. You also have the option of including any nested folders beneath the current folder - and it preserves the structure in the zip archive (to any level).

Note: If you have a large Media library then downloading the entire thing as a single zip archive could take some time!
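
As a rough illustration of the approach (not the package's actual code), SharpZipLib - which already ships with Umbraco 8 - lets you preserve a virtual folder structure by writing each file's relative path into its zip entry:

using System;
using System.Collections.Generic;
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

public static class MediaZipper
{
    // 'files' pairs a virtual path (e.g. "Products/Shoes/boot.jpg") with its bytes
    public static void ZipMedia(Stream output, IEnumerable<(string RelativePath, byte[] Bytes)> files)
    {
        using (var zip = new ZipOutputStream(output))
        {
            zip.SetLevel(3); // compression level 0-9

            foreach (var file in files)
            {
                // CleanName normalises the path so the nested folder structure survives
                zip.PutNextEntry(new ZipEntry(ZipEntry.CleanName(file.RelativePath)) { DateTime = DateTime.Now });
                zip.Write(file.Bytes, 0, file.Bytes.Length);
                zip.CloseEntry();
            }

            zip.Finish();
        }
    }
}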

Screenshots

Browse to the folder you want to download in the Media library:

Media Folder

Select the folder and choose the Download action from the Actions menu (or right click on the folder or file in the tree).

Choose your options in the dialog and confirm:

Download Zip Dialog

Then you have your zip archive downloaded nicely on disk, with all the sub-folders preserved:

Zip File Archive of Media

Why Umbraco 8?

You may be wondering why this package is just for Umbraco 8. Well, the simple reason is that I personally have to manage a lot more Umbraco 8 sites than 9 - and there are probably a lot more Umbraco 8 sites out there. Also, Umbraco already ships with a zip library in the form of SharpZipLib, which was handy. But, once it's thoroughly tested on 8, I will look to migrate it to v9.

Caveats

But what if your media isn't stored on disk in the conventional way? What if it's in Blob storage or you have a custom Umbraco File System Provider? Well, I'll be honest; I don't know. I suspect it may well not work. But if you know anything definite please let me know!

Download

NuGet: https://www.nuget.org/packages/Diplo.MediaDownloader/ 

Our Umbraco: https://our.umbraco.com/packages/backoffice-extensions/diplo-media-download/ 

Like all my packages, it is open-source and completely free. You can find the source code below (and you are welcome to contribute):

GitHub: https://github.com/DanDiplo/Umbraco.MediaDownload

]]>
God Mode comes to Umbraco 9 I've released my popular Umbraco CMS developer package God Mode for the new Umbraco 9, fully running on .NET Core. It's still free, open-source and now even better than ever! https://www.diplo.co.uk/blog/web-development/god-mode-comes-to-umbraco-9/ http://www.diplo.co.uk/2231.aspx Wed, 29 Sep 2021 00:00:00 GMT Diplo God Mode makes Umbraco 9 developers invincible! Oh, and also Umbraco 10 developers too, now.

With the release of Umbraco 9 I've started to work on migrating my packages over to this new version of the .NET CMS. If you aren't aware, Umbraco 9 has a lot of architectural changes, with the biggest being it now runs on .NET 5 (.NET Core) on ASP.NET Core. I won't go into what all these changes are, as they are well documented, but suffice to say they mean a lot has changed "under the hood". The good news (for us package devs) is that the Umbraco back-end still runs on good ol' AngularJS so most of the changes you need to make are in C# rather than in the front-end UI layers, which can pretty much stay the same. This means that the v9 version of God Mode is functionally identical to the v8 version and even has a few extra additions (such as using Infinite Editing for all editors, a new Tag Browser and being able to filter by element types and variants).

I won't go into a lot of detail on what God Mode does, as this is covered in my post about the v8 version here, but the top-level summary is:

"God Mode is a developer tool within the Settings section of Umbraco that allows you to browse, query and search your document types and compositions; your templates and partials; your datatypes and property editors; your media library; your custom controllers and models. It provides diagnostics about your site and the server it is hosted on."

Because of changes in Umbraco 9/10, it's only available as a NuGet package here: https://www.nuget.org/packages/Diplo.GodMode/ 

You can add it simply with the .NET CLI:

dotnet add package Diplo.GodMode

And, of course, you can find out more in the Umbraco Package repository on Our Umbraco: https://our.umbraco.com/packages/developer-tools/diplo-god-mode/

Screenshots

For those who prefer to see rather than read, here's a few example screenshots from the v9/10 version:

These show the diagnostics you can access in God Mode, just to show it really is running on Umbraco 9...

Umbraco Diagnostics

And this shows some of the diagnostics you can get of the server hosting your site:

Server Diagnostics

The Tag Browser is a new addition. It lets you see all the tags you have created in Umbraco and what content is associated with each tag. It works with content and media and makes use of Infinite Editing to easily edit the associated content. You can also delete tags directly from the interface and this also removes their association with content. There's even a list of all orphaned tags shown at the bottom of the list.

Tag Browser

And old favourites like the Document Type browser are still there, of course, letting you search and filter with lots of options:

Document Type Browser

And here's some of the detail you can find for an individual document type:

Document Type Detail

Source Code

Like all my packages I still like to keep them open-source and free as a way of giving back to the Umbraco Community, which is still one of the friendliest around.

This continues with the v9 version which can be found on GitHub in this repo: https://github.com/DanDiplo/Umbraco.GodMode/tree/v9

The v10 version can be found on GitHub in this repo: https://github.com/DanDiplo/Umbraco.GodMode/tree/v10

If you fancy making a PR or find an issue, that's the place to go! Have fun :)

]]>
Building Multilingual Websites in Umbraco 8 I was recently fortunate enough to get an article published in the 2020 edition of 24 Days In Umbraco - a seasonal tradition that brings together different authors from the community in one big bumper month of knowledge sharing! Find out what I wrote about and how it can help you build a multilingual website in Umbraco 8... https://www.diplo.co.uk/blog/web-development/building-multilingual-websites-in-umbraco-8/ http://www.diplo.co.uk/2222.aspx Fri, 04 Dec 2020 00:00:00 GMT Umbraco

I was recently fortunate enough to get an article published in the 2020 edition of 24 Days In Umbraco - a seasonal tradition that brings together different authors from the community in one big bumper month of knowledge sharing!

I decided to write about my experiences building multilingual websites in Umbraco 8. I'm not going to copy what I've already written, but just give a quick precis and then you can read the full article here.

The article is aimed at developers who are familiar with Umbraco but have maybe not built a multilingual site before (or have only built them in Umbraco 7, which is quite different). The article covers:

  • Advice on what you should discuss with your client before starting the build
  • Issues you may encounter with repeatable content (eg. Nested Content, Grid or Block List Editor)
  • How Examine search changes for multilingual sites
  • How to deal with caching across languages
  • Tips on how to generate HrefLang tags in Umbraco (see the sketch after this list)
  • Some code to get you started making a language selector to easily swap between the configured languages in your Umbraco 8 site
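
As a flavour of the hreflang tip, here's a simplified sketch (assuming a page that varies by culture in Umbraco 8 - the full article goes into more detail):

@inherits Umbraco.Web.Mvc.UmbracoViewPage
@using Umbraco.Web
@using Umbraco.Core.Models.PublishedContent

@* Output an alternate link for each culture the current page is published in *@
@foreach (var culture in Model.Cultures.Keys)
{
    <link rel="alternate" hreflang="@culture" href="@Model.Url(culture, UrlMode.Absolute)" />
}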

So for all the details please have a read of: https://24days.in/umbraco-cms/2020/multilingual-websites-in-umbraco-8/ 

]]>
God Mode Umbraco 8 Package God Mode for Umbraco 8 allows you to browse, query and search your document types and compositions; your templates and partials; your datatypes and property editors; your media library; your custom controllers and models. It provides diagnostics about your Umbraco site and the server it is hosted on. https://www.diplo.co.uk/blog/web-development/god-mode-umbraco-8-package/ http://www.diplo.co.uk/2184.aspx Wed, 24 Jul 2019 00:00:00 GMT Diplo God Mode makes Umbraco developers invincible!

This custom tree for the Settings section of Umbraco 8 allows you to browse, query and search your document types and compositions; your templates and partials; your datatypes and property editors; your media library; your custom controllers and models. It provides diagnostics about your site and the server it is hosted on.

Note: This is the Umbraco 8 release. For the new Umbraco 9 version please read this blog post instead. And for the old Umbraco 7 version please read this blog post instead.

*** Runner up in the 2022 Umbraco Package Awards in the Best Developer Tool category! ***

Rationale

As a developer working with Umbraco you often need to be able to work out things like:

  • Which document types use this property editor? Or which use a specific instance (data type)?
  • What templates does this partial appear in?
  • Which document types use a specific property?
  • What are the largest items in my Media Library?
  • Which controllers does this site use and what type are they?
  • Which document types inherit from a given composition?
  • Which of my partials are cached and in what template?
  • How is Umbraco configured? How is my server configured?
  • What Controllers and Models are being used in the site?

God Mode is a developer tool for Umbraco that answers these questions. It is a complete rebuild of my Umbraco 7 version, rewritten from scratch to work with Umbraco 8. It features a new UI and uses some fancy Umbraco 8 features such as dependency injection. Under the hood a lot has changed in Umbraco 8, so this required changing almost every element that interacted with Umbraco services and the various data layers.

Features

  • Easily see which document types inherit from any of your compositions
  • See which document types use which property editor or data type instance
  • See which partials are used by all your templates and which of those are cached
  • Find out which data types are being used (or not!)
  • Browse all media in the Media Library and sort it by file type, size or media type
  • See which controllers (Surface, API and RenderMvc) are being used and in what namespaces and DLLs
  • View all generated models (that inherit from PublishedContentModel)
  • Browse all Umbraco Settings, plus all Server settings and MVC settings
  • Look at any assembly in your site and see which types implement a particular interface
  • Plus lots more!

Screenshots

Stop talking and show me some pretty pictures you have expertly taken using the Windows snipping tool! OK, since you ask...

Document Type Browser

God Mode Doc Type Browser

Document Type Browser Detail

Document Type Detail for Content Page

Data Type Browser

God Mode Data Type browser

Browsing Surface Controllers

God Mode Surface Controller browser

Site Diagnostics

God Mode Umbraco diagnostics and settings

Video Demo

Watch as I fumble about trying to demonstrate this thing...

Download

OK, that's enough screenshots. It looks like the most amazing thing I've ever seen; where can I get my hands on the binaries?

NuGet: https://www.nuget.org/packages/Diplo.GodMode/

Our Umbraco: https://our.umbraco.com/packages/developer-tools/diplo-god-mode/

Source Code: https://github.com/DanDiplo/Umbraco.GodMode/

]]>
Diplo Audit Log Viewer for Umbraco 8 Diplo Audit Log Viewer for Umbraco 8 allows you to easily view, filter and search the content and audit log data that is stored in the UmbracoLog and UmbracoAudit tables in your site's database. This table contains all changes that are made in your site. This log viewer allows you to view this data in an easy-to-use interface that integrates into the Umbraco Settings tree. https://www.diplo.co.uk/blog/web-development/diplo-audit-log-viewer-for-umbraco-8/ http://www.diplo.co.uk/2163.aspx Tue, 25 Jun 2019 00:00:00 GMT A Little History

Way back in 2016 I created a package for Umbraco 7 that allowed you to view the contents of the umbracoLog table. You can find more about that in the blog post I wrote at the time. With the release of Umbraco 8 earlier this year (2019) I decided it was time to start looking at moving some of my packages over to v8 and decided this package would be a good choice. However, I hadn't reckoned on just how much in v8 had changed - especially in terms of working with the more low-level aspects of Umbraco, such as using the back-office services and interacting with databases. So what started out as a simple port became a wholesale rewrite! If you are a developer, I'm sure you are familiar with this!

So What Does It Do?

Basically if you install the package you will get two new trees in the Settings section of Umbraco within the Third Party area, like this:

Audit Tree in Settings

The Content Audit Logs tree allows you to view, filter, search and paginate through all the umbracoLog table entries. This is the log of all changes to content that is made in Umbraco.

You can filter by:

  • The type of content change eg. Save, Publish, Delete etc.
  • The user who performed the change
  • The content node that was changed (using a handy page picker)
  • Date range
  • You can also perform a free-text search of the log contents

Where a change affected a content node then the Id of the node is shown along with its content type (eg. Document, Media, Member) and clicking the Id of the node will take you directly to edit the content. This works for content, members, media, document types, data types etc.

The Audit Trail Logs tree allows you to view, filter, search and paginate through all the umbracoAudit table entries. This is the log of all audit events, such as log-ins, changes of password or where a User changes the actual document types or member types etc.

You can filter by:

  • The type of event eg. umbraco/user/sign-in/login
  • The user who performed the action
  • The user who the action was performed upon (if relevant)
  • Date range
  • You can also perform a free-text search of the log contents

For both logs you can order the data by the relevant column by clicking it (clicking it again reverses the order). You can also step through the entries via the pagination controls.

Screenshots

As Leonardo da Vinci once said, "A screenshot is worth a thousand pizzas"...

Content Log Viewer

Content Log

Filtering By User and Searching

Content Log Filtering

Audit Log Viewer

Content Log

Filtering Audit Log by Date

Audit Log Filtering by Date

What's New for v8

So without going into loads of technical detail, I decided to rewrite most of the code so:

  • I used dependency injection to register my services (using Composing) so they could be injected into the API controllers (see the sketch after this list)
  • I used interfaces for the services (so in theory you can swap them out for your own implementation if you really want!)
  • I rewrote the database access code to work with Umbraco 8's implementation of NPoco - this has changed a lot from v7 and isn't documented anywhere!
  • I rewrote most of the AngularJS code in line with the way Umbraco recommend
  • I restyled the views to match Umbraco 8 styling
  • I added a brand new viewer that allows you to look at the new(ish) umbracoAudit table that was added at some point in late v7
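
As a minimal sketch of the Composing registration mentioned above (the type names here are illustrative rather than the package's actual ones):

using Umbraco.Core;
using Umbraco.Core.Composing;

public interface ILogDataService { /* ... */ }
public class LogDataService : ILogDataService { /* ... */ }

public class AuditLogComposer : IUserComposer
{
    public void Compose(Composition composition)
    {
        // Registered behind an interface so it can be injected into the API
        // controllers - and swapped for your own implementation if you want
        composition.Register<ILogDataService, LogDataService>(Lifetime.Request);
    }
}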

How Do You Use It?

You can install it either via NuGet or as a package from Our Umbraco. See full details below.

Viewing Content Logs

  • Click the Content Audit Logs tree heading to view a table of all changes to content.
  • You can then use the filters at the top to filter the data. You can also use the quick filters in the tree to quickly filter by date range or by the top pages to be modified recently.
  • You can order any column by clicking on its heading. Clicking again reverses the order.
  • If you see an entry with an "eye" symbol next to the Action name you can click the row to view the log comment text.
  • If an entry has a value under the Node column you can click the ID and it will take you to edit the associated content that has been changed - whether this be a page, a document type or media etc.
  • Use the pagination to move between pages of log data.

Viewing Audit Logs

  • Click the Audit Trail Logs tree heading to view a table of all audit trail events.
  • You can then use the filters at the top to filter the data. You can also use the quick filters in the tree to quickly filter by date range.
  • You can order any column by clicking on its heading. Clicking again reverses the order.
  • Use the pagination to move between pages of log data.

Where Can I Get It?

NuGet: https://www.nuget.org/packages/Diplo.AuditLogViewer/

Package: https://our.umbraco.com/packages/developer-tools/diplo-audit-log-viewer/

GitHub: https://github.com/DanDiplo/Umbraco.AuditLogViewer

Note: This is for Umbraco 8 only! If you need a version for Umbraco v7 then please go here instead

]]>
Diplo Dictionary Editor for Umbraco CMS An Umbraco package that creates a custom section for editing Dictionary values in Umbraco. It allows easy editing of all dictionary items and also allows the dictionary to be exported and imported in CSV format. https://www.diplo.co.uk/blog/web-development/diplo-dictionary-editor-for-umbraco/ http://www.diplo.co.uk/2088.aspx Fri, 23 Mar 2018 00:00:00 GMT Umbraco

Umbraco is a great CMS with some cool features for creating multilingual sites. One of these features is the Dictionary tree that resides in Settings, which allows a developer to add an entry comprising a "key" and then translations for each of the installed languages.

However, a common issue that crops up is that you often want to let the back-end Editors change these values - but you don't want to grant them access to the entire Settings area. You also don't want to let them edit the keys or create new values - since these won't work unless they're added to templates.

Whilst there have been a few packages released to address this, these were all for Umbraco 6 and now either don't work or don't take advantage of the UI advances in Umbraco 7. So... and you can probably guess where this is going... I bit the bullet and developed my own custom Umbraco Dictionary Editor package. This has a number of features...

Note: This is just for Umbraco 7. If you use Umbraco 10 you might be interested in my AI-based automatic translation for the Dictionary - if so, see Diplo Translator.

Diplo Dictionary Editor Features

  • Created as a custom section so you can grant granular access to it - so only Editors interested in translation can access it, for instance
  • Allows you to edit all Dictionary items within a single page using a quick and intuitive interface (powered by AngularJS)
  • Bulk editing - you can edit multiple keys and values within one page and only need to save once
  • Allows sorting the dictionary so that it is either nested or alphabetic
  • Allows filtering by language so you can limit editing to a particular language
  • Has a quick, inline search function to quickly locate keys
  • Works with nested dictionary values to any depth
  • Only updates values that have changed when saving
  • Only allows values to be changed - prevents editors from adding new values
  • Allows exporting the entire dictionary (or just one selected language) to a CSV file for off-line editing
  • Allows importing a CSV file back
  • For Umbraco 7.7 and up

You can check out some screenshots below or, if you can't wait to try it, just go to the Downloads!

Screenshots

They say a picture is worth a thousand words and I'm a lazy writer so...

The Default Editor Screen

Here you can view all Dictionary keys. To edit an item simply click it and it will expand to show the editing interface for each language.

Default Editor

Searching an Item and Editing It

Below you can see filtering by a keyword ("app") and editing one of the located items for all languages:

Dictionary Search & Edit

Filtering by a Specific Language

You can also limit editing to a particular language selected from a dropdown:

Filter by Language

Sorting Alphabetically

You can display the dictionary keys either nested or, as below, in alphabetical order

Sort Alphabetically

Exporting to CSV

You can export the entire dictionary (or just a specific language) to a CSV file so it can be edited off-line or sent to a translator

Export to CSV

Importing from CSV

And, of course, you can import the CSV back into Umbraco:

Import from CSV

Downloads & Source Code

Note: This is only supported for Umbraco 7.7 and later

You can download the package from NuGet at https://www.nuget.org/packages/Diplo.DictionaryEditor/

Or as an Umbraco package from https://our.umbraco.org/projects/backoffice-extensions/diplo-dictionary-editor/

The source code is also available on GitHub at https://github.com/DanDiplo/Diplo.DictionaryEditor 

If you have any issues or feature requests then please post them on the issue tracker on GitHub.

]]>
I'm MVP (Yeah, you know me) I'm very proud and humbled to have been awarded the honorary title of Umbraco MVP at this year's Codegarden 17 festival in Denmark. https://www.diplo.co.uk/blog/web-development/im-mvp-yeah-you-know-me/ http://www.diplo.co.uk/2055.aspx Fri, 09 Jun 2017 00:00:00 GMT MVP.jpg

Every year the Codegarden festival takes place in Denmark, celebrating our favourite CMS, Umbraco. Even though I couldn't make it this year I was very proud to find out from the "Chief Unicorn" himself, Niels Hartvig, that I'd been awarded the prestigious Umbraco MVP award ("Most Valued Person").

Umbraco MVP

I really couldn't believe it, since there are so many outstanding people who work with Umbraco, but I was greatly honoured. Whilst I couldn't make it to Odense to pick up my award, it was shipped over to me by the lovely Martin Wülser Larsen of Umbraco HQ, to whom I'm very grateful. Not only did I get a spiffing trophy, which proudly sits on my desk, I got another Umbraco t-shirt to add to my collection and some of the trendy new purple stickers! So #h5yr to everyone involved and a big thanks - you really made me feel proud of the Umbraco community.

And here she is, the lovely trophy...

Umbraco MVP Trophy

Umbraco MVP 2019

]]>
Diplo Audit Log Viewer for Umbraco 7 CMS Diplo Audit Log Viewer for Umbraco CMS (7.4 and above) allows you to easily view and search the audit data that is stored in the UmbracoLog table in your site's database. This table contains all changes that are made to all content in your site. This log viewer allows you to view this data in an easy-to-use interface that integrates into the Umbraco Developer tree. https://www.diplo.co.uk/blog/web-development/diplo-audit-log-viewer-for-umbraco/ http://www.diplo.co.uk/2017.aspx Sat, 19 Nov 2016 00:00:00 GMT Umbraco AngularJS

Important Note

This is about the Umbraco v7 version of my Audit Log Viewer for Umbraco.

If you want the latest version for Umbraco v8 then please read Diplo Audit Log Viewer for Umbraco 8 instead.

If you want the version for Umbraco 10 then see Diplo Audit Log Viewer for Umbraco 10.

The Challenge

Those with long memories will remember that in Umbraco 4 there was an umbracoLog table that contained every change made in Umbraco - both to content (Save, Publish, Delete etc) and also trace data raised by Umbraco (exceptions, debug messages, startup info etc). Sometime in Umbraco version 6 the trace data was changed to use log4Net and logged to daily plain-text files in /App_Data/Log/ - and this led me to create perhaps my most popular package - Diplo Trace Log Viewer. But this splitting of audit and trace data into two different channels meant that there was still no easy way of viewing the audit data in the umbracoLog table - sure, you can view the audit trail for individual pages, but you can't view it all in one place. Well, not until now...!!! (OK, I believe other people have written log viewers for this table, but allow me a little hyperbole, please!).

The Solution

So my aim was to bring the same style interface that Diplo Trace Log Viewer has, but this time for viewing audit data - this includes every change every user makes - whether it be publishing a page, deleting some media, editing a content type or even installing a package. All that data is there, but not easy to access (unless you enjoy writing lots of SQL...)

One of the main differences between this and my other package is that all the data is stored in the database - this makes it easier in many respects (no need for 100+ character regexes to parse the file) but also raises a few challenges - you have to ensure the SQL you write works against both SQL Server and SQL CE (and maybe MySQL if anyone still uses that!). You also need to implement server-side pagination, since I know from experience the log table can grow to many thousands of entries. Luckily Umbraco's customised version of PetaPoco/nPoco allows easy pagination using the Umbraco.Core.Persistence.Page<T> class (sketched below), which works well in conjunction with the umbPagination Angular directive added in 7.4 to generate database-agnostic pagination.
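
To give a flavour of that paging code, here's a simplified sketch (not the package's exact implementation - LogEntry is a hypothetical POCO mapped to the umbracoLog table):

using System;
using Umbraco.Core;
using Umbraco.Core.Persistence;

public class LogEntry
{
    public int Id { get; set; }
    public int NodeId { get; set; }
    public DateTime Datestamp { get; set; }
    public string LogHeader { get; set; }
    public string LogComment { get; set; }
}

public class AuditLogRepository
{
    public Page<LogEntry> GetLogPage(long pageNumber, long itemsPerPage)
    {
        var db = ApplicationContext.Current.DatabaseContext.Database;

        var sql = new Sql()
            .Select("*")
            .From("umbracoLog")
            .OrderBy("Datestamp DESC");

        // Page<T> generates database-agnostic paging SQL (SQL Server, SQL CE etc.)
        return db.Page<LogEntry>(pageNumber, itemsPerPage, sql);
    }
}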

Features

  • Filter log data by log type (Save, Publish, Delete etc)
  • Filter by user (ie. person responsible)
  • Filter by date or date range (ie. all audit events that occurred within a given period)
  • Filter by node (with easy to use content-picker)
  • Search the log data comments by keyword
  • Handy quick filters for the more common audit tasks
  • Uses fast, server side pagination of data so it should be quick no matter how large your log table has become
  • Angular interface that integrates with Umbraco
  • Quick "edit" links to users and content

Show Me The Screenshots

Showing the quick filters in the sidebar and the main interface and data on the right

Filters

Uses server-side pagination with an Angular directive

Pagination

Allows you to view more detail of each log entry

Log Detail

OK, Where Can I Get It?

You can download it from either NuGet (recommended) or via a traditional Umbraco package. And, of course, the source is freely available (but please give me a shout if you use it for anything interesting, K?).

NuGet: https://www.nuget.org/packages/Diplo.AuditLogViewer/1.0.4

Umbraco Package: https://our.umbraco.org/projects/developer-tools/diplo-audit-log-viewer/ (use 1.0.4 version for Umbraco v7)

Source Code: https://github.com/DanDiplo/Umbraco.AuditLogViewer/tree/v7

]]>
God Mode Umbraco 7 Package God Mode is a custom Umbraco 7.4 tree aimed at Developers to make it easy to locate, search and query your Umbraco 7 assets (doctypes, templates, property editors, media, controllers, models etc). https://www.diplo.co.uk/blog/web-development/god-mode-umbraco-7-package/ http://www.diplo.co.uk/1975.aspx Thu, 09 Jun 2016 00:00:00 GMT Umbraco

Diplo God Mode makes Umbraco 7 developers invincible!

This custom tree for the Developer section of Umbraco allows you to browse, query and search your document types and compositions; your templates and partials; your datatypes and property editors; your media library; your custom controllers and models.

Important Note: This is the older version for Umbraco 7. The latest version is for Umbraco 9 and more info can be found here. For the Umbraco 8 version please read this post.

Rationale

As a developer working with Umbraco 7 you often need to be able to work out things like:

  • Which document types use this property editor? Or which use a specific instance (data type)?
  • What templates does this partial appear in?
  • Which document types use a specific property?
  • What are the largest items in my Media Library?
  • Which controllers does this site use and what type are they?
  • Which document types inherit from a given composition?
  • Which of my partials are cached and in what template?
  • How is Umbraco configured? How is my server configured?

To answer these questions I built a quick AngularJS developer tree that allowed me to run some of these queries. It was very rough, performed terribly and looked bad. But I knew it could be very useful, so over time I rebuilt it "properly", gave it a nice UI and tweaked the performance until it's lightning fast. This blog post announces the release of this package (both traditional Umbraco package and NuGet) to other developers. If you are impatient then watch the YouTube video demo.

Features

  • Easily see which document types inherit from any of your compositions
  • See which document types use which property editor or data type instance
  • See which partials are used by all your templates and which of those are cached
  • Find out which data types are being used (or not!)
  • Browse all media in the Media Library and sort it by file type, size or media type
  • See which controllers (Surface, API and RenderMvc) are being used and in what namespaces and DLLs
  • View all generated models (that inherit from PublishedContentModel)
  • Browse all Umbraco Settings, plus all Server settings and MVC settings
  • Look at any assembly in your site and see which types implement a particular interface
  • Plus lots more!

Screenshots

God Mode document type browser

Umbraco 7 Document Type Browser

Umbraco Settings Diagnostics

Demonstration

Demo based on the excellent LocalGov Starter Kit by Kevin Jump.

Download

Note this is only for Umbraco 7.4.3 or above! Latest v7 release requires v7.7.

For the Umbraco 8 version please read this post.

NuGet: https://www.nuget.org/packages/Diplo.GodMode/

PM> Install-Package Diplo.GodMode

Package: https://our.umbraco.org/projects/developer-tools/diplo-god-mode/

]]>
404 Page Finder for Multi-Site Umbraco Installations How to create a custom IContentFinder to use as a custom 404 Page Not Found handler in Umbraco that works when you have a multi-domain site that requires different 404 pages for each site (such as a multilingual website). https://www.diplo.co.uk/blog/web-development/404-page-finder-for-multi-site-umbraco-installations/ http://www.diplo.co.uk/1971.aspx Wed, 13 Apr 2016 00:00:00 GMT Umbraco

Out-of-the-box Umbraco helpfully lets you define a custom 404 Not Found page by configuring the Node ID of the page you wish to use in the /config/umbracoSettings.config file. This works great for when you have just one site in an installation. But what happens if you have a multi-site Umbraco set-up? For instance, if you created a multilingual site where each site has its own home page and you want the 404 Page to be translated into the site's language? I'm talking about a structure like this:

Content
  Home GB
    Pages
  Home FR
    Pages
  Home DE
    Pages

What you want in this instance is a different 404 Page under each of the three sites. But Umbraco will only let you have one! How do you get around this?

IContentFinder to the Rescue!

Luckily Umbraco is very extensible and the clever developers added an interface called IContentFinder. This is very simple and has one method you need to implement:

public interface IContentFinder
{
  bool TryFindContent(PublishedContentRequest contentRequest);
}

Essentially what this does is allow you to intercept the current published content request and do something with it. In our case what we want to do is handle the instance when it is NULL - that is, when Umbraco can't find any Published Content to serve. This is where we want to intervene and serve up our custom 404 page - but one related to the site the viewer is in. So if they are in the French (FR) site they get the French version, not the English (GB) version.

The way we are going to do this is by creating a new Document Type called PageNotFound (though feel free to use whatever name you want) - the main thing is that you just have one instance of this document type per site placed under the home page. That way we can easily find it relative to the site the user is in, using a simple query. After that it's just a case of creating the custom IContentFinder to find the correct instance and registering it in the OnApplicationStarting event.

Show Me Some Code!

OK, enough theory, here's some code to show this in action. You can easily adapt this to your own needs.
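
The code embedded in the original post isn't reproduced here, but a rough sketch of the idea looks something like the following - it assumes each site has a single node with the document type alias "pageNotFound" somewhere under its home page (adjust to suit your own set-up):

using System.Linq;
using Umbraco.Core;
using Umbraco.Web;
using Umbraco.Web.Routing;

public class PageNotFoundContentFinder : IContentFinder
{
    public bool TryFindContent(PublishedContentRequest contentRequest)
    {
        if (contentRequest.PublishedContent != null)
            return false; // Umbraco found something, so nothing for us to do

        // Work out which site the visitor is on by matching the requested host
        // against the domains configured in Umbraco (naive matching - real code
        // would need to cope with schemes and paths in the configured domain names)
        var domain = ApplicationContext.Current.Services.DomainService
            .GetAll(false)
            .FirstOrDefault(d => d.RootContentId.HasValue &&
                d.DomainName.InvariantContains(contentRequest.Uri.Host));

        if (domain == null)
            return false;

        var cache = contentRequest.RoutingContext.UmbracoContext.ContentCache;
        var siteRoot = cache.GetById(domain.RootContentId.Value);

        var notFoundPage = siteRoot == null ? null : siteRoot.Descendants("pageNotFound").FirstOrDefault();
        if (notFoundPage == null)
            return false;

        contentRequest.PublishedContent = notFoundPage;
        contentRequest.SetResponseStatus(404);
        return true;
    }
}

It then needs wiring up when the application starts - one common approach is to register it as the "last chance" finder, e.g. ContentLastChanceFinderResolver.Current.SetFinder(new PageNotFoundContentFinder()); inside an ApplicationEventHandler's ApplicationStarting override.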

]]>
Umbraco UK Festival 2015 My thoughts and feelings around the 6th UK Umbraco Festival in London, which gets bigger and better every year. A truly inspiring event and a great reflection of the Umbraco community. https://www.diplo.co.uk/blog/web-development/umbraco-uk-festival-2015/ http://www.diplo.co.uk/1937.aspx Sat, 31 Oct 2015 00:00:00 GMT Umbraco UK Festival 2015

For the past 6 years the fantastic guys at The Cogworks have been organising the UK Umbraco Festival - and each year it gets bigger and better.

I was lucky enough to get to travel down (with my colleague and good friend @milquetoastable) and experience the awesome sense of community the festival engenders. You really wouldn't think a mere CMS could generate such devotion - but Umbraco isn't just a content-management system; at its heart is the community and it is the community that makes it special. So it was fitting, perhaps, that this year's festival took place in The Crypt on the Green - a working Church. Nobody could fail to miss the analogy of the faithful gathering to worship. And the faithful weren't disappointed, as a full contingent of the Umbraco core team made the trip over to the UK to take part.

Church of Umbraco

The Keynote

As is usual, Niels (the 'father' of Umbraco) took the keynote along with Per Ploug and wowed everyone with demonstrations of Umbraco as a Service and the new features in the forthcoming 7.4.0 release (currently still in development). The (perhaps unfortunately acronymed) UAAS has certainly come a long way since I saw a sneak preview of it in beta over a year ago. It could be a real game-changer for Umbraco and certainly seems to have a lot to offer teams wanting a slicker deployment process. I also experienced a small blush of pride when Seb tweeted that very day that my Diplo Trace Log Viewer had become the first package to be certified as UAAS compatible.

Alongside this, the improvements coming in 7.4.0+ also look fantastic - top of every developer's wish-list will be the new and very slick Content Type Editor that vastly improves the process of creating document types and their associated property editors. However, smaller UX improvements, like improved filtering in the Media Library or the ability to categorise doc types into folders, will also be very welcome. On top of this, announcements that Umbraco now sports a full HAL-compatible REST layer for content were almost casually dropped in. Safe to say that 7.4 looks to be one of the biggest releases to date.

Wide Range of Speakers

But the festival isn't just about Umbraco. A wide range of speakers talked about a variety of topics that spanned everything from user-experience testing to esoteric graph databases. Whilst there wasn't anything as potentially life-changing as Doug Robar's inspiring speech last year, there was still plenty to learn and love. It wouldn't be fair to single out individual speakers, especially as there were more events on than any one person could attend. However, it's safe to say that many people were enthralled by Microsoft evangelist Martin Beeby's talk about the future of ASP.NET (vNext or 5). I've blogged before about the "new openness at Microsoft" but to actually hear the enthusiasm for open-source and open-standards coming directly from a Microsoft employee really brought it home just how much Microsoft have shifted culturally. It's pretty clear that the future of ASP.NET has never looked more exciting.

V8 and Beyond...

Shannon on Umbraco v8

After a day of inspiring talks it was a hard task for core developer Shannon Deminick to take the stage and finish off proceedings. Whilst I've followed "Shazwazza's" coding activities for a while (and knew he was a very talented developer) I'd never heard him speak before. Public speaking to large audiences isn't something that comes naturally to many developers, but Shannon really blew everyone in the room away with the enthusiasm of his talk, in which he outlined the roadmap for Umbraco 8 and beyond. From my scribbled notes the main parts I remember are:

Content Variants 

This is basically the ability for Umbraco to have "virtual nodes" that are linked to a master node to enable variations of that content. Think a better way of doing multi-lingual sites when you want a 1-to-1 translation of languages, for instance.

Segmentation

This allows content to be targeted at different end users based on a wide variety of configurable parameters. Think personalisation of content for members based on, for instance, their interests.

"Le Cache Nouveau"

Championed by Stephan, this is a new swappable caching layer for Umbraco that can replace the current XML cache. The main thing to take away is that this should increase performance of querying cached content considerably. You can read more about this on Stephan's blog.

Mega Code Clean Up

This should be the final purge of legacy code and should see us finally bid good riddance to those horribly named assemblies (like umbraco.businesslogic) from the core. Basically expect a neater, leaner and more testable Umbraco devoid of legacy baggage.

GUID all Things

This is a continuation of the process where unique IDs are being moved from being integer-based to GUIDs. The big advantage of GUIDs is that they should remain unique when content is synchronised, enabling Umbraco Deploy (formerly Courier) to perform its magic far more reliably.

Latest Libs

This should please a lot of developers who have ever faced versioning conflicts. It basically involves updating Umbraco's 3rd party libraries to the latest versions - from front-end (jQuery, AngularJS etc) to back-end (Automapper, JSON.NET and, of course, Log4Net).

Examine 2

This is a progression of Shannon's Examine (an indexing layer on top of Lucene.NET), taking it to the next level. Better, faster indexing and the ability to include Grid content were just some features I remember.

The Presentation

You can watch Shannon's presentation online now - The Road to V8... and Beyond. And you can find the Umbraco Roadmap on Our Umbraco.

The Future is... Orange

Can't wait for v8 and can't wait for next year's festival. I hope to see all my friends there next year!

Swag Bag

]]>
Manipulating Query String in CSharp If you're an ASP.NET developer then sooner or later you will come across an occasion when you want to manipulate or parse values from the current request's querystring. To make this easier I developed a small C# library that enables developers to easily manipulate the querystring collection. https://www.diplo.co.uk/blog/web-development/manipulating-query-string-in-csharp/ http://www.diplo.co.uk/1619.aspx Fri, 15 May 2015 00:00:00 GMT Microsoft CSharp

If you're an ASP.NET developer then sooner or later you will come across an occasion when you want to manipulate or parse values from the current request's querystring (the querystring being the name/value pairs you often see appended to URLs).

The good news is that ASP.NET makes the HttpRequest.QueryString property available. The bad news is that this returns a read-only NameValueCollection that cannot be easily manipulated, and which isn't simple to iterate over (a foreach over a NameValueCollection, for instance, only gives you the keys, not the key/value pairs). This means that parsing values in the querystring becomes a real chore.

To this end I developed a very small C# library (that can be found on GitHub) that enables developers to easily manipulate the querystring collection. It has useful methods to add, remove, replace, count and parse values within a querystring. Instead of outlining all the methods in detail I will show a small code excerpt below that uses the library to perform a few common tasks. If you're like me, you'll find this is the best way to grasp how it works.
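
The original excerpt isn't included here, but as a flavour of the kind of manipulation involved, here's a sketch using the standard HttpUtility.ParseQueryString approach (rather than the Diplo library itself, whose exact method names I won't guess at):

using System;
using System.Web;

class QueryStringDemo
{
    static void Main()
    {
        // Unlike Request.QueryString, ParseQueryString returns a writable NameValueCollection
        var query = HttpUtility.ParseQueryString("page=2&sort=name");

        query["page"] = "3";        // replace a value
        query.Add("dir", "desc");   // add a new pair
        query.Remove("sort");       // remove a pair

        Console.WriteLine(query.ToString()); // page=3&dir=desc (re-encoded)
    }
}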

]]>
Diplo Link Checker for Umbraco Diplo Link Checker is a free package for Umbraco 7 CMS that I developed. It allows an editor to easily check their Umbraco site for broken or problematic links. https://www.diplo.co.uk/blog/web-development/diplo-link-checker-for-umbraco/ http://www.diplo.co.uk/1873.aspx Tue, 28 Apr 2015 00:00:00 GMT Umraco AngularJS

Diplo Link Checker is a dashboard add-on for Umbraco 7 that allows an editor to easily check their site for broken or problematic links.

I've been a great fan of the Umbraco Content Management System for many years. It's powerful, flexible, open source and has a great community associated with it. One of its many great features is that it allows developers to contribute packages that extend or enhance Umbraco's functionality. Best of all, thanks to the ethos of open source, the vast majority of these packages are available to download free of charge. I have benefited greatly from some of the great packages so I thought it was time to give a little something back…

So I created a dashboard package for Umbraco 7 (using AngularJS and WebAPI) that allows an editor to check all the links within an Umbraco site.

Features

  • Able to check an entire site, or just a section or even a single page
  • Completely asynchronous so can check multiple links simultaneously and provide real-time feedback
  • Caches link status so only checks each unique link once (within a short period)
  • Works for all types of links - external, internal, HTML, files, images and even CSS and JavaScript files
  • Provides feedback on errors with help dialogue plus an overview of all status codes
  • Quick edit facility allows you to easily edit the page that contains the broken link directly within Umbraco
  • Advanced options allow you to set the timeout period, toggle between viewing all checked links and only links that have problems
  • You can whitelist HTTP codes and only report on those
  • You can also configure it to ignore ports (if you are behind a reverse proxy, for example)

Screenshot

Umbraco Link Checker

Note: This is only for Umbraco 7.1 and above.
You can view more screenshots on the Our Umbraco package page.

How it Works and Source Code

The basic premise is that the checker first iterates over every published page in the site from the chosen start node (using Umbraco's published content API) and creates a list of the page IDs to be checked.

This list is then passed back to an Angular controller that sends an asynchronous request to an Umbraco Web API controller, passing in the ID of the node to be checked.

A service then makes an HTTP GET request to the full URL of the page to get back the entire HTML for the page. This HTML is then parsed using the HtmlAgilityPack (which comes with Umbraco) and a list of every link in the page is collated. Certain link types that cannot be checked (such as mailto: links) are discarded.

Another service then makes asynchronous HTTP HEAD requests to each of the links in the page using the HttpClient class in .NET, which makes it easy to fire off multiple requests in parallel (an HTTP HEAD request doesn't send back the content body, just the status, so is much faster than downloading entire pages). The HTTP status code of each request is recorded and sent back to the Angular controller, which updates the UI with the results. I also keep track of every URL that has been checked in an in-memory cache, so if the same URL comes up again the result is retrieved from memory rather than re-checked.
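To make those two steps a little more concrete, here is a simplified stand-in (not the package's actual code) that gathers links with the HtmlAgilityPack and then fires parallel HEAD requests with HttpClient; it assumes the URLs are already absolute:

// Simplified stand-in for the two services described above - not the package's actual code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using HtmlAgilityPack;

public static class SimpleLinkChecker
{
    private static readonly HttpClient Client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };

    // Parse the page HTML and collect every href/src that can sensibly be checked
    public static IEnumerable<string> ExtractLinks(string html)
    {
        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        var nodes = doc.DocumentNode.SelectNodes("//a[@href] | //img[@src] | //link[@href] | //script[@src]");
        if (nodes == null)
            return Enumerable.Empty<string>();

        return nodes
            .Select(n => n.GetAttributeValue("href", null) ?? n.GetAttributeValue("src", null))
            .Where(url => !String.IsNullOrEmpty(url) && !url.StartsWith("mailto:") && !url.StartsWith("#"))
            .Distinct();
    }

    // Fire a HEAD request at each link in parallel and record the status code (0 = request failed)
    public static async Task<Dictionary<string, int>> CheckLinksAsync(IEnumerable<string> urls)
    {
        var results = new Dictionary<string, int>();

        var tasks = urls.Select(async url =>
        {
            int status = 0;
            try
            {
                using (var request = new HttpRequestMessage(HttpMethod.Head, url))
                using (var response = await Client.SendAsync(request))
                {
                    status = (int)response.StatusCode;
                }
            }
            catch (Exception)
            {
                // leave status as 0 to indicate the request itself failed (timeout, bad URL etc.)
            }

            lock (results)
            {
                results[url] = status;
            }
        });

        await Task.WhenAll(tasks);
        return results;
    }
}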

The Angular UI layer then allows filtering to be performed as well as showing more detailed results for each link, including things like a full description of the status code, the line number in the HTML where the link was found, and so on.

Source Code

You can find the entire source code on my GitHub page. It's a bit rough, but hey, that's what pull requests are for!

NuGet

You can also install the package via NuGet:

Install-Package Diplo.LinkChecker

https://www.nuget.org/packages/Diplo.LinkChecker/

]]>
JavaScript Equivalents of C# LINQ Methods If you are a C# developer then you'll be familiar with LINQ (Language-Integrated Query). In this post I provide some examples of how you can perform equivalent functions in JavaScript. https://www.diplo.co.uk/blog/web-development/javascript-equivalents-of-c-linq-methods/ http://www.diplo.co.uk/1906.aspx Fri, 06 Feb 2015 00:00:00 GMT Microsoft CSharp

If you are a C# developer then you'll be familiar with LINQ (Language-Integrated Query). You'll also know that the most common flavour of LINQ - LINQ to Objects - contains a number of useful extension methods that work against the IEnumerable<T> interface.

These methods are extremely useful for performing operations against collections (arrays, lists etc.) such as filtering, selecting and aggregating. Whilst the naming of LINQ methods tends to be derived from SQL (e.g. "WHERE", "SELECT", "SUM") the actual operations are more functional in nature. This means that languages with functional features - such as, say, JavaScript - have similar operations. However, as primarily a C# developer I tend to "think in LINQ" and I'm often stuck remembering what the JavaScript equivalents of common LINQ methods are.

So, as primarily a memory aid for myself, I created a quick Gist on GitHub that outlines how you perform common LINQ collection queries against JavaScript arrays. The results can be seen below:
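The Gist itself isn't embedded here, but the core of the mapping goes something like this - shown as C# LINQ calls, with the JavaScript array-method equivalent noted in the comments:

using System;
using System.Linq;

class LinqToJsCheatSheet
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4, 5 };

        // Where(predicate)      -> JavaScript: numbers.filter(n => n > 2)
        var filtered = numbers.Where(n => n > 2);

        // Select(selector)      -> JavaScript: numbers.map(n => n * 2)
        var doubled = numbers.Select(n => n * 2);

        // Aggregate(seed, func) -> JavaScript: numbers.reduce((acc, n) => acc + n, 0)
        var sum = numbers.Aggregate(0, (acc, n) => acc + n);

        // Any(predicate)        -> JavaScript: numbers.some(n => n > 4)
        var hasBig = numbers.Any(n => n > 4);

        // All(predicate)        -> JavaScript: numbers.every(n => n > 0)
        var allPositive = numbers.All(n => n > 0);

        // FirstOrDefault(pred)  -> JavaScript: numbers.find(n => n > 3)
        var firstBig = numbers.FirstOrDefault(n => n > 3);

        // OrderBy(keySelector)  -> JavaScript: [...numbers].sort((a, b) => a - b)
        var ordered = numbers.OrderBy(n => n);

        Console.WriteLine(String.Join(", ", filtered)); // 3, 4, 5
        Console.WriteLine(String.Join(", ", doubled));  // 2, 4, 6, 8, 10
        Console.WriteLine("{0} {1} {2} {3}", sum, hasBig, allPositive, firstBig); // 15 True True 4
        Console.WriteLine(String.Join(", ", ordered));  // 1, 2, 3, 4, 5
    }
}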

]]>
Useful Umbraco Extension Methods This post explains how you can create C# extension methods to make querying Umbraco's IPublishedContent more intuitive and seamless. It also includes some examples of methods I commonly use in my projects. https://www.diplo.co.uk/blog/web-development/useful-umbraco-extension-methods/ http://www.diplo.co.uk/1877.aspx Wed, 04 Feb 2015 00:00:00 GMT Umbraco

Umbraco 6.x MVC added a new querying API that revolves around the IPublishedContent interface. This interface defines common properties and methods that a single content "node" is comprised of.

Properties include standard things like the Id and Name of the content, the CreateDate and also a collection of any custom properties. Methods include traversal functions that allow you to get all the Children() or Ancestors() of the current node. Interestingly, many of these are implemented as extension methods in the static class Umbraco.Web.PublishedContentExtensions. This then naturally leads us to the realisation that we can extend Umbraco by adding our own extension methods to IPublishedContent. All we need to do is create a static class and add it either as a part of a class library or (even more simply) put it in App_Code.

I've created a few useful methods that I use myself and that I'd like to share below. Most of these revolve around the idea that you often want to filter out certain nodes when generating things like navigation and menus. Whilst Umbraco supports the convention of a property called "umbracoNaviHide" (that manifests itself as the method IsVisible()) there are other common cases where you might want to exclude nodes from menus. For example, you generally don't want to have pages that haven't been assigned a template in your navigation as they will invariably lead to a 404 Not Found if served.

Example Queries

So, for instance, if you wanted to get all the child nodes of your home page that are not hidden from the menu you might write a query like this:

var menuItems = Model.Content.AncestorOrSelf(1).Children(x => x.IsVisible());

If you wished to extend this to also exclude nodes that don't have a template it would look like this:

var menuItems = Model.Content.AncestorOrSelf(1).Children(x => x.IsVisible() && x.TemplateId > 0);

As you can see this is getting a little unwieldy. Wouldn't it be nicer if we could rewrite this query as:

var menuItems = Model.Content.HomePage().Children().Where(x => x.IsInMenu());

Well, thanks to the wonder of extension methods you can! I've included some examples of these below, but it's easy to come up with your own, too!
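As a minimal sketch (your own rules for what counts as "in the menu" may well differ), the two helpers named above could look like this:

using Umbraco.Core.Models;
using Umbraco.Web;

namespace MyNameSpace.UmbracoExtensions
{
    public static class NavigationExtensions
    {
        /// <summary>
        /// Gets the site home page (the level 1 ancestor) of the current node.
        /// </summary>
        public static IPublishedContent HomePage(this IPublishedContent content)
        {
            return content.AncestorOrSelf(1);
        }

        /// <summary>
        /// Returns true if the node should appear in navigation: it isn't hidden
        /// via umbracoNaviHide and it has a template assigned.
        /// </summary>
        public static bool IsInMenu(this IPublishedContent content)
        {
            return content.IsVisible() && content.TemplateId > 0;
        }
    }
}

Drop a class like that into App_Code (or a class library) and the tidier query above compiles as-is - the namespace matches the one registered in the web.config snippet below.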

Tip: Accessing From Any View

To make extension methods accessible from any Razor view you can add your namespace to the web.config file inside your Views folder (not the main web.config).

web.config

<system.web.webPages.razor>
  <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      <!-- Core Umbraco References -->
      <add namespace="System.Web.Mvc" />
      <add namespace="System.Web.Mvc.Ajax" />
      <add namespace="System.Web.Mvc.Html" />
      <add namespace="System.Web.Routing" />
      <add namespace="Umbraco.Web" />
      <add namespace="Umbraco.Core" />
      <add namespace="Umbraco.Core.Models" />
      <add namespace="Umbraco.Web.Mvc" />
      <add namespace="umbraco" />
      <add namespace="Examine" />

      <!-- Custom Namespace -->
      <add namespace="MyNameSpace.UmbracoExtensions" />

    </namespaces>
  </pages>
</system.web.webPages.razor>

 

]]>
Keeping Things Compact Free online and offline tools that can help compact and optimise your CSS stylesheets and JavaScript files. This will ensure your website loads quickly and will also keep your code concise and fast. https://www.diplo.co.uk/blog/web-development/keeping-things-compact/ http://www.diplo.co.uk/1625.aspx Sun, 01 Jun 2014 00:00:00 GMT Kittens in a Box

Most designers realise the importance of keeping image file sizes low in order to improve the speed a website loads and use appropriate tools to achieve this. But what about other assets, such as style-sheets and JavaScript files that also get included? Are there tools that can help reduce the size of these files, too?

Luckily, such tools exist and are freely accessible on the web. No need to download software, you can access them via any browser.

CSS Compactors

For CSS there is http://www.csstidyonline.com/ which is an implementation of the CSSTidy library on SourceForge. Similar to this are http://csscompressor.com/ and http://www.codebeautifier.com/ These not only compact the CSS (by removing redundant whitespace and comments) but also intelligently consolidate and re-arrange styles to make them more concise. This leads to more efficient and cleaner code.

If all you want to do is minify / compress your code for production then you can try https://www.toptal.com/developers/cssminifier or http://www.cleancss.com/css-minify/. These optimise your CSS at the expense of being easily human readable. But if the only thing reading your CSS is a web browser, then readability is of little concern.

JavaScript Minifiers

As for JavaScript, there are many techniques known as "minification" available to help compress the size of your scripts. Minification not only strips out redundant whitespace but also will re-write your code to make it as concise as possible (by reducing variable names, removing unneeded brackets etc.). The downside is you won't be able to read it, so only use this on finished code and always keep a copy of the original!

Google Closure Compiler

Whilst there is a lot of good minification software available, one of the best options is offered by the ubiquitous Google, under the rather obtuse name of Google Closure Compiler. This is available as a Java application but is also accessible via a RESTful API and, more importantly for us, online at http://closure-compiler.appspot.com/

To quote Google:

"It parses your JavaScript, analyzes it, removes dead code and rewrites and minimizes what's left. It also checks syntax, variable references, and types, and warns about common JavaScript pitfalls."

UglifyJS and Others

Another very popular minifier library is UglifyJS.

But there are other minifiers available online, too.

Visual Studio Extension

If you use Visual Studio there is also the excellent (and free!) Web Essentials extension. Amongst its many features is a built-in option to right-click any CSS or JavaScript file and minify it on the spot. Even better, whenever the source file is changed, the minified file is updated accordingly.

Sublime Text Plugins

If you use Sublime Text, then there are also many minifier plugins available.

So there we have it - some great, free online tools that can simply and effectively compact your web assets to ensure your site loads as fast as possible.

]]>
Umbraco XML Sitemap In this post I want to show you how to automatically generate an XML (Google) sitemap for an Umbraco CMS site. The code uses the Umbraco API, Linq To XML and a generic .ashx hander to do this in one small file. https://www.diplo.co.uk/blog/web-development/umbraco-xml-sitemap/ http://www.diplo.co.uk/1622.aspx Sat, 31 May 2014 00:00:00 GMT Umbraco

It's becoming more and more commonplace to add an XML Sitemap to your website. In a nutshell a sitemap allows search engines to easily discover every page in your site hierarchy and therefore crawl them. A lot of people think sitemaps are a Google thing, but actually they are a standard defined by http://www.sitemaps.org/, though of course Google does utilise them. But so do many other search engines, too.

So what does a sitemap look like? In essence it's a very simple XML file that contains a list of all the pages in your site. The sitemap protocol defines the structure of the XML.

Creating a Sitemap for Umbraco CMS

In this post I want to concentrate on automatically generating a sitemap for an Umbraco site (Umbraco is a popular .NET CMS). There are a couple of Umbraco packages that do this, but I find my way simpler and quicker to deploy - simply copy the file into the root of your website and that is it - no packages needed.

I also aim to show you how you can use the Umbraco PublishedContent API and LINQ to XML (introduced in .NET 3.5) to do this. The code will be in C#, but should be easily convertible to other .NET languages, and is created using a standard generic handler .ashx file.

The basic principle is to create an XDocument and then iterate over every published node (using IPublishedContent) creating an XElement for each node (page). The handler then outputs the resulting XML document, setting the correct content-type so that the output is seen as XML by search engines.
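The original handler isn't embedded here, but a minimal sketch along those lines - assuming Umbraco 7's UmbracoHelper and UmbracoContext.Current are available to the handler, and only emitting loc and lastmod - looks something like this:

<%@ WebHandler Language="C#" Class="XmlSitemapHandler" %>

// A minimal sketch of the handler described above, not the original file
using System;
using System.Linq;
using System.Web;
using System.Xml.Linq;
using Umbraco.Web;

public class XmlSitemapHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        XNamespace ns = "http://www.sitemaps.org/schemas/sitemap/0.9";

        // Assumes the Umbraco pipeline has made UmbracoContext.Current available to this request
        var helper = new UmbracoHelper(UmbracoContext.Current);

        // Every published page under every root node, flattened into one list;
        // pages without a template are skipped as they can't be served anyway
        var pages = helper.TypedContentAtRoot()
                          .SelectMany(root => root.DescendantsOrSelf())
                          .Where(page => page.TemplateId > 0);

        var sitemap = new XDocument(
            new XDeclaration("1.0", "utf-8", "yes"),
            new XElement(ns + "urlset",
                pages.Select(page =>
                    new XElement(ns + "url",
                        // combine the request's base URL with the page's relative URL
                        new XElement(ns + "loc", new Uri(context.Request.Url, page.Url)),
                        new XElement(ns + "lastmod", page.UpdateDate.ToString("yyyy-MM-dd"))))));

        // set the correct content-type so search engines treat the output as XML
        context.Response.ContentType = "text/xml";
        sitemap.Save(context.Response.Output);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}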

 

You can then just reference this in your robots.txt file like this:

User-Agent: *

Sitemap: http://www.example.com/sitemap.ashx
 
]]>
Subtitles In this post I praise the hidden world of foreign-language cinema and TV that we rarely experience in the English-speaking world. I also wonder why we see so few subtitled dramas on British TV. https://www.diplo.co.uk/blog/music-film-tv/subtitles/ http://www.diplo.co.uk/1624.aspx Sat, 31 May 2014 00:00:00 GMT subtitles.jpg

There tends to be an assumption across the English-speaking world that we produce the best TV and film. We may grudgingly concede that Europe and Asia have some of the best cuisine and art, but when it comes to visual media like cinema and television then we think we have it licked.

Let's face it: we often just can't be bothered with "foreign stuff" when we have such an abundance of media available for easy consumption in our native language. Why go out of the way to find that little sushi place when McDonalds is on every high street?

In the UK we have the BBC, which we are rightly proud of, and access to a wide range of commercial channels. Yet ask yourself how many non-English TV shows are broadcast on British TV. Not many. We have a wealth of US imports available on our TV stations - some high quality, some not - but almost nothing from Europe and the wider world. Is it really the case that they have nothing of value to offer us at all?

Luckily in recent years the BBC has slightly bucked the trend. In 2006 it screened the excellent French police drama series 'Engrenages', broadcasting it under the name of 'Spiral' on BBC4 (the original French title literally translates as 'gears' but 'spiral' equally conveys the labyrinthine nature of its plot). It also showed the brilliant Danish social crime drama Forbrydelsen ('The Killing') that took one crime and looked at the repercussions this shocking event had for everyone involved with the victim.

More recently BBC2 has been showing another excellent police drama, this time from Sweden, called 'Wallander' (though only after showing a British remake starring Kenneth Branagh beforehand). Both these series are as good as the best UK dramas and also have the added bonus of giving us a glimpse into a different culture. As Salman Rushdie once noted, "Fictions are lies that tell the truth" and it is through fiction we often perceive a greater truth. 'Spiral' was an eye-opener on the French judicial process and 'Wallander' has made me realise that Britain and Sweden face the same social concerns.

Sky Arts have also had some great foreign drama shows, such as the Israeli series 'Prisoners of War' (which inspired US drama Homeland); Italian Mafia/gangster drama 'Romanzo Criminale' and 'Gomorrah' and more recently 'The Legacy'.

Of course it's not just TV; there is also a rich world of foreign language cinema waiting to be viewed. It seems that any successful 'foreign' drama gets a Hollywood remake, and yet these are invariably inferior to the original. Why view a facsimile when the original is more vibrant? In recent years some of the best films made have come from outside Hollywood.

Directors such as Alejandro González Iñárritu, Krzysztof Kieslowski, Michael Haneke, Guillermo del Toro and Chan-wook Park have all, in their own ways, made films that match or exceed the best English language films of the same period. So don't just presume sub-titles mean a film is sub-standard - give them a chance and open your eyes to the bigger world that's out there waiting to be discovered.

]]>
Date Range Picker for Umbraco 7 Creating a Date Range Picker property editor for Umbraco 7 ("Belle") using Angular JS. The post also includes information on creating a Property Value Converter. Finally there is a download link to the Date Range Picker package on Our Umbraco. https://www.diplo.co.uk/blog/web-development/date-range-picker-for-umbraco-7/ http://www.diplo.co.uk/1677.aspx Sun, 19 Jan 2014 00:00:00 GMT Umbraco AngularJS

Apart from a brand new UI, Umbraco 7 ('Belle') also came with an entirely new way of creating what are known as Property Editors (data types). These are no longer ASP.NET controls but are now created entirely in HTML, CSS and JavaScript using the Angular.js framework (developed by Google). This is a major change for how you develop using Umbraco, so I was keen to spend some time learning the basics (especially as I'd never used Angular before).

I always find the best way to learn is to just "do it" so I decided I'd make my own property editor. I remembered I'd always wanted to see a date-range picker in Umbraco, so this seemed like a good choice to make. The idea being that a date-range picker allows you to pick two dates - a start date and an end date. This is useful for things like events or bookings that always have a start and end date and where the end date has to be greater than the start date.

Creating a Property Editor

Rather than reinvent the wheel entirely I had a look around to see if there were any client-side based date range pickers out there I could just adapt to use with Umbraco. Eventually I came across a blog post by Dan Grossman called  A date range picker for Twitter Bootstrap. In it Dan described how he'd created a picker that uses the popular Twitter Bootstrap theme. This seemed perfect since Umbraco 7 is also Bootstrap-based, so I knew it would fit in well with the UI.

To integrate the package with Umbraco I followed the tutorials that Per Ploug had written on GitHub for Umbraco 7. These covered the basics and I have to say getting it up and running was pretty simple. Because the control I was integrating was client-side based it fitted in well with the new way of developing for Umbraco.

One problem I did have was the fact that Angular didn't seem to pick up on jQuery events. It turns out that ng-model listens for changes on the input element and so doesn't "hear" events raised by jQuery. The way around this I found was to force the element to manually trigger the event using jQuery's .trigger() method. I answered a question on StackOverflow that explains this in more detail.

Another issue people seem to have using Angular with Umbraco is down to the way Umbraco hard caches Angular controllers (so even force reloading doesn't get the changes). The way I found to get around this was to add debug="true" to the web.config and also tick the 'Disable Cache' option in the Settings of the Chrome developer tools.

Creating a Property Value Converter

Storing the date range in Umbraco, though, is just half the story. To be really useful it would be nice if we could also get the date range value back as a .NET object. Luckily there is a way to do this using Property Value Converters. These are basically .NET classes that convert from a storage format to a strongly typed .NET object. There isn't much info about these currently available so I basically had a look at the source code for Umbraco 7 and tried to figure this out for myself :)

Basically, you inherit from the PropertyValueConverterBase class and then override the methods you wish to implement - in my case this was ConvertDataToSource.

Using a converter allows you to query the IPublishedContent and get it back as strongly-typed object:

var dateRange = Model.Content.GetPropertyValue<Diplo.DateRangePicker.DateRange>("dateRange");
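There isn't much to a basic converter. The sketch below is roughly the shape of mine, assuming the property editor stores its value as a JSON string with start and end dates (the DateRange class, JSON keys and editor alias here are illustrative rather than the exact package code):

using System;
using Newtonsoft.Json.Linq;
using Umbraco.Core.Models.PublishedContent;
using Umbraco.Core.PropertyEditors;

namespace Diplo.DateRangePicker
{
    public class DateRange
    {
        public DateTime StartDate { get; set; }
        public DateTime EndDate { get; set; }
    }

    public class DateRangeValueConverter : PropertyValueConverterBase
    {
        // Only run this converter for our own property editor alias (illustrative alias)
        public override bool IsConverter(PublishedPropertyType propertyType)
        {
            return propertyType.PropertyEditorAlias == "Diplo.DateRangePicker";
        }

        // Convert the raw stored string into a strongly-typed DateRange
        public override object ConvertDataToSource(PublishedPropertyType propertyType, object data, bool preview)
        {
            var json = data as string;
            if (String.IsNullOrWhiteSpace(json))
                return null;

            var obj = JObject.Parse(json);

            return new DateRange
            {
                StartDate = (DateTime)obj["startDate"],
                EndDate = (DateTime)obj["endDate"]
            };
        }
    }
}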

Date Range Picker Screenshots

To get an idea of what the date range picker looks like then a few screenshots speak volumes...

Picking Dates

Date Range

Configuration

Configuring Property Editor

Usage in a Razor Script

@{
    var dateRange = Model.Content.GetPropertyValue<Diplo.DateRangePicker.DateRange>("dateRange");
    <p>Your date range is from @dateRange.StartDate.ToShortDateString() to @dateRange.EndDate.ToShortDateString()</p>
}

You can also access the date range as a dynamic object via the following syntax (assuming you have a property alias called ‘dateRange’ on your page):

@CurrentPage.DateRange.StartDate @CurrentPage.DateRange.EndDate

You can also just output the entire date range as a string using:

@Umbraco.Field("dateRange")

This will output something like 10/17/2013 - 1/27/2014, and is dependent on the current culture settings of the page.

]]>
Diplo Trace Log Viewer for Umbraco Diplo Trace Log Viewer is an Umbraco developer plugin that allows you to easily select and view the contents of the trace log files that Umbraco generates. The control also allows you to search, order and filter log events (e.g. you may only wish to view Errors or Warnings) - all from within Umbraco 6 or Umbraco 7. https://www.diplo.co.uk/blog/web-development/diplo-trace-log-viewer-for-umbraco/ http://www.diplo.co.uk/1669.aspx Sun, 23 Jun 2013 00:00:00 GMT Umbraco AngularJS

From around Umbraco 4.10 the way Umbraco logs information was changed. A new logging class was created in Umbraco.Core.Logging called LogHelper that utilises an implementation of Log4Net.

Previously all logs were generated in the database inside the umbracoLog table. However, this new class logs to plain text files which are rotated daily and stored in the /App_Data/Logs/ folder within your Umbraco site. If you look in there you should see a file called UmbracoTraceLog.txt. This is the current log file, whilst the older files with a date suffix are the historical logs.
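For reference, writing to this log from your own code goes through the same LogHelper class - something like this (OrderService is just an illustrative class of your own):

using System;
using Umbraco.Core.Logging;

public class OrderService
{
    public void ProcessOrder(int orderId)
    {
        // Appears in UmbracoTraceLog.txt as an INFO entry for this class
        LogHelper.Info<OrderService>("Processing order " + orderId);

        try
        {
            // ... do the work ...
        }
        catch (Exception ex)
        {
            // Logged as an ERROR entry, complete with the exception stack trace
            LogHelper.Error<OrderService>("Failed to process order " + orderId, ex);
            throw;
        }
    }
}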

Anatomy Of a Log File

If you open the UmbracoTraceLog.txt file you will see it's just a plain text file that contains log entries that look similar to the following:

2013-06-23 17:39:00,021 [11] INFO  Umbraco.Core.CoreBootManager - [Thread 5] Umbraco application starting
2013-06-23 17:39:00,039 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Determining hash of code files on disk
2013-06-23 17:39:00,044 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Hash determined (took 4ms)
2013-06-23 17:39:00,203 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of Umbraco.Core.PropertyEditors.IPropertyEditorValueConverter, found 0 (took 151ms)
2013-06-23 17:39:00,209 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of Umbraco.Web.Mvc.SurfaceController, found 0 (took 2ms)
2013-06-23 17:39:00,218 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of Umbraco.Core.Media.IThumbnailProvider, found 3 (took 2ms)
2013-06-23 17:39:00,220 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of Umbraco.Core.Media.IImageUrlProvider, found 1 (took 1ms)
2013-06-23 17:39:00,240 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of umbraco.interfaces.IApplicationStartupHandler, found 8 (took 3ms)
2013-06-23 17:39:00,286 [11] INFO  Umbraco.Core.CoreBootManager - [Thread 5] Umbraco application startup complete (took 262ms)
2013-06-23 17:39:00,294 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of umbraco.interfaces.IApplication, found 7 (took 5ms)
2013-06-23 17:39:00,451 [11] INFO  Umbraco.Core.PluginManager - [Thread 5] Completed resolution of types of umbraco.interfaces.ITree, found 27 (took 3ms)
2013-06-23 17:39:00,774 [11] ERROR Umbraco.Web.UmbracoApplication - [Thread 5] An unhandled exception occurred
System.ApplicationException: The current httpContext can only be set once during a request.
   at Umbraco.Web.UmbracoContext.set_Current(UmbracoContext value)
   at Umbraco.Web.UmbracoContext.EnsureContext(HttpContextBase httpContext, ApplicationContext applicationContext, Boolean replaceContext)
   at Umbraco.Web.UmbracoModule.BeginRequest(HttpContextBase httpContext)
   at Umbraco.Web.UmbracoModule.b__6(Object sender, EventArgs e)
   at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
   at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
2013-06-23 17:39:06,414 [11] INFO  Umbraco.Web.UmbracoApplication - [Thread 5] Application shutdown. Reason: HostingEnvironment

As you can see, log files contain the following information for each entry:

  • The date and time of the log entry
  • A number in square brackets that has something to do with the thread that was running (?)
  • The name of the .NET type that generated the log entry (ie. the name of the class from within which the call to write to the log was made)
  • The logging level (ie. INFO, WARNING, ERROR)
  • Another thread reference (?)
  • The message being logged (which, in the case of an error, may be an exception stack trace)

The format of this log is defined in /config/log4net.config and can be tweaked there.

Whilst these log files are quite easily human readable, they aren't viewable from within Umbraco itself. That means to view them you need to have direct access to the server and to the /App_Data/Logs/ folder within your site. Another drawback of the new format is that they aren't easily sortable or filterable - in fact, the new log file format isn't at all easy to parse since the fields and entries are not delimited in any way.
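If you do want to pull one of those lines apart in code you end up falling back on pattern matching. A rough sketch - my own pattern, not the one the package uses - looks like this:

using System;
using System.Text.RegularExpressions;

class LogLineParser
{
    // Matches: date/time, [number], level, logger, - [Thread n], message
    static readonly Regex LogLine = new Regex(
        @"^(?<date>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) \[(?<num>\d+)\] (?<level>\w+)\s+(?<logger>\S+) - \[Thread (?<thread>\d+)\] (?<message>.*)$");

    static void Main()
    {
        var line = "2013-06-23 17:39:00,021 [11] INFO  Umbraco.Core.CoreBootManager - [Thread 5] Umbraco application starting";

        var match = LogLine.Match(line);
        if (match.Success)
        {
            Console.WriteLine(match.Groups["level"].Value);   // INFO
            Console.WriteLine(match.Groups["logger"].Value);  // Umbraco.Core.CoreBootManager
            Console.WriteLine(match.Groups["message"].Value); // Umbraco application starting
        }

        // Continuation lines (e.g. exception stack traces) won't match and need
        // appending to the previous entry's message instead.
    }
}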

Introducing Diplo Trace Log Viewer

To get around the problem of not being able to (easily) view log files I've written a small Umbraco package that installs itself into the Developer tree (in Umbraco 7) or as a Dashboard panel (in Umbraco 6). It allows you to select, view, sort, filter and search log files.

Umbraco 7 Version

Umbraco 7 Version

Umbraco 6 Version

Umbraco 6 Version

Give Me The Download!!!!!

You can download and install the package (it's free!) from http://our.umbraco.org/projects/developer-tools/diplo-trace-log-viewer

There is also a NuGet package version which you can install - with thanks to Jeavon Leopold.

If you are really nosey you can get the source code (for the Umbraco 7 version) on GitHub.

Note: The latest version (2.x) has been rewritten from scratch for Umbraco 7 using AngularJS and WebApi controllers.

Also note I have another log viewer package for the umbracoLog table data - Diplo Audit Log Viewer.

]]>
Searching uCommerce Products in Umbraco using Lucene.NET In my previous post I told you how you can easily index uCommerce products (in Umbraco) using the Lucene text search engine. But indexing products is only half the story - you also need to be able to search those products too! In this post I'll show you how to write a simple Razor script to search your index and return the most relevant results https://www.diplo.co.uk/blog/web-development/searching-ucommerce-products-in-umbraco-using-lucenenet/ http://www.diplo.co.uk/1672.aspx Thu, 24 Jan 2013 00:00:00 GMT Lucene.Net.png

This blog post is a follow-up to a previous post explaining how to index products in uCommerce using Lucene.NET. If you haven't read that post, please do so first.

Introduction to Searching with Lucene

In my previous post I told you how you can easily index uCommerce products (in Umbraco) using the Lucene text search engine. But indexing products is only half the story - you also need to be able to search those products too! Luckily searching a Lucene index is fairly straightforward once you get past some of the jargon - though more advanced searches do require an understanding of the Lucene query language.

In this post I'll show you a simple example of an Umbraco Razor macroscript (.cshtml) that can be used to search your index and return the results in order of relevance. If you deal exclusively in XSLT then I'm afraid you'll have to work out how to create your own XSLT extension to execute this. Or if you prefer working with ASP.NET user controls then that will work, too.

The Basics of a Lucene Search

To search a Lucene index you need to do a few things:

  • Know the path to the directory where your index is stored
  • Create your search query - normally by parsing a search phrase
  • Execute that query and return some results (in order of relevance)
  • Iterate over the results, extracting the relevant field data from each document returned by the search
  • Display those results (normally with a link back to the item that was indexed)

Code Example

So how would some code that does this look? Well, below I'll show you a simple example. It's simple in that it performs the basics, but if you want things like pagination or highlighting then that is something you'll need to implement yourself :p Note that in this example the searchPhrase is 'hard-coded' but in reality you'd probably pass this in from a query string parameter.

        /* The path to where your index folder is */
string dirPath = Server.MapPath("~/App_Data/TEMP/ExamineIndexes/ProductsIndex/");
DirectoryInfo di = new DirectoryInfo(dirPath);

/* The maximum number of results to show */
const int maxResults = 100;

/* The phrase you are searching for */
string searchPhrase = "google nexus 7";

var analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);

/* Create a new boolean query using the fields you want to search */
Lucene.Net.Search.BooleanQuery bq = new Lucene.Net.Search.BooleanQuery();
Lucene.Net.Search.Query query;

var parser = new Lucene.Net.QueryParsers.QueryParser(Lucene.Net.Util.Version.LUCENE_29, "DisplayName", analyzer);
query = parser.Parse(searchPhrase);
query.SetBoost(20); // boost score to make this field more relevant
bq.Add(query, Lucene.Net.Search.BooleanClause.Occur.SHOULD);

parser = new Lucene.Net.QueryParsers.QueryParser(Lucene.Net.Util.Version.LUCENE_29, "Sku", analyzer);
query = parser.Parse(searchPhrase);
query.SetBoost(50);
bq.Add(query, Lucene.Net.Search.BooleanClause.Occur.SHOULD);

parser = new Lucene.Net.QueryParsers.QueryParser(Lucene.Net.Util.Version.LUCENE_29, "Description", analyzer);
query = parser.Parse(searchPhrase);
bq.Add(query, Lucene.Net.Search.BooleanClause.Occur.SHOULD);

/* Open the directory to be searched... */

using (var directory = Lucene.Net.Store.FSDirectory.Open(di))
{
    using (var searcher = new Lucene.Net.Search.IndexSearcher(Lucene.Net.Index.IndexReader.Open(directory, true)))
    {
        /* execute the query and return the top hits */
        var collector = Lucene.Net.Search.TopScoreDocCollector.create(maxResults, true);
        searcher.Search(bq, collector);
        Lucene.Net.Search.ScoreDoc[] hits = collector.TopDocs().ScoreDocs;

        <h3>Found @hits.Length results for "@searchPhrase"</h3>

        /* loop over the results and extract the fields we want from the index and display them */

        <ul>
        @for (int i = 0; i < hits.Length; i++)
        {
            int docId = hits[i].doc;
            float score = hits[i].score; // this is an indicator of relevance
            Lucene.Net.Documents.Document doc = searcher.Doc(docId);

            /* retrieve values from our fields */
            int productId = int.Parse(doc.Get("ID"));
            string productUrl = doc.Get("Url");
            string productName = doc.Get("DisplayName");
            
            <li><a href="@productUrl">@productName</a> (@score)</li>
        }
        </ul>
    }
}
    

How It Works

The main thrust of the code is to do with parsing the search phrase to create your query. It is the query that is then passed to the searcher object that performs the actual search. There are various ways to parse a query in Lucene, but in this example I use the BooleanQuery class. This class allows us to search across multiple fields by combining queries - you can basically define whether the phrase SHOULD occur, MUST occur or MUST_NOT occur. In our query I use "SHOULD" to indicate that our search phrase should occur in at least one of the fields for a match to occur. I also use the query.SetBoost() method on some fields to indicate they are more important than others - so, in my example, a match in the DisplayName field of the product is weighted higher than one in the Description.

After we have parsed the search we can then execute it to get an array of ScoreDoc objects back. We can then loop over these and extract the values from the fields we indexed. So, for instance, to get back the ID of the product you can simply use the Get("ID") method of the doc to return the field value. Once you have this you can always use the uCommerce API to get a reference to the actual Product object:

int productId = int.Parse(doc.Get("ID"));
var product = UCommerce.EntitiesV2.Product.Get(productId);

And that is basically it. Of course, you can make this code much neater, but I hope it will be a good starting point for you all.

]]>
Indexing uCommerce Products in Umbraco with Lucene.NET This post deals with indexing and searching products in uCommerce (an Umbraco ecommerce platform) using Lucene .NET (a dot net port of the Java Lucene text search engine library). This first post is about how to create an index using Lucene and C#, and will be followed up by a post of how to use that index for searching in Part Two. https://www.diplo.co.uk/blog/web-development/indexing-ucommerce-products-in-umbraco-with-lucenenet/ http://www.diplo.co.uk/1671.aspx Wed, 23 Jan 2013 00:00:00 GMT Lucene.Net.png

This blog post deals with indexing and searching products in uCommerce using Lucene.NET (a .NET port of the Java Lucene text search engine library). This first post is about how to create an index using Lucene, and will be followed up in Part Two with a post on how to use that index for searching.

Introduction to uCommerce and Database-driven Searching

uCommerce is a popular e-commerce platform that is built upon the Umbraco CMS. It is a powerful and flexible platform for building online shops, but (rather like Umbraco itself) it is more of a framework than an "out-of-the-box" product. In other words, it comes with all the functionality you need to develop your shopping site, but leaves it up to you how to "assemble" the parts together. Initially uCommerce was built around XSLT templating, but the latest version has been built with Razor templating in mind, too. To get started using Razor then it is well worth checking out the uCommerce Razor store.

However, one of the weaknesses of uCommerce is the search. Because uCommerce is database driven (and built on top of the NHibernate ORM) it is assumed you will use the API to perform any product searches. This works OK for basic single keyword searches, as you can formulate queries fairly easily. For example, a simple search might look like:

Example

        var keyword = HttpContext.Current.Request.QueryString["search"];

if(!string.IsNullOrWhiteSpace(keyword))
{
    var products = Product.Find(p =>
                            p.VariantSku == null
                            && p.DisplayOnSite
                            &&
                            (
                                p.Sku.Contains(keyword)
                                || p.Name.Contains(keyword)
                                || p.ProductDescriptions.Any(d => d.DisplayName.Contains(keyword) 
                                                                    || d.ShortDescription.Contains(keyword)
                                                                    || d.LongDescription.Contains(keyword)
                                                            )
                            )
                        );
}
    

However, you are limited to basic keyword matches where you are looking for one string in another string using Contains(). This works fine if you type in a single keyword but will probably fail to return anything if you enter an entire phrase (since that entire phrase will have to be matched in its entirety in one of your fields). To put it simply, potential customers will expect a lot more than this.

A Better Way of Searching - Using the Lucene Text Search Engine

Lucene is, "a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform." Lucene .NET is the .NET port of this library. If you are familiar with Umbraco you will have heard of Examine- well, Examine is just Umbraco's implementation of Lucene (used for searching the backend). So the good news is that if you have installed Umbraco, you also have installed Lucene. NET too.

This means that we can easily use the power of Lucene to create a searchable index of products in our uCommerce database. The advantage of this is that Lucene supports much more powerful queries and is also lightning fast, too.

Creating a Product Indexer

It is actually very simple to create an index of all your products using Lucene. There are 4 basic steps:

  1. Identify and create a folder for where you want to store your index files
  2. Use the uCommerce API to create a query to return the products you want to index
  3. For every product create a Lucene Document containing the fields you wish to store in your index
  4. Insert these documents into your index and write it to your index folder

There are many ways you can create an index - such as in an ASP.NET user control, a C# class library or a Razor macroscript. However, the way I'm going to do it is using a generic ASP.NET handler file (ASHX file). The advantage of a handler is that it is lightweight and can easily be called by accessing a URL in your site. The code I'll show below is just a starting point, of course - but it should be enough to get you started.

Show Me teh Codes!!!

OK, enough waffle - just show me some code for how to do this! Fair enough, a working example is the easiest way to learn (this is based on uCommerce v3). To create a generic handler (.ashx) file you can use Visual Studio - choose "Add New Item" and then select "Generic Handler" from the list. You can place this file anywhere within your website, but I've called mine "IndexProducts" which will give you a file called IndexProducts.ashx. In the code-behind you can then replace the example code with the C# code below:

Code Example

<%@ WebHandler Language="C#" Class="IndexProducts" %>

using System;
using System.Web;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using UCommerce.EntitiesV2;
using UCommerce.Extensions;
using umbraco.cms.businesslogic.web;
using Lucene.Net.Analysis;
using Lucene.Net.Documents;

/// <summary>
/// Handler to create Lucene indexes for uCommerce products
/// </summary>
public class IndexProducts : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";

        // set script timeout to be 600 seconds in case indexing takes a while
        context.Server.ScriptTimeout = 600;

        // define the folder where indexes will be created
        const string indexPath = "~/App_Data/TEMP/ExamineIndexes/ProductsIndex";
        
        CreateLuceneIndex(indexPath, context);
    }

    private void CreateLuceneIndex(string basePath, HttpContext context)
    {
        // purely used for diagnostics
        var stopwatch = new System.Diagnostics.Stopwatch();
        
        /* get the absolute path to the directory where the indexes will be created (and if it doesn't exist, create it) */

        string dirPath = context.Server.MapPath(basePath);

        if (!Directory.Exists(dirPath))
        {
            Directory.CreateDirectory(dirPath);
        }

        DirectoryInfo di = new DirectoryInfo(dirPath);
        Lucene.Net.Store.FSDirectory directory = Lucene.Net.Store.FSDirectory.Open(di);
        
        /* Select the standard Lucene analyser */
        
        var analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29);
        
        stopwatch.Start();
        int count = 0;
        
        /* Open the index writer using the selected analyser */

        using (Lucene.Net.Index.IndexWriter writer = new Lucene.Net.Index.IndexWriter(directory, analyzer, true, Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED))
        {
            // Get all the visible products from uCommerce we wish to index
            var products = Product.All().Where(p => p.DisplayOnSite);

            // Loop through the products
            foreach(var product in products)
            {
                /* For every product, we create a new document and add the fields we want to index to it */
                
                var doc = new Lucene.Net.Documents.Document();
                
                var url = UCommerce.Api.CatalogLibrary.GetNiceUrlForProduct(product);
                
                /* Note: the field "ManufacturerCode" is an example custom field which you probably won't have - so remove */
                
                doc.Add(new Lucene.Net.Documents.Field("ID", product.Id.ToString(), Field.Store.YES, Field.Index.NOT_ANALYZED, Field.TermVector.YES));
                doc.Add(new Lucene.Net.Documents.Field("Url", url, Field.Store.YES, Field.Index.NOT_ANALYZED, Field.TermVector.YES));
                doc.Add(new Field("Sku", product.Sku, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
                doc.Add(new Field("DisplayName", product.DisplayName() ?? product.Name, Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
                doc.Add(new Field("Description", product.LongDescription() ?? "", Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
                doc.Add(new Field("ManufacturerCode", product.GetPropertyValue("ManufacturerCode"), Field.Store.YES, Field.Index.ANALYZED, Field.TermVector.YES));
                writer.AddDocument(doc);
                count++;
            }

            /* We optimise the index and close the writer */
            
            writer.Optimize();
            writer.Close();
        }
        
        stopwatch.Stop();

        context.Response.Write(String.Format("Indexed {0} products in {1}.\n\n", count, stopwatch.Elapsed.ToString()));
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }

}
    

How it Works

Note: This is an old post and you'd use a SurfaceController or similar in more recent versions of Umbraco MVC to perform the indexing.

The main part of the script is contained in the foreach loop that iterates over the products fetched by the uCommerce API query. In this we create a new document and then add the fields we want to index to it. In my example I include a custom field called ManufacturerCode - this is just an example of how to access a custom property, and will probably need changing depending on what custom fields you may have. You can store as many (or as few) fields as you like, though the more text you store the bigger your index will be.

The values we store are each represented as a Field in Lucene - check the docs for more detail on what the different options mean. Generally you will want to store the value in the field as analysed and searchable.

How to Run the Script

An .ashx file is just like an .aspx page and can be run from a browser. So simply navigate to your script in a browser to run it. If all goes well you will see a message similar to the one below:

Indexed 1015 products in 00:00:01.6257507.

As you will notice, Lucene is very fast!

Scheduled Indexing

In reality you won't want to index your products manually by navigating to a script in your browser. What you will probably want is a scheduled task to trigger indexing. Luckily it's easy to wire this up using Umbraco's own Task Scheduler. Just open the umbracoSettings.config file in /config/ and navigate to the scheduledTasks element. Here you can add your own task that is called on a regular schedule (in my example every hour - 3600 seconds). Just enter the full URL to your .ashx file, something like:

<scheduledTasks>
   <!-- add tasks that should be called with an interval (seconds) -->
   <task log="true" alias="productsLuceneIndex" interval="3600" url="http://localhost/IndexProducts.ashx" />
</scheduledTasks>

And there you have it, your products will now be indexed on a regular basis!

Remember to tune in for Part Two where I show you how to search your sparkly new products index...

]]>
Creating an Umbraco 4/6 Form using pure Razor How to create a strongly-typed form in Umbraco 4 (WebForms) using pure Razor CSHTML scripting - without the need for ViewState, User Controls or any "web forms" gunk! You'll also see how you can perform both server-side and client-side validation (using jQuery validate) without cluttering your presentation layer. https://www.diplo.co.uk/blog/web-development/creating-an-umbraco-46-form-using-pure-razor/ http://www.diplo.co.uk/1651.aspx Thu, 24 May 2012 00:00:00 GMT Umbraco

Note: This is an old post and only really applicable to older versions of Umbraco 4 and 6 where you are using WebForms and DynamicNode. It's definitely not the best way to create a form using MVC in Umbraco.

Traditionally forms have always been created in Umbraco 4 via a Web Forms user-control. Usually these will use standard ASP.NET server controls and validators. However, this type of form can exhibit a number of problems:

  • It will require you to create your content within a single server side form tag with runat="server". This can be restrictive when you have multiple forms you want to handle.
  • ASP.NET validator controls aren't very configurable and don't provide the rich feedback people now expect from forms.
  • The framework injects all sorts of JavaScript and ViewState into your nice, clean semantic mark-up.
  • Your form won't be easily testable.

So in this post I'm going to show you how to create a simple form using pure Razor and C# - with no web-forms, ViewState or ASP.NET controls in sight! The aim of this is to demonstrate some useful techniques that help fulfill the following criteria:

  • Our form must use both server-side and client-side validation.
  • It should utilise strongly-typed models to help prevent typo errors and provide full intellisense.
  • It mustn't place any restrictions on mark-up or layout.
  • It should not require any dependency on ASP.NET web-forms (so no runat="server" required).
  • It should use a simple, unintrusive "honey pot" method to deter spam bots.
  • It should prevent accidental re-submission of the form if the page is reloaded.
  • Our "business logic" should be separated form our presentation layer (as far as is possible) and be testable.

Also, given all the fury over Umbraco 5 being dropped (and hence no MVC) I thought I'd show how you can at least use some of the general concepts from MVC in Umbraco 4 (and, yes, I realise this is no substitute - but it's still a great way of avoiding using web-forms).

If you are impatient then you can download this concept as a package from the Umbraco Package Repository.

Creating Our Form

The actual form we are going to create is ultimately unimportant - it's the techniques and ideas behind it that I want to concentrate on. But for our example we'll create a simple contact form (since it's one of the most common forms you'll use). Here's what it will look like:

Diplo Razor Form

Whilst this post isn't about mark-up, I'll give an example of this particular form just so you can see how simple it is:

        <form id="contactForm" action="" method="post">
        <fieldset>
            <legend>Your Details</legend>
            <dl>
                <dt>
                    <label for="firstname">First Name</label>
                </dt>
                <dd>
                    <input type="text" id="firstname" class="required" name="firstname" value="@form.FirstName" maxlength="20" />
                </dd>
            </dl>
....
    

 

As you can see, very simple semantic mark-up that contains no server controls at all. However, you are completely free to use whatever mark-up and CSS you like.

The Model

No, this has nothing to do with Kraftwerk! Instead, we borrow a concept from MVC and use a simple class to represent (or 'model') the form and its fields. The idea is that this class will be responsible for parsing and storing the values of the form in a manner that allows us to easily deal with it in code. It will be strongly-typed and can be easily passed to other methods (for instance, we could have a mail class that takes an instance of our model and uses it to generate an email or perhaps a data class that persists the values to our database).

To keep things simple we will give responsibility to our model for parsing the request values posted from our form and also for server-side validation (though in a more complex system you may wish to delegate this responsibility to a service layer). So, let's have a look at our "model" class (she's quite the looker!):

 

using System;
using System.Collections.Generic;
using System.Collections.Specialized;
using System.Net.Mail;

public class ContactFormModel
{
    public string Title {get; private set;}
    public string LastName {get; private set;}
    public string FirstName {get; private set;}
    public string Email {get; private set;}
    public string AgeRange {get; private set;}
    public string Comment {get; private set;}
    public string HoneyPot {get; private set;}
    public const string SessionKey = "Diplo.Form.Submitted";

    /// <summary>
    /// Constructs a new contact form model with default values
    /// </summary>
    public ContactFormModel()
    {
        this.Title = "Mr"; // We set default selected value
    }

    /// <summary>
    /// Constructs a new contact form model from the query string or form variables
    /// </summary>
    /// <param name="request">The name value collection</param>
    public ContactFormModel(NameValueCollection request)
    {
        this.Title = request["title"];
        this.LastName = request["lastname"];
        this.FirstName = request["firstname"];
        this.Email = request["email"];
        this.Comment = request["comment"];
        this.AgeRange = request["agerange"];
        this.HoneyPot = request["honeybear"];
    }

    /// <summary>
    /// Validates the model
    /// </summary>
    /// <returns>An empty list if valid otherwise a list of error messages</returns>
    public List<string> Validate()
    {
        List<string> errors = new List<string>();

        if (String.IsNullOrEmpty(this.Title))
            errors.Add("Please select a title");

        if (String.IsNullOrEmpty(this.FirstName))
            errors.Add("Please enter your first name");

        if (String.IsNullOrEmpty(this.LastName))
            errors.Add("Please enter your last name");

        if (String.IsNullOrEmpty(this.Email))
            errors.Add("Please enter your email address");
        else
        {
            try
            {
                MailAddress address = new MailAddress(this.Email);
            }
            catch(FormatException)
            {
                errors.Add("Your email address was not in the correct format");
            }
        }

        if (String.IsNullOrEmpty(this.AgeRange))
            errors.Add("The age range you selected is not valid");

        if (!String.IsNullOrEmpty(this.HoneyPot))
            errors.Add("Please do NOT complete the field marked IGNORE. This is to verify you are human and not a spam bot.");

        return errors;
    }
}
    

 

As you can see the class has a number of properties that represent each field in the form. It also has two constructors - an empty constructor that creates an "empty" instance of the form with some default values and another constructor that takes a NameValueCollection as its only parameter. This may seem like an odd type to choose, but when you remember that both the Request.QueryString and Request.Form values that represent HTTP posted data use this type, it becomes clear why it's a good choice. You can pass in either to the constructor of our ContactFormModel to initialise it. But why not just pass in the actual HttpRequest? Well, the answer is simple - our class isn't dependent on it so it is easier to test (we can easily create a unit test that mocks a NameValueCollection for testing, for instance).

Our ContactFormModel also has just one simple method:

Validate() - this checks all fields have been completed correctly and, if not, returns a list of error messages (which we can display in the UI). This allows us to easily perform server-side validation of all our fields (essential even when validating client-side). The logic you use to validate fields is completely up to you!

You'll also notice a const (static) value called SessionKey. This just defines a unique "key" that can be used to store a token in Session. I'll come back to this later.

So you can see that our model can parse the posted form values and also validate whether they are correct. This means none of that "business logic" need be present in our Razor script, keeping things clean and readable. Ideally we want our Razor scripts to contain just presentation logic and nothing more. Which leads us nicely to the Razor script itself...

The Script

So now we've seen the model let's take a look at the actual Razor script (that you can use in a macro within Umbraco). As I mentioned, the idea is to keep it simple with the "heavy lifting" done in the model (or, in a larger application, a service layer). You'll see that most of the following script is just HTML markup:

 

        @inherits umbraco.MacroEngines.DynamicNodeContext
@using umbraco.MacroEngines;
@using System.Web.Mvc;
@using Diplo;

<h2>Example Contact Form</h2>

@{
    bool isPostBack = !String.IsNullOrEmpty(Request.Form["submit-button"]);
    var formModel = new ContactFormModel();

    if (!isPostBack)
    {
        Session.Remove(ContactFormModel.SessionKey);
        @RenderForm(formModel)
    }
    else
    {
        if (Session[ContactFormModel.SessionKey] != null)
        {
            @DisplayResubmissionError()
            return;
        }

        formModel = new ContactFormModel(Request.Form);
        var errors = formModel.Validate();
        
        if (errors.Count > 0)
        {   
            @DisplayErrors(errors)
            @RenderForm(formModel)
        }
        else
        {
            @DisplayCompletionMessage(formModel)
        }
    }
}

@helper RenderForm(ContactFormModel form)
{
    Repository repository = new Repository();
    var titles = repository.GetTitles();
    var ages = repository.GetAgeRanges();
    const string selected = "selected=\"selected\"";
    
    <form id="contactForm" action="" method="post">
        <fieldset>
            <legend>Your Details</legend>
            <dl>
                <dt>
                    <label for="title">Title</label>
                </dt>
                <dd>
                    <select id="title" name="title"class="required">
                        <option value="">-PleaseSelect-</option>
                    @foreach(var title in titles)
                     {
                        <option value="@title.Key"@Library.If(form.Title== title.Key, selected)>@title.Value</option>
                     }
                    </select>
                </dd>
            </dl>
            <dl>
                <dt>
                    <label for="firstname">FirstName</label>
                </dt>
                <dd>
                    <input type="text" id="firstname"class="required" name="firstname" value="@form.FirstName" maxlength="20"/>
                </dd>
            </dl>
            <dl>
                <dt>
                    <label for="lastname">LastName</label>
                </dt>
                <dd>
                    <input type="text" id="lastname"class="required" name="lastname" value="@form.LastName" maxlength="20"/>
                </dd>
            </dl>
            <dl>
                <dt>
                    <label for="email">Email</label>
                </dt>
                <dd>
                    <input type="text" id="email"class="required  email" name="email" value="@form.Email" maxlength="255"/>
                </dd>
            </dl>
            <dl>
                <dt>
                    <label for="agerange">AgeRange</label>
                </dt>
                <dd>
                    <select size="1" name="agerange" id="agerange"class="required">
                      <option value="">-PleaseSelect-</option>
                      @foreach(var age in ages)
                      {
                        <option value="@age.Key"@Library.If(form.AgeRange== age.Key, selected)>@age.Value</option>
                      }
                    </select>
                </dd>
            </dl>
            <dl>
                <dt>
                    <label for="comment">Comment</label>
                </dt>
                <dd>
                    <textarea id="comment" name="comment" rows="5" cols="50">@form.Comment</textarea>
                </dd>
            </dl>
            <dl class="honey">
                <dt>
                    <label for="comment">IGNORE</label>
                </dt>
                <dd>
                    <input type="text" id="honeybear" name="honeybear" value="" maxlength="50" />
                </dd>
            </dl>
        </fieldset>
        <div>
            <input type="submit" name="submit-button" value="Submit" class="button" />
        </div>
    </form>
}

@helper DisplayErrors(IEnumerable<string> errors)
{
    <h3>Oops!</h3>
    
    <p>Please fix the following problems and try again:</p>
    
    <ul>
        @foreach(var error in errors)
        {
            <li>@error</li>
        }
    </ul>
}

@helper DisplayCompletionMessage(ContactFormModel formModel)
{
    <h3>Thank You</h3>
    
    <p>
        Thank you, @formModel.FirstName, your message has been successfully submitted!
    </p>
    
    <p>
        The data you submitted is:
    </p>
    
    <table>
        <tr>
            <th>Name</th><th>Value</th>
        </tr>
        <tr>
            <td>Title</td><td>@formModel.Title</td>
        </tr>
        <tr>
            <td>First Name</td><td>@formModel.FirstName</td>
        </tr>
        <tr>
            <td>Last Name</td><td>@formModel.LastName</td>
        </tr>
        <tr>
            <td>Email</td><td>@formModel.Email</td>
        </tr>
        <tr>
            <td>Age Range</td><td>@formModel.AgeRange</td>
        </tr>
        <tr>
            <td>Comment</td><td>@formModel.Comment</td>
        </tr>
    </table>
    
    <p>Note: If you try and reload the page the form won't be resubmitted.</p>
    
    @{
        Session.Add(ContactFormModel.SessionKey, true);
        // You can now easily email this or store it in a database etc.
    }
}

@helper DisplayResubmissionError()
{
    <h3>Oops!</h3>
    <p>You are trying to submit the form again!</p>
}
    

How it Works

bool isPostBack = !String.IsNullOrEmpty(Request.Form["submit-button"]);

This essentially does what an ASP.NET web form does and sets a boolean value to determine whether the form has been submitted (or "posted-back").

if (!isPostBack) { @RenderForm(formModel) }

If the form hasn't been submitted (ie. this is the first time the page has been loaded) then we call a helper method to render the form fields, passing in an empty form model (as the form has yet to be completed).

var formModel = new ContactFormModel(Request.Form);

if (Session[ContactFormModel.SessionKey] != null)
{
	@DisplayResubmissionError()
	return;
}

if (errors.Count > 0)
{
	@DisplayErrors(errors)
	@RenderForm(formModel)
}
else
{
	@DisplayCompletionMessage(formModel)
}
    

 

The second part of the logic is executed when the form is submitted:

The code first checks whether a Session variable exists which denotes that the form has already been submitted (as we set it upon a successful submission). This should prevent accidental (or, indeed, purposeful) re-submissions of the form by reloading the page etc.

Assuming the form isn't a resubmission we then instantiate our ContactFormModel using the overload that takes in a NameValueCollection, which allows us to pass in the Request.Form collection. The constructor then parses out the form values and assigns them to the matching properties. This gives us a nice, strongly typed object to work with.

After that we can call the Validate() method on our model which, if you remember, validates that all the fields have been filled in correctly. This returns a collection of error messages. If there are no errors (i.e. the collection has zero items in it) then we know the form is OK. If it does have errors, we display these (using a simple helper) and then re-render the form using our model. This ensures all the fields that were filled in when the form was submitted still remain filled in (it's like ViewState without the headaches!).

Lastly, we display a completion message where you can put in whatever message you like. This could be a content-managed rich-text field pulled in from Umbraco or it could be just a static value. You'll also notice we pass our ContactFormModel to this method, too. At this stage it will be fully populated and hold all the values our user submitted. We can then do something with this data - such as store it in a database or email it, as the sketch below illustrates.
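Purely as an illustration (this isn't part of the package - the addresses and SMTP host are placeholders you'd swap for your own settings), here's one way you might email the submitted data using the standard System.Net.Mail classes:

using System.Net.Mail;

public static class ContactFormMailer
{
    // Sends the submitted form values to a site administrator.
    // The "to"/"from" addresses and SMTP host below are hypothetical - configure your own.
    public static void Send(ContactFormModel form)
    {
        var message = new MailMessage("noreply@example.com", "admin@example.com");
        message.Subject = "New contact form submission from " + form.FirstName + " " + form.LastName;
        message.Body = "Title: " + form.Title + "\n" +
                       "Email: " + form.Email + "\n" +
                       "Age Range: " + form.AgeRange + "\n" +
                       "Comment: " + form.Comment;

        var client = new SmtpClient("localhost"); // or pick up settings from web.config
        client.Send(message);
    }
}

You could call something like ContactFormMailer.Send(formModel) from the completion branch, or equally swap the body out for a database insert.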

Client Side Validation with jQuery Validate

Whilst server-side validation is essential it is nice to provide client-side validation, too. This is more responsive to the user and also spares the server some load. You'll realise, of course, that we can't just use standard ASP.NET validator controls because this is a Razor script. Luckily, we can come up with something better (and easier) to implement by using the jQuery Validator plugin. This makes validating our form as simple as this:

<script type="text/javascript">
    $
(document).ready(function()
   
{
        $
("#contactForm").validate();
   
});
</script>

The "magic" works by marking the fields we wish to be validated with special CSS classes that inform the validator plugin what validation to perform. For instance, to mark a field as both required and to validate it as an email address you would add:

class="required email"

That's it! Of course, it's highly extensible so you can perform much more complex validation, too.

Preventing Spam

One thing you'll be aware of is that putting a form on a page encourages spam, often from automated "spam bots" that fill in all the form fields (often with adverts) and then submit it. One way of getting around this is to use a CAPTCHA control, but these can be obtrusive and off-putting to users. They can also add overhead to the logic of the form. So, as an alternative, I'm going to use another, simpler method which is a variation of the "honey pot" technique.

How this works is to have one of your form fields hidden via CSS (display:none). Humans shouldn't see this field and thus they won't complete it. However, automated spam-bots will fill in everything they see, getting "trapped" in the process. So as part of the Validate() method in our model we simply need to check whether this field has been completed and warn the user not to fill it in - a human will be able to understand this but a bot will not. Of course, bots are getting sophisticated and some will "see through" this, but it's a simple measure that will weed out a lot of spam. A rough sketch of such a check is shown below.
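To make that concrete, here is a minimal sketch of the kind of check Validate() could perform (the Honeybear property is hypothetical - it just stands in for whatever property on your model holds the posted value of the hidden "honeybear" field):

// Inside the model's Validate() method, alongside the other field checks.
// Honeybear is assumed to hold the posted value of the hidden "honeybear" field.
if (!String.IsNullOrEmpty(this.Honeybear))
{
    errors.Add("Please do not fill in the field marked IGNORE.");
}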

In Summary

  • Razor scripting using C# (or VB) is a "first class" component of Umbraco 4 (from version 4.6 upward) - no need for XSLT or User Controls, if you don't want them.
  • Just like in MVC we want our Razor scripts to be primarily concerned with presentation and not business logic.
  • You can break Razor scripts down by using @helper methods to make them more readable.
  • You don't need ViewState to maintain the state of form variables - they are sent as part of the HTTP request when the form is posted.
  • Even in Web Forms you can still have as many forms on a page as you like - so long as they don't have runat="server" on them.
  • jQuery validate is a great way of performing client-side validation.

Download Package

I've created a small package that you can download from the Umbraco Package Repository that illustrates the code in this blog post. The repository also contains a link to the source-code if you don't want the package.

]]>
Using the ClientDependency Framework in Umbraco Minification is the process of removing unnecessary whitespace and characters from your website assets without changing the functionality. This is desirable as it helps make your pages load quicker which, in turn, helps keep visitors happy and also can improve your SEO. https://www.diplo.co.uk/blog/web-development/using-the-clientdependency-framework-in-umbraco/ http://www.diplo.co.uk/1659.aspx Thu, 15 Mar 2012 00:00:00 GMT Umbraco

Minification is the process of removing unnecessary whitespace and characters from your website assets (typically JavaScript and CSS) without changing the functionality. This is desirable as it helps make your pages load quicker which, in turn, helps keep visitors happy and also can improve your SEO. Sounds great, huh?

The trouble is that whilst it reduces the size of your files it can also make them very difficult to read and work with. If you've ever looked at the minified versions of jQuery then you'll know that it's unreadable. So, whilst it's desirable to minify files, it's something that many people don't bother with because of the hassle it can cause when you need to make modifications.

Wouldn't it be Great if There Was an Easier Way?

One of the lesser known features of Umbraco is that it ships with something called the ClientDependency Framework (CDF). You may have seen a strange folder in your /App_Data/TEMP/ directory called ClientDependency, full of strange looking files. But what are these and what is the CDF? Well, the website for the CDF describes it thus:

"ClientDependency will not only manage the inter-dependencies of scripts and styles between all your views, controls and pages but has the added benefit of managing all of the file compression, combination & minification for you. It will even detect and process script/styles that aren't registered with the framework and other requests such as json that can be minified/compressed"

In other words, what the CDF can do is enable you to include all assets (JS & CSS) easily and also combine them into single files which are then minified. Combining files is just as essential as minification if you want to reduce page load speeds as it reduces the number of HTTP requests being made.

The developers of Umbraco realised this and have used it to compress and reduce the number of scripts being used in the Umbraco back-end. The files you see in /App_Data/TEMP/ClientDependency/ are the cached versions of the minified scripts that the CDF has created.

So How do I use the ClientDependency Framework?

Note: This is an old post and applies to Umbraco 4 (4.5 and up) using Web Forms. If you are using Umbraco 6 or 7 with MVC then you should read the documentation Wiki instead

Luckily, since the CDF comes pre-installed with Umbraco then you can use it too without any real effort. Let's look at the make-up of a typical (but very simplified) master page in Umbraco:

<%@ Master Language="C#" MasterPageFile="/umbraco/masterpages/default.master" AutoEventWireup="true" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title></title>
<link rel="stylesheet" href="/css/main.css" type="text/css" />
<link rel="stylesheet" href="/css/blog.css" type="text/css" />
<script type="text/javascript" src="/scripts/main.js"></script>
<script type="text/javascript" src="/scripts/jquery.plugin.js"></script>
<script type="text/javascript" src="/scripts/default.js"></script>
</head>
<body>
....
</body>
</html>

As you can see there are two CSS files and three JavaScript files being included. To perform the same task using the CDF you would change your master page to look something like this:

<%@ Master Language="C#" MasterPageFile="/umbraco/masterpages/default.master" AutoEventWireup="true" %>
<%@ Register Namespace="ClientDependency.Core.Controls" Assembly="ClientDependency.Core" TagPrefix="CD" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
    <title></title>

    <CD:ClientDependencyLoader runat="server" id="Loader">
        <Paths>
            <CD:ClientDependency PathName="Styles" Path="/css" />
            <CD:ClientDependency PathName="Scripts" Path="/scripts" />
        </Paths>
    </CD:ClientDependencyLoader>

    <CD:CssInclude runat="server" FilePath="/css/main.css" />
    <CD:CssInclude runat="server" FilePath="/css/blog.css" />

    <CD:JsInclude runat="server" FilePath="/scripts/main.js" />
    <CD:JsInclude runat="server" FilePath="/scripts/jquery.plugin.js" />
    <CD:JsInclude runat="server" FilePath="/scripts/default.js" />
</head>
<body>
    ....
</body>
</html>

If you run this and view the source you'll see how the five CSS and JavaScript files have been combined and minified into just two files. These are then cached for future use.

How Does This Work?

There are three main steps to this:

  • First you need to register the CDF controls assembly with a standard Register directive.
  • Then you add the ClientDependencyLoader control that basically acts as a placeholder for where your scripts will be "injected". In other words, place this where you would like your files to be included.
  • Lastly you use the CssInclude and JsInclude controls to include your resources. The beauty is you can place these anywhere you like in your page; the files will be included where the ClientDependencyLoader is. You can also programmatically include files, too.

Note that the CDF doesn't alter your original files - it generates its own combined and minified versions of your originals.

For more details please check the CDF Documentation.

One important thing to note is that the files are only minified and cached when you have debug="false" in your web.config file. Once they are minified, your files are also cached for performance reasons. The downside of this is that you need to clear the cache if you change any of your original source files. This is why it's recommended to have debug="true" when developing and then set it back to "false" once you "go live". However, if you need to clear the cache you can:

  • Change the version number in the ClientDependency config
  • Delete the ClientDependency folder from App_Data
  • You DO NOT have to restart the app pool to clear ClientDependency cache

The CDF offers far more features, such as registering files from "code behind" or via Razor CSHTML scripts, "rogue" script detection and more, but I'd recommend reading the documentation for how to do this. You can, of course, use the CDF outside of Umbraco too - there are no dependencies.

As I mentioned, this post specifically refers to Umbraco 4, but the basic principles will probably apply to 5 too - check the documentation on registering the CDF in MVC applications.

]]>
Small is Beautiful - .NET Micro ORMs Here I look into the rise of "micro" ORMs for .NET and C#. The new generation of lightweight, single-file objects mappers are designed to be the answer to the "bloat" of the heavyweight Object Relational Mappers. Performance, speed and simplicity are the keywords - but can they live up to the hype? https://www.diplo.co.uk/blog/web-development/small-is-beautiful-net-micro-orms/ http://www.diplo.co.uk/1626.aspx Mon, 15 Aug 2011 00:00:00 GMT Kittens in a Box

The concept of Object relational mapping (ORM) has been around for a long time, but it really seems to have come to prominence in the .NET world with the rise of LINQ and ASP.NET MVC. 

Microsoft themselves have released two major ORMs in recent years - first there was LINQ To SQL and then, more recently, the heavy-weight ADO .NET Entity Framework. These join established offerings from 3rd parties such as NHibernate and SubSonic.

Rise of the ORM

Perhaps the reason for this growth in interest in ORMs is the promise they hold - to liberate us from the tedious and often repetitive task of hand-writing SQL queries, loading results and then hydrating objects from those results. Wouldn't it be nice if we could just create our database and then auto-magically have all the CRUD methods necessary to manipulate our classes generated for us? Wouldn't it be even better if we could just create our classes and generate the database and associated methods from those? This is the seductive promise of the ORM.

A Fickle Temptress

However, the ORM can be a fickle temptress - her beguiling looks are not all they seem. Eventually, if you work on any decent sized project you will encounter the dreaded Object-relational impedance mismatch. In essence this is the disconnect between the square peg of relational databases and the round hole of object-orientated programming. Whilst ORMs may mask this gap (with layers of abstraction and complexity), they don't always succeed in fully shielding it from you. There inevitably comes a time when you end up thinking, "Actually, if only I could hand-tune this query in SQL...."

Sometimes this is down to performance (ORMs can often be many times slower than native SQL) and sometimes it is down to not being able to generate exactly the query you would like without resorting to hacks (the ubiquitous 'coding horror' of SubSonic). Usually there will come a point where you are banging your head against the wall - simply because modern ORMs are extremely complex beasts and sometimes that leaky abstraction ends up as a pool on your desk. What was designed to save you time and effort suddenly ends up causing you more frustration and effort. I'm sure we've all been there.

Enter the Micro ORM

A few smart people realised that trying to do away with SQL altogether was perhaps not such a great idea. Whilst SQL is undoubtedly an ugly and inconsistent language it is still often, for better or worse, the best option for querying relational databases. After all, that is its raison d'être, yes? But mapping data to objects is still tedious, you'd agree, and does seem like a candidate for some kind of automation. This is where the micro ORM comes in - it strips down the traditional heavy-weight ORM to its core and throws away the complexity (think data contexts, XML mapping files, learning new query languages).

The new breed of micro ORM tends to be a compact single class library (often distributed as a single file), open-source (so you can meddle with it) and lightning quick (when compared to the more heavy-weight ORM offerings). They generally do away with all the trappings so you are left with a bunch of simple classes and methods for hydrating your objects from SQL queries. Nothing fancy, just a few basic methods that enable you to do the job with minimal fuss. Documentation tends to be fairly minimal simply because you can pick the concepts up from a few code examples - no need to scour Amazon to find weighty books that explain how you are supposed to use the damn thing!
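To see what that boilerplate looks like, here's a rough sketch (the Product class, table and connection string are purely illustrative) of the hand-rolled ADO.NET code a micro ORM essentially writes for you - open a connection, run a query and hydrate objects from the reader:

using System.Collections.Generic;
using System.Data.SqlClient;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ProductRepository
{
    // Hypothetical example - the table, columns and connection string are placeholders.
    public static List<Product> GetByCategory(int categoryId, string connectionString)
    {
        var products = new List<Product>();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT ProductID, ProductName FROM Products WHERE CategoryID = @categoryId", connection))
        {
            command.Parameters.AddWithValue("@categoryId", categoryId);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // This manual mapping is the tedious part a micro ORM automates
                    products.Add(new Product
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }

        return products;
    }
}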

The Main Contenders

Currently the most prominent micro ORMs that are available for use in .NET are:

  • Massive
  • Simple.Data
  • PetaPoco
  • Dapper

I'll take a very quick look at all of these in the next section...

Massive

Massive started life as Rob Connery playing around with the new Dynamic datatype introduced in .NET 4.0. He explains his reasoning as:

"I wanted to stay as close to the "metal" as I possibly could. This (to me) meant that I didn't want to worry about Types, Tracking Containers, Contexts and so on. What I did want to do is describe my table - and this is the only type I have (which, ironically, is a Dynamic type)".

Massive has a simple querying model allowing you to do stuff like this:

var table = new Products();
// grab all the products
var products = table.All();
// just grab from category 4. This uses named parameters
var productsFour = table.All(columns: "ProductName as Name", where: "WHERE categoryID=@0", args: 4);

For those that don't like the inheritance model of the initial release the second release also allows you to wire-up your tables up like this:

var tbl = new DynamicModel("northwind", tableName: "Products", primaryKeyField: "ProductID");

And the ironically named Massive achieves all this in 365 lines of code!

Simple.Data

Mark Rendle's Simple.Data is, in some ways, one of the grand-fathers of the new wave of pared-down data access libraries (it inspired Massive, for example). Is it an ORM? Well, Mark would say not. He states on his blog:

It's a database access library built on the foundations of the dynamic keyword and components in .NET 4.0.

It's not an O/RM. It looks a bit like one, but it doesn't need objects, it doesn't need a relational database, and it doesn't need any mapping configuration. So it's an O/RM without the O or the R or the M. So it's just a /.

Semantics aside, Simple.Data lives up to its name in that it's lightweight and, erm, simple. An example from the Wiki shows how it can be used:

var db = Database.Open(); // Connection specified in config.
var user = db.Users.FindByNameAndPassword(name, password);

But, you may be thinking, where does the FindByNameAndPassword method on the User class come from? Well, Simple.Data makes clever use of the dynamic type. As Mark explains in this post:

In this example, the type returned by Database.Open() is dynamic. It doesn't have a Users property, but when that property is referenced on it, it returns a new instance of a DynamicTable type, again as dynamic. That instance doesn't actually have a method called FindByNameAndPassword, but when it's called, it sees "FindBy" at the start of the method, so it pulls apart the rest of the method name, combines it with the arguments, and builds an ADO.NET command which safely encapsulates the name and password values inside parameters.

Clever stuff, no?
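Just to illustrate the idea (this is my own simplified sketch, not Simple.Data's actual implementation), turning a method name like FindByNameAndPassword into parameterised SQL boils down to a bit of string slicing:

using System;
using System.Linq;

public static class DynamicQuerySketch
{
    // Turns "FindByNameAndPassword" + "Users" into
    // "SELECT * FROM [Users] WHERE [Name] = @p0 AND [Password] = @p1"
    // (Naive: a real implementation must cope with column names that contain "And".)
    public static string BuildSql(string methodName, string tableName)
    {
        var columns = methodName.Substring("FindBy".Length)
                                .Split(new[] { "And" }, StringSplitOptions.RemoveEmptyEntries);

        var where = String.Join(" AND ",
            columns.Select((col, i) => String.Format("[{0}] = @p{1}", col, i)).ToArray());

        return String.Format("SELECT * FROM [{0}] WHERE {1}", tableName, where);
    }
}

The real library then executes the generated command via ADO.NET with the method arguments bound to the parameters.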

PetaPoco

PetaPoco is another relatively new ORM from Top Ten Software. It was inspired by Massive (and, like Massive, is a single file) but unlike Massive it can work with plain old CLR objects (otherwise known as POCOs). It's slightly more comprehensive than some of the more minimalist implementations like Dapper, but is still extremely terse compared with the "bloat" of a full-on ORM. For instance, you can decorate classes with attributes to make updates and inserts simpler, but you aren't required to (see the sketch below).
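As a rough example (the attribute names are as I recall them from PetaPoco's documentation, so do check them against the version you use), a decorated POCO might look like this:

using System;

// A hypothetical POCO mapped to an "articles" table.
[PetaPoco.TableName("articles")]
[PetaPoco.PrimaryKey("article_id")]
public class Article
{
    public long article_id { get; set; }
    public string title { get; set; }
    public DateTime date_created { get; set; }
}

With the attributes in place, calls such as db.Insert(article) or db.Update(article) should know which table and primary key to target without you spelling them out each time.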

An example of PetaPoco in action is:

var a = db.SingleOrDefault<Article>("SELECT * FROM articles WHERE article_id=@0", 123);

long count = db.ExecuteScalar<long>("SELECT Count(*) FROM articles");

Pretty simple and easy to understand, you'd agree?

As a bonus, a customised version of PetaPoco also powers the data-layer in my favourite CMS, Umbraco.

Dapper

The last micro ORM we'll be looking at is Dapper - "a simple object mapper for .NET" by Sam Saffron. Again, it's just a single class file and very easy to add to your project.

Dapper came about because Sam, a brilliant developer who works on the StackExchange family of sites, needed to improve the performance of pages on StackOverflow (a site serving a very high number of page impressions). His blog post A Day in the Life of a Slow Page at Stack Overflow outlines his frustration with the SQL generated by LINQ to SQL and the way it "leaks performance":

The LINQ-2-SQL abstraction is leaking 78ms of performance, OUCH. This is because it needs to generate a SQL statement from our fancy inline LINQ. It also needs to generate deserializers for our 5 objects. 78ms of overhead for 60 rows, to me seems unreasonable.

Dapper is perhaps the leanest and therefore the fastest of all the mappers we have looked at. In most circumstances you wouldn't be able to tell the difference between it and native ADO.NET code. But it is still simple to use, as shown below in this example:

public class Dog
{
    public int? Age { get; set; }
    public Guid Id { get; set; }
    public string Name { get; set; }
    public float? Weight { get; set; }

    public int IgnoredProperty { get { return 1; } }
}

var guid = Guid.NewGuid();
var dog = connection.Query<Dog>("select Age = @Age, Id = @Id", new { Age = (int?)null, Id = guid });

dog.Count()
    .IsEqualTo(1);

dog.First().Age
    .IsNull();

dog.First().Id
    .IsEqualTo(guid);

Conclusions

As you can see these "micro" ORMs manage to maintain a nice balance between simplicity of use and functionality. Yes, you will often end up hand-writing more code compared to using a fully-fledged ORM. On the other hand you will probably end up tearing less of your hair out, too! Of course micro ORMs have their downside, too - you might miss things like transaction management, object tracking, intellisense, compile-time query checking etc. Like in all programming there is never a single silver bullet. However, the ability to fine-tune your queries and remove the "bloat" can feel refreshing and give your app a real speed increase, too.

Of course, there is nothing stopping you from augmenting a "real" ORM with a "micro" ORM for those parts where you need to tune performance (as StackOverflow do). In most cases you can mix and match and get the best out of both worlds.

So which of the above should you try? Well, I'd say all of them - most can be installed very simply (some via Nuget packages) and you can then play around to get a feel for how they work. In the end, it probably comes down to personal preference and community support.

Addendum

Since writing this I've become aware of yet more micro ORMs for .NET. I haven't had time to evaluate these personally, but here are some more to consider:

  • FluentData - "FluentData is a Micro ORM that makes it simple to select, insert, update and delete data in a database. It gives the developer the power of ADO.NET but with the convenience of an ORM. It has a simple to use fluent API that uses SQL - the best and most suitable language to query data, and SQL or fluent builders to insert, update and delete data."

You can also find a whole host of Micro ORMs on NuGet.

]]>
Creating a Paged List in Umbraco using Razor How to display a paginated list of Umbraco pages using the new Razor scripting language added to Umbraco 4.7. I'll also show you how you can make use of extension methods and Razor helpers to improve your code and make it more reusable. https://www.diplo.co.uk/blog/web-development/creating-a-paged-list-in-umbraco-using-razor/ http://www.diplo.co.uk/1657.aspx Tue, 21 Jun 2011 00:00:00 GMT Umbraco

Note: This is an old post for Umbraco 4. In newer versions such as Umbraco 6 and 7 you should be using IPublishedContent and Partial Views and not DynamicNode and Macros.

In this blog post I'll show you how to display a list of Umbraco pages with some simple pagination using the new Razor scripting language added in the latest version of Umbraco. I'll be using Umbraco 4.7 as the basis of this post.

The Problem

Imagine, for instance, we have a section within our site with a lot of child pages you want to list (such as a News section). Your content tree might look like this:

Instead of listing them all in one long list it would be nice to add pagination to break-up the long list into manageable sections. Wouldn't it be preferable to show the following output instead?

Well, using Razor scripts you can achieve this goal quite easily! Plus it is much simpler to implement than it would be using an equivalent XSLT macro.

Getting Started

You can create a new Razor script in the Umbraco back-office section by going to the Developer section and right-clicking the Scripting Files folder. Call the file RazorListing and add the following basic script (this is using cshtml):

 
@inherits umbraco.MacroEngines.DynamicNodeContext
@{
    int pageSize; // How many items per page
    int page;     // The page we are viewing

    /* Set up parameters */

    if (!int.TryParse(Parameter.PageSize, out pageSize))
    {
        pageSize = 6;
    }

    if (!int.TryParse(Request.QueryString["page"], out page))
    {
        page = 1;
    }

    /* This is your basic query to select the nodes you want */

    var nodes = Model.Children.Where("Visible").OrderBy("displayDate desc");

    int totalNodes = nodes.Count();
    int totalPages = (int)Math.Ceiling((double)totalNodes / (double)pageSize);

    /* Bounds checking */

    if (page > totalPages)
    {
        page = totalPages;
    }
    else if (page < 1)
    {
        page = 1;
    }
}

<h2>Found @totalNodes results. Showing Page @page of @totalPages</h2>

<ul>
    @foreach (var item in nodes.Skip((page - 1) * pageSize).Take(pageSize))
    {
        <li><a href="@item.Url">@item.Name</a> (@item.DisplayDate.ToShortDateString())</li>
    }
</ul>

<ul class="paging">
    @for (int p = 1; p < totalPages + 1; p++)
    {
        string selected = (p == page) ? "selected" : String.Empty;
        <li class="@selected"><a href="?page=@p" title="Go to page @p of results">@p</a></li>
    }
</ul>

How to Use

Once you have created your script you can add it as a Macro to your Umbraco template as usual. You can then add a PageSize parameter to it and supply whatever value you like your page size to be. This will give you something like this:

<umbraco:Macro Alias="RazorListing" PageSize="5" runat="server"></umbraco:Macro>

How The Script Works

The script is relatively simple. We have two integer variables called pageSize and page. The variable called pageSize is populated via a Parameter that passes the value through from a macro. Because macros pass everything as strings we first check it can be parsed as an Int and then perform the conversion. If, for some reason, it can't be converted we assign a default value of 6.

int pageSize; // How many items per page
int page;     // The page we are viewing

/* Set up parameters */

if (!int.TryParse(Parameter.PageSize, out pageSize))
{
    pageSize = 6;
}

We also populate our page variable by checking whether a page number has been passed in via the query string (if not we default to page 1).

if (!int.TryParse(Request.QueryString["page"], out page))
{
    page = 1;
}

After this we perform our basic query against the Model (which represents the current page node) to retrieve all child pages that are visible which we then order by our custom displayDate in descending order (so we get the newest first). We store the result in a variable called nodes.

var nodes = Model.Children.Where("Visible").OrderBy("DisplayDate desc");

Note: Umbraco 4.7 has a bug that stops it ordering properly using "native" properties such as CreateDate or UpdateDate. If you try and use these you'll find ordering doesn't work. This should be fixed in 4.7.1.

We then count the number of pages returned by our LINQ-style query and store the value in an integer called totalNodes. We then work out how many pages of results there are by dividing totalNodes by our pageSize and rounding the results up. We also do a little check to ensure that the current page isn't out of bounds (in case someone has naughtily edited the query string).

var nodes = Model.Children.Where("Visible").OrderBy("DisplayDate desc");

int totalNodes = nodes.Count();
int totalPages = (int)Math.Ceiling((double)totalNodes / (double)pageSize);

/* Bounds checking */

if (page > totalPages)
{
    page = totalPages;
}
else if (page < 1)
{
    page = 1;
}

We can then display the results by looping through the nodes using the Skip and Take IEnumerable extension methods to ensure we only iterate the current page of nodes.

<h2>Found @totalNodes results. Showing Page @page of @totalPages</h2>

<ul>
    @foreach (var item in nodes.Skip((page - 1) * pageSize).Take(pageSize))
    {
        <li><a href="@item.Url">@item.Name</a> (@item.DisplayDate.ToShortDateString())</li>
    }
</ul>

Once we've done this we can then display some basic pagination using a simple for-loop to generate an unordered list with a link for each of the pages in totalPages. The link then passes in the page to jump to as part of the query string. We also check whether the page we are displaying is the current page and, if it is, give it a different CSS class called "selected".

<ul class="paging">
    @for (int p = 1; p < totalPages + 1; p++)
    {
        string selected = (p == page) ? "selected" : String.Empty;
       
<li class="@selected"><a href="?page=@p"title="Go to page @p of results">@p</a></li>
    }
</ul>

But We Can do Better...

This is nice, but there were a couple of niggling problems that bugged me about this. First off, using the algorithm that uses Skip() and Take() to get the current page of results is a little clunky. Wouldn't it be better if it was simplified to a method we could call?

Secondly, the code that generates the paging is all mixed in with the listing. But what if we wanted to have paging elsewhere? Do we really want to copy and paste this code around? That's not very DRY now, is it?

So How Can We Improve Things?

Well, the first thing we can do is create a custom extension method that helps us grab the "page" of results we want. The Umbraco blog on Razor points us in the correct direction. The quickest way to create an extension method is simply to add a static class file to your App_Code folder (you can always create a proper class library project later). What I did was create a C# class called RazorExtensions.cs in App_Code that looked like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using umbraco.MacroEngines;

namespace Diplo
{
    public static class RazorExtensions
    {
        public static DynamicNodeList Paged(this DynamicNodeList nodes, int page, int pageSize)
        {
            return new DynamicNodeList(nodes.Items.Skip((page - 1) * pageSize).Take(pageSize));
        }
    }
}

As you can see this method "extends" the DynamicNodeList that Umbraco returns and performs the skip and take operation using a more fluid style.

To solve the second problem we can use something called a Razor Helper file that Microsoft introduced as part of the Razor scripting engine. These work just as well with Umbraco as they do in ASP.NET MVC and WebMatrix. You can also create these in App_Code via Visual Studio. Just right-click your App_Code folder, select Add File and add a new Razor Helper. I just created a file called DiploHelpers, which created a DiploHelpers.cshtml file for me. In this file I created this simple paging helper method:

@helper GeneratePaging(int currentPage, int totalPages)
{
    if (totalPages > 1)
    {
        <ul class="paging">
        @for (int p = 1; p < totalPages + 1; p++)
        {
            string selected = (p == currentPage) ? "selected" : String.Empty;
            <li class="@selected">
                <a href="?page=@p" title="Go to page @p of results">@p</a>
            </li>
        }
        </ul>
    }
}

As you can see this takes our former paging code and encapsulates it within a helper that can be re-used elsewhere. If, at a later date, you want to make this simple method a little more complex (e.g. add previous and next links) then you only need to alter the code in one place - there's a rough sketch of that below.
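Purely as an illustration of how the helper could grow (this isn't part of the original script, so adapt the markup and class names to taste), adding previous and next links might look something like this:

@helper GeneratePagingWithPrevNext(int currentPage, int totalPages)
{
    if (totalPages > 1)
    {
        <ul class="paging">
            @if (currentPage > 1)
            {
                <li class="prev"><a href="?page=@(currentPage - 1)" title="Go to previous page">&laquo; Prev</a></li>
            }
            @for (int p = 1; p < totalPages + 1; p++)
            {
                string selected = (p == currentPage) ? "selected" : String.Empty;
                <li class="@selected"><a href="?page=@p" title="Go to page @p of results">@p</a></li>
            }
            @if (currentPage < totalPages)
            {
                <li class="next"><a href="?page=@(currentPage + 1)" title="Go to next page">Next &raquo;</a></li>
            }
        </ul>
    }
}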

Putting it All Together

I'll show you below how you can use these helpers in a new version of the paging script that replaces the last part (that generates the pages list and displays pagination) with the following code:

<ul>
    @foreach (var item in nodes.Paged(page, pageSize))
    {
       
<li><a href="@item.Url">@item.Name</a> (@item.DisplayDate.ToShortDateString())</li>
    }
</ul>

@DiploHelpers.GeneratePaging(page, totalPages)

As you can see we call the Paged extension method against our nodes and include the GeneratePaging helper via the @DiploHelpers prefix (which matches the name of our .cshtml script in App_Code). Note how we can pass in variables to both extension methods and Razor helpers.

So there you go - paging in Umbraco using Razor. Simples!

]]>
Creating a Simple Image Gallery in Umbraco How to create a simple photo gallery for the Umbraco 4 CMS. The gallery will generate HTML using some custom XSLT and this will be enhanced by a jQuery lightbox plugin. Behind the scenes we also use the ImageGen package to create thumbnails of the photos in the gallery. https://www.diplo.co.uk/blog/web-development/creating-a-simple-image-gallery-in-umbraco/ http://www.diplo.co.uk/1658.aspx Fri, 08 Apr 2011 00:00:00 GMT Umbraco

This is an old post written for Umbraco 4 using XSLT and isn't really relevant for Umbraco 7.

In this blog post I'll show you how to create a simple photo gallery using the Umbraco CMS (I'll be targeting the Umbraco 4.5 and above schema). Whilst there are already a few gallery packages available I often find that "rolling your own" gives you more flexibility and control over the end results. It's also more instructive and fun to create your own, too!

Note: In the following tutorial I will assume you understand the basic concepts of Umbraco and are comfortable building simple pages with it. If you're a complete beginner, this might not be for you...

Basic Concepts

To create the gallery we'll use a few components:

  • An Umbraco document-type that allows an editor to create a new gallery page
  • An XSLT file that generates the HTML for the photos in the gallery
  • A 3rd party jQuery plugin for generating a 'lightbox' style modal window for each image
  • An excellent Umbraco package called ImageGen for generating thumbnails

The basic concept for the gallery is that an editor will create a gallery page within their Umbraco site. The gallery page will have one property that utilises the built-in Media Picker datatype to allow the editor to select a folder in the Media Library. A macro will be added to a template that calls the XSLT, passing in the id of the media folder. The XSLT will then iterate through every image in the folder, generating an unordered list of images and links. The image will be a thumbnail, generated using ImageGen, and it will be surrounded with a hyperlink to the original image. A little jQuery magic will then pop up the original image in a nice lightbox. Bob's your uncle!

Putting It All Together

OK, now we (hopefully!) understand the idea behind the gallery and what components we'll be using we can start to look at some snippets of code. Note I won't be giving out the entire code to the gallery; rather I'll be using this tutorial as a way of pointing you in the right direction to create your own. In the long-run it's better to understand how to build something than blindly copying and pasting (not that I don't indulge in the latter sometimes myself, of course).

Creating a Gallery Document Type

The first thing we'll need to do is create a new Document Type in Umbraco (under the Settings section). You can call it what you like (such as Media Gallery). This can inherit from a common Master if you like, it doesn't matter. The main thing is that you add a new property to it that uses the Media Picker datatype. In my example I'll give this an alias of mediaFolderId as we will be using it to select a folder within the media library.

Once you've created a page instance using this Document Type you can then use it to select a folder in the Media Library containing the images you want to display in your gallery. This way you can easily add new images simply by uploading them to the folder.

Installing ImageGen Package

As mentioned, we will be using Douglas Robar's excellent ImageGen package for creating the image thumbnails, so before we proceed you should install ImageGen from the Umbraco Package Repository (you'll find it under Website Utilities). If you're not sure how to do this, check this blog post by Tim Geyssens. Note the standard version is free to use and this does everything we need (but if you like it support Doug and buy the pro version!).

Adding the XSLT

OK, now we've created a way of selecting a Media Folder we need to create the XSLT that grabs all the images from the selected folder and generates the HTML list. To do this we make use of an Umbraco XSLT extension called umbraco.library:GetMedia(). What this basically does is return some XML that describes all the images in the folder - we can then iterate over them using an XSL for-each. Note I'm calling GetMedia() with the second parameter as 'true', which I believe tells it to optimise the query and return all items in one go.

This is illustrated below in the following XSLT snippet:

<xsl:output method="xml" omit-xml-declaration="yes"/>

<xsl:param name="currentPage"/>

<xsl:variable name="mediaFolderId" select="number($currentPage/mediaFolderId)"/>
<xsl:variable name="thumbWidth" select="number(256)"/>
<xsl:variable name="thumbHeight" select="number(170)"/>

<xsl:template match="/">

    <!-- Displays all images from a folder in the Media Library -->

    <xsl:if test="number($mediaFolderId)">

        <ul id="gallery">
            <xsl:for-each select="umbraco.library:GetMedia($mediaFolderId, true())/Image">
                <xsl:if test="umbracoFile != ''">
                    <li>
                        <a href="{umbracoFile}" title="{@nodeName}" rel="gallery">
                            <img src="/imageGen.ashx?image={umbraco.library:UrlEncode(umbracoFile)}&amp;width={$thumbWidth}&amp;height={$thumbHeight}" width="{$thumbWidth}" height="{$thumbHeight}" alt="{@nodeName}" title="{@nodeName}" class="thumbnail"/>
                        </a>
                    </li>
                </xsl:if>
            </xsl:for-each>
        </ul>

    </xsl:if>

</xsl:template>

As you can see we use a for-each loop to get all the images in the folder with the id that matches the value stored in the current page's mediaFolderId property. This in turn generates an unordered list that consists of a link to the original image and a thumbnail of the image resized dynamically by ImageGen. We also add a few ids and classes to the HTML that can be used to style the list using CSS and also to identify the gallery so we can use it with a jQuery lightbox plugin. For accessibility we also ensure that we use the image name (@nodeName) in the alt and title attributes.

For reference, the generated HTML would look something like this:

<ul id="gallery">
   
<li><a href="/media/18228/ferrari_f40.jpg"title="Ferrari F40"rel="gallery">
       
<img src="/umbraco/imageGen.ashx?image=%2fmedia%2f18228%2fferrari_f40.jpg&amp;width=256&amp;height=170"
           
width="256"height="170"alt="Ferrari F40"title="Ferrari F40"class="thumbnail"/></a>
   
</li>
   
<li><a href="/media/18233/ferrari 365.jpg"title="Ferrari 512BB"rel="gallery">
       
<img src="/umbraco/imageGen.ashx?image=%2fmedia%2f18233%2fferrari+365.jpg&amp;width=256&amp;height=170"
           
width="256"height="170"alt="Ferrari 512BB"title="Ferrari 512BB"class="thumbnail"/></a>
   
</li>
</ul>

Once you've created the XSLT you can then add it to a Macro and embed it within a page template. I'll assume you know how to do this. But as an example, it would look something like this:

    <div id="photos">      
     
<umbraco:MacroAlias="DiploMediaGallery" runat="server"></umbraco:Macro>
   
</div>

Just insert the macro inside the template where you want the gallery to be displayed.

Add a jQuery Lightbox

Finally to finish things off we will add a nice jQuery 'lightbox' style plugin that will turn the link to the original image into a nice modal pop-up window. For this example I'll be using a lightweight jQuery plugin called ColorBox. However, you can easily adapt it to use any other similar plugin, such as FancyBox. Just follow the instructions on colorpowered.com/colorbox/ for installing ColorBox and remember to include a reference to the jQuery library. You should then be able to use a simple bit of jQuery script to initialise the gallery, such as the code I use on my site which is:

    <script type="text/javascript">

      $
(document).ready(function()
     
{
        $
('#gallery').find("li a").colorbox(
       
{
          maxWidth
:"94%",
          maxHeight
:"94%"
       
});
     
});

   
</script>

Basically, when the DOM is loaded this calls the ColorBox plugin, telling it to work its lightbox magic on all the nested LI A child elements of the UL element with the id of #gallery. This turns the hyperlinks into nice lightboxes (for users that have JavaScript enabled). In other words, when a user clicks an image, instead of it opening in another window it pops up in the lightbox with a nice caption (based on the title).

]]>
Using jQuery Templates to Bind to JSON data How you can use the new jQuery Templates plug-in to easily bind JSON data (returned from a simple Ajax request) to placeholders in your HTML to create rich, client-side templates. https://www.diplo.co.uk/blog/web-development/using-jquery-templates-to-bind-to-json-data/ http://www.diplo.co.uk/1915.aspx Tue, 01 Mar 2011 00:00:00 GMT jQuery.png

Note that this is an old post and there may well be better alternatives now, such as KnockoutJS, AngularJS, jsRender, HandlebarsJS etc.

Note it still works, though - If you want to pop straight to the demo then look here!

Original Post from 2011

In 2010 Microsoft raised a few developer eyebrows when it turned out they were contributing ideas and code to the jQuery Project. One of these ideas turned out to be a plug-in that made the task of binding data to repeatable segments of HTML much simpler and cleaner. Microsoft called this idea jQuery Templates. The guys at jQuery liked the idea and so, around the middle of last year, jQuery Templates were added to the library as official jQuery plug-ins. Note the old repo can now be found on GitHub at https://github.com/BorisMoore/jquery-tmpl

In this post I'll show you how you can use jQuery Templates to easily bind to JSON data returned from a simple Ajax request. For the example we will be using the Flickr API, since it is a free, fast and reliable API that plays nicely with JSONP requests (just think of JSONP as way of dealing with cross-domain Ajax requests without running into cross-domain scripting problems).

So What are Templates For?

The jQuery documentation describe templates thus:

"A template contains markup with binding expressions. The template is applied to data objects or arrays, and rendered into the HTML DOM."

If you are familiar with .NET then this might remind you of something - databinding. This is no surprise since both ideas come from Microsoft. But even if you don't know .NET, you will recognise that it is a common occurrence to take a stream of data and have to transform it into HTML. In the past you might have done this by concatenating strings to make HTML - a rather messy and error-prone affair, complicated by having to escape quotes and the like.

For instance, you may recognise ugly jQuery code like the following for creating an HTML unordered list by iterating over a collection of JSON objects:

var html = "";
$.each(data.items, function(key, val)
{
    html += "<li>" + val.title + "</li>";
});

It works, but is fiddly even for such a simple construct. When you start generating more complex HTML it soon becomes an unmaintainable mess. This is where templates come in - they solve this problem by separating your mark-up from the data. The idea behind templates is simple - you create your HTML as a template that contains 'variables' that 'bind' to the data you are consuming. It's probably easiest to see this in example of what a template looks like:

<script id="flickrTemplate" type="text/x-jquery-tmpl">
    <li>${title}</li>
</script>

As you can see, a template is really just a JavaScript script block (with a unique ID) that contains a fragment of HTML. The HTML, in turn, can contain placeholder variables like ${title} that are replaced by the actual values in the JSON data when the template is bound (i.e. the data and the template are combined to produce actual HTML).

A Full Example using Flickr API

So, putting this all together, I'll show you very quickly how you can use templates to bind to JSON returned from the Flickr API (this data will consist of the latest photographs from the Flickr timeline, which will contain titles, URLs and other metadata). Just bear in mind that jQuery Templates are still in beta and thus subject to change (which might break this example at some point in the future!).

OK. First, we will use jQuery Ajax to query the API with a JSONP request that returns the photographs and descriptions. The JSON we expect to get back looks like this:

jsonFlickrFeed({
        "title": "Uploads from everyone",
        "link": "http://www.flickr.com/photos/",
        "description": "",
        "modified": "2011-03-01T10:49:41Z",
        "generator": "http://www.flickr.com/",
        "items": [
       {
            "title": "20110226__7AT0068.jpg",
            "link": "http://www.flickr.com/photos/alf_69/5488083463/",
            "media": {"m":"http://farm6.static.flickr.com/5057/5488083463_2e67cda7bf_m.jpg"},
            "date_taken": "2011-02-26T20:56:37-08:00",
            "published": "2011-03-01T10:49:41Z",
            "author": "nobody@flickr.com (alf_69)",
            "author_id": "32114797@N03",
            "tags": "candeloris mimmas80thbirthdayparty"
       },
       {
            "title": "Rockies09_1931 Lake McDonald 3.03pm",
            "link": "http://www.flickr.com/photos/ausken/5488083475/",
            "media": {"m":"http://farm6.static.flickr.com/5013/5488083475_4f46fe6b1d_m.jpg"},
            "date_taken": "2009-07-31T15:03:47-08:00",
            "published": "2011-03-01T10:49:41Z",
            "author": "nobody@flickr.com (AusKen)",
            "author_id": "11814526@N05",
            "tags": ""
       },
..... etc

So, now we know what data to expect we can create a template to render each of the items returned. For this example we will create a simple list that contains the title, image and a link to the original image on Flickr. We create our template and place it in a script block as below:

<!-- This is the template -->
<script id="flickrTemplate" type="text/x-jquery-tmpl">
    <li>
        <h2>${title}</h2>
        <div><a href="${link}"><img src="${media.m}" alt="${title}" /></a></div>
    </li>
</script>

(Notice how we can reference nested JSON items by using a period - so that to get the media items with an "m" attribute we just use ${media.m}).

We then make our Ajax request to Flickr, specifying we are using a JSONP callback. This query should return the photo feed as JSON into the variable called data. We then take the items within the data object and pass them to the template as data.items (identifying the template by its ID of 'flickrTemplate'). This 'binds' the JSON to the template, so that each variable in the template is replaced by its equivalent value:

    $(document).ready(function()
    {
        $.ajax({
            url: "http://api.flickr.com/services/feeds/photos_public.gne",
            data: "format=json",
            jsonp: "jsoncallback",
            dataType: "jsonp",
            success: function(data)
            {
                $("#flickrTemplate").tmpl(data.items).appendTo("#placeholder");
            }
        });
    });

Finally we use the appendTo() function to add the rendered template items to an HTML UL identified uniquely as 'placeholder'. This effectively adds the HTML list to the DOM.

<ul id="placeholder"></ul>

Demonstration

You can view a demonstration of this in action here. Just view the page source to see the entire code. Hopefully you will agree templates are a neat and elegant way of binding data.

]]>
jQuery Background Image Rotator How I created a very simple jQuery plugin that can be used to rotate a sequence of images as the background image on any element. I also explain how this plug-in differs from a standard carousel in its intent and execution. The post contains a demonstration, sourcode and a download option. https://www.diplo.co.uk/blog/web-development/jquery-background-image-rotator/ http://www.diplo.co.uk/1643.aspx Wed, 23 Feb 2011 00:00:00 GMT jQuery.png

Recently I came across a situation where I wanted to be able to rotate the background images on a web-page in sequence, with a delay between each iteration.

Whilst there are plenty of jQuery carousels available, these usually involve generating some HTML first (such as a list of images) and then 'progressively enhancing' this into a carousel. I didn't want this, as the images I wanted to rotate didn't exist within the current DOM. I also found that existing carousel plugins tended to be packed with lots of functionality that I didn't need. So, I thought, this looks like an ideal opportunity to try and write my first real jQuery plugin - nice and simple, too.

The jQuery Solution

For my plug-in to work I needed it to take a few parameters:

  1. A list of image filenames
  2. A delay period between each transition
  3. The DOM element for it to work on

I also decided it would be a good idea to be able to pass in a base directory for where the images lived (rather than having to pass in the full path). It also seemed like a good idea to be able to pre-cache the next image to be loaded so that it would be ready to be displayed on the next rotation. I also decided there was no need for any fancy transitions or effects, since these aren't that simple to add when dealing with background images and would only complicate what is supposed to be some very simple code.

After a quick re-cap on how to write a basic plugin I ended up with the following code (which you can download or view a demo of below):

Code

        (function($)
{
    $.fn.extend({
        bgrotate: function(options)
        {
            var defaults = {
                delay: 1000,
                images: [],
                imagedir: "/images/"
            }

            var o = $.extend(defaults, options);
            var $obj = $(this);
            var cache = [];
            var i = 0;
            var preCache = true;

            return this.each(function()
            {
                setInterval(function() { setBack($obj, o.images, o.imagedir) }, o.delay);
            });

            function setBack(elem, backgrounds, imagedir)
            {
                elem.css("background-image", "url(" + imagedir + backgrounds[i] + ")");
                i++;
                if (i == backgrounds.length)
                {
                    i = 0;
                    preCache = false;
                }
                if (preCache)
                {
                    var cacheImage = document.createElement('img');
                    cacheImage.src = imagedir + backgrounds[i];
                    cache.push(cacheImage);
                }
            }
        }
    });
})(jQuery);
    

The plug-in then can be called simply like in this example below (where we are rotating a sequence of images on the page's BODY element with a delay of 4 seconds between each image):

$("body").bgrotate({ delay: 4000, imagedir: "/images/backgrounds/", images: ["249.jpg", "335.jpg", "419.gif", "87.jpg", "96830.png"] });

I will be the first to admit I am still new to writing jQuery, so please take this code at face value and don't expect it to be an example of best practice. It works for me and if you want to use it, feel free!

The basis of the code is that it loops through all the images in an array and sets the background image style on the calling element to that of the image. It then uses the native JavaScript function setInterval to create a delay before the next iteration. The only notable feature is that the next image in the sequence is pre-loaded by creating an IMG DOM element that forces the browser to load it. This is only done the first time each image is displayed.

If you have any suggestions on how to improve it or make it more efficient, then please feel free to add a comment. I can't, though, promise to offer any help or support for this - take it as it is, please :)

Download and Demo

You can view a simple demonstration of the plug-in in action here.

You can download the javascript directly here or grab the source code from GitHub via the link below.

]]>
Eating Animals My thoughts about Jonathan Safran Foer’s thought-provoking book, 'Eating Animals', in which he takes a philosophical look at the debate around the ethics of eating meat, fatherhood and what it means to be a vegetarian. https://www.diplo.co.uk/blog/society-politics/eating-animals/ http://www.diplo.co.uk/1606.aspx Tue, 02 Nov 2010 00:00:00 GMT Jonathan Safran Foer

'Eating Animals', Jonathan Safran Foer's first non-fiction book, was always going to be of particular interest to me since I was already a fan of his first two novels ('Everything Is Illuminated' and 'Extremely Loud and Incredibly Close') and have a personal interest in the subject matter. OK, perhaps more than just an interest, since I've been a vegetarian for over a decade now. Yet vegetarianism is something I came to more for convenience (I married a vegetarian) rather than out of high-minded conviction. I was curious: Would this book bolster my convictions or leave me questioning them?

The title of the book is interestingly ambiguous - humans are, after all, "animals that eat" and also animals that eat other animals. This philosophical approach to the subject is something that characterises Safran Foer's work - this is no one-sided diatribe or rant. Instead the author leads us through an intellectual journey that he himself undertook as he prepared for approaching fatherhood - what should he be feeding his child? This journey leads him to explore the myths and rituals that revolve around our eating habits and, ultimately, to find out in person what is really involved in the mass production of meat for human consumption. It is a journey that leads him from his grandmother's post-war kitchen table to the industrial-scale factory farms where most of our food now comes from. Along the way we meet many interesting characters and hear a variety of points-of-view. In the end, though, we cannot help but have our thoughts provoked and our ideals questioned.

Making The Connection

Perhaps at the heart of the book is the truism that most people avoid thinking about the connection between the animals they see around them and the meat that is on their plate. It is this disconnect that allows people to avoid the uncomfortable moral questions that are inevitably raised by killing animals for food. And once you have a disconnect, you have a gap - and this gap is most readily filled by the type of unscrupulous operations that most of us would rather not think too much about. But think about it we must if we are to face up to the truth, however unpleasant the realisation may be.

So what is so shocking about eating animals? After all, we are natural omnivores, and humans have eaten a meat-based diet for millennia. Surely it is a natural process and, therefore, unquestionably OK? But let's consider a few issues, both moral and practical, that the book throws up…

The Bullet Points

  • For most people, stopping eating animals would be the single biggest contribution they could make to reduce the sum of suffering and misery in this world.
  • For most people, stopping eating meat would be the single biggest thing they could do to protect the environment and help reduce global warming.
  • A mainly vegetarian-based diet is more sustainable for the future of the planet.

The Moral Issues

Why is it most of us would recoil at hurting a cat or a dog, let alone eating one, yet we can casually consume the flesh of a pig, an animal that is easily as intelligent, feels pain just as readily and is as deserving of respect? Isn't this hypocritical? Why as a people do we spend billions of dollars pampering one set of animals whilst consigning the other set to short lives of absolute misery and horror? We cannot both be animal lovers and animal eaters - it is a dichotomy that cannot be reasonably resolved.

The vast majority of the meat we consume in the West is produced on factory farms. In fact the word "farm" is really a misnomer since there are no traditional farmers manning the operations - we have replaced farmers with cheap, itinerant labour forces culled from the poorest sections of society. Instead of fields we have industrial-scale hangars filled with row after row of distressed animals held in cramped and unhygienic conditions. Are the battery chickens, confined to a "living" space the size of an A4 sheet of paper, where they cannot stretch their wings, deprived of natural light and so crippled and in pain they can hardly stand, really something we want to endorse for the price of cheap chicken? These birds are so disease-ridden they have to be pumped full of antibiotics so they can even survive the few weeks to slaughter. Yet each year billions of animals are treated like this. Is it moral?

It is no exaggeration to say the vast majority of animals we eat for food spend their lives in misery, pain and distress. Intensive farming is really nothing more than a euphemism for collective torture, so horrific are the circumstances of their short lives in captivity. The animals that do survive are then killed in often cruel and disturbing circumstances and their remains are then "processed" and packaged for us to eat. And by eating them, we are condoning this process, even if we are not willing to make the conscious connection. But think about it - if there is one single thing you can do in your life to reduce the sum of suffering in the world it is to stop buying factory-farmed "products". Is the suffering and ultimate death of a living creature really worth the fleeting pleasure of consumption?

Most people, perhaps understandably, would rather not think too much about the relation between the food on their plate and the life, suffering and death of a sentient being. But can we really absolve ourselves of all responsibility by hiding behind a thin veneer of feigned ignorance?

The Environmental Issues

Eating animals not only contributes to suffering; it is also one of the most significant contributors to environmental damage there is. According to the United Nations Food and Agriculture Organization the livestock sector generates more greenhouse gas emissions as measured in CO2 equivalent - 18 percent - than the entire transport sector (cars, aeroplanes) as well as being a major source of land and water degradation. But it's not only global warming that a meat-based diet contributes toward. According to the FAO,

"The livestock business is among the most damaging sectors to the earth's increasingly scarce water resources, contributing among other things to water pollution, eutrophication and the degeneration of coral reefs."

On top of that meat farming is inefficient when it comes to feeding people. According to the American Journal of Clinical Nutrition, "a meat-based diet requires more energy, land, and water resources than a lacto-ovo-vegetarian diet". And the Water Education Foundation notes that it takes 2,464 gallons of water to produce one pound of beef in California. According to the think tank Chatham House,

"The global livestock industry produces more greenhouse gas emissions than all cars, planes, trains and ships combined".

On top of this, livestock in the U.S. alone produce 2.7 trillion pounds of manure each year. (That's about ten times more waste than was produced by all the American people.) But unlike human waste this effluent is not treated; it is simply spread back onto the land, from where it runs off into rivers and streams and has a huge negative environmental impact.

And it's not just meat but also fish production that is unsustainable. According to the World Wildlife Fund,

"Nearly half of the world's recorded fish catch is unused, wasted or not accounted for, according to estimates in a new scientific paper co-authored by WWF, the global conservation organization." 

Greenpeace note that,

"Recent estimates show that for every four pounds of fish caught worldwide, fishermen throw away more than a pound (bycatch) of other marine animals."

In shrimp trawls the ratio is far worse: for every pound of shrimp, four or more pounds of unwanted creatures die. A staggering 100 million sharks and rays are caught and discarded each year. An estimated 300,000 cetaceans (whales, dolphins and porpoises) also die as "bycatch" each year because they are unable to escape when caught in nets.

Conclusion

"Eating Animals" is truly food-for-thought. It explores complex issues in an original way and manages to entertain as much as it does shock. Interesting characters and points-of-view permeate the book, such as the vegetarian wife of a cattle rancher or the turkey farmer trying desperately to swim against the tide of industrial poultry production.

Ultimately, though, it is a book that is impossible to read without seriously considering our personal relationship to our food and its wider impact on the environment. If I were to level a criticism against it, it is that the reference material is very heavily weighted toward a US audience (the UK edition has a short preface, but otherwise the material is the same). However, it still makes for fascinating reading and manages to be both touching and horrifying in equal measure.

]]>
Reason My homage to one of the finest pieces of audio software ever created, Propellerhead's Reason Virtual Studio https://www.diplo.co.uk/blog/music-film-tv/reason/ http://www.diplo.co.uk/1836.aspx Wed, 11 Aug 2010 00:00:00 GMT Reason.jpg

Anyone who has made electronic music will know that analogue synths are revered.

The warmth of their tones, the richness of their harmonics and the sheer fatness of their timbre makes them special when compared with their more modern digital counterparts. You'll also know they can be very expensive. Classic models such as the Roland TB-303 bass machine, famous for its distinctive 'acid' bass line sound, will cost you a lot of cash (if you can even get hold of one).

Now imagine how much a whole studio, complete with a virtually unlimited number of synths, samplers, mixing desks and effects, would set you back. Not to mention the amount of space it would require to house them all. On top of that you then have the added hassle of having to wire them together, manually program them via clumsy interfaces and keep them in working order. Because of this, not many people ever get the chance to experience these classic instruments; for most they remain a far-off and unobtainable dream.

Propellerheads

This is where a small Swedish software house called Propellerhead Software stepped in and changed everything. In 1997 they released a small, fairly low-profile software application called ReBirth for the PC and Apple Mac that was destined to become a cult classic. What this did was accurately model, in software, the sound of the classic Roland TR-808 and TR-909 drum machines as well as two TB-303 bass machines. To quote the Props, ReBirth offered "All the quirks and subtle qualities of analogue, combined with the convenience of modern computers (a minimum of cables, integration with your sequencer software, complete front panel automation, real-time audio streaming and much more)."

However, ReBirth was not without its flaws. In some ways it too accurately modelled the old synths, making it awkward to program and difficult to integrate into other sequencers (though ReWire made this easier). Despite this it was still revolutionary, showing that the power of modern home computers could now accurately reproduce the subtleties of analogue synthesis. However, ReBirth wasn't the product Propellerheads wanted to make; that was yet to come... NB. ReBirth can now be downloaded free from Propellerheads' ReBirth Museum. Check out a piece of history now!

I've Found a Reason

And, in early 2001, Propellerheads released Reason 1. This was everything that they had hoped ReBirth would be, and then some. In effect it was a whole home studio, comprising a virtual mixing desk, an analogue synth called the Subtractor (which, naturally, utilised subtractive synthesis), a sampler called the NN-19 (amusingly named after Paul Hardcastle's hit '19') plus a versatile sample-based drum machine called, naturally, Redrum. Throw in effects such as reverb, delay, distortion, chorus and phasers, plus a full-blown MIDI sequencer and a REX loop player, and you had everything you needed to make electronic music, but at a fraction of the price of hardware. Sure, other companies had released 'soft synths' before, but what made Reason special was the way it was all integrated together - beautiful design, an amazingly intuitive interface and a simple-yet-powerful sequencer all combined to make it such a joy to use.

Propellerheads, though, were not content to rest on their laurels. In 2002 they released Reason 2, which added new features whilst still being backward compatible with songs made in the earlier version. The new version introduced a powerful new synth module called Malström into the rack. This unit used a new type of synthesis - 'graintable' - that was a combination of granular synthesis and wavetable synthesis. Also new to Reason 2 was a more advanced sampler, called the NN-XT, which could handle multi-sampling and velocity-switched notes in its stride. For good measure the Props also threw in a brand new soundbank called the Orkester Refill, which featured top-notch 24-bit samples of orchestral instruments.

Then, in 2003, Propellerheads enhanced Reason's legacy by announcing Reason 2.5 - a free upgrade for Reason 2 owners. This upgrade added three new advanced effects units - the RV7000 digital reverb, the BV512 Vocoder and the awesome Scream 4 Sound Destruction Unit. For good measure they also threw in three more units that helped make routing sounds much easier. Can you say fairer than that? :)

Well, actually, yes. For in 2005 Reason 3 was released, adding extensive mastering capabilities to Reason so that you could export tracks that were CD quality straight from the software. Not only that but a new device, The Combinator, was added to Reason's rack, doing exactly what it says on the tin. This genius idea allows you to assemble many devices and effects together and save them as a new device just as easily as you would save a patch. Amazing!

The latest version of the software (at the time of writing), Reason 4, was released in 2007. It included a new modular synth called (in typical Swedish style) Thor; an arpeggiator; ReGroove, a detimer/dequantizer; and a complete overhaul of Reason's sequencer that includes tempo and meter changes as well as support for complex meters.

My Reason Tracks

By now you might just have guessed I'm a big fan of Reason :) So it won't come as any shock to learn that I've made lots of tracks using it. Yep, this is the part of the site where I pimp my music made in Reason. Obviously this tends to be mostly electronic and instrumental, but you'll perhaps be surprised by the wide range of styles that you can achieve using this versatile software. My tracks range from ambient electronica and melodic techno right through to modern jazz and trip-hop.

However, if you don't have Reason then there's no need to worry, as you can also find mastered versions of all my songs in the MP3 section of this site. There truly is no escaping!

]]>
The Tarnished Generation We seem to labour under a delusion in this country that our national football team, England, are good at kicking a leather sphere around. This is despite not winning, or even featuring prominently in, a major competition for nearly 45 years. https://www.diplo.co.uk/blog/society-politics/the-tarnished-generation/ http://www.diplo.co.uk/1621.aspx Mon, 28 Jun 2010 00:00:00 GMT England 2010 World Cup Team

We seem to labour under the misapprehension in this country that our national football team, England, are good at kicking the leather sphere around.

This belief cannot be accounted for by reason or sanity, but I'd guess it somehow stems from a few sources: a) We "invented the game" b) we "love the game" and c) the Premiership is one of the best leagues in the world. Whilst these may be true, they are easily rebutted by pointing out a) We also "invented" tennis and look how hopeless we are at that b) Being passionate about something is no substitute for talent and c) There are very few English players in the top flight of the Premiership.

Despite this we somehow entered this World Cup with some kind of collective delusion, fuelled by the media and so-called "experts", that we stood a good chance of winning this year and were one of the favourites. This is despite the fact that we've not won a major tournament (or even reached the finals) for nearly 45 years and dismally failed only two years previously to qualify for the European Championships with much of the same squad. Yet somehow the sheen from the so-called "Golden Generation" cannot be tarnished by reality until, of course, it all comes crashing down to Earth and the recriminations begin (fuelled by the very same tabloids and media "experts" who previously built them up).

The Golden Generation. Really?

Why do we fall for this? Is it the media? Or the hyperbole of mass-marketing campaigns run by marketing agencies on behalf of multi-nationals wanting some of the "gold dust" to rub off on them? Or is it just our desire to have heroes and feel that England can be great again? Regardless, the vein of gold that runs through the current team seems much more like pyrite than precious metal. The following players were some of the ones actually picked by our £6 million a year manager, Fabio Capello, as part of the first team. Let's take a closer look...

Robert Green (GK) - Plays for West Ham, a club that only narrowly avoided relegation this year, and has no experience of playing in major tournaments for either club or country and barely a handful of caps to his name. Yet he started as our No. 1 keeper and his only significant contribution was to gift the USA a goal with the type of schoolboy error that guarantees a prime place in the inevitable "James Corden's Soccer Howlers" that will be gracing unfortunate dads' Xmas stockings next year.

Jamie Carragher (DF) - Even the most ardent Liverpool fan would concede Carragher is more leaden than golden. He evidently realised this himself by retiring from international football. Yet Capello, like some mad scientist, resurrected his dying career and launched his reanimated corpse onto the international stage, with predictably frightening results.

Glen Johnson (DF) - A promising enough young defender (if you discount tackling and tracking back as being a prerequisite of the job) but still lacking in top-flight experience.

Matthew Upson (DF) - Another West Ham player (remember, they only narrowly avoided relegation this season whilst conceding 66 goals) with no real major competition experience. But, hey, he looks the part.

John Terry (DF) - Now Terry is a born leader - or so he seems to believe. He proves this by shagging his team mates' wives and holding press conferences to undermine his manager. If only his undoubted passion would translate into staying in position he might not have embarrassed himself so much against Germany. But like a figure in a Greek tragedy his hubris permeates his every action.

Ledley King (DF) - Capello once said he would only select players based on form and fitness. Yet King was clearly not fit and his recent form has been strewn with injuries making him more liability than "reliability". Still, he needed a nice holiday and South Africa is beautiful this time of year.

Frank Lampard (MF) - Lampard can truly claim to have played at a high level for both club and country (and, no, I'm not implying he was a customer of John Terry's dad...). And yet he continually fails to deliver in an England shirt; can't play alongside Gerrard; didn't score a single (allowed!) goal in either this or the last world cup; misses vital penalties and was booed off in Euro 2006. Apart from that, truly golden.

Shaun Wright-Phillips (MF) - Wright-Phillips has the pace England so often lacks on the wing, but unfortunately neither the brain nor the vision needed to do anything with it. After failing at Chelsea he went back to Man City, where he's most often seen sitting on the subs bench.

Steven Gerrard (MF) - He may be thick as two short planks and a genuine Scouse thug, but he does have the virtue of both experience and talent when he plays in central midfield. Thus he was played on the left, a position he failed to stay in, and so aimlessly drifted through most of the campaign. A career owning gauche restaurants catering to brain-dead WAGs awaits.

Joe Cole (MF) - Cole is an exceptional player. At least, when he's on the bench he suddenly becomes so. If only he'd remained there then people might have been able to maintain the delusion he is still a talent and not an unfit Chelsea reject with no match form this season.

Emile Heskey (FW) - Our first-choice striker managed to score a sum total of 3 Premiership goals this year. Even his most vociferous advocates would admit he can't score - which most people would concede is slightly concerning in a striker. The only thing he managed to kick with any force this World Cup was Rio Ferdinand - ruling out one of our best players for the entire competition.

Wayne Rooney (FW) - Even the most ardent Man U. haters would admit Rooney is a genuine talent. Yet he continually fails to reproduce his club form for England, and has clearly not recovered fully from the ankle knock he received at the end of the last Premiership season. Yet the idea of not playing him didn't seem to cross Capello's mind. And they say one sign of madness is trying the same thing over and over again with the hope of a different outcome…

Peter Crouch (FW) - Crouch actually has by far the best goal scoring ratio of any England forward of recent years. Yet he looks like a gangly idiot, so you can see why he would offend Capello's innate sense of Italian style. Which, presumably, is why he barely featured in the entire campaign?

The simple fact is that there aren't that many good English players - and the ones that are good are good because they play as part of a good squad in a position they have made their own. Football is a team sport, and a good team is far more than the sum of the individuals, in the same way that a good recipe is more than throwing a few of your favourite foods in a pot and stirring.

So what can be done? Well, pay me £6 million and I'll let you know. But let's forget all that - there's always the European Championship in two years. And this time I have a really good feeling…

]]>
Audio Mastering Audio mastering is one of the essential arts you need to learn to make your tracks sound good on CD, vinyl or even as MP3. This is my short guide to getting the best out of your tracks when mastering them. https://www.diplo.co.uk/blog/music-film-tv/audio-mastering/ http://www.diplo.co.uk/1834.aspx Tue, 22 Jun 2010 00:00:00 GMT mixing_desk.jpg

Audio mastering is one of the essential arts you need to learn to make your tracks sound good on CD, vinyl or even as MP3.

People often confuse mastering with mixing, but the two are different in many ways: mixing is balancing the levels between instruments and getting the individual instruments to sound good, whereas mastering is the final step where you polish the overall sound and maximise volume. Often the reason that commercial CDs sound so much louder than your own mixes is that they use compression and clever limiting techniques to maximise the levels, boosting the overall sound. However, with the right tools and some patience you can do this in a home studio.

If you're looking for a far more detailed explanation of what mastering is and why it is needed then read these excellent articles at Digido.Com.

Tools You Will Need

To master a track you will ideally need a decent sound editor (preferably one that will work at high bit and sample rates and will accept DirectX and/or VST plugins), such as SoundForge, WaveLab or Adobe Audition (formerly CoolEdit Pro). There are also stand-alone packages dedicated just to mastering, such as T-Racks. However, if you haven't got one of the 'big three' then check out HitSquad: Shareware Music Machine for a list of free and shareware audio editors. Ideally you will also have some high-quality plugins for your audio editor to master with (the ones that come with audio editors are sometimes lacking) - the best ones I know of are made by Waves. At the very least you will need to be able to normalise your audio, EQ it and compress it. Ideally you should also have access to plugins that will remove DC offset, plus a multi-band compressor, parametric EQ, stereo imager and a limiter.

The First Steps

The first thing you need to do before mastering audio is make sure you are happy with your mix! It's very difficult, if not impossible, to fix errors in your mix when you are mastering, so make sure your mixdown is as good as it can be before exporting it to an audio file for mastering. Preferably listen to it on good monitor speakers (after resting your ears) at different volumes to make sure that everything sounds balanced and that your bass frequencies are prominent but not too 'boomy'. It's also a good idea to listen to your mix on cheapo PC speakers too, so you know how it will sound on more basic setups - again make sure that everything still seems balanced on crappy speakers.

Once you are happy with the mix, the next step is to export it as an audio file. Before doing this, though, make sure your mix isn't clipping - when you are working in the digital realm you NEVER want your maximum level to exceed 0dB (unless you are Iggy Pop). Don't worry if your mix sounds a little quiet, that can be solved in the mastering stage - just don't be tempted to let it clip.

OK, now you need to actually export your track as audio (normally this will be as a .wav file on PC or as an .aiff file on Mac). Often you will get a choice of what resolution to export your audio at, i.e. the sample rate and bit depth. Generally it's best to choose the highest bit depth and sample rate your audio software will support - 24-bit/96kHz is usually the best quality, if you are given the choice. Do this even if your audio card doesn't support playback or recording at these sample rates (this might sound counter-intuitive, but trust me, it will still work!). It's probably beyond the scope of this short piece to explain why you should always master at high resolutions, but basically it's down to reducing errors caused by floating point precision - the higher the precision, the less chance of errors creeping in. However, your final mix should still be "CD Quality" 16-bit/44.1kHz.

The Mastering

Next you want to EQ your audio file - this will be a matter of personal taste, but now is the time you can boost the bass or add a little more 'air' to the mix. It's also a good idea to roll off any inaudible, low-frequency bass sounds - usually a high-pass filter set to roll off frequencies below 60Hz will do. This will help clear up your bottom end and avoid things sounding muddy, especially on systems with subwoofers. If you have any plugins such as Waves MaxxBass, now would be a good time to use them.

Next you should look at compressing your mix - this reduces the peaks and allows you to increase the overall amplitude, or loudness, of your mix. If possible use a multi-band compressor, which allows you to apply different amounts of compression to different frequency bands. If you're not sure about compression then read more about its uses here and here. Just remember to avoid clipping, as digital clipping is nasty! If you have access to one, a limiter is very useful (a limiter is basically a 'brick wall' compressor that stops a signal ever going beyond a defined threshold - typically set this threshold to -0.3dB for CD mastering). A great limiter is the Waves L1 or L2 Ultramaximizer.

Remember you should put your limiter last in any audio chain. Also be aware that once you have raised your overall level to very near 0dB you should NOT do any more processing on the audio, or else you risk introducing clipping (believe it or not, even subtracting EQ can actually increase levels).
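If you happen to be experimenting in code rather than in a wave editor, the same chain order can be sketched with the browser's standard Web Audio API. This is purely an illustration of the order of processing described above (high-pass EQ, then compression, then a limiter-style stage last); the node settings are example values, not recommendations:

var ctx = new AudioContext();

// High-pass EQ to roll off inaudible low bass (see above)
var highpass = ctx.createBiquadFilter();
highpass.type = "highpass";
highpass.frequency.value = 60;

// Gentle overall compression (example settings only)
var comp = ctx.createDynamicsCompressor();
comp.threshold.value = -18;
comp.ratio.value = 3;

// Limiter-style stage: high ratio, hard knee, ceiling just under 0dB
var limiter = ctx.createDynamicsCompressor();
limiter.threshold.value = -0.3;
limiter.ratio.value = 20;
limiter.knee.value = 0;

// source -> EQ -> compressor -> limiter (always last) -> output
// e.g. someSourceNode.connect(highpass);
highpass.connect(comp);
comp.connect(limiter);
limiter.connect(ctx.destination);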

Beside EQ and compression there are other tools you can use too, depending on the effect you wish to achieve. Sometimes adding a very small amount of reverb, especially one that defines a real 3 dimensional space, can help bring your mix together. This will have the effect of situating your track in a virtual sound-scape. For instance, a touch of room reverb might be useful to bring a rock mix together, whereas for more ambient, experimental works a larger reverb could help expand the sound.

Just don't overdo it, especially if you've already used a lot of reverb when mixing, otherwise your mix will sound mushy. If you can limit the reverb to just the higher bands, this is preferable, as you don't want your bass and kick drums to sound 'boomy'. Another useful tool is a stereo-imaging tool that can help give your mix a wider stereo field. This is useful for mixes that sound a bit too mono, or when you are making vast electronic music soundscapes. Again, easy does it, as too much can make your mix sound strange and the bass sound weak.

Stop Dithering!

One final thing you'll have to take into account if you're mastering digital music is dithering and re-sampling. No, I'm not referring to the usual procrastination and uncertainty you have when mixing, but rather the process of getting your final master into the right format for burning to CD. As you probably know, CD Audio uses a sample rate of 44.1kHz and is 16 bits deep - so what do you do to make sure your pristine 24-bit/96kHz master sounds good at this lower resolution? Well, this is where dither and re-sampling come in. Basically this is the process of reducing the resolution of the audio while smoothing out the quantisation errors that result - it's rather similar to when you shrink a photograph in Photoshop and use bi-cubic resampling to remove the jaggy edges you'd otherwise get. Good dithering plugins introduce a small amount of noise to help smooth the audio 'jaggies' when the bit depth is reduced (this is often referred to as noise shaping).

Now, adding noise to your master doesn't sound like a good thing, but believe me it is virtually inaudible and really does help maintain the quality when you dither down. It's something you need to do if you're mastering for CD, but make sure you do it as the final step, when you are fully happy with your mastered mix.
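To make the idea a little more concrete, here's a tiny sketch (plain JavaScript, and purely illustrative - real dithering plugins are far more sophisticated and add noise shaping) of what reducing a floating-point sample to 16-bit with a touch of triangular (TPDF) dither looks like:

// Illustration only: convert a floating point sample (-1.0 to 1.0) to a
// 16-bit integer, adding roughly one least-significant-bit of triangular
// dither noise before rounding, so the quantisation error isn't correlated
// with the signal.
function ditherTo16Bit(sample) {
    var scaled = sample * 32767;
    // summing two uniform random values gives a triangular (TPDF) distribution
    var dither = (Math.random() - 0.5) + (Math.random() - 0.5);
    var quantised = Math.round(scaled + dither);
    // clamp in case the dither pushes the value just out of range
    return Math.max(-32768, Math.min(32767, quantised));
}

// e.g. ditherTo16Bit(0.25) returns an integer close to 8192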

]]>
Fair Votes Now Playing around with the BBC Election Seat Calculator for the 2010 UK general election further convinced me what an unfair and undemocratic system "first past the post" really is. This is why we need electoral reform to provide "fair votes" for everyone https://www.diplo.co.uk/blog/society-politics/fair-votes-now/ http://www.diplo.co.uk/1692.aspx Sat, 08 May 2010 00:00:00 GMT Parliament_at_Sunset.JPG

Playing around with the BBC Election Seat Calculator for the 2010 UK general election further convinced me what an unfair and undemocratic system "first past the post" really is. In simple terms, if each of the 3 main parties got 25% of the vote, and the rest got the remaining 25%, then we'd still end up with a result something like this:

Election Prediction UK 2010

This simple visual representation shows exactly how unrepresentative this 19th Century system really is and why electoral reform is so necessary. If people feel their vote does not count then this seriously undermines the democratic process and alienates and disenfranchises a large sector of the population. This is why we need Fair Votes Now.

]]>