Apr 12 2015

How to Handle Cross-Site Scripting in an ASP.NET MVC Application?

Category: MVC | Asp.net. By CelinSmith @ 10:22
Oops… this is not one of my posts. I invited a friend of mine to write about Cross-Site Scripting attacks. Enjoy!

One concern that almost all website owners share is maintaining the security of their website. You may adopt the most commonly practiced and recommended security techniques to keep your site secure, but you might overlook, or simply not be aware of, the security problems that occur because you trust your users too much.

While most of your website users will perform only the required actions, a malicious user will make every possible effort to gain access to even the most sensitive areas of your site. In this post, we'll discuss how to handle Cross-Site Scripting (one of the most common security exploits) in ASP.NET web apps. But before that, let's look at Cross-Site Scripting in a little more detail.

Cross Site Scripting – An Overview

Cross Site Scripting (also referred to as XSS) is a kind of vulnerability that occurs when an attacker injects malicious code (typically script) into a web page or the database. The insertion of code takes place without the user's knowledge. XSS happens whenever a web application displays user input, or input provided by any outside resource, without validating it properly. XSS attacks enable an attacker to insert malicious JavaScript, HTML and other client-side script into a dynamic page that is not validated.

Here are a few of the most common types of cross-site scripting attack you should be aware of:

  • Stored XSS (also known as Persistent or Type I): This type of XSS attack occurs when user input remains stored somewhere - in a database, a comment field, a forum post, etc. - and is later displayed to other users.
  • Reflected XSS (also known as Non-Persistent or Type II): It takes place when user input is returned immediately by the server - for example in an error message that echoes back data provided as part of the request - without making that data safe for browser rendering.
  • DOM based XSS (or Type-0): Both stored and reflected types of XSS attack fall in this category when the attack is carried out by modifying DOM elements and exploiting DOM data on the client side.

How to Handle Cross Site Scripting?

There are two different ways in which you can handle XSS attacks:

1. Check for any XSS vulnerabilities.

One good way to handle cross-site scripting is to perform a security test on your web applications - in simple words, to check for any cross-site scripting vulnerabilities. This is where a Web Vulnerability Scanner comes in handy. It scans an entire site and performs automatic checks to find cross-site scripting vulnerabilities. Additionally, it identifies all the URLs/scripts on your site that are vulnerable to XSS attacks, making it easy for you to fix the vulnerabilities.

One popular web vulnerability scanner is “Acunetix Web Vulnerability Scanner”, which crawls a site looking for Cross-site Scripting, SQL injection, and other types of vulnerabilities.

2. Prevent XSS attacks from occurring.

Let's consider an example of preventing XSS attacks in MVC applications. If a malicious user could inject HTML markup along with a message into an MVC application, the injected script would display, for example, an annoying alert in every visitor's browser.


MVC will reject a user's request when HTML markup is added in the message box: by default, request validation blocks any request containing HTML markup, in order to prevent XSS attacks.


Now, if you want to give users the ability to submit HTML markup along with a message, use one of the approaches described below:

1st Way (Model Level):

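The code that the original screenshot showed here is not reproduced; the model-level approach in MVC is presumably the [AllowHtml] attribute, which disables request validation for a single property. The sketch below assumes a view model class name, with MessageText taken from later in the post:

using System.Web.Mvc;

public class MessageViewModel
{
    public int Id { get; set; }

    // Request validation is skipped only for this property.
    [AllowHtml]
    public string MessageText { get; set; }
}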

2nd Way (Controller Level):

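Again the original code screenshot is omitted; the controller-level approach is presumably [ValidateInput(false)], which turns request validation off for the whole action. Controller and action names below are assumptions:

public class MessageController : Controller
{
    // Request validation is disabled for every value posted to this action.
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Create(MessageViewModel message)
    {
        // save the message here
        return RedirectToAction("Index");
    }
}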

Using either of the aforementioned approaches skips request validation for that input, but you may still encounter a problem: by default the view will HTML-encode the markup when it is displayed.


You can fix the problem using @Html.Raw(item.MessageText).

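In the view this amounts to replacing the default, encoding @ expression with Html.Raw; a minimal sketch (the loop and the Messages collection are assumptions) is:

@* Renders the stored markup without HTML-encoding it *@
@foreach (var item in Model.Messages)
{
    <p>@Html.Raw(item.MessageText)</p>
}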

But once you allow a user to write HTML markup in MessageText, malicious script may end up inside the message text.


Now let us talk about how you can prevent cross-site scripting attacks in an MVC application using NuGet packages. NuGet is a package manager that gives access to tons of packages (or libraries), some of which implement a safe approach to HTML encoding and sanitization. In our case, we'll be using the AntiXSS library.

Open NuGet, and then look for the "AntiXSS" package. Once you've found the package, simply install it.


After the installation, you'll see two new libraries in your project's References folder: AntiXssLibrary and HtmlSanitizationLibrary.


All you need to do is make a small change in the controller to prevent XSS.

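The controller change shown in the missing screenshot presumably uses the sanitizer that ships with the AntiXSS package; a sketch along those lines (action and model names are assumptions) runs the posted text through Sanitizer.GetSafeHtmlFragment, which strips script and other dangerous markup while keeping harmless HTML:

using Microsoft.Security.Application;

public class MessageController : Controller
{
    [HttpPost]
    [ValidateInput(false)]
    public ActionResult Create(MessageViewModel message)
    {
        // Remove scripts and other dangerous markup; keep the harmless HTML.
        message.MessageText = Sanitizer.GetSafeHtmlFragment(message.MessageText);

        // save the sanitized message here
        return RedirectToAction("Index");
    }
}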

That's it! Now, if a user tries to insert a malicious script along with the text message, it will automatically be removed.


Conclusion

Hopefully this post will serve as a useful guide for developers looking for ways to handle Cross-Site Scripting attacks in their ASP.NET web applications.

About the Author:

Celin Smith works as a professional asp.net developer with Xicom Technologies Ltd, a leading ASP.Net development company offering a range of software solutions like IT outsourcing services, custom software development and web application development services. She is an avid writer and loves writing interesting and informative stuff about web and mobile applications. You can reach her via her Facebook or @celinsmith1.


Jul 11 2014

Integrate NodeJS tools in Visual Studio/TFS

Category: JavaScript | Asp.net. By Vincenzo @ 03:11

Oops…this is not one of my posts…I invited a friend of mine, Vincenzo, to write about using Node.js tools with Visual Studio. Enjoy!

The presence of NodeJS on the TFS Online hosted build controller, together with the recent installation of a git client on the platform (suggested by a tweet of mine: https://twitter.com/D3DVincent/status/480968128227074049) and on AppHarbor too (http://support.appharbor.com/discussions/problems/60449-git-binary-in-command-line), opens up interesting new possibilities for integrating Visual Studio and TFS Online with the awesome NodeJS tools.

In this article I will show you how to set up your Visual Studio and TFS online environment to take advantage of all this.

Note: this is not a NodeJS/tools tutorial. I will assume you have some knowledge of nodejs, npm, bower, tsd, gulp and the packages you're going to install. I will only show the integration with Visual Studio.

1) First, make sure you have nodejs installed (www.nodejs.org), together with npm, and that their directories are added to the PATH environment variable.

2) Install a git client (required by bower and tsd) and add its executables to the PATH environment variable (the current installation package does not do this for you automatically). The most used one on Windows is msysgit (http://msysgit.github.io/); you can find it in the Web Platform Installer too.


Let's make sure that all commands are now available on the command line, by typing their names in the console.


Thanks to this setup, all the tools we need to perform our nodejs tasks are now available from the Visual Studio console.

Before going through the following steps, I will spend a few words on how npm works.
Unlike nuget, npm has the concept of local and global packages. Every package installed through this tool can be installed as global (-g flag on the command), which makes the package available in every directory of your dev machine.

The other option behaves like nuget packages: it creates a new directory in your project called node_modules, in which all packages are stored. You will also find a useful .bin directory that points to all of them.

Usually, having packages installed globally is very convenient. In addition, nodejs is smart enough to redirect your commands to the local node_modules directory, if one exists. This prevents the usual compatibility issues (more on this here: http://blog.nodejs.org/2011/03/23/npm-1-0-global-vs-local-installation)

Let's now open Visual Studio and work with the newly installed tools. We will start with a brand new Asp Net empty web application project.
Unfortunately, the Package Manager Console seems unable to handle interactive commands, so we will have to switch to the command line from time to time.
Go to the package manager console and type this command

npm init

The command will ask us a few questions about our new web application and write a package.json file for us.


This json file will track all installed packages and will be used as a source for restoring packages during a build.

Visual Studio knows nothing about package.json, since we're using an external tool. This file must be included in our source control. I prefer to keep it in the solution too, but this is up to you. Let's go to Solution Explorer, show all files, and include the package.json file in the solution.


Now we have npm installed with a repository of over 80 thousand packages. You can browse all of them here: https://www.npmjs.org/.

For a modern web application, I always install these packages from the npm repository:

1) Bower (package manager for JavaScript libraries)

2) Tsd (TypeScript definitions for JavaScript libraries)

3) GulpJS (task and automation tools)

npm install bower --save

npm install tsd --save

npm install gulp --save

The --save flag instructs npm to write this installation to the package.json file. If everything was done correctly, our file should look like this:

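The screenshot of the resulting file is omitted. Based on the packages installed above, package.json should look roughly like the following sketch (the project name and the exact version ranges are assumptions, not taken from the original):

{
  "name": "nodejstools",
  "version": "0.0.0",
  "dependencies": {
    "bower": "^1.3.0",
    "gulp": "^3.8.0",
    "tsd": "^0.5.0"
  }
}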

Thanks to these entries, we will be able to perform a package restore on TFS and on our dev machines too.

Since I do not have the packages installed globally, I have to point to the .bin directory of the node_modules folder.

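(The screenshot that appeared here is omitted. The command run at this step, using the locally installed package, is presumably something like the following, executed from the project folder.)

node_modules\.bin\bower init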

Let's init tsd too:

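(Screenshot omitted. Again, this presumably uses the local tsd executable, something like:)

node_modules\.bin\tsd init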

And do not forget to include the json files in the solution.


We're done with interactive commands. We can close the command prompt and work comfortably from Visual Studio and its Package Manager Console.

The next steps are just to install packages and tools: I will take jquery and angularjs from bower, their typings from tsd, and set up a simple minify task with gulp:

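The screenshot with these commands is omitted. The client libraries can be installed with the local tools, for example with node_modules\.bin\bower install jquery angular --save (and the analogous tsd command for the typings). A minimal gulpfile.js with configuration-named tasks might look like the sketch below; note that it assumes the gulp-uglify plug-in (installed with npm install gulp-uglify --save), which is not mentioned in the original post, and the Scripts/build paths are assumptions too:

// gulpfile.js - minimal sketch. The 'Release' and 'Debug' task names match the
// $(Configuration) value passed by the AfterBuild target shown later in the post.
var gulp = require('gulp');
var uglify = require('gulp-uglify'); // assumed extra package: npm install gulp-uglify --save

gulp.task('Release', function () {
    // minify the application scripts into the build folder
    return gulp.src('Scripts/app/**/*.js')
        .pipe(uglify())
        .pipe(gulp.dest('build/Scripts/app'));
});

gulp.task('Debug', function () {
    // nothing to do in Debug in this minimal sketch
});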

The installed components are, obviously, external to Visual Studio. We have to include them into the project:


In my example, I included only some of the generated items in the project. Anyway, the strategy is up to you. If you select these files, they will be published to Azure or to your WebDeploy-enabled server. If not, you will have to extend the MsBuild tasks to include these files.

Now we have a more or less complete development solution. Install other packages and code your solution.

A question from .NET developers: why not use the nuget package manager to install the DefinitelyTyped TypeScript definitions? Well, actually nuget is not able to restore content folders from its packages, and the developers are not going to cover this use case (you can find more about this issue and why it won't be fixed here: http://nuget.codeplex.com/workitem/2094). Because of this, we would be forced to include the typings files in our check-ins too. Even if a lot of people suggest that this is the right thing to do, I do not like this kind of procedure. By switching to the client-side tools, I can remove and restore these kinds of packages as well and keep under source control only MY app files.

That done, it is time for automation.

First, let's configure exclusions for TFS source control. We have to add a new .tfignore file in the root of our solution with the following lines:

\NodeJsTools\bower_components
\NodeJsTools\typings

This will exclude these folders from source control, even if they are included in the solution file. We do not need to add node_modules since it's completely external to the project.

This means that our hosted source code will miss all nodejs modules and all bower/tsd reference files. As it stands, our application would never work (and wouldn't pass the build, since the TypeScript target installed on the TFS controller would fail: it won't find the definitions for angular and jquery).

Here is where MsBuild can help us. We will inject our dependency tasks directly into the BeforeBuild target.
The TFS controller has nodejs installed (and npm too), together with a git client (needed by bower).

Let's open our project file (.csproj) and paste this:

  <Target Name="BeforeBuild">
    <Exec Command="npm install" />
    <Exec Command="node_modules\.bin\tsd reinstall" />
    <Exec Command="node_modules\.bin\bower install --config.interactive=false" />
  </Target>

This target, which runs before the build, will install all modules listed in package.json, then all definitions in tsd.json and all the components in bower.json. In this way, all references will be fine and the build will pass. Note that I'm using local packages in node_modules since I cannot assume that the packages will be installed globally on the TFS hosted build controller.
Let's add these lines too:

  <Target Name="AfterBuild">
    <Exec Command="node_modules\.bin\gulp $(Configuration)" />
  </Target>

These will run my gulp task depending on the configuration. In Release, for example, I usually do angular template caching, html and js minification, and things like that. In Debug, I usually run jasmine tests. The decision is up to you.

Let's now extend the Clean target to delete all the nodejs stuff too. Insert this tag into a PropertyGroup:

     <CleanDependsOn>
        $(CleanDependsOn);
        CleanNodeFiles;
    </CleanDependsOn>

And then let's define this target:

<Target Name="CleanNodeFiles">
      <ItemGroup>
        <TypeScriptGenerated Include="%(TypeScriptCompile.RelativeDir)%(TypeScriptCompile.Filename).js" Condition="!$([System.String]::new('%(TypeScriptCompile.Filename)').EndsWith('.d'))" />
        <TypeScriptGenerated Include="%(TypeScriptCompile.RelativeDir)%(TypeScriptCompile.Filename).js.map" Condition="!$([System.String]::new('%(TypeScriptCompile.Filename)').EndsWith('.d'))" />
    </ItemGroup>

    <Delete Files="@(TypeScriptGenerated)" />
    <RemoveDir Directories=".\build\;.\bower_components\;.\typings\"/>
    <Exec Command=".\tools\DelFolder .\node_modules" />
  </Target>

These lines will delete all typescript generated files and all bower/tsd files.
You may notice that the node_modules directories and subdirectories are deleted using a DelFolder script and not through RemoveDir. This is because node_modules is a very deep directory tree and the RemoveDir task cannot delete paths longer than 256 characters. For this reason, I include a script in my source control (not in the project, however) with these lines:

@echo off
if {%1}=={} @echo Syntax: DelFolder FolderPath&goto :EOF
if not exist %1 @echo Syntax: DelFolder FolderPath - %1 NOT found.&goto :EOF
setlocal
set folder=%1
set MT="%TEMP%\DelFolder_%RANDOM%"
MD %MT%
RoboCopy %MT% %folder% /MIR
RD /S /Q %MT%
RD /S /Q %folder%
endlocal

As a final task, we may want to add custom files (generated by gulp tasks, for example) to the deployment. We can do it with a few more lines of msbuild code:

<Target Name="DefineCustomFiles">
    <ItemGroup>
      <CustomFilesToInclude Include=".\build\**\*.*">
        <Dir>./</Dir>
      </CustomFilesToInclude>
    </ItemGroup>
  </Target>

  <Target Name="CustomCollectFiles">
    <ItemGroup>
      <FilesForPackagingFromProject Include="@(CustomFilesToInclude)">
        <DestinationRelativePath>%(CustomFilesToInclude.Dir)\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
  </Target>

These lines instruct the deployment to include the whole build folder and place it in the root.
A typical use case is this one:
usually you have your html files in the solution and, using gulp, you minify your views. Of course you will not save these changes into your project root, or your source files would be modified and no longer usable (since they are minified).
For this reason, you usually want to place all modified files into another folder (in my case, build). Of course, Visual Studio knows nothing about these files since they are completely external to your web application. Thanks to this task, we include these files in the WebDeployment package and place them in the root folder, replacing the "development" files.
This task can also be useful to include the bower_components folder in the deployment (selecting the appropriate scripts) and to remove all the folder references from your .csproj. This choice is up to you. However, I usually prefer to keep the script references in the solution, to always have a clear overview of what my web application needs to run.

Ok, let's do a final sanity check ("casting out nines"): clean your project and verify that all nodejs folders are gone. It may take a while, since the robocopy script is slower than a simple directory delete. Then build your project, and all the stuff should be back in place.

You can find all the source code for this demo by cloning this repository: https://github.com/XVincentX/NodeVStudio
Good. We're done. Have fun.

 

P.S: The project is evolving: have a look here -> https://github.com/XVincentX/NodeJsMsBuild
P.S.2: I was mentioned in the Morning Brew and on the Asp.Net site too!



Jun 22 2014

New Versions of Mvc Controls Toolkit and Data Moving Controls Suite

Category: Asp.net | Javascript | MVC | WebApi. By Francesco @ 09:43

New 3.0.0 release of the Mvc Controls Toolkit. See the list of changes.

New 1.2 release of the Data Moving Controls Suite. See the list of changes.

 

Enjoy!

Francesco


Feb 18 2014

Data Moving Mvc Control Suite Available for Purchase!

Category: WebApi | MVC | JavaScript | Asp.net. By Francesco @ 06:54

Finally, the Data Moving Controls Suite is available for purchase! Not only several powerful asp.net mvc controls, easy to configure with a fluent interface and easy to style with your favourite framework (supported: jQuery UI, jQuery Mobile and Twitter Bootstrap), but also a complete Single Page Application framework: a sophisticated validation framework that extends the standard server/client asp.net mvc validation framework, the possibility to store control settings and reuse them in several pages, futuristic user interfaces based on Interaction Primitives, and more...

Hurry up: 25% off till 2014-4-31!

Try to win your license by solving the Triangles Enumeration Problem.


Jan 7 2014

JavaScript Intensive Web Applications 4: JSON based Ajax and Single Page Applications

Category: MVC | JavaScript | WebApi. By Francesco @ 06:19

JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax


In this last post of the series, I discuss the use of JSON based Ajax calls and client side View Models. I will also propose a simple implementation of a knockout.js binding that applies a generic jQuery plug-in to an Html node. The post concludes with a short analysis of Single Page Application frameworks.

In my previous post we saw that Html returning Ajax calls update the needed parts of an Html page while keeping the remainder of the page unmodified. This allows a tighter interaction between user and server, because the user may work on other areas of the page while waiting for a server response, and he/she may ask the server for supplementary information in the middle of a task without losing the whole page state.

The user experience may be improved further if we are able to maintain the whole state of the current task on the client, because this way we reduce further the need to communicate with the server: the user may prepare all data for the server while receiving immediately all needed help and suggestions, with no need to communicate with the server in this first stage. Communication with the server is needed only after everything has been prepared. For instance, the user may modify all data contained in a grid, resorting to a detail window when needed. Entities connected with one-to-many relations to the main list may be edited in the detail view. Everything without communicating with the server! Then, when all changes have been done, the user performs a single submit and updates the global state of the system. The server response may contain corrections applied by the server to the data supplied by the user, which are automatically applied to the client copy of the data.

In other words, maintaining the whole state of a task on the client side allows a tighter user-machine cooperation, since this cooperation may be performed without waiting for remote server answers. However, the increased complexity of the client side requires a robust and modular architecture of the client code. In particular, since we move logic, and not only UI, to the client side, Html nodes, which are mainly UI stuff, must be supported by JavaScript models. Models and Html nodes should cooperate while keeping separation of concerns between logic and UI. This means that all processing must take place on models that are then rendered by using client side templates. Accordingly, Ajax calls can't return Html anymore, but must return JavaScript models.

Summing up, all architectures where the whole state of the current task is maintained on the client should have the following features:

  1. JSON communication with the server. The format of the data exchanged between server and client might also be Xml based, but as a matter of fact, at the moment, the simpler JSON protocol is a kind of standard.
  2. Html is created dynamically by instantiating client templates, thus this kind of Web Application is not visible to search engines.
  3. The state of client and server must be kept aligned, by performing simultaneous updates on both client and server in a transactional fashion. This means, for instance, that if a server update fails for some reason the client must be able to restore the state of the last client/server synchronization.

As a matter of fact, at the moment point 3 has not received the needed attention even in sophisticated Single Page Application frameworks, which don't supply general tools to address it, so the problem is substantially left to custom solutions by developers.

In the case of Html based Ajax communication we have seen that, since the communication is substantially based on form submits, the server relies on all input fields having adequate names to build a model that is then passed to the Action methods that serve the client requests. In JSON based communications, instead, input field names are completely irrelevant, since action methods substantially receive JavaScript models.

Html ids and CSS classes are also used as “addresses” to select the Html nodes to enhance with JavaScript code. Several frameworks like knockout.js and angular.js avoid the use of these ids and CSS classes as a way to attach JavaScript behavior to Html nodes. In their case, model properties are “connected” to Html nodes through so-called bindings, which are substantially communication channels between Html nodes and JavaScript properties that update one side when the other changes. They may be one-way or two-way. Bindings may also connect Html nodes with JavaScript functions, and the developer may define custom bindings, thus bindings completely solve the problem of connecting Html nodes with JavaScript code, with no need for unique ids or selection-purpose CSS classes.

Below how to use a custom knockout.js binding for applying jQuery Plug-ins to Html nodes:

 

<input type="button" value="my button" data-bind="jqplugins: ['button']"/>
<input type="button" value="my button"
    data-bind="jqplugins: [{ name: 'button', options: {label: 'click me'}}]"/>

 

The binding name is followed by an array whose elements may be either simple strings, in case there are no plug-in options, or objects with a name and an options property. As you can see, in knockout.js bindings are contained in the Html5 data-bind attribute.

Below the JavaScript code that defines the jqplugins custom binding:

 

(function ($) {
    function applyPlugin(jElement, name, options) {
        if (typeof $.fn[name] !== 'function') {
            throw new Error("unrecognized plug-in name: " + name);
        }
        if (!options) jElement[name]();
        else jElement[name](options);
    }
    ko.bindingHandlers.jqplugins = {
        update: function (element, valueAccessor, allBindingsAccessor) {
            var allPlugins = ko.utils.unwrapObservable(valueAccessor());
            var jElement = $(element);
            for (var i = 0; i < allPlugins.length; i++) {
                var curr = allPlugins[i];
                if (typeof (curr) === 'string')
                    applyPlugin(jElement, curr, null);
                else {
                    applyPlugin(jElement,
                        ko.utils.unwrapObservable(curr.name),
                        ko.utils.unwrapObservable(curr.options));
                }
            }
        }
    }
})(jQuery)

 

The code above enables the use of all available jQuery plug-ins in all knockout.js based architectures, so that we can move to advanced client architectures based on knockout.js without giving up our favorite widgets and CSS/JavaScript frameworks like jQuery UI, Bootstrap, jQuery Mobile, and Zurb Foundation.
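As a usage note (a minimal sketch, not taken from the original post): like any other knockout.js binding, jqplugins only runs once bindings have been activated on the page, so the page enhancing code must still call ko.applyBindings on some view model:

// Activate knockout bindings for the page: after this call the jqplugins
// binding applies the listed jQuery plug-ins to every bound Html node.
var viewModel = {};          // page view model (empty here for brevity)
ko.applyBindings(viewModel);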

 

As a next step we may pass from storing the whole state of a single task to storing the whole application state on the client side, which implies that the whole application must live in a single physical Html page (otherwise the whole state would be lost). Such applications are called Single Page Applications.

In a Single Page Application, virtual pages are created dynamically by instantiating client templates that replace the Html of any previous virtual page in the same physical page. The same physical page may show several virtual pages simultaneously in different areas. For instance, one virtual page might play the role of master, and another the role of detail page.

Most Single Page Application frameworks also have the concept of virtual links and/or routing, and may connect the virtual pages to the browser history, so that the user may navigate among virtual pages with usual links and with the browser buttons.

But… why re-implement the whole browser behavior inside a single physical page? What are the advantages of Single Page Applications compared to “multiple physical pages applications” based on client View Models?

In general having the whole application state on the client side reduces further the need to communicate with the server, thus increasing the responsiveness to the user inputs. More specifically:

  1. Once the client templates needed to create a new virtual page have been downloaded from the server, further accesses to the same virtual page become very fast. On the contrary, loading a complex client-model based page that is able to store the whole state of a task may be time consuming, so saving this loading time improves the user experience considerably.
  2. The state of a previously visited virtual page may be maintained, so that the user finds the virtual page in exactly the same state he/she left it. This improves the cooperation between different tasks that are somehow connected: the user may move back and forth between several virtual pages with the browser buttons while performing a complex task, without losing the state of each page.
  3. The same physical page may contain several virtual pages simultaneously in different areas. Thus, the user may move back and forth between several virtual pages in one area, while keeping the content of another area. This scenario enables advanced forms of cooperation between virtual pages.
  4. The whole Single Page Application may be designed to work off-line too. When the user has finished working, the whole application state may be saved in the local storage and restored when he/she needs to perform further changes, or when he/she can go on-line to perform a synchronization with the server.

The main problem Single Page Application developers are faced with is keeping a large JavaScript codebase modular and maintainable. Since virtual pages are actually client template <-> ViewModel pairs, the concept of virtual page itself has been conceived in a way that increases modularity. However, the various virtual pages also need a way to cooperate that doesn't undermine their modularity and the independence of each virtual page from the remainder of the system.

In particular:

  1. Each virtual page definition should not depend on the remainder of the system, to keep modularity; this, in turn, implies that virtual pages may not contain direct references to external data structures.
  2. Notwithstanding point 1, some kind of cooperation that doesn't undermine modularity must be achieved among model-view pairs and between model-view pairs and the application models. A modular cooperation may be achieved by injecting interfaces that connect each model-view pair with the external environment as soon as the model-view pair is added to the page.
  3. Pointers to data structures contained inside each virtual page should be either avoided or handled by resource managers, to prevent them from being used when a virtual page has been released or is not in an active state.

Separation is ensured somehow by the concept of ViewModel itself. Durandal.js uses AMD modules to encode ViewModels. The AMD protocol is a powerful technique both for dynamically loading and injecting other code modules that the current module might depend on and, consequently, for handling a large JavaScript codebase. However, the dependency tree is hardwired, so the injection mechanism is more adequate for injecting code than dynamic data structures that might depend on the state of the ongoing computation. Accordingly, fully achieving point 2 requires an explicit programming effort. Angular.js uses a custom dependency injection and module loading mechanism. That mechanism is easier to use, but it is less adequate for managing large codebases (in my opinion, not adequate at all). However, the fact that the injection mechanism is less structured makes it easier to inject dynamic data structures when a model-view pair is instantiated.

In general most frameworks ensure separation with some kind of cooperation, but no framework offers a completely out-of-the-box solution for point 2, or an out-of-the-box solution for managing the lifetime of pointers that have been injected into model-view pairs to ensure an adequate cooperation in the context of the ongoing computation (point 3). More specifically, the lifetime of pointers to AMD modules (or other types of dynamically loaded modules) that have been injected is automatically handled, but there is no out-of-the-box mechanism for managing pointers that a model-view pair might have to data structures contained in another model-view pair, so the developer has the burden of coding all the controls needed to ensure the validity of each pointer, in order to avoid the use of pointers to data structures contained in model-view pairs that have been removed from the page.

The need for a more robust solution to problems 2 and 3 is among the reasons that pushed me to implement a custom Single Page Application framework in the Data Moving Controls suite. The Data Moving SPA framework (see here, and here) relies on contextual rules that “adapt” each virtual page that is being loaded to the “current context”, where the “current context” includes both interface implementations that connect the virtual page to the remainder of the system and information about the current state of the application, such as whether the user is logged in or not, the current culture (that is, the browser language and culture settings), and so on. Contextual rules are also used to redirect a not-logged-in user to a login virtual page and to verify whether the user has the needed authorizations to access the current virtual page. The interface implementations passed by the contextual rules to the virtual page View Models also include all the resource managers needed for sharing data structures among all application virtual pages safely. Another communication mechanism is the possibility to pass input data to any page that is being loaded. Such input data are analogous to the input data passed in a query string; in fact, this input may also be included in virtual links.

Another big challenge of Single Page Applications is the duplication of code on both the client and server side. In fact, the same classes, input validation criteria, and other metadata must be available on both the client and server side, and when the languages used by the two sides are different, this becomes a big problem. The Meteor framework uses JavaScript on both server and client, and allows code sharing between the two sides. The main price to pay for this solution is the use of a language that is not strongly typed on the server side too. In the Data Moving SPA framework we faced this problem by equipping the SPA server with dynamic JavaScript files implemented with Razor views. This way JavaScript data structures may be obtained by serializing their equivalent .Net data structures into JavaScript.

Another important problem all SPAs must solve is data synchronization between client and server. Durandal.js works quite well with Breeze.js, which offers some synchronization services for the case where the server may be modeled as an OData source. Breeze.js may be adapted to most other SPA frameworks too, but this solution is acceptable only if there is almost no business logic between the client and the server side database. In fact, only in this case may the server API be exposed as an OData source only, with no need for more complex communication.

Meteor takes care of server/client synchronization in a way that is completely transparent to the developer. Such a solution facilitates the coding of simple applications, but may be inadequate for complex business systems that need to control communication between client and server explicitly.

The Data Moving SPA framework offers retrievalManagers to submit a wide range of (customizable) queries (including OData queries) to the server, while viewModelUpdatesManagers and updatesManagers take care of synchronizing a generic data structure with the server in a transactional fashion, by taking into account both changes in several Entity Sets (additions, modifications, and deletes) and changes in temporary data structures (core workspaces). As a result of the synchronization process they may return either errors, which in case of failure are automatically dispatched to the right places in the UI, or remote commands that apply modifications to the client side data structure to be synchronized with the server. While the synchronization process is completely automatic, the developer has full control over which data to synchronize and when to synchronize them, and also the possibility to customize various parts of the process.

 

That’s all! This post ends the short series about JavaScript intensive web applications. This series is in no way a tutorial that extensively describes all details of the techniques that have been discussed, but just a guide on how to select the right technique for each application and on how to solve some architectural issues that are not usually discussed elsewhere.

 

Stay tuned! 

Francesco


Dec 22 2013

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax

Category: Javascript. By Francesco @ 09:46

JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript


What are the advantages and drawbacks of using Ajax to update web pages? How do you decide whether Ajax calls should return Html or JSON? In this post I will give some answers to the above questions, together with some tricks to enhance with jQuery plug-ins the Html created dynamically as a consequence of Ajax calls.

Most people that use Ajax, when asked why they are using it, answer that Ajax calls improve performance and user experience…. Well, for sure they improve the user experience, but I am not sure the possible performance improvements are that relevant… Finding the right answer to all these questions is the first step toward an optimized use of Ajax based techniques.

What are the times that compose the total response time of a server request? A network latency time is needed to establish the connection with the server, a transmission time is needed to send all the bytes (and depends on the available bandwidth), then there is a server response time and a browser re-drawing time. Now, if the server is well designed, and if we are not sending tons of html, the bottlenecks are the latency time and the browser re-drawing time. With all of today's continuous technological improvements, bandwidth will have less and less impact on performance, and so will the browser re-draw time. So the request response time will be more and more tied to the network latency. Accordingly, redrawing the whole page or just a part of it requires almost the same communication time. Moreover, re-drawing a non-negligible area of the page (say 25% of the page) requires almost the same time as re-drawing the whole page, since a whole page re-draw is more efficient than a partial re-draw.

As a conclusion, in most cases Ajax techniques don't imply any appreciable improvement in the total response time! So why use Ajax?

  1. If we need to refresh just a small part of the page, as in the case of an auto-complete that writes suggestions under the textbox we are typing in, there is a non-negligible improvement in the response time.
  2. During the Ajax update the state of the remainder of the page is maintained. What does this mean? From the user experience point of view this means something like: the browser will not lose the scrolling we have done, and the textbox the user is writing in will not lose the focus… otherwise it would be impossible to have auto-complete and similar widgets working properly. One might object that we might restore the whole state after a standard page redraw too… Yes, it is true… but before the page has been completely redrawn the user would see the page returning to the top of the document, the textbox disappearing, and so on… and similar unacceptable effects.
    Maintaining the whole state is important also for more macroscopic state information. Think, for instance, of a grid with a detail view that retrieves row details from the server when a row is selected and shows them in a separate area of the page. A complete page refresh might cause the loss of the whole grid state, that is, the data page shown, sorting and filtering, the possible scrolling of the grid body, etc. Now, in theory, it is possible to rebuild the whole state after a whole page refresh too, but this might confuse the user, who would see the grid disappearing, being re-drawn in a different position and then being scrolled until it reaches the previous position. However, this isn't the only drawback: if the grid data are taken from a shared database (as is usual), the same page with the same sorting and filtering might also show different data, completely disorienting the user. Moreover, any attempt to take into account all this state information on the server side might undermine the modularity of our application, turning our Controllers into “spaghetti code”.

So when  is it convenient to use Ajax techniques?

Simple: either when we need to update just a small part of the page, or when we need to keep the state of a part of the page. Also the reason for implementing our application as a Single Page Application, that is, an application that never leaves the same physical page, is always the same: keeping state information in the physical page. In the next post, dedicated to Single Page Applications, we will see other reasons too for keeping state information in the page, but the fact remains that… the reason why Single Page Applications exist is… keeping state information in the page.

Now, when our pages become more and more complex, keeping state information inside Html nodes may lead to “spaghetti code”, so, in a way that is completely analogous to the Mvc pattern on the server side, it is more and more convenient to store information inside a client side ViewModel, and then use that ViewModel to render adequate Html. On the server side we use Razor Views to turn models into Html, while on the client side we use client side templates to create Html dynamically from a JavaScript model… Well, this is the main reason to use Ajax calls where client and server exchange JSON!

JSON based Ajax techniques will be discussed in greater detail in the next post of this series. Here it is worth pointing out just when they should be preferred to standard Ajax techniques where the server returns the needed html directly. Since JSON techniques conform to the idea of using a client side ViewModel, which in turn ensures better modularity, one might draw the conclusion that they should always be preferred to standard Html-returning Ajax techniques!… NO… false: JSON based techniques should be preferred only when you may use client side templates! Below are the typical reasons that, in some circumstances, prevent the use of client templates:

  1. Pages created with client templates are not visible to search engines.
  2. Some slow mobile device might not be able to render client templates with an acceptable performance.

You might object that also when Ajax calls return Html, that Html is not visible to search engines. TRUE… but IRRELEVANT because, when we use Ajax calls returning Html, the initial page is not rendered with Ajax: the same Action Method that serves an Ajax request may also be called when the initial page is rendered, but without using Ajax, using @Html.Action(…) and @Html.RenderAction(…) instead. This way, the initial page is completely visible to search engines. Accordingly, if we make a clever use of Html-returning Ajax Mvc controllers we may produce web applications that are completely visible to search engines. Here “clever use” means, for instance, that when we change the page of a grid we don't do it with Ajax but with a link based pager (possibly... with a smart encoding of the page number in the URL). In other terms, we should use Ajax only for those operations that are not performed by search engines. So, for instance, we may show a detail area in a grid page, but then the same detail page must also be available as a separate page, through a "pretty url" reachable via a link.

 

Let's see in detail how we may avoid initial Ajax calls when we render the whole page, with the simple example of the grid with a detail view.

Suppose we have a PlannerController with a ToDoList action method that fills a ViewModel with a paged list of ToDoItems, and a DisplayDetailToDo action method that fills a ViewModel with the details of a single ToDoItem. Suppose that we display the ViewModel filled by the ToDoList action method in a ToDoList View containing a grid and a detail area, and suppose that initially the detail area should contain the first item of the grid. Then the detail area of the ToDoList View should be something like:

<div id="detailArea" data-update-url="@Url.Action("DisplayDetailToDo", "Planner")">
    @Html.Action("DisplayDetailToDo", "Planner", new {ItemId=Model.Items[0].ItemId})
</div>

Then, whenever the user selects the ToDoItem with Id –> selectedId in the grid, we perform the Ajax call:

var ajaxRoot = $('#detailArea');
ajaxRoot.load(ajaxRoot.attr("data-update-url") + "?ItemId=" + selectedId);

We may place this code in a click handler (my previous post shows how to add click handlers modularly) that catches all events bubbled by the rows of the grid. The grid side code depends on the chosen grid, but we may take selectedId from an Html5 attribute of the button, link or other Html node used to select the grid row, as in the sketch below.
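A minimal sketch of such a handler (the grid selector, the CSS class and the data-item-id attribute are illustrative assumptions) might be:

// Delegated handler: catches the clicks bubbled by the row selection nodes.
$('#toDoGrid').on('click', '.select-row', function (e) {
    e.preventDefault();
    var selectedId = $(this).attr('data-item-id'); // id stored on the clicked node
    var ajaxRoot = $('#detailArea');
    ajaxRoot.load(ajaxRoot.attr('data-update-url') + "?ItemId=" + selectedId);
});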

In both cases we use the same action method that should be something like:

public ActionResult DisplayDetailToDo(int ItemId)
{
    var model = repository.GetToDo(ItemId);
    ...
    return PartialView(model);
}

Where I omitted all error handling code.

Our problem now is how to enhance the Html returned by the Ajax call with jQuery widgets too. We may use basically the same technique I showed in my previous post, based on the widgetsHelpers.initialize method, since when we insert new Html with the jQuery .html method all JavaScript contained in the Html string is executed. However, the widgetsHelpers.initialize method contains the jQuery .ready method… which doesn't work with dynamically added content. This problem is easily solved with a temporary substitution of the jQuery .ready method with a custom method during the processing of the Ajax response:

var delayedExecution = [];
var newReady = function (x) {
    delayedExecution.push(x);
};
var oldReady = jQuery.fn.ready;
jQuery.fn.ready = newReady;
try {
    //response processing here
}
finally {
    jQuery.fn.ready = oldReady;
}
for (var i = 0; i < delayedExecution.length; i++)
    delayedExecution[i]();

 

Where, in most cases, the response processing is just the call to the jQuery .html method. Thus, we may define a widgetsHelpers.dynamicHtml(jTarget, html) that does the job of attaching an Html string to a jTarget node while ensuring that all JavaScript enhancements contained in the Html string are properly applied:

widgetsHelpers.dynamicHtml = function (jTarget, html) {
    var delayedExecution = [];
    var newReady = function (x) {
        delayedExecution.push(x);
    };
    var oldReady = jQuery.fn.ready;
    jQuery.fn.ready = newReady;
    try {
        jTarget.html(html);
    }
    finally {
        jQuery.fn.ready = oldReady;
    }
    for (var i = 0; i < delayedExecution.length; i++)
        delayedExecution[i]();
};

 

 

However, we have another problem too… avoiding that the jQuery plug-ins we apply to the newly added Html are re-applied to the remainder of the Html page. In fact, if, for instance, we enhance all input fields contained in our dynamic Html that have the “datetime” CSS class with a Bootstrap datepicker, the datepicker plug-in would be re-applied also to the input fields of the remainder of the page with the same class. Often jQuery plug-ins are robust and re-applying them to the same nodes doesn't produce any effect. However, you can't rely on this robustness, and, in any case, a similar solution would be very inefficient. The only way out is using different names… however, as we have seen in my previous post, the code for generating datepickers is all contained in a single Date.cshtml partial view that is called both by our initial page and by any other Ajax request.

Actually this is not the only “name convention” problem of Ajax provided content that we must solve! Normally, in Asp.net Mvc all input fields have names that MUST be strictly tied to the position where their content will be inserted in the ViewModel of the action method that receives the data posted by the client. So, for instance, a date that must be inserted in the DateOfBirth property of a Person instance stored in the PersonalInfos property of the ViewModel MUST be rendered in an input field with name PersonalInfos.DateOfBirth, and id PersonalInfos_DateOfBirth, otherwise the default model binder wouldn't be able to fill the ViewModel properly. The dot in the name is turned into an underscore in the id because the id can't contain dots. Now, if the ViewModel used to render the page is the same as the model used to receive the post, all the above conventions are automatically enforced by the Asp.net Mvc Html helpers (TextBoxFor, etc.).

However, in general the ViewModel used by the Ajax controller differs from the one used for the initial page, since the Ajax call furnishes just a part of the page data. So, for instance, in our previous example, if the Ajax call returns just the data obtained by rendering a Person object, the name of our date field would be DateOfBirth instead of PersonalInfos.DateOfBirth. Now, if the Person data are submitted separately with another Ajax call to an Action method that uses a Person ViewModel, everything works ok, but if the Person data must be submitted together with the main page data we must somehow add the PersonalInfos prefix.

Adding the PersonalInfos prefix to all input fields rendered by the partial view used by the controller that responds to the Ajax request is quite easy. It is enough to add the following code at the beginning of the View:

Html.ViewData.TemplateInfo.HtmlFieldPrefix = "PersonalInfos";

The prefix should be added just to the top level Partial View, since each time we call EditorFor or DisplayFor the Asp.net Mvc engine takes care of defining the right prefix for the child Partial View. However, in general the server doesn't know this prefix, since the prefix depends on the role that the Ajax content will play in the overall page ViewModel. Suppose, for instance, that the Html returned by the Ajax call must be used to add a new row to a grid that contains Person data. The prefix to add should be something like AllPersons[i], where i is the 0 based index of the new row in the grid. So if the grid already contains 10 rows, i=10; if the grid already contains 15 rows, i=15; and so on. In other terms, only the client may know our prefix! So we must add the prefix as a further parameter of the Ajax call.

Unluckily the previous prefix, in general, cannot be used also to solve the problem we have with the datetime CSS class, because in the second case the CSS class must be unique within each Ajax call, not within a specific position in the ViewModel. Accordingly, for the CSS classes used to enhance the Ajax Html we might use a different prefix, based on a count of all Ajax calls made to the server from the current Html page:

(function ($) {
    ...
    var ajaxCount = 0;
    widgetsHelpers.newClassPrefix = function () {
        return "classprefix" + (ajaxCount++);
    };
})(jQuery)

The two prefixes must be added to the parameters of the Ajax call together with the original request parameters, say,  personId, to get the final request URL:

ajaxRoot.attr("data-update-url")+"?personId="+personId+"&htmlPrefix="+htmlPrefix+"&classPrefix="+classPrefix
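Putting the client side pieces together, a request might look like the following sketch (the selector, the use of $.get, and the variables currentRowCount and personId are illustrative assumptions; newClassPrefix and dynamicHtml are the helpers defined above):

var ajaxRoot = $('#personDetail');
var htmlPrefix = "AllPersons[" + currentRowCount + "]"; // position in the overall page ViewModel
var classPrefix = widgetsHelpers.newClassPrefix();      // unique per Ajax call

$.get(ajaxRoot.attr("data-update-url"),
    { personId: personId, htmlPrefix: htmlPrefix, classPrefix: classPrefix },
    function (html) {
        // insert the returned Html and run its JavaScript enhancements
        widgetsHelpers.dynamicHtml(ajaxRoot, html);
    });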

On the server side, any Ajax enabled controller must take care of receiving the two prefixes:

public ActionResult PersonData(int personId, string htmlPrefix, string classPrefix)
{
    if (!string.IsNullOrWhiteSpace(htmlPrefix)) ViewData["htmlPrefix"] = htmlPrefix;
    if (!string.IsNullOrWhiteSpace(classPrefix)) System.Web.HttpContext.Current.Items["classPrefix"] = classPrefix;
    var model = repository.GetPersonById(personId);
    ...
    ...
    ...
    return PartialView(model);
}

 

The classPrefix has been added to the HttpContext dictionary, since it must be used by all Partial Views called in the current request, while the htmlPrefix has been added to the ViewData since it must be used just by the top level Partial View.

Now in the top level Partial View:

@{
    Html.ViewData.TemplateInfo.HtmlFieldPrefix = ViewData["htmlPrefix"] as string ?? "";
    string classPrefix = System.Web.HttpContext.Current.Items.Contains("classPrefix") ?
        System.Web.HttpContext.Current.Items["classPrefix"] as string + "-" :
        "";
}

In the Date.cshtml Partial View, and in general in all Partial Views that might be involved in an Ajax call:

 

@{
    string classPrefix = System.Web.HttpContext.Current.Items.Contains("classPrefix") ?
        System.Web.HttpContext.Current.Items["classPrefix"] as string + "-" :
        "";
}

 

and then in each CSS class enhanced input field:

@Html.TextBoxFor(m => m.DateOfBirth, new {@class=classPrefix+"datetime"})

 

That’s all for now!

In the next post Json based Ajax calls and Single Page Applications.

Stay tuned!

Francesco


Dec 10 2013

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

Category: Javascript. By Francesco @ 06:35

JavaScript Intensive Web Applications 1: Getting JavaScript Intellisense

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax


There are mainly three ways you may improve your application with JavaScript, each with its advantages and disadvantages:

  1. Enhancing the page Html with JavaScript widgets
  2. Refreshing Html page areas with fresh Html returned by Ajax calls
  3. Creating Html dynamically using JSON returned by Ajax calls

In this post I will speak about the first technique, which is the only one that has substantially no drawbacks. The other ones will be discussed in further posts of the same series.

In this and in all other posts of this series I assume that your web Application is implemented with Asp.net Mvc.

If we suppose that the application submits the user inputs contained in Html input fields with standard form submits, JavaScript becomes just a tool that may improve the appearance of the page and help the user fill in the input fields more easily. In other terms, it becomes a sort of “turbo CSS” we may use to improve the appearance and the user experience. This is the main idea behind all jQuery widgets, which select Html nodes with CSS selectors and enhance them in a way similar to what a CSS rule would do.

Unluckily, the “pseudo-styles” applied by jQuery widgets are not automatically enforced also on newly added Html, so jQuery widgets create problems when they are used together with Ajax techniques. We will analyze in detail these problems and how to solve them in the posts of this series dedicated to Ajax. In what follows I assume that no dynamic Html is added to the page, or that some small piece of Html that might be added automatically by some jQuery widget doesn’t need further enhancements by other jQuery widgets.

Do JavaScript enhancements have drawbacks? Since all browsers support JavaScript, …substantially no… if some simple cautions are adopted:

  1. You pay attention to cross browser compatibility. If you use jQuery and jQuery based frameworks like: jQuery UI, jQuery Mobile, Bootstrap, and Zurb Foundation this should be quite automatic.
  2. All Widgets that you use just enhance an existing Html. The basic functionality should be available, maybe with an awful unacceptable appearance, also if JavaScript is not supported. This requirements is not added for compatibility with browsers that don’t support JavaScript ….that don't exist anymore, but for compatibility with the search engines. If your application is an intranet, or if your page should not be available to search engines you may drop this point. Again if you use the above mentioned jQuery frameworks, and look at the specifications of further jQuery plug-ins you might use (most of the existing jQuery plug-ins conform to this requirement), and if you design properly your custom jQuery plug-ins, also this point should not be a problem.
  3. JavaScript enhancements must not undermine the accessibility of the page. This means all Widgets must use the right Html tags, and if needed, ARIA attributes. For instance, something that has the semantics of a list must be rendered with <li> tags also if it is enhanced with JavaScript. <table> …<tr>…<td> must not be used for layout but only for tabular data, if you need a table like layout, please use adequate CSS like display: table, and similar, instead. All widgets included in the jQuery frameworks I listed previously are ARIA compliant and conform to the requirements of this point.
  4. You use a well defined architecture, to avoid JavaScript “spaghetti code”. Architectures based on the idea of jQuery plug-in helps a lot but you need an effective way to organize all JavaScript modules used by the various pages. I will show you a trick based on require.js, and partial views  to add modularly as many JavaScript and CSS widget-files as you like, without undermining the maintainability of the application.
  5. You pay attention to the development time of each page and you avoid falling into an endless loop of improvements with new or better widgets.

All instructions that enhance the Html must be executed after the DOM is ready, thus they may be inserted either at the end of the page’s Html body, or in the page header enclosed in a jQuery $(document).ready(…) handler.
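When the second option is chosen, the enhancing code is simply wrapped in a ready handler; a minimal sketch, assuming a hypothetical somePlugin widget:

$(document).ready(function () {
    // executed only after the DOM is ready, so the selected nodes already exist
    $(".some-widget").somePlugin();
});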

Now, several “influencers” in the area suggest inserting all JavaScript at the end of the Html body and avoiding the use of .ready(). The reason is that any JavaScript placed in the page header slows down the page rendering. However, most of the time I prefer that the user see a white page loading rather than a page that has not yet been enhanced by JavaScript, because when you use complex widgets (a Tab widget is enough to show the phenomenon) the page may be unacceptable before its enhancement, even if a search engine is able to understand its content :). For this reason I usually place all JavaScript libraries in the header (they are slow to load) and the page enhancing code at the end of the page body. This way, since the page enhancing code is usually quite fast, the user sees a blank page first, while all the JavaScript libraries are loading, and then, after a quick adjustment (while the page enhancing code executes), the final page.

I suggest including all page enhancing code in a separate file that contains just lines of the type:

$("<selector>").myPlugin();

The file should contain just lines like the one above to keep the semantics of a “pseudo-CSS” file. This means that if you define custom widgets, the widget code should be included in a different JavaScript library file, which may be included in the page header together with all the other JavaScript libraries.
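As a rough sketch (myPlugin is just a placeholder name), a custom widget defined in such a separate library file might look like this:

// defined in a widget library file loaded in the page header
jQuery.fn.myPlugin = function () {
    return this.each(function () {
        // enhance each matched node here; any options can be read
        // from HTML5 data- attributes, as discussed below
    });
};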

The call to each plug-in should not contain any argument: all plug-in parameters should be inserted in Html5 attributes. This way all enhancement calls become “standard” and may be created automatically by general purpose JavaScript code (see below). However, this implies that whenever you substitute a widget with another widget that performs the same job, you must also modify the Html; usually this is not a problem if you enclose the Html to be enhanced in a single server module that is called from the remainder of the Html. The example below, involving a Bootstrap datepicker, shows how to proceed:

Html:

<input type="text" class="datepicker" value="02/16/12" data-date-format="mm/dd/yy" id="dp2" >

Enhancing JavaScript code:

$('.datepicker').datepicker();

The single line of JavaScript enhancing code above enhances all input fields with a datepicker CSS class. If you are using Asp.net Mvc, input fields with the datepicker class may be generated automatically with Html.EditorFor(…) if you define a Date.cshtml Mvc template and decorate all DateTime properties that represent pure dates with a DataTypeAttribute with a DataType.Date value. This way, any change to the datepicker parameters requires just a change to the Date.cshtml file.

jQuery Mobile, Bootstrap and Zurb Foundation assign predefined classes and/or Html5 attributes to all their predefined widgets and enhance them automatically on the .ready event, so you need to add an enhancing JavaScript file only if you use custom widgets. We will see the drawbacks of this approach when discussing Ajax techniques.

Event handlers may be attached by specialized jQuery extensions, as in the example below:

$('.click-operation').clickHandler();

The click-operation class may be applied to all nodes that need a click handler. Then, each single node might contain a data-event-operation Html5 attribute that specifies the operation to be carried out on that node. A possible implementation of the clickHandler jQuery extension is:

jQuery.fn.clickHandler = function () {
    this.click(function (evt) {
        // read the requested operation from the HTML5 attribute of the clicked node
        switch (jQuery(evt.target).attr("data-event-operation")) {
            case "op1": /* ... */ break;
            case "op2": /* ... */ break;
            // ...
        }
        evt.stopPropagation();
    });
    return this;
};

I used evt.target, so the click handler may also be used for bubbled click events. Moreover, I called stopPropagation to prevent the event from bubbling up to a possible ancestor clickHandler.
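As a rough illustration of the bubbling behaviour (the markup and operation name below are purely hypothetical), a click on a child of a .click-operation node still reaches the handler, which reads the attribute from the actual target:

// hypothetical markup, built with jQuery just for the sake of the example
var region = jQuery('<div class="click-operation"></div>').appendTo("body");
jQuery('<button data-event-operation="op1">Do op1</button>').appendTo(region);

// enhance the container once: clicks on the inner button bubble up to it,
// and evt.target lets the handler read the button's own data-event-operation
region.clickHandler();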

Returning to the datepicker example: since it is not part of the default widgets Bootstrap comes with, we might decide to substitute it with another widget. Imagine also that, analogously, we would like to substitute other widgets with better implementations…Wow…not an easy job…we would have to modify a lot of JavaScript files included in all the pages that contain the widgets we substituted. If we were able to include the references to the datepicker JavaScript file in the same Date.cshtml partial view that contains the Html of the datepicker, it would be enough to make a few modifications to this file and in 10 minutes we would have a different datepicker working. This way we would also be able to test several widgets easily.

The problem described above is a conceptual problem that is intrinsic to the pseudo-CSS approach used to manage the widgets. Widgets are conceptually different from style rules because style rules are part of a “closed specification” while widgets are not: there are different widgets that do the same job, and new widgets appear every day. The only way to deal with an “open set” is by enforcing modularity and by defining interfaces. In other terms, we must enclose all the code of a widget in a single module that offers a standard interface to the remainder of the system.

Below is a simple trick that solves the problem. Let’s add the following snippet of code to the bottom of our Date.cshtml file:

<script type="text/javascript">
    widgetsHelpers.initialize(["@Url.Content("~/Scripts/bootstrap-datepicker.js")"],
                              ["@Url.Content("~/Content/datepicker.css")"],
                              "datepicker",
                              ".datepicker");
</script>

The first argument is an array containing all the JavaScript files with the needed code, the second argument is the (possibly null) list of CSS urls that might be needed, the third argument is the name of the jQuery plug-in method to call, and the last one is the selector that characterizes all the inputs that must be enhanced with the datepicker.

The implementation of the widgetsHelpers.initialize function uses the require.js library to load the JavaScript files asynchronously, and is straightforward:

(function ($) {
    window["widgetsHelpers"] = window["widgetsHelpers"] || {};
    var widgetsHelpers = window.widgetsHelpers;
    widgetsHelpers.modules =
        {
            css: {},
            js: {},
            widgets: {}
        };
    function loadCss(url) { // load a CSS file and add it to the page
        widgetsHelpers.modules.css[url] = true;
        var link = document.createElement("link");
        link.type = "text/css";
        link.rel = "stylesheet";
        link.href = url;
        document.getElementsByTagName("head")[0].appendChild(link);
    }
    widgetsHelpers.initialize = function (js, css, widget, selector) {
        if (!widgetsHelpers.modules.widgets[selector]) {
            widgetsHelpers.modules.widgets[selector] = true;
            $(document).ready(function () {
                if (css) {
                    for (var i = 0; i < css.length; i++)
                        if (!widgetsHelpers.modules.css[css[i]]) loadCss(css[i]);
                }
                if (js) {
                    var nJs = [];
                    for (var i = 0; i < js.length; i++)
                        if (!widgetsHelpers.modules.js[js[i]]) {
                            nJs.push(js[i]);
                            widgetsHelpers.modules.js[js[i]] = true;
                        }
                    if (nJs.length)
                        require(nJs, function () {
                            $(selector)[widget]();
                        });
                    else {
                        $(selector)[widget]();
                    }
                }
                else {
                    $(selector)[widget]();
                }
            });
        }
    };
    widgetsHelpers.loadCss = loadCss;
})(jQuery);

We create a namespace, then we create the dictionary widgetsHelpers.modules to “remember” the JavaScript files, CSS files and modules that have already been loaded. The loadCss function loads the CSS files, which cannot be loaded with require.js.

Finally, the initialize function verifies whether another call has already required the same widget and, if not, on the .ready event it loads both the needed CSS and JavaScript files (if not null and not already loaded), then it applies the widget to the nodes matched by the provided selector.
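For instance, if two different partial views on the same page both emit the datepicker snippet shown above, only the first call has any effect; a minimal sketch of the behaviour, with plain urls standing in for the @Url.Content(…) calls:

// first call: registers the ".datepicker" selector and queues the files for loading
widgetsHelpers.initialize(["/Scripts/bootstrap-datepicker.js"],
                          ["/Content/datepicker.css"],
                          "datepicker",
                          ".datepicker");

// an identical call emitted by another partial view is a no-op:
// modules.widgets[".datepicker"] is already true, so nothing is loaded or applied twice
widgetsHelpers.initialize(["/Scripts/bootstrap-datepicker.js"],
                          ["/Content/datepicker.css"],
                          "datepicker",
                          ".datepicker");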

In case a single partial view needs a JavaScript module containing the definitions of several widgets, we may use widgetsHelpers.initializeAll instead:

widgetsHelpers.initializeAll = function (js, css, widgetsArray, selectorsArray) {
    widgetsHelpers.initialize(js, css, widgetsArray[0], selectorsArray[0]);
    for (var i = 1; i < widgetsArray.length; i++) widgetsHelpers.initialize(
        null, null, widgetsArray[i], selectorsArray[i]);
};

widgetsArray and selectorsArray are arrays that contain, respectively, all the widget names and all the jQuery selectors used to reference these widgets from the Html nodes. The JavaScript and CSS files are passed just in the first call to initialize, while all the other calls are needed just to create the pseudo-CSS rules.

The same partial view may contain several calls to initialize and/or initializeAll in case the widgets are split across different files.
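As an example of how a partial view might use it (the file and widget names below are purely illustrative), a single module defining both a datepicker and a tag editor could be registered like this:

// hypothetical bundle containing the definitions of both widgets
widgetsHelpers.initializeAll(["/Scripts/my-widgets.js"],
                             ["/Content/my-widgets.css"],     // optional CSS, may be null
                             ["datepicker", "tagEditor"],     // jQuery plug-in method names
                             [".datepicker", ".tag-editor"]); // selectors of the nodes to enhance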

That’s all for now!

In the next post all secrets of Ajax based applications…and new useful tricks.

Stay tuned!

Francesco


Dec 2 2013

JavaScript Intensive Web Application 1: Getting JavaScript Intellisense

Category: Javascript | Francesco @ 15:43

JavaScript Intensive Web Applications 2: Enhancing Html with JavaScript

JavaScript Intensive Web Applications 3: Enhancing Applications with Ajax

This is the first of a series of tutorials on the use of client techniques in web applications. We will discuss when it is convenient to use Ajax, JavaScript intensive web pages, Json communication, or Single Page Applications, and how to solve some typical “nightmares” that these techniques bring with them.

In this first tutorial we will try to remove (or at least lower…) one of the main barriers that discourage the development of large JavaScript codebases: the absence of syntax checks and Visual Studio intellisense comparable with the ones we have in other, strongly typed, languages.

Actually, Visual Studio and a lot of other JavaScript editors are able to signal pure syntax errors immediately. The main problem is that they are not as smart at inferring types, and consequently at furnishing adequate intellisense. The reasons for this incapability are basically two:

  1. JavaScript is a dynamic, not strongly typed language. This means that the same variable or function parameter may store different data types, and that consequently the JavaScript editor cannot rely on the variable/parameter data type to perform type checking and to give adequate intellisense.
  2. JavaScript has no concept of module reference and/or linking, so a JavaScript file comes to know all the details about external functions and prototypes only at run-time, when all the needed modules are surely available.

Visual Studio offers tools for easily resolving the second problem: when you are in a JavaScript file you may add references to the other JavaScript files used by the current module using the syntax of Xml comments. Xml comments are JavaScript comments composed of /// followed by adequate Xml expressions. Since they are comments, they are ignored by both JavaScript minifiers and JavaScript interpreters.

The syntax for a JavaScript reference Xml comment is basically:

/// <reference path="/path/subpath/..../JavascriptFileToReference.js" />

We may also use “~” to denote the root of our web application.
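For instance, a root-relative reference might look like this (the file name is just an illustration):

/// <reference path="~/Scripts/jquery-1.9.1.js" />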

When we are editing a JavaScript file, it is enough to drag the file we would like to reference from the Solution Explorer into the file we are editing to get the reference Xml comment automatically.

If a JavaScript file is included in an Html or .cshtml page, there is no need to also reference it with a reference Xml comment to get JavaScript help on its code. However, Html or .cshtml files often use JavaScript files that they don’t include directly, for different reasons: 1) they might use code retrieved via AMD; 2) the JavaScript files might be included in a _Layout page, or in another .cshtml page in case they are partial views; 3) the .cshtml file might be used to produce a dynamic JavaScript file, instead of an Html page.

In all the above cases we may use a reference Xml comment inside the <script> tags that enclose the JavaScript code. However, unluckily, in this case we can’t drag the file to reference; we have to insert the reference Xml comment manually.

So now we are able to reference JavaScript libraries to get intellisense…the problem now is to actually get intellisense on each JavaScript variable. While JavaScript is not strongly typed, starting from Visual Studio 2012 the JavaScript intellisense improved a lot, and now Visual Studio is able to infer the type that should be contained in a variable from the previous code. For instance, if you write:

(function () {
    var simpleOperation = function () {
        this.mult = function (x, y) {
            return x * y;
        };
    };

and then:

var operation = new simpleOperation();

Then we get help on the variable operation:

[Screenshot: intellisense on the operation variable]

We get the same help if the object is returned by a factory function:

(function () {
    var simpleOperationFarm = function () {
        return{
            mult: function (x, y) {
                return x * y;
            }
        };   
    }
    var operation = simpleOperationFarm();

[Screenshot: intellisense on the object returned by the factory function]

In general, Visual Studio 2012 and later does its best to infer a type from a static analysis of the code. However, very often static analysis is not able to infer types in a dynamic language like JavaScript.

However, we may use a couple of tricks to “pass” to Visual Studio the information about the types contained in a variable or parameter.

The first trick may be applied to the parameters of a function: immediately after the parameter declarations we may place a param Xml comment:

function (operation) {
        /// <param name = "operation" value = "new simpleOperation()"/>

The value attribute may contain any JavaScript expression, but typically we put a creation expression, a factory function call, a simple value (such as an integer or a string), an array, an object, or nested arrays and objects. Below is a suitable value to get help on the objects that are elements of an array:

[Screenshot: intellisense obtained through a param Xml comment whose value is an array of objects]

Notwithstanding some syntax errors…we get our intellisense!

We might obtain a similar result also with:

function (operation) {
    /// <param name = "operation" value = "[{mult: function(x, y){}}]"/>

Now we are able to get help on each function parameter…but often knowing the type of the function parameters is not enough to infer the type of each variable that is local to the function, or the type of an object manipulated by the function (for instance because it was not passed as a parameter, but is part of the function closure). Moreover, sometimes JavaScript functions accept parameters that may take several different types.

Now, we may call a method or read/set a property of an object in a given place of a JavaScript function only if we know that in that part of the code the member or variable must necessarily contain a given type, because we must be sure the property or method we are referring to actually exists! Thus, let’s suppose that we know that in some part of our code the type of the variable operation, or the type of the member myObject.operation, must be simpleOperation; then we may enclose that part of our code in a function:

(function (..., operation, ...) {
    /// <param name = "operation" value = "new simpleOperation()"/>
    // now we may get intellisense
    ...
    ...
    ...
})(..., myObject.operation, ...);

In case we can’t enclose the code inside a function, we may use this other trick:

myObject.operation = myObject.operation || new simpleOperation();

Since we assumed we are sure that myObject.operation already contains a simpleOperation, the second operand of || will never be evaluated, so our instruction does simply…nothing, except helping Visual Studio infer the type of myObject.operation.

Needless to say, the second operand of the above || may contain the same kinds of expressions as the value attribute of the param Xml comment.
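For instance, assuming a hypothetical member that we know holds an array of simpleOperation objects, the same trick gives intellisense on its elements:

// the right-hand side is never evaluated at run-time (the member is assumed to be already set);
// it only tells Visual Studio what kind of value the member contains
myObject.operations = myObject.operations || [new simpleOperation()];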

The above tricks enable us to get intellisense in any situation! Thus the main nightmare of JavaScript coding has been “mitigated”!

That’s all for now!

In the next post a deeper analysis of JavaScript intensive Web techniques.

Stay tuned!

Francesco


Sep 2 2013

Data Moving Plug-in new Single Page Application View Engine

Category: Francesco @ 10:28

The Data Moving Plug-in will be available for purchase in a few days. The final version comes with an advanced Single Page Application view engine that automatically handles virtual pages and virtual links. Virtual pages may be connected to the browser history, so the user may navigate among them with the back and forward browser buttons, and they may be stored in a page store to keep their state, so that when the user returns to the same virtual page he finds it in exactly the same state he left it in. Thus the Data Moving Plug-in Single Page Application View Engine enables the developer to implement web applications that behave like native Windows applications.

The SPA framework also includes an authorization framework that coordinates the virtual pages and views framework with the standard Asp.net authorization system. If a user can't access a virtual page, he is automatically redirected to the login virtual page.

Not only views but also AMD modules are built dynamically by Mvc controllers using Razor views, so that their content may depend on context-related information such as the selected language and the logged-in user. Moreover, this way JavaScript data structures may be created by serializing their equivalent .Net data structures, thus avoiding duplicating code in both JavaScript and .Net.

Context rules, specified by the developer with a simple syntax in JavaScript files, supervise the addition of parameters to the AMD and template calls to the server, to adapt them to the context (logged-in user, selected language, etc.), and define virtual page redirection rules that adapt the navigation to the current context (user authorizations and global application state).

A preliminary tutorial on the Data Moving Plug-in new Single Page Application View Engine is available on the blog of the Data Moving Plug-in web site. Below is a video that shows the SPA view engine at work:

[Embedded video: the SPA view engine at work]


Jun 19 2013

The Code of all Data Moving Plug-in Examples is Available on Codeplex

Category: Francesco @ 05:09

The code of all the Data Moving Plug-in examples shown in all the previous videos and tutorials is available on the Data Moving Plug-in Examples Codeplex site. All examples require the installation of the Data Moving Plug-in, so at the moment only those who received an evaluation or complimentary copy of the Data Moving Plug-in may run them. However, for anyone who has already read a tutorial or watched a video about the Data Moving Plug-in, it is interesting in any case to inspect the full code.

The Data Moving Plug-in will be available for purchase at the beginning of September and will also contain a complete SPA framework. In the meantime, take a look at the Data Moving Plug-in web site.

For any clarification, please don’t hesitate to contact me with the contact form of this blog.

The Data Moving Plug-in Team is looking for resellers; see here for more details.
