Interface Design for MoSync Applications

MoSync is an open-source cross-platform mobile application development environment that makes it easy to develop apps for all major mobile platforms from a single code base.
MoSync provides a number of ways to create user interfaces. You can choose one based on what your application requires and what you are most comfortable with. You can build your MoSync application's UI with plain HTML/HTML5, with Native UI, or with a Java-based UI for older devices. Along with supporting multiple UI approaches, MoSync also provides multiple libraries for implementing them.
This blog post discusses some of these methods and their pros and cons.

The Problem

When I started working with MoSync, I was working on a TMDB-API-based movie application. The target platforms were Android, iOS and Windows, with a native look and feel. You guessed it right! I had the option of using the Native C++ library, the Wormhole NativeUI JavaScript API, or the Widget C API.
The Native C++ library suffers from the same problem as most UI libraries: it provides a number of useful classes, but you need to write C++ code to create your interfaces.
The MoSync Wormhole library provides a bridge between the HTML5/JavaScript and C++ layers of your application. Using the Wormhole library, it is possible to use HTML5 markup to create NativeUI interfaces. The HTML5 in this case is used more as XML markup, i.e. you create div tags for everything, with special data attributes to specify widget properties. MoSync creates a hidden WebView which parses the HTML5 markup, uses JavaScript to navigate the DOM, and creates a NativeUI widget for each HTML5 element. I still couldn't use this approach because:
  • This approach is reported to be inefficient on some platforms, e.g. Windows Phone.
  • The NativeUI widgets created this way are accessible from JavaScript just like a web page (e.g. via getElementById()); I would have had to write extra code to access them from C++.
  • The JavaScript-C++ bridge is not very intuitive. It only provides a way to expose functions, not objects.
  • The life-cycle of the widgets is out of your control.

The Solution

In my case, the rest of the application was written in C++. All I wanted was some code-free markup to create interfaces dynamically at run time. What I required was:
  • Ability to write some markup to define UI.
  • Ability to load UI definition from multiple JSON files.
  • Easy access to UI widgets from code.
  • Move responsibility for creating/destroying widgets to some low-level class.

Similar methods are used by many frameworks and UI libraries, e.g. MFC uses .rc resource files to define user interfaces, and Qt uses .ui XML files. MoSync supports both XML and JSON parsing. Personally, I am not a big fan of XML, so I went with JSON to define the UI.

I wrote a class to dynamically create UI at run time from JSON files. The class handles the above scenarios and supports loading multiple JSON files. So, it is possible to define global properties in a separate file and keep UI for different screens in different files. 
As for accessing widgets easily, the class provides a getWidget() function, which takes the complete widget name as its argument and returns the widget. Internally, it maintains a map of name-widget pairs. If the widget already exists, it is returned; otherwise it is created, added to the map and returned. Any child widgets are created as well and added to the widget map.
For destroying widgets, the class provides a destroyWidget() function, which again takes the complete name of a widget and destroys it. This function is used by a higher-level class, ScreenManager, which handles creating/displaying/hiding/destroying screens. When a screen gets lowered, it automatically gets destroyed to release any acquired resources.
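
The lazy-create/cache behaviour described above can be sketched roughly as follows. This is an illustrative Java sketch, not the actual MoSync C++ class; the Widget type, the widget name and the omitted JSON parsing are stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a NativeUI widget wrapper.
class Widget {
    final String name;
    Widget(String name) { this.name = name; }
}

class UiLoader {
    // Map of complete widget names to already-created widgets.
    private final Map<String, Widget> widgets = new HashMap<>();

    // Return the widget if it already exists; otherwise create it
    // (in the real class, from its JSON definition) and cache it.
    Widget getWidget(String name) {
        return widgets.computeIfAbsent(name, Widget::new);
    }

    // Destroy a widget and drop it from the cache so its
    // resources can be released.
    void destroyWidget(String name) {
        widgets.remove(name);
    }

    int widgetCount() { return widgets.size(); }
}
```

Repeated getWidget() calls with the same name return the same cached instance, which is what makes widgets cheap to access from the rest of the C++ code.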

Conclusion

MoSync is a very powerful framework to develop cross platform applications. Although it supports multiple UI solutions and libraries to implement them, there is still scope for making life easier for developers. The class I wrote works quite well for my requirements, but it can still be improved in both efficiency and functionality.

Cross Platform Tools – Choosing Right One for Your Mobile App

Different cross-platform technologies have emerged to overcome the challenge of building native builds for different operating systems. Although cross-platform technologies are a good alternative, every platform has its own set of limitations. The big question is how to identify an appropriate tool based on your requirements. In this blog post we address this problem by comparing some popular cross-platform tools. We will look at Titanium, Kony and MoSync, as they are among the most popular cross-platform tools in use today. We will do a comparative analysis of these tools based on ease of development, memory footprint and app performance.

Ease of Development

Titanium has powerful widgets and a rich platform. It provides an Eclipse-based studio and uses JavaScript for development. Titanium offers good documentation and a strong developer community. It also supports on-device debugging, and uses the native SDKs for testing the app on an emulator. Its disadvantages include a lack of support for Windows 8.
As an alternative, MoSync uses both C++ and HTML5/JavaScript as development languages. MoSync Reload can be used to test changes on multiple devices and simulators, and MoSync also provides extensive documentation. However, MoSync doesn't provide a debugging option when building native UIs, and it requires the native SDKs for testing native UI apps on an emulator.
Kony, compared to the above two, provides a strong IDE for developing apps. With advanced drag-and-drop features like those of native platforms, Kony stands out from other cross-platform tools. It supports third-party widget libraries like Sencha and jQuery Mobile, and provides strong support for enterprise apps. In spite of all these advantages, Kony's adoption remains low because it is a paid tool, the developer community is not strong, and it takes a lot of time to build apps for platforms like BlackBerry.

Memory Footprint Analysis

Mobile users are sensitive about application size. Different cross-platform frameworks use plugins to access native features, which in turn increases the size of the executables.
Platform          Executable Size
Android Native    ~ 1 MB
iOS Native        ~ 1 MB
Titanium          ~ 8 MB
MoSync            ~ 2 MB
Kony              ~ 6 MB
Both Kony and Titanium use a plugin to access native features in their builds, whereas MoSync generates and uses native widgets. For a very large application these figures may be irrelevant, but for a simple productivity app the size of the executable can become an issue.

Performance

We have seen a big improvement in the performance of apps built with cross-platform tools in recent years. Earlier, performing native actions or showing animations affected app performance, mainly on Android. Even today, the performance of a native build is better than that of any app built with cross-platform tools.

We tested performance with a large ListView and with switching/loading between pages, and observed that for Titanium, MoSync and Kony, performance is good and on par with native builds.

In a nutshell

The choice of platform depends on your requirements. MoSync can be the preferred choice if your existing development expertise is in C/C++, and also when the application needs to be deployed to a larger set of mobile platforms. For enterprise apps, Kony can be a better choice. If your target platforms are only Android and iOS, then Titanium should be preferred. None of these platforms supports game development, and they are best avoided if your app relies extensively on native features.

All cross-platform tools still have a long way to go to become on par with native platforms like Android and iOS in terms of ease of development and performance.

Multi-tenancy in Cloud Application through Meta Data Driven Architecture

A multi-tenant architecture is designed to allow tenant-specific configurations at the UI, business rules, business processes and data model layers. This is enabled without changing the code, thereby transforming complex customization into configuration of software. This drives a clear need for “metadata-driven everything”, including a metadata-driven database, metadata-driven SOA, a metadata-driven business layer, metadata-driven AOP and metadata-driven user interfaces.

Metadata Driven Database

To develop a Multi-tenanted database, one of the following architecture approaches applies:

  • Shared Tables among Tenants
  • Flexible Schema, Shared Tables
  • Multi-Schema, Private Tables
  • Single Schema, Private Tables for Tenants
  • Multi-Instance

As the service grows, building a cloud database service to manage a vast, ever-changing set of actual databases becomes difficult. Rules pertaining to who, where, how, etc. may become an overhead as the application and the number of clients grow.

A metadata-driven approach involves collecting all these answers in tables so that they can be reused. It involves storing information about all tables, columns, indexes, constraints, partitions, stored procedures, parameters and functions, as well as the business rules and transaction steps, as metadata.

In a true metadata-driven database, no rule or procedure refers to tables directly; even these rules are abstracted and used through metadata.

Metadata Driven SOA

To be a true service-oriented application, the fractal model must be applicable from the system boundary to the database, with service interfaces defined for each component or sub-system and each service treated as a black box by the caller.

The metadata-driven nature of the application's services leads the solution to a dead end if a purely technical “code it” approach is taken. In such a metadata-driven application, exposing functions is replaced by exposing metadata.

Exposing the metadata itself is not the true intent of a metadata-driven application. Driving the propagation of services [functions] over the system boundary is a more accurate way of phrasing the approach that needs to be employed.

A metadata-driven application is capable of providing a bridging approach to propagate its services into many technologies via code generation. This is a direct result of all services being regular and that all service descriptions are available in a meta-format at both build-time and runtime.

Metadata Driven Business Layer

In the past, business logic and workflows were written using if-else conditions. If a business model or workflow is being designed in a multi-tenant environment, the very first step has to be preparing the metadata configurations. These should include the data source, extraction steps, transformation, routing, loading, and the source from which rules and execution logic are derived. The next step is the choice of tools and languages that can generate code and workflows out of these configurations. The final and most challenging step will be changing the mindset of developers to “not create workflows and business objects, but write code which can generate them”.

Metadata Driven AOP

Metadata and the Join Point Model

A join point is an identifiable point in the execution of a system. The model defines which join points in a system are exposed and how they are captured. To implement crosscutting functionality using aspects, you need to capture the required join points using a programming construct called a pointcut.

Pointcuts select join points and collect the context at selected join points. All AOP systems provide a language to define pointcuts. The sophistication of the pointcut language is a differentiating factor among the various AOP systems. The more mature the pointcut language, the easier it is to write robust pointcuts.

Capturing Join Points with Metadata

Signature-based pointcuts cannot capture the join points needed to implement certain crosscutting concerns. For example, how would you capture join points requiring transaction management or authorization? Nothing inherent in an element's name or signature suggests transactionality or authorization characteristics. The pointcut required in these situations can get unwieldy. The example below is in AspectJ, but pointcuts in other systems are conceptually identical.

pointcut transactedOps()
    : execution(public void Account.credit(..))
    || execution(public void Account.debit(..));

Situations like these invite the use of metadata to capture the required join points. For example, you could write a pointcut as shown below to capture the execution of all the methods carrying the @Transactional annotation.

pointcut transactedOps() : execution(@Transactional * *.*(..));

AOP systems and their join point models can be augmented by consuming metadata annotations. By piggybacking on code generation support it’s possible to consume metadata even when the core AOP system doesn’t directly support it.
Metadata support in AOP systems

To support metadata-based crosscutting, an AOP system needs to provide a way to consume and supply annotations. An AOP system that supports consuming annotations will let you select join points based on annotations associated with program elements. The current AOP systems that offer such support extend the definition for various signature patterns to allow annotation types and properties to be specified. For example, a pointcut could select all the methods carrying an annotation of type Timing. Further, it could subselect only methods with the value property exceeding, say, 25. To implement advice dependent on both annotation type and properties, the system could include pointcut syntax capturing the annotation instances associated with the join points. Lastly, the system could also allow advice to access annotation instances through reflective APIs.
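
The annotation-type-plus-property selection described above can be illustrated with plain Java reflection. This is not a full AOP pointcut engine, and the @Timing annotation and Service class are invented for the sketch, but it shows the same sub-selection: pick methods carrying the annotation whose value property exceeds a threshold.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Hypothetical annotation mirroring the @Timing example in the text.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Timing {
    int value();
}

class Service {
    @Timing(30) void slowOp() {}
    @Timing(10) void fastOp() {}
    void plainOp() {}
}

class AnnotationSelector {
    // Select the names of methods carrying @Timing with value > threshold,
    // the same kind of selection a metadata-aware pointcut performs.
    static List<String> select(Class<?> type, int threshold) {
        List<String> matches = new ArrayList<>();
        for (Method m : type.getDeclaredMethods()) {
            Timing t = m.getAnnotation(Timing.class);
            if (t != null && t.value() > threshold) {
                matches.add(m.getName());
            }
        }
        return matches;
    }
}
```

A real AOP system does this matching at weave time rather than via runtime reflection, but the join-point selection criterion is the same.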

Metadata Driven User Interfaces

Many business applications require the user interface (UI) to be extensible, as the requirements vary from one customer to another. Client-side business logic for the UI may also need customization based on individual user needs. A screen layout for one user might differ from another user's; this may include control position, visibility, and UIs for various mobile devices. Business logic customization also includes customizing validation rules, changing control properties, and other modifications. For example, a manager may have different options for deleting and moving files than a subordinate.

There are many techniques for enabling business applications to be extensible or customizable. Most applications solve this problem by storing customizable items such as UI layout and client-side business logic as metadata in a repository. This metadata can then be interpreted by a run-time engine to display the screen to users and to execute the client-side business logic when the user performs an action on the screen.
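
The interpretation step described above can be sketched in a few lines. This is an illustrative Java sketch; the metadata shape (type, label, roles) is invented for the example and is not a real framework API.

```java
import java.util.List;
import java.util.Map;

// Minimal sketch of a run-time engine interpreting UI metadata.
class MetadataUiEngine {
    // Render only the widgets whose metadata permits the given role;
    // widgets without a "roles" entry are visible to everyone.
    static String render(List<Map<String, Object>> screenMeta, String role) {
        StringBuilder screen = new StringBuilder();
        for (Map<String, Object> widget : screenMeta) {
            @SuppressWarnings("unchecked")
            List<String> roles = (List<String>) widget.get("roles");
            if (roles == null || roles.contains(role)) {
                screen.append(widget.get("type"))
                      .append(": ")
                      .append(widget.get("label"))
                      .append('\n');
            }
        }
        return screen.toString();
    }
}
```

In a real system the metadata would come from the central repository and the output would be actual screen widgets, but the principle is the same: the screen is derived from data, so customizing it requires no redeployment.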

The advantages of this approach are:

  • Redeployment of components on the presentation layer is not required as the customization is done in a central repository.
  • A very light client installation is required. One only needs to deploy the run-time engine to the client machine.

While designing a Metadata driven UI, the following components are taken into account:

  1. Metadata Service: an ordinary service layer that delivers metadata for the UI
  2. Login/Role Controller
  3. Action Controller
  4. Widget Controller
  5. MetaTree
  6. TreeService

Multi-tenancy in cloud applications can have a huge impact on application delivery and the productivity of an IT company. Yet most people who use the cloud and its services tend to ignore it owing to its “behind the scenes” nature. Many old applications have been written in a multi-tenant manner, but moving them to SaaS or converting legacy code to SOA might become a challenge. Metadata-driven programming is indeed a different paradigm. However, it has the capability to solve numerous challenges associated not only with multi-tenancy but with other cloud issues as well.

Designing the Persistence Layer

The trend in recent years is to model the entities in the application's object model and to use the database for storing the persistence information defined in the entity layer. This helps you create loosely coupled application components, richer relationships between objects, support for various databases, etc. With tools like Hibernate, it is really easy to map an object model onto a relational database schema.

Defining entities for the persistence layer is similar to defining any object model. Look for common properties that all objects can share. While defining the entities, use OOP concepts like inheritance and polymorphism.

The focus should be to achieve the following:

  1. Avoid code duplication in the persistence layer
  2. Implement common functionality in the domain
  3. Audit each and every change properly
  4. Handle concurrency to avoid data loss or data getting overwritten
Here are some of the best practices:

Base Class for Common Fields 
Create a base class containing fields like id, version, timestamp and other common fields. Use inheritance to avoid code duplication, and use JPA annotations wherever possible.
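
A sketch of the base-class pattern is shown below. The field names are illustrative, and the JPA annotations that would normally appear (@MappedSuperclass, @Id, @Version) are shown as comments so that the sketch compiles without the JPA API on the classpath.

```java
import java.time.Instant;

// @MappedSuperclass in a real JPA mapping
abstract class BaseEntity {
    protected Long id;                            // @Id @GeneratedValue
    protected long version;                       // @Version (optimistic locking)
    protected Instant createdAt = Instant.now();  // audit field
    protected Instant updatedAt = Instant.now();  // audit field

    public Long getId() { return id; }
    public long getVersion() { return version; }

    // Called by the persistence layer before every update;
    // with JPA, the provider maintains these fields for you.
    public void touch() {
        updatedAt = Instant.now();
        version++;
    }
}

// Concrete entities inherit the common fields instead of duplicating them.
class Customer extends BaseEntity {
    String name;
    Customer(String name) { this.name = name; }
}
```

Every entity that extends BaseEntity gets the id, version and audit fields for free, which directly serves points 1, 3 and 4 of the goals listed above.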

Polymorphism
Have a hierarchy of classes to be used while defining the object model. There will be cases where one base class is not enough. On the other hand, you may have entities which do not require all the fields defined in the base class.

DAO Pattern
Encapsulate logic for accessing data in an object or set of related objects. The goal should be to have database related logic in the DAO classes.

Dependency Injection
Query data by injecting the entity class that the DAO will be querying, along with defining the generic type.
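
The injected-entity-class idea can be sketched as follows. This is an illustrative generic DAO; the in-memory map stands in for the real JPA/Hibernate session, and the class/method names are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicLong;

// Generic DAO: the entity class is injected alongside the generic type.
class GenericDao<T> {
    private final Class<T> entityClass;          // injected entity class
    private final Map<Long, T> store = new HashMap<>();
    private final AtomicLong sequence = new AtomicLong();

    GenericDao(Class<T> entityClass) {
        this.entityClass = entityClass;
    }

    // A real DAO would build queries from this name,
    // e.g. session.createQuery("from " + entityName()).
    String entityName() { return entityClass.getSimpleName(); }

    long save(T entity) {
        long id = sequence.incrementAndGet();
        store.put(id, entity);
        return id;
    }

    Optional<T> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

One DAO implementation then serves every entity type; subclasses or injection frameworks only need to supply the entity class.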

Pagination
Write a generic implementation to support pagination. The goal should be to push pagination all the way down to the database and reduce resource consumption.
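
The offset arithmetic behind generic pagination is shown below. In Hibernate the offset and page size map to Query.setFirstResult() and Query.setMaxResults(), which is what pushes the pagination down to the database; the in-memory slice() here is only for illustration.

```java
import java.util.List;

class Page {
    // Zero-based row offset for a one-based page number.
    static int offset(int pageNumber, int pageSize) {
        return (pageNumber - 1) * pageSize;
    }

    // In-memory equivalent of applying offset/limit to a result set.
    static <T> List<T> slice(List<T> rows, int pageNumber, int pageSize) {
        int from = Math.min(offset(pageNumber, pageSize), rows.size());
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }
}
```
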

Auditing
Add audit fields/annotations to the objects which require auditing. This helps you keep track of objects: when they were created, who created them, and who modified them and when.

Lazy Loading
Use this when you do not want to query all data from an association when an object is loaded.

Eager Loading
Mark associations with FetchType.EAGER in case the association is always needed.

Querying Associations
Use lazy loading in general, but if you need to load the association at query time, you can set the query parameters to make the association eager.

Persistence Layer with Behavior
Let the persistence layer own the behavior of entities. This means that, apart from creating the object graph, you should write code that operates on the domain objects.

In my next post I will cover how Domain Driven Design can be used to achieve this easily.

Concurrency
Use optimistic concurrency control with versioning. You can use version numbers or timestamps to detect conflicting updates and prevent lost updates. If you don't configure Hibernate to use optimistic locking, it uses no locking at all, so the last update always wins.
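
The version check that prevents lost updates works as sketched below. This is an illustrative plain-Java model of what Hibernate does with a @Version column (compare-then-increment inside the UPDATE); the class is invented for the example.

```java
// An update succeeds only if the version the client read is still
// current; otherwise a conflict is reported instead of silently
// letting the last writer win.
class VersionedRecord {
    private String value;
    private long version;

    VersionedRecord(String value) { this.value = value; }

    String getValue() { return value; }
    long getVersion() { return version; }

    synchronized boolean update(String newValue, long expectedVersion) {
        if (expectedVersion != version) {
            return false;  // someone else updated in the meantime
        }
        value = newValue;
        version++;
        return true;
    }
}
```

In Hibernate the failed case surfaces as an OptimisticLockException (or StaleObjectStateException) rather than a boolean, but the detection mechanism is the same versioned compare.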

Hope you enjoy designing persistence layer, keeping these key points in mind!

Entity Framework- Is it the solution for all data access requirements?

With the latest Entity Framework (EF) releases, and with the entire world moving to object-relational mappers for all data access needs, it is quite normal for a developer to think that EF is the right choice before starting to design the data access layer. The latest release by Microsoft has made it easy for developers to get started with Entity Framework. It is only when someone starts writing actual code that they realize that EF alone may not be the right choice for all requirements.

Following are some scenarios where EF will not work or perform as you would expect your data layer to:
a) Bulk Insert, Update or Delete
b) Provide locking hints in your queries

Why is Entity Framework not the right choice when you need bulk insert, update or delete?

One of the biggest problems with EF is that it does not support bulk queries. If you add more than one entity for insert, update or delete, the context internally makes as many round trips as the number of entities added to it. Imagine dealing with thousands of records: EF will make that many round trips to SQL Server, which is not good. If you use SQL bulk copy instead, the entire data set is transferred in a single go, which greatly improves your app's performance.

EF Example:

Demo Model:

Person is a table that I added in my DemoModel.edmx

Code for adding Person Entities:

private static void EFDemo()
{
    DemoEntities context = new DemoEntities();
    context.People.Add(new Person
    {
        FirstName = "John",
        LastName = "Brown",
        Age = 28,
        Country = "USA",
        Gender = "M"
    });
    context.People.Add(new Person
    {
        FirstName = "Michelle",
        LastName = "Brown",
        Age = 28,
        Country = "USA",
        Gender = "F"
    });
    context.SaveChanges();
}

Queries captured through SQL Server Profiler:

1) EXEC SP_EXECUTESQL N'INSERT [DBO].[PERSON]([FIRSTNAME], [MIDDLENAME], [LASTNAME], [AGE], [COUNTRY], [GENDER]) VALUES (@0, NULL, @1, @2, @3, @4) SELECT [PERSONID] FROM [DBO].[PERSON] WHERE @@ROWCOUNT > 0 AND [PERSONID] = SCOPE_IDENTITY()',N'@0 NVARCHAR(64),@1 NVARCHAR(64),@2 INT,@3 NVARCHAR(128),@4 VARCHAR(16)',@0=N'JOHN',@1=N'BROWN',@2=28,@3=N'USA',@4='M'

2) EXEC SP_EXECUTESQL N'INSERT [DBO].[PERSON]([FIRSTNAME], [MIDDLENAME], [LASTNAME], [AGE], [COUNTRY], [GENDER]) VALUES (@0, NULL, @1, @2, @3, @4) SELECT [PERSONID] FROM [DBO].[PERSON] WHERE @@ROWCOUNT > 0 AND [PERSONID] = SCOPE_IDENTITY()',N'@0 NVARCHAR(64),@1 NVARCHAR(64),@2 INT,@3 NVARCHAR(128),@4 VARCHAR(16)',@0=N'MICHELLE',@1=N'BROWN',@2=28,@3=N'USA',@4='F'

As you can see, two queries are fired by Entity Framework for the two Person objects. Another point worth noting is that because Person has an identity column, PersonId, EF also issues a SELECT to get the id back, which is additional work if you don't really need that id.

SQL Bulk Copy Example: using the same database and the same table.

Bulk Copy code:

private static void BulkCopyDemo(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        DataTable personTable = CreateTableStructureUsingTable("Person", connection);
        AddPersonRow(new Person
        {
            FirstName = "John",
            LastName = "Brown",
            Age = 28,
            Country = "USA",
            Gender = "M"
        }, personTable);
        AddPersonRow(new Person
        {
            FirstName = "Michelle",
            LastName = "Brown",
            Age = 28,
            Country = "USA",
            Gender = "F"
        }, personTable);
        SqlBulkCopy inserter = new SqlBulkCopy(connection);
        inserter.DestinationTableName = "Person";
        inserter.WriteToServer(personTable);
    }
}

private static void AddPersonRow(Person p, DataTable personTable)
{
    var row = personTable.NewRow();
    row["FirstName"] = p.FirstName;
    row["LastName"] = p.LastName;
    row["Age"] = p.Age;
    row["Country"] = p.Country;
    row["Gender"] = p.Gender;
    personTable.Rows.Add(row);
}

private static DataTable CreateTableStructureUsingTable(string table, SqlConnection connection)
{
    DataTable dt = new DataTable();
    using (SqlCommand command = new SqlCommand())
    {
        command.Connection = connection;
        command.CommandType = CommandType.Text;
        command.CommandText = "select TOP 0 * from " + table;
        using (SqlDataReader dataReader = command.ExecuteReader())
        {
            dt.Load(dataReader);
        }
    }
    return dt;
}

Queries Captured in SQL server profiler:

INSERT BULK PERSON ([FIRSTNAME] NVARCHAR(64) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, [MIDDLENAME] NVARCHAR(64) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, [LASTNAME] NVARCHAR(64) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, [AGE] INT, [COUNTRY] NVARCHAR(128) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS, [GENDER] VARCHAR(16) COLLATE SQL_LATIN1_GENERAL_CP1_CI_AS)

If you look carefully, the query generated is a bulk insert into Person which inserts both rows in a single go.

Conclusion:

If you have large amounts of data on which you want to perform bulk operations, Entity Framework does not support that, and the round trips it makes to SQL Server can have a huge performance impact. Therefore, your data access layer should have the right mix of Entity Framework and SQL bulk queries, based on your requirements.