Software Development

Middleware is a class of software designed to support the development and operation of other software (e.g. end-user applications, or other middleware). Middleware is considered infrastructural software, as its features are mostly invisible and are often expressed through client code features. It can take the form of toolkits, libraries or services which are incorporated into other software.

This paper by W. Keith Edwards, Victoria Bellotti, Anind K. Dey, and Mark W. Newman addresses the problem of designing and evaluating user-centred middleware, a difficult task since the technical features of the underlying infrastructure are not directly visible and are typically expressed in the features of the client applications. Apart from the general criteria for evaluating software (performance, scalability, security, robustness, …), the authors found no user-centred criteria, based on usability and usefulness, for designing and evaluating the features of the middleware itself.

This prompted the authors to formulate the following questions about the middleware design gap:

  • Is it possible to more directly couple the design of infrastructure features to the design of application features?
  • How can this more direct coupling exist when the applications that will be built atop the middleware don’t yet exist…and may be impossible to build without the middleware itself?
  • Could the context of either the users or the use of these unknown applications have an important impact on the features we decide upon?
  • How can we avoid building a bloated, overly complex system incorporating every conceivable useful feature, at the same time as developing a system that will not need to be constantly updated (and thus repeatedly broken) throughout its life span?
  • Are there better models for deciding on the features of “experimental” middleware, designed to support completely new kinds of applications?

And also for the middleware evaluation gap:

  • How do we choose which applications to build to evaluate the middleware?
  • What kinds of users and contexts (types of uses) for these applications should we consider as appropriate for testing purposes?
  • What does the manifestation of the technology in a particular application say about the capabilities (or even desirability) of the middleware itself? How useful is this “indirect” evaluation?
  • Are the techniques we normally use to evaluate applications acceptable when our goal is to evaluate the middleware upon which those applications are based?
  • Is it possible to evaluate the middleware outside of the context of a particular application?

Thus, the authors present the major challenges and lessons learned from the design and evaluation of three case studies: (1) Placeless Documents, (2) Context Toolkit and (3) SpeakEasy. The set of lessons learned encompasses the following:

  • Lesson 1 – Prioritise Core-middleware Features.
  • Lesson 2 – First, build prototypes with high fidelity for expressing the main objectives of the middleware.
  • Lesson 3 – Any test-application built to demonstrate the middleware must also satisfy the usual criteria of usability and usefulness.
  • Lesson 4 – Initial proof-of-concept applications should be lightweight.
  • Lesson 5 – Be clear about what your test-application prototypes will tell you about your middleware.
  • Lesson 6 – Do not confuse the design and testing of experimental middleware with the provision of an infrastructure for other experimental application developers.
  • Lesson 7 – Be sure to define a limited scope for test-applications and permissible uses of the middleware.
  • Lesson 8 – There is no point in faking components and data if you intend to test for user experience benefits.
  • Lesson 9 – Understand that the scenarios you use for evaluation may not reflect how the technology will ultimately be used.
  • Lesson 10 – Anticipate the consequences of the tradeoff between building useful/usable applications versus applications that test the core features of the middleware.

Messing around again with OSS, and after a workstation migration, I thought this might be useful for the next one. I am basing this post on Rumen Filkov’s post, but commenting on and clarifying some of the steps which might be tricky.

1. Download and unpack the OpenNI2 and NiTE2 tarballs. (Make sure you’re getting the x86_64 version whenever possible.)

2. Open ‘/etc/launchd.conf’ and set the needed environment variables: ‘setenv OPENNI2_REDIST [path-to-openni2]/Redist’ and ‘setenv NITE2_REDIST [path-to-nite2]/Redist’. You might need to create the file from scratch. Then restart your Mac. Use the ‘printenv’ command to verify that the variables are actually set.
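As a sketch, the file would look something like this (the paths below are hypothetical; substitute wherever you actually unpacked the tarballs):

```shell
# /etc/launchd.conf -- read by launchd at boot, which is why a restart
# is needed for the variables to take effect system-wide.
setenv OPENNI2_REDIST /opt/OpenNI-MacOSX-x64-2.2/Redist
setenv NITE2_REDIST /opt/NiTE-MacOSX-x64-2.2/Redist
```

After the restart, ‘printenv | grep REDIST’ should show both variables.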

3. If needed, install libusb 32-bit: ‘brew install --universal libusb’. At this point, if your sensor is a PrimeSense, the OpenNI2-Unity package should work.

4. Kinect only: Install libfreenect: ‘brew install --universal libfreenect’. This will shortcut the installation of some dependencies.

5. Kinect only: If OpenNI2 still cannot find the Kinect sensor, take the driver’s sources and build a universal dylib, i.e. one for both x86_64 and i386 architectures. To accomplish a universal build, just edit the wscript in your OpenNI2-FreenectDriver directory and append the following:

conf.env.append_value('CXXFLAGS', ['-arch', 'i386'])
conf.env.append_value('CXXFLAGS', ['-arch', 'x86_64'])
conf.env.append_value('LINKFLAGS', ['-arch', 'i386'])
conf.env.append_value('LINKFLAGS', ['-arch', 'x86_64'])
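With the wscript edited, rebuilding and checking the result would go roughly like this (the ‘build/’ output path is an assumption; check where your waf configuration actually places the dylib):

```shell
# Run from the OpenNI2-FreenectDriver directory: reconfigure and rebuild
# so the new -arch flags are picked up.
./waf configure build

# Verify the dylib is universal -- lipo should report both i386 and x86_64.
lipo -info build/libFreenectDriver.dylib
```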

6. Copy the resulting driver files to ‘[path-to-openni2]/Redist/OpenNI2/Drivers’ and try again.
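For example (the dylib name and location are assumptions based on a typical FreenectDriver build; adjust to what your build actually produced):

```shell
# OPENNI2_REDIST was set in step 2; the driver goes into its Drivers folder.
cp build/libFreenectDriver.dylib "$OPENNI2_REDIST/OpenNI2/Drivers/"
```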


That’s it ;)

Last week, yours truly finally submitted Ubisign’s first iOS app to Apple’s AppStore. PRIMAVERA Mobile Explorer went through all the stages of the submission process, with a bit of natural suspense along the way. There was some pressure to have the app ready for sale this week, and the submission-approval time was estimated at around seven days. It actually took a bit longer, and the ‘Waiting For Review’ stage was the longest, but it came out just in time!

  • November 30, 2012 09:29 – Ready for Sale
  • November 30, 2012 09:27 – Processing for App Store
  • November 30, 2012 07:01 – In Review
  • November 22, 2012 03:27 – Waiting For Review
  • November 22, 2012 03:26 – Upload Received
  • November 21, 2012 09:49 – Waiting For Upload
  • November 21, 2012 09:47 – Prepare for Upload

I have to admit I was a bit concerned by the fact that the service infrastructure was being switched from development to production during the approval process, which caused a lot of service downtime. I believe this didn’t interfere with the approval, mainly because the app was implemented taking service setbacks into consideration: if the services were down, the demo-version data would kick in, allowing the user to keep navigating the app, just as Apple recommends.

So after a period fully dedicated to iOS development, this week I’m back to Windows/C# development. The Mac Mini I used for development now sits shut down on my desk. One interesting thing I came across was the difficulty of finding the app outside iTunes, just by browsing for it! Yesterday I had some failed attempts to do so. But today I actually put myself through the effort of finding it, and I must say it was rather unexpected… There was no quick search; I had to find the Business apps category page, then browse by ‘P’ in alphabetical order through a huge load of pages just for apps starting with that letter. A lot of apps around there! I ended up finding PRIMAVERA Mobile Explorer on the 13th page. So, if you wish to take a peek, here you go.

… aiming for the next super-duper web app!

I’ve found this great tutorial by Pragmatic Studio on setting up your RoR development environment. The first time I tried to do it, it didn’t quite finish well. But with this walkthrough things went as smoothly as one could wish for.

After everything is set, Rails creates the whole structure of the app in a flash. Some more resources:

* The Getting Started Guide
* Ruby on Rails Tutorial Book
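Once the environment is set up, generating and running a fresh app is just a couple of commands (the app name ‘blog’ is only an example):

```shell
# Generate the full app skeleton in a new "blog" directory.
rails new blog
cd blog

# Start the development server; the app is served at http://localhost:3000.
rails server
```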



You have just finished your iOS app development stage: all features are implemented and bugs corrected. You’re ready to submit the app for approval. So what now?

The first thing to do is to create the test procedures. There are some rather extensive recommendations by Apple.Dev on the process, but I have been advised to use this tool, TestFlight, to facilitate the testing process.

Afterwards, you need to create the app record in iTunes Connect and decide on some attributes of the application. There is a great tutorial with relevant comments on the process.

  • Default language
  • App Name:
  • SKU: 001
  • BundleID: Mobile Explorer – com.yourcompany.yourapp (can’t change!)
Now, something which got really annoying in what should be a linear process was the difficulty of changing/renaming/deleting the BundleID. Awkwardly, Apple decided that once you submit a BundleID you can’t change or erase it. Furthermore, if you think you can create a new one with previous information, like the same reverse domain, think again! It is tested for collisions, and repeated values are not allowed. So think carefully before you submit your BundleID for the first time.