Middleware is a class of software designed to support the development and operation of other software (e.g. end-user applications or other middleware). Middleware is considered infrastructural software, as its features are mostly invisible and are typically expressed through the features of client code. It can take the form of toolkits, libraries, or services incorporated into applications.
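As a purely illustrative sketch (not drawn from the paper), the "invisible" character of middleware can be seen in a minimal wrapper layer; the `logging_middleware` and `greet` names below are hypothetical:

```python
# Hypothetical sketch: a minimal middleware layer that wraps an
# application handler. End users never interact with the middleware
# directly; its effect (here, logging) surfaces only through the
# behaviour of the client application built on top of it.

def logging_middleware(handler):
    """Wrap a request handler with an invisible logging facility."""
    log = []

    def wrapped(request):
        log.append(f"handling {request!r}")  # infrastructural concern
        return handler(request)              # application concern

    wrapped.log = log  # expose the log for inspection
    return wrapped

# A trivial end-user application built atop the middleware.
def greet(request):
    return f"Hello, {request}!"

app = logging_middleware(greet)
print(app("world"))   # application feature, visible to the user
print(app.log)        # middleware feature, only indirectly visible
```

The user of `app` sees only the greeting; the logging feature exists solely in the infrastructure layer, which mirrors why evaluating middleware through its applications is indirect.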
This paper by W. Keith Edwards, Victoria Bellotti, Anind K. Dey, and Mark W. Newman addresses the problem of designing and evaluating user-centred middleware, a difficult task since the technical features of the underlying infrastructure are not directly visible and are typically expressed in the features of the client applications. Apart from the general criteria for evaluating software (performance, scalability, security, robustness, …), the authors found no user-centred criteria, based on usability and usefulness, for designing and evaluating the features of the middleware itself.
This prompted the authors to formulate the following questions about the middleware design gap:
- Is it possible to more directly couple the design of infrastructure features to the design of application features?
- How can this more direct coupling exist when the applications that will be built atop the middleware don’t yet exist…and may be impossible to build without the middleware itself?
- Could the context of either the users or the use of these unknown applications have an important impact on the features we decide upon?
- How can we avoid building a bloated, overly complex system incorporating every conceivable useful feature, at the same time as developing a system that will not need to be constantly updated (and thus repeatedly broken) throughout its life span?
- Are there better models for deciding on the features of “experimental” middleware, designed to support completely new classes of applications?
And also for the middleware evaluation gap:
- How do we choose which applications to build to evaluate the middleware?
- What kinds of users and contexts (types of uses) for these applications should we consider as appropriate for testing purposes?
- What does the manifestation of the technology in a particular application say about the capabilities (or even desirability) of the middleware itself? How useful is this “indirect” evaluation?
- Are the techniques we normally use to evaluate applications acceptable when our goal is to evaluate the middleware upon which those applications are based?
- Is it possible to evaluate the middleware outside of the context of a particular application?
Thus, the authors present the major challenges and lessons learned from the design and evaluation of three case studies: (1) Placeless Documents, (2) the Context Toolkit, and (3) SpeakEasy. The set of lessons learned comprises the following:
- Lesson 1 – Prioritise Core-middleware Features.
- Lesson 2 – First, build prototypes with high fidelity for expressing the main objectives of the middleware.
- Lesson 3 – Any test-application built to demonstrate the middleware must also satisfy the usual criteria of usability and usefulness.
- Lesson 4 – Initial proof-of-concept applications should be lightweight.
- Lesson 5 – Be clear about what your test-application prototypes will tell you about your middleware.
- Lesson 6 – Do not confuse the design and testing of experimental middleware with the provision of an infrastructure for other experimental application developers.
- Lesson 7 – Be sure to define a limited scope for test-applications and permissible uses of the middleware.
- Lesson 8 – There is no point in faking components and data if you intend to test for user experience benefits.
- Lesson 9 – Understand that the scenarios you use for evaluation may not reflect how the technology will ultimately be used.
- Lesson 10 – Anticipate the consequences of the tradeoff between building useful/usable applications versus applications that test the core features of the middleware.