This is a summary-in-progress of the chapter “Starting New Online Communities” by Paul Resnick, Joseph Konstan, Yan Chen, and Robert Kraut, from the book “Building Successful Online Communities: Evidence-Based Social Design” edited by Paul Resnick and Robert Kraut.

The authors identify the major challenges in designing successful new online communities and work through the chapter arguing for a set of design claims based on cost-benefit assessments.

Major challenges:

  1. Carving out a useful niche (ensuring that net utility, benefits minus costs, is positive for all members in steady state).
  2. Defending the niche (ensuring that net utility is higher than that of competing communities).
  3. Getting to critical mass (ensuring net positive utility for each member as they join, even though the community has not yet reached steady state).

Opportunities Model:

(match_value * collection_size) – navigation_cost [pull model]

(match_value * collection_size) – (interruption_cost * collection_size) [push model]
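Read as functions of collection size, the two models can be sketched in a few lines of Python (an illustrative sketch with made-up numbers; the function names are mine, not the chapter's):

```python
# Illustrative sketch of the chapter's opportunities model.
# Net benefit of a collection of interaction opportunities under the two
# delivery models: pull (member navigates to the space once) vs push
# (every item is delivered, each incurring an interruption cost).

def pull_benefit(match_value, collection_size, navigation_cost):
    """Member pays one fixed navigation cost to reach the whole collection."""
    return match_value * collection_size - navigation_cost

def push_benefit(match_value, collection_size, interruption_cost):
    """Every item is pushed, so interruption cost scales with size."""
    return match_value * collection_size - interruption_cost * collection_size

# Low volume with cheap interruptions favours push (design claim 1):
# with few items, the fixed navigation cost dominates.
few = dict(match_value=2.0, collection_size=3)
print(pull_benefit(navigation_cost=5.0, **few))    # 1.0
print(push_benefit(interruption_cost=0.5, **few))  # 4.5
```

Design claim 1 falls out directly: pull overtakes push only once the accumulated interruption cost (interruption_cost × collection_size) exceeds the one-off navigation cost, i.e. at higher volumes.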

Design Claims

Carving Out a Niche

1: Lower volume and higher time-sensitivity of interaction opportunities, and lower interruption costs increase the benefits of push notification.

2: A mixed-topic scope reduces expected match value.

3: An ambiguous scope for an interaction space reduces expected match value.

4: Activities that bridge interests in different topics increase match value in spaces with mixed-topic scope.

5: A transcendent or bridging topical identity increases match value in communities with mixed-topic scope.

Communities with Multiple Spaces

6: Personalised collections of “most related content” enhance match value but reduce shared context.

7: Subdividing spaces after they become active creates more net benefits for participants than having lots of inactive spaces.

8: In communities with lots of interaction spaces, navigation aids that highlight more active spaces will increase the net benefits members experience.

9: In synchronous spaces that are not always active, a schedule of “expected active times” can coordinate visitors and become a self-fulfilling expectation.

10: In communities with lots of interaction spaces, recommender systems that help people navigate to spaces that best suit them will increase the net benefits people experience.

11: Ambiguity of scope for the community creates opportunities for adjustment and member ownership.

Competing for a Niche

12: A larger community leads to lower match value in bond-based communities.

13: Differentiated user interface elements in the competitor community create startup costs and thus favor the incumbent community in any competition over members.

14: Non-shared user IDs and profiles between incumbent and competitor communities creates startup costs and thus favors the incumbent community in any competition over members.

15: Content sharing between competing communities raises awareness of the exporting community and the value of posting there, and raises the value of consuming content in the importing community.

16: Conveying a succinct unique selling proposition will attract members.

17: Advertising and celebrity endorsements can help to create awareness of a community and thus make it a focal point in a competition between communities.

Bootstrapping: Leveraging Early Members to Get More Members

18: Incentives for early members to generate content can increase bootstrapping.

19: User-generated primary content will do more to bootstrap additional membership than will user-generated metadata, in the community startup stage.

20: Services that enable displays of membership that are visible to non-members will lead to bootstrapping.

21: Services that make members’ actions in the community visible to their acquaintances outside the community will lead early participants to attract later participants.

22: Services that allow members to forward content from the community to their acquaintances outside the community will lead early participants to attract later participants.

23: Services that allow members to invite acquaintances outside the community to join will lead early participants to attract later participants.

24: Pay-for-referral and revenue-sharing from referrals increase bootstrapping.

Attracting Early Members

Increase Stage 1 Value of the Community

25: Single-user and small-group productivity, entertainment, or commerce tools can attract people to an online space before the community features are successful.

26: Providing access to professionally generated content can help attract people to an online space before the community features are successful.

27: Providing access to syndicated data can help attract people to an online space before the community features are successful, if the syndicated data is not otherwise easily accessible or if it is presented in a novel way that adds value.

28: Participation by professional staff can help attract people to an online space before the community features are successful.

29: Starting with a limited scope and expanding later allows focusing of staff resources toward getting to critical mass in the limited scope.

30: If professionals act as contributors of last resort, they will be needed less and less as the community achieves critical mass.

31: Bots that simulate other participants can help attract people to an online space before the community features are successful.

Early Adopter Benefits

32: Promising permanent discounts to early adopters can attract early adopters to the community.

33: Promoting the status or readiness benefits of being early to an online community can attract early adopters to the community.

34: Promoting a site as cool but undiscovered can attract early adopters.

35: Creating scarce, claimable resources can induce prospective members to join earlier.

36: Contribution minima for maintaining scarce status can lead to greater contribution by status-holding members.

Setting Expectations for Success

Signals of Convener Quality and Commitment

37: Professional site design increases expectations about the probability of success.

38: Visible expenditures can be a credible signal of commitment to future investment in a community, and thus help to increase expectations about the probability that the community will eventually succeed.

39: Images of members will convey the presence of other people, and thus expectations of future success.

40: Prominent display of user-contributed content will convey activity, and thus expectations of future success, as long as there is new user-contributed content.

41: Indicators of participation levels will convey activity, and thus expectations of future success, as long as there actually is activity.

42: Indicators of membership and content growth signal a higher probability that the community will eventually reach critical mass, provided there really is growth.

43: When a community is small and slow growing, acknowledging each new member or contribution creates a more favorable signal of growth than showing total numbers or percentage change.

44: When a community is small and fast growing, displaying percentage growth creates a more favorable signal of growth than displaying absolute numbers.

45: When a community has reached critical mass, displaying absolute numbers conveys a signal that the community is already successful.

46: Conditional participation commitments can draw people to join communities that they would not join if they had to do so without knowledge that others were also joining.

47: Drawing analogies to successful communities can raise expectations that a new community will be similarly successful.

48: Drawing attention to external publicity and endorsements can raise expectations about future success.

So today after a session of the Goldsmiths Deep Learning with Tensorflow group, I decided to go back to the basics and start ML101. I found a good resource for that:

The first chapter in this book gives a primer on how basic artificial neurons, such as the perceptron and the sigmoid neuron, work.

The perceptron is basically a mathematical approach to defining a decision-making model. The perceptron defines a set of binary variables (the inputs xs and the output y) and a set of parameters (the respective weights for each input, ws, and a threshold value). Each configuration of weights and threshold provides us with a different decision-making model.

Now, you can scale up the power of a perceptron to a network of perceptrons, composed of several interconnected layers. Each input is connected to the first layer of perceptrons, and each perceptron’s output is connected to all the perceptrons of the subsequent layer. Thus, each subsequent layer makes more complex and abstract decisions, providing a very sophisticated mechanism for decision making.
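The layered scaling described above can be sketched as a forward pass through perceptron layers (a toy illustration with hand-picked weights, not a trained network):

```python
def perceptron_layer(weight_matrix, biases, inputs):
    """One layer of perceptrons: each row of weights defines one perceptron,
    which fires iff its weighted sum of inputs plus its bias is positive."""
    return [
        1 if sum(w * x for w, x in zip(row, inputs)) + b > 0 else 0
        for row, b in zip(weight_matrix, biases)
    ]

# Two layers compute XOR, which a single perceptron cannot:
# hidden layer = [OR, NAND], output layer = AND of the hidden outputs.
hidden = perceptron_layer([[1, 1], [-1, -1]], [-0.5, 1.5], [1, 0])
print(hidden)                                      # [1, 1]  (OR, NAND of 1,0)
print(perceptron_layer([[1, 1]], [-1.5], hidden))  # [1]     -> XOR(1, 0)
```

The hidden layer extracts intermediate decisions (OR, NAND) and the next layer combines them, which is exactly the “more complex and abstract decisions per layer” idea.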

Here I’m using the dot product and the bias b (the negative of the threshold):

Output = 0 if w·x + b ≤ 0

Output = 1 if w·x + b > 0
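This decision rule takes only a few lines of Python (a minimal sketch; the AND weights below are just one hand-picked example configuration):

```python
# Minimal perceptron: fires iff the dot product of weights and inputs
# plus the bias (the negative of the threshold) is positive.

def perceptron(weights, inputs, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

# A perceptron computing logical AND: threshold 1.5, so both inputs
# must be on for the weighted sum to exceed it.
w, b = [1.0, 1.0], -1.5
print(perceptron(w, [1, 1], b))  # 1
print(perceptron(w, [1, 0], b))  # 0
```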

To be updated…

Also began the basic Tensorflow starter tutorial with the MNIST dataset, for recognition of handwritten digits.
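The starter tutorial builds a softmax regression model, y = softmax(Wx + b). A framework-free NumPy sketch of just the forward pass, with untrained (zero) parameters and random stand-in images rather than real MNIST data:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.random((5, 784))   # 5 fake "images", each 28*28 = 784 pixels
W = np.zeros((784, 10))    # weights: one column per digit class
b = np.zeros(10)           # one bias per digit class

y = softmax(x @ W + b)     # predicted class probabilities per image
print(y.shape)             # (5, 10)
print(y.sum(axis=1))       # each row sums to 1
```

With zero parameters every class gets probability 0.1; training then adjusts W and b by gradient descent on cross-entropy, which is what the TensorFlow tutorial automates.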


Middleware is a class of software designed to support the development and operation of other software (e.g. end-user applications, other middleware). Middleware is considered infrastructural software, as its features are mostly invisible and often expressed through the features of client code. It can take the form of toolkits, libraries, or services that are incorporated into other software.

In this paper, W. Keith Edwards, Victoria Bellotti, Anind K. Dey, and Mark W. Newman address the problem of designing and evaluating user-centred middleware, a difficult task since the technical features of the underlying infrastructure are not directly visible and are typically expressed through the features of the client applications. Apart from the general criteria for evaluating software (performance, scalability, security, robustness, …), the authors found no user-centred criteria, based on usability and usefulness, for designing and evaluating the features of the middleware itself.

This prompted the authors to formulate the following questions about the middleware design gap:

  • Is it possible to more directly couple the design of infrastructure features to the design of application features?
  • How can this more direct coupling exist when the applications that will be built atop the middleware don’t yet exist…and may be impossible to build without the middleware itself?
  • Could the context of either the users or the use of these unknown applications have an important impact on the features we decide upon?
  • How can we avoid building a bloated, overly complex system incorporating every conceivable useful feature, at the same time as developing a system that will not need to be constantly updated (and thus repeatedly broken) throughout its life span?
  • Are there better models for deciding on the features of “experimental” middleware, designed to support completely…

And also for the middleware evaluation gap:

  • How do we choose which applications to build to evaluate the middleware?
  • What kinds of users and contexts (types of uses) for these applications should we consider as appropriate for testing purposes?
  • What does the manifestation of the technology in a particular application say about the capabilities (or even desirability) of the middleware itself? How useful is this “indirect” evaluation?
  • Are the techniques we normally use to evaluate applications acceptable when our goal is to evaluate the middleware upon which those applications are based?
  • Is it possible to evaluate the middleware outside of the context of a particular application?

Thus, the authors present the major challenges and lessons learned over the design and evaluation of three case studies: (1) Placeless Documents, (2) Context Toolkit, and (3) SpeakEasy. The lessons learned are the following:

  • Lesson 1 – Prioritise Core-middleware Features.
  • Lesson 2 – First, build prototypes with high fidelity for expressing the main objectives of the middleware.
  • Lesson 3 – Any test-application built to demonstrate the middleware must also satisfy the usual criteria of usability and usefulness.
  • Lesson 4 – Initial proof-of-concept applications should be lightweight.
  • Lesson 5 – Be clear about what your test-application prototypes will tell you about your middleware.
  • Lesson 6 – Do not confuse the design and testing of experimental middleware with the provision of an infrastructure for other experimental application developers.
  • Lesson 7 – Be sure to define a limited scope for test-applications and permissible uses of the middleware.
  • Lesson 8 – There is no point in faking components and data if you intend to test for user experience benefits.
  • Lesson 9 – Understand that the scenarios you use for evaluation may not reflect how the technology will ultimately be used.
  • Lesson 10 – Anticipate the consequences of the tradeoff between building useful/usable applications versus applications that test the core features of the middleware.

At the current point of my PhD research, I believe I have a broad view of the field and an outline of a plan for my research; I have tackled practical work, exchanged ideas with other researchers, and am now looking to define my specific research questions or problems. Beyond that, as I prepare to start writing my upgrade, I keep thinking about the process as a whole that needs a successful conclusion.

One article that I was lucky to see passing before my eyes when I was beginning my PhD at UCP was “How to Choose a Good Scientific Problem” by Uri Alon. The title seems rather prescriptive but the analysis that it presents is highly enlightening.

The starting point of the article is that choosing a problem is, just like the culture of a specific lab, a matter of nurturing. When choosing a problem, whether for a lab or for an individual researcher or student, the goal is to maximise their potential by fostering growth and self-motivated research.

For that, Alon frames scientific problems along two dimensions: feasibility and interest. Feasibility refers to how hard or easy it is to complete a project, in terms of time. Interest refers to “the amount by which they increase verifiable knowledge”. The options for positioning your research problems are thus: “low-hanging fruit” – easy but not too interesting; “difficult is good” – difficult and of low interest; and finally the best option, feasible and of high interest. Choosing the right problem then follows a Pareto front according to increasing levels of difficulty and career development.

Some heuristics are provided that attempt to give students a wiser, more defensive stance: “Do not commit to a problem before 3 months have elapsed” (spent reading, discussing, and planning), or resist the urge of “we must produce – let’s not waste time and start working”, with due consideration given to the practical issues that usually arise, such as funding, deadlines, etc.

The author then moves on to analyse how the ranking of problems occurs. Here, the value assigned by the community competes with the inner voice of the student or researcher. A special mention is made of the importance of the supportive environment that supervisors can provide and how much this helps to strengthen that inner voice; of how recurrent questions that circle inside for years can form the basis of good projects; and of how the self-motivation that emerges from this can lead to a bigger commitment, a more rewarding routine, and a greater appeal to the audience.

So how can one converge towards one’s problems? The way the author puts it reminded me of the old adage “Know thyself”. What are one’s personal interests, what is one’s perspective on a specific problem, what resonates with one’s values to explore? Achieving self-expression is one of the most important goals in research, one that can make work self-driven and revitalising.

In the concluding part of the paper, Alon focuses on the schema of research: a path taken from the beginning of research (A) to a particular end (B), which most erroneously believe to be linear and predefined. In fact, in most cases the destination of research has been a newly found problem (C), encountered on the way to solving the initial destination problem (B). In the course of a fuzzy stage called the meandering of research, C became more interesting, feasible, and worthwhile than B. As Alon puts it, the mentor’s task “is to support students through the cloud that seems to guard the entry to the unknown”.

After having a go with the MYOs in our lab, compiling some C++ code for Atau and Miguel for the Metagesture project, and having been involved in the 24h hackathon at Sonar 2015, which had “Wearables” as its main topic, I got back all the motivation I had around 2007 to do some serious hacking in this field. At the time I had just begun to write my master’s thesis in mobile and ubiquitous computing, and one of my ideas was to develop a bracelet that connected through Bluetooth, was localisable, and had a big array of sensors that I could measure and do some data mining on. So, more than seven years later, here we stand with the Smart Watch from Apple and the Band from Microsoft, amongst others. I decided to give myself a treat for my birthday and buy a Microsoft Band to hack.

The initial setup wasn’t as trivial as I expected… I was doing it late at night, was tired, my Windows 8.1 phone was almost out of battery, and I had only charged the Band for about half an hour. I got it to pair with my phone but the connection didn’t last, and the Health app seemed to do nothing about it. I spent almost 30 minutes trying to work it out without success. Being both a Microsoft and an Apple consumer and developer, I must confess the recurrent thought that builds on frustration came to my mind: “Why doesn’t this work? If it was an Apple product this would have been a flawless process…”. Some reflections on this later on. In the morning, though, after the devices had recharged during the night, restarting my smartphone and removing the previous Bluetooth pairing entry got everything working well. I would say it was all about the order of the steps in the pairing process, Health app initialisation, and connection. I am still not sure what failed in the initial process, but I suspect you mustn’t pair the bracelet before launching the app.

Now everything is set up and I’m about to do some more testing. Leaving for a bike ride to test the tracking features ;)

On project RAPID-MIX, the Goldsmiths EAVI team was assigned a task that has recently been completed. The main outcome is a report on the methodological framework to be adopted, based on User-Centred Design, which involved choosing and adopting a code of research ethics. This meant going through some of the most relevant codes of research ethics, such as:

We decided to adopt the last one, since it seemed the most encompassing and the most adequate for international collaborative research, and provided extensive coverage and guidelines for good practice. It calls for values such as integrity, uniformity, fairness, and confidentiality, applied to guidelines such as:

  • Data practices for availability and access
  • Research procedures
  • Publication-related, review and editorial conduct

Overall, it provides a great ethical framework to work on and one of the fundamental aspects of any serious research work.

Just found this picture online: me playing with :papercutz on my last gig with them, at the beautiful concert room in the Nogueira da Silva museum, Braga, Portugal. At the time I was the multi-instrumentalist on duty, playing classical guitar, melodica, xylo, synth, and electronics, and singing backing vocals.


Bittersweet memories… :P

Last week I went to IRCAM, Paris, for three days of intensive work meetings. I had the pleasure of seeing some of the great work being conducted there in HCI research for music technologies. Here are three projects that show some of that amazing work:

Stonic App by IRCAM

Playing sound textures Project

Project COSIMA

Fresh out of CHI 2015 this week, a very interesting article on transferring HCI research into a commercial product. “From User-Centered to Adoption-Centered Design: A Case Study of an HCI Research Innovation Becoming a Product” by Chilana, P., Ko, A.J. & Wobbrock, J.O. presents a “case study of how an HCI research innovation goes through the process of transitioning from a university project to a revenue-generating startup financed by venture capital.”


Author Keywords

Commercialization; productization; dissemination; research impact; technology transfer; adoption-centered design.

ACM Classification Keywords

H.5.2. Information interfaces and presentation (e.g., HCI): User Interfaces—evaluation / methodology.

Structure and content

The paper begins by introducing “the motivations for adopting different HCI methods at different stages during the evolution of the research, product, and startup business and the tradeoffs made between user-centered design” and what the authors have coined “adoption-centered design”. It contextualises the case study within the borders of technology transfer in software engineering, innovation in the marketplace, and the generalisability of HCI research evaluation.

After that, the authors provide two blocks of knowledge: the first focused on the HCI research that produced a prototype, and the second focused on market innovation through the transition from research outcome to commercial product.

After describing the motivation for innovation and product, the authors describe the HCI methods applied in the development and evaluation of the research prototype. First, a formative evaluation to inform interaction design, in which they explored the design space of the concept through a lo-fi, paper-based user study. Then, a technical evaluation of system feasibility using an mTurk-based crowdsourced user inquiry with simulated data. Finally, an ecological-validity evaluation through a longitudinal field study, recruiting potential adopters and deploying the prototype in the wild.

This process led to a validated design and an HCI research outcome; however, the authors claim it had mainly demonstrated end-user value, which was not enough evidence of success for financing or for other business purposes such as paying customers.

The second block depicts the incursion of the research outcome into the commercial scope of innovation: business models, marketing, productisation, stakeholders, value proposition, market-entry barriers, and B2B adoption.

The questions the authors highlight throughout the paper:

  • Should we expect good HCI science outcomes to be transferable to users or customers, and would this be within the scope of HCI?
  • Should “potential for adoption” be adopted as a criterion for evaluating research systems?
  • Is the traditional focus on generalisability only from end users restraining HCI tech transfer?
  • How can research systems evaluation be augmented with stakeholders’ perspectives?
  • Does it make sense to focus research on adoption, given the lag in adoption of innovations?
  • Should “success” for this criterion focus on knowledge about its adoption barriers?
  • Could these perspectives increase the chances of product adoption, and are they valid/adequate for delivering high-quality and innovative research?

In the discussion, the authors reflect, on the one hand, on how “user-centered research innovation can be the invaluable foundation of a B2B software company”, drawing on how HCI evaluations inform business milestones. On the other hand, they reflect on how the “user-centered focus typical of HCI research also occluded B2B adoption issues by not revealing important insights about the real-world customer support ecosystem and stakeholder dependencies”, leading them to depart into “adoption-centered design, uncovering knowledge specific to our business and product to fuel customer acquisition and inform product priorities.”

The authors also argue for the need to investigate adoption-centered design, discuss its implications for incorporation into HCI research, and suggest possible methods to achieve it. They also expose the benefits of such an endeavour, suggesting that a “more explicit adoption-centered approach to research might increase the chances that an investor, entrepreneur, or prospective employee would see business opportunities in HCI research. Combined with other systemic changes, such as more extensive and rapid publicity of research innovations for the public and greater awareness of university intellectual property policy, an adoption-centered focus in HCI research might lead to a discipline of HCI technology transfer”.

The authors conclude by exposing the limitations of their study to “one technology, one business, one university project and one perspective”, and by calling for further efforts that help “transform HCI technology research from a source of ideas to a source of commercially disseminated solutions that create widespread value”.

Reference selection:

Previous work from the authors:

[3] Chilana, P.K., Ko, A.J., Wobbrock, J.O. & Grossman, T. 2013. A multi-site field study of crowdsourced contextual help: usage and perspectives of end users and software teams. ACM CHI, 217–226.

[4] Chilana, P., Ko, A.J. & Wobbrock, J.O. 2012. LemonAid: selection-based crowdsourced contextual help for web applications. ACM CHI, 1549–1558.

Innovation and tech transfer:

[9] Henderson, A. 2005. The innovation pipeline: design collaborations between research and development. interactions, 12, 1, 24–29.

[11] Isaacs, E.A., Tang, J.C., Foley, J., Johnson, J., Kuchinsky, A., Scholtz, J. & Bennett, J. 1996. Technology transfer: so much research, so few good products. ACM CHI Companion, 155–156.

[13] Kolko, J. 2014. Running an entrepreneurial pilot to identify value. interactions, 21, 4, 22–23.

[15] Larsson, M., Wall, A., Norström, C. & Crnkovic, I. 2006. Technology transfer: why some succeed and some don’t. Software Tech Transfer in Soft. Engr., 23–28.

[19] Pfleeger, S.L. 1999. Understanding and improving technology transfer in software engineering. J. of Systems and Software, 47, 2, 111–124.

[21] Rogers, E.M. 2010. Diffusion of innovations. Simon and Schuster.

[26] Winkler, D., Mordinyi, R. & Biffl, S. 2013. Research prototypes versus products: lessons learned from software development processes in research projects. Systems, Software and Services Process Improvement. Springer, 48–59.


Generalisability:

[16] Lee, A.S. & Baskerville, R.L. 2003. Generalizing generalizability in information systems research. Information Systems Research, 14, 3, 221–243.