This is a summary-in-progress of the chapter “Starting New Online Communities” by Paul Resnick, Joseph Konstan, Yan Chen, and Robert Kraut, from the book “Building Successful Online Communities: Evidence-Based Social Design” edited by Paul Resnick and Robert Kraut.

The authors identify the major challenges in designing successful new online communities and, throughout the chapter, argue for a set of design claims based on cost-benefit assessments.

Major challenges:

  1. Carving out a useful niche (ensuring that net utility, i.e. benefits minus costs, is positive for all members in steady state).
  2. Defending the niche (ensuring that net utility is higher than that of competing communities).
  3. Getting to critical mass (ensuring net positive utility for each member as they join, even though the community has not yet reached steady state).

Opportunities Model:

(match_value * collection_size) – navigation_cost [pull model]

(match_value * collection_size) – (interruption_cost * collection_size) [push model]
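To make the comparison concrete, here is a tiny Python sketch of the two cost-benefit formulas above. All the numbers are made up, just to show how the cost structures differ:

def pull_benefit(match_value, collection_size, navigation_cost):
    # pull model: the member pays one navigation cost to browse the whole collection
    return match_value * collection_size - navigation_cost

def push_benefit(match_value, collection_size, interruption_cost):
    # push model: every delivered item carries its own interruption cost
    return (match_value * collection_size) - (interruption_cost * collection_size)

# with these hypothetical numbers, pull wins for a large collection...
print(pull_benefit(0.3, 50, 2.0))   # 13.0
print(push_benefit(0.3, 50, 0.25))  # 2.5
# ...while push wins when the volume of opportunities is low (cf. design claim 1)
print(pull_benefit(0.3, 5, 2.0))    # -0.5
print(push_benefit(0.3, 5, 0.25))   # 0.25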

Design Claims

Carving Out a Niche

1: Lower volume and higher time-sensitivity of interaction opportunities, and lower interruption costs, increase the benefits of push notification.

2: A mixed-topic scope reduces expected match value.

3: An ambiguous scope for an interaction space reduces expected match value.

4: Activities that bridge interests in different topics increase match value in spaces with mixed-topic scope.

5: A transcendent or bridging topical identity increases match value in communities with mixed-topic scope.

Communities with Multiple Spaces

6: Personalised collections of “most related content” enhance match_value but reduce shared context.

7: Subdividing spaces after they become active creates more net benefits for participants than having lots of inactive spaces.

8: In communities with lots of interaction spaces, navigation aids that highlight more active spaces will increase the net benefits members experience.

9: In synchronous spaces that are not always active, a schedule of “expected active times” can coordinate visitors and become a self-fulfilling expectation.

10: In communities with lots of interaction spaces, recommender systems that help people navigate to spaces that best suit them will increase the net benefits people experience.

11: Ambiguity of scope for the community creates opportunities for adjustment and member ownership.

Competing for a Niche

12: A larger community leads to lower match value in bond-based communities.

13: Differentiated user interface elements in the competitor community create startup costs and thus favor the incumbent community in any competition over members.

14: Non-shared user IDs and profiles between incumbent and competitor communities create startup costs and thus favor the incumbent community in any competition over members.

15: Content sharing between competing communities raises awareness of the exporting community and the value of posting there, but also raises the value of consuming content in the importing community.

16: Conveying a succinct unique selling proposition will attract members.

17: Advertising and celebrity endorsements can help to create awareness of a community and thus make it a focal point in a competition between communities.

Bootstrapping: Leveraging Early Members to Get More Members

18: Incentives for early members to generate content can increase bootstrapping.

19: User-generated primary content will do more to bootstrap additional membership than will user-generated metadata, in the community startup stage.

20: Services that enable displays of membership that are visible to non-members will lead to bootstrapping.

21: Services that make members’ actions in the community visible to their acquaintances outside the community will lead early participants to attract later participants.

22: Services that allow members to forward content from the community to their acquaintances outside the community will lead early participants to attract later participants.

23: Services that allow members to invite acquaintances outside the community to join will lead early participants to attract later participants.

24: Pay-for-referral and revenue-sharing from referrals increase bootstrapping.

Attracting Early Members

Increase Stage 1 Value of the Community

25: Single-user and small-group productivity, entertainment, or commerce tools can attract people to an online space before the community features are successful.

26: Providing access to professionally generated content can help attract people to an online space before the community features are successful.

27: Providing access to syndicated data can help attract people to an online space before the community features are successful, if the syndicated data is not otherwise easily accessible or if it is presented in a novel way that adds value.

28: Participation by professional staff can help attract people to an online space before the community features are successful.

29: Starting with a limited scope and expanding later allows focusing of staff resources toward getting to critical mass in the limited scope.

30: If professionals act as contributors of last resort, they will be needed less and less as the community achieves critical mass.

31: Bots that simulate other participants can help attract people to an online space before the community features are successful.

Early Adopter Benefits

32: Promising permanent discounts to early adopters can attract early adopters to the community.

33: Promoting the status or readiness benefits of being early to an online community can attract early adopters to the community.

34: Promoting a site as cool but undiscovered can attract early adopters.

35: Creating scarce, claimable resources can induce prospective members to join earlier.

36: Contribution minima for maintaining scarce status can lead to greater contribution by status-holding members.

Setting Expectations for Success

Signals of Convener Quality and Commitment

37: Professional site design increases expectations about the probability of success.

38: Visible expenditures can be a credible signal of commitment to future investment in a community, and thus help to increase expectations about the probability that the community will eventually succeed.

39: Images of members will convey the presence of other people, and thus expectations of future success.

40: Prominent display of user-contributed content will convey activity, and thus expectations of future success, as long as there is new user-contributed content.

41: Indicators of participation levels will convey activity, and thus expectations of future success, as long as there actually is activity.

42: Indicators of membership and content growth signal a higher probability that the community will eventually reach critical mass, provided there really is growth.

43: When a community is small and slow growing, acknowledging each new member or contribution creates a more favorable signal of growth than showing total numbers or percentage change.

44: When a community is small and fast growing, displaying percentage growth creates a more favorable signal of growth than displaying absolute numbers.

45: When a community has reached critical mass, displaying absolute numbers conveys a signal that the community is already successful.

46: Conditional participation commitments can draw people to join communities that they would not join if they had to do so without knowledge that others were also joining.

47: Drawing analogies to successful communities can raise expectations that a new community will be similarly successful.

48: Drawing attention to external publicity and endorsements can raise expectations about future success.


So today after a session of the Goldsmiths Deep Learning with Tensorflow group, I decided to go back to the basics and start ML101. I found a good resource for that:

http://neuralnetworksanddeeplearning.com

The first chapter of this book gives a primer on how basic artificial neurons, such as the perceptron and the sigmoid neuron, work.

The perceptron is basically a mathematical approach to defining a decision-making model. The perceptron is defined by a set of binary variables (the inputs x and the output y) and a set of parameters (a respective weight w for each input and a threshold value). Each configuration of weights and threshold provides us with a different decision-making model.

Now, you can scale up the power of a perceptron to a network of perceptrons, composed of several interconnected layers of perceptrons. Each input is connected to the first layer of perceptrons, and each perceptron’s output is connected to all the perceptrons of the subsequent layer. Thus, each subsequent layer makes more complex and abstract decisions, providing a very sophisticated mechanism for decision making.

Here I’m using the dot product and the negative of the threshold to express the bias, b = –threshold:

Output = 0 if w·x + b ≤ 0

Output = 1 if w·x + b > 0
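A minimal Python sketch of this decision rule (the inputs, weights, and bias below are made-up numbers, just to show the mechanics):

import numpy as np

def perceptron(x, w, b):
    # fire (output 1) only if the weighted evidence plus the bias is positive
    return 1 if np.dot(w, x) + b > 0 else 0

# hypothetical decision with three binary inputs, the first weighted heaviest
x = np.array([1, 0, 1])
w = np.array([6, 2, 2])
b = -5  # bias = -threshold
print(perceptron(x, w, b))  # prints 1, since 6*1 + 2*0 + 2*1 - 5 = 3 > 0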

To be updated…

Also began the basic Tensorflow starter tutorial with the MNIST dataset, for recognition of handwritten digits.

https://www.tensorflow.org/versions/r0.11/tutorials/mnist/beginners/index.html
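For my own reference, the model that the beginners tutorial builds is a plain softmax regression. A rough sketch in the old r0.11-style API looks something like this (hyperparameters and the MNIST_data/ path follow the tutorial, as I remember it):

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# download/load the MNIST dataset with one-hot labels
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# softmax regression: 784 pixel inputs -> 10 digit classes
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# cross-entropy loss against the true labels
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# roughly 92% accuracy on the test set, according to the tutorial
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))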

 

Middleware is a class of software designed to support the development and operation of other software (e.g. end-user applications or other middleware). Middleware is considered infrastructural software, as its features are mostly invisible and are often expressed through features of the client code. It can take the form of toolkits, libraries, or services that are incorporated into other software.

This paper by W. Keith Edwards, Victoria Bellotti, Anind K. Dey, and Mark W. Newman addresses the problem of designing and evaluating user-centred middleware, a difficult task since the technical features of the underlying infrastructure are not directly visible and are typically expressed only in the features of the client applications. Apart from the general criteria for evaluating software (performance, scalability, security, robustness, …), the authors found no user-centred criteria, based on usability and usefulness, for designing and evaluating the features of the middleware itself.

This prompted the authors to formulate the following questions about the middleware design gap:

  • Is it possible to more directly couple the design of infrastructure features to the design of application features?
  • How can this more direct coupling exist when the applications that will be built atop the middleware don’t yet exist…and may be impossible to build without the middleware itself?
  • Could the context of either the users or the use of these unknown applications have an important impact on the features we decide upon?
  • How can we avoid building a bloated, overly complex system incorporating every conceivable useful feature, at the same time as developing a system that will not need to be constantly updated (and thus repeatedly broken) throughout its life span?
  • Are there better models for deciding on the features of “experimental” middleware, designed to support completely new kinds of applications?

And also for the middleware evaluation gap:

  • How do we choose which applications to build to evaluate the middleware?
  • What kinds of users and contexts (types of uses) for these applications should we consider as appropriate for testing purposes?
  • What does the manifestation of the technology in a particular application say about the capabilities (or even desirability) of the middleware itself? How useful is this “indirect” evaluation?
  • Are the techniques we normally use to evaluate applications acceptable when our goal is to evaluate the middleware upon which those applications are based?
  • Is it possible to evaluate the middleware outside of the context of a particular application?

The authors then present the major challenges and lessons learned from the design and evaluation of three case studies: (1) Placeless Documents, (2) the Context Toolkit, and (3) SpeakEasy. The lessons learned are the following:

  • Lesson 1 – Prioritise Core-middleware Features.
  • Lesson 2 – First, build prototypes with high fidelity for expressing the main objectives of the middleware.
  • Lesson 3 – Any test-application built to demonstrate the middleware must also satisfy the usual criteria of usability and usefulness.
  • Lesson 4 – Initial proof-of-concept applications should be lightweight.
  • Lesson 5 – Be clear about what your test-application prototypes will tell you about your middleware.
  • Lesson 6 – Do not confuse the design and testing of experimental middleware with the provision of an infrastructure for other experimental application developers.
  • Lesson 7 – Be sure to define a limited scope for test applications and permissible uses of the middleware.
  • Lesson 8 – There is no point in faking components and data if you intend to test for user experience benefits.
  • Lesson 9 – Understand that the scenarios you use for evaluation may not reflect how the technology will ultimately be used.
  • Lesson 10 – Anticipate the consequences of the tradeoff between building useful/usable applications versus applications that test the core features of the middleware.

Last year I had a very special and memorable birthday lunch. It was a Saturday, I had PhD classes, and there was this incredible gathering of people, ranging from high-profile composers, performers, electronic music scientists, and interactive artists to my PhD colleagues… Thank you all for the great time!


From left to right: Flo Menezes (composer of maximalist music), André Perrotta (interactive arts), Henrique Portovedo (augmented saxophonist), António Sousa Dias (composer), Samuel Van Ransbeeck (composer), Filipe Jensen (interactive marketing), Sofia Lourenço (virtuoso pianist), Jean-Claude Risset (composer and computer music pioneer), myself and Peter Beyls (algorithmic art pioneer).

Giving my very first lecture in academia, about research strategy and methodology design, for the Master’s degree in Music Teaching – Portuguese Catholic University, Porto, Portugal. I presented the broad picture of the music industry and its value chain, then focused on the specific research strategy, based on qualitative research and a multiple case study of musicians and music professionals.


Exploring the possibilities that audio analysis and visualisation bring to video game development, I have decided to use the Aubio library within the PureData environment. So here is a walkthrough on compiling Aubio on Mac OS X Mavericks (10.9.2).

First, get the source of the latest Aubio release (0.4.0) from http://aubio.org/download

Get Homebrew to install the dependencies; you’ll find instructions at http://brew.sh. This might cause some conflicts with MacPorts, but it is definitely worthwhile. For convenience, here is the only command you have to run in the terminal:

>ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Use “brew doctor” between installations to check that things are correct. You might need to append

export PATH="/usr/local/bin:$PATH"

to your .bash_profile

In the Aubio directory, run the waf configure step to check for missing dependencies:

>./waf configure

I had to install some of them using brew, so below is part of my bash history. Also install the latest JackOSX binary for audio routing.

>brew install pkg-config
>brew install libsndfile
>brew install doxygen
>brew install txt2man
>brew install libav
>brew install libsamplerate
>brew install ffmpeg

This should get you past the configuration step in the Aubio directory. Some of the installs are optional, but I installed all of them, and the FFmpeg library will also bring in some needed dependencies. Once everything checks out fine, you can proceed:

>./waf build install
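As a quick sanity check of the freshly built library, and assuming you also build the optional Python module that ships with the Aubio source (not covered here), something like the sketch below should count onsets in an audio file. The file path is hypothetical, and the pattern follows the demo scripts bundled with Aubio:

import aubio

hop_size = 512
# open an audio file (hypothetical path); 0 means "use the file's own samplerate"
src = aubio.source("some_audio_file.wav", 0, hop_size)
onset_detector = aubio.onset("default", 1024, hop_size, src.samplerate)

count = 0
while True:
    samples, read = src()
    if onset_detector(samples):   # non-zero when an onset is detected in this hop
        count += 1
    if read < hop_size:           # last (partial) block reached: end of file
        break

print("detected %d onsets" % count)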

Some other things are still missing; I will get back to this soon.