
Crowdsourcing Software Evaluation

Sherief, N., Jiang, N., Hosseini, M., Phalp, K. T. and Ali, R., 2014. Crowdsourcing Software Evaluation. In: The 18th International Conference on Evaluation and Assessment in Software Engineering (EASE 2014), 13-14 May 2014, London.

Full text available as:

Nada_Sherief_et_al_EASE2014_Crowdsourcing_Software_Evaluation.pdf - Accepted Version (218kB)

Abstract

Crowdsourcing is an emerging online paradigm for problem solving that involves a large number of people, often recruited on a voluntary basis and rewarded with tangible or intangible incentives. It harnesses the power of the crowd to minimize costs and to solve problems which inherently require a large, decentralized and diverse crowd. In this paper, we advocate the potential of crowdsourcing for software evaluation, especially for complex and highly variable software systems that operate in diverse, even unpredictable, contexts. Through iterative feedback, the crowd can enrich developers’ knowledge about software evaluation and keep it up to date. Although this seems promising, crowdsourcing evaluation introduces a new range of challenges, mainly concerning how to organize the crowd and provide the right platforms to obtain and process their input. We focus on the activity of obtaining evaluation feedback from the crowd and conduct two focus groups to understand the various aspects of such an activity. We finally report on a set of challenges to address in order to realize correct and efficient crowdsourcing mechanisms for software evaluation.

Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Crowdsourcing, Software Evaluation, User Feedback
Group: Faculty of Science & Technology
ID Code: 21895
Deposited By: Symplectic RT2
Deposited On: 27 Apr 2015 15:32
Last Modified: 14 Mar 2022 13:51
