In the context of Domain-Specific Modeling Language (DSML) development, the involvement of end-users is crucial to ensure that the resulting language satisfies their needs.
In our paper presented at SLE 2017 (co-located with SPLASH) in Vancouver, Canada, on October 24th, we discuss how crowdsourcing tasks can be exploited to assist in domain-specific language definition processes. This is in line with the vision towards the cognification of model-driven engineering.
The slides are available on slideshare:
[slideshare id=81122273&doc=crowdsourcingthenotationofdomain-specificlanguages-171023233727]
Indeed, crowdsourcing has emerged as a novel paradigm where humans are employed to perform computational and information collection tasks. In language design, by relying on the crowd, it is possible to show an early version of the language to a wider spectrum of users, thus increasing the validation scope and eventually promoting its acceptance and adoption.

We propose a systematic (and automatic) method for creating crowdsourcing campaigns aimed at refining the graphical notation of DSMLs. The method defines a set of steps to identify, create and order the questions for the crowd. As a result, developers are provided with a set of notation choices that best fit end-users’ needs. We also report on an experiment validating the approach.
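As a rough illustration of what such a campaign setup might look like (a hypothetical sketch of ours, not the tool described in the paper), the snippet below builds pairwise-comparison questions from a set of candidate symbols per language concept, and orders concepts with more alternatives first. The concept names, symbols, and ordering heuristic are all illustrative assumptions.

```python
from itertools import combinations

# Hypothetical candidate symbols per language concept (BPMN-like examples)
candidates = {
    "task": ["rounded-rectangle", "rectangle"],
    "event": ["circle", "double-circle", "diamond"],
}

def build_questions(candidates):
    """Create one pairwise-comparison question per pair of candidate symbols."""
    questions = []
    for concept, symbols in candidates.items():
        for a, b in combinations(symbols, 2):
            questions.append({"concept": concept, "options": (a, b)})
    # Illustrative ordering heuristic: ask about concepts with more
    # alternatives first, since they need more crowd input to settle
    questions.sort(key=lambda q: len(candidates[q["concept"]]), reverse=True)
    return questions

for q in build_questions(candidates):
    print(f"Which symbol fits '{q['concept']}' better? {q['options']}")
```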
Improving the quality of the language notation may dramatically improve acceptance and adoption, as well as the way people use the notation and its associated tools.
Essentially, our idea is to send the crowd a set of questions about the concrete syntax of visual modeling languages and collect their opinions. Based on different strategies, we then generate an optimal notation and assess how good it is.
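One simple aggregation strategy is plain majority voting: for each concept, pick the symbol the crowd chose most often. Below is a minimal sketch, assuming answers arrive as (concept, chosen symbol) pairs; the data and function names are hypothetical and only illustrate the idea, not the exact strategies evaluated in the paper.

```python
from collections import Counter

# Hypothetical crowd answers: one (concept, chosen_symbol) pair per response
answers = [
    ("event", "circle"), ("event", "circle"), ("event", "double-circle"),
    ("task", "rounded-rectangle"), ("task", "rectangle"),
    ("task", "rounded-rectangle"),
]

def majority_notation(answers):
    """Pick, for each concept, the symbol with the most crowd votes."""
    votes = {}
    for concept, symbol in answers:
        votes.setdefault(concept, Counter())[symbol] += 1
    return {concept: counts.most_common(1)[0][0]
            for concept, counts in votes.items()}

print(majority_notation(answers))
# {'event': 'circle', 'task': 'rounded-rectangle'}
```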
In the paper we also validate the approach and experiment with it in a practical use case, namely a study of some variations of the BPMN modeling language.
The full paper can be found here: https://dl.acm.org/citation.cfm?doid=3136014.3136033. The paper is titled: “Better Call the Crowd: Using Crowdsourcing to Shape the Notation of Domain-Specific Languages” and was co-authored by Marco Brambilla, Jordi Cabot, Javier Luis Cánovas Izquierdo, and Andrea Mauri.
You can also access the Web version on Jordi Cabot's blog.
The artifacts described in the paper are also referenced on findresearch.org, namely the following materials:
- Overview on crowdsearcher site: http://crowdsearcher.deib.polimi.it/casestudies/
- Code of the UI (including the configuration of the tasks and of the modeling alternatives analyzed): https://github.com/janez87/tef
- Code of the crowdsourcing platform: https://github.com/janez87/crowdsearcher/tree/modeling-new
- Results summary: http://crowdsearcher.deib.polimi.it/casestudies/crowd-experiment-results-sle2017.xlsx