Hi, welcome to this website! If you are reading this text, I suppose you are interested in knowing something either about me or about my work, which is good. As for the former, I'm Florian Daniel, an assistant professor at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB) of the Politecnico di Milano in Milan, Italy. As for the latter, my work concentrates on web engineering, mashups, UI-oriented computing, service-oriented computing, business process management and crowdsourcing.

The purpose of this site is to allow you to get some more insight into the above and to enable me to keep track of what's useful to remember. It's a hard endeavor, but I'll try to keep the site as updated as possible. Should you not find what you are looking for, just drop me an email and I'll try to help.

News and sponsored events

October 11. Have a look at the different calls (papers, short papers, workshops, demos, etc.) of ICWE 2018 and participate. It's for sure going to be an outstanding event!

October 2. Going to be published soon (just got accepted):
F. Daniel, P. Kucherbaev, C. Cappiello, B. Benatallah, M. Allahbakhsh. Quality Control in Crowdsourcing: A Survey of Quality Attributes, Assessment Techniques and Assurance Actions. ACM Computing Surveys, accepted for publication, 2017. Abstract. Paper

Crowdsourcing enables one to leverage on the intelligence and wisdom of potentially large groups of individuals toward solving problems. Common problems approached with crowdsourcing are labeling images, translating or transcribing text, providing opinions or ideas, and similar – all tasks that computers are not good at or where they may even fail altogether. The introduction of humans into computations and/or everyday work, however, also poses critical, novel challenges in terms of quality control, as the crowd is typically composed of people with unknown and very diverse abilities, skills, interests, personal objectives and technological resources. This survey studies quality in the context of crowdsourcing along several dimensions, so as to define and characterize it and to understand the current state of the art. Specifically, this survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems. An analysis of how these features are supported by the state of the art further identifies open issues and informs an outlook on hot future research directions.
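To give a concrete flavor of the kind of assurance action the survey covers, here is a minimal sketch (in Python, with hypothetical data, not taken from the article) of majority voting, one of the most common techniques for aggregating redundant answers collected from the crowd:

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate redundant crowd answers for one task by majority voting.

    answers: list of labels submitted by different workers for the same task.
    Returns the most frequent label (ties broken arbitrarily by Counter).
    """
    return Counter(answers).most_common(1)[0][0]

# Hypothetical labels from five workers for one image-labeling task
labels = ["cat", "cat", "dog", "cat", "dog"]
print(majority_vote(labels))  # -> cat
```

Redundancy plus aggregation is of course only one of the assurance strategies the survey discusses; it works when workers are independent and the task has a single correct answer.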

September 27. Here is an update of the thesis proposals for second-level (Laurea Magistrale) students of Ingegneria Informatica (smaller projects are also possible, not only theses). The page will keep evolving, and new proposals may pop up from one day to the next.
July 12. Not sure what happened on ResearchGate, but it must be good :-)

June 15.
15th International Conference on Service-Oriented Computing (ICSOC 2017)
16th International Semantic Web Conference (ISWC 2017)
Biannual Conference of the Italian SIGCHI Chapter (CHItaly)
15th International Conference on Business Process Management (BPM 2017)
24th International Conference on Web Services (ICWS 2017)
IEEE International EDOC Conference 2017

June 28. Finally online:
C. Rodríguez, F. Daniel and F. Casati. Mining and Quality Assessment of Mashup Model Patterns with the Crowd: A Feasibility Study. ACM Transactions on Internet Technology 16(3), Article 17, 2016. DOI 10.1145/2903138. Abstract. Paper. Appendix

Pattern mining, that is, the automated discovery of patterns from data, is a mathematically complex and computationally demanding problem that is generally not manageable by humans. In this article, we focus on small datasets and study whether it is possible to mine patterns with the help of the crowd by means of a set of controlled experiments on a common crowdsourcing platform. We specifically concentrate on mining model patterns from a dataset of real mashup models taken from Yahoo! Pipes and cover the entire pattern mining process, including pattern identification and quality assessment. The results of our experiments show that a sensible design of crowdsourcing tasks indeed may enable the crowd to identify patterns from small datasets (40 models). The results, however, also show that the design of tasks for the assessment of the quality of patterns to decide which patterns to retain for further processing and use is much harder (our experiments fail to elicit assessments from the crowd that are similar to those by an expert). The problem is relevant in general to model-driven development (e.g., UML, business processes, scientific workflows), in that reusable model patterns encode valuable modeling and domain knowledge, such as best practices, organizational conventions, or technical choices, that modelers can benefit from when designing their own models.
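To illustrate what automated pattern mining means in its simplest form, here is a small sketch (in Python, with hypothetical data and components, not the article's actual method) that counts component pairs co-occurring across mashup models and keeps those above a support threshold:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(models, min_support):
    """Find component pairs that co-occur in at least min_support models.

    models: list of sets, each holding the components used by one mashup model.
    Returns a dict {pair: count} for pairs meeting the support threshold.
    """
    counts = Counter()
    for components in models:
        for pair in combinations(sorted(components), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical mashup models described by the components they use
models = [
    {"fetch", "filter", "output"},
    {"fetch", "filter", "sort"},
    {"fetch", "output"},
]
print(frequent_pairs(models, 2))
# -> {('fetch', 'filter'): 2, ('fetch', 'output'): 2}
```

The point of the article is precisely that, for small datasets, crowd workers can take over the pattern identification step that such automated counting performs, while assessing pattern quality remains harder to crowdsource.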

February 15. Have a look at the site of the new Data Science Lab I have been involved in since the beginning of this year. Need help with the integration, management, analysis and/or visualization of your data? Drop us an email, and we'll help! Not sure you need help with your data? Again, drop us an email, and we'll show you how much more value you could get out of your data.

July 19. Three recently published books:

Recent publications