Lora Aroyo is working on the following projects:

ReTV Project (3.5M Euro, PI): re-purposing and re-using digital content across Smart TVs, web and mobile applications, social media and other emerging platforms (referred to as “vectors”). ReTV helps broadcasters decide when, in what form and on which vector(s) to deliver which content. We propose the Trans-Vector Platform (TVP) to address this challenge and help media companies gain a competitive advantage through guided content re-purposing and re-publication, on the fly and across vectors. ReTV offers its partners a better match between content and viewers across vectors, time and five EU languages (English, German, French, Spanish, Dutch). Partners: a regional public broadcaster (RBB), a national TV archive (NISV), an OTT TV distributor operating in multiple EU markets (Zattoo), WebLyzard, Modul University and VU University Amsterdam (3 years, starting January 2018).

Capturing Bias Project (400K Euro, PI): we aim at reliable and explainable big-data analysis of media-related collections. We develop a framework of models for bias- and diversity-aware accuracy measures (with confidence scores and thresholds to assess reliability) and diversity-driven human computation methods for the continuous gathering of opinion- and perspective-aware training data (using crowdsourcing with citizens and ‘nichesourcing’ with experts). These support macro-level analysis, e.g. the role of political and gender bias in media programmes, and micro-level analysis, e.g. close reading of specific programmes and the bias effects of speaker selection and topic presentation. Partners: Prof. dr. Lora Aroyo (Vrije Universiteit Amsterdam CS & Amsterdam Data Science), dr. Alessandro Bozzon (TU Delft CS & Delft Data Science), Prof. dr. Alec Badenoch (Utrecht University, Department of Media and Culture Studies), dr. Antoaneta Dimitrova (Leiden University, FGGA, Institute of Public Administration). Data partner: Netherlands Institute for Sound and Vision (the CLARIAH centre for Media Studies) (3 years, starting January 2018). The project provides bias- and diversity-aware methods & tools to support accurate analysis and interpretation of big media data over time.
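To illustrate the idea of accuracy measures with reliability thresholds mentioned above, here is a minimal sketch (hypothetical data, group names and function names — not the project's actual framework): a per-group accuracy report that flags small-sample groups as unreliable instead of reporting a bare number.

```python
from collections import defaultdict

def group_accuracy(records, min_support=30):
    """Per-group accuracy with a support threshold: groups with fewer
    than `min_support` examples are marked unreliable rather than
    reported as if their score were trustworthy."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    report = {}
    for group, n in totals.items():
        report[group] = {
            "accuracy": hits[group] / n,
            "n": n,
            "reliable": n >= min_support,
        }
    return report

# Hypothetical (group, prediction-correct?) records from a media sample.
records = ([("female", True)] * 8 + [("female", False)] * 2 +
           [("male", True)] * 40 + [("male", False)] * 10)
print(group_accuracy(records))
```

Both groups here score 0.8 accuracy, but only the larger group's estimate passes the support threshold — the kind of distinction a bias-aware analysis needs to make explicit.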

DIVE+: Dynamically Linking Objects with Events: DIVE+ is a linked-data browser for cultural heritage collections. It supports integrated and interactive access to multimedia objects from a number of heterogeneous heritage collections. The main goal of the project is to enrich the structured metadata of the online collections with linked (open) data vocabularies. We focus specifically on different types of events (e.g. news events, historic events, personal events), as well as traditional entities, e.g. people, locations, time and concepts, all of which are depicted in or associated with particular collection objects. Based on these metadata enrichments, we aim to support serendipitous exploration paths, i.e. navigation patterns. The DIVE+ demonstrator has won numerous awards, e.g. Best Paper at the MTSR 2017 conference, the Grand Prize at the LODLAM Challenge 2017 and 3rd Prize at the Semantic Web Challenge 2014. Partners: Frontwise Inc, Dutch Audio-Visual Archive, VU University Amsterdam (2014–2018). The DIVE+ explorative search browser is in production use by Tagasauris Inc and the Dutch Audio-Visual Archive. Project website:
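The event-centric linking described above can be pictured as a graph walk: once collection objects, events and places are connected by metadata triples, a navigation path between two objects emerges through the entities they share. A toy sketch (invented triples and entity names, not DIVE+'s actual data model or code):

```python
from collections import defaultdict, deque

# Hypothetical miniature knowledge graph of (subject, predicate, object)
# triples linking collection objects to events and places.
triples = [
    ("newsreel:1953-flood", "depicts", "event:NorthSeaFlood1953"),
    ("event:NorthSeaFlood1953", "hasLocation", "place:Zeeland"),
    ("photo:delta-works", "depicts", "event:DeltaWorksOpening"),
    ("event:DeltaWorksOpening", "hasLocation", "place:Zeeland"),
]

def build_graph(triples):
    """Undirected adjacency list over subjects and objects."""
    graph = defaultdict(set)
    for s, _, o in triples:
        graph[s].add(o)
        graph[o].add(s)
    return graph

def exploration_path(graph, start, goal):
    """Shortest chain of linked entities between two collection objects
    (breadth-first search) — one possible serendipitous navigation path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

graph = build_graph(triples)
print(exploration_path(graph, "newsreel:1953-flood", "photo:delta-works"))
```

Here a newsreel and a photo, never directly linked, become connected through the events they depict and a shared location — the kind of path a browsing user can stumble onto.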

CLARIAH Media Suite: Common Lab Research Infrastructure for the Arts and Humanities. The Media Suite is a research environment of the Dutch infrastructure for digital humanities and social sciences (CLARIAH), which aims to serve the needs of media scholars (and other scholars who use audiovisual media) by providing access to audiovisual collections and their contextual data. The role of the VU team is to provide Linked Data access to the collections and a crowdsourcing framework for gathering reliable events, event types and their representation in collection objects. Partners: various humanities and computer science groups from across the Netherlands (2015–2018). The DIVE+ explorative search browser is currently integrated as part of the digital humanities support tools in the CLARIAH Media Suite.

SealicMedia project, funded by the COMMIT/ programme: combines automatic machine labeling of multimedia content with multimedia annotations gathered through various forms of crowdsourcing and nichesourcing, in order to improve the quality of metadata of online multimedia content and thus provide richer interaction between users and online multimedia collections.

CrowdTruth framework: CrowdTruth is a long-term project, running since 2012, investigating various forms of human-assisted computing, specifically targeting workflows for the creation of ground-truth data. It was first developed in the context of IBM’s Watson and, more generally, for various cognitive computing systems. We currently also collaborate with Google Research on refining the CrowdTruth ambiguity metrics and their application to relation extraction and frame disambiguation tasks. Partners: IBM Watson Group, IBM CAS Benelux, Google Research, Netherlands eScience Center, VU University Amsterdam (2012–ongoing). CrowdTruth is in use by the IBM Watson Group and by four other research projects; recently, the CrowdTruth metrics have been used for New York Times emotion annotation.
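The core CrowdTruth intuition — that inter-worker disagreement signals ambiguity in the data rather than worker error — can be sketched in a few lines. This is a simplified illustration, not the published metrics (the real framework also weights workers and labels by quality): each worker's annotations for one media unit become a vector over the candidate labels, and the unit's quality is the average pairwise cosine similarity of those vectors.

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two annotation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def unit_quality(worker_vectors):
    """Average pairwise cosine similarity between worker annotation
    vectors for one media unit: ~1.0 means workers agree; lower values
    flag the unit as ambiguous rather than blaming the workers."""
    pairs = list(combinations(worker_vectors, 2))
    if not pairs:
        return 1.0
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Three workers pick among three candidate relations for one sentence
# (hypothetical labels).
clear = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
ambiguous = [[1, 0, 0], [0, 1, 0], [1, 1, 0]]
print(unit_quality(clear))      # 1.0
print(unit_quality(ambiguous))  # lower score: an ambiguous unit
```

A low-scoring unit is not discarded as noise; the disagreement itself becomes a signal about the sentence, which is what distinguishes this approach from majority-vote ground truth.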

ViSTA-TV project, funded by FP7 (Integrated Project): generates a high-quality linked open dataset describing live TV programming, combines this dataset with behavioural information to provide highly accurate market research on viewing behaviour, and employs the gathered information to build a recommendation service providing real-time viewing recommendations.

NoTube project, funded by FP7 (Integrated Project): how the Semantic Web can be used to connect TV content & the Web through Linked Open Data, in the context of TV and Web convergence.

CHIP project, funded by the NWO CATCH program: how the Semantic Web can be deployed to enrich the Rijksmuseum vocabularies and build services for explorative search and recommendations on top of them; 3rd Prize, Semantic Web Challenge 2007.

PrestoPRIME project, funded by FP7 (Integrated Project): gamified annotation of videos, e.g. user-generated video metadata; Grand Challenge Winner with the Waisda? video tagging game at the EuroITV Competition 2010.

e-culture project, funded by the BSIK MultimediaN program

Passepartout project, funded by the ITEA EU program: bringing tailored high-definition (HD) media content into family life through seamless integration into home media centres and networks, with smart graphic displays controlling system operation and interactive content packaging.

VU INTERTAIN Experimental Research Lab: designed and developed the concept for an experimental research environment for computer science.


Stitch by Stitch: Annotating Fashion at the Rijksmuseum

DIVE+: Explorative Search for Digital Humanities

To be AND not to be: quantum intelligence? | Lora Aroyo & Chris Welty | TEDxNavesink


CrowdTruth: Machine-Human Computation Framework for Harnessing Disagreement in Gathering Annotated Data

FP7 Integrated Project NoTube

FP7 Integrated Project ViSTA-TV

COMMIT SealicMedia project

Crowd-Watson Project



NWO-CATCH project CHIP: Cultural Heritage Information Personalization (Scientific Coordinator) 

Integrated Project PrestoPRIME: Keeping Audiovisual Contents Alive

Dutch BSIK project MultimediaN e-culture project

ITEA project Passepartout: Personalized Ambient Multimedia

VU INTERTAIN Experimental Research Lab


